John vs. Marlowe
We are getting valuable help from several experienced, forward-leaning recruiters. One of them is John Rose - you may know him as @resourcefuljohn. Here are his initial machine training experiences.
John has been helping us train one of our machine learning algorithms, which matches candidates to job ads based on publicly available candidate data.
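Marlowe's internals aren't public, so purely as an illustration of the kind of matching described above, here is a minimal sketch: ranking candidate profiles against a job ad by bag-of-words cosine similarity. All names, profiles and scores here are made-up examples, not RelinkLabs code or data.

```python
import math
from collections import Counter

def tokenize(text: str) -> Counter:
    """Turn free text into a lowercase bag-of-words vector."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(count * b[term] for term, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_candidates(job_ad: str, candidates: dict[str, str], top_n: int = 5) -> list[tuple[str, float]]:
    """Score every candidate profile against the job ad and return the best matches."""
    job_vec = tokenize(job_ad)
    scores = [(name, cosine_similarity(job_vec, tokenize(profile)))
              for name, profile in candidates.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_n]

# Toy example: one job ad, two (hypothetical) candidate summaries.
job = "qa manager with experience in test automation and team leadership"
profiles = {
    "A": "qa engineer skilled in test automation and scripting",
    "B": "marketing specialist focused on social media campaigns",
}
print(rank_candidates(job, profiles))
```

A production matcher would of course go well beyond shared keywords - which is exactly the "did we exclude candidates simply because they did not list the correct keyword?" concern John raises below.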
An engineer by training, and a bit of a geek - he taught himself programming on the ZX Spectrum - he is uniquely placed to provide valuable insights into the uses and limitations of machine learning in recruiting.
John’s experience with Marlowe, our AI candidate recommendation API, very much fell in line with his general take on technology and automation.
“It’s like the robot visions of the future,” he explains. “We think robots will take over all the jobs, all the tasks - but in reality there are some tasks that can and should be automated, while there are others we can do better ourselves.”
- As far as recruiting is concerned, I believe there are very clear limits to how many websites, mouse clicks and searches a human being can get through in a day - while machines have no such limit. For that reason, we should let technology be our researcher.
...we should let technology be our researcher.
John shared 3 job ads with RelinkLabs to be used for machine training. For each position he received 5 profiles in return and was asked to report which ones were relevant.
- The 3 openings were all in the same industry, but covered different roles (QA Manager, Marketer and Scientist, respectively) in different geographies. Out of the 15 candidates, 3 were ‘wrong planet’, 4 were a definite no, 4 were spot on, and 4 were maybes. The ‘wrong planet’ ones are there because RelinkLabs have to send ‘bad fit’ profiles to get training data, so I’m not reading too much into that.
- The interesting ones are the ‘maybes’. They could be relevant, but there is no information available to substantiate why. This is where the algorithms have worked some magic in the background by filling in the blanks.
- I see that it will be necessary for me, at least initially, to see why a recommendation is made before I can trust it. It’s very much a confirmation of my beliefs about our ‘robot future’ - I’m very happy to let machines do specific tasks that they can do better than me, but I need to understand why choices are being made. While I am happy to take recommendations, I want to be the one making the decision.
- Screening out is as important as screening in - what are the criteria for an acceptable match? Why did we choose candidate A over candidate B? Did we exclude candidates simply because they did not list the correct keyword?”
Because we left out geographies in this test, there is no way to know for sure - so it’s clear what we’ll be testing for moving forward. “We need to dot the i’s and cross the t’s.”
- This is very clearly one sourcing task machines can do better than human beings: identifying candidates based on big data signals, such as ‘likelihood of relevant skills’, that human beings could not possibly unearth - at least not at scale.
However, moving forward we’ll be looking for help from human beings - John and his peers again - in finding out whether our next batch of candidates with “no data” are actually purple squirrels.
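The human-in-the-loop step John describes - recruiters grading recommendations so the algorithm can learn from them - can be sketched roughly as follows. The label scale is borrowed from John's own categories (‘wrong planet’, no, maybe, spot on); the numeric values and the `TrainingExample` structure are illustrative assumptions, not RelinkLabs' actual schema.

```python
from dataclasses import dataclass

# Hypothetical mapping of John's verdicts to relevance scores (an assumption,
# not RelinkLabs' real label scheme).
LABELS = {"wrong planet": 0.0, "no": 0.0, "maybe": 0.5, "spot on": 1.0}

@dataclass
class TrainingExample:
    """One labeled (job, candidate) pair to feed back into training."""
    job_id: str
    candidate_id: str
    relevance: float

def collect_feedback(feedback: list[tuple[str, str, str]]) -> list[TrainingExample]:
    """Turn recruiter verdicts into numeric training examples."""
    return [TrainingExample(job, cand, LABELS[verdict])
            for job, cand, verdict in feedback]

# Toy example with made-up identifiers.
examples = collect_feedback([
    ("qa-manager", "cand-01", "spot on"),
    ("qa-manager", "cand-02", "wrong planet"),
    ("marketer", "cand-07", "maybe"),
])
print(examples)
```

The point of a scheme like this is exactly what John asks for: every accepted or rejected recommendation becomes recorded evidence, so the "why did we choose candidate A over candidate B?" question has an audit trail.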
PS - make sure to follow John on Twitter (@resourcefuljohn).