Bias and Machine Learning in Recruiting
In a space where there is plenty of talk about efficiency and technology, it is surprising to see that most companies are still hiring like it is 1998, with the same biases and bottlenecks. How can machine learning alleviate these issues and elevate HR into its rightful place in the future?
This summer I was asked to share my thoughts on how, or rather if, machine learning systems can reduce bias in hiring. My talk ended up more as a discussion with myself than as a conclusion. This topic is very relevant for companies like Relink, and I wanted to share my thoughts with you guys as well.
Every time I have a discussion with friends about Relink and machine learning, I get the feeling that it turns into a “them versus us” case. The machines vs. the humans. In my mind, the systems that we, and other companies, are building are more like extensions, a helper if you will, our humble servant, when trained properly.
As humans, we make sense of the world by looking for patterns, filtering them through what we think we already know, and making decisions accordingly. When we talk about handing our decisions off to machines, we expect them to do the same as us, only better and way faster. Imagine the best version of us: objective, rational, omnipotent.
"The systems that we are building are more like extensions, a helper if you will, our humble servant, when trained properly."
While most forms of discrimination are unintentional, it is widely known that they have severe costs for companies, both monetarily and in terms of brand equity. In today's hypercompetitive world, no organization can afford to shut itself off from broader input, more varied experiences, a wider range of talent, and a larger potential market.
Machine learning systems can really be a tremendous force for good, and I strongly believe that this type of technology can help us reduce and avoid bias in hiring (don’t take my word for it, take Ray Dalio’s). Let me illustrate with a simple example from what we do at Relink:
When it comes to bias, it’s easy to think only about the usual suspects: gender, ethnicity, religion, political views, and so on. Still, companies are losing a lot of great talent to unconscious biases, typically because of their limited understanding of what is actually relevant for any given job. It is easy for recruiters to get distracted by spelling mistakes in resumes, or by people who choose to write their headlines in bright orange or add a picture of their dog hoping to stand out. (That actually happened to me once, no kidding.)
"Machine Learning systems can really be a tremendous force for good and I strongly believe that this type of technology can help us reduce and avoid bias in hiring."
A bigger problem for us humans is actually understanding the relevance of experience. Let's say you’re hiring a junior data scientist and you get an applicant who has majored in wind engineering. Without knowing that there is a strong relevance between that particular background and the skill set you need in the position, you would probably reject the applicant. Not based on facts, but based on incorrect assumptions.
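To make the idea concrete, here is a minimal sketch of one naive way to estimate how relevant a background is to a role: overlap between skill sets (Jaccard similarity). The skill lists and function are purely illustrative assumptions, not Relink's actual model.

```python
# Hypothetical sketch: relevance between a candidate's background and a role,
# estimated as skill overlap (Jaccard similarity). Skill lists are made up.

def relevance(candidate_skills, role_skills):
    """Share of skills in common relative to all skills mentioned (0.0 to 1.0)."""
    a, b = set(candidate_skills), set(role_skills)
    return len(a & b) / len(a | b) if a | b else 0.0

wind_engineering = ["fluid_dynamics", "numerical_modeling", "python", "statistics"]
junior_data_scientist = ["python", "statistics", "machine_learning", "numerical_modeling"]

# Three of five distinct skills overlap, so the "irrelevant" degree
# actually scores well against the data science role.
print(round(relevance(wind_engineering, junior_data_scientist), 2))  # -> 0.6
```

A real system would of course learn these relationships from data rather than from hand-written skill lists, but even this toy version makes the point: the match is a number you can compute, not a gut feeling.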
By understanding the relevance of any background to any job, and by showing standardized scorecards instead of resumes (with pictures of dogs and spelling mistakes), recruiters can base decisions on facts, even for jobs they don't really understand: the actual relevance of the candidate. By focusing on facts, we also reduce the focus on the usual suspects; names, gender, and other non-merit data points are not part of any scorecard.
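A standardized scorecard can be as simple as never copying non-merit fields into what the recruiter sees. The sketch below assumes a hypothetical candidate record and field names (this is not Relink's actual schema):

```python
# Minimal sketch of an anonymized scorecard. Field names are hypothetical.
# Non-merit fields (name, gender, photo, ...) are simply never copied over.

MERIT_FIELDS = {"skills", "experience_years", "education", "relevance_score"}

def build_scorecard(candidate: dict) -> dict:
    """Keep only merit-related fields; drop name, gender, photo, etc."""
    return {k: v for k, v in candidate.items() if k in MERIT_FIELDS}

candidate = {
    "name": "Jane Doe",
    "gender": "F",
    "photo": "dog.jpg",
    "skills": ["python", "statistics"],
    "experience_years": 3,
    "relevance_score": 0.87,
}

scorecard = build_scorecard(candidate)
print(scorecard)  # name, gender, and photo are gone; only merit data remains
```

The design choice here is an allow-list rather than a block-list: anything not explicitly deemed merit-relevant is excluded by default, so a new non-merit field can't leak in by accident.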
That's just one example where machine learning is in fact reducing bias in hiring. With that said, it is important to remember the limitations and pitfalls of such technology.
"By focusing on facts, we also reduce the focus on the usual suspects."
Machines don't reveal pure, objective truth just because they’re mathematical. We humans must teach the systems what we consider suitable, train them on which information is relevant, and indicate what outcomes we consider best - ethically, legally, and, of course, financially. To create systems that are free from bias, we need to train them on data that is free from bias, conscious or otherwise. Or we need mitigations in place to remove biased data from the training sets.
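One such mitigation, sketched below with entirely made-up data, is a pre-processing step: drop protected attributes from the training set, and also drop any feature that correlates strongly with them, since a "proxy" feature can smuggle the bias right back in. This is a minimal illustration, not a production debiasing method.

```python
# Minimal sketch (hypothetical data) of a pre-processing mitigation:
# drop protected attributes AND features that strongly correlate with them.

from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def debias_columns(data, protected, threshold=0.8):
    """Return feature names that are neither protected nor strong proxies."""
    kept = []
    for col, values in data.items():
        if col in protected:
            continue
        if any(abs(pearson(values, data[p])) >= threshold for p in protected):
            continue  # likely a proxy for a protected attribute
        kept.append(col)
    return kept

data = {
    "gender_encoded": [0, 0, 1, 1],
    "years_experience": [2, 5, 3, 7],
    "attended_single_sex_school": [0, 0, 1, 1],  # perfect proxy in this toy data
}
print(debias_columns(data, protected={"gender_encoded"}))
# -> ['years_experience']
```

Real-world mitigations go much further (proxies are rarely this obvious, and correlation alone misses non-linear relationships), but the principle is the same: biased signal has to be actively found and removed, not just ignored.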
"There’s a whole lot about the world that we don’t understand at all. The real world, not our world. Myself included."
Let’s take a step back for a second. When I look at the tech industry in 2017 - we have apps that let you find a person to hook up with in 5 minutes, hire a private helicopter to the Hamptons, outsource your laundry - I am quite certain that there’s a whole lot about the world that we don’t understand at all. The real world, not our world. Myself included.
We call ourselves innovators, but are we really? Seriously, are we?
We call ourselves problem-solvers, but the evidence suggests the problems we want to solve are what are usually referred to as “First World” problems.
Way back when this space was just getting started, it seems to me that most technologists used to work on big problems. Not First World problems, but whole-world problems: ending poverty, ending disease, creating a more equal world. They didn’t do it because it was gonna get them a speaking gig at a technology conference, the front page of Business Insider, or even a great little M&A deal.
Now, before you write me off as a Mother Teresa type, let me bring this back down to earth.
Very often in the HR tech world, we forget about the real effect a job has on a person. One job can change a life, save a life, ruin a life, and so on. The right person in the right job can elevate a company to greatness, and of course the inverse is true as well.
If we can help people, without bias, to get that one right job, with minimal cost, should we? More importantly, if we can do that at scale, helping thousands, why shouldn’t we?
We should naturally also be aware of the potential pitfalls and make sure we have a clear understanding of what we want technology to solve and what we don’t want it to solve.
"We can’t just sit back and hope that technology will make this world a better place for us."
There are a lot of predictions we could make that we choose not to. Not because they wouldn’t be valuable for a company, but because, used wrongly, they could create a job market with more bias and less fairness. The opportunities and responsibilities are ours. Yours and mine. It’s up to us how this all plays out.
We can’t just sit back and hope that technology will make this world a better place for us. Just as machine learning systems need to be trained by the best version of us, the real fight against bias also starts with us. We still keep hiring and hanging out with people who are like ourselves. I mean seriously, take a look at your own group of friends.
Sure - technology will help us see the blind spots and connect the dots where we just aren't able to. But at the end of the day, we choose who we hire.
If you are reading this on your own computer, chances are you were born with a set of resources and opportunities that 95% of the humans on the planet can only imagine.
We’re so isolated from the world outside our incubators, our startup offices, and our chase for venture capital. We often don’t see the very real problems that the rest of the world faces.
HR tech can only do so much, but if we all get on this bus, then maybe we can start seeing and creating a world that we are really, so fucking proud of.