
Keep Bias Out of Your AI Hiring Tools With These 3 Steps

Many companies already use Artificial Intelligence (AI) tools to make their hiring process quicker, more accurate, and, crucially, fairer. In fact, 43% of recruiters and hiring managers say a key benefit of AI is its potential to remove human bias.

But this AI revolution has been far from perfect — and in some cases, has actually made things worse. Algorithms created with the best intentions sometimes pick up the flaws and unconscious biases of their human creators. Left unchecked, these faulty algorithms can hold back a company’s diversity efforts and screen out qualified candidates.

From the Google ad-serving algorithm that showed high-paying job ads to men far more often than to women, to the facial recognition software that struggles to recognize darker-skinned faces, these tools are not infallible. That doesn’t mean they need to be scrapped altogether — just fine-tuned.

If you’re experimenting with building AI tools into your hiring process, here are three steps you can take to keep bias out of the equation.

1. Teach your AI to appreciate diversity by using a balanced data set that is demographically representative

AI doesn’t operate in a vacuum. In order to make decisions (like which candidates are screened out and which get an interview), the machine looks at a sample set of data (like the resumes of top performers at the company) to learn what it should look for in an ideal candidate.

On the surface, that sounds great — the machine can rapidly filter a high volume of applications to find qualified candidates. Theoretically, it shouldn’t be distracted by information that can cause bias in humans, like female-sounding names.
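
To make that mechanism concrete, here is a toy sketch in Python (using scikit-learn, with invented resumes and labels, not any real vendor’s system) of a screening model learning what an “ideal candidate” looks like from past hiring decisions, then scoring a new applicant:

```python
# Toy resume screener: learns patterns from historical hiring decisions.
# All resumes and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_resumes = [
    "10 years leading engineering teams, shipped three products",
    "recent graduate, captain of the women's chess club",
]
was_hired = [1, 0]  # 1 = hired, 0 = rejected

# Turn resume text into word-frequency features, then fit a classifier.
screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(past_resumes, was_hired)

# Score a new applicant: the model ranks them by whatever patterns it
# found in the training data -- including patterns we never intended.
print(screener.predict_proba(["women's college alumna, 8 years in operations"]))
```

Whatever correlations exist in that historical data, intended or not, are exactly what the model will reproduce at scale.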

The trouble is, if a company already has a lack of diversity in its workforce, the AI can learn to perpetuate this rather than change it.

“Algorithms do not automatically eliminate bias,” explains Cynthia Dwork, a Professor of Computer Science at Harvard and Distinguished Scientist at Microsoft Research. “Suppose a university, with admission and rejection records dating back for decades and faced with growing numbers of applicants, decides to use a machine learning algorithm that, using the historical records, identifies candidates who are more likely to be admitted. Historical biases in the training data will be learned by the algorithm, and past discrimination will lead to future discrimination.”

This has proven to be an especially big problem at tech companies, where data from historically white, male workforces can sometimes teach AI to screen out women.

Take Amazon’s recently scrapped experimental hiring tool, which used AI to screen and rate resumes based on historical hiring patterns. Since the number of men hired over a 10-year period far outweighed the number of women, the system taught itself that men were preferable — so much so that it actively penalized resumes that mentioned the word “women’s” or women’s colleges.

To avoid teaching your AI to discriminate, you need to be vigilant about preventing selection bias in the data you use to train it. Selection bias occurs when the training data is not representative of the diverse population you’re trying to recruit, causing the system to optimize for the dominant group.

For companies that already have relatively strong diversity in their workforces, creating a representative data set can be as simple as modeling it on demographic data. For example, even if a company doesn’t have a perfect 50:50 gender ratio yet, it may still have enough data about female employees to draw on, letting it build a demographically representative gender ratio into the breakdown of resumes it feeds the AI. Similar modeling can be done around ethnicity and other diversity targets.
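
As a rough illustration of that idea, the sketch below (in Python with pandas; the DataFrame, its “gender” column, and the target ratios are all hypothetical) draws a training set whose demographic breakdown matches chosen targets rather than the raw historical data:

```python
# A minimal sketch of assembling a demographically representative
# training set. The "gender" column and target ratios are hypothetical.
import pandas as pd

def build_representative_sample(resumes: pd.DataFrame,
                                target_ratios: dict,
                                total_size: int,
                                seed: int = 42) -> pd.DataFrame:
    """Draw a training set whose group breakdown matches target_ratios."""
    samples = []
    for group, ratio in target_ratios.items():
        pool = resumes[resumes["gender"] == group]
        n = int(total_size * ratio)
        # Fail loudly if a group is too underrepresented to sample from.
        if len(pool) < n:
            raise ValueError(f"Only {len(pool)} '{group}' resumes; need {n}.")
        samples.append(pool.sample(n=n, random_state=seed))
    # Concatenate and shuffle so the model sees groups interleaved.
    return pd.concat(samples).sample(frac=1, random_state=seed)

# Example: aim for a 50:50 gender split in the training data.
# training_set = build_representative_sample(
#     all_resumes, {"female": 0.5, "male": 0.5}, total_size=10_000)
```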

Of course, this creates a problem for companies that don’t have the best track record for diversity, since they will struggle to create a large enough data set that remains representative of society as a whole. Introducing one female candidate into a huge pool of male resumes will not make a big difference.

In these instances, MIT Sloan suggests weighting underrepresented data points more heavily to make the AI aware of their importance. Other researchers, like those at Microsoft, also use different algorithms to analyze different groups, allowing them to work around AI’s blind spots.
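
Here’s a rough sketch of the reweighting idea, assuming a hypothetical feature matrix X, hire labels y, and a per-row demographic column: each resume is weighted inversely to its group’s share of the data, so the model can’t simply optimize for the dominant group.

```python
# Sketch of reweighting underrepresented groups during training.
# X, y, and the resumes DataFrame are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def demographic_weights(groups: pd.Series) -> np.ndarray:
    """Weight each row inversely to its group's share of the data."""
    shares = groups.value_counts(normalize=True)
    return groups.map(lambda g: 1.0 / shares[g]).to_numpy()

# weights = demographic_weights(resumes["gender"])
# model = LogisticRegression().fit(X, y, sample_weight=weights)
```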

2. Diversify the team working on the AI to expose it to a range of perspectives

Algorithms (the rules that govern how AI “thinks”) are created by humans. And if you have a group of people who all look and think alike creating those algorithms, the AI may start thinking just like that group.

“Technology inevitably reflects its creators in a myriad of ways, conscious and unconscious,” says Kriti Sharma, VP of AI at Sage, a business software company specializing in accounting and payroll. “The tech industry remains very male and fairly culturally homogeneous.”

To ensure the long-term applicability of these tools, Kriti argues that diversifying the teams working on AI is crucial. And that goes beyond diversity of race and gender.

“Currently, AI development is a PhD’s game,” she says. “While the focus on quality and utility needs to remain intact, expanding the diversity of people working on AI to include people with nontechnical professional backgrounds and less advanced degrees is vital to AI’s sustainability.”

Kriti recommends that companies “consider hiring creatives, writers, linguists, sociologists, and passionate people from nontraditional professions” for their AI teams. These people can bring a wide range of perspectives to the table, making AI more representative of a diverse society.

In the long term, Kriti also suggests that companies building their own AI tools should create training programs to help employees with different educational backgrounds join the AI team.

“Recruiting diverse sets of people will also help to improve and reinvent AI user experiences,” she says.

3. Monitor the results your AI tools produce and consider hiring an auditor to test them

The most important step any company can take when adopting a new AI tool is to closely monitor the results it produces. That goes for both third-party tools and proprietary ones.

Even if the purpose of the tool is not to eliminate bias, it may still have an impact on diversity. Gather and analyze data about the candidates the AI selects and the ones it screens out to spot any unforeseen trends. From there, you can figure out whether the algorithm itself needs to be tweaked.
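
One concrete way to run that analysis is to compare selection rates across demographic groups. The sketch below (in Python with pandas; the “group” and “selected” columns are hypothetical) flags any group whose selection rate falls below four-fifths of the best-off group’s, a common rule of thumb for spotting adverse impact:

```python
# Monitoring sketch: per-group selection rates from the AI's decisions.
# Columns are hypothetical: "group" = demographic label,
# "selected" = 1 if the AI advanced the candidate, else 0.
import pandas as pd

def adverse_impact_report(outcomes: pd.DataFrame) -> pd.DataFrame:
    rates = outcomes.groupby("group")["selected"].mean()
    ratio = rates / rates.max()  # each group vs. the best-off group
    return pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": ratio,
        "flagged": ratio < 0.8,  # four-fifths rule of thumb
    })

# outcomes = pd.DataFrame({"group": [...], "selected": [1, 0, ...]})
# print(adverse_impact_report(outcomes))
```

Results like these won’t tell you why a disparity exists, but they will tell you where to start looking.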

Ongoing data collection and analysis is especially important with tools that incorporate machine learning, since these tools will keep learning and adapting over time — so one round of testing won’t really cut it.

Regular auditing is one way to ensure the algorithms behind the tools aren’t biased. Loren Larsen, CTO of AI-powered video interviewing tool HireVue, recommends that companies both conduct internal audits of their results and hire third-party experts to review their algorithms.

“It is extremely important to audit the algorithms used in hiring to detect and correct for any bias,” he tells CNBC. “No company doing this kind of work should depend only on a third-party firm to ensure that they are doing this work in a responsible way. Third parties can be very helpful, and we have sought out third-party data-science experts to review our algorithms and methods to ensure they are state-of-the-art. However, it's the responsibility of the company itself to audit the algorithms as an ongoing, day-to-day process.”

AI can take bias out of hiring — unless it picks up bad habits

As AI becomes increasingly involved in the hiring process, companies must pay close attention to ensure it doesn’t do more harm than good.

Used effectively, AI has the potential to streamline the hiring process while boosting diversity. But that can only happen if the algorithms behind the tools are immune to human bias.

Experiment, welcome new perspectives, measure the results, and iterate to ensure the AI you use today will build the diverse and capable workforce of tomorrow.
