An Amazon AI Recruitment Tool Taught Itself That Men Were Better Candidates Than Women


Artificial intelligence (AI) is just as capable of bias as humans, in part because humans often unknowingly build their biases into algorithms during development. That's what happened with a project first publicized in a Reuters report on Wednesday: Amazon's experimental AI recruitment tool taught itself gender discrimination, and the company ultimately dropped it for that reason.

The idea behind the project was to streamline Amazon's hiring process by creating AI that could evaluate resumes and highlight candidates to consider. To build the tool, developers designed algorithms that analyzed 10 years' worth of resumes submitted to Amazon.

"Everyone wanted this holy grail," a source told Reuters. "They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those." The concept itself isn't totally novel; according to RecruitingTools.com, many AI tools have been developed to help with recruitment.

But here's the problem: The tech industry is notoriously male-dominated. According to a 2014 study from the U.S. Equal Employment Opportunity Commission, only 20 percent of high tech executives are women, nearly 10 percentage points below the overall private-sector average. Women hold 25 percent of computing jobs, per a 2016 report from the National Center for Women & Information Technology.

Because most of Amazon's past applicants were male, the AI learned to rate resumes higher when they used language coded as masculine and lower when the language was coded as feminine. Resumes that included the word "women's" (e.g., "women's chess club") were given a lower score. Verbs that were more common on men's resumes, like "executed," led to higher scores. Graduates of two women's colleges (which sources did not name) were also docked points.
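To see how this kind of bias can emerge without anyone programming it deliberately, here is a minimal, invented sketch: a model trained on historical hiring data tends to assign positive weight to tokens that appear more often in past "hired" resumes. If those resumes skew male, gendered words inherit that skew. The resume snippets and the crude counting heuristic below are illustrative assumptions, not Amazon's actual data or method.

```python
from collections import Counter

# Toy historical data (invented): the "hired" pile skews toward
# resumes with masculine-coded language, as in the Reuters report.
hired = [
    "executed software projects",
    "executed backend systems",
    "captured market share data",
]
not_hired = [
    "led women's chess club",
    "organized women's coding group",
    "managed community outreach",
]

def token_weights(pos, neg):
    """Score each token by how much more often it appears in
    hired resumes than in rejected ones -- a crude stand-in for
    the correlations a trained model would pick up."""
    pos_counts = Counter(t for r in pos for t in r.split())
    neg_counts = Counter(t for r in neg for t in r.split())
    vocab = set(pos_counts) | set(neg_counts)
    return {t: pos_counts[t] - neg_counts[t] for t in vocab}

weights = token_weights(hired, not_hired)
# "executed" appears only in hired resumes, so it gets a positive
# weight; "women's" appears only in rejected ones, so it gets a
# negative weight -- the bias comes entirely from the skewed data.
```

Nothing in this sketch mentions gender explicitly; the penalty on "women's" falls out of the data alone, which is why simply deleting an obviously gendered word list doesn't eliminate the underlying bias.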

The company launched the tool in 2014 and quickly recognized its potential to discriminate. Developers tried to tweak the algorithms to be more gender-neutral, but found that the changes didn't completely eliminate the potential for bias. Because of this flaw, and because the AI kept recommending candidates who were clearly unqualified, Amazon ultimately decided to stop using the software. The company is, however, still using a "much-watered down version" for some tasks.

Sources told Reuters that Amazon did consider the AI's recommendations during hiring for a period of time but that they were always weighed alongside other factors. It is not clear how many people's applications were analyzed by the tool during that period.

"The algorithms that drive AI don't reveal pure, objective truth just because they're mathematical," a 2017 report in D!gitalist Magazine explained. "Humans must tell AI what they consider suitable, teach it which information is relevant, and indicate the outcomes they consider best." ACLU attorney Rachel Goodman agrees, telling Reuters, "We are increasingly focusing on algorithmic fairness as an issue."

Amazon apparently still believes that it's possible to create an AI recruitment tool free from bias if developers are intentional about it. According to Reuters, a team in Edinburgh is currently working on one that will specifically work toward employee diversity.