Is AI Sexist?

[Image: a cartoon robot sitting in line with young applicants for a job interview]
Amazon’s AI Recruiting Tool Didn’t Like Women

Amazon recently scrapped an experimental artificial intelligence (AI) recruiting tool that was found to be biased against women. At this point, you might have a few questions, such as: What is an AI recruiting tool, and how does it work? Why was it biased against women? I’ll try to answer them in what follows.

The AI Recruiting Tool

You have certainly heard of human recruiters. They are matchmakers between employers and potential employees. They travel, send cold emails, and “network” at conferences and job fairs. When recruiters make a successful match, they get paid, sometimes by one party, sometimes by both. As you can see, this matchmaking dance is often expensive and time-consuming. Surely technology can help, right? A human recruiter can review at most a few dozen applications per day before she gets tired. In contrast, artificial intelligence can “read” thousands of applications in seconds and rank them based on desired criteria, showing the most promising candidates at the top. Understandably, then, an AI recruiter would be more efficient, in both time and cost, than a human one. And now that the human recruiter doesn’t need to sift through and rank candidates, she can devote her time to reaching out to the best candidates and wooing them into accepting an offer. What nice teamwork between the human and AI recruiters!

Unfortunately, things are never so simple. How can we ensure that the AI recruiter is being fair to all candidates? Can it offer explanations for why it didn’t suggest any women for a certain job opening? To answer these new questions, we need to understand how the AI tool “learns” to do its job.

It all starts with a big “training” set of job applications. For years, companies have been requiring job applicants to submit all their materials online. For example, if you have been on the academic job market, you were probably asked to upload your resume, cover letter, and letters of recommendation on a website like AcademicJobsOnline.org. Big corporations like Amazon and Google, unlike universities, run their own job application sites. Over time, therefore, they have amassed thousands and thousands of application materials, all in electronic form. Additionally, they have recorded which applicants were successful in their job hunts. Thus, they have examples of the materials submitted both by applicants who were hired and by applicants who were rejected. This information is then given to the AI tool so that it can “learn” the characteristics of a successful candidate. In the case of Amazon’s tool, the AI “learned” that words like “executed” and “captured” in a resume correlate with success. Meanwhile, it also “learned” that the presence of a phrase like “women’s” (as in “women’s chess club captain”) correlates with rejection, and so resumes containing it were downgraded.
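To make this concrete, here is a minimal sketch of that “learning” process, written in Python with the scikit-learn library. The resumes, the outcome labels, and the model choice are all hypothetical, since Amazon never published its system; the point is only to show the general recipe: turn each resume into word counts, then fit a model to past hiring decisions.

```python
# A minimal, hypothetical sketch of "learning" from past hiring
# decisions. Amazon never published its model; this only
# illustrates the general bag-of-words approach.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Each resume is reduced to raw text; each label records the
# historical outcome (1 = hired, 0 = rejected), bias included.
resumes = [
    "executed migration plan, captured client requirements",
    "women's chess club captain, built scheduling app",
    "led hackathon team, executed deployment pipeline",
    "women's coding society president, data analysis projects",
]
outcomes = [1, 0, 1, 0]

# Turn each resume into a "bag of words": mere word counts,
# with no understanding of what any word means.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)

# The model learns which words correlate with being hired.
model = LogisticRegression()
model.fit(X, outcomes)

# Score a new applicant the way the tool would rank one.
new_resume = vectorizer.transform(
    ["women's robotics team, executed testing plan"]
)
print(model.predict_proba(new_resume)[0, 1])  # estimated "success" score
```

With toy data like this, the learned weight for the token “women” comes out negative, dragging the new applicant’s score down: not because the model knows anything about gender, but because that token happened to co-occur with rejections in the past.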

Artificial intelligence, despite all the hype (it will save the planet) and all the fear (it will kill mankind), is not, actually, intelligent. It has no idea what a word like “women’s” means or how it corresponds to entities in the real world. This kind of AI is only good at detecting patterns and finding relationships in the data we give it. So the data we provide to the AI, and what we tell it to do with that data, are what matter most.

Why was the AI tool biased against women?

The Amazon employees who talked to Reuters anonymously said that the AI tool downgraded applications of graduates from two women’s colleges, without specifying which colleges. This detail is what compelled me to write about the tool. 

I am a woman computer science professor who teaches Artificial Intelligence at Wellesley College, which is a women’s college. As is typical at a liberal arts college, my students not only take computer science and mathematics courses for their major, but also courses in social sciences, arts, and humanities, courses with titles such as “Introduction to Women’s and Gender Studies,” “Almost Touching the Sky: Women’s Coming of Age Stories,” or “From Mumbet to Michelle Obama: Black Women’s History.” They are more likely than many other students to have the phrase “women’s” in their job application materials. Some of these students might have even been in the pool of applicants deemed as “not worthy to be recruited” by Amazon’s AI tool.

Every day, I stand in front of classrooms full of intelligent women, eager to learn about the beauty and power of algorithms. It pains me to find out that a major player like Amazon built and used algorithms that could, ultimately, have crushed these students’ dreams of making their mark on the world, by denying them the opportunity to join the teams of engineers who are designing and building our present and future technologies.

Why did the AI tool downgrade women’s resumes? Two reasons: data and values.

The jobs for which women were not being recommended by the AI tool were in software development. Software development is studied in computer science, a discipline whose enrollments have seen many ups and downs over the past two decades. For example, in 2008, when I joined Wellesley, the department graduated only 6 students with a CS degree; compare that to 55 graduates in 2018, a nine-fold increase. Amazon fed its AI tool historical application data collected over 10 years, years that most likely corresponded to the drought years in CS. Nationally, women have received around 18% of all CS degrees for more than a decade, and the underrepresentation of women in technology is a well-known phenomenon that people have been writing about since the early 2000s. The data that Amazon used to train its AI reflected this gender gap, which has persisted over the years: few women were studying CS in the 2000s, and fewer still were being hired by tech companies. At the same time, women were also abandoning the field, which is infamous for its awful treatment of women.

If, all other things being equal (say, the list of CS and math courses taken by female and male candidates, or the projects they worked on), women were not hired for a job at Amazon, the AI “learned” that the presence of a phrase like “women’s” might signal a difference between candidates. Thus, during the testing phase, it penalized applicants who had that phrase in their resumes. The AI tool became biased because it was fed data from the real world, data that encapsulated the existing bias against women.

Furthermore, it’s worth pointing out that Amazon is the only one of the five big tech companies (the others are Apple, Facebook, Google, and Microsoft) that hasn’t revealed the percentage of women working in technical positions. This lack of public disclosure only adds to the narrative of Amazon’s inherent bias against women.
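Returning to the hypothetical sketch from earlier, this kind of learned penalty is easy to surface: just inspect the weight the model assigned to each word. Words that co-occurred with rejections in the (biased) training data receive negative weights.

```python
# Continuing the earlier sketch: list every word the model has
# seen, sorted by learned weight. With biased historical data,
# the token "women" ends up near the bottom, with a negative weight.
import numpy as np

words = vectorizer.get_feature_names_out()
weights = model.coef_[0]
for idx in np.argsort(weights):
    print(f"{words[idx]:>12}  {weights[idx]:+.3f}")
```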

Could the Amazon team have predicted this? Here is where values come into play. Silicon Valley companies are famous for their neoliberal views of the world. Gender, race, and socioeconomic status are irrelevant to their hiring and retention practices; only talent and demonstrable success matter. So, if women or people of color are underrepresented, it’s because they are perhaps too biologically limited to be successful in the tech industry. The sexist cultural norms or the lack of successful role models that keep women and people of color away from the field are not to blame, according to this world view. 

To recognize such structural inequalities requires that one be committed to fairness and equity as fundamental driving values for decision-making. If you reduce humans to a list of words drawn from their coursework, school projects, and descriptions of extracurricular activities, you are subscribing to a very naive view of what it means to be “talented” or “successful.” Gender, race, and socioeconomic status are all communicated through the words in a resume. Or, to use a technical term, they are the hidden variables generating the resume content.
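The “hidden variable” point can be made concrete with one more hypothetical sketch. Even if every occurrence of “women’s” were scrubbed from the training data, correlated words, say, the name of a women’s college, would act as proxies, and the model would rediscover the same bias. (The college names and outcomes below are invented for illustration.)

```python
# A hypothetical sketch: no "women's" phrase appears anywhere,
# yet the name of a women's college acts as a proxy for gender
# and inherits the negative weight from biased past decisions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "stanford graduate, executed deployment pipeline",
    "wellesley graduate, built scheduling app",
    "mit graduate, captured client requirements",
    "smith graduate, data analysis projects",
]
outcomes = [1, 0, 1, 0]  # the same biased historical decisions

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, outcomes)

# The college names now carry the signal the model uses.
for word, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{word:>12}  {weight:+.3f}")
```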

Most likely, the AI tool was biased against not just women, but other less privileged groups as well. Imagine that you have to work three jobs to finance your education. Would you have time to produce open-source software (unpaid work that some people do for fun) or attend a different hackathon every weekend? Probably not. But these are exactly the kinds of activities that you would need in order to have words like “executed” and “captured” in your resume, which the AI tool “learned” to see as signs of a desirable candidate. 

Let’s not forget that Bill Gates and Mark Zuckerberg were both able to drop out of Harvard to pursue their dreams of building tech empires because they had been learning to code, effectively training for careers in tech, since middle school. The list of founders and CEOs of major tech companies is composed almost exclusively of men, most of them white and raised in wealthy families. Privilege, across several different axes, fueled their success.

Artificial Intelligence is not at fault here. The twisted values of what it means to be successful in the tech industry are the culprit. We need to expose these values and hold companies like Amazon accountable for continuing to abide by them. They must take responsibility for their fundamentally unfair practice of reducing humans to a “bag of words” in a resume, instead of nurturing and advancing their human potential. 


Image Credit: pathdoc, “Cartoon robot sitting in line with applicants for a job interview.” Shutterstock. Web. 30 October 2018.
