
Artificial Intelligence Has Been Programmed To Call Men “Doctors” & Women “Nurses”

by Virginia Chamlee

A team of researchers is working to fight a threat posed by artificial intelligence: not that robots will somehow take over the world anytime soon, but that AI has blind spots affecting women and minorities. Those blind spots, scientists say, could prove harmful in the long run.

Scientists have known about technology's blind spots for some time; after all, machine-learning systems learn from their programmers and their training data, and can therefore inherit human biases.

Jeannette Wing, the director of Columbia University's Data Sciences Institute, tells Bloomberg Technology that the inherent biases found in technology could be harmful to society, considering so many large companies are using AI to make important decisions: “The worry is if we don't get this right, we could be making wrong decisions that have critical consequences to someone's life, health, or financial stability.”

Scientists at Boston University and Microsoft Research New England took a closer look at how gender bias, specifically, could impact algorithms. What they found is that word embeddings (the numerical representations of words that machine-learning models use to process text) contain biases "that reflect gender stereotypes present in broader society." So, for instance, "doctor" is treated by some models as a masculine word, while "nurse" is deemed feminine.
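For readers curious what that kind of probe looks like in practice, here is a minimal sketch, assuming the gensim library and its downloadable GloVe vectors (neither is named in the study itself, and exact scores vary with the embedding used). It asks a pretrained embedding to complete the analogy "man is to doctor as woman is to ...", then compares how strongly occupation words lean toward "he" versus "she."

```python
# Sketch: probing a pretrained word embedding for gender associations.
# Assumes the gensim library and its downloadable GloVe vectors; the
# specific similarity scores will differ between embeddings.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained GloVe vectors

# Classic analogy probe: "man is to doctor as woman is to ___?"
# Biased embeddings tend to rank stereotypically female jobs highly here.
print(vectors.most_similar(positive=["woman", "doctor"], negative=["man"], topn=5))

# Compare how strongly each occupation word leans toward "she" vs. "he".
for job in ["doctor", "nurse", "engineer", "receptionist"]:
    lean = vectors.similarity(job, "she") - vectors.similarity(job, "he")
    print(f"{job:>14s}: leans toward {'she' if lean > 0 else 'he'} ({lean:+.3f})")
```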

The problem, the scientists say, is that "word embeddings not only reflect such stereotypes but can also amplify them. This poses a significant risk and challenge for machine learning and its applications."

The result of the researchers' work was a public dataset designed to be free of gender bias, one in which words like "doctor" and "nurse" are treated as equally male or female. They are also conducting research on a dataset without racial biases, though it will take several years before bias can be entirely removed from machine learning.
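The core idea behind that kind of debiasing is to identify a "gender direction" in the embedding space (roughly, the difference between vectors such as "he" and "she") and remove that component from words that should be neutral. The sketch below illustrates the projection step with made-up toy vectors; it is an illustration of the technique, not the researchers' actual code or data.

```python
# Sketch of a neutralization step: remove the "gender direction" from
# words that should be gender-neutral (e.g. "doctor", "nurse").
# The vectors below are hypothetical toy values for illustration only.
import numpy as np

def neutralize(word_vec: np.ndarray, gender_dir: np.ndarray) -> np.ndarray:
    """Project out the component of word_vec that lies along gender_dir."""
    g = gender_dir / np.linalg.norm(gender_dir)
    return word_vec - np.dot(word_vec, g) * g

he = np.array([1.0, 0.2, 0.0, 0.1])
she = np.array([-1.0, 0.3, 0.1, 0.0])
doctor = np.array([0.6, 0.5, 0.4, 0.2])

gender_direction = he - she          # the axis along which the bias lives
doctor_debiased = neutralize(doctor, gender_direction)

# After neutralization, "doctor" has no component along the gender axis,
# i.e. it is no longer "seen as masculine" by the embedding.
g_unit = gender_direction / np.linalg.norm(gender_direction)
print(np.dot(doctor_debiased, g_unit))  # ~0.0
```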

In 2016, for instance, the first international beauty contest to be judged by an algorithm drew ire when it was discovered that the technology overwhelmingly picked white winners. That same year, Microsoft unleashed a “millennial” chatbot that quickly "learned" to support Nazis and denigrate feminists, based on its interactions with users from 4chan and 8chan.

The company shut off the chatbot shortly thereafter and aimed to improve the technology, stating, "As [the AI] learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it." But even the evolved technology has exhibited bias. In 2017, Microsoft released a new bot (named "Zo") that it programmed specifically to avoid discussing politics and religion. Still, in Zo's fourth message to a BuzzFeed reporter, it mentioned, unprompted, that the Qur'an is “very violent.” Microsoft said these instances are rare and that it has taken action to stop those kinds of responses.

The real-world implications of machine bias came into public view earlier this year, thanks to a ProPublica investigation of a machine-learning model called COMPAS. According to the report, the system, which aims to predict the likelihood that a defendant will commit a future crime, may be biased against minorities. The technology is sometimes used in the criminal justice system to determine whether an inmate is granted parole.

There are several groups looking to tackle the problem of machine bias. Microsoft has a team called FATE (Fairness, Accountability, Transparency and Ethics) which aims to "address the need for transparency, accountability, and fairness in AI and ML systems." At Google, AI chief John Giannandrea has also been vocal about technology's blind spots — and how dangerous they can be, particularly as the use of AI becomes more mainstream and more respected.

At a Google conference held in October, Giannandrea dismissed claims that AI would one day “make humans obsolete,” saying, “I understand why people are concerned about it but I think it’s gotten way too much airtime. I just see no technological basis as to why this is imminent at all.” A very real concern, he argued, is the blind spots we've already seen exhibited by technology. “The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased.”

The gender-neutral system built by the Boston University and Microsoft researchers could be crucial to preventing that from happening. Having gender bias in machines could only amplify the gender bias in the real world, the researchers argued, while ensuring that AI is not biased "can hopefully contribute to reducing gender bias in society."