AI Is Sexist & Racist & It's All Our Fault

We tend to think of machines as objective, unlike people, who can hold subjective beliefs. But according to a new study published in the journal Science, this is far from the case. In fact, artificial intelligence can be sexist and racist. And that's because humans are the ones designing it — which, in turn, shows just how far-reaching prejudice can be.

The researchers studied word embeddings, a machine-learning technique that helps computers make sense of speech and writing and could eventually allow them to reason the way humans do. An embedding model assigns each word a set of numbers that represents how closely its meaning is associated with other words' meanings, and those assignments are based on how often the words appear together in text. The study tested words from two sources, Google News and the Common Crawl dataset, which includes around 840 billion words published online. Both yielded similar results.
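To make that concrete, here's a minimal sketch in Python of how "closeness of meaning" becomes closeness of numbers. The vectors below are made-up toys, not the study's real embeddings (the paper worked with pretrained models such as GloVe), and the measure shown, cosine similarity, is the standard way to compare word vectors:

```python
import numpy as np

# Made-up 3-dimensional vectors standing in for real embeddings.
# Real models (GloVe, word2vec) use hundreds of dimensions learned
# from co-occurrence statistics over billions of words.
embeddings = {
    "flower":   np.array([0.9, 0.1, 0.3]),
    "pleasant": np.array([0.8, 0.2, 0.4]),
    "bug":      np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(a, b):
    """1.0 means the vectors point the same way; near 0 means unrelated."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings["flower"], embeddings["pleasant"]))  # high
print(cosine_similarity(embeddings["bug"], embeddings["pleasant"]))     # lower
```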

Some of the associations were innocuous. Flowers were grouped with good things, for example, while bugs were bunched in with bad things. But things got really interesting with words for people. "Male" and "man" were grouped near words related to STEM fields, while "woman" and "female" were linked to household activities. White-sounding names ended up near nice words like "happy" and "gift," while black-sounding names landed near not-so-nice ones.
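The paper quantified groupings like these with what it called a Word-Embedding Association Test (WEAT), which checks whether a word sits closer, on average, to one set of attribute words than to another. Here's a rough sketch of that differential-association idea, again with toy vectors and hypothetical word lists of our own rather than the study's data:

```python
import numpy as np

# Hypothetical 2-dimensional toy vectors; the study used pretrained
# GloVe embeddings and much larger, carefully chosen word lists.
vec = {
    "man":     np.array([0.9, 0.1]),
    "woman":   np.array([0.1, 0.9]),
    "science": np.array([0.8, 0.2]),
    "home":    np.array([0.2, 0.8]),
}

def cos(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, attrs_a, attrs_b):
    """Mean similarity to set A minus mean similarity to set B.
    A positive score means the word leans toward A; this difference
    is a simplified version of the score at the heart of WEAT."""
    return (np.mean([cos(vec[word], vec[a]) for a in attrs_a])
            - np.mean([cos(vec[word], vec[b]) for b in attrs_b]))

# With these toy vectors, "man" leans toward "science" and "woman"
# toward "home", mirroring the kind of result the study reported.
print(association("man",   ["science"], ["home"]))   # positive
print(association("woman", ["science"], ["home"]))   # negative
```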

As AI gets more advanced and makes decisions based on these learned associations, such biases could become even more dangerous than they are in humans. Unlike us, AI doesn't have a moral compass motivating it to challenge the discriminatory ideas it picks up. And we can't treat AI's bias as an issue separate from our own. "A lot of people are saying this is showing that AI is prejudiced," study author and University of Bath computer scientist Joanna Bryson told The Guardian. "No. This is showing we’re prejudiced and that AI is learning it."

We've seen this in action in our everyday world, too. Twitter bots that go on racist rants after learning from the platform's live users are one example of AI picking up human bias; the same goes for search engine results, which are shaped by algorithms trained on the content people post. If those results provide a barometer for where we're at as a society, it should tell us a lot that a search for "girl" yields grown women while one for "boy" yields kids, and that typing "why do women" brings up autocomplete suggestions like "why do women always complain?"

The moral of the story is that until we have truly conscious machines that can think for themselves, we have to take responsibility for what AI does. Our machines may appear to hold outdated and prejudiced beliefs, but they don't actually have those beliefs; we do. And since technology has the ability to reflect them back at us, we should use this opportunity to examine them, and to figure out how we can be better moving forward.