Artificial Intelligence Can Be Sexist And Racist, And Here’s How It Can Affect You
Artificial intelligence isn't just science fiction anymore. It's everywhere, and it has the potential to change the way we do everything from harvesting crops to conducting wars. However, if your image of an artificially intelligent machine is one that thinks for itself and is capable of perfect, rational thought without any biases, think again. Evidence increasingly suggests that the artificial intelligence being built today is deeply vulnerable to the biases of the people and information that program it. New research from the University of Virginia found that artificial intelligence can exhibit sexist behaviors even when programmers don't intentionally instill those values. Considering how much the future of many industries (not to mention our personal lives) may depend on various forms of AI, bias of all kinds is a serious issue.
Data released recently suggests that the global economic impact of AI will be around $15 trillion — yes, trillion with a T. Analysts think robots will influence everything from health and medicine to manufacturing and the financial sector. And with the first-ever album composed by an AI in collaboration with a human also being released this week, the sky's the limit. But looking at how AIs learn about the world, and figuring out what they do with that knowledge, has revealed that artificial intelligence may simply reflect the same biases and prejudices its creators exhibit — namely, sexism and racism.
Artificial Intelligence Regurgitates Our Own Biases
AIs that learn using images as their guide often use a collection of language and meanings attached to the images to help with their recognition and understanding. Unfortunately, scientists at the University of Virginia found that those language cues often contain embedded sexism, and once the AIs pick up on those patterns, they learn to reproduce and even amplify them. "The activity 'cooking' is over 33 percent more likely to involve females than males in a training set [of images]," the scientists wrote in their recently released paper, "and a trained model further amplifies the disparity to 68 percent at test time."
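The amplification effect described above can be sketched in a few lines. The numbers below are made up for illustration (they are not the paper's data, and this is a simplified skew measure rather than the researchers' exact metric): a model trained on data where "cooking" scenes skew female leans on that gender cue and skews even further in its own predictions.

```python
# Toy illustration (hypothetical numbers) of bias amplification:
# a model trained on skewed labels can produce predictions that
# are skewed further than the data it learned from.

def gender_skew(female_count, male_count):
    """Fraction of labeled examples that involve women."""
    return female_count / (female_count + male_count)

# Hypothetical training-set labels for the activity "cooking":
train_skew = gender_skew(female_count=660, male_count=340)

# Hypothetical model predictions on a test set, where the model
# over-predicts "woman" for cooking scenes:
test_skew = gender_skew(female_count=840, male_count=160)

print(f"training skew:  {train_skew:.0%}")
print(f"predicted skew: {test_skew:.0%}")
print(f"amplification:  {test_skew - train_skew:+.0%}")
```

The point of the sketch is the gap between the last two numbers: the model didn't just inherit the imbalance in its training data, it widened it.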
This isn't the first time this connection has been made. Earlier in 2017, research published in Science found that programs designed to help computers make sense of language by building up word associations brought a whole host of sexist baggage with them. "Female" and "girl" were built into association webs with things like kitchens and domesticity, while "male" was associated with medicine and engineering.
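Those word associations can be measured directly, because these programs represent each word as a vector of numbers, and words that appear in similar contexts end up with similar vectors. Below is a toy sketch of the idea: the three-dimensional vectors are invented for illustration (real embeddings such as word2vec or GloVe have hundreds of dimensions learned from text), and the bias score simply asks whether a word sits closer to "woman" or to "man".

```python
# Toy sketch of measuring gender bias in word embeddings.
# The vectors below are made up; in a real system they would be
# learned from large text corpora, which is where bias creeps in.
import math

embeddings = {
    "woman":    [0.9, 0.1, 0.3],
    "man":      [0.1, 0.9, 0.3],
    "kitchen":  [0.8, 0.2, 0.4],
    "engineer": [0.2, 0.8, 0.5],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def gender_association(word):
    """Positive = closer to 'woman', negative = closer to 'man'."""
    v = embeddings[word]
    return cosine(v, embeddings["woman"]) - cosine(v, embeddings["man"])

print(f"kitchen:  {gender_association('kitchen'):+.2f}")   # positive: leans "woman"
print(f"engineer: {gender_association('engineer'):+.2f}")  # negative: leans "man"
```

This is essentially the kind of test the Science researchers ran at scale: nobody told the program that kitchens are "female," but the geometry of the learned vectors encodes it anyway.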
AIs can be not just sexist, but racist, too: In 2015, Google users found that its visual algorithms were labeling photographs of Black people as "gorillas." A video that went viral recently showed an automatic soap dispenser that was unable to detect darker skin tones, demonstrating a huge need for programmers that can account for these issues. Another algorithm used for risk assessment in the prison system was proven to be biased against Black incarcerated people, marking them as likely reoffenders at twice the rate of white people.
The difficulty lies in how these machines are taught about the world. While some are taught by a process of trial and error (known as 'reinforcement learning'), many are simply fed a huge quantity of data and given ways to interpret it. This can go badly wrong: Tay, the Microsoft Twitter chatbot placed online in March 2016, was famously taken offline because it started to mimic horrific hate speech learned from human users on the social media platform.
In other words, as artificial intelligence learns from humans, it picks up on and amplifies some of our worst qualities. Fixing the problem could well mean fixing the issues of prejudice in society as a whole.
As We Rely More On AI, It Might Become A Weapon For Stereotypes
Because of the potential impact of artificial intelligence on our society, this problem needs to be fixed soon. The potential for AIs in the future to use sexist associations to make poor choices that disadvantage women and people of color is vast, and worrying.
AIs are already primed to make women's positions more precarious: As Foreign Policy argued earlier this year, many of the roles AI will dominate in the economy involve routine information-processing tasks, exactly the positions where women climbing the career ladder often find themselves, so automation stands to disproportionately remove work from women. But experts also point out that machines may be given the power to make decisions — by conducting job interviews, for instance, or determining who gets into a college or is awarded a loan — that could be heavily influenced by their unacknowledged biases.
So how can we stop the problem? Some people advocate for more women to work in AI production, but while that is undoubtedly a step in the right direction, a lot of the problem comes from the world the AIs are learning about, and how they are taught to correct what they absorb. "If we look at how systems can be discriminatory now," wrote Kate Crawford in the New York Times, "we will be much better placed to design fairer artificial intelligence. But that requires far more accountability from the tech community. Governments and public institutions can do their part as well: As they invest in predictive technologies, they need to commit to fairness and due process."
Another issue lies within the mechanisms of capitalism itself. Because many of these systems are built by private companies, those companies are reluctant to release their AI algorithms to be analyzed for bias. Northpointe Inc., the company behind the recidivism controversy, refuses to let anybody investigate its algorithm on the grounds that it is "commercially sensitive." Tactics like this avoid the accountability that is crucial for eliminating bias in these programs.
Whether or not reinforcement learning could help programmers teach artificial intelligence to be less biased remains an open question, but one thing's for sure: The days of dominant artificial intelligence permeating our lives are coming — and if they're allowed to parrot and reinforce stereotypes, they certainly aren't going to make the world a better place.