Have We Made Google Auto-Complete Sexist, Or Is It The Other Way Around?

A study out of England's Lancaster University has analyzed auto-complete forms in Google Search, and concluded that we've "taught stereotypes to search engines."

Confused about how a search engine learned to stereotype? Open up Google Search, start typing a phrase about the gay community, and the suggested completions speak for themselves.

And the gay community is far from alone.

It's a chicken-and-egg situation. Does Google promote racism by surfacing our most-searched queries, or is it merely parroting our own racism back at us? Does being greeted with evidence of stereotypical thinking quell the same impulse in us, or will we click on the suggested searches, further reinforcing the algorithm? And perhaps most pertinently, does Google have an ethical responsibility to eradicate this kind of autocomplete stereotyping, or would doing so limit free speech?

It's a topical debate: Twitter, Apple and Facebook have all faced similar controversies of late. Twitter came under fire for letting abusive speech run rampant, prompting it to add a "report abuse" button, and Facebook was flamed for censoring images of breastfeeding and mastectomies while leaving rape memes on the site. Siri has been accused of being anti-abortion for flat-out refusing to direct users to abortion clinics, and iPhones still refuse to autocorrect the word "abortion."

With the exception of Siri's apparent pro-life leanings, the central issue here is whether platforms have a responsibility to censor the content their users provide. Legally, they don't have to. In the U.S., where these corporations are based, the First Amendment protects virtually all kinds of speech.

That said, Apple, Facebook and Twitter have an image to protect, and have rushed to defend themselves when accused of propagating hate speech. Google, for its part, hasn't yet responded to this most recent study. Nor has it commented on media misinterpretation of it. "Is Google making us RACIST?" asks the MailOnline, when in fact, as the study's researchers pointed out, they set out to show the reverse:

Humans may have already shaped the Internet in their image, having taught stereotypes to search engines and even trained them to hastily present these as results of 'top relevance'.

BuzzFeed noted that training search engines to echo our stereotypes is a more far-reaching issue, since it affects the next generation. These days, kids learn about the birds and the bees from YouTube and Google, and serving them prevalent stereotypes when they turn to Google Search to be educated might not be so wise. If one Googled 'women' and formed a judgment from the results, BuzzFeed noted, one would conclude that "women are crazy, money-grubbing, submissive, unfunny, beautiful, ugly, smart, stupid, and physically ill-equipped to do most things."

In May, a judge in Italy ruled that if Google detects libelous words being appended to a name (in that particular case, the plaintiff's name alongside the word "fraud"), the company is legally obligated to remove them. Google responded at the time:

We believe that Google should not be held liable for terms that appear in autocomplete as these are predicted by computer algorithms based on searches from previous users, not by Google itself. We are currently reviewing our options.

The protections of the First Amendment will likely keep cases of a similar nature out of American courts. But the question remains: does Google have an ethical obligation to stop regurgitating stereotypes? We'll have to wait and see what the company decides.