The nuances of human behavior pose a major problem for engineers responsible for programming the robots of the future. We can't expect to teach androids how to act like humans when we ourselves commit gaffes every day. Some researchers want to know: could robots learn human behavior from books?
The idea of a reading robot isn't unfamiliar. Johnny Five, Data, and Bender exhibited speed-reading capabilities in Short Circuit, Star Trek: The Next Generation, and Futurama. In fact, Johnny Five used reading specifically to learn about the world around him, while Data, ostensibly, flexed his reading muscles in pursuit of his mission to study humans.
But the idea of robots learning human behavior from books isn't strictly the stuff of science fiction anymore. Although speech recognition and natural language processing haven't yet been perfected, they're far more advanced now than they were when Johnny Five and Data burst onto the scene in the 1980s. An AI developed at Keen Software House has already shown the ability to retain information in order to solve future problems.
In "Using Stories to Teach Human Values to Artificial Agents," researchers Mark O. Riedl and Brent Harrison of the Georgia Institute of Technology's School of Interactive Computing write:
We believe that a computer that can read and understand stories, can, if given enough example stories from a given culture, “reverse engineer” the values tacitly held by the culture that produced them. These values can be complete enough that they can align the values of an intelligent entity with humanity. In short, we hypothesize that an intelligent entity can learn what it means to be human by immersing itself in the stories it produces. Further, we believe this can be done in a way that compels an intelligent entity to adhere to the values of a particular culture.
The researchers cite "[f]ables and allegorical tales" as safe reading material for robots — that is, stories that will not teach them bad behavior. If Riedl and Harrison's hypothesis is correct, these stories could easily be used to program androids to fit into their home cultures. Then again, if a robot could learn multiple cultural etiquettes and how to code-switch between them, we'd be living in the age of C-3PO.
Of course, reading alone won't program robots to carry out basic tasks that involve human-cyborg relations. Both The Verge and the Georgia Institute of Technology researchers describe similar scenarios in which a robot is asked to run a shopping errand but winds up committing robbery to carry it out, because the reward for completing the task outweighs the punishment for criminal behavior. It's another tricky nuance of human behavior. Program the robot to be too focused on its end goal, and you wind up with a seemingly "psychotic" android. Teach it to be too considerate of others, however, and it will never complete its task, because it will let everyone else go ahead of it in the checkout line.
Even though we're closer now than ever before to a truly functional AI, we're still years away. Like, decades. Or maybe even millennia. But we still have plenty of technological advancements to look forward to in the next 5-10 years. Trust me.
And if the AI revolution happens a little bit faster than expected, at least we'll all be here to share our favorite books with the androids. I, for one, welcome our new robot overlords.