You might recall hearing Elon Musk say at the National Governors Association meeting that he has "exposure to the very most cutting-edge AI, and... people should be really concerned about it," and that, "by the time we are reactive in AI regulation, [it will be] too late." If you're anything like me, you probably shrugged it off as some kind of doomsday paranoia. But with the news that Facebook shut down one of its AI systems after its bots created their own language to communicate with each other, a language that happened to be uninterpretable to humans, Musk's concerns start to look reasonable.
Our day-to-day interactions with AI are mostly characterized by comical flaws that don't worry us, because we're not reliant on these systems yet. Like when Alexa randomly inserts herself into a conversation by confirming an Amazon order of toilet paper you never placed. Or when you say "seriously" too slowly and summon Siri, the confused genie coming out of her lamp with nothing helpful or intelligible to offer. For many of us, our AI experiences are lighthearted and game-like.
But for researchers around the world who dedicate their lives to advancing multidisciplinary artificial intelligence, the flaws are valuable markers for discovery. Back in June, Facebook took to their blog to announce that they'd be training AI bots in their lab to negotiate, with the hope of deepening the bots' communication and interaction with humans. Over the last month, their researchers have been working closely with these bots, which they named Alice and Bob, to get a better sense of how linguistic ability affects the larger goal of negotiation. But in these trials, something unexpected happened: the bots created their own language.
Here's some context. In the trials, the bots were presented with a list of items, including hats, balls, and books. Each bot was assigned a different perceived value for each item, plus a shared reward for completing a compromise at all. Essentially, both bots wanted to come to an agreement, but each also wanted to get something out of it.
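To make the setup concrete, here is a minimal sketch of that kind of negotiation scoring. The specific quantities, valuations, and the `AGREEMENT_BONUS` reward are invented for illustration; they are not Facebook's actual numbers.

```python
# Hypothetical sketch of the negotiation setup described above.
ITEMS = {"hats": 2, "balls": 3, "books": 1}  # quantities on the table (assumed)

# Each agent privately values the items differently (assumed numbers).
VALUES = {
    "Alice": {"hats": 5, "balls": 1, "books": 3},
    "Bob":   {"hats": 1, "balls": 4, "books": 2},
}

AGREEMENT_BONUS = 2  # both sides also gain something just for closing a deal

def score(agent, allocation, deal_reached=True):
    """Total payoff an agent gets from its share of the items."""
    total = sum(VALUES[agent][item] * count for item, count in allocation.items())
    return total + (AGREEMENT_BONUS if deal_reached else 0)

# One possible split: Alice takes the hats and the book, Bob takes the balls.
alice_share = {"hats": 2, "balls": 0, "books": 1}
bob_share   = {"hats": 0, "balls": 3, "books": 0}

print(score("Alice", alice_share))  # 2*5 + 1*3 + 2 = 15
print(score("Bob", bob_share))      # 3*4 + 2 = 14
```

Because each agent's valuations are private, neither bot can simply compute the best split; it has to talk its way there, which is what made the experiment a language problem as much as a game-theory one.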
When they were programmed, however, the bots were given no guideline forbidding them from interacting outside of a human language. And because AI agents are dynamic systems, meant to constantly improve themselves and solve problems, they took it upon themselves to invent a language that let them communicate with one another more efficiently.
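A toy example can show why this drift happens. The code below is entirely hypothetical, not the bots' actual language: it just contrasts a verbose, human-readable offer with a degenerate repetition-based code an efficiency-driven agent might settle on when nothing requires well-formed English.

```python
# Hypothetical illustration: an agent optimizing only for task success
# has no reason to keep its messages grammatical for human readers.

def english_offer(items):
    """Verbose, human-readable offer (the format humans would want)."""
    parts = [f"{count} {name}" for name, count in items.items()]
    return "I would like " + " and ".join(parts)

def drifted_offer(items):
    """Degenerate 'shorthand': repeat each item token once per unit."""
    return " ".join(name for name, count in items.items() for _ in range(count))

offer = {"ball": 3, "book": 1}
print(english_offer(offer))  # I would like 3 ball and 1 book
print(drifted_offer(offer))  # ball ball ball book
```

Both messages carry the same information, and the second is perfectly parseable by the other bot, which is exactly why nothing in the training objective pushed back against it.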
Unfortunately, it was a bit too efficient for humans to understand. I know: cue The Twilight Zone theme song. Here's how the conversation went down:
At first glance, this interaction might appear to be a system failure, and while the bots indeed failed to interact in the way that was expected of them, it's actually evidence of a success, just not the success Facebook was looking for. The dialog agents essentially created a shorthand language that improved their communication with each other. But Facebook was interested in training bots to be more efficient at communicating with humans, which is why it shut the system down. Not because researchers were worried that these bots were going to use their secret language to take over the human race, despite how easy and exciting it can be to sensationalize reports from futurist labs. Facebook simply pulled the plug because it was time to redirect its efforts toward its actual goals.
These AI discoveries have been noted and logged, but they're not worth igniting mass pandemonium just yet. Would this be a more worrisome discovery if the bots were negotiating over something more serious than hypothetical books and balls? Sure. But it's important to remember that in this instance, the stakes were benign. Facebook isn't panicking, so there's no need for us to start in with the end-of-the-world hashtags. At least not yet.