Life

Research Just Proved Virtual Voice Assistants Can Spy On You Via Third Party Apps

Smart home assistant
Shutterstock

It might sound like something out of a dystopian Black Mirror episode, but a research group has just proven that there is potential for at-home virtual voice assistants to spy on customers through modified apps. Berlin-based hacking research collective and think tank Security Research Labs (SRL) created applications to use in an experiment that would prove personal data (including passwords) could be comprised via the use an Amazon Echo or Google Home. The apps turned the voice assistants into “Smart Spies,” as SRL has dubbed them.

“As the functionality of smart speakers grows, so too does the attack surface for hackers to exploit them ... The flaws allow a hacker to phish for sensitive information and eavesdrop on users,” SRL says. Smart speakers are activated by a key phrase (in this case, “OK Google” or “Alexa”), and then the user can use any number of commands, ranging from a request to play music to placing a hands-free phone call. In most instances, when the user is done with their request, it is assumed the app is no longer listening.

However, the app SRL created — a fake horoscope reader — had mechanisms in place that allowed it to keep listening rather than deactivating immediately. During those few seconds before actual deactivation, if an Amazon Echo user said a phrase with the word "I" in it, what they said was transcribed and sent back to SRL. And for Google Home, SRL found that “there is no need to specify certain trigger words and the hacker can monitor the user’s conversations infinitely.”

SRL were also able to take data from phishing messages that said: “An important security update is available for your device. Please say 'start update followed' by your password.” Then, everything said after the word “start” would be sent to the hackers. And it doesn’t have to stop at passwords; hackers could change the input to ask for an email address or even for credit card details.

"Users should be very suspicious when any smart speaker asks for a password, which no regular app is supposed to do," SRL's chief scientist Karsten Nohl told BBC News.

Nohl revealed that these hacks were fairly easy to to pull off, and programming experience wasn’t needed either.

Nohl reminded users that, if the voice assistant’s light is still on after they believe it’s been turned off, it’s an indication that the device is still listening.

This is everyone’s technological nightmare, but SRL has shared these vulnerabilities with Amazon and Google in order to help them tighten up security so personal data isn’t being mistreated. Google said it has removed SRL's Actions, also stating: "We are putting additional mechanisms in place to prevent these issues from occurring in the future." Amazon said: "Customer trust is important to us and we conduct security reviews as part of the skill certification process. We quickly blocked the Skill in question and put mitigations in place to prevent and detect this type of Skill behaviour and reject or take them down when identified."