Hey Alexa Go Hack Yourself: Researchers Detail Wild Self-Issued Smart Speaker Hijacks
That's what security researchers from the University of London's Royal Holloway and Catania University in Italy discovered is entirely possible. Through a few different methods, including social engineering or just being within earshot of an Echo device, Alexa can be activated and used fairly easily.
Tested on the third generation of the Echo Dot -- though believed to be exploitable on fourth gen devices as well -- researchers found that playing audio files with the right wake words will activate the Alexa Voice-enabled device it is playing from. Dubbed "Alexa Versus Alexa" by the researchers, the exploits can be used to order products, make modifications to the device's settings, install skills, and leverage a whole host of other functionality that the Amazon Echo product line
Amazon has issued a patch (check your software version here), which can be installed by asking your device to 'check for updates'. However, the issue remains if the attacker is in close enough proximity to pair to the Echo over Bluetooth, or even just use another nearby speaker loud enough to be picked up. To test the Bluetooth part of the exploit, the researchers utilized (PDF) Google's Text-To-Speech system to generate an audio file with the appropriate wake-words. Then, once played over the Echo Dot speaker, it hears itself and wakes up to perform commands as issued. One particular skill, "go on", can even be used to listen in on someone's commands to the device.
The "go on" functionality will grab the audio payload from the user, as skills are designed to infer the intent when a proper command is issued. Utilizing an intercepting skill in a loop allows the hacker to receive that intent, then ascertain what the issued command is, and even have the device respond appropriately. In turn, that allows the attacker to intercept any commands, and replace them, which could render the device incapable of performing commands as normal. Not to mention the breach of privacy at play here.
Just to see if a similar attack vector was possible on the biggest competitor to Amazon's Echo line, we decided to run a few of the same style attack vectors on our own on Google Home/Nest devices. We utilized media broadcasts, loud volumes, multiple devices, recorded audio, repeat after me commands, all attempting to have the device or other nearby devices self-activate, mostly to no avail. All we succeeded in doing was turning our lights on and off; an annoyance yes, but slightly harder to pull off without direct access. Additionally, Google's Home and Nest devices do typically require more validation if you do things that would normally cost money, like order products via their shopping associations. So this ultimately means there is significantly less risk of something like this happening with Google's Nest and Home line, at least from what we could tell with our short experiments. We can't say it will never happen, because no matter your preffered device, there is always a risk.
The vulnerability for Echo Dot 3rd and 4th generation has garnered a decent amount of attention and now is labeled as CVE-2022-25809, which you can view here.