As voice assistants become more popular, security concerns about always-listening devices are rising with them. Recently, researchers at Zhejiang University in China discovered a way to hack every voice assistant currently on the market, including Siri, Google Assistant, Amazon Alexa and Microsoft Cortana.
The technique, named DolphinAttack, translates a typical voice command into ultrasonic frequencies that are inaudible to human ears but can still be picked up by microphones and the software powering voice assistants.
The research team added an ultrasonic transducer, an amplifier and a battery to a regular smartphone to produce the ultrasonic commands, at a total cost of just US$3. Because humans cannot hear sound much above 20 kHz, signals in the 25 kHz to 39 kHz range become an effective weapon for compromising voice assistants.
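To illustrate the general principle (not the researchers' exact signal chain, which is not detailed here), the sketch below amplitude-modulates an audible test tone onto an ultrasonic carrier. The carrier frequency, sample rate and the `modulate_ultrasonic` helper are illustrative assumptions; in a DolphinAttack-style setup, nonlinearity in the receiving microphone effectively demodulates such a signal back into the audible band.

```python
import numpy as np

def modulate_ultrasonic(baseband, fs, carrier_hz=30_000, depth=1.0):
    """Amplitude-modulate a baseband audio signal onto an ultrasonic carrier.

    Illustrative sketch only: the transmitted signal sits around
    carrier_hz (inaudible), while the voice command rides on its envelope.
    """
    peak = np.max(np.abs(baseband))
    if peak > 0:
        baseband = baseband / peak  # normalize to [-1, 1]
    t = np.arange(len(baseband)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    return (1.0 + depth * baseband) * carrier

# Example: a 1 kHz test tone standing in for a voice command.
fs = 96_000  # sample rate must exceed twice the carrier frequency
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1_000 * t)
signal = modulate_ultrasonic(tone, fs)
```

The modulated output concentrates its energy at the carrier (here 30 kHz) plus sidebands at carrier ± 1 kHz, all well above human hearing.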
In their experiments, the team succeeded in making an iPhone call a phone number and take a photo, and also got a MacBook and a Nexus 7 tablet to open a malicious website. However, systems such as Google Home that can be trained to respond only to certain users’ voices will not follow the ultrasonic commands.
The team managed to launch attacks from as far as 170 cm away, making this a plausible means of attack in everyday urban settings.
Companies such as Google and Amazon are already reviewing the researchers’ claims. They might disable the device software’s ability to respond to sound frequencies outside the range of the human voice.
On the user side, smartphone owners can switch off wake-word activation so that their voice assistants are not triggered without permission. Smart speaker owners can also mute their devices so that they are no longer always listening.
The loophole exists partly because some companies need devices to react to near-ultrasonic transmissions for device-to-device communication. For example, Amazon’s Dash Button pairs with smartphones at around 18 kHz.
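As a hypothetical illustration of how a device might listen for such a near-ultrasonic pairing tone (Amazon's actual pairing protocol is not described here), a Goertzel filter can cheaply measure the energy at a single target frequency. The function name, frequencies and thresholds below are assumptions:

```python
import math

def goertzel_power(samples, fs, target_hz):
    """Energy at a single frequency via the Goertzel algorithm --
    a lightweight way for a device to listen for one pairing tone
    without computing a full FFT."""
    coeff = 2.0 * math.cos(2.0 * math.pi * target_hz / fs)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Hypothetical check: is an ~18 kHz tone present in a 48 kHz capture?
fs = 48_000
n = 4_800  # 100 ms window
capture = [math.sin(2 * math.pi * 18_000 * i / fs) for i in range(n)]
p18 = goertzel_power(capture, fs, 18_000)
```

The appeal of Goertzel here is that it runs in a single pass per target frequency, which suits low-power devices that only care about one tone.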