Both Siri and Google Assistant have been exploited through ultrasonic frequencies in a new form of hack dubbed a DolphinAttack.
Voice assistants have become an increasingly ubiquitous feature on modern smartphones, with the likes of Siri, Google Assistant, Cortana, Bixby, and Alexa ruling the roost. However, with new features come new vulnerabilities – and now hackers have managed to undermine voice recognition software through what has been called a ‘DolphinAttack’.
Why DolphinAttack? Hackers have leveraged ultrasonic frequencies – beyond what human ears can hear – to issue commands to voice assistants. The technique is similar to the ultrasonic frequencies that dolphins use for echolocation, hence the name.
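To give a rough sense of the idea, the sketch below (a simplified illustration, not the researchers' actual tooling) amplitude-modulates a stand-in "voice" tone onto a hypothetical 25 kHz ultrasonic carrier, so that all transmitted energy sits above the ~20 kHz limit of human hearing. The carrier frequency, sample rate, and the 1 kHz test tone are all illustrative assumptions:

```python
import numpy as np

fs = 192_000          # high sample rate so ultrasonic frequencies can be represented
carrier_freq = 25_000 # hypothetical carrier, above the ~20 kHz limit of human hearing

t = np.arange(0, 0.1, 1 / fs)

# Stand-in for a recorded voice command: a simple 1 kHz tone
voice = np.sin(2 * np.pi * 1_000 * t)

# Amplitude-modulate the "voice" onto the ultrasonic carrier.
# The resulting signal is inaudible to humans, but imperfections in a
# smartphone microphone's circuitry can recover the audible command from it.
modulated = (1 + 0.8 * voice) * np.sin(2 * np.pi * carrier_freq * t)

# Check where the transmitted energy actually sits in the spectrum
spectrum = np.abs(np.fft.rfft(modulated))
freqs = np.fft.rfftfreq(len(modulated), 1 / fs)
peak_freq = freqs[np.argmax(spectrum)]
```

Inspecting `peak_freq` shows the dominant energy at the 25 kHz carrier – silence to a human listener, but potentially a valid command once the microphone hardware demodulates it.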
The immediate issue is that there’s no simple fix; microphone hardware varies from smartphone to smartphone, but few models filter out ultrasonic frequencies – meaning that the majority of smartphone users are at risk of having their voice assistant hijacked out from under them.
For example, hackers could leverage a DolphinAttack to force a digital assistant to execute certain commands – hypothetically sending errant messages or altering the system settings of a user’s phone – while the user in question could be blissfully unaware that any activity was occurring on their handset.
The risk escalates considerably when one considers that many voice assistants are capable of controlling smart home technology, and that certain digital assistants (such as Bixby) can unlock a smartphone through a voice password.
Take a look at the video below, wherein hackers perform a DolphinAttack on an iPhone:
What are your thoughts? How can we make voice technology more secure? Should we rely on our digital assistants as much as we do? Be sure to let us know your opinion in the comments below!