Researchers show Siri and Alexa can be exploited with silent commands hidden in songs

Researchers at UC Berkeley have shown they can embed stealthy commands for popular voice assistants inside songs, prompting platforms like Siri, Alexa or Google Assistant to carry out actions without humans noticing.

The research, reported earlier by The New York Times, is a more actionable evolution of something security researchers have shown great interest in: fooling Siri.

Last year, researchers at Princeton University and China’s Zhejiang University demonstrated that voice-recognition systems could be activated using frequencies inaudible to the human ear. The attack first muted the phone so the owner wouldn’t hear the system’s responses, either.

The technique, which the Chinese researchers called DolphinAttack, can instruct smart devices to visit malicious websites, initiate phone calls, take a picture or send text messages. While DolphinAttack has its limitations (the transmitter must be close to the receiving device), experts warned that more powerful ultrasonic systems were possible.
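The modulation trick behind DolphinAttack can be illustrated in a few lines: a baseband voice command is amplitude-modulated onto an ultrasonic carrier, so the transmitted audio sits entirely above the range of human hearing, while the nonlinearity of a microphone’s circuitry demodulates it back to baseband. A minimal sketch (the 25 kHz carrier, 96 kHz sample rate and NumPy pipeline are illustrative assumptions, not the researchers’ actual setup):

```python
import numpy as np

FS = 96_000          # sample rate high enough to represent an ultrasonic carrier
CARRIER_HZ = 25_000  # above the ~20 kHz limit of human hearing

def modulate_ultrasonic(command_audio: np.ndarray, fs: int = FS,
                        carrier_hz: float = CARRIER_HZ) -> np.ndarray:
    """Amplitude-modulate a baseband voice command onto an ultrasonic carrier.

    A microphone amplifier's nonlinearity demodulates the signal back to
    baseband, so the device "hears" the command while humans hear nothing.
    """
    t = np.arange(len(command_audio)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Standard AM: add a DC offset so the envelope stays non-negative.
    return (1.0 + command_audio) * carrier

# Stand-in for a recorded voice command: one second of a 300 Hz tone.
t = np.arange(FS) / FS
command = 0.5 * np.sin(2 * np.pi * 300 * t)
transmitted = modulate_ultrasonic(command)

# The transmitted signal's energy is concentrated around 25 kHz,
# outside the audible range.
spectrum = np.abs(np.fft.rfft(transmitted))
freqs = np.fft.rfftfreq(len(transmitted), d=1 / FS)
peak_hz = freqs[np.argmax(spectrum)]
```

The spectral peak of the modulated signal lands at the carrier frequency, with the command carried in sidebands just beside it; nothing remains in the audible band.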

That warning was borne out in April, when researchers at the University of Illinois at Urbana-Champaign demonstrated ultrasound attacks from 25 feet away. While the commands couldn’t penetrate walls, they could control smart devices through open windows from outside a building.

The new research out of Berkeley can hide commands to make calls or visit specific websites without human listeners being able to discern them. The modifications add some digital noise, but nothing that sounds like English.

These exploits are still in their infancy, as are the security capabilities of the voice assistants. As smart assistants gain abilities that make it easier for users to send emails, messages and money with their voice, exploits like this are worrisome.

One takeaway is that digital assistant makers may have to get more serious about voice authentication so they can determine with greater accuracy whether the owner of a device is the one voicing commands, and if not, lock down the digital assistant’s capabilities. Amazon’s Alexa and Google Assistant both offer optional features that restrict personal information to a specific user based on their voice pattern; meanwhile, most sensitive info on iOS devices requires the device to be unlocked before it can be accessed.

The potential here is nonetheless frightening and something that should be addressed early on, and publicly. As we saw from some of Google’s demos of its Duplex software at I/O, the company’s ambitions for its voice assistant are growing rapidly, and as it begins to release camera-equipped Smart Display devices with its partners, the possibilities for abuse are widening.

SoundHound has raised a big $100M round to take on Alexa and Google Assistant

As SoundHound looks to leverage its ten-plus years of experience and data to create a voice recognition tool that companies can bake into any platform, it’s raising another big $100 million round of funding to try to make its Houndify platform a neutral third alternative to Alexa and Google Assistant.

While Amazon works to get developers to adopt Alexa, SoundHound has been collecting data since it started as an early mobile app for iPhone and Android devices. That’s given it more than a decade of data to work with as it tries to build a robust audio recognition engine and tie it into a system that can handle dozens of different queries and options tied to those voices. The outcome was always a better SoundHound app, but the company has increasingly started to open up that technology to developers, show that it’s more powerful (and accurate) than the rest of the voice assistants on the market, and get them to employ it in their services.

“We launched [Houndify] before Google and Amazon,” CEO Keyvan Mohajer said. “Obviously, good ideas get copied, and Google and Amazon have copied us. Amazon has the Alexa Fund to invest in smaller companies and bribe them to adopt the Alexa platform. Our reaction to that was, we can’t give $100 million away, so we came up with a strategy which was the reverse. Instead of us investing in smaller companies, let’s go after big successful companies that will invest in us to accelerate Houndify. We think it’s a good strategy. Amazon would be betting on companies that are not yet successful; we would bet on companies that are already successful.”

This round is all coming in from strategic investors. Part of the reason is that taking on these strategic investments allows SoundHound to secure important partnerships that it can leverage to get wider adoption for its technology. The companies investing also have a stake in SoundHound’s success and will want to help out wherever possible. The strategic investors include Tencent Holdings Limited, Daimler AG, Hyundai Motor Company, Midea Group and Orange S.A. SoundHound already has a number of strategic investors that include Samsung, NVIDIA, KT Corporation, HTC, Naver, LINE, Nomura, Sompo and Recruit. It’s a ridiculously long list, but again, the company is trying to get its technology baked in wherever it can.

So it’s pretty easy to see what SoundHound is going to get out of this: access to China through partners, deeper integration into vehicles, and increased expansion into other avenues through all of its investors. Mohajer said the company could try to get into China on its own (or ignore it altogether), but very few companies have had any success there whatsoever. Google and Facebook, two of the largest technology companies in the world, are not on that list of successes.

“China is a very important market; it’s very big, has a lot of potential, and it’s growing,” Mohajer said. “You can go to Canada without having to rethink a big strategy, but China is so different. We saw even companies like Google and Facebook try to do that and not succeed. When those bigger companies didn’t succeed, it was a signal to us that strategy wouldn’t work. [Tencent] was looking at the space and they saw we have the best technology in the world. They appreciated it and were respectful, and they helped us get there. We looked at so many partners and [Tencent and Midea Group] were the ones that worked out.”

The idea here is that developers in all sorts of different markets — whether that’s vehicles or apps — will want some form of voice interaction. SoundHound is betting that companies like Daimler will want to control the experience in their vehicles, and not have drivers saying “Alexa” whenever they want to make a request while driving. It may come down to something as simple as a wake word that could change the entire user experience, and that’s why SoundHound is pitching Houndify as a flexible and customizable option that doesn’t impose a brand on top of it.

SoundHound still has its stable of apps. The original SoundHound app is still around, though its features are also baked into Hound, the company’s main consumer app. Hound is more of a personal assistant-style voice recognition service where you can string together a sentence with as many as a dozen parameters and get a decent search result back. It’s more of a party trick than anything else, but it is a good demo of the technical capabilities SoundHound has as it looks to embed that software into lots of different pieces of hardware and software.

SoundHound may have raised a big round with a fresh set of strategic partners, but that certainly doesn’t mean it’s a surefire bet. Amazon is, after all, one of the most valuable companies in the world, and Alexa has proven to be a quite popular platform, even if it’s mostly used for simple requests and listening to music (and party tricks) at this point. SoundHound is going to have to convince companies — small and large — to bake in its tools rather than go with massive competitors like Amazon, which has pockets deep enough to buy a whole grocery chain.

“We believe every company is going to need to have a strategy in voice AI, just like 10 years ago everyone needed a mobile strategy,” Mohajer said. “Everyone should think about it. There aren’t many providers, principally because it takes a long time to build the core technology. It took us 12 years. To Houndify everything, we need to be global; we need to support all the main languages and regions in the world. We built the technology to be language independent, but there’s a lot of resources and execution involved.”

Kia and Hyundai cars will include AI assistants starting in 2019

Vehicles made by Korean carmakers Hyundai and Kia will include built-in virtual assistants with AI-powered smarts beginning in 2019 (via Engadget). The plan to build smart assistants into vehicles will make use of tech created by SoundHound, the music identification company that has recently been focusing more on building AI assistant software akin to Siri and Google Assistant.

The assistants would be able to do things like suggest destinations based on what’s next in your calendar, or on your past preferences and selections. They could also offer remote car and smart home control using voice commands, and handle Alexa-like tasks such as providing news and weather.

The so-called Intelligent Personal Assistant will get its official debut at CES, which is just a couple of weeks away, but the companies are aiming to begin testing in actual cars on actual roads starting in 2018, with a broad ship window of 2019, as mentioned.

Kia and Hyundai might seem like they’re making a future-focused bet on in-car AI here, but it’s actually a growing trend and area of focus for automakers. Toyota, Nissan and Honda have all demonstrated concept versions of car-specific assistants they intend to eventually make available, for instance, and other automakers, including Ford, are working with Amazon and others to bring existing virtual assistants to their vehicles, too.
