Researchers show Siri and Alexa can be exploited with silent commands hidden in songs

Researchers at UC Berkeley have shown they can embed stealthy commands for popular voice assistants within songs, prompting platforms like Siri, Alexa or Google Assistant to carry out actions without humans getting wise.

The research, reported earlier by The New York Times, is a more actionable evolution of something security researchers have been showing great interest in: fooling Siri.

Last year, researchers at Princeton University and China’s Zhejiang University demonstrated that voice-recognition systems could be activated using frequencies inaudible to the human ear. The attack first muted the phone so the owner wouldn’t hear the system’s answers, either.

The technique, which the Chinese researchers called DolphinAttack, can instruct smart devices to visit malicious websites, initiate phone calls, take a picture or send text messages. While DolphinAttack has its limitations — the transmitter must be close to the receiving device — experts warned that more powerful ultrasonic systems were possible.
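The published descriptions of DolphinAttack boil down to amplitude-modulating a recorded voice command onto an ultrasonic carrier: humans can’t hear anything above roughly 20 kHz, but nonlinearity in a microphone’s hardware demodulates the envelope back into the audible band. A minimal NumPy sketch of that modulation step (the carrier frequency, modulation depth and toy 400 Hz “command” below are illustrative assumptions, not the researchers’ actual parameters):

```python
import numpy as np

def ultrasonic_modulate(command, sample_rate, carrier_hz=25_000, depth=0.8):
    """Amplitude-modulate a baseband audio command onto an ultrasonic carrier.

    The envelope is kept strictly positive so the command rides on the
    carrier; a nonlinear microphone can recover it in the audible band.
    """
    t = np.arange(len(command)) / sample_rate
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    envelope = 1.0 + depth * command / (np.max(np.abs(command)) + 1e-12)
    return envelope * carrier

# Toy "command": a 400 Hz tone standing in for recorded speech.
sr = 96_000  # sample rate must exceed twice the carrier frequency
t = np.arange(sr) / sr
command = np.sin(2 * np.pi * 400 * t)
signal = ultrasonic_modulate(command, sr)
```

A spectrum of `signal` shows all of its energy sitting at the 25 kHz carrier and its sidebands, with nothing left in the audible range — which is precisely why a human bystander hears silence.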

That warning was borne out in April, when researchers at the University of Illinois at Urbana-Champaign demonstrated ultrasound attacks from 25 feet away. While the commands couldn’t penetrate walls, they could control smart devices through open windows from outside a building.

The Berkeley research can hide commands to make calls or visit specific websites without human listeners being able to discern them. The modifications add some digital noise, but nothing that sounds like English.

These exploits are still in their infancy, as are the security capabilities of the voice assistants. As smart assistants gain abilities that make it easier for users to send emails, messages and money with their voice, things like this are a bit worrisome.

One takeaway is that digital assistant makers may have to get more serious about voice authentication so they can determine with greater accuracy whether the owner of a device is the one voicing commands, and if not, lock down the digital assistant’s capabilities. Amazon’s Alexa and Google Assistant both offer optional features that lock down personal information to a specific user on the basis of their voice pattern; meanwhile, most sensitive info on iOS devices requires the device to be unlocked before it’s accessed.
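The shape of such a lockdown is straightforward: score how closely the current speaker’s voiceprint matches the enrolled owner’s, and gate sensitive commands on that score. A toy sketch of the idea — the command names, the three-dimensional “embeddings” and the 0.85 threshold are all made-up illustrations, not any vendor’s actual mechanism (real systems use learned speaker embeddings of much higher dimension):

```python
import numpy as np

# Hypothetical set of commands sensitive enough to require the owner's voice.
SENSITIVE = {"send_money", "read_email", "unlock_door"}

def cosine(a, b):
    """Cosine similarity between two voiceprint embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authorize(command, voice_embedding, owner_embedding, threshold=0.85):
    """Allow a sensitive command only if the current speaker's embedding
    is close enough to the enrolled owner's; otherwise lock it down."""
    if command in SENSITIVE and cosine(voice_embedding, owner_embedding) < threshold:
        return False
    return True

# Toy three-dimensional "voiceprints" for illustration.
owner = np.array([0.9, 0.1, 0.4])
same_speaker = owner + 0.01
stranger = np.array([0.1, 0.95, 0.2])
```

Under this scheme a stranger (or a hidden command in a song) could still ask for the weather, but requests to move money or read mail would fail the similarity check.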

The potential here is nevertheless frightening and something that should be addressed early on, publicly. As we saw from some of Google’s demos of its Duplex software at I/O this past week, the company’s ambitions for its voice assistant are growing rapidly, and as the company begins to release Smart Display devices with its partners that integrate cameras, the possibilities for abuse are widening.

Voice interfaces beginning to find their way into business

Imagine attending a business meeting with an Amazon Echo (or any voice-driven device) sitting on the conference table. A discussion starts about the month’s sales numbers in the Southeast region. Instead of opening a laptop, launching a program like Excel and finding the numbers, you simply ask the device and get the answer instantly.
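That scenario maps naturally onto a custom voice-assistant skill. A minimal sketch using the Alexa skills request/response JSON shape — the `GetSalesIntent` name, the `region` slot and the hard-coded sales figures are hypothetical stand-ins for a real intent schema and a real BI query:

```python
# Hypothetical sales figures keyed by region -- stands in for a real BI lookup.
SALES = {"southeast": "1.2 million dollars", "northwest": "900 thousand dollars"}

def handle_request(event):
    """Minimal handler for a hypothetical 'GetSalesIntent', answering in the
    Alexa skills response JSON shape."""
    intent = event["request"]["intent"]
    region = intent["slots"]["region"]["value"].lower()
    figure = SALES.get(region, "a figure I don't have on file")
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": f"Sales for the {region} region this month were {figure}.",
            },
            "shouldEndSession": True,
        },
    }

# A pared-down request, as the skill platform would deliver it to the handler.
event = {"request": {"intent": {"name": "GetSalesIntent",
                                "slots": {"region": {"value": "Southeast"}}}}}
reply = handle_request(event)
```

The device then speaks the `outputSpeech` text back to the room, which is the entire interaction the meeting scenario above describes.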

That kind of scenario is increasingly becoming a reality, although it is still far from commonplace in business just yet.

With the increasing popularity of devices like the Amazon Echo, people are beginning to get used to the idea of interacting with computers using their voices. Anytime a phenomenon like this enters the consumer realm, it is only a matter of time before we see it in business.

Chuck Ganapathi, CEO at Tact, an AI-driven sales tool that uses voice, text and touch, says that with our devices changing, voice makes a lot of sense. “There is no mouse on your phone. You don’t want to use a keyboard on your phone. With a smart watch, there is no keyboard. With Alexa, there is no screen. You have to think of more natural ways to interact with the device.”

As Werner Vogels, Amazon’s chief technology officer, pointed out during his AWS re:Invent keynote at the end of last month, up until now we have been limited by the technology in how we interact with computers. We type keywords into Google using a keyboard because that was the only way the technology allowed us to enter information.

“Interfaces to digital systems of the future will no longer be machine driven. They will be human centric. We can build natural human interfaces to digital systems and with that a whole environment will become active,” he said.

Amazon will of course be happy to help in this regard, introducing Alexa for Business as a cloud service at re:Invent, but other cloud companies are also exposing voice services for developers, making it ever easier to build voice into an interface.

While Amazon took direct aim at business for the first time with this move, some companies had been experimenting with Echo integration much earlier. Sisense, a BI and analytics tool company, introduced Echo integration as early as July 2016.

But not everyone wants to cede voice to the big cloud vendors, no matter how attractive they might make it for developers. We saw this when Cisco introduced the Cisco Voice Assistant for Spark in November, using voice technology it acquired with the MindMeld purchase the previous May to provide voice commands for common meeting tasks.

Roxy, a startup that got $2.2 million in seed funding in November, decided to build its own voice-driven software and hardware, taking aim, for starters, at the hospitality industry. It has broader aspirations beyond that, but one early lesson it has learned is that not all companies want to give their data to Amazon, Google, Apple or Microsoft. They want to maintain control of their own customer interactions, and a solution like Roxy gives them that.

In yet another example, Synqq introduced a notes app at the beginning of the year that uses voice and natural language processing to add notes and calendar entries to the app without having to type.

As we move into 2018, we should start to see even more examples of this type of integration, both with the help of big cloud companies and from companies trying to build something independent of those vendors. The keyboard won’t be consigned to the dustbin just yet, but in scenarios where it makes sense, voice could begin to replace the need to type and offer a more natural way of interacting with computers and software.


Amazon and Microsoft agree their voice assistants will talk (to each other)

Those betting big on AI making voice the dominant user interface of the future are not betting so big as to believe their respective artificially intelligent voice assistants will be the sole vocal oracle that Internet users want or need.

And so Microsoft’s Satya Nadella and Amazon’s Jeff Bezos are today announcing a tie-up, which will — at an unspecified point later this year — enable users of the latter’s Alexa voice assistant to ask her to summon Microsoft’s Cortana voice assistant to ask it to do stuff, and vice versa.

Here are the pair’s respective statements on the move:

Quoth Satya Nadella, CEO, Microsoft: “Ensuring Cortana is available for our customers everywhere and across any device is a key priority for us. Bringing Cortana’s knowledge, Office 365 integration, commitments, and reminders to Alexa is a great step toward that goal.”

Said Jeff Bezos, founder and CEO, Amazon: “The world is big and so multifaceted. There are going to be multiple successful intelligent agents, each with access to different sets of data and with different specialized skill areas. Together, their strengths will complement each other and provide customers with a richer and even more helpful experience. It’s great for Echo owners to get easy access to Cortana.”

And here’s how they sum up the win-win benefits they see for their respective users by letting their voice assistants interoperate:

Alexa customers will be able to access Cortana’s unique features like booking a meeting or accessing work calendars, reminding you to pick up flowers on your way home, or reading your work email, all using just your voice. Similarly, Cortana customers can ask Alexa to control their smart home devices, shop on Amazon, interact with many of the more than 20,000 skills built by third-party developers, and much more.

The main thing to note here — aside from how clumsy it’s going to be having one voice assistant summon another — is that Cortana and Alexa play in very different realms; one being productivity and business focused, and the other being ecommerce/entertainment and consumer focused.

Which means there’s little strategic reason for Alexa or Cortana to be overly territorial vis-a-vis one another at this point vs — on the flip side — the extra utility they reckon they can reap by agreeing to integrate their products and expand the relative capabilities of each.

So really this alliance is largely a commentary on the slender individual utility currently offered by each (or any) of these heavily hyped voice assistant technologies.

In an interview about the tie-up with The New York Times, Bezos described a future where people turn to different AIs for different areas of expertise — akin to asking one friend for advice about hiking and another for restaurant recommendations.

“I want them to have access to as many of those AIs as possible,” he is quoted as saying.

Bezos also professed himself open to the idea of interoperating with Apple’s Siri and Google’s eponymous voice AI — although he confirmed neither had been approached.

And, to be clear, there seems zero chance of Apple and Google inking on the interoperability line, given they control the two dominant mobile ecosystems and therefore have different strategic ecosystem priorities vs Amazon and Microsoft (the two companies which, let us not forget, lost out in the mobile platform race).

So, in sum, if you can’t beat the dominant mobile platforms, you can at least forge wider product integrations to try to offer a more compelling app proposition.
