Researchers show Siri and Alexa can be exploited with silent commands hidden in songs

Researchers at UC Berkeley have shown they can embed stealthy commands for popular voice assistants within songs, prompting platforms like Siri, Alexa or Google Assistant to carry out actions without the humans nearby noticing.

The research, reported earlier by The New York Times, is a more actionable evolution of something security researchers have shown great interest in for a while: fooling voice assistants like Siri.

Last year, researchers at Princeton University and China’s Zhejiang University demonstrated that voice-recognition systems could be activated using frequencies inaudible to the human ear. The attack also muted the phone first so the owner wouldn’t hear the system’s responses.

The technique, which the Chinese researchers called DolphinAttack, can instruct smart devices to visit malicious websites, initiate phone calls, take a picture or send text messages. While DolphinAttack has its limitations (the transmitter must be close to the receiving device), experts warned that more powerful ultrasonic systems were possible.

That warning was borne out in April, when researchers at the University of Illinois at Urbana-Champaign demonstrated ultrasound attacks from 25 feet away. While the commands couldn’t penetrate walls, they could control smart devices through open windows from outside a building.
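At a high level, DolphinAttack amplitude-modulates an ordinary spoken command onto an ultrasonic carrier; people can’t hear the carrier, but nonlinearities in a phone’s microphone demodulate it back into audible-range speech that the assistant recognizes. Below is a minimal conceptual sketch of just the modulation step (NumPy/SciPy, with a hypothetical `command.wav` input; the carrier frequency, sample rate and modulation depth are illustrative assumptions, not the researchers’ exact parameters).

```python
# Conceptual sketch of DolphinAttack-style amplitude modulation (not the
# researchers' code): shift a recorded voice command onto an ultrasonic
# carrier so it is inaudible to humans but can be demodulated by microphone
# hardware nonlinearity. Assumes a mono input recording.
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 30_000   # assumed ultrasonic carrier; must sit above ~20 kHz
OUT_RATE = 192_000    # output sample rate high enough to represent the carrier

rate, voice = wavfile.read("command.wav")   # hypothetical input file
voice = voice.astype(np.float64)
voice /= max(np.max(np.abs(voice)), 1e-9)   # normalize to roughly [-1, 1]

# Upsample the baseband command to the output rate (simple linear interpolation).
t_in = np.arange(len(voice)) / rate
t_out = np.arange(0, t_in[-1], 1.0 / OUT_RATE)
baseband = np.interp(t_out, t_in, voice)

# Classic AM: a carrier with the command riding on top of it.
carrier = np.cos(2 * np.pi * CARRIER_HZ * t_out)
modulated = (1.0 + 0.8 * baseband) * carrier   # 0.8 = assumed modulation depth

wavfile.write("ultrasonic.wav", OUT_RATE, (modulated * 32767 / 1.8).astype(np.int16))
```

The real attack also depends on speaker hardware that can emit ultrasound cleanly and on the distance constraints noted above, which this sketch glosses over.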

The Berkeley research, specifically, can hide commands to make calls or visit particular websites without human listeners being able to discern them. The modifications add some digital noise, but nothing that sounds like English.
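This is an adversarial-example style of attack: starting from an ordinary audio clip, a small perturbation is optimized so a speech-to-text model transcribes an attacker-chosen phrase while the clip still sounds essentially unchanged to people. The PyTorch sketch below illustrates that optimization loop conceptually; `speech_model` and `ctc_loss_to_target` are stand-ins for a differentiable transcription model and loss, not the actual system the researchers attacked, and the step counts and bounds are assumptions.

```python
# Conceptual sketch of an audio adversarial perturbation (illustrative only;
# not the Berkeley authors' code). Assumes `speech_model` maps a waveform
# tensor to per-frame character logits and is differentiable end to end.
import torch

def make_hidden_command(waveform, target_text, speech_model, ctc_loss_to_target,
                        steps=1000, lr=1e-3, max_db=-30.0):
    """Optimize a small additive perturbation so the model hears `target_text`."""
    delta = torch.zeros_like(waveform, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    # Cap perturbation loudness relative to the original signal (assumed bound).
    limit = float(waveform.abs().max()) * (10.0 ** (max_db / 20.0))

    for _ in range(steps):
        adversarial = waveform + delta.clamp(-limit, limit)
        logits = speech_model(adversarial)
        # Push the transcription toward the target while keeping the noise small.
        loss = ctc_loss_to_target(logits, target_text) + 1e-2 * delta.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    return (waveform + delta.clamp(-limit, limit)).detach()
```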

These exploits are still in their infancy, as are the security capabilities of the voice assistants. As smart assistants gain abilities that make it easier for users to send emails, messages and money with their voice, attacks like this are more than a bit worrisome.

One takeaway is that digital assistant makers may have to get more serious about voice authentication so they can determine with greater accuracy whether the owner of a device is the one voicing commands, and if not, lock down the digital assistant’s capabilities. Amazon’s Alexa and Google Assistant both offer optional features that lock down personal information to a specific user on the basis of their voice pattern; meanwhile, most sensitive info on iOS devices requires the device to be unlocked before it can be accessed.
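Voice-based locking generally boils down to speaker verification: compare an embedding of the incoming utterance against an enrolled voiceprint and only unlock sensitive actions above a similarity threshold. The sketch below shows that gating logic only; `embed()` is a hypothetical speaker-embedding model and the 0.75 threshold is an illustrative assumption, not how any particular vendor implements it.

```python
# Illustrative speaker-verification gate (not any vendor's implementation).
# `embed` is assumed to be a speaker-embedding model returning a 1-D vector.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def allow_sensitive_action(utterance_audio, enrolled_voiceprint, embed,
                           threshold=0.75):
    """Permit purchases/messages only if the speaker matches the enrolled owner."""
    similarity = cosine_similarity(embed(utterance_audio), enrolled_voiceprint)
    return similarity >= threshold
```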

The potential here is nevertheless frightening, and something that should be addressed early on, and publicly. As we saw from some of Google’s demos of its Duplex software at I/O this week, the company’s ambitions for its voice assistant are growing rapidly, and as it begins to release Smart Display devices with partners that integrate cameras, the possibilities for abuse widen.


SoundHound has raised a big $100M round to take on Alexa and Google Assistant

As SoundHound looks to leverage its ten-plus years of experience and data to create a voice recognition tool that companies can bake into any platform, it’s raising another big $100 million round of funding to try to make its Houndify platform a neutral third alternative to Alexa and Google Assistant.

While Amazon works to get developers to adopt Alexa, SoundHound has been collecting data since it started as an early mobile app on iPhone and Android devices. That’s given it more than a decade of data to work with as it tries to build a robust audio recognition engine and tie it into a system with dozens of different queries and options it can match to those voice requests. The result was always a better SoundHound app, but the company has increasingly started to open up that technology to developers, to show it’s more powerful (and accurate) than the rest of the voice assistants on the market, and to get them to use it in their services.

“We launched [Houndify] before Google and Amazon,” CEO Keyvan Mohajer said. “Obviously, good ideas get copied, and Google and Amazon have copied us. Amazon has the Alexa fund to invest in smaller companies and bribe them to adopt the Alexa platform. Our reaction to that was, we can’t give $100 million away, so we came up with a strategy which was the reverse. Instead of us investing in smaller companies, let’s go after big successful companies that will invest in us to accelerate Houndify. We think it’s a good strategy. Amazon would be betting on companies that are not yet successful; we would bet on companies that are already successful.”

This round is all coming from strategic investors. Part of the reason is that taking on these strategic investments lets SoundHound secure important partnerships that it can leverage to get wider adoption for its technology. The companies investing also have a stake in SoundHound’s success and will want to help out wherever possible. The strategic investors include Tencent Holdings Limited, Daimler AG, Hyundai Motor Company, Midea Group and Orange S.A. SoundHound already has a number of strategic investors that include Samsung, NVIDIA, KT Corporation, HTC, Naver, LINE, Nomura, Sompo and Recruit. It’s a ridiculously long list, but again, the company is trying to get its technology baked in wherever it can.

So it’s pretty easy to see what SoundHound is going to get out of this: access to China through partners, deeper integration into vehicles, and expansion into other avenues through all of its investors. Mohajer said the company could try to get into China on its own (or ignore it altogether), but only a very limited number of companies have had any success there whatsoever. Google and Facebook, two of the largest technology companies in the world, are not on that list of successes.

“China is a very important market; it’s very big, has a lot of potential, and it’s growing,” Mohajer said. “You can go to Canada without having to rethink a big strategy, but China is so different. We saw even companies like Google and Facebook tried to do that and didn’t succeed. When those bigger companies didn’t succeed, it was a signal to us that strategy wouldn’t work. [Tencent] was looking at the space and they saw we have the best technology in the world. They appreciated it and were respectful, and they helped us get there. We looked at so many partners and [Tencent and Midea Group] were the ones that worked out.”

The idea here is that developers in all sorts of different markets, whether that’s vehicles or apps, will want some form of voice interaction. SoundHound is betting that companies like Daimler will want to control the experience in their vehicles, and not have drivers saying “Alexa” whenever they want to make a request while driving. Instead, it may come down to something as simple as a wake word that could change the entire user experience, and that’s why SoundHound is pitching Houndify as a flexible and customizable option that doesn’t demand a brand on top of it.

SoundHound still has its stable of apps. The original SoundHound app is still around, though its features are also baked into Hound, the company’s main consumer app. That is more of a personal assistant-style voice recognition service where you can string together a sentence with as many as a dozen parameters and get a decent search result back. It’s more of a party trick than anything else, but it is a good demo of the technical capabilities SoundHound has as it looks to embed that software into lots of different pieces of hardware and software.

SoundHound may have raised a big round with a fresh set of strategic partners, but that certainly doesn’t mean it’s a surefire bet. Amazon is, after all, one of the most valuable companies in the world, and Alexa has proven to be a quite popular platform, even if it’s mostly used for trivial requests and listening to music (and party tricks) at this point. SoundHound is going to have to convince companies, small and large, to bake in its tools rather than go with massive competitors like Amazon, whose pockets are deep enough to buy a whole grocery chain.

“We believe every company is going to need a strategy in voice AI, just like 10 years ago everyone needed a mobile strategy,” Mohajer said. “Everyone should think about it. There aren’t many providers, principally because it takes a long time to build the core technology. It took us 12 years. To Houndify everything we need to be global; we need to support all the main languages and regions in the world. We built the technology to be language independent, but there’s a lot of resources and execution involved.”


Apple, in a very Apple move, is reportedly working on its own Mac chips

Apple is planning to use its own chips for its Mac devices, which could replace the Intel chips currently running on its desktop and laptop hardware, according to a report from Bloomberg.

Apple already designs a lot of custom silicon, including chipsets like the W-series for its Bluetooth headphones, the S-series in its watches and its A-series iPhone chips, as well as a customized GPU for the new iPhones. In that sense, Apple has in a lot of ways built its own internal fabless chip firm, which makes sense as it looks for its devices to tackle more and more specific use cases and to reduce its reliance on third parties for its hardware. Apple is already in the middle of a very public spat with Qualcomm over royalties, and while the Mac is sort of a tertiary product in its lineup, it still contributes a significant portion of revenue to the company.

Creating an entire suite of custom silicon could do a lot of things for Apple, not least of which is bringing the Mac into a system where its devices can talk to each other more efficiently. Apple already has a lot of tools for shifting user activities between all its devices, but making that more seamless means it’s easier to lock users into the Apple ecosystem. If you’ve ever compared connecting headphones with a W1 chip to an iPhone against pairing typical Bluetooth headphones, you’ve likely seen the difference, and that could become even more robust with Apple’s own Mac chips. Bloomberg reports that Apple may implement the chips as soon as 2020.

Intel may be the clear loser here, and the market is reflecting that. Intel’s stock is down nearly 8% since the report came out, as the move would be a clear shift away from the company’s architecture, where it has long held its ground, as Apple moves from traditional silicon to its own custom designs. Apple also isn’t the only company looking to design its own silicon, with Amazon looking into building its own AI chips for Alexa in another move to create lock-in for its ecosystem. And while the biggest players are looking at their own architectures, there’s an entire crop of startups getting a lot of funding to build custom silicon geared toward AI.

Apple declined to comment.


Voice shopping estimated to hit $40 billion across U.S. and U.K. by 2022

The growing popularity of smart speakers like Amazon Echo and Google Home will lead to an explosion in voice-based shopping, according to a new market research report from OC&C Strategy Consultants out this week. The firm is bullishly predicting that voice shopping will grow to a whopping $40 billion-plus in 2022, up from $2 billion today across the U.S. and the U.K.

This sizable increase will be driven by Amazon’s smart speaker sales in particular, the report said.

This forecast far surpasses earlier estimates of voice shopping revenues in the years ahead. While not an exact comparison, RBC Capital Markets recently predicted Amazon would generate $10 billion to $11 billion in sales from Alexa devices, including device sales themselves and voice shopping, by the year 2020.

Now you can have a conversation with Alexa without screaming Hey, Alexa for every request

Those with digital home assistants know this phenomenon all too well. You ask Siri, Google or Alexa to hook you up with the facts, they offer up answers, but then you have a follow-up question. In order to ask that follow-up question, you have to say “Hey, Siri,” “Hey, Google” or “Alexa” all over again. It’s a true aggravation in this first world we live in.

Amazon’s new Follow-Up Mode addresses that. The feature is available on all hands-free Alexa-enabled devices, like the Echo, Echo Dot and Echo Spot. Follow-Up Mode won’t work, however, if Alexa isn’t “confident you’re speaking to her,” according to Amazon.

This opt-in feature enables Alexa to listen for five seconds after she’s served up her first answer. You’ll know Alexa is ready for a follow-up question if the blue indicator light stays on after she answers your first one. If you have nothing else to say, homegirl will go back to sleep until you “Hey, Alexa” her again.


Alexa won’t respond in Follow-Up Mode “if she detects that speech was background noise or that the intent of the speech was not clear,” Amazon explained on a customer service page for Alexa.

It’ll be interesting to see how well this works. If you have an Alexa device, be sure to let us know how Follow-Up mode works for you.

This past holiday season, Amazon’s Echo Dot was the top-selling Amazon device, as well as the top-selling product from any manufacturer across all categories on Amazon.com, with millions sold.


Alexa has literally lost her voice as users report outages and unresponsiveness

Amazon’s Alexa smart assistant seems to be down this morning. We’ve been hearing reports over the last hour of either delayed responses or simply a total loss of connection.

While Amazon doesn’t have a status page for its consumer products, Down Detector is reporting a huge spike in Alexa-related complaints over the last hour.

For example, Alexa is giving me replies like “I’m not sure what went wrong,” “sorry, something went wrong,” or a loud chime followed by “sorry, your Echo Dot lost its connection” and the red ring of sadness. The problem appears to be related to Alexa’s voice recognition servers, as it’s happening across both native devices like the Echo and third-party devices running Alexa, like the Sonos One.

Some Alexa services still work if you access them through the Alexa app. For an ultimate instance of first-world problems, I couldn’t turn on my lights this morning with Alexa, but I was eventually able to manually toggle them on and off using the Alexa iPhone app (because there was no way I was actually walking over to them and bending down to the floor switch).

We hope Alexa feels better soon. Plus, it’s hard to miss the irony here, considering Amazon just ran a Super Bowl ad campaign a few weeks ago in which Alexa “lost her voice.”

We’ve reached out to Amazon and will update this when we get more information.


Storyline lets you build and publish Alexa skills without coding

Thirty-nine million Americans now own a smart speaker device, but the voice app ecosystem is still developing. While Alexa today has over 25,000 skills available, a number of companies haven’t yet built a skill for the platform, or offer only a very basic skill that doesn’t work that well. That’s where the startup Storyline comes in. The company offers an easy-to-use, drag-and-drop visual interface for building Amazon Alexa skills that doesn’t require any knowledge of coding.

As the company describes it, it’s building the “Weebly for voice apps,” a reference to the drag-and-drop website-building platform that’s now a popular way for non-developers to create websites without code.

Storyline was co-founded in September 2017 by Vasili Shynkarenka (CEO) and Maksim Abramchuk (CTO). Hailing from Belarus, the two previously ran a software development agency that built conversational applications, including chatbots and voice apps, for their clients.

That work led them to create Storyline, explains Vasili.

“We realized there was this big struggle with creating conversational apps,” he says. “We became aware that creative people and content creators are not really good at writing code. That was the major insight.”

The company is targeting brands, businesses and individuals who want to reach their clients (or, in the case of publishers, their readers) using a voice platform like Alexa and, later, Google Home.

The software itself is designed to be very simple, and can be used to create either a custom skill or a Flash Briefing.

Building the most basic skill takes only five to seven minutes, notes Vasili.

To get started with Storyline, you sign up for an account, then choose which type of skill you want to build, either a Flash Briefing or a custom skill. You then provide some basic information, like the skill’s name and language, and it launches into a canvas where you can begin creating the skill’s conversational workflow.

Here, you’ll see a block you click on and customize by entering your own text. This will be the first thing your voice app says when launched, like “Hello, welcome to…” followed by the app’s name, for example.

You edit this and other blocks of text in the panel on the left side of the screen, while Storyline presents a visual overview of the conversation flow on the right.

In the editing panel, you can click other buttons to add more voice interactions, like other questions the skill will ask, user replies, and Alexa’s responses to those.

Each of these items is connected to one of the text blocks on the main screen, as a flow chart of sorts. You can also configure how the skill should respond if the user says something unexpected.

When you’re finished, you can test the skill in a browser by clicking “Play.” That way, you can hear how the skill sounds and test various user responses.

Once satisfied that your skill is ready to go, you click the “Deploy” button to publish. This redirects you to Amazon, where you sign in with your Amazon account and publish. (If you don’t have an Amazon Developer account, Storyline will guide you through creating one.)
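For context on what Storyline is abstracting away, a hand-coded custom skill is typically an HTTPS endpoint or AWS Lambda function that receives Alexa’s request JSON and returns a response JSON. Below is a minimal sketch of such a handler in Python; the greeting text and the “AnswerIntent” name are placeholders for illustration, and this is not Storyline’s generated code.

```python
# Minimal hand-coded Alexa custom-skill backend (sketch, not Storyline output).
# An AWS Lambda handler that answers the launch request and one custom intent.

def build_response(text, end_session=False):
    """Wrap plain text in the Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Hello, welcome to My Skill. Ask me a question.")
    if request["type"] == "IntentRequest":
        if request["intent"]["name"] == "AnswerIntent":   # hypothetical intent name
            return build_response("Here is the answer you configured.", True)
    return build_response("Sorry, I didn't catch that.")
```

Even this bare-bones version assumes you have separately defined an interaction model (intents and sample utterances) in the Amazon developer console, which is exactly the kind of plumbing Storyline hides behind its canvas.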

This sort of visual skill-building system may be easiest to manage for simpler skills with a limited number of questions and replies, but the startup says even more advanced skills have been built using its service.

It was also used by two of the finalists in the Alexa Skills Challenge: Kids.

Since launching the first version of Storyline in October 2017, some 3,000 people have signed up for an account and have created roughly the same number of skills. Around 200 of those have gone live on Amazon’s Skill Store.

Storyline isn’t the only company focused on helping businesses build voice apps without code these days, however.

For example, Sayspring lets designers create voice-enabled apps without code as well, but instead of publishing skills directly, it’s meant to be the first step in the voice app creation process. It’s where designers can flesh out how a skill should work before handing off the coding to a development team.

Vasili says this is a big differentiator between the two companies.

“Prototyping tools are great to play with and explain ideas, but it’s super hard to retain users by being a prototyping tool, because they use the tool to prototype and then that’s it,” he explains. With Storyline, customers stay throughout the process of launching and iterating on their voice app, he says. “We can use data from when the skill is published to improve the design,” notes Vasili.

Alexa is coming to wearable devices, including headphones, smartwatches and fitness trackers

Amazon wants to bring Alexa to more devices than smart speakers, Fire TV and various other consumer electronics for the home, like alarm clocks. The company yesterday announced developer tools that would allow Alexa to be used in microwave ovens, for example, so you could just tell the oven what to do. Today, Amazon is rolling out a new set of developer tools, including one called the “Alexa Mobile Accessory Kit,” that would allow Alexa to work with Bluetooth products in the wearable space, like headphones, smartwatches, fitness trackers, other audio devices and more.

This kit is already being used by several companies, including device manufacturers and solution providers Bose, Jabra, iHome, Linkplay, Sugr, Librewireless, Beyerdynamic, Bowers and Wilkins.

Amazon said Bose in particular had been working with the company to build, design and test the solution, and to refine the new kit, which will be made more broadly available to developers this summer. (Sign-up is here.)

“Bose is excited to add a remarkable new Alexa experience for our customers,” Brian Maguire, Director of Product Management at Bose, said in a statement. “Accessing Alexa’s music, information and vast number of skills on our headphones will become easier than ever, and we’re looking forward to bringing our collaboration to life.”

The Mobile Accessory Kit would allow a brand like Bose to better compete with the likes of Apple and Google, each of which has tied its own voice assistant to its respective Bluetooth headphones: Apple’s AirPods and Google’s Pixel Buds. As voice computing and virtual assistance continue to grow in popularity, it will be increasingly important for other brands (those that are not Apple or Google) to have a way to compete. Amazon’s big bet here is that it can capture that larger market by offering access to Alexa, letting hardware device makers do what they do well, which is not necessarily voice computing.

The addition of the Mobile Accessory Kit follows last year’s launch of the AVS Device SDK, which allowed device manufacturers to incorporate Alexa in their connected products. This new kit, however, is more of an alternative, targeted at those who need a more “lightweight method to build on-the-go products,” explains Amazon.

That’s because devices using this kit won’t have Alexa built in; they’ll connect to Alexa by pairing over Bluetooth with the Amazon Alexa app.

The kit was one of two developer tool announcements out today, ahead of CES.

The other is an update to the Amazon Alexa 7-Mic Far-Field Development Kit called the Amazon Alexa Premium Voice Development Kit. Aimed at commercial device makers, the kit allows them to enable high-quality, far-field voice experiences in their products, says Amazon. The kit is based on the technology that Amazon introduced in its latest Echo family of devices, which offered upgraded experiences over the original Echo.

This kit includes support for either 7-mic broadside or 8-mic rectangular array boards, for different types of devices. It also incorporates Amazon’s proprietary software and algorithm technology for “Alexa” wake word recognition, beam forming, noise reduction and acoustic echo cancellation. More info on that kit is here.
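Beam forming on a mic array generally means delaying each microphone’s signal so sound arriving from a chosen direction lines up before summing; speech from that direction is reinforced while off-axis noise partially cancels. The sketch below is a generic delay-and-sum implementation in NumPy, not Amazon’s proprietary algorithm; the far-field assumption, array geometry and 16 kHz sample rate are illustrative assumptions.

```python
# Generic delay-and-sum beamformer sketch (not Amazon's proprietary code).
# `signals` is an (n_mics, n_samples) array; `mic_positions` is (n_mics, 2) in meters.
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second, roughly, at room temperature

def delay_and_sum(signals, mic_positions, angle_rad, sample_rate=16000):
    """Steer the array toward `angle_rad` and average the time-aligned channels."""
    direction = np.array([np.cos(angle_rad), np.sin(angle_rad)])
    # Relative arrival times of a far-field plane wave at each microphone.
    delays_sec = mic_positions @ direction / SPEED_OF_SOUND
    delays = np.round((delays_sec - delays_sec.min()) * sample_rate).astype(int)

    n_mics, n_samples = signals.shape
    out = np.zeros(n_samples)
    for channel, delay in zip(signals, delays):
        # Shift each channel so the target direction lines up, then accumulate.
        out[: n_samples - delay] += channel[delay:]
    return out / n_mics
```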


Amazon’s Alexa passes 15,000 skills, up from 10,000 in February

Amazon’s Alexa voice platform has now passed 15,000 skills, the voice-powered apps that run on devices like the Echo speaker, Echo Dot, newer Echo Show and others. The figure is up from the 10,000 skills Amazon officially announced back in February, which had then represented a 3x increase from September.

The new 15,000 figure was first reported via third-party analysis from Voicebot, and Amazon has now confirmed to TechCrunch that the number is accurate.

According to Voicebot, which only analyzed skills in the U.S., the milestone was reached for the first time on June 30, 2017. During the month of June, new skill introductions increased by 23 percent, up from the less than 10 percent growth seen in each of the prior three months.

The milestone also represents a more than doubling of the number of skills that were available at the beginning of the year, when Voicebot reported there were then 7,000 skills. That number was officially confirmed by Amazon at CES.

Voicebot also noted that Flash Briefings are still one of the more popular categories of skills, in terms of those that are live on the Alexa Skill Store today. These news- and information-focused voice apps include those from major media publications like The Wall Street Journal, NPR, The Washington Post (ahem, TechCrunch) and others.

Because they’re among the easiest skills to develop, Flash Briefings have grown to account for around 20 percent of the available skills. You can see this figure for yourself on the Alexa Skills store, which indicates there are 2,891 news skills live now.
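A Flash Briefing skill is comparatively easy because the developer mostly just exposes a feed that Alexa reads aloud; the skill itself is little more than a pointer to that feed. Below is a sketch of a tiny Flask endpoint serving such a feed. The field names follow Amazon’s documented Flash Briefing JSON format as best I recall it, so treat the exact keys, the URLs and the route name as assumptions to verify against the current docs.

```python
# Sketch of a Flash Briefing feed endpoint (field names are my recollection of
# Amazon's documented format; verify against the current Flash Briefing docs).
from datetime import datetime, timezone
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/flash-briefing")
def flash_briefing():
    # A single briefing item; Alexa reads `mainText` aloud during the briefing.
    return jsonify({
        "uid": "urn:example:briefing:2017-06-30",   # placeholder unique item ID
        "updateDate": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.0Z"),
        "titleText": "Example daily briefing",
        "mainText": "Text that Alexa reads aloud for this briefing item.",
        "redirectionUrl": "https://example.com/full-story",
    })

if __name__ == "__main__":
    app.run(port=8080)
```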

The number of available skills is an important metric for tracking Amazon’s success in the voice computing space.

Amazon is currently the leader in voice-powered devices, where it’s expected to control 70 percent of the market this year, well ahead of Google Home, Lenovo, LG and others. If anything, its success played a role in Apple releasing its own Siri-powered device, the HomePod. Apple’s entrant aims to capture a portion of the market by attracting those who care more about the speaker’s quality than the virtual assistant that ships with it. But one thing Apple is not talking about yet is whether third-party developers will be able to create HomePod-compatible apps.

In the meantime, Amazon’s Alexa is surging ahead, building out an entire voice app ecosystem so quickly that it hasn’t even been able to implement the usual precautions, like a team that closely inspects apps for terms-of-service violations, for example, or even tools that allow developers to make money from their creations. (For now, Amazon is simply handing out cash rewards to those building popular game skills, a category it sees has some early traction.)

In the long run, Amazon’s focus on growth over app ecosystem infrastructure could catch up with it. But for now, its Alexa platform is much further ahead than its nearest competitor. Though Google Home saw a spike from holiday sales, it’s the Echo Dot that’s being adopted in droves, thanks to its lower price point.

In addition, Google Home has just 378 voice apps available as of June 30, Voicebot notes. Microsoft’s Cortana has only 65.

While there’s been some criticism that many of Amazon’s skills are low quality, there’s also something to be said for being able to build out an app store’s long tail. Maybe not all the skills are as useful as getting your daily dose of NPR or being able to order an Uber by voice, but having more than 15,000 to choose from means you have a better shot at finding one that suits your needs.



The Amazon Echo now doubles as a home intercom system

Amazon will officially release the Show in a few days, but in the meantime, the company is introducing a long-awaited intercom feature for existing Echo devices. The addition uses Drop-In, a teleconferencing feature introduced on the Show that lets close friends and family members call into one another’s devices with little warning.

I actually didn’t like the feature when I tested the device this week; I found it to be pretty intrusive compared to standard calling, but this implementation makes a lot more sense. The upgrade brings Drop-In to the Echo and Echo Dot, letting users communicate between devices on a network. So you can, say, yell at the kids to come to dinner through the kitchen Echo.

The feature works across all three devices. In order to take advantage of the intercom, users have to name their individual Echoes (by room likely makes the most sense) and enable the Drop-In feature via the Alexa app. Once everything’s set up, the feature can be fired up by saying “Alexa, call the kitchen” or “Alexa, drop in on the kitchen.”

The system runs through household groups created during the setup process, rather than in-home Wi-Fi. That means the app can also be used to check in on loved ones from afar, for those who have children or elderly relatives, or, one imagines, for more nefarious reasons. According to Amazon, the intercom capability was among the most requested features for the popular home assistant.

It’s also likely to raise the ire of the people behind the smart home intercom that apparently inspired the creation of the Echo Show in the first place.
