Imagine attending a business meeting with an Amazon Echo (or any voice-driven device) sitting on the conference table. The conversation turns to the month’s sales numbers in the Southeast region. Instead of opening a laptop, launching a program like Excel and finding the numbers, you simply ask the device and get the answer instantly.
That scenario is increasingly becoming a reality, although it is still far from commonplace in business just yet.
With the increasing popularity of devices like the Amazon Echo, people are beginning to get used to the idea of interacting with computers using their voices. Anytime a phenomenon like this enters the consumer realm, it is only a matter of time before we see it in business.
Chuck Ganapathi, CEO at Tact, an AI-driven sales tool that uses voice, text and touch, says that with our devices changing, voice makes a lot of sense. “There is no mouse on your phone. You don’t want to use a keyboard on your phone. With a smart watch, there is no keyboard. With Alexa, there is no screen. You have to think of more natural ways to interact with the device.”
As Werner Vogels, Amazon’s chief technology officer, pointed out during his AWS re:Invent keynote at the end of last month, up until now we have been limited by the technology in how we interact with computers. We type keywords into Google using a keyboard because that was the only way the technology we had allowed us to enter information.
“Interfaces to digital systems of the future will no longer be machine driven. They will be human centric. We can construct human natural interfaces to digital systems, and with that a whole environment will become active,” he said.
Amazon will of course be happy to help in this regard, having introduced Alexa for Business as a cloud service at re:Invent, but other cloud companies are also exposing voice services for developers, making it ever easier to build voice into an interface.
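To make the sales-numbers scenario above concrete, here is a minimal sketch of the kind of handler a developer might write behind such a voice service. The region names, figures and function name are hypothetical; a real Alexa skill would typically run on AWS Lambda and use the Alexa Skills Kit SDK, but the JSON response shape below follows the documented Alexa response format.

```python
# Hypothetical in-memory sales data; in practice this would come from a
# BI backend such as the ones discussed in this article.
SALES_BY_REGION = {
    "southeast": 1_200_000,
    "northwest": 870_000,
}

def handle_sales_intent(region: str) -> dict:
    """Build an Alexa Skills Kit-style JSON response for a sales query.

    The device speaks whatever lands in response.outputSpeech.text.
    """
    total = SALES_BY_REGION.get(region.lower())
    if total is None:
        speech = f"Sorry, I don't have sales figures for {region}."
    else:
        speech = f"Sales in the {region} region were {total:,} dollars."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Asking "what were sales in the Southeast?" would route here with the
# region slot filled in by the voice platform's NLU.
print(handle_sales_intent("Southeast")["response"]["outputSpeech"]["text"])
```

The interesting design point is that the developer never touches speech recognition: the cloud service turns audio into an intent plus slots, and the skill only maps those slots to data and a sentence to speak back.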
While Amazon took direct aim at business for the first time with this move, some companies had been experimenting with Echo integration much earlier. Sisense, a BI and analytics tool company, introduced Echo integration as early as July 2016.
But not everyone wants to cede voice to the big cloud vendors, no matter how attractive they might make it for developers. We saw this when Cisco introduced the Cisco Voice Assistant for Spark in November, using voice technology it acquired with the MindMeld purchase the previous May to provide voice commands for common meeting tasks.
Roxy, a startup that got $2.2 million in seed funding in November, decided to build its own voice-driven software and hardware, taking aim, for starters, at the hospitality industry. They have broader aspirations beyond that, but one early lesson they have learned is that not all companies want to give their data to Amazon, Google, Apple or Microsoft. They want to maintain control of their own customer interactions, and a solution like Roxy gives them that.
In yet another example, Synqq introduced a notes app at the beginning of the year that uses voice and natural language processing to add notes and calendar entries without having to type.
As we move into 2018, we should start to see even more examples of this type of integration, both with the help of the big cloud companies and from companies trying to build something independent of those vendors. The keyboard won’t be relegated to the dustbin just yet, but in scenarios where it makes sense, voice could begin to replace the need to type and offer a more natural way of interacting with computers and software.