Storyline lets you build and publish Alexa skills without coding

Thirty-nine million Americans now own a smart speaker, but the voice app ecosystem is still developing. While Alexa today has over 25,000 skills available, a number of companies haven’t yet built a skill for the platform, or offer only a very basic skill that doesn’t run that well. That’s where the startup Storyline comes in. The company offers an easy-to-use, drag-and-drop visual interface for building Amazon Alexa skills that requires no coding knowledge.

As the company describes it, it’s building the “Weebly for voice apps” – a reference to the drag-and-drop website building platform that’s now a popular way for non-developers to create websites without code.

Storyline was co-founded in September 2017 by Vasili Shynkarenka (CEO) and Maksim Abramchuk (CTO). Hailing from Belarus, the two previously ran a software development agency that built chat-based applications, including chatbots and voice apps, for their clients.

That work led them to the idea for Storyline, explains Vasili.

“We realized there was this big struggle with creating conversational apps,” he says. “We became aware that creative people and content creators are not really good at writing code. That was the major insight.”

The company is targeting brands, businesses and individuals who want to reach their customers – or, in the case of publishers, their readers – using a voice platform like Alexa and, later, Google Home.

The software itself is designed to be very simple, and can be used to create either a custom skill or a Flash Briefing.

Building the most basic skill takes only five to seven minutes, notes Vasili.

To get started with Storyline, you sign up for an account, then choose which type of skill you want to build – either a Flash Briefing or a custom skill. You then provide some basic information, like the skill’s name and language, and it launches into a canvas where you can begin creating the skill’s conversational workflow.

Here, you’ll see a block you click on and customize by entering your own text. This will be the first thing your voice app says when launched – “Hello, welcome to…” followed by the app’s name, for example.

You edit this and other blocks of text in the panel on the left side of the screen, while Storyline presents a visual overview of the conversation flow on the right.

In the editing panel, you can click other buttons to add more voice interactions – like additional questions the skill will ask, user replies, and Alexa’s responses to those.

Each of these items is connected to one of the text blocks on the main screen, as a flow chart of sorts. You can also configure how the skill should respond if the user says something unexpected.

When you’re finished, you can test the skill in a browser by clicking “Play.” That way, you can hear how the skill sounds and test various user responses.

Once satisfied that your skill is ready to go, you click the “Deploy” button to publish. This redirects you to Amazon, where you sign in with your Amazon account and publish. (If you don’t have an Amazon Developer account, Storyline will guide you through creating one.)
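Under the hood, every Alexa skill is defined by an interaction model: a JSON document listing the invocation name, intents and sample utterances. As a rough sketch of what a no-code builder like this might generate from the visual flow chart (the skill name and intents below are invented, not Storyline’s actual output):

```python
import json

# Illustrative only: the kind of Alexa interaction model a visual builder
# could produce on "Deploy". Skill name and intents here are made up.
interaction_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "daily trivia",  # what users say to launch the skill
            "intents": [
                {"name": "AMAZON.StopIntent", "samples": []},
                {
                    "name": "AnswerIntent",  # one branch of the conversation flow
                    "slots": [{"name": "answer", "type": "AMAZON.SearchQuery"}],
                    "samples": ["my answer is {answer}", "{answer}"],
                },
            ],
        }
    }
}

# Serialize to the JSON document that would be uploaded to Amazon.
model_json = json.dumps(interaction_model, indent=2)
print(len(interaction_model["interactionModel"]["languageModel"]["intents"]))
```

Each block and arrow in the visual canvas corresponds to an intent or a sample utterance in a document like this; the builder’s job is to keep the two views in sync.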

This sort of visual skill-building system may be easier to manage for simpler skills that have a limited number of questions and replies, but the startup says that even more advanced skills have been built using its service.

It was also used by two of the finalists in the Alexa Skills Challenge: Kids.

Since launching the first version of Storyline in October 2017, some 3,000 people have signed up for an account and have created roughly the same number of skills. Around 200 of those have gone live in Amazon’s Skill Store.

Storyline isn’t the only company focused on helping businesses build voice apps without code these days, however.

For example, Sayspring lets designers create voice-enabled apps without code as well, but instead of publishing skills directly, it’s meant to be the first step in the voice app creation process. It’s where designers can flesh out how a skill should work before handing off the coding to a development team.

Vasili says this is a big differentiator between the two companies.

“Prototyping tools are great to play with and explain ideas, but it’s super hard to retain users by being a prototyping tool – because they use the tool to prototype and then that’s it,” he explains. With Storyline, customers stay throughout the process of launching and iterating on their voice app, he says. “We can use data from when the skill is published to improve the design,” notes Vasili.

[Screenshots: dashboard page, canvas pages, skill sharing page, skill preview page, skills page and landing page]


This company will tell you which vitamins and supplements to take based on your DNA

Nutrigene believes your genes may hold the secret to what you might be missing in your diet. The company will send you tailor-made liquid vitamin supplements based on a lifestyle quiz and your DNA.

You fill out an assessment on the startup’s website, choose a recommended package – such as essentials, improved performance or optimized gut health – and Nutrigene will send you liquid supplements built just for you. It is also going to start letting customers upload their 23andMe data to get an assessment of their nutritional needs based on DNA.

Founder Min FitzGerald launched the startup out of Singularity and later accepted a Google fellowship for the idea. Nutrigene is now going through the current YC class. Her co-founder and CTO, Van Duesterberg, comes from a biotech and epigenetics background and holds a PhD from Stanford.

The idea sounds a little far-fetched at first: simply take a quiz, import your DNA and you magically have all your nutritional needs taken care of. However, Dawn Barry, former VP at Illumina and now chairwoman of Luna DNA, a biotech company powered by the blockchain, says it could have some scientific underpinnings. But, she cautioned, nutrigenetics is still an early science.

Amir Trabelsi, founder of genetic analysis platform Genoox, concurs. And, he pointed out, these types of companies don’t need to provide any proof.

“That doesn’t mean it’s completely wrong,” Trabelsi told TechCrunch. “But we don’t know enough to say this person should use Vitamin A, for example … There needs to be more trials and observation.”

Still, the vitamin industry is big business, pulling in more than $36 billion in the U.S. alone last year. With or without the genetic component, Nutrigene promises to deliver high-quality ingredients, optimized in liquid form.

FitzGerald says the liquid format helps the supplements work 10 times better in your body than powder-based pills and, she points out, some people can’t swallow pills.

Hesitant, I agreed to try it out for myself. The process was fairly easy and the lifestyle quiz only took about 10 minutes. Then, I sent in the raw data from my 23andMe account.

Though genetics are a factor in Nutrigene’s ultimate formulation, FitzGerald told me the DNA part is pretty new and that my biometric details and goals played a bigger role in how the company tailored my dosages.

However, I did apparently require more B12, according to FitzGerald. “Hence we gave you a good dosage of B12 in your elixir,” she told me.

Does the stuff work? Tough to say. I didn’t feel any different on Nutrigene’s liquid vitamins than I do normally. Though, full disclosure, I’ve been taking what I believe to be some pretty good prenatal vitamins from New Chapter and a DHA supplement from Nordic Naturals for almost a year now while I’ve been building a baby in my womb. My doctor tested my nutritional levels at the beginning of my pregnancy through a blood sample, seemed pleased with my choice to take prenatals and didn’t tell me to do anything different.

Would Nutrigene’s formula be ideal for someone else? Perhaps, especially if that person holds a high standard for ingredients in their supplements or has a hard time swallowing pills. However, it seems the jury is still out on the science behind vitamins tailored to your genetics and, as Trabelsi mentioned earlier, we likely need a lot more study on the matter.

For those interested in trying out Nutrigene, you can do so by ordering on the website. Package pricing varies depending on nutritional needs, but starts at around $85 per month.


Escher Reality is building the backend for cross-platform mobile AR

The potential of mobile augmented reality is clear. Last summer Pokemon Go gave a glimpse of just how big this craze could be, as thousands of excited people converged on parks, bus stops and other locations around the world to chase virtual monsters through the lens of their smartphones.

Apple was also watching. And this summer the company signaled its own conviction in the technology by announcing ARKit: a developer toolkit to help iOS developers build augmented reality apps. CEO Tim Cook said iOS will become the world’s biggest augmented reality platform once iOS 11 hits consumers’ devices in the fall, underlining Cupertino’s expectation that big things are coming down the mobile AR pipe.

Y Combinator-backed MIT spin-out Escher Reality’s belief in the social power of mobile AR predates both of these trigger phases. It’s building a cross-platform toolkit and custom backend for mobile AR developers, aiming to lower the barrier to entry for building compelling experiences, as the co-founders put it.

“Keep in mind this was before Pokemon Go,” says CEO Ross Finman, discussing how he and CTO Diana Hu founded the company approximately a year and a half ago, initially as a bit of a side project, before going all in full time last November. “Everyone thought we were crazy at that time, and now this summer it’s the summer for mobile augmented reality… ARKit has been the best thing ever for us.”

But if Apple has ARKit, and you can bet Google will be coming out with an Android equivalent in the not-too-distant future, where exactly does Escher Reality come in?

“Think of us more as the backend for augmented reality,” says Finman. “What we offer is the cross-platform, multiuser and persistent experiences – so those are three things that Apple and ARKit don’t do. So if you want to do any type of shared AR experience you need to connect the two different devices together – so then that’s what we offer… There’s a lot of computer vision problems associated with that.”

“Think about the problems of what ARKit doesn’t offer you,” adds Hu. “If you’ve seen a lot of the current demos out there, they’re okay-ish – you can see 3D models there – but when you start thinking longer term, what does it take to create compelling AR experiences? And part of that is a lot of the tooling and a lot of the SDK are not there to provide that functionality. Because as game developers or app developers they don’t want to think about all that low-level stuff, and there’s a lot of genuinely complex tech going on that we have built.”

“If you think about the future, as AR becomes a bigger movement, as the next computing platform, it will need a backend to support a lot of the networking; it will need a lot of the tools that we’re building in order to build compelling AR experiences.”

“We will be offering Android support for now, but then we imagine Google will probably come out with something like that in the future,” adds Finman, couching that part of the business as the free bit in freemium – and one they’re therefore more than happy to hand off to Google when the time comes.

The team has put together a demo to illustrate the kinds of mobile AR gaming experiences they’re aiming to support, in which two people play the same mobile AR game, each using their own device as a paddle.

“What you’re looking at here is very low latency, custom computer vision network protocols enabling two players to share augmented reality at the same time,” as Hu explains it.

Sketching another scenario the tech could enable, Finman says it could support a version of Pokemon Go in which friends could battle each other at the same time and see their Pokemon fight in real time. Or allow players to locate a Gym at a very specific location that makes sense in the real world.

In essence, the team’s bet is that mobile AR – especially mobile AR gaming – gets a whole lot more interesting with support for richly interactive, multiplayer apps that work cross-platform and cross-device. So they’re building tools and a backend to support developers wanting to build apps that can connect Android users and iPhone owners in the same augmented play space.

After all, Apple especially isnt incentivized to help support AR collaboration on Android. Which leaves room for a neutral third party to help bridge platform and hardware gaps and smooth AR play for every mobile gamer.

The core tech is basically knitting different SLAM maps and network connections together in an efficient way, says Finman – i.e. without the latency that would make a game unplayable – so that it runs in real time and offers a consistent experience. So, tuning everything up for mobile processors.

“We go down to not just the network layer, but even to the assembly level so that we can run some of the execution instructions very efficiently, and some of the image processing on the GPU for phones,” says Hu. “So on a high level it is a SLAM system, but the exact method and how we engineered it is novel for efficient mobile devices.”
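The low-latency sharing problem the team describes ultimately comes down to exchanging small, compact state updates between devices that have agreed on a common coordinate frame. As a rough sketch (the message layout and every field name here are assumptions for illustration, not Escher Reality’s actual wire format), syncing one anchor’s pose between two phones might look like:

```python
import struct

# Illustrative only: a compact 32-byte message for syncing an AR anchor's
# pose between devices. Layout and names are invented for this sketch.
POSE_FORMAT = "<I3f4f"  # anchor id, position (x, y, z), orientation quaternion

def pack_pose(anchor_id, position, quaternion):
    """Serialize one anchor pose update for low-latency transmission."""
    return struct.pack(POSE_FORMAT, anchor_id, *position, *quaternion)

def unpack_pose(payload):
    """Decode a pose update received from the other device."""
    fields = struct.unpack(POSE_FORMAT, payload)
    return {
        "anchor_id": fields[0],
        "position": fields[1:4],
        "quaternion": fields[4:8],
    }

msg = pack_pose(7, (0.5, 1.2, -0.3), (0.0, 0.0, 0.0, 1.0))
decoded = unpack_pose(msg)
print(len(msg), decoded["anchor_id"])  # → 32 7
```

Keeping each update to a fixed handful of bytes is one common way to chase the kind of real-time responsiveness a shared AR game needs; the hard part the startup claims to solve – aligning two devices’ SLAM maps so the coordinates mean the same thing on both phones – sits underneath a message like this.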

“Consider ARKit as step one; we’re steps two and three,” adds Finman. “You can do multi-user experiences, but then you can also do persistent experiences – once you turn off the app, once you start it up again, all the objects that you’ve left will be in the same location.”


“People can collaborate in AR experiences at the same time,” adds Hu. “That’s one main thing that we can really offer that Google or Apple wouldn’t provide.”

Hardware-wise, their system supports premium smartphones from the last three years. Although, looking ahead, they say they see no reason why they wouldn’t expand to supporting additional types of hardware, such as headsets, when/if those start gaining traction too.

“In mobile there’s a billion devices out there that can run augmented reality right now,” notes Finman. “Apple has one part of the market, Android has a larger part. That’s where you’re going to get the most adoption by developers in the short term.”

Escher Reality was founded approximately a year and a half ago, spun out of MIT and initially bootstrapped in Finman’s dorm room – first as a bit of a side project, before they went all in full time in November. The co-founders go back a decade or so as friends, and say they had often kicked around startup ideas and been interested in augmented reality.

Finman describes the business they’ve ended up co-founding as “actually just a nice combination of both of our backgrounds. For me, I was working on my PhD at MIT in 3D perception – it’s the same type of technology underneath,” he tells TechCrunch.

“I’ve been in industry running a lot of different teams in computer vision and data science,” adds Hu. “So a lot of experience bringing research into production and building large-scale data systems with low latency.”

They now have five people working full time on the startup, and two part time. At this point the SDK is being used by a limited number of developers, with a wait-list for new sign-ups. They’re aiming to open up to all comers in the fall.

“We’re targeting game studios to begin with,” says Finman. “The technology can be used across many different industries, but we’re going after gaming first because they are usually at the cutting edge of new technology and adoption, and then there’s a whole bunch of really smart developers that are going after interesting new projects.”

“One of the reasons why augmented reality is considered so much bigger is that shared experiences in the real world really open up a whole lot of new capabilities and interactions that go beyond the current thinking around augmented reality. But truly it opens the door to so many different possibilities,” he adds.

Discussing some of the compelling experiences the team sees coming down the mobile AR pipe, he points to three areas he reckons the technology can especially support – namely: education, visualization and entertainment.

“When you have to look at a piece of paper and imagine what’s in the real world – for building anything, getting directions, working with remote professionals – that’s all going to need shared augmented reality experiences,” he suggests.

Although, in the nearer term, consumer entertainment (and specifically gaming) is the team’s first bet for traction.

“In the entertainment space, on the consumer side, you’re going to see short films – so beyond just Snapchat, it’s kind of real-time special effects, where you can shoot video and set up your own kind of movie scene,” he suggests.

Designing games in AR also presents developers with new conceptual and design challenges, of course, which in turn bring additional development challenges – and the toolkit is being designed to help with those.

“If you think about augmented reality there’s two new mechanics that you can work with; one is the position of the phone now matters,” notes Finman. “The second thing is the real world becomes content. So, like, the map data – the real world – can be integrated into the game. So these really are two mechanics that didn’t exist in any other medium before.”

“From a developer standpoint, one added constraint with augmented reality is, because it depends on the real world, it’s difficult to debug – so we’ve developed tools so that you can play back logs. So then you can actually run through videos that were captured in the real world and interact with them in a simulated environment.”
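The record-and-replay debugging Finman describes can be sketched as a simple harness that logs timestamped events in the field and feeds them back, deterministically, into the same handler the live app uses (all names here are hypothetical, not Escher Reality’s actual tooling):

```python
# Hypothetical sketch of log playback for AR debugging: record timestamped
# sensor events, then replay them in capture order into the same handler.
class ReplayLog:
    def __init__(self):
        self.events = []  # list of (timestamp, event) pairs

    def record(self, timestamp, event):
        self.events.append((timestamp, event))

    def replay(self, handler):
        # Deliver events in capture-time order, exactly as logged.
        for timestamp, event in sorted(self.events, key=lambda pair: pair[0]):
            handler(timestamp, event)

log = ReplayLog()
log.record(0.033, {"frame": 1, "pose": (0.1, 0.0, 0.0)})
log.record(0.000, {"frame": 0, "pose": (0.0, 0.0, 0.0)})

seen = []
log.replay(lambda t, e: seen.append(e["frame"]))
print(seen)  # → [0, 1]
```

The value of this pattern is determinism: a bug that depends on a particular walk through a particular room can be reproduced at a desk as many times as it takes to fix.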

Discussing some of the ideas and clever mechanics they’re seeing early developers play with, he points to color as one interesting area. “Thinking about the real world as content is really fascinating,” he says. “Think about color as a resource. So then you can mine color from the real world. So if you want more gold, put up more Post-It notes.”
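The “color as a resource” mechanic is easy to picture in code: sample a patch of camera pixels and treat its average color as the thing the player collects. A toy illustration with faked pixel data (a real app would read these values from the camera frame):

```python
# Toy illustration of "the real world as content": the average color of a
# captured image region becomes a minable in-game resource.
def mine_color(pixels):
    """Return the average (r, g, b) of a region – the 'resource' collected."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) // n
    g = sum(p[1] for p in pixels) // n
    b = sum(p[2] for p in pixels) // n
    return (r, g, b)

# A patch of yellow Post-It notes yields a gold-ish resource.
post_it_patch = [(250, 220, 40), (248, 216, 44), (252, 224, 36)]
print(mine_color(post_it_patch))  # → (250, 220, 40)
```

The game-design point is that the player can manipulate the input – putting up more Post-It notes – which is exactly the kind of real-world feedback loop Finman is describing.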

The business model for Escher Reality’s SDK is usage-based, meaning they will charge developers on a sliding scale that reflects the success of their applications. It’s also offered as a Unity plug-in, so that developers can easily integrate it into current dev environments.

“It’s a very similar model to Unity, which encourages a very healthy indie developer ecosystem where they’re not charging any money until you actually start making money,” says Hu. “So developers can start working on it, and during development they don’t get charged anything; even when they launch, if they don’t have that many users they don’t get charged. It’s only when they start making money that we also start making money, so in that sense a lot of the incentives align pretty well.”
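The sliding scale Hu describes maps onto a tiered, usage-based billing function: free during development and for small launches, charged only once an app has real traffic. A minimal sketch, with tier boundaries and per-user rates invented for illustration (the article doesn’t give Escher Reality’s actual pricing):

```python
# Sketch of usage-based, tiered billing. Every number here is made up.
TIERS = [
    (10_000, 0.0),          # first 10k monthly active users: free
    (100_000, 0.002),       # next 90k users: $0.002 each
    (float("inf"), 0.001),  # beyond that: volume-discounted rate
]

def monthly_charge(active_users):
    """Sum the charge across tiers, like progressive tax brackets."""
    charge, prev_cap = 0.0, 0
    for cap, rate in TIERS:
        band = min(active_users, cap) - prev_cap
        if band <= 0:
            break
        charge += band * rate
        prev_cap = cap
    return round(charge, 2)

print(monthly_charge(5_000), monthly_charge(50_000))  # → 0.0 80.0
```

A structure like this keeps the incentive alignment Hu mentions: a developer with 5,000 users pays nothing, while one with 50,000 users – presumably earning revenue – pays in proportion to the traffic the backend carries.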

The startup, which is graduating in YC’s summer 2017 batch and now heading toward demo day, will be looking to raise funding so that it can amp up its bandwidth to support more developers. Once they’ve secured additional outside investment, the plan is to sign on and work with as many game studios as possible, says Finman, as well as be heads down on building the product.

“The AR space is just exploding at the moment, so we need to make sure we can move fast enough to keep up with it,” he adds.
