Duplex shows Google failing at ethical and creative AI design

Google CEO Sundar Pichai milked the woo from a clappy, home-turf developer crowd at its I/O conference in Mountain View this week with a demo of an in-development voice assistant feature that lets the AI make phone calls on behalf of its human owner.

The so-called ‘Duplex’ feature of the Google Assistant was shown calling a hair salon to book a woman’s haircut, and ringing a restaurant to try to book a table, only to be told it did not accept bookings for parties of fewer than five people.

At which point the AI changed tack and asked about wait times, earning its owner and controller, Google, the reassuring intel that there wouldn’t be a long wait at the chosen time. Job done.

The voice system deployed human-sounding vocal cues, such as ‘ums’ and ‘ahs’, to make the “conversational experience more comfortable”, as Google couches it in a blog post about its aims for the tech.

The voices Google used for the AI in the demos were not synthesized robotic tones but distinctly human-sounding, in both the female and male flavors it showcased.

Indeed, the AI pantomime was apparently realistic enough to convince some of the genuine humans on the other end of the line that they were speaking to people.

At one point the bot’s ‘mm-hmm’ response even drew appreciative giggles from a techie audience that clearly felt in on the ‘joke’.

But while the home crowd cheered enthusiastically at how capable Google had apparently made its prototype robot caller, with Pichai going on to sketch a grand vision of the AI saving people’s and businesses’ time, the episode is worryingly suggestive of a company that views ethics as an after-the-fact consideration.

One it does not allow to trouble the trajectory of its engineering ingenuity.

A consideration which only seems to get a look-in years into the AI development process, at the cusp of a real-world rollout, which Pichai said would be coming shortly.

Deception by design

“Google’s experiments do appear to have been designed to deceive,” agreed Dr Thomas King, a researcher at the Oxford Internet Institute’s Digital Ethics Lab, discussing the Duplex demo. “Because their main hypothesis was ‘can you distinguish this from a real person?’. In this case it’s unclear why their hypothesis was about deception and not the user experience… You don’t necessarily need to deceive someone to give them a better user experience by sounding natural. And if they had instead tested the hypothesis ‘is this technology better than previous versions or just as good as a human caller’ they would not have had to deceive people in the experiment.

“As for whether the technology itself is deceptive, I can’t really say what their intent is, but even if they don’t intend it to deceive you could say they’ve been negligent in not making sure it doesn’t deceive… So I can’t say it’s definitely deceptive, but there should be some kind of mechanism there to let people know what it is they are speaking to.”

“I work at a university and if you’re going to do something which involves deception you have to really demonstrate there’s a scientific value in doing this,” he added, agreeing that, as a general principle, humans should always be able to know that an AI they’re interacting with is not a person.

Because who, or what, you’re interacting with “shapes how we interact”, as he put it. “And if you start blurring the lines… then this can sow mistrust into all kinds of interactions, where we would become more suspicious as well as needlessly replacing people with meaningless agents.”

No such ethical conversations troubled the I/O stage, however.

Yet Pichai said Google had been working on the Duplex technology for “many years”, and went so far as to claim the AI can “understand the nuances of conversation”, albeit still plainly in very narrow scenarios, such as booking an appointment or reserving a table or asking a business for its opening hours on a specific date.

“It brings together all our investments over the years in natural language understanding, deep learning, text to speech,” he said.

What was yawningly absent from that list, and seemingly also lacking from the design of the tricksy Duplex experiment, was any sense that Google has a deep and nuanced appreciation of the ethical concerns at play around AI technologies that are powerful and capable enough to pass themselves off as human, thereby fooling lots of real people in the process.

The Duplex demos were pre-recorded, rather than live telephone calls, but Pichai described the calls as “real”, indicating Google representatives had not in fact called the businesses ahead of time to warn them its robots might be calling in.

“We have many of these instances where the calls don’t quite go as expected but our assistant understands the context, the nuance… and handled the interaction gracefully,” he added, after airing the restaurant unable-to-book example.

So Google appears to have trained Duplex to be robustly deceptive, i.e. able to reroute around derailed conversational expectations and still pass itself off as human, a capability Pichai lauded as ‘graceful’.

And even if the AI’s performance was patchier in the wild than Google’s demo suggested, that’s clearly the CEO’s goal for the tech.

While trickster AIs might bring to mind the iconic Turing Test, in which chatbot developers compete to develop conversational software capable of convincing human judges it’s not artificial, it should not.

Because the application of the Duplex technology does not sit within the context of a high-profile and clearly flagged competition. Nor was there a set of rules that everyone was shown and agreed to beforehand (at least so far as we are aware; if there were any rules Google wasn’t publicizing them). Rather it seems to have unleashed the AI onto unsuspecting business staff who were just going about their daily chores. Can you see the ethical disconnect?

“The Turing Test has come to be a bellwether of testing whether your AI software is good or not, based on whether you can tell it apart from a human being,” is King’s suggestion for why Google might have chosen a similar trick as an experimental showcase for Duplex.

“It’s very easy to say look how great our software is, people cannot tell it apart from a real human being, and perhaps that’s a much stronger selling point than if you say 90% of users preferred this software to the previous software,” he posits. “Facebook does A/B testing but that’s probably less exciting; it’s not going to wow anyone to say consumers preferred this slightly deeper shade of blue to a lighter shade of blue.”

Had Duplex been deployed within Turing Test conditions, King also makes the point that it’s rather less likely it would have taken in so many people, because, well, those slightly jarringly timed ums and ahs would soon have been spotted, uncanny valley style.

Ergo, Google’s PR-flavored ‘AI test’ for Duplex is also rigged in its favor, to further supercharge a one-way promotional marketing message around artificial intelligence. So, in other words, say hello to yet another layer of fakery.

How could Google introduce Duplex in a way that would be ethical? King reckons it would need to state up front that it’s a robot and/or use an appropriately synthetic voice so it’s instantly clear to anyone picking up the phone that the caller is not human.

“If you were to use a robotic voice there would also be less of a risk that all of the voices you’re synthesizing only represent a small minority of the population speaking in ‘BBC English’, and so, perhaps in a sense, using a robotic voice would even be less biased as well,” he adds.

And of course, not being up front that Duplex is artificial embeds all sorts of other knock-on risks, as King explained.

“If it’s not obvious that it’s a robot voice there’s a risk that people come to expect that most of these phone calls are not genuine. Now experiments have shown that many people do interact with conversational AI software just as they would with another person, but at the same time there is also evidence showing that some people do the exact opposite, and they become a lot ruder. Sometimes even abusive towards conversational software. So if you’re constantly interacting with these bots you’re not going to be as polite, perhaps, as you usually would, and that could have effects for when you get a genuine caller that you don’t know is real or not. Or even if you know they’re real, perhaps the way you interact with people has changed a bit.”

Safe to say, as autonomous systems get more powerful and capable of performing tasks that we would normally expect a human to be doing, the ethical considerations around those systems scale as exponentially large as the potential applications. We’re really just getting started.

But if the world’s biggest and most powerful AI developers believe it’s totally fine to put ethics on the backburner then risks are going to spiral up and out and things could go very badly indeed.

We’ve seen, for example, how microtargeted advertising platforms have been hijacked at scale by would-be election fiddlers. But the overarching risk where AI and automation technologies are concerned is that humans become second-class citizens versus the tools that are supposed to be here to help us.

Pichai said the first, and still, as he put it, experimental, use of Duplex will be to supplement Google’s search services by filling in information about businesses’ opening hours during periods when those hours might inconveniently vary, such as public holidays.

Though for a company on a general mission to ‘organize the world’s information and make it universally accessible and useful’ what’s to stop Google from, down the line, deploying vast phalanxes of phone bots to ring and ask humans (and their associated businesses and organizations) for all sorts of expertise which the company can then liberally extract and inject into its multitude of connected services, monetizing the freebie human-augmented intel via our extra-engaged attention and the ads it serves alongside?

During the course of writing this article we reached out to Google’s press line several times to ask to discuss the ethics of Duplex with a relevant company representative. But ironically — or perhaps fittingly enough — our hand-typed emails received only automated responses.

Pichai did emphasize that the technology is still in development, and said Google wants to “work hard to get this right, get the user experience and the expectations right for both businesses and users”.

But that’s still ethics as a tacked-on afterthought, not where it should be: locked in place as the keystone of AI system design.

And this at a time when platform-fueled AI problems, such as algorithmically amplified fake news, have snowballed into huge and ugly global scandals with very far-reaching societal implications indeed, be it election interference or ethnic violence.

You really have to wonder what it would take to shake the ‘first break it, later fix it’ ethos of some of the tech industry’s major players…

Ethical guidance relating to what Google is doing here with the Duplex AI is actually pretty clear if you bother to read it, to the point where even legislators are agreed on foundational basics, such as that AI needs to operate on “principles of intelligibility and fairness”, to borrow phrasing from just one of several political reports that have been published on the topic in recent years.

In short, deception is not cool. Not in humans. And absolutely not in the AIs that are supposed to be helping us.

Transparency as AI standard

The IEEE technical professional association put out a first draft of a framework to guide ethically designed AI systems at the back end of 2016, which included guiding principles such as the need to ensure AI respects human rights, operates transparently and that automated decisions are accountable.

In the same year the UK’s BSI standards body developed a specific standard, BS 8611 Ethical design and application of robots, which explicitly names identity deception (intentional or unintentional) as a societal risk, and warns that such an approach will eventually erode trust in the technology.

“Avoid deception due to the behaviour and/or appearance of the robot and ensure transparency of robotic nature,” the BSI’s standard advises.

It also warns against anthropomorphization due to the associated risk of misinterpretation, so Duplex’s ums and ahs don’t just suck because they’re fake; they are also misleading and so deceptive, and therefore carry the knock-on risk of undermining people’s trust not only in your service but, more widely still, in other people generally.

“Avoid unnecessary anthropomorphization,” is the standard’s general guidance, with the further steer that the technique be reserved “only for well-defined, limited and socially-accepted purposes”. (Tricking workers into remotely conversing with robots probably wasn’t what they were thinking of.)

The standard also urges “clarification of intent to simulate human or not, or intended or expected behaviour”. So, yet again, don’t try to pass your bot off as human; you need to make it really clear it’s a robot.

For Duplex, the transparency that Pichai said Google now intends to think about, at this late stage in the AI development process, would have been trivially easy to achieve: It could just have programmed the assistant to say up front: ‘Hi, I’m a robot calling on behalf of Google. Are you happy to talk to me?’

Instead, Google chose to prioritize a demo ‘wow’ factor, of showing Duplex pulling the wool over busy and trusting humans’ eyes, and by doing so proved itself tone-deaf on the topic of ethical AI design.

Not a good look for Google. Nor indeed a good outlook for the rest of us who are subject to the algorithmic whims of tech giants as they flick the control switches on their society-sized platforms.

“As the development of AI systems grows and more research is carried out, it is important that ethical hazards associated with their use are highlighted and considered as part of the design,” Dan Palmer, head of manufacturing at BSI, told us. “BS 8611 was developed… alongside scientists, academics, ethicists, philosophers and users. It explains that any autonomous system or robot should be accountable, truthful and unbiased.

“The standard raises a number of potential ethical hazards that are relevant to Google Duplex; one of these is the risk of AI machines becoming sexist or racist due to a biased data feed. This surfaced prominently when Twitter users influenced Microsoft’s AI chatbot, Tay, to spew out offensive messages.

“Another contentious topic is whether forming an emotional bond with a robot is desirable, especially if the voice assistant interacts with the elderly or children. Other guidance on new hazards that should be considered includes: robot deception, robot addiction and the potential for a learning system to exceed its remit.

“Ultimately, it must always be transparent who is responsible for the behavior of any voice assistant or robot, even if it behaves autonomously.”

Yet despite all the thoughtful ethical guidance and research that’s already been produced, and is out there for the reading, here we are again being shown the same tired tech industry playbook: engineering capabilities praised in a shiny bubble, stripped of human context and societal consideration, and dangled in front of an uncritical audience to see how loudly they’ll cheer.

Leaving important questions, over the ethics of Google’s AI experiments and also, more broadly, over the mainstream vision of AI assistance it’s so keenly trying to sell us, to hang and hang.

Questions like how much genuine utility there might be for the kinds of AI applications it’s telling us we’ll all want to use, even as it prepares to push these apps on us, because it can — as a consequence of its great platform power and reach.

A core ‘uncanny valley-ish’ contradiction may explain Google’s choice of deception for its Duplex demo: Humans don’t necessarily like speaking to machines. Indeed, oftentimes they prefer to speak to other humans. It’s just more meaningful to have your existence registered by a fellow pulse-carrier. So if an AI reveals itself to be a robot, the human who picked up the phone might well just put it straight back down again.

“Going back to the deception, it’s fine if it’s replacing meaningless interactions but not if it’s intending to replace meaningful interactions,” King told us. “So if it’s clear that it’s synthetic then you can’t necessarily use it in a context where people actually want a human to do that job. I think that’s the right approach to take.

“It matters not just that your hairdresser appears to be listening to you but that they are actually listening to you and that they are mirroring some of your emotions. And to replace that kind of work with something synthetic, I don’t think it makes much sense.

“But at the same time if you disclose that it’s synthetic, it’s not likely to replace that kind of work.”

So actually Google’s Duplex sleight of hand may be trying to conceal the fact that AIs won’t be able to replace as many human tasks as technologists like to think they will. Not unless lots of currently meaningful interactions are rendered meaningless. Which would be a massive human cost that societies would have to, at the very least, debate long and hard.

Trying to prevent such a debate from taking place by pretending there’s nothing ethical to see here is, hopefully, not Google’s designed intention.

King also makes the point that the Duplex system is (at least for now) computationally costly. “Which means that Google cannot, and should not, just release this as software that anyone can run on their home computers.

“Which means they can also control how it is used, and in what context, and they can also ensure it will only be used with certain safeguards built in. So I think the experiments are maybe not the best of signs but the real test is likely to be how they release it, and will they build the safeguards that people demand into the software,” he adds.

As well as a lack of visible safeguards in the Duplex demo, there’s also, I would argue, a curious absence of imagination on display.

Had Google been bold enough to disclose its robot interlocutor, it might have thought harder about how it could have designed that experience to be both clearly not human and also fun or even funny. Think of how much life can be injected into animated cartoon characters, for example, which are very clearly not human yet are hugely popular because people find them entertaining and feel they come alive in their own way.

It actually makes you wonder whether, at some foundational level, Google lacks trust in both what AI technology can do and in its own creative abilities to breathe new life into these emergent synthetic experiences.

Make sure to visit: CapGeneration.com


Say hello to Google One

Google is revamping its consumer storage plans today by adding a new $2.99/month tier for 200 GB of storage and dropping the price of its 2 TB plan from $19.99/month to $9.99/month (and discontinuing the $9.99/month 1 TB plan). It’s also rebranding these storage plans (but not Google Drive itself) as “Google One.”

Going forward, you’ll also be able to share your storage quota with up to five family members.

That by itself would be interesting, given how easy it is to max out 100 GB with 4K videos and high-res images these days, but there is one other feature here that explains the new brand name: free one-tap access to Google Experts for help with any Google consumer product and service.

That access to live experts, not some barely functional AI chatbot, comes with every Google One plan, including the $1.99/month 100 GB plan. In the U.S., these experts will be available 24/7 over chat, email and telephone. In other countries, this lineup of support options may differ, but the company tells me that its goal is “to provide users with great one-tap support and constantly improve it over time.”

Google already offered 24/7 support for paying business users with a G Suite account, but this is the first time it actively offers live support to consumers.

It’s worth stressing that the existing free quota of 15 GB will remain.

In addition to access to experts, the company also promises to provide subscribers with other benefits. Google One’s director Larissa Fontaine told me that those could include discounts on hotels you find in Google Search, preferred rates for other Google services or credits on Google Play. “We hope to build those out over time,” she noted.

Brandon Badger, Google’s group product manager for Google One, told me the team looked at how people use the storage plans. Users now have more devices, shoot more 4K video and share those files with more family members, who in turn also have more devices. “We are looking with this plan to accommodate that,” he said.

In addition, Fontaine noted that users with paid storage accounts also tend to be heavy Google users in general, so combining storage and support seemed logical.

Sadly, this isn’t an immediate change. Over the course of the next few months, Google will upgrade all existing storage plans to Google One accounts, starting in the U.S., with a global rollout after that. Google also tells me that it will roll out a new Android app to help users manage their plans (not their files).

While the focus of today’s announcement is on storage, it’s hard not to look at this new offering in the context of the additional support and other bonus features that Google promises. Google One is clearly about more than just a better storage plan. Instead, it feels like the beginning of a new, more ambitious offering that could be expanded to include other services over time. Maybe a single subscription to all Google consumer services, including Drive, YouTube Red and Play Music (or whatever becomes of that)? Despite its name, Google One is currently only one of many subscription services the company offers, after all.


8 big announcements from Google I/O 2018

Google kicked off its annual I/O developer conference at Shoreline Amphitheater in Mountain View, California. Here are some of the biggest announcements from the Day 1 keynote. There will be more to come over the next couple of days, so follow along with everything Google I/O on TechCrunch.

Google goes all in on artificial intelligence, rebranding its research division to Google AI

Just before the keynote, Google announced it is rebranding its Google Research division to Google AI. The move signals how Google has increasingly focused R&D on computer vision, natural language processing, and neural networks.

Google makes talking to the Assistant more natural with “continued conversation”

What Google announced: Google announced a “continued conversation” update to Google Assistant that makes talking to the Assistant feel more natural. Now, instead of having to say “Hey Google” or “OK Google” every time you want to issue a command, you’ll only have to do so the first time. The company is also adding a new feature that allows you to ask multiple questions within the same request. All this will roll out in the coming weeks.

Why it’s important: When you’re having a typical conversation, odds are you ask follow-up questions if you didn’t get the answer you wanted. But it can be jarring to have to say “Hey Google” every single time, and it breaks the whole flow and makes the process feel pretty unnatural. If Google wants to be a significant player when it comes to voice interfaces, the actual interaction has to feel like a conversation, not just a series of queries.

Google Photos gets an AI boost

What Google announced: Google Photos already makes it easy for you to correct photos with built-in editing tools and AI-powered features for automatically creating collages, movies and stylized photos. Now, Photos is getting more AI-powered fixes like B&W photo colorization, brightness correction and suggested rotations. A new version of the Google Photos app will suggest quick fixes and tweaks like rotations, brightness corrections or adding pops of color.

Why it’s important: Google is working to become a hub for all of your photos, and it’s able to woo potential users by offering powerful tools to edit, sort, and modify those photos. Each additional photo Google gets offers it more data and helps it get better and better at image recognition, which in the end not only improves the user experience for Google, but also makes its tools for its services better. Google, at its heart, is a search company, and it needs a lot of data to get visual search right.

Google Assistant and YouTube are coming to Smart Displays

What Google announced: Smart Displays were the talk of Google’s CES push this year, but we haven’t heard much about Google’s Echo Show competitor since. At I/O, we got a little more insight into the company’s smart display efforts. Google’s first Smart Displays will launch in July, and of course will be powered by Google Assistant and YouTube. It’s clear that the company has spent some resources on building a visual-first version of Assistant, justifying the addition of a screen to the experience.

Why it’s important: Users are increasingly getting accustomed to the idea of some smart device sitting in their living room that will answer their questions. But Google is looking to create a system where a user can ask questions and then have the option of some kind of visual display for actions that just can’t be resolved with a voice interface. Google Assistant handles the voice part of that equation, and having YouTube gives it a good service to run alongside that.

Google Assistant is coming to Google Maps

What Google announced: Google Assistant is coming to Google Maps, available on iOS and Android this summer. The addition is meant to provide better recommendations to users. Google has long worked to make Maps feel more personalized, but since Maps is now about far more than just directions, the company is introducing new features to give you better recommendations for local places.

The Maps integration also combines the camera, computer vision technology, and Google Maps with Street View. With the camera/Maps combination, it really looks like you’ve jumped inside Street View. Google Lens can do things like identify houses, or even dog breeds, just by pointing your camera at the object in question. It will also be able to identify text.

Why it’s important: Maps is one of Google’s biggest and most important products. There’s a lot of excitement around augmentedted reality (you can point to phenomena like Pokémon Go) and companies are just starting to scratch the surface of the best use cases for it. Figuring out directions seems like such a natural use case for a camera, and while it was a bit of a technical accomplishment, it gives Google yet another perk for its Maps users to keep them inside the service and not switch over to alternatives. Again, with Google, everything comes back to the data, and it’s able to capture more data if users stick around in its apps.

Google announces a new generation for its TPU machine learning hardware

What Google announced: As the war for creating customized AI hardware heats up, Google said that it is rolling out its third generation of silicon, the Tensor Processing Unit 3.0. Google CEO Sundar Pichai said the new TPU is eight times more powerful per pod than last year’s, with up to 100 petaflops in performance. Google joins pretty much every other major company in looking to create custom silicon to handle its machine learning operations.

Why it’s important: There’s a race to create the best machine learning tools for developers. Whether that’s at the framework level with tools like TensorFlow or PyTorch or at the actual hardware level, the company that’s able to lock developers into its ecosystem will have an advantage over its competitors. It’s especially important as Google looks to build its cloud platform, GCP, into a massive business while going up against Amazon’s AWS and Microsoft Azure. Giving developers, who are already adopting TensorFlow en masse, a way to speed up their operations can help Google continue to woo them into its ecosystem.

MOUNTAIN VIEW, CA - MAY 08: Google CEO Sundar Pichai delivers the keynote address at the Google I/O 2018 Conference at Shoreline Amphitheater on May 8, 2018 in Mountain View, California. Google’s two-day developer conference runs through Wednesday May 9. (Photo by Justin Sullivan/Getty Images)

Google News gets an AI-powered redesign

What Google announced: Watch out, Facebook. Google is also planning to leverage AI in a revamped version of Google News. The AI-powered, redesigned news destination app will “allow users to keep up with the news they care about, understand the full story, and enjoy and support the publishers they trust.” It will leverage elements found in Google’s digital magazine app Newsstand and in YouTube, and introduces new features like “newscasts” and “full coverage” to help people get a summary or a more holistic view of a news story.

Why it’s important: Facebook’s main product is literally called “News Feed,” and it serves as a major source of information for a non-trivial portion of the world. But Facebook is embroiled in a scandal over the personal data of as many as 87 million users ending up in the hands of a political research firm, and there are a lot of questions over Facebook’s algorithms and whether they surface legitimate information. That’s a huge hole that Google could exploit by offering a better news product and, once again, locking users into its ecosystem.

Google unveils ML Kit, an SDK that makes it easy to add AI smarts to iOS and Android apps

What Google announced: Google unveiled ML Kit, a new software development kit for app developers on iOS and Android that allows them to integrate pre-built, Google-provided machine learning models into apps. The models support text recognition, face detection, barcode scanning, image labeling and landmark recognition.

Why it’s important: Machine learning tools have enabled a new wave of use cases, including ones built on top of image recognition or speech detection. But even though frameworks like TensorFlow have made it easier to build applications that tap those tools, it can still take a high level of expertise to get them off the ground and running. Developers often figure out the best use cases for new tools and devices, and development kits like ML Kit help lower the barrier to entry and give developers without a ton of machine learning expertise a playground to start figuring out interesting use cases for those applications.

So when will you be able to actually play with all these new features? The Android P beta is available today, and you can find the upgrade here.


Google adds Morse code input to Gboard

Google is adding Morse code input to its mobile keyboard. It’ll be available as a beta on Android later today. The company announced the new feature at Google I/O after showing a video of Tania Finlayson.

Finlayson has had a hard time communicating with other people due to her condition. She found a great way to write sentences and talk with people using Morse code.

Her husband developed a custom device that analyzes her head movements and transcodes them into Morse code. When she triggers the left button, it adds a short signal, while the right button triggers a long signal. Her device then converts the text into speech.

Google’s implementation will replace the keyboard with two areas for short and long signals. There are word suggestions above the keyboard, just like on the normal keyboard. The company has also created a Morse poster so that you can learn Morse code more easily.
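The two-button input described above can be sketched in a few lines: short and long presses accumulate into a Morse sequence, and a pause commits the sequence as a letter. This is a toy decoder, not Gboard's implementation, and the table covers only a handful of letters for brevity.

```python
# Partial International Morse table: sequence of dots/dashes -> letter.
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", ".": "E", "....": "H",
    "..": "I", ".-..": "L", "---": "O", "...": "S", "-": "T",
}

def decode(signals):
    """signals: list of 'short'/'long' presses and 'pause' separators."""
    text, current = [], ""
    for s in signals + ["pause"]:          # trailing pause commits the last letter
        if s == "short":
            current += "."
        elif s == "long":
            current += "-"
        elif current:                      # pause: commit the accumulated letter
            text.append(MORSE.get(current, "?"))
            current = ""
    return "".join(text)

presses = ["short"] * 3 + ["pause"] + ["long"] * 3 + ["pause"] + ["short"] * 3
print(decode(presses))  # SOS
```

In the real keyboard the "pause" would simply be a timeout between presses, and the suggestion strip would do fuzzy matching on partial sequences.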

As with all accessibility features, the more input methods the better. Anything that makes technology more accessible is a good thing.

Of course, Google used its gigantic I/O conference to introduce this feature to make the company look good, too. But it’s a fine trade-off, a win-win for both Google and users who can’t use a traditional keyboard.

Correction: A previous version of this article said Morse code input would be available on iOS and Android. The beta is only available on Android.


Google announces a new generation of its TPU machine learning hardware

As the war over custom AI hardware heats up, Google announced at Google I/O 2018 that it is rolling out the third generation of its silicon, the Tensor Processing Unit 3.0.

Google CEO Sundar Pichai said the new TPU pod is eight times more powerful than last year’s, with up to 100 petaflops of performance. Google joins pretty much every other major company in looking to create custom silicon to handle its machine learning operations. And while multiple frameworks for developing machine learning tools have emerged, including PyTorch and Caffe2, this one is optimized for Google’s TensorFlow. Google is looking to make Google Cloud an omnipresent platform at the scale of Amazon’s, and offering better machine learning tools is quickly becoming table stakes.
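A quick back-of-envelope sketch shows what those two claimed figures (8x speedup, up to 100 petaflops per pod) could mean for training time. The workload size below is an invented illustration, not a number from Google, and it assumes perfect utilization, which real training never achieves.

```python
POD_FLOPS = 100e15          # claimed: up to 100 petaflops per TPU 3.0 pod
SPEEDUP = 8                 # claimed: 8x over last year's pod

last_year_flops = POD_FLOPS / SPEEDUP   # implied ~12.5 petaflops
workload = 1e21                          # assumed: 10^21 FLOPs of training work

def hours(flops):
    """Ideal wall-clock hours at full, sustained utilization."""
    return workload / flops / 3600

print(f"last year: {hours(last_year_flops):.1f} h, now: {hours(POD_FLOPS):.1f} h")
# last year: 22.2 h, now: 2.8 h
```

Even with generous assumptions, that is the pitch in a nutshell: a job that tied up a pod for a day finishes over a lunch break, which is exactly the kind of argument that moves cloud customers.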

Amazon and Facebook are both working on their own kinds of custom silicon. Facebook’s hardware is optimized for its Caffe2 framework, which is designed to handle the massive data graph it has on its users. You can think of it as taking everything Facebook knows about you — your birthday, your friend graph, and everything that goes into the news feed algorithm — fed into a complex machine learning framework that works best for its own operations. That, in the end, may have required a customized approach to hardware. We know less about Amazon’s aims here, but it also wants to own the cloud infrastructure ecosystem with AWS.

All this has also spun up an increasingly large and well-funded startup ecosystem looking to create customized hardware targeted toward machine learning. There are startups like Cerebras Systems, SambaNova Systems and Mythic, with a half dozen or so beyond that as well (not even including the activity in China). Each is looking to exploit a similar niche: finding a way to outmaneuver Nvidia on cost or performance for machine learning tasks. Most of those startups have raised more than $30 million.

Google unveiled its second-generation TPU processor at I/O last year, so it wasn’t a huge surprise that we’d see another one this year. We’d heard from sources for weeks that it was coming, and that the company is already hard at work figuring out what comes next. Google at the time touted performance, though the point of all these tools is to make machine learning a little easier and more palatable in the first place.

This is also the first time the company has had to include liquid cooling in its data centers, CEO Sundar Pichai said. Heat dissipation is an increasingly difficult problem for companies looking to create customized hardware for machine learning.

There are a lot of questions around building custom silicon, however. It may be that developers don’t need a super-efficient piece of silicon when an Nvidia card that’s a few years old can do the trick. But data sets are getting larger and larger, and having the biggest and best data set is what creates defensibility for any company these days. Just the prospect of making things easier and cheaper as companies scale may be enough to get them to adopt something like GCP.

Intel, too, is looking to get in here with its own products. Intel has been beating the drum on FPGAs as well, which are designed to be more modular and flexible as the requirements for machine learning change over time. But again, the knock there is price and difficulty, as programming for FPGAs can be a hard problem in which not many engineers have expertise. Microsoft is also betting on FPGAs, and unveiled what it’s calling Brainwave just yesterday at its BUILD conference for its Azure cloud platform — which is increasingly a significant portion of its future potential.

Google more or less seems to want to own the entire stack of how we operate on the internet. It starts at the TPU, with TensorFlow layered on top of that. If it manages to succeed there, it gets more data, makes its tools and services faster and faster, and eventually reaches a point where its AI tools are too far ahead and lock developers and users into its ecosystem. Google is at its heart an advertising business, but it’s gradually expanding into new business segments that all require robust data sets and operations to learn human behavior.

Now the challenge will be having the best pitch for developers, not only to get them onto GCP and other services, but also to keep them locked into TensorFlow. But as Facebook increasingly seems to challenge that with alternative frameworks like PyTorch, there may be more difficulty there than originally thought. Facebook unveiled a new version of PyTorch at its main annual conference, F8, just last month. We’ll have to see if Google is able to respond adequately to stay ahead, and that starts with a new generation of hardware.


Maps walking navigation is Google’s most compelling use for AR yet

Google managed to elicit an audible gasp from the crowd at I/O today when it showed off a new augmented reality feature for Maps. It was a clear standout during a keynote that contained plenty of iterative updates to existing software, and it offered a glimpse into what it will take to move AR from interesting novelty to compelling use case.

Along with the standard array of ARCore-based gaming offerings, the new AR mode for Maps is arguably one of the first truly indispensable real-world applications. As a person who spent the better part of an hour yesterday attempting to navigate the long, unfamiliar blocks of Palo Alto, California by following an arrow on a small blue circle, I can personally vouch for the usefulness of such an application.

It’s still early days — the company admitted that it’s playing around with a few ideas here. But it’s easy to see how offering visual overlays of a real-time image would make it a heck of a lot easier to navigate unfamiliar spaces.

In a sense, it’s like a real-time version of Street View, combining real-world images with map overlays and location-based positioning. In the demo, a majority of the screen is devoted to the street image captured by the on-board camera. Turn-by-turn directions and large arrows are overlaid onto the video, while a small half-circle displays a sliver of the map to give you some context of where you are and how long it will take to get where you’re going.
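One small piece of such an overlay can be sketched concretely: rotating the on-screen arrow so it points toward the next waypoint relative to the phone's compass heading. Real AR navigation fuses camera imagery, GPS and map data; this hypothetical sketch only shows the bearing arithmetic, using the standard great-circle initial-bearing formula and made-up coordinates.

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def arrow_rotation(device_heading, target_bearing):
    """Degrees to rotate the overlay arrow; 0 means 'straight ahead'."""
    return (target_bearing - device_heading) % 360

# Waypoint due north of the user (coordinates invented for illustration).
b = bearing(37.444, -122.161, 37.445, -122.161)
print(arrow_rotation(90.0, b))  # facing east, target north -> 270.0
```

The hard part the demo glosses over is not this arithmetic but knowing `device_heading` accurately, which is why camera-based localization matters so much here.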

Of course, a system that’s so heavily reliant on visuals wouldn’t make sense in the context of driving, unless, of course, it’s presented in some kind of heads-up display. Here, however, it works seamlessly, presuming, of course, you’re willing to look a bit dorky by holding up your phone in front of your face.

There are a lot of moving parts here too, naturally. In order to sync up with a display like this, the map is going to have to get things just right — and anyone who’s ever walked through city streets on Maps knows how often that can misfire. That’s likely a big part of the reason Google wasn’t actually willing to share specifics with regard to timing. For now, we’re just going to have to assume this is a sort of proof of concept — along with the fun little walking fox guide the company trotted out, which had shades of a certain Johnny Cash-voiced coyote.

But if this is what trying to find my way in a new city looks like, sign me up.


Google confirms some of its own services are now getting blocked in Russia over the Telegram ban

A shower of paper airplanes darted through the skies of Moscow and other cities in Russia today, as users answered the call of entrepreneur Pavel Durov to send the blank missives out of their windows at a pre-appointed time in support of Telegram, the messaging app he founded — which uses a paper airplane icon — that was blocked last week by Russian regulator Roskomnadzor (RKN). RKN believes the service is violating national laws by failing to provide it with encryption keys to access messages on the service (Telegram has refused to comply).

The paper plane send-off was a small, flashmob moment in a “Digital Resistance” — Durov’s preferred term — that has otherwise largely played out online: currently, nearly 18 million IP addresses are blocked from being accessed in Russia, all in the name of blocking Telegram.

And in the latest development, Google has now confirmed to us that its own services are now also being impacted. From what we understand, Google Search, Gmail and push notifications for Android apps are among the products being affected.

“We are aware of reports that some users in Russia are unable to access some Google products, and are investigating those reports,” said a Google spokesperson in an emailed response. We’d been trying to contact Google all week about the Telegram blockade, and this is the first time the company has both replied and acknowledged something related to it.

(Amazon has acknowledged our messages but has yet to reply to them.)

Google’s comments come on the heels of RKN itself also announcing today that it had expanded its IP blocks to Google’s services. At its peak, RKN had blocked nearly 19 million IP addresses, with dozens of third-party services that also use Google Cloud and Amazon’s AWS, such as Twitch and Spotify, also getting caught in the crossfire.

Russia is among the countries that have enforced a kind of digital firewall, periodically or permanently blocking certain online content. Some turn to VPNs to access that content anyway, but it turns out that Telegram hasn’t needed to rely on that workaround to keep getting used.

“RKN is embarrassingly bad at blocking Telegram, so most people keep using it without any intermediaries,” said Ilya Andreev, COO and co-founder of Vee Security, which has been providing a proxy service to bypass the ban. Currently, it is supporting up to two million users simultaneously, although this is a relatively small proportion considering Telegram has around 14 million users in the country (and likely more, considering all the free publicity it’s been getting).

As we described earlier this week, the reason so many IP addresses are getting blocked is that Telegram has been using a technique that enables it to “hop” to a new IP address when the one it’s using is blocked by RKN. It’s a technique that a much smaller app, Zello, had also resorted to for nearly a year after RKN announced its own ban on it.
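The hopping behavior described above can be reduced to a very simple sketch: the client keeps a pool of relay addresses and moves to the next one whenever the current address lands on the blocklist. This is a toy model, not Telegram's actual mechanism, and the addresses below are reserved documentation ranges, not real relays.

```python
RELAYS = ["203.0.113.10", "203.0.113.11", "198.51.100.7", "198.51.100.8"]

def connect(blocklist, relays=RELAYS):
    """Return the first reachable relay, hopping past blocked ones."""
    for addr in relays:
        if addr not in blocklist:
            return addr
    raise ConnectionError("all relays blocked")

print(connect({"203.0.113.10"}))  # hops to 203.0.113.11
```

It also makes the regulator's counter-move obvious: chasing individual addresses is futile, so RKN blocked entire subnets at once, which is exactly how so many unrelated cloud-hosted services got caught in the crossfire.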

Zello ceased the hopping earlier this year when RKN got wise to its ways and chose to start blocking entire subnetworks of IP addresses to head off so many hops, and Amazon’s AWS and Google Cloud kindly asked Zello to stop, as other services were also starting to get blocked. So when Telegram started the same kind of hopping, RKN, in fact, knew just what to do to turn the screws. (It also took the heat off Zello, which miraculously got restored.)

So far, Telegram’s cloud partners have held strong and have not taken the same route, although getting its own services blocked could see Google’s resolve tested at a new level.

Some believe that one outcome could be the regulator playing out an elaborate game of chicken with Telegram and the rest of the internet companies that are in some way aiding and abetting it, spurred in part by Russia’s larger profile and how such blocks would appear to international audiences.

“Russia can’t keep blocking random things on the Internet,” Andreev said. “Russia is working hard to make its image more appealing to foreigners in preparation for the World Cup,” which is taking place this June and July. “They can’t have tourists coming and realizing Google doesn’t work in Russia.”

We’ll update this post and continue to report on further developments as we learn more.


Android blatantly copies the iPhone X navigation gestures

Google unveiled some of the new features in the next version of Android at its developer conference. One feature seemed particularly familiar. Android P will get new navigation gestures to switch between apps. And it works just like the iPhone X.

“As part of Android P, we’re introducing a new system navigation that we’ve been working on for more than a year now,” said VP of Android Engineering Dave Burke. “And the new design makes Android multitasking more approachable and easier to understand.”

While Google has probably been working on a new multitasking screen for a year, it’s hard to believe that the company didn’t copy Apple. The iPhone X was unveiled in September 2017.

On Android P, the traditional home, back and multitasking buttons are gone. There’s a single pill-shaped button at the center of the screen. If you swipe up from this button, you get a new multitasking view with your most recent apps. You can swipe left and right and select the app you’re looking for.

If you swipe up one more time, you get the app drawer with suggested apps at the very top. At any time, you can tap on the button to go back to the home screen. These gestures also work when you’re using an app. Android P adds a back button in the bottom left corner if you’re in an app.

But the most shameless inspiration is the left and right gestures. If you swipe left and right on the pill-shaped button, you can switch to the next app, just like on the iPhone X. You can scrub through multiple apps. As soon as you release your thumb, you’ll jump to the selected app.

You can get the Android P beta for a handful of devices starting today. End users will get the new version in the coming months.

It’s hard to blame Google for this one, as the iPhone X gestures are incredibly elegant and efficient — and yes, they look a lot like the Palm Pre’s. Using a phone that runs the current version of Android after using the iPhone X feels much slower, as it requires multiple taps to switch to the most recent app.

Apple moved the needle and it’s clear that all smartphones should work like the iPhone X. But Google still deserves to be called out.


Google wants to cure our phone addiction. How about that for irony? | Matt Haig

It helped us get hooked on tech; now it wants to wean us off by employing more tech. Is this about business, not wellbeing? asks author Matt Haig

Worried about the hours you spend scrolling your phone, sinking into despair, gazing at glamorous Instagrammers leaning against palm trees while you try to get out of bed?

Worry no longer: help is coming. And it’s coming from, um, Google. Yes, that’s right. Google is now trying to improve our “digital wellbeing” by making our phones less addictive. Its newest version of Android includes an array of features with the stated objective of keeping us off our phones.

Among the latest additions is a “dashboard” app that tells you at a glance how – and how often – you’ve been using your phone. It will let you set time limits via an app timer, and give you warnings when you’ve been using it for too long.
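The per-app timer described above boils down to very little machinery: accumulate foreground minutes per app and emit a warning once a daily limit is crossed. This is a minimal sketch of the idea, not Android's implementation; the app names and limits are made up.

```python
class AppTimer:
    def __init__(self, limits):
        self.limits = limits   # app name -> daily limit in minutes
        self.used = {}         # app name -> minutes used today

    def record(self, app, minutes):
        """Add foreground time; return a warning string once over the limit."""
        self.used[app] = self.used.get(app, 0) + minutes
        limit = self.limits.get(app)
        if limit is not None and self.used[app] >= limit:
            return f"warning: {app} used {self.used[app]} of {limit} min"
        return None

timer = AppTimer({"instagram": 30})
timer.record("instagram", 20)                # under the limit: no warning
print(timer.record("instagram", 15))         # warning: instagram used 35 of 30 min
```

The genuinely hard parts live elsewhere: reliably measuring foreground time across processes, and deciding what the warning should actually do, since a dismissible nag changes little about the addiction the feature claims to address.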

This is Google doing what it always does. It is trying to be the solution to every aspect of our lives. It already wants to be our librarian, our encyclopedia, our dictionary, our map, our navigator, our wallet, our postman, our calendar, our newsagent, and now it wants to be our therapist. It wants us to believe it’s on our side.

There is something suspect about deploying more technology to use less technology. And something ironic about a company that fuels our tech craving telling us that it holds the key to weaning us off it. It doubles as good PR, and pre-empts any future criticism about corporate irresponsibility.

Google may be the world’s most valuable brand, but it has been consistently dogged by criticism, including over privacy, search neutrality and paying its fair share of tax. Amid a new era of scepticism towards the privacy-neglecting practices of Silicon Valley behemoths, and awareness of technology’s potential harm to our mental health, Google’s move looks like a classic attempt to get ahead of the game. People no longer want a life-work balance, they want a life-tech balance. And Google is here to help.

“Seventy percent of people want more help striking this balance,” said Sameer Samat, vice-president of product management at Google, at its annual showcase last Tuesday. So it can be seen to be acting on the will of the people – a wise move for a company which boasts, for its search engine alone, well over a billion users.

The trouble is that while Google professes to acknowledge the dangers of technology taking over our lives, it keeps on devising new ways for, well, technology to take over our lives. At the very same showcase, it unveiled something else it is working on: a Google Assistant straight out of a dystopian sci-fi movie – a type of AI that makes phone calls on your behalf.

An audience of tech fans watched with palpable exhilaration as Google CEO Sundar Pichai showed a demo of Google Assistant booking a hair appointment over the phone. The bit that really got them was when the “assistant” dropped a casually affirmative “mmm-hmm” into the call. Pichai told the crowd: “The amazing thing is that Assistant can actually understand the subtleties of conversation.” Google also unveiled Google Lens, a visual search tool that looks for information in the objects around you, and showed a demo of it identifying everything in your friend’s apartment, even the blurb of a Zadie Smith novel. (Zadie Smith, as a self-described “luddite abstainer”, was a brave choice.)

Ultimately, it looks like Google is ready to wean us off our phone addiction because tech is no longer just about phones and laptops. Google’s ultimate aim is to be involved with every aspect of our lives. Like an overbearing mother, it wants you to sit down and take it easy while it does everything for you, even phone the hairdresser. It wants to know everything about you. It wants, quite literally, to get inside your eyeballs. And it will sell us this the way it sells everything: without us even noticing. It will make something so convenient we’ll wonder how we got by without it.

In the name of convenience, Google is not just mining our data, it is eroding our unique humanity. We need a time-out. Technology is evolving far faster than we are. We need to be asking Google bigger questions than “Can you book my hair appointment?” Starting with: if tech can do everything we can do, what is the point of us?

* Matt Haig is a novelist and journalist. His book Notes on a Nervous Planet is published in July


Google Maps goes beyond directions

Google today announced a new version of Google Maps that will launch later this summer. All of the core Google Maps features for getting directions aren’t going away, of course, but on top of those, the team has now built a new set of features that are all about exploration.

“About a year ago, when we started to talk to users, one of the things we asked them was: how can we really help you? What else do you want Google Maps to do? And one of the overwhelming answers that we got back was just really a lot of requests around helping users explore an area, help me choose where to go,” Sophia Lin, Google’s senior product director on the Google Maps team, told me. “So we really started digging in to thinking about what we can really do here from Google that would really help people.”

Right now, Google Maps is obviously best known for helping people get where they want to go, but for a while now, Google has featured all kinds of additional content in the service. Many users never touch those features, though, it seems. While I couldn’t get Lin to tell me the percentage of users who currently use the existing Google Maps exploration tools, this new initiative is also part of an attempt to get users to move beyond directions when they think about Maps.

And because this is Google, that new experience is all about personalization with the help of AI.

So in the new Maps, you’ll find a new “For you” tab that’s basically a newsfeed-like experience with recommendations for you. You’ll be able to “follow” certain neighborhoods and cities (or maybe a place you plan to visit soon), similar to a social networking experience. When Google Maps sees interesting updates in that area — maybe a restaurant that’s trending or a new coffee shop that opens — it’ll tell you about them in your feed.

“People had problems identifying what’s new,” Lin told me. “Sometimes you are really lucky and you’re walking down the street and stumble across something, but oftentimes that’s not the case and you find out about something six months after it opened. So what we started looking into was: can we understand, from anonymized population dynamics, what places are trending, what are the places that people are going.”

There are also algorithmically generated “Foodie List” and “Trending this week” lists that show you what’s new and interesting and where the trendmakers in an area are hanging out. As Lin told me, the Foodie List is based on an anonymized cohort analysis that looks at where the people who go out a lot are eating. Because those are often the first to try new places, their movements often tend to foreshadow trends. Similarly, the “Trending” list looks at the overall population, so that list can change with the season, with an ice cream parlor trending in the summer, for example.
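A toy version of that trending analysis makes the idea concrete: compare this week's anonymized, aggregate visit counts with last week's and rank places by relative growth. All numbers below are invented, and Google's real pipeline is obviously far more involved (cohort selection, seasonality, spam filtering).

```python
def trending(this_week, last_week, min_visits=10):
    """Rank places by week-over-week relative growth in visits."""
    scores = {}
    for place, now in this_week.items():
        before = last_week.get(place, 0)
        if now >= min_visits:                    # ignore tiny samples
            scores[place] = (now - before) / max(before, 1)
    return sorted(scores, key=scores.get, reverse=True)

this_week = {"ice cream parlor": 120, "ramen bar": 80, "old diner": 60}
last_week = {"ice cream parlor": 30, "ramen bar": 70, "old diner": 65}
print(trending(this_week, last_week))
# ['ice cream parlor', 'ramen bar', 'old diner']
```

Relative growth rather than raw counts is what lets a newly opened spot outrank a perennially busy one, which matches the "what's new" framing Lin describes.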

For other items in the “For you” feed, Google Maps will actually analyze local news articles to see what’s new, too.

Lin stressed that the feed isn’t so much about the volume of information but about presenting the right information at the right time and for the right person.

In addition to the “For you” feed, there are also a number of new basic exploration features, which are all powered by AI, too. Maps will generate lists of Michelin-starred restaurants, for example, or popular brunch spots, depending on your context and the time of day.

Another major new feature that’s coming to Maps soon is “your match.” If you regularly peruse the star ratings of various restaurants before you decide where to go, then you know that those ratings can only tell you so much. Now, with “your match,” Maps will present you with a personalized rating that tells you how closely a restaurant matches your own preferences.

Google Maps learns about those preferences based on how you have rated this and other places, as well as the preferences you can define manually in the Google Maps settings once this update goes live. Interestingly, Google does not try to base these scores on how other people like you have rated a place.
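To make the concept tangible, here is a hedged sketch of how a personalized match score could be computed: blend the aggregate star rating with how well a place's attributes overlap the user's stated preferences. The weights, attribute tags and 0-to-10 scale are all invented for illustration; nothing here reflects Google's actual model.

```python
def match_score(place, user_likes, w_rating=0.5, w_prefs=0.5):
    """Blend aggregate stars (out of 5) with preference overlap, scaled 0-10."""
    overlap = len(set(place["tags"]) & set(user_likes)) / max(len(user_likes), 1)
    return round(10 * (w_rating * place["stars"] / 5 + w_prefs * overlap), 1)

place = {"stars": 4.0, "tags": ["ramen", "casual", "late-night"]}
print(match_score(place, ["ramen", "casual"]))  # 9.0
```

Note how this mirrors the detail in the paragraph above: the score depends only on the place's aggregate rating and this user's own signals, with no collaborative "people like you" term.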

The third major new feature of the new app is group planning. Based on the demo I saw, the team actually did a really nice job with this. The general idea here is to let you easily create a list of suggestions for a group outing (or just a dinner with your significant other) by long-pressing on a place listing. Google Maps will then pop up a chat head-like bubble that follows you around as you browse for other places. Once you have compiled your list, you can share it with your friends, who can then vote for their favorites.

Google will launch this new Google Maps experience later this summer. It will come to both iOS and Android, though the team hasn’t decided which one will come first yet. For now, all of these new features will only come to the app, not the web.
