Some low-cost Android phones shipped with malware built in

Avast has found that many low-cost, non-Google-certified Android phones shipped with a strain of malware built in that could send users to download apps they didn’t intend to access. The malware, called Cosiloon, overlays ads on top of the operating system in order to promote apps or even trick users into downloading apps. Affected devices shipped from ZTE, Archos and myPhone.

The app consists of a dropper and a payload. “The dropper is a small application with no obfuscation, located on the /system partition of affected devices. The app is wholly passive, only visible to the user in the list of system applications under ‘settings.’ We have seen the dropper with two different names, ‘CrashService’ and ‘ImeMess,’” wrote Avast. The dropper then connects to a website to grab the payloads that the attackers wish to install on the phone. “The XML manifest contains information about what to download, which services to start and contains a whitelist programmed to potentially exclude specific countries and devices from infection. However, we’ve never seen the country whitelist used, and only a few devices were whitelisted in early versions. Currently, no countries or devices are whitelisted. The entire Cosiloon URL is hardcoded in the APK.”

The dropper is part of the system’s firmware and is not easily removed.

To summarize:

The dropper can install application packages defined by the manifest downloaded via an unencrypted HTTP connection without the user’s permission or knowledge.
The dropper is preinstalled somewhere in the supply chain, by the manufacturer, OEM or carrier.
The user cannot remove the dropper, because it is a system application, part of the device’s firmware.
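Avast’s description implies a simple client flow: fetch an XML manifest over plain HTTP, read the payload list and the (now empty) whitelists, and decide whether to infect. A minimal sketch of that logic is below; the manifest schema, element names and URL are illustrative guesses, not the real Cosiloon format.

```python
import xml.etree.ElementTree as ET

# Hypothetical manifest shaped like the one Avast describes: what to
# download, which services to start, and country/device whitelists.
# All element and attribute names here are illustrative guesses.
SAMPLE_MANIFEST = """
<manifest>
  <payload url="http://example.com/payload.apk" service="com.example.Svc"/>
  <whitelist><country code="XX"/></whitelist>
</manifest>
"""

def parse_manifest(xml_text):
    root = ET.fromstring(xml_text)
    payloads = [(p.get("url"), p.get("service")) for p in root.iter("payload")]
    countries = {c.get("code") for c in root.iter("country")}
    return payloads, countries

def should_infect(device_country, whitelisted_countries):
    # Per Avast, the whitelist excludes listed countries from infection,
    # so an empty whitelist (as currently observed) spares no one.
    return device_country not in whitelisted_countries

payloads, countries = parse_manifest(SAMPLE_MANIFEST)
print(payloads)                        # [('http://example.com/payload.apk', 'com.example.Svc')]
print(should_infect("US", countries))  # True
```

The point of the sketch is the unencrypted fetch-and-execute pattern: because the manifest travels over plain HTTP, anyone in a position to tamper with traffic could in principle swap the payload list, compounding the risk Avast describes.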

Avast can detect and remove the payloads, and it recommends following these instructions to disable the dropper. If the dropper spots antivirus software on your phone it will stop showing notifications, but it will still recommend downloads as you browse in your default browser, a gateway to grabbing more (and worse) malware. Engadget notes that this vector is similar to the Lenovo “Superfish” exploit that shipped thousands of computers with malware built in.



Facebook, Google face first GDPR complaints over forced consent

After two years coming down the pipe at tech giants, Europe’s new privacy framework, the General Data Protection Regulation (GDPR), is now being applied — and long-time Facebook privacy critic Max Schrems has wasted no time in filing four complaints relating to (certain) companies’ “take it or leave it” stance when it comes to consent.

The complaints have been filed on behalf of (unnamed) individual users — with one filed against Facebook; one against Facebook-owned Instagram; one against Facebook-owned WhatsApp; and one against Google’s Android.

Schrems argues that the companies are using a strategy of “forced consent” to continue processing the individuals’ personal data — when in fact the law requires that users be given a free choice unless consent is strictly necessary for provision of the service. (And, well, Facebook claims its core product is social networking — rather than farming people’s personal data for ad targeting.)

“It’s simple: Anything strictly necessary for a service does not require consent boxes anymore. For everything else users must have a real option to say ‘yes’ or ‘no’,” Schrems writes in a statement.

“Facebook has even blocked accounts of users who have not given consent,” he adds. “In the end users only had the choice to delete the account or hit the ‘agree’ button — that’s not a free choice; it is more reminiscent of a North Korean election process.”

We’ve reached out to all the companies involved for comment and will update this story with any response. Update: Facebook has now sent the following statement, attributed to its chief privacy officer, Erin Egan: “We have prepared for the past 18 months to ensure we meet the requirements of the GDPR. We have made our policies clearer, our privacy settings easier to find and introduced better tools for people to access, download, and delete their data. Our work to improve people’s privacy doesn’t stop on May 25th. For example, we’re building Clear History: a way for everyone to see the websites and apps that send us information when you use them, clear this information from your account, and turn off our ability to store it associated with your account going forward.”

Schrems most recently founded a not-for-profit digital rights organization to focus on strategic litigation around the bloc’s updated privacy framework, and the complaints have been filed via this crowdfunded NGO — which is called noyb (aka ‘none of your business’).

As we pointed out in our GDPR explainer, the provision in the regulation allowing for collective enforcement of individuals’ data rights is an important one, with the potential to strengthen the implementation of the law by enabling non-profit organisations such as noyb to file complaints on behalf of individuals — thereby helping to redress the power imbalance between corporate giants and consumer rights.

That said, the GDPR’s collective redress provision is a component that Member States can choose to derogate from, which helps explain why the first four complaints have been filed with data protection agencies in Austria, Belgium, France and Hamburg in Germany — regions that also have data protection agencies with a strong record of defending privacy rights.

Given that the Facebook companies involved in these complaints have their European headquarters in Ireland, it’s likely the Irish data protection agency will get involved too. And it’s fair to say that, within Europe, Ireland does not have a strong reputation as a data protection rights champion.

But the GDPR allows for DPAs in different jurisdictions to work together in instances where they have joint concerns and where a service crosses borders — so noyb’s action seems to be aiming to test this element of the new framework too.

Under the penalty structure of the GDPR, major violations of the law can attract fines as large as 4% of a company’s global annual revenue — which, in the case of Facebook or Google, means they could be on the hook for more than a billion euros apiece if they are deemed to have violated the law, as the complaints argue.
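To put that cap in concrete terms: Article 83(5) of the GDPR limits fines for major violations to the greater of €20 million or 4% of total worldwide annual turnover. A quick sketch of the arithmetic, using an illustrative (not actual) revenue figure:

```python
def max_gdpr_fine(global_annual_revenue_eur):
    # GDPR Art. 83(5): up to EUR 20m or 4% of worldwide annual turnover,
    # whichever is higher.
    return max(20_000_000, 0.04 * global_annual_revenue_eur)

# An illustrative company with EUR 35bn in annual revenue:
print(max_gdpr_fine(35e9) / 1e9)   # roughly 1.4 (billion euros)

# Small companies fall back to the flat EUR 20m ceiling:
print(max_gdpr_fine(100e6))        # 20000000
```

Which is how a single upheld complaint against a company the size of Facebook or Google could plausibly run past the billion-euro mark.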

That said, given how freshly in place the regulation is, some EU regulators may well tread softly on the enforcement front — at least in the first instances, to give companies some benefit of the doubt and/or a chance to come into compliance if they are deemed to be falling short of the new standards.

However, in instances where companies themselves appear to be attempting to distort the law with a willfully self-serving interpretation of the rules, regulators may feel they need to act swiftly to nip any disingenuousness in the bud.

“We probably will not immediately have billions in penalty payments, but the corporations have intentionally contravened the GDPR, so we expect a corresponding penalty under GDPR,” writes Schrems.

Only yesterday, for example, Facebook founder Mark Zuckerberg — speaking in an onstage interview at the VivaTech conference in Paris — claimed his company hasn’t had to make any radical changes to comply with GDPR, and further claimed that a “vast majority” of Facebook users are willingly opting in to targeted advertising via its new permission flow.

“We’ve been rolling out the GDPR flows for a number of weeks now in order to make sure that we were doing this in a good way and that we could take into account everyone’s feedback before the May 25 deadline. And one of the things that I’ve found interesting is that the vast majority of people choose to opt in to make it so that we can use the data from other apps and websites that they’re using to make ads better. Because the reality is if you’re willing to see ads in a service you want them to be relevant and good ads,” said Zuckerberg.

He did not mention that the dominant social network does not offer people a free choice on accepting or declining targeted advertising. The new permission flow Facebook revealed ahead of GDPR only offers the ‘choice’ of leaving Facebook entirely if a person does not want to accept targeted advertising. Which, well, isn’t much of a choice given how powerful the network is. (Additionally, it’s worth pointing out that Facebook continues tracking non-users — so even deleting a Facebook account does not guarantee that Facebook will stop processing your personal data.)

Asked about how Facebook’s business model will be affected by the new rules, Zuckerberg essentially claimed nothing significant will change — “because giving people control of how their data is used has been a core principle of Facebook since the beginning”.

“The GDPR adds some new controls and then there’s some areas that we need to comply with but overall it isn’t such a massive departure from how we’ve approached this in the past,” he claimed. “I mean I don’t want to downplay it — there are strong new rules that we’ve needed to put a bunch of work into making sure that we complied with — but as a whole the philosophy behind this is not completely different from how we’ve approached things.

“In order to be able to give people the tools to connect in all the ways they want and build community, a lot of the philosophy that is encoded in a regulation like GDPR is really how we’ve thought about all this stuff for a long time. So I don’t want to understate the areas where there are new rules that we’ve had to go and implement, but I also don’t want to make it seem like this is a massive departure in how we’ve thought about this stuff.”

Zuckerberg faced a range of tough questions on these points from the EU parliament earlier this week. But he avoided answering them in any meaningful detail.

So EU regulators are essentially facing a first test of their mettle — i.e. whether they are willing to step up and defend the line of the law against big tech’s attempts to reshape it in their business model’s image.

Privacy laws are nothing new in Europe, but robust enforcement of them would certainly be a breath of fresh air. And now, at least, thanks to GDPR there’s a penalty structure in place to provide incentives as well as teeth, and to spin up a market around strategic litigation — with Schrems and noyb in the vanguard.

Schrems also makes the point that small startups and local companies are less likely to be able to use the kind of strong-arm ‘take it or leave it’ tactics on users that big tech is able to unilaterally apply, extracting ‘consent’ as a consequence of the reach and power of their platforms — arguing there’s an underlying competition concern that GDPR has the potential to help redress.

“The fight against forced consent ensures that the corporations cannot force users to consent,” he writes. “This is especially important so that monopolies have no advantage over small and medium-sized companies.”


Starbucks’ mobile payment service is slightly outpacing Apple’s

People really do love getting their coffee more quickly. Starbucks, which has operated its own mobile payments service since 2011, is the market leader in mobile payments users, beating out Apple Pay, Google Pay, and Samsung Pay, according to a new report from eMarketer out this morning. However, Starbucks’ lead over Apple Pay is only a small one: in 2017, it had 20.7 million U.S. users compared with Apple Pay’s 19.7 million. And that gap will remain small this year, with 23.4 million using Starbucks’ mobile payments compared with 22 million using Apple Pay.

The broad adoption of the Starbucks mobile payment service is not only due to the speed and convenience that the barcode-based payment system offers — it’s also because payments are tied to loyalty, and the Starbucks app is where customers can monitor and manage their card balance and their “star rewards.” In addition, Starbucks has the benefit of being able to offer a consistent payments experience across its stores: there’s never a question in consumers’ minds as to whether they can use its mobile payment service. They know they can.

Other mobile proximity payment services don’t have the same advantage, as many retailers still don’t offer payment terminals that support tap-to-pay services like Apple Pay and Google Pay.

According to eMarketer’s forecast, 23.4 million people ages 14 and older will use the Starbucks app to make a point-of-sale purchase at least once every six months, compared with 22 million who will use Apple Pay, 11.1 million who will use Google Pay, and 9.9 million who will use Samsung Pay.

Those numbers will increase across the board through 2022, but the rankings will remain the same — with Starbucks then seeing 29.8 million users to Apple Pay’s 27.5 million.
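For what it’s worth, the growth rates implied by those endpoints are modest. The compound annual growth rate (CAGR) sketch below uses only the figures cited above; the calculation itself is our own arithmetic, not eMarketer’s:

```python
# Forecast user counts cited above (2018 and 2022), in millions.
forecast = {
    "Starbucks": (23.4, 29.8),
    "Apple Pay": (22.0, 27.5),
}

for name, (start, end) in forecast.items():
    cagr = (end / start) ** (1 / 4) - 1   # four years: 2018 -> 2022
    print(f"{name}: {cagr:.1%} implied annual user growth")
```

Both services grow at roughly 6% a year in this forecast, which is why the ranking, and the narrow gap between the two, holds through 2022.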

However, this forecast appears to be discounting the impact of the recent expansion of Apple Pay, which allows users to send payments to friends through iMessage. When you receive money this way, it’s added to an Apple Pay Cash card in your iPhone’s Wallet, which can then be used in stores, as well as in apps or online. This built-in payments service inside one of the largest messaging platforms could prompt more users to adopt Apple Pay, even if they hadn’t before.

Another note: the relative popularity of these services also seems tied to how long they’ve been around.

Apple Pay launched before Samsung and Google Pay, and is now accepted at more than half of U.S. merchants. Google Pay isn’t as widely accepted, but is pre-installed on Android, which will help it grow. Samsung Pay, meanwhile, has the lowest adoption in terms of users, but is the most accepted by merchants, says eMarketer.

The ranking of the various payment services wasn’t the only notable finding from eMarketer’s new report.

The analysts also found that this year, for the first time, more than 25 percent of U.S. smartphone users ages 14 and older will have used a mobile payment service at least once every six months. The number of payment users will increase by 14.5 percent to reach 55 million by the end of 2018, the firm estimates.
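Those two figures can be cross-checked against each other: growing 14.5 percent to reach 55 million implies a base of roughly 48 million U.S. mobile payment users in 2017. A quick sketch (our arithmetic, not eMarketer’s own model):

```python
projected_2018 = 55_000_000   # eMarketer's year-end 2018 estimate
growth_rate = 0.145           # the 14.5% year-over-year increase

# Back out the implied prior-year base.
implied_2017 = projected_2018 / (1 + growth_rate)
print(round(implied_2017 / 1e6, 1))   # 48.0 -> about 48 million users in 2017
```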

But over the next several years, these top four services will see their share of the mobile payments market drop, even as their user numbers grow. That’s because they’ll face increased competition from other new payment apps, including those from merchants themselves.

“Retailers are increasingly making their own payment apps, which allow them to capture valuable data about their users. They can also build in rewards and perks to boost customer loyalty,” eMarketer forecasting analyst Cindy Liu says.

eMarketer’s forecast (paywalled) is based on an analysis of third-party data, including Forrester, Juniper Research, and Crone Consulting’s data on U.S. mobile payments users.

Note: Updated after publication to clarify the data is focused on U.S. mobile users


Duplex shows Google failing at ethical and creative AI design

Google CEO Sundar Pichai milked the woo from a clappy, home-turf developer crowd at its I/O conference in Mountain View this week with a demo of an in-the-works voice assistant feature that enables the AI to make phone calls on behalf of its human owner.

The so-called ‘Duplex’ feature of the Google Assistant was shown calling a hair salon to book a woman’s haircut, and ringing a restaurant to try to book a table — only to be told it did not accept bookings for fewer than five people.

At which point the AI changed tack and asked about wait times, earning its owner and controller, Google, the reassuring intel that there wouldn’t be a long wait at the chosen time. Job done.

The voice system deployed human-sounding vocal cues, such as ‘ums’ and ‘ahs’, to make the “conversational experience more comfortable,” as Google couches it in a blog post about its aims for the tech.

The voices Google used for the AI in the demos were not synthesized robotic tones but distinctly human-sounding, in both the female and male flavors it showcased.

Indeed, the AI pantomime was apparently realistic enough to convince some of the genuine humans on the other end of the line that they were speaking to people.

At one point the bot’s ‘mm-hmm’ response even drew appreciative giggles from a techie audience that clearly felt in on the ‘joke’.

But while the home crowd cheered enthusiastically at how capable Google had apparently made its prototype robot caller — with Pichai going on to sketch a grand vision of the AI saving people and businesses time — the episode is worryingly suggestive of a company that views ethics as an after-the-fact consideration.

One it does not allow to trouble the trajectory of its engineering ingenuity.

A consideration which only seems to get a look-in years into the AI development process, at the cusp of a real-world rollout — which Pichai said would be coming shortly.

Deception by design

“Google’s experiments do appear to have been designed to deceive,” agreed Dr Thomas King, a researcher at the Oxford Internet Institute’s Digital Ethics Lab, discussing the Duplex demo. “Because their main hypothesis was ‘can you distinguish this from a real person?’. In this case it’s unclear why their hypothesis was about deception and not the user experience… You don’t necessarily need to deceive someone to give them a better user experience by sounding natural. And if they had instead tested the hypothesis ‘is this technology better than previous versions or just as good as a human caller’ they would not have had to deceive people in the experiment.

“As for whether the technology itself is deceptive, I can’t really say what their intent is — but… even if they don’t intend it to deceive you can say they’ve been negligent in not making sure it doesn’t deceive… So I can’t say it’s definitely deceptive, but there should be some kind of mechanism there to let people know what it is they are speaking to.”

“I’m at a university and if you’re going to do something which involves deception you have to really demonstrate there’s a scientific value in doing this,” he added, agreeing that, as a general principle, humans should always be able to know that an AI they’re interacting with is not a person.

Because who — or what — you’re interacting with “shapes how we interact,” as he put it. “And if you start blurring the lines… then this can sow mistrust into all kinds of interactions — where we would become more suspicious as well as needlessly replacing people with meaningless agents.”

No such ethical discussions troubled the I/O stage, however.

Yet Pichai said Google had been working on the Duplex technology for “many years,” and went so far as to claim the AI can “understand the nuances of conversation” — albeit still plainly in very narrow scenarios, such as booking an appointment or reserving a table or asking a business for its opening hours on a specific date.

“It brings together all our investments over the years in natural language understanding, deep learning, text to speech,” he said.

What was yawningly absent from that list, and seemingly also lacking from the design of the tricksy Duplex experiment, was any sense that Google has a deep and nuanced appreciation of the ethical concerns at play around AI technologies that are powerful and capable enough of passing themselves off as human — thereby playing lots of real people in the process.

The Duplex demos were pre-recorded, rather than live telephone calls, but Pichai described the calls as “real” — suggesting Google representatives had not in fact called the businesses ahead of time to warn them its robots might be calling in.

“We have many of these examples where the calls don’t quite go as expected but our assistant understands the context, the nuance… and handled the interaction gracefully,” he added after airing the restaurant unable-to-book example.

So Google appears to have trained Duplex to be robustly deceptive — i.e. to be able to reroute around derailed conversational expectations and still pass itself off as human — a feature Pichai lauded as ‘graceful’.

And even if the AI’s performance was more patchy in the wild than Google’s demo suggested, it’s clearly the CEO’s goal for the tech.

While trickster AIs might bring to mind the iconic Turing Test — where chatbot developers compete to develop conversational software capable of convincing human judges it’s not artificial — it should not.

Because the application of the Duplex technology does not sit within the context of a high-profile and clearly defined competition. Nor was there a set of rules that everyone was shown and agreed to beforehand (at least so far as we are aware — if there were any rules, Google wasn’t publicizing them). Rather it seems to have unleashed the AI onto unsuspecting business staff who were just going about their day jobs. Can you see the ethical disconnect?

“The Turing Test has come to be a bellwether of testing whether your AI software is good or not, based on whether you can tell it apart from a human being,” is King’s suggestion on why Google might have chosen a similar trick as an experimental showcase for Duplex.

“It’s very easy to say, look how great our software is — people cannot tell it apart from a real human being — and perhaps that’s a much stronger selling point than if you say 90% of users preferred this software to the previous software,” he posits. “Facebook does A/B testing but that’s probably less exciting — it’s not going to wow anyone to say consumers preferred this slightly deeper shade of blue to a lighter shade of blue.”

Had Duplex been deployed within Turing Test conditions, King also makes the point that it’s rather less likely it would have taken in so many people — because, well, those slightly jarringly timed ums and ahs would soon have been spotted, uncanny valley style.

Ergo, Google’s PR-flavored ‘AI test’ for Duplex is also rigged in its favor — to further supercharge a one-way promotional marketing message around artificial intelligence. So, in other words, say hello to yet another layer of fakery.

How could Google introduce Duplex in a way that would be ethical? King reckons it would need to state up front that it’s a robot and/ or use an appropriately synthetic voice so it’s instantly clear to anyone picking up the phone the caller is not human.

“If you were to use a robotic voice there would also be less of a risk that all of the voices you’re synthesizing only represent a small minority of the population speaking in ‘BBC English’ and so, perhaps in a sense, using a robotic voice would even be less biased as well,” he adds.

And of course, not being up front that Duplex is artificial embeds all sorts of other knock-on risks, as King explained.

“If it’s not obvious that it’s a robot voice there’s a risk that people come to expect that most of these phone calls are not genuine. Now experiments have shown that many people do interact with AI software that is conversational just as they would another person but at the same time there is also evidence showing that some people do the exact opposite — and they become a lot ruder. Sometimes even abusive towards conversational software. So if you’re constantly interacting with these bots you’re not going to be as polite, perhaps, as you usually would, and that will likely have effects for when you get a genuine caller that you do not know is real or not. Or even if you know they’re real, perhaps the way you interact with people has changed a bit.”

Safe to say, as autonomous systems get more powerful and capable of performing tasks that we would normally expect a human to be doing, the ethical considerations around those systems scale as exponentially large as the potential applications. We’re really just getting started.

But if the world’s biggest and most powerful AI developers believe it’s totally fine to put ethics on the backburner then risks are going to spiral up and out and things could go very badly indeed.

We’ve seen, for example, how microtargeted advertising platforms have been hijacked at scale by would-be election fiddlers. But the overarching hazard where AI and automation technologies are concerned is that humans become second class citizens vs the tools that are being claimed to be here to help us.

Pichai said the first — and still, as he put it, experimental — use of Duplex will be to supplement Google’s search services by filling in information about businesses’ opening hours during periods when those hours might inconveniently vary, such as public holidays.

Though for a company on a general mission to ‘organize the world’s information and make it universally accessible and useful’, what’s to stop Google from — down the line — deploying vast phalanxes of phone bots to ring and ask humans (and their associated businesses and organizations) for all sorts of expertise which the company can then liberally extract and inject into its multitude of connected services — monetizing the freebie human-augmented intel via our extra-engaged attention and the ads it serves alongside?

During the course of writing this article we reached out to Google’s press line several times to ask to discuss the ethics of Duplex with a relevant company representative. But ironically — or perhaps fittingly enough — our hand-typed emails received only automated responses.

Pichai did emphasize that the technology is still in development, and said Google wants to “work hard to get this right, get the user experience and the expectations right for both businesses and users”.

But that’s still ethics as a tacked on afterthought — not where it should be: Locked in place as the keystone of AI system design.

And this at a time when platform-fueled AI problems, such as algorithmically amplified fake news, have snowballed into huge and ugly global scandals with very far-reaching societal implications indeed — be it election interference or ethnic violence.

You really have to wonder what it would take to shake the ‘first break it, later fix it’ ethos of some of the tech industry’s major players…

Ethical guidance relating to what Google is doing here with the Duplex AI is actually pretty clear if you bother to read it — to the point where even politicians are agreed on foundational basics, such as that AI needs to operate on “principles of intelligibility and fairness,” to borrow phrasing from just one of several political reports that have been published on the topic in recent years.

In short, deception is not cool. Not in humans. And absolutely not in the AIs that are supposed to be helping us.

Transparency as AI standard

The IEEE technical professional association put out a first draft of a framework to guide ethically designed AI systems at the back end of 2016 — which included guiding principles such as the need to ensure AI respects human rights, operates transparently and that automated decisions are accountable.

In the same year the UK’s BSI standards body developed a specific standard — BS 8611 Ethics design and application robots — which explicitly names identity deception (intentional or unintentional) as a societal risk, and warns that such an approach will eventually erode trust in the technology.

“Avoid deception due to the behaviour and/or appearance of the robot and ensure transparency of robotic nature,” the BSI’s standard advises.

It also warns against anthropomorphization due to the associated risk of misinterpretation — so Duplex’s ums and ahs don’t just suck because they’re fake but because they are misleading and thus deceptive, and also therefore carry the knock-on risk of undermining people’s trust in your service — but also, more widely still, in other people generally.

“Avoid unnecessary anthropomorphization,” is the standard’s general guidance, with the further steer that the technique be reserved “only for well-defined, limited and socially-accepted purposes.” (Tricking workers into unwittingly conversing with robots probably wasn’t what they were thinking of.)

The standard also urges “clarification of intent to simulate human or not, or intended or expected behaviour.” So, yet again, don’t try to pass your bot off as human; you need to make it really clear it’s a robot.

For Duplex, the transparency that Pichai said Google now intends to think about, at this late stage in the AI development process, would have been trivially easy to achieve: it could just have programmed the assistant to say up front: ‘Hi, I’m a robot calling on behalf of Google — are you happy to talk to me?’
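That disclosure-first behavior is trivial to express in code. The sketch below is our own illustration of the principle King describes, not Google’s implementation; the prompt wording and the very naive consent check are illustrative assumptions.

```python
# Hypothetical disclosure-first call flow: the bot identifies itself
# before doing anything else, and only proceeds with the task if the
# person on the line consents.
DISCLOSURE = ("Hi, I'm a robot calling on behalf of a customer. "
              "Are you happy to talk to me?")

def handle_call(consent_reply, task):
    transcript = [DISCLOSURE]          # always open with the disclosure
    if consent_reply.strip().lower() in {"yes", "sure", "ok", "okay"}:
        transcript.append(task)        # proceed only after consent
    else:
        transcript.append("No problem, goodbye.")  # end the call gracefully
    return transcript

print(handle_call("Yes", "I'd like to book a haircut for noon on Tuesday."))
```

The point is ordering: transparency comes before the task, so the human can opt out before any simulated conversation takes place.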

Instead, Google chose to prioritize a demo ‘wow’ factor — of showing Duplex pulling the wool over busy and trusting humans’ eyes — and by doing so proved itself tone-deaf on the topic of ethical AI design.

Not a good look for Google. Nor indeed a good outlook for the rest of us who are subject to the algorithmic whims of tech giants as they flick the control switches on their society-sized platforms.

“As the development of AI systems grows and more research is carried out, it is important that ethical hazards associated with their use are highlighted and considered as part of the design,” Dan Palmer, head of manufacturing at BSI, told us. “BS 8611 was developed… alongside scientists, academics, ethicists, philosophers and users. It explains that any autonomous system or robot should be accountable, truthful and unprejudiced.

“The standard raises a number of potential ethical hazards that are relevant to the Google Duplex; one of these is the risk of AI machines becoming sexist or racist due to a biased data feed. This surfaced prominently when Twitter users influenced Microsoft’s AI chatbot, Tay, to spew out offensive messages.

“Another contentious topic is whether forming an emotional bond with a robot is desirable, especially if the voice assistant interacts with the elderly or children. Other guidelines on new hazards that should be considered include: robot deception, robot addiction and the potential for a learning system to exceed its remit.

“Ultimately, it must always be transparent who is responsible for the behavior of any voice assistant or robot, even if it behaves autonomously.”

Yet despite all the thoughtful ethical guidance and research that’s already been produced, and is out there for the reading, here we are again being shown the same tired tech industry playbook: praising engineering abilities in a shiny bubble, stripped of human context and societal consideration, and dangled in front of an uncritical audience to see how loudly they’ll cheer.

Leaving important questions — over the ethics of Google’s AI experiments and also, more broadly, over the mainstream vision of AI assistance it’s so keenly trying to sell us — to hang and hang.

Questions like how much genuine utility there might be for the kinds of AI applications it’s telling us we’ll all want to use, even as it prepares to push these apps on us, because it can — as a consequence of its great platform power and reach.

A core ‘uncanny valley-ish’ paradox may explain Google’s choice of deception for its Duplex demo: humans don’t necessarily like speaking to machines. Indeed, oftentimes they prefer to speak to other humans. It’s just more meaningful to have your existence registered by a fellow pulse-carrier. So if an AI reveals itself to be a robot, the human who picked up the phone would most likely just put it straight back down again.

“Going back to the deception, it’s fine if it’s replacing meaningless interactions but not if it’s intending to replace meaningful interactions,” King told us. “So if it’s clear that it’s synthetic you can’t necessarily use it in a context where people actually want a human to do that job. I think that’s the right approach to take.

“It matters not just that your hairdresser appears to be listening to you but that they are actually listening to you and that they are mirroring some of your emotions. And to replace that kind of work with something synthetic, I don’t think that makes much sense.

“But at the same time if you reveal it’s synthetic, it’s not likely to replace that kind of work.”

So actually Google’s Duplex sleight of hand may be trying to conceal the fact that AIs won’t be able to replace as many human tasks as technologists like to think they will. Not unless lots of currently meaningful interactions are rendered meaningless. Which would be a massive human cost that societies would have to, at the very least, debate long and hard.

Trying to prevent such a debate from taking place by pretending there’s nothing ethical to see here is, hopefully, not Google’s deliberate intention.

King also makes the point that the Duplex system is (at least for now) computationally costly. “Which means that Google cannot, and should not, just release this as software that anyone can run on their home computers.

“Which means they can also control how it is used, and in what context, and they can also ensure it will only be used with certain safeguards built in. So I think the experiments are maybe not the best of signs but the real test is likely to be how they release it, and will they build the safeguards that people demand into the software,” he adds.

As well as a lack of visible safeguards in the Duplex demo, there’s also, I would argue, a curious absence of imagination on display.

Had Google been bold enough to reveal its robot interlocutor, it might have thought harder about how it could have designed that experience to be both clearly not human but also fun or even funny. Think of how much life can be injected into animated cartoon characters, for example, which are very clearly not human yet are enormously popular because people find them entertaining and feel they come alive in their own way.

It actually makes you wonder whether, at some foundational level, Google lacks trust in both what AI technology can do and in its own creative abilities to breathe new life into these emergent synthetic experiences.

Make sure to visit:

Say hello to Google One

Google is revamping its consumer storage plans today by adding a new $2.99/month tier for 200 GB of storage and dropping the price of its 2 TB plan from $19.99/month to $9.99/month (while discontinuing the $9.99/month 1 TB plan). It’s also rebranding these storage plans (but not Google Drive itself) as “Google One.”
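A quick back-of-envelope comparison makes the repricing concrete. This is just illustrative arithmetic over the tiers named above (using 1 TB = 1,024 GB), not anything Google publishes:

```python
# Monthly cost per gigabyte for the Google One tiers mentioned above.
TIERS = {
    "200 GB @ $2.99/mo": (2.99, 200),
    "2 TB @ $9.99/mo (new)": (9.99, 2048),
    "2 TB @ $19.99/mo (old)": (19.99, 2048),
}

def cents_per_gb(price_usd, size_gb):
    """Monthly cost per gigabyte, in US cents."""
    return round(price_usd * 100 / size_gb, 2)

for name, (price, size) in TIERS.items():
    print(f"{name}: {cents_per_gb(price, size)} cents/GB")
```

The striking part is that the new 2 TB tier is roughly half a cent per gigabyte, about a third of the per-gigabyte price of the 200 GB tier.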

Going forward, you’ll also be able to share your storage quota with up to five family members.

That by itself would be interesting, given how easy it is to max out 100 GB with 4K videos and high-res images these days, but there is one other feature here that explains the new brand name: free one-tap access to Google Experts for help with any Google consumer product and service.

That access to live experts, not some barely functional AI chatbot, comes with every Google One plan, including the $1.99/month 100 GB plan. In the U.S., these experts will be available 24/7 over chat, email and phone. In other countries, this lineup of support options may differ, but the company tells me that its objective is “to provide users with great one-tap support and constantly improve it over time.”

Google already offered 24/7 support for paying business users with a G Suite account, but this is the first time it is actively offering live support to consumers.

It’s worth stressing that the existing free quota of 15 GB will remain.

In addition to access to experts, the company also promises to provide subscribers with other benefits. Google One’s director Larissa Fontaine told me that those could include discounts on hotels you find in Google Search, preferred rates for other Google services or credits on Google Play. “We hope to build those out over time,” she noted.

Brandon Badger, Google’s group product manager for Google One, told me the team looked at how people use their storage plans. Users now have more devices, shoot more 4K video and share those files with more family members, who in turn also have more devices. “We are looking to accommodate that with this plan,” he said.

In addition, Fontaine noted that users with paid storage accounts also tend to be heavy Google users in general, so combining storage and support seemed logical.

Sadly, this isn’t an immediate change. Over the course of the next few months, Google will upgrade all existing storage plans to Google One accounts, starting in the U.S., with a global rollout after that. Google also tells me that it will roll out a new Android app to help users manage their plans (not their files).

While the focus of today’s announcement is on storage, it’s hard not to look at this new offering in the context of the additional support and other bonus features that Google promises. Google One is clearly about more than just a better storage plan. Instead, it feels like the beginning of a new, more ambitious offering that could be expanded to include other services over time. Maybe a single subscription to all Google consumer services, including Drive, YouTube Red and Play Music (or whatever becomes of that)? Despite its name, Google One is currently only one of many subscription services the company offers, after all.


8 big announcements from Google I/O 2018

Google kicked off its annual I/O developer conference at Shoreline Amphitheater in Mountain View, California. Here are some of the biggest announcements from the Day 1 keynote. There will be more to come over the next couple of days, so follow along with everything Google I/O on TechCrunch.

Google goes all in on artificial intelligence, rebranding its research division to Google AI

Just before the keynote, Google announced it is rebranding its Google Research division to Google AI. The move signals how Google has increasingly focused R&D on computer vision, natural language processing and neural networks.

Google makes talking to the Assistant more natural with “continued conversation”

What Google announced: Google announced a “continued conversation” update to Google Assistant that makes talking to the Assistant feel more natural. Now, instead of having to say “Hey Google” or “OK Google” every time you want to issue a command, you’ll only have to do so the first time. The company is also adding a new feature that allows you to ask multiple questions within the same request. All this will roll out in the coming weeks.

Why it’s important: When you’re having a typical conversation, odds are you ask follow-up questions if you didn’t get the answer you wanted. But it can be jarring to have to say “Hey Google” every single time, and it breaks the whole flow and makes the process feel pretty unnatural. If Google wants to be a significant player when it comes to voice interfaces, the actual interaction has to feel like a conversation, not just a series of queries.
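The mechanic behind this can be pictured as a simple timeout gate: once the hotword has been heard, follow-up queries inside a short window skip the hotword check. This is only a toy sketch of the idea; the window length is an assumption, and Google hasn’t published how its implementation actually works:

```python
# Toy model of "continued conversation": after one hotword, follow-ups
# within a timeout window don't need "Hey Google" again.
FOLLOW_UP_WINDOW = 8.0  # seconds -- illustrative value, not Google's

def needs_hotword(now, last_interaction):
    """True if the user must say the hotword before this command."""
    if last_interaction is None:
        return True  # first command always needs the hotword
    return now - last_interaction > FOLLOW_UP_WINDOW

print(needs_hotword(10.0, None))   # True  - first command
print(needs_hotword(12.0, 10.0))   # False - quick follow-up
print(needs_hotword(30.0, 10.0))   # True  - window expired
```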

Google Photos gets an AI boost

What Google announced: Google Photos already makes it easy for you to correct photos with built-in editing tools and AI-powered features for automatically creating collages, movies and stylized photos. Now, Photos is getting more AI-powered fixes like B&W photo colorization, brightness correction and suggested rotations. A new version of the Google Photos app will suggest quick fixes and tweaks like rotations, brightness corrections or adding pops of color.

Why it’s important: Google is working to become a hub for all of your photos, and it’s able to woo potential users by offering powerful tools to edit, sort and modify those photos. Each additional photo Google gets offers it more data and helps it get better and better at image recognition, which in the end not only improves the user experience but also makes its own tools and services better. Google, at its heart, is a search company, and it needs a lot of data to get visual search right.

Google Assistant and YouTube are coming to Smart Displays

What Google announced: Smart Displays were the talk of Google’s CES push this year, but we haven’t heard much about Google’s Echo Show competitor since. At I/O, we got a little more insight into the company’s smart display efforts. Google’s first Smart Displays will launch in July, and of course will be powered by Google Assistant and YouTube. It’s clear that the company has spent some resources on building a visual-first version of Assistant, justifying the addition of a screen to the experience.

Why it’s important: Users are increasingly getting accustomed to the idea of some smart device sitting in their living room that will answer their questions. But Google is looking to create a system where a user can ask questions and then have the option of some kind of visual display for queries that just can’t be resolved with a voice interface. Google Assistant handles the voice part of that equation, and YouTube is a good service that works alongside it.

Google Assistant is coming to Google Maps

What Google announced: Google Assistant is coming to Google Maps, available on iOS and Android this summer. The addition is meant to provide better recommendations to users. Google has long worked to make Maps feel more personalized, but since Maps is now about far more than just directions, the company is introducing new features to give you better recommendations for local places.

The Maps integration also combines the camera, computer vision technology and Google Maps with Street View. With the camera/Maps combination, it really looks like you’ve jumped inside Street View. Google Lens can do things like identify houses, or even dog breeds, just by pointing your camera at the object in question. It will also be able to identify text.

Why it’s important: Maps is one of Google’s biggest and most important products. There’s a lot of excitement around augmented reality, you can point to phenomena like Pokémon Go, and companies are just starting to scratch the surface of the best use cases for it. Figuring out directions seems like a natural use case for a camera, and while it was a bit of a technical accomplishment, it gives Google yet another perk for its Maps users to keep them inside the service and not switch over to alternatives. Again, with Google, everything comes back to the data, and it’s able to capture more data if users stick around in its apps.

Google announces a new generation for its TPU machine learning hardware

What Google announced: As the war for creating customized AI hardware heats up, Google said that it is rolling out the third generation of its silicon, the Tensor Processing Unit 3.0. Google CEO Sundar Pichai said the new TPU pod is eight times more powerful than last year’s, with up to 100 petaflops in performance. Google joins pretty much every other major company in looking to create custom silicon to handle its machine learning operations.

Why it’s important: There’s a race to create the best machine learning tools for developers. Whether that’s at the framework level with tools like TensorFlow or PyTorch or at the actual hardware level, the company that’s able to lock developers into its ecosystem will have an advantage over its competitors. It’s especially important as Google looks to build its cloud platform, GCP, into a massive business while going up against Amazon’s AWS and Microsoft Azure. Giving developers, who are already adopting TensorFlow en masse, a way to speed up their operations can help Google continue to woo them into Google’s ecosystem.

MOUNTAIN VIEW, CA - May 8: Google CEO Sundar Pichai delivers the keynote address at the Google I/O 2018 Conference at Shoreline Amphitheater on May 8, 2018 in Mountain View, California. Google’s two-day developer conference runs through Wednesday, May 9. (Photo by Justin Sullivan/Getty Images)

Google News gets an AI-powered redesign

What Google announced: Watch out, Facebook. Google is also planning to leverage AI in a revamped version of Google News. The AI-powered, redesigned news destination app will “allow users to keep up with the news they care about, understand the full story, and enjoy and support the publishers they trust.” It will leverage elements found in Google’s digital magazine app Newsstand and YouTube, and introduces new features like “newscasts” and “full coverage” to help people get a summary or a more holistic view of a news story.

Why it’s important: Facebook’s main product is literally called “News Feed,” and it serves as a major source of information for a non-trivial portion of the world. But Facebook is embroiled in a scandal over the personal data of as many as 87 million users ending up in the hands of a political research firm, and there are a lot of questions over Facebook’s algorithms and whether they surface legitimate information. That’s a huge hole that Google could exploit by offering a better news product and, once again, locking users into its ecosystem.

Google unveils ML Kit, an SDK that makes it easy to add AI smarts to iOS and Android apps

What Google announced: Google unveiled ML Kit, a new software development kit for app developers on iOS and Android that allows them to integrate pre-built, Google-provided machine learning models into apps. The models support text recognition, face detection, barcode scanning, image labeling and landmark recognition.

Why it’s important: Machine learning tools have enabled a new wave of use cases built on top of image recognition or speech detection. But even though frameworks like TensorFlow have made it easier to build applications that tap those tools, it can still take a high level of expertise to get them off the ground and running. Developers often figure out the best use cases for new tools and devices, and development kits like ML Kit help lower the barrier to entry and give developers without a ton of machine learning expertise a playground to start figuring out interesting use cases for those applications.

So when will you be able to actually play with all these new features? The Android P beta is available today, and you can find the upgrade here.


Google adds Morse code input to Gboard

Google is adding Morse code input to its mobile keyboard. It’ll be available as a beta on Android later today. The company announced the new feature at Google I/O after showing a video of Tania Finlayson.

Finlayson has had a hard time communicating with other people due to her condition. She found a great way to write sentences and talk with people using Morse code.

Her husband developed a custom device that analyzes her head movements and transcodes them into Morse code. When she triggers the left button, it adds a short signal, while the right button triggers a long signal. Her device then converts the text into speech.

Google’s implementation will replace the keyboard with two areas for short and long signals. There are word suggestions above the keyboard, just like on the normal keyboard. The company has also created a Morse code poster so that you can learn Morse code more easily.
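The encoding behind a two-key layout like this is simple: each letter maps to a sequence of short (dot) and long (dash) signals from the international Morse alphabet. A minimal sketch of that mapping, unrelated to Gboard’s actual code:

```python
# International Morse code mapping for letters; each character becomes
# a string of short (".") and long ("-") signals, mirroring the two
# input areas described above.
MORSE = {
    "a": ".-",   "b": "-...", "c": "-.-.", "d": "-..",  "e": ".",
    "f": "..-.", "g": "--.",  "h": "....", "i": "..",   "j": ".---",
    "k": "-.-",  "l": ".-..", "m": "--",   "n": "-.",   "o": "---",
    "p": ".--.", "q": "--.-", "r": ".-.",  "s": "...",  "t": "-",
    "u": "..-",  "v": "...-", "w": ".--",  "x": "-..-", "y": "-.--",
    "z": "--..",
}

def to_morse(text):
    """Encode text as Morse, separating letters with spaces."""
    return " ".join(MORSE[c] for c in text.lower() if c in MORSE)

print(to_morse("sos"))  # ... --- ...
```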

As with all accessibility features, the more input techniques the better. Everything that makes technology more accessible is a good thing.

Of course, Google used its gigantic I/O conference to introduce this feature to make the company look good too. But it’s a fine trade-off, a win-win for both Google and users who can’t use a traditional keyboard.

Correction: A previous version of this article said Morse code input is available on iOS and Android. The beta is only available on Android.


Google announces a new generation for its TPU machine learning hardware

As the war for creating customized AI hardware heats up, Google announced at Google I/O 2018 that it is rolling out the third generation of its silicon, the Tensor Processing Unit 3.0.

Google CEO Sundar Pichai said the new TPU pod is eight times more powerful than last year’s, with up to 100 petaflops in performance. Google joins pretty much every other major company in looking to create custom silicon to handle its machine learning operations. And while multiple frameworks for developing machine learning tools have emerged, including PyTorch and Caffe2, this one is optimized for Google’s TensorFlow. Google is looking to make Google Cloud an omnipresent platform at the scale of Amazon’s, and offering better machine learning tools is quickly becoming table stakes.
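The two figures Pichai quoted can be sanity-checked against each other. Assuming the widely reported 11.5 petaflops figure for a second-generation TPU pod (an assumption on my part, not stated in the keynote), an eightfold speedup lands in the same ballpark as the “up to 100 petaflops” claim:

```python
# Cross-check the pod-level speedup claim against the quoted total.
TPU_V2_POD_PFLOPS = 11.5  # widely reported v2 pod figure (assumption)
CLAIMED_SPEEDUP = 8       # "eight times more powerful than last year"

v3_estimate = TPU_V2_POD_PFLOPS * CLAIMED_SPEEDUP
print(f"Estimated v3 pod performance: {v3_estimate} petaflops")
# 92 petaflops -- consistent with "up to 100 petaflops"
```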

Amazon and Facebook are both working on their own kinds of custom silicon. Facebook’s hardware is optimized for its Caffe2 framework, which is designed to handle the massive data graph it has on its users. You can think of it as taking everything Facebook knows about you, your birthday, your friend graph and everything that goes into the News Feed algorithm, and feeding it into a complex machine learning framework that works best for its own operations. That, in the end, may have required a customized approach to hardware. We know less about Amazon’s aims here, but it also wants to own the cloud infrastructure ecosystem with AWS.

All this has also spun up an increasingly large and well-funded startup ecosystem looking to create customized hardware targeted toward machine learning. There are startups like Cerebras Systems, SambaNova Systems and Mythic, with a half dozen or so beyond that as well (not even including the activity in China). Each is looking to exploit a similar niche: finding a way to outmaneuver Nvidia on cost or performance for machine learning tasks. Most of those startups have raised more than $30 million.

Google unveiled its second-generation TPU processor at I/O last year, so it wasn’t a huge surprise that we’d see another one this year. We’d heard from sources for weeks that it was coming, and that the company was already hard at work figuring out what comes next. Google at the time touted performance, though the point of all these tools is to make machine learning a little easier and more palatable in the first place.

This is also the first time the company has had to include liquid cooling in its data centers, CEO Sundar Pichai said. Heat dissipation is an increasingly difficult problem for companies looking to create customized hardware for machine learning.

There are a lot of questions around building custom silicon, however. It may be that developers don’t need a super-efficient piece of silicon when an Nvidia card that’s a few years old can do the trick. But data sets are getting increasingly larger, and having the biggest and best data set is what creates defensibility for any company these days. Just the prospect of making things easier and cheaper as companies scale may be enough to get them to adopt something like GCP.

Intel, too, is looking to get in here with its own products. Intel has been beating the drum on FPGAs as well, which are designed to be more modular and flexible as the requirements for machine learning change over time. But again, the knock there is price and difficulty, as programming for FPGAs can be a hard problem in which not many engineers have expertise. Microsoft is also betting on FPGAs, and unveiled what it’s calling Brainwave just yesterday at its BUILD conference for its Azure cloud platform, which is increasingly a significant portion of its future potential.


Google more or less seems to want to own the entire stack of how we operate on the internet. It starts at the TPU, with TensorFlow layered on top of that. If it manages to succeed there, it gets more data, makes its tools and services faster and faster, and eventually reaches a point where its AI tools are too far ahead and lock developers and users into its ecosystem. Google is at its heart an advertising business, but it’s gradually expanding into new business segments that all require robust data sets and operations to learn human behavior.

Now the challenge will be having the best pitch for developers, to not only get them onto GCP and other services but also keep them locked into TensorFlow. But as Facebook increasingly looks to challenge that with alternative frameworks like PyTorch, there may be more difficulty than originally thought. Facebook unveiled a new version of PyTorch at its main annual conference, F8, just last month. We’ll have to see if Google is able to respond adequately to stay ahead, and that starts with a new generation of hardware.


Maps walking navigation is Google’s most compelling use for AR yet

Google managed to elicit an audible gasp from the crowd at I/O today when it showed off a new augmented reality feature for Maps. It was a clear standout during a keynote that contained plenty of iterative updates to existing software, and it offered a key glimpse into what it will take to move AR from interesting novelty to compelling use case.

Along with the standard array of ARCore-based gaming offerings, the new AR mode for Maps is arguably one of the first truly indispensable real-world applications. As a person who spent the better part of an hour yesterday attempting to navigate the long, unfamiliar blocks of Palo Alto, California by following an arrow on a small blue circle, I can personally vouch for the usefulness of such an application.

It’s still early days; the company admitted that it’s playing around with a few ideas here. But it’s easy to see how offering visual overlays on a real-time image would make it a heck of a lot easier to navigate unfamiliar spaces.

In a sense, it’s like a real-time version of Street View, combining real-world images with map overlays and location-based positioning. In the demo, a majority of the screen is devoted to the street image captured by the onboard camera. Turn-by-turn directions and large arrows are overlaid onto the video, while a small half-circle displays a sliver of the map to give you some context for where you are and how long it will take to get where you’re going.
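One small piece of the math behind an overlay like this is the compass bearing from the user to the next turn, which tells the renderer which way the arrow should point. A sketch of that standard great-circle bearing formula, with made-up coordinates that are purely illustrative and not from Google’s demo:

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees
    clockwise from true north (the direction an AR arrow would face)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

# Waypoint due east of the user at the same latitude -> roughly 90 degrees.
print(round(bearing(37.422, -122.084, 37.422, -122.074)))
```

In a real app this bearing would be compared against the phone’s compass heading to decide how to rotate the on-screen arrow.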

Of course, a system that’s heavily reliant on visuals wouldn’t make sense in the context of driving, unless, of course, it’s presented in a kind of heads-up display. Here, however, it works seamlessly, assuming, of course, you’re willing to look a bit dorky by holding your phone up in front of your face.

There are a lot of moving parts here too, naturally. In order to sync up with a display like this, the map is going to have to get things just right, and anyone who’s ever walked through city streets on Maps knows how often that can misfire. That’s likely a big part of the reason Google wasn’t actually willing to share specifics with regard to timing. For now, we’re just going to have to presume this is a sort of proof of concept, along with the fun little walking fox guide the company trotted out that had shades of a certain Johnny Cash-voiced coyote.

But if this is what trying to find my way in a new city looks like, sign me up.


Google confirms some of its own services are now getting blocked in Russia over the Telegram ban

A shower of paper airplanes darted through the skies of Moscow and other cities in Russia today, as users answered the call of entrepreneur Pavel Durov to send the blank missives out of their windows at a pre-appointed time in support of Telegram, a messaging app he founded, whose icon is a paper airplane, and which was blocked last week by Russian regulator Roskomnadzor (RKN). RKN believes the service is violating national laws by failing to provide it with encryption keys to access messages on the service (Telegram has refused to comply).

The paper plane send-off was a small, flashmob turn in a “Digital Resistance,” Durov’s preferred term, that has otherwise largely played out online: currently, nearly 18 million IP addresses are blocked from being accessed in Russia, all in the name of blocking Telegram.

And in the latest development, Google has now confirmed to us that its own services are now also being impacted. From what we understand, Google Search, Gmail and push notifications for Android apps are among the products being affected.

“We are aware of reports that some users in Russia are unable to access some Google products, and are investigating those reports,” said a Google spokesperson in an emailed response. We’d been trying to contact Google all week about the Telegram blockade, and this is the first time that the company has both replied and acknowledged something related to it.

(Amazon has acknowledged our messages but has yet to reply to them.)

Google’s comments come on the heels of RKN itself also announcing today that it had expanded its IP blocks to Google’s services. At its peak, RKN had blocked nearly 19 million IP addresses, with dozens of third-party services that also use Google Cloud and Amazon’s AWS, such as Twitch and Spotify, also getting caught in the crossfire.

Russia is among the countries in the world that have enforced a kind of digital firewall, periodically or permanently blocking certain online content. Some turn to VPNs to access that content anyway, but it turns out that Telegram hasn’t needed to rely on that workaround to keep getting used.

“RKN is embarrassingly bad at blocking Telegram, so most people keep using it without any intermediaries,” said Ilya Andreev, COO and co-founder of Vee Security, which has been providing a proxy service to bypass the ban. Currently, it is supporting up to two million users simultaneously, although this is a relatively small proportion considering Telegram has around 14 million users in the country (and likely more, considering all the free publicity it’s been getting).

As we described earlier this week, the reason so many IP addresses are getting blocked is that Telegram has been using a technique that enables it to “hop” to a new IP address when the one it’s using is blocked by RKN. It’s a technique that a much smaller app, Zello, had also resorted to for nearly a year after RKN announced its own ban.

Zello ceased its evasions earlier this year when RKN got wise to its ways and chose to start blocking entire subnetworks of IP addresses to head off the hops, and Amazon’s AWS and Google Cloud politely asked Zello to stop as other services also started to get blocked. So, when Telegram started the same kind of hopping, RKN, in fact, knew just what to do to turn the screws. (And it also took the heat off Zello, which miraculously got restored.)
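The mechanics of subnet-level blocking, and why it produces so much collateral damage, can be sketched with Python’s standard-library `ipaddress` module. The addresses below are made-up documentation-range examples, not Telegram’s or Google’s real ranges:

```python
import ipaddress

# Blocking one /24 subnet covers every address a service could hop to
# inside it -- along with every innocent bystander in the same range.
blocked_subnet = ipaddress.ip_network("203.0.113.0/24")

# Hypothetical hops by an evading service within that subnet.
hops = ["203.0.113.5", "203.0.113.77", "203.0.113.200"]

all_covered = all(ipaddress.ip_address(h) in blocked_subnet for h in hops)
print(all_covered)                   # True: one rule catches every hop
print(blocked_subnet.num_addresses)  # 256 addresses blocked for 3 hops
```

Scale that up across cloud providers’ address space and you get the millions of blocked IPs, and the collateral outages at services like Twitch and Spotify, described above.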

So far, Telegram’s cloud partners have held strong and have not taken the same route, although getting its own services blocked could see Google’s resolve tested at a new level.

Some believe that one outcome could be the regulator playing out an elaborate game of chicken with Telegram and the rest of the internet companies that are in some way aiding and abetting it, spurred in part by Russia’s larger profile and how such blocks would appear to international audiences.

“Russia can’t keep blocking random things on the Internet,” Andreev said. “Russia is working hard to make its image more alluring to foreigners in preparation for the World Cup,” which is taking place this June and July. “They can’t have tourists coming and realizing Google doesn’t work in Russia.”

We’ll update this post and continue to report on further developments as we learn more.
