Who’s to blame for the leaking of 50 million Facebook users’ data? Facebook founder and CEO Mark Zuckerberg broke several days of silence in the face of a raging privacy storm to go on CNN this week to say he was sorry. He also admitted the company had made mistakes; said it had breached the trust of users; and said he regretted not telling Facebookers at the time their info had been misappropriated.
Meanwhile, shares in the company have been taking a battering. And Facebook is now facing multiple shareholder and user suits.
Pressed on why he didn’t inform users in 2015, when Facebook says it found out about this policy breach, Zuckerberg avoided a direct answer — instead fixing on what the company did (asked Cambridge Analytica and the developer whose app was used to suck out data to delete the data) — rather than explaining the thinking behind what it did not do (tell affected Facebook users their personal information had been misappropriated).
Essentially Facebook’s line is that it believed the data had been deleted — and presumably, therefore, it calculated (incorrectly) that it didn’t need to inform users because it had made the leak problem go away via its own backchannels.
Except of course it hadn’t. Because people who want to do nefarious things with data rarely play precisely by your rules just because you ask them to.
There’s an interesting parallel here with Uber’s response to a 2016 data breach of its systems. In that case, instead of informing the ~57M affected users and drivers that their personal data had been compromised, Uber’s senior management also decided to try and make their own problems go away — by asking (and in their case paying) hackers to delete the data.
Aka the reflex response of both tech companies to massive data protection fuck-ups was: Cover up; don’t disclose.
Facebook denies the Cambridge Analytica episode is a data breach — because, well, its systems were so laxly designed as to actively encourage vast amounts of data to be sucked out, via API, without the check and balance of those third parties having to gain individual-level consent.
So in that sense Facebook is entirely right; technically what Cambridge Analytica did wasn’t a breach at all. It was a feature, not a bug.
Clearly that’s also the opposite of reassuring.
Yet Facebook and Uber are companies whose businesses rely entirely on users trusting them to safeguard personal data. The disconnect here is gapingly obvious.
What’s also crystal clear is that rules and systems designed to protect and control personal data, combined with active enforcement of those rules and robust security to safeguard systems, are absolutely essential to prevent people’s information being misused at scale in today’s hyperconnected era.
But before you say hindsight is 20/20 vision, the history of this epic Facebook privacy fail is even longer than the under-disclosed events of 2015 suggest — i.e. when Facebook claims it found out about the breach as a result of investigations by journalists.
What the company very clearly turned a blind eye to is the risk posed by its own system of loose app permissions that in turn enabled developers to suck out vast amounts of data without having to worry about pesky user consent. And, ultimately, for Cambridge Analytica to get its hands on the profiles of ~50M US Facebookers for dark ad political targeting purposes.
European privacy campaigner and lawyer Max Schrems — a long time critic of Facebook — was in fact raising concerns about Facebook’s lax attitude to data protection and app permissions as long ago as 2011.
Indeed, in August 2011 Schrems filed a complaint with the Irish Data Protection Commission specifically flagging the app permissions data sinkhole (Ireland being the focal point for the complaint because that’s where Facebook’s European HQ is based).
“[T]his means that not the data subject but ‘friends’ of the data subject are consenting to the use of personal data,” wrote Schrems in the 2011 complaint, fleshing out his concerns with Facebook’s friends’ data API. “Since an average facebook user has 130 friends, it is very likely that only one of the user’s friends is installing some kind of spam or phishing application and is consenting to the use of all data of the data subject. There are many applications that do not need to access the users’ friends personal data (e.g. games, quizzes, apps that only post things on the user’s page) but Facebook Ireland does not offer a more limited level of access than ‘all the basic information of all friends’.

“The data subject is not given an unambiguous consent to the processing of personal data by applications (no opt-in). Even if a data subject is aware of this entire process, the data subject cannot foresee which application of which developer will be using which personal data in the future. Any form of consent can therefore never be specific,” he added.
As a result of Schrems’ complaint, the Irish DPC audited and re-audited Facebook’s systems in 2011 and 2012. The outcome of those data audits included a recommendation that Facebook tighten app permissions on its platform, according to a spokesman for the Irish DPC, who we spoke to this week.
The spokesman said the DPC’s recommendation formed the basis of the major platform change Facebook announced in 2014 — aka shutting down the Friends data API — albeit too late to prevent Cambridge Analytica from being able to harvest millions of profiles’ worth of personal data via a survey app, because Facebook only made the change gradually, ultimately closing the door in May 2015.
“Following the re-audit … one of the recommendations we made was in the area of the ability to use friends data through social media,” the DPC spokesman told us. “And that recommendation that we made in 2012, that was implemented by Facebook in 2014 as part of a wider platform change that they made. It’s that change that they made that means that the Cambridge Analytica thing cannot happen today.

“They made the platform change in 2014; their change was that for anybody new coming onto the platform from 1st May 2014 they couldn’t do this. They gave a 12 month period for existing users to migrate across to their new platform … and it was in that period that … Cambridge Analytica’s use of the data emerged.

“But from 2015 — for absolutely everybody — this issue with CA cannot happen now. And that was following our recommendation that we made in 2012.”
Given his 2011 complaint about Facebook’s expansive and abusive historical app permissions, Schrems has this week raised an eyebrow and expressed surprise at Zuckerberg’s claim to be “outraged” by the Cambridge Analytica revelations — now snowballing into a massive privacy scandal.
In a statement reflecting on developments he writes: “Facebook has millions of times illegally distributed data of its users to various dodgy apps — without the consent of those affected. In 2011 we sent a legal complaint to the Irish Data Protection Commissioner on this. Facebook argued that this data transfer was perfectly legal and no changes were made. Now, after the outrage surrounding Cambridge Analytica, the Internet giant suddenly feels betrayed seven years later. Our records show: Facebook knew about this for years and previously argued that these practices were perfectly legal.”
So why did it take Facebook from September 2012 — when the DPC made its recommendations — until May 2014 and May 2015 to implement the changes and tighten app permissions?
The regulator’s spokesman told us it was “engaging” with Facebook over that period of time “to ensure that the change was made”. But he also said Facebook spent some time pushing back — questioning why changes to app permissions were necessary and dragging its feet on shuttering the friends’ data API.
“I believe the reality is Facebook had questions as to whether they felt there was a need for them to make the changes that we were recommending,” said the spokesman. “And that was, I suppose, the level of engagement that we had with them. Because we were relatively firm that, yes, we made the recommendation because we felt the change needed to be made. And that was the nature of the discussion. And as I say, ultimately the reality is that the change has been made. And it’s been made to an extent that such an issue couldn’t occur today.”

“That is a matter for Facebook themselves to answer as to why they took that period of time,” he added.
Of course we asked Facebook why it pushed back against the DPC’s recommendation in September 2012 — and whether it regrets not acting more swiftly to implement the changes to its APIs, given the crisis its business is now facing, having breached user trust by failing to safeguard people’s data.
We also asked why Facebook users should trust Zuckerberg’s claim, also made in the CNN interview, that it’s now ‘open to being regulated’ — when its historical playbook is packed with examples of the polar opposite behaviour, including ongoing attempts to circumvent existing EU privacy rules.
A Facebook spokeswoman acknowledged receipt of our questions this week — but the company has not responded to any of them.
The Irish DPC chief, Helen Dixon, also went on CNN this week to give her response to the Facebook–Cambridge Analytica data misuse crisis — calling for assurances from Facebook that it will properly police its own data protection policies in future.
“Even where Facebook have terms and policies in place for app developers, it doesn’t necessarily give us assurances that those app developers are abiding by the policies Facebook have set, and that Facebook is active in terms of overseeing that there’s no leakage of personal data. And that conditions, such as the prohibition on selling on data to further third parties, are being adhered to by app developers,” said Dixon.

“So I suppose what we want to see change, and what we want to oversee with Facebook now, and what we’re demanding answers from Facebook in relation to, is first of all what pre-clearance and what pre-authorization do they do before allowing app developers onto their platform. And secondly, once those app developers are operating and have apps collecting personal data, what kind of follow up and active oversight steps does Facebook take to give us all reassurance that the type of issue that appears to have occurred in relation to Cambridge Analytica won’t happen again.”
Firefighting the raging privacy crisis, Zuckerberg has committed to conducting a historical audit of every app that had access to “a large amount” of user data around the time that Cambridge Analytica was able to harvest so much data.
So it remains to be seen what other data misuses Facebook will unearth — and have to confess to now, long after the fact.
But any other embarrassing data leaks will sit within the same unfortunate context — which is to say that Facebook could have prevented this type of problem if it had listened to the very valid concerns data protection experts were raising more than six years ago.
Instead, it chose to drag its feet. And the list of awkward questions for the Facebook CEO keeps getting longer.