UK parliament’s call for Zuckerberg to testify goes next level

The UK parliament has issued an impressive ultimatum to Facebook in a last-ditch attempt to get Mark Zuckerberg to take its questions: Come and give evidence voluntarily or next time you fly to the UK you’ll get a formal summons to appear.

“Following reports that he will be giving evidence to the European Parliament in May, we would like Mr Zuckerberg to come to London during his European trip. We would like the session here to take place by 24 May,” the committee writes in its latest letter to the company, signed by its chair, Conservative MP Damian Collins.

“It is important to note that, while Mr Zuckerberg does not normally come under the jurisdiction of the UK Parliament, he will do so the next time he enters the country,” he adds. “We hope that he will respond positively to our request, but if not the Committee will resolve to issue a formal summons for him to appear when he is next in the UK.”

Facebook has repeatedly ignored the DCMS committee’s requests that its CEO and founder appear before it — preferring to send various minions to answer questions related to its enquiry into online disinformation and the role of social media in politics and democracy.

The most recent Zuckerberg alternative to appear before it was also the most senior: Facebook’s CTO, Mike Schroepfer, who claimed he had personally volunteered to make the trip to London to give evidence.

However, for all Schroepfer’s efforts to stand in for the company’s chief exec, his answers failed to impress UK parliamentarians. And immediately following the hearing the committee issued a press release repeating its call for Zuckerberg to testify, noting that Schroepfer had failed to provide adequate answers to as many as 40 of its questions.

Schroepfer did sit through around five hours of grilling on a wide range of topics, with the Cambridge Analytica data misuse scandal front and center — the story having morphed into a major global scandal for the company after fresh revelations were published by the Guardian in March (although the newspaper actually published its first story about Facebook data misuse all the way back in December 2015) — though in last week’s hearing Schroepfer often fell back on claiming he didn’t know the answer and would have to “follow up”.

Yet the committee has been asking Facebook for straight answers for months. So you can see why it’s really mad now.

We reached out to Facebook to ask whether its CEO will now agree to personally testify in front of the committee by May 24, per its request, but the company declined to provide a public statement on the issue.

A company spokesperson did say it would be following up with the committee to answer any outstanding questions it had after Schroepfer’s session.

It’s fair to say Facebook has handled this issue exceptionally badly — leaving Collins to express public frustration about the lack of co-operation when, for example, he had asked it for help and information related to the UK’s Brexit referendum — turning what could have been a relatively easy-to-manage process into a major media circus-cum-PR nightmare.

Last week Schroepfer was on the sharp end of lots of awkward questions from visibly outraged committee members, with Collins pointing to what he dubbed a “pattern of behavior” by Facebook that he said suggested an “unwillingness to engage, and a desire to hold onto information and not disclose it”.

Committee members also interrogated Schroepfer about why another Facebook employee who appeared before it in February had not disclosed an existing agreement between Facebook and Cambridge Analytica.

“I remain to be convinced that your company has integrity,” he was told bluntly at one point during the hearing.

If Zuckerberg does agree to testify he’ll be in for an even bumpier ride. And, well, if he doesn’t it looks pretty clear the Facebook CEO won’t be making any personal journeys to the UK for a while.


Facebook shrinks fake news after warnings backfire

Tell someone not to do something and sometimes they just want to do it more. That’s what happened when Facebook put red flags on debunked fake news. Users who wanted to believe the false stories had their passions inflamed and they actually shared the hoaxes more. That led Facebook to ditch the incendiary red flags in favor of showing Related Articles with more level-headed perspectives from trusted news sources.

But now it’s got two more tactics to reduce the spread of misinformation, which Facebook detailed at its Fighting Abuse @Scale event in San Francisco. Facebook’s director of News Feed integrity Michael McNally and data scientist Lauren Bose gave a talk discussing the ways it intervenes. The company is trying to walk a fine line between censorship and sensibility.

These red warning labels actually backfired and made some users more likely to share, so Facebook switched to showing Related Articles

First, rather than call more attention to fake news, Facebook wants to make it easier to miss these stories while scrolling. When Facebook’s third-party fact-checkers verify an article is inaccurate, Facebook will shrink the size of the link post in the News Feed. “We reduce the visual prominence of feed stories that are fact-checked false,” a Facebook spokesperson confirmed to me.

As you can see below in the image on the left, confirmed-to-be-false news stories on mobile show up with their headline and image rolled into a single smaller row of space. Below, a Related Articles box shows “Fact-Checker”-labeled stories debunking the original link. Meanwhile on the right, a real news article’s image appears about 10 times larger, and its headline gets its own space.

Second, Facebook is now using machine learning to look at newly published articles and scan them for signs of falsehood. Combined with other signals like user reports, Facebook can use high falsehood prediction scores from the machine learning systems to prioritize articles in its queue for fact-checkers. That way, the fact-checkers can spend their time reviewing articles that are most likely to be false.

“We use machine learning to help predict things that might be more likely to be false news, to help prioritize material we send to fact-checkers (given the large volume of potential material),” a Facebook spokesperson confirmed. The social network now works with 20 fact-checkers in several countries around the world, but it’s still trying to find more to partner with. In the meantime, the machine learning will ensure their time is used efficiently.
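Facebook hasn’t published how these signals are combined, but the mechanics of such a triage queue are easy to sketch. The field names and weights below are illustrative assumptions, not the real system; the idea is simply that a model’s falsehood score, user reports and spread velocity together determine review order.

```python
# Hypothetical sketch of a fact-check triage queue. The fields and weights
# are assumptions for illustration; Facebook's actual scoring is not public.
from dataclasses import dataclass

@dataclass
class Article:
    url: str
    falsehood_score: float  # model-predicted probability the story is false
    user_reports: int       # how many users flagged the story
    views_per_hour: float   # how fast it is currently spreading

def priority(a: Article) -> float:
    # Blend the model score with capped report and reach signals, so
    # fact-checkers spend their time on stories that are both likely
    # false and actively spreading.
    reports = min(a.user_reports / 100.0, 1.0)
    reach = min(a.views_per_hour / 10_000.0, 1.0)
    return 0.6 * a.falsehood_score + 0.2 * reports + 0.2 * reach

articles = [
    Article("http://example.com/miracle-cure", 0.92, 340, 5200.0),
    Article("http://example.com/county-fair", 0.08, 2, 800.0),
]
for a in sorted(articles, key=priority, reverse=True):
    print(a.url)  # most review-worthy first
```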

Bose and McNally also walked the audience through Facebook’s “ecosystem” approach that fights fake news at every step of its development (a rough code sketch of this staged filtering follows the list):

Account Creation: If accounts are created using fake identities or networks of bad actors, they’re removed.

Asset Creation: Facebook looks for similarities to shut down clusters of fraudulently created Pages and block the domains they’re connected to.

Ad Policies: Malicious Pages and domains that exhibit signs of misuse lose the ability to buy or host ads, which deters them from growing their audience or monetizing it.

False Content Creation: Facebook applies machine learning to text and images to find patterns that indicate risk.

Distribution: To limit the spread of false news, Facebook works with fact-checkers. If they debunk an article, its size shrinks, Related Articles are appended and Facebook downranks the story in News Feed.
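Here is a minimal sketch of that staged funnel, assuming toy stand-in checks (Facebook’s real classifiers are obviously not public): an account or Page that trips an early-stage check never reaches the later stages.

```python
# Toy sketch of the staged "ecosystem" defense. Each stage is a stand-in
# predicate; in production these would be full classifiers and review queues.
from typing import Callable, Dict, List

def fake_identity(entity: Dict) -> bool:
    # Account Creation stage: fake identities are removed outright.
    return entity.get("identity_verified") is False

def fraudulent_page_cluster(entity: Dict) -> bool:
    # Asset Creation stage: clusters of near-duplicate Pages look fraudulent.
    return entity.get("similar_pages", 0) > 50  # illustrative threshold

def ad_policy_violation(entity: Dict) -> bool:
    # Ad Policy stage: misuse signals cost the entity its ad privileges.
    return entity.get("policy_strikes", 0) >= 3

STAGES: List[Callable[[Dict], bool]] = [
    fake_identity,
    fraudulent_page_cluster,
    ad_policy_violation,
]

def first_failed_stage(entity: Dict) -> str:
    # Walk the funnel; report the first stage that catches the entity.
    for check in STAGES:
        if check(entity):
            return check.__name__
    return "reached distribution (fact-checking applies here)"

print(first_failed_stage({"identity_verified": False}))  # fake_identity
```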

Zuckerberg fires back at Tim Cook, opens up about fake news

Zuckerberg has been on a bit of a publicity tour following the Cambridge Analytica scandal and a generally tough year for the social media behemoth.

This morning, an interview with Zuck was published on Vox’s The Ezra Klein Show. In it, the Facebook CEO waded through some of the company’s most pressing issues, including how to deal with fake news, help support good journalism, and govern a community of 2 billion people. Zuck also clapped back at Tim Cook, who has criticized Facebook’s model of generating revenue through advertising.

Fake News

On the problem of Fake News and transparency in the past:

It’s tough to be transparent when we don’t first have a full understanding of where the state of some of the systems are. In 2016, we were behind on having an understanding and operational excellence around preventing things like misinformation and Russian interference. And you can bet that that’s a huge focus for us going forward.

On how Facebook is trying to serve up content, including news content, that is meaningful to users:

The way that this works today, broadly, is we have panels of hundreds or thousands of people who come in and we show them all the content that their friends and Pages they follow have shared. And we ask them to rank it, and basically say, “What were the most meaningful things that you wish were at the top of feed?” And then we try to design algorithms that map to what people are actually telling us is meaningful to them. Not what they click on, not what is going to make us the most revenue, but what people actually find meaningful and valuable. So when we make shifts — like the broadly trusted shift — the reason why we’re doing that is because it actually maps to what people are telling us they want at a deep level.
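What Zuckerberg describes is, in effect, supervised learning with survey responses as the label instead of clicks. Here is a toy version of that idea; the features and linear model are assumptions for illustration only, since Facebook’s actual feature set and model are not public.

```python
# Toy "rank by surveyed meaningfulness": fit a model to panelists' scores,
# not click counts. Features and labels are illustrative assumptions.
import numpy as np

# Each row describes a story: [from_close_friend, broadly_trusted_source, hours_old]
features = np.array([
    [1.0, 0.0, 0.5],
    [0.0, 1.0, 2.0],
    [0.0, 0.0, 0.1],
])
# Panel labels: "how meaningful was this?" on a 0-1 scale.
survey_scores = np.array([0.9, 0.7, 0.2])

# Least-squares fit maps story features to reported meaningfulness.
weights, *_ = np.linalg.lstsq(features, survey_scores, rcond=None)

# Rank a feed by predicted meaningfulness, highest first.
order = np.argsort(features @ weights)[::-1]
print(order)
```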

Zuck was also asked about supporting news organizations, as some slice of Facebook’s revenue comes from users consuming news on the platform:

For the larger institutions, and maybe even some of the smaller ones as well, subscriptions are really a key point on this. I think a lot of these business models are moving towards a higher percentage of subscriptions, where the people who are getting the highest value from you are contributing a disproportionate amount to the revenue. And there are certainly a lot of things that we can do on Facebook to help people, to help these news organizations, drive subscriptions. And that’s certainly been a lot of the work that we’ve done and we’ll continue doing.

He also acknowledged that subscriptions might not work for local news, which the CEO believes is equally important:

In local news, I think some of the solutions might be a little bit different. But I think it’s easy to lose track of how important this is. There’s been a lot of conversation about civic participation changing, and I think people can lose sight of how closely tied that can be to local news. In a town with a strong local newspaper, people are much more informed, they’re much more likely to be civically active. On Facebook we’ve taken steps to show more local news to people. We’re also working with them specifically, creating funds to support them, and working on both subscriptions and ads there, which should hopefully create a more thriving ecosystem.

In Reaction to Tim Cook

In an interview last week, the Apple CEO said tech firms “are beyond” self-regulation. When asked what he would do if he were in Zuckerberg’s position, Cook said, “I wouldn’t be in this situation.” The CEO has long held the view that an advertising model, in which companies use data about users to sell to brands, is not what Apple wants to be.

“They’re gobbling up everything they can learn about you and trying to monetize it,” he said of Facebook and Google in 2015. “We think that’s wrong. And it’s not the kind of company that Apple wants to be.”

Zuck was asked about Cook’s statements in the interview:

You know, I find that argument, that if you’re not paying that somehow we can’t care about you, to be extremely glib. And not at all aligned with the truth. The reality here is that if you want to build a service that helps connect everyone in the world, then there are a lot of people who can’t afford to pay. And therefore, as with a lot of media, having an advertising-supported model is the only rational model that can support building this service to reach people.

That doesn’t mean that we’re not primarily focused on serving people. I think, probably to the dismay of our sales team here, I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business.

Zuck even took the opportunity to clap back at Cook a bit, saying we shouldn’t believe that companies trying to charge us more actually care about us.

But if you want to build a service which is not just serving rich people, then you need to have something that people can afford. I thought Jeff Bezos had an excellent saying on this in one of his Kindle launches a number of years back. He said, “There are companies that work hard to charge you more, and there are companies that work hard to charge you less.” And at Facebook, we are squarely in the camp of the companies that work hard to charge you less and provide a free service that everyone can use.

I don’t think at all that that means that we don’t care about people. To the contrary, I think it’s important that we don’t all get Stockholm syndrome and let the companies that work hard to charge you more convince you that they actually care more about you. Because that sounds ridiculous to me.

The Government of Facebook

Vox’s founder and Editor-at-Large Ezra Klein brought up something Zuck said in an earlier interview, that Facebook was more like a government than a traditional company. Zuck has pointed out that disputes over what content is admissible on Facebook have grown to a scale that requires a certain level of governance.

But I think it’s actually one of the most interesting philosophical questions that we face. With a community of more than 2 billion people, all around the world, in every different country, where there are wildly different social and cultural norms, it’s just not clear to me that us sitting in an office here in California are best placed to always determine what the policies should be for people all around the world. And I’ve been working on and thinking through, how can you set up a more democratic or community-oriented process that reflects the values of people around the world?

That’s one of the things that I really think we need to get right. Because I’m just not sure that the current state is a great one.

On how Facebook could prepare for its own overwhelming scale:

One is transparency. Right now, I don’t think we are transparent enough around the prevalence of different issues on the platform. We haven’t done a good job of publishing and being transparent about the prevalence of those kinds of issues, and the work that we’re doing and the trends of how we’re driving those things down over time.

And on long-term goals for governance:

But over the long term, what I’d really like to get to is an independent appeal process. So maybe folks at Facebook make the first decision based on the community standards that are outlined, and then people can get a second opinion. You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.

You can read the full interview at Vox.com.


Zuckerberg refuses UK parliament summons over Fb data misuse

So much for ‘We are accountable’; Facebook founder and CEO Mark Zuckerberg has declined a summons from a UK parliamentary committee that’s examining how social media data is being used — and, as recent revelations suggest, misused — for political ad targeting.

The DCMS committee wrote to Zuckerberg on March 20 — following newspaper reports based on interviews with a former employee of UK political consultancy, Cambridge Analytica, who disclosed the company obtained Facebook data on 50 million users — calling for him to give oral evidence.

Facebook’s policy staffer, Simon Milner, previously told the committee the consultancy did not have Facebook data. “They may have lots of data, but it will not be Facebook user data,” said Milner on February 8. “It may be data about people who are on Facebook that they have collected themselves, but it is not data that we have.”

In his letter to Zuckerberg, the chairman of the committee Damian Collins accuses Facebook officials of having “consistently downplayed” the risk of user data being taken without users’ consent.

“It is now time that I hear from a senior Facebook executive with the sufficient authority to give an accurate account of this catastrophic failure of process,” Collins writes. “There is a strong public interest test regarding user protection. Accordingly we are sure you will understand the necessity of hearing from a representative right at the top of the organisation to address concerns. Given your commitment at the start of the New Year to ‘fixing’ Facebook, I hope that this representative will be you.”

Despite rising pressure around what is now a major public scandal — including the FTC opening an investigation — Zuckerberg has declined the committee’s summons.

In a statement, a company spokesperson said it has offered its CTO or chief product officer to answer questions.

“We have responded to Mr Collins and the DCMS and offered for two senior company representatives from our management team to meet with the Committee depending on timings most convenient for them. Mike Schroepfer is Chief Technology Officer and is responsible for Facebook’s technology, including the company’s developer platform. Chris Cox is Facebook’s Chief Product Officer and leads development of Facebook’s core products and features, including News Feed. Both Chris Cox and Mike Schroepfer report directly to Mark Zuckerberg and are among the longest-serving senior representatives in Facebook’s 15-year history,” the spokesperson said.

Facebook declined to answer additional questions.

Collins made a statement before today’s evidence session of the DCMS committee, which is hearing from Cambridge Analytica whistleblower Chris Wylie — saying it would still like to hear from Zuckerberg, even if he isn’t able to give evidence in person.

“We will seek to clarify with Facebook whether he is available to give evidence or not, because that wasn’t clear from our correspondence,” he said. “If he is available to give evidence, then we will be happy to do that either in person or by video link if that will be more convenient for him.”

Update: Collins returned to the theme of the Facebook founder’s reluctance to put in a personal appearance to answer the committee’s questions more than once during the four-hour oral hearing, remarking at one point: “I must say that given the extraordinary evidence we’ve heard so far today, and the things we’ve heard in the other enquiry, I think it’s absolutely astonishing that Mark Zuckerberg is not prepared to submit himself to questioning in front of a parliamentary or congressional hearing, given that these are questions of fundamental importance and concern to Facebook users and to our enquiry as well.”

“We would certainly urge him to think again if he has any care for people who use his company’s services,” he added.


Facebook starts fact checking photos/videos, blocks millions of fake accounts per day

Facebook has begun letting partners fact-check photos and videos beyond news articles, and proactively review stories before Facebook asks them. Facebook is also now preemptively blocking the creation of millions of fake accounts per day. Facebook revealed this news on a conference call with journalists [Update: and later a blog post] about its efforts around election integrity that included Chief Security Officer Alex Stamos, who’s reportedly leaving Facebook later this year but insists he’s still committed to the company.

Articles flagged as false by Facebook’s fact-checking partners have their reach reduced and show Related Articles presenting perspectives from reputable news outlets below

Stamos outlined how Facebook is building ways to address fake identities, fake audiences grown illicitly or pumped up to make content appear more popular, acts of spreading false information, and false narratives that are intentionally deceptive and shape people’s views beyond the facts. “We’re trying to develop a systematic and comprehensive approach to tackle these challenges, and then to map that approach to the needs of each country or election,” says Stamos.

Samidh Chakrabarti, Facebook’s product manager for civic engagement, also explained that Facebook is now proactively looking for foreign-based Pages producing civic-related content inauthentically. It removes them from the platform if a manual review by the security team finds they violate its terms of service.

“This proactive approach has allowed us to move more quickly and has become a really important way for us to prevent divisive or misleading memes from going viral,” said Chakrabarti. Facebook first piloted this tool in the Alabama special election, where the proactive system identified and shut down a ring of Macedonian spammers meddling with the election to earn money, and has since deployed it to protect Italian elections and will use it for the U.S. mid-term elections.

Meanwhile, advances in machine learning have allowed Facebook “to find more suspicious behaviors without assessing the content itself” and to block millions of fake account creations per day “before they can do any damage,” says Chakrabarti. [Update 2:15pm PST: Facebook is expected to share more about these tools during its “Fighting Abuse @Scale” conference in SF on April 25th.]
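Facebook hasn’t detailed which behavioral signals it uses, but sign-up-time heuristics of this kind are common across the industry. Here is a sketch under that assumption, treating IP velocity and throwaway email domains as content-free red flags:

```python
# Hedged sketch of content-free fake-account scoring at sign-up time.
# The specific signals and thresholds are illustrative assumptions, not
# Facebook's actual system.
from collections import Counter

recent_signups_by_ip: Counter = Counter()
DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.example"}

def looks_fake(ip: str, email: str) -> bool:
    recent_signups_by_ip[ip] += 1
    domain = email.rsplit("@", 1)[-1].lower()
    # A burst of sign-ups from one IP, or a throwaway email domain, is a
    # behavioral red flag that requires no analysis of posted content.
    return recent_signups_by_ip[ip] > 20 or domain in DISPOSABLE_DOMAINS

print(looks_fake("203.0.113.7", "bot123@mailinator.com"))  # True
```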

Facebook implemented its first slew of election protections back in December 2016, including working with third-party fact-checkers to flag articles as false. But those red flags were shown to entrench some people’s belief in false narratives, leading Facebook to shift to showing Related Articles with perspectives from other reputable news outlets. As of yesterday, Facebook’s fact-checking partners began reviewing suspicious photos and videos, which can also spread false information. This could reduce the spread of false news image memes that live on Facebook and require no extra clicks to view, like doctored photos showing Parkland school shooting survivor Emma Gonzalez ripping up the Constitution.

Normally, Facebook sends fact-checkers stories that are being flagged by users and going viral. But now in countries like Italy and Mexico, in anticipation of elections, Facebook has enabled fact-checkers to proactively flag things, because in some cases they can identify false narratives that are spreading before Facebook’s own systems do. “To reduce latency in advance of elections, we wanted to ensure we gave fact-checkers that ability,” says Facebook’s News Feed product manager Tessa Lyons.

A photo of Parkland shooting survivor Emma Gonzalez ripping up a shooting target was falsely doctored to show her ripping up the Constitution. Photo fact-checking could have helped Facebook prevent the false image from going viral. [Image via CNN]

With the mid-terms coming up fast, Facebook has to both secure its systems against election interference and convince users and regulators that it’s made real progress since the 2016 presidential election, when Russian meddlers ran rampant. Otherwise, Facebook risks another endless news cycle about it being a detriment to democracy, which could trigger reduced user engagement and government intervention.


Platform power is crushing the web, warns Berners-Lee

On the 29th birthday of the world wide web, its inventor, Sir Tim Berners-Lee, has voiced a fresh warning about threats to the web as a force for good, adding his voice to growing concerns about big tech’s impact on competition and society.

The web’s creator argues that the “powerful weight of a few dominant” tech platforms is having a deleterious impact by concentrating power in the hands of gatekeepers that gain “control over which ideas and opinions are seen and shared”.

His suggested fix is socially minded regulation, so he’s also lending his clout to calls for big tech to be regulated.

“These dominant platforms are able to lock in their position by creating barriers for competitors,” Berners-Lee writes in an open letter published today on the Web Foundation’s website. “They acquire startup challengers, buy up new innovations and hire the industry’s top talent. Add to this the competitive advantage that their user data gives them and we can expect the next 20 years to be far less innovative than the last.”

The concentration of power in the hands of a few mega platforms is also the source of the current fake news crisis, in Berners-Lee’s view, because he says platform power has made it possible for people to “weaponise the web at scale” — echoing remarks made by the UK prime minister last year when she called out Russia for planting fakes online to try to disrupt elections.

“In recent years, we’ve seen conspiracy theories trend on social media platforms, fake Twitter and Facebook accounts stoke social tensions, external actors interfere in elections, and criminals steal troves of personal data,” he writes, pointing out that the current response of lawmakers has been to look “to the platforms themselves for answers” — which he argues is neither fair nor likely to be effective.

In the EU, for example, the risk of future regulation is being used to encourage social media companies to sign up to a voluntary code of conduct aimed at speeding up takedowns of various types of illegal content, including terrorist propaganda. The Commission is also seeking to drive action against a much broader set of online content issues — such as hate speech, commercial scams and even copyrighted material.

Critics argue its approach risks chilling free expression via AI-powered censorship.

Some EU member states have gone further too. Germany now has a law with big fines for social media platforms that fail to comply with hate speech takedown requirements, for example, while in the UK ministers are toying with new regulations, such as placing limits on screen time for children and teens.

Both the Commission and some EU member states have been pushing for increased automation of content moderation online. In the UK last month, ministers unveiled an extremism blocking tool which the government had paid a local AI company to develop, with the Home Secretary warning she had not ruled out forcing companies to use it.

Meanwhile, in the US, Facebook has faced huge pressure in recent years as awareness has grown of how extensively its platform is used to spread false information, including during the 2016 presidential election.

The company has announced a series of measures aimed at combating the spread of fake news generally, and reducing the risk of election disinformation specifically — as well as a major recent change to its News Feed algorithm, ostensibly to encourage users towards having more positive interactions on its platform.

But Berners-Lee argues that letting commercial entities pull levers to try to fix such a wide-ranging problem is a bad idea — arguing that any fixes companies come up with will inevitably be constrained by their profit-maximizing context, and also that they amount to another unilateral imposition on users.

A better solution, in his view, is not to let tech platform giants self-regulate but to create a framework for regulating them that factors in “social objectives”.

A year ago Berners-Lee also warned about the same core threats to the web, though he was less certain then that regulation could be the answer — instead flagging up a variety of initiatives aimed at trying to combat threats such as the systematic background harvesting of personal data. So he seems to be shifting towards backing the idea of an overarching framework to control the tech that’s being used to control us.

“Companies are aware of the problems and are making efforts to fix them — with each change they make affecting millions of people,” he writes now. “The responsibility — and sometimes burden — of making these decisions falls on companies that have been built to maximise profit more than to maximise social good. A legal or regulatory framework that accounts for social objectives may help ease those tensions.”

Berners-Lee’s letter also emphasizes the need for diversity of thought in shaping any web regulations, to ensure rules don’t get skewed towards a certain interest or group. And he makes a strong call for investment to help close the global digital divide.

“The future of the web isn’t just about those of us who are online today, but also those yet to connect,” he cautions. “Today’s powerful digital economy calls for strong standards that balance the interests of both companies and online citizens. This means thinking about how we align the incentives of the tech sector with those of users and society at large, and consulting a diverse cross-section of society in the process.”

Another specific call he makes is for fresh thinking about Internet business models, arguing that online advertising should not be accepted as the only possible route for sustaining web platforms. “We need to be a little more creative, ” he argues.

“While the problems facing the web are complex and large, I think we should see them as bugs: problems with existing code and software systems that have been created by people — and can be fixed by people. Create a new set of incentives and changes in the code will follow. We can design a web that creates a constructive and supportive environment,” he adds.

“Today, I want to challenge us all to have greater ambitions for the web. I want the web to reflect our hopes and fulfil our dreams, rather than magnify our fears and deepen our divisions.”

At the time of writing Amazon, Facebook, Google and Twitter had not responded to a request for comment.


Factmata closes $1M seed round as it seeks to build an anti fake news media platform

While big companies like Facebook and publishers continue to rethink what their role is in disseminating news these days in the wake of the growing influence of ‘fake news’ and the ever-present spread of misleading clickbait, a London-based startup called Factmata has closed a seed round of $1 million in its ambition to build a platform using AI to help fix the problem throughout the whole of the media industry: from the spread of biased, incorrect or just crappy clickbait on various aggregating platforms, to the use of ad networks to help disseminate that content.

There is no product on the market yet — the company is piloting different services at the moment — and so it’s reasonable to wonder if this might ever get off the ground. But what Factmata is doing is notable anyway for a couple of reasons.

First and foremost, for the timeliness of Factmata’s mission. It’s been over a year since the US election, and nearly two years since the Brexit vote in the UK. Both events raised the question of just how far strategically placed, biased or patently false stories might have influenced people’s voting in those pivotal events; and people (and industries) are still talking about how to fix the problem, which started as a public relations risk but threatens to tip into becoming a business and legal danger if not checked.

And secondly, because of who is backing it. The list includes Biz Stone, one of the co-founders of Twitter (which itself is grappling with its role as a ‘neutral’ player in people’s wars of words); and Craig Newmark, a longtime advocate of freedom of information and other civil liberties as they intersect with the digital world. In August of last year, when Factmata announced the first close of this round, it also named Mark Cuban (the investor who is a very outspoken foe of US President Donald Trump), Mark Pincus, Ross Mason and Sunil Paul as investors.

Image courtesy of Intellectual Take Out.

In an interview with TechCrunch, Factmata’s CEO and founder Dhruv Ghulati — a machine learning specialist whose field of work has included “distant supervision for statistical claim detection” (which sounds a lot like how one might model a detection system for a massive trove of news items) — would not be drawn on the specifics of how Factmata would work, except to note that it would be based on the concept of “community-driven AI”: “How do we take a machine learning model where you get data to train your model, perhaps pay 10,000 people to flag content? How can you build a system where [what you have and what you want] is symbiotic?”

(And you can, in fact, think of many ways this could eventually be implemented: consider, for example, paywalls. Readers could build up credits to bypass paywalls for every report they make that’s determined to be a help to the fake news challenge.)

Ghulati said that Factmata’s team of machine learning and other AI experts are building three different strands to its product at the moment.

The first of these will be a product aimed at the adtech world. Programmatic ad platforms and the different players that feed into them have built a system that is ripe for abuse by bad actors.

Those who are posting “legitimate” stories are finding their work posted alongside ads that are being inserted without visibility into what is in those ads, and those selling ads sometimes will not know where those ads will run. The idea is that Factmata will help detect anomalies and present them to different players in the field to help reduce those unintended placements.

“We have a recognition system that can detect things like spoof websites,” which might use legitimate-looking ads to help further the impression of their legitimacy, said Ghulati.

The success and adoption of the adtech product is predicated on the idea that most of the players in this space are more worried about quality than they are about traffic, which seems antithetical to the business. However, as more junk infiltrates the web, people might gradually move away from using these services (Facebook’s recent traffic decline is an interesting one to mull in that light), so the quality issue may well win out in the end.

Ghulati said that AppNexus, Trustmetrics, Sovrn and Bucksense are among the companies in the programmatic space that are already testing out Factmata’s platform.

“Sovrn is passionate about working with independent publishers of quality content. To offer further quality metrics to our buyers, we have chosen to work with Factmata to help build new whitelists of inventory that are free of hate speech, politically extreme, and fake/spoof content,” said Andy Evans, CMO at Sovrn. “This is a new offering in the programmatic ad marketplace, and Factmata is a strong partner in this space. We are excited to be part of Factmata’s journey to help indirect programmatic offer a cleaner, healthier environment for brands.”

The second area where Factmata is hoping to make a mark is on aggregation platforms. While news publishers’ own sites continue to be strong drivers of traffic for those businesses, platforms like Google and Facebook continue to play an ever bigger role in how traffic gets to them in the first place — and in some cases, where your story is read, full stop.

Here, Ghulati said that Factmata is working on an “alpha” of a product that would also work on these platforms to, like the programmatic ad networks, detect when something that is biased or incorrect is being shared and read. (He would not disclose which of these platforms might be talking with Factmata, but given Stone has a role again at Twitter, it would be interesting to see if it’s one of them.)

Ghulati is clear to point out that this is not censorship: nothing would ever get removed based on Factmata’s determinations, but instead would be flagged for readers. This sounds not unlike Facebook’s own attempts at getting people to report when something is of questionable origin, and ultimately, in my opinion, this part of the business might only succeed if it proves able to arrive at the goal faster and better than the companies it will be trying to sell its services to.

He also notes that in the fight against ‘bias’ it’s not trying to remove all sentiment from the web, but merely to inform readers when it’s there. “We are not trying to build tech that is trying to make articles unbiased. We’re not trying to create automated machine journalism that generates the most unbiased articles. We’re trying to surface and make clear to the reader that those biases do exist. For example, they made the claim because it’s like this. It’s difficult in a fast news cycle always to know the context.”

The third area where Ghulati hopes Factmata will exist is as a consumer-facing service, and this might be the more plausible outcome of the work it’s doing to potentially work with publishers, platforms and others in the world of news and news distribution. Here you can imagine a kind of plug-in or extension that would pop up additional information about a news piece right as you are reading it.

Factmata is just getting started and this seed round is potentially just the tip of the iceberg for what it would need to bring a full product to market. There’s surely a will behind its mission today, and hopefully, that will not ebb away as people move on to, well, the next item in the news cycle.

“It is definitely a huge problem, and not one that will be solved this year or next year,” Ghulati said. “We are taking a long-term perspective on this. We think in five to ten years, will we have a new news platform that puts the user at its core? From the tech perspective, it is well known that this space has been dominated by social media platforms. That market is there, but there is a huge chunk that is not, and we think there is a huge opportunity to revamp safety in that market.”


YouTube suspends ads on Logan Paul’s channels after recent pattern of behavior in videos

More problems and controversy for Logan Paul, the YouTube star who caused a strong public backlash when he posted a video of a suicide victim in Japan. Google’s video platform today announced that it would be pulling advertising temporarily from his video channel in response to a “recent pattern of behavior” from him.

This is in addition to Paul’s suspensions from YouTube’s Preferred Ad program and its Originals series, both of which have been in place since January; and comes days after YouTube’s CEO promised stronger enforcement of YouTube’s policies using a mix of technology and 10,000 human curators.

Since coming back online after a one-month break from the service in the wake of the Japanese video, in addition to the usual (asinine) content of his videos, Paul has tasered a rat, suggested swallowing Tide Pods, and, according to YouTube, intentionally tried to monetize a video that clearly violated its guidelines for advertiser-friendly content (we’re asking if we can get a specific reference to which video this might be — they all seem fairly offensive to me, so it’s hard to tell).

“After careful consideration, we have decided to temporarily suspend ads on Logan Paul’s YouTube channels,” a spokesperson said to TechCrunch in an emailed statement elaborating on the tweet. “This is not a decision we made lightly; however, we believe he has exhibited a pattern of behavior in his videos that makes his channel not only unsuitable for advertisers, but also potentially damaging to the broader creator community.”

Yesterday, during a series of “fake news” hearings in the U.S. led by a parliamentary committee from the UK, YouTube’s global head of policy Juniper Downs said that the company had detected no evidence of videos that pointed to Russian interference in the Brexit vote in the UK. But the platform continues to face a lot of controversy over how it vets content on its site, and how that content subsequently is used unscrupulously for financial gain. (YouTube notably was criticised for taking too long to react to the Japanese video that started all of Paul’s trouble.)

This is a contagion problem for YouTube: not only do situations like this damage public perception of the service — and potentially have an impact on viewership — but they could affect how much the most premium brands choose to spend on ads on the platform.

Interestingly, as YouTube continues work on ways of improving the situation with a mix of both machine learning and human approaches, it appears to be starting to reach beyond even the content of YouTube itself.

The Tide Pod suggestion came on Twitter — Paul wrote that he would swallow one Tide Pod for each retweet — and appears to have since been deleted.

Generally, YouTube reserves the right to hide ads on videos and watch pages — including ads from certain advertisers or certain formats.

When a person commits especially serious or repeated violations, YouTube might choose to disable ads across the whole channel or suspend the person from its Partner program, which is aimed at channels that have reached 4,000 watch hours in 12 months and 1,000 subscribers, and lets creators make money from a special tier of ads and via the YouTube Red subscription service. (This is essentially where Paul has fallen today.)
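For reference, the two quoted thresholds make eligibility a simple check (YouTube applies further review criteria in practice; this toy sketch only encodes the numbers above):

```python
# Toy check of the two Partner program thresholds quoted above.
def partner_program_eligible(watch_hours_12mo: float, subscribers: int) -> bool:
    return watch_hours_12mo >= 4_000 and subscribers >= 1_000

print(partner_program_eligible(5_200.0, 870))  # False: too few subscribers
```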

Since YouTube is wary of getting into the censorship game, it’s leaving an exit route open to people who choose to post controversial things anyway: posters can turn off ads on individual videos. From what we understand, Paul’s channel and videos will get reevaluated in coming weeks to see if they meet guidelines.

It’s not clear at all how much Paul has made from his YouTube videos. One estimate puts his YouTube ad revenue at between $40,000 and $630,000 per month, while another puts it at $270,000 per month (or around $3.25 million/year). To note, he’d already been removed from the Preferred program and the Originals program, so that would have already dented his YouTube income.

And you have to ask whether suspending ads really fixes the bigger content issues on the platform. While an advertising suspension might mean a loss of some revenue for the creator, it’s not really a perfect solution.

Logan Paul, as one example, continues to push his own merchandise in his videos, and as a high-profile figure who has not lost his whole fan base, he will still get millions of views (and maybe more now because of this). In other words, the originally offending content (and a viable business model) is still out there, even if it doesn’t have a YouTube monetization element attached to it.

On the other hand, SocialBlade, one of the services providing analytics on YouTube creators, notes that Paul’s views have dropped 41 percent, and subscribers are down 29 percent in the last month, so maybe there is a god.


A young startup with a timely offer: fighting propaganda campaigns online

The prevalence of so-called fake news is far worse than we imagined even a few months ago. Just last week, Twitter admitted there were more than 50,000 Russian bots trying to confuse American voters ahead of the 2016 presidential election.

It isn’t just elections that should concern us, though. So argues Jonathon Morgan, the co-founder and CEO of New Knowledge, a two-and-a-half-year-old, Austin-based cybersecurity company that’s gathering up clients looking to fight online disinformation. (Worth mentioning: the 15-person outfit has also quietly gathered up $1.9 million in seed funding led by Moonshots Capital, with participation from Haystack, GGV Capital, Geekdom Fund, Capital Factory and Spitfire Ventures.)

We talked earlier this week with Morgan, a former digital content producer and State Department counterterrorism advisor, to learn more about his product, which smartly taps concerns about fake social media accounts and propaganda campaigns to work with brands eager to preserve their reputations. Our chat has been edited lightly for length and clarity.

TC: Tell us a little about your background.

JM: I’ve spent my career in digital media, including as a [product director] at AOL when publications were moving onto the internet. Over time, my career moved into machine learning and data science. During the early days of the application-focused web, there wasn’t a lot of engineering talent available, as it wasn’t seen as sophisticated enough. People like me who didn’t have an engineering background but who were willing to spend a weekend learning JavaScript and could create code fast enough didn’t really need much of a pedigree or experience.

TC: How did that experience lead to you focusing on tech that tries to understand how social media platforms are manipulated?

JM: When ISIS was using techniques to jam conversations on social media — conversations that were elevated in the American press — we started trying to figure out how they were pushing their message. I did a little work for the Brookings Institution, which led to some work as a data science advisor to the State Department — developing counterterrorism strategies and understanding what public discourse looks like online, and the difference between mainstream communication and what it looks like when it’s been hijacked.

TC: Now you’re pitching this service you’ve developed with your team to brands. Why?

JM: The same mechanics and tactics used by ISIS are now being used by much more sophisticated actors, from hostile governments to kids who are coordinating activity on the internet to undermine things they don’t like for cultural reasons. They’ll take Black Lives Matter activists and immigration-focused conservatives and amplify their discord, for example. We’ve also watched alt-right supporters on 4chan undermine movie releases. These kinds of digital insurgencies are being used by a growing number of actors to manipulate the way that the public has conversations online.

We realized we could use the same ideas and tech to defend companies that are vulnerable to these attacks. Energy companies, financial institutions, other companies managing critical infrastructure — they’re all equally vulnerable. Election manipulation is just the canary in the coal mine when it comes to the degradation of our discourse.

TC: Yours is a SaaS product, I take it. How does it work?

JM: Yes, it’s enterprise software. Our tech analyzes conversations across multiple platforms — social media and otherwise — and looks for signs that a conversation is being tampered with, identifies who is doing the tampering and what messaging they are using to manipulate the conversation. With that info, our [client] can decide how to respond. Sometimes it’s to work with the press. Sometimes it’s to work with social media companies to say, “These are disingenuous and even fraudulent.” We then work with the companies to remediate the threat.
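New Knowledge hasn’t published its methods, but one generic signal for this kind of tampering is a sudden coordinated burst of activity far outside a conversation’s baseline. Here is a stand-in sketch of that sort of check, not the company’s actual tech:

```python
# Illustrative burst detector: flag the current hour if post volume sits far
# outside the historical spread. A generic stand-in, not New Knowledge's code.
import statistics

def is_burst(hourly_posts: list, z_threshold: float = 3.0) -> bool:
    baseline, current = hourly_posts[:-1], hourly_posts[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    # A z-score well above threshold suggests coordinated amplification.
    return stdev > 0 and (current - mean) / stdev > z_threshold

# Six quiet hours, then a ~12x spike: worth a human look.
print(is_burst([40, 38, 45, 42, 39, 41, 480]))  # True
```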

TC: Which social media companies are the most responsive to these attempted interventions?

JM: There’s a strong appetite for fixing these problems at all the media companies we talk with. Facebook and Google have addressed this publicly, but there’s also action taking place behind closed doors. A lot of individuals at these companies think there are problems that need to be solved, and they are amenable to [working with us].

The challenge for them is that I’m not sure they have a sense for who is responsible for [disinformation much of the time]. That’s why they’ve been slow to address the problem. We think we add value as a partner because we’re focused on this at a much smaller scale. Whereas Facebook is thinking about billions of users, we’re focused on tens of thousands of accounts and conversations, which is still a meaningful number and can impact public perception of a brand.

TC: Who are some of your clients?

JM: We [aren’t authorized to name them but] we sell to companies in the entertainment, energy and finance industries. We’ve also worked with public interest organisations, including the Alliance for Securing Democracy.

TC: What’s the sales process like? Are you looking for changes in conversations, then reaching out to the companies impacted, or are companies finding you?

JM: Both. Either we discover something, or we’ll be approached and do an initial threat assessment to understand the landscape and who might be targeting an organization, and from there, [we’ll decide with the client] whether there’s value in them engaging with us in an ongoing way.

TC: A lot of people have been talking this week about a New York Times piece that seemed to offer a glimmer of hope that blockchain platforms will move us beyond the internet as we know it today and away from the few large tech companies that also happen to be breeding grounds for disinformation. Is that the future, or is “fake news” here to stay?

JM: Unfortunately, online disinformation is becoming increasingly sophisticated. Advances in AI mean that it will soon be possible to manufacture images, audio and even video at unprecedented scale. Automated accounts that seem practically human will be able to engage directly with millions of users, just like your real friends on Facebook, Twitter or the next social media platform.

New technologies like blockchain that give us robust ways to establish trust will be a part of the solution, even if they’re not a magic bullet.


Facebook tries fighting fake news with publisher info button on links

Facebook believes showing Wikipedia entries about publishers and additional Related Articles will give users more context about the links they see. So today it’s beginning a test of a new “i” button on News Feed links that opens up an informational panel. “People have told us that they want more information about what they’re reading,” Facebook product manager Sara Su tells TechCrunch. “They want better tools to help them understand if an article is from a publisher they trust and evaluate if the story itself is credible.”

This box will display the start of a Wikipedia entry about the publisher and a link to the full profile, which could help people know if it’s a reputable, long-standing source of news…or a freshly set up partisan or satire site. It will also display info from the publisher’s Facebook Page even if that’s not who posted the link, data on how the link is being shared on Facebook, and a button to follow the news outlet’s Page. If no Wikipedia page is available, that info will be missing, which could also offer a clue to readers that the publisher may not be legitimate.
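Facebook’s internal integration isn’t public, but the core lookup is easy to approximate with Wikipedia’s public REST summary endpoint. A minimal sketch (the function name and fallback behavior are my own, not Facebook’s):

```python
# Minimal sketch of pulling a publisher's Wikipedia summary, roughly what
# the "i" panel displays. Uses Wikipedia's public REST API; treating a
# missing page as a credibility hint mirrors the article, not documented
# Facebook behavior.
from typing import Optional
import requests

def publisher_summary(title: str) -> Optional[str]:
    resp = requests.get(
        f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}",
        headers={"User-Agent": "info-button-sketch/0.1"},
        timeout=5,
    )
    if resp.status_code != 200:
        return None  # no entry: itself a hint the publisher may be obscure
    return resp.json().get("extract")

print(publisher_summary("TechCrunch"))
```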

Meanwhile, the button will also unveil Related Articles on all links where Facebook can generate them, rather than only if the article is popular or suspected of being fake news, as Facebook has previously tested. Trending information could also appear if the article is part of a Trending topic. Together, this could show people alternative takes on the same news bite, which might dispute the original article or provide more perspective. Previously, Facebook only showed Related Articles occasionally, and immediately displayed them on links without an extra click.

More Context, More Complex

The changes are part of Facebook’s big, ongoing initiative to improve content integrity.
Of course, whenever Facebook shows more information, it creates more potential vectors for misinformation. “This work reflects feedback from our community, including publishers who collaborated on the feature development as part of the Facebook Journalism Project,” says Su.

When asked about the risk of the Wikipedia entries that are pulled in having been doctored with false information, a Facebook spokesperson told me, “Vandalism on Wikipedia is a rare and unfortunate event that is usually resolved quickly. We count on Wikipedia to quickly resolve such situations and refer you to them for information about their policies and programs that address vandalism.”

And to avoid distributing fake news, Facebook says Related Articles will “be about the same topic — and will be from a wide variety of publishers that regularly publish news content on Facebook that gets high engagement with our community.”

“As we continue the test, we’ll continue listening to people’s feedback to understand what types of information are most useful and explore ways to extend the feature,” Su tells TechCrunch. “We will apply what we learn from the test to improve the experience people have on Facebook, advance news literacy, and support an informed community.” Facebook doesn’t expect the changes to significantly impact the reach of Pages, though publishers that knowingly distribute fake news could see fewer clicks if the info button repels readers by debunking the articles.

Getting this right is especially important after the fiasco this week when Facebook’s Safety Check for the tragic Las Vegas mass shooting pointed people to fake news. If Facebook can’t improve trust in what’s shown in the News Feed, people might click all its links less. That could hurt innocent news publishers, as well as reduce clicks to Facebook’s ads.

Image: Bryce Durbin/TechCrunch

Facebook initially downplayed the issue of fake news after the U.S. presidential election, where it was criticized for allowing pro-Trump hoaxes to proliferate. But since then, the company and Mark Zuckerberg have changed their tune.

The company has attacked fake news from all angles: using AI to seek out and downrank it in the News Feed, working with third-party fact-checkers to flag suspicious articles, helping users more easily report hoaxes, penalizing news sites filled with low-quality ads, and deleting accounts suspected of spamming the feed with crap.

Facebook’s rapid iteration in its fight against fake news shows its ability to react well when its problems are thrust into the spotlight. But these changes have only come after the damage was done during the election, and now Facebook faces congressional scrutiny and widespread backlash, and is trying to self-regulate before the government steps in.

The company needs to more proactively anticipate sources of disinformation if it’s going to keep up in this cat-and-mouse game against trolls, election interferers, and clickbait publishers.
