Factmata closes $1M seed round as it seeks to build an anti-fake-news media platform

While big companies like Facebook and publishers continue to rethink their role in disseminating news in the wake of the growing influence of 'fake news' and the ever-present spread of misleading clickbait, a London-based startup called Factmata has closed a seed round of $1 million in its ambition to build a platform that uses AI to help fix the problem across the whole media industry, from the spread of biased, incorrect or just plain crappy clickbait on various aggregation platforms to the use of ad networks to help disseminate that content.

There is no product on the market yet — the company is piloting different services at the moment — and so it’s reasonable to wonder if this might ever get off the ground. But what Factmata is doing is notable anyway for a couple of reasons.

First and foremost, for the timeliness of Factmata's mission. It's been over a year since the US election, and nearly two years since the Brexit vote in the UK. Both events put into sharp relief just how strategically placed, biased or patently incorrect stories might have influenced people's voting in those pivotal moments; and people (and industries) are still talking about how to fix the problem, which started as a public relations risk but threatens to tip into a business and legal danger if not checked.

And secondly, because of who is backing it. The list includes Biz Stone, one of the co-founders of Twitter (which is itself grappling with its role as a 'neutral' player in people's wars of words); and Craig Newmark, a longtime advocate of freedom of information and other civil liberties as they intersect with the digital world. In August of last year, when Factmata announced the first close of this round, the company also named Mark Cuban (the investor who is a very outspoken foe of US President Donald Trump), Mark Pincus, Ross Mason and Sunil Paul as investors.

Image courtesy of Intellectual Take Out.

In an interview with TechCrunch, Factmata’s CEO and founder Dhruv Ghulati — a machine learning specialist whose field of research has included “distant supervision for statistical claim detection” (which sounds a lot like how one might model a detection system for a massive trove of news items) — would not be drawn on the specifics of how Factmata would work, except to note that it would be based on the concept of “community-driven AI: How do we take a machine learning model where you get data to develop your model, perhaps pay 10,000 people to flag content? How do you build a system where [what you have and what you want] is symbiotic?”

(And you can, in fact, think of many ways this could ultimately be implemented: consider, for example, paywalls. Readers could build up credits to bypass paywalls for every report they make that's determined to be helpful in the fight against fake news.)
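Ghulati didn't elaborate on the mechanics, but a minimal sketch of that community-driven loop, in which crowd flags serve as weak labels for retraining a text classifier, might look something like the following (the data, model choice and threshold here are all hypothetical, not Factmata's):

```python
# Hypothetical sketch of "community-driven AI": reader flags act as weak
# labels that periodically retrain a simple credibility classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each flag is (headline_or_body, label), with label 1 = flagged as misleading.
# In practice flags would be aggregated per article and weighted by each
# flagger's track record before being trusted as training data.
community_flags = [
    ("Miracle cure doctors don't want you to know about", 1),
    ("Central bank holds interest rates steady at 0.5%", 0),
    ("You won't BELIEVE what this politician said next", 1),
    ("Quarterly GDP growth revised down to 0.3%", 0),
]

texts, labels = zip(*community_flags)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new article; a high probability would surface a warning to readers,
# and their agreement or disagreement becomes the next round of training data.
score = model.predict_proba(["Miracle weight loss cure doctors hate"])[0][1]
print(f"misleading-content score: {score:.2f}")
```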

Ghulati said that Factmata’s team of machine learning and other AI experts is building three different strands of its product at the moment.

The first of these will be a product aimed at the adtech world. Programmatic ad platforms and the different players that feed into them have built a system that is ripe for abuse by bad actors.

Those posting “legitimate” stories are finding their work published alongside ads that are inserted without visibility into what is in those ads, and those selling ads sometimes will not know where those ads will run. The idea is that Factmata will help detect anomalies and present them to different players in the field to help reduce those unintended placements.

“We have a recognition system that can detect things like spoof websites,” which might use legitimate-looking ads to help further the image of their legitimacy, said Ghulati.
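Factmata hasn't published how that recognition system works, but one common signal in spoof-site detection is a domain that closely imitates a known publisher's. A toy illustration of that single heuristic (the publisher list and threshold are invented for the example):

```python
# Toy look-alike domain check: one heuristic a spoof-site detector might use.
# Illustrative only; Factmata's actual system is not public.
from difflib import SequenceMatcher

KNOWN_PUBLISHERS = ["bbc.co.uk", "nytimes.com", "theguardian.com"]

def spoof_candidates(domain, threshold=0.85):
    """Return known publisher domains this one closely imitates without matching."""
    return [
        pub for pub in KNOWN_PUBLISHERS
        if domain != pub and SequenceMatcher(None, domain, pub).ratio() >= threshold
    ]

print(spoof_candidates("nytirnes.com"))  # ['nytimes.com'] -- 'rn' mimicking 'm'
```

A real system would combine many such signals (ad inventory, page structure, registration data) rather than string similarity alone.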

The success and adoption of the adtech product is predicated on the idea that most of the players in this space are more worried about quality than they are about traffic, which seems antithetical to the business. However, as more junk infiltrates the web, people might gradually move away from using these services (Facebook's recent traffic fall is an interesting one to mull in that light), so the quality issue may well win out in the end.

Ghulati said that AppNexus, Trustmetrics, Sovrn and Bucksense are among the companies in the programmatic space that are already testing out Factmata’s platform.

“Sovrn is passionate about working with independent publishers of quality content. To offer further quality metrics to our buyers, we have chosen to work with Factmata to help build new whitelists of inventory that are free of hate speech, politically extreme content, and fake/spoof content,” said Andy Evans, CMO at Sovrn. “This is a new offering in the programmatic ad marketplace, and Factmata is a strong partner in this space. We are excited to be part of Factmata’s journey to help make indirect programmatic a cleaner, healthier environment for brands.”

The second area where Factmata is hoping to make a mark is on aggregation platforms. While news publishers’ own sites continue to be strong drivers of traffic for their businesses, platforms like Google and Facebook play an ever bigger role in how traffic gets to those sites in the first place — and in some cases, where your story is read, full stop.

Here, Ghulati said that Factmata is working on an “alpha” of a product that would work on these platforms to, like the programmatic ad networks, detect when something biased or incorrect is being shared and read. (He would not disclose which of these platforms might be talking with Factmata, but given that Stone has a role again at Twitter, it would be interesting to see if it's one of them.)

Ghulati is quick to point out that this is not censorship: nothing would ever get removed based on Factmata's determinations; instead, content would be flagged for readers. This sounds not unlike Facebook's own attempts at getting people to report when something is of questionable origin, and ultimately, in my opinion, this part of the business might only succeed if it proves able to reach that goal faster and better than the companies it will be trying to sell its services to.

He also notes that in the fight against 'bias' it's not trying to remove all sentiment from the web, but merely to inform readers when it's there. “We are not trying to build tech that is trying to make articles unbiased. We’re not trying to create automated machine journalism that generates the most unbiased articles. We’re trying to surface and make clear to the reader that those biases do exist. For example, they made the claim because it’s like this. It’s difficult in a fast news cycle always to know the context.”

The third area where Ghulati hopes Factmata will exist is as a consumer-facing service, and this might be the most plausible outcome of its potential work with publishers, platforms and others in the world of news and news distribution. Here you can imagine a kind of plug-in or extension that would pop up additional information about a news piece right as you are reading it.

Factmata is just getting started, and this seed round is potentially just the tip of the iceberg for what it would need to bring a full product to market. There's surely a will behind its mission today; hopefully, that will not ebb away as people move on to, well, the next item in the news cycle.

“It is definitely a huge problem, and not one that will be solved this year or next year,” Ghulati said. “We are taking a long-term perspective on this. We think in five to ten years, will we have a new news platform that puts the user at its core? From the tech perspective, it is well known that this space has been dominated by social media platforms. That market is there, but there is a huge chunk that is not, and we think there is a huge opportunity to revamp safety in that market.”


YouTube suspends ads on Logan Paul's channels after recent pattern of behavior in videos

More problems and controversy for Logan Paul, the YouTube star who caused a strong public backlash when he posted a video of a suicide victim in Japan. Google's video platform today announced that it would be temporarily pulling advertising from his video channels in response to a “recent pattern of behavior” from him.

This is in addition to Paul's suspensions from YouTube's Preferred ad program and its Originals series, both of which have been in place since January; and comes days after YouTube's CEO promised stronger enforcement of YouTube's policies using a mix of technology and 10,000 human curators.

Since coming back online after a one-month break from the service in the wake of the Japanese video, in addition to the usual (asinine) content of his videos, Paul has tasered a rat, suggested swallowing Tide Pods, and, according to YouTube, intentionally tried to monetize a video that clearly infringed its guidelines for advertiser-friendly content (we're asking if we can get a specific reference to which video this might be — they all seem fairly offensive to me, so it's hard to tell).

“After careful consideration, we have decided to temporarily suspend ads on Logan Paul’s YouTube channels,” a spokesperson told TechCrunch in an emailed statement elaborating on the tweet. “This is not a decision we made lightly; however, we believe he has exhibited a pattern of behavior in his videos that makes his channel not only unsuitable for advertisers, but also potentially damaging to the broader creator community.”

Yesterday, during a series of “fake news” hearings in the U.S. led by a Parliamentary committee from the UK, YouTube's global head of policy Juniper Downs said that the company had found no evidence of videos pointing to Russian interference in the Brexit vote, but the platform continues to face a lot of controversy over how it vets content on its site, and how that content is subsequently used unscrupulously for financial gain. (YouTube was notably criticised for taking too long to react to the Japanese video that started all of Paul's woes.)

This is a contagion problem for YouTube: not only do situations like this damage public perception of the service — and potentially have an impact on viewership — but they could affect how much the most premium brands choose to spend on ads on the platform.

Interestingly, as YouTube continues work on ways of improving the situation with a mix of both machine learning and human approaches, it appears to be starting to reach beyond even the content of YouTube itself.

The Tide Pod suggestion came on Twitter — Paul wrote that he would swallow one Tide Pod for each retweet — and appears to have since been deleted.

Generally, YouTube reserves the right to hide ads on videos and watch pages — including ads from certain advertisers or certain formats.

When a person commits especially serious or repeated violations, YouTube might choose to disable ads across the whole channel or suspend the person from its Partner program, which is aimed at channels that have reached 4,000 watch hours in 12 months and 1,000 subscribers, and lets creators make money from a special tier of ads and via the YouTube Red subscription service. (This is essentially where Paul has landed today.)
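As a concrete illustration of the thresholds just mentioned, a toy eligibility check might look like this (the function name and inputs are invented for the example; YouTube's real evaluation involves far more than two numbers):

```python
# Toy eligibility check mirroring the Partner program thresholds the article
# cites: 4,000 watch hours in 12 months and 1,000 subscribers.
def partner_eligible(watch_hours_12mo: float, subscribers: int) -> bool:
    return watch_hours_12mo >= 4_000 and subscribers >= 1_000

print(partner_eligible(5_200, 950))    # False: subscriber threshold not met
print(partner_eligible(4_100, 1_500))  # True: both thresholds met
```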

Since YouTube is wary of getting into the censorship game, it's leaving an exit route open to people who choose to post controversial things anyway. Posters can turn off ads on individual videos. From what we understand, Paul's channel and videos will be reevaluated in the coming weeks to see if they meet guidelines.

It's not at all clear how much Paul has made from his YouTube videos. One estimate puts his YouTube ad revenue at between $40,000 and $630,000 per month, while another puts it at $270,000 per month (or around $3.25 million per year). To note, he'd already been removed from the Preferred program and the Originals program, so that would have already dented his YouTube income.

And you have to ask whether suspending ads genuinely fixes the bigger content issues on the platform. While an advertising suspension might mean a loss of some revenue for the creator, it's not really a perfect solution.

Logan Paul, as one example, continues to push his own merchandise in his videos, and as a high-profile figure who has not lost his whole fan base, he will still get millions of views (and maybe more now because of this). In other words, the originally offending content (and a viable business model) is still out there, even if it doesn't have a YouTube monetization element attached to it.

On the other hand, SocialBlade, one of the services that tracks analytics on YouTube creators, notes that Paul's views have dropped 41 percent and subscribers are down 29 percent in the last month, so maybe there is a god.


A young startup with a timely offer: fighting propaganda campaigns online

The prevalence of so-called fake news is far worse than we imagined even a few months ago. Just last week, Twitter admitted there were more than 50,000 Russian bots trying to confuse American voters ahead of the 2016 presidential election.

It isn't merely elections that should concern us, though. So argues Jonathon Morgan, the co-founder and CEO of New Knowledge, a two-and-a-half-year-old, Austin-based cybersecurity company that's gathering up clients looking to fight online disinformation. (Worth mentioning: the 15-person outfit has also quietly raised $1.9 million in seed funding led by Moonshots Capital, with participation from Haystack, GGV Capital, Geekdom Fund, Capital Factory and Spitfire Ventures.)

We talked earlier this week with Morgan, a former digital content producer and State Department counterterrorism advisor, to learn more about his product, which is smartly leveraging concerns about fake social media accounts and propaganda campaigns to work with brands eager to preserve their reputations. Our chat has been edited lightly for length and clarity.

TC: Tell us a little about your background.

JM: I've spent my career in digital media, including as a [product director] at AOL when publications were moving onto the internet. Over time, my career moved into machine learning and data science. During the early days of the application-focused web, there wasn't a lot of engineering talent available, as it wasn't seen as sophisticated enough. People like me who didn't have an engineering background but who were willing to spend a weekend learning JavaScript and could create code fast enough didn't really need much of a pedigree or experience.

TC: How did that experience lead to you focusing on tech that tries to understand how social media platforms are manipulated?

JM: When ISIS was using techniques to inject conversations into social media, conversations that were elevated in the American press, we started trying to figure out how they were pushing their message. I did a little work for the Brookings Institution, which led to some work as a data science advisor to the State Department — developing counterterrorism strategies and understanding what public discourse looks like online and the difference between mainstream communication and what that looks like when it's been hijacked.

TC: Now you're pitching this service you've developed with your team to brands. Why?

JM: The same mechanics and tactics used by ISIS are now being used by much more sophisticated actors, from hostile governments to kids coordinating activity on the internet to undermine things they don't like for cultural reasons. They'll take Black Lives Matter activists and immigration-focused conservatives and amplify their discord, for example. We've also watched alt-right supporters on 4chan undermine movie releases. These kinds of digital insurgencies are being used by a growing number of actors to manipulate the way the public has conversations online.

We realized we could use the same ideas and tech to defend companies that are vulnerable to these attacks. Energy companies, financial institutions, other companies managing critical infrastructure — they’re all equally vulnerable. Election manipulation is just the canary in the coal mine when it comes to the degradation of our discourse.

TC: Yours is a SaaS product, I take it. How does it work?

JM: Yes, it's enterprise software. Our tech analyzes conversations across multiple platforms — social media and otherwise — and looks for signs that they're being tampered with, identifying who is doing the tampering and what messaging they are using to manipulate the conversation. With that info, our [client] can decide how to respond. Sometimes it's to work with the press. Sometimes it's to work with social media companies to say, “These are disingenuous and even fraudulent.” We then work with the companies to remediate the threat.
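Morgan doesn't spell out what those "signs" are, but one widely cited tamper signal is a burst of near-identical messages from many distinct accounts. A toy sketch of that single check (the window, threshold and data shape are invented for illustration; this is not New Knowledge's pipeline):

```python
# Toy check for coordinated amplification: many accounts posting near-identical
# text inside a short time window. Illustrative only; New Knowledge's actual
# detection pipeline is not public.
from collections import defaultdict

def coordinated_bursts(posts, window_secs=600, min_accounts=20):
    """posts: iterable of (unix_ts, account_id, text) -> suspicious (text, count)."""
    buckets = defaultdict(set)  # (normalized text, time bucket) -> account ids
    for ts, account, text in posts:
        key = (" ".join(text.lower().split()), ts // window_secs)
        buckets[key].add(account)
    return [
        (text, len(accounts))
        for (text, _), accounts in buckets.items()
        if len(accounts) >= min_accounts
    ]

# 25 accounts pushing the same message within minutes gets flagged.
posts = [(1000 + i, f"bot_{i}", "Vote NO on the bill!") for i in range(25)]
print(coordinated_bursts(posts))  # [('vote no on the bill!', 25)]
```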

TC: Which social media companies are the most responsive to these attempted interventions?

JM: There's a strong appetite for fixing their own problems at all the media companies we talk with. Facebook and Google have addressed this publicly, but there's also action taking place behind closed doors. A lot of individuals at these companies think there are problems that need to be solved, and they are amenable to [working with us].

The challenge for them is that I'm not sure they have a sense for who is responsible for [disinformation much of the time]. That's why they've been slow to address the problem. We think we add value as a partner because we're focused on this at a much smaller scale. Whereas Facebook is thinking about billions of users, we're focused on tens of thousands of accounts and conversations, which is still a meaningful number and can impact public perception of a brand.

TC: Who are some of your clients?

JM: We [aren't authorized to name them but] we sell to companies in the entertainment, energy and finance industries. We've also worked with public interest organizations, including the Alliance for Securing Democracy.

TC: What's the sales process like? Are you looking for changes in conversations, then reaching out to the companies impacted, or are companies finding you?

JM: Both. Either we discover something, or we'll be approached and do an initial threat assessment to understand the landscape and who might be targeting an organization, and from there [we'll decide with the client] whether there's value for them in engaging with us in an ongoing way.

TC: A lot of people have been talking this week about a New York Times piece that seemed to offer a glimmer of hope that blockchain platforms will move us beyond the internet as we know it today and away from the few large tech companies that also happen to be breeding grounds for disinformation. Is that the future, or is “fake news” here to stay?

JM: Unfortunately, online disinformation is becoming increasingly sophisticated. Advances in AI mean that it will soon be possible to manufacture images, audio and even video at unprecedented scale. Automated accounts that seem practically human will be able to engage directly with millions of users, just like your real friends on Facebook, Twitter or the next social media platform.

New technologies like blockchain that give us robust ways to establish trust will be part of the solution, even if they're not a magic bullet.


Facebook tries fighting fake news with publisher info button on links

Facebook believes showing Wikipedia entries about publishers and additional Related Articles will give users more context about the links they see. So today it's beginning a test of a new “i” button on News Feed links that opens up an informational panel. “People have told us that they want more information about what they're reading,” Facebook product manager Sara Su tells TechCrunch. “They want better tools to help them understand if an article is from a publisher they trust and evaluate if the story itself is credible.”

This box will display the start of a Wikipedia entry about the publisher and a link to the full profile, which could help people know whether it's a reputable, long-standing source of news…or a freshly set-up partisan or satire site. It will also display info from the publisher's Facebook Page even if that's not who posted the link, data on how the link is being shared on Facebook, and a button to follow the news outlet's Page. If no Wikipedia page is available, that info will be missing, which could also offer a clue to readers that the publisher may not be legitimate.
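Facebook hasn't said how it sources those entries, but Wikipedia does expose article summaries through a public REST API, so a sketch of how any client could pull the opening of a publisher's entry might look like this (the function name and the page-title mapping are hypothetical; this is not Facebook's code):

```python
# Sketch: fetch the lead extract of a publisher's Wikipedia entry via
# Wikipedia's public REST API summary endpoint.
from typing import Optional
import requests

def publisher_summary(wikipedia_title: str) -> Optional[str]:
    """Return the opening extract of a Wikipedia article, or None if absent."""
    resp = requests.get(
        f"https://en.wikipedia.org/api/rest_v1/page/summary/{wikipedia_title}"
    )
    if resp.status_code != 200:
        # No entry at all is itself a signal the publisher may be obscure.
        return None
    return resp.json().get("extract")

print(publisher_summary("The_New_York_Times"))
```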

Meanwhile, the button will also unveil Related Articles on all links where Facebook can generate them, rather than only when the article is popular or suspected of being fake news, as Facebook has previously tested. Trending information could also appear if the article is part of a Trending topic. Together, this could show people alternative takes on the same news bite, which might dispute the original article or provide more perspective. Previously, Facebook only showed Related Articles occasionally, and surfaced them directly on links without an extra click.

More Context, More Complex

The changes are part of Facebook's big, ongoing initiative to improve content integrity.

Of course, whenever Facebook shows more information, it creates more potential vectors for misinformation. “This work reflects feedback from our community, including publishers who collaborated on the feature development as part of the Facebook Journalism Project,” says Su.

When asked about the risk of the pulled-in Wikipedia entries having been doctored with false information, a Facebook spokesperson told me, “Vandalism on Wikipedia is a rare and unfortunate event that is usually resolved quickly. We count on Wikipedia to quickly resolve such situations and refer you to them for information about their policies and programs that address vandalism.”

And to avoid distributing fake news, Facebook says Related Articles will “be about the same topic — and will be from a wide variety of publishers that regularly publish news content on Facebook that gets high engagement with our community.”

“As we continue the test, we’ll continue listening to people’s feedback to understand what types of information are most useful and explore ways to extend the feature,” Su tells TechCrunch. “We will apply what we learn from the test to improve the experience people have on Facebook, advance news literacy, and support an informed community.” Facebook doesn’t expect the changes to significantly impact the reach of Pages, though publishers that knowingly distribute fake news could see fewer clicks if the info button repels readers by debunking their articles.

Getting this right is especially important after the fiasco this week when Facebook's Safety Check for the tragic Las Vegas mass shooting pointed people to fake news. If Facebook can't improve trust in what's shown in the News Feed, people might click all its links less. That could hurt innocent news publishers, as well as reduce clicks to Facebook's ads.

Image: Bryce Durbin/TechCrunch

Facebook initially downplayed the issue of fake news after the U.S. presidential election, where it was criticized for allowing pro-Trump hoaxes to proliferate. But since then, the company and Mark Zuckerberg have changed their tune.

The company has attacked fake news from all angles: using AI to seek out and downrank it in the News Feed, working with third-party fact checkers to flag suspicious articles, helping users more easily report hoaxes, demoting news sites filled with low-quality ads, and deleting accounts suspected of spamming the feed with crap.

Facebook's rapid iteration in its fight against fake news shows its ability to react well when its problems are thrust into the spotlight. But these changes have only come after the damage was done during the election, and now Facebook faces congressional scrutiny and widespread backlash, and is trying to self-regulate before the government steps in.

The company needs to more proactively anticipate sources of disinformation if it's going to keep up in this cat-and-mouse game against trolls, election interferers, and clickbait publishers.


Facebook security chief rants about misguided algorithm backlash

“I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos,” wrote Facebook Chief Security Officer Alex Stamos on Saturday in a sprawling tweetstorm. He claims journalists misunderstand the complexity of fighting fake news, deride Facebook for thinking algorithms are neutral when the company knows they aren't, and he urges reporters to talk to engineers who actually deal with these problems and their consequences.

Yet this argument minimizes many of Facebook's troubles. The issue isn't that Facebook doesn't know algorithms can be biased, or that people don't know these are tough problems; it's that the company didn't anticipate abuses of the platform and work harder to build algorithms or human moderation processes that could block fake news and fraudulent ad buys before they impacted the 2016 U.S. presidential election, instead of now. And his tweetstorm totally glosses over the fact that Facebook will fire employees who talk to the press without authorization.

[Update, 3:30pm PT: I commend Stamos for speaking so candidly to the public about an issue where more transparency is appreciated. But simultaneously, Facebook holds the information and context he says journalists, and by extension the public, lack, and the company is free to bring in reporters for the necessary briefings. I'd surely attend a “whiteboard” session like those Facebook has often held for reporters in the past on topics like News Feed sorting or privacy controls.]

Stamos' comments carry weight because he's leading Facebook's investigation into Russian election tampering. He was the Chief Information Security Officer at Yahoo before taking the CSO role at Facebook in mid-2015.

The sprawling response to recent backlash comes right as Facebook starts making the changes it should have implemented before the election. Today, Axios reports that Facebook just emailed advertisers to inform them that ads targeting “politics, religion, ethnicity or social issues” will have to be manually approved before they're sold and distributed.

And yesterday, Facebook updated an October 2nd blog post about disclosing Russian-bought election interference ads to Congress to note that “Of the more than 3,000 ads that we have shared with Congress, 5% appeared on Instagram. About $6,700 was spent on these ads,” implicating Facebook's photo-sharing acquisition in the scandal for the first time.

Stamos' tweetstorm was set off by Lawfare associate editor and Washington Post contributor Quinta Jurecic, who commented that Facebook's shift towards human editors implies that saying “the algorithm is bad now, we’re going to have people do this” actually “just entrenches The Algorithm as a mythic entity beyond understanding rather than something that was designed poorly and irresponsibly and which could have been designed better.”

Here's my tweet-by-tweet interpretation of Stamos' perspective:

He starts by saying journalists and academics don't get what it's actually like to implement solutions to hard problems, yet clearly no one has the right answers yet.

Facebook's team has supposedly been pigeonholed as naive about real-life repercussions, or too technical to see the human impact of its platform, but the outcomes speak for themselves about the team's failure to proactively protect against election abuse.

Facebook gets that people code their biases into algorithms, and works to stop that. But censorship resulting from overzealous algorithms hasn't been the real problem. Algorithmic negligence of worst-case scenarios for malicious use of Facebook's products is.

Understanding the risks of algorithms is what's kept Facebook from over-aggressively implementing them in ways that could have led to censorship, which is responsible, but it doesn't resolve the urgent problem of abuse at hand.

Now Facebook's CSO is pushing back on journalists' demands for better algorithms, because these algorithms are hard to build without becoming a dragnet that ensnares innocent content too.

What is outright false might be somewhat easy to spot, but the polarizing, exaggerated, opinionated content many see as “fake” is tough to train AI to spot because of the nuance that separates it from legitimate news, which is a valid point.

Stamos says it's not as simple as fighting bots with algorithms because…

…Facebook would end up becoming the truth police. That might lead to criticism from conservatives if their content is targeted for removal, which is why Facebook outsourced fact-checking to third-party organizations and reportedly delayed News Feed changes to address clickbait before the election.

Even though Facebook earns plenty of money, some datasets are still too big to hire enough people to review manually, so Stamos believes algorithms are an unavoidable tool.

Sure, journalists should do more of their homework, but Facebook employees or those at other tech companies can be fired for discussing work with reporters if they don’t have PR approval.

It's true that as journalists seek to fight for the public good, they may overstep the bounds of their knowledge. Though Facebook's best strategy here is likely being more thick-skinned about criticism while making progress on the necessary work, rather than complaining about the company's treatment.

Journalists do sometimes tie everything up in a neat bow when things are actually messier, but that doesn't mean we're not at the start of a cultural change about platform responsibility in Silicon Valley.

Stamos says it's not a lack of empathy or understanding of the non-engineering parts that's to blame, though Facebook's idealistic leadership surely did fail to anticipate how significantly its products could be abused to interfere with elections, hence all the reactive changes happening now.

Another fair point, as we often want aggressive protection against views we disagree with while dreading censorship of our own perspective when those things go hand in hand. But no one is calling for Facebook to be haphazard with the creation of these algorithms. We’re just saying it’s an urgent problem.

This is true, but so is the inverse. Facebook needed to think long and hard about how its systems could be abused if speech wasn't controlled in any way and fake news or ads were used to sway elections. Giving everyone a voice is a double-edged sword.

Yes, people should take a holistic view of free speech and censorship, knowing both must apply reasonably to both sides of the aisle to produce a coherent and enforceable policy.

This is a highly dramatic way of saying be careful what you wish for, as censorship of those you disagree with could balloon into censorship of those you support. But this actually positions Facebook as “the gods.” Yes, we want more effective protection, but no, that doesn't mean we want overly aggressive censorship. It's on Facebook, the platform owner, to strike this balance.

Not sure if this was meant to lighten the mood, but it made it sound like his whole tweetstorm was dashed off on a whim, which seems like an odd way for the world's largest social network to discuss its most pressing scandal ever.

Overall, everyone needs to approach this discussion with more nuance. The public should know these are tough problems with potential unintended consequences for rash moves, and that Facebook is aware of the severity now. Facebook employees should know that the public wants progress urgently, and while it might not understand all the intricacies and sometimes makes its criticism personal, it's still warranted to call for improvement.


South Park slams Facebook for selling fake news

“I make money from Facebook for my fake content in order to pay Facebook to promote my fake stories,” said Professor Chaos in one of the most brutal and succinct criticisms of the social network to date. The latest episode of South Park pulled no punches in its take-down of the Facebook fake news scandal. It portrays Mark Zuckerberg as an indecipherable bully protecting fake news spreaders for profit, says kids can't recognize lies on the app, and blames everyone for letting Facebook so deep into our lives.

Meanwhile, the episode pokes fun at Netflix for greenlighting low-quality original series, and riffs on the horrible abuse of women by Hollywood mogul Harvey Weinstein.

South Park’s intense, episode-long focus on Facebook’s fake news problems underlines the severity of the mainstream backlash. The blunt characterization of Facebook and Zuckerberg, and the direct damage fake news has on the show’s protagonists, could force the company to see its actions and explanations through the lens of the public.

“Children lack the cognitive ability to determine what’s true”

Spoilers ahead. If you care, you should probably just watch the 22-minute episode, which is both funny and jaw-dropping in how aggressively it attacks Zuckerberg in particular.

The plot is essentially that the school boys of South Park have formed a superhero squad and are trying to sell Netflix an original TV series based on their escapades. But their nemesis, Professor Chaos, ruins their reputation and Netflix deal by publishing fake news on Facebook saying the heroes do disgusting things, and then promotes those stories with Facebook ads.

“Look fellas, you have a right to be on Facebook, and I have a right to be on Facebook, and sometimes that’s gonna cause a little…chaos,” says the villain.

The line seems to reference, or at least align with, Zuckerberg's statement about Donald Trump accusing Facebook of being “anti-Trump.” Zuckerberg responded that “Trump says Facebook is against him. Liberals say we helped Trump. Both sides are upset about ideas and content they don’t like. That’s what running a platform for all ideas looks like.”

Professor Chaos goes on to build a profitable fake news and ads farm. The parents of South Park start seeing the fake news, and believe the kids are performing unspeakable sex acts on innocent victims.

But one parent stands up and says [warning: graphic language]: “We all know there's been a lot of mixing of truth and fiction on Facebook lately, and children lack the cognitive ability to determine what's true and what isn't on Facebook. That's now why we have kids dressing up in costumes, eating poop, and having sex with antelopes in our town.”

The kids are actually acting pretty normal and can spot the lies, but the parents are the ones unwittingly buying into it.

“You all brought Mark Zuckerberg into your lives”

The parents invite Zuckerberg to town for questioning. But when one says, “Facebook has become a tool for some to disrupt our country and our community,” Zuckerberg laughs off the critique, saying, “You say these things as if they are my fault, and yet they are not.”

When another responds, “Well, you did create a platform with a monetary incentive for people to spread misinformation,” Zuck tells the town it cannot block his fighting style, and waves his arms while making sound effects like an old kung fu movie villain. This seems to be a dig at both Facebook's unrelenting expansion into every area of life, and Zuckerberg's at-times opaque public speaking style.

The heroes confront Professor Chaos and Zuckerberg, and say to the CEO, “This kid is deliberately lying about us on your platform for no other reason than to cause harm. Why are you protecting him?” “Simple, he paid me $17.23,” Zuckerberg answers. It's clear that many see Facebook's policy of allowing fake news in the name of free speech as an excuse for greed.

In reality, Facebook's execs have so much money they probably don't care much about earning more. My seven years of reporting on and interviewing the company lead me to believe it earnestly believes in free speech despite the ugly side effects, and that this scandal has been driven by its idealistic leadership's naivety about the worst of humanity rather than greed.

Every South Park episode, while laced with profanity and silliness, resolves with a moral. In this case, the townspeople demand police shoot Zuckerberg, or at least kick him out of town. But the police chief asks, “Who invited Mark Zuckerberg to town in the first place?,” and the public glumly acknowledges “we did.” “You all should have thought harder about this before letting him into your lives,” the chief chastises the town, and everyone watching South Park.

In the end, the kids gang-stomp Zuckerberg until he fights back, but catch only the second half of the fight on Facebook Live, in turn ruining his reputation despite his protests that it's all untrue. With a touch of his smartphone, the defeated Zuckerberg neutralizes the fake news pushers, with the show poking the real Zuckerberg for not putting his power to more drastic use. The kids get their Netflix show, and Professor Chaos' dad castigates Vladimir Putin for setting a bad example.

The lessons are clear. South Park highlights how Facebook is profiting off fake news, which the company needs to avoid, even if it means making things harder for innocent advertisers. And as members of the public, we must accept some of the accountability for Facebook's influence, because we allowed ourselves to become so addicted to its content and to treat it like a verified news source.

Now the question is, did Zuck think it was funny?


Trump calls Facebook anti-Trump so it goes soft on him

Trump may have found a way to tie Facebook's hands as it investigates Russian interference in the election.

Without citing any evidence or even a reason, Donald Trump today declared that “Facebook was always anti-Trump”. That’s despite Trump’s campaign heavily relying on targeted Facebook ads during the election to rally citizens sympathetic to his brand of nationalism.

This morning Trump tweeted, “Facebook was always anti-Trump. The Networks were always anti-Trump hence, Fake News, @nytimes (apologized) & @WaPo were anti-Trump. Collusion?…But the people were Pro-Trump! Virtually no President has accomplished what we have accomplished in the first 9 months-and economy roaring”.

Despite the claim, Facebook’s share price is still up 1.35% today, showing investors don’t believe the tweet portends Trump meddling with Facebook’s business.

[Update: Mark Zuckerberg has responded to Trump's tweet, writing, “Trump says Facebook is against him. Liberals say we helped Trump. Both sides are upset about ideas and content they don’t like. That’s what running a platform for all ideas looks like.” Zuckerberg went on to describe how Facebook's impact on the election was more about giving everyone, including the candidates, a voice than malicious interference or bias.]

From one angle, though, Trump might be right. Facebook is led by liberals who support immigrants and refugees, the LGBT community, and equal rights for women and Muslims — all of which Trump has railed against.

“I hear fearful voices calling for building walls and slowing immigration,” CEO Mark Zuckerberg said at Facebook's April 2016 F8 developer conference. He advocated that, “Instead of building walls, we can help build bridges. And instead of dividing people, we can help bring people together.” Zuckerberg has also directly spoken out against Trump's moves against Dreamers who immigrated to the US as children, and his push for a transgender troop ban.

I attended then-President Barack Obama's town hall at Facebook HQ in 2011, and it was clear that the employee section enthusiastically whooped it up for Democratic policies and grumbled when Obama and Zuckerberg discussed the Republican agenda.

Zuckerberg leads a town hall session at Facebook's headquarters with President Obama in 2011

Yet on the other side, Facebook has purposefully tried to avoid seeming biased against conservatives. Anonymous sources told Gizmodo that Facebook's Trending news team purposefully suppressed conservative news outlets.

While Facebook's internal investigation reportedly found no proof of this, it made significant changes to how Trending topics were surfaced, and moved to a largely algorithm-driven system to reduce the potential for human bias. It also met with conservative news outlets to promise them a balanced platform.

Some believe this scandal is what led Facebook to be soft in its initial response to fake news during the election, as this content was more commonly pushed by right-wing new media sources. Gizmodo reported that Facebook shelved an anti-fake-news update to its News Feed during the election for fear of eliciting Republican backlash against the platform. Such a backlash could have endangered Facebook's ad-driven business model, and could also have pushed conservatives off the social network, worsening the polarization of the country.

In the end, Facebook was blamed for allowing fake news to proliferate in ways that might have assisted Trump's election victory.

And so, here we may find a rationale for Trump's criticism of Facebook today. If he can embolden critics who say Facebook leans left, the company may be less aggressive in tackling fake news and in its ongoing investigation into Russian interference in the election. It could also deter Facebook from potentially penalizing or blocking Trump's Facebook account for distributing hate or threats, as many have called on Twitter to do.

Facebook today outlined 11 tactics it used to thwart interference in the German federal election. But it could encounter resistance from Trump's followers for trying to implement these in the U.S.

As long as Facebook must actively combat the perception that it’s anti-Trump, it may have to act more pro-Trump, or at least neutral in the face of his incendiary actions.
