Facebook is now letting partners fact-check photos and videos beyond news articles, and proactively review stories before Facebook asks them to. Facebook is also now preemptively blocking the creation of millions of fake accounts per day. Facebook revealed this news on a conference call with journalists [Update: and later a blog post] about its efforts around election integrity that included Chief Security Officer Alex Stamos, who's reportedly leaving Facebook later this year but asserts he's still committed to the company.
Stamos outlined how Facebook is building ways to address fake identities, fake audiences grown illicitly or pumped up to make content appear more popular, acts of spreading false information, and false narratives that are intentionally deceptive and shape people's views beyond the facts. "We're trying to develop a systematic and comprehensive approach to tackle these challenges, and then to map that approach to the needs of each country or election," said Stamos.
Samidh Chakrabarti, Facebook's product manager for civic engagement, also explained that Facebook now proactively looks for foreign-based Pages producing civic-related content inauthentically. It removes them from the platform if a manual review by the security team finds they violate its terms of service.
"This proactive approach has allowed us to move more quickly and has become a really important way for us to prevent divisive or misleading memes from going viral," said Chakrabarti. Facebook first piloted this tool in the Alabama special election, where the proactive system identified and shut down a ring of Macedonian spammers meddling with the election to earn money. It has since deployed the tool to protect the Italian elections and will use it for the U.S. midterm elections.
Meanwhile, advances in machine learning have allowed Facebook "to find more suspicious behaviors without assessing the content itself" and to block millions of fake account creations per day "before they can do any damage," said Chakrabarti. [Update 2:15 pm PST: Facebook is expected to share more about these tools during its "Fighting Abuse @Scale" conference in SF on April 25th.]
Facebook implemented its first slew of election protections back in December 2016, including working with third-party fact checkers to flag articles as false. But those red flags were shown to entrench some people's belief in false narratives, leading Facebook to shift to showing Related Articles with perspectives from other reputable news outlets. As of yesterday, Facebook's fact-checking partners began reviewing suspicious photos and videos, which can also spread false information. This could reduce the spread of false-news image memes that live on Facebook and require no extra clicks to view, like doctored photos showing Parkland school shooting survivor Emma Gonzalez ripping up the Constitution.
Normally, Facebook sends fact checkers stories that are being flagged by users and going viral. But now in countries like Italy and Mexico, in anticipation of elections, Facebook has enabled fact checkers to proactively flag things, because in some cases they can identify false narratives that are spreading before Facebook's own systems do. "To reduce latency in advance of elections, we wanted to make sure we gave fact checkers that ability," said Facebook's News Feed product manager Tessa Lyons.
With the midterms coming up fast, Facebook has to both secure its systems against election interference and convince users and regulators that it has made real progress since the 2016 presidential election, when Russian meddlers ran rampant. Otherwise, Facebook risks another endless news cycle about it being a detriment to democracy, which could trigger reduced user engagement and government intervention.