Congress grills social media giants on efforts to curtail terror content

Congress demanded answers from tech giants including Facebook and Twitter on Wednesday about whether they’re doing enough to keep terror networks off social media, suggesting that the companies’ own reports on those efforts indicate they’re falling short.

Even Twitter executive Carlos Monje’s report that the microblogging service — with an estimated 330 million active users — has suspended more than 1.1 million terrorist-related accounts since mid-2015 wasn’t enough for Sen. Brian Schatz, D-Hawaii.

“Based on your results, you are not where you need to be,” said Schatz, who expressed concern about election-year chaos like what American enemies created in 2016.

In response, Monje said at the Senate commerce committee hearing that Twitter gets better “every day” at detecting terror-related and other dangerous messages. But he also acknowledged that company officials routinely “ask themselves the same question” about how they can improve.

The executives from Facebook, YouTube and Twitter also conceded at the hearing that they have yet to agree on a “shared standard” of what constitutes extremist or terrorist content. That came in response to questioning about the issues by the committee chairman, Sen. John Thune, R-S.D.

“That’s right, Mr. Chairman,” replied Facebook executive Monica Bickert.

However, she contended that the popular social media network does not allow anybody involved with the militant Islamist group Boko Haram, for example, to have a Facebook page “even if you are simply talking about the lovely weather.”

She also said Facebook does not permit “any praise for such groups or their actions.”

Thune raised concerns about a YouTube video that helped the so-called “Manchester bomber” build his explosive device. He cited a recent report that the video had been repeatedly posted to and taken down from YouTube, yet resurfaced on the site again this month.

“How is that possible?” Thune asked. The homemade bomb was detonated at the Manchester Arena, in the United Kingdom, in May 2017, killing some two dozen people and injuring more than 500 others. Islamist militants claimed responsibility for the attack.

YouTube executive Juniper Downs said the company was rapidly catching “re-uploads” of the video, then removing them.

She also said the company was sharing such info as part of the coalition formed with Facebook, Twitter and Microsoft to “better identify and remove” offensive content.

Still, Monje, a Twitter public policy director, acknowledged that keeping up with artificial intelligence and other forces that put out dangerous content had elements of a “cat and mouse” game.

He also said that his company, like Facebook, was trying to identify and notify users who might have been exposed to Russian internet trolls spreading misinformation during the 2016 election cycle.