The prevalence of so-called fake news is far worse than we imagined even a few months ago. Just last week, Twitter admitted there were more than 50,000 Russian bots trying to confuse American voters ahead of the 2016 presidential election.
It isn’t only elections that should concern us, though. So argues Jonathon Morgan, the co-founder and CEO of New Knowledge, a two-and-a-half-year-old, Austin-based cybersecurity company that’s gathering up clients looking to fight online disinformation. (Worth mentioning: the 15-person outfit has also quietly raised $1.9 million in seed funding led by Moonshots Capital, with participation from Haystack, GGV Capital, Geekdom Fund, Capital Factory and Spitfire Ventures.)
We talked earlier this week with Morgan, a former digital content producer and State Department counterterrorism advisor, to learn more about his product, which is cleverly capitalizing on concerns about fake social media accounts and propaganda campaigns to work with brands eager to protect their reputations. Our chat has been edited lightly for length and clarity.
TC: Tell us a little about your background.
TC: How did that experience lead to you focusing on tech that tries to understand how social media platforms are manipulated?
JM: When ISIS was using techniques to inject conversations into social media — conversations that were amplified in the American press — we started trying to figure out how they were pushing their message. I did a little work for the Brookings Institution, which led to a stint as a data science advisor to the State Department, developing counterterrorism strategies and understanding what public discourse looks like online and the difference between mainstream communication and what that looks like when it’s been hijacked.
TC: Now you’re pitching this service you’ve developed with your team to brands. Why?
JM: The same mechanics and tactics used by ISIS are now being used by much more sophisticated actors, from hostile governments to kids who are coordinating activity on the internet to undermine things they don’t like for cultural reasons. They’ll take Black Lives Matter activists and immigration-focused conservatives and amplify their discord, for example. We’ve also watched alt-right supporters on 4chan undermine movie releases. These kinds of digital insurgencies are being used by a growing number of actors to manipulate the way the public has conversations online.
We realized we could use the same ideas and tech to defend companies that are vulnerable to these attacks. Energy companies, financial institutions, other companies managing critical infrastructure — they’re all equally vulnerable. Election manipulation is just the canary in the coal mine when it comes to the degradation of our discourse.
TC: Yours is a SaaS product, I take it. How does it work?
JM: Yes, it’s enterprise software. Our tech analyzes conversations across multiple platforms — social media and otherwise — and looks for signs that a conversation is being tampered with, identifies who is doing the tampering and what messaging they are using to manipulate the conversation. With that info, our [client] can decide how to respond. Sometimes it’s to work with the press. Sometimes it’s to work with social media companies to say, “These are disingenuous and even fraudulent.” We then work with the companies to remediate the threat.
TC: Which social media companies are the most responsive to these attempted interventions?
JM: There’s a strong appetite for fixing their own problems at all the media companies we talk with. Facebook and Google have addressed this publicly, but there’s also action taking place behind closed doors. A lot of individuals at these companies think there are problems that need to be solved, and they are amenable to [working with us].
The challenge for them is that I’m not sure they have a sense for who is responsible for [disinformation] much of the time. That’s why they’ve been slow to address the problem. We think we add value as a partner because we’re focused on this at a much smaller scale. Whereas Facebook is thinking about billions of users, we’re focused on tens of thousands of accounts and conversations, which is still a meaningful number and can impact public perception of a brand.
TC: Who are some of your clients?
JM: We [aren’t authorized to name them, but] we sell to companies in the entertainment, energy and finance industries. We’ve also worked with public interest organizations, including the Alliance for Securing Democracy.
TC: What’s the sales process like? Are you out looking for changes in conversations, then reaching out to the companies impacted, or are companies coming to you?
JM: Both. Either we discover something or we’ll be approached and do an initial threat assessment to understand the landscape and who might be targeting an organization, and from there, [we’ll decide with the] client whether there’s value for them in engaging with us in an ongoing way.
TC: Plenty of people have been talking this week about a New York Times piece that seemed to offer a glimmer of hope that blockchain platforms will move us beyond the internet as we know it today and away from the few large tech companies that also happen to be breeding grounds for disinformation. Is that the future, or is “fake news” here to stay?
JM: Unfortunately, online disinformation is becoming increasingly sophisticated. Advances in AI mean that it will soon be possible to manufacture images, audio and even video at unprecedented scale. Automated accounts that seem practically human will be able to engage directly with millions of users, just like your real friends on Facebook, Twitter or the next social media platform.
New technologies like blockchain that give us robust ways to establish trust will be a part of the solution, even if they’re not a magic bullet.