I’ve wondered a few times over the past couple of years if AI could be used to fight fake news, disinformation, and misinformation.
In an analysis I conducted back in March 2020, it didn’t appear so.
I couldn’t find much evidence of commercialized solutions – only a few academic research projects which were focused on disinformation detection.
While NLP-driven text analysis is an ideal way to analyze massive amounts of data quickly, the messiness, nuance, and context of language make it hard for humans, let alone machines, to discern truth from untruth.
But since then, there appears to be some progress in applying AI to the fake news problem.
While the major social networks – Twitter, Facebook (including Instagram and WhatsApp), Snap, TikTok, and others – wrestle with fake news policies and try to figure out how to navigate First Amendment rights and separate statements of fact from opinions, Microsoft and Google have developed some AI tools for fighting fake news:
As you can see, these are good-hearted attempts to address the issue and certainly give Microsoft and Google a leg up on the social networks, but the work to date will hardly thwart the overall problem.
What are the challenges for AI, or any solution for that matter, in fighting fake news? Typically, finding solutions to these types of issues requires someone identifying a business opportunity.
Other than Google safeguarding search, are there use cases that present an AI opportunity to make money from fighting fake news?
I spoke with two AI executives with interesting perspectives on these ideas – Paul Barba, Chief Scientist at social media monitoring NLP player Lexalytics; and Sean Gourley, CEO of Primer.ai, an NLP startup that is developing machine intelligence solutions for governments and commercial organizations.
Is AI suited to help fight against misinformation and fake news, and if so, in what way?
Barba: “AI almost has to be part of a solution because human manpower just won’t scale in fighting it. But I think it’s hard to get AI to figure out truth on its own without humans in the loop."
"It certainly makes sense to use AI to alert/flag humans to fake news/misinformation, AI can be good at curating the data.”
Gourley: “For processing high volumes of text-based data at speed, AI is 2000x better than human analysis."
"But the challenge is defining misinformation – what is real and what is not real. Science has proven that what is true today is not necessarily true tomorrow, so absolute truth is difficult. Machines can’t get to absolute truth."
"The genius of misinformation and the goal of misinformation is to amplify things that aren’t true. What AI can be leveraged for is more holistic -- the detection of an active campaign to push out misinformation. AI can do that, and governments are interested in tools that can do that as an early warning system.”
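Gourley's "early warning" idea can be made concrete with a toy sketch. This is not Primer's system – the similarity threshold, window size, and account count below are invented for illustration – but it shows the basic shape: flag clusters of near-identical messages pushed by many distinct accounts in a short window, a crude signal of coordinated amplification.

```python
from difflib import SequenceMatcher

def near_duplicate(a: str, b: str, threshold: float = 0.85) -> bool:
    """Treat two posts as copies of the same message if they are highly similar."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def flag_campaigns(posts, min_accounts=3, window=3600):
    """posts: list of (timestamp, account, text) tuples.

    Returns one representative text per cluster that was pushed by at least
    `min_accounts` distinct accounts within `window` seconds -- a crude proxy
    for a coordinated amplification campaign."""
    clusters = []  # each: {"texts": [...], "accounts": set, "times": [...]}
    for ts, account, text in posts:
        for c in clusters:
            if near_duplicate(text, c["texts"][0]):
                c["texts"].append(text)
                c["accounts"].add(account)
                c["times"].append(ts)
                break
        else:
            clusters.append({"texts": [text], "accounts": {account}, "times": [ts]})
    return [
        c["texts"][0]
        for c in clusters
        if len(c["accounts"]) >= min_accounts
        and max(c["times"]) - min(c["times"]) <= window
    ]
```

A production system would replace the pairwise string comparison with scalable text embeddings, but the logic – many accounts, same message, small time window – is the part analysts care about.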
What are the particular challenges for AI in regard to fighting fake news?
Gourley: “There are plenty of challenges. AI models only know what they’ve been trained to know, so models have to be constantly retrained."
"That means humans have to do a good job in retraining the models -- if they don’t, it’s really like garbage in, garbage out. There are challenges to leveraging deep learning in our work, as the models might be too broad and the system won’t learn fast enough. With machine learning, we can keep the scope narrow and the AI will learn quicker."
"As with other natural language use cases, sarcasm and tone and fuzzy language make it hard for AI to understand what’s really being said. So those are all issues for AI and fake news regardless of the use case. Then consider what social media companies face in fighting fake news."
"They face a different challenge to filtering censorship. AI’s success rate for censorship detection purposes is in the low 90s. Automation at this layer is beyond the capabilities of the algorithms today because of absolute truth."
"That being said, social media companies should be deploying tools to detect misinformation/set up early warning capabilities.”
Barba: “Language is tough for AI and there’s only so much it can do. Honestly, the most fundamental solution to misinformation, disinformation, and fake news is good journalism."
"We haven’t funded journalism and that leaves holes. We as consumers also need to rediscover critical thinking and apply it to our consumption of information and media.”
Is fighting misinformation/fake news a monetization opportunity for companies such as yours? Why or why not?
Gourley: “For us it certainly is. Our technology is being used today to detect emerging information warfare campaigns by analyzing where claims originate and how they are disseminated through the media."
"Working with the U.S. military, Primer is building an AI platform that will be able to automatically identify and assess suspected disinformation. The solution will be used by the Air Force and Special Operations Command."
"The way it works is this – the system continuously collects a massive amount of broad data. It looks for misinformation claims that have been made, and then identifies claim attribution – who is making these claims. Finally, it analyzes counterclaims."
"All of these elements collectively detect a potential issue which security analysts can then use much more quickly to strategize against threats. It’s very early days in the commercial space for misinformation use cases but monitoring meme stocks and stock manipulation makes sense. There is a lot at stake for the investment ecosystem to keep that clean.”
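The claim-attribution step Gourley describes – who is making which claims – can be sketched in a few lines. To be clear, the regex pattern and source names here are invented for illustration, not taken from Primer's platform; the point is only the data structure: mapping each claim to the set of sources repeating it, so an analyst can spot a claim echoed by many outlets at once.

```python
import re
from collections import defaultdict

# Hypothetical reporting-verb pattern; a real system would use a trained
# claim-extraction model rather than a regex.
CLAIM_PATTERN = re.compile(
    r'(?P<source>[A-Z][\w .]+?) (?:said|claimed|reported) that (?P<claim>.+)'
)

def attribute_claims(documents):
    """Group who-said-what across a document stream: claim text -> set of sources."""
    attribution = defaultdict(set)
    for doc in documents:
        for m in CLAIM_PATTERN.finditer(doc):
            claim = m.group("claim").rstrip(".").lower()
            attribution[claim].add(m.group("source").strip())
    return attribution
```

A claim attributed to many sources in a short span is exactly the kind of signal that, combined with counterclaim analysis, lets analysts "strategize against threats" faster.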
Barba: “We haven’t had any clients ask us about misinformation yet. I don’t see demand for it in the corporate sphere."
"Brands are using our solution to monitor social media to understand brand engagement and customer sentiment."
"A few years ago, there was a real fear that Twitter and other social media can hurt brands. So, our customers use our solution to help them plan how to react."
"It’s interesting that these brands have a very similar problem to the issue presented by fake news because there is too much being said in all of these channels. Brands can’t handle this avalanche of market chatter, so there is opportunity in that for NLP to help sort that out.”
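The "avalanche of market chatter" problem Barba describes is a triage problem, and even a toy version shows why NLP helps. The sketch below is not Lexalytics' method – the word lists and bucket names are invented – but it illustrates the idea: score each mention so humans only read the ones that need a reaction.

```python
# Invented toy lexicons; a real system would use a trained sentiment model.
POSITIVE = {"love", "great", "fast", "helpful", "recommend"}
NEGATIVE = {"hate", "broken", "slow", "refund", "scam"}

def triage(mentions):
    """Sort brand mentions into buckets so analysts read only the risky ones."""
    buckets = {"escalate": [], "celebrate": [], "ignore": []}
    for text in mentions:
        words = {w.strip(".,!?") for w in text.lower().split()}
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        if score < 0:
            buckets["escalate"].append(text)   # negative chatter: react fast
        elif score > 0:
            buckets["celebrate"].append(text)  # positive chatter: amplify
        else:
            buckets["ignore"].append(text)     # neutral noise
    return buckets
```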
While there certainly seems to be a promising market for AI in fighting fake news, misinformation, and disinformation, it’s clear AI is not the silver bullet that will tame the beast.
AI’s strength in this fight is its ability to process copious amounts of data rapidly.
AI’s weakness is its limitations in natural language understanding, which curb AI’s ability to discern truth and parse facts from opinion.
I agree with Lexalytics’ Paul Barba – the greatest weapon we have to fight fake news, misinformation, and disinformation is the collective human will to seek the truth.
Mark Beccue is a principal analyst contributing to Omdia’s Artificial Intelligence practice, with a focus on natural language and AI use cases.