Can AI fight fake news?
“If all others accepted the lie which the Party imposed – if all records told the same tale – then the lie passed into history and became the truth.” – George Orwell, 1984
September 9, 2021
I’ve wondered a few times over the past couple of years if AI could be used to fight fake news, disinformation, and misinformation.
In an analysis I conducted back in March 2020, it didn’t appear so.
I couldn’t find much evidence of commercialized solutions – only a few academic research projects focused on disinformation detection.
While NLP-driven text analysis is an ideal way to analyze massive amounts of data quickly, the messiness, nuance, and context of language make it hard for humans, let alone machines, to discern truth from untruth.
But since then, there appears to be some progress in applying AI to the fake news problem.
Google, Microsoft dip their AI toes in the fake news fighting pool
While the major social networks (Twitter, Facebook and its Instagram and WhatsApp properties, Snap, TikTok, and others) wrestle with fake news policies and try to figure out how to navigate First Amendment questions and separate statements of fact from opinion, Microsoft and Google have developed some AI tools for fighting fake news:
Microsoft and Arizona State University researchers published a paper in April 2020 that outlines an AI framework for monitoring social media to detect fake news. This remains laboratory research only.
In September 2020, Google announced in a blog post that it had made some progress in using AI to monitor news and identify potential information threats, to better inform Google Search. Its Intelligence Desk is a team of Google analysts monitoring news 24/7; AI now helps that team recognize breaking news automatically and accurately within minutes. In a related initiative, Google users can get quick access to facts related to their searches. These “fact checks” display as fact check labels in Google Search, Google News, and Google Images, and Google said in the post that people had seen them more than 4 billion times as of its writing in 2020. (NOTE: It would be interesting if Google were to apply some of these misinformation/disinformation tools to YouTube, the company’s most successful social media platform.)
In a separate initiative, Google assisted the fact-checking non-profit Full Fact. In a March 2021 blog post, the company described how it donated $2 million and loaned seven Googlers to the organization to help build AI tools that help fact checkers detect claims made by key politicians. Google says it helped Full Fact increase the number of claims it could process 1,000-fold, to more than 100,000 claims a day. That said, it is unclear which organizations use Full Fact’s work and how widely it reaches.
As you can see, these are well-intentioned attempts to address the issue, and they certainly give Microsoft and Google a leg up on the social networks, but the work to date will hardly solve the overall problem.
What are the challenges for AI, or any solution for that matter, in fighting fake news? Typically, solving these kinds of problems requires someone identifying a business opportunity.
Other than Google safeguarding search, are there use cases that present an AI opportunity to make money from fighting fake news?
I spoke with two AI executives with interesting perspectives on these ideas – Paul Barba, Chief Scientist at social media monitoring NLP player Lexalytics; and Sean Gourley, CEO of Primer.ai, an NLP startup that is developing machine intelligence solutions for governments and commercial organizations.
Is AI suited to help fight against misinformation and fake news, and if so, in what way?
Barba: “AI almost has to be part of a solution because human manpower just won’t scale in fighting it. But I think it’s hard to get AI to figure out truth on its own without humans in the loop."
"It certainly makes sense to use AI to alert/flag humans to fake news/misinformation, AI can be good at curating the data.”
Gourley: “For processing high volumes of text-based data at speed, AI is 2000x better than human analysis."
"But the challenge is defining misinformation – what is real and what is not real. Science has proven that what is true today is not necessarily true tomorrow, so absolute truth is difficult. Machines can’t get to absolute truth."
"The genius of misinformation and the goal of misinformation is to amplify things that aren’t true. What AI can be leveraged for is more holistic -- the detection of an active campaign to push out misinformation. AI can do that, and governments are interested in tools that can do that as an early warning system.”
What are the particular challenges for AI in regard to fighting fake news?
Gourley: “There are plenty of challenges. AI models only know what they’ve been trained to know, so models have to be constantly retrained."
"That means humans have to do a good job in retraining the models -- if they don’t, it’s really like garbage in, garbage out. There are challenges to leveraging deep learning in our work, as the models might be too broad and the system won’t learn fast enough. With machine learning, we can keep the scope narrow and the AI will learn quicker."
"As with other natural language use cases, sarcasm and tone and fuzzy language make it hard for AI to understand what’s really being said. So those are all issues for AI and fake news regardless of the use case. Then consider what social media companies face in fighting fake news."
"They face a different challenge to filtering censorship. AI’s success rate for censorship detection purposes is in the low 90s. Automation at this layer is beyond the capabilities of the algorithms today because of absolute truth."
"That being said, social media companies should be deploying tools to detect misinformation/set up early warning capabilities.”
Barba: “Language is tough for AI and there’s only so much it can do. Honestly, the most fundamental solution to misinformation, disinformation, and fake news is good journalism."
"We haven’t funded journalism and that leaves holes. We as consumers also need to rediscover critical thinking and apply it to our consumption of information and media.”
Is fighting misinformation/fake news a monetization opportunity for companies such as yours? Why or why not?
Gourley: “For us it certainly is. Our technology is being used today to detect emerging information warfare campaigns by analyzing where claims originate and how they are disseminated through the media."
"Primer with the U.S. military is building an AI platform that will be able to automatically identify and assess suspected disinformation. The solution will be used by the Air Force and Special Operations Command."
"The way it works is this – the system continuously collects a massive amount of broad data. It looks for misinformation claims that have been made, and then identifies claim attribution – who are making these claims. Finally, it analyzes counterclaims."
"All of these elements collectively detect a potential issue which security analysts can then use much more quickly to strategize against threats. It’s very early days in the commercial space for misinformation use cases but monitoring meme stocks and stock manipulation makes sense. There is a lot at stake for the investment ecosystem to keep that clean.”
Barba: “We haven’t had any clients ask us about misinformation yet. I don’t see demand for it in the corporate sphere."
"Brands are using our solution to monitor social media to understand brand engagement and customer sentiment."
"A few years ago, there was a real fear that Twitter and other social media can hurt brands. So, our customers use our solution to help them plan how to react."
"It’s interesting that these brands have a very similar problem to the issue presented by fake news because there is too much being said in all of these channels. Brands can’t handle this avalanche of market chatter, so there is opportunity in that for NLP to help sort that out.”
Conclusions
While there certainly seems to be a promising market for AI in fighting fake news, misinformation, and disinformation, it’s clear AI is not the silver bullet that will tame the beast.
AI’s strength in this fight is its ability to process copious amounts of data rapidly.
Its weakness is its limited natural language understanding, which curbs its ability to discern truth and parse fact from opinion.
The fight against fake news will take a suite of weapons, some of them technological, such as AI and media provenance initiatives like Project Origin, and some not.
I agree with Lexalytics’ Paul Barba: the greatest weapon we have against fake news, misinformation, and disinformation is the collective human will to seek the truth.
About the Author

Mark Beccue is a principal analyst contributing to Omdia’s Artificial Intelligence practice, with a focus on natural language and AI use cases.