March 21, 2023
At a Glance
- New York University professor Gary Marcus warns that ChatGPT and other large language models could help troll farms flood the internet with misinformation.
- He argues AI releases should be paused and vetted the same way drugs are, with lengthy trial periods.
- Marcus sees empowering programmers as the main positive use case for generative AI tools.
U.S. elections are more than a year away, but a new threat to democracy is emerging: AI. While applications like ChatGPT are still nascent, the risk they pose is very real, according to New York University professor Gary Marcus.
Speaking at The Alan Turing Institute’s AI UK 2023 event, the psychology and neural science professor warned that large language models will further empower the troll farms that wreaked such havoc in the 2016 and 2020 elections.
Marcus likened the potential spread of fake news to the problems online science-fiction publisher Clarkesworld faced with AI submissions. The magazine had to close submissions after a wave of entries featuring plagiarized material that coincided with the surge of interest in ChatGPT.
“I don't think any of those stories were good, but they all took time for humans to deal with. And it's going to be like that with misinformation,” said Marcus.
A million stories in five minutes
The professor, who founded the startup Geometric Intelligence that was later acquired by Uber, warned that large language models could supercharge the capabilities of troll farms, which were already spreading fake news stories on an industrial scale.
“For decades, troll farms had hundreds or thousands of iPhones working in parallel to make stories. Now, you don’t just make one story; you can make 100,000, even a million, in five minutes or an hour.”
Marcus explained that these troll farms would be operated either by state actors intentionally trying to interfere in U.S. elections or by malicious actors who simply want internet users to click on ads to generate money.
To be sure, internet users cannot simply type malevolent prompts into ChatGPT, since its creator, OpenAI, has built guardrails to prevent it. However, jailbreaks for OpenAI’s chatbot exist and have proliferated on the dark web, Marcus said.
The professor has seen plenty of examples of ChatGPT output in which users got it to say “all the vilest things you could imagine. They made up conspiracy theories about QAnon (the far-right conspiracy movement) and all kinds of really upsetting stuff.”
The AI-generated stories being churned out are also not identical to one another, which makes them harder to take down. And because they are posted to sites designed to trick users into thinking they are news, readers struggle to determine whether the stories are real.
Tools to identify AI-generated content exist, with mixed results. Several research teams are working on them, including OpenAI, which launched its AI Text Classifier in early February. However, the classifier merely predicts the likelihood that a piece of text was generated by an AI tool; it cannot say so for certain. Turnitin, the online plagiarism checker used by universities across the globe, is working on something similar for essays. Turnitin claims its AI text detector identifies 97% of ChatGPT- and GPT-3-authored writing, with a false positive rate below 1%.
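Why can such classifiers only estimate, never certify? Detectors in this family typically score how statistically predictable a text is to a language model and map that score to a probability. The toy sketch below illustrates the idea with entirely hypothetical numbers and a made-up calibration; it is not OpenAI's or Turnitin's actual method.

```python
import math

def detector_score(token_logprobs):
    """Toy AI-text detector: score text by its average per-token
    log-probability under some scoring model.

    Machine-generated text tends to be highly predictable (low
    perplexity), so the sketch maps low perplexity to a high
    "likely AI-written" probability. Real classifiers are more
    sophisticated, but share the same limitation: the output is a
    likelihood, not a proof of authorship.
    """
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    perplexity = math.exp(-avg_logprob)
    # Hypothetical calibration: squash perplexity through a logistic
    # curve so lower perplexity yields a score closer to 1.0.
    return 1.0 / (1.0 + math.exp(perplexity - 20.0))

# Hypothetical per-token log-probabilities from a scoring model:
predictable = [-0.5] * 50   # very predictable, machine-like text
surprising = [-4.0] * 50    # more surprising, human-like text

print(detector_score(predictable))  # close to 1.0 -> "likely AI"
print(detector_score(surprising))   # close to 0.0 -> "likely human"
```

Because the score is continuous, any decision threshold trades false positives against false negatives, which is why even a detector with strong headline accuracy can still misclassify individual texts.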
Watermarking videos has proven successful, but the NYU professor did not seem to think the same would work for text.
“Watermarking for text is probably a lost cause,” Marcus said. “So far, I do not think any of these systems actually work that well. They are not so hard to defeat and they can be defeated by accident. It is worth trying … (but) I don't have a lot of faith.”
Google vs. Microsoft
Microsoft and Google have rushed to integrate these new technologies into tools across their product suites. Such a rush did not sit well with Marcus, who described it as “a race to the bottom.”
“Right now, the technology that we have is pretty widely known, and so there is not even that much to protect from a business perspective,” he said. “You can have your own unique dataset, but what we are seeing is, when there is an advance in one company, it is often followed (by a competitor) weeks later or days later.”
Marcus warned that the fear of falling behind competitors means companies are pushing out unfinished products and services.
His comments come after reports emerged that Microsoft had cut its AI ethics team as senior leaders, including CEO Satya Nadella, sought to ship its OpenAI-powered integrations as fast as possible.
“This is not a great dynamic. There’s a little bit of a race to the bottom. And that’s really disconcerting,” Marcus said.
He opined that product releases should be scaled back or paused, with new systems vetted the same way drugs are: with lengthy trial periods.
His talk was not all doom and gloom, however. Marcus spoke about the possibility of AI improving the lives of programmers by empowering them to work more effectively.
“Computer programmers are uniquely well equipped to use this because you cannot become a coder and survive as a coder unless you can debug. Not everybody has that skill,” he said.
“It turns out ChatGPT, Codex (and the like) can write programs. They are not foolproof, but they can write pieces of programs kind of like an assistant. They’re 30% correct, which sounds lousy but they do not need to be 100% correct with that job.”