There’s No Such Thing as ‘Generative AI’
A personal essay from a University of Cambridge researcher whose work examines the ethics of artificial neural networks
January 31, 2023
Generative AI is having a moment. Driven by the release of computer applications that allow millions of people to use these tools for writing, image creation, speech synthesis, coding, music, and more, the term has seen a dramatic increase in usage since the tail end of last year − and shows little sign of slowing down.
Popular models include those from OpenAI (DALL-E 2, ChatGPT), Stability AI (Stable Diffusion) and Midjourney (Midjourney v4) − although there are dozens of others in use across a dizzying number of applications. But given such a vast pool of use cases − and the extreme speed at which the technology has come to dominate attention in the AI/ML industry and beyond − does it make sense to call it ‘generative AI’ at all?
Researchers, the press, and sections of the public have been quick to adopt the new lexicon to describe what many have identified as a technology set to finally fulfill at least some of the promise of AI. As think-pieces abound and venture capital firms scramble to produce market maps, those quick to sing the praises of generative AI might do well to approach the discourse surrounding the technology with a healthy dose of scepticism.
We might begin by asking what, exactly, we mean by generative AI. Let us take the second part first: artificial intelligence. To understand conceptions of AI, we could do worse than to start with conceptions of intelligence. The American psychologist Robert Sternberg famously reflected that “there seem to be almost as many definitions of intelligence as there were experts asked to define it.”
Despite consensus on only the lack of consensus, received wisdom typically holds that intelligence is associated with the equally fuzzy notions of ‘agency,’ ‘problem-solving’ or ‘meeting objectives.’ This poses a problem for understanding AI, whose definition also remains a source of spirited debate, although it is commonly linked with the development of computer systems able to perform tasks that normally require human intelligence.
Far from mere pedantry, the muddled definitions surrounding AI pose two problems for conceptions of its generative cousin. First, the haziness around definitions of AI (and, by extension, generative AI) is in part a function of the absence of agreement surrounding notions of intelligence − an absence that allows AI to seemingly refer to everything and nothing. Second, lofty claims of intelligence tend to overestimate the sophistication of these systems and underestimate the role of humans in ensuring their proper functioning.
As for what is precisely ‘generative’ about generative AI, common interpretations stress the ability of these systems to respond to human prompts by producing textual (such as text-to-text, text-to-code), aural (such as text-to-speech, text-to-audio) or visual (such as text-to-image, text-to-video) outputs accessible via an electronic interface. (There are also burgeoning sub-areas that do not use text as the primary modality for data entry.) In this framing, then, a simplified definition of the technology might be an AI system that relies on human interaction to produce textual, aural or visual outputs.
Generative AI is not AI
But are generative AI systems even a form of artificial intelligence? AI is, after all, not the same discipline as machine learning − a field of statistical inquiry focused on building systems that ‘learn.’ The so-called generative AI revolution is premised on the use of neural networks, a form of machine learning in which thousands or millions of interconnected processing nodes extract a predictive function by analyzing the underlying relationships in a set of data.
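To make that description concrete, here is a minimal sketch in Python (using only NumPy and a toy XOR dataset chosen purely for illustration; real systems are vastly larger, though built from the same ingredients) of what ‘extracting a predictive function from data’ looks like in practice: a small set of numerical weights adjusted, step by step, until the network’s outputs fit the examples it was shown.

```python
# A minimal, illustrative sketch: a tiny neural network (two layers of weights,
# trained with gradient descent on a toy XOR dataset) whose "knowledge" is nothing
# more than numbers adjusted until its outputs fit the training data.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR relationship exists only in these four examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The network's parameters: the "interconnected nodes" are just weight matrices.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(20_000):
    # Forward pass: inputs flow through the layers to produce predictions.
    hidden = sigmoid(X @ W1 + b1)
    pred = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every weight to reduce the prediction error.
    d_out = (pred - y) * pred * (1 - pred)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_hid
    b1 -= d_hid.sum(axis=0)

# After training, the predictions should approach [0, 1, 1, 0]: a statistical fit
# to the data, with no notion of what XOR "means" anywhere in the weights.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```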
Despite a tendency to conflate machine learning and AI, each belongs to a very different tradition. In the 1950s, researchers envisioned AI as a system for manipulating mental symbols, sought to use computers to instantiate a formal representation of the world, and understood intelligence through the lens of problem-solving. Machine learning, however, was conceived to simulate the interactions of the brain’s neurons, took learning as its paradigm of intelligence, and has a genesis that can be directly traced to the field of statistics.
Artificial intelligence and machine learning are often used interchangeably in today’s computer science discourse, but given the former’s lofty goals of understanding the meaning behind the world around us, we ought to be critical of any technology that claims to replicate even a portion of the breadth of human intelligence.
An example of machine learning, generative ‘AI’ is neither artificial nor intelligent. Generative AI systems are trained on vast corpora of data created by humans and, despite claims to the contrary, will continue to require human input for widespread adoption.
Mirroring the ‘ghost work’ of thousands of low-paid workers, often located in the Global South, who labeled the data used to train the previous generation of machine learning systems, the existence of generative AI is premised on the indiscriminate use of art, music, writing, and code produced by humans for training purposes. To create labels for a system intended to detect toxicity in ChatGPT, for example, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya that paid local workers to read and label harmful outputs.
And like early commercial deployments of neural networks in the 1980s that saw, for example, workers at the U.S. Postal Service correct errors from mail-sorting machines to ensure reliable operation, the widespread use of generative AI will require humans to oversee, correct, and tailor the systems’ outputs.
This is in part because, unlike the goals for AI introduced in the 1950s, the machine learning systems underpinning generative AI have no understanding of the outputs that they produce: In an ML-generated picture of a goose, there is no grasp of its ‘gooseness’ − only a statistical correlation among wings, feathers and beaks.
A chatbot designed to provide patients with medical advice might be able to produce useful answers to the prompts provided by its users, but it has no understanding of what the words represent. Like all ‘generative AI’ systems, it is simply aggregating historical data to predict the next output in a sequence. More troubling still, some systems have displayed a tendency to produce plausible-sounding but nonsensical outputs.
As a result, ‘generative AI’ will require a dramatic expansion in the number of workers whose primary role is to oversee these systems. Human labor will be pivotal for ensuring their reliable operation by providing something that machine learning techniques cannot: an understanding of meaning.
Take text-to-text models as an example. Despite the flashy interfaces and starry-eyed questions about whether such systems represent an important step towards the holy grail of AI research − artificial general intelligence − they are, at their core, next-word predictors with next to no understanding of the meaning of the words they generate.
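A deliberately crude sketch of that mechanic makes the point. The toy program below (a bigram frequency model, orders of magnitude simpler than any transformer-based system, and offered only as an illustration) merely tallies which word followed which in its training text and chains those statistics together to ‘generate’ a sentence.

```python
# An illustrative toy, not a real language model: a bigram next-word predictor that
# "generates" text purely by replaying word-to-word statistics from its training data.
from collections import Counter, defaultdict

corpus = (
    "the goose has wings the goose has feathers "
    "the goose has a beak the cat has whiskers"
).split()

# Count, for every word, the words that historically followed it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most common continuation seen in training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# Generate a short, plausible-looking sequence one word at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # e.g. "the goose has wings the goose"
```

Modern models swap word counts for billions of learned parameters, but the underlying operation − predicting the next token from patterns in historical data − is the same in kind.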
ChatGPT, for example, has been seen to suggest that “[crushed] porcelain can help to balance the nutritional content of the milk, providing the infant with the nutrients they need to help grow and develop.” Would we expect a system with even the most rudimentary understanding of human anatomy to make this sort of statement?
But an inability to comprehend meaning is not just a problem for trusting today’s machine learning systems. It is also a problem for tomorrow’s, because today’s systems create the outputs on which the systems of the future will be trained. This dynamic promises a vicious cycle: one that not only threatens to turbocharge the production of bogus information sources through reasonable-sounding but incorrect text, but also makes it harder to find and retrieve accurate information in a sea of misleading data.
Manufacturing a generative AI revolution
Finally, ‘generative AI’ is not new. Despite what the recent surge in interest might suggest, there is certainly no such thing as ‘generative AI’ if the term is taken to refer to systems first developed in 2022. Rather, ‘generative AI’ has existed in one form or another since the middle of the 20th century.
The emergence of semi-autonomous computer systems that can generate outputs such as text can be traced back at least 60 years. ELIZA, for example, was an early natural language processing program, created in 1964 at MIT, that generated text. It was one of the first chatbots and one of the first programs capable of attempting the Turing test, a popular but flawed test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
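To see how little machinery ‘generating text’ required in that era, consider a toy reconstruction of the pattern-matching approach such programs used. The handful of rules below is hypothetical and purely illustrative − it is not Weizenbaum’s original ELIZA script − but the mechanic is the same: canned templates, matched and reflected back, with no model of meaning anywhere.

```python
# A toy sketch in the spirit of 1960s pattern-matching chatbots such as ELIZA.
# The rules are an illustrative reconstruction, not the original script: replies are
# "generated" by matching the input against templates and echoing fragments back.
import re

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
]

def respond(user_input: str) -> str:
    """Reply by reflecting the first matching fragment back into a template."""
    text = user_input.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."  # fallback when no rule matches

print(respond("I am worried about generative AI"))
# -> How long have you been worried about generative ai?
```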
Had you, at the time, described large language models − whose development was made possible by the transformer architecture introduced in 2017 − as ‘generative AI,’ you would no doubt have received some strange looks. Yet, just a few years later, such models have undergone a quiet rebrand as ‘text-to-text generative AI.’
This is perhaps the most significant reason to question whether it makes sense to group such a wide range of models under the banner of generative AI. Helped along by enthusiastic reporting, the extreme popularity of ‘generative AI’ has, at least in part, been manufactured by certain sections of the AI industry and their investors. The collision of ballooning budgets and researchers quick to label chatbots as ‘intelligent’ is a powerful thing.
Do not doubt that the hype surrounding generative AI will continue. Better to question whether it is deserved.