October 2, 2023
2023 will go down as the year when generative AI went mainstream, with applications like ChatGPT, Google Bard, DALL-E and Midjourney grabbing headlines for their startling ability to create prose and art from just a simple text prompt.
These are all examples of generative AI — artificial intelligence that can create new content on the fly. But what is generative AI and how does it work? We will answer those questions below, explore some of the serious societal questions that generative AI raises, and look at just what the future holds for this exciting branch of artificial intelligence.
What is generative AI and how does it work?
Unlike other forms of AI — for example, algorithms designed to look for patterns and make predictions — generative AI is defined by its ability to produce new and original work of its own.
With just a simple text, image or audio prompt, generative AI can produce content in seconds on spec — be it an original essay on trickle-down economics, a picture of New York drawn in the style of Monet or a rap about Reese's Pieces.
Of course, while the content appears entirely original, it is actually based on the vast quantities of training data fed to generative AI models. These models use neural networks to identify patterns in the data as a probability distribution, allowing them to generate similar patterns of their own and ultimately output something seemingly original.
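To make that idea concrete, here is a deliberately tiny sketch — nothing like a production large language model, which uses deep neural networks over billions of parameters, but a toy word-pair model that illustrates the same principle: learn a probability distribution from training data, then sample from it to produce a sequence that is "new" yet entirely shaped by what it has seen. The example corpus and function names are invented for illustration.

```python
import random
from collections import defaultdict

# Toy training data: the model can only ever recombine what is in here.
corpus = "the cat sat on the mat and the cat ran to the door".split()

# Learn the pattern: for each word, record which words followed it.
# Duplicates in the lists act as the probability distribution.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start: str, length: int = 6) -> str:
    """Sample each next word in proportion to how often it followed
    the current word in training — a new sequence, learned patterns."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every sentence it produces is novel in arrangement, yet every word and every word-to-word transition comes straight from the training data — the same reason a generative AI's "original" essay is really a remix of what it was trained on.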
In other words, while the content is new in the sense that nothing exactly like it exists, it is based on rules and lessons learned from already existing content — be it a repository of 19th-century art or a trove of encyclopedias and websites. It is not a million miles away from how humans learn and create — only far faster.
That is both a strength and a weakness: It can potentially draw on millennia’s worth of human knowledge, but that has its own limitations which we’ll cover shortly.
What can generative AI do?
All of this means that generative AI can use everything it has learned from terabytes of human output to pretty convincingly mimic our creativity — in a fraction of the time.
As the 100 million people who have taken ChatGPT for a test drive will know, you can ask the chatbot to write a sitcom script to spec, suggest a twist on a family favorite recipe, or even produce code for you to drop straight onto your website. All of this happens in a matter of seconds.
ChatGPT is the most high-profile example of generative AI, but there are plenty of others doing interesting things. DALL-E — its name a mashup of Salvador Dalí and Pixar's Wall-E robot — is a generative AI model designed to create pictures from prompts. Ask for a raccoon playing tennis at Wimbledon in the 1990s, and that's exactly what it will deliver.
Beyond text and images, generative AI can also tackle audio and video. That's why you can watch Will Smith eating spaghetti that he never ate, or see a trailer for The Great Catspy — a movie that doesn't exist — brought to life via text-to-video generation. Both clips are a bit rough around the edges, but with Meta and Google working on their own text-to-video solutions, you can bet things will improve — and quickly.
Heart on My Sleeve, meanwhile, appears to feature megastars Drake and the Weeknd collaborating on a music track, but is actually a generative AI creation that mimics their vocal likenesses. Universal Music Group did not see the appeal, and had it taken down from streaming platforms pretty quickly.
These are all pretty whimsical examples, but the potential should be obvious. Generative AI can mimic human administrative and creative output with an impressive fidelity already, and the technology is only going to get more convincing with time.
But that doesn’t come without its risks.
What does generative AI mean for humans?
All of this means that businesses, governments and public services could become more productive than ever, with McKinsey research forecasting that generative AI could add up to $4.4 trillion to the global economy every year.
But there are serious concerns that generative AI could increasingly replace humans, leading to job losses and the societal problems that flow from there. Generative AI has the potential not just to replace those in creative industries, but in everything from law and finance to education and engineering.
In the best-case scenario, time-consuming and simple tasks can easily be scooped up by AI, leaving humans more time to work on problems that artificial intelligence cannot (yet) handle. But that seems a bit utopian in a world where businesses are looking to cut costs — and there is plenty of early evidence that some employers view the adoption of generative AI as a way of cutting headcount without output dropping.
In a more abstract sense, there is also the law of unintended consequences to consider. For example, Google and Microsoft are both experimenting with changes to their search engines where a chatbot will summarize the content of websites without you needing to click through. Without those clicks, the source sites would lose their ad revenue and could therefore close, not only costing human jobs, but leaving the search engines with less current or accurate data to learn from in future.
What are the other problems with generative AI?
Beyond possible job losses there are, unfortunately, quite a few problems that come with generative AI.
The first is a problem that most fields of artificial intelligence are grappling with: the thorny issue of human bias. The training data that generative AIs learn from is made by humans, and as a species we have not always been hugely enlightened. If you ask an artistic generative AI to create a picture of a doctor, for example, it may automatically assume you want a white, male physician because of historic sexism and racism.
This is linked to another problem: the inability of generative AI to create something truly new. It has, after all, learned from training material and bases anything it creates on that material. It may combine two different strands of human experience in a way that has not been seen before, but it lacks the natural spark of human inspiration that has led to so many breakthroughs over the centuries.
Ironically, one of the ways it can create something "new" is also one of the bigger problems that those working in generative AI are looking to stamp out: hallucinations. This is where generative AI will, in the absence of better information, invent things that simply are not true and repeat the 'fact' with authority. This has caught people out before: two lawyers and their law firm were fined $5,000 after they filed court documents containing judicial opinions and legal citations that ChatGPT had simply invented.
Sticking with law, there are also questions about both the ethics and legality of generative AI. Given that generative AI models have to be trained on other people's work, this raises murky questions about plagiarism and copyright law. These are currently being tested in the courts, with a number of authors suing OpenAI for using their books as training data without express permission.
Speaking of judicial processes, there is a real risk that bad actors will take advantage of generative AI to spread disinformation aimed at discrediting public figures and institutions. That could take the form of quickly written fake news, AI-generated social media profiles to spread it, or even AI-generated 'gotcha' pictures. The AI-generated picture of the Pope in a puffer jacket, while ultimately pretty harmless, was widely believed to be real until debunked as a generative AI creation.
Finally, there is the hidden environmental cost. There is not only the enormous carbon footprint of creating and running generative AI models to contend with, but also the ongoing cost of everyone having access to the apps. One research paper estimates that a simple 20-50 question conversation with ChatGPT uses about 500ml of water; scale that up to millions of people a day, and you have a big environmental headache in a world where droughts are increasingly commonplace.
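The scale-up is easy to sanity-check. Using the paper's roughly 500ml-per-conversation estimate — and an assumed, purely illustrative figure of 10 million such conversations a day — the arithmetic looks like this:

```python
# Back-of-the-envelope scale-up of the ~500ml-per-conversation estimate.
# The conversations-per-day figure is an assumption for illustration only.
ml_per_conversation = 500
conversations_per_day = 10_000_000

litres_per_day = ml_per_conversation * conversations_per_day / 1000
print(f"{litres_per_day:,.0f} litres per day")  # prints "5,000,000 litres per day"
```

Five million litres a day — about two Olympic swimming pools — and that is before counting the water used to train the models in the first place.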
What could the future of generative AI hold?
Assuming these problems can be addressed or sidestepped, the potential for generative AI is enormous, which some will argue makes it worth deploying despite the very real concerns.
Some of this is simply an advancement of where we are now. Generative AI is already pretty effective at quickly processing information and outputting responses, and AI voice synthesis is improving every day. Already, AI tools are being developed to enable real-time translation, audio dubbing, automated narration and musical scores for film and TV.
More excitingly, the ability to perform human tasks more efficiently could lead to scientific and medical discoveries at a far faster rate.
“Early foundation models like ChatGPT focus on the ability of generative AI to augment creative work, but by 2025, we expect more than 30% — up from zero today — of new drugs and materials to be systematically discovered using generative AI techniques,” said Brian Burke, Gartner's research vice president for Technology Innovation.
Gartner believes generative AI could be equally transformational in material science, chip design, synthetic data and the design of parts in manufacturing, automotive, aerospace and defense industries. Over the longer term, it is predicting even bigger things for creative industries, estimating that by 2030, 90% of a blockbuster film will be generated by AI using text to video.
The potential is perhaps even greater for another part of the entertainment industry: video games. Not only could whole worlds be generated on spec via a simple description, but content could also be far more dynamic. At the moment, an open-world game has a lot of non-player characters (NPCs) that are simple window-dressing. With generative AI, each could suddenly have their own interests and backstories.
And with Meta — the company formerly known as Facebook — going all-in on AI, augmented and virtual reality with the metaverse, such tools could be all-important to help widespread adoption of the company’s vision.
There is good reason why generative AI is seen as a game changer for humanity, and why big tech companies such as Google, Meta, Microsoft, Nvidia and Intel are making multibillion-dollar investments in it. Nvidia CEO Jensen Huang has even said that a new computing era has arrived with generative AI. Done right, the world could look unrecognizable in the next 10 to 20 years.
Such seismic change can cause real problems, however, which is why the industry looks set to face regulatory scrutiny over the next few years. The outcome of this fight will go some way to deciding whether the widespread adoption of generative AI is a sea change moment for humanity, a flash in the pan, or something in between.