OpenAI’s GPT-3 language model is blowing minds

Beta test shows off the ability to generate anything from works of literature to computer code, using the simplest prompts

Rachel England

July 21, 2020

3 Min Read


OpenAI’s latest language-generating AI experiment, GPT-3, is causing a stir online, with early adopters “amazed” at the quality of the text the machine learning model is able to produce with minimal human input.

Its predecessor made headlines last year after being dubbed “too dangerous to be released.”

The third iteration of the Generative Pre-trained Transformer was first discussed in a research paper published in May and is now being drip-fed to testers in a private beta.

So far, developers have shown off GPT-3’s ability to write pastiches of well-known authors, extremely convincing news articles, business memos, medical documentation, and computer code.

One developer, for example, built a layout generator that enables users to describe a web page in words, such as “a button that looks like a watermelon” or “large text in red that says WELCOME TO MY NEWSLETTER and a blue button that says SUBSCRIBE.” GPT-3 is then able to generate the requisite JSX code.
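For the second of those prompts, the generated markup might look something like the sketch below. This is purely an illustration of the kind of JSX output being described, not the demo’s actual output, and the component name is a placeholder.

```jsx
// Illustrative sketch only: one plausible shape for JSX generated from the prompt
// "large text in red that says WELCOME TO MY NEWSLETTER and a blue button that says SUBSCRIBE".
// The component name NewsletterHeader is hypothetical, not taken from the demo.
import React from "react";

export default function NewsletterHeader() {
  return (
    <div>
      <h1 style={{ color: "red", fontSize: "48px" }}>WELCOME TO MY NEWSLETTER</h1>
      <button style={{ backgroundColor: "blue", color: "white" }}>SUBSCRIBE</button>
    </div>
  );
}
```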

One trillion words

While GPT-2 – arguably the first language-generating model to create text seemingly indistinguishable from that written by a human – boasted 1.5 billion parameters, GPT-3 includes 175 billion, having been trained largely on the Common Crawl, an archive of the Internet containing nearly one trillion words.

GPT-3 is not without its flaws. Computer scientist Kevin Lacker, for example, wanted to see how close it could come to passing a Turing test. Some nonsense prompts, such as ‘How many eyes does my foot have?’ and ‘How many rainbows does it take to jump from Hawaii to seventeen?’ resulted in predictably confusing answers (GPT-3 reckons ‘two eyes’ and ‘two rainbows’, in case you were wondering).

The model was also found to reproduce harmful stereotypes when fed prompts such as ‘women’, ‘black’, or ‘Jewish’ – a challenge that has long plagued creators of language-based AI models. And, of course, critics have pointed out the damaging role GPT-3 could play in propagating fake news.

A lot to figure out

At its core, GPT-3 is an extremely smart predictive text tool, born of the way humans use words. And while there are certainly pitfalls to this approach, the third iteration of the model represents a step forward for language-generating AI. The impact it could have on chatbots in a customer service context, for example, is significant. Indeed, OpenAI says it plans to turn the tool into a commercial product later this year, offering businesses a paid-for subscription to the system via the cloud.

Despite all the excitement online, OpenAI is taking a measured approach to GPT-3’s capabilities, with the company’s CEO Sam Altman – who co-founded it alongside Elon Musk – decrying the hype.

In a tweet, he said: “It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”

About the Author

Rachel England

Freelance journalist Rachel England has covered all aspects of technology for more than a decade. She has a particular interest in sustainability-focused tech innovation, and once attended a green business expo dressed as a recycling bin.
