AI Business is part of the Informa Tech Division of Informa PLC
OpenAI’s latest language-generating AI experiment, GPT-3, is causing a stir online, with early adopters “amazed” at the quality of the text the machine learning model is able to produce with minimal human input.
Its predecessor made headlines last year after being dubbed “too dangerous to be released.”
The third iteration of the Generative Pre-trained Transformer was first described in a research paper published in May, and is now being drip-fed to testers in a private beta.
One developer, for example, built a layout generator that enables users to describe a web page in words, such as “a button that looks like a watermelon” or “large text in red that says WELCOME TO MY NEWSLETTER and a blue button that says SUBSCRIBE.” GPT-3 is then able to generate the requisite JSX code.
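The demo's actual code is not public, but tools like this generally work by few-shot prompting: a handful of example description-to-JSX pairs are placed ahead of the user's new description, and GPT-3 is asked to complete the pattern. A minimal sketch, in which the helper name and the example pair are illustrative assumptions rather than the demo's real prompt:

```javascript
// Hypothetical sketch of a few-shot prompt for a GPT-3 layout generator.
// makeLayoutPrompt and the worked example are assumptions for illustration.
function makeLayoutPrompt(description) {
  const examples = [
    {
      description: "a button that looks like a watermelon",
      code: "<button style={{background: 'green', color: 'red'}}>Watermelon</button>",
    },
  ];
  // Render each example as a description/code pair GPT-3 can imitate.
  const shots = examples
    .map((ex) => `description: ${ex.description}\ncode: ${ex.code}`)
    .join("\n\n");
  // The prompt ends at "code:" so the model's completion is the JSX itself.
  return `${shots}\n\ndescription: ${description}\ncode:`;
}
```

The text GPT-3 returns after the trailing "code:" would then be rendered as the page; no fine-tuning is involved, only the pattern established by the examples in the prompt.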
While GPT-2 – arguably the first language-generating model to create text seemingly indistinguishable from that written by a human – boasted 1.5 billion parameters, GPT-3 includes 175 billion, having been trained on an archive of the web called the Common Crawl, which contains nearly one trillion words.
GPT-3 is not without its flaws. Computer scientist Kevin Lacker, for example, wanted to see how close it could come to passing a Turing test. Nonsense prompts, such as 'How many eyes does my foot have?' and 'How many rainbows does it take to jump from Hawaii to seventeen?' resulted in predictably confused answers (GPT-3 reckons 'two eyes' and 'two rainbows', in case you were wondering).
The model was also found to be leaning towards harmful stereotypes when fed prompts such as ‘women’, ‘black’, or ‘Jewish’ – a challenge that has long plagued creators of language-based AI models. And, of course, critics have pointed out the damaging role GPT-3 could play in propagating fake news.
At its core, GPT-3 is an extremely capable predictive-text tool, built from the way humans use words. And while there are certainly pitfalls to this approach, the third iteration of the model represents a step forward for language-generating AI. The impact it could have on chatbots in a customer service context, for example, is significant. Indeed, OpenAI says it plans to turn the tool into a commercial product later this year, offering businesses a paid subscription to the system via the cloud.
Despite all the excitement online, OpenAI is taking a measured approach to GPT-3’s capabilities, with the company’s CEO Sam Altman – who co-founded it alongside Elon Musk – decrying the hype.
In a tweet, he said: “It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”