As its competitor AI21 launches a language model larger than GPT-3

Sebastian Moss

August 12, 2021

4 Min Read

OpenAI has upgraded its artificial intelligence-powered coding assistant, Codex, to translate natural language into code.

The system was announced in June, and is used as the foundation for GitHub's coding assistant, Copilot.

Codex is similar to GPT-3, OpenAI's huge natural language processing model, but instead of being trained only on human language, it is also fed code. This means it can help generate code, and even complete entire functions.

In controlled demos, the company showed the system translating simple English commands like “create a webpage with a menu on the side and title at the top” into code, marrying Codex's coding skills with GPT-3's language expertise.
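At the time, access was through OpenAI's private beta API. A minimal sketch of what such a request might look like, assuming the pre-1.0 `openai` Python client and the `davinci-codex` engine name reported during the beta:

```python
# Sketch of prompting Codex through the beta-era OpenAI Python client.
# The "davinci-codex" engine name and the old Completion API are
# assumptions based on the private beta, not a documented guarantee.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    engine="davinci-codex",
    prompt='"""Create a webpage with a menu on the side and a title at the top."""\n',
    max_tokens=300,
    temperature=0,  # low temperature for repeatable demo output
)
print(response.choices[0].text)
```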

Coders code coders out of a job

Codex can speak more than a dozen programming languages, including JavaScript, Go, Perl, PHP, Ruby, Swift, and TypeScript – but it performs best in Python. It does not appear to support Triton, OpenAI's own recently launched GPU programming language, likely because too little public Triton code exists to train the model on.

“Once a programmer knows what to build, the act of writing code can be thought of as (1) breaking a problem down into simpler problems, and (2) mapping those simple problems to existing code (libraries, APIs, or functions) that already exist,” the company said in a statement. “The latter activity is probably the least fun part of programming (and the highest barrier to entry), and it’s where OpenAI Codex excels most.”
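As an illustration of that second, "mapping" activity: a one-line description like the comment below resolves directly to an existing standard-library call. This is a hypothetical example of the pattern, not actual Codex output:

```python
# Illustrative only: the sort of completion a prompt comment might map to.
# Count how often each word appears in a text file, most common first.
from collections import Counter

def word_counts(path):
    with open(path) as f:
        words = f.read().lower().split()
    return Counter(words).most_common()
```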

The project is in its early days, still makes mistakes, and requires trial and error to use, OpenAI admitted.

In a research paper outlining an earlier version of Codex, OpenAI said the model could solve problems of a "difficulty level comparable to easy interview problems."

On its own HumanEval benchmark, that earlier version of the model solved 28.8 percent of the problems on its first attempt, but repeated sampling (generating 100 candidate solutions per problem and keeping any that passed the unit tests) boosted that to 70.2 percent.
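The mechanism behind that boost is simple: sample many candidates and count a problem as solved if any one of them passes its tests. A minimal sketch, with the model call and test harness left as stand-ins:

```python
# Sketch of repeated sampling as described in the Codex paper. `generate`
# samples one candidate solution from the model; `passes_tests` runs the
# problem's unit tests. Both are hypothetical stand-ins, not real APIs.
def solve_with_sampling(problem, generate, passes_tests, k=100):
    for _ in range(k):
        candidate = generate(problem)
        if passes_tests(problem, candidate):
            return candidate  # solved: some sample passed the tests
    return None  # unsolved within k samples
```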

While the paper is mostly positive, it admits that Codex is not as efficient at learning as humans are. "Our training dataset comprises a significant fraction of publicly available Python code on GitHub, totaling hundreds of millions of lines of code. Even seasoned developers do not encounter anywhere near this amount of code over their careers.

"Indeed, a strong student who completes an introductory computer science course is expected to be able to solve a larger fraction of problems than Codex-12B."

All that data collection has its problems, too. While the paper contends that training on publicly available code constitutes "fair use," the earlier release of GitHub's Copilot was met with pushback from developers whose work was copied by the algorithm.

GitHub and OpenAI took free-to-use and copyrighted code and “put it all in a blender in order to sell the slurry to commercial and proprietary interests,” Evelyn Woods, a Colorado-based programmer and game designer, told Wired. “It feels like it’s laughing in the face of open source.”

Microsoft's GitHub claims that its Copilot programming helper directly copies code roughly 0.1 percent of the time, and says it is working on tools to both reduce plagiarism and give credit to individual developers. OpenAI is thought to be working on similar tools for Codex.

In a comment to the United States Patent and Trademark Office in 2019, OpenAI argued that training an AI on copyrighted data constitutes fair use.

In it, the company said: "Well-constructed AI systems generally do not regenerate, in any nontrivial portion, unaltered data from any particular work in their training corpus."

Codex is currently available in a free private beta, but the company hinted that it will charge once the service reaches general release.

OpenAI gets more competition

Elsewhere in machine learning news, Israeli startup AI21 Labs has launched a language model larger than GPT-3, which it plans to offer as a service.

The Jurassic-1 Jumbo contains 178 billion parameters – 3 billion more than GPT-3, though still fewer than PanGu-Alpha, HyperCLOVA, or Wu Dao 2.0. Size is not everything, but it helps: generally, the more parameters, the more capable the system.

AI21 Labs claims that Jurassic-1 can recognize 250,000 lexical items, including expressions, words, and phrases – five times as many as the roughly 50,000 tokens in GPT-3's vocabulary.
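A larger vocabulary matters because common phrases become single tokens, so the same text consumes fewer of them. A toy illustration with invented vocabularies:

```python
# Toy greedy longest-match tokenizer over whitespace-separated words.
# Both vocabularies are invented for the example; real tokenizers use
# byte-pair encoding, but the effect of a richer vocabulary is the same.
small_vocab = {"New", "York", "is", "a", "big", "city"}
large_vocab = small_vocab | {"New York", "big city"}

def greedy_tokenize(text, vocab):
    words, tokens, i = text.split(), [], 0
    while i < len(words):
        for j in range(len(words), i, -1):  # try the longest span first
            span = " ".join(words[i:j])
            if span in vocab:
                tokens.append(span)
                i = j
                break
        else:
            tokens.append(words[i])  # out-of-vocabulary fallback
            i += 1
    return tokens

print(greedy_tokenize("New York is a big city", small_vocab))  # 6 tokens
print(greedy_tokenize("New York is a big city", large_vocab))  # 4 tokens
```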

“AI21 Studio makes text-based AI accessible to businesses in the same way AWS did for cloud computing,” said Yoav Shoham, co-founder and co-CEO of AI21 Labs.

The company also called out OpenAI's closed approach, under which it resells some services through Microsoft and works only with select partners on others.

"You shouldn’t have to be an AI researcher working at a big tech company to do this stuff," Shoham said. "Now anyone – publishers, students, artists, business people, researchers – can build language-based applications that rival those being dreamed up in big AI labs.”

So far, the system has only been used in AI21 Labs' own writing companion app, Wordtune.

