Tool launches as teachers fear students will use chatbot to write papers

Ben Wodecki, Jr. Editor

February 1, 2023

2 Min Read

At a Glance

  • ChatGPT makers launch fine-tuned GPT model to identify AI-generated text
  • AI Text Classifier “not fully reliable” but “significantly” better than predecessor
  • AI Business experiment finds the tool missed just one of 20 AI-generated samples (5%)

OpenAI, creator of the viral sensation ChatGPT, has released a tool to detect whether a piece of text was generated by its chatbot.

Called AI Text Classifier, the free tool is available for the public to use as OpenAI gathers feedback on its effectiveness. To start the detection process, users simply paste text into the tool.

Try the detection tool here: https://platform.openai.com/ai-text-classifier

The release comes as educators are voicing concerns that students will be using ChatGPT to generate essays and reports instead of writing the assignments themselves. Several school associations across the globe have prohibited access to the chatbot. An annual machine learning event even banned the use of large language models for writing papers.

OpenAI has developed a preliminary resource page for educators on the use of ChatGPT, which outlines some of the uses and associated limitations and considerations. The company also noted that the tool can be used by journalists, misinformation researchers and other groups.

‘Significantly’ improved detection

The Microsoft-backed AI company said the AI Text Classifier is a fine-tuned GPT model that “significantly” improves upon its prior detector, the GPT-2 Output Detector. That tool, released in February 2021, was built atop the RoBERTa model and performs considerably worse than its replacement.


However, OpenAI cautioned that the new Classifier tool is “not fully reliable.” In its own evaluations, the company found that the Classifier correctly identified only 26% of AI-written text as "likely AI-written," while incorrectly labeling human-written text as AI-written 9% of the time.

The new tool also needs an input of at least 1,000 characters (roughly 150 to 250 words) to be reliable and can be fooled if the user edits the AI-generated text. It is also likely to fail on languages other than English.
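Those two figures are easier to interpret together. Below is a minimal Python sketch, using only the 26% detection rate and 9% false-positive rate reported above, of how the share of flagged texts that are genuinely AI-written depends on how common AI-written submissions are. The base rates in the loop are hypothetical, not figures from OpenAI or AI Business.

true_positive_rate = 0.26   # share of AI-written text flagged "likely AI-written" (OpenAI's evaluation)
false_positive_rate = 0.09  # share of human-written text wrongly flagged (OpenAI's evaluation)

# Hypothetical mixes of AI-written vs. human-written submissions
for ai_share in (0.1, 0.5, 0.9):
    flagged_ai = ai_share * true_positive_rate
    flagged_human = (1 - ai_share) * false_positive_rate
    precision = flagged_ai / (flagged_ai + flagged_human)
    print(f"If {ai_share:.0%} of texts are AI-written, "
          f"about {precision:.0%} of flagged texts actually are.")

At the reported rates, a flag is only strong evidence when AI-written text is already common in the sample, which underlines OpenAI’s warning below that the Classifier’s results should not be the sole basis for a judgment.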

OpenAI further cautioned that the Classifier’s results “should not be the sole piece of evidence when deciding whether a document was generated with AI. The model is trained on human-written text from a variety of sources, which may not be representative of all kinds of human-written text.”

Is the AI Text Classifier any good?

OpenAI said its AI Text Classifier was fine-tuned on a dataset of pairs of human-written text and AI-written text on the same topic from a variety of sources, such as the “pretraining data and human demonstrations on prompts submitted to InstructGPT.”

The model does not deliver a definitive verdict on whether a piece of text was generated by AI. Instead, it predicts how likely that is.


To test its credentials, AI Business ran a set of ChatGPT-generated responses through the Classifier to see whether it identified the source correctly – and found that the model was right 95% of the time.

Of the 20 generations tested, only one came back wrong – the Classifier failed to detect that a fabricated ‘scene’ from Star Trek: The Next Generation was AI-written.


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
