Metaverse avatar makers use the tool to make characters ‘stay on-script.’
OpenAI, the AI research company behind DALL-E 2, GPT-3 and Codex, has published an updated version of its AI-powered content moderation tool that it says is “faster and more accurate.”
The Microsoft-backed company said the tool can detect “undesired content” such as references to violence, self-harm or content of a sexual nature. The “new and improved” version better protects applications from misuse.
Given a text input, such as a social media post, the tool assesses whether the content falls into categories the user wants to filter out.
Called the Moderation Endpoint, the tool was trained to be “quick, accurate, and perform robustly across a range of applications,” according to OpenAI. Its speed and accuracy “reduces the chances of products ‘saying’ the wrong thing, even when deployed to users at scale,” its blog post said. “As a consequence, AI can unlock benefits in sensitive settings, like education, where it could not otherwise be used with confidence.”
The company’s new content moderation tool is available for free to OpenAI API developers. Instead of building and maintaining their own classifiers, developers can make use of current GPT-based classifiers through a single API call.
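For illustration, here is a minimal sketch of what such a single API call might look like against OpenAI’s moderation endpoint; the sample text, variable names and printed messages are illustrative, not taken from OpenAI’s documentation.

```python
import os
import requests

# Minimal sketch: send one piece of text to the Moderation Endpoint and
# check whether it falls into any of the flagged categories.
API_KEY = os.environ["OPENAI_API_KEY"]  # assumes the API key is set in the environment

response = requests.post(
    "https://api.openai.com/v1/moderations",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input": "Sample user post to screen before display."},
    timeout=10,
)
response.raise_for_status()

result = response.json()["results"][0]
if result["flagged"]:
    # "categories" maps each category (e.g. violence, self-harm, sexual) to a boolean
    hits = [name for name, hit in result["categories"].items() if hit]
    print("Blocked; flagged categories:", hits)
else:
    print("Content passed moderation.")
```

In a real application, the flagged categories would typically be compared against whichever subset the developer has chosen to filter before the content is shown to users.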
One company already using the tool is Inworld AI, which is developing a platform to create AI-powered virtual characters for metaverse applications.
OpenAI said Inworld is using the tool to make sure its characters “stay on-script.” “By leveraging OpenAI’s technology, Inworld can focus on its core product, which is creating memorable characters.”
The tool can also monitor non-API traffic for a fee; this implementation is currently in private beta. OpenAI did, however, release the evaluation dataset under an MIT license for free use in a move to “spur further research in this area.”