Using machine learning to force trolls to be a little more creative

Louis Stone, Reporter

February 9, 2021

3 Min Read


An API created by Google to automatically moderate Internet comments now processes 500 million requests a day.

Developed by Google subsidiary Jigsaw, 'Perspective' is a free API that uses machine learning to spot toxic language.
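
Perspective is accessed through a simple REST interface: a request carries the comment text and the attributes to score, and the response returns a probability-like score between 0 and 1. As a minimal sketch of such a call in Python, the `toxicity_score` helper and the `YOUR_API_KEY` placeholder below are illustrative; a real key comes from a Google Cloud project with the API enabled.

```python
import requests

# Perspective's Comment Analyzer endpoint. YOUR_API_KEY is a placeholder
# for a key from a Google Cloud project with the API enabled.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def toxicity_score(text: str, language: str = "en") -> float:
    """Return Perspective's TOXICITY score (0 to 1) for a comment."""
    payload = {
        "comment": {"text": text},
        "languages": [language],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(API_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```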

Hoping for a clean Internet

Perspective was launched in 2017 and has since been adopted by multiple large media companies. It currently supports English, Spanish, French, German, Italian, and Portuguese.

For example, Vox Media's public comment platform Coral uses Perspective Commenter Feedback, an AI-based system that informs the author about the perceived toxicity of their statements in real time.
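
Coral's implementation isn't public, but the basic mechanics of that kind of nudge are straightforward: score the draft comment, and prompt the author to rephrase if it crosses a threshold. The hypothetical sketch below reuses the `toxicity_score` helper above; the cutoff value and messages are assumptions, not Coral's actual settings.

```python
TOXICITY_THRESHOLD = 0.7  # assumed cutoff; each platform tunes its own

def feedback_for_draft(draft: str) -> str:
    """Hypothetical real-time feedback shown before a comment is posted."""
    if toxicity_score(draft) >= TOXICITY_THRESHOLD:
        return ("Your comment may violate our community guidelines. "
                "Consider rephrasing before you post.")
    return "Ready to post."
```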

A controlled study carried out with McClatchy found that 40 percent of commenters changed their messages to reduce toxicity, 20 percent gave up commenting, and 40 percent ignored the AI and submitted a potentially offensive comment anyway.

A similar study with audience engagement platform OpenWeb, based on 400,000 comments from 50,000 users, found that 34 percent of users changed their comments after being prompted by the system, although only 54 percent of the revised text met community standards.

Some commenters simply rephrased their statements in language the machine learning model could not detect, but the OpenWeb study found that of those who edited their comments, 44.7 percent "replaced or removed offensive words, and 7.6 percent elected to rewrite their comment entirely."

Perspective can also work alongside human moderators. The New York Times uses it in 'Moderator,' a tool that prioritizes comments for human review and automatically approves those most likely to pass.
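
The Times has not published Moderator's internals, but the underlying triage pattern is easy to sketch: auto-approve comments that score well below a risk threshold, and order the rest for human review. Everything in the sketch below, from the threshold to the queue ordering, is an illustrative assumption, again reusing `toxicity_score` from above.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    comment_id: str
    text: str

AUTO_APPROVE_BELOW = 0.2  # assumed: low enough to pass without review

def triage(pending: list[Comment]) -> tuple[list[Comment], list[Comment]]:
    """Split comments into auto-approved and a prioritized review queue."""
    approved, to_review = [], []
    for comment in pending:
        score = toxicity_score(comment.text)
        if score < AUTO_APPROVE_BELOW:
            approved.append(comment)
        else:
            to_review.append((score, comment))
    # Assumed ordering: moderators see the least risky comments first.
    to_review.sort(key=lambda pair: pair[0])
    return approved, [comment for _, comment in to_review]
```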

The partnership between NYT and Jigsaw began with just a few select articles, but has now been expanded. "Perspective API has greatly enhanced our ability to spot toxicity in our comment submissions," Marcia Loughran, community editor at The New York Times, said.

"It has made the work of our moderation team more impactful and efficient and since implementing the tool, we've been able to increase the number of articles open to comments by about a third."

Jigsaw was originally established in 2010 as Google Ideas by then-CEO Eric Schmidt and former State Department employee Jared Cohen to serve as a “think/do tank.”

It rebranded in 2016 and has since launched projects focused on a variety of issues, from stopping DDoS attacks to reducing radicalization on YouTube and providing anonymity to journalists.

Perspective has been its biggest success, with the company offering Google's AI technology to help clean up the Internet's perpetual toxicity.

Ironically, toxicity has been one of Jigsaw's most serious internal problems. In 2019, a Motherboard investigation based on interviews with current and former employees, leaked documents, and internal messages painted a picture of a division riven by conflict and beset by allegations of a harsh and misogynistic corporate culture.

Employees said that HR complaints were ignored, and those who tried to change things faced retaliation. “The details of the story have hit me hard and I’m deeply disappointed for all of you to see our culture characterized in this way,” Cohen told staff following the article's publication.

“We haven’t always gotten everything right, and as CEO, I take this responsibility seriously and I’m committed to ensuring we continue to improve.”

About the Author(s)

Louis Stone

Reporter

Louis Stone is a freelance reporter covering artificial intelligence, surveillance tech, and international trade issues.
