The End of Internet 'Trolls' Might Be Near

by Robert Woolliams

Cyberbullies, internet trolls, web monsters: the list of names for people who hide behind their screens and harass others online goes on. Now Google is applying AI to detect them, in the hope of putting an end to cyberbullying.

The group behind the effort is Jigsaw, formerly Google's in-house think tank, which now works on applying technology to "geopolitical issues", according to a report published by WIRED.

"Conversation AI" is produced by the organisation, and will work as an intelligence tool to detect and hopefully end harassment online.

"The software is designed to use machine learning to automatically spot the language of abuse and harassment — with, Jigsaw engineers say, an accuracy far better than any keyword filter and far faster than any team of human moderators", WIRED writes.

So how will the tool work?

"Conversation AI" will study abusive, foul language, before rating it from 0-100, giving it an "attack score". Self-explanatory as it is, 0 would mean that no offensive language was detected, whereas 100 would mean that some abusive language was used.

"Jigsaw has now trained Conver­sation AI to spot toxic language with impressive accuracy. Feed a string of text into its Wikipedia harassment-detection engine and it can, with what Google describes as more than 92 percent certainty and a 10-percent false-positive rate, come up with a judgment that matches a human test panel as to whether that line represents an attack", WIRED writes.

Abusive and offensive language is known to flourish on social media platforms such as Facebook, Twitter and Instagram. However, Conversation AI is starting off in a (hopefully) less abusive environment: the New York Times' comment sections.

YouTube was also discussed, and Wikipedia has shown interest in the tool, but for now the focus is mainly on the New York Times. Jigsaw's founder and president Jared Cohen told WIRED: "I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight to do everything we can to level the playing field."

For anyone who has been struggling to deal with cyberbullies, the good news is that Conversation AI will soon be easier to access and apply, as it is set to be released as open source, MobileMag writes.

In theory, any website that wishes to protect its users will be able to do so, thanks to artificial intelligence.

According to MobileMag, what makes the tool so remarkable is the technology behind it, which can instantly flag offensive language, insults and profanity, automatically delete abusive comments, and even admonish harassers.
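As a rough sketch of how a site might wire such a service into its comment pipeline once the tool is open-sourced, the example below posts a new comment to a hypothetical scoring endpoint and acts on the result. The URL, response field and thresholds are assumptions for illustration, not Conversation AI's actual API.

```python
# Hypothetical integration sketch: send a new comment to an abuse-scoring
# service, then decide whether to publish, hold, or delete it. The endpoint
# URL and the "attack_score" response field are illustrative assumptions.
import requests

SCORING_URL = "https://moderation.example.com/score"  # placeholder endpoint

def handle_comment(comment: str) -> str:
    resp = requests.post(SCORING_URL, json={"text": comment}, timeout=5)
    resp.raise_for_status()
    score = resp.json()["attack_score"]  # assumed 0-100 attack score

    if score >= 90:
        return "deleted"       # near-certain abuse: remove automatically
    if score >= 50:
        return "needs_review"  # borderline: queue for a human moderator
    return "published"         # low score: publish normally
```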

"For those who are fond of bashing, hurling hate comments and saying bad things online, be careful, your days are over".

The original WIRED report can be found at: https://www.wired.com/2016/09/inside-googles-internet-justice-league-ai-powered-war-trolls/
