AI Business Podcast 9: AI doesn’t get PTSD

Some AI models are trained on great works of art. Others are trained on images of violence. If they were people, which one would you like to meet?

October 6, 2020

Welcome to another episode of the AI Business podcast, with your weekly dose of AI news and editorial chaos. This time, we’re talking about the different kinds of data that can be used to train an AI model.

We start with the story of Facebook's Red Team, which is tasked with hacking the company's own AI systems in order to make them more resilient – and, ostensibly, to stop users from fooling Instagram's automated nudity filters. Wired has the details.

From here, we take a detour into the dark, horrible world of content moderation – if you think Facebook is toxic, you should see the things that it is actively hiding. Content moderators, we salute you, and hope that AI will indeed take your jobs.

But we also have a positive story, about Saint George on a Bike – an EU-funded effort to teach AI models the intricacies of European art. These models are trained to understand culture, symbols, and historical context – and the hope is that they will help annotate and index lesser-known works of art hidden away in small museums and galleries.

In this episode, we also cover: Instagram photos! Dan Brown novels! Despair of modern existence!

As always, you can find the people responsible for this circus of a podcast online.

You might have noticed that this episode doesn't include any of the latest news – that's because it was recorded in early September. We will return to our regular schedule next week.
