Massive fines for dodgy AI are coming. Maybe.
This week on the AI Business podcast, we look at the draft European Regulation on artificial intelligence, a.k.a. the Artificial Intelligence Act.
This piece of legislation would be the first attempt to regulate AI at a supranational level.
But does it go far enough to meet the aim of stopping AI systems that pose a ‘clear threat’ to citizens’ rights and livelihoods?
We start (and end, really) with the first draft of the Artificial Intelligence Act – which could become the law governing the use of artificial intelligence across 27 member states in about three years.
It's not just a draft, but a declaration of intent – the proposed policy offers a vision that's very different from both the relaxed regulatory approach seen in the US and the embrace of AI for the purposes of the state practiced in China.
The EU framework proposes to categorize AI systems in terms of their impact and the risk they pose. 'Unacceptable risk' would cover systems deemed a "clear threat to the safety, livelihoods, and rights of people" – like systems designed to manipulate human behavior, or those used for 'social scoring.'
The ‘High-risk’ category would cover systems for critical infrastructure, and some systems for law enforcement. ‘Limited risk’ and ‘Minimal risk’ categories would cover products like chatbots, AI-enabled video games, and spam filters.
The draft seems to take a strong position on biometric surveillance systems in public spaces. At first sight, these appear to be banned, but the document lists a large number of potential exceptions. We're not the only ones confused by this; the EU's chief data protection supervisor is confused too.
We also cover: Gonzo the Cat! Bernie memorabilia! Reasons to distrust the intelligence services! Apple vs. Facebook!
As always, you can find the people responsible for this circus of a podcast online: