Google's DeepMind Forms New AI Ethics Group - And In Doing So, Brings Social Issues To The Fore of AI Research
October 4, 2017
Yesterday, Google's DeepMind announced in a blog post that it is founding DeepMind Ethics & Society (DMES), a new research unit dedicated to exploring and understanding the 'real-world impacts' of AI. Comprised of full-time DeepMind employees and a group of independent Fellows, and in development for the past 18 months, the team is set to grow to around 25 people within the next 12 months. The announcement comes at a time when the ethics and social implications of AI, from self-driving cars to facial recognition, are a hot topic - something the unit explicitly aims to address.
"[DMES] has a dual aim: to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all," Verity Harding and Sean Legassick, co-leads of Ethics & Society, wrote in the post. Verity Harding is a former special adviser to former Deputy PM Nick Clegg, as well as a former policy manager at DeepMind, while Legassick was a policy adviser to the firm. "We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work."
"At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes. Understanding what this means in practice requires rigorous scientific inquiry into the most sensitive challenges we face."
Citing an interdisciplinary approach, Harding and Legassick point to other research into the ethical implications of AI as inspiration, from studies of racism in criminal justice algorithms to research into the broader social consequences of AI. The unit aims to mitigate what DeepMind has identified as key ethical challenges for AI, from privacy, transparency, and fairness to economic impacts, governance, and accountability.
Other organisations are also looking at how they can leverage AI for ethical ends. The Ford Foundation, for instance, believes the technology can be used to incentivise renewable energy via capital markets, while political leaders are already looking to deploy AI and automation technologies in ways that benefit the population and radically improve working conditions.
Meanwhile, tech giants around the world are in dialogue regarding the future of ethics within AI. It's ultimately up to businesses to ask the big questions today and ensure ethical, balanced outcomes for all tomorrow.
Read more on the DeepMind Ethics & Society group here.