August 10, 2020

National protests against racism in the U.S. led tech giants Microsoft, Amazon and IBM to publicly announce that they would no longer allow police departments access to their facial recognition technology. Their reasoning was that artificial intelligence (AI) can still make errors at scale, depending on how it is trained, with a particular spotlight on its struggles to recognize people of color and members of other underrepresented groups.

Any organization developing or using AI solutions needs to make sure that the potential dangers of AI don’t tarnish their brand, draw regulatory attention, lead to boycotts or destroy business value.

In the absence of rigorous regulatory protections against AI dangers, what can organizations do to guard against them?

After all, AI could lead to any number of negative outcomes, from replacing stable, well-paying jobs to deciding prison sentences or access to medical benefits, all through unaccountable algorithms. Anything that is fully automated can be both an instrument of abuse and a victim of it.

The first, and perhaps most critical, step toward preventing these outcomes is to establish an external AI ethics board that can prevent, not just mitigate, AI dangers.

Establish protections beyond what already exists

The COVID-19 pandemic has already raised concerns about how data ethics applies to decision-making when organizations collect, use and share data about employees and individuals’ health. Organizations now have an opportunity to deploy AI to process that data while limiting human access to it, but they must do so in a way that guards against both the perceived and the actual dangers.

Those perceived and actual dangers most often stem from AI’s ability to make decisions and act with little or no human intervention. The most frequently publicized risks are those AI poses to privacy, jobs, relationships and equality.

Although public mechanisms do exist to mitigate AI dangers, they are rarely sufficient for a technology that evolves daily. Market forces have encouraged companies to develop AI to meet, for example, government needs for facial recognition and surveillance, yet there hasn’t been a corresponding boost in their commitment to ethical use.

Governments often aren’t prepared or knowledgeable enough about AI to codify adequate oversight. And courts, focused on avoiding difficult-to-reverse legal outcomes, tend to be either more or less stringent than is necessary to ensure public well-being.

It’s hardly surprising, then, that AI solution developers worry their AI will become warped during development or even outlawed once it reaches the market. Big Tech’s concerns about the use of facial recognition technology are certainly warranted. The advanced technologies many companies are developing are designed as tools to protect and serve communities, but when misused they can also harm those very same communities.

And that burden can all too easily fall on people of color, members of the LGBTQ community and others who already tend to be underrepresented in the institutions that develop and oversee such technologies.

Creating an external board dedicated to ethical AI

Axon, a provider of law enforcement technology, established an external ethics board to help compensate for the shortcomings of the mechanisms meant to protect the public. Specifically, the company created a board that enabled greater transparency, accountability and representation in the AI development process. The effort yielded many lessons, including the following.

  • Representation

Your primary customers are the buyers of your products, but who are the end-users? In the case of policing technologies, law enforcement agencies might be the customers, but the communities they serve are the consumers, and they are directly impacted by the use of the technologies.

AI ethics boards need to maximize the input of people who understand the end-user and can provide insight into the potential impact of your AI technologies on them. This helps filter out poor AI product proposals before they go to market and minimizes the risk of unintended consequences once a product launches.

  • Transparency

It is paramount to be totally transparent with your AI ethics board about every AI project and your AI roadmap. Make sure board members have all the details they need to make confident recommendations on projects in development; an uninformed, unaware board is not a useful board. This kind of transparency and credibility will also help you attract and retain worthy board members.

  • Accountability

Once the board has made its recommendations, senior leaders need to respond to them — publicly. This demonstrates that the organization is committed and accountable.

Empower the AI ethics board

As well as building an AI ethics board rooted in ethical principles, you must take concrete steps to empower its members to make the right decisions:

  1. Select effective members. As well as overrepresenting those who will be impacted by your AI technologies, make sure the board is truly independent. To do this, involve the board itself in selecting future members, and make sure the board doesn’t include any employees of your organization. Make the board diverse enough and qualified enough to challenge business decisions. Possible members include not only experts in AI and machine learning but also regulators, academics and practitioners in your business segment.

  2. Enable relevant and transparent feedback. Use three basic rules to ensure that the board is effective: 1) provide total access to information (e.g., about the logic behind algorithms), 2) allow the board to control its own agenda and 3) don’t interfere in the board’s recommendations. Axon’s board publishes its recommendations to a completely separate website maintained by a university, not the company.

  3. Organize for accountability. Give your AI development team unrestricted access to the ethics board, an internal ombudsperson and the ethics board’s ombudsperson. This builds trust among all parties and establishes mechanisms to handle any ethical concerns that arise (including whistle-blower complaints).

An effective external AI ethics board embeds itself into the organization’s culture. AI development practices will begin to drive competitive advantage and make the organization more attractive to talent and more resilient to market changes.
