An interview with Steve Mills on how to implement responsible AI to mitigate current and emergent risks

Deborah Yao, Editor

May 17, 2023

11 Min Read
Getty Images

BCG’s chief AI ethics officer, Steve Mills, joins AI Business to talk in detail about how enterprises can implement responsible AI to mitigate existing and emergent risks. He said generative AI poses a particularly tricky problem in what's called a 'massive capability overhang.'

Listen to the podcast below or read the edited transcript.

AI Business: Let's start with a definition. What is responsible AI?

Steve Mills: Responsible AI, in its simplest, is ensuring that AI aligns with an organization's purpose and values while still delivering business impact. That last piece is important − it's not an ‘or,’ it's an ‘and.’ It's both adhering to purpose and values, but also delivering the business impact organizations want. In fact, if done right, it will actually amplify the business impact of solutions.

In practice, this means aligning on a set of ethical principles like fairness, equity and privacy and social environmental impact. Then, you ensure that the controls are in place throughout the product development process to make sure that those principles are being adhered to. It's really about the full value chain from data collection to building the model to actually embedding it in the business.

Too often people focus just on the algorithm and not necessarily the context in which that algorithm lives within a business. Getting this right is really critically important for companies, whether they're building or buying AI, because there's real implications to getting it wrong. There are impacts to customer trust if an AI system fails. You can harm individuals and society, which we first and foremost need to prevent. But it can also damage an organization's reputation. Customers just don't want to work with companies that they feel like they can't trust.

There's also a piece around stakeholder interest. We're seeing boards of directors, institutional investors push for things like equity audits, or not wanting to deploy their funds into companies that they don't see as adhering to their CSR (Corporate Social Responsibility) commitments and high ethical standards. And then increasingly, there's regulatory risks. We're seeing national, local governments all over the world developing AI specific regulations, as well as regulators taking the existing laws and applying them to AI.

Stay updated. Subscribe to the AI Business newsletter

So there's this whole set of downside risks that companies need to address to get this right, but also a lot of upside potential. We've seen companies report higher brand differentiation and customer retention and profitability, improve recruiting and retention, and (accelerate) innovation. That's an important one because there's this old trope that doing things in an ethical way will slow you down. We actually see companies report the opposite: It actually allows them to move faster and innovate faster.

AI Business: Given everything that you've said, it seems that responsible AI is really important to do. And yet we're not really quite there yet. What are some hurdles to implementation?

Mills: There are three big barriers to implementing responsible AI that consistently come up. One is lack of an empowered senior leader who can really drive the change. There's a degree of inter-functional argument or politics or lack of clarity around which function should even drive this − Is it ESG? Is it compliance or legal or the AI team or CIO?

What we often see is a mid-level manager ending up getting assigned to this - and the reality is doing this right requires some organizational change and organizational commitment that the manager just doesn't have the positional authority to (accomplish).

The next is just insufficient resourcing. Too often, this is another duty assigned to somebody and they're trying to do it in their spare time, vs. dedicating one or more people to really make change within an organization. The third piece is poor integration into existing risk and governance processes. So you end up with this responsible AI thing off to the side.

That has real implications in two ways. One, the organization doesn't see it as being that important. If it's not part of your formal risk processes, and it's not on par with things like cyber, the organization starts to see it as less important. … And the advantage of being tied into these the corporate processes is it gives you these natural escalation paths

AI Business: Can you tell us step by step how a company should go about making sure that AI models are responsible from the get-go?

Mills: The answer is it depends. That's just the nature of it. The context matters so much. But let's start at the product level. Each product is unique: the context in which it operates, the data, the goal.

We worked with Microsoft and codified a set of 10 high-level guidelines that product teams can follow. There's a lot of detail behind all of these but just to give you a sense, there are three big steps we talk about when we think about products.

The first is assess and prepare. This is really just stepping back and thinking about the merits of developing the product, considering the organizational values, the business objective, and pulling together a team that has diverse perspectives, and even potentially engaging those that may be impacted by the product, which could be customers - and not necessarily direct customers but groups that could be indirectly impacted. So it's really having that conversation and asking, ‘What are the benefits? What are the potential harms? Should we even do this?’ It’s important that you need to have that discussion up front.

That's the first piece. Then, when you're actually building the product, you need to do things like evaluate the data and system to make sure you're minimizing fairness harms, designing the product to minimize negative impacts on society and the environment, incorporating features in human control and documenting all of this to ensure transparency. There's a whole set of practices that need to happen while you're building it.

Finally, when you're ready to deploy − you're validating that the product performance is there − you test for unplanned failures and potential misuse, which is that much more important as we think about generative AI. The need for red teaming is really strong there. Then communicate all of these design choices and the performance limitations out to the users.

AI Business: You say that deploying responsible AI is not just about the algorithms, but also about the business context. Can you explain that a little bit more?

Mills: A lot of people focus on the algorithm in the idea of fairness and bias, … and it's something you need to worry about. But let's hypothetically say you do all this great work to minimize bias. If you then embed that in the business and humans add back their own biases on top, it is all for naught.

You have to think about how that system will live in that ecosystem. Make sure as you design it, you're taking steps to minimize harmful things. Some of those steps may be outside the system itself.

If it's a product to support business decision makers, you actually want diversity and inclusion training for the users so that they're thinking about these issues. So it's not a pure tech solution. It's really a socio-tech answer to the issue and may involve processes or education.

AI Business: Can you explain how people can inject their own biases, even after the algorithms are all responsible and fair?

Mills: Let’s use hiring as an example. The algorithm recommends 10 job candidates for a recruiting team (to choose from). If that recruiting team then looks at that list, and makes decisions by keying on names, universities, or (other non-job related factors), they could be introducing their own biases. The bias may not even be for an ethnic group; it could simply be based on (being from the same) alma mater. So you have to think about all these potential ways that bias can manifest itself and ask yourself, ‘how would we present these results to the user to minimize that bias, and how do we educate the user?’

AI Business: You’re recommending a very holistic approach. How long will this process take? In one of BCG’s papers, it says for responsible AI to reach maturity, it takes an average of three years. Does it really take that long?

Mills: Yes. Truly operationalizing responsible AI requires that we put in place a comprehensive program. First, there's the strategy of setting the North Star for responsible AI across the organization: What are the principles that you're adhering to? What's the inherent risk framework you're going to use? Then there's a whole piece around governance: What's the organizational structure, the escalation paths? What are the decision rights? How are you linking into corporate governance, and all those types of issues.

There's a process piece, which ties back to what I was talking about earlier of actually embedding this into the product development processes, putting in place the right tech and tools. It's very easy to say, ‘evaluate data for bias’ but in practice, that can be challenging. Finally, establish a code of conduct, set the right tone, and build that culture of shared responsibility.

So, fully mature programs do take on average about three years. That's not to say that you can't make tremendous progress very quickly, or drive impacts. It's not like you need to have hundreds of people doing this over many years. Small steps can drive big change, but it does take time to mature, particularly when you think about things like a cultural change, which does not happen overnight.

AI Business: In another of BCG’s papers, the author said there were some new problems that people have not talked about when it comes to AI models. One of them is something called a massive capability overhang. Can you briefly tell us more about these other challenges and what they mean?

Mills: This goes to generative AI and the new challenges that generative AI is bringing. Capability overhang refers to systems designed to do a basic task (that then also shows the ability to do other unexpected tasks as well). Large language models can predict the next word in a sentence, but we also found that they can do a tremendous number of things very well that no one ever planned for, like design a website from that just basic capability. This makes it much harder to identify the potential risks when you have this system that can be used in ways we don't even know yet.

There are three big issues companies are struggling with today around generative AI. I’m going to put aside the very real, broader societal questions, and just focus on the near term business challenges: uncertain quality, unclear ownership and unclear protection.

Uncertain quality means you really need a detailed review of the outputs of these systems to make sure they're correct, or else they can look convincingly right but be factually wrong. You hear this referred to as hallucinations in the media.

The next piece is unclear ownership. It's very hard for users to look at the outputs and understand whether the source data is being used appropriately or inappropriately, whether it is copying copyrighted data. There was a news organization that put out a number of news stories and had to retract over half of them because they directly plagiarized other sources of text.

The last piece is unclear protections. There are a lot of open questions around these models: How are the inputs I put into them going to be used by the model builder? What ownership rights do I have over the outputs? Do I have any intellectual property rights over the outputs?

(If that's not enough, there's also) what we call the shadow AI problem. If you're familiar with shadow IT, it's the same idea: Somewhere in the organization, somebody is building AI systems for outsiders that governance is unaware of, and it creates risk for the company.

That's not new. But what generative AI has done is really democratize access to AI. So you don't need an expert team anymore, or you don't need access to corporate infrastructure and data. Suddenly this AI can pop up anywhere from anyone exceptionally quickly in ways you don't foresee.

AI Business: That sounds pretty scary. One thing businesses are always concerned about is risk management. Is there a solution they can buy perhaps off the shelf or already partly customized for them that makes it easier to implement these guardrails?

Mills: Unfortunately, there really isn't a one-size-fits-all solution. There are good frameworks; we have ours that I talked about. But each organization is truly unique. Trying to force-fit one solution doesn't work. It's really about tailoring it and thinking about how this will work best within the organizational structure and which function makes sense to drive this.

AI Business: One final question, and I don't know if you have a view on this, but do you think that AI does or could pose an existential threat to humanity?

Mills: I do think there is the risk of AI having a destabilizing effect if we, as a society and a community, don't come together and collectively manage it. That's what you're starting to see with the recent White House announcement: The government's thinking about it and bringing together a lot of the generative AI developers into that conversation.

The right conversations are starting to happen. As long as they continue, and steps are taken, I am an eternal optimist so I believe we will end up on a good path.

To keep up-to-date with the AI Business Podcast, subscribe on Apple and Spotify or wherever you get your podcasts.

Stay updated. Subscribe to the AI Business newsletter

Read more about:

ChatGPT / Generative AI

About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.

Keep up with the ever-evolving AI landscape
Unlock exclusive AI content by subscribing to our newsletter!!

You May Also Like