Deloitte AI Institute Head: 5 Steps to Prepare Enterprises for an AI Future

Beena Ammanath, global head of the Deloitte AI Institute, joins the AI Business podcast

Deborah Yao, Editor

January 31, 2024


Beena Ammanath, global head of the Deloitte AI Institute, joins the AI Business podcast to discuss five key steps organizations can take to ensure that their board members and business leaders are prepared to face a future shaped by AI.

Listen to the podcast below or read the edited transcript.

Tell us about the Deloitte AI Institute.

The Deloitte AI Institute focuses on the applied side of AI: What are the different applications of AI? How can AI be used? What are the best practices? What are the nuances that come with it, such as the impact on the workforce, the regulatory landscape, ESG, and diversity, equity and inclusion? The Deloitte AI Institute looks across all industries, including government, and puts out best practices so that companies can use AI faster and more effectively.

I do not think it is just generative AI; I think it is broadly applicable to AI. Generative AI has simply raised awareness, and it has broad implications. It is important for board members to be prepared for this future that is going to be shaped by AI and generative AI. We talk about five steps that board members should consider right now as they prepare themselves to be effective for their organizations.

The number one step is to build the board's AI literacy so it can take part in AI or generative AI risk management. Board members need to know what AI is, what generative AI is, what the capabilities are and what risks come with them, because it is not going to be one-size-fits-all. Depending on the organization's focus, the industry and the use case, the risks are going to be different. So boards need to build their own AI literacy.

This can be done through traditional methods, by taking a course or bringing in external speakers and subject matter experts. And boards must keep themselves updated, because it is not going to be once-and-done; there are so many advances happening in AI and generative AI that boards have to not only build their AI literacy as of today, but also have a plan for continuous learning.

Number two would be to make sure the C-suite is also AI literate. If AI literacy in the boardroom is important, it is even more important for the C-suite. … At the end of the day, the board looks from a governance perspective, but the C-suite has to be aware enough to use AI capabilities most effectively to drive business value. And it is not just about looking at the risks; it has to be about the value and opportunities that come with AI. They need that knowledge and familiarity with the technology and, just as for boards, it is not a once-and-done thing. They need a plan for ongoing literacy.

The third is to consider recruiting board members who have operational AI experience, meaning they have played the role of a chief AI officer or chief data and analytics officer. It will be very useful for the board to have an AI operator kind of executive to bring in that technical (and complex) understanding of AI. Traditionally, those roles have not existed on boards, but this might be a good time to add them, given the relevance of generative AI and the impact it is going to have on every organization. It would be important to have an in-house subject matter expert on the board.

The fourth step relates to orienting the board for the future, so that governance is not an ad hoc exercise. Boards need to be able to set up governance to guide the ethical and trustworthy use of generative AI, whether that is setting up a subcommittee within the audit committee for AI- or generative AI-specific auditing, for succession planning, or for risk management related to finance and operations, since there will be broader implications.

The last one is focused on guiding the organization as generative AI matures. Given their role, board members are usually not working directly with generative AI, but they are important stakeholders with real responsibilities. As the company's leadership and the lines of business explore how generative AI can be a productivity enhancer and innovation driver that brings new business value, the board can step back, take a higher-level, big-picture view of AI programs, and focus on guiding the enterprise in the ethical and trustworthy deployment of generative AI. In fact, it is very helpful for the board to take a proactive approach as their organizations start looking at generative AI and implementing it at scale.

Who will be responsible within the company for handling AI risks as well as opportunities?

We have seen a few different roles taking on that responsibility for managing risk. We have definitely seen chief AI officers or chief data and analytics officers taking on that mandate. We have seen chief risk officers expanding their scope to include AI-related risks. And there is a new role that is evolving: chief trust officer. … So, depending on where companies are in their AI journey, it might call for expanding the scope of an existing role or, if they are very advanced in their journey, they might look at creating a dedicated role.

You mentioned earlier that one of the things they could do to orient the board for the future is to create subcommittees on AI. What would those subcommittees look like? What kind of activities would they oversee?

Just as boards have traditionally had audit committees focused on financial auditing, you might see one that is more focused on AI-related risks and the auditing of AI tools the organization might be using. … But the reality is, since AI is not mature enough and there are no established best practices around it yet, it will be the mature organizations that start with this kind of subcommittee. Over time, we will figure out exactly what the mandate will be.

As organizations focus on business value creation from generative AI, I can see the subcommittee becoming more active: proactively identifying and adopting a framework for managing risks, getting regular reports from the C-suite on identified risks and how they were mitigated, and looking at the regulatory landscape to see how prepared the organization is for regulations that might be coming. … So over the next couple of years there will be a need to constantly sense, and implement, the changes needed to address AI-related risks in the context of regulations.

How can companies stay abreast of regulations? There are global, national and state regulations. They do not always fit together nicely.

There is always going to be complexity around regulation. The challenge we are going to deal with is that, unlike in the past, regulations will come at a faster pace, because the impact of AI and generative AI is so broad and cuts across several industries. The structures exist to make sure you are following local, regional and global regulations; what does not exist is the capacity to cope with the speed at which these regulations might come in.

That is where boards can play a very active role in figuring out how to keep up with the speed of regulation. A big part of it will be sensing, anticipating and staying on top of the policies in place, or being proposed, in different countries and regions, and then being prepared when they translate into actual regulation. … Boards have to make sure the organization is prepared (to handle) the accelerated pace at which regulations are coming at us.

Do diversity and inclusion play a part in board governance, and if so, how?

It absolutely does. (Beyond) the traditional view of diversity, whether based on gender or ethnicity, having an operational AI role is a way to bring diversity to the board in terms of educational and experience background. For me, it is another way to look at diversity. Any board that operates at a global level has to look at diversity across multiple dimensions. There is obviously gender, race and ethnicity, but also cultural background, educational background, experience, domain expertise and subject matter expertise; there are so many important factors to consider from a board profile perspective.

Also, part of the risk that comes with using certain AI models is that they could affect the organization's diversity goals. If you are doing AI projects that leave behind certain demographics or cultures, that undermines the diversity, equity and inclusion (DEI) mandate boards might have set for themselves. So it is important to have the right diversity within the board's profile, but it is also important to look at the impact on an organization's broader diversity and inclusion goals when you start using AI.

Is your AI being built by a narrow set of demographics, so that its output excludes certain demographics? Is your AI team diverse enough? Just as boards have looked at whether the C-suite and the leadership team are diverse enough, it is important to start thinking about the impact of AI on DEI goals as well.
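
To make that kind of check concrete, here is a minimal sketch, in Python, of how a team might report a model's positive-outcome rates across demographic groups and flag gaps. This is an illustration, not anything Deloitte or Ammanath prescribes: the data, the group labels and the four-fifths threshold are hypothetical assumptions, and a real audit would rely on established fairness tooling and legal guidance.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Compute the positive-outcome rate for each demographic group.

    predictions: 0/1 model decisions (e.g., approvals from a screening tool)
    groups: demographic label for each decision, aligned with predictions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the
    best-served group's rate (the common 'four-fifths' rule of thumb)."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical decisions from an AI screening tool
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["group_a"] * 5 + ["group_b"] * 5

rates = selection_rates(preds, groups)
print(rates)                    # {'group_a': 0.6, 'group_b': 0.4}
print(flag_disparities(rates))  # {'group_a': False, 'group_b': True}
```

A regular report built on checks like this, rolled up by management, is the kind of concrete evidence an AI subcommittee could review alongside mitigation plans.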

About the Author

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, the Wharton School and the Associated Press.
