Op-ed by Gregory Shea, adjunct professor of management at Wharton and adjunct senior fellow at its Leonard Davis Institute of Health Economics

April 20, 2022

5 Min Read


The grandest challenge in artificial intelligence may be one that long predates computer technology and yet sits at the center of AI governance: the challenge of overseeing the use of AI and avoiding its abuse.

More fundamentally, it is the challenge of thinking clearly about our thinking and, in the case of AI, the expression of our thinking, however flawed, in a powerful tool. Throughout time and across traditions, we have struggled with the underlying challenge of disciplining our thought, a struggle witnessed by thinkers as different as Socrates and Lao Tzu.

Today, we might ponder what data to weigh, and in what fashion, to arrive at the best decision possible. The challenge begins there, but it also includes the mental model we carry with us as we do that weighing and fashioning, as we deploy AI and learn from it. We are both teacher and student, guiding AI’s learning and being guided by it.

Challenge and consequence of data cleanliness

In health care, AI can mine vast sets of data and unearth disease treatment options. But AI can only search the tomes of data provided to it, and only in the manner and within the guidelines laid out for it.

If the provided tomes do not include potentially relevant variables (such as gender, race, economic class, exposure to lead, or all four in combination), then AI cannot learn about their respective and combined impact.

If the tomes of data do not contain reliable information, whether by accident or by design, then AI will learn from the flawed data and produce flawed decisions. Thus will our own flawed models of the mind be developed more fully and fed back to us, limits and blinders firmly in place.
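To make the point concrete, here is a minimal sketch (simulated data, hypothetical variable names, not drawn from any real study): a model trained on records that omit a variable such as lead exposure simply cannot learn its effect, and its predictions carry that blind spot forward.

```python
# Minimal illustrative sketch: simulated patients whose outcome truly depends on
# age AND lead exposure. A model fit on data that omits exposure cannot learn it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
age = rng.normal(50, 15, n)
lead_exposure = rng.binomial(1, 0.2, n)          # the variable missing from the "tome"
outcome = (0.03 * (age - 50) + 1.5 * lead_exposure + rng.normal(0, 1, n)) > 0.5

full_data = np.column_stack([age, lead_exposure])
partial_data = age.reshape(-1, 1)                # what the model is actually given

print(LogisticRegression().fit(full_data, outcome).score(full_data, outcome))
print(LogisticRegression().fit(partial_data, outcome).score(partial_data, outcome))
# The second model is blind to lead exposure; its errors concentrate in exactly
# the patients the missing variable describes, and nothing in the output says so.
```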

Additionally, health care providers and payers tend to guard their data, however unique or flawed. Some data hoarding stems from legal and ethical concerns over patient privacy, some from distrust, and some from the real, presumed, or feared work required to make data sharing possible.

That work would include developing apples-to-apples data definitions across providers as well as within provider systems. That work would in turn likely mean standardization of care. Standardization of care constitutes a challenge of the first order given the variation in actual care protocols across providers and provider systems at any given time, let alone across time.

This challenge therefore requires a substantial amount of work within and between provider systems, work that would land on the desks of already heavily laden workers besieged by direct and indirect pandemic-induced demands.
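What that apples-to-apples work looks like in miniature, with invented field names standing in for real ones: two provider systems record the same blood-pressure reading under different names and conventions, and someone has to write, and then maintain, the translation before any data can be pooled.

```python
# Hypothetical records from two provider systems; all field names are invented.
provider_a_record = {"pt_id": "A-1001", "sys_bp": 142, "dia_bp": 91, "bp_date": "2022-03-14"}
provider_b_record = {"mrn": "B-88321", "bloodPressure": "142/91", "recordedOn": "14-Mar-2022"}

def to_shared_schema_a(rec):
    # Provider A already splits systolic/diastolic; only the names need mapping.
    return {"patient_id": rec["pt_id"],
            "systolic_mmHg": rec["sys_bp"],
            "diastolic_mmHg": rec["dia_bp"],
            "measured_on": rec["bp_date"]}

def to_shared_schema_b(rec):
    # Provider B stores one string; it must be parsed, and the date format
    # still differs, one more definition the systems have to agree on.
    systolic, diastolic = (int(x) for x in rec["bloodPressure"].split("/"))
    return {"patient_id": rec["mrn"],
            "systolic_mmHg": systolic,
            "diastolic_mmHg": diastolic,
            "measured_on": rec["recordedOn"]}

print(to_shared_schema_a(provider_a_record))
print(to_shared_schema_b(provider_b_record))
```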

Governance of AI in health care

Given the above considerations, how ought we to govern or oversee the installation and evolution of AI in health care? The following are some suggested guidelines:

1. Understand the challenge. Take the time to understand the nature of the underlying problem, namely securing the discipline of ongoing critical thinking, before diving into its particular manifestation.

2. Begin with the end in mind. What do you want AI to do for you? How will you know that it has accomplished its goal?

3. Design accordingly. AI is but one tool in the toolbox; where does it fit? Use the Work Systems Model to design how your organization will deliver and sustain integration of AI into its operation.

4. Turn to the issue of governance. In the context that you have created, what needs oversight? Who should provide it? Who oversees the governors?

Answering these questions depends on steps 2 and 3 above and, in the end, should require board approval. It’s that important.

Boards need educating about AI well before they oversee the construction of AI oversight and education. Boards would also benefit from including at least one AI technical expert, one ethicist, and one advanced student of stakeholder capitalism and the stakeholder view of the firm.

5. Staff the governance committee with appropriate diversity of perspective. AI will only grow in breadth and importance of application. Prepare for that by staffing or supporting any governance committee with an appropriate range of expertise and commitment, e.g., medical, research, community, technical, organizational change/design, educational, ethical, compliance, and legal. Notably, leveraging diversity requires advancing team leadership and membership skills, especially conflict management.

6. Establish oversight basics:

- Overall criteria for AI use and development

- Quality of data

- Protocols for data use

- Review of AI learning, both processes and outcomes

7. Expand AI collaboration.

8. Develop and guide implementation to broaden understanding of AI.

Clinicians, physicians in particular, and public health experts need to understand the basics of AI. While the medical school curriculum is already crammed to overflowing, part of it should be trimmed to make room for adequately training physicians in how to guide the development of AI as a clinical tool.

Even rudimentary exposure to decision-making traps, statistics, research methods, systems thinking, and machine learning would help prepare anyone, physicians included, for consuming, let alone overseeing, AI. Potentially, schools of public health, medical schools, and professional societies could combine to provide both degree and continuing-education offerings.

Similarly, communities should revamp public education to include these basics of AI in order to enable the citizenry to fulfill their obligations as the ultimate governors of AI and as consumers of the health care it will increasingly inform and even drive. Any group charged with governance of AI anywhere ought, therefore, to know that it can draw upon quality ‘upstream’ work to educate societal members and stakeholders for their role as AI consumers.

AI may prove the equivalent of the opposable thumb in the digital world. If so, it may represent a staggeringly powerful result both of that physiology and of all the tools that it enabled.

More importantly, AI may represent a premier expression not just of our capability to make and use tools but also of our capacity to consider the science of their ongoing development. The tool of AI, however, requires competent and dedicated oversight lest it fall into misuse.
