AI Frameworks Reduce Generative AI’s Risks, Applied Intelligence Live! Austin 2023

An ethical framework will accelerate adoption of generative AI

Deborah Yao, Editor

September 20, 2023

2 Min Read

At a Glance

  • American Airlines' Sai Nikhilesh Kasturi describes the ethical frameworks needed for generative AI adoption to flourish

The tipping point that ushered in the era of generative AI came from the confluence of three factors: massive proliferation of data, advances in scalable computing and machine learning innovation.

But as awe-inspiring as generative AI's capabilities are, be it text-to-image, text-to-text or text-to-video, adoption is being hindered by its well-known risks, including bias, privacy issues, IP infringement, misinformation and potentially toxic content.

“These are the high-level risks and concerns that every company or organization sees right now, and that’s the whole reason they are a little skeptical of using ChatGPT for their daily work,” said Sai Nikhilesh Kasturi, a senior data scientist at American Airlines at Applied Intelligence Live! in Austin, Texas.

His solution to mitigate these risks? Establish the right AI frameworks.

  1. Strategy and control

    1. AI policy and regulation

    2. Governance and compliance

    3. Risk management

  2. Responsible practices

    1. Model interpretation

    2. Transparent model decision-making

  3. Bias and fairness

    1. Define and measure fairness

    2. Test

  4. Security and safety

  5. Core practices to fine-tune model output

    1. Follow industry standards and practices

    2. Keep humans in the loop

    3. Monitor against model drift

“Once the ethical frameworks are built, and they are in place, the massive adoption of generative AI might increase over the years,” Kasturi said, citing Bloomberg Intelligence’s prediction that the generative AI market will grow to $1.3 trillion by 2032.


Traditional AI models were built for a single, specific task, but the foundation models underpinning generative AI are used for many tasks at once, which Kasturi said has reduced training time “drastically.”

Asked how to address a sticky problem in generative AI, models getting answers wrong or making them up, Kasturi said one way is to use two AI systems to cross-check each other.

MIT and Google DeepMind researchers recently developed a method in which AI chatbots get to the right answer by debating each other.
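The cross-checking idea can be sketched in a few lines. This is an illustrative toy, not the researchers' method: the `debate` loop and the two stub agents below are hypothetical stand-ins for real LLM calls, showing only the shape of the approach, each agent answers, sees its peers' answers, and revises until the group converges.

```python
# A minimal sketch of multi-agent debate: each agent answers independently,
# then revises after seeing the others' answers, until consensus or the
# round budget runs out. Agents here are deterministic stubs, not LLMs.

def debate(agents, question, rounds=3):
    # Round 0: independent answers.
    answers = [agent(question, []) for agent in agents]
    for _ in range(rounds):
        if len(set(answers)) == 1:  # consensus reached
            break
        # Each agent revises given every other agent's current answer.
        answers = [
            agent(question, [a for j, a in enumerate(answers) if j != i])
            for i, agent in enumerate(agents)
        ]
    return answers

# Hypothetical stub agents: one answers correctly from the start,
# the other starts wrong but defers to a peer's answer on revision.
def agent_a(question, peer_answers):
    return "4"

def agent_b(question, peer_answers):
    return peer_answers[0] if peer_answers else "5"

print(debate([agent_a, agent_b], "What is 2 + 2?"))  # → ['4', '4']
```

With real models, the revision step would prompt each LLM with the peers' reasoning and ask it to defend or update its answer; the convergence check would compare final answers rather than raw strings.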

About the Author(s)

Deborah Yao


Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.
