The Race Towards Ethical AI: A Call for Responsible Innovation

Regulatory frameworks must evolve to embrace technology integration

Eric Winston, Mphasis general counsel and chief ethics and compliance officer

June 21, 2024


A future powered by artificial intelligence (AI) seems inevitable. However, even as humankind embraces AI in different walks of life, vital legal and moral issues confront society. To responsibly navigate this future, we must consider possible ethical oversights and choose a path of innovation coupled with a conscience. 

AI systems of the future must navigate prejudice, bias and privacy concerns carefully. According to an industry report, 54% of businesses are deeply concerned about AI bias, 69% perform data quality checks to guard against it, and 81% of business leaders want government regulation to define and prevent bias.

These AI systems are often powered by vast amounts of personal data, raising consent and privacy concerns; 22% of executives cite data privacy as their top generative AI ethical concern.

Accountability for AI decisions remains a complex challenge: when an AI system makes a decision with negative consequences, it is difficult to determine whether the developer, the user or the AI system itself should be held responsible.

As AI replaces human jobs, adaptation is necessary to address potential job losses, with nearly 40% of all jobs globally vulnerable to AI automation.

Evolving Regulatory Frameworks Embrace Technology Integration


Governments around the world are now establishing AI laws and regulations. The U.S. has taken a decentralized approach, with various states implementing their own regulations. At the federal level, guidelines and principles have been proposed to ensure that AI development aligns with public interests and values. In October 2023, President Biden issued a landmark Executive Order to manage AI risks around safety and security, privacy, equity and civil rights, and consumer and worker protection, and to promote innovation.

On May 21, 2024, the EU Council approved the Artificial Intelligence Act, a groundbreaking law that aims to standardize AI regulations. The law focuses on higher-risk AI systems and could set a global standard for AI regulation. Internationally, organizations such as UNESCO are spearheading efforts to establish globally accepted ethical AI standards. This effort is taking shape through the creation of the Global AI Ethics and Governance Observatory, which serves as a one-stop shop for policymakers, regulators, academics, businesses and civil society organizations.

Regulatory frameworks must be flexible enough to adapt to new developments while providing clear standards to prevent misuse and harm.


Path Toward Ethical Implementation

As AI continues to evolve, industry leaders will play a pivotal role in shaping its ethical trajectory. Best practices involve a commitment to transparency, where companies create visibility and auditability into how AI systems make decisions and use data. Additionally, investing in diverse teams to develop AI can help mitigate biases from the start. Collaboration across academia, the public sector and the private sector is also essential to ensure that AI technologies draw on diverse perspectives and expertise.

The journey towards ethical AI requires collective vigilance, dialogue and a willingness to uphold ethical standards alongside the pursuit of innovation. By following a rigorous path, we can ensure AI serves as a tool for positive change, enhancing lives while respecting values.

About the Author

Eric Winston

General counsel and chief ethics and compliance officer, Mphasis

Eric Winston is responsible for Mphasis’ global legal and compliance function and policies. He has spent nearly twenty years guiding international, market-leading, public, and private equity-owned IT companies.

Before Mphasis, Eric served for two years as vice president of legal at Syntel, Inc., an IT services company, where he was responsible for advising executive management on a wide range of matters including domestic and international, strategic and commercial transactions, litigation, mergers and acquisitions, employment, business development, corporate governance and compliance.

From 1996 through 1999, Eric was in private practice in New York City, where he specialized in complex civil litigation matters. From 1989 through 1994, Eric served as an assistant district attorney for the Kings County District Attorney's Office in Brooklyn, New York. Eric received his Juris Doctor from the Emory University School of Law and his Bachelor of Arts degree in economics from Vassar College.
