January 5, 2024
Sponsored by IBM
A Responsible Approach to Scaling and Accelerating the Impact of AI
By Tarun Chopra, Vice President, Product Management, Data & AI at IBM
As enterprises race to capture the promise of AI, it's paramount that they take the time to do it right. Deployed on a foundation of carefully governed data, the latest generation of generative AI tools can revolutionize the way businesses operate - transforming functions from customer care and IT to finance - and give shrewd adopters a solid leg up on the competition. Deployed without proper care, however, these platforms can lead companies down a path of ethical missteps, lawsuits, and lasting reputational damage.
The good news: with the right approach, enterprises can get up to speed quickly without putting their business - or their customers - at risk.
Innovation Amid Regulation
A recent study found that 80% of business leaders had ethical concerns about adopting generative AI. Such concerns have driven legislators around the world to push for regulation around its use - and penalties for its misuse. In June, European Union policymakers advanced the EU AI Act with the aim of regulating the ethical development, use, and adoption of AI. The act puts forth severe repercussions for enterprises that overstep its boundaries, imposing fines of up to €30 million or 6% of a company's global revenue.
Not long after, President Biden issued an Executive Order establishing new standards for AI safety and security. These moves signal that tighter regulation will shape AI's future, and they underscore that it has never been more critical to adopt AI with a roadmap built on explainability, ethics, and trust.
Building with Trust
For AI models to succeed, they need to be trained on carefully curated data and built with transparency. If a model's training data is flawed, or its provenance is unclear, businesses risk being unable to validate the model as fair and accurate, unable to explain its outputs, and liable to perpetuate hate, profanity, and abuse. And if the model is a black box with no visibility into how it works, its results will eventually stray.
The key is governance: organizations must codify a system of rules, practices, and processes to use AI in accordance with their values, principles of fairness and safety, and future AI regulation. It's vital that stakeholders throughout the company are aligned on the extent of AI's use and on how its applications serve the goals of the business.
Aligning on the Mission
To this end, many companies have established AI policy groups or boards - multidisciplinary teams comprising stakeholders from consumer insights, legal, cybersecurity, privacy, and data science teams - to oversee the design and implementation of AI in their organization. The policy group should form guidelines on everything from the AI’s technical aspects to principles of ethics and risk, and make these guidelines accessible throughout the organization. At IBM, our AI Ethics Board serves this function.
The entire AI lifecycle from planning through implementation should be rigorously documented. This includes the origins of the data, the techniques that trained each model, the hyperparameters that were used, and the metrics from testing phases. This will help provide visibility into the model’s behavior throughout its lifecycle, including the data at the heart of the model’s development and possible risks that could result from its use.
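To make the idea of lifecycle documentation concrete, here is a minimal sketch of what such a record might look like in code. The class name, field names, and all of the sample values are illustrative assumptions, not a specific IBM tool or schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelRecord:
    """One entry in a model's lifecycle log (all field names are illustrative)."""
    model_name: str
    data_sources: list       # origins of the training data
    training_technique: str  # technique used to train the model
    hyperparameters: dict    # settings used during training
    test_metrics: dict       # metrics from the testing phases

    def to_json(self) -> str:
        # Serialize the record so it can be archived and audited later.
        return json.dumps(asdict(self), indent=2)

# Hypothetical example entry for a single model version.
record = ModelRecord(
    model_name="claims-summarizer-v1",
    data_sources=["internal claims archive", "licensed support transcripts"],
    training_technique="supervised fine-tuning",
    hyperparameters={"learning_rate": 2e-5, "epochs": 3, "batch_size": 32},
    test_metrics={"accuracy": 0.91, "toxicity_rate": 0.002},
)
print(record.to_json())
```

Keeping each version of a model as a structured, machine-readable record like this is what makes the later audit and compliance steps tractable: a reviewer can trace any output back to the data and settings that produced it.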
Organizations that lack the resources and systems to form a fully developed AI governance system should leverage technology partnerships to bring in the skills and tools needed to establish AI principles, strategies, and operational governance mechanisms. Leveraging purpose-built governance tools, any enterprise can get on the path to AI transformation with greater speed and assurance of trust.
Getting to Work
With strict governance frameworks in place, organizations can begin to unleash AI’s transformative power on their operations. Take the example of a customer care application for an insurance company: Using AI, the company can simplify its claims processes by classifying positive and negative feedback and summarizing key information from customers. By automating this initial step, agents can respond more quickly and with better information, reducing the time typically spent on manual reviews.
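The triage step described above can be sketched in a few lines. A production system would use a trained language model; the keyword lists below are illustrative stand-ins, included only to show the shape of the classification step:

```python
# Toy word lists standing in for a real sentiment model (assumptions, not a real lexicon).
POSITIVE = {"fast", "helpful", "resolved", "easy", "great"}
NEGATIVE = {"slow", "denied", "confusing", "delay", "unhelpful"}

def classify_feedback(text: str) -> str:
    """Label a piece of customer feedback as positive, negative, or neutral."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_feedback("the claim was resolved and the agent was helpful"))
```

Even in this toy form, the value of the pattern is visible: negative feedback can be routed to an agent first, while routine positive feedback is summarized and archived automatically.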
Aided by a purpose-built AI governance toolset, enterprises can proceed with greater assurance that such applications adhere to their established ethics policies. Such tools can monitor for the presence of personally identifiable information and raise alerts when this data could pose a risk; they can monitor natural language models for drift and relevancy; and they can create dashboards that give stakeholders better visibility into how a model is functioning, supporting audit and compliance requirements.
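The PII-alerting check is the easiest of these monitors to sketch. The two regular expressions below are simplified assumptions covering only email addresses and one phone-number format; real detection tools cover many more categories and locales:

```python
import re

# Illustrative patterns only; production PII detection covers far more formats.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def pii_alerts(text: str) -> list:
    """Return the PII categories detected in a piece of model input or output."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

alerts = pii_alerts("Contact me at jane.doe@example.com or 555-867-5309")
print(alerts)
```

A governance layer would run a check like this on every prompt and response, raising an alert (or redacting the match) before the text reaches a model or a dashboard.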
AI is no longer an experiment. With the worldwide AI software market predicted to grow from $66 billion in 2022 to nearly $307 billion in 2027 at a compound annual growth rate (CAGR) of 31.4%, enterprises must begin integrating it throughout their organization or face an insurmountable competitive disadvantage. Companies that embark on this transformation with a foundation of airtight governance will not only enjoy the confidence that their ethics are being upheld, but they’ll also stand as a testament to AI’s true potential as a trustworthy catalyst for change.