by Alan Rodger, Ovum 15 August 2019
Across the broad spectrum of AI usage (see Ovum graphic below), its deployment will both reduce and extend different parts of the workload for enterprise risk managers and practitioners.
Examples of positive, efficiency-gaining AI-related outcomes for risk professionals include technology being applied to provide early warning in operational systems. Here, analyzing threats in real time and prompting human response or triggering automated action could prevent negative events from causing loss or damage.
Machine and deep learning can also help to analyze an operation’s characteristics over time and provide input for more effective and efficient risk management. A specific example would be analysis of contracts using natural language processing techniques, with the objectives of improving the quality of individual agreements and providing an overview of contract-related risks across all of an organization’s contractual relationships.
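To make the contract-analysis idea concrete, the sketch below flags clauses that match known risk categories. It is a minimal illustration, not Ovum's methodology: the function name, the risk categories, and the keyword patterns are all hypothetical, and a production system would use trained NLP models rather than simple pattern matching.

```python
import re

# Hypothetical risk-term patterns; a real deployment would rely on trained
# NLP models (entity extraction, clause classification), not keywords.
RISK_PATTERNS = {
    "unlimited_liability": r"unlimited liability",
    "auto_renewal": r"automatic(ally)? renew",
    "unilateral_termination": r"terminate .* at any time",
}

def flag_contract_risks(text: str) -> list[str]:
    """Return the risk categories whose patterns appear in the contract text."""
    lowered = text.lower()
    return [name for name, pattern in RISK_PATTERNS.items()
            if re.search(pattern, lowered)]

sample = ("The supplier may terminate this agreement at any time. "
          "This agreement will automatically renew each year.")
print(flag_contract_risks(sample))
```

Running the same check across every agreement in a contract repository would give the portfolio-wide risk overview described above, with each flagged category feeding into the organization’s risk register.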
However, organizational risk practices will have to consider a number of new areas of risk that can arise from AI implementations, and find ways to counter or mitigate specific instances. General categories of new AI-related risk include bias, overconfidence in AI, cyberattacks, and legal and reputational risks.
- Even though an AI engine may be impartial, existing biases can be reinforced in two major ways: via humans introducing their own elements of bias as part of helping AI to understand what is relevant and what is not while the AI models are being “trained”; and via the data the AI consumes incorporating behavior patterns of the past rather than being a neutral source of all possible behaviors and their outcomes. Poorly trained AI may not make the right connections to correctly identify risks or issues.
- There may be overconfidence in AI, and unwarranted assumptions about AI’s ability to provide “true” insights from incomplete or poor-quality data, faulty training, or defective programming. Relying on AI as the sole source of risk insight poses the same kind of risk as any narrowly focused approach to risk management.
- AI systems have direct access to large quantities of corporate data. They will be a natural and increasing target for cyberattacks, so built-in security must be to a high standard.
- Data privacy regulation is still catching up with AI, and there is little existing precedent to guide appropriate use, particularly following the advent of GDPR. Any system that is biased, error-prone, or subject to attack constitutes a reputational risk to the organization that operates it, and indeed a risk of regulatory non-compliance if the system is not managed appropriately.
Without doubt, attractive forecasts of financial gains will feature in many business cases for investment in AI. However, Ovum strongly advises all organizations to ensure that the potential impact of any AI-related risks is featured alongside the benefits, and that accountability is defined.
Alan Rodger is a senior analyst at Ovum, covering the topics of IT governance and security. He has been with the company since 2002, when he joined Butler Group (now incorporated within Ovum, itself part of Informa Group Plc).
This post was originally published in the Ovum Knowledge Center.