Businesses will soon be able to access dedicated testing services for AI systems, Accenture revealed in a new product announcement today.
Accenture's AI Testing will provide companies with a methodology to build, monitor, and measure reliable AI systems within their own infrastructure or via the cloud. It will form part of Accenture's Testing Services suite, which also includes testing strategy, engineering, digital, and enterprise technology for organizations.
"The adoption of AI is accelerating as businesses see its transformational value to power new innovations and growth," said Bhaskar Ghosh, group chief executive of Accenture Technology Services. "As organizations embrace AI, it is critical to find better ways to train and sustain these systems - securely and with quality - to avoid adverse effects on business performance, brand reputation, compliance and humans."
Deploying the company's 'Teach and Test' methodology, AI Testing aims to ensure AI systems produce the right decisions in two phases.
The 'Teach' phase examines the choice of data, models, and algorithms used to train machine learning systems. It statistically evaluates different models to select the best performing one for production, while avoiding gender, ethnic and other biases, as well as ethical and compliance risks.
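The selection step described here can be illustrated with a minimal sketch: several candidate models are scored on held-out data, and any model whose positive-prediction rate differs too much between demographic groups is excluded before the best performer is chosen. The candidate models, bias metric, and threshold below are illustrative assumptions, not Accenture's actual tooling.

```python
# Sketch of the 'Teach' phase: statistically compare candidate models
# and screen out those with a large bias gap across a protected group.
# All names and thresholds are hypothetical.

# Synthetic labelled data: (feature, group, label), with the label
# defined as feature > 0.5 so one candidate fits it exactly.
data = [(i / 200, "AB"[i % 2], int(i / 200 > 0.5)) for i in range(200)]

# Candidate "models": simple threshold classifiers
candidates = {name: (lambda x, t=t: int(x > t))
              for name, t in [("loose", 0.3), ("good", 0.5), ("strict", 0.7)]}

def accuracy(model, rows):
    return sum(model(x) == y for x, _, y in rows) / len(rows)

def bias_gap(model, rows):
    # Difference in positive-prediction rate between groups A and B
    rates = {}
    for g in "AB":
        grp = [r for r in rows if r[1] == g]
        rates[g] = sum(model(x) for x, _, _ in grp) / len(grp)
    return abs(rates["A"] - rates["B"])

BIAS_CAP = 0.1  # hypothetical fairness tolerance
scored = {n: (accuracy(m, data), bias_gap(m, data))
          for n, m in candidates.items()}
eligible = {n: s for n, s in scored.items() if s[1] <= BIAS_CAP}
best = max(eligible, key=lambda n: eligible[n][0])
print(best)  # → good
```

Real pipelines would use cross-validation and richer fairness metrics, but the shape of the decision (accuracy subject to a bias constraint) is the same.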
During the 'Test' phase, outputs from any AI system will be compared to key performance indicators within the business, and assessed for whether the system can explain how a decision or outcome was determined. Accenture claims this phase uses 'innovative techniques and cloud-based tools to monitor the system on an ongoing basis for sustained performance'.
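Ongoing monitoring of this kind can be sketched as a rolling comparison of a deployed model's outcomes against a business KPI, with an alert raised when the window dips below threshold. The KPI (a minimum rolling accuracy), the window size, and the class below are assumptions for illustration only.

```python
# Illustrative sketch of 'Test'-phase monitoring: log each
# prediction/outcome pair and flag when rolling accuracy falls
# below a hypothetical KPI threshold.
from collections import deque

class KPIMonitor:
    def __init__(self, kpi_threshold=0.9, window=50):
        self.kpi_threshold = kpi_threshold
        self.window = deque(maxlen=window)

    def record(self, prediction, actual):
        """Log one prediction/outcome pair; return True if the KPI is breached."""
        self.window.append(prediction == actual)
        if len(self.window) < self.window.maxlen:
            return False  # not enough data yet to judge
        accuracy = sum(self.window) / len(self.window)
        return accuracy < self.kpi_threshold

monitor = KPIMonitor(kpi_threshold=0.9, window=50)
alerts = 0
# Simulate 100 correct predictions, then a sustained run of errors
# (e.g. the model drifting after deployment).
for i in range(150):
    correct = i < 100
    if monitor.record(1 if correct else 0, 1):
        alerts += 1
print(alerts > 0)  # → True: the drift eventually breaches the KPI
```

A production monitor would track several KPIs and feed alerts into retraining, but the core loop of comparing live outputs against business metrics is what the 'Test' phase describes.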
This methodology was used to train a conversational virtual agent for a financial services company, which was purportedly trained 80% faster than previously possible and said to have achieved an 85% accuracy rate on customer recommendations. It has also been used to teach a sentiment analysis solution to evaluate a brand's service performance.
"Testing AI systems presents a completely new set of challenges. While traditional application testing is deterministic, with a finite number of scenarios that can be defined in advance, AI systems require a limitless approach to testing," said Kishore Durg, senior managing director, Growth and Strategy and Global Testing Services Lead for Accenture. "There is also a need for new capabilities for evaluating data and learning models, choosing algorithms, and monitoring for bias and ethical and regulatory compliance."
The announcement comes at a time when building transparency and accountability into AI is assuming increasing significance for businesses. Big 4 consultancy PwC recently publicised its 'Responsible AI' framework, which similarly looks at the need for dedicated AI governance structures. It represents a shift in concerns - as the use cases for AI become clearer, stakeholders and businesses are increasingly turning their attention - and ire - towards 'black box' AI solutions.