Signal AI: 95 percent of the C-suite believes in AI for decision making

Around 80 percent of respondents said they have too much data to consider

Ben Wodecki, Jr. Editor

June 10, 2021

2 Min Read


Around 95 percent of C-level business executives believe that using AI in their company's decision-making processes will vastly improve outcomes for their brand, according to a survey by Signal AI.

The 2021 State of Decision Making Report, which surveyed 1,000 C-level executives, found that 91.6 percent believed companies should be using AI to augment their decision making.

A further 79.3 percent said their companies are already using AI to make some decisions.

“We’re seeing tangible data now that shows these executives don’t feel they have the tools they need to succeed, to make decisions as efficiently and effectively as possible,” David Benigson, CEO at Signal AI, said.

Data overload

Signal AI cited a prediction from Gartner that decision augmentation will surpass all other types of business AI initiatives by 2030, and will account for 44 percent of global AI-derived business value.

The report argues that the adoption of decision-making augmentation needs to accelerate: around 70 percent of respondents said they base their business decisions on data, yet 80 percent felt they have too much data to consider when making decisions.

Around 85 percent of respondents said they believed using technologies like AI in decision making could increase revenue.

Caution amid change

The potential benefits of AI for decision making were also noted by Sascha Marschang, deputy director of the European Public Health Alliance, who recently warned that caution is needed when applying such systems in a public health setting.

Marschang noted that AI systems may not be able to account for certain factors, such as the role of socio-economic determinants in shaping health status and limiting life choices, and that the difference between risk categories may not be straightforward.

“Increased reliance on machine-generated decision-making could also be risky because the people affected by outcomes are real; individuals at the margins of society could be further disadvantaged,” Marschang said.

“If not carefully checked, the conclusions drawn by biased AI systems could become even more problematic than the equally biased decisions of gatekeepers in the analog world, not least due to the many preconceptions and assumptions of their designers and programmers.”

About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

