AI Business is part of the Informa Tech Division of Informa PLC



Pandemic pushes enterprises towards more AI automation

by Chuck Martin
Staff numbers, economic outlook and infrastructure can all shape the AI strategy

The pandemic has pushed businesses to embrace flexibility and automation to keep pace with dynamic market conditions.

One of the most fundamental changes in the current business environment is the transformation of the role of people in the enterprise – as detailed in a virtual discussion on post-pandemic AI investment strategies organized by the VisionAIres community.

Doing more

“One of the big topics for us is automation,” said Ben Dias, data science and analytics director at EasyJet. “We’re having to do more with the same people. For example, in the airline industry, we used to build one schedule for a season, and it was fixed for the whole season. Now we have to build multiple schedules when countries close down or reopen their borders. We can’t do as much manual work as we used to.”

Economics is also a major factor in AI project management: “The profits maybe aren’t the size of what they used to be, and costs are maybe going up due to the pandemic, so they have to do more with the same or more with even less,” said Tyler Folkman, head of artificial intelligence at Branded Entertainment Network (BEN).

“That’s when investments in automation become more interesting: maybe the upfront cost was too high and you didn’t want to make it, but now you have higher costs in front of you. So now you’re willing to make that investment in the future and in automation, which a lot of companies will do as non-automation becomes more expensive.”
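Folkman’s argument is, at bottom, a payback calculation: the one-off cost of automating is weighed against the growing cost of staying manual. A purely illustrative sketch, with all figures hypothetical and not drawn from the discussion:

```python
# Hypothetical figures only -- not from the discussion
upfront_cost = 200_000            # one-off cost to build the automation
manual_cost_per_year = 120_000    # annual cost of doing the work manually
automated_cost_per_year = 40_000  # annual running cost once automated

annual_saving = manual_cost_per_year - automated_cost_per_year
payback_years = upfront_cost / annual_saving
print(f"Payback period: {payback_years:.1f} years")  # Payback period: 2.5 years
```

As manual costs rise – the “non-automation becomes more expensive” effect – the annual saving grows, the payback period shortens, and a previously unattractive upfront cost starts to look like a sound investment.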

Mark Gerban, digital partnership manager for data and connected car at Mercedes-Benz, suggested a broader approach is required, one that considers the need for buildings and traditional infrastructure – something that’s in flux due to remote working policies.

“Because of what COVID has brought – like an expedited approach to automation, to making things more continuous on their own – you don’t need to have people in the process,” Gerban said.

In the media industry, the situation is not exactly the same, noted Brian Leonard, head of engineering, production and workflow at IMG.

“I see a future of people doing more in the same period of time,” Leonard said. “Where they necessarily might do one job or one problem now, in the future it’s going to be two or three. They won’t be doing the laborious jobs, like speech to text to translation, they’ll be doing the more creative elements.”

A choice that many businesses are currently facing is where, and when, to deploy AI.

“The part that is rules-based is getting automated and AI is getting applied to that,” said Riccardo Calliano, head of global finance capabilities and talent at GSK. “Then there is the whole gray area that stands between rules and judgment in planning.”

Avoiding bias

An earlier roundtable, held as part of a formal VisionAIres community council, focused on ethics in AI and the question of built-in bias.

“Where you train neural networks based on today’s human behaviors, you also train the biases into the neural network,” Tobias Mathur, head of AI operations at Uniper, explained. “The cool thing about this is that sometimes the neural network makes that visible. It shows you when bias exists and then you can take it out.”

Uniper builds its neural networks by taking past data from a power plant and training the machines on what the human operators were doing.

“Sometimes the humans have a gut feeling that helps you understand issues that you didn’t see in the data before,” Mathur said. “The combination of using neural networks and human instincts is really the key here.”
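The discussion doesn’t describe Uniper’s pipeline in detail, but Mathur’s point – that a model trained to imitate logged human decisions can make a bias visible as something measurable – can be sketched on synthetic data. A minimal sketch; all variable names and figures are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Synthetic stand-in for historical plant data (hypothetical):
# `signal` is the legitimate input operators should react to;
# `shift` encodes which crew was on duty and *should* be irrelevant.
signal = rng.normal(size=n)
shift = rng.integers(0, 2, size=n).astype(float)

# Logged operator decisions, deliberately biased by crew
decision = (signal + 1.5 * shift + rng.normal(scale=0.5, size=n) > 0.75).astype(float)

# Fit a small logistic model to imitate the operators (plain gradient descent)
X = np.column_stack([np.ones(n), signal, shift])
w = np.zeros(3)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ w))       # model's imitation of the operators
    w -= 1.0 * (X.T @ (p - decision)) / n  # gradient step on logistic loss

# The learned weight on `shift` makes the inherited bias visible as a number
print(f"crew-bias weight: {w[2]:.2f}")  # clearly non-zero: bias detected
```

Once the bias is surfaced as a concrete weight, “taking it out” can be as simple as dropping the offending feature or correcting the labels before retraining.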

The major caution regarding biases is to identify them before they have taken root deep within AI systems.

“Where we have the biggest problem is where we’re trying to use AI on top of human decision making,” said Richard Self, senior lecturer in governance of advanced and emerging technologies at the University of Derby.

His major concern is that companies would “use these learning systems to learn from our past decisions, then bake in the biases that humans already have that we don’t even know we’re using.”

“These are the ones that businesses need to be incredibly careful of,” Self said.
