Insights from a fellow at The Alan Turing Institute

Ben Wodecki, Jr. Editor

February 13, 2023

2 Min Read

At a Glance

  • Humans from across the business landscape should be involved in development.
  • DeepMind, Google and Meta are helping the AI agent research field grow.

Companies looking to integrate agent-based autonomy solutions, such as DeepMind’s Sparrow chatbot, need to be able to anticipate irrational behavior potentially stemming from human interference, according to Long Tran-Thanh, a fellow at The Alan Turing Institute.

Speaking to AI Business at the World Artificial Intelligence Cannes Festival (WAIFC), Long said enterprises cannot ignore this dynamic: once human involvement is a factor, such behavior should be accounted for and even anticipated.

As principal investigator of the Human-Agent Learning (HAL) lab, Long proposes using tools such as game theory and incentive engineering when designing systems to tackle ‘strategic human behaviors,’ meaning human manipulation of overall system dynamics for personal benefit.
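
Long’s specific models are not detailed here, but a textbook illustration of incentive engineering is the second-price (Vickrey) auction: the payment rule is designed so that truthful reporting is a dominant strategy, removing any benefit from strategic manipulation. The Python sketch below is purely illustrative; the scenario, names and values are hypothetical and not drawn from the HAL lab’s work.

```python
# Toy illustration of incentive engineering: a second-price (Vickrey) auction.
# The payment rule makes truthful bidding a dominant strategy, so strategic
# misreporting ("gaming the system") yields no benefit to the bidder.
# Hypothetical example for illustration only; not code from the HAL lab.

def second_price_auction(bids):
    """Highest bidder wins but pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

def utility(true_value, bids, bidder):
    """Bidder's payoff: true value minus price if they win, else zero."""
    winner, price = second_price_auction(bids)
    return true_value - price if winner == bidder else 0.0

# Alice's true value is 10. Compare truthful bidding with strategic overbidding.
others = {"bob": 8, "carol": 6}
truthful = utility(10, {**others, "alice": 10}, "alice")  # wins, pays 8 -> payoff 2
overbid  = utility(10, {**others, "alice": 15}, "alice")  # still pays 8 -> payoff 2
print(truthful, overbid)  # 2.0 2.0 -- overbidding cannot improve the outcome
```

Because the price a winner pays does not depend on their own bid, misreporting can only change whether they win, never what they pay, which is the core idea behind designing systems that are robust to strategic human behavior.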

Long is also part of a group of AI scientists that leads a training program for FPT Software in collaboration with the Mila Institute, the research group founded by renowned machine learning researcher Yoshua Bengio. He is currently deputy head of research at the University of Warwick’s Computer Science department.

Long said collaboration is key to getting better agents to market, and that researchers need to keep working in this growing area of AI to publish stronger papers and achieve better results.

To be sure, OpenAI’s ChatGPT has reignited enthusiasm for humans and AI agents working together. The chatbot is a stark improvement over older systems such as Tay and BlenderBot 3, which perpetuated negative perceptions of chatbots by generating derogatory responses.

These misfires have spurred wider interest in creating more effective AI agents that can eventually be applied to business use cases ranging from retail assistants to online chatbots for services like banking.

DeepMind’s Sparrow, unveiled last September, proved a step in the right direction, answering user queries appropriately more often. But it was still found to break its own rules 8% of the time.

Compounding the issue are humans who intentionally manipulate AI agents. Russia-based cybercriminals are already circumventing ChatGPT’s restrictions to use it for nefarious purposes, such as improving the underlying code of malware.

The quest for better AI agents

To improve AI agents for effective deployments, people from varying levels of a business need to be involved from the beginning of the design stage, the professor stressed.

Businesses “need to understand that this is not just a single isolated system, but people will be using it and making decisions affecting people's lives,” he said.

Long said businesses that are aware of such considerations would be able to deploy agents that are more robust and sustainable.

Long pointed to DeepMind, as well as Google and Meta, for their work in this space, saying the community around improved human-AI agent research is growing thanks to the help of big companies.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
