Can AI Be Conscious?

OpenAI's Chief Scientist Ilya Sutskever and Turing Award winner Yoshua Bengio are not ruling out that AI could one day be 'conscious.'

Sascha Brodsky, Contributor

January 15, 2024


At a Glance

  • The idea that AI might attain 'consciousness' is starting to be taken seriously by scientists.
  • A mathematics association has asked the U.N. to investigate. OpenAI's chief scientist Ilya Sutskever and Turing Award winner Yoshua Bengio are not ruling it out.
  • Experts say what could emerge is an AI capable of mimicking human consciousness without being the real thing.

The idea that AI might be conscious has long been the stuff of science fiction, but some researchers are starting to take the notion seriously.

A mathematics association recently called on the United Nations to explore the concept of AI consciousness, and famed AI scientist Yoshua Bengio has co-written a paper on conscious AI. The excitement over the fast-developing field of generative AI has raised hopes that humans might build machines that can truly think for themselves, even as researchers struggle to define the nature of consciousness itself.

Last year, OpenAI’s Chief Scientist Ilya Sutskever suggested that sophisticated AI networks might possess a degree of consciousness. His comments followed an incident a year earlier in which a Google engineer was fired after claiming that LaMDA, an early version of the chatbot Bard, exhibited sentience.

An open letter from researchers at the Association for Mathematical Consciousness Science (AMCS) highlighted the need to expedite research in the field of consciousness science.

“In the near future, it is inevitable that such systems will be constructed to reproduce aspects of higher-level brain architecture and functioning,” the researchers wrote in their letter to the U.N. “Indeed, it is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness. Contemporary AI systems already display human traits recognized in Psychology, including evidence of Theory of Mind.”


A path to conscious machines?

Not everyone buys the idea that AI will think for itself, at least not anytime soon. Patrick Hall, a professor in the Department of Decision Sciences at George Washington University, said in an interview that machine learning is not a direct path to consciousness.

“We would not ask if an Excel trend line is conscious,” he added. “We are only making progress toward consciousness with ML if you are willing to believe that something magical happens when we move from using one Excel trend line to using a billion Excel trend lines together to, say, predict the next word in response to a user query. Personally, I do not believe today's computers are capable of magic.”

Bengio and his team could be viewed as leaning towards skepticism. After a review of various consciousness theories, he and the other authors of the study ‘Consciousness in Artificial Intelligence: Insights from the Science of Consciousness’ concluded that no existing AI systems are conscious and outlined key approaches for future research in this field.

"Our analysis indicates that current AI systems lack consciousness,” Patrick Butlin, a primary author of the report, stated. “However, we identified no clear technical impediments to creating AI systems that could meet these criteria for consciousness."

The researchers pinpointed six theories of consciousness as crucial indicators for identifying conscious entities. A notable hypothesis is the Recurrent Processing Theory, suggesting that the brain's feedback mechanisms are crucial for adjusting to new situations, honing perceptions, making decisions, forming memories, and learning. The report also highlights the Global Workspace Theory, which suggests that consciousness develops when information is shared widely in the brain rather than being limited to specific sensory inputs, creating a unified center for various mental activities.

Another vital concept is the Higher Order Theory, which can be encapsulated by the notion of 'being aware of one's awareness.' "Higher-order theories stand out for their focus on the necessity of a subject's awareness of their mental state and their explanations for this awareness," Butlin explained.

A first step in making AI conscious might be to give it the ability to devise and apply new strategies for solving complex problems, Bob Rogers, a former chief data scientist at Intel, said in an interview. For example, an autonomous vehicle can use many tools, such as GPS, and strategies, such as minimizing travel time instead of travel distance, to get us from point A to point B automatically.

“However, if we add an objective like ‘Get gas on the way for the lowest cost and with a minimal impact on the schedule,’ new competitive strategies could come into play in which a vehicle has to compete with other vehicles to achieve its goals,” he added. “To be effective, the vehicle needs strategies that prioritize its goals above those of other vehicles, and we begin to see an abstract concept of self emerge.”

AI can be more accurate than humans for some tasks, Rogers said. But in the end, it does not have subjective experiences or its own feelings. “It does tasks that people have programmed it to do,” he added.

What is consciousness?

The possibility of conscious AI raises deep philosophical and ethical questions. To determine whether AI is conscious, first, researchers have to tackle the tricky subject of what ‘consciousness’ even means. John T. Behrens, a professor in the Department of Computer Science and Engineering at the University of Notre Dame, said that you can think of consciousness as having at least two interrelated ideas: autonomous action and self-awareness.

“Lots of computing and biological systems have autonomous action, as we see in robots and self-driving cars,” he added. “Self-awareness is a hallmark of human consciousness, and that is something still quite theoretical and far off right now.”

In philosophical terms, consciousness involves qualia: the subjective quality of what an experience is like, Brian P. Green, the director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University, said in an interview. Consciousness is the awareness of experience and self-identification with that experience: ‘This is happening to me,’ ‘I am experiencing this,’ and it feels, sometimes viscerally, a certain way.

“I define consciousness like I think most people would think of it: as the awareness of experiencing something, whether pleasure, pain, color, smell, taste, thought, fear, joy, and so on,” he added.

Green suggested that instead of attaining true consciousness, what will emerge are AIs designed to give the impression of being conscious.

“But we should not be fooled,” he added. “Fake jewels are not real jewels. Copies of great paintings are not the originals. Actors playing historical figures are not those historical figures. And AI acting like a conscious being is not a conscious being. It is just a good act, or, in more negative terms, a deception, with ourselves or others as potential victims.”

Behrens pointed out that the current buzz around AI exists because new generative systems perform tasks that most experts did not expect to see for another 10 years.

“This has caused both great excitement and real confusion about what is possible,” he added. “With the new generative AI, we are seeing behavior that looks remarkably like human thinking in its language fluency and ability to create things beyond what was specifically asked for. At the same time, these systems are unreliable.”

Current generative AI systems are convincing when mimicking human language. Behrens said that when those AI processes are chained together, they can look like human thinking and function. But, he warned, “because these systems are so new, we do not know very much about how to evolve them, and we know very little about how to get them from looking like they perform human thinking to actually getting to consciousness.”

Green, for one, is doubtful that AI will ever be truly conscious. And even if we can create a truly thinking machine, should we?

“We might make hybrids of machines and living organisms that could be conscious, but we should also ask whether this would be good in any sense or just cause pain and suffering for these hybrid beings when it could have been avoided,” he added.


About the Author(s)

Sascha Brodsky


Sascha Brodsky is a freelance technology writer based in New York City. His work has been published in The Atlantic, The Guardian, The Los Angeles Times, Reuters, and many other outlets. He graduated from Columbia University's Graduate School of Journalism and its School of International and Public Affairs. 
