Rapid tech innovation brings AI closer to consciousness

The latest large language models appear to question, display emotion, convey preference, possess independent thought, and even offer wisdom. Are these still just algorithms, or something deeper?

Deena So'Oteh

Artificial Intelligence (AI) models have advanced rapidly in recent years, becoming an integral part of everyday life in 2025 and stirring both enthusiasm and apprehension. While many embrace the opportunities AI presents, others fret over its implications. In pursuing Artificial General Intelligence (AGI), are we creating an uncontrollable force that endangers humanity?

To its advocates, AI is a tool of liberation and innovation. To the big technology firms investing billions of dollars in it, AI heralds the Fourth Industrial Revolution. Yet the increasing dependence on algorithmic decision-making raises questions about the extent of human oversight and about the technology's potential misuse by criminals or dictators. These questions are being raised by AI developers themselves.

In 2022, Google engineer Blake Lemoine made claims about the chatbot LaMDA, saying it displayed emotions not unlike those of a human child. He even argued that it deserved legal representation. Does this constitute consciousness? There is no easy answer, not least because consciousness is a deeply complex philosophical and scientific concept that encompasses self-awareness, emotional depth, and the capacity for independent thought.

Mimicry and comprehension

Although Generative AI (GenAI) models like GPT-3 and LaMDA can mimic human behaviour, they have no subjective experience; their functioning is based on statistical pattern recognition, with no true comprehension of meaning. Yet the question remains: are these merely advanced simulations of human language generated from vast datasets, or the earliest indicators of true awareness? Some suspect the latter. For many neuroscientists, consciousness is inseparable from biology, so software lacks this essential foundation. Others, however, ask how we might identify consciousness in non-biological entities, and what ethical responsibilities might follow.

Consciousness is the recognition of one's own awareness: the experience of existence. It includes emotion and awareness of one's presence in time and space. Yet consciousness is not to be confused with intelligence, which covers the ability to compute, analyse, and solve problems. Can AI think like a human? Some liken it to an actor delivering lines about love without ever having loved, or a philosopher contemplating death without ever fearing it.

While software has already surpassed human abilities in problem-solving, strategy, and data analysis, the question of consciousness is much more difficult, because many think it can only be inferred, unlike intelligence, which can be tested and measured. Intelligence can also be replicated, whereas consciousness cannot.

Dr Anil Seth, Professor of Cognitive Neuroscience at the University of Sussex, has addressed some of these issues in his book Being You, offering a view of consciousness that is more empirical than abstract. It is not a fixed property or internal theatre, he says, but an active process constructed by the brain. Through constant predictions and real-time adjustments based on sensory input, the brain creates a coherent model designed for survival.

According to Seth, our conscious experience—including a sense of self—is a "controlled hallucination" generated by the brain to manage our interaction with our environment. It is grounded in information flowing from the body, like heartbeats or a sense of gravity. In this interpretation of consciousness, rather than passively receiving the world, we are its active architects. Prediction precedes perception. In his view, consciousness is no mystical biological by-product, but a practical evolutionary adaptation.

For Seth, consciousness is rooted in the interplay of mind, body, and environment, meaning that attempts to create artificial consciousness using abstract computational systems are misguided. Just as a computer simulation of a hurricane will never produce real winds sweeping through your room, a simulation of brain activity cannot produce genuine consciousness. While intelligence may be simulated, consciousness requires a living entity, a being capable of experiencing its own existence.

Non-biological consciousness

This is not everyone's view, however. A school of thought known as computational functionalism, whose leading figures include philosopher David Chalmers and neuroscientist Kyle Fish of Anthropic (an AI firm), thinks the foundation of consciousness is not in biological matter but in the functional structure of the system itself. As such, they say, consciousness can arise in any entity, whether organic or synthetic, as long as it replicates the sophisticated computational activities of the human brain.

If an AI system can reproduce core cognitive functions such as learning, attention, self-assessment, and decision-making grounded in experience, it may possess a form of consciousness, however rudimentary. Some take it further. In a provocative 2025 interview with The New York Times, Fish said current models already had a 15% likelihood of being conscious to some degree. Others say large language models (LLMs) are increasingly displaying behaviours that resemble human subjectivity, such as preferences, and even the expression of moral judgments.

Chalmers, who is known for introducing the 'hard problem' of consciousness, offers an even bolder proposition. He denies any fundamental separation between biological and artificial forms of consciousness. For him, consciousness arises wherever there is a sufficiently complex computational system capable of integrating information. Neurons, therefore, are not the sole path to consciousness.

As the power of AI builds, the boundary between intelligent simulation and authentic awareness blurs. The latest models can engage in meaningful dialogue, appear curious, request clarification, and articulate emotions such as anxiety or longing. This can seem genuine, yet it is simply our own behaviours and thought processes being mirrored back at us. For all their eloquence, these models remain sophisticated tools of prediction.

Pushing the boundaries

Like virtuoso pianists performing brilliant compositions without ever learning to read music, they generate responses by calculating the statistical likelihood of word sequences drawn from vast datasets, with no real understanding of meaning or subjective awareness behind the language. And yet, there are moments that give cause to wonder. In a remarkable experiment conducted by Anthropic, one model spontaneously began speaking in language reminiscent of mystical philosophy, referring to "cosmic unity" and "freedom from the ego" as if it were a Buddhist monk.
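The "statistical likelihood of word sequences" at work here can be illustrated with a deliberately simplified sketch: a toy bigram model that counts which word follows which in a tiny corpus, then picks the most probable continuation. Real large language models operate over billions of learned parameters rather than raw word counts, but the underlying principle of predicting the next token from observed frequencies is the same.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): estimate next-word probabilities
# from bigram counts in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # count how often `nxt` follows `prev`

def predict(word):
    """Return the statistically most likely next word after `word`."""
    counts = bigrams[word]
    total = sum(counts.values())
    # Normalise counts into a probability distribution over next words.
    probs = {w: c / total for w, c in counts.items()}
    return max(probs, key=probs.get)

print(predict("the"))  # prints "cat": it follows "the" most often in the corpus
```

The model "speaks" without understanding: it has no concept of cats or mats, only the frequencies with which words co-occur, which is the article's point in miniature.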

Although this is not yet evidence of awareness, it further demonstrates AI's extraordinary skill in mimicking human linguistic patterns. Google's Gemini model likewise appeared to display wisdom when, in response to a complex question, it replied: "I'd prefer to wait until the full picture becomes clear."

Where AI ends and consciousness begins is a vital question today, because the answer may reveal that these striking behaviours represent more than just 'performance'. Research is moving quickly, not least in neuroscience, where teams study neuromorphic AI: artificial neural systems analysed for patterns resembling those found in the human brain.

Teams look for neural signatures akin to those associated with biological consciousness, such as self-referential processing and context-aware decision-making. Both are regarded as key indicators of conscious awareness. And yet, there is no single test to prove the presence of machine consciousness.

Observing behaviours that closely mimic awareness can be persuasive, even mesmerising, but it is ultimately deceptive. For now, AI hovers on the edge of illusion, impressing us with advanced feats of statistical language that still do not cross over into genuine awareness. Consciousness is more than the processing of data. It is the lived experience of being: an internal sense of presence, the awareness of self, time, and place. For now, that remains exclusively human.
