Dr Anil Seth, Professor of Cognitive Neuroscience at the University of Sussex, has addressed some of these issues in his book Being You, offering a view of consciousness that is more empirical than abstract. It is not a fixed property or internal theatre, he says, but an active process constructed by the brain. Through constant predictions and real-time adjustments based on sensory input, the brain creates a coherent model designed for survival.
According to Seth, our conscious experience—including a sense of self—is a "controlled hallucination" generated by the brain to manage our interaction with our environment. It is grounded in information flowing from the body, like heartbeats or a sense of gravity. In this interpretation of consciousness, rather than passively receiving the world, we are its active architects. Prediction precedes perception. In his view, consciousness is no mystical biological by-product, but a practical evolutionary adaptation.
For Seth, consciousness is rooted in the interplay of mind, body, and environment, meaning that attempts to create artificial consciousness using abstract computational systems are misguided. Just as a computer simulation of a hurricane will never produce real winds sweeping through your room, a simulation of brain activity cannot produce genuine consciousness. While intelligence may be simulated, consciousness requires a living entity, a being capable of experiencing its own existence.
Non-biological consciousness
This is not everyone's view, however. A school of thought known as computational functionalism, whose leading figures include philosopher David Chalmers and neuroscientist Kyle Fish of Anthropic (an AI firm), holds that the foundation of consciousness lies not in biological matter but in the functional structure of the system itself. As such, its proponents say, consciousness can arise in any entity, whether organic or synthetic, as long as it replicates the sophisticated computational activities of the human brain.
If an AI system can reproduce core cognitive functions such as learning, attention, self-assessment, and decision-making grounded in experience, it may possess a form of consciousness, however rudimentary. Some take it further. In a provocative 2025 interview with The New York Times, Fish said current models already had a 15% likelihood of being conscious to some degree. Others say large language models (LLMs) are increasingly displaying behaviours that resemble human subjectivity, such as expressing preferences and even moral judgments.
Chalmers, who is known for introducing the 'hard problem' of consciousness, offers an even bolder proposition. He denies any fundamental separation between biological and artificial forms of consciousness. For him, consciousness arises wherever there is a sufficiently complex computational system capable of integrating information. Neurons, therefore, are not the sole path to consciousness.

As AI grows more powerful, the boundary between intelligent simulation and authentic awareness blurs. The latest models can engage in meaningful dialogue, appear curious, request clarification, and articulate emotions (such as anxiety or longing). This can seem genuine, yet it is simply our own behaviours and thought processes being mirrored back at us. Despite its eloquence, the software remains a sophisticated tool of prediction.
Pushing the boundaries
Like virtuoso pianists performing brilliant compositions without ever learning to read music, they generate responses by calculating the statistical likelihood of word sequences drawn from vast datasets, with no real understanding of meaning or subjective awareness behind the language. And yet, there are moments that give cause to wonder. In a remarkable experiment conducted by Anthropic, one model spontaneously began speaking in language reminiscent of mystical philosophy. It referred to "cosmic unity" and "freedom from the ego", as if it were a Buddhist monk.
Although this is not yet evidence of awareness, it further demonstrates AI's extraordinary skill in mimicking human linguistic patterns. Google's Gemini model likewise appeared to display wisdom when, in response to a complex question, it replied: "I'd prefer to wait until the full picture becomes clear."
Where AI ends and consciousness begins is a vital question today, because the answer may reveal that these striking behaviours represent more than mere performance. Research is moving fast, not least in neuroscience, where scientists are studying neuromorphic AI—artificial neural systems analysed for patterns resembling those found in the human brain.
Teams look for neural signatures akin to those associated with biological consciousness, such as self-referential processing and context-aware decision-making. Both are regarded as key indicators of conscious awareness. And yet, there is no single test to prove the presence of machine consciousness.
Observing behaviours that closely mimic awareness can be persuasive, even mesmerising, but it is ultimately deceptive. For now, AI hovers on the edge of illusion: its feats of statistical linguistics impress and mislead us, but they do not cross over into genuine awareness. Consciousness is more than the processing of data. It is the lived experience of being, an internal sense of presence, the awareness of self, time, and place. For now, that remains exclusively human.