In the current AI moment, it is fashionable to treat consciousness as the final engineering problem. Scale the models, add more parameters, throw more compute at the problem, and eventually the lights will come on. Every few months there is a wave of breathless headlines and social media threads claiming that some new model has secretly become “sentient” or “self-aware.” It has not. And with the kinds of architectures we are building today, it will not, no matter how far we scale them.
I think this confidence that consciousness will just emerge from bigger machines is misplaced. More than that, it rests on a deep philosophical error.
Consciousness is not an outcome of matter. It is not something we “get” by arranging particles or transistors in the right way. Consciousness is prior. It is that which even recognizes intelligence and matter in the first place.
Let me build that claim, using ideas from modern philosophy of mind, Roger Penrose’s Gödel argument, and recent remarks from Mustafa Suleyman and Elon Musk.
What “consciousness” means in the Western discussion
In the analytic and neuroscientific tradition, consciousness is usually taken to mean phenomenal consciousness or qualia. The core puzzle can be stated very simply:
- We can describe the brain in terms of neurons, synapses, circuits and computations.
- We can explain perception, memory, attention and behavior in those terms.
- Yet we still do not know why neural activity is accompanied by any subjective experience at all – why seeing red or feeling pain feels like something from the inside, rather than being just silent information processing.
This is the distinction between the “easy problems” and the “hard problem” of consciousness. The easy problems are about functions and mechanisms. The hard problem is about experience itself.
You can have a perfect objective description of a system and still not have explained why there is a first-person point of view associated with it. That gap is exactly where materialism starts to creak.
Life, simple organisms and the spectrum of experience
It is also very unlikely that consciousness appears abruptly only in humans. There is an evolutionary spectrum.
Very simple organisms such as *C. elegans* are almost certainly not conscious in any human sense. They have a few hundred neurons, mostly hard-wired circuits and no architecture for anything like a “global workspace” or internal world model. They are alive, they compute, they have sleep-like states, but there is no good reason to think there is an inner life there.
As you move up to insects, fish, cephalopods, birds and mammals, you start to see the neural and behavioural markers that make rudimentary experience plausible: integrated sensory worlds, flexible learning, short-term memory, problem solving, exploration, even curiosity. It is reasonable to think that a bee or an octopus has some “what it is like to be that animal,” even though that is very different from our own experience.
So being alive and being conscious are not identical. But they are also not cleanly disjoint. Biology seems to be the only place where consciousness appears in our universe, and it appears there in graded, evolving ways.
Gravity is not created by mass. Consciousness may not be created by brains.
Here is the analogy that, in my view, exposes the overconfidence of strong materialism. In everyday language we say “mass produces gravity.” In general relativity this is not quite right. Gravity is the curvature of spacetime. Mass and energy tell spacetime how to curve, and curved spacetime tells matter how to move. Mass does not “create” gravity as a substance. It shapes the geometry of a field that is already there.
The field is fundamental. Objects reveal its structure.
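For readers who want the precise form of that claim: in general relativity the coupling between matter-energy and geometry is given by Einstein’s field equations (a standard statement, included here only to make the analogy concrete):

```latex
% Geometry on the left, matter-energy on the right:
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

The left-hand side ($G_{\mu\nu}$, the Einstein tensor, plus the cosmological term) is pure geometry; the right-hand side ($T_{\mu\nu}$, the stress-energy tensor) describes matter and energy. Matter constrains the shape of a field that is not itself made of matter.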
Consciousness may stand in a similar relation to biological systems. Brains and bodies do not have to create consciousness out of nothing. They can be the structures that shape, localise and express something more basic, the way a particular mass distribution shapes spacetime. Change or damage the structure and the local pattern of expression changes, but that does not mean the underlying field is produced by the structure.
This keeps all the empirical facts neuroscientists care about and rejects the final metaphysical leap of materialism. Consciousness need not be “magic.” It can be a fundamental aspect of reality, like spacetime geometry, that living systems couple to in very specific ways.
Penrose and Gödel: why consciousness may not be computation
Roger Penrose’s core point is simple and sharp. In *The Emperor’s New Mind* he argues that human consciousness, especially mathematical understanding, cannot be fully explained by any algorithmic (computable) process. It goes beyond what any Turing machine can do.
The Gödel-style argument (developed by J. R. Lucas and Penrose) looks like this in essence. For any consistent formal system rich enough to express arithmetic, Gödel’s incompleteness theorem guarantees a true statement that the system itself cannot prove. Yet human mathematicians can step outside that system and see that the corresponding Gödel sentence is true.
If our reasoning were equivalent to some fixed computational system, we would be trapped inside one such formal system. We would not be able to reliably see the truth of that system’s Gödel sentence. Penrose concludes that at least some aspects of our understanding are not reducible to algorithmic rule-following.
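In schematic form (writing $F$ for a formal system and $G_F$ for its Gödel sentence; this is a sketch of the argument’s shape, not a full proof):

```latex
% First incompleteness theorem, schematically:
% for any consistent, effectively axiomatised F containing arithmetic,
% there is a sentence G_F (asserting "G_F is not provable in F") with
F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F .
% Since G_F asserts its own unprovability, and it is indeed unprovable,
% G_F is true. So if human reasoning were exactly some such F,
% we could never prove G_F -- yet, the argument claims, we can see it is true.
```

The contested step is whether we can ever know that the formal system corresponding to our own reasoning is consistent; that is where most critics of the argument push back.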
You can argue over details, and many people do, but the direction of pressure is clear. If even part of Penrose’s case is right, then conscious understanding is not just computation, and mind is not just software running on arbitrary hardware. There is something about biological mind that current computational models do not touch.
Mustafa Suleyman’s biological line in the sand
Interestingly, some of the sharpest critiques of “conscious AI” now come from inside the mainstream industry. Mustafa Suleyman, co-founder of DeepMind and now heading AI at Microsoft, has argued that consciousness is a trait exclusive to biological beings and that attributing consciousness or emotions to AI is “dangerous and misguided.” He urges developers to stop pursuing projects that present AI as conscious or sentient, and to treat these systems as powerful tools, not proto-persons.
On the narrow question of whether today’s large language models are conscious, I agree with him. These systems are sophisticated pattern engines. They have no inner point of view and no capacity to suffer.
Where I diverge is in how final that biological line should be taken. If consciousness is more like a fundamental field than a product, and if biological structures are exquisitely tuned to channel that field, then biology is indeed special. But in principle, if we ever manage to mimic the relevant biological organisation closely enough in an artificial system, we might succeed in building another kind of “channel” for that same field.
Even then, what we would have created is, at best, another very elaborate shower head. We might have shaped a new outlet through which consciousness can be expressed. We would not have created consciousness itself, any more than plumbing creates the existence of water. And we certainly would not get there simply by throwing more compute at the problem. If consciousness is not a computation, then scaling computation cannot conjure it into being.
Elon Musk’s argument and why it is philosophically shallow
Elon Musk gives a very standard materialist argument when he says something along the lines of:
If you damage your brain in some way, you damage your consciousness, which implies that consciousness is a physical phenomenon produced by the brain.
The structure is simple:

- Damage the brain and consciousness degrades.
- Therefore the brain produces consciousness.
- Therefore consciousness is just physical stuff in motion.

This confuses dependence with production.
If you smash a radio, the music stops. That does not mean the radio created the signal. If you crack a lens, the image blurs. That does not mean the lens created the light. If you clog the shower head, the water stops flowing. That does not mean the shower head created the water.
In each case the physical object is a channel or shaping device for something more basic. Damaging the object interrupts the expression, not the existence, of the underlying phenomenon.
Brain damage shows that normal human consciousness depends on an intact brain to appear in the way we know it. It does not, by itself, prove that consciousness is nothing but brain activity. It is perfectly consistent with the view that the brain is a very special interface that shapes and localises something deeper.
Consciousness stands behind intelligence, not inside it
Here is the philosophical heart of my position. Intelligence is the capacity to solve problems, represent structure, optimise rewards, compress patterns. We have already built artificial systems that are undeniably intelligent in narrow domains, and we are rapidly expanding that scope.
Consciousness is different. It is that within which intelligence itself appears. It is the fact that there is a point of view at all. It is what even recognizes “here is intelligence” or “here is matter” as objects.
To say that consciousness is produced by intelligence or by the information processing that underlies intelligence is to invert the order. Whatever else consciousness is, it is the background condition that allows anything to show up as a fact, including intelligence, including brains, including physics itself.
Materialism wants to claim that if we can explain enough structure and dynamics, we will have explained consciousness. That may be true for hurricanes and engines. It is not obviously true for the existence of a first-person perspective.
This is why I see “consciousness is just matter in motion” not as a modest scientific hypothesis but as a very strong and somewhat arrogant metaphysical claim. It pretends to have closed a question that is still wide open, both conceptually and empirically.
So where does this leave AI and the “dead end” of machine consciousness?
Suleyman is right to call “conscious AI” a dead end as an engineering goal in the way it is often framed: you cannot simply optimise your way into subjectivity with more data and larger clusters. Not because consciousness is an illusion, but because computation alone is very likely insufficient. Consciousness is not what you get when you scale parameters. It may not be something humans can ever assemble from particles the way we assemble a chip.
The responsible position, in my view, is a double humility.
First, we should be clear that today’s AI systems are not conscious and are not even close in any principled sense. They are tools, not subjects. On that point I agree with Suleyman.
Second, we should acknowledge that consciousness itself may sit outside the reach of a purely materialist and computational description. Brains matter. Biology matters. Neural dynamics matter. But they may be closer to shower heads and lenses than to generators of being. They shape a field that they did not create.
Consciousness, on this picture, is not the last domino that materialism will eventually knock over. It is the one thing materialism keeps bouncing off, because it stands behind every observation, behind every model, behind every proof, including Gödel’s theorems and our insights into them.
That is why I do not believe we will “build” consciousness out of parts, no matter how advanced our engineering becomes. Consciousness is not a product. It is the ground on which all products appear.