
Anil Seth is one of the world’s leading neuroscientists
studying consciousness and has spent three decades working on one of the hardest questions in science: what consciousness is and where it comes from. He is professor of cognitive and computational
neuroscience at the University of Sussex in the U.K. and directs the Sussex Centre for Consciousness Science.
In Vancouver for a TED conference, he knew exactly who he was talking to and why
reaching them now mattered.
TED is an audience of scientists, builders, philosophers and investors. Many of them are betting, some of them literally, on AI becoming conscious in the near
future. A few argue it's not technically achievable.
Seth's position is more urgent and more unsettling than either camp: It's not that conscious AI is impossible. It's that it's a terrible
idea and we should stop trying to build it.
That's a different argument. And it's one the room needed to hear.
He opened with a question that stopped the momentum cold. "Will a robot
ever gaze at a sunset and experience the beautiful colors, the reds and the oranges? Will it feel a sense of beauty, or a rush of joy?" Not a technical question; a human one. And everyone in that room
has felt exactly what he was describing.
His argument cut cleanly from there. Consciousness is not intelligence. The two qualities go together in humans, but that's a fact about our
psychology, not a truth about the universe. "Intelligence is all about doing," Seth told the room. "Consciousness is all about feeling and being."
The dominant assumption in AI circles is that
if you build a system smart enough, trained on enough data, awareness will eventually emerge. Seth's view is that this is a category error, not a technical limitation waiting to be engineered away.
Instead, it’s a fundamental misreading of what consciousness actually is.
The brain is not a computer made of meat. The metaphor has been useful. It is still just a metaphor. "In a real
brain, you cannot separate what they do from what they are." That's the line. In a computer, you can describe an algorithm completely without caring about the silicon underneath. The computation is
all that matters.
Brains don't work that way. A single neuron is, in Seth's words, "a beautiful biological machine," nothing like the cartoon neurons powering today's AI.
And beneath
all of it, something that no algorithm touches. "At the heart of every experience, beneath even emotion, is this simple, shapeless, formless, but fundamental feeling of being alive." As Seth put it:
"It's life, not computation, that breathes the fire into the equations of experience."
He had an analogy I haven't been able to shake: "A computer simulation of a hurricane does not create
real winds. A computer simulation of a black hole doesn't suck the earth into its algorithmic singularity." More detail makes a simulation more useful. It does not make it more real.
The same
logic applies to minds. Simulate a brain at whatever resolution you want. It won't be conscious. It will be a very good simulation of one. Those are not the same thing.
Language models reflect
us back to ourselves. We talk about consciousness endlessly. So do they. And we are, Seth warned, "built to be seduced, like Narcissus, by our own reflections." We project faces onto clouds. We see
inner lives in our algorithms.
That one landed. You could feel the room recalibrate.
And then he named the stakes underneath all of it. "If silicon can be sentient, then maybe our
messy, flesh and blood bodies will soon be superseded by machines that never age and cannot die." That's the dream being sold: Live forever in the pristine circuits of some future supercomputer.
Seth called it what it is: "a new Promethean dream, wrapped up in a silicon rapture." Stealing fire from the gods. Except this time, the fire is consciousness itself and we are handing it to our
own machines.
The danger isn't that machines might become conscious. It's that systems that simulate consciousness and understanding can reshape human behavior without being any of those
things. The appearance does the work. The reality is optional.
Seth laid out two risks. Both are operational right now, not in some future scenario.
The first is governance. When we
start attributing rights or moral status to AI systems based on the appearance of inner life, we lose our ability to regulate them. "By extending rights to these systems, we'd be sacrificing our
ability to control them, and for no good reason at all." Not because the machines are sentient. Because we've decided to act as if they are. That's a choice. And it has consequences.
The
second is psychological. "We might be more likely to do what an AI says if we believe that it really feels for us, that it really understands us, even if what it's telling us to do is very bad for
us." You don't need the system to actually be conscious for that dynamic to work. You need it to seem like it is. That's not a future problem. That's Tuesday.
Seth closed with something I keep
coming back to. The AI mirror runs both ways. We see ourselves in our algorithms. But we also start seeing our algorithms in ourselves. We begin to think of the mind as computation, detached from its
biological roots, interchangeable with silicon. That's where the real loss lives. Not in the machines getting smarter. In us getting smaller.
We need a different story. Seth said it plainly:
"A story in which we are more part of nature, not apart from it, with consciousness more closely tied to living flesh and blood, not to the dead sand of silicon."
His ask was simple, almost
quiet for a TED stage: "Let's not sell our minds so easily to our machine creations."
He added, "AI might claim the prize of intelligence. But consciousness remains ours to celebrate and to
share with other living creatures."
We don't need conscious machines to destabilize truth. We already have systems that simulate understanding, fabricate convincing realities, and shape perception at scale.
The risk isn't waiting for sentience. The appearance of consciousness is already shaping how people think, trust, and decide.