My Most Recent Guest
My most recent guest on Brave New World was Anil Seth, who is a professor of Cognitive and Computational Neuroscience at the University of Sussex. Anil is also the author of the book Being You: A New Science of Consciousness, in which he asks what “the self” means and how it is related to consciousness.
Consciousness is a fascinating subject. A big question at the moment is whether Artificial Intelligence, however intelligent it becomes, can achieve consciousness. It’s a deep question, which has implications for how we should think about the obligations and rights of AI and its operators.
Anil describes two broad perspectives on consciousness.
The first perspective on consciousness is a “functional” one: consciousness is defined by what a system can do. It’s like having a checklist of necessary and sufficient conditions, things a system needs to do or exhibit in order to be called conscious.
The alternative view is that consciousness is about being, not doing, that is, it is about our subjective experiences as living beings. That suggests that biology is essential for consciousness. In this view, AI machines that aren’t living things are unlikely to achieve consciousness, however intelligent they may become functionally.
Anil was exceptionally clear and precise, and it was truly delightful discussing consciousness and human and animal intelligence with him. He made a complex subject easily understandable. So, check out the episode at:
The Prediction Machine
A key part of Anil’s thinking is that the brain follows a simple decision rule: minimize surprise. In doing so, it is not reporting a passive readout of the world; rather, it is continually making predictions about what it expects, which are matched against reality. So, if you’re walking past a field with a grazing cow, you are likely to predict that the grass is green, even if it is blue. The brain is a continuous prediction machine that matches its expectations against reality, calculates the “prediction errors,” and makes the appropriate adjustments. That’s how it learns and makes inferences.
In this view, we see what we expect to see. Expectations often dominate the sensory data. It’s the brain’s way of keeping things simple and dealing with information overload.
Anil describes the predictions we make as “controlled hallucinations.” Reality is defined as the hallucinations we agree on!
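The prediction-error loop described above can be sketched in a few lines of code. This is a toy illustration of error-driven updating, not Anil’s actual model: the “brain” holds a prediction, compares it against what the world delivers, and nudges the prediction by a fraction of the error.

```python
# Toy sketch of a prediction-error loop (illustrative only, not a
# model from Being You). The system keeps a running prediction,
# compares it to the observation, and adjusts by a fraction of the
# prediction error.

def update(prediction, observation, learning_rate=0.2):
    """One step of error-driven updating."""
    error = observation - prediction          # the "prediction error"
    return prediction + learning_rate * error # adjust toward reality

# The world is actually 10.0; the brain starts out expecting 0.0.
prediction = 0.0
for _ in range(50):
    prediction = update(prediction, 10.0)

print(round(prediction, 2))  # prediction has converged toward 10.0
```

Notice that the loop never sees the “true” value directly as a fact to copy; it only ever sees its own error, which is the essence of the prediction-machine picture.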
My conversation with Anil reminded me of a recent conversation with Michael Levin about things like memory that are represented in biology, and an earlier conversation with Tony Zador about the genome and how it impacts the wiring of the brain. They describe how human intelligence is enabled by an incredibly flexible machinery shaped by biological evolution, which is expressed elegantly by the roboticist Hans Moravec:
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
I’m struck by the parallel between the brain as a prediction machine in the physical world and the central role prediction plays in machine learning. Prediction serves as the basis for comparing alternative theories, and, for some philosophers, as a condition for something to count as knowledge at all. One of the most influential thinkers in the philosophy of science, Karl Popper, argued in his book Conjectures and Refutations that theories that seek only to explain a phenomenon are weaker than those that make “bold” ex-ante predictions that are easily falsifiable and still stand the test of time – like Einstein’s theory of relativity.
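Popper’s point can be made concrete with a toy comparison (my illustration, not from the book): score two candidate “theories” of the same data by their prediction error, and the bolder, falsifiable theory that survives the test wins.

```python
# Toy illustration (mine, not Popper's): two "theories" predict the
# same observations, and we score each by squared prediction error.

observations = [2.1, 3.9, 6.2, 8.0, 9.8]  # made-up data, roughly y = 2x
xs = [1, 2, 3, 4, 5]

def theory_a(x):
    return 2 * x     # bold, specific, falsifiable claim: y = 2x

def theory_b(x):
    return 5.0       # vague claim: everything is "about 5"

def prediction_error(theory, xs, ys):
    return sum((theory(x) - y) ** 2 for x, y in zip(xs, ys))

err_a = prediction_error(theory_a, xs, observations)
err_b = prediction_error(theory_b, xs, observations)
print(err_a < err_b)  # True: the bold theory predicts far better
```

The bold theory risks being wrong at every point, which is exactly why its success is informative; the vague one risks almost nothing and tells us almost nothing.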
The conversation with Anil also reminded me of a 1995 book by the neurologist Oliver Sacks called An Anthropologist on Mars. Sacks brings out the incredible plasticity of the brain through seven case studies of individuals with neurological “disorders,” who adapt in unexpected ways to develop remarkable new capabilities and lead rich lives. One of the cases Sacks presents is the autistic animal scientist Temple Grandin, who attributed her understanding of animals to her visual, non-social thinking style. Autism didn’t preclude empathy or insight, but led to a different and deeper understanding of animals that eludes “normal” people. In another story, a man blind since early childhood gains vision in adulthood through surgery, and struggles to make sense of what he sees. The realization from the case is that the brain doesn’t perceive the world “objectively” but must be trained to see it in early childhood. Sacks provides an incredibly humanistic perspective on his seven subjects that shows the resilience and plasticity of the brain that Anil discusses in his book.
I was also reminded of the 1997 book by Steven Pinker called How the Mind Works, in which he demonstrates that vision is not passive but an active process of inference. According to Pinker, the brain constructs a model of the world from ambiguous inputs in a way that is “useful.” I recall a simple experiment Pinker describes to make his point about the role of usefulness in perception: turn your head suddenly by 90 degrees and see how the world “turns” in the other direction so that it stays horizontal – which is useful to the brain for all kinds of reasons. The larger point is that the brain doesn’t see things as they are but in a way that is useful.
Anil also describes his thinking about animals and consciousness, which reminded me of my conversation with Pippa Ehrlich, producer of My Octopus Teacher. From watching the film, it’s hard to believe that octopuses don’t have consciousness. No more grilled octopus for me.
When Software Becomes Evil
I’ve just finished writing my book and am in the process of final edits. A friend suggested printing it and laying it out on the floor to get a holistic perspective and see the connections among chapters. Each chapter is a column of pages on the floor, and I can walk up and down the book and literally see the big picture.
But my printer stopped printing after page 79, saying “Replace Toner” even though page 78 was perfect. I have used laser printers for several decades and have seen that quality degrades slowly, and when it does, you can shake the toner cartridge and extract a few hundred more pages from it. So this is complete bullshit and pure evil.
The bottom line is that everyone now has you by the short ones through software. It’s just a plain unethical use of software. And terrible for the environment. It’s the last Brother printer I’ll buy. But I’m wary that the “new boss” will be the “same as the old boss,” to quote a line by The Who.
On literally laying out pages to see the big picture: web browsers should similarly move beyond merely recreating the printed page. Hyperlinking already supports showing actual connections and refutations across topics. With language AI, we should be able to go further and create a semantically zoomable UI (a “Google Maps for ideas”): zoom in for micro detail, zoom out for macro structure, trace the paths of ideas, and so on.