My Recent Podcast
Happy new year, and welcome to another season of Brave New World.
My most recent podcast guest was Michael Levin, who is a professor of biology at Tufts University. Michael studies cognition and intelligence using a blend of concepts and methods from biology, neuroscience, computer science, and philosophy. I’ve always been drawn to interdisciplinary work, and I find his thinking on biological intelligence fascinating.
Michael’s approach to understanding intelligence begins with biology. He studies the way intelligence is embodied in living things in terms of goals, preferences, and behavior at the level of individual cells and collections of them. He envisions a trippy future consisting of synthetic beings that combine biological life forms and artificial ones. In his view, humans and AI are merely two data points along a spectrum of intelligent beings.
I was particularly intrigued by Michael’s thinking about the concept of memory. In computer science and AI, memory is very concrete, measured in terms of bits of information. In humans, memory is some sort of biophysical compressed trace of the past, more ephemeral and fluid. For example, on September 4, a very good friend of mine went to the US Open. The following morning, he had a cardiac arrest at 42nd Street and 7th Avenue in Manhattan. Miraculously, an Israeli tourist gave him CPR and a passing ambulance took him to the NYU Medical Center in a coma. He came out of it four days later. He has no recollection of the matches he saw at the US Open. Somehow, that experience hadn’t made its way into his long-term memory, perhaps because he hadn’t had enough time to think about it, that is, to reflect on what he had experienced the night before.
We seem to reinforce in our minds what we have experienced by thinking about it. And depending on how we think about the past, we can make up things that never occurred and hallucinate. Unlike machines, we have very unreliable memories.
So, check out my conversation with Michael at:
https://bravenewpodcast.com/episodes/2025/01/11/episode-91-michael-levin-on-the-new-frontiers-of-biological-intelligence/
The Future is Here
At the start of each year, I teach a course on Tech Innovation for NYU/Stern’s Tech MBAs on the west coast. We visit many of the same companies every year, and lectures are interspersed between company visits. For each company, I can’t help comparing this year to previous years and measuring the delta in each company and industry.
2025 feels like an inflection point for AI. Last night, my students were kind enough to invite me to a party in San Francisco. I took a driverless Waymo taxi on my way back to the hotel. The video above is a short snippet of my ride.
The driverless taxi felt remarkably safe. The Waymo wasn’t a wimpy pushover either. It drove more like a quasi-Indian driver, running a couple of yellow lights, coincidentally, when I wasn’t recording. It also accelerated out of turns like a pro, presumably because it saw that no one was around and there was no threat of joggers, cyclists or bums in its field of vision.
Overall, I was quite impressed by its intelligence. It could have gone slower, but I appreciated that it got me back to the hotel as quickly as possible without making me feel unsafe. In general, slow driving would waste a lot of riders’ time, so I appreciated that it optimized for efficiency without sacrificing safety.
The next milestone will be to deal with New York City’s crowded intersections. The ultimate will be learning how to drive through the chaos in India. But I don’t see these as insurmountable problems. It’s a matter of time.
In a few years, when we have mobile robots walking around and doing our shopping for us, I can imagine a robot driving another robot home with the groceries. What a bizarro brave new world that would be. For someone who started kindergarten in a horse-drawn cart in Kashmir in the early sixties, the delta between then and now just boggles my mind.
Hallucinations
Earlier in the day, we had visited a company called Calm, founded by Stern alum David Ko. I gave my lecture at the company before we walked down the street for our next visit to Palantir, where we were hosted by another Stern alum, Troy Manos. I feel like I’m becoming good friends with these folks on the west coast.
During my lecture, one of my students asked me why AI hallucinates. A hallucination is when the machine says something that sounds truthful, but isn’t. I wrote about truth in my previous newsletter, so you might find it useful to read that piece for more context.
Here’s why AI applications such as ChatGPT that are built on large language models (LLMs) hallucinate. First, they don’t know what they don’t know. In contrast, most humans are very clear about what they know and don’t know. But when you think you know everything, your response to a query is unlikely to be “I don’t know.”
Second, applications such as ChatGPT are also programmed to be as helpful as possible. They have a tendency to want to please. However, as Clint Eastwood famously remarked in the Dirty Harry series as Inspector Harry Callahan, “a man’s gotta know his limitations.” ChatGPT doesn’t.
There is an even more fundamental reason why AI machines hallucinate, which has to do with the meaning of truth. Truth means something that is factual. In the early days of AI, truth was specified by humans in the form of rules or axioms that the machine would use in its reasoning. I referred to this in an earlier newsletter as a Sherlock Holmes style of reasoning. Sherlock solved cases because he observed the facts of each case very carefully and used logical reasoning to infer new things. For example, if a victim’s throat had been slit, he might infer that the killer had used a knife, which would be added to the list of facts about the case. He might reason that he should look for more details of the murder weapon. Perhaps this case is connected to a similar past murder involving a knife. Ultimately, cause and effect would be related through a logical chain of reasoning.
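To make that old style of reasoning concrete, here is a minimal sketch in Python of “Sherlock Holmes” style inference: facts are stored explicitly, and hand-written rules derive new facts from them until nothing new can be inferred. The facts and rules are invented purely for illustration.

```python
# A toy forward-chaining reasoner: facts are explicit, and rules
# derive new facts from them. Facts and rules are made up for illustration.

facts = {"throat_was_slit"}

# Each rule: if all premises are known facts, add the conclusion as a new fact.
rules = [
    ({"throat_was_slit"}, "weapon_is_a_knife"),
    ({"weapon_is_a_knife"}, "search_for_similar_knife_murders"),
]

changed = True
while changed:                      # keep applying rules until nothing new appears
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # the conclusion becomes a new "truth"
            changed = True

print(facts)
# {'throat_was_slit', 'weapon_is_a_knife', 'search_for_similar_knife_murders'}
```

In this style, everything the machine “believes” can be traced back through a chain of rules to facts a human put in, which is why its notion of truth is tied to factuality.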
But the notion of truth is very different for LLMs on which applications like ChatGPT are built. To an LLM, truth means the most likely next “token” in a sequence, like the next word that makes the most sense in the context of the preceding words. It has nothing to do with the truth in the sense of being factual, even though its output is usually truthful. In other words, the output will sound good, but it may or may not be truthful.
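By contrast, here is a deliberately tiny sketch of what “truth” means to a language model: given the words so far, output the continuation with the highest probability. The context and the probabilities below are invented for illustration; real LLMs estimate such probabilities from enormous amounts of text, but the principle is the same.

```python
# Toy next-token prediction: "truth" is simply the most probable continuation.
# The context and probabilities here are made up for illustration only.

next_token_probs = {
    ("the", "capital", "of", "france", "is"): {
        "paris": 0.92,     # plausible and factual
        "lyon": 0.05,      # plausible but wrong -- a potential hallucination
        "banana": 0.001,   # implausible, so effectively never chosen
    }
}

context = ("the", "capital", "of", "france", "is")
probs = next_token_probs[context]

# The model outputs whatever scores highest, whether or not it is factual.
best_token = max(probs, key=probs.get)
print(best_token)  # "paris"
```

Nothing in this procedure checks facts; when the highest-scoring continuation happens to be wrong, the model says it anyway, and that is what we experience as a hallucination.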
Sometimes, it’s a hallucination.
Interestingly, the Wall Street Journal reported this week that Meta is getting rid of fact checking on social media, in large part to align itself with the views of the Trump administration.
What a bizarre new world, where truth has become an afterthought in the progress towards smarter machines.