The Mind-Gut Connection
Have you ever gotten that sinking feeling in the pit of your stomach? What’s that all about? Is there a brain in our gut that communicates with the brain in our head? What do they talk about?
I had a fascinating conversation with Emeran Mayer, MD, author of The Mind-Gut Connection, about how the two brains influence each other. They talk a lot, with more than 90% of the communication initiated by what’s happening in the gut. Mood, for example, might be more of a gut thing than a brain thing. Much of you is driven by what’s going on down there.
Emeran’s research also brings up a deeper philosophical question about humanity’s place in evolution, and the larger forces that shape us. The human cells in our bodies are vastly outnumbered by foreign cells, like bacteria and viruses, which have been around much longer than we have. So, are human bodies just a vehicle for the microbes living in them? Do the microbes manipulate our brains to make us seek out foods and create conditions that are best for them? It’s a sobering thought.
So check out my conversation with Emeran. I learned a lot, and it made me think anew about when I should trust my gut and when I shouldn’t. I’ll save that train of thought for a future newsletter.
ChatGPT: Is “General Intelligence” Emerging?
If you haven’t already played with ChatGPT, you’re living under a rock and need to get a life, even if it’s artificial. Over the last few weeks, I have not met anyone who hasn’t been blown away by it. One of my listeners asked me last week, “Have we finally stumbled into the future?”
Indeed. Last month, I wrote about how it is becoming difficult to give an exam these days, since the AI out there is good enough to answer university-level exams. I saw a recent tweet by an instructor to the effect that one of his students submitted an unbelievably good answer that ran counter to the student’s prior work, but he couldn’t prove that it had been created by a bot! Perhaps he wasn’t aware of this tool, which can tell whether an answer was generated by GPT-3. But I’ve “humanized” GPT-3 outputs to the point where the machine is unable to tell the difference. Welcome to the new world of machines with general-purpose intelligence. It will be interesting when these hybrid outputs become part of the machine’s training data.
We may need to go back to handwritten answers. But wait, I’m hearing that some primary school parents want to eliminate cursive writing since it is useless in the era of keyboards. But why stop there? After all, keyboards are probably on their way out, so why learn how to write at all?
But let’s come back to these amazing chatbots, and whether there is anything different this time around with AI.
Machines that Understand
The fundamental change is that machines are finally beginning to understand us. All this time, we have had to shoehorn our interactions through interfaces that involve pointing and clicking at things to run programs. Sure, graphical user interfaces (GUIs) made computers easier to use than typing commands, but the machine is still passive, and we have to conduct the interaction on its terms.
AI has fundamentally changed the nature of the human-machine interface. ChatGPT seems to understand virtually anything you tell it. Its responses are coherent. It keeps track of the dialog and doesn’t lose context, with the ability to refer back to earlier parts of a long conversation. It recognizes and recovers from misunderstandings. It even has a degree of awareness of its limitations (ask it whether it has any self-awareness).
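It’s worth noting that “not losing context” is as much a property of the interface as of the model. Here’s a minimal sketch, in Python, of one common way such systems keep context: the whole transcript is replayed on every turn. The generate_reply function below is a hypothetical stand-in for the actual model call, not any real API.

```python
# A minimal sketch of one way a chatbot keeps conversational context:
# the model itself is stateless, so the interface replays the whole
# transcript on every turn. `generate_reply` is a hypothetical
# stand-in for the call to the underlying language model.

def generate_reply(transcript: list[dict]) -> str:
    # Placeholder: a real system would send `transcript` to an LLM
    # and return its completion.
    return f"(model reply, conditioned on {len(transcript)} prior turns)"

transcript: list[dict] = []  # the growing transcript is the only "memory"

def chat(user_message: str) -> str:
    transcript.append({"role": "user", "content": user_message})
    reply = generate_reply(transcript)
    transcript.append({"role": "assistant", "content": reply})
    return reply

print(chat("Who wrote The Mind-Gut Connection?"))
print(chat("What else has he written?"))  # "he" only resolves via the transcript
```

Everything the bot appears to remember is sitting in the transcript it is handed, which is why it can refer back to something you said twenty turns ago.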
But, does it really understand anything we’re saying, or does it just appear to? And should we care?
Herbert Simon, one of the fathers of AI, argued that humans are essentially symbol-processing programs with a long-term memory and a short-term memory. Long-term memory holds everything we’ve learned, while short-term memory holds whatever we’re attending to in our current environment. In this view, intelligence is independent of the hardware that processes the symbols, whether it is silicon or meat.
This isn’t a universally accepted view. The philosopher John Searle proposed a thought experiment that takes a different position:
Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.
Searle asks, does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? To Searle, it is the latter.
Here’s how Searle’s thinking applies to ChatGPT, which can be “trained” on all the human-curated data out there – our accumulated wisdom, theories, opinions, norms, social media chatter, computer programs, pretty much anything available to it. Using this massive record of human activity, the machine has learned enough about us to respond intelligently to virtually anything we say to it. It keeps track of context and displays a remarkable degree of common sense, which has eluded machines so far. It has learned, for example, that the word “apple” is related to orange, Microsoft, technology, phones, and more, and can figure out from context which sense is meant in a conversation. It has also acquired a lot of common sense about physics, human psychology, and much more from the training data. It even learns things from the data that humans might not imagine. That’s all part of its understanding.
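To make that apple example concrete, here’s a toy sketch of how such relationships can live in vectors of numbers. The vectors and axis labels below are invented purely for illustration; real models learn thousands of dimensions with no human-readable labels.

```python
# Toy illustration of word relationships as vectors of numbers.
# The vectors and axis labels are invented for illustration; real
# embeddings have thousands of opaque, learned dimensions.
import numpy as np

# invented axes: [fruit-ness, tech-ness, company-ness, weather-ness]
embeddings = {
    "apple":     np.array([0.9, 0.6, 0.5, 0.0]),
    "orange":    np.array([0.95, 0.05, 0.0, 0.0]),
    "microsoft": np.array([0.0, 0.9, 0.95, 0.0]),
    "rain":      np.array([0.0, 0.0, 0.0, 1.0]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity of direction: near 1.0 means closely related, 0.0 unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("orange", "microsoft", "rain"):
    print(f"apple ~ {word}: {cosine(embeddings['apple'], embeddings[word]):.2f}")
# apple ~ orange: 0.78, apple ~ microsoft: 0.65, apple ~ rain: 0.00
# "apple" sits near both the fruit and the company -- the surrounding
# context is what picks the intended sense.
```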
Searle would argue that the machine’s “understanding” of the world, embedded in vectors of numbers that somehow capture the implicit relationships among things, isn’t really understanding. Sure, the machine can make accurate predictions and inferences about all kinds of things from its internal representation, which is just a bunch of numbers, and even generate novel outputs like answers to exam questions, stories, paintings, and movies, but does it really understand anything? Is a simulation of reality the same as reality?
This reminds me of my conversation with the philosopher Dave Chalmers on the nature of reality. If you can’t tell the difference between real and virtual, he asks, does it really matter? It’s a great question. Let me attempt an answer to a slightly simpler one: when might the difference between how machines and humans understand the world matter? It has to do with trust.
We tend to trust things and people that are familiar, and things that accord with our beliefs. When risk is involved, we want more of an understanding of how something works and why it makes mistakes. In an article I published almost seven years ago in the Harvard Business Review, called “When to Trust Robots with Decisions, and When Not To,” I showed how trust is driven by risk. We don’t trust driverless cars, despite their record, because we don’t understand how they see the world, and because of this, we worry about the possibility of a catastrophic unforeseen error. We worry about that nasty “edge case” that wasn’t covered in the training data. As humans, we have the confidence that we can handle novel situations by drawing on personal experience.
Chatbots are just as opaque as driverless cars, but there are countless low-risk situations where they will be tremendously useful. They will transform businesses, beginning with routine customer support, directing the edge cases to humans. In general, they will eliminate the grunt work and filtering that require human perception and common sense, and make humans more productive.
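One plausible shape for that customer-support triage is sketched below. The confidence score and the threshold are my own assumptions, there only to show the pattern; a real deployment would derive confidence from the model or a separate classifier and tune the cutoff to the risk involved.

```python
# A sketch of the triage pattern described above: let the bot handle
# routine, low-risk requests and hand the edge cases to a person.
# The confidence score and the 0.85 threshold are assumptions for
# illustration, not any product's actual API.

ESCALATION_THRESHOLD = 0.85  # hypothetical cutoff, tuned to the risk involved

def route_to_human(question: str) -> str:
    # Placeholder for a ticketing / live-agent handoff.
    return f"Escalated to a human agent: {question!r}"

def handle_request(question: str, bot_answer: str, confidence: float) -> str:
    if confidence >= ESCALATION_THRESHOLD:
        return bot_answer            # routine case: the bot replies directly
    return route_to_human(question)  # edge case: a human takes over

print(handle_request("How do I reset my password?",
                     "Use the 'Forgot password' link on the sign-in page.", 0.97))
print(handle_request("I was charged twice after a refund last March.",
                     "(uncertain draft)", 0.42))
```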
On the personal front, general intelligence will change our lives. Imagine the countless things you can’t attend to at the moment because of the limits on your personal attention. An avatar with common sense could change your interface with the real world or the metaverse. Unlike in Huxley’s dystopian future, where humans become machines, we’d all have our personal AI – maybe as a coming-of-age Christmas gift.
Merry Christmas!
Until the next year, V.
As a jokester, I just can’t resist adding this joke for cheer:
One day, in the depths of Siberia, Rudolph, who happened to be a committed member of the Communist Party, was arguing with his wife:
Him: It's raining.
His wife: It's not raining.
Him: It's raining.
His wife: It's not raining.
Him: Rudolph the Red knows rain, dear.