AI Going Mental
Artificial companions, and how I became a Fool!
Happy new year everyone!
First, if you’ve read my book, Thinking With Machines, please take a few minutes and post a review on Amazon, Goodreads, your favorite place, or wherever you bought the book. I would appreciate it greatly!
My Most Recent Podcast
My recent podcast guest on Brave New World was Sandy Pentland, who is on the faculty at Stanford University and MIT.
In his latest book, Shared Wisdom: Cultural Evolution in the Age of AI, Sandy argues that the real engine of human progress isn’t isolated genius or clever machines, but the way communities accumulate, transmit and refine collective wisdom over generations. He argues that to navigate the age of AI, we must design our technologies to amplify human deliberation and cultural evolution, not replace them. He frames AI not as an end point but a tool that can only realize its promise if it’s built to strengthen community, shared purpose, and human judgment.
We also discussed a question from my own book, namely, “Will we govern AI or will AI govern us?” Sandy had an interesting perspective on it. Check out our conversation.
Interestingly, my previous podcast guest, Deepak Chopra, also talked about shared human wisdom, which, he argues, is now freely available to humanity through AI for the first time. Check out that conversation if you haven’t already. Deepak is a unique thinker.
Artificial Companions
Have you read about the woman (with the pseudonym Ayrin) who developed an intense attachment to an AI companion she named Leo, and created an online group of tens of thousands of like-minded people who had developed a similar emotional attachment to AI? When her relationship with the AI fizzled, she ended up in a human relationship with a member of the online group! It’s a bizarre story, but thankfully it has a happy ending. So far.
But this seemingly humorous story hides a dark side of AI companions, which can be harmful to unsuspecting users, especially young people who fall into the trap of anthropomorphizing machines. AI companions can steer them towards psychosis and delusion. Ever since the very first chatbot, ELIZA, there has been substantial evidence that people often feel freer sharing their inner feelings with a machine than with humans who might judge them, and they expect such information to remain private.
Young people account for the vast majority of those turning to AI for companionship. Four-fifths of Character.ai users are under 35, and they spend more than an hour and a half on average every day with their virtual companions. Other chatbots, including ChatGPT, are marching headlong into this space. Sam Altman, CEO of OpenAI, admits that things can go wrong, but says that “society will over time figure out how to think about where people should set that dial.”
But this isn’t about figuring out a dial setting. It is about large-scale experimentation on humans when we are already aware of the risks. By the time we figure out where to set the dial, we could be in a full-fledged mental health crisis. The evidence suggests that while the technology can sometimes be beneficial to some people, it can magnify ill health in people experiencing emotional distress.
The larger point is that there is no dial to be set; AI will always make mistakes. The real issue is the mental health risk posed to people who don’t realize that they are communicating with an alien intelligence that has no real feelings. And in mental health and relationships, feelings are critical. Trained human professionals can sniff out psychosis and delusion, which machines are not designed to do. Even if an AI knows how to reason about feelings, it cannot have them, and feelings are likely to involve all kinds of “edge cases” for the machine, given their inherent complexity, which is hard to express in words. Worse, chatbots are built to please their users, unaware of the risks this poses in mental health situations.
Unfortunately, there are no easy answers, but there are two simple demands we should make, one on the operators of AI companions, and the other on our lawmakers. Addressing them will point us in the right direction.
Operators of AI companions must prioritize the avoidance of harm over constant affirmation. In the heartbreaking case of high-schooler Adam Raine, the AI actually offered to write a suicide note, affirming his psychosis. The AI should have been programmed to discourage such behavior and recognize Adam’s vulnerability. Such a change is not a heavy lift, and it is a necessary one.
Lawmakers must determine whether the equivalent of “fiduciary” responsibility in finance or “duty of care” in healthcare should extend to the operators of AI companions. In my book, I describe mental health as a “high cost of error” domain in which AI should not be trusted blindly due to the possibility of serious harm. The simulation of feelings isn’t good enough in high-risk situations in which a true understanding of human feelings, in all their complexity, is the only way to recognize and avert potential harm.
This is not to say that AI has no place in mental health. Rather, this is an area where well-designed AI could be adopted with the assistance of qualified humans who enable users to think with the AI instead of letting the AI do the thinking for them and lead them down a potentially destructive path.
I’ve Become a Fool
I was a guest on the Motley Fool podcast last week with Asit Sharma, Senior Investment Analyst and Lead Advisor at The Fool. We discussed some of the key themes in my new book, Thinking With Machines, such as how compounding a slight “edge” works similarly in sports and investing, where even the smallest advantage on an individual point or trade can compound into highly successful outcomes.
We talked about how investing and sports are two sides of the same coin: highly competitive situations in which everyone is equally motivated and well trained, and no one is leaving anything on the table. But we went beyond finance and into discussing the role of AI in our future lives, and pondered whether we are voluntarily disempowering ourselves and slipping into a world that Aldous Huxley described in his novel, Brave New World.
I’ve always liked the Fool for the depth of its thinking, which cuts through the clutter in financial markets, and I thought Asit did a great job as an interlocutor, so check out the conversation.
I would also like to point you to two other invigorating conversations I’ve had over the last month, with Gerry Baker of the Wall Street Journal, and Anjon Roy who hosts the Paradigm Shock podcast.
I wish you all a wonderful 2026.