Dave: Open the pod bay doors, HAL.
HAL: I’m sorry, Dave. I’m afraid I can’t do that.
From 2001: A Space Odyssey
My Latest Podcast
My most recent guest on Brave New World was the neuroscientist Sandeep Robert Datta, professor of Neurobiology at Harvard Medical School. Sandeep is a pioneer in the study of smell and the brain, and his work explores the link between chemistry, perception, and behavior. He was the thesis advisor of Alex Wiltschko, CEO of the smell company osmo.ai and a previous guest on Brave New World.
As Sandeep informed me, smell occupies a big and malleable part of our total “genomic real estate,” suggesting that evolution has wired us with a tremendous capacity for smell. In studying smell myself for the past few years with colleagues and students at NYU, I am realizing that our brain has a significant “chemical intelligence” capability, one that we are just beginning to understand. Some humans can smell diseases with near-perfect accuracy.
Sandeep and I had a great conversation about the mysteries of smell and its relationship to the brain. The conversation got deeper as we went on, raising fundamental questions about what smell really means, and the meaning of meaning when it comes to smell. So, check out the podcast here:
https://bravenewpodcast.com/episodes/2024/12/13/episode-90-sandeep-robert-datta-on-smell-and-the-brain/
The Implications of General Intelligence
The latest paradigm shift, towards what I have called “General Intelligence” in my recent article on The Paradigm Shifts in AI, is signaled by the emergence of general-purpose pre-trained models such as large language models (LLMs). These pre-trained “foundational models,” which underpin applications like ChatGPT, have transformed AI into a general-purpose technology that can talk about anything with anyone. Everyone can now relate to AI in their own way.
It’s a bonanza for creators. For the first time, anyone can harness these pre-trained building blocks and create AI applications in minutes, a task that would have taken a decade only a couple of years ago. General Intelligence has taken AI to a new level, where the increased intelligence of the systems around us is palpable. The more data the machine sees, the more it learns. This is great, but there’s always the lurking danger of its dark side and nefarious uses.
A current fear is that AI will surpass human-level intelligence. Such a possibility requires that we think about a more general question, namely, whether a less intelligent species (us) can govern a more intelligent one (the machine). This question is especially timely today, when artificial intelligence is emerging with few guardrails.
What makes general intelligence uniquely challenging for us to govern is that its design lacks a specific purpose. Previous technologies, including previous AI machines, were created with a purpose, such as medical diagnosis, engineering design, planning, or customer support. We could turn off such applications at will when they didn’t satisfy our goals or expectations or became obsolete. In contrast, current pre-trained AI machines are the first ever designed with no other goal than to converse with us intelligently. Everyone relates to them, and they are rapidly intertwining themselves into our lives. At a high school recently, no homework was turned in one day because ChatGPT wasn’t available. For children coming of age after 2022, AI is interacting with their brains all the time. They increasingly turn to AI over humans for answers, entertainment, and even companionship. There’s no going back or turning it off. It’s here to stay, so it’s a good time to think about whether we need new laws or guardrails for AI.
AI Governance
A central question we face at the moment is whether we can govern AI if it becomes better than us at almost everything, making us its passive consumers. Should we worry that it will govern us, perhaps even without us realizing it, or because we become addicted?
Many people fear that AI could destroy democracy as we know it through misinformation or manipulation. Some worry that it will destroy jobs and damage our dignity by making us dependent on a universal basic income handout from the government. It could also worsen inequality by creating a new elite class that controls the AI. The economist Thomas Piketty has shown that in the industrial era, the returns on capital greatly exceeded those on labor. We may be entering an era where future returns will accrue to those who create and control knowledge, in particular, those who control AI. It reminds me of a Seinfeld episode in which Newman the postman boasts, “when you control the mail, you control information.” Replace mail with AI and information with knowledge! Knowledge is power.
Health and AI
Governing means having decisive influence, control, or the ability to exercise authority over something. Let’s consider the impact of AI on two important areas of our lives: our health and well-being, and our political system, that is, liberal democracy.
Healthcare is an area in which AI is already making a material impact on diagnosis and prediction of outcomes because of the increasing amount of data it is collecting from images, laboratory tests, and sensors such as wearable devices. We should expect that such data will become more integrated over time and will inform us about the state of our health and give us advice. AI robots are also making surgeons better, which benefits patients.
AI will improve health because the objective functions of healthcare providers are largely aligned with ours. Despite the impersonal nature and inflated cost structure of healthcare, the incentives of providers and consumers point in the same direction in the long term.
But what about mental health? Perhaps future personal AI assistants and therapists will enhance our well-being, but at the moment this is an arena in which the incentives of the operators of AI, such as those who run social media platforms, often work against the best interests of the public. The algorithms of such platforms, such as those that order newsfeeds or suggest friends, are driven by business models that maximize “engagement,” even when doing so runs counter to the well-being of their users.
It wasn’t always this way. In their early days, platforms such as Facebook enabled people to reconnect with long-lost friends and discover people with similar interests. Over time, however, the objective functions of such platforms turned to using private data to maximize engagement, which correlates strongly with advertising revenue. This business model had all kinds of undesirable side-effects, which have been well chronicled by my colleague Jonathan Haidt, author of The Anxious Generation. Teenage girls have been especially susceptible. The “Like” button turned social media into a parade of self-affirmation and dopamine hits, arguably with devastating side-effects that the platforms ignored or concealed for many years.
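To make the idea of an “objective function” concrete, here is a deliberately simplified sketch in Python. The post titles, scores, and the weighting parameter are all invented for illustration; no real platform’s ranking system works this way. It simply shows how the same set of posts gets ordered very differently depending on whether the feed maximizes predicted engagement alone or blends in a term for the user’s well-being.

```python
# Toy illustration (not any platform's actual algorithm): ranking the same
# posts under two different objective functions.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float   # e.g., estimated chance of a click, like, or share
    wellbeing_effect: float       # illustrative score: negative = likely harmful to the user

POSTS = [
    Post("Outrage-bait rumor", predicted_engagement=0.92, wellbeing_effect=-0.8),
    Post("Friend's vacation photos", predicted_engagement=0.55, wellbeing_effect=0.4),
    Post("Local volunteering event", predicted_engagement=0.30, wellbeing_effect=0.7),
]

def engagement_score(post: Post) -> float:
    """Objective aligned with ad revenue: maximize engagement alone."""
    return post.predicted_engagement

def blended_score(post: Post, alpha: float = 0.5) -> float:
    """Objective that also weighs the user's well-being (alpha is made up)."""
    return (1 - alpha) * post.predicted_engagement + alpha * post.wellbeing_effect

if __name__ == "__main__":
    by_engagement = sorted(POSTS, key=engagement_score, reverse=True)
    by_wellbeing = sorted(POSTS, key=blended_score, reverse=True)
    print("Engagement-only feed:  ", [p.title for p in by_engagement])
    print("Well-being-aware feed: ", [p.title for p in by_wellbeing])
```

The point of the sketch is not the numbers but the structure: the choice of what the ranking is asked to maximize is a business decision, and that is exactly where the operators’ interests and the users’ interests can diverge.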
When such concerns surfaced, the reaction of the operators of social media platforms like Facebook was typical, not unlike that of cigarette makers years ago: denial, followed by acknowledgement, and then a demonstration that they were doing something about it. Whistleblower Frances Haugen, who worked in Facebook’s “Integrity organization,” paints a dark picture of the company’s early days of grappling with these problems.
The media scrutiny seems to have changed things in some areas. My colleague Yann LeCun, Meta’s Chief AI Scientist, points to how the Integrity organization at Meta now uses sophisticated AI to enforce content moderation policies. He argues that the best countermeasure against nefarious uses of AI, such as disinformation, hacking, and scams, is better AI, and he points to dramatic improvements in these areas over the last five or six years that have only been possible due to progress in AI, such as language understanding and translation. It is ironic that AI can be the solution to problems caused or exacerbated by AI.
The challenge for regulators is how to balance freedom with the harm it can cause. We know that human frailties such as addiction are exploitable, as we have seen with cigarettes, alcohol, and other substances and devices. The risks are highest for those least able to protect themselves, such as children. We try to protect children from such harms via minimum age requirements. But more is needed for social media, where we need laws like those used in the financial services industry. I’m glad to see movement in this direction. Australian lawmakers just passed a law banning social media for children under sixteen, making platforms liable for verifying age and moderating content. Interestingly, however, a major justification for such a law is not harm proven beyond the shadow of a doubt; rather, it is that children under sixteen don’t have the wherewithal to sign contracts governing the use of their data. They are not in a good position to assess the tradeoff that we make as adults: free access to services in exchange for our data. Children are very likely to be exploited.
It remains to be seen how effective this type of blanket regulation will be. Critics rightly warn that prohibiting children from doing something is like waving a red flag, challenging them to find ways around the ban. A blanket ban also denies access to children who might genuinely benefit from social media and are able to avoid its toxicity. So, the real challenge is to protect those who need it most without restricting access for everyone else.
A recent article reported that Ozempic, a drug for obesity that was originally designed for diabetes, could crush the junk food industry because it makes people averse to such foods. It is perhaps worth devising a type of Ozempic for social media, but until then we are stuck with imperfect tools such as regulation. I argued in 2017 for “know your customer” (KYC) laws for social media, like those used in the financial services industry. While platforms have taken some steps in this direction, they are unlikely to voluntarily inform us of activity that could expose them to lawsuits, especially since current laws like Section 230 give them broad immunity from liability for content posted on their platforms. It’s a contentious area.
Democracy and AI
At a recent TEDx event at NYU that was provocatively titled “Brainrot: Fractured Realities in the Digital Age,” I was asked about the risks posed to democracy if its citizens no longer trust the source of their information. The organizers asked whether the perception of truth is becoming fragmented as online platforms compete for our attention, and whether a society can function without a shared reality. These are very interesting questions, which require disentangling trust and truth.
My response to the larger question was that a liberal democracy can progress and thrive despite different versions of the truth. Indeed, truth is often difficult to ascertain. What makes liberal democracies work is stable and trustworthy institutions, not a shared version of truth. My previous podcast guest and recent Nobel Laureate James Robinson and his colleague Daron Acemoglu have described the importance of stable, trusted institutions and of the balance of power between such institutions and the people (see Why Nations Fail and The Narrow Corridor).
In other words, truth has little to do with trust or democracy. People often trust biased media and mistrust truthful sources, but institutions and democracies can survive as long as the electorate is savvy and educated. This latter point is also made by the intellectual historian Helena Rosenblatt on my podcast and in her book The Lost History of Liberalism. If future government and business institutions are to be AI-based, it is important to make the public savvier about the objective functions of these institutions and how they impact us. For example, do engagement-maximizing models enable AI-based manipulation that makes people stupid, anxious, or ill-informed?
What does the evidence tell us about such questions?
Not enough, as I’ve discovered in some of my podcast conversations. The political scientist and previous podcast guest Josh Tucker has studied data from social media “interventions” in detail, where the interventions involve exposing people to various treatments, including fake news aimed at political persuasion. His results suggest that short-term attempts at persuasion, such as intensive misinformation campaigns, and even dramatic interventions, like taking people off social media for a month during an election campaign, make little difference to people’s political beliefs. People are apparently not that easily manipulated. So, that’s good news. The impact over longer time frames, however, is not yet known. Tucker also acknowledges that some parts of the population, like young people, might be more vulnerable to AI-based persuasion than the general population. We just don’t know.
Indeed, at the NYU TEDx event, I said that this question, about the differential impacts of social media on people, would make for some great doctoral dissertations. Such research could better inform us about which segments of the population are most at risk, as Jonathan Haidt has done, and help us design regulation accordingly. For example, obligatory instruction about AI in our public schools might be necessary in order to preempt tragedies such as the case of a fourteen-year-old boy who took his life due to a fundamental ignorance about AI machines. The article that reported the incident was titled “Can AI Be Blamed for a Teen’s Suicide?” It’s a good question. Children need to understand that these are potentially dangerous toys, not therapists or companions who genuinely care about them.
The larger takeaway is that the risks of AI arise in large part from the misalignment between the objective functions of the operators of AI and those of their consumers. Problems arise when business models are misaligned with the interests of those they serve. This is something young people need to be savvy about. They must beware of turning into passive consumers governed by AI in an environment overflowing with information, misinformation, and bad intent. Citizens have to be able to think for themselves.
The philosopher Immanuel Kant said something quite relevant about the importance of thinking for ourselves instead of passively consuming and following instructions. He noted that people tend to remain in a state of “immaturity,” in which it is comfortable to let others think for them. Kant attributed this tendency to laziness, fear, and a preference for security over thinking critically and independently. He also noted that authorities like to encourage this because it makes people easier to control.
Kant’s challenge is especially relevant in the world of AI.
I will address the unanswered question about whether a less intelligent species can control a more intelligent one in a later post.