My Recent Podcast
My most recent guest on Brave New World was Kevin Mitchell, a professor of Genetics and Neuroscience at Trinity College Dublin. Kevin is the author of the recent book Free Agents: How Evolution Gave Us Free Will.
It’s hard to believe that free will doesn’t exist, isn’t it? Don’t you have some agency in the decisions you make? How could they all be pre-ordained, or deterministic? When I think back to all the important decisions I’ve made, where I stewed for days to define and understand the choices more clearly, was I just under an illusion that I had any agency?
And yet, there’s no consensus about the right answer. Indeed, by some measures, the determinists dominate, asserting that free will is, indeed, an illusion. They argue that biology is just complicated chemistry, and chemistry is just complicated physics. We are algorithms, under the illusion that we’re making choices with agency, when in reality we are just reacting to our environment deterministically.
Determinism also presents a problem for ethics and morality. After all, what if you had no control over the fact that you shot someone on the street? Moral philosophers who lean towards determinism still say that we should pretend we have control, since otherwise society would descend into chaos. I’m still trying to wrap my head around that.
Kevin’s book lays out the arguments for and against free will, and argues eloquently that life at a macro level is about free will. So, check out the conversation:
Will AI Make Human Doctors Obsolete?
Last week, I gave a talk at the Centers for Disease Control and Prevention (CDC) titled “Will AI Make Human Doctors Obsolete?” It led to a fascinating question-and-answer session.
I started by walking through what I call the paradigm shifts in Artificial Intelligence that have occurred over the last six decades, which I described in an article I posted on arXiv last year:
https://arxiv.org/abs/2308.02558
Interestingly, medicine is what got me into AI in 1979, after I witnessed a mind-blowing dialog between the legendary clinician Jack Myers and Internist, the first diagnostic AI system in Internal Medicine. Internist’s architect was AI pioneer Harry Pople, who had spent over a decade working with Myers to create its knowledge base and reasoning process. Internist’s knowledge base was a large network of medical terms connected by an assortment of relationships such as “A causes B,” “A is associated with B,” “A is a part of B,” “A inhibits B,” and so on. It looked like a large hand-crafted neural network.
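To give a feel for that structure, here is a minimal sketch of such a relationship network in Python. The terms and relations are illustrative inventions of mine, not Internist’s actual content.

```python
# A minimal sketch of an Internist-style knowledge network: medical terms
# linked by typed relationships. The terms and relations below are
# illustrative inventions, not Internist's actual knowledge base.
from collections import defaultdict

class KnowledgeNetwork:
    def __init__(self):
        # edges[term] -> list of (relation, other_term) pairs
        self.edges = defaultdict(list)

    def add(self, a, relation, b):
        self.edges[a].append((relation, b))

    def related(self, term, relation=None):
        """Terms linked to `term`, optionally filtered by relation type."""
        return [b for rel, b in self.edges[term] if relation in (None, rel)]

kb = KnowledgeNetwork()
kb.add("hepatitis", "causes", "jaundice")
kb.add("jaundice", "is_associated_with", "dark urine")
kb.add("liver", "is_part_of", "digestive system")

print(kb.related("hepatitis", "causes"))  # ['jaundice']
```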
Pople and Myers viewed medical diagnosis as assembling competing jigsaw puzzles, each connecting as many of the observed symptoms as possible in a coherent way, then ranking the alternative formulations and asking questions that would discriminate among the leading contenders. Internist was impressive for its time.
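Here is a toy sketch of that flavor of reasoning, with invented diseases, findings, and scoring; Internist’s actual heuristics were far more elaborate.

```python
# A toy sketch of the "competing jigsaw puzzles" idea: score each candidate
# diagnosis by how well it explains the observed findings, rank the
# candidates, and pick unobserved findings that discriminate between the
# two leaders. All diseases, findings, and weights here are invented.

def score(expected, observed):
    explained = expected & observed    # findings the hypothesis accounts for
    unexplained = observed - expected  # findings it leaves dangling
    return len(explained) - 0.5 * len(unexplained)

def discriminating_questions(top, runner_up, observed):
    # Findings expected under one leading hypothesis but not the other
    # help separate them; skip anything already observed.
    return (top ^ runner_up) - observed

diseases = {
    "hepatitis": {"jaundice", "fatigue", "dark urine", "fever"},
    "gallstones": {"jaundice", "abdominal pain", "nausea"},
}
observed = {"jaundice", "fatigue"}

ranked = sorted(diseases, key=lambda d: score(diseases[d], observed), reverse=True)
print(ranked)  # ['hepatitis', 'gallstones']
print(discriminating_questions(diseases[ranked[0]], diseases[ranked[1]], observed))
```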
Where are we now in AI?
In contrast to systems like Internist, whose knowledge was specified top-down, modern AI systems such as large language models learn medical knowledge bottom-up, encoding it in their neural networks from all the available data on the Internet. Given their access to this vast and growing store of human knowledge, we should expect such machines to become a lot smarter than Internist over time. However skilled Myers might have been, his knowledge should be no match for the collective medical knowledge and experience that is accessible to AI. Indeed, I ran ChatGPT on some test cases processed by Internist and it did quite well, although I had to prod it in the right direction a few times before it converged on the same answers as Internist. That’s impressive for a pre-trained model right out of the box. So, is it just a matter of time before an AI machine becomes the oracle of medical knowledge, and human clinicians become obsolete?
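For readers who want to try a similar exercise, here is a minimal sketch of posing a case to an LLM programmatically, assuming the OpenAI Python client. The model name, prompt, and case summary are illustrative placeholders, not my actual session.

```python
# A minimal sketch of replaying a diagnostic test case against an LLM,
# assuming the OpenAI Python client. The model name, prompt, and case
# text below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

case = (
    "62-year-old male with jaundice, fatigue, and dark urine for two weeks. "
    "No alcohol history. What are the leading diagnoses, ranked?"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are assisting with differential diagnosis."},
        {"role": "user", "content": case},
    ],
)
print(response.choices[0].message.content)
```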
Yes or No?
In thinking about the question, I revisited my podcast conversation with David Sontag from MIT, who works at the intersection of AI and healthcare. David and I discussed a number of topics, including why AI systems are notably absent from clinical diagnosis at the moment, and how AI is likely to make its way into healthcare in the future.
In my CDC presentation, here’s how I summarized how to think about the answer. First, most cases are routine, and the costs associated with such care are bloated and unnecessary. It shouldn’t cost hundreds of dollars to be prescribed a painkiller or a sedative. AI will greatly reduce such costs by increasing the supply of good advice for routine cases. Second, machines can now ingest all kinds of data, including images, notes, and numbers, and communicate fluently with people in English. And unlike the current system, which uses simple decision rules like “high cholesterol increases cardiac risk,” machines can be instrumented differently: they can simultaneously consider hundreds of markers in assessing risk and treatment, as sketched below. In other words, machines’ decisions will be based on broader, less biased, and more holistic information.

A related human drawback is increasing specialization, relative to old-school generalists like Myers. Specialists can easily miss the bigger picture, or be uninterested in it. David described how his mother, who was suffering from a cancer called multiple myeloma, actually died from cardiac failure due to a buildup of amyloids in her heart, which wasn’t monitored because the oncologist was focused on her cancer. Her heart was the cardiologist’s problem.
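Here is that sketch: a single-threshold rule beside a model that weighs several markers at once. The markers, weights, and threshold are invented for illustration; a real model would be fit to outcome data across hundreds of markers.

```python
import numpy as np

# The kind of simple decision rule the current system is instrumented for:
def rule_based_risk(cholesterol):
    return "high" if cholesterol > 240 else "low"

# A machine can instead weigh many markers simultaneously, e.g. with a
# logistic model. These markers and weights are invented for illustration.
markers = np.array([235.0, 130.0, 7.2, 1.8])  # cholesterol, systolic BP, A1c, CRP
weights = np.array([0.008, 0.02, 0.3, 0.4])
bias = -7.0

risk = 1.0 / (1.0 + np.exp(-(weights @ markers + bias)))
print(rule_based_risk(markers[0]))   # 'low': the single rule sees no problem
print(f"combined risk: {risk:.2f}")  # ~0.59: the joint view still flags risk
```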
Perhaps the strongest long-term advantage of the machine is the orders of magnitude more cases it will see relative to even the most experienced humans. When you consider all these factors, along with AI’s new ability to communicate fluently in natural language, it looks like no contest.
Or are we undervaluing humans and underestimating the risks of AI?
Relative to humans, machines still can’t reason properly and reliably, and they lack the capacity for introspection and for questioning their own assumptions. They have no consciousness. A second argument is that diagnosis is based on a lot more than data: it draws on subtle cues that come from things like eye contact, and from observing and physically examining the patient. As Yogi Berra famously said, “You can observe a lot just by watching.” The way patients look, walk into the office, or describe their symptoms might matter. And empathy and bedside manner matter. A more empathic listener might extract better information from a patient than a less empathic one.
Speaking of information, it isn’t straightforward for the machine to acquire all the training data it requires. As David pointed out, not only is the current healthcare system instrumented for humans, meaning that decisions are based on rules involving standard, well-understood markers such as PSA or A1c levels, but the data the system gathers is heavily driven by which costs insurance will cover. A clinician is less likely to order tests that insurance doesn’t cover. This leads to serious bias in the data: a lot is collected about some conditions and very little about others, especially those that are rare or poorly understood. That existing bias makes it hard for a machine to learn about ill-understood diseases, which is precisely where more data should be collected.
Finally, one of the headwinds for AI could be regulation. It is unclear how the market for AI in healthcare will be regulated and how much transparency will be required of applications. Will app providers be on the hook for mistakes, or will buyers have to beware?
Interestingly, during the question-and-answer session, I sensed that several people in the audience felt that the arguments against AI would weaken over time. Someone pointed to evidence showing AI responses being rated as more empathetic than those of human doctors. Indeed, in my early podcast with Eric Topol, we discussed the word cloud that opens his book Deep Medicine, an unflattering characterization of doctors by patients as arrogant, uncaring, rude, hurried, rushed, uninterested, late, and unconcerned. Perhaps machines could do better, given their fluency with language and their rapidly improving interfaces.
Others questioned whether human subjectivity in interpreting evidence might be a bad thing in the long run, as objective data become available more frequently. Perhaps that subjectivity was essential in a time of little data, but it will become a disadvantage as more unbiased data become available and machines get better at interpreting subtle cues, including body language.
If we take the long view, the machine will do more and more of the heavy lifting for us, regardless of whether the final decision is made by a human or a machine. But perhaps more importantly, the role of human doctors is likely to change. AI should free them from routine cases and other time-consuming activities that machines can handle better, and let them spend more time where it is needed most: with patients.