My Recent Episode
My most recent guest on Brave New World was Missy Cummings, one of the US Navy’s first female fighter pilots, who is now a professor at George Mason University’s College of Engineering and Computing. Missy’s research is on the application of artificial intelligence in safety-critical systems.
Missy and I discussed the state of the art of autonomous navigation systems. Her position is that current-day autonomous vehicles still make too many mistakes, for two reasons. The first is that the sensors and recognition technology are not good enough, causing things like unnecessary braking and acceleration for reasons we don’t fully understand; the result is more rear-end collisions than with human-driven cars. More fundamentally, she argues that they lack the “situational awareness” of human drivers. Humans are able to invoke common sense and adapt prior knowledge when needed. Missy doesn’t think the technology is ready for full-on autonomous driving on highways.
Check out the conversation with Missy at:
https://bravenewpodcast.com/episodes/2024/11/14/episode-89-missy-cummings-on-making-ai-safe/
Trust
I’ve been thinking about trust in AI for many years, but a near-death incident earlier this year made me think anew about the role of AI across all areas of our lives, including decisions in transportation, finance, law, and healthcare.
My initial thinking about trust in AI algorithms was driven by my foray into running a systematic, machine-learning-based hedge fund in the late nineties. At the time, the idea of machines learning how to make trading decisions had not been explored. Even as exchanges and transaction processing became automated, investment decisions remained mostly human.
The larger question, of when to trust AI versus humans, became more prominent as I started exploring AI algorithms in other domains such as sports. I worked with an NBA franchise to estimate things like how the odds of winning would change in response to using alternative lineups against opposing teams, or to resting valuable injured players late in the season and during the playoffs, when game outcomes become more critical.
Based on my experiences implementing prediction systems in finance, healthcare, advertising, and sports, I published a 2016 article in the Harvard Business Review describing when we should trust AI with decision-making and when we shouldn’t. I’m going to cast some of Missy’s observations and my own experience using this framework, which I’ll describe briefly.
The framework presents how the interaction of two variables impacts trust: how often the machine is wrong and the consequences of its mistakes. I call these predictability and cost of error. We shouldn’t entrust a machine completely with decisions when the cost of mistakes is unacceptably high for a given error rate. Rank-ordering problems by cost of error is a good indicator of which ones will be automated first, and of those where humans remain essential.
The figure below illustrates the Trust zone in green and the Don’t Trust zone in red, and an automation frontier that separates them. As machines become better at prediction and make fewer serious mistakes, applications cross the automation frontier into the Trust zone. Likewise, lowering the cost of error through better models that avoid serious mistakes also nudges applications into the Trust zone. So does looser regulation, whereas more onerous regulation inhibits automation by increasing error costs.
Every prediction problem lies somewhere on the horizontal axis, that is, it has a certain level of predictability. The extreme left shows a coin toss, which has “zero signal”—an activity in which prediction won’t be any better than random. The extreme right suggests purely deterministic, mechanical decision problems. Driverless cars on highways would fall towards the top right part of the heatmap. Although there might be relatively few errors, they can be very costly.
Traditional thinking was that high-predictability problems are more easily automatable than low-predictability problems, but this ignores the cost of mistakes. In contrast, the framework says that the combination of the two axes determines trust: the cost of error plays just as large a role as predictability.
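To make the framework concrete, here is a minimal sketch in Python. The function, the numbers, and the threshold standing in for the automation frontier are illustrative assumptions of mine, not part of the original article:

# Illustrative sketch of the trust framework: trust a machine with a decision
# only when the expected cost of its mistakes stays below an acceptable level
# (the "automation frontier"). All numbers below are hypothetical.

def trust_decision(error_rate: float, cost_per_error: float,
                   acceptable_expected_cost: float) -> str:
    """Return 'Trust' when the expected cost of mistakes is within tolerance."""
    expected_cost = error_rate * cost_per_error
    return "Trust" if expected_cost <= acceptable_expected_cost else "Don't Trust"

# An ad-targeting model: frequent but cheap mistakes.
print(trust_decision(error_rate=0.10, cost_per_error=1.0,
                     acceptable_expected_cost=0.5))        # Trust

# Highway driving: rare but potentially catastrophic mistakes.
print(trust_decision(error_rate=0.001, cost_per_error=10_000.0,
                     acceptable_expected_cost=0.5))        # Don't Trust

The same low error rate can land on either side of the frontier depending on what a mistake costs, which is the point of the two-axis view.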
No Highway
When I wrote about my near-death experience in an earlier newsletter, several readers asked me what happened. I’ll recap the incident briefly, with the larger objective of answering the question of trust in AI and humans.
On an April Sunday evening this year, I was cruising south through a green light on the Saw Mill Parkway into New York City. Traffic was normal, moving at roughly fifty miles per hour. Now, play the following scene in your mind. Fifteen feet in front of you, an SUV suddenly appears right before your eyes, crossing the parkway perpendicular to oncoming traffic. It took less than a fifth of a second from the time I realized what was happening to the impact. That’s sufficient time to register that you’re about to die. I swerved left in desperation to avoid the SUV, but at that speed, as my car’s computer later revealed, I only managed to turn left by two degrees before the moment of impact. The shatter-proof windshield turned opaque and the car continued to lurch forward for a few seconds. I feared getting hit by oncoming traffic.
Miraculously, my car stopped on the side of the road. I owe BMW big time for the fact that the passenger compartment didn’t buckle at all. My partner, who was sitting in the passenger seat, suffered two broken wrist bones, which was a relatively minor injury in the larger scheme of things. I say this because my mother died in the seventies from an accident in which the front buckled on impact and she broke her leg. She never came out of anesthesia after the surgery.
In my case, the other driver, an old lady, escaped without a scratch. I never spoke to her, nor did the insurance company tell me anything other than the fact that she was at fault and they had covered the wreck.
What happened?
I’m quite certain she didn’t see me coming at all. It was twilight, which didn’t help. I didn’t see her until she was right in front of me, at which point, there was nothing I could do.
Ironically, this was a case of a human’s complete lack of situational awareness. The woman behaved like an autonomous vehicle with a failed sensor, coupled with zero situational awareness. That’s the most dangerous kind of human, and one most in need of reliable AI.
Humans Are Not Equal
A friend told me that the last time his mother picked him up at the train station, she almost caused several accidents on the way home. The trouble is that she wasn’t aware of any of them. When he left, he made sure to take an Uber back to the station, and now he refuses to be a passenger when she drives.
But more importantly, she refuses to stop driving. It might well have been his mother who almost killed me.
The National Highway Traffic Safety Administration (NHTSA) reported 40,990 deaths from motor vehicle crashes in 2023. Almost 95% of these stem from human error. Young adults and senior citizens have the highest incidence. Young adults, those under 30, tend toward riskier driving behavior, while seniors suffer from frailty, slow reaction times, and impaired vision or motor skills. I’m quite certain that the old lady fell into the latter category, which creates a serious hazard for everyone else.
Would my accident have occurred if we had been driving driverless cars? Unlikely. The other vehicle’s cameras and AI would have averted such a suicidal move, and I would have been spared. Unfortunately, there is no data on how many accidents a risky driver causes, but I would not be surprised if a small minority of drivers cause a large share of accidents. These are the people who desperately need driverless cars. According to the NHTSA, if all cars on the road were driverless, fatalities would come down by 90%, but I suspect that nudging, say, the riskiest 10% of humans, as judged by AI, into driverless cars would reduce fatalities considerably.
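As a back-of-envelope check on what that 90% figure would mean against the 2023 toll cited above (the arithmetic is mine, not NHTSA’s):

# Rough arithmetic using the 2023 fatality count quoted earlier.
deaths_2023 = 40_990
print(round(deaths_2023 * 0.90))   # roughly 36,900 fewer deaths per year
print(round(deaths_2023 * 0.10))   # roughly 4,100 remaining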
AI As The Judge of Drivers
Transportation is an area that is ready for AI intervention. While I would personally not trust current-day AI with full-on autonomous driving on highways, slower-moving vehicles that deal with less uncertainty are already seeing inroads by AI. As of this writing, autonomous taxis in Phoenix, San Francisco, Los Angeles, and Austin have driven over 22 million miles, albeit with some violations and minor accidents, but no serious injuries. The cost of error is much lower with urban taxis than with highway driving. Quoting Missy, “speed kills.”
AI could reduce accidents by evaluating human drivers, which could probably be done accurately with a few hours of observation by the machine. Insurance companies already have devices that measure things like speed, braking, and acceleration, but these are very crude measures of risk. Slow drivers are not less risky if they drive slowly due to poor vision or slow reflexes. AI systems will be much better at judging the true risk of human drivers.
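To illustrate what such an assessment might look like, here is a hypothetical sketch in Python. The telemetry features, weights, and thresholds are placeholders I invented for illustration, not a description of any insurer’s or vendor’s actual model; a real system would learn them from crash and near-miss data:

# Hypothetical driver-risk score from observed telemetry. Higher is riskier.
from dataclasses import dataclass

@dataclass
class DrivingSample:
    hard_brakes_per_100mi: float
    mean_reaction_time_s: float    # e.g., delay from hazard onset to braking
    lane_keeping_error_m: float    # average deviation from lane center
    near_misses_per_100mi: float

def risk_score(s: DrivingSample) -> float:
    """Blend behavior and reflexes into a 0-1 score; speed alone is not enough."""
    score = (0.3 * min(s.hard_brakes_per_100mi / 10, 1.0)
             + 0.3 * min(max(s.mean_reaction_time_s - 0.7, 0.0) / 1.0, 1.0)
             + 0.2 * min(s.lane_keeping_error_m / 0.5, 1.0)
             + 0.2 * min(s.near_misses_per_100mi / 5, 1.0))
    return max(0.0, min(score, 1.0))

# A slow but inattentive driver with slow reflexes still scores as high risk.
print(risk_score(DrivingSample(hard_brakes_per_100mi=8,
                               mean_reaction_time_s=1.6,
                               lane_keeping_error_m=0.4,
                               near_misses_per_100mi=4)))   # about 0.83

The point of the sketch is that a composite of behavior and reflexes, rather than raw speed alone, is what such a system would weigh.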
This paints a picture of the future that is both encouraging and dystopian. It is encouraging in that AI would rank every driver objectively by risk and ability. It is well known that 80% of drivers think they are above-average drivers, which cannot be true, and which is indicative of a dangerous human bias. AI will cut through this bias and get a majority of hazardous drivers off the road.
On the other hand, this situation points to a dystopian future, in which our driving is always measured and governed by AI. If the AI thinks you’re risky to society, you will be confined to autonomous vehicles!
I have very mixed feelings about such a brave new world, but the collective costs to society of the status quo are unacceptably high, so I don’t see an alternative. I feel for the victims of road fatalities and their dependents. They deserve better than to be killed on the road through the negligence of their fellow citizens.