My Latest Podcast
My most recent guest on Brave New World was Angela Hawken, director of the Marron Institute at NYU. Marron is an applied research think tank that helps governments create and enact better policies through the use of technology and data.
Angela has spent much of her career on the ground, dealing with district attorneys, correctional officers, offenders, judges, and government agencies. We had a great conversation, which got progressively deeper into the practical challenges involved in trying to improve urban living, law enforcement, criminal justice, and government efficiency. I asked what success would mean for Trump’s DOGE initiative, which is front and center at the moment. So, check out my conversation with Angela at:
https://bravenewpodcast.com/episodes/2025/02/13/episode-92-angela-hawken-on-changing-government/
The AI Optimists and Pessimists
Over the last few months, I’ve been asking high school and university students how they feel about their future in the world of AI. Are they optimistic or pessimistic?
As an AI insider, I’m still blown away by the progress in the field, perhaps because I’ve had lower expectations of the machine than those growing up with it, who take its magic for granted. In particular, my undergraduate students consider themselves the “AI generation.” AI is the water in which they swim. The graduate students are five to ten years older on average and more diverse.
Both cohorts are surprisingly optimistic, but let’s start with what the pessimists had to say. They voiced three concerns.
The first concern is that AI will automate many of the tasks that entry-level analysts currently perform, so there will be far fewer such roles and tougher competition for them. The second draws on an exercise analogy: junior analysts develop the expertise and intuition required to become senior managers by performing many repetitions of analyses and preparing materials manually for clients. Since AI will eliminate the repetitions they would otherwise have done in junior roles, they won’t acquire the necessary depth. The third, related concern is that they will not develop the domain expertise that current-day senior managers acquired from their mentors the hard way, without the use of AI tools. It has become so easy to ask the AI about anything and get great answers that they worry about not developing the “mental muscle” they need.
But the majority of students are highly optimistic. As the AI generation, they feel more comfortable with the tools than current-day senior managers, who rely on the younger cohort for tools-based knowledge. They feel that AI will make them better and more productive in whatever they do, and that full automation of jobs will take some time. During this time, there will be a need for humans, so they are not at immediate risk from AI. Interestingly, however, while they are optimistic about their own futures, many are less optimistic about whether AI will be better for society as a whole. The most common concerns are that it will increase income inequality, fraud, manipulation, and other nefarious activity. Some point to the urgent need for regulation to protect us from such harms.
This justification from one of my optimistic students gave me a chuckle: “If you had an insanely smart servant who did everything you asked without ever getting tired, wouldn’t that be the dream? Even if it takes our jobs, humans naturally cannot just sit around and do nothing, they’ll always adapt and find a job to do.”
I agree, as long as we find meaningful things to do and the insanely smart servants don’t turn us into feeble-minded robots.
Obsolete vs Super Humans
Personally, I feel we are headed for a bifurcation of humans and more inequality. Those who are unable to exercise their mental muscle because of AI will become obsolete, while those who are able to amplify their expertise will become superhuman. I am reminded of two Brave New World podcast conversations that touch on this subject: one with my NYU colleague and valuation guru Aswath Damodaran, and another with political scientist Phil Tetlock from the University of Pennsylvania.
Damodaran recently asked how he could stay ahead of his “Damodaran bot,” which my colleague Joao Sedoc and I have been creating to think about valuation the way he does. Aswath muses: if the AI reads everything he’s written, never forgets anything, and keeps getting better, is he in jeopardy? He asks whether humans in general are at risk of being replaced by AI.
My short answer is that humans are indeed at risk, unless they up their game by better exercising their mental muscle. It has always been that way with every technology, and AI should be no different. Damodaran isn’t at risk for one simple reason: the machine isn’t capable of asking questions of the quality he poses when confronted with complex, open-ended problems. Philip Tetlock, author of the book Superforecasting: The Art and Science of Prediction, describes what makes a select few individuals like Damodaran “superforecasters.”
What makes superforecasters special in terms of their mental muscle? Tetlock provides some useful lessons. In a nutshell: develop an insatiable curiosity, the ability to read extensively and to handle numbers and information, and a knack for asking good questions, that is, questions that are not easily biased towards a misleading view or conviction. This ability keeps inquiry grounded in the right ballpark. I advise students that studying philosophy is a great way to develop such muscle. It’s great training for appreciating what makes a good question and how to compare answers.
Will AI ever get to the point where it asks the right questions and becomes capable of reflection and deep thinking on its own?
Probably. But it will take time. In the meantime, the most AI-proof humans are those with deep knowledge in some domain, who can ask the right questions and use the machine to answer them. For such people, the machine becomes a powerful amplifier by lowering the barriers to entry into other areas, especially those requiring programming and analytics tools or deep expertise in a related domain. A physicist can get help with chemistry, a biologist can cross into physics or genetics, and a financial analyst can get help understanding the details of the chip industry when valuing a semiconductor business. But it requires the ability to prime the machine with the right questions and the depth of knowledge to interpret its responses.
I’ve encountered this in my own work. I asked ChatGPT to summarize the most influential articles in a certain area of finance from the 1990s and explain why they were impactful. Without sufficient depth in finance, I wouldn’t have been able to judge the quality of its response. Because I was familiar with the domain, I recognized several of the authors it listed and the summaries of their articles, so I was able to trust its outputs. It amplified my ability and saved me several days of work.
Depth of knowledge matters in the era of AI. It reminds me of a Russian expression that Ronald Reagan often used during negotiations with the Soviets in the 1980s: “doveryai, no proveryai,” which translates to “trust, but verify.”
If you have the ability to verify the AI, you’ll be alright.