My Latest Podcast
My latest guest on Brave New World was Andy McAfee, author of The Geek Way, which is about what makes the west coast geeks special when it comes to innovation. I got under the hood with Andy to understand how the geek culture works in creating innovative companies and why the center of gravity of innovation shifted west in the 21st century. The conversation was such a nice balance of interesting stories and concepts.
Check out the podcast at:
https://bravenewpodcast.com/episodes/2024/09/12/episode-87-andrew-mcafee-on-the-geek-mindset/
The Damodaran Bot
Last week, in anticipation of the earnings report from NVIDIA, my friend and colleague Aswath Damodaran published a fascinating blog on how he intends to stay a step ahead of the “Damodaran Bot” that I have been creating with my colleague Joao Sedoc over the last year. We are looking forward to comparing the first version of the bot with seven students who took Damodaran’s class on Valuation last semester, some of whom also took my class on Systematic Investing.
I had first mused with Aswath about building a Damodaran bot way back in 2015 because I’ve always wondered whether long-term investing is amenable to systematization. It seemed like a futile endeavor at the time. But the emergence of ChatGPT in late 2022 made what seemed like science fiction in 2015 very real. A handful of human investors have exceptional track records, but no one comes even remotely close to Damodaran in terms of the volume and quality of training data he has published from which an AI can learn. And who better to evaluate the bot than Damodaran himself?
Joao and I will be thrilled if DBOT 1.0, as we refer to it, gets a B grade from Damodaran on its analysis of the electric car maker BYD. I say B because it is difficult to tune an LLM to think at the same depth as Damodaran. LLMs are designed to converse with humans in a way that makes sense, which they do well, but not to be truthful or rational. Nor are they designed for reflection and out-of-the-box thinking. Indeed, our initial attempts at prompting the LLM to read Damodaran’s blogs and evaluate a new company based on them didn’t generate anything even remotely comparable. The output made sense, but sounded like boilerplate material. It forced us to break down his thinking into components, which is still a work in progress.
I’ve admired Damodaran’s analyses for their creativity and unique insights for a long time, and they seem to get better and better with experience and wisdom. His blogs routinely get tens of millions of views, which conjures up a modified version of the 80s-era slogan of the brokerage house EF Hutton, with “Damodaran” substituted in: “when Damodaran talks, people listen.” It isn’t surprising that Wall Street considers him to be the “Dean of Valuation.”
Although the AI isn’t even close to Damodaran’s level of analytic thinking, he warns us that it will get much better, and asks how humans can make themselves AI-proof, that is, not replaceable by a machine. This is a question that I have thought and written about over the years, so I’m going to provide an additional perspective on Aswath’s musings. I’m not going to describe the bot, which I will do in another post, but what it is up against.
NVIDIA
To highlight the type of reasoning ability we are trying to replicate ultimately in DBOT, I’ll use Damodaran’s analysis of the chip-maker NVIDIA in his Musings on Markets blog from June 2023.
Valuation is the bridge between numbers and stories. The key is to get the story right and to apply some core principles to a context to arrive at a value. The story focuses our attention in the right ballpark, where we can dig deeper to come up with credible numbers.
The analysis in the June 2023 blog starts with a deep analysis of the semiconductor business, tracing its lifecycle, size, operating margins, and its shifting cast of winners and losers. Within this backdrop, it positions NVIDIA as an opportunist in niche markets, where it has achieved above-average profitability. Its R&D investments and acquisitions have helped it sustain a performance and capability edge in its chips and software over those of its rivals.
What’s the AI story, and NVIDIA’s story within it?
This requires asking the right questions. Like, for starters, is AI an incremental or disruptive technology? Examples of disruptive changes over the last four decades are personal computers, the Internet, smartphones, and social media. Damodaran says he previously viewed AI as an incremental technology, but changed his mind with the emergence of ChatGPT, which everyone could relate to and incorporate in their daily lives.
His follow-on question is equally interesting, namely, whether disruptions have been good or bad for investors in general. He shows that the last four disruptions have been beneficial to the market on average. But perhaps even more significantly, he asks about the distribution of winners and losers among suppliers of the technology, and notes that disruptions have led to very few big winners and lots of hyped-up eventual losers looking to ride the disruptive bandwagon. A basket that includes such companies is therefore likely to be overpriced and to underperform in the future. We should expect to see something similar play out in AI. I’d be wary of an AI ETF as an investment.
Digging deeper, for the few winners, what are the potentially profitable market segments in AI? For example, these might be hardware, software, data, and applications, or some combination of them. Damodaran argues that the “AI-chip story” is the most credible one because NVIDIA has a technology in place that is already generating solid performance in an existing target market. At the moment, we are seeing a surge in data centers for supplying the horsepower required by AI applications. These are likely to contribute to a large part of NVIDIA’s future revenue stream. Current reports in the media bear this out, as NVIDIA shoots to become a one-stop shop in the data center business that integrates processing, networking, and the cloud.
Value Drivers
In Aswath’s world, the price of a company derives from four “value drivers.” They consist of operating margins, revenue growth, reinvestment efficiency (how much you must invest to stay competitive), and risk, as measured by the company’s cost of capital and its likelihood of failure. The 64-dollar question in valuation is what values to assign to these drivers. Aswath converges on the numbers based on the story, which is driven by an analysis of its markets and the company’s competitive positioning in them.
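To make the mechanics concrete, here is a toy sketch of how the four value drivers combine into a company value via discounted cash flows. This is my own illustrative simplification, not Damodaran’s actual model, and every number in it is made up for demonstration.

```python
# Toy DCF sketch: how the four value drivers map to a value.
# Illustrative only -- not Damodaran's spreadsheet or assumptions.

def intrinsic_value(revenue, growth, margin, reinvest_rate, wacc,
                    terminal_growth=0.02, years=10):
    """Discount the free cash flows implied by the four value drivers."""
    value = 0.0
    fcff = 0.0
    for t in range(1, years + 1):
        revenue *= 1 + growth
        nopat = revenue * margin            # after-tax operating profit (simplified)
        fcff = nopat * (1 - reinvest_rate)  # cash left after reinvestment
        value += fcff / (1 + wacc) ** t
    # Terminal value: a perpetuity growing at terminal_growth, discounted back.
    terminal = fcff * (1 + terminal_growth) / (wacc - terminal_growth)
    return value + terminal / (1 + wacc) ** years

# Hypothetical inputs: $27B revenue, 25% growth, 35% margin,
# 40% of profits reinvested, 10% cost of capital.
print(f"${intrinsic_value(27e9, 0.25, 0.35, 0.40, 0.10) / 1e9:.0f}B")
```

The point of the exercise is that once the story pins down the four drivers, the value follows mechanically; the hard part is the story, not the arithmetic.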
The beauty of Damodaran’s methodology is that it produces a number, namely, the value of a company and hence its price per share, which can be compared to what the market assigns to it. When asked how he accounts for all the intangibles involved, I’ve heard him invoke an 80s-era commercial for Ragú spaghetti sauce in which a very discerning customer asks the camera one ingredient at a time whether the sauce contains that ingredient, like garlic, onions, herbs, and virgin olive oil. The answer to all the questions, such as “how about my grandmother’s fresh Italian herbs?” is “it’s in there.”
On the basis of his story and the estimates derived from it, Damodaran estimated NVIDIA’s value in June 2023 at $240/share. At the time, the stock was trading at $434/share.
Could he be wrong? Indeed, Damodaran asks himself whether he might be missing something or whether an estimate could be off. Should any of the value drivers be revised upwards? Could the automobile market, for example, be larger than expected? Could the operating margin be higher if NVIDIA’s technology continues to stay well ahead of large competitors such as AMD and other emerging upstarts? Might its existing position in the industry as a chip designer require lower capital expenses in the future relative to chip manufacturers?
Damodaran’s analysis ended with just such a simulation, which varied margins and revenues while keeping the other drivers constant. He grounded this analysis by noting that NVIDIA’s operating margin adjusted for R&D expenses was 42.5% in 2020 and 38.4% in 2021. Accordingly, the margin in the simulation was varied between 30% and 50%. The expected revenue in year ten was varied between 100 and 500 billion dollars. Incidentally, the size of the total AI-chip market at the time was estimated to be between 200 and 300 billion dollars, so these are lofty numbers.
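A simulation like this can be sketched as a simple grid over the two drivers being varied. The per-share formula below is a deliberately crude stand-in (capitalizing year-ten cash flow as a growing perpetuity), and the share count, reinvestment rate, and cost of capital are my assumptions, not the blog’s.

```python
# Grid "simulation" in the spirit of the blog: vary operating margin (30-50%)
# and year-10 revenue ($100B-$500B) while holding the other drivers fixed.
# The valuation formula and all fixed parameters are illustrative assumptions.

def value_per_share(margin, rev_y10, wacc=0.10, terminal_growth=0.02,
                    reinvest_rate=0.40, shares=2.5e9):  # share count assumed
    """Capitalize year-10 free cash flow as a growing perpetuity, per share."""
    fcff = rev_y10 * margin * (1 - reinvest_rate)
    firm_value = fcff * (1 + terminal_growth) / (wacc - terminal_growth)
    # Discount the terminal value back ten years to today.
    return firm_value / (1 + wacc) ** 10 / shares

for margin in (0.30, 0.40, 0.50):
    row = [round(value_per_share(margin, r * 1e9)) for r in (100, 300, 500)]
    print(f"margin {margin:.0%}: {row}")
```

Even a crude grid like this makes the blog’s conclusion easy to see: only the corner with very high margins and very high revenues produces values near the market price.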
Was a price above $400/share justifiable?
The simulation showed that it was not, except for scenarios with operating margins in the region of 50% and 2033 revenues in excess of $400 billion. To put things in perspective, the company’s total revenue for 2022 was $27 billion and for 2023 it was slightly lower.
Sell?
Not surprisingly, Aswath’s blog sent the stock down by roughly ten percent. I communicated to him at the time that it didn’t feel right to sell considering the stock’s momentum, and the fact that markets can be irrational for long periods. He acknowledged that one of the shortcomings of value investing is that it can get you out of the really big winners too early. But you are spared the anguish when the bubble eventually bursts.
Aswath’s analysis made me reflect deeply about NVIDIA, partly because it is in my own portfolio, but more so because of its pivotal role in AI. While operating margins have a well-defined upper limit, could there be another story to justify the lofty revenues that the market price reflects? Or is the market ahead of itself? Indeed, NVIDIA just reported blowout revenues of $30 billion for the second quarter of 2024, which exceeded expectations, and the stock is down twenty-five percent since then.
Is this it for NVIDIA, or could it still match up to the inflated expectations?
In January of 2024, I wrote an article titled “Is NVIDIA the New General Electric?” I had published an article the previous summer called “The Paradigm Shifts in Artificial Intelligence,” where I argued that AI is a new general-purpose technology like electricity, and similarly poised to transform every industry. If this is true, I reasoned that NVIDIA might become central to all industries just like GE did in the 20th century. In which case, perhaps the automotive market is just one vertical, and there are others that will emerge?
Time will tell.
Superforecasters
The general question Damodaran asks in his blog, which I want to return to, is whether humans will ultimately be replaced by an algorithm. Is Damodaran himself in jeopardy, having made his thinking and analyses public over the last three decades?
How, he asks, can he stay ahead of his bot?
This is an important question facing humanity as the AI gets better. How will humans add value to the machine?
Damodaran frames his thinking in terms of whether what we do is formulaic versus adaptable, whether it is based on principles versus rules, and whether it is biased versus objective. He argues that jobs involving formulaic or rule-based reasoning will be replaced, whereas the ones based on the use of basic principles will not. He also makes the case that human bias can sometimes be desirable, for example, in situations where a client wants a biased estimate of value, such as in a divorce proceeding. In such cases, you don’t want an objective valuation from the AI. You prefer a human spin doctor.
I agree with this general line of thinking, but there’s another way to think about what makes people like Damodaran special and the challenge AI faces in thinking like him. I’ve spent the majority of my career on AI algorithms for trading, which was motivated by the increasing availability of data for machine learning algorithms. My investment strategies are based on the application of the scientific method. I have also spent a lot of time on Wall Street observing human decision makers, who tend to blame poor outcomes on bad luck and good outcomes on skill. The truth is that while everything seems obvious in retrospect, it is anything but obvious at decision-time. This phenomenon has been well described by the sociologist Duncan Watts in his book Everything is Obvious: Once You Know the Answer.
The late Nobel Laureate Daniel Kahneman made a similar observation. In our podcast conversation, he argued that many of the challenging problems in life involve a high degree of “objective ignorance,” which makes them very hard to predict. Like markets. We try, but we tend to blame people for getting such decisions wrong, whereas in real life, there is no way they could have been predicted. In such cases, errors often arise from randomness as opposed to being demonstrably wrong in some objective sense, like mistakenly classifying a malignant tumor as benign or mistaking a human for a tree. For difficult decision problems, humans tend to be inconsistent and do poorly.
And yet, research shows that a select few people seem to be a lot better than the vast majority of us at prediction on difficult problems. What makes them better? What makes Damodaran better than the thousands of experts who engage in valuation?
I hinted at one reason at the outset: wisdom. While it is impossible to define wisdom, some of its components are identifiable. A large part of it is about asking the right questions. If you read Damodaran’s NVIDIA blog, it’s full of what I would consider the right types of questions, beginning with “Is AI an incremental or disruptive technology?”
How did he come up with that question, and why is it a good one? That’s a good question.
Consider this follow-on question: “are disruptions beneficial for the market as a whole?” Again, a great question.
Or this one, which is more specific: “what are the business implications of being a chip designer versus a manufacturer from the perspective of value drivers?”
What enables Damodaran to ask the right questions? In a nutshell, wisdom and out-of-the-box thinking, something algorithms can’t do yet. Part of asking the right question is invoking the right analogies. What Damodaran doesn’t mention in asking how to stay ahead of his bot is the extent and role of his tacit knowledge in his analyses, of which he may not even be aware. Indeed, much of human reasoning involves tacit knowledge gained through experience and intuition, which we reuse in all kinds of novel ways. Until ChatGPT came along, machines struggled with tacit knowledge and common sense, which are virtually impossible to program into a machine. LLMs seem to display some degree of common sense and tacit knowledge, which has been a huge leap forward for AI, but it doesn’t match the depth of tacit knowledge reflected in Damodaran’s blogs.
A different way to frame what makes Damodaran unique is best understood in terms of Philip Tetlock’s research presented in his book, Superforecasting: The Art and Science of Prediction. Based on dozens of long-term tournaments requiring predictions about the economy, the environment, and global affairs, Tetlock and his collaborators find that a select group of individuals and teams do consistently better than others. Tetlock was a guest on Brave New World a month before Damodaran, and we discussed these characteristics, which I summarize below.
Superforecasters tend to start with an “outside view” of the problem, such as, what would someone ask if they knew nothing about the domain under consideration? For example, imagine asking people to predict the possibility of a recession next year. Indeed, a recent article in the Wall Street Journal asked this very question in anticipation of the Fed’s actions in light of recent economic data. My experience is that business people tend to dive into the data immediately, and focus on employment, the yield curve, inflation, and other indicators of the economy. In contrast, an outsider would ask something more general, like “how frequently have recessions occurred in the past?” This is known as the “base rate” of the phenomenon: how probable is it to occur in general? The correct base rate grounds the analysis in the right ballpark, from where it can be adapted to the current context.
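Computing a base rate is as simple as it sounds: count occurrences and divide by the window. The sketch below uses approximate NBER recession start years since 1973; treat the list and the window as illustrative rather than authoritative.

```python
# Base-rate sketch: how often has a US recession started, historically?
# Start years approximate NBER-dated recessions since 1973 (illustrative).
recession_start_years = [1973, 1980, 1981, 1990, 2001, 2007, 2020]

window = 2024 - 1973 + 1  # years in the observation window
base_rate = len(recession_start_years) / window
print(f"~{base_rate:.0%} chance a recession starts in any given year")
```

A roughly one-in-seven annual base rate is the outsider’s starting point; the insider’s job is then to adjust it up or down using the yield curve, employment, and the rest of the current data.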
Damodaran conjured up very similar types of grounding questions in his analysis. For example, he asked whether disruptions are positive or negative for investors in the first place. Do they lift all boats on average or lower them? This is a very basic kind of base rate. He also made extensive use of base rates in coming up with his estimates for operating margins, reinvestment levels, and risks in the semiconductor industry.
More generally, Tetlock describes superforecasters as committing “unnatural cognitive acts”: they have an unusual tolerance for cognitive dissonance, and for arguments and counterarguments. He notes that they’re more likely to say “however” than “moreover,” so they have a higher “however to moreover ratio” in their transcripts than most people. They frequently revise their estimates, which reminds me of an old quote attributed to the economist John Maynard Keynes: “When the facts change, I change my mind – what do you do, sir?”
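The “however to moreover ratio” is easy enough to measure on a transcript. The snippet below is a toy illustration of the idea; the sample transcript is invented.

```python
# Toy "however to moreover ratio": hedging connectives vs. piling-on ones.
# A ratio above 1 suggests more weighing of counterarguments.
import re

def however_moreover_ratio(text):
    words = re.findall(r"[a-z]+", text.lower())
    however = words.count("however")
    moreover = words.count("moreover")
    return however / moreover if moreover else float("inf")

transcript = ("The data look strong; however, the sample is small. "
              "Moreover, growth is slowing. However, margins held up.")
print(however_moreover_ratio(transcript))  # 2 "however" vs 1 "moreover" -> 2.0
```

A crude proxy, of course, but it captures the spirit of Tetlock’s observation: superforecasters argue with themselves more than they pile on evidence for one side.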
Interestingly, teams of superforecasters do even better. Tetlock explains this as flowing from their inherent curiosity and tendency to share and critique available materials, which leads to better predictions. This is how he put it:
They helped each other. They managed to divide the labor effectively. They asked each other challenging questions. And they avoided the pathologies that degrade group decision making in many workplaces. They avoided groupthink. They avoided free riding. So, they avoided the perils of groupthink and free riding and factionalism. They also tend to be more curious, measured by the number of questions they ask. They are also more likely to gather news and opinion pieces and share them. They comment more on other peoples’ queries.
AI doesn’t yet have this kind of curiosity and capability for reflection.
Talking about sharing, everything Damodaran has done in the area of valuation is public. Indeed, this is what made DBOT possible in the first place. He keeps score, acknowledges his failures, and constantly revises his estimates.
The larger point is that Damodaran is a superforecaster. Until machines are able to become superforecasters, there is little risk of a bot coming for Aswath’s job. He is unique for the reasons I have described, especially in his ability to ask the right questions in highly complex and uncertain situations. Current-day AI isn’t there. This morning, OpenAI announced a new series of AI models “designed to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.”
These sound like problems where there is an objective truth or some idea of perfection to train the machine. It is very different from Damodaran’s ability.
Will AI ever get there? Perhaps, but it will have to become more inherently curious and learn how to ask the right questions and reflect. Until then, there’s plenty for humans to do. But advancing AI will keep raising the bar for humans. Steve Jobs’ advice during his 2005 commencement speech at Stanford was on the money: “stay hungry, stay foolish.” There’s a joke that the North Korean dictator Kim Jong Un said the same thing to his people, so I’m inclined to replace foolish with curious. Figure out how to use the AI to help you satisfy that curiosity.
Until next time.
V/