My Recent Podcast
My most recent Brave New World episode features Paul Sheard, author of "The Power of Money," a Wall Street Journal bestseller. Paul uses his academic thinking and industry experience to reduce macroeconomics and markets to their essence. I look at the world anew whenever I speak to him.
I began our conversation with a personal reflection after reading The Power of Money. On the one hand, I marvel at the human creativity and learning that underpin our modern banking system. On the other, the system feels complex and fragile, held together by a veneer of trust in our institutions and governments.
Paul is my first repeat guest on Brave New World. Towards the end of our conversation, I asked him whether AI could replace central banks, which do a lot of guesswork and modeling that AI might do better. I loved his take on it. So, check out my episode with Paul.
Mission Statements
OpenAI has been front and center recently. Earlier this week, Yahoo Finance host Rachelle Akuffo asked me what I thought about the bizarre turn of events at the AI company. I described the board’s decision as one that exposed a core tension within OpenAI, namely, the huge market opportunity versus OpenAI’s original mission: to ensure that artificial general intelligence is safe and benefits all of humanity.
While some people regard mission statements as marketing bullshit, I think they are very important. A good mission statement distills the purpose of an enterprise, making it easier to determine whether a given path is consistent with that purpose. For comparison, let’s consider the missions of some Big Tech companies where AI is a core part of the business model.
Google's mission is to organize the world's information and make it universally accessible and useful. It’s a clear and ambitious statement. Essentially, Google’s promise is “whatever you want to know, come to us for the best answer.”
Amazon's mission statement is to be Earth's most customer-centric company. Again, it’s simple and powerful. Whatever you want, Amazon promises to deliver it to you faster and cheaper than anyone else.
Meta's mission is to give people the power to build community and bring the world closer together. Arguably, Facebook’s mission to build community has already been accomplished, although one might question whether it brought the world closer or polarized it.
Some might similarly argue that OpenAI has already fulfilled an important part of its mission by releasing the first successful conversational AI to humanity. But there is no way for OpenAI to control how unevenly the benefits of AI will accrue across society; that will be shaped by markets and policy. Also, while OpenAI might have a commitment to safety, there is no agreement about what this really means. Nor is it possible to weigh all the harms its technology will cause down the road against the immediate economic gains to be realized from its first-mover advantage. As a leader in AI, OpenAI finds itself in need of a new mission statement, one that considers the economics of the AI business. Even more importantly, it must consider the ethics of using training data to create its products.
On the last point, there’s an interesting parallel with Google’s IPO in 2004, where its founders declared “Don’t be evil,” implying the company would forgo short-term profits for the greater good. But in reality, Google did its share of evil as it amassed and used data in digital advertising in ways that would be considered blatantly illegal in the regulated world of financial markets. Google created a data monopoly while no one was paying attention. My podcast guest Dina Srinivasan has documented Google’s and Facebook’s history in a pair of detailed articles.
OpenAI similarly used the world’s information on the Internet as training data without seeking permission. Perhaps we ignored this because of OpenAI’s stated noble mission to benefit humanity. But a profitable enterprise will face harsher scrutiny in a world that is becoming more sensitized to dodgy data ethics. Indeed, OpenAI already faces several lawsuits, including one by author John Grisham, asserting that its large language model and all downstream applications are “derivative work.” An expectation of such lawsuits might explain why OpenAI became less open over time.
The discovery process in the lawsuits confronting OpenAI could force it to divulge its training data. Such a case could set a landmark precedent for future disputes involving pre-trained AI models in language, vision, and music, where generative AI is seeing a lot of creative use. Increasingly, the question will be whether derived products are legitimate without complete transparency about the training data.
How Open Should AI Be?
A core challenge for OpenAI is to create a proper governance structure that considers the economics of the AI market, and to craft a strategy for the firm. The appointment of Larry Summers to the board is a step in that direction.
The other, perhaps more difficult, challenge is how open OpenAI should be.
Historically, capitalism has been about creating and guarding the secret sauce. Apple and Microsoft were both closed and proprietary, forcing us to be captives in their ecosystems. While Apple is still closed, Microsoft has taken a more open approach under Satya Nadella, embracing open source and Linux, which was more stable, secure, and customizable than Windows. The open-source movement also embodies a certain ethos of sharing scientific progress without borders, one that seems to have gained momentum in the last decade.
OpenAI should consider going back to its roots by being more open, along the lines of its 49% owner Microsoft. For one thing, this would be more consistent with its original culture and mission. It should reveal its training data and settle any potential infringements while the stakes are still relatively low.
The economics of an unencumbered OpenAI providing intelligence as a service seem compelling. It cost me just 45 cents to use OpenAI’s API to transcribe my last podcast and have ChatGPT summarize its key points. The whole thing took a few minutes, and the result wasn’t bad; using free tools, by contrast, was painful. I’d gladly pay OpenAI 10 bucks to process my entire podcast archive. As I’ve argued elsewhere, intelligence is fast becoming a commodity, and with its first-mover advantage, an open OpenAI could dominate this market.
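For the curious, here is a minimal sketch of that transcribe-then-summarize workflow in Python. The model names, the per-minute price, and the helper functions are illustrative assumptions on my part, not figures from OpenAI's pricing page; check the current documentation before relying on them.

```python
# Sketch of the transcribe-then-summarize workflow using OpenAI's Python SDK (v1.x).
# The model names and per-minute rate below are illustrative assumptions.

WHISPER_PRICE_PER_MINUTE = 0.006  # assumed USD price per audio minute for whisper-1


def estimate_transcription_cost(duration_minutes: float) -> float:
    """Back-of-the-envelope cost of transcribing one episode with whisper-1."""
    return round(duration_minutes * WHISPER_PRICE_PER_MINUTE, 2)


def summarize_episode(audio_path: str) -> str:
    """Transcribe an audio file, then ask a chat model to summarize it."""
    from openai import OpenAI  # requires the openai package and OPENAI_API_KEY set

    client = OpenAI()
    # Step 1: speech-to-text on the raw episode audio.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)
    # Step 2: feed the transcript to a chat model for a key-point summary.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Summarize the key points of this podcast:\n\n" + transcript.text,
        }],
    )
    return reply.choices[0].message.content
```

At the assumed rate, a 75-minute episode works out to roughly 45 cents of transcription (`estimate_transcription_cost(75)`), in line with the cost I saw, with the summarization step adding a small amount on top.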
The Ethics of Data Reuse
Finally, it is worth noting that there’s a lot of money to be made exploiting existing content and IP. This is evident in the recent “Beatles” release, Now and Then, built on a recording John Lennon made in the late 70s that was cleaned up by artificial intelligence and combined with new material to create a new “Beatles” product. A recent New York Times article reporting on the release asked an ethical question: was it appropriate to use a song originally written by Lennon alone, with no known intention of ever bringing it to his former bandmates, as the basis for an AI-scrubbed “Beatles” song? Would Lennon have embraced it or found it repulsive? Is it unethical for Paul and Ringo to use John and George’s material without their consent?
The issue of how digital content is reused to create new products is critical. Given how easy it is to create avatars and digital versions of almost anything, the world will likely be awash with content that can be easily repurposed. We will need laws and standards around the use of such data to keep AI from going rogue on us. In my last post, I identified the ways in which AI can go rogue through the actions of unscrupulous individuals, companies, and governments, and how we can get ahead of this problem. We live in interesting times.