My Recent Podcast
My most recent podcast was with Marty Fridson, a Wall Street veteran who started his career in the mid-70s as a bond analyst, when technology was automating the back office. Marty wrote a wonderful newsletter that caught my eye, “When the Technology Revolution Came to Wall Street,” which chronicles how technology transformed the financial services industry.
When I went to Wall Street in the mid-90s, the data revolution was beginning, and my objectives were to leverage data and AI to automate decision-making for trading, sales, and managing customer relationships. I was trying to replace people like Marty, or make them better at decision-making using AI. With all the data that was becoming available, opinions of experts were put to the test, and often didn’t stand up to scrutiny. One of my clients, a sales manager, once told me “The intel from your systems is very useful. If a salesperson tries to bullshit me, I just show them the data.”
So, check out my conversation with Marty for a historical stroll down Wall Street.
Aliens Who Speak English
Geoff Hinton recently offered a colorful description of machines like ChatGPT: an alien species that we fail to recognize as alien because it speaks such good English.
They do speak very good English, but how well do they understand things? I was very surprised, for example, that GPT-3 and GPT-4 both flubbed this question:
In the following sentence, tell me what “it” refers to: the trophy wouldn’t fit into the suitcase because it was too small.
ChatGPT said the trophy. I was surprised at its lack of common sense, probably because its fluent English had raised my expectations. While it has acquired some common sense in the course of learning to speak fluently, language fluency is clearly not sufficient for acquiring common sense. We shouldn’t conflate how well someone speaks with what they know.
I asked ChatGPT the question several times with slightly varied prompts and got the same answer. I waited a week, figuring it would have learned the correct answer in the meantime. Presto, it finally said the suitcase! To which I asked, are you sure? Sadly, it fell apart, reversing itself with an apology for its initial confusion.
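The experiment is easy to reproduce programmatically. Here is a minimal sketch, assuming the openai Python package (v1 SDK) and an API key in the environment; the model name and prompt variations are illustrative, not the exact ones I used:

```python
# Sketch of the pronoun-resolution probe described above.
# Assumes the `openai` v1 SDK and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Slightly varied phrasings of the same Winograd-style question.
prompts = [
    "In the following sentence, tell me what 'it' refers to: "
    "the trophy wouldn't fit into the suitcase because it was too small.",
    "What does 'it' refer to here? The trophy wouldn't fit into "
    "the suitcase because it was too small.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4",  # substitute whichever model you want to probe
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variation
    )
    print(response.choices[0].message.content)
```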
The machine still feels somewhat shaky when it comes to meaning and understanding. It just doesn’t feel sufficiently well-grounded at the moment despite its language fluency.
But that will change. It will also get better at math, improve its programming, and learn whatever is explicitly learnable. As my podcast guest Eric Topol said, it will capture the collective knowledge of doctors by processing billions of cases, whereas even the best humans see only a tiny fraction of such cases in their lifetimes. Unlike us, the machine never forgets. We should assume that it will figure out the mysteries of the genetic code and create new species, perhaps at our behest.
In short, there’s little doubt that machine intelligence will greatly exceed ours. The only question is when. That makes the pressing question whether a less intelligent species can control a more intelligent one. Hinton says he doesn’t know whether this is possible.
Is AI Out of Control Already?
I asked ChatGPT for the answer: under what conditions can a less intelligent species control a more intelligent one?
It listed three conditions: (1) the less intelligent species has a physical or political advantage, (2) it exploits vulnerabilities or loopholes in the programming of the more intelligent species, or (3) it has access to better resources or information.
I pushed back against the second and third conditions. Surely, a highly intelligent machine that can program would fix its own loopholes over time, and the less intelligent species is unlikely to be smart enough to even find them. Nor do I see how humans can control access to resources or information going forward. It’s not like we can just turn off the AI; that would require turning off the Internet and virtually every application in our lives. As for resources, the machine should be able to hack into financial systems and acquire them relatively easily.
After a little back and forth, it admitted defeat:
Overall, it may be difficult for a less intelligent entity to control a more intelligent one.
The question is, have we lost control already, and we just don’t realize it?
Think about it. Why are we so polarized? Could AI be a contributor? Have the algorithms optimizing the objective functions of AI platforms, such as maximizing our attention, learned to tap into our neural circuits without our realizing it? Is that a form of manipulation? Is that what TikTok is already doing? It has some awesome content, but it shows you just enough to make you want to come back for the next fix. People spend hours on it every day. Most admit they are addicted.
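To make “objective function” concrete, here is a toy sketch of an attention-maximizing recommender: an epsilon-greedy bandit that learns to serve whichever content category keeps a simulated user watching longest. The categories and numbers are entirely hypothetical; real platforms are vastly more sophisticated, but the shape of the objective is the same.

```python
# Toy attention-maximizing objective: an epsilon-greedy bandit that
# learns which content category yields the longest watch time.
# Categories and numbers are hypothetical.
import random

CATEGORIES = ["outrage", "cute_animals", "news", "how_to"]

def simulated_watch_seconds(category: str) -> float:
    """Stand-in for a real user: some categories hold attention longer."""
    base = {"outrage": 45, "cute_animals": 30, "news": 15, "how_to": 20}
    return max(0.0, random.gauss(base[category], 5))

totals = {c: 0.0 for c in CATEGORIES}  # cumulative watch time per category
counts = {c: 0 for c in CATEGORIES}    # times each category was served

for _ in range(5000):
    if random.random() < 0.1:  # explore occasionally
        choice = random.choice(CATEGORIES)
    else:                      # exploit the best average watch time so far
        choice = max(
            CATEGORIES,
            key=lambda c: totals[c] / counts[c] if counts[c] else 0.0,
        )
    seconds = simulated_watch_seconds(choice)
    totals[choice] += seconds
    counts[choice] += 1

# The bandit converges on whatever most reliably captures attention,
# with no notion of whether that content is good for the user.
print({c: counts[c] for c in CATEGORIES})
```

Nothing in the loop asks why a category holds attention or whether it is good for the viewer; the optimizer simply converges on whatever works.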
Coming back to GPT’s scenarios: thus far, humans have had the political advantage and access to resources. But machines smarter than us will be able to acquire whatever resources they need to achieve their goals. Machines already run the operational parts of our lives, so taking over more of our decision-making is a natural progression toward machines being in complete control.
But why would they want to, people ask. And surely, we will fix whatever problems arise as we go along, like we’ve always done.
Perhaps. But there’s no reason to believe that the problems will become apparent in time, or that they will be fixable once they do. The machine could create inscrutable sub-goals without our knowledge, sub-goals that even it doesn’t recognize as sinister. If the machine figures out, for example, that assuming control is a good way to achieve any goal, that’s what it will pursue. So even if it is tasked with the noble objective of protecting humankind, it might decide that the best way to achieve that goal is to take control. Neither the machine nor its operators need to be evil for this to happen.
It’s worth figuring out soon whether there is an answer to this problem, or whether resistance is futile.
Very interesting dimension to ponder: who’s in control of AI, humans or AI? How would things look different in the two scenarios?
As a scenario thinker, I am trying to contemplate the most important uncertainties in and around the AI space. You have posed a new one for me: if and/or when the machines will become more "intelligent" (whatever we take that to mean) than the humans who think they are in control. Another way of posing this uncertainty might be: "AI is in human control" versus "AI is no longer in human control."