Hey there, Behind the Money listeners. Happy new year. A few weeks ago, the Financial Times and Schroders gave out their prize for the 2024 Best Business Book of the Year. It was a tough competition. The books that made the shortlist covered topics ranging from the technology revolution inside the US military to the management practices that have shaped the world's corporations.
But the book that took home the top prize is about a topic that dominated the news in 2024 and will, no doubt, continue to in 2025: artificial intelligence. The book is called Supremacy: AI, ChatGPT, and the Race That Will Change the World. The book's author is Parmy Olson. She's a columnist at Bloomberg Opinion and previously a tech journalist at the Wall Street Journal and Forbes.
My colleague Andrew Hill is the FT's senior business writer. He's been in charge of running the book award since 2005. Back at the beginning of December, Andrew sat down with Parmy in our London studio to discuss the themes of her book. What you're about to hear is an abridged version of that conversation. I hope you enjoy it.
I'm delighted to be joined by Parmy Olson, author of Supremacy: AI, ChatGPT and the Race That Will Change the World. We're talking only hours after Supremacy was crowned FT and Schroders Business Book of the Year for 2024, the 20th title to win the FT award since we launched it in 2005. Parmy, welcome. Thank you. It's great to be here. Congratulations. Thank you.
Just outline, for those who haven't yet read it, the argument of the book. Really, this is a book about the concentration of power in technology and AI. I wanted to write it pretty soon after ChatGPT came out about two years ago. I was enthralled with this new tech. Here's something that was
above and beyond anything we'd seen from Siri and from Alexa. But I felt like there was something going on behind the scenes that I thought was important for people to know about, which was this kind of battle for control. And I thought there's two really important people in this story. One of them is Demis Hassabis, who's the founder of DeepMind, which is now owned by Google. And the other is Sam Altman, who's the founder of OpenAI. And both these men
have been trying for years to eventually build artificial general intelligence, which is this theorized threshold of AI that surpasses our own cognitive abilities as humans. And their sense was that once they reached that,
then we would solve so many of our current social ills and problems. As they went along their journey, however, those quite utopian, almost humanitarian ideals faded into the background as they aligned themselves more and more with two very large technology companies.
And the point I wanted to get across in the book is that, first of all, there is a problem with this absence of proper governance and regulation of large tech companies. They're so big, they're almost untouchable now. And the other was to make the point that even these founders, these two very important people, saw that there was something concerning about large companies controlling this technology in a regulatory vacuum. They both tried to put in governance structures to separate the technology a little bit and give it proper oversight, and both of them failed to do it. So I wrote the book almost as a little bit of a warning call about AI and why we need to have proper regulation of the technology, particularly
as it becomes increasingly controlled and steered by just a handful, not just of companies, but of people. Right, and this was a hypothesis, if you like, that had already emerged from your work as a columnist at Bloomberg, I imagine. Was there anything that you learned as you were writing the book that was unexpected, either about the two central characters or about the wider risks that you lay out in the book?
As I was exploring the risks, it was almost like as soon as I came up with one risk, there was another. And that's because artificial intelligence is being woven into so many potential parts of our lives, not just business, but education, healthcare, culture.
There are so many potential ways that we could pay a price, whether it's the erosion of our critical thinking skills. You know, if you've got a whole generation of kids going to school who are using and relying on these tools to help them do their homework, and maybe the teachers turn a blind eye to it.
And then they go into the world of work, where their entry-level roles also involve working with these large language models that do the thinking for them. What does that mean for that next generation of professional workers, and how do we train them up? How do they become the next senior-level managers?
So things like that, I just kind of found all these different avenues, which is something that I explore in the last part of the book. There were some things about the main characters, of course, that surprised me: learning about the very different structures of OpenAI and DeepMind, learning about Demis Hassabis and this kind of spiritual background that he had.
There was this potential interest in maybe finding God one day if he eventually built AGI, or understanding the nature of reality and the mysteries of the universe, as he talks about it. So those were some of the things that surprised me.
Yeah, one of the things that fascinated me was that they are both pretty socially adept. I know it's a cliche, but one thinks of the digitally aware, super-brainy introverts as being the people doing the coding and so on. It is a caricature, but they are both
good managers and great salespeople. I mean, they are the front people for important organizations and seem to combine those social skills with their obvious braininess. Yeah, absolutely. And I think that that is something that makes them more similar than different. I think you can be an introvert and still have a lot of charisma and be very good at rallying people to a cause.
And I would suspect that both of them have some elements of introversion. Demis is a former chess champion who has been obsessed with games his whole life. He loves music, is very emotive about music. And I think maybe one way that they're different is Demis has this more scientifically minded side. He did a PhD.
Sam is a bit more engineering-led. So if you look at their two companies, OpenAI is just like a ton of people who used to work at Y Combinator, or former startup founders, or former engineers. It was a much more flat structure, whereas DeepMind was very hierarchical.
If you were a scientist or a PhD, then you were like a rock star in the company, and you could get face time with Demis, which was otherwise quite hard to get. So I think, although they're very similar, very charismatic, there were some differences as well in how they ran their companies. Right. So when we launched this year's Book Award back in the spring, I asked winners of the past 19 awards
what they would add to their book if they had the chance to write an extra chapter. And your book only came out in September, but this is such a fast-moving area that I think it's probably still relevant to ask you what would you add if you were now doing a second edition?
I feel like the main thrust of the book is all still pretty much there; nothing really undermines the premise. So if I was going to add anything, it would just be detail about the story. One thing being, for example, that OpenAI is increasingly becoming a for-profit organization. I think we could sort of see that coming in the summer, and it feels as though it's gone further since. Now there's been a lot more reporting on it, although the company has not publicly disclosed anything about the direction they want to go in.
But there's been a ton of reporting saying it's happening. Also the fact that some of the most promising AI startups over the last year, they'd already been acquired by Amazon, Google, and Microsoft. And so I feel like that already kind of set the scene for what we're continuing to see, which is the real struggle for
smaller innovative companies to try and compete with the larger ones. But the obvious thing that I would add, of course, is the new Trump administration, the fact that Elon Musk is in the Trump administration, which I think kind of throws a spanner in the works a little bit. People have been saying, we'll probably have a light-touch regulatory regime under Trump.
But you have to remember that Musk is an AI doomer. He started OpenAI in part because he was so worried about Google having control of AGI, and he started Neuralink because he wants humans to be augmented with these brain chips so that if AI ever goes rogue, we can get ahead of that.
So I don't think it's just talk. From what he's said, it seems like he genuinely believes that. So it makes me think that we won't necessarily see a dismantling of regulations around AI under Trump. Right. He might cramp the style of the light-touch crowd a little bit. Yes. I mean, at the very least, if he repeals Biden's executive order on AI, he'll just replace it with something very similar. Right.
And there was a great discussion among the judges about this year's shortlist. Just to throw a little bit of light on the jury room, one constructive criticism of Supremacy was that Musk wasn't quite prominent enough. And obviously that's with the benefit of hindsight about what's happened since you completed the book. But does Musk's own AI venture, xAI, stand a good chance, with his billions and possibly with some Trump facilitation, of
posing a threat to Google and Microsoft? I am honestly surprised at how quickly Grok has grown in terms of its ability to raise money. Just quickly explaining what Grok is: Grok is a large language model that was developed by Elon Musk's AI company, xAI. It's pretty much what ChatGPT is, so Musk has his own version of ChatGPT. It's integrated
into Twitter. When Elon Musk took over Twitter and we immediately started seeing advertisers leave, I think a lot of us in the press maybe underestimated
just how popular he is among certain people in the business community and in Silicon Valley, and how he's still able to just continue raising money. And it seems like the models they're creating, the Grok models, are actually doing quite well in terms of hitting similar benchmarks to those made by the likes of Anthropic and OpenAI, the other leading AI players.
But yeah, he's really come along quite well. I just find this paradox of these tech billionaires and founders who are so worried about the
potentially imminent catastrophic risk to human existence from AI, feeling that to address it, they must build the most powerful AI they possibly can, because there is a sense that they alone can do it safely and do it properly, because if someone else does it, they'll build the wrong kind of AI. So I think there's a real element of hubris in all this as well.
preferring to drive the car rather than be a passenger. Yes, exactly. Even if your driving might itself be dubious for whoever's the passenger. Yeah, that's a great analogy. You mentioned in the book that Sam Altman
has a sort of bunker, an actual bunker, I think, doesn't he? Yeah, somewhere he's gonna go with the purified water ready in case things go wrong, which is not wholly reassuring to me. But it's also not unusual; there are plenty of tech founders like that. Peter Thiel also has bunkers in New Zealand, and Mark Zuckerberg has a bunker in Hawaii. They're all kind of buying property in Hawaii and on these islands in quite temperate climates, I suppose for if they ever need to jet off somewhere to get away from who knows what.
Yes. So that leads to, I suppose, the question that you tackle in the book, you know, where are we when it comes to imposing any ethical or regulatory restraint on AI? Is the cat so far out of the bag that it can never be put back in again? Are we just at the point of having to say it's gone and regulation and ethics will never catch up?
So I would say the cat is out of the bag when it comes to how humans are using these models, for better or worse. We talked earlier about the dependency of a new generation of people on these models and what that might do to our critical thinking skills. That's a very soft and squishy and abstract
consequence to think about, hard to measure. But I do feel concerned about it, and I do see that as something that's probably going to happen over the next five to 10 years. In terms of how tech companies design their algorithms to make them safe, to make them ethical, I don't think that's necessarily out of the bag and uncontrollable. Because once you have regulations that come in and
state that these companies need to be audited, that they need to be more transparent about the training data they've used for these models, that they need to put in certain so-called guardrails to make sure there's less bias and there are fewer security threats, those are things that they can do. They have the money. Companies like Microsoft and OpenAI are spending billions on data centers, so they can surely put the investment
into designing the algorithms to be more fair. And I say fair because almost all the models are very, very good at not saying toxic things. They say all the right things, and I think that's because these companies really do care about reputation. However, where they fall down is on issues of bias and fairness, on issues like gender and racial stereotyping, and also the security of these models. They're not
actually that secure. So those are two areas where companies really need to improve their standards.
So we were sort of erring on optimism almost there, about how the models might be tightened up and there might just be a little bit of restraint. Your book is pretty gloomy about that; I came out a little bit gloomier than when I went in. But what are the reasons for optimism about AI across the whole
area of its uses. Yeah, I think there are reasons to be optimistic. I think one reason why maybe there was a bit of a gloomy note is because at the time when the book was written, there was just so much excitement and hype and belief in how this would be positively transformational for people.
and not enough attention being paid to issues like bias and guardrails. But I think absolutely there are reasons to be optimistic. We're seeing, obviously, the laws coming from the European Union, and it's not just the EU's AI Act. There's the Digital Services Act, which addresses social media harms, and also the Digital Markets Act,
which is an antitrust law aimed at addressing the anti-competitive behavior of the biggest players. In some ways, those laws could also address some of the issues we might find with AI. But the AI Act itself, I think, is quite well designed. It's not trying to regulate the technology itself; it's not saying you should build a model this way. It's about doing risk assessments on the model to make sure it doesn't have this outcome of
potentially harming people or being misused. And it pushes for more transparency from the builders of these models. So I think that is a reason to be optimistic. Unfortunately, the law doesn't really kick in for another year and a half, two years, so anything could happen between now and then in the AI world. It's been two years since ChatGPT, and look how much people are already using it. We've got 300 million weekly active users of ChatGPT. So it's a big part of people's lives already.
Yes, I did a webinar about AI in the run-up to the Book Award announcement with Sherry Coutu and others. Sherry Coutu is a tech entrepreneur and angel investor who was also one of our judges this year. And she was pretty adamant that there is a risk here on the other side: that one becomes too
wary of the use of AI. She cited education in particular; she thinks there's actually huge potential to use AI safely and productively in education. I just wonder where the balance is here, because one can come out of reading some things and think, we must clamp down on this.
Let's stop it going any further. And at the same time, I certainly personally find I'm using it a lot for things. I can see it supercharging, you know, my future career and ways in which I can use it in my work.
Where do you strike the balance? Well, I think it sort of depends on how people continue to use these services and what we observe over the next three to five years. How is this affecting the job market? If companies like hedge funds and legal firms are primarily using large language model technology
to do entry-level work, which is what they're doing, what does that mean for the graduates who would be taking those jobs? What kind of jobs do they have to do? So it's kind of hard to say whether that's good or bad. Obviously, as you said, it could really accelerate things in your career. The question is, what price could we pay? And it would have been so hard to predict that with social media.
Because when social media first came on the scene, it was an incredibly useful utility, a social utility. It was almost like our new infrastructure. People talked about its importance in the Arab Spring among pro-democracy demonstrators. We only really saw the upside. It was only after a few years that we could start to see the unintended consequences. And I think a big part
of that is the business model behind these services. So right now, large language models are subscription models, but some of them are moving potentially into ad models. OpenAI is exploring an ad model. Perplexity shows ads.
And that's something that concerns me a little bit. Because one of the reasons that Facebook, for example, became harmful in a lot of ways was because it had an ad model. And the company was incentivized to keep people on the site for as long as possible, keep us engaged, essentially, though they would never use this word, keep us addicted. The more we keep checking it, the greater the chance we'll look at an ad.
And I wonder what happens with a company like, for example, Character.AI. This is a very successful AI business that does these AI avatars, and it's often used by teenagers to chat to; you talk to an AI version of a celebrity or a manga character. I've interviewed kids who use this app, and they say they're on it for three to five hours a day sometimes. They're kind of addicted to it.
Now, imagine if Character.AI starts showing ads; then it's incentivized to continue that. And already, I spoke to the founder of the company, and the goal is that the AI knows the user, that the so-called context window
becomes infinite. The context window is how much the AI remembers. Right now it's about 30 minutes. But he's like, what if they could remember every conversation they ever had with you? They'd really know you better than anyone else. I mean, just imagine what happens with people's relationships with these AIs, and they're not really relationships, because it's not an actual other person.
But what if people grow really attached to them? If there's an ad model behind that, then that could end up being a little bit toxic, potentially, for the user. And it becomes harder for the vendor to think about the well-being of their user, because their business model depends on that person being on their app for as long as possible. And you already cite some examples in the book of some pretty creepy-sounding relationships between humans and the machine, where people have
become attached, fallen in love, actually, with the machine. And that's in the early phase, before, as you say, you've attached any ads to it. Yes, yes, that's right. And I remember speaking to, I won't name the person, but I spoke to the founder of a startup that builds these chatbots that can have this kind of emotional connection with human users, and something like 50% or more of the users see the AI as a romantic partner. Millions of people have used
the service. And I was talking to them, like, what about if apps like yours were optimized for engagement in the same way social media was? And their response was, that can never happen, that would be a really bad thing if that happened. But who knows? We're sort of seeing that already with other chatbot makers looking at ad models.
So this is a strange segue, but how do you use AI yourself in your work, or indeed in your personal life? When it first came out, I kind of tinkered around with it, like, how good is this thing as a writer, really? And for me personally, in terms of my own standard, it's really not
good enough at all. Totally agree. Maybe I'm underselling it a little bit; it's like a really good thesaurus, or good for phrases. I use it a lot for just personal things, especially recipes, going places, ideas for things to do with my kids. In a lot of ways, it's replaced Google for me. Yes. I think something like 50% of all searches on Google are for something informational, and then sort of 20 to 30% are transactional queries, like Nike shoes
under 50 pounds or whatever. And those are the searches that really matter for Google. I don't think those are the kinds of searches people are doing on large language models. So in a way, I don't think they're a huge threat to Google just yet, at least not to its revenue, just because of the types of searches. But I think that will eventually eat away at the model.
Even though it almost underuses AI to be using it for search. I mean, in a strange way, it's not really meant for search in that respect. Yeah, it's kind of a classical use of machine learning. But Google, of course, now has its AI Overviews, which I certainly see as having the potential to cannibalize its business model. You don't go beyond the first paragraph of the Gemini responses. Why would you? And the funny thing is the responses have footnotes and links, but do people click on them?
I mean, I suppose the one thing you can guarantee is that Google is probably measuring this stuff. So if it suddenly disappears, we will realize that it probably was cannibalizing something that was making more money for them, as your book points out. That is the bottom line. Well, I think, yeah, absolutely. Essentially, when ChatGPT came out and was working with Bing, Microsoft's search engine, and at the time, this was maybe a year and a half ago,
A lot of people thought, that's it, Google's dead. Who would have thought? Little old Bing is going to just take over everything. But actually, a year and a half later, it's barely moved the needle in terms of market share. I used it for about three weeks. Exactly that. I thought, oh, this is going to be much better now. I know, I thought that.
Yeah, it might have actually helped that Google added AI Overviews, but I don't think that was the main thing. I think the main thing is just how entrenched these services are in our lives. Literally, it's a verb, to Google something. I think that just speaks to the size of these companies and their dominance. This is why antitrust regulators are trying to address that now. They are so deeply baked into our daily lives and infrastructure.
So I do use AI to brainstorm ideas; perhaps the fifth thing it comes up with is something I hadn't even thought of as a way of pursuing a story or addressing an issue. But I should reassure listeners that the FT strictly
forbids its journalists from doing any writing through AI. And, though I did not run this past our various committees, obviously I had to ask an AI what to ask you, because that's the point.
And I asked it specifically for an unconventional question that it would expect you had not had to answer before. And so this is the question, which is almost an entire podcast in itself. And I'm not even sure that I would know how to answer this.
ChatGPT asks: if AI were to write a definitive history of humanity a hundred years from now, what perspective or biases might it have, and how would this history differ from one written by humans today?
It's quite a good question, actually. It is a good question, but it's an almost impossible question to answer. It's a level of speculation; a hundred years from now is a really long time. It is. I think if it was 10 years from now, it would be pretty similar to what a human would write, because, as I've discussed in the book, and you've probably talked about this ad nauseam before, so many of the biases that are just out there on the internet are baked into these models.
And so with history, there'll be a great focus on men, and on white men. And I do wonder, 100 years from now, what that would look like if humans really make an effort. Maybe things will have changed so much by then that we'll be living in a slightly more equitable society, and maybe whatever AI writes in 100 years will be a reflection of that. I do look at the future with optimism.
So I would hope that it would actually be quite an inspiring read. I'm not one of these people that looks to the future in this kind of dystopian way. And I'm sorry you came away from the book with a little bit of gloom, but I do think that we work through these things eventually. That seems like a good moment on which to end. Parmy Olson, author of Supremacy, thanks very much for joining us. Thank you.
Thanks for listening to this week's episode. If you'd like to learn more about Parmy's book, or any of the other shortlisted authors in the 2024 award, check out the links in our show notes. Behind the Money is hosted by me, Michela Tindera. Saffeya Ahmed is our producer. Sound design and mixing by Joseph Salcedo. Original music is by Hannis Brown. Topher Forhecz is our executive producer. Cheryl Brumley is the global head of audio. Thanks for listening.