In the latest episode of the TED Radio Hour, host Manoush Zomorodi speaks with Mustafa Suleyman, co-founder of DeepMind and current CEO of Microsoft AI. The episode traces Suleyman's journey in the field of artificial intelligence, discussing the evolution of AI, its societal implications, and the future he envisions.
Understanding Suleyman's Background
- Early Involvement in Youth Counseling: Mustafa Suleyman began his journey with the founding of a youth hotline for young British Muslims in response to the post-9/11 atmosphere of Islamophobia. This experience taught him the importance of empathy and culturally relevant communication.
- Transition to AI Development: Moving from social activism to technology, he co-founded DeepMind with Demis Hassabis, aiming to create a powerful AI that could positively impact humanity.
Key Concepts in AI Evolution
The Philosophy Behind AI Development
- Suleyman emphasizes the importance of creating AI responsibly, balancing innovation with ethical considerations. His mission has always been focused on building AI systems that are beneficial and used for good.
- Personal AI Companions: The episode highlights Microsoft's latest AI, called Copilot, which aims to act as a supportive aide in daily tasks. Suleyman envisions AI companions becoming a part of everyday life, akin to smartphones or search engines.
The Impact of DeepMind under Google
- Suleyman discusses the breakthroughs that followed Google's acquisition of DeepMind in 2014, including AI systems that accurately predict diseases and the work that led to AlphaFold, with far-reaching implications for biology.
The Rapid Advancement of AI Technology
- AI Mainstreaming: The discourse touches on the revolutionary shift in AI's acceptance, citing how AI-generated content has now permeated daily life. Consumers have begun to rely on AI for everything from drafting emails to providing emotional support.
- Technological Acceleration: Suleyman notes the exponential growth in computational power and AI capabilities, underlining the continuous evolution of models and their applications.
Navigating the Risks of AI
With great power comes great responsibility. Amidst the promise of AI, Suleyman articulates crucial concerns:
- Asymmetry: The potential for concentrated AI capabilities in the hands of rogue individuals or small groups, leading to significant risks.
- Hyper-Evolution: AI's capacity to evolve software rapidly could result in dangerous applications, including bioengineering and autonomous weapons.
- Autonomy: The risk associated with AI systems operating independently from human oversight raises ethical and safety questions.
Influence on Society
- Suleyman stresses the need for careful consideration regarding how AI is used. He argues for oversight and ethical guidelines to ensure that technology serves the public good rather than becoming a tool for harm.
The Future Vision of AI
- Suleyman believes that AI will transform industries and everyday interactions, with everyone eventually having a personalized AI companion. These systems will assist with various tasks, empowering users and fostering creativity.
- Personalized Experiences: Future AIs will be capable of understanding and engaging with users on a personal level, providing tailored support without judgment. This fits into the larger narrative of reshaping human-computer interaction.
Takeaways for Readers
- Embrace Technology Responsibly: As we advance into an era dominated by AI, it is crucial to engage with the technology thoughtfully, ensuring that its applications are beneficial and aligned with human values.
- Stay Informed: Understanding the dual-edged nature of AI—from its capabilities to potential risks—will empower individuals to navigate this technological landscape effectively.
- Participation in the Dialogue: Encouraging conversations about the future of AI and its societal impact is essential. People should advocate for ethical practices in AI development to foster a responsible ecosystem.
In conclusion, Mustafa Suleyman offers a compelling vision of a future where AI plays an integral role in enhancing human life, advocating for both creativity and caution as we navigate this rapidly evolving landscape.
This is the TED Radio Hour. Each week, groundbreaking TED Talks. Our job now is to dream big. Delivered at TED conferences. To bring about the future we want to see. Around the world. To understand who we are. From those talks, we bring you speakers and ideas that will surprise you. You just don't know what you're going to find. Challenge you. We truly have to ask ourselves, like, why is it noteworthy? And even change you. I literally feel like I'm a different person. Yes.
Do you feel that way? Ideas worth spreading. From TED and NPR. I'm Manoush Zomorodi. It was about a year after 9/11. This is Mustafa Suleyman. In 2002, Mustafa was a student at Oxford University when a friend decided to start a helpline for young British Muslims.
A good friend of mine at the time had just started the first prototype one evening session. I think it was a Thursday evening of offering counseling services on the phone. Mustafa was intrigued. He ended up joining the project to help grow it. And we became essentially co-founders. Quickly, their service was in hot demand.
It's a pretty remarkable time because young British Muslims were feeling judged after 9/11, as though they were responsible or somehow complicit. There was a lot of Islamophobia.
But what they found these young people needed was really just someone to talk to. Most of the challenges that they were working through were the sort of conventional things that a young person might deal with, like relationships, bullying, self-harm.
increasing homophobia in the community. And fundamentally, I think people were just looking to find support in a language that made sense to them. You know, like a London accent using sort of Islamic terminology and having a broader understanding of the, you know, whether it was a Bangladeshi or Pakistani culture and Arab culture combined with the religious foundation and just being able to reflect back that empathy went a long way.
Twenty years later, the Muslim Youth Helpline is still going strong. And Mustafa Suleyman now lives in Silicon Valley, where he's been working on another kind of helper. I'm the CEO of Microsoft AI, and I work on building new AI systems and products. These days, I'm working on an AI companion.
With artificial intelligence and the backing of one of the biggest tech companies in the world, Mustafa Suleyman wants us each to have our own digital assistant. I can get philosophical, creative, or poetic if you like. Available 24-7, kind of like a personal helpline. I can be your sounding board as you work through challenges. I can write stories, brainstorm ideas, think of me as your superpower in your pocket.
How can it serve you? How can it support you? How can it save you time? How can it reduce your anxiety? How can it help make you feel smarter and more confident and more prepared to go about your day with no judgment, no pressure? Hey, Manoush, nice to hear from you. What's on your mind today? Microsoft calls this assistant Copilot, and it was released at the end of 2023. I've been playing around with it.
Well, I need to interview Mustafa Suleyman, your creator. What do you think I should ask him? I'd go for topics like his vision for AI's role in society, the balance between innovation and ethics, and what he sees as the most exciting future possibilities for AI. OK, these are good. Now, can you please write the intro for this episode? Sure thing. Here's a possible intro.
Hey, it's Manoush Zomorodi, and today's TED Radio Hour is dedicated to a deep dive with none other than Mustafa Suleyman. Co-founder of DeepMind and author of The Coming Wave, Mustafa has been a pivotal force in AI, driving both groundbreaking advancements and sparking intense debates. In this episode, we'll discuss the promises and perils of artificial intelligence and whether we're truly prepared for the wave of change it's bringing.
Is AI evolving too fast for society to adapt? Are we doing enough to mitigate the risks? It's an hour of insight, reflection, and some tough questions. So, let's jump in. Okay, that was pretty good.
As co-pilot noted, Mustafa Suleyman is a longtime pioneer in the field of AI. After he dropped out of Oxford University, he worked for London's mayor. He was even part of an NGO's delegation to the Copenhagen Climate Negotiations in 2009. He wasn't impressed.
There were so many sort of difficult egos and entrenched sort of interests. And I just felt a sense of frustration and I felt kind of demoralized after that experience.
Mustafa wanted to change the world. Governments and nonprofits didn't seem terribly effective. But there was one company that was managing to change the behavior of millions of people. Facebook was exploding at that time. It had got to 100 million monthly active users in the course of a couple of years.
And it was pretty clear to me that that was going to have a profound impact, more so than anything I could do in sort of the world of social activism. Around then, Mustafa started hanging out with the older brother of a friend, a software genius named Demis Hassabis, who had been designing computer games since he was a teen. Demis saw how to make Mustafa's vision possible.
And he was coming at the world from a completely different perspective, which I found really interesting. His belief was that we would just simulate complexity in the world. This new suite of methods in AI self-learning systems were coming up strong and looking likely to work.
We really just clicked on that technical and socio-cultural view of how to positively impact the world.
Together, the two decided to start one of the first AI companies ever. They called it DeepMind. It was very clear from that, even those early stages, that if we got the technology right and it was going to be this decade that led to major, major breakthroughs, then the consequences for humanity were going to be significant.
I've been lucky enough to be working on AI for almost 15 years now. Mustafa Suleyman picks up the story from the TED stage. Back when I started, to describe it as fringe would be an understatement. Researchers would say, no, no, we're only working on machine learning because working on AI was seen as way too out there. In 2010, just the very mention of the phrase AGI, artificial general intelligence, would get you some seriously strange looks.
and even a cold shoulder. You're actually building AGI, people would say. Isn't that something out of science fiction? People thought it was 50 years away or 100 years away if it was even possible at all. Talk of AI was, I guess, kind of embarrassing. People generally thought we were weird. And I guess in some ways we kind of were.
The ironic thing is that many people still don't think it's possible. Many people still think that we're crazy. And at the time, people really thought we were crazy. I mean, it was so far out there. It was really strange. And we were a strange group of people.
You know, Demis, and our third co-founder, Shane, who is basically a mathematician and spent his entire career thinking about how to formalize a definition of intelligence that we could use to measure our progress in the company. We were misfits and kind of outsiders, and there weren't very many people willing to back us. So naming the company DeepMind, what was it that you pictured in your mind that you hoped to achieve?
Yeah, I mean, we wrote our business plan in the summer of 2010 and took it to Silicon Valley to shop it around a bunch of people. And the strap line for the business plan was, you know, building artificial general intelligence safely and ethically.
And that evolved into a two-line mission which was solve intelligence and use it to make the world a better place. And that dual frame was kind of the foundation of the company.
Our belief was that science and technology is the engine of progress. And there are some downsides, but I certainly think this is the engine of creating civilization in a more healthy and sustainable way for the very long term.
And if you think about it, everything that is of value in our world today is a product of us humans being able to take materials from our environment and recompose those into products and services and other compounds that are useful to us from our lights, our carpets, to everything that you see in front of you today.
In 2014, DeepMind ended up being acquired by Google. That must have been huge for you in terms of money, resources, you were off to the races.
Yeah, it was a huge event. I mean, it was the largest acquisition Google's ever made outside of the US. We became Google's primary AGI bet. And we were empowered with tremendous resources, both people and computation to go and both do the hard science, but also work on really important applied practical product problems.
And that's where I kind of really honed my craft, if you like. As a product maker, it was just the most amazing experience. And as early as 2015, I actually ran a hackathon project in my applied group at DeepMind. And the theme of our hackathon was to find high impact ways of using our technologies for good.
And so there were lots of applications, lots of prototype hackathon experiments in healthcare or in energy systems, both of which went on to become significant parts of DeepMind Applied. And my group published three papers in Nature showing human-level performance on, for example, classifying eye diseases, the top 50 eye diseases from
OCT scans, three-dimensional eye scans showing that we could perform as well as a panel of radiologists in identifying cancerous tissues in mammograms, showing that we could predict the onset of sepsis and acute kidney injury.
as well as the top, you know, renal doctors using vast amounts of data. And this was way back in sort of 2016, 2017, 2018 and really helped to kind of lay a foundation for the application of large-scale machine learning to, you know, tough social problems. And that was very much my motivation.
In a minute, more of the incredible breakthroughs that Mustafa and DeepMind had at Google that changed the way medicine works today and why Mustafa ended up leaving the company. The company was just frankly being too slow to get things into production.
Today on the show, the CEO of Microsoft AI, Mustafa Suleyman, and the future of artificial intelligence. I'm Manoush Zomorodi, and you're listening to the TED Radio Hour from NPR. Stay with us.
It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. Today on the show, we're spending the hour with Mustafa Suleyman, the CEO of Microsoft AI, who was the co-founder of one of the first AI tech companies ever, DeepMind. The company was acquired by Google in 2014, and Mustafa and his co-founder, Demis Hassabis, went on to have numerous scientific breakthroughs, including a project called AlphaFold.
AlphaFold uses AI to figure out incredibly complex protein structures in molecules, the building blocks of every biological process in our bodies. It won Hassabis the 2024 Nobel Prize in Chemistry. Here he is on the TED stage.
The rule of thumb is that it takes one PhD student their whole PhD, so four or five years, to uncover one structure. But there are 200 million proteins known to nature. So you could just, you know, just take forever to do that. And so we managed to actually fold, using AlphaFold, in one year, all those 200 million proteins known to science. So that's a billion years of PhD time saved.
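The arithmetic behind that "billion years" figure checks out, taking the talk's own rule of thumb of roughly five years of PhD work per structure (a back-of-the-envelope estimate, not an exact figure):

```python
# Back-of-the-envelope check of Hassabis's "billion years of PhD time" claim.
# Assumption from the talk: roughly 5 years of PhD work per protein structure.
years_per_structure = 5
proteins_known = 200_000_000  # proteins known to science

phd_years_saved = years_per_structure * proteins_known
print(f"{phd_years_saved:,} PhD-years")  # 1,000,000,000 PhD-years
```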
AlphaFold was just one of the ways that computers were solving biological problems, says Mustafa Suleyman.
I mean, AlphaFold is really the first time we started hearing the term computational biology, this idea of using tech and science to rethink how biology works and then getting it out into the world, changing the way we treat diseases or maybe developing crops that are more resilient. All kinds of ways that what you figured out at DeepMind would potentially change the world, which was the goal. Absolutely. That was the core mission of the company from day one. How could we take that process of synthesis and prediction and turn that into a general purpose system that could use all the data and all the compute that we have in the world
to try and solve these very hard problems, whether it's growing crops in drought-resistant environments, whether it's more efficient ways to do water desalination, whether it's long-term battery storage.
But in 2022, Mustafa decided to leave DeepMind and Google. There were reports that he clashed with Google leadership over using technology for military projects and that some employees had complaints about his management style. Mustafa says he was simply frustrated with the pace of innovation and was anxious to get an AI product out into the world that everyone could use.
I really wanted to scale these large language models and get them into production as quickly as possible. And the company was just frankly being too slow to get things into production. And I felt that it was an opportunity to sort of have all the resources that I needed. So I went out and fundraised over a billion dollars and just worked extremely fast with a very small team. And I think at the time, and I am still now, very much a believer that these AI companions are going to become an everyday part of our lives.
Mustafa's AI, called Pi, debuted in 2023. Chatbots have taken the world by storm. Several months later, rival company OpenAI put out its AI product, ChatGPT, for free. ChatGPT answers questions and writes essays. They read like the work of a human. An artificial intelligence tool has the potential to change the way we live. This was the point when AI went mainstream.
and Mustafa's small nimble startup just didn't have the resources of these bigger tech companies. The truth is the pace of big tech had really accelerated in 2023. Google, Microsoft, OpenAI, everyone was really going for it and essentially made these huge models available to everybody for free, which sort of changed our business model.
At that very tough moment, Microsoft CEO Satya Nadella approached Mustafa and made him an offer. The offer that he made was, well, look, we've got all the computation and all the data and all the distribution that you could dream of, come and run consumer products here at Microsoft and really build the future of AI here. That was a huge offer.
So the product that we're building at Microsoft is called Copilot. And the reason why we've called it that is because Copilot is an aide. You know, it's a consigliere. It's an assistant. It's in your corner, aligned to your interests, on your team, backing you up. Yeah, I actually talked to Copilot about my interview with you. How do you feel about Mustafa Suleyman, considering that he is your creator? I'd say I'm intrigued by Mustafa.
Just so you know, she said she's intrigued by you, just as she is with many innovators in the tech world. And his work has pushed the boundaries of what's possible. I will say it was very helpful in terms of suggesting topics to bring up with you, but it could not check the weather. And is that simply because of the data and information that the agent, or Copilot, has access to? That it's not immediate yet?
Yeah, there are loads of weaknesses of our Copilot today, but all of those things will come. I mean, we're working on all of those things. It will have permanent memory, session to session, infinitely across time. It'll be able to access your files and folders if you choose to make those available, your email, your calendar, and be able to browse the web for you in the future. And all of that will be integrated into these seamless, very friendly, polite conversational experiences.
I literally was talking to it last night about what to watch, and we were going back and forth on whether I would enjoy Pan's Labyrinth and whether I've got the time to watch all of Dune, because I haven't seen Dune One or Dune Two. I mean, it reminds me a little of the hotline for Muslim youth that you were describing. It's helpful, it's infinitely patient, it's supportive. Are we talking mostly about companionship and mental health resources that this can provide, or how do you see it?
No, I think that the cool thing about Copilot is that it doesn't judge you for asking a stupid question, even if you have to ask that question three times over in five different ways. Even your best friend might be like, hey, come on, man, you're asking me this again, seriously? Whereas Copilot's here for it. There's obviously some similarities to stuff I've done in the past.
I guess it's kind of inspired by nonviolent communication, if I'm honest with you. It's certainly not like a mental health app or anything like that. It's just got a little bit of kindness and empathy. It's got some emotional intelligence. I think that's no bad thing. Gosh, is that where we've gotten to? That technology has to tell us how to communicate with each other better, non-violently?
Well, it doesn't tell us, it just... Demonstrates. Yeah, exactly. It demonstrates. But that's what technology's always done. The choice architecture, the buttons, the colours, the language, that is shaping our behaviour. Whether it's an infinite scrolling feed or whether it's an encouragement to go and film your lunch, you know, for Instagram or create a little video for TikTok.
all of those inputs shape behavior and so we have to be super thoughtful about what those inputs actually are because technology shapes us in return and we're in this constant cyclical sort of feedback loop and interaction and that is kind of what's propelling us forward as a civilization and it's very powerful.
And so far, so good. It's actually been very, very productive over the last couple of centuries. Science has massively delivered for us, but we shouldn't just assume that that's going to happen naturally or inevitably. We have to really be deliberate and thoughtful about what the consequences are ahead of time.
In the book that you wrote that came out in 2023, you really tried to put what's happening with AI in a historical context. If the printing press let people own and share information, and the personal computer let people search and disseminate information, can you explain what AI will do for people now? Each new wave of technology is fundamentally a new interface. It's a new interlocutor, a translator, a way of accessing and creating new information, new tools, new knowledge.
So, if the last wave of social media and web search helped people to access information, this wave is going to help us to invent and create new ideas, be it in science or in culture, media, and entertainment.
And I think everybody is ultimately going to have an AI companion just as we have a search engine or a smartphone and just as we use a browser. You'll just ask your computer in natural language, you know, can you write that contract and check that it's okay? Can you create that new piece of software for me and you're just going to describe what it is? Can you help me plan that trip, you know, for my parents that are coming into town?
So, you know, that kind of breakthrough is a change in the interface, which changes itself what we can actually get done. And I think it's going to be pretty transformational. With the invention of computers, we quickly jumped from the first mainframes and transistors to today's smartphones and virtual reality headsets. Information, knowledge, communication, computation. In this revolution, creation has exploded like never before.
And now a new wave is upon us, artificial intelligence. These waves of history are clearly speeding up as each one is amplified and accelerated by the last. And if you look back, it's clear that we are in the fastest and most consequential wave ever. The journeys of humanity and technology are now deeply intertwined. In just 18 months, over a billion people have used large language models.
We've witnessed one landmark event after another. Just a few years ago, people said that AI would never be creative. And yet AI now feels like an endless river of creativity, making poetry and images and music and video that stretch the imagination. People said it would never be empathetic.
And yet today, millions of people enjoy meaningful conversations with AIs, talking about their hopes and dreams and helping them work through difficult emotional challenges. AIs can now drive cars, manage energy grids, and even invent new molecules. Just a few years ago, each of these was impossible.
And all of this is turbocharged by spiraling exponentials of data and computation. Last year, Inflection 2.5, our last model, used 5 billion times more computation than the DeepMind AI that beat the old-school Atari games just over 10 years ago. That's nine orders of magnitude more computation.
10x per year every year for almost a decade. Over the same time, the size of these models has grown from first tens of millions of parameters to then billions of parameters and very soon tens of trillions of parameters. If someone did nothing but read 24 hours a day for their entire life, they'd consume 8 billion words. And of course, that's a lot of words.
But today, the most advanced AIs consume more than eight trillion words in a single month of training. And all of this is set to continue. The long arc of technological history is now in an extraordinary new phase.
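The scaling numbers in this passage can be sanity-checked with a few lines of arithmetic (the 200-words-per-minute reading speed below is an illustrative assumption, not a figure from the talk):

```python
import math

# "5 billion times more computation" is roughly nine orders of magnitude:
compute_multiplier = 5_000_000_000
print(round(math.log10(compute_multiplier), 2))  # 9.7

# which is consistent with roughly 10x per year sustained for almost a decade:
print(10 ** 9.7)  # ~5 billion

# "8 billion words" in a lifetime of nonstop reading: at an assumed
# 200 words per minute, reading 24 hours a day for about 76 years gives
words = 200 * 60 * 24 * 365 * 76
print(f"{words:,}")  # 7,989,120,000 -- about 8 billion words
```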
I think the way to think about this is that we're at the very earliest stages of development of this technology. Today, when you need to go and sort of ask a question like that, you go to a search engine like Google, you type in a query, you get these 10 blue links, you have to then go to the web page, you have to read all of this sort of complicated stuff formatted in 25 different ways. And that takes time. And it means that you don't always want to invest two or three minutes to go and make sense of that. You don't always have the energy for it.
Whereas you can always just like quickly send off a text or quickly make a phone call or leave a voice note. So I think it's lowering the barrier to entry to access high quality information. But how do we know it's high quality information? How do we make sure of that? Well, there was an amazing study two months ago that was published in the journal Science, which showed that for over a thousand participants who held strong conspiracy theory beliefs,
So this is things like flat earthers or all kinds of things about the COVID vaccine, et cetera, et cetera. And after a month of conversation with an AI that had been primed to talk about these conspiracy theories, it reduced the belief in those conspiracy theories by 20%.
which is pretty significant. And I think shows the first glimmers of what it's like to have, you know, that kind of patient, insistent, deliberate access to high quality information. And in the case of this study, human fact checkers, expert human fact checkers went back and read over the transcripts. So my point is the value of these systems is that they actually are more accurate.
And that's only going to continue. They're more accurate than your average human answering any question today. We've sort of passed the Turing test because they're more fluent and conversational than we ever thought they were going to be, which is pretty remarkable. And that's only going to continue. So what does this mean in practice?
Well, just as the internet gave us the browser and the smartphone gave us apps, the cloud-based supercomputer is ushering in a new era of ubiquitous AIs. Everything will soon be represented by a conversational interface, or to put it another way, a personal AI. And these AIs will be infinitely knowledgeable, and soon they'll be factually accurate and reliable. They'll have near-perfect IQ,
They'll also have exceptional EQ. They'll be kind, supportive, empathetic. These elements on their own would be transformational. Just imagine if everybody had a personalized tutor in their pocket and access to low-cost medical advice. A lawyer and a doctor, a business strategist and coach, all in your pocket 24 hours a day.
But things really start to change when they develop what I call AQ, their actions quotient. This is their ability to actually get stuff done in the digital and physical world. And before long, it won't just be people that have AIs. Strange as it may sound, every organization, from small business to nonprofit to national government, each will have their own.
Every town, building, and object will be represented by a unique interactive persona. And these won't just be mechanistic assistants. They'll be companions, confidants, colleagues, friends, and partners, as varied and unique as we all are. At this point, AIs will convincingly imitate humans at most tasks. And we'll feel this at the most intimate of scales.
an AI organising a community get-together for an elderly neighbour, a sympathetic expert helping you make sense of a difficult diagnosis. But we'll also feel it at the largest scales, accelerating scientific discovery, autonomous cars on the roads, drones in the skies. They'll both order the takeout and run the power station.
They'll interact with us and, of course, with each other. They'll speak every language, take in every pattern of sensor data, sights, sounds, streams and streams of information, far surpassing what any one of us could consume in a thousand lifetimes.
When we come back, the remarkable, but also terrifying, visions of what AI may be capable of in the future. Dystopian scenarios that might make you want to unplug all your devices forever. Are they realistic, or just fear-mongering? The CEO of Microsoft AI, Mustafa Suleyman, weighs in. I'm Manoush Zomorodi, and you're listening to the TED Radio Hour from NPR. We'll be back in a minute.
It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. Today on the show, the future of artificial intelligence. We're spending the hour with the CEO of Microsoft AI, Mustafa Suleyman. In 2023, Mustafa wrote a book called The Coming Wave.
In The Coming Wave, Suleyman delves into scenarios where... I asked Microsoft's Copilot to help me talk through the reasons why Mustafa says AI could be so dangerous. Imagine a world where artificial intelligence has seamlessly integrated into everyday life. The first is what he calls asymmetry, meaning vast amounts of data that could get into the hands of a single person who could go rogue.
Because we're now concentrating knowledge and capability into smaller and smaller units that are transmissible. You can download open-source models, you know, for a few gigabytes and put them on a thumb drive. And so that's the kind of asymmetric impact, because it means that individual actors or smaller groups of actors can have a massive one-to-many broadcast effect.
This is AI unleashing computing power to many, kind of like the anonymous hacker who could bring down an entire electrical grid, that kind of thing. Healthcare systems could be overwhelmed. Economies paralyzed. Is that right? That's right. You could just sort of self-star immediately. Okay, so number two is hyper-evolution.
Yeah, software itself just evolving in hyper real time. In this scenario, a rogue scientist or a bioterrorist could design a pathogen with specific traits like heightened transmissibility or increased lethality.
Or new iterations of weapons and surveillance could be developed to track people's movements, conversations, and even emotions through their online activities, and put to use before authorities have time to test them or put any safeguards in place. A small programming error or a hacked system could result in catastrophic loss of life. How could a country build a defense system if it doesn't even know what it needs to defend itself against?
Which brings us to the third dangerous attribute of AI: omni-use. One single kind of technology will be able to do everything. Your AI companion will make phone calls.
It will call other AIs, and it will call other humans, to check on stock or availability at a hotel, or get some advice from a local tour guide on where you're going to visit. Or it will send the emails, or it will talk to another database or a backend. All of those things are essentially the AI learning to act on your behalf.
Sure, but what if your AI also talks to another database or makes a trade on your behalf and decides to shut down the entire stock market? Which relates to the fourth and final, maybe scariest attribute, autonomy. AI that takes action on its own without a human's go-ahead. Autonomous weapon systems operate on battlefields.
making split-second decisions about targeting and firing without direct human oversight. I mean, autonomy is one of the core characteristics of these systems. It's a long road, and it's going to be many years before these systems are truly autonomous. And we want to be very careful about that, because we have to ask ourselves: what is the added benefit of the system operating autonomously? And is that worthwhile? Is it safe? Is it stable? Is it controllable?
Can we really trust that autonomy? So yeah, I think that's one we'll have to be very careful of. The societal disruption could lead to widespread fear, mistrust and geopolitical tension. Suleyman and other experts argue for robust oversight and ethical guidelines to ensure that these powerful technologies are used responsibly and safely.
For years, we in the AI community have had a tendency to refer to these as just tools. But that doesn't really capture what's actually happening here. Here's Mustafa Suleyman on the TED stage. AI is clearly more dynamic, more ambiguous, more integrated and more emergent than mere tools, which are entirely subject to human control. So to contain this wave,
to put human agency at its centre and to mitigate the inevitable unintended consequences that are likely to arise. We should start to think about them as we might a new kind of digital species. Now it's just an analogy, it's not a literal description and it's not perfect. I mean for a start they clearly aren't biological in any traditional sense.
But just pause for a moment and really think about what they already do. They communicate in our languages. They see what we see. They consume unimaginably large amounts of information. They have memory. They have personality. They have creativity. They can even reason to some extent and formulate rudimentary plans. They can act autonomously if we allow them.
And they do all this at levels of sophistication that are far beyond anything that we've ever known from a mere tool. And so saying AI is mainly about the math or the code is like saying we humans are mainly about carbon and water. It's true, but it completely misses the point.
And yes, I get it. This is a super arresting thought. But I honestly think this frame helps sharpen our focus on the critical issues. What are the risks? What are the boundaries that we need to impose? What kind of AI do we want to build, or allow to be built? This is a story that's still unfolding. Nothing should be accepted as a given. We all must choose what we create.
What AIs we bring into the world, or not. These are the questions for all of us here today, and all of us alive at this moment.
You lay out ten strategies for containing AI, and one of the easiest, it seems, is having more researchers working on safety. Do you have more researchers working on Copilot safety? I mean, one thing that worries me is people using your AI to help them do destructive things or further their destructive views. Is that something you're thinking about at Microsoft?
Yeah, we have a big safety team. We are definitely very focused on that. We're focused particularly on the tone of the AI: how do we make sure that it isn't too sycophantic? How do we make sure that it isn't over-flattering? How do we make sure that it doesn't mirror you and lead to this sort of negative cycle of reinforcing unhealthy views?
And that's a real art and craft: trying to engineer that healthy balance where your AI companion can push back on your views in constructive ways without making you feel judged or angry, while still making you feel heard. It may be the case that you are angry about immigration, that you feel that
you haven't had the opportunities and access to jobs in your community that you feel have been available to new people coming into your world. And so it's about being respectful and acknowledging that people do genuinely feel aggrieved by that, and not shutting them down because they don't adhere to some particular point of view. And I think that's a very challenging line to draw. It requires real care and attention.
So what role do you see yourself playing in terms of pushing the tech industry towards the public good? Is that a role that you are taking on? What do your fellow technologists think when they hear you talking about some of the more pessimistic visions you have for how AI could be deployed?
Well, I think I'm both a pessimist and an optimist, and that's not a bias. It's just an observation of the landscape before us. So most of all, I'm inspired by the practice of science. We have to say what we see and do our best to refute a hypothesis that we have with respect to evidence. And so I see evidence for both trends, and that's why I wrote about them.
Look, I think in terms of how we're shaping the industry, I'm a big fan of the work that many of these NGOs and social activists have been doing to raise questions and to challenge and push back. I think that's healthy. We need more of that. I'm very open-minded to it. I've been very encouraging of additional regulation for a long time. I think this is a moment when
going slowly and adding friction to the system will be beneficial in the long term. I think it's rational to be a little cautious, to increase the burden of proof, and to make it a requirement that,
for example, an AI shouldn't just be a straightforward imitation of a human. We want to create an aide, a conciliator, a tool that is an amplifier and a supporter. And so, you know, there are a lot of things to think through in terms of how this manifests in the world.
Do you think there should be mandatory testing requirements? That before any technology is released to the public, it should have to go through a certain series of tests, and if it doesn't pass, it doesn't make it to market?
I think we're approaching that time. I think sometime before 2030, we will need something like that. I don't know if now is quite the right time. If you look back on the impact that these chatbots have had in the last two or three years,
It's been unbelievably positive, sort of overwhelmingly positive. So had we had those checks in place three years ago, I think it could have slowed things down quite a lot. But that doesn't mean it's not right to keep asking that same question every year, and reconsider whether now is the right time for pre-deployment testing. I think that's the right question.
There is a sense that big tech needs to regain trust from consumers. There are a lot of people who have just, you know, thrown up their hands and said, well, the convenience has outweighed all the digital privacy problems that we have. I give up. This is just the world we live in. But there are other people who are saying,
I don't feel great about giving a tech company all my data, which is what you need to run some of these new AI tools, especially if you're telling me that terrible things could happen to it. You're clearly very cautious when it comes to the incredible powers of technology, but how are you balancing that with the demands on you to innovate and sell these products?
Well, I personally think that there is going to be a huge amount of value to the user in having your Copilot companion be able to read over your email, look at your calendar, schedule things, and buy, book, and plan things for you.
And I think the truth is, we have to wait and see if consumers agree with that. They may not. And that's a very fair and reasonable thing to do. But I believe that the utility will lead the way. And in conjunction with that,
We have to make sure that we have the strongest privacy and security infrastructure to protect that information, just as we already do today. Many billions of people store their email with Microsoft and rely on Microsoft security to protect their consumer work and their enterprise work. That's a massive, massive priority for the company, but it isn't just about
the utility. Obviously it has to be really useful, and obviously it has to have good privacy and security controls. But I think it's also about the way that we approach it: how do we take feedback? Do we admit when we make mistakes? How open-minded are we to different ways of doing things? What's our business model?
So I think my commitment is to try to be as open-minded as I can on all those questions and just listen and just innovate carefully, observe the consequences and iterate as we go. That's the best I can see to do at the moment. In the past, unlocking economic growth often came with huge downsides. The economy expanded as people discovered new continents and opened up new frontiers.
but they colonized populations at the same time. We built factories, but they were grim and dangerous places to work. We struck oil, but we polluted the planet. Now, because we are still designing and building AI, we have the potential and opportunity to do it better, radically better. And today, we're not discovering a new continent and plundering its resources. We're building one from scratch.
Sometimes people say that data or chips are the 21st century's new oil. But that's totally the wrong image. AI is to the mind what nuclear fusion is to energy. Limitless, abundant, world changing. And AI really is different.
That means we have to think about it creatively and honestly. We have to push our analogies and our metaphors to the very limits to be able to grapple with what's coming. Because this is not just another invention. AI is itself an infinite inventor. And yes, this is exciting and promising and concerning and intriguing all at once. To be quite honest, it's pretty surreal.
But step back, see it on the long view of glacial time, and these really are the most appropriate metaphors that we have today. Since the beginning of life on Earth, we've been evolving, changing, and then creating everything around us in our human world today. And AI isn't something outside of this story. In fact, it's the very opposite.
It's the whole of everything that we have created distilled down into something that we can all interact with and benefit from. It's a reflection of humanity across time. And in this sense, it isn't a new species at all. This is where the metaphors end. AI isn't separate. AI isn't even in some senses new. AI is us.
It's all of us. And this is perhaps the most promising and vital thing of all. As we build out AI, we can and must reflect all that is good, all that we love, all that is special about humanity, our empathy, our kindness, our curiosity and our creativity. This, I would argue, is the greatest challenge of the 21st century.
but also the most wonderful, inspiring and hopeful opportunity for all of us. Thank you. That was Mustafa Suleyman. He's the CEO of Microsoft AI and the author of the book The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma. You can see his full talk at TED.com.
Thank you so much for listening to our episode on the future of AI. It was produced by Katie Monteleone. It was edited by Sanaz Meshkinpour and me. Our production staff at NPR also includes Rachel Faulkner White, Harsha Nahata, James Delahoussaye, Fiona Geiran, and Matthew Cloutier. Our executive producer is Irene Noguchi. Our audio engineers were Tiffany Vera Castro, Becky Brown, and Gilly Moon.
Our theme music was written by Ramtin Arablouei. Our partners at TED are Chris Anderson, Roxanne Hai Lash, Alejandra Salazar, and Daniela Balarezo. I'm Manoush Zomorodi, and you have been listening to the TED Radio Hour from NPR.