Superagency: The Bull Case for AI | Reid Hoffman
en
January 27, 2025
TLDR: In this episode of Bankless, Reid Hoffman discusses how AI is poised to reshape society and democratize technology; explores perspectives on its impact (pessimistic, cautious, ambitious, balanced); shares lessons from past tech revolutions and risks of misuse; examines benefits to average citizens and the potential for AI to deliver value worth millions.

In this enlightening episode of the Bankless podcast, guest Reid Hoffman, renowned entrepreneur and author, shares his visionary insights into artificial intelligence (AI) and its transformative potential for humanity. Following are the key highlights and takeaways from the discussion, organized under relevant headings and subheadings.
Understanding "Super Agency"
- Defining Agency: Hoffman begins by exploring the concept of "agency," which refers to individuals’ ability to act and make choices in the world.
- Super Agency Explained: He introduces the term "super agency" to describe a state where millions of people gain access to transformative technology, enhancing both individual and collective capacities.
- Impact of Technology on Freedom: Hoffman argues that instead of diminishing human agency, AI has the potential to empower people, much like the introduction of cars allowed for greater mobility and convenience.
Perspectives on AI's Impact
Hoffman categorizes societal responses to AI into four distinct groups:
- Doomers: Those who believe AI poses an existential risk to humanity, often associating it with dystopian outcomes like killer robots.
- Gloomers: Individuals who forecast negative consequences from AI adoption, such as job displacement and increased surveillance.
- Bloomers: This group, which Hoffman identifies with, advocates for an intelligent and measured acceleration in AI development while acknowledging potential risks.
- Zoomers: An optimistic faction that celebrates the transformative potential of AI without substantial concern for its pitfalls.
AI as a Democratizing Force
- Accessibility of Technology: Hoffman points out that AI, similar to smartphones, can democratize access to information and resources, enabling individuals from all walks of life to benefit.
- Real-World Applications: He suggests practical applications of AI, such as in education and healthcare, where AI can provide personalized learning assistance and healthcare support, enhancing quality of life for average citizens.
Embracing Change
- Overcoming Resistance: Hoffman shares historical parallels, likening current anxieties about AI to past fears surrounding the advent of mainframe computers and the Internet. He emphasizes that humanity has consistently adapted to technological changes.
- Navigating Transition: Addressing the fears surrounding AI, he asserts that while there will be challenges—like job transitions or ethical concerns—these can be managed proactively.
Concerns and Safeguarding Agency
- Potential Risks of AI: Hoffman acknowledges that there are legitimate concerns regarding privacy and potential misuse of data by corporations or governments. However, he stresses the importance of building safeguards through innovation rather than stifling development.
- Innovative Solutions: He advocates for solutions that prioritize human agency, such as transparent AI systems that support human choices rather than replace them.
Regulatory Considerations
- Innovative Regulation: Hoffman suggests that regulations should focus on facilitating AI's benefits while managing risks, emphasizing that innovation can lead to safety and ethical outcomes.
- American Values in AI: He calls for embedding American values of empowerment and individual freedom into the development of AI technologies.
The Future Outlook
- Inescapable Progress: Hoffman is optimistic about the trajectory of AI, asserting that there is zero chance of stagnation in advancements. He believes the integration of AI into daily life will grow exponentially, enhancing productivity and creativity across various sectors.
- Concrete Benefits to Society: He envisions a future where AI significantly enriches lives through enhanced healthcare, education, and personal development opportunities.
Conclusion
Reid Hoffman’s compelling arguments and insights position AI as a driving force for individual and societal empowerment. By framing AI not just as a tool but as a catalyst for enlightened agency, he encourages listeners to embrace the potential of AI while navigating its challenges thoughtfully. The conversation is a call to action for proactive engagement with technology, emphasizing optimism and responsibility in shaping the future.
This summary encapsulates the core discussion points from Reid Hoffman's enlightening dialogue on AI and super agency, leaving readers with reflective insights and practical considerations for harnessing AI's transformative capabilities.
Can you guarantee me that killer robots will never be built? The only existential risk for human beings is not killer robots. There's pandemics, there's asteroids, there's nuclear weapons, there's climate change, and the list kind of goes on. And so you have to look at existential risk as a portfolio.
namely, it's not just one thing, it's a set of things. And so when you look at any particular intervention, you say, well, how does this affect the portfolio? My very vigorous and strong contention is that AI even unmodified at all is net, I think, very positive on the existential risk portfolio.
Welcome to Bankless, where today we explore the frontier of AI. This is Ryan Sean Adams. I'm here with David Hoffman, and we're here to help you become more bankless. The question for today: will AI give us superagency, or will it be used to enslave us?
We have Reid Hoffman on the podcast today. He gives his bull case for AI: why it's good, why we should accelerate AI into the future, and how it will turn each of us into super agents, and to him that equals more freedom for everyone. I think the bankless journey is all about becoming a more sovereign individual. That's what David and I have talked about since inception. And it's increasingly hard to imagine being a sovereign individual without crypto, which we've talked a lot about, but also without AI.
Crypto gives you the ability to own things, but AI seems to give you the ability to control your own destiny. And that's why we're doing an AI episode today with Reid Hoffman, to help you stay ahead of the AI curve. A few things we discuss: superagency; doomers, bloomers, and zoomers; what could go right; how to use AI; American superintelligence. And finally, we end with the question to Reid: what if this whole AI thing is overhyped?
Stay tuned for that answer. Superagency is the title of Reid Hoffman's book, which is coming out this week if you are listening at the time of release. And so this is all on the back of that. Of the two co-hosts of Bankless, Ryan read the book. I did not. And so I'm more along for the ride. I'm in listening mode, asking a few questions here or there, but it's really Ryan in the driver's seat for this episode. So I hope you guys enjoy the episode with Reid Hoffman. But first, before we get there, a moment to talk about some of the fantastic sponsors that make this show possible.
Are you ready to swap smarter? Uniswap apps are simple, secure, and seamless tools that crypto users trust. The Uniswap protocol has processed more than $2.5 trillion in all-time swap volume, proving it's the go-to liquidity hub for swaps with support
for a growing number of chains, including Ethereum Mainnet, Base, Arbitrum, Polygon, and ZKsync. Uniswap apps are built for a multi-chain world. Uniswap syncs your transactions across its web interface, mobile apps, and Chrome browser extension, so you're never tied to one device. And with self-custody for your funds and MEV protection, Uniswap keeps your crypto secure while you swap anywhere, anytime. Connect your wallet and swap smarter today with the Uniswap web app, or download the Uniswap wallet, available now on iOS, Android, and Chrome. Uniswap,
the simple, secure way to swap in a multi-chain world. With over $1.5 billion in TVL, the mETH Protocol is home to mETH, the fourth-largest ETH liquid staking token, offering one of the highest APRs among the top 10 LSTs. And now, cmETH takes things even further.
This restaked version captures multiple yields across Karak, EigenLayer, Symbiotic, and many more, making cmETH the most efficient and most composable LRT solution on the market. Metamorphosis Season One dropped $7.7 million in COOK rewards to mETH holders. Season Two is currently ongoing, allowing users to earn staking, restaking, and AVS yields, plus rewards in COOK, the mETH Protocol's governance token, and more. Don't miss out on the opportunity to stake, restake, and shape the future of the mETH Protocol with COOK.
Participate today at meth.mantle.xyz. Celo is transitioning from a mobile-first, EVM-compatible Layer 1 blockchain to a high-performance Ethereum Layer 2 built on the OP Stack with EigenDA and one-block finality, all happening soon with a hard fork. With over 600 million total transactions, 12 million weekly transactions, and 750,000 daily active users, Celo's meteoric rise would place it among the top Layer 2s, built for the real world and optimized for fast, low-cost global payments.
As the home of stablecoins, Celo hosts 13 native stablecoins across seven different currencies, including native USDT on Opera MiniPay, with over 4 million users in Africa alone. In November, stablecoin volumes hit $6.8 billion, aided by its seamless on-chain FX trading. Plus, users can pay gas with ERC-20 tokens like USDT and USDC, and send crypto to phone numbers in seconds. But why should you care about Celo's transition to a Layer 2? Layer 2s unify Ethereum; Layer 1s fragment it.
By becoming a Layer 2, Celo leads the way for other EVM-compatible Layer 1s to follow. Follow Celo on X and witness the great Celo Halvening, where Celo cuts its inflation in half as it enters its Layer 2 era, continuing its environmental leadership. Bankless Nation, very excited to introduce you to Reid Hoffman. He is a founder and investor. He co-founded LinkedIn, which I'm sure many of you have used in the past. He's been extremely active in Silicon Valley, particularly over the last couple of decades. More recently, he's been very close to what we would call the epicenter of this whole AI thing.
So he was serving on the board of OpenAI starting in 2018. Notably, I should mention, because whenever someone talks about the board of OpenAI, his name will come up. But he actually left to go focus on AI investing before the Sam Altman ousting drama, and you guys remember all that.
He's also a gifted writer and communicator. I've read several of his books. I think one of the canonical books for tech founders is this book called Blitzscaling, which is just phenomenal on how to grow an internet-scale business. And all of this preamble is to say: now he's written a book on AI called Superagency. And I'd pretty much describe this as maybe Reid Hoffman's thesis for artificial intelligence and how it will impact us in the decades to come. Reid Hoffman, welcome to Bankless.
It's great to be here, and I look forward to not only this conversation, but future ones as well. Yeah, I mean, I think we're going to really focus on AI because that's the subject matter of your book, but maybe in a future episode we get into crypto, because I know you have a lot of thoughts on that. Yeah, you know, I actually think I bought my first Bitcoin in 2014. Congrats. It's a little late, but, you know, it's earlier than most.
Yeah, that's a good vintage of a time to buy Bitcoin, for sure. So let's talk about this book. Let's talk about your thesis for artificial intelligence. And when I heard that you were writing a book called Superagency, my first question, without reading anything further, was like: okay, superagency. Who is Reid talking about? Like, who are the super agents? Is this the humans? Do they become the super agents? Or is he talking about the AIs themselves? Do the robots become the super agents? So maybe you could kind of start there. Could you define what you even mean
by superagency, and, like, who gets it? Yeah, so let's actually start even a little bit earlier with agency, and then get to your excellent question. What is agency? Agency is the ability to kind of make plans, do things in the world, kind of shape parts of the world according to your intentions and desires, and express yourself in the ordering of the world around you. And obviously, no one has perfect agency. You know, that's kind of theoretical,
perhaps. And it depends even on what your particular theology is. So that's the reason I was being a little bit vague. Not denominational today in this podcast. Yes, exactly. And so superagency, the precise term, is kind of when millions of human beings get access to an elevating technology, a transformative technology, and the superpowers they get not only elevate them as individuals, but transform society as well.
And so, for example, a canonical example is cars, so you go, well, it gives me superpowers, because I can drive, I can get to further distances.
But as other people in society also get cars, you know, like, suddenly, where you'd had to go down to the doctor's office to get an appointment, now the doctor can come to you. And obviously later instantiations of this are that you can get, you know, Instacart deliveries and, you know, all the rest. And so superagency is kind of how we all get superpowers. And so, to your opening question about whether humans are the super agents or AIs are the super agents, to some degree it's both.
But the important emphasis is that rather than us as human beings in humanity losing our agency, we are gaining agency. And by the way, in a very similar pattern to the way that I gain agency when you guys also get cars.
It's not just that I gain agency with my own car; I gain agency when you guys have cars. And so that's the elevation of agency, and hence superagency. I was thinking about your book title. Like, what's the difference if, instead of Superagency, you had just titled the book Superpowers?
Right? And, like, who gets them? That's almost the same discussion. Like, maybe when we get to this term agency: are the terms superpower and superagency kind of synonyms? Is the short-form version of this just, like, we get all of this additional choice surface area? We have new abilities to do things that previous generations could not have imagined. That feels like a superpower to me. Is it kind of one and the same in your mind? Well, superpower is deeply related. The Venn diagrams have a high overlap,
because the kind of the elevation of capabilities are superpowers. And every new major technology gives us new kinds of superpowers. Now, some of it is with a superpower, and as lots of people get superpowers, individuals, institutions, societies, governments, et cetera,
Your agency changes some. So take, for example, the agency of people who were driving horse-and-buggy carriages: that changed with cars. Because it was like, well, no longer are the streets set up for you, no longer can you be doing this thing that, you know, you had been doing and were planning on doing; no longer, for example, was the horse transport industry, you know, kind of central. And by the way, even earlier technologies like trains, those changed
the ways that people express their agency and are able to act on it. So superpowers are a way you extend your agency, but when it happens in a superagency context, it also transforms it and changes it. So that's the reason why it's not 100% the same, but closely related.
Reid, something that we share in common is we actually both have podcasts. You have a podcast called Possible. And I think Ryan and I stumbled on your podcast, and we noticed that you did an episode, an interview with yourself, but yourself was what we might call, in the crypto world, an AI agent. Now, maybe this illustrates what you mean by superagency, and maybe you can take that metaphor all the way home. But how do we know that we are actually talking to the real Reid Hoffman, and not
your AI co-host bot that is now with us, actually, while the real human Reid Hoffman is somewhere else doing work in a different direction? How do we know you're the real Reid Hoffman? Well, that will get to be a more and more complicated question. At the moment, the video avatars are not actually, in fact, real time.
So the Reid AI discussion has to be a little bit scripted, even though it looks like it's on a podcast and it's a completely real-time thing. It's actually, in fact, kind of running through a ChatGPT instance that's trained on 20 years of my writing, and then, most specifically, getting the audio and video produced with the right kind of quality
doesn't really, you know, enable that for kind of a full real-time stack today. But, you know, part of the reason, of course, I did it and put it on Possible (where the theme is what could possibly go right) was to start getting people familiar with the future universe, just as you guys are doing, you know, with all of the technology broadly, but also around, of course, crypto and what sovereignty and identity and all the rest of that mean. It's kind of like: here, here's a lens into the future, and we don't know exactly where the future's going, but we're trying to get everyone
kind of participating, ready, navigating well, et cetera. And that was part of the reason for doing Reid AI. But, you know, obviously at some point one could get to that as an interesting question. And, you know, my own hazard of an answer here is something a little bit more like, you know, well, crypto signatures and identity as to what's happening. But of course,
you know, given that I'll probably have the crypto signatures both for me and for Reid AI, you know, that might still even be a live question. Yeah, it's really interesting, though. There's something very empowering about the experiment that you're running with Reid AI, because it leads to a promising future of, like: if individuals are sovereign over their own kind of AI agent twin, well, maybe that AI agent twin could go do work while they're, like, goofing off.
They're going and doing something that they enjoy. Maybe they're watching a movie, they're doing art, they're, like, working out, something like that. And then there's Reid AI doing podcasts while all of this goes on, and, you know, the real Reid Hoffman sort of has ownership over that. And somehow that feels very democratizing.
I want to get back to the through line of this conversation, though, when we talk about superagency. So, your thesis: we understand what superagency is, and how that's similar to versus different from superpowers. And you said very emphatically that it's not just the robots that get it. The AIs do get it, but the humans also get it. Your view here is that it's going to be humans amplified by AI. That's the real unlock here. But I have a question within that subset of humans who get it. Which humans are we talking about, Reid?
Are we talking about the Silicon Valley elite in your thesis? Are we talking about the 1%? Those that control most of the capital in society? Are we talking about governments? Or are we talking about individuals? Because the distribution of this seems incredibly relevant to how we actually view whether this is a good thing or not.
So I think the path we're already on, with hundreds of millions of people using ChatGPT and exposure to agents in other contexts, whether it's Anthropic, Gemini, Copilot, etc. So I think we're already seeing hundreds of millions of what you're referring to as individuals, but kind of call it
access from, you know, kind of the bulk of at least middle-class Western folks. Although one thing I heard about from a friend who was traveling
in Morocco recently, which was very cool, is that the taxi driver was using ChatGPT as the translator for, like, where do you want to go, for the tourists. And so it's very broad indeed. Now, that being said, I don't want to paper over the fact that we live in a human society that has kind of differences of wealth, differences of power, differences of position, not just between nations, but within nations.
And, you know, that's not going to go away. And so it wouldn't surprise me, you know, if you said, well, actually, the kind of AI that the wealthy have access to is somewhat better than what a lower-income person gets: maybe, you know, real-time responsiveness, maybe, you know, number of GPUs available, et cetera, et cetera.
Now, that being said, part of the reason I'm really optimistic is that it's a little bit like smartphones, where three quarters of the world today has mobile phones. But the smartphone that Tim Cook has, or Jeff Bezos has, or Sundar Pichai has, is the same smartphone that the Uber driver has. And so I think that the natural drive in technology, which includes AI,
is building it for the very mass market, the billions. And so I think that I can confidently assert that superpowers will be available very, very broadly, even if there's also some differences in superpowers based on country and wealth and kind of access. But I think democratizing will be the name of the game.
So in your world, AI is really a democratizing technology. Like, you know, of course, if you're on the early adopter curve, maybe you get things a little bit sooner, but generally it's going to take the form of the way cell phones did, where in the 1980s it was a large, you know, big brick that cost thousands of dollars.
And then the technology democratized, the way the internet has kind of democratized things. I ask because there is this fear out there, Reid, that AI is kind of going to be controlled by superpowers, let's say governments or a small cabal, maybe in Silicon Valley; that they're going to have the technology, and kind of the rest of us plebs maybe won't. But you're saying it'll be more similar to, I guess, the propagation of the internet or the cell phone, in that it will be
fairly widely distributed and actually be, like, a technology that's available to the general public? Yes, in short. And part of that's also because the same so-called Silicon Valley ecosystem that built smartphones, that built the internet (and obviously it's not just Silicon Valley, but there's a lot of Silicon Valley contribution) is also very similarly building,
you know, kind of AI, both the hyperscalers and large models, but also, you know, at this point there are so many thousands of startups that, you know, you could almost start mapping startups against cryptocurrencies; there are similar orders of magnitude.
Let's talk about the AI religions that exist, because I think this was a fairly fantastic framing in your book and one of my chief takeaways. So you talk about, and I'm using the term religion, you could say ideology, you could say philosophy, but just the point is that each of these categories, I think all of them have an expected outcome or an article of faith, because of course the future is unknown.
But anyway, so the four categories in your book of, you know, schools of thought about AI. And it's useful, I think, to categorize them to sort of understand the worldviews a bit better. One is the doomer. Okay. The second is the gloomer. The third is the bloomer, and the fourth is the zoomer. Okay. Now, these are four different categories, subsets of groups with different perspectives on AI. Could you just define those four categories for us?
Absolutely, I'll go through them in that order. Doomers basically are like: AI is the destruction of humanity. And it's very much like the Terminator robot or other kinds of popular Hollywood themes, argued in a way that's kind of like, well,
It'll be more intelligent than us. It'll kind of want to run the earth. You know, it'll look at human beings as either hostile or, you know, kind of, you know, ants or a kind of equivalent. And so AI should just be stopped.
Gloomers are essentially: look, I don't think the AI future is going to be particularly good. I think it'll take away a whole bunch of jobs and disorder society. It may lead to much more misinformation and unbalance democracies. There'll be a whole bunch more surveillance, and so people's privacy will be worse.
And, I don't think it's stoppable, because multiple countries and multiple companies around the world are building it, and that's the way that humanity rolls, and companies are going to become a lot more productive from this. But I think it'll be an unfortunate outcome. And it's gloomers, by the way, because they only see the gloomy side, if that helps people. Exactly. And actually, I'll do zoomers before bloomers, because I want to spend a little bit more time on bloomers, since I self-identify there.
Zoomers are essentially like, no, no, no, this technology is great. It's like the opposite of doomers. And it's like everything that we're going to build with it is going to be really amazing. You know, the sky isn't even the limit in terms of what kinds of things could be made. You know, but maybe AI is going to invent fusion rather than us inventing fusion. And everything that comes out of this is just spectacular.
And Zoomer refers to just hitting the gas pedal. Yes. Go forward. Go fast. Exactly. Yeah. And then bloomers, which I describe myself as, is kind of a zoomer, but as opposed to just, like, maximally hitting the gas pedal in all circumstances, you go: we'll drive intelligently.
Like, avoid the potholes, slow down at the curve, be looking at, kind of, oh, look, this is a little bit of a dangerous area, let's go through this with a little bit more care. Still accelerating toward the kinds of things that we can build in the future, whether they're medical outcomes or climate change outcomes or kind of human enablement with work and with education; all of that stuff is super important to get to.
But, you know, let's kind of make sure that we're not enabling rogue states or terrorists or unbalancing crime waves or other kinds of things as ways of doing this. And let's make sure that we don't, for example, inadvertently create terminators.
Um, you know, because it's a little bit of a question of how we drive. It's not inevitable. So that's the bloomer category, and that's the category I'm in. And obviously, if you said, well, you can't pick bloomer, I'd be closer to zoomer, much closer to zoomer than gloomer or doomer. But it's also part of the reason why the subtitle
of Superagency, which parallels the podcast, is What Could Possibly Go Right. Because we always, as human beings, encounter new technologies with a, oh my God, the world's coming to an end. I mean, remember all those discussions around crypto? Maybe we're still having them, right? And also, you know, by the way, the internet, and by the way, cars, and by the way, the printing press. It always starts with, oh my God, this is the end of society. And then when we start navigating, we go, oh wait, if we do it this way,
we make society a whole lot better. And by the way, we have, in every technological instance in history so far, made that happen and gotten super agency through all of them. One can argue whether this technology is new and unique in that characteristic or not. And that's, of course, why I wrote the book and go out and talk to people and so forth: to show that, actually, in fact, the only way you can create a positive future is by imagining it and steering towards it. And so that's what we should be doing.
Let's make sure we understand these four categories, maybe by way of example, actually. Somebody on the Zoomer side of things, and again, we're not referring to Gen Z here, we're talking about Zoomers. I was thinking in my head, another term that bankless listeners might be familiar with is e/acc, if you've heard that term, Reid: effective accelerationism,
of which we've had Beff Jezos on the podcast. He's basically full speed ahead: let's harness energy, let's harness AI, and conquer the universe, full speed ahead. Marc Andreessen put together a techno-optimist manifesto that has some e/acc characteristics. Zoomer is basically the e/acc group. Is that right? Exactly. Although I think you might say that zoomers and bloomers are
kind of two variants of the e/acc group. Because also, by the way, I think, you know, I started using the term techno-optimism some number of years ago. Like, I usually say I'm a techno-optimist, not a techno-utopian, which is: you can build great things with technology; it doesn't mean everything you do with technology is great. Right. So, you know, do it with some care. I'd say the zoomers are: hey, anything that anyone's doing with this, it'll end up good.
And the bloomer is: hey, most of the stuff is going to end up really good, let's try to steer a little bit. It's hard for me to actually put people in boxes, like somebody like Marc Andreessen. I don't know if he's fully, like, everything technological, technical is good, or how much of this is sort of, you know, a personal choice to just amplify this extreme position in order to kind of... You might need to plant a flag in order to shift the Overton window.
Move the Overton window, right? And, like, I think that's part of the meme games that people like Beff Jezos and maybe Andreessen are doing, but it's hard to speculate on it. Okay, so that's the Zoomer. Now, the Doomer is pretty easy. I think we've also had a guest on Bankless, Eliezer Yudkowsky, who very much clearly thinks that everything we're doing right now in AI means, basically, we only have years, maybe decades, to kind of live before AI actually supplants us. He genuinely thinks that. That's the Doomer category.
So you don't have to go into more detail there. But how about the Gloomer category a little bit more? It seemed to me that this is sort of the mainstream media type of take on things. And it might even be the popular narrative around AI. Like if you ask the average American, what do they think about AI? I think in like the 2020s with the current spirit of the age, I think there'd be some cynicism.
about AI. There'd be some pessimism about AI. It would definitely be the glass, you know, half-empty type of outlook. And I think that's the popular idea, but who are some archetypes for this gloomer category? So I do think that it's, kind of generally speaking, you know, kind of the discourse, because the discourse now, just like earlier times in history with earlier technologies, tends to focus around, like, all the things that could possibly go wrong.
And so many journalists, and definitely the vast majority of people in Hollywood, who are like, oh my God, this is the destruction of the content production industry. And when Sora and Veo are going, all of our jobs are going.
A lot of it's focused around job displacement. So, worries and concerns about job displacement. So, you know, I think it's more or less kind of like: if you can't put the person clearly in another bucket, they're probably in the gloomer bucket. And that's a little bit like mainstream media; it's the everyone-else bucket.
How about from a political landscape perspective, would you look at the axis that way here? Because I think a lot of people listening would be like, okay, Democrats are a bit more on the gloomer side of things, and Republicans are a bit more on, maybe not the zoomer side of things, but the bloomer side of things. Do you think that's an axis at play as well? Well, I think it depends, right? Because there's also a lot of the modern Republican party that's kind of anti-big-tech.
You know, thinks that big tech is, you know, too big for its britches and should conform. So, you know, I think that there are kind of gloomers on both sides. I think the Democratic side tends to be a little bit more, we should be regulating, and the Republican side tends to be the, no, no, we should be allowing, you know, industry to do what industry does.
So, Reid, I think for the rest of this podcast, we want you to make the case for Bloomerism here. Like, why is AI going to go really well for humanity? This idea of humans really amplified by artificial intelligence, and it kind of leads to really positive outcomes. One of the early chapters in your book,
talks about some history I actually wasn't familiar with, and maybe this is an analog that will be helpful for some; it was helpful for me. So this is the history of the mainframe computer. You go back to the 1960s, and apparently, I did not know this, maybe some Bankless listeners also don't know this, during the 1960s, when the mainframe computer entered the cultural public scene as a new technology, we had computers that could do incredible things for the time, and there was a media hysteria that broke out, okay? And there were concerns about this new computer that had the ability to recall in a few seconds every pertinent action, including all of your failures, your embarrassments, or incriminating acts, from a lifetime of every citizen. There were many comparisons to 1984, the book, of course, that's in the Western canon by George Orwell, just like this Orwellian society that would be built out by these mainframe computers. There were even congressional hearings, guys. One lawmaker warned of the danger of the "computerized man," a citizen who would lose all of their individuality, their privacy, basically their agency, and be reduced to magnetic tape. That, of course, was the data-storage technology of the time. It's like literal magnetic tape. So give us the history of the mainframe computer and this hysteria. Why do you think this is analogous to what's happening today?
So, well, you covered it pretty well. Thank you for actually reading the book. That doesn't happen that often these days. And so, you know, I think that the question is any time that we encounter a new technology, and in this case, the mainframe, they were like looking at like, okay, what could possibly go wrong? And they think about, well, actually, in fact,
this could track everything, make all decisions, take away the agency of people by putting it into government centralized control, a little bit of discussion about what's happening with AI in some circles today, and then make us as human beings
essentially powerless and agency-less. And that's, of course, part of what we talk about in Superagency a bunch: a lot of it was the 1984, George Orwell worries, where that kind of centralizing technology became a control over individuals, and individuals, through this kind of control of information and control of power, become, you know, almost irrelevant cogs in a machine.
And if you look at it, the discourse today is similar: what's AI doing with my data? Am I going to be able to make decisions, because AI is going to be so persuasive and manipulative in advertising systems and information systems? Am I going to be able to control my life and work, or is AI going to be doing all the work? All of those are very parallel, not just obviously to the mainframe discussions, which are relevant and close.
We'd at least gotten through, I think, punch cards to magnetic tape before we started having all the worries. So we're magnetic tape, not punch cards; that's meant to be a joke. And so, anyway, that was essentially the dialogue that was going on. People forget it now because it seems absurd looking back on it. It's kind of like, well, yeah, I don't know why those people thought that. I mean, look at all the computers we have now, and look at, you know, the smartphone that everyone has in their pocket, which is thousands of times more powerful than those mainframes.
And everyone has one, and it's kind of working throughout the entire place. And by the way, I think everyone's going to have an agent, too, with AI. And so I think that's why the parallel discussion to say we're going through all this energy to imagine every possible bad outcome.
when a lot more of the energy is better put into what are the good outcomes we should be steering towards, and which specific bad outcomes are not easily correctable as we get into it. So, for example, you can put a car on the road without bumpers.
It's good to build bumpers later. You can put a car on the road without seat belts. It's good to put in seat belts later. But you don't try to imagine all 10,000 things that could go wrong before you will put the car on the road. You've got to put the car on the road and start learning as you're going. And that's the thing with AI. And so, for most Gloomers, to persuade them to switch from, call it, AI-skeptical to AI-curious within the kind of Bloomer category, is to say: start using it, and start using it not just for, hey, I have these ingredients in my refrigerator, what can I cook? Probably a good use case. Or, my relative is having a birthday party and I want to create a sonnet for them. Great. But for real things. Like, for example,
I'll give a personal example, because I think this might be useful in particular to the Bankless community. So when I first got access to GPT-4, I sat down and said, how would Reid Hoffman make money by investing in AI, as a proxy for what degree of job replacement do I have with GPT-4?
It gave me back an answer that was powerfully written, compelling, and completely wrong. Because it gave me back the answer that a business-school professor who is very smart but doesn't understand venture capital would give. First, you analyze which markets have the largest TAM. Then you analyze what the substitute products might be. Then you go find teams that could possibly build those substitute products, and stand them up in order to invest in them. And you're like, yeah, that's not the way any capable venture capitalist operates. Any venture capitalist who's successful does not operate that way. Yeah, it's like business-school slop, I guess, right? Yes, exactly. And so I was like, okay, but then you say, well, is it completely irrelevant to investing? And the answer is no. Actually, in fact, one of the things I figured out with the AI by the next day was, hey, I can feed in the PowerPoint deck or feed in the business plan, and say, what are the top questions to answer in due diligence? And while, as an experienced investor, I might have known all those questions and gotten to them all, it helped me go, oh yeah, question number three, I would have figured out that was the right question to ask three days from now. And it's useful to have it now, while composing a due-diligence plan. And so that kind of acceleration, or that kind of amplification.
you know, or that kind of agency, super agency, as part of human agency. And so all of this is a personal story to go back to, you know, the Bankless community: start using it for things that matter to you. And even if the first one, like how do you invest in, you know, cryptocurrency, doesn't give you anything useful, keep trying different things. And you may find something and go, oh, this helps me with how I can operate at speed and with accuracy. And then that gives you a wedge to start learning, you know, kind of how you can be superpower-enabled.
The Arbitrum Portal is your one-stop hub to entering the Ethereum ecosystem. With over 800 apps, Arbitrum offers something for everyone.
Dive into the epicenter of DeFi, where advanced trading, lending, and staking platforms are redefining how we interact with money. Explore Arbitrum's rapidly growing gaming hub, from immersive role-playing games and fast-paced fantasy MMOs to casual mobile games.
Move assets effortlessly between chains and access the ecosystem with ease via Arbitrum's expansive network of bridges and on-ramps. Step into Arbitrum's flourishing NFT and creator space, where artists, collectors, and social apps converge, and support your favorite streamers, all on-chain. Find new and trending apps and learn how to earn rewards across the Arbitrum ecosystem with limited-time campaigns from your favorite projects. Empower your future with Arbitrum. Visit portal.arbitrum.io to find out what's next on your web3 journey.
What if the future of Web3 gaming wasn't just a fantasy, but something you could explore today? Ronin, the blockchain already trusted by millions of players and creators, is opening its doors to a new era of innovation starting February 12th. For players and investors, Ronin is home to a thriving ecosystem of games, NFTs, and live projects like Axie and Pixels. With its permissionless expansion, the platform is about to unleash new opportunities in gaming, DeFi, AI agents, and more. Sign up for the Ronin wallet now to join 17 million others exploring the ecosystem.
And for developers, Ronin is the platform to build, grow, and scale. With fast transactions, low fees, and proven infrastructure, it's optimized for creativity at scale. Start building on the testnet today and prepare to launch your ideas, whether it's games, meme coins, or an entirely new Web3 experience. Ronin's millions of active users and wallets means tapping into a thriving ecosystem of 3 million monthly active addresses ready to explore your creations. Sign up for the Ronin wallet at wallet.roninchain.com and explore the possibilities. Whether you're a player, investor, or builder, the future of Web3 starts on Ronin.
I completely agree. And my lived experience of using tools like ChatGPT is that it does amplify my productivity when I use it in the right way. And I have to spend time to figure out how exactly to apply this to my own amplification of what I do. I guess when I was reading this section about the 1960s and the mainframe computer, I was sort of putting myself in the minds of people at that time. And you could kind of see, at that time, the way compute was playing out, it was really controlled by a small number of companies and governments. It was sort of like, I mean, the computers were the size of buildings, right? And so you can sort of take a 1960s mindset and extrapolate that and get very scared. What ended up happening was, of course, the personal-computer revolution, where everybody got those building-size computers in their own home as an amplifier for their own productivity, and society completely forgot the 1960s hysteria around mainframes. But
I can't help but also wonder if some of the criticisms were right. You go back to the 1960s, and they talked about surveillance and the lack of privacy, and they weren't completely wrong. We didn't get the worst-case scenario of what they were projecting, but we did get a lot of good and then some bad outcomes. And this is why I sort of want to ask you about your framing: do you actually think the Doomers and the Gloomers are completely wrong? Or do you think there's some probability of a Doomer-style outcome, or even a Gloomer-style outcome, where AI is not so sunshine-and-rainbows, where it actually is kind of negative for society? What do you think about that from a probability-distribution perspective? And do they have a point? So I think smart people always have a point.
And so I think the question's good, because it's always worth listening to what is the thing that they're thinking about. I think the two answers are very different between Doomers and Gloomers. Let's start with Doomers, who, you know, another thing you in the Bankless community may be familiar with is x-risk. And so they tend to be about existential risk predominantly, you know, Yudkowsky and others. Now, the thinking starts like this. It says: can you guarantee me that killer robots will never be built,
either in the hands of humans or autonomously? You say, well, you can't guarantee that. There's lots of things you can't guarantee. You say, ah, so then we have an existential risk that's being added, and we should stop that existential risk. Because why should you add any existential risk? QED, my argument's over.
Like, well, until you consider the fact that existential risk is not one thing. The only existential risk for human beings is not killer robots. There's pandemics, there's asteroids, there's nuclear weapons, there's climate change, and the list kind of goes on. And so you have to look at existential risk as a portfolio. Namely, it's not just one thing, it's a set of things. And so when you look at any particular intervention, you say, well, how does this affect the portfolio?
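Hoffman's portfolio framing can be sketched as a toy calculation. The risk categories below come from the conversation, but every probability and multiplier is an invented placeholder, not an estimate from Hoffman or the book; the point is only to show how a net effect across the portfolio can be reasoned about:

```python
# Toy illustration of the "existential risk portfolio" framing.
# All numbers are made-up placeholders for the sake of the sketch.

baseline = {            # notional chance of catastrophe per century
    "killer_robots": 0.01,
    "pandemics":     0.05,
    "asteroids":     0.01,
    "nuclear":       0.03,
    "climate":       0.04,
}

ai_effect = {           # hypothetical multiplier AI applies to each risk
    "killer_robots": 1.5,   # AI might add risk here
    "pandemics":     0.4,   # faster detection, therapeutics, vaccines at speed
    "asteroids":     0.6,   # earlier identification and intervention
    "nuclear":       0.9,
    "climate":       0.7,   # accelerating fusion, smarter grids
}

before = sum(baseline.values())
after = sum(p * ai_effect[k] for k, p in baseline.items())

print(f"portfolio risk without AI: {before:.3f}")
print(f"portfolio risk with AI:    {after:.3f}")
print("net effect:", "positive" if after < before else "negative")
```

The structural point is Hoffman's: an intervention that raises one line item can still lower the portfolio total, so a single-risk argument does not settle the question.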
Now, my very vigorous and strong contention is that AI, even unmodified, and we'll get to why steering is good, but unmodified at all, is net, I think, very positive on the existential-risk portfolio. Because when you get to, for example, pandemics, which is something we've experienced in our lifetimes, and obviously, if it had been a lot more fatal, it could have been substantially worse than the many thousands who died.
The question is to say, well, how do you see it? You know, detect it. How do you analyze it? And how do you both do therapeutics and preventive vaccines at speed in order to navigate that? And AI is the primary answer to that. Like, none of that can work without the speed of AI. And then you get to, oh, well, how about asteroids? Well, identifying which asteroids might get to us being able to intervene on them early.
You get to, for example, climate change. You go, well, actually, in fact, whether it's anything from accelerating the invention of fusion to how we manage our electric grids better, there are positive contributions across all of this. You go, okay, given all of that, I think AI, even unmodified, just letting the industry do exactly what it's going to do, is going to be strongly positive in the existential-risk bucket. And I'll pause there in case you have a contention on that before I get to the Gloomer category. No, I'll just say it another way. You're saying even the most fully Zoomered scenario, the fastest engine going into the AI revolution, it hits every single pothole, it's on two wheels as it's going around the corners: even under that situation, the solutions it provides to all the alternative existential risks are still net positive, in your opinion. Exactly. Yeah. So that's the reason why I'm very far away from the Doomers.
Okay, well, how about the Gloomers? Do they have a point? Yeah, well, and by the way, I think the Doomers have a point too, which is to say, hey, we should try to minimize the killer-robot risk. Yes, that is something we should be doing. I guess your answer would be: through use of AI to help us, also. Exactly. Okay. Exactly. That feels a little recursive, but... Hey, whenever technology is part of the problem, it's almost always the best part of the solution, too. Okay. Right. Okay. That's the optimist. That's the e/acc take, you could say. Okay. But I think I have history on my side, which is good. Yes. And we can go back to the privacy thread from the mainframe things as well. How about the Gloomers, though? So, on the Gloomer side...
The primary thing where I think I'm very sympathetic to the Gloomers is that, if you look at, and we cover this some in Superagency, if you look at the transitions for human societies with these technologies, we as human beings adopt and adapt to new technologies very painfully. Like, the disruption. So you go, ah, the printing press: we could not have anything of the modern world without the printing press. You can't have science, the scientific method, you can't have literacy, you can't have, you know, kind of a robust middle class.
Yet there was a century of religious war because of the printing press. When we as human beings come to this, the transition periods are almost always very painful. And I think even with AI, we're going to have pain in the process. I don't think there's any way, unfortunately, around it. Part of the reason I'm writing Superagency and doing these conversations is to say, well, let's try to be smarter about it than the times we've done this before. Let's try to make the transition easier and more graceful. But it will still be painful, because even if you say, hey, most human jobs will be replaced by humans using AI,
that process itself is still painful. People have to learn AI. I mean, maybe the human who couldn't learn AI feels out of place, you know, is suffering because of it. And that's the kind of thing that I think the Gloomers are kind of putting their intuitive finger on, which is, hey, look at all this kind of transition, and they project it to infinity. But all this kind of transition, boy, this is going to be difficult. And you're like, yes, it is, right? And we're going to try to make it as good as possible. And that's, again, part of the reason I'm arguing that
we should be intentional here about what could possibly go right. And this gets back to technology being part of the solution. It's like, well, okay, so we're going to have some job transitions and transformations, we're going to have information-flow and misinformation-flow transformations, we're going to have some expectations-of-privacy transformations. What should we do? And the answer is, well, I actually think AI can be helpful in all of these cases. And, like, part of the reason why Inflection and Pi were something I helped get going is an agent for every human being that's on your side, that's for you and by you. It's one of the things that can help you navigate, because it can be like, okay, how do I help you navigate this new world? And I think it's one of the things that's really important for us to provision early, which goes all the way back to your democratization question. And it's one of the reasons why I think it's important to make sure there's very broad access to it.
Okay, let's underscore this point, because I think some of the reason why the Gloomers are sort of winning right now in the narrative war is because, like, of course, fear is a bit more viral and it's easier to imagine. It's much easier to imagine an Orwellian future, in the 1960s or the 2020s, than it is to imagine a more optimistic future. And as soon as you start talking about this optimistic future, it sounds too utopian; it just doesn't even sound real, right? We are limited in terms of our imagination. But that question, that prompt you just raised, is like a chapter in your book: the question of what could go right. And the Gloomers rarely ask what could go right, and I think, to be fair to them, they have some limitations on their imagination. So I want to ask you, as kind of a techno-visionary:
How would you answer that question? So if every citizen in the United States had an AI agent that amplified what they do, and this technology was widely deployed across society, what could go right? What are the benefits for the average American here? So, line of sight, namely, no technological invention required, it's just a question of how we get it built and deployed: a medical assistant that's better than your average doctor, available 24/7, in every pocket.
So you have a health concern, it's 11 p.m., you have a health concern for your kid, your parent, your grandparent, your pet, anything, you can begin to address it. And it can help you, including going, oh, for that, you should go to the emergency room right now, right? And so that's buildable. A tutor on every subject for every age, anything from a two-year-old to an 82-year-old. Like, hey, you'd like to learn this, you'd like to understand more. By the way, there are obviously economic implications to that.
That's, I think, another thing that's available. Then, to your democratization point, there's a lot of services, not just medical and access to doctors. Some people have concierge doctors. Most people have to go through their medical plan, and some people don't even have medical insurance, even in the US. There's a bunch of people who are uninsured.
What other kinds of things could there be? It's like, well, actually, in fact, I'm reading the lease for my rental. How do I understand that? What's important to know about it? Well, the agent can help you with that, too. And that's all line of sight today. That's not even getting to, hey, how can it help you code better? How could it help you create marketing plans better? How could it help you sell better? All of that stuff is also coming.
but, like, those three basics, for everybody, are, you know, life-transforming. What about the societal level? So when those things for individuals aggregate and compound, we have better healthcare, we have better kind of learning capabilities, we have better things in all areas of our life. What does that amount to for the United States from a societal perspective? Do we have, like, more free time as a society? Does our happiness increase? Does our GDP, like, double or triple? Do we get those things as well? Well, I definitely think the equivalent of what GDP is supposed to be measuring should be increased. Now, GDP has this challenge that it's measured in an industrial, dollars-for-things way. For example, all the benefits you get from Wikipedia are actually deflationary in GDP, despite the quality of what you get. Another thing that people worry about with AIs is: I'm not going to spend time talking to people, I'm going to spend all my time talking to agents.
And so loneliness will be increased, or connection decreased. I think that, to some degree, that's a design, you know, kind of choice. And I think what we want to see, and I hope we'll get, and want to nudge towards, is, you know, like when you ask Inflection's Pi, hey, you're my best friend, it says, no, no, I'm not your friend. I'm your AI companion. Let's talk about your friends. Have you seen them recently? Would you like to talk to them? You know, maybe you could set up a lunch date, that kind of thing.
And I think that could lead to much greater happiness for us. And I do think that part of what I love about the Bhutanese concept is, actually, I think measuring gross national happiness is also a good thing we should be aspiring to as a society. And I think that could be increased with this. But I think the place where we'll see it
is in living much more fulfilling lives. And the fulfilling life might be, hey, I get more time to do my hobby. I love fishing. I'm going to have more time to do fishing, because I can do my work in a shorter amount of time. Or, for people who, because American society tends to like to work, it's like, oh, I can accomplish a lot more in my work. I may still be working the same amount, but as opposed to putting a whole bunch of time into form entry, I can now do the parts that are not just form entry, and do the other parts of the work in much more fulfilling and capable and productive ways. Yeah, one way I think about it is, like, 100 years from now, will the average person in the United States, or wherever this technology is, have a better quality of life? I think of, you know, the 2020s, and I compare that to the 1920s, and I would, hands down, prefer to live in the 2020s, for all of its problems, than in the 1920s. Before the advent of antibiotics, you know, look at the mortality rates from that time. Look at the amount of society that had to do a grueling, agrarian-type farm job just to get by, right? And it's much better for most people now than it was previously. And there's lots of stats we could get into on that. But let's just pause and go back to kind of another Gloomer objection. So
They would say, Reid, everything you're saying sounds so amazing, but, like, yeah, we've heard it before. This is another bait and switch from Silicon Valley, okay? They promised, and remember the 2010s, the advent of, you know, Facebook, not to mention LinkedIn, they promised that we would connect the world. And what ended up happening? Silicon Valley got rich. They extracted our attention. The term extraction is used a lot about this. They sold us products. They sold us as products to the highest bidder. And now I'm thinking about even the time I spend with ChatGPT, and it feels really good right now. Like, it's amazing. I spend more time with ChatGPT than I do with, like, Google. And, you know, as a result, I think ChatGPT knows me
even better than Google. I mean, a lot about me could be revealed by my search history, but even more so with ChatGPT. And I'm getting to the point of daily use where it's like, who knows me better than ChatGPT? Like, maybe my wife, maybe a handful of other individuals, but, like, it knows me. And that all feels good, because it's amplifying what I do, okay? But what happens if things go dark? If we get this kind of bait and switch, if suddenly OpenAI, or whatever, which is still a Silicon Valley corporation here, starts saying, oh, you know, all this AI stuff is pretty expensive. We're going to have to start harnessing all this data we know about Ryan to, like, do something, and sell it to the highest bidder. Cambridge Analytica 2.0. Yeah. Or maybe they sell it out to the government or something, or they control me in all of these subtle ways by recommending things that aren't in my best interest; it's in their best interest, or some government's best interest. Okay. This is the crux of the bait and switch. So address that head-on,
Reid: how do we know this isn't a Silicon Valley bait and switch? Because it feels like that's happened previously with social media. Well, I mean, a little bit depends on what you mean by bait and switch, because I think, for example, let's take your example with Google and AdWords. Yes, Google gets a bunch of data from you
and can advertise to you better. Hopefully, that means that the products that you're seeing are actually things that might interest you, which I think is a feature, not a bug, in terms of things you might want to buy, and actually has so far the best business model that's been invented.
Certainly in the media world, maybe in any part of the world today, and they say, well, what do you get for your data? Well, you get a panoply of amazing free services, free search, free email, a bunch of other things. And so it's a voluntary, something you participate in.
you know, because you get a bunch of value, you know, kind of a transaction. And, by the way, you'd probably rather have it figure out how to monetize off your data than saying, oh, in order to get our ARPU, you've got to pay us, you know, 50 bucks a month.
I'd rather get the advertising right and give me all this stuff for free. And so it's possible that the kind of AI agents will end up in a similar kind of thing where they say, well, hey, look, we could charge you 50 bucks a month like Google could for search. But actually, in fact, figuring out a way that it's kind of
transparent and voluntary and engages with you because it shouldn't be deceptive. It should be with kind of your awareness in engaging and using it that this becomes a positive kind of economic transaction for you. But it could be other things too. It could be a subscription model. It could be integrated into the various productivity apps that you're using. It could be any number of things for that. I think that the dialogue
that's kind of very well captured in this compelling slogan, "surveillance capitalism," is misleading, because it's like, well, I for one like surveillance medicine. I like the fact that my watch is tracking my sleep and health things, because it's for me and it helps me, and it's part of that positive thing. And a lot of the uses of this data in these internet systems are a way of making it free for you, where they have the economics for expanding and improving
the free product. And so, you know, I think that would challenge the bait and switch kind of methodology. And, you know, the last thing I guess I would say is like, for example, you say, well, you know, whether it's a social network, you know, and I obviously think LinkedIn has handled this, you know, the best of, you know, all of them, whether it's Google, whether it's these things.
These are all voluntary-participation questions. You might say, well, it's very hard for me to participate in modern society without being informed in the way that I could be informed this way. And it's like, okay, you know, yes, I myself use search a lot. But maybe you should say, hey, Google should offer
a paid alternative. On the other hand, for that to be economically viable, at least 2% or 3% of the audience would have to opt for it. I'm not even sure 2% or 3% of the people would opt for it. You'll get individuals like I would do it, but that might not be
You know, economically relevant unless at least two or three percent of the people were doing it. So, anyway, I think that's a challenge to the challenge, as it were. I think the Bloomer take on this was sort of interesting, right? Which is, like, acknowledging that there are some potholes and there are some costs, right, maybe to society and individuals, but also saying that the benefit far exceeds the cost.
One way you underscored this was like, you said this, I wanted to get you to justify this because it kind of blew my mind when I was reading it. Even if LLMs get no better, that is no better than today, the consumer surplus to the average 20-year-old living today is millions of dollars over their lifetime.
Okay, so what you're effectively saying is for an actual Zoomer. So somebody in Gen Z, somebody in that age demographic, they're going to be able to harness LLMs and it's going to deliver millions of dollars in value to them. And that's not even talking about the LLMs and AI of the future. That's talking about the AI of today. How? Some people think about that and they're like, how does that deliver someone millions of dollars in their lifetime?
Well, let's just start with something that's really simple, which is legal assistance. You're going to encounter employment contracts, rental contracts. You're going to have products and services you might be engaging with. Today, your average person just basically can't afford to pay a lawyer, because a lawyer is hundreds of dollars an hour.
Well, now, even today, with GPT-4, you can put a contract in there and get useful analysis, useful kind of participation. And you just take every single contract that you're potentially engaging in and use it that way. Now, that gets you a lot of dollars towards your millions of dollars.
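Hoffman's "millions over a lifetime" claim is a back-of-envelope sum, which can be made explicit. The categories echo the ones he lists (legal, medical, tutoring, work amplification), but every dollar figure below is a hypothetical placeholder chosen only to show the arithmetic:

```python
# Back-of-envelope sketch of lifetime consumer surplus from LLM assistance.
# All dollar amounts are invented placeholders, not estimates from the book.

annual_value = {
    "legal_review":       2_000,   # contracts reviewed without lawyer hours
    "medical_guidance":   3_000,   # 24/7 first-line health advice
    "tutoring":           2_500,   # a tutor on every subject, every age
    "work_amplification": 25_000,  # finding and doing economic work better
}

years_remaining = 60  # remaining adult lifetime of today's 20-year-old

lifetime_surplus = sum(annual_value.values()) * years_remaining
print(f"${lifetime_surplus:,}")  # $1,950,000
```

Even with modest per-category numbers the total crosses a million dollars, and that is before counting any longevity gains from preventive medical advice.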
Then you say, well, what about, like, medical stuff, right? Like consulting on medical or other kinds of things. Especially, you know, in the periods where, since in this country we tend to do insurance in kind of challenging ways, mostly through employers, you know, like, okay, so getting medical advice. Well, that's another area where you can get a bunch. Then you say, okay, well, how about amplifying my ability to find and do economic work?
That's another place. And so when you add all that up and you add it up for hopefully what is even a longer life because if you're getting, you know, kind of call it pre-critical medical advice about how to preventively stay healthy and preventively avoid certain kinds of, you know, catastrophic health conditions or navigate like early signs in ways that you can do it before you're in critical condition.
not only is that hugely economic, but it should also lead to longer lifespans. And all of that is part of how we get to: hey, even today it's already worth millions to you. So, Reid, this gets us to regulation. I think the gloomer camp has one take on how we regulate AI, and the bloomers and the zoomers have a different take. But generally what I'm seeing from the establishment government is brakes.
It's no gasoline on this thing; it's all brakes. They have this precautionary principle, where they think about what could go wrong and try to prevent all of the things that could go wrong. You make a different argument. I think your argument is that innovation is actually safety. So you're making the argument, and I want to hear how this makes sense, that actually hitting the accelerator on AI is how we make this thing safe.
And that feels very counterintuitive. What's that claim based on? Why do you think innovation is safety? So, for example, how are modern cars able to go at these speeds, and to go much more safely than earlier cars did? It's that as you iterate and deploy them, you realize:
Oh, we could put in anti-lock brakes. Oh, we can put in seat belts. Oh, we can have crumple zones. Oh, we can have bumpers. And that's an innovative path to making the car safer. The car can then go faster and navigate more circumstances, because you've innovated safety into the car as part of innovating on the car.
And the parallel with AI is essentially asking: what are the future features of AI, and what are the things we could be doing that make it much safer against these kinds of misaligned circumstances? You say, okay, so can we make the AI really enable
people who are trying to figure out things about their health and other topics, while also making any attempts at terrorism much more difficult and much harder? And by the way, this is of course what red teams and safety and alignment groups are already doing at Microsoft, OpenAI, Anthropic, and others, because they're aware of these kinds of safety issues. But it's that innovation
into the future that is the really important thing. And the way you discover it is by iterative deployment: by actually making it live and then seeing what needs to be modified. Now, obviously, on the really extreme things, like terrorists creating weapons of mass destruction, we want to make sure that's as close to impossible as we can get, in every field.
For example, safety groups have more or less used as their minimum benchmark: let's make sure these agents are no more capable of enabling that than Google searches are today. Obviously, we want to drive both of those as low as possible. But that's what innovating toward safety means in the historical and easy-to-understand car example, and what it means in terms of technological features for building future software.
The last objection that I think comes up is this idea that AI kills human autonomy: that it's a control technology, not a freedom technology, basically. In this AI world we're all moving towards, where is my agency? You titled the book Superagency, Reid, but I feel like I have less agency if the AI is making all of the decisions for me. I want you to address that too, because it ties into this concept of freedom. And I sort of wonder how much of this is also like boiling the frog: we just kind of get used to it, and maybe that's okay, but maybe it's not. If you went back to the 1960s and polled those same people and told them, hey, in the 2020s most adults will actually meet their future spouse, mate, or partner by computer algorithm, with computers basically deciding.
That's actually the lived experience of how most people meet and get married today: they meet via a social network of some sort, on Tinder or whatever dating site they subscribe to, and it's the algorithms that are essentially matching them. That would have sounded dystopian in the 1960s. Now it's like, oh, we've kind of gotten used to it. I've met many couples in healthy relationships who met online, via computer. Anyway, back to
this argument that AI agents making the decisions, and outsourcing that part of our intelligence, will actually restrict our freedoms. What do you make of this? Do you think there's some merit to this argument? So, one of the things I said at the very beginning is that agency changes. It isn't just new superpowers; as you get to superagency, it also changes some things around. So, for example, think of it as different tactile perceptions of what
it means to be human and how humans engage in life. When you first make a technology, it feels kind of alien: fire, agriculture, glasses, computers, phones. Then it starts feeling like everyday life. Our grandparents use smartphones now too, even though at the beginning of the smartphone era it was, ah, this is one of those newfangled things, I'd rather just get on the landline and call my grandchild or whatever.
And so it does make changes, and part of iterative deployment and learning is figuring out how to make those changes such that when we get to the future state, we go, oh yeah, this one's better. Now, you might say, well, aren't we just adapting to our current state, and wasn't the previous state's judgment actually correct?
Well, if you compare, say, the 1920s and the 2020s, do you actually understand what the world looks like past penicillin and antibiotics and all the rest, what the consequences of all that are, and why the whole portfolio of it is so much better?
And so you actually have to take that learned state into account. Think of it like the judgments you make as a child versus the judgments you make as an adult. There's a certain kind of innocence to children; we get wiser as we learn and gain experience, and we use that as the viewpoint for making good judgments. And that's part of the reason why, yes, you'd say, hey, you're meeting your life partner now on an internet service.
Like, whoa, that seems really alienating. But actually, the question is: how do we make that a lot better than the lottery of college or the workplace, which was very limited, and what you had before? And again, it's an iterative process. That doesn't mean there aren't still some things that are broken in internet dating, but it's one of those things where we know how to continually improve it, and that's one of the things we continue to work on.
Reid, as we begin to close this out, I want to ask a question about the United States and America. Based on your different, almost religious, preferences about AI, you might decide to regulate this thing in one direction or another. And the question becomes: okay, how do we implement this technology across America, across society? Some people get to this stage of the conversation and say, well, the doomer take, and even the gloomer take, is not sustainable,
because we live in a multipolar world with many different actors, and this is kind of an AI race. If not us, then our adversary doubles their GDP while we stay stagnant, and that leads to a world that maybe we don't like. So I want to ask you this: what do you think America should do here? What should our approach to AI be? Well, one of the things that I started doing is calling artificial intelligence "American intelligence," for precisely this reason.
It's really important that we embrace this cognitive industrial revolution, because the societies that embraced the industrial revolution had prosperity for their communities, their children, and their grandchildren, and made the modern world.
And I think the same thing is true for the cognitive industrial revolution, with artificial intelligence, or amplification intelligence, or American intelligence. We want the esprit de corps of American values, the American dream, the empowerment of individuals, the ability to
do your best work and to make progress from wherever you start in the rungs of society, to take more economic control of your destiny. And I think it's one of the reasons why it's particularly important that American
values are deeply embedded in this, and that it's an empowerment of American society. It's part of the reason why I think our regulatory stance needs to be much more bloomer, zoomer, and accelerationist than about putting on the brakes, because I think that's part of the future of the world as we can help make it become.
As we close this out, then, just a final question: is there any chance in your mind that all this AI stuff is kind of overhyped? That we basically flatline here, that we have GPT-4 and the innovation slows to a crawl, that none of this matters that much because it'll all happen very slowly over time? I think there's zero chance of that.
So I think that already we see enough in the scaled compute and learning systems that are only beginning to be deployed, which is part of what 2025 is going to be the year of. We see the acceleration of what's happening in software coding
across the board, and that software coding is going to enable a bunch of other things. All of us as professionals are going to have a coding copilot that helps us do our work in various ways. But it's also a template for how you advance a bunch of other functions
of all of this work. So even if you say, hey, GPT-5 is only going to be 10% or 20% better (I think it'll be a lot better than GPT-4), and that the progress of increased cognitive capabilities slows down,
I think the implications throughout the cognitive industrial revolution are that the technology is already visibly present. It's just a question of how we build it, configure it, deploy it, and integrate it. And I think that's part of the reason why: American intelligence. There you go, guys. From Reid Hoffman, zero percent chance that all of this stuff slows down. So into the frontier we go, and we're going with you, Bankless Nation. Reid Hoffman, thank you so much for joining us here today. It's been a pleasure.
My pleasure as well. I look forward to the next one. Yeah, we'll have to talk about crypto in the next conversation. So everyone listening, the book is called Superagency, and it is out now; we'll include a link in the show notes. It's a fantastic book, with Reid's entire thesis around this distilled. Gotta let you know, of course, crypto is risky. So is AI. You could lose what you put in, but we are headed west. This is the frontier. It's not for everyone, but we're glad you're with us on the Bankless journey. Thanks a lot.