Today, we're looking at the most important AI essays of 2024. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.
Hello friends! This is our last Longreads episode of the year, and I thought what would be interesting would be to try to home in on what I think are the most significant or influential op-eds or essays on AI this year. One idea I had, given how frequently I reference it, would be to count down the top pieces from Professor Ethan Mollick's One Useful Thing blog, but I decided that for our purposes there was a slightly better strategy.
My picks for the three most significant essays of the year are Sam Altman's op-ed in the Washington Post, Who Will Control the Future of AI?, Dario Amodei's Machines of Loving Grace, and, once again from Sam Altman, The Intelligence Age. I'm also going to reference the announcement of Safe Superintelligence, and where we're going to start is not one of the most significant op-eds of the year, but it is still useful setup and context. One of the really interesting things that happened this year is that the AI safety movement took a big backseat.
It did have a bit of a resurgence around the conversation of SB 1047 in California, but by and large, this was the year where the tone shifted to be more about the competition for AI leadership than about concern for AI safety. Back in May, Senate Majority Leader Chuck Schumer and a bipartisan working group of senators dropped a guidance plan around federal policy for AI.
One of the things that was really notable about that was how much it didn't consider big AI safety issues, particularly the x-risk type issues, and instead focused on, as columnist and thinker Tyler Cowen put it, the national security importance of the US maintaining its lead in AI.
Cowen continues, lawmakers seem to understand that they would rather face the risks of US-based AI systems than have to contend with Chinese developments without a US counterweight. Ultimately, he concludes, safety work of course will continue under many guises, both private and governmental, but AI-safetyism as an intellectual movement has peaked.
Now, I am not prepared even eight months on to fully co-sign on that idea. I think instead what has happened is a combination of things. First, I think that our being stuck around GPT-4 level capabilities has turned down the volume on the urgency of the conversation, if not extinguished it completely. And second, I do think that the idea of national security and American leadership in AI has certainly taken a more important seat at the table.
That was really brought to bear in July, when OpenAI CEO Sam Altman wrote Who Will Control the Future of AI? in the Washington Post. That question, he argued, is the, quote, urgent question of our time. Altman continues, the rapid progress being made on artificial intelligence means that we face a strategic choice about what kind of world we're going to live in.
Will it be one in which the United States and allied nations advance a global AI that spreads the technology's benefits and opens access to it? Or an authoritarian one, in which nations and movements that don't share our values use AI to cement and expand their power? There is no third option, and it's time to decide which path to take. From there, Altman argues that the US has a lead, but not a huge one. He writes that the first chapter of AI is already written, but that the next set of advances will, quote, usher in a decisive period in the story of human society.
Altman argued in that piece that the public and technology sectors need to get four big things right to ensure that the democratic version of AI is the one that takes hold. First, he argues, the AI industry basically needs to get good at securing its own secrets, effectively saying that if they can steal secrets from us, they're going to.
Second, he argued, quote, infrastructure is destiny when it comes to AI. This is something that he has been beating the drum on in both public and private. The US needs to have a more diligent build out of the infrastructure needed to power the AI movement, both figuratively and literally in the context of power plants.
Altman also points out that, quote, building this infrastructure will create new jobs nationwide, and that AI can be the foundation of a new industrial base it would be wise for our country to embrace. Third, he calls for a coherent commercial diplomacy policy for AI, including, quote, clarity around how the United States intends to implement export controls and foreign investment rules for the global build-out of AI systems.
And fourth, he writes, we need to think creatively about new models for the world to establish norms in developing and deploying AI, with a particular focus on safety and ensuring a role for the Global South and other nations that have historically been left behind. Ultimately, he calls for something akin to an International Atomic Energy Agency for AI. And yet what is much more significant than the particular recommendations is the fact that he wrote this at all, and that he did so in the Washington Post.
This very much both reflected and set a tone for what I think has been the theme of a lot of AI policy discussions throughout the year. The other two most significant essays, in my estimation, came in the last few months of the year: one in September, The Intelligence Age, once again by Sam Altman, and one about a month later from Dario Amodei, the CEO of Anthropic, called Machines of Loving Grace.
Machines of Loving Grace was Dario's attempt to lay out a vision for the best-case scenario for the world when it comes to AI. Here's the way he starts the essay: I think and talk a lot about the risks of powerful AI. The company I'm the CEO of, Anthropic, does a lot of research on how to reduce these risks.
Because of this, people sometimes draw the conclusion that I'm a pessimist or doomer who thinks AI will be mostly bad or dangerous. I don't think that at all. In fact, one of my main reasons for focusing on risks is that they're the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.
So what's interesting to me about this is that this was not what many people took it for, which was some turnaround of Dario's position. Instead, it was him saying, you've heard about all the bad, but let me tell you about the good. It was an articulation of a piece of his philosophy, which has always been there, but which was comparatively less articulated.
The goal of the essay, he said, is to sketch out what a world with powerful AI might look like if everything goes right. Dario divides his list into five categories: biology and physical health, neuroscience and mental health, economic development and poverty, peace and governance, and work and meaning. In terms of the setup of this piece, Dario rejects the language of AGI and instead discusses powerful AI, something well beyond what exists now.
His framework for each of these issues is what the five to ten years following the emergence of powerful AI look like, and he says that powerful AI could come as early as 2026. In terms of how he defines powerful AI, he says that it's smarter than a Nobel Prize winner across most relevant fields. He says that in addition to being just a smart thing you talk to, it has all the interfaces available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access.
It's basically agentic, although he doesn't use that word. It can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously. It doesn't have a physical embodiment, but it can control existing physical tools and so on and so forth.
Now, in a lot of ways, the specific predictions that he has, or the specific best case outcomes that he has, aren't really the most important thing here. Instead, it's just the fact that he thought that at this point in time, what was the most important thing to write was a full articulation of the world that we're trying to create. And what makes it even more interesting is it is a complement to a very similar type of essay from Sam Altman, who published his just a little bit before.
Today's episode is brought to you by Vanta. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever. Vanta automates compliance for ISO 27001, SOC 2, GDPR, and leading AI frameworks like ISO 42001 and NIST AI Risk Management Framework, saving you time and money while helping you build customer trust.
Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center all powered by Vanta AI. Over 8,000 global companies like Langchain, Leela AI, and Factory AI use Vanta to demonstrate AI trust and prove security in real time. Learn more at vanta.com slash NLW. That's vanta.com slash NLW.
If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year.
And given how new this is, all of us are going to be back in pilot mode. That's why Super Intelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.
If you are interested in the agent readiness and opportunity audit, reach out directly to me at www.bsuper.ai, put the word agent in the subject line so I know what you're talking about, and let's have you be a leader in the most dynamic part of the AI market.
Altman's was called The Intelligence Age. And it's short enough that I'm just going to reread it. And this is actually me. It's not AI. In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents. This phenomenon is not new, but it will be newly accelerated. People have become dramatically more capable over time. We can already accomplish things now that our predecessors would have believed to be impossible. We are more capable not because of genetic change, but because we benefit from the infrastructure of society being way smarter and more capable than any one of us.
In an important sense, society itself is a form of advanced intelligence. Our grandparents and the generations that came before them built and achieved great things. They contributed to the scaffolding of human progress that we all benefit from. AI will give people tools to solve hard problems and help us add new struts to that scaffolding that we couldn't have figured out on our own. The story of progress will continue, but our children will be able to do things we can't.
It won't happen all at once, but we'll soon be able to work with AI that helps us accomplish much more than we ever could without AI. Eventually, we can each have a personal AI team full of virtual experts in different areas working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction on any subject, in any language, at whatever pace they need.
We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more. With these new abilities, we can have shared prosperity to a degree that seems unimaginable today. In the future, everyone's lives can be much better than anyone's life is now. Prosperity alone doesn't necessarily make people happy. There are plenty of miserable, rich people. But it would meaningfully improve the lives of people around the world.
Here is one narrow way to look at human history. After thousands of years of compounding scientific discovery and technological progress, we have figured out how to melt sand, add some impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy through it, and end up with systems capable of creating increasingly capable artificial intelligence. This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days. It may take longer, but I'm confident we'll get there.
How did we get to the doorstep of the next leap in prosperity? In three words: deep learning worked. In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it. That's really it. Humanity discovered an algorithm that could really, truly learn any distribution of data, or really the underlying rules that produce any distribution of data. To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems.
I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is. There are a lot of details we still have to figure out, but it's a mistake to get distracted by any particular challenge. Deep learning works and we will solve the remaining problems. We can say a lot of things about what may happen next. The main one is that AI is going to get better with scale and that will lead to meaningful improvements to the lives of people around the world. AI models will soon serve as autonomous personal assistants who carry out specific tasks on our behalf like coordinating medical care.
At some point further down the road, AI systems are going to get so good that they may help us make better next generation systems and make scientific progress across the board. Technology brought us from the Stone Age to the agricultural age and then to the industrial age. From here the path to the intelligence age is paved with compute, energy, and human will. If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant, which requires lots of energy and chips.
If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that mostly becomes a tool for rich people. We need to act wisely, but with conviction. The dawn of the intelligence age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves and the future to figure out how to navigate the risks in front of us.
I believe the future is going to be so bright that no one can do it justice by trying to write about it now. A defining characteristic of the intelligence age will be massive prosperity. Although it will happen incrementally, astounding triumphs, fixing the climate, establishing a space colony, and the discovery of all of physics will eventually become commonplace. With nearly limitless intelligence and abundant energy, the ability to generate great ideas and the ability to make them happen, we can do quite a lot.
As we've seen with other technologies, there will also be downsides, and we need to start working now to maximize AI's benefits while minimizing its harms. As one example, we expect that this technology can cause significant change in labor markets, good and bad, in the coming years, but most jobs will change more slowly than most people think. And I have no fear that we'll run out of things to do, even if they don't look like quote unquote real jobs to us today. People have an innate desire to create and be useful to each other, and AI will allow us to amplify our own abilities like never before.
As a society, we will be back in an expanding world, and we can again focus on playing positive-sum games. Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.
So this is a declaration, a line in the sand that this new, better age is coming. When it comes to those risks we talked about, Altman mentions jobs, yes, but the one he spends more time on, and that seems more significant, is the idea that we under-resource this transformation and limit who benefits from the change.
And that's why I wanted to flag one more, which is the announcement post of Safe Superintelligence. This is, of course, the company founded by OpenAI co-founder Ilya Sutskever, and it reads: Superintelligence is within reach. Building safe superintelligence is the most important technical problem of our time. We've started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It's our mission, our name, and our entire product roadmap, because it is our sole focus.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures. Basically, what they're saying is that they agree that superintelligence is close, or at least, as they put it, within reach. But they think, and this of course is hard-won knowledge from having been inside OpenAI, that effectively all of the investment and business-model apparatus is just a distraction.
And the reason I think this is interesting in the context of these other two pieces is that it really reaffirms that the race is on: the race for wildly more advanced AI, the race over which country, or more importantly which political philosophy, will lead it, and the race to ensure that it benefits the greatest possible number of people. If you want to understand how AI in 2024 sees itself heading into 2025, these, I think, are the pieces that tell that story.
That's gonna do it for this last Longreads episode of the year. Appreciate you listening or watching as always, and until next time, peace.