Today on the AI Daily Brief, why this much-ballyhooed AI slowdown is actually an opportunity. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.
Hello friends, welcome back to another long reads episode of the AI Daily Brief. Today, we are connecting the dots between a topic that we have been covering a lot on the show, which is this question of whether we're reaching some plateau in the ability to improve the performance of LLMs, and the reality of AI as applied to business.
This is something that I've hinted at in previous episodes, but this gives us a chance to dig into it all the way. And to kick it off, we're going to read an essay by Parmy Olson called AI Slowdown Is Everyone Else's Opportunity: businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment. So first, let's read the piece. And yes, it is actually me, not ElevenLabs me, who will read it. And then we will do a little bit of a discussion.
The multi-trillion-dollar artificial intelligence boom was built on the certainty that generative models would keep getting exponentially better. Spoiler alert: they aren't. In simple terms, scaling laws said that if you threw more data and computing power at an AI model, its capabilities would continuously grow. But a recent flurry of press reports suggests that that's no longer the case, and AI's leading developers are finding their models aren't improving as dramatically as they used to.
OpenAI's Orion isn't that much better at coding than the company's last flagship model, GPT-4, according to Bloomberg News, while Google is seeing only incremental improvements to its Gemini software. Anthropic, a major rival to both companies, has fallen behind on the release of its long-awaited Claude model. Executives at OpenAI, Anthropic, and Google all told me without hesitation in recent months that AI development was not plateauing. But they would say that. The truth is that long-held fears of diminishing returns for generative AI, predicted even by Bill Gates, are becoming real.
Ilya Sutskever, an AI icon who popularized the bigger-is-better approach to building large language models, recently told Reuters that it had leveled off. The 2010s were the age of scaling, he said. Now we're back in the age of wonder and discovery once again. Wonder and discovery puts quite the positive spin on we have no idea what to do next. It could also understandably spark anxiety attacks for investors and businesses who are expected to spend $1 trillion on the infrastructure needed to deliver on AI's promise to transform everything.
Wall Street banks, hedge funds, and private equity firms are spending billions on funding the build-out of vast data centers, according to a recent Bloomberg News investigation. Does this all add up to a terrible gamble? Not exactly. There's no question that the main beneficiaries of the AI boom have been the world's largest tech companies. Quarterly cloud revenue for Microsoft, Google, and Amazon has been growing at a steady clip, and their market capitalizations, along with those of Nvidia, Apple, and Meta, have soared by $8 trillion in aggregate over the last two years.
Returns on investment for everyone else, their customers, are taking longer to show up. Yet a break in the market hype around AI could be useful, just as it's been for previous innovations. That's because technology doesn't typically hit a brick wall and die, but goes through an S-curve. The idea of the S-curve is that initial progress takes years before rapidly accelerating, as we've seen over the last two years with generative AI, before it starts to slow again and, crucially, evolve. Critics over the years, for instance, regularly declared Moore's law dead just before a manufacturing breakthrough for chips pushed it forward again.
The development of airplanes progressed at a glacial pace until the transition from propellers to jets in the late 1950s led to a leap forward, before the technology seemed to plateau. But just like chip manufacturing, aviation's development didn't stall, but transformed. Passenger planes have become far more fuel-efficient, safer, and cheaper to operate, even if they're only nominally faster than they were in the 1960s.
A similar plateau for AI and its scaling laws also might mean a new approach to development and measuring success, which until now has focused too much on capability and not enough on other areas such as safety. Some of the most advanced generative AI models fall short on critical areas like security and fairness, according to a recent academic study that measured how well they followed Europe's upcoming AI law.
For much of this year already, AI researchers have been looking at new paths for improving their models that don't just involve throwing more data and computing power at them. One approach is to focus on enhancing a model after it has been trained, in the so-called inference phase. This can involve giving a model extra time to process multiple possibilities before settling on an answer, and it's why OpenAI described its most recent model, o1, as being better at reasoning.
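As a quick aside from me, here is one way to picture that inference-phase idea. This is a minimal sketch of best-of-N sampling, one common flavor of inference-time compute, and not a description of how o1 actually works; generate_candidate and score_answer below are hypothetical stand-ins for a model's sampling call and a verifier or reward model, with no particular API assumed.

# A minimal sketch (not OpenAI's actual method) of spending extra compute at
# inference time via best-of-N sampling: draw several candidate answers and
# keep the one a scorer likes best. generate_candidate and score_answer are
# hypothetical stand-ins for a model's sampling call and a verifier/reward model.
import random

def generate_candidate(prompt: str, seed: int) -> str:
    """Hypothetical stand-in for one stochastic model completion."""
    rng = random.Random(seed)
    return f"candidate-{rng.randint(0, 999)} for: {prompt}"

def score_answer(answer: str) -> float:
    """Hypothetical stand-in for a verifier or reward model scoring a candidate."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Sample n candidates and keep the best-scored one; more candidates means
    more inference-time compute and, with a real scorer, usually a better answer."""
    candidates = [generate_candidate(prompt, seed=i) for i in range(n)]
    return max(candidates, key=score_answer)

if __name__ == "__main__":
    print(best_of_n("Summarize the S-curve argument in one sentence."))

The only point of the sketch is that quality can be bought with more compute at answer time rather than with more training.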
The beauty of the S-curve is that it can give everyone else some breathing room. Instead of clamoring for the latest tech that will give them an edge over their competitors, companies that have been experimenting with generative AI and grappling with ways to boost their productivity now have time to redesign their workflows and business processes to better capitalize on current AI models, which are already powerful.
Remember, it took years for businesses to reorganize themselves around computers in the 1980s. Stanford University professor Erik Brynjolfsson's writing on the productivity paradox points out that output often appears to stall or drop when major new technologies arrive, before surging. A pause for AI gives businesses more space in that all-important investment phase. It also gives regulators time to design more effective guardrails.
The European Union's AI Act, which companies will be subject to from 2026, needs to be more specific in how it defines harms. As standards bodies do that work, it helps that newfangled models bringing a batch of unexpected problems aren't about to flood the market. Generative AI has been on a bullet train during the past two years, and the momentum has clearly been lucrative for tech giants. A slowdown at the station offers a much-needed break for everyone else.
Today's episode is brought to you by Vanta. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever. Vanta automates compliance for ISO 27001, SOC 2, GDPR, and leading AI frameworks like ISO 42001 and NIST AI Risk Management Framework, saving you time and money while helping you build customer trust.
Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center, all powered by Vanta AI. Over 8,000 global companies like LangChain, Leela AI, and Factory AI use Vanta to demonstrate AI trust and prove security in real time. Learn more at vanta.com slash NLW. That's vanta.com slash NLW.
Today's episode is brought to you as always by Super Intelligent. Have you ever wanted an AI daily brief but totally focused on how AI relates to your company? Is your company struggling with AI adoption, either because you're getting stalled figuring out what use cases will drive value, or because the AI transformation that is happening is siloed in individual teams, departments, and employees and not able to change the company as a whole?
Super Intelligent has developed a new custom internal podcast product that inspires your teams by sharing the best AI use cases from inside and outside your company. Think of it as an AI daily brief, but just for your company's AI use cases. If you'd like to learn more, go to bsuper.ai slash partner and fill out the information request form. I am really excited about this product. So I will personally get right back to you. Again, that's bsuper.ai slash partner.
All right, so that is the piece. And of course, what I want to hone in on is this framing of the idea that this can be an opportune moment, particularly for business. I'm going to hold aside the discussion of guardrails and policy in the European Union and all that, and instead focus on this idea that enterprises and big companies have been struggling to catch up to the pace of generative AI.
And on this point, there is absolutely no debate. Even the organizations that right now feel that they are far ahead of their competitors still feel behind relative to the opportunity that lies in front of them. If you look at the speed with which big business has adopted generative AI, fully embracing its potential, attempting to create new structures to integrate it, it is easily the fastest adoption cycle that we've ever seen when it comes to technology in the enterprise.
And yet it still lags, simply because of how totally all-encompassing the change really is. This is not a shift from a pre-AI set of business processes to a post-AI set of business processes. It is a paradigm shift in how often, how broadly, and how deeply organizations have to change. AI, by definition, is a technology that speeds up its own next development.
Just as enterprises get comfortable with one new set of workflows, something totally new is going to be coming down the pipeline. And that's why adoption and utilization can't be thought of as a one-time process. Instead, a new infrastructure for ongoing adaptation is what's required. And I agree that to the extent that we are actually getting to a model plateau, it gives enterprises and organizations a chance to get a little closer.
to building that infrastructure and the systems that come with it to actually keep up with the pace of innovation in generative AI. So what does this mean in practice, though? Well, I think it means a couple of things. First of all, I think that the enterprises that succeed are going to be the ones who view this not as a one-time shift, but as a new operational modality,
and who endeavor to go build systems for change. Systems by which they can integrate new business processes, understand what's working and what isn't, and quickly scale what is working across the whole organization.
Another thing that I believe will separate business winners from losers in this transition is going to be in how different companies define success. Those who define AI strictly as an efficiency technology and are content with the same outputs, just with fewer inputs or faster inputs, I think are going to be initially thrilled and then later disappointed.
They're going to be disappointed because they're going to see their peers who instead view AI as an opportunity creation technology, race ahead of them offering new products, new services, a new layer of customer support and success never before possible, and generally embrace the transformative capabilities of AI rather than just hoping it makes what they do cheaper.
The question, of course, is how? If you are an organization that believes all these things that I'm saying organizations need to believe, how do you put it into practice? In a word, it's systems, systems, systems, systems. First, you need systems for examining and reviewing all existing business workflows and processes on an individual, team, business unit, department, whatever, all the levels.
Second, you need a system and an infrastructure for understanding what the alternatives are and how those alternatives change week over week and month over month. You need the ability to map all of these new offerings against what people are already doing and also against what people wish that they could be doing.
Enterprises need systems for monitoring all of the experiments that are happening, both big and small, again, on an individual level and on a team level. They need systems and tools for processing all the information that comes out of those experiments and pilots and determining what the new set of insights, best practices, and new and improved processes are. Enterprises need systems to scale that, systems for taking what is working and spreading it.
Theoretically, the moment that someone discovers an incredibly valuable new use case in one part of the organization, there really shouldn't be very many barriers to getting it everywhere across the organization in short order. Of course, there are when you don't have systems. Now, of course, this is directly what we are spending all of our time on at Super Intelligent, so I am pretty deep in the weeds with what these types of systems might look like in the future. However, as with so much, I think the answer is not to get it perfect, but to start by starting and just move.
Ultimately, what this essay says, and the point that I agree with, is that to the extent that there is a reprieve in the speed of technological innovation, it is one that should not be used to slow down and catch your institutional breath, but instead to try to race and make up some of the distance between where you are and where AI is.
Both Super Intelligent and I are, of course, around if you need any help on that journey. But for now, that is going to do it for today's AI Daily Brief. Appreciate you listening, as always. Hope you're having a great weekend. Until next time, peace.