The market absolutely cratered yesterday in fear of what DeepSeek represented, but should it have? Today we are separating hype from hyperbole when it comes to this Chinese AI model that has everyone talking. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.
All right, friends. Well, here we are, day two of DeepSeek coverage. I think we might be starting to get saturated here, but when I was looking across the news today, it was very clear that we needed at least this one follow-up to really deal with the second-day analysis of DeepSeek, particularly because of the market implications and the fact that the president of the United States is now talking about it.
So what we're going to do today, like I said, is try to separate hype from hyperbole, give some of the updates, and try to come out the other side with an understanding of how we should be thinking about DeepSeek and what it means for the AI industry, so that we can get back to talking about other things, by which, of course, I mean agents.
Kicking it off, though, like I said, Wall Street was in an outright panic as soon as the market opened. Ultimately, the Nasdaq fell by 3%, led by a 17% wipeout for Nvidia. Nearly $600 billion was wiped off of Nvidia's market cap, the largest one-day loss of value for a single company in stock market history.
Overall, the Nasdaq lost a trillion dollars in value. Putting it in perspective, Wall Street editor for The Economist Mike Bird wrote: in total market cap, the Nvidia sell-off today is a little bit bigger than if the entire listed market of Mexico went to zero. And a lot of the coverage on the first day or two that this went mainstream took things kind of at face value. Vey-Sern Ling, managing director at Union Bancaire Privée, said: DeepSeek shows that it is possible to develop powerful AI models that cost less.
It can potentially derail the investment case for the entire AI supply chain, which is driven by high spending from a small handful of hyperscalers. Others thought that this portended something even bigger on the horizon. Perpetual curmudgeon and existential blowhard, and also someone who has me blocked on X, Nassim Taleb thinks that this Nvidia rout is just a hint of what's coming down the line.
Bloomberg reports that Taleb said too many investors have been bidding up prices of firms related to AI without properly knowing the details of how it functions or is able to succeed. He described technology firms as gray swans because investors underestimate the deviations in their prices that are possible in a day. Doomsayers gotta doomsay, right?
More sober but still concerned analysis was reflected in this Bloomberg piece, Meta and Microsoft Show AI Spending Could Be a Double-Edged Sword, an all-things-considered, rational, and somewhat calmer discussion of the good and the bad of big spending from the Magnificent Seven in the AI space.
Speaking of the Magnificent Seven, there are also plenty of people out there who are effectively making an argument similar to the one I made yesterday, which is that it feels very much like the market was looking for a reason to reprice Nvidia in particular, but also the Magnificent Seven and big tech stocks in general. The way that I've viewed the market for the last few years is that the introduction of ChatGPT happened to coincide with the end of the ZIRP era and the beginning of rate hikes.
Throughout basically the entire rate-hiking part of the cycle, AI enthusiasm was the bulwark against broader stock malaise. When the hiking cycle ended and the cutting cycle began, Wall Street had other things to look forward to, but Nvidia just kept performing so well that the market never really had a chance to reprice and take all of its hopes and dreams out of that particular stock and the handful of other stocks around it that represented something similar.
Every time we get any sort of catalyst that could possibly be a reason why Nvidia might not be the stock market's second coming, we get some sort of reaction like this. Max Gokhman, senior VP at Franklin Templeton, put it even more simply, saying today's move showed just how precarious this market setup is: when valuation is stretched to the sky, it's easier for small tremors to make the entire market rumble.
But outside of my contention that the stock market was just stock marketing, what are other reasons why we might not want to be as scared as these market investors clearly were yesterday? First of all, there continues to be a loud chorus who just simply do not believe that we're getting the full story when it comes to what DeepSeek actually has done.
Former Nvidia leader Bojan Tunguz said: So you're telling me that a Chinese hedge fund released an LLM with unverified claims about its training setup and efficiency, which ended up wiping out trillions of dollars in the U.S. stock market, and we're supposed to believe there's nothing shady going on? Neeraj Agrawal writes: Is everything selling off because of something a Chinese company self-reported? Investor Jeff Lewis memed it even harder: COVID came from a bat = DeepSeek cost $5.57 million to train.
Then again, AngelList founder Naval Ravikant responded to Jeff, saying smart technical teams are already starting to confirm that the techniques and resulting cost savings are real. And so perhaps a better argument than a China-based conspiracy is just that there seems to be a fundamental misunderstanding of how this is likely to impact the demand for compute. Jon Stokes writes: What R1 does is a new type of scaling. It's also GPU-intensive. In fact, the big mystery today in AI world is why Nvidia dropped despite R1 demonstrating that GPUs are even more valuable than we thought they were.
No part of this is coherent. Nvidia made that point as well. A spokesperson called DeepSeek's R1 model a quote, excellent AI advancement. They said DeepSeek's work illustrates how new models can be created using that technique, leveraging widely available models and compute that is fully export control compliant. The spokesperson added: inference requires significant numbers of Nvidia GPUs and high-performance networking. We now have three scaling laws: pre-training and post-training, which continue, and new test-time scaling.
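To make the test-time scaling point concrete, here is a minimal sketch, in Python, of the generic best-of-N idea behind inference-time scaling. This is an illustration only, not DeepSeek's or Nvidia's actual method: `generate` and `score` are hypothetical stand-ins for a model call and a verifier, but the structure shows why answering a single question can take many GPU-heavy forward passes instead of one.

```python
# Illustrative sketch of test-time (inference-time) scaling via best-of-N sampling.
# NOT DeepSeek's or Nvidia's actual implementation; generate() and score() are
# hypothetical stand-ins for an LLM call and a verifier/reward model.
import random

def generate(prompt: str) -> str:
    # Stand-in for sampling one chain-of-thought completion from a model.
    return f"candidate answer {random.randint(0, 1_000_000)}"

def score(prompt: str, answer: str) -> float:
    # Stand-in for a verifier or reward model judging the candidate.
    return random.random()

def answer_with_test_time_scaling(prompt: str, n_samples: int = 32) -> str:
    # n_samples forward passes instead of one: more inference compute per query,
    # typically better answers -- the "new scaling law" at work.
    candidates = [generate(prompt) for _ in range(n_samples)]
    return max(candidates, key=lambda c: score(prompt, c))

print(answer_with_test_time_scaling("Prove that 17 is prime.", n_samples=8))
```

The more samples you draw per question, the better the answers tend to get and the more inference compute you burn, which is exactly the dynamic the Nvidia spokesperson is pointing at.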
Somewhat more crassly, LaCoya Cap writes: Everyone has the DeepSeek thing literally ass-backwards. The hyperscalers and frontier LLMs will learn slash use whatever they can from it, along with everything else on their roadmaps, to make their models even better and then exponentiate them with these looming gigaclusters. There's virtually no scenario in which they actively want less compute as their models improve. The better they get, the better they can be productized. Those products need to be served to more customers, et cetera. DeepSeek just made AGI, and all of the associated infrastructure needed to serve it, more likely sooner.
Ishan actually thinks that a lot of the bluster is what he calls an over-rotation because of China. He writes: DeepSeek could have come out of some lab in the US Midwest. Like, some CS lab couldn't afford the latest Nvidia chips and had to use older hardware, but they had a great algo and systems department, and they found a bunch of optimizations and trained a model for a few million dollars, and lo, the model is roughly on par with o1. Look everyone, we found a new training method and we optimized a bunch of algorithms. Everyone is like, oh wow, and starts trying the same thing. Great week for AI advancement. No need for US markets to lose a trillion in market cap.
The tech world, and apparently Wall Street, is massively over-rotated on this because it came out of China. I get it, after everyone has been sensitized over the H-1B uproar, we're conditioned to think of OMG-immigrants-China as some kind of alien other, as though the alien-other Chinese researchers are doing something special that's out of reach. And now China the empire is somehow uniquely in possession of super-efficient AI power and the US companies can't compete. Like, no, these guys are basically working on the same problems we are in the US, and not only that, they wrote a paper about it and open-sourced their model.
It's not actually some sort of tectonic geopolitical shift. It's just some nerds over there saying, hey, we figured out some cool shit. Here's how we did it. Maybe you'd like to check it out. And so his argument overall is that this is less of a Sputnik moment. Sputnik, he wrote, showed that the Soviets could do something the US couldn't, and, by the way, they didn't publish all of the technical details and hand over the blueprints. Instead, he thinks the better analogy is the Google moment in 2004. He writes: DeepSeek is much more like the Google moment, because Google essentially described what it did and told everyone else they could do it too.
So while the stock market may be overreacting, and in fact Tom Lee called this the worst overreaction since the 2020 pandemic outbreak, it has definitely raised competitive spirits.
AI czar David Sacks writes: DeepSeek R1 shows that the AI race will be very competitive and that President Trump was right to rescind the Biden EO, which hamstrung American AI companies without asking whether China would do the same. I'm confident in the U.S., but we can't be complacent. OpenAI's Sam Altman wrote: DeepSeek's R1 is an impressive model, particularly around what they're able to deliver for the price. We will obviously deliver much better models, and also it's legit invigorating to have a new competitor. We will pull up some releases.
But mostly we're excited to continue on executing our research roadmap and believe more compute is more important now than ever before to succeed at our mission. The world is going to want to use a lot of AI and really be quite amazed by the next gen models coming. Look forward to bringing you all AGI and beyond.
Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex.
That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001, centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk. Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly.
Plus, with automation and AI throughout the platform, Vanta gives you time back, so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time. For a limited time, this audience gets $1,000 off Vanta at vanta.com slash NLW. That's V-A-N-T-A dot com slash NLW for $1,000 off.
If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.
That's why Super Intelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.
If you are interested in the agent readiness and opportunity audit, reach out directly to me at www.bsuper.ai. Put the word agent in the subject line so I know what you're talking about, and let's have you be a leader in the most dynamic part of the AI market.
Hello, AI Daily Brief listeners. Taking a quick break to share some very interesting findings from KPMG's latest AI quarterly pulse survey. Did you know that 67% of business leaders expect AI to fundamentally transform their businesses within the next two years? And yet, it's not all smooth sailing. The biggest challenges that they face include things like data quality, risk management, and employee adoption. KPMG is at the forefront of helping organizations navigate these hurdles.
They're not just talking about AI, they're leading the charge with practical solutions and real-world applications. For instance, over half of the organization surveyed are exploring AI agents to handle tasks like administrative duties and call center operations. So if you're looking to stay ahead in the AI game, keep an eye on KPMG. They're not just a part of the conversation, they're helping shape it. Learn more about how KPMG is driving AI innovation at kpmg.com slash US.
But one person who is firmly in the camp that this is generally a good thing, and that making AI cheaper is a good thing, is President Trump himself, who spoke about DeepSeek explicitly in an appearance yesterday. And we'll come back in a moment to this idea that the real winners in all of this are us, the consumers. However, it's important to detour for just a moment while we're on the China subject, because a lot of the additional reactions were a reminder of what it actually means to use these models.
Luke de Pulford writes: Just FYI, DeepSeek collects your IP, keystroke patterns, device info, etc., etc., and stores it in China, where the data is vulnerable to arbitrary requisition from the state. He then pointed to their own privacy policy, where they say exactly this. OpenAI's Stephen Heidel writes: Americans sure love giving their data away to the CCP in exchange for free stuff.
Investor Joshua Kushner writes: Pro-America technologists openly supporting a Chinese model that was trained off of leading U.S. frontier models, with chips that likely violate export controls, and, according to their own terms of service, takes U.S. customer data back to China. We also saw discussion around what this all meant for those export controls. The Financial Times ran a very representative opinion piece titled US export controls have forced Chinese tech companies to be more innovative.
Miles Brundage writes, unfortunately, this narrative won't die, and I'm extremely concerned that the Trump administration might believe it and be pressured by NVIDIA to believe it. To be clear, the U.S. reversing export controls is the absolute best possible outcome for DeepSeek.
Miles had tweeted back in December: DeepSeek uses compute efficiently. That means export controls are counterproductive. Gotcha. Let's take away American AI companies' compute to make them efficient. Wait, what? The flip side of this is that even with concerns around China having access to more users' data, many pointed out that because DeepSeek released the open model and the API at the same time, people didn't have to use the DeepSeek app itself.
Perplexity's Aravind Srinivas writes: The world's most powerful reasoning model, DeepSeek R1, with reasoning traces, is now on Perplexity, supporting your daily deep web research. Enjoy.
Samuel Hammond also made a similar point. Commenting on that specific op-ed that I just mentioned, he writes: Chain-of-thought and inference-time techniques make access to large amounts of compute more relevant, not less, given the trillions of tokens generated for post-training.
Also, we're barely one new chip generation into the export controls, so it's not surprising China quote-unquote caught up. The controls will only really start to bind and drive a delta in the US-China frontier this year and next. DeepSeek's CEO has himself said that chip controls are their biggest blocker. The export controls also apply to semiconductor manufacturing equipment, not just chips. DeepSeek is not a Sputnik moment. Their models are impressive, but within the envelope of what an informed observer should expect.
Imagine if US policymakers had responded to the actual Sputnik moment by throwing their hands in the air and saying, oh well, might as well remove the export controls on our satellite tech. It would be a complete non-sequitur.
Now, one thing that the technology industry loved about the DeepSeek announcement was that it was a true open-source release, complete with the API. What's more, this allows for integrations into other services that aren't just going to be handing data over to the CCP. Perplexity CEO Aravind Srinivas writes: Essentially, the DeepSeek you get within Perplexity Pro searches is American, both in values, no censorship, and in hosting and storage of your data. Someone asked: But you're still subject to the internal censorship that was trained into the DeepSeek AI model, right?
Aravind writes: No, pointing to a Pro search for who is the president of Taiwan that actually gives the answer. Friend of the show Venice AI has also integrated DeepSeek. CEO Erik Voorhees writes: If you want to use DeepSeek but don't want all your convos going to the CCP, use Venice.ai. All convos are private, stored only in your local browser. Now, this doesn't answer the concern entirely. At the time of recording, DeepSeek is still the top free app on the Apple App Store, ahead of ChatGPT, Threads, Gemini, you name it.
When Aravind again tried to explain to investor Bill Ackman why the fact that they could put the DeepSeek model in an American shell like Perplexity made it less of a security concern, Ackman said that most users will just download the app and not go through the trouble of the workaround described above, which, based on those App Store results, is a legitimate concern.
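For what it's worth, the reason an American shell is even possible is that the R1 weights are openly published, so any US-hosted provider can serve them behind a standard OpenAI-compatible endpoint, and your prompts never touch DeepSeek's own servers. Here is a minimal sketch of what that looks like; the base URL and model name are placeholders, not the actual configuration Perplexity or Venice use.

```python
# Minimal sketch: calling an open-weights DeepSeek R1 deployment hosted by a
# US provider through an OpenAI-compatible API. The endpoint and model name
# below are hypothetical placeholders; swap in whichever host you actually use.
from openai import OpenAI

client = OpenAI(
    base_url="https://us-hosted-provider.example/v1",  # hypothetical US-hosted endpoint
    api_key="YOUR_PROVIDER_KEY",
)

response = client.chat.completions.create(
    model="deepseek-r1",  # placeholder model identifier; varies by host
    messages=[{"role": "user", "content": "Who is the president of Taiwan?"}],
)

print(response.choices[0].message.content)
```

The app-store concern still stands, of course; this only helps the subset of users willing to go through an API or an alternative front end rather than the official app.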
DeepSeek, for their part, have taken advantage of their viral moment, actually deciding to release a set of new image models. Called Janus Pro, the models can function as both standalone image generators and image analysis tools for multimodal AI. DeepSeek are claiming that the models outperform OpenAI's DALL-E 3 and Stability AI's Stable Diffusion XL on leading benchmarks. The models don't seem to have been stacked up against xAI's Aurora or Black Forest Labs' Flux model. Like R1, the model comes with a unique architecture, which DeepSeek describes as a quote novel autoregressive framework that can both analyze and create images.
DeepSeek also claimed efficiency improvements over rival models, stating that they were, quote, aiming to achieve a balance between performance and computational cost. DeepSeek has also had to temporarily limit new user registrations, apparently due to a large-scale malicious attack, although some are wondering if it's just their infrastructure struggling to keep up with peak demand.
And so where are we left after all of this? When it comes to the markets, as I said, I think it's an overreaction. I am firmly in the camp that believes that a reduction in cost of AI is going to increase demand for AI and that demand is going to need more compute to be serviced. Taking that a step farther, though, some have pointed out that not only is Wall Street not necessarily thinking about this in the right way long term, but that even if they are, there's simply no denying that the cost of intelligence going down is great for consumers.
Investor Haseeb Qureshi writes: Intelligence is now way cheaper than we thought. This is great for all consumers of AI, meaning you and me. Remember, the Nasdaq is an index of producers, not consumers. The price of oil plummeting is bad news for oil companies, but great for those of us who drive.
Investor Vijay Reddy extended the idea of Jevons paradox, which we talked about yesterday, and made it clear that it's potentially even better than we think. He writes: So what is Jevons' AI paradox? As the cost of AI goes down, the usage of AI goes up; making a resource cheaper increases overall consumption, sometimes more than you'd expect.
We've seen this play out across storage, virtualization, cloud computing, and now AI. Take AI agents, for example, and let's extend Jevons paradox to an agent paradox. As the cost and latency of AI are driven down, we start developing agent-like systems that act more autonomously. As AI gets cheaper faster, we will simultaneously push for more complex reasoning and autonomy, two trends that can be in tension. This is especially true in complex multi-agent systems, where we need better reasoning and hence more computation to reduce butterfly effects and compounding errors.
Ultimately, he writes, there's near-unlimited demand for compute, and we're just scratching the surface with multimodal models, agents, embodied AI, et cetera. His point, ultimately, is that when it comes to agents, the cost and latency of AI going down is not only going to increase the usage of agents, it's going to improve their reasoning.
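To see why cheaper intelligence can mean more total compute spend rather than less, here is a toy Jevons-paradox calculation with made-up numbers; the specific figures are purely illustrative and not drawn from anyone's actual data.

```python
# Toy illustration of Jevons paradox applied to AI compute (hypothetical numbers):
# if the cost per unit of intelligence falls 10x but that cheapness unlocks 30x
# more usage, total spend on compute goes up, not down.
cost_per_token_before = 10.0   # arbitrary cost units
cost_per_token_after = 1.0     # 10x cheaper

tokens_demanded_before = 1_000
tokens_demanded_after = 30_000  # assumed demand response to cheaper intelligence

spend_before = cost_per_token_before * tokens_demanded_before   # 10,000
spend_after = cost_per_token_after * tokens_demanded_after      # 30,000

print(f"Total spend before: {spend_before}, after: {spend_after}")
# Cheaper AI, 3x more total compute consumed: the Jevons dynamic in miniature.
```

If demand grows faster than cost falls, total spend rises, which is the crux of the argument that DeepSeek-style efficiency gains are bullish, not bearish, for compute.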
I think when all is said and done, the big impact of DeepSeek over the last week has been to shake off the dust and cobwebs of a priori assumptions we didn't even realize we were making across the AI industry. It is supercharging competitive dynamics, making it easier for AI startups to experiment with new products, and generally likely to accelerate the next wave of what AI can achieve. Not without risk, not without cost, not without challenge, but still, net-net, it's hard not to see it as a pretty exciting time, at least from the consumer's seat.
I hope and I believe, guys, that this will be the last time I have to talk about this in quite this depth for some time. I have some cool things coming up later this week, including an interview with the founding engineer of NotebookLM. For now, though, that is going to conclude this second day of DeepSeek coverage. Appreciate you listening or watching, as always, and until next time, peace.