Today on the AI Daily Brief, why DeepSeek changes everything in AI. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.
Hey, hello, friends. Quick note before we dive in. I had originally planned to do a normal episode divided between the headlines and the main story, but everything today is so much about this big DeepSeek R1 news that the episode ended up being much longer than normal, so I decided to just focus on that. We will be back with our normal format tomorrow, presumably. But for now, let's just dig into what everyone is talking about, which is DeepSeek, and just how big a deal it actually is.
Welcome back to the AI Daily Brief. If you have spent any time online over the last few days, and if you follow any AI sources, you might have seen sentiment like this one from Frank DeGods.
Signal writes, I've been running DeepSeek locally for a few days and it's absolutely on par with o1 or Sonnet. I've been using it non-stop for coding and other tasks, and what would have cost me a fortune through APIs is now completely free. This feels like a total paradigm shift. Investor Nic Carter writes, DeepSeek just accelerated AGI timelines by five years. So focus on the gym; knowledge work is obsolete, muscles are all that's left.
The Superhuman newsletter writes, DeepSeek's R1 stunned Silicon Valley. China's new DeepSeek R1 model has shocked Silicon Valley, and many long-held assumptions about Chinese innovation and AI have evaporated overnight. Some are calling it a hoax, while others are calling it a gift to humanity. So what the heck are we talking about? Well, if you've been listening closely, you've probably heard me talk about DeepSeek before. In December, we started hearing about their models, which were performing really well at apparently a fraction of the training cost of big models from companies like OpenAI.
Last Monday, the lab released their reasoning model R1. And while it was immediately obvious that the model was good, benchmarking at a similar standard to OpenAI's o1 and Google's Gemini 2.0, as the week progressed, it started to become clear that something bigger was happening. A post on Blind, an anonymous professional social network, circulated on Thursday. It was entitled Meta GenAI Org in Panic Mode. It read, engineers are moving frantically to dissect DeepSeek and copy anything and everything we can from it. I'm not even exaggerating.
Management is worried about justifying the massive cost of the GenAI org. How would they face the leadership when every single leader of the GenAI org is making more than what it cost to train DeepSeek V3? We have dozens of such leaders. DeepSeek R1 made things even scarier.
So the big thing going on here, and the reason that the AI industry is so freaked out, is cost. DeepSeek claimed that their V3 LLM was trained for $5.6 million over three months. Frontier model training at US labs is closer to half a billion dollars for o1-class models, and likely in the billions for the next generation of training runs.
We don't have solid estimates of the post-training cost to create the R1 model, but it seems reasonable to think that the budget was similarly tight. Some tech executives are openly dismissive of these claims. In an interview last week, Scale AI CEO Alexandr Wang said that his understanding is that DeepSeek has a cluster of 50,000 top-of-the-line NVIDIA H100 chips in breach of export controls. The V3 paper claims the model was trained on a cluster of just 2,000 NVIDIA H800s, the down-rated version of the chip that's allowed to be exported.
Earlier this month, the South China Morning Post reported that DeepSeek has 10,000 NVIDIA GPUs, but didn't go into specifics about the chips. And for as unbelievable as it is, there are some reasons to believe the claims about their rock-bottom training costs. Quant trader Jeffrey Emanuel broke down the innovations in their training methods in a blog post. Here's part of that explanation, although it's worth reading in its entirety. Jeffrey writes, A major innovation is their sophisticated mixed-precision training framework that lets them use 8-bit floating point numbers, FP8, throughout the entire training process.
Most Western AI labs train using full-precision 32-bit numbers. This basically specifies the number of gradations possible in describing the output of an artificial neuron. 8 bits in FP8 lets you store a much wider range of numbers than you might expect. It's not just limited to the 256 different equal-sized magnitudes that you get with regular integers, but instead uses clever math tricks to store both very small and very large numbers, though naturally with a lot less precision than you get with 32 bits.
The main trade-off is that while FP32 can store numbers with incredible precision across an enormous range, FP8 sacrifices some of that precision to save memory and boost performance while still maintaining enough accuracy for many AI workloads. And if that was Greek to you, don't worry. Y Combinator partner Jared Friedman writes, lots of hot takes on whether it's possible that DeepSeek made training 45 times more efficient, but Jeffrey Emanuel wrote a very clear explanation of how they did it.
Once someone breaks it down, it's not that hard to understand. Rough summary: use 8-bit instead of 32-bit floating point numbers, which gives massive memory savings; compress the key-value indices, which eat up much of the VRAM; do multi-token prediction instead of single-token prediction, which effectively doubles inference speed; and use a mixture-of-experts model that decomposes a big model into small models that can run on consumer-grade GPUs.
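To make the first item, the precision-for-memory tradeoff, concrete, here's a minimal sketch. NumPy has no native FP8 dtype, so float16 stands in for the low-precision format; the shape of the tradeoff is the same, just less extreme than FP8 versus FP32.

```python
import numpy as np

# float16 stands in for FP8 here (NumPy has no FP8 dtype): fewer bits
# per weight means less memory and bandwidth, at the cost of precision.
weights_fp32 = np.random.randn(1_000_000).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes // 1024, "KiB at 32-bit")
print(weights_fp16.nbytes // 1024, "KiB at 16-bit")  # half the memory

# The error introduced by the low-precision round-trip is tiny relative
# to typical weight magnitudes, which is why training can tolerate it.
err = np.abs(weights_fp32 - weights_fp16.astype(np.float32)).max()
print("max round-trip error:", err)
```

Going from 16-bit down to 8-bit doubles the savings again, which is where the scaling tricks Emanuel describes come in to keep the error manageable.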
Point being that it's not like this is a black box where we have no idea what's going on. There's some amount of explanation for how this could actually be. Still, whatever the truth about their training cluster, DeepSeek is serving the model at rock-bottom prices. API access for R1 is priced at around 3% of OpenAI's o1.
Over the weekend, X was filled with examples of people accessing the model at high volumes for fractions of a cent per query. Henry writes, I've made over 200,000 requests to the DeepSeek API in the last few hours. Zero rate limiting, and the whole thing cost me like 50 cents. Over the weekend, the mindshare really broke through. DeepSeek's phone assistant reached number one in the App Store, and the model has racked up around 150,000 downloads on Hugging Face, topping the trending list.
What's more, because the model is open-source and has a novel design for efficient inference, it can be run on a wide range of consumer devices. AI researcher Harrison Kinsley was able to run the full model on his admittedly beefy workstation with 1TB of RAM. Others were running smaller distilled versions of the model on phones and on laptops.
Now at this point, seemingly everyone in Silicon Valley has a take about what DeepSeek has achieved and what it means for the AI industry. Marc Andreessen of Andreessen Horowitz writes, DeepSeek R1 is one of the most amazing and impressive breakthroughs I've ever seen, and as open source, a profound gift to the world. He returned later in the weekend to declare, DeepSeek R1 is AI's Sputnik moment.
I imagine pretty much everyone here is familiar with the reference, but Sputnik was of course the first-ever satellite. Its 1957 launch signaled that the Soviet Union was leading the US in the space race, which came as a shock to the United States and kickstarted the Apollo program. In short, it was the wake-up call during the Cold War that the US couldn't be complacent in the technological arms race.
Y Combinator President Garry Tan wrote, DeepSeek search feels more sticky even after a few queries because seeing the reasoning, even how earnest it is about what it knows and what it might not know, increases user trust by quite a lot. Indeed, the ability to view chain-of-thought reasoning seemed to be a pretty resonant moment for a lot of users, especially for those who have never paid to access o1.
Caspian on X writes, the normies think DeepSeek is cute because it shares its thought process, sharing a conversation where other folks are talking about how DeepSeek is, quote, so cute because it shares its thought process and talks to itself. As you might imagine, some think something nefarious is going on. Neal Khosla, CEO of Curai and son of Vinod Khosla, wrote, DeepSeek is a CCP state psyop and economic warfare to make American AI unprofitable.
They are faking that the cost was low to justify setting the price low, hoping everyone switches to it to damage AI competitiveness in the US. Don't take the bait.
Now, while many might find that a plausible theory, the one small piece of evidence that you might point to is that this is an introductory price. R1 is currently being served at an introductory rate of about one-tenth the cost of o1, but next month the cost will almost triple to roughly one-quarter of o1. Still, 4x cheaper is no joke. Regardless of which story you believe on DeepSeek, we've clearly entered a new era of competition in AI. There are now multiple models that are basically on par across US and Chinese labs.
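Using the episode's round numbers, not official price sheets, the arithmetic checks out:

```python
# The episode's framing (illustrative ratios, not exact API prices):
# R1's introductory rate is about one-tenth of o1's price, rising to
# about one-quarter of o1's price once the promotional period ends.
intro_fraction = 1 / 10   # introductory price relative to o1
post_fraction = 1 / 4     # post-promo price relative to o1

increase = post_fraction / intro_fraction  # 2.5x -> "almost triple"
still_cheaper = 1 / post_fraction          # still 4x cheaper than o1

print(f"price rises {increase:.1f}x, yet stays {still_cheaper:.0f}x cheaper")
```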
The biggest difference now is cost of inference, and DeepSeek is serving up the cheapest on the market. While the market reaction has been, let's say, terrified, and we'll get to that in just a moment, the reaction from big tech has not been fear but the thrill of opportunity. Microsoft CEO Satya Nadella writes, Jevons paradox strikes again. As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we can't get enough of.
Jevons paradox, a term you're going to hear a lot more over the next couple of days, refers to the phenomenon where technological progress leads to efficiency gains and cost reductions, but rather than reducing demand, it actually leads to a massive increase in demand. As an example, think about what happened to the demand for cloud storage as the cost became negligible.
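A toy calculation, with purely hypothetical numbers, shows how Jevons paradox could play out for inference: unit costs fall, but induced demand grows so much that total spending rises anyway.

```python
# Jevons paradox, toy numbers (entirely hypothetical, for illustration):
old_cost_per_query = 0.010   # dollars per query before the efficiency gain
new_cost_per_query = 0.001   # a 10x efficiency gain cuts the unit cost

old_queries = 1_000_000
new_queries = 30_000_000     # demand explodes as new uses become viable

old_spend = old_cost_per_query * old_queries   # ~ $10,000
new_spend = new_cost_per_query * new_queries   # ~ $30,000

# Total spending on inference rises even though each query got cheaper.
print(old_spend, "->", new_spend)
```

Whether demand actually grows faster than costs fall is the whole debate; the sketch just shows why cheaper inference doesn't automatically mean less spending on compute.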
As an open-source project, DeepSeek has fully described their methods, so theoretically there's nothing to hide. Hugging Face is currently replicating the model in their own training run, so we'll know soon enough. If successful, the methods will quickly be copied by every big tech firm and hundreds of startups. The implication of cost-efficient training and extremely good models is likely that the next era of AI is all about inference. In other words, companies are no longer just competing on the quality of their models; they're competing to deliver them as cheaply as possible.
Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex.
That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001. Centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk. Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly.
Plus, with automation and AI throughout the platform, Vanta gives you time back, so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time. For a limited time, this audience gets $1,000 off Vanta at vanta.com slash NLW. That's V-A-N-T-A dot com slash NLW for $1,000 off.
If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year.
And given how new this is, all of us are going to be back in pilot mode. That's why Super Intelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.
If you are interested in the agent readiness and opportunity audit, reach out directly to me at www.bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market.
Hello, AI Daily Brief listeners. Taking a quick break to share some very interesting findings from KPMG's latest AI quarterly pulse survey. Did you know that 67% of business leaders expect AI to fundamentally transform their businesses within the next two years? And yeah, it's not all smooth sailing. The biggest challenges that they face include things like data quality, risk management, and employee adoption. KPMG is at the forefront of helping organizations navigate these hurdles.
They're not just talking about AI, they're leading the charge with practical solutions and real-world applications. For instance, over half of the organization surveyed are exploring AI agents to handle tasks like administrative duties and call center operations. So if you're looking to stay ahead in the AI game, keep an eye on KPMG. They're not just a part of the conversation, they're helping shape it. Learn more about how KPMG is driving AI innovation at kpmg.com slash US.
Now, when it comes to market reactions, there has definitely been a response. The S&P 500 futures market was down more than 3% in overnight trading, and some amount of panic is absolutely setting in. The concern, of course, is that big tech has sunk hundreds of billions of dollars into AI infrastructure over the past few years, and seems likely to spend a trillion dollars this year. One argument is that DeepSeek has rendered all of those US GPUs worthless, as Chinese AI proves you can do it in a totally different way without all of that expensive capex.
Going back to that piece by Jeffrey Emanuel, he broke down the bear case for NVIDIA in that extensive post. He tackles a ton of areas where NVIDIA has excelled over recent years, from software to chip networking to raw performance. The logic is that competing chip makers are catching up quickly across multiple vectors. Couple this with a massive decrease in training costs and AI chips quickly become a commodity.
NVIDIA is the leader in producing top-of-the-line chips for training clusters, but if the focus shifts to delivering cheap inference, there are other companies that are much more competitive with NVIDIA in that space, and soon there will be many more. Investor Nic Carter writes, DeepSeek has completely upended the conventional wisdom around AI.
That conventional wisdom includes: China will only do closed-source and proprietary; Silicon Valley is the global nexus of AI development and has a huge head start; OpenAI has an unbeatable moat; you need to spend tens, maybe hundreds of billions for state-of-the-art model development; value will accrete to the models, the fat model hypothesis; the scaling hypothesis means model performance is a linear function of training inputs, cost, compute, data, and GPUs. All of these narratives are shaken if not completely undermined overnight.
And yet Nic also gets at the counterpoint in his next tweet, although he points out he does have a massive bag bias here. Why? He writes, when a commodity gets cheaper, the use of that commodity increases. So inference overnight becomes vastly more abundant.
DeepSeek's innovations will be rapidly incorporated by other model companies, so AI can be embedded cheaply everywhere. This probably shifts the ratio of training to inference in AI capex in favor of the latter, but I don't believe it undermines equity value in the firms that produce the inputs for inference, GPUs, data centers, et cetera. It just accelerates the transition from a pre-AI world to a fully embedded world. All of that said, the investor premise that the model companies, OpenAI, Anthropic, et cetera, are where equity value will accrete has a massive hole in it now.
I've always felt, and have said, that I thought model companies would be capital incinerators due to high-quality open source models and a race to the bottom, and I think that is more true now. But overall, I don't worry about the rest of the stack, whether it's the producers or the firms that are actually bundling up compute and selling it to the end user in the form of better consumer experiences. TL;DR, for most of you, no need to panic, although I think it will take the market some time to digest, and the ride will be bumpy in the near term.
YC's Garry Tan took this on as well, responding to a market analyst who said China's DeepSeek could represent the biggest threat to US equity markets, calling into question the utility of the hundreds of billions worth of capex being poured into the industry. Garry writes, Do people really believe this? If training models gets cheaper, faster, and easier, the demand for inference, actual real-world use of AI, will grow and accelerate even faster, which assures the supply of compute will be used.
Ben Thompson of Stratechery also makes this point. He writes, in the long run, model commoditization and cheaper inference, which DeepSeek has demonstrated, is great for big tech. A world where Microsoft gets to provide inference to its customers for a fraction of the cost means that Microsoft has to spend less on data centers and GPUs, or, just as likely, sees dramatically higher usage given that inference is so much cheaper. Another big winner is Amazon. AWS has by and large failed to make their own quality model, but that doesn't matter if there are very high-quality open source models that they can serve at far lower costs than expected.
Apple, and this is an interesting one to me, is also a big winner, Ben writes. Dramatically decreased memory requirements for inference make edge inference much more viable, and Apple has the best hardware for exactly that. Apple Silicon uses unified memory, which means that the CPU, GPU, and NPU have access to a shared pool of memory. This means that Apple's high-end hardware actually has the best consumer chip for inference.
Meta, meanwhile, is the biggest winner of all. I already laid out how every aspect of Meta's business benefits from AI. A big barrier to realizing that vision is the cost of inference, which means that dramatically cheaper inference and dramatically cheaper training, given the need for Meta to stay on the cutting edge, makes that vision much more achievable.
Google, he does say, is probably in worse shape. A world of decreased hardware requirements lessens the relative advantage they have from TPUs. More importantly, a world of zero cost inference increases the viability and likelihood of products that displace search. Granted, Google gets lower costs as well, but any change from the status quo is probably a net negative.
Still, as Ben points out, the reason the stocks are down is that, quote, it seems likely the market is working through the shock of R1's existence. The moment has unquestionable geopolitical ramifications as well. It's not the first time a Chinese lab has demonstrated cutting-edge capabilities, but it is the first time a Chinese model has grabbed this kind of mindshare. Importantly, R1 is competing on price in the same way that Chinese industries have outcompeted their US counterparts for several decades. This moment runs right up against the Trump administration's goal of US AI dominance and will kickstart a new chapter in the rivalry.
Investor Chamath Palihapitiya has a long thread explaining how the chessboard has changed, in his opinion. He covered the need to pivot to inference and export those chips aggressively to allies, and also warned that VCs have been asleep at the wheel and need to improve their capital discipline. He writes,
We've been running towards the big-money, shiny-object spending programs. AI is not the first and it likely won't be the last, where we, Team USA, have thrown hundreds of billions of dollars at a problem versus thinking through the problem more cleverly and using resource constraints as an enabler. Let's get our act together. We need all the bumbling middle managers out of the way. Let the engineers and the brilliant folks we have actually working on this stuff cook. More spending, more meetings, more oversight, more weekly reports, and the like does not equate to more innovation. Unburden our technical stars to do their magic.
A more joking take came from Jordi Hays. More seriously, some say that the battle here is not really about China vs. the US, but about open source vs. closed source. Meta's chief AI scientist Yann LeCun writes, To people who see the performance of DeepSeek and think China is surpassing the US in AI: you're reading this wrong. The correct reading is, open source models are surpassing proprietary ones. DeepSeek has profited from open research and open source.
They came up with new ideas and built them on top of other people's work. Because their work is published and open source, everyone can profit from it. That is the power of open research and open source. Menlo Ventures' Deedy Das had a contrarian take after running comparisons all weekend. He plotted R1's performance against OpenAI's o3 model and suggested it's probably better. Then again, this performance is extrapolated based on massively increased inference, so who really knows?
He did point out, however, quote, the China-is-crushing-the-US rhetoric totally forgets about Gemini 2.0 Flash Thinking: likely cheaper, longer context, and better at reasoning. Overall, what I think is unmistakable from this, holding aside the geopolitical implications, holding aside the stock market implications, is that intelligence has just gotten massively cheaper. There is no way that this doesn't drive prices down.
Professor Ethan Mollick writes, I think the market will adjust to any per-token cost decrease brought on by DeepSeek quite quickly. Costs for GPT-4-level intelligence dropped by 1,000x in the last 18 months. A 95% price drop in reasoning models seems not to be something that will break the labs.
Indeed, many are reminding us that this is exactly the type of situation where you want to get away from mainstream media and look more deeply at what people closer to the news are saying. NVIDIA's Dr. Jim Fan writes, The power of o1 in the palm of every coder's hand, to study, explore, and iterate upon. Ideas compound. The rate of compounding accelerates with open source. The pie just got much bigger faster.
We as one humanity are marching towards universal AGI sooner. Zero-sum game is for losers.
It is very tempting in AI to dismiss big headlines as hyperbole. So many of the threadboys and the YouTubers are just looking for the next dopamine hit of this-changes-everything. But in this case, DeepSeek, I think, might be as big a moment as people are feeling. My guess, though, is that it's not the moment that the market is reacting to, but the one that Dr. Jim Fan is pointing out here. No matter what, 2025 just got a heck of a lot more interesting. So come on back as we wait for the next crazy shoe to drop.
For now, that's gonna do it for today's AI Daily Brief. Until next time, peace.