How To Build The Future: Sam Altman
November 18, 2024
TLDR: YC President and CEO Garry Tan interviews Sam Altman of OpenAI about advances in machine learning, his prediction that ASI could arrive within a few thousand days, and guidance for founders navigating technological platform shifts.
In a thought-provoking podcast episode titled "How To Build The Future", Garry Tan interviews Sam Altman, CEO of OpenAI, about the transformative potential of technology, particularly artificial intelligence (AI). Below is a summary of the key concepts and insights shared during the conversation.
Introduction to the Age of Intelligence
Sam Altman argues that we are on the brink of monumental advances in AI, specifically Artificial Super Intelligence (ASI), which he believes is only thousands of days away. This assertion stems from the significant progress made in machine learning and AI capabilities, setting the stage for further breakthroughs.
Key Takeaways from Sam Altman's Perspective:
- Optimism in Technology Startups: Altman asserts that now is the best time to start a tech company. Each major technological revolution provides opportunities for new ventures to flourish, aided by dynamic shifts like those witnessed in the internet and AI sectors.
- Focused Conviction in Development: Starting OpenAI with an intense focus on achieving Artificial General Intelligence (AGI) was pivotal. The team prioritized deep learning models and maintained a steadfast approach against skepticism from established leaders in the field. This mix of conviction and focus aided their success.
The Promise of Abundance
The discussion pivots towards the concept of an "age of abundance"—a future where AI and robotics could enhance productivity across all sectors. Altman discusses two crucial resources necessary for this abundance: intelligence and energy.
- Potential Impacts: Achieving an abundance of energy could lead to revolutionary changes in how physical work is accomplished, allowing AI to improve efficiency and quality of life for everyone.
- Techno-optimism: Altman emphasizes a belief in technological solutions for major global issues such as climate change and sustainable energy, stating that we can achieve breakthroughs in other fields of science and innovation as well.
The Role of Peer Groups in Innovation
A powerful part of Altman's narrative is the importance of finding the right peer group when starting out in the tech industry. He reflects on his own experiences at Y Combinator (YC) and how surrounding oneself with ambitious founders fosters a supportive environment for growth and learning.
- Collaboration Over Competition: The synergy experienced within such supportive communities can significantly enhance one's entrepreneurial journey, providing both motivation and inspiration.
Historical Context of OpenAI
Altman shares insights into the initial formation of OpenAI, emphasizing how the team started with a bold vision and faced considerable skepticism but chose to focus on what they knew would work—deep learning and scaling models.
- Transformational Moments: The evolution from initial ideas to breakthroughs like GPT-3 exemplifies how dedication, experimentation, and iterative learning shape significant technological advancements. Through commitment to their goals, Altman’s team managed to navigate challenges and realize substantial milestones.
Navigating the Future of AI
Looking forward, Altman presents a roadmap for the development of AGI, illustrating five potential levels of AI capabilities:
- Level 1: Chatbots
- Level 2: Reasoners, systems capable of advanced reasoning
- Level 3: Agents, AI that can perform longer-term tasks and interact with its environment
- Level 4: Innovators, AI that can act like a scientist, autonomously exploring poorly understood phenomena
- Level 5: Organizations, AI operating at the scale of an entire company
Advice for Aspiring Founders
For those looking to enter the tech landscape, Altman provides essential tips:
- Leverage the Current AI Wave: Start building with advanced AI tools, as capabilities are developing rapidly. Founders should embrace the speed and agility allowed by modern technology to bring their ideas to fruition.
- Maintain Business Fundamentals: Despite the allure of innovative technologies, remember that building a sustainable business requires enduring value—startups must avoid the trap of thinking that technology alone will yield success.
Conclusion
Sam Altman’s insights provide a roadmap for understanding the current state and future of AI. His call to action encourages founders to seize the moment, embrace change, and engage deeply with technology to realize the monumental potential ahead. As AI continues to evolve, it challenges existing paradigms and opens new pathways for innovation.
Through engaging discussions with visionary leaders like Sam Altman, we gain invaluable perspectives on the future landscape of technology, positioning ourselves to navigate this transformative era thoughtfully and decisively.
We said from the very beginning we were going to go after AGI at a time when in the field you weren't allowed to say that because that just seemed impossibly crazy. I remember a rash of criticism for you guys at that moment. We really wanted to push on that.
And we were far less resourced than DeepMind and others. And so we said, OK, they're going to try a lot of things. And we've just got to pick one and really concentrate. And that's how we can win here. Most of the world still does not understand the value of a fairly extreme level of conviction on one bet. That's why I'm so excited for startups right now. It is because the world is still sleeping on all this to such an astonishing degree.
We have a real treat for you today. Sam Altman, thanks for joining us. This is actually a reboot of your series, How to Build the Future. And so welcome back to the series that you started. That was like eight years ago. I was trying to think about that. Something like that. That's wild. I'm glad it's being rebooted. That's right. Let's talk about your newest essay on the age of intelligence. You know, is this the best time ever to be starting a technology company?
Let's at least say it's the best time yet. Hopefully there'll be even better times in the future. I sort of think with each successive major technological revolution, you've been able to do more than you could before.
And I would expect the companies to be more amazing and impactful and everything else. So yeah, I think it's the best time yet. Big companies have the edge when things are moving slowly and not that dynamic. And then when something like this, or mobile, or the internet, or semiconductor revolution happens, or probably back in the days of the industrial revolution, that was when upstarts had their edge.
So yeah, this is like, and it's been a while since we've had one of these. So this is like pretty exciting. In the essay, you actually say a really big thing, which is that ASI, superintelligence, is actually thousands of days away.
Maybe. I mean, that's our hope, I guess, whatever. But that's a very wild statement. Tell us about it. I mean, that's big. That is really big. I can see a path where the work we are doing just keeps compounding and the rate of progress we've made.
over the last three years continues for the next three or six or nine or whatever. Nine years would be like 3,500 days or whatever. If we can keep this rate of improvement or even increase it, that system will be quite capable of doing a lot of things. I think already, even a system like o1 is capable of doing quite a lot of things. From just a raw cognitive IQ on a closed-ended, well-defined task in a certain area, I'm like,
o1 is like a very smart thing and I think we're nowhere near the limit of progress. I mean that was an architecture shift that sort of unlocked a lot and what I'm sort of hearing is that these things are going to compound. We could hit some like unexpected wall or we could be missing something but it looks to us like there's a lot of compounding in front of us still to happen.
I mean, this essay is probably the most techno-optimist of almost anything I've seen out there. Some of the things we get to look forward to: fixing the climate, establishing a space colony, the discovery of all of physics, near-limitless intelligence and
abundant energy. But I do think all of those things, and probably a lot more we can't even imagine, are maybe not that far away. And I think it's like tremendously exciting that we can talk about this even semi-seriously now. And one of the things that I always have loved most about YC is it encourages slightly implausible degrees of techno-optimism.
And just a belief that like, ah, you can figure this out. And, you know, in a world that I think is like sort of consistently telling people, this is not going to work. You can't do this thing. You can't do that. I think the kind of early PG spirit of just encouraging founders to like think a little bit bigger is like it is a special thing in the world.
The abundant energy thing seems like a pretty big deal. There's sort of Path A and Path B. If we do achieve abundant energy, it seems like this is a real unlock. Almost any work, not just knowledge work, but actually real physical work could be unlocked with robotics and with language and intelligence on tap. There's a real age of abundance.
I think these are like the key to the two key inputs to everything else that we want. There's a lot of other stuff, of course, that matters, but the unlock that would happen if we could just get truly abundant intelligence, truly abundant energy.
what we'd be able to make happen in the world, like both like come up with better ideas more quickly and then also like make them happen in the physical world, like to say nothing of it'd be nice to be able to run lots of AI and that takes energy too. I think that would be a huge unlock and the fact that it's
I'm not sure whether to be surprised that it's all happening at the same time, or if this is just the natural effect of an increasing rate of technological progress, but it's a very exciting time to be alive and it's a great time to do a startup.
Well, so we sort of walk through this age of abundance. You know, maybe robots can actually manufacture, do anything, almost all physical labor can then result in material progress, not just for the most wealthy, but for everyone. You know, what happens if we don't unleash unlimited energy? If, you know, there's some physical law that prevents us from exactly that.
Solar plus storage is on a good enough trajectory that even if we don't get a big nuclear breakthrough, we would be like, okay, ish. But for sure, it seems that driving the cost of energy down the abundance of it up has like a very direct impact on quality of life.
Eventually we'll solve every problem in physics. So we're going to figure this out. It's just a question of when and we deserve it. Uh, there's, you know, someday we'll be talking not about fusion or whatever, but about the Dyson sphere and that'll be awesome too. Yeah. This is a point in time, whatever feels like abundant energy to us will feel like not nearly enough to our great grandchildren. And there's a big universe out there with a lot of matter. Yeah.
I wanted to switch gears a little bit to your earlier you were mentioning Paul Graham who brought us all together, really created Y Combinator. He likes to tell the story of how you got into YC was actually you were a Stanford freshman. And he said, this is the very first YC batch in 2005.
And he said, you know what, you're a freshman and YC will still be here next time you should just wait. And you said, I'm a sophomore and I'm coming.
And you're widely known in our community as one of the most formidable people. Where do you think that came from? That one story. I think I would be happy if that like drifted off. Well, now it's purely immortalized here. Here it is. My memory of that is that I needed to reschedule an interview one day or something.
And PG tried to like say, I'll just do it next year or whatever. And then I think I said some nicer version of I'm a sophomore and I'm coming. But yeah, then, you know, these things get slightly apocryphal. It's funny. I don't, and I say this with no false modesty, I don't like identify as a formidable person at all. In fact, I think there's a lot of ways in which I'm really not. I do have a little bit of a just, like, I don't see why things have to be
the way they are, and so I'm just gonna like do this thing that from first principles seems like fine. And I always felt a little bit weird about that. And then I remember one of the things I thought was so great about YC, and still care so much about YC for, is it was like a collection of the weird people who are just like, I'm just gonna do my thing.
The part of this that does resonate as an accurate self-identity thing is I do think you can just do stuff, or try stuff, a surprising amount of the time. And I think more of that is a good thing. And then I think one of the things that both of us found at YC was a bunch of people who all believed that you could just do stuff. For a long time, when I was trying to figure out what made YC so special,
I thought that it was like, okay, you have this very amazing person telling you, you can do stuff, I believe in you. And as a young founder, that felt so special and inspiring. And of course it is. But the thing that I didn't understand until much later was it was the peer group of other people doing that. And one of the biggest pieces of advice I would give to young people now is
Finding that peer group as early as you can was so important to me. And I didn't realize it was something that mattered. I kind of thought, like, I have, you know, I'll figure it out on my own. But man, being around like inspiring peers was so, so valuable.
What's funny is both of us did spend time at Stanford. I actually did graduate, which is I probably shouldn't have done that, but I did. That's great. You pursued the path of a far greater return by dropping out, but that was a community that purportedly had a lot of these characteristics, but I was still beyond surprised at how much more potent it was with a room full of founders. I was just going to say the same thing. Actually, I liked Stanford a lot, but
I did not feel surrounded by people that made me want to be better and more ambitious and whatever else. And to the degree I did, the thing you were competing with your peers on was like, who was gonna get the internship at which investment bank? Which I'm embarrassed to say, I fell into that trap. This is like how powerful peer groups are. It was a very easy decision to not go back to school after seeing what YC was like.
There's a powerful quote by Carl Jung that I really love. The world will come and ask you who you are and if you don't know, it will tell you. It sounds like being very intentional about who you want to be and who you want to be around as early as possible is very important.
Yeah, this was definitely one of my takeaways, at least for myself, is no one is immune to peer pressure. And so all you can do is pick good peers. Yeah. Obviously, you went on to create Loopt, sell that, go to Green Dot, and then we ended up getting to work together at YC.
Talk to me about like the early days of YC Research, like one of the really cool things that you brought to YC was this experimentation. And you sort of, I mean, I remember you coming back to partner rooms and talking about some of the rooms that you were getting to sit in with like the Larrys and Sergeys of the world and that, you know, AI was sort of at the tip of everyone's tongue because it felt so close. And yet it was, you know, that was 10 years ago. The thing I always
thought would be the coolest retirement job was to get to like run a research lab. And it was not specific to AI at that time. When we started talking about YC Research, well, not only was it going to, it did end up funding like a bunch of different efforts. And I wish I could tell the story as if it was obvious that AI was going to work and be the thing, but we tried a lot of bad things too around that time.
I read a few books on the history of Xerox PARC and Bell Labs and stuff. And I think there were a lot of people, like it was in the air of Silicon Valley at the time, that we need to have good research labs again. And I just thought it would be so cool to do. And it was sort of similar to what YC does in that you're going to allocate capital to smart people. And sometimes it's going to work and sometimes it's not going to.
I just wanted to try it. AI for sure was having a mini moment. This was like kind of late 2014, 2015, early 2016. The superintelligence discussion, like the book Superintelligence, was happening. Both drumming up. Yeah. DeepMind had had a few like impressive results, but a little bit of a different direction. You know, I had been an AI nerd forever. So I was like, oh, it'd be so cool to try to do something. But it was very hard to say. Was ImageNet out yet? ImageNet was out. Yeah. Yeah.
for a while at that point. So you could tell if it was a hot dog or not. You could sometimes. Yeah, that was getting there. Yeah. You know, how did you identify the initial people you wanted involved in, you know, wacky research and OpenAI? I mean, Greg, Greg Brockman was early in retrospect. It feels like this movie montage and there were like all of these, like, you know, at the beginning of like the bank heist movie when you're like driving around to find the people and whatever. And they're like, you son of a bitch. I'm in.
Right, like Ilya, I had heard he was really smart and then I watched some video of his and he's not just extremely smart, he's a true, genuine genius and visionary, but also he has this incredible presence. And so I watched this video of his on YouTube or something. I was like, I got to meet that guy and I emailed him, he didn't respond. So I just like went to some conference he was speaking at and we met up and then after that we started talking a bunch and
And then, like, Greg, I had known a little bit from the early Stripe days. What was that conversation like, though? It's like, I really like your ideas about AI and I want to start a lab. Yes. And one of the things that worked really well in retrospect was we said from the very beginning, we were going to go after AGI.
at a time when in the field you weren't allowed to say that, because that just seemed impossibly crazy and borderline irresponsible to talk about. It got attention immediately. It got all of the good young people's attention and the derision, whatever that word is, of the mediocre old people. And I felt like somehow that was a really good sign and really powerful. And we were like this ragtag group of people. I mean, I was the oldest by a decent amount. I was like, I guess I was 30 then.
And so you had like these people who were like, those are these irresponsible young kids who don't know anything about anything, and they're like saying these ridiculous things. And the people who that was really appealing to, I guess, are the same kind of people who would have said like, you know, I'm a sophomore and I'm coming or whatever. And they were like, let's just do this thing. Let's take a run at it.
And so we kind of went around and met people one by one and then in different configurations of groups and it kind of came together over the course of, in fits and starts, but over the course of like nine months. And then it started, I mean, and then it started, started happening. And one of my favorite like memories of all of OpenAI was,
Ilya had some reason, with Google or something, that we couldn't start right away, so we announced in December of 2015, but we couldn't start until January of 2016. So like January 3rd of 2016, something like that, very early in the month. People come back from the holidays and we go to Greg's apartment. Maybe there's 10 of us, something like that. And we sit around and it felt like we had done this monumental thing to get it started.
And everyone's like, so what do we do now? And what a great moment. It reminded me of when startup founders work really hard to like raise a round. And they think like, oh, I accomplished this great thing. We did it. We did it. And then you sit down and say, like, fuck, I got to like figure out what we're going to do. It's not time for popping champagne. That was actually the starting gun. And now we got to run. Yeah. And you have no idea how hard the race is going to be.
It took us a long time to figure out what we were going to do. But one of the things that I'm really amazingly impressed by, Ilya in particular but really all of the other people, is that although it took a lot of twists and turns to get here,
The big picture of the original ideas was just so incredibly right. And so they were like up on like one of those flip charts or whiteboards I don't remember which in Greg's apartment. And then we went off and, you know, did some other things that worked or didn't work or whatever. Some of them did and eventually now we have this like system.
And it feels very crazy and very improbable looking backwards that we went from there to here with so many detours on the way, but got where we were pointing. Was deep learning even on that flip chart initially? Yeah. I mean, more specifically than that, like do a big unsupervised model and then solve RL was on that flip chart. One of the flip charts from a very, this is before Greg's apartment, but from a very early off site, I
think this is right. I believe there were three goals for the effort at the time. It was like: figure out how to do unsupervised learning, solve RL, and never get more than 120 people.
missed on the third one. That's right. The predictive direction of the first two is pretty good. Deep learning, then the second big one sounded like scaling, like the idea that you could scale. That was another heretical idea that people actually found even offensive. I remember a rash of criticism for you guys at that moment. When we started, the core beliefs were deep learning works and it gets better with scale.
And I think those were both somewhat heretical beliefs. At the time, we didn't know how predictably better it got with scale; that didn't come until a few years later. It was a hunch first, and then you got the data to show how predictable it was. But people already knew that if you made these neural networks bigger, they got better. Like that was...
We were sure of that before we started. And what took the, like, religious level of belief was that it wasn't going to stop. Everybody had some reason of, oh, it's not really learning. It's not really reasoning. It can't really do this. It's, you know, it's like a parlor trick. And these were like the eminent leaders of the field. And more than just saying you're wrong, they were like, you're wrong, and this is
like a bad thing to believe or a bad thing to say. There was this, you know, you're going to perpetuate an AI winter, you're going to do this, you're going to do that. And we were just like looking at these results and saying they keep getting better. Then we got the scaling results. It just kind of breaks my intuition even now. And at some point, you have to just look at the scaling laws and say, we're going to keep doing this. And this is what we think it'll do. And it also, it was starting to feel at that time, like,
something about deep learning was just this emergent phenomenon that was really important. And even if we didn't understand all of the details in practice here, which obviously we didn't and still don't, that there was something really fundamental going on. The PGism for this was that we had discovered a new square in the periodic table.
And so we really wanted to push on that. And we were far less resourced than DeepMind and others. And so we said, OK, they're going to try a lot of things. And we've just got to pick one and really concentrate. And that's how we can win here, which is totally the right startup takeaway. And so we said, well, we don't know. We don't know.
We do know this one thing works, so we're going to really concentrate on that. I think some of the other efforts were trying to outsmart themselves in too many ways and we just said, we'll do the thing in front of us and keep pushing on it. Scale is this thing that I've always been interested in, the emergent properties of scale, for everything: for startups, turns out for deep learning models, for a lot of other things.
I think it's a very underappreciated property and thing to go after. And I think it's, you know, when in doubt, if you have something that seems like it's getting better with scale, I think you should scale it up. I think people want things to be, you know, less is more, but actually more is more, more is more. We believed in that. We wanted to push on it. I think one thing that is not maybe that well understood about OpenAI is we had just this
Even when we were pretty unknown, we had a crazy talented team of researchers. If you have the smartest people in the world, you can push on something really hard.
Yeah, and they're motivated. And or you created sort of one of the sole places in the world where they could do that. Like one of the stories I heard is just even getting access to compute resources, even today is this crazy thing. And embedded in some of the criticism from maybe the elders of the industry at the moment was sort of that, you know, you're going to waste a lot of resources and somehow that's going to result in an AI winter, like people won't give resources anymore.
It's funny, people were never sure if we were going to waste resources or if we were doing something kind of vaguely immoral by putting in too much resources and you were supposed to spread it across lots of bets rather than conviction on one. Most of the world still does not understand the value of a fairly extreme level of conviction on one bet. And so we said, okay, we have this evidence, we believe in this thing.
We're gonna concentrate on it, at a time when like the normal thing was, we're gonna spread across this bet and that bet and that bet. You're definitely an optimist. You're a definite optimist. And I think across like many of the successful YC startups, you see a version of that again and again. Yeah, that sounds right. When the world gives you sort of pushback and the pushback doesn't make sense to you, you should do it anyway. Totally. One of the many things that I'm very grateful about
getting exposure to from the world of startups is how many times you see that again and again and again. And I think before YC, I really had this deep belief that somewhere in the world, there were adults in charge, adults in the room, and they knew what was going on. And someone had all the answers. And, you know, if someone was pushing back on you, they probably knew what was going on. And the degree to which I now understand that
You know, to pick up the earlier phrase, you can just do stuff. You can just try stuff and no one has all the answers. There are no like adults in the room that are going to magically tell you exactly what to do. And you just kind of have to like iterate quickly and find your way. That was like a big unlock in life for me to understand. There is a difference between being high conviction just for the sake of it. And if you're wrong and you don't adapt and you don't try to be like truth seeking, it still is really
not that effective. The thing that we tried to do was really just believe whatever the results told us and really kind of try to go do the thing in front of us. And there were a lot of things that we were high conviction and wrong on. But as soon as we realized we were wrong, we tried to like fully embrace it. Conviction is great until the moment you have data one way or the other. And there are a lot of people who hold on to it past the moment of data.
So it's iterative. It's not just they're wrong and I'm right. You have to go show your work. But there is a long moment where you have to be willing to operate without data. And at that point, you do have to just sort of run on conviction. Yeah. It sounds like there's a focusing aspect there too. Like you had to make a choice and that choice had better, you didn't have infinite choices. And so the prioritization itself was an exercise that made it much more likely for you to succeed.
I wish I could go tell you like, oh, we knew exactly what was going to happen. And it was, you know, we had this idea for language models from the beginning. And, you know, we kind of went right to this, but obviously the story of OpenAI is that we did a lot of things that helped us develop some scientific understanding, but were not on the short path.
If we knew then what we know now, we could have speedrun this whole thing to like an incredible degree. Doesn't work that way. Like you don't get to be right at every guess. And so we started off with a lot of assumptions, both about the direction of technology, but also what kind of company we were going to be and how we were going to be structured and how AGI was going to go and all of these things.
And we have been like humbled and badly wrong many, many, many times. And one of our strengths is the ability to get punched in the face and get back up and keep going. This happens for scientific bets, for being willing to be wrong about a bunch of other things we thought about how the world was going to work and what the sort of shape of the product was going to be.
Again, we had no idea, or I at least had no idea, maybe Alec Radford did. I had no idea that language models were going to be the thing. You know, we started working on robots and agents to play video games and all these other things. Then a few years later, GPT-3 happened. That was not so obvious at the time. It sounded like there was a key insight around positive or negative sentiment around GPT-1.
Even before GPT-1. I think the paper was called the unsupervised sentiment neuron. I think Alec did it alone. By the way, Alec is this unbelievable outlier of a human. And so he did this incredible work, which was just looking at, he noticed there was one neuron that was flipping positive or negative sentiment as it was doing these generative Amazon reviews, I think.
Other researchers might have hyped it up more, made a bigger deal out of it or whatever, but it was Alec, so it took people a while to fully internalize what a big deal it was, and he then did GPT-1 and somebody else scaled it up into GPT-2, but it was off of this insight that there was something amazing happening where
And at the time, unsupervised learning was just not really working. So he noticed this one really interesting property, which is there was a neuron that was flipping positive or negative with sentiment. And yeah, that led to the GPT series. I guess one of the things that Jake Heller from Casetext, I think of him as maybe, I mean,
not surprisingly, a YC alum who got access to 3, 3.5, and 4. And he described getting 4 as sort of the big moment revelation. Because 3.5 would still, I mean, it would hallucinate more than he could tolerate in a legal setting.
And then with four, it reached the point where if he chopped the prompts down small enough into a workflow, he could get it to do exactly what he wanted. And he built huge test cases around it and then sold that company for $650 million. So I think of him as one of the first to commercialize GPT-4 in a relatively grand fashion.
I remember that conversation with him about GPT-4. That was one of the few moments in that thing where I was like, okay, we have something really great on our hands. When we first started trying to sell GPT-3 to founders, they would be like, it's cool. It's doing something amazing. It's an incredible demo.
With the possible exception of copywriting, no great businesses were built on GPT-3. And then 3.5 came along and people, startups, like YC startups in particular, started to do, like it no longer felt like we were pushing a boulder uphill. So people actually wanted to buy the thing we were selling. Totally. And then four.
We kind of like got the, just how many GPUs can you give me, moment. Oh, yeah. Like very quickly after giving people access. So we felt like, okay, we got something like really good on our hands. So you, you knew actually from your users then. Totally. Like when the, when the model dropped itself and you got your hands on it, it was like, well, this, this is better. We were totally impressed then too. We had all of these like,
tests that we did on it that were very, it looked great and it could just do these things that we were all super impressed by. Also, like when we were all just playing around with it and like getting samples back, I was like, wow, it can do this now, and it can rhyme and it can like tell a funny joke, a slightly funny joke. And it can like, you know, do this and that. And so it felt really great, but you know, you never really know if you have a hit product on your hands until you like put it in customers' hands. You're always too impressed with your own work.
Yeah. And so we were all excited about it. We're like, oh, this is really quite good. But until like the test happens, it's like the real test is the real test users. Yeah. So there's some anxiety until that moment happens. Yeah.
I wanted to switch gears a little bit. So before you created, obviously, one of the craziest AI labs ever to be created, you started at 19 at YC with a company called Loopt, which was basically find-my-friends geolocation, probably what, 15 years before Apple ended up making it.
too early in any case. What drew you to that particular idea? I was interested in mobile phones, and I wanted to do something that got to use mobile phones. This was when mobile was just starting. It was still three years or two years before the iPhone. But it was clear that carrying around computers in our pockets was somehow a very big deal.
I mean, it's hard to believe now that there was a moment when phones were actually literally you just... They were just a phone. They were an actual phone. Yeah. I mean, I try not to use it as an actual phone ever, really. I still remember the first phone I got that had internet on it. And it was this horrible, like, text-based, mostly text-based browser. It was really slow. You could, like, you know, do, like, you could so painfully and so slowly check your email. But I was like a...
I don't know, sometime when I was in high school, I got a phone that could do that versus just texting and calls. And I was hooked right then. I was like, this is not a phone. This is a computer we can carry, and we're stuck with a dial pad for this accident of history, but this is going to be awesome.
And I mean, now you have billions of people who, they don't have a computer, like to us growing up, you know, that, that actually was your first computer. Yeah. Not physically, that's a replica or like another copy of my first computer, which was the LC II. Yeah. So this is what a computer was to us growing up.
and the idea that you would carry this little black mirror, like, kind of- We've come a long way. Unimaginable back then. Yeah. So, you know, even then you, like, technology and what was going to come was sort of in your brain. Yeah, I was like a real- I mean, I still am a real tech nerd, but I always, that was what I spent my Friday nights thinking about. And then one of the harder parts of it was we didn't have the App Store, the iPhone didn't exist.
You ended up being a big part of that launch, I think. Small part, but yes, glad to be a little part of it. It was a great experience for me to have been through because I kind of like understood what it is like to go through a platform shift and how messy the beginning is and how much like little things you do can shape the direction it all goes. I was definitely on the other side of it then. Like I was watching somebody else create the platform shift, but
It was a super valuable experience to get to go through and sort of just see what, how it happens and how quickly things change and how you adapt through it. What was that experience like? You ended up selling that company. Uh, it was probably the first time you were managing people and, you know, doing enterprise sales. All of these things were useful lessons from that first experience. I mean, it obviously was not a successful company. Um, it was,
And so it's a very painful thing to go through, but the rate of experience and education was incredible. Another thing that PG said, or quoted somebody else saying, but it always stuck with me, is your 20s are always an apprenticeship, but you don't know for what, and then you do your real work later. And I did learn quite a lot and I'm very grateful for it. It was like a
difficult experience, and we never found product market fit, really. And we also never really found a way to get to escape velocity, which is just always hard to do. There is nothing that I have ever heard of that has a higher rate of generalized learning than doing a startup. So it was great in that sense. Yeah, when you're 19 and 20, like riding the wave of some other platform shift, this shift from dumb cell phones to smartphones and mobile,
And, you know, here we are many years later and your next act was actually, you know, I mean, I guess two acts later, literally spawning one of the major platforms. We all get old. Yeah. But that's really what's happening. You know, 18, 20 year olds are deciding that they could get their degree
but they're going to miss the wave, because all of this stuff that's great, everything, is happening right now. I am proud of that. Do you have an intuitive sense? Speaking to even a lot of the really great billion dollar company founders, some of them are just not that aware of what's happening. That seems stunning to me. It's wild.
I think that's why I'm so excited for startups right now is because the world is still sleeping on all this to such an astonishing degree. Yeah. And then you have like the YC founders being like, no, no, I'm going to like do this amazing thing and do it very quickly. Yeah. It reminds me of when Facebook almost missed mobile because they were making web software and they were really good at it. Yeah. And.
I mean, they had to buy Instagram, like Snapchat. Yeah, and WhatsApp. So it's interesting. The platform shift is always built by the people who are young with no prior knowledge. I think it's great.
So there's this other aspect that's interesting in that I think you're, you know, you and Elon and Bezos and a bunch of people out there, like they sort of start their journey as founders, you know, really, you know, whether it's Loopt or Zip2 or, you know, really in maybe pure software, like it's just a different thing that they start and then later they, you know, sort of get to level up.
You know, is there a path that you recommend at this point? If people are thinking, you know, I want to work on the craziest heart-tech thing first, should they just run towards that to the extent they can? Or is there value in, you know, sort of solving the money problem first, being able to invest your own money like very deeply into the next thing? It's a really interesting question. It was definitely helpful.
Then I could just write the early checks for OpenAI, and I think it would have been hard to get somebody else to do that at the very beginning. And then Elon did it at a much higher scale, which I'm very grateful for, and then other people did after that. And there's other things that I've invested in that I'm really happy to have been able to support, and I think it would have been hard to get other people to do it.
Um, so that's great for sure. And I did, like we were talking about earlier, learn these extremely valuable lessons. But I also feel like I kind of like was wasting my time, for lack of a better phrase, working on Loopt. I don't, I definitely don't regret it. It's like all part of the tapestry of life. And I learned a ton and whatever else. What would you have done differently?
Or what would you tell yourself from now, in a time travel capsule that would show up on your desk at Stanford when you're 19? Well, it's hard because AI was always the thing I most wanted to do. And I went to school to study AI. But at the time I was working in the AI lab, the one thing that they told you is definitely don't work on neural networks. We tried that and it doesn't work. That was a while ago. I think I could have picked a much better thing to work on than Loopt. I don't know exactly what it would have been. But it all works out. It's fine.
There's this long history of people building more technology to help improve other people's lives. I actually think about this a lot. I think about the people that made that computer. I don't know them. Many of them probably long retired, but I am so grateful to them. Some people worked super hard to make this thing at the limits of technology. I got a copy of that on my eighth birthday and it totally changed my life.
and the lives of a lot of other people too. They worked super hard. They never like got a thank you from me, but I feel it to them very deeply.
It's really nice to get to add our brick to that long road of progress. Yeah. It's been a great year for OpenAI, not without some drama. Always. Yeah, we'll get to that. What did you learn from the ouster last fall and how do you feel about some of the departures? I mean, teams do evolve, but how are you doing, man?
Uh, tired, but good. We've kind of like speed run the medium-size, or even kind of like pretty-big-size, tech company arc that would normally take like a decade, in two years. Like ChatGPT is less than two years old. Yeah. And there's like a lot of painful stuff that comes with that.
And there are, you know, any company as it scales goes through management teams at some rate. And the people who are really good at the zero to one phase are not necessarily the people that are good at the one to ten or the ten to a hundred phase. We've also kind of like changed what we're going to be, made plenty of mistakes along the way, done a few things really right. And that comes with a lot of change. And I think the goal of
the company, the emergent AGI, whatever you want to think about it is like just keep making the best decisions we can at every stage. But it does lead to a lot of change. I hope that we are heading towards a period now of more calm, but I'm sure there will be other periods in the future where things are very dynamic again. So I guess, how does OpenAI actually work right now? I mean, the quality and like,
the pace that you're pushing right now, I think is like beyond world class compared to a lot of the other really established software players like who came before. This is the first time ever where I felt like we actually know what to do. Like I think from here to
Building an AGI will still take a huge amount of work. There are some known unknowns, but I think we basically know what to go do. And it'll take a while, it'll be hard, but that's tremendously exciting. I also think on the product side, there's more to figure out, but roughly we know what to shoot at and what we want to optimize for.
That's a really exciting time. And when you have that clarity, I think you can go pretty fast. Yeah. If you're willing to say, we're going to do these few things, we're going to try to do them very well. And our research path is fairly clear. Our infrastructure path is fairly clear. Our product path is getting clearer. You can orient around that super well. We for a long time did not have that. We were a true research lab. And even when you know that it's hard to act with the conviction on it because there's so many other good things you'd like to do.
But the degree to which you can get everybody aligned and pointed at the same thing is a significant determinant in how fast you can move. I mean, sounds like we went from level one to level two very recently, and that was really powerful. And then we actually just had our o1 hackathon at YC. Yeah, that was so impressive. That was super fun.
And then, weirdly, one of the people who won, I think they came in third, was Camper. And so, CAD-CAM startup, you know, did YC recently last year or two, and they were able to, during the hackathon, build something that would iteratively improve an airfoil from something that wouldn't fly to literally something that had a competitive amount of lift. And, I mean, that sort of sounds like level four.
which is, you know, the innovator stage. It's very funny you say that. I had been telling people for a while that I thought the level two to level three jump was going to happen quickly, and then the level three to level four jump was somehow going to be much harder and require some medium-sized or larger new ideas. And that demo and a few others have convinced me that
you can get a huge amount of innovation just by using these current models in really creative ways. Well, yeah, I mean, it's what's interesting is basically Camper already built sort of the underlying software for CADCam and then, you know, language is sort of the interface to the large language model that then can use
the software, like tool use. Then if you combine that with the idea of CodeGen, that's a scary, crazy idea. Not only can the large language model code, but it can create tools for itself and then compose those tools, similar to chain of thought with o1.
Yeah, I think things are going to go a lot faster than people are appreciating right now. Yeah. Well, it's an exciting time to be alive, honestly. You know, you mentioned earlier that thing about discovering all the physics. I wanted to be a physicist. I wasn't smart enough to be a good one. Had to contribute in this other way. But the fact that somebody else, I really believe, is now going to go solve all the physics with this stuff. Like, I'm so excited to be alive for that. Let's get to level four. I'm so happy for whoever that person is. Yeah.
Do you want to talk about level three, four, and five briefly? Yeah, so we realized that AGI had become this like badly overloaded word and people meant all kinds of different things. And we tried to just say, okay, here's our best guess roughly of the order of things. You have these level one systems, which are these chatbots. There'd be level two that would come, which would be these reasoners. We think we got there earlier this year with the o1 release.
Level three is agents, the ability to go off and do these longer-term tasks, you know, maybe like multiple interactions with an environment, asking people for help when they need it, working together, all of that. And I think we're going to get there faster than people expect. Level four is innovators, like that's like a scientist. And, you know, that's the ability to go explore like a not-well-understood phenomenon
over like a long period of time and understand what's going on, just kind of go figure it out. And then, and then level five, this is the sort of slightly amorphous, like do that, but at the scale of the whole company or, you know, a whole organization or whatever. That's going to be a pretty powerful thing.
Yeah. And it feels kind of fractal, right? Like even the things you had to do to get there sort of rhyme with level five, in that you have multiple agents that then self-correct and work together. I mean, that kind of sounds like an organization, maybe just at like a very micro level. Do you think that we'll have, I mean, you famously talked about it. I think Jake talks about it. It's like, you will have companies that make, you know, billions of dollars per year and have like,
Less than 100 employees, maybe 50, maybe 20 employees, maybe one. It does seem like that. I don't know what to make of that other than it's a great time to be a startup founder. Yeah. But it does feel like that's happening to me. Yeah. Um, you know, it's like one person plus 10,000 GPUs.
It could happen. Sam, what advice do you have for people watching who either are about to start or just started their startup? Bet on this tech trend, like bet on this trend. We are not near the saturation point. The models are going to get so much better so quickly. What you can do as a startup founder with this versus what you could do without it is so wildly different.
The big companies, even the medium-sized companies, even the startups that are a few years old, they're already on like quarterly planning cycles. And Google is on a year or decade planning cycle. I don't know how they even do it anymore. But your advantage with speed and focus and conviction and the ability to react to how fast the technology is moving, that is the number one edge of a startup.
kind of ever, but especially right now. So I would definitely like build something with AI and I would definitely like take advantage of the ability to see a new thing and build something that day rather than like put it into a quarterly planning cycle. I guess the other thing I would say is
It is easy when there's a new technology platform to say, well, because I'm doing some of the AI, the laws of business don't apply to me. I have this magic technology, and so I don't have to build a moat or a
you know, competitive edge or a better product. It's because, you know, I'm doing AI and you're not. So that's all I need. And that's obviously not true. But what you can get are these short term explosions of growth by embracing a new technology more quickly than somebody else. And remembering not to fall for that and that you still have to build something of enduring value. That's, I think that's a good thing to keep in mind too.
Everyone can build an absolutely incredible demo right now, but everyone can build an incredible demo. Building a business, that's the brass ring. The rules still apply. You can do it faster than ever before and better than ever before, but you still have to build a business. What are you excited about in 2025? What's to come? AGI.
Excited for that. What am I excited for? Um, we're having a kid. I'm more excited for that than anything. Congratulations. I've never been more excited for anything. Yeah, that's probably going to be the thing I'm most excited for ever in life. Yeah, it changes your life completely. So I cannot wait. Well, here's to building that better world for, you know, our kids and really hopefully the whole world. That was a lot of fun. Thanks for hanging out, Sam. Thank you.