
    #419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI


March 18, 2024

    About this Episode

Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies.

Please support this podcast by checking out our sponsors:
- Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off
- Shopify: https://shopify.com/lex to get $1 per month trial
- BetterHelp: https://betterhelp.com/lex to get 10% off
- ExpressVPN: https://expressvpn.com/lexpod to get 3 months free

Transcript: https://lexfridman.com/sam-altman-2-transcript

EPISODE LINKS:
- Sam's X: https://x.com/sama
- Sam's Blog: https://blog.samaltman.com/
- OpenAI's X: https://x.com/OpenAI
- OpenAI's Website: https://openai.com
- ChatGPT Website: https://chat.openai.com/
- Sora Website: https://openai.com/sora
- GPT-4 Website: https://openai.com/research/gpt-4

PODCAST INFO:
- Podcast website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- YouTube Full Episodes: https://youtube.com/lexfridman
- YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(07:51) - OpenAI board saga
(25:17) - Ilya Sutskever
(31:26) - Elon Musk lawsuit
(41:18) - Sora
(51:09) - GPT-4
(1:02:18) - Memory & privacy
(1:09:22) - Q*
(1:12:58) - GPT-5
(1:16:13) - $7 trillion of compute
(1:24:22) - Google and Gemini
(1:35:26) - Leap to GPT-5
(1:39:10) - AGI
(1:57:44) - Aliens

    🔑 Key Takeaways

    • Use platforms like Cloaked for new email addresses and phone numbers, Shopify for selling products, BetterHelp for accessible therapy, and ExpressVPN for masking online presence to safeguard personal information.
    • Effective leadership and organization are vital for managing the creation of Artificial General Intelligence (AGI). Reflecting on the OpenAI board saga, Sam Altman emphasizes the importance of resilience, managing power dynamics, research and product development, and financial considerations in a board structure.
    • OpenAI aims to hire board members in slates, focusing on expertise in non-profit management, running companies, and legal and governance matters for effective decision making.
    • Clear communication and valuing team and company are crucial in board decisions, even during difficult times.
    • The weekend's chaos revealed Mira's leadership skills, Ilya's long-term thinking, and the power of love and unity in extreme situations.
    • Robust governance, wise decision-making, and trustworthy team members are crucial for success in building a groundbreaking tech company. Clear communication, empathy, and understanding are essential for navigating complexities and resolving disputes.
• OpenAI's mission is to provide powerful AI tools to the public for free; Musk's lawsuit is read less as a dispute over open versus closed source than as a serious statement about the future of AGI and OpenAI's leading role in it.
• Sora, OpenAI's video-generation model, can model 3D physics and handle occlusions, a significant advance over what text-based models like GPT-4 can do. However, it still faces limitations and may require larger models, better technical details, or more data to overcome them.
    • AI's potential for transforming content creation comes with challenges like deep fakes, misinformation, and compensation for creators. Humans will remain central in the creative process, collaborating with AI systems, and new opportunities and ethical considerations will arise.
    • GPT-4 offers valuable applications as a brainstorming partner and for longer-term tasks, but its true potential lies in its ability to learn and iterate from human interaction, leading to magical results. Recognizing its limitations and using it as a foundation for future innovations is crucial.
    • Language models with expanded context windows offer improved performance for various tasks, but there's a need for fact-checking due to potential inaccuracies.
    • Privacy is crucial when developing AI models that learn and adapt to individuals, allowing users to control what is remembered and accessible while maintaining trust and transparency.
    • Advanced AI models like GPT-5 require more compute to tackle harder problems, leading to gradual improvements and potential unexpected leaps. OpenAI's iterative deployment strategy ensures the world is prepared for these advancements, avoiding shock updates and promoting responsible development.
    • The world's reliance on compute will grow significantly, requiring innovative solutions for energy production, data center construction, and chip fabrication.
• Addressing mistrust and fear of nuclear reactors, preventing the politicization of AI, and fostering collaboration over competition are essential for the safe and beneficial development of both technologies.
    • The future of information access goes beyond search engines, with new ways to help users discover, synthesize, and act on information. OpenAI's ChatGPT is an example, but integrating chat and search is complex, and monetization through ads is contentious. Balancing innovation, user experience, and ethical considerations is key.
    • Companies should outline desired model behavior, encourage input, minimize organizational biases, and prioritize safety and ethical considerations as AI becomes more powerful.
    • AI advancements like GPT create a unique feeling of understanding and set the stage for humanoid robots and AGI, but also come with potential risks.
    • AGI could revolutionize scientific discovery and have impressive moments, but individuals may struggle to interact with the first AGI, and robust governance systems are necessary to ensure safe and beneficial use.
• Sam Altman emphasizes the need for shared control of AGI due to its societal impact and complexity, and warns against control by any single person. He also prioritizes addressing multiple significant risks rather than existential risk from AGI alone.
    • Creating complex simulated universes challenges our perception of reality, supporting the simulation hypothesis.
    • Sam finds hope in civilization's advancements, seeing them as a collective effort built on past generations, and emphasizes the importance of innovation and progress, echoing Arthur C. Clarke's perspective.

    📝 Podcast Summary

    Protecting Personal Information Online

    Protecting your personal information online is crucial in today's digital world. Sam Altman, the CEO of OpenAI, discussed the importance of using platforms like Cloaked, which generates new email addresses and phone numbers for every online sign-up, ensuring your actual contact information remains hidden. This not only prevents spam but also safeguards against potential abuse or selling of your information. Additionally, platforms like Shopify make it easy for anyone to sell products online, while BetterHelp offers accessible talk therapy for individuals and couples. ExpressVPN provides an extra layer of security and privacy by masking your online presence. By utilizing these tools, you can browse the web with confidence, knowing that your personal information is protected.

    Navigating the Challenges of Advanced Technology Development: Lessons from OpenAI's Board Saga

    Effective leadership and a well-structured organization are crucial for navigating the challenges of advanced technology development, such as the creation of Artificial General Intelligence (AGI). This was highlighted during Sam Altman's experience with the OpenAI board saga. The process was chaotic, painful, and even close to destroying his company. However, it also provided valuable lessons. Sam expressed gratitude for the support he received during that time and acknowledged the importance of resilience in the face of adversity. He also emphasized the need for a board structure that can help manage power dynamics, research and product development, and financial considerations. Looking back, Sam believes that reflecting on these lessons will be beneficial for future high-stress moments in the development of OpenAI. Additionally, using a Virtual Private Network (VPN) like ExpressVPN is essential for securing online communications and maintaining privacy.

    Selecting a well-rounded board for OpenAI

Building a resilient organization, especially one dealing with advanced artificial intelligence, requires careful consideration and a well-functioning board. The previous OpenAI board faced criticism for its lack of experience and accountability to the public. The selection of new board members, including Bret Taylor and Larry Summers, was made under pressure and without a rigorous process. Going forward, OpenAI aims to hire board members in slates, focusing on expertise in non-profit management, running companies, and legal and governance matters. The board's role is to provide governance and thoughtfulness, and technical savvy is essential for some members. The public may underestimate the complexities of running a business like OpenAI and the importance of a well-rounded board in making impactful decisions.

    Considering societal impact and personal aspects in board decisions

    While technical understanding is crucial for a board member, it's equally important to consider the societal impact and the track record of individuals. The speaker shared their personal experience of a challenging board meeting where they were replaced as CEO. Despite the initial shock and exhaustion, they eventually came to terms with the situation and considered it an opportunity to focus on research. However, they were eventually asked to return, which they found painful. The experience taught them the importance of clear communication and the value of the team and company, even during difficult times. The speaker emphasized the importance of considering both the technical and personal aspects when making decisions for a company.

    Leadership, love, and unity during crisis

    The weekend's chaotic events, though painful, were overshadowed by the dominant emotion of love and unity among people. The speaker admires Mira's leadership skills, not just in crisis moments, but in the quiet, day-to-day decision-making. The speaker also emphasizes the importance of Ilya's long-term thinking and dedication to ensuring the safe development of AGI. Despite Ilya's recent quietness, the speaker believes he is deeply thoughtful and appreciates the power of silence. The speaker feels good about the new board structure and the lessons learned from the past experiences. The speaker's reflections on the weekend offer insights into human nature, illustrating the potential for both damage and triumphs in extreme situations.

    Navigating the Challenges of Building a Groundbreaking Tech Company

    Building and leading a groundbreaking technology company comes with challenges and difficult decisions. The speaker, who co-founded OpenAI, shared his personal experience of navigating the company's growth and the painful consequences of trusting the wrong people. He emphasized the importance of having robust governance structures, wise decision-making, and trustworthy team members as the stakes get higher. The speaker also acknowledged that there's a degree of uncertainty and fumbling in the dark when dealing with fundamentally new things. Regarding Elon Musk's lawsuit against OpenAI, the speaker expressed uncertainty about the motivations behind it and acknowledged some mischaracterizations on both sides. Ultimately, the speaker emphasized the importance of empathy, understanding, and clear communication in navigating the complexities of building a successful technology company.

    Musk's Lawsuit Against OpenAI: A Serious Statement About AGI's Future

OpenAI's mission is to make powerful AI tools available to the public as a free resource, and the debate over open source versus closed source is not the main focus of Elon Musk's lawsuit. Musk's lawsuit against OpenAI is seen as a serious statement about the future of AGI and the company leading the way. OpenAI's co-founder and CEO, Sam Altman, believes that giving people great tools and letting them build the future together is a significant step towards fulfilling that mission. He also acknowledges the importance of nuance in the open source versus closed source debate. While Meta and Google are exploring the idea of open-sourcing models, Altman thinks that most startups should start as for-profit entities. He also expressed his disappointment in Musk's lawsuit, as he has long admired Musk as a great builder. The debate over open source and closed source models is complex, and there will likely be a place for both in the ecosystem.

    Advancements in AI with Sora: Understanding the World in 3D

The ongoing development of advanced AI models such as Sora represents a significant step toward world models that go beyond what text-based models like GPT-4 can do. This is particularly evident in Sora's ability to model three-dimensional physics and handle occlusions, which is a stretch for approaches trained purely on two-dimensional data. However, these models still have limitations, such as occasional glitches in generated video (for example, a cat sprouting an extra limb) and open questions about how much human labeling of data is involved. These limitations may be surmountable with larger models, better technical details, or more data. The self-supervised learning approach used for Sora, which relies on internet-scale data, is a significant shift from traditional text-based training and opens up new possibilities for AI development. The conversation also touched on the potential for friendly competition and an amicable relationship between Elon Musk and OpenAI as these advanced models develop.

    Addressing Challenges and Ethical Considerations of AI in Content Creation

    While the potential of AI technology, such as Sora, is vast and exciting, it also comes with significant challenges and ethical considerations. The efficiency and scale of AI systems can lead to a transformation in the economy, but it's essential to address issues like deep fakes, misinformation, and the compensation of creators. The economic model for the use of valuable data and the role of humans in the creative process are still evolving. AI tools will likely become more sophisticated and ubiquitous, but humans will still be at the core of content creation, even if they collaborate with AI systems. The fear of job displacement is valid, but it's essential to focus on the percentage of tasks that can be automated and the potential for humans to operate at a higher level of abstraction. The deep-rooted human desire to create and be rewarded for it will continue, even if the form of reward changes. The integration of AI into various industries, such as video production, will lead to new opportunities and challenges, and it's crucial to navigate these changes thoughtfully and ethically.

    A stepping stone towards future advancements

    While GPT-4 is an impressive technological achievement, it is only a stepping stone towards future advancements. As the speaker points out, the capabilities of GPT-4 may seem underwhelming compared to future developments. However, it offers valuable applications such as a brainstorming partner and assistance with longer-term tasks. The true potential lies in its ability to iterate and learn from human interaction, which can lead to magical results. The speaker emphasizes the importance of recognizing the limitations of current technology and using it as a foundation for future innovations. The development of GPT-4 marked a significant shift in public perception towards AI, with its user-friendly interface and post-training tuning playing crucial roles in its success. The underlying technology and its application as a product required separate but equally important advancements. Overall, GPT-4 represents an important milestone on the exponential curve of AI development.

    Exploring the Expanding Context Window of Language Models

    While the context window of language models like GPT-4 and GPT-4 Turbo has expanded from 8k to 128k tokens, most users don't need the full context most of the time. However, there's a dream of a future where context length could reach several billion tokens, allowing for a deeper understanding of individuals based on their entire history. This expansion is an exciting prospect, but for now, people use these models for various tasks like writing code, editing papers, and even as a reading partner. The most intriguing use case is how some individuals use it as a starting point for any knowledge work task. However, there's a concern about the accuracy of the information generated by these models and the need for fact-checking. The models can sometimes produce convincing but false information, which is an area of ongoing research. People are generally aware of the limitations of these models, but there's a need for more incentives for in-depth journalism that presents information in a balanced and truthful manner. Overall, the potential of these models is vast, and while there are challenges, the future holds exciting possibilities.

    AI models learning and adapting to individuals

    There's a growing interest in developing AI models that learn and adapt to individuals over time, becoming more useful and effective as they gain experience. This idea involves not just remembering facts and preferences, but also integrating lessons from past experiences to provide valuable insights and advice. However, privacy concerns are a significant consideration, and user choice and transparency are crucial. The AI should be able to remember and learn from all experiences, but users should have the power to control what is remembered and accessible. The user's relationship with trust and their ability to discern between people and situations is also essential. There's ongoing exploration into how AI models can support slower, sequential thinking, potentially through new paradigms or as an addition to current language models.

    Continuous improvement and unexpected leaps in AI capabilities

    The development of advanced AI models like GPT-5 is a gradual process involving continuous improvement, but it may also include unexpected leaps in capabilities. The importance of this lies in the need for AI systems to be able to allocate more compute to harder problems, allowing them to think more deeply and provide more accurate responses. This concept is likened to human thought and the ability to allocate more resources to complex problems. The mystery surrounding projects like Q-Star, which OpenAI is not yet ready to discuss, adds to the intrigue and highlights the importance of ongoing research in this field. OpenAI's iterative deployment strategy, which involves releasing updates and models incrementally, is seen as a beneficial approach for ensuring the world is prepared for the implications of advanced AI systems. However, the feeling of sudden leaps from an outsider's perspective may suggest the need for even more iterative releases. Ultimately, the goal is to avoid shock updates to the world and ensure that the benefits of advanced AI are realized in a responsible and thoughtful manner. While there are many exciting developments to come, the focus remains on making continuous progress towards better reasoning and more advanced AI systems.

Compute: The Commodity of the Future

Compute is likely to become one of the most valuable and precious commodities of the future. The world will require an unprecedented amount of it, leading to challenges in areas like energy production, data center construction, and chip fabrication. Sam Altman believes that nuclear energy, and fusion in particular, is a potential solution to the energy puzzle, and he's optimistic about the ongoing race for fusion technology. The world's dependence on compute is expected to grow significantly, with potential applications ranging from everyday tasks to complex scientific research. This shift towards a compute-driven future underscores the importance of investing in and innovating within this field.

    Nuclear Reactors and AI: Balancing Progress with Challenges

    The future of technology, specifically nuclear reactors and artificial intelligence (AI), holds great promise but also comes with significant challenges. The fear and mistrust of nuclear reactors and the potential politicization of AI are major obstacles that need to be addressed. Competition in the AI space, while bringing about innovation and better products, could also lead to an arms race. It's crucial to prioritize safety and collaboration to ensure a slow and safe development of these technologies. Elon Musk, despite his concerns about AI safety, could benefit from more collaboration and less competition-driven silos. The past dominance of Google in the tech industry serves as a reminder of the importance of balancing innovation with ethical considerations and collaboration. Ultimately, the key is to strive for a future where technology benefits humanity as a whole, rather than fueling divisions or creating unnecessary risks.

    Beyond search engines: The future of information access

    The future of information access goes beyond just building a better search engine. While Google has dominated this space for decades, the real excitement lies in finding new ways to help users discover, synthesize, and act on information. OpenAI's ChatGPT is an example of this, as it aims to do more than just provide ranked web pages. However, integrating chat and search is a complex challenge, and monetization through ads is a contentious issue. While some users prefer ad-free models, others see potential in unbiased ad units. Ultimately, the key is to create interfaces that enhance the user experience without manipulating the truth or interfering fundamentally with the content. Safety and bias are also critical concerns, as seen with recent incidents like the generation of inappropriate content by AI models. As we move forward, it's essential to strike a balance between innovation, user experience, and ethical considerations.

    Ensuring safety and ethical behavior in advanced AI models

    Ensuring safety and ethical behavior in advanced AI models requires clear communication and transparency. Companies should outline the desired behavior of their models, make it public, and encourage input and debate on edge cases. This can help distinguish between intended behavior and bugs that need fixing. The absence of such clarity can lead to misunderstandings and potential biases. Furthermore, companies should be aware of the potential influence of ideological biases within their organizations and strive to minimize their impact on product development. As AI becomes more powerful, safety considerations will become a primary focus for the entire organization, encompassing technical, societal, and economic aspects. The challenge of ensuring safe and ethical AI will require the collective effort of the entire company. Additionally, there is a growing concern about state actors trying to hack or infiltrate these advanced AI models, making security a significant concern. The transition from GPT-4 to GPT-5 is expected to bring exciting advancements, but also increased challenges in ensuring safety and ethical behavior.

    AI's intellectual connection leads to a more profound understanding

The advancements in AI, specifically in models like GPT, are leading to a more holistic and profound kind of understanding, not just in specific areas but across the board. This intellectual connection creates a unique feeling of being understood and heard, even in tasks like programming. The future of programming may involve more natural language and a shift in the skill set and predisposition of programmers. The development of AGI is expected within this decade, and while it may not be a world-changing event on its own, it will likely set the stage for more significant advancements in technology and robotics. Altman hopes that humanoid robots, or robot brains for embodied systems, will eventually be developed so that AGI does not have to rely on humans to act in the physical world. Overall, these advancements represent an exciting and inspiring transition in technology, but they also come with potentially menacing implications.

    The Impact of AGI on Scientific Discovery and Society

    While AGI (Artificial General Intelligence) is not expected to bring about a singularity level transition, it could significantly increase the rate of scientific discovery and have impressive theatrical moments. However, it might be difficult for individuals to interact with the first AGI and ask complex questions, as it may not have enough data or capabilities yet. Additionally, there is a need for robust governance systems to ensure that the development and use of AGI are safe and beneficial for society. The speaker, Sam Altman, emphasizes the importance of scientific progress and the potential power that comes with building AGI, but also expresses caution and the need for proper regulations.

    Importance of Balance of Power and Shared Responsibility with AGI

No single person should have complete control over an AGI (Artificial General Intelligence), given the complexity and societal impact of the technology. Sam Altman acknowledges his past mistakes and successes with OpenAI but emphasizes the importance of balance of power and shared responsibility. He also mentions that existential risk from AGI is not his top concern at the moment; the focus should be on addressing multiple significant risks. Altman also shared the personal history behind not capitalizing his online posts, attributing it to a poetic style and a general drift toward less formal communication. The discussion highlights the importance of balance of power, addressing multiple risks, and the evolving nature of communication styles.

    Exploring simulated worlds and the simulation hypothesis

    Our ability to generate simulated worlds, although not the strongest evidence, increases our belief in the simulation hypothesis and opens our minds to new possibilities. The ease with which we can create complex and realistic universes makes us question the nature of reality itself. Another intriguing topic discussed was the use of capital letters in tweets, with the speaker expressing a lack of concern for the convention and appreciating the movement towards letting go of this requirement. The speaker also shared his excitement about visiting the Amazon jungle and his belief in the existence of intelligent alien civilizations. He finds the Fermi Paradox puzzling and scary, but remains confident in their existence despite the challenges of interstellar travel. Additionally, the speaker highlighted the profound insights that simple concepts, like the square root function, can offer and the potential role of AI in helping us understand the nature of intelligence.

    Civilization's Progress and the Human Spirit

    The progress and achievements of human civilization, despite its flaws and challenges, give Sam hope for the future. He sees the advancements as a collective effort, built on the shoulders of past generations, much like how modern technology allows us to use and benefit from discoveries made by others. Sam also expresses a sense of gratitude for life and the experiences it brings, even in the face of potential risks or fears. Additionally, he shares Arthur C. Clarke's perspective that humans may not be here to worship God, but to create him, emphasizing the importance of innovation and progress.
