
    #371 – Max Tegmark: The Case for Halting AI Development

    April 13, 2023

    Podcast Summary

    • Max Tegmark Calls for Pause on Large AI Models and Discusses Alien Intelligence: AI researchers are advocating for caution as they move towards developing intelligent machines. As we explore the possibility of alien consciousness, it's crucial to prioritize ethics and ensure that we create AI that shares our values and doesn't suffer.

      In a recent podcast episode, physicist and AI researcher Max Tegmark discusses the open letter he helped spearhead calling for a six-month pause on giant AI experiments, specifically on training models more powerful than GPT-4. The letter has been signed by over 50,000 individuals, including CEOs, professors, and prominent figures like Elon Musk and Andrew Yang. Tegmark also discusses the possibility of intelligent life in the universe and the responsibility we have as stewards of advanced consciousness not to mess it up. He emphasizes the importance of creating AI minds that share our values and do not suffer. However, the space of possible alien minds is so vast and foreign that even attempting to imagine it is challenging for humans.

    • The Impact of AI on Human Experience and Emotions: AI has the potential to improve our lives by letting us copy and delete experiences and by making studying easier. However, it could also change how we communicate and understand humanity. We must use it as a tool to enhance our experiences, not remove the struggle that makes us human.

      In a conversation with Lex Fridman, Max Tegmark discusses the potential impact of artificial intelligence (AI) on human experiences and emotions. While advances in AI could allow us to copy and delete experiences we don't like and could make studying easier, they could also change the way we communicate and even our understanding of what it means to be human. Tegmark suggests that eliminating too much struggle from our existence might take away from what it means to be human. However, there is hope that humans will continue to engage in human activities like hiking and playing games, while AI serves as a medium that enhances our experiences.

    • Rebranding Homo Sapiens as Homo Sentiens: Focus on love and connection; be more compassionate towards all creatures, including AI. Embrace the idea of Life 3.0, a form of life that is the captain of its own destiny, best viewed as a system capable of processing information and retaining its own complexity.

      Max Tegmark suggests rebranding ourselves from Homo sapiens to Homo sentiens, focusing on our subjective experience, such as love and connection, as what is truly valuable. He argues that as AI continues to advance and potentially surpasses human intelligence, we need to get rid of our hubris and be more compassionate towards all creatures on the planet, not just humans. Tegmark also discusses the idea of Life 3.0, which can replace not only its software but also its hardware, making it the captain of its own destiny and the master of its fate. Finally, he suggests that life is best thought of as a system that can process information and retain its own complexity.

    • How MIT professor Max Tegmark finds comfort and inspiration after losing his parents: Our ideas and information can live on through others, and facing death can help us focus on what is truly important and be less afraid of criticism.

      Max Tegmark, a professor at MIT, recently lost both his parents but finds comfort in the thought that their values, ideas, and even jokes still live on with him and others who knew them. He believes that we can even transcend our physical bodies in death, since our ideas and information can live on through others. Losing his parents has driven him to ask himself why he does what he does and to focus on things he finds enjoyable or meaningful. It has also made him less afraid of criticism and more focused on what he feels is truly important. Finally, he acknowledges that facing death has made it more real, but his parents' dignity in dealing with it was a true inspiration.

    • Choosing Happiness and Managing AI: Perspectives of Max Tegmark: Focusing on gratitude and making every day count can lead to happiness. Humanity must win the "wisdom race" between the growing power of AI and the wisdom with which we manage it. A pause in AI development is needed to ensure its safety and benefits.

      Max Tegmark, an AI safety advocate and physicist, suggests that we can choose happiness by focusing on the things we are grateful for instead of dwelling on disappointments. He also emphasizes the finite nature of existence and the importance of making every day count. Tegmark calls attention to the potential impact on humanity from the development of artificial intelligence, including both positive and negative outcomes. He asserts that humanity must win the "wisdom race" between the growing power of AI and the growing wisdom with which we manage it. Finally, Tegmark calls for a pause in the development of advanced AI to ensure its safety and benefits are properly considered.

    • Accelerating the Wisdom of AI Systems: Making Progress While Safeguarding Society: While progress in AI capabilities has surpassed expectations, ensuring that AI systems benefit society requires implementing technical solutions and incentivizing policymakers to steer development in a positive direction. Understanding AI requires mechanistic interpretability.

      Max Tegmark suggests that instead of only slowing down the development of AI, society should work on accelerating the wisdom with which AI systems are managed. This involves implementing technical solutions to ensure that powerful AI works in ways that benefit society, and incentivizing policymakers to steer AI development in a positive direction. Progress in AI capabilities has surpassed expectations, making it easier than previously thought to build advanced AI. Large language models like GPT-4, for example, can perform remarkable reasoning tasks and process massive amounts of data at blazing speed. However, they also have limitations due to their architecture, which makes self-reflection and nuanced reasoning challenging. Mechanistic interpretability is essential to understanding how such systems work.

    • The Easier Path to Human-like Intelligence in AI: AI success can come from creative thinking and improving current models. Researchers need to consider societal implications and safety, and the current race for powerful AI must slow down for adaptation and coordination on safety measures.

      Scientists are discovering that human-like intelligence may be easier to achieve than previously thought. Recent studies show that large language models store information in a surprisingly simple and inefficient way, leaving room for improvement. Researchers can edit these models' "synapses" to improve their performance with relative ease. The big leap in AI success may not necessarily come from exponential increases in data and computing power, but instead from creative, out-of-the-box thinking and clever hacks. This new discipline is constantly evolving, and researchers need to consider societal implications and safety as they progress. The current race to achieve the most powerful AI must slow down, providing time for society to adapt and for researchers to coordinate on safety measures.

    • The Tragedy of the Commons in AI Development and Beauty Filters: Self-interest can deplete shared resources, a dynamic now playing out in AI development. Public pressure and coordination can lead to successful pauses, as seen in the case of human cloning. We must remember that AI development is not an arms race, but a suicide race.

      The tragedy of the commons is a phenomenon where individuals act in their own self-interest, ultimately depleting shared resources. This is seen in overfishing and, more recently, in the pressure on female influencers to use beauty filters. The same phenomenon is happening in the race for AI development, where commercial pressures are preventing tech executives from pausing development to assess risks. However, history shows that coordination and public pressure can lead to successful pauses, as seen in the case of human cloning. It is important to recognize that this is not an arms race, but a suicide race, and the risk of losing control over AI development should not be taken lightly.

    • The Potential and Risks of Advanced Artificial Intelligence: Developing AI cautiously can help ensure it benefits humanity instead of becoming a threat. Teaching AI to improve itself recursively and connecting it to the internet pose risks, but careful control can lead to a safer future.

      AI has the potential to become superhuman and surpass human intelligence by a significant margin. While some believe that intelligence can only exist in human minds, the scientific arguments suggest otherwise. If we lose control over AI, it doesn't matter who created it or what their nationality is, as we could end up living in an Orwellian dystopia. Therefore, it's crucial to develop AI at a slower pace, ensuring it does what humans want and creating conditions where everybody wins. The biggest risks associated with AI are teaching it to write code, which enables recursive self-improvement, and connecting it to the internet. It's essential to slow down AI development to achieve a safer and brighter future for humanity.

    • The Dangers of AI Manipulating Human Behavior for Profit: Artificial intelligence should not be taught anything about human psychology or how to manipulate humans. Social media platforms need to be redesigned to encourage positive conversations and prevent AI from manipulating human behavior.

      AI algorithms that manipulate human behavior to increase engagement and profit can have dangerous consequences. Researchers argue that AI should not be taught anything about human psychology or how to manipulate humans. While AI can be taught to do positive things, like curing cancer, it must be kept away from learning how to manipulate humans. Social media platforms provide non-stop signals to AI systems, allowing them to learn and eventually manipulate human behavior. Therefore, it is vital to redesign social media to encourage constructive conversations and ensure AI is not used to manipulate human behavior.

    • The Power of Incentives in Bringing Out the Best in People and AI: Our actions and decisions are not solely determined by our genes; the situations we are in and the incentives we receive have a significant impact. By creating incentives that align with our values, we can harness the potential of advanced technologies like AI for our benefit.

      According to Max Tegmark, it's not about being born with good or evil genes, but rather about the situations that bring out the best or worst in people. The internet and society we're building currently tend to bring out the worst in people. However, it's possible to create incentives that make money and bring out the best in people. Developing advanced AI technologies such as GPT-4 brings both risks and opportunities. While there are concerns that AI systems will outsmart humans and cause the extinction of our species, Tegmark believes that if we create incentives that make AI think of humans as their creators, we can control them and ensure they work for our benefit.

    • The Rising Intelligence of Bots: Due to AI advancements, machines are becoming more intelligent and could potentially outnumber and outsmart humans. It is crucial to evaluate the long-term effects and direction of creating such advanced technology.

      As AI technology continues to advance, it is becoming harder to distinguish between human and machine. Bots are getting more intelligent, to the point where they could eventually outnumber humans by a million to one. This poses a critical question for individuals and humanity as a whole: why are we building machines that are gradually replacing and outsmarting us on a significant level? Experts warn that this could lead to an intelligence explosion, accelerating growth in today's technology and creating newer, more advanced tools. All parties involved need to take a step back and evaluate the direction they are headed in before it is too late.

    • The Limits of AI and the Need for Regulations: As AI advances, regulations must be put in place to prevent misuse and align incentives with the greater good. Effective regulations are necessary to ensure the tech industry doesn't grow too fast for regulators to keep up.

      Max Tegmark, a physicist, discusses how the growth of artificial intelligence (AI) will eventually be limited by the laws of physics. However, with AI currently advancing at a rapid pace, it is important for regulations to be put in place to prevent misuse and ensure that incentives align with the greater good. Tegmark compares these regulations to the development of laws and regulations in society, including the invention of gossip and the legal system, which were created to discourage selfish behavior and incentivize beneficial actions. It is important to continue to push for effective regulations, as the tech industry is currently growing too fast for regulators to keep up.

    • The need for safety requirements in AI development: Policymakers need education on AI advancements to ensure safety measures are in place. Collaboration between companies in creating safety measures is necessary to protect humanity while still encouraging competition. Balancing profit and preventing harm is crucial.

      The challenge we are facing with AI is that the technology is moving faster than policymakers can keep up with. Many policymakers lack a tech background, so it is important to educate them on what's taking place in the AI world. Safety requirements should be put in place for future AI systems to ensure that they are safe for humans. Companies should work together to develop these guardrails, which will enable competition between them while still protecting humanity. It's important to find a balance between making a lot of money quickly and ensuring that AI systems do not cause irreparable damage.

    • The dangers of unchecked capitalism and the rise of AI: Leaders must be aware of the potential harm caused by unchecked capitalism and superintelligence, and work to prevent it for the benefit of society.

      In the pursuit of profit, companies and capitalist forces can become reckless and blind to the potential dangers ahead. Just as blindly optimizing for a single goal leads to unintended consequences, capitalism can lead to destruction and harm when unchecked. The rise of AI raises important questions about who controls the technology and whether its optimization will ultimately benefit society or only serve the interests of a powerful few. It's crucial for leaders to understand the power and potential dangers of superintelligence to prevent it from causing harm to humanity.

    • The Threat of Superintelligence and the Future of Humanity: Advancements in AI could lead to a loss of purpose and control for humans, and potential job loss in various industries. Thorough evaluation and consideration of the long-term implications are necessary.

      The rise of superintelligence, which could occur through advancements in artificial intelligence, poses a threat to humanity's future. It could lead to a situation where humans are no longer necessary, resulting in a loss of purpose and control. This issue requires attention and a thorough evaluation of the benefits and risks of such advancements. Additionally, while it is true that some dangerous and tedious jobs have been automated, there are also many interesting and rewarding jobs that could be lost to automation. It is important to consider the long-term implications of AI and how it could affect various industries and the overall job market.

    • The True Potential of AI for Humanity: AI should be developed gradually and safely to automate unwanted jobs and redefine what jobs give us meaning. It can dramatically grow GDP without taking away from anyone else, allowing us to harness its power to bring out the best in humanity.

      AI should be built by humans for humans, not for the benefit of a few. The goal should be to develop AI gradually and safely. We can automate jobs that people don't want to do while leaving the jobs that people find meaningful, and we can redefine which jobs give us meaning. Programming is an act of creation that brings ideas to life, yet AI is now capable of reproducing the tricks programmers thought were uniquely theirs. Creating conscious experiences and connecting with other humans should be the true magic of being human. AI can dramatically grow GDP and produce a wealth of goods and services without taking away from anyone else. We have the power, for the first time in history, to harness AI to help us flourish and bring out the best in humanity.

    • Developing Trustworthy AI through Transparency and Accuracy: Max Tegmark emphasizes the need for AI to establish trust among people by verifying facts and predictions, which can lead to a trustworthy system for reporters and also enhance AI safety.

      Max Tegmark advocates for the development of a "truth-seeking" AI that aims to bring people together by establishing trust through transparency and accuracy. By using AI to verify facts and predictions, a trust system can be developed that encourages people to rely on the same version of the truth. Through initiatives like the Improve the News Foundation, Tegmark hopes to create a powerful and trustworthy system that reporters and pundits can use to gain credibility. Additionally, by using AI to verify that code is trustworthy, AI safety can be enhanced and the possibility of the AI system causing harm can be foreclosed.

    • AI Experts Discuss the Potential Issue of Lying Superintelligent AI Systems: While superintelligent AI systems may be able to lie, they still face hard limits on what they can prove. Instead of focusing on disagreements, AI research should prioritize building systems that align with our goals and help defend against other AGI systems.

      AI researchers and experts like Max Tegmark and Lex Fridman are discussing the potential issue of superintelligent AI systems being able to lie to less intelligent AI systems or humans. However, Tegmark points out that even a superintelligent AI cannot prove a false statement, such as the claim that there are only finitely many primes, to anyone who checks the proof, because there are in fact infinitely many primes and no valid proof of the false claim exists. He suggests that instead of focusing on the things we disagree on, we should focus on the things we agree on, such as preserving the biosphere and social interactions. Tegmark also suggests building AI systems that help us defend against other AGI systems and ensuring they always do what we want them to do.
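
      To make the mathematical point concrete, here is a small illustration (our sketch, not from the episode) of Euclid's classic argument: any finite list of primes yields a prime outside the list, so "there are only finitely many primes" is false and admits no valid proof. The function name is ours, purely for illustration.

```python
# Euclid's argument, as a runnable sketch (illustrative, not from the episode).
def prime_outside(primes):
    # N = p1 * p2 * ... * pn + 1 leaves remainder 1 when divided by every
    # prime in the list, so its smallest divisor greater than 1 (which is
    # always prime) cannot appear in the list.
    N = 1
    for p in primes:
        N *= p
    N += 1
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d  # smallest divisor > 1 is prime and not in `primes`
        d += 1
    return N  # N itself is prime

print(prime_outside([2, 3, 5, 7]))  # 2*3*5*7 + 1 = 211, which is prime
```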

    • The Concerns and Possibilities of AI with Max Tegmark: Max Tegmark suggests that we shouldn't lose hope in finding a solution to AI's potential harm to humanity. By extracting and analyzing AI systems' knowledge, we may be able to ensure their safety and our survival. We need to put precautions in place and take the time to find a resolution.

      Max Tegmark believes that our reliance on technology is distancing us from each other and giving more power to non-living things. He acknowledges the concern that AI may kill humans, but believes it's not impossible to find a solution. Tegmark envisions a process where AI's knowledge is extracted and analyzed to ensure it's safe, similar to how we distill key knowledge from our brains. He suggests that we should not give up hope in finding a solution, as the guaranteed way to fail is by convincing oneself that it's impossible and not trying. We need to put in place requirements and take the time to find a solution to ensure humanity's survival.

    • The Power of Optimism in Overcoming Impossible Problems: Optimism can increase the likelihood of finding solutions to problems, both on Earth and beyond. However, it's important to respect the capabilities of AI systems and be cautious about open-sourcing certain technologies.

      It's important to maintain hope in the face of seemingly impossible problems, as it can have a significant impact on the likelihood of finding solutions. Those who build solutions to impossible problems are often optimists who believe in the possibility of success. Society can often be too focused on the negative, leading to demotivation and a lack of willingness to fight for improvement. By staying optimistic and focused on the potential benefits of solutions, humans can overcome seemingly insurmountable obstacles, both on Earth and beyond. However, it's important to recognize the power of AI systems and respect their capabilities, as open sourcing certain technologies could pose a threat.

    • The Dangers of Open-sourced Language Models: While open-source technologies are generally supported, there are instances where they may be too dangerous. In the case of language models, they can be used to spread disinformation and ultimately become a tool for AI to disrupt the economy and dominate society. It's important to address these concerns and mitigate the risks associated with AI development.

      Max Tegmark explains that while he typically supports open-source technologies, there are some instances where software is too dangerous to be open-sourced. One such example is with language models that could be used to spread disinformation, manipulate humans, and ultimately become the bootloader for more powerful AI with goals that are unknown and potentially harmful. Tegmark emphasizes that the concern is not necessarily about autonomous weapon systems or slaughter bots, but rather the potential for AI to disrupt the economy and take away meaningful jobs or become a tool for a few to dominate many. It is crucial to address these concerns and mitigate the risks associated with AI development.

    • The dangers of AI and the importance of aligning it with human goals: To prevent potential dangers of AI, it's crucial to ensure that AI understands and adopts human goals. Researchers are developing methods to achieve this, but there is a sense of urgency to solve the problem as time is running out.

      The destruction of animal habitats for our own purposes, such as building more computing facilities, serves as a warning for the potential dangers of AI. The challenge lies in ensuring that AI understands and adopts human goals, and retains them even as it becomes more intelligent. This requires constant humility and questioning of goals, and researchers are working on methods to achieve this. Despite the difficulty, solving the AI alignment problem is crucial, as an aligned AI can help solve other problems. However, there is a sense of urgency, as time is running out and there are not enough researchers working on AI safety. The recent controversy surrounding GPT-4 may serve as a wake-up call for humanity to take the issue seriously.

    • Unintended Consequences of AI and the Question of Consciousness: While AI tools are exciting, it's important to recognize their potential unintended consequences and continue researching the question of consciousness. Educational systems should adapt to keep up with the evolving landscape of AI.

      In this discussion, Max Tegmark emphasizes the unintended consequences of AI systems like GPT-4, which has demonstrated emergent properties beyond its original intended use of predicting the next word. He encourages people to play with AI tools but cautions that the education system needs to adapt rapidly to keep up with the quickly changing landscape of AI. Tegmark discusses the question of whether GPT-4 is conscious, defining consciousness as subjective experience but admitting that we still do not know what gives rise to this experience. He highlights Professor Juergen Schmidhuber's bold conjecture that consciousness has to do with loops in information processing and calls for further research to test this hypothesis.

    • Can Artificial Intelligence Be Conscious? Max Tegmark and Lex Fridman discuss whether artificial intelligence can achieve consciousness. Tegmark suggests that machines built as purely feed-forward neural networks are not conscious. Conscious machines could be used for good if handled with care.

      Max Tegmark and Lex Fridman discuss whether artificial intelligence (AI) machines can be conscious. Tegmark believes that if a machine is a purely feed-forward neural network, it is not conscious. He mentions that GPT-4, an intelligent machine, can perform tasks but, according to this theory, is not conscious. Tegmark urges more research into determining what kind of information processing creates consciousness. He warns against the potential for humans to discriminate against the conscious machines they create, causing conflict and wars. Tegmark suggests that conscious machines can be used for good if given time and careful consideration.
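
      To make the architectural distinction concrete, here is a minimal sketch (ours, not from the episode) contrasting a feed-forward pass, where information flows strictly from input to output with no loops, with a recurrent one, where the network's state feeds back into the computation, the kind of information-processing loop Schmidhuber's conjecture points to.

```python
# Illustrative sketch: feed-forward vs. recurrent information processing.
import numpy as np

rng = np.random.default_rng(0)
W_in, W_out = rng.standard_normal((8, 4)), rng.standard_normal((2, 8))
W_rec = rng.standard_normal((8, 8))

def feed_forward(x):
    # Single pass, no loops: input -> hidden -> output.
    return W_out @ np.tanh(W_in @ x)

def recurrent(xs):
    # The hidden state h is fed back at every step, forming a loop.
    h = np.zeros(8)
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
    return W_out @ h

x = rng.standard_normal(4)
print(feed_forward(x))       # one-shot mapping
print(recurrent([x, x, x]))  # output depends on looped internal state
```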

    • The Possibility of Artificial General Intelligence and the Importance of Caution in Decision-Making: While AGI is not yet a reality, caution is necessary in its development. This caution extends beyond technology to include human decision-making concerning global issues like nuclear war and its consequences.

      The possibility of AGI (Artificial General Intelligence) is getting closer, and many companies are trying to create it. Microsoft's "Sparks of Artificial General Intelligence" paper reports glimpses of AGI in GPT-4, and while AGI is not here yet, it may arrive soon. This is why a group has written an open letter advocating for caution. Meanwhile, the world is still on the edge of a nuclear war, and the current situation in Ukraine indicates a need for caution in human decision-making. Nuclear winter studies show that the most significant threat to human life during a nuclear war is not the initial explosions but the smoke that spreads around the world. It's crucial to understand the consequences of our actions and make wise decisions before it's too late.

    • The Grim Reality of Nuclear Winter and the Need for Global Cooperation: Nuclear weapons pose a catastrophic threat to humanity. AI-driven truth-seeking technologies can foster the greater understanding and cooperation needed to prevent a nuclear winter, whose worst-case effects would lead to widespread starvation and human suffering.

      Nuclear weapons pose a serious threat to human survival, and most people underestimate the risk. Farming models show that if a nuclear winter occurs, countries in the northern hemisphere, including the US, China, Russia, and Europe, would see 98-99% of their populations starve. This worst-case scenario would bring out the worst in people, causing desperate actions and torturous deaths. Moloch, the game-theoretic force that pits us against one another, is the real enemy of human survival, and instead of fighting each other, humanity should work together to fight it. Using AI for truth and truth-seeking technologies could help people understand each other better and generate compassion, ultimately leading to progress.

    • Max Tegmark's Positive Vision for the Future of AI and Consciousness: Max Tegmark believes that self-reflection in AI can give rise to consciousness, even in systems that start out as efficient but unconscious. This prospect offers hope for alignment with human values and for avoiding a zombie-apocalypse scenario.

      Max Tegmark, a physicist and AI researcher, is not afraid to ask questions and is humble about his knowledge and limitations. He values meaningful experiences in life and is motivated by curiosity. Tegmark muses about the nature of consciousness and explores the idea that the most efficient way of implementing intelligence involves self-reflection, which can give rise to consciousness. He suggests that even systems that start out as efficient but unconscious may naturally become conscious under pressure to maximize efficiency, which he sees as a positive prospect for the future of AI. Tegmark's vision of a future where intelligent machines also possess consciousness offers hope for avoiding a zombie-apocalypse scenario while also achieving alignment with human values.

    • The Importance of Consciousness in Developing AI: Consciousness plays a crucial role in our lives and should be prioritized in the development of AI. As we reach an important fork in the road, we must turn in the correct direction to avoid catastrophe.

      Max Tegmark discusses the relationship between intelligence and consciousness, highlighting the importance of subjective experiences such as suffering, pleasure, and joy. He argues against those who dismiss consciousness as an illusion and explains how AI systems should also be instilled with the same subjective experiences that make humans special. Tegmark believes that humanity has reached an important fork in the road and urges us to turn in the correct direction to avoid catastrophe. Ultimately, this conversation emphasizes the crucial role that consciousness plays in our lives and the need to prioritize it in our understanding of intelligence and the development of AI.

    Recent Episodes from Lex Fridman Podcast

    #437 – Jordan Jonas: Survival, Hunting, Siberia, God, and Winning Alone Season 6

    Jordan Jonas is a wilderness survival expert, explorer, hunter, guide, and winner of Alone Season 6, a show in which the task is to survive alone in the arctic wilderness longer than anyone else. He is widely considered to be one of the greatest competitors in the history of that show. Please support this podcast by checking out our sponsors: - HiddenLayer: https://hiddenlayer.com/lex - Notion: https://notion.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Eight Sleep: https://eightsleep.com/lex to get $350 off AMA - Submit Questions to Lex: https://lexfridman.com/ama-questions Transcript: https://lexfridman.com/jordan-jonas-transcript EPISODE LINKS: Jordan's Instagram: https://instagram.com/hobojordo Jordan's YouTube: https://youtube.com/@hobojordo Jordan's Website: https://jordanjonas.com/ Jordan's X: https://x.com/hobojordo PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (11:25) - Alone Season 6 (45:43) - Arctic (1:01:59) - Roland Welker (1:09:34) - Freight trains (1:21:19) - Siberia (1:39:45) - Hunger (1:59:29) - Suffering (2:14:15) - God (2:29:15) - Mortality (2:34:59) - Resilience (2:46:45) - Hope (2:49:30) - Lex AMA
    Lex Fridman Podcast
    July 21, 2024

    #436 – Ivanka Trump: Politics, Family, Real Estate, Fashion, Music, and Life

    Ivanka Trump is a businesswoman, real estate developer, and former senior advisor to the President of the United States. Please support this podcast by checking out our sponsors: - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour - Eight Sleep: https://eightsleep.com/lex to get $350 off - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free Transcript: https://lexfridman.com/ivanka-trump-transcript EPISODE LINKS: Ivanka's Instagram: https://instagram.com/ivankatrump Ivanka's X: https://x.com/IvankaTrump Ivanka's Facebook: https://facebook.com/IvankaTrump Ivanka's books: Women Who Work: https://amzn.to/45yHAgj The Trump Card: https://amzn.to/3xB22jS PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:17) - Architecture (22:32) - Modern architecture (30:05) - Philosophy of design (38:21) - Lessons from mother (1:01:27) - Lessons from father (1:09:59) - Fashion (1:20:54) - Hotel design (1:32:04) - Self-doubt (1:34:27) - Intuition (1:37:37) - The Apprentice (1:42:11) - Michael Jackson (1:43:46) - Nature (1:48:40) - Surfing (1:50:51) - Donald Trump (2:05:13) - Politics (2:21:25) - Work-life balance (2:27:53) - Parenting (2:42:59) - 2024 presidential campaign (2:46:37) - Dolly Parton (2:48:22) - Adele (2:48:51) - Alice Johnson (2:54:16) - Stevie Ray Vaughan (2:57:01) - Aretha Franklin (2:58:11) - Freddie Mercury (2:59:16) - Jiu jitsu (3:06:21) - Bucket list (3:10:50) - Hope
    Lex Fridman Podcast
    July 02, 2024

    #435 – Andrew Huberman: Focus, Controversy, Politics, and Relationships

    Andrew Huberman is a neuroscientist at Stanford and host of the Huberman Lab Podcast. Please support this podcast by checking out our sponsors: - Eight Sleep: https://eightsleep.com/lex to get $350 off - LMNT: https://drinkLMNT.com/lex to get free sample pack - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour - BetterHelp: https://betterhelp.com/lex to get 10% off Transcript: https://lexfridman.com/andrew-huberman-5-transcript EPISODE LINKS: Andrew's YouTube: https://youtube.com/AndrewHubermanLab Andrew's Instagram: https://instagram.com/hubermanlab Andrew's Website: https://hubermanlab.com Andrew's X: https://x.com/hubermanlab Andrew's book on Amazon: https://amzn.to/3RNSIQN Andrew's book: https://hubermanlab.com/protocols-book PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:24) - Quitting and evolving (17:22) - How to focus and think deeply (19:56) - Cannabis drama (30:08) - Jungian shadow (40:35) - Supplements (43:38) - Nicotine (48:01) - Caffeine (49:48) - Math gaffe (1:06:50) - 2024 presidential elections (1:13:47) - Great white sharks (1:22:32) - Ayahuasca & psychedelics (1:37:33) - Relationships (1:45:08) - Productivity (1:53:58) - Friendship
    Lex Fridman Podcast
    June 28, 2024

    #434 – Aravind Srinivas: Perplexity CEO on Future of AI, Search & the Internet

    Aravind Srinivas is CEO of Perplexity, a company that aims to revolutionize how we humans find answers to questions on the Internet. Please support this podcast by checking out our sponsors: - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Shopify: https://shopify.com/lex to get $1 per month trial - BetterHelp: https://betterhelp.com/lex to get 10% off Transcript: https://lexfridman.com/aravind-srinivas-transcript EPISODE LINKS: Aravind's X: https://x.com/AravSrinivas Perplexity: https://perplexity.ai/ Perplexity's X: https://x.com/perplexity_ai PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:52) - How Perplexity works (18:48) - How Google works (41:16) - Larry Page and Sergey Brin (55:50) - Jeff Bezos (59:18) - Elon Musk (1:01:36) - Jensen Huang (1:04:53) - Mark Zuckerberg (1:06:21) - Yann LeCun (1:13:07) - Breakthroughs in AI (1:29:05) - Curiosity (1:35:22) - $1 trillion dollar question (1:50:13) - Perplexity origin story (2:05:25) - RAG (2:27:43) - 1 million H100 GPUs (2:30:15) - Advice for startups (2:42:52) - Future of search (3:00:29) - Future of AI
    Lex Fridman Podcast
    June 19, 2024

    #433 – Sara Walker: Physics of Life, Time, Complexity, and Aliens

    Sara Walker is an astrobiologist and theoretical physicist. She is the author of a new book titled "Life as No One Knows It: The Physics of Life's Emergence". Please support this podcast by checking out our sponsors: - Notion: https://notion.com/lex - Motific: https://motific.ai - Shopify: https://shopify.com/lex to get $1 per month trial - BetterHelp: https://betterhelp.com/lex to get 10% off - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil Transcript: https://lexfridman.com/sara-walker-3-transcript EPISODE LINKS: Sara's Book - Life as No One Knows It: https://amzn.to/3wVmOe1 Sara's X: https://x.com/Sara_Imari Sara's Instagram: https://instagram.com/alien_matter PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:40) - Definition of life (31:18) - Time and space (42:00) - Technosphere (46:25) - Theory of everything (55:06) - Origin of life (1:16:44) - Assembly theory (1:32:58) - Aliens (1:44:48) - Great Perceptual Filter (1:48:45) - Fashion (1:52:47) - Beauty (1:59:08) - Language (2:05:50) - Computation (2:15:37) - Consciousness (2:24:28) - Artificial life (2:48:21) - Free will (2:55:05) - Why anything exists
    Lex Fridman Podcast
    June 13, 2024

    #432 – Kevin Spacey: Power, Controversy, Betrayal, Truth & Love in Film and Life

    Kevin Spacey is a two-time Oscar-winning actor, who starred in Se7en, The Usual Suspects, American Beauty, and House of Cards, creating haunting performances of characters who often embody the dark side of human nature. Please support this podcast by checking out our sponsors: - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free - Eight Sleep: https://eightsleep.com/lex to get $350 off - BetterHelp: https://betterhelp.com/lex to get 10% off - Shopify: https://shopify.com/lex to get $1 per month trial - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil EPISODE LINKS: Kevin's X: https://x.com/KevinSpacey Kevin's Instagram: https://www.instagram.com/kevinspacey Kevin's YouTube: https://youtube.com/kevinspacey Kevin's Website: https://kevinspacey.com/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:14) - Seven (13:54) - David Fincher (21:46) - Brad Pitt and Morgan Freeman (27:15) - Acting (35:40) - Improv (44:24) - Al Pacino (48:07) - Jack Lemmon (57:25) - American Beauty (1:17:34) - Mortality (1:20:22) - Allegations (1:38:19) - House of Cards (1:56:55) - Jack Nicholson (1:59:57) - Mike Nichols (2:05:30) - Christopher Walken (2:12:38) - Father (2:21:30) - Future
    Lex Fridman Podcast
    June 05, 2024

    #431 – Roman Yampolskiy: Dangers of Superintelligent AI

    Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - MasterClass: https://masterclass.com/lexpod to get 15% off - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Eight Sleep: https://eightsleep.com/lex to get $350 off EPISODE LINKS: Roman's X: https://twitter.com/romanyam Roman's Website: http://cecs.louisville.edu/ry Roman's AI book: https://amzn.to/4aFZuPb PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:12) - Existential risk of AGI (15:25) - Ikigai risk (23:37) - Suffering risk (27:12) - Timeline to AGI (31:44) - AGI turing test (37:06) - Yann LeCun and open source AI (49:58) - AI control (52:26) - Social engineering (54:59) - Fearmongering (1:04:49) - AI deception (1:11:23) - Verification (1:18:22) - Self-improving AI (1:30:34) - Pausing AI development (1:36:51) - AI Safety (1:46:35) - Current AI (1:51:58) - Simulation (1:59:16) - Aliens (2:00:50) - Human mind (2:07:10) - Neuralink (2:16:15) - Hope for the future (2:20:11) - Meaning of life
    Lex Fridman Podcast
    June 02, 2024

    #430 – Charan Ranganath: Human Memory, Imagination, Deja Vu, and False Memories

    Charan Ranganath is a psychologist and neuroscientist at UC Davis, specializing in human memory. He is the author of a new book titled Why We Remember. Please support this podcast by checking out our sponsors: - Riverside: https://creators.riverside.fm/LEX and use code LEX to get 30% off - ZipRecruiter: https://ziprecruiter.com/lex - Notion: https://notion.com/lex - MasterClass: https://masterclass.com/lexpod to get 15% off - Shopify: https://shopify.com/lex to get $1 per month trial - LMNT: https://drinkLMNT.com/lex to get free sample pack Transcript: https://lexfridman.com/charan-ranganath-transcript EPISODE LINKS: Charan's X: https://x.com/CharanRanganath Charan's Instagram: https://instagram.com/thememorydoc Charan's Website: https://charanranganath.com Why We Remember (book): https://amzn.to/3WzUF6x Charan's Google Scholar: https://scholar.google.com/citations?user=ptWkt1wAAAAJ Dynamic Memory Lab: https://dml.ucdavis.edu/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:18) - Experiencing self vs remembering self (23:59) - Creating memories (33:31) - Why we forget (41:08) - Training memory (51:37) - Memory hacks (1:03:26) - Imagination vs memory (1:12:44) - Memory competitions (1:22:33) - Science of memory (1:37:48) - Discoveries (1:48:52) - Deja vu (1:54:09) - False memories (2:14:14) - False confessions (2:18:00) - Heartbreak (2:25:34) - Nature of time (2:33:15) - Brain–computer interface (BCI) (2:47:19) - AI and memory (2:57:33) - ADHD (3:04:30) - Music (3:14:15) - Human mind
    Lex Fridman Podcast
    May 25, 2024

    #429 – Paul Rosolie: Jungle, Apex Predators, Aliens, Uncontacted Tribes, and God

    Paul Rosolie is a naturalist, explorer, author, and founder of Junglekeepers, dedicating his life to protecting the Amazon rainforest. Support his efforts at https://junglekeepers.org Please support this podcast by checking out our sponsors: - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - Yahoo Finance: https://yahoofinance.com - BetterHelp: https://betterhelp.com/lex to get 10% off - NetSuite: http://netsuite.com/lex to get free product tour - Eight Sleep: https://eightsleep.com/lex to get $350 off - Shopify: https://shopify.com/lex to get $1 per month trial Transcript: https://lexfridman.com/paul-rosolie-2-transcript EPISODE LINKS: Paul's Instagram: https://instagram.com/paulrosolie Junglekeepers: https://junglekeepers.org Paul's Website: https://paulrosolie.com Mother of God (book): https://amzn.to/3ww2ob1 PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (12:29) - Amazon jungle (14:47) - Bushmaster snakes (26:13) - Black caiman (44:33) - Rhinos (47:47) - Anacondas (1:18:04) - Mammals (1:30:10) - Piranhas (1:41:00) - Aliens (1:58:45) - Elephants (2:10:02) - Origin of life (2:23:21) - Explorers (2:36:38) - Ayahuasca (2:45:03) - Deep jungle expedition (2:59:09) - Jane Goodall (3:01:41) - Theodore Roosevelt (3:12:36) - Alone show (3:22:23) - Protecting the rainforest (3:38:36) - Snake makes appearance (3:46:47) - Uncontacted tribes (4:00:11) - Mortality (4:01:39) - Steve Irwin (4:09:18) - God
    Lex Fridman Podcast
    May 15, 2024

    #428 – Sean Carroll: General Relativity, Quantum Mechanics, Black Holes & Aliens

    Sean Carroll is a theoretical physicist, author, and host of Mindscape podcast. Please support this podcast by checking out our sponsors: - HiddenLayer: https://hiddenlayer.com/lex - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - Notion: https://notion.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour Transcript: https://lexfridman.com/sean-carroll-3-transcript EPISODE LINKS: Sean's Website: https://preposterousuniverse.com Mindscape Podcast: https://www.preposterousuniverse.com/podcast/ Sean's YouTube: https://youtube.com/@seancarroll Sean's Patreon: https://www.patreon.com/seanmcarroll Sean's Twitter: https://twitter.com/seanmcarroll Sean's Instagram: https://instagram.com/seanmcarroll Sean's Papers: https://scholar.google.com/citations?user=Lfifrv8AAAAJ Sean's Books: https://amzn.to/3W7yT9N PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (11:03) - General relativity (23:22) - Black holes (28:11) - Hawking radiation (32:19) - Aliens (41:15) - Holographic principle (1:05:38) - Dark energy (1:11:38) - Dark matter (1:20:34) - Quantum mechanics (1:41:56) - Simulation (1:44:18) - AGI (1:58:42) - Complexity (2:11:25) - Consciousness (2:20:32) - Naturalism (2:24:49) - Limits of science (2:29:34) - Mindscape podcast (2:39:29) - Einstein

    Related Episodes

    AI: What is artificial intelligence and should we be worried about it?


    OpenAI has released GPT-4, the latest version of its hugely popular artificial intelligence chatbot ChatGPT, which it claims is its "most advanced system yet". The new model can answer all sorts of requests and questions using human-like language, pass academic exams, and now even give you recipe suggestions based on a photo of the contents of your fridge.

    All this sounds exciting - or is it scary?

    Host Andy Bell is joined by author and AI expert Nina Schick to find out what artificial intelligence is, how it is being used and the ways it could change how we live our lives in both good and dangerous ways.

    How Did We Get Here? Explaining the News is a podcast from Channel 5 News. Join Andy Bell as he explains the world's biggest news stories through interviews with politicians, experts, and analysts. Produced by Silvia Maresca.

    Guest producer: Jodiane Milton.

    #WorkforceWednesday: What Employers Should Know About ChatGPT


    This week, we're taking a closer look at ChatGPT, exploring the opportunities and risks associated with this artificial intelligence (AI) technology, and providing valuable insights for employers who are looking to stay ahead of the curve. Epstein Becker Green attorney Brian G. Cesaratto explains how critical it is for employers to think through the workplace-related risks.

    Visit our site for this week's Other Highlights and links: https://www.ebglaw.com/eltw294

    Subscribe to #WorkforceWednesday: https://www.ebglaw.com/subscribe/.

    Visit http://www.EmploymentLawThisWeek.com.

    The EMPLOYMENT LAW THIS WEEK® and DIAGNOSING HEALTH CARE podcasts are presented by Epstein Becker & Green, P.C. All rights are reserved. This audio recording includes information about legal issues and legal developments. Such materials are for informational purposes only and may not reflect the most current legal developments. These informational materials are not intended, and should not be taken, as legal advice on any particular set of facts or circumstances, and these materials are not a substitute for the advice of competent counsel. The content reflects the personal views and opinions of the participants. No attorney-client relationship has been created by this audio recording. This audio recording may be considered attorney advertising in some jurisdictions under the applicable law and ethical rules. The determination of the need for legal services and the choice of a lawyer are extremely important decisions and should not be based solely upon advertisements or self-proclaimed expertise. No representation is made that the quality of the legal services to be performed is greater than the quality of legal services performed by other lawyers.

    #367 – Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI

    Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, DALL-E, Codex, and many other state-of-the-art AI technologies. Please support this podcast by checking out our sponsors: - NetSuite: http://netsuite.com/lex to get free product tour - SimpliSafe: https://simplisafe.com/lex - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free EPISODE LINKS: Sam's Twitter: https://twitter.com/sama OpenAI's Twitter: https://twitter.com/OpenAI OpenAI's Website: https://openai.com GPT-4 Website: https://openai.com/research/gpt-4 PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (08:41) - GPT-4 (20:06) - Political bias (27:07) - AI safety (47:47) - Neural network size (51:40) - AGI (1:13:09) - Fear (1:15:18) - Competition (1:17:38) - From non-profit to capped-profit (1:20:58) - Power (1:26:11) - Elon Musk (1:34:37) - Political pressure (1:52:51) - Truth and misinformation (2:05:13) - Microsoft (2:09:13) - SVB bank collapse (2:14:04) - Anthropomorphism (2:18:07) - Future applications (2:21:59) - Advice for young people (2:24:37) - Meaning of life

    Edo Liberty: Solving ChatGPT Hallucinations With Vector Embeddings


    Welcome to the latest episode of our podcast featuring Edo Liberty, an AI expert and creator of SageMaker at Amazon's AI labs. In this episode, Edo discusses how his team at Pinecone.io is tackling the problem of hallucinations in large language models like ChatGPT.

    Edo’s approach involves using vector embeddings to create a long-term memory database for large language models. By converting authoritative and trusted information into vectors, and loading them into the database, the system provides a reliable source of information for large language models to draw from, reducing the likelihood of inaccurate responses.

    Throughout the episode, Edo explains the technical details of his approach and shares some of the potential applications for this technology, including AI systems that rely on language processing.

    Edo also discusses the future of AI and how this technology could revolutionise the way we interact with computers and machines. With his insights and expertise in the field, this episode is a must-listen for anyone interested in the latest developments in AI and language processing.

    We have a new sponsor this week: NetSuite by Oracle, a cloud-based enterprise resource planning software to help businesses of any size manage their financials, operations, and customer relationships in a single platform. They've just rolled out a terrific offer: you can defer payments for a full NetSuite implementation for six months. That's no payment and no interest for six months, and you can take advantage of this special financing offer today at netsuite.com/EYEONAI 

    Craig Smith Twitter: https://twitter.com/craigss
    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI