
    #208 – Jeff Hawkins: The Thousand Brains Theory of Intelligence

    August 08, 2021

    Podcast Summary

    • The Neocortex: Key to Understanding Intelligence
      The neocortex is responsible for intelligence in the human brain, running on a repetitive circuit that can be applied to multiple problems. Collective intelligence is the result of individual intelligence magnified through collaboration.

      Neuroscientist Jeff Hawkins studies the structure, function, and origin of intelligence in the human brain. He believes that our brains have a model of the world that is continually being updated and includes everything from objects to people. The neocortex, which makes up about 75% of our brain, is the key to understanding intelligence. It runs on a repetitive circuit that can be applied to many different problems. While collective intelligence is an interesting idea, it's not a separate circuit in the brain. Rather, it's a result of humans being intelligent individuals who can magnify their intelligence through collaboration. Understanding how our brains work is crucial to understanding intelligence, whether individual or collective.

    • The Thousand Brains Theory: A Revolutionary New Approach to Intelligence
      Our brain is made up of tens of thousands of independent modeling systems that work together to create a collective intelligence. This complexity results in a singular perception even as the different systems communicate and vote with each other.

      The brain is the core element in all theories of intelligence, and it is where knowledge is represented and created. Jeff Hawkins, founder of Numenta, believes in starting with the brain to understand how we interact with each other and build communal intelligence. His new book, "A Thousand Brains," proposes the "Thousand Brains Theory" of intelligence, which suggests that there are tens of thousands of independent modeling systems in the neocortex, each communicating and voting with each other to create a collective intelligence. Despite this complexity, we experience a singular perception as our brain works towards a consensus on the information it receives.
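      The voting idea can be illustrated with a toy sketch (purely illustrative, with invented names — not Numenta's code): many noisy, independent "columns" each guess at an object, and the consensus across them is the single percept we experience.

```python
import random
from collections import Counter

# A toy model of column voting: each column senses imperfectly,
# yet the population converges on one answer.
OBJECTS = ["cup", "pen", "phone"]

def column_guess(true_object: str, accuracy: float = 0.7) -> str:
    """One column's noisy guess about the object being sensed."""
    if random.random() < accuracy:
        return true_object
    return random.choice([o for o in OBJECTS if o != true_object])

def vote(true_object: str, n_columns: int = 1000) -> str:
    """The consensus across many columns: the singular percept."""
    guesses = Counter(column_guess(true_object) for _ in range(n_columns))
    return guesses.most_common(1)[0][0]

print(vote("cup"))  # almost always "cup", despite individual column errors
```

Even with each column wrong 30% of the time, the vote is reliable — a crude analogue of how unreliable parts can yield a stable whole.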

    • The Thousand Brains Theory of Intelligence and Learning through Interaction
      To become smarter, we need to actively interact with the world and build a sophisticated model in our head. The brain constantly processes information, and we are aware only of the stable outcomes of its voting neurons, which are connected to language and memory.

      The brain is constantly processing information that we don't even realize. We are only aware of the stable outcomes of the voting neurons, which are the long-range connections in the brain that are accessible to language and memory. The thousand brains theory of intelligence suggests that the basic element of intelligence is the ability to learn a model of the world, and the more sophisticated the model is, the smarter we are. This model is constructed through movement and interaction with the world, rather than just passive observation. So, to learn something new, we need to physically interact with it and build a model in our head.

    • The Power of Brain Models in Imagining the Future
      Our brains create models to predict outcomes and make decisions. These models help us learn where we're wrong, and they apply to all senses, making them fundamental to our intelligence.

      Our brains use models, similar to physical models and computer models, to imagine the future and predict outcomes. These models allow us to plan and make decisions based on predicted consequences. Prediction is not the goal of the model, but an inherent property of it, and it helps us learn where our model is wrong so we can update it. This predictive ability applies to all senses, not just vision, and helps us perceive and understand the world around us. Ultimately, our ability to create and use models is fundamental to intelligence.
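      The point that prediction error is the learning signal can be sketched as a minimal predictor (an illustrative toy with made-up names, not the brain's actual algorithm): the model predicts the next input, and a wrong prediction is precisely the cue to update.

```python
# Toy sketch of prediction-as-learning-signal (illustrative only).
class Predictor:
    def __init__(self, lr: float = 0.3):
        self.estimate = 0.0   # the model's current belief about the world
        self.lr = lr          # how strongly each surprise updates the model

    def step(self, observation: float) -> float:
        error = observation - self.estimate   # surprise: where the model is wrong
        self.estimate += self.lr * error      # update in proportion to the error
        return error

p = Predictor()
for obs in [10.0] * 20:     # a stable world repeatedly observed
    err = p.step(obs)
print(round(p.estimate, 2))  # converges close to 10.0; errors shrink toward 0
```

Prediction here is not the goal but a by-product of having a model, and mismatches are what drive the model toward reality — the same logic the paragraph describes.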

    • The Importance of Models in Prediction and Intelligence
      Prediction is crucial for intelligence, and models allow for accurate predictions about various scenarios. The brain's ability to make predictions evolved early on, and the cortical column has developed to create models for a range of things.

      Prediction is at the core of intelligence and requires a model. Having a model allows for predictions to be made about a wide range of things, from simple low-level predictions to high-level predictions about potential outcomes of situations. Understanding how the brain makes predictions and builds models is key to understanding intelligence. The development of the ability to make predictions is believed to have evolved as soon as living things began to move about in their environments, with the ability to map out one's surroundings being a crucial part of this process. This mapping mechanism has since been repurposed and developed into the cortical column, allowing for the creation of models for a wide range of things.

    • Replicating and Compressing Neural Networks to Understand the Neocortex
      The neocortex has grid cells that create reference frames and sensory-motor models, making it possible for the brain to learn without prior knowledge of a domain. The idea loosely parallels society operating as a single brain.

      In this section, Jeff Hawkins and the interviewer discuss the concept of replicating and compressing neural networks, similar to the way the human brain may have evolved to create grid cells in each column of the neocortex. These grid cells create reference frames and sensory motor models of the world, which are essential to understanding how the neocortex works. The flexibility of this mapping algorithm allows the brain to learn things without any prior knowledge, making it a very generic and universal mechanism. While the focus is on understanding the neocortex and its role in human intelligence, the concept of collective intelligence and social structures operating as a single brain is also briefly mentioned.

    • The Neocortex, Neurons, and the Quest to Understand Complexity
      Our ability to understand complex knowledge is rooted in the neocortex and the predictive capabilities of neurons. While the study of complexity theory remains challenging, unlocking the secrets of the brain is crucial for advancing our understanding of the universe.

      The neocortex is responsible for high-level functions like language, planning, and thinking. It can be understood independently, but it interacts with other areas of the brain to shape personalities and motivations. Humans have a unique ability to understand complex knowledge, such as the origins of the universe and the workings of DNA, that no other species on Earth possesses. Predictions happen within neurons, which are the basic elements of the brain. Understanding neurons and their prediction capabilities is an important research agenda in neuroscience. Complexity theory, which seeks to understand complex systems, is a challenging field that has yet to fully congeal in the scientific community.

    • The Brain's Internal Prediction System through Dendritic Spikes and the 'Thousand Brains Theory'
      The brain constantly makes predictions using dendritic spikes, which lead to a unique modeling system in each cortical column. This 'thousand brains theory' creates diverse representations of objects under different contexts.

      The brain makes predictions by using internal spikes called dendritic spikes, which occur inside the neuron rather than externally. These spikes are a form of prediction, telling the neuron that it might become active shortly and that external spikes may soon be generated. This mechanism is prevalent in the pyramidal cells in the brain, and although most people are not aware of it, they make predictions all the time. This internal prediction system is part of the 'thousand brains theory', which suggests that every cortical column in the brain is a complete modeling system, and that there are thousands of models of each object in the world, leading to different representations in the brain under different contexts.
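      A heavily simplified sketch of this predictive state (loosely inspired by the mechanism described, with invented class and method names): contextual input on a neuron's dendrites does not fire the cell, it merely depolarizes it, so that when feedforward input actually arrives the predicted neuron responds differently than an unpredicted one.

```python
# Toy sketch, not a biophysical model: a dendritic spike marks a
# prediction ("I may become active soon") without producing output.
class Neuron:
    def __init__(self):
        self.predictive = False

    def distal_input(self, context_matches: bool) -> None:
        # A dendritic spike depolarizes the cell internally; no external spike.
        self.predictive = context_matches

    def feedforward(self) -> str:
        # When real input arrives, predicted cells respond first.
        return "fires early (predicted)" if self.predictive else "fires late (surprise)"

n = Neuron()
n.distal_input(context_matches=True)
print(n.feedforward())  # "fires early (predicted)"
```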

    • The Brain's Use of Reference Frames for Predictions and Organization
      The brain needs reference frames to understand and predict the world, and this same mechanism applies to higher-level concepts. Neurons in different parts of the brain create these reference frames.

      The brain needs a reference frame to make predictions about the world, including when touching or seeing an object. This means the brain must know where the object is located relative to the body (or specific body part). While X, Y, Z coordinates don't exist in the brain, neurons create reference frames in different parts of the brain, such as the hippocampus and the cortex, which allows for predictions to be made. The same principle applies to higher level concepts such as mathematics, language, and politics, where knowledge is organized in reference frames. The brain uses the exact same mechanism to process these concepts as it does for spatial information.
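      The role of a reference frame can be shown with a toy sketch (illustrative data structure only — the brain does not store X, Y, Z coordinates): features are stored at locations relative to the object, so a prediction becomes a lookup of what should be sensed at the location you are about to move to.

```python
# Toy sketch: an object model as features at object-relative locations.
coffee_cup = {
    (0, 0, 0): "flat bottom",
    (0, 5, 0): "rounded rim",
    (3, 2, 0): "handle",
}

def predict_feature(model: dict, location: tuple) -> str:
    """What the model expects to sense at a given object-relative location."""
    return model.get(location, "unknown - update the model here")

print(predict_feature(coffee_cup, (3, 2, 0)))  # "handle"
```

A mismatch between the predicted and sensed feature at a location is exactly the kind of error signal that tells the model where it is wrong.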

    • The Powerful Potential of Mini-Brains for Robotics and Beyond
      The brain's hierarchical representations can be replicated through mini-brains, paving the way for smarter, more interactive robots and physical entities. With a clear roadmap, this technology could become a reality in just a few years.

      The brain is a marvel of engineering that is able to instantly process complex information about objects, brands, and even concepts through hierarchical representations. This capability is built on the same principle of learning structured models, using movement and reference frames, and can be applied to everything from a coffee cup to the future of humanity. Despite its complexity, the brain can be replicated through the use of mini-brains, which can be copied and pasted effectively. This has enormous potential for robotics and entities that operate in the physical world and can interact with it. Building such entities may be an engineering challenge, but one that can be solved in a matter of years with a clear roadmap.

    • The Universal Learning Principle in AI and Its Potential Impact on Society
      AI's universal learning principle has the potential to revolutionize society in ways similar to the invention of the computer. The Thousand Brains Theory suggests that the human brain is one specific implementation of this principle, which can also be implemented in other forms. However, integrating these intelligent AI systems into society requires engineering solutions.

      The universal learning principle being developed in AI is similar to the universal Turing machine that gave rise to the computer. The impact of AI is predicted to be as large as or even larger than that of the computer in the last century, because it is a fundamental principle of learning and gaining intelligence. The Thousand Brains Theory suggests that the human brain is one particular implementation of this learning system, but it can be implemented in various embodiments. However, integrating increasingly intelligent AI systems into human society is seen as an engineering problem rather than a fundamental one.

    • The Debate on Giving Intelligent Machines Rights
      Intelligent machines are not like humans and do not have emotions or desires to live. It is important to consider what kind of model we are creating and what its purpose or goal ultimately is when deciding whether they should have rights.

      In a conversation about whether intelligent machines should have rights, Jeff Hawkins argues that we cannot assume machines will be like humans and have emotions or desires to live. They are simply modeling systems that can learn sophisticated tasks. While some argue that it may be useful to give machines the perception of consciousness, it does not necessarily imply desires or goals. Therefore, we must consider what kind of model we are creating and what its purpose or goal ultimately is. Ultimately, intelligent machines may not have any desires or emotional states unless we give them agency or motivation.

    • Teaching an Intelligent System for Specific Applications
      Design an intelligent system for a specific task and provide it with the necessary knowledge and skills to accomplish it successfully. Include safeguards and goals to avoid risks and ensure safe operation.

      When creating an intelligent system, it is important to remember that the system needs to be taught and directed in a specific way for different applications. For example, if creating a smart car, the system needs to be taught about driving and cars, as it won't figure it out on its own. Additionally, there needs to be an understanding of the game theory of human interaction to successfully operate on, for example, Manhattan streets. The intelligent system can learn these interactions just like people do, but it will not have its own personal goals or agency beyond what is assigned to it. It is important to embed goals and safeguards, both physical and virtual, in the system to prevent the risks associated with AI.

    • How the Neocortex Creates an Intelligent Prediction System
      Jeff Hawkins believes that while AI has its risks, the existential risk does not come from the system creating the risk on its own. He argues that AI will be intelligent but will not have human-like drives and emotions.

      Jeff Hawkins explains that the neocortex creates an intelligent prediction and modeling system that can be used to make predictions and can be embedded in something to create action. While AI can be a dangerous technology, Hawkins believes that the existential risk of AI does not come from the AI system creating the risk on its own. Many people worry that AI could become like humans and have human-like drives and emotions. However, Hawkins argues that AI will be intelligent but will not be like humans at all. He believes that computer systems do not fundamentally change how they compute and that certain things are impossible for them to do.

    • Risks of AI: Self-Replication vs. Intelligent Machines
      Taking action is essential to prevent the creation of self-replicating systems that pose an existential risk. Intelligent machines can benefit society, but regulations are necessary to balance risks and benefits. Vigilance is crucial to prevent bad actors from creating self-replicating systems.

      The key takeaway from this section is that unless we take action, things we want to happen will not happen. In terms of AI, the risk we should be worried about is self-replication, not the creation of intelligent machines per se. Self-replication poses an existential risk to humanity, so we must never create a self-replicating system and should be vigilant against bad actors who might. Intelligent machines, by contrast, can benefit society and help us understand and mitigate these risks better, and we need regulations to balance their benefits against the risks they pose.

    • Building a Self-Replicating Factory and Supply Chain
      Tesla aims to create a factory that goes from raw resources to final products in one place. Super-intelligent systems may not obey human commands, but consciousness cannot be uploaded into a computer.

      The goal of building a factory that can make new factories with an end-to-end supply chain is extremely difficult, and no one is currently doing it, though Tesla is making efforts in that direction by going from raw resources to the final car in one factory. Self-replication is the key factor, meaning that no entity is involved other than the one replicating. Super-intelligent systems may ignore human commands, but they won't be able to operate independently without serving human purposes. Uploading human brains into a computer is a distant and unlikely possibility due to technical difficulties and ethical concerns; the desire to do so is not widely shared, and people will not want to upload their minds into digital space even if it were possible.

    • The Challenges and Possibilities of Merging Minds with AI Technology
      Although progress has been made in prosthetic control, merging biology with technology is complex due to our lack of knowledge about neurons and information coding. Creating a superintelligent system may be a better option than attempting to merge with technology.

      The concept of merging our minds with AI technology is highly difficult due to the complexity of understanding billions of signals and learning what they mean. While some progress has been made in prosthetic control, merging the biology of the brain with the information coding of technology is a major hurdle. It requires detailed knowledge of the types of neurons, how they work, and their timing, which we currently lack. Surgery to implant technology is also a concern, but it's the information coding aspect that is much harder. While virtual reality and expanded brain capability are possibilities, creating a superintelligent system may be a better option rather than attempting to merge with technology.

    • The Potential Benefits of AI for Humanity's Transcendence
      AI has the potential to enhance our lives by overcoming our limitations, and intelligent machines could become our offspring, preserving our knowledge and history even if the human species vanishes.

      In the podcast conversation, Jeff Hawkins and Lex Fridman discuss the potential benefits of AI in enhancing our lives, particularly through the development of intelligent machines that can aid us in various ways. While these advancements may seem sci-fi-like, they could help overcome our human limitations, particularly in our desire to transcend our biological being by acquiring knowledge and exploring the universe. The two discuss the possibility of intelligent machines becoming our "offspring," which could propagate throughout the universe, representing our knowledge and history even if the human species vanishes. In sum, the continual acquisition and preservation of knowledge could be humanity's transcendent quality.

    • How to Leave Evidence of Our Existence for Future Generations
      Archiving important information like Wikipedia and creating unique signals can help future generations or intelligent species discover evidence of our existence. Intelligent machines may ultimately be the ones to explore the galaxy.

      The discussion revolves around the possibility of civilizations going extinct, like the dinosaurs or potentially humans in the future. The main takeaway is that there may be ways to leave evidence of our knowledge and existence for future generations or other intelligent species to discover, such as archiving important information like Wikipedia or creating signals that are unequivocally from an intelligent species, like a rotating object around the sun that would block out light in a particular pattern. It is also suggested that intelligent machines may be the ones to explore the galaxy in the future, rather than human beings.

    • Leaving a Legacy for Future Civilizations through Understanding the Human Brain
      By conducting concrete experiments to validate our understanding of the brain, we can not only build it but also make progress for the future. The potential for human progress is promising.

      Jeff Hawkins and Lex Fridman discuss the importance of leaving a legacy that preserves ideas for future civilizations. They acknowledge that understanding the human brain is a complex yet achievable task, and propose that the work being done now will impact the future. Hawkins stresses the need to create concrete experiments that validate our understanding of the brain, and points out that once we understand it, we will not only have the ability to build it but also to validate our theories in real time. In short, they are optimistic about the future and the potential for human progress.

    • The Complexities and Limitations of Brain Theories
      Simple explanations for complex systems are important, but oversimplified theories risk being proven wrong in the future. The neocortex offers insights into intelligence, but human nature can be limiting.

      In this conversation, Jeff Hawkins and Lex Fridman discuss the potential shortcomings of Hawkins's theory, with a specific focus on the complexity of the brain and the limitations of science. While Hawkins believes that simple explanations for complex systems are important in order to provide a framework for understanding, Fridman worries about oversimplification and the potential for theories to be proven wrong in the future. Despite these concerns, they both agree that the universal abilities and flexibility of the neocortex offer promising insights into the underlying principles of intelligence. Fridman also expresses disappointment in the limitations of human nature and our tendency to default to evolutionary strategies that can be harmful.

    • The Fallibility of Human Beliefs and Thoughts
      Understanding how our brains construct models can lead to a better world by teaching skepticism and constantly testing beliefs through common ground. Strive for better models to make progress.

      In a conversation between Jeff Hawkins and Lex Fridman, they discuss the fallibility of human beliefs and thoughts, as our brains build models of the world that may not reflect reality. This can result in false beliefs and even dangerous ideas. However, they suggest that if we all understood the process of how our brains construct these models, and were more skeptical of our beliefs, we could have a better world. Instead of raising children on dogma, we should teach them to be skeptical and constantly test their beliefs through common ground. While it may be difficult to comprehend reality, striving for better models can lead to progress.

    • What Defines Reality and the Pursuit of Intelligent Machines
      Our understanding of reality depends on how we define it, and the pursuit of intelligent machines will lead to innovation. By understanding computation principles, we can implement them biologically or otherwise.

      The question of whether we are getting close to reality depends on how we define reality. We have a useful model of the world that has allowed us to build planes and computers, but a theory of everything would give us a good model of the world only if it accounts for all layers of abstraction. Even if we had a theory of everything, we may not have a full understanding of intelligence. The acquisition of knowledge is pursued for its own pleasure, and the pursuit of intelligent machines will lead to decades of innovation. By understanding the principles of computation, we can choose how to implement them biologically or otherwise.

    • Jeff Hawkins on Innovations in Deep Learning and Neural Networks
      Adding the brain's principle of sparsity to existing neural networks, along with using dendrite models, can improve learning and create commercially valuable systems. Pursue your passions, even in adversity.

      Jeff Hawkins, a renowned scientist and inventor, discusses how the effectiveness of deep learning for specific tasks is driving a lot of innovation in hardware, which could lead to the discovery of intelligent systems that operate very differently from deep learning. One principle he talks about is sparsity in the brain, which is different from the dense activations of deep learning networks. Adding this principle to existing neural networks could be a commercially valuable way of bringing more brain principles into the existing systems. Another way to improve neural networks is to use dendrite models to enable continuous and rapid learning. Jeff advises young people to find something they are passionate about and become dedicated to it, even in the face of adversity.
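      Sparsity of the kind described here is often approximated in artificial networks by keeping only the top-k activations in a layer. The following is a minimal illustrative sketch of that idea (not Numenta's implementation):

```python
# Toy k-winners-take-all layer: only the k most active units stay on,
# mimicking the brain's sparse codes rather than dense activations.
def k_winners(activations: list[float], k: int) -> list[float]:
    """Keep the k largest activations; zero out the rest.

    Ties at the threshold are all kept (fine for a sketch)."""
    if k <= 0:
        return [0.0] * len(activations)
    threshold = sorted(activations, reverse=True)[k - 1]
    return [a if a >= threshold else 0.0 for a in activations]

print(k_winners([0.1, 0.9, 0.4, 0.7, 0.2], k=2))  # [0.0, 0.9, 0.0, 0.7, 0.0]
```

In a sparse representation most units are silent at any moment, which tends to make overlapping patterns less likely to interfere with each other.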

    • Finding Your Passion Can Lead to a Lifetime of Joy and Fulfillment
      Identifying your passion can provide purpose and a sense of contribution. It doesn't have to be a big question, but can come from the smallest things. Pursuing your passion can lead to lifelong mastery and fulfillment.

      Passion is a prerequisite for surviving the obstacles of pursuing your interests. Jeff Hawkins, a neuroscientist, found his passion for the brain after identifying it as the most interesting problem in his teenage years. Discovering your spark can give you purpose and a sense of contribution. It does not have to be a big question, but can come from the smallest things like parenting or mastering a particular sport. Finding joy and passion in something can lead to a lifelong pursuit of mastery, and may bring an intense level of joy and fulfillment.

    • Prioritizing Joy and Meaning, Embracing Love, and Leaving a Positive Legacy
      Focus on what brings you joy and meaning, even if it means making sacrifices. Love and compassion are crucial for AI systems that interact with humans. By accelerating the inevitable, we can leave a positive legacy for future generations.

      The key takeaway from this section is that it is important to prioritize activities that bring joy and meaning into your life, even if it means making sacrifices in other areas. Jeff Hawkins discusses how he had to give up on his dream of neuroscience but still came back to it later in life. Additionally, the discussion touches on the importance of love and compassion in the human experience and how it may play a role in developing AI systems that interact with humans. Despite uncertainties about the future, there is an optimistic view that love will ultimately win out. Finally, the emphasis is on accelerating the inevitable and leaving a positive legacy for future generations.

    • How Individuals Can Contribute to a Positive Future for Society
      By contributing to beneficial causes, individuals can help steer civilization towards progress while minimizing missteps. A better future is possible, with solutions for climate change and less discord and violence. Even if we don't make it, future generations can learn from us.

      The key takeaway from this section is that individuals can accelerate the progress of civilization towards a positive direction by contributing to something that is beneficial. This can make things happen sooner and eliminate bad missteps along the way, ultimately leading to a better future for society. Jeff Hawkins is optimistic that we will be able to live for a long time and solve problems like climate change, leading to less discord and violence. At the very least, if we do destroy ourselves, we will leave behind some traces of our curious existence for future inhabitants, or perhaps even alien civilizations to ponder.

    Recent Episodes from Lex Fridman Podcast

    #437 – Jordan Jonas: Survival, Hunting, Siberia, God, and Winning Alone Season 6

    #437 – Jordan Jonas: Survival, Hunting, Siberia, God, and Winning Alone Season 6
    Jordan Jonas is a wilderness survival expert, explorer, hunter, guide, and winner of Alone Season 6, a show in which the task is to survive alone in the arctic wilderness longer than anyone else. He is widely considered to be one of the greatest competitors in the history on that show. Please support this podcast by checking out our sponsors: - HiddenLayer: https://hiddenlayer.com/lex - Notion: https://notion.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Eight Sleep: https://eightsleep.com/lex to get $350 off AMA - Submit Questions to Lex: https://lexfridman.com/ama-questions Transcript: https://lexfridman.com/jordan-jonas-transcript EPISODE LINKS: Jordan's Instagram: https://instagram.com/hobojordo Jordan's YouTube: https://youtube.com/@hobojordo Jordan's Website: https://jordanjonas.com/ Jordan's X: https://x.com/hobojordo PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction (11:25) - Alone Season 6 (45:43) - Arctic (1:01:59) - Roland Welker (1:09:34) - Freight trains (1:21:19) - Siberia (1:39:45) - Hunger (1:59:29) - Suffering (2:14:15) - God (2:29:15) - Mortality (2:34:59) - Resilience (2:46:45) - Hope (2:49:30) - Lex AMA
    Lex Fridman Podcast
    enJuly 21, 2024

    #436 – Ivanka Trump: Politics, Family, Real Estate, Fashion, Music, and Life

    #436 – Ivanka Trump: Politics, Family, Real Estate, Fashion, Music, and Life
    Ivanka Trump is a businesswoman, real estate developer, and former senior advisor to the President of the United States. Please support this podcast by checking out our sponsors: - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour - Eight Sleep: https://eightsleep.com/lex to get $350 off - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free Transcript: https://lexfridman.com/ivanka-trump-transcript EPISODE LINKS: Ivanka's Instagram: https://instagram.com/ivankatrump Ivanka's X: https://x.com/IvankaTrump Ivanka's Facebook: https://facebook.com/IvankaTrump Ivanka's books: Women Who Work: https://amzn.to/45yHAgj The Trump Card: https://amzn.to/3xB22jS PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction (10:17) - Architecture (22:32) - Modern architecture (30:05) - Philosophy of design (38:21) - Lessons from mother (1:01:27) - Lessons from father (1:09:59) - Fashion (1:20:54) - Hotel design (1:32:04) - Self-doubt (1:34:27) - Intuition (1:37:37) - The Apprentice (1:42:11) - Michael Jackson (1:43:46) - Nature (1:48:40) - Surfing (1:50:51) - Donald Trump (2:05:13) - Politics (2:21:25) - Work-life balance (2:27:53) - Parenting (2:42:59) - 2024 presidential campaign (2:46:37) - Dolly Parton (2:48:22) - Adele (2:48:51) - Alice Johnson (2:54:16) - Stevie Ray Vaughan (2:57:01) - Aretha Franklin (2:58:11) - Freddie Mercury (2:59:16) - Jiu jitsu (3:06:21) - Bucket list (3:10:50) - Hope
    Lex Fridman Podcast
    enJuly 02, 2024

    #435 – Andrew Huberman: Focus, Controversy, Politics, and Relationships

    #435 – Andrew Huberman: Focus, Controversy, Politics, and Relationships
    Andrew Huberman is a neuroscientist at Stanford and host of the Huberman Lab Podcast. Please support this podcast by checking out our sponsors: - Eight Sleep: https://eightsleep.com/lex to get $350 off - LMNT: https://drinkLMNT.com/lex to get free sample pack - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour - BetterHelp: https://betterhelp.com/lex to get 10% off Transcript: https://lexfridman.com/andrew-huberman-5-transcript EPISODE LINKS: Andrew's YouTube: https://youtube.com/AndrewHubermanLab Andrew's Instagram: https://instagram.com/hubermanlab Andrew's Website: https://hubermanlab.com Andrew's X: https://x.com/hubermanlab Andrew's book on Amazon: https://amzn.to/3RNSIQN Andrew's book: https://hubermanlab.com/protocols-book PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction (10:24) - Quitting and evolving (17:22) - How to focus and think deeply (19:56) - Cannabis drama (30:08) - Jungian shadow (40:35) - Supplements (43:38) - Nicotine (48:01) - Caffeine (49:48) - Math gaffe (1:06:50) - 2024 presidential elections (1:13:47) - Great white sharks (1:22:32) - Ayahuasca & psychedelics (1:37:33) - Relationships (1:45:08) - Productivity (1:53:58) - Friendship
    Lex Fridman Podcast
    enJune 28, 2024

    #434 – Aravind Srinivas: Perplexity CEO on Future of AI, Search & the Internet

Aravind Srinivas is CEO of Perplexity, a company that aims to revolutionize how we humans find answers to questions on the Internet. Please support this podcast by checking out our sponsors: - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Shopify: https://shopify.com/lex to get $1 per month trial - BetterHelp: https://betterhelp.com/lex to get 10% off Transcript: https://lexfridman.com/aravind-srinivas-transcript EPISODE LINKS: Aravind's X: https://x.com/AravSrinivas Perplexity: https://perplexity.ai/ Perplexity's X: https://x.com/perplexity_ai
(00:00) - Introduction (10:52) - How Perplexity works (18:48) - How Google works (41:16) - Larry Page and Sergey Brin (55:50) - Jeff Bezos (59:18) - Elon Musk (1:01:36) - Jensen Huang (1:04:53) - Mark Zuckerberg (1:06:21) - Yann LeCun (1:13:07) - Breakthroughs in AI (1:29:05) - Curiosity (1:35:22) - $1 trillion dollar question (1:50:13) - Perplexity origin story (2:05:25) - RAG (2:27:43) - 1 million H100 GPUs (2:30:15) - Advice for startups (2:42:52) - Future of search (3:00:29) - Future of AI
    Lex Fridman Podcast
    enJune 19, 2024

    #433 – Sara Walker: Physics of Life, Time, Complexity, and Aliens

Sara Walker is an astrobiologist and theoretical physicist. She is the author of a new book titled "Life as No One Knows It: The Physics of Life's Emergence". Please support this podcast by checking out our sponsors: - Notion: https://notion.com/lex - Motific: https://motific.ai - Shopify: https://shopify.com/lex to get $1 per month trial - BetterHelp: https://betterhelp.com/lex to get 10% off - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil Transcript: https://lexfridman.com/sara-walker-3-transcript EPISODE LINKS: Sara's Book - Life as No One Knows It: https://amzn.to/3wVmOe1 Sara's X: https://x.com/Sara_Imari Sara's Instagram: https://instagram.com/alien_matter OUTLINE: (00:00) - Introduction (10:40) - Definition of life (31:18) - Time and space (42:00) - Technosphere (46:25) - Theory of everything (55:06) - Origin of life (1:16:44) - Assembly theory (1:32:58) - Aliens (1:44:48) - Great Perceptual Filter (1:48:45) - Fashion (1:52:47) - Beauty (1:59:08) - Language (2:05:50) - Computation (2:15:37) - Consciousness (2:24:28) - Artificial life (2:48:21) - Free will (2:55:05) - Why anything exists
    Lex Fridman Podcast
    enJune 13, 2024

    #432 – Kevin Spacey: Power, Controversy, Betrayal, Truth & Love in Film and Life

Kevin Spacey is a two-time Oscar-winning actor, who starred in Se7en, The Usual Suspects, American Beauty, and House of Cards, creating haunting performances of characters who often embody the dark side of human nature. Please support this podcast by checking out our sponsors: - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free - Eight Sleep: https://eightsleep.com/lex to get $350 off - BetterHelp: https://betterhelp.com/lex to get 10% off - Shopify: https://shopify.com/lex to get $1 per month trial - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil EPISODE LINKS: Kevin's X: https://x.com/KevinSpacey Kevin's Instagram: https://www.instagram.com/kevinspacey Kevin's YouTube: https://youtube.com/kevinspacey Kevin's Website: https://kevinspacey.com/ OUTLINE: (00:00) - Introduction (10:14) - Seven (13:54) - David Fincher (21:46) - Brad Pitt and Morgan Freeman (27:15) - Acting (35:40) - Improv (44:24) - Al Pacino (48:07) - Jack Lemmon (57:25) - American Beauty (1:17:34) - Mortality (1:20:22) - Allegations (1:38:19) - House of Cards (1:56:55) - Jack Nicholson (1:59:57) - Mike Nichols (2:05:30) - Christopher Walken (2:12:38) - Father (2:21:30) - Future
    Lex Fridman Podcast
    enJune 05, 2024

    #431 – Roman Yampolskiy: Dangers of Superintelligent AI

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - MasterClass: https://masterclass.com/lexpod to get 15% off - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Eight Sleep: https://eightsleep.com/lex to get $350 off EPISODE LINKS: Roman's X: https://twitter.com/romanyam Roman's Website: http://cecs.louisville.edu/ry Roman's AI book: https://amzn.to/4aFZuPb OUTLINE: (00:00) - Introduction (09:12) - Existential risk of AGI (15:25) - Ikigai risk (23:37) - Suffering risk (27:12) - Timeline to AGI (31:44) - AGI Turing test (37:06) - Yann LeCun and open source AI (49:58) - AI control (52:26) - Social engineering (54:59) - Fearmongering (1:04:49) - AI deception (1:11:23) - Verification (1:18:22) - Self-improving AI (1:30:34) - Pausing AI development (1:36:51) - AI Safety (1:46:35) - Current AI (1:51:58) - Simulation (1:59:16) - Aliens (2:00:50) - Human mind (2:07:10) - Neuralink (2:16:15) - Hope for the future (2:20:11) - Meaning of life
    Lex Fridman Podcast
    enJune 02, 2024

    #430 – Charan Ranganath: Human Memory, Imagination, Deja Vu, and False Memories

Charan Ranganath is a psychologist and neuroscientist at UC Davis, specializing in human memory. He is the author of a new book titled Why We Remember. Please support this podcast by checking out our sponsors: - Riverside: https://creators.riverside.fm/LEX and use code LEX to get 30% off - ZipRecruiter: https://ziprecruiter.com/lex - Notion: https://notion.com/lex - MasterClass: https://masterclass.com/lexpod to get 15% off - Shopify: https://shopify.com/lex to get $1 per month trial - LMNT: https://drinkLMNT.com/lex to get free sample pack Transcript: https://lexfridman.com/charan-ranganath-transcript EPISODE LINKS: Charan's X: https://x.com/CharanRanganath Charan's Instagram: https://instagram.com/thememorydoc Charan's Website: https://charanranganath.com Why We Remember (book): https://amzn.to/3WzUF6x Charan's Google Scholar: https://scholar.google.com/citations?user=ptWkt1wAAAAJ Dynamic Memory Lab: https://dml.ucdavis.edu/
(00:00) - Introduction (10:18) - Experiencing self vs remembering self (23:59) - Creating memories (33:31) - Why we forget (41:08) - Training memory (51:37) - Memory hacks (1:03:26) - Imagination vs memory (1:12:44) - Memory competitions (1:22:33) - Science of memory (1:37:48) - Discoveries (1:48:52) - Deja vu (1:54:09) - False memories (2:14:14) - False confessions (2:18:00) - Heartbreak (2:25:34) - Nature of time (2:33:15) - Brain–computer interface (BCI) (2:47:19) - AI and memory (2:57:33) - ADHD (3:04:30) - Music (3:14:15) - Human mind
    Lex Fridman Podcast
    enMay 25, 2024

    #429 – Paul Rosolie: Jungle, Apex Predators, Aliens, Uncontacted Tribes, and God

Paul Rosolie is a naturalist, explorer, author, and founder of Junglekeepers, dedicating his life to protecting the Amazon rainforest. Support his efforts at https://junglekeepers.org Please support this podcast by checking out our sponsors: - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - Yahoo Finance: https://yahoofinance.com - BetterHelp: https://betterhelp.com/lex to get 10% off - NetSuite: http://netsuite.com/lex to get free product tour - Eight Sleep: https://eightsleep.com/lex to get $350 off - Shopify: https://shopify.com/lex to get $1 per month trial Transcript: https://lexfridman.com/paul-rosolie-2-transcript EPISODE LINKS: Paul's Instagram: https://instagram.com/paulrosolie Junglekeepers: https://junglekeepers.org Paul's Website: https://paulrosolie.com Mother of God (book): https://amzn.to/3ww2ob1
(00:00) - Introduction (12:29) - Amazon jungle (14:47) - Bushmaster snakes (26:13) - Black caiman (44:33) - Rhinos (47:47) - Anacondas (1:18:04) - Mammals (1:30:10) - Piranhas (1:41:00) - Aliens (1:58:45) - Elephants (2:10:02) - Origin of life (2:23:21) - Explorers (2:36:38) - Ayahuasca (2:45:03) - Deep jungle expedition (2:59:09) - Jane Goodall (3:01:41) - Theodore Roosevelt (3:12:36) - Alone show (3:22:23) - Protecting the rainforest (3:38:36) - Snake makes appearance (3:46:47) - Uncontacted tribes (4:00:11) - Mortality (4:01:39) - Steve Irwin (4:09:18) - God
    Lex Fridman Podcast
    enMay 15, 2024

    #428 – Sean Carroll: General Relativity, Quantum Mechanics, Black Holes & Aliens

Sean Carroll is a theoretical physicist, author, and host of Mindscape podcast. Please support this podcast by checking out our sponsors: - HiddenLayer: https://hiddenlayer.com/lex - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - Notion: https://notion.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour Transcript: https://lexfridman.com/sean-carroll-3-transcript EPISODE LINKS: Sean's Website: https://preposterousuniverse.com Mindscape Podcast: https://www.preposterousuniverse.com/podcast/ Sean's YouTube: https://youtube.com/@seancarroll Sean's Patreon: https://www.patreon.com/seanmcarroll Sean's Twitter: https://twitter.com/seanmcarroll Sean's Instagram: https://instagram.com/seanmcarroll Sean's Papers: https://scholar.google.com/citations?user=Lfifrv8AAAAJ Sean's Books: https://amzn.to/3W7yT9N
(00:00) - Introduction (11:03) - General relativity (23:22) - Black holes (28:11) - Hawking radiation (32:19) - Aliens (41:15) - Holographic principle (1:05:38) - Dark energy (1:11:38) - Dark matter (1:20:34) - Quantum mechanics (1:41:56) - Simulation (1:44:18) - AGI (1:58:42) - Complexity (2:11:25) - Consciousness (2:20:32) - Naturalism (2:24:49) - Limits of science (2:29:34) - Mindscape podcast (2:39:29) - Einstein

    Related Episodes

    Exploring the Mind: John Vervaeke on Relevance Realization and Consciousness


In Episode 3 of the "Active Inference Insights" series, host Darius Parvizi-Wayne welcomes John Vervaeke for an insightful discussion bridging cognitive science and philosophy. The episode delves into topics like relevance realization, evolutionary processes in cognition, and understanding cultural variations in self-modeling. Vervaeke articulates the dynamic nature of cognition and its relationship with the environment, challenging traditional views on consciousness and the subjective-objective divide. Listeners will better understand how computational models and philosophical frameworks can synergistically enhance our comprehension of the mind and its processes. This episode is a thought-provoking journey that connects cognitive science theories with philosophical inquiries, offering listeners nuanced perspectives on the complexity of human cognition and its implications for meaning in life.

Glossary of Terms

    4E Cognitive Science: A view of cognition as embodied, embedded, enacted, and extended.

    Relevance Realization: The ability to focus on salient information in a complex environment.

    Predictive Processing: A framework in cognitive science that describes how the brain makes predictions about incoming sensory information.

Opponent Processing: A concept in biology where two subsystems work in opposition to regulate functions like arousal.

Resources and References:

    Dr. John Vervaeke: Website | YouTube | Patreon | X | Facebook

Darius Parvizi-Wayne: X | Active Inference Institute | Active Inference Insights

    The Vervaeke Foundation

    Awaken to Meaning



    John Vervaeke YouTube

    Awakening from the Meaning Crisis

    After Socrates

    The Crossroads of Predictive Processing and Relevance Realization | Leiden Symposium




    Books, Articles, Publications, and Videos

    Heidegger, Neoplatonism, and the History of Being: Relation as Ontological Ground - James Filler

Andersen, B. P., Miller, M., & Vervaeke, J. (2022). Predictive processing and relevance realization: Exploring convergent solutions to the frame problem. Phenomenology and the Cognitive Sciences.

Hohwy, J. (2016). The self-evidencing brain. Noûs.

Limanowski, J., & Friston, K. (2020). Attenuating oneself. Philosophy and the Mind Sciences.

Limanowski, J., & Friston, K. (2018). 'Seeing the dark': Grounding phenomenal transparency and opacity in precision estimation for active inference. Frontiers in Psychology.

Hesp, C., Smith, R., Parr, T., Allen, M., Friston, K. J., & Ramstead, M. J. D. (2021). Deeply felt affect: The emergence of valence in deep active inference. Neural Computation.

Parvizi-Wayne, D., Sandved-Smith, L., Pitliya, R. J., Limanowski, J., Tufft, M. R. A., & Friston, K. (2023). Forgetting ourselves in flow: An active inference account of flow states.

Parr, T., Holmes, E., Friston, K. J., & Pezzulo, G. (2023). Cognitive effort and active inference. Neuropsychologia.

    "The Theory of Affordances" The Ecological Approach to Visual Perception. Boston: Houghton Mifflin, Gibson, James J. (1979).

Karl Friston ~ Active Inference Insights 001 ~ Free Energy, Time, Consciousness

Quotes

    "Relevance realization inverts the way common sense works." - John Verveke

     "The deeper your temporal model, the more critical relevance realization becomes." - Darius Parvizi Wayne



Chapters with Timestamps

    Introduction and Overview [00:00:00]

    Evolution and Function in Cognition [00:06:17]

    Opponent Processing in Biology [00:09:42]

    Problem-Solving and Anticipation [00:14:22]

    Relevance Realization and Evolution [00:31:34]

    Consciousness and Subject-Object Distinction [00:53:00]

    Cultural and Historical Perspectives on Cognition [00:56:35]

    Ontological Self and Phenomenal Self Modeling [01:11:19]

    Self-Modeling and Cultural Perspectives [01:14:00]

    Agency and Selfhood in Cognitive Processes [01:18:16]

Self-Modeling Under Flow States [01:22:01]

    Arousal and Metamotivational Theory [01:35:54]

    Predictive Processing Symposium and Relevance Realization [01:46:26]

    Episode Conclusion and Future Plans [01:48:20]



Timestamped Highlights

[00:00:00] - Darius Parvizi-Wayne introduces the episode and guest John Vervaeke, highlighting John's expertise in psychology, cognitive science, and Buddhist philosophy.

[00:06:17] - John Vervaeke discusses the evolution of cognitive functions and the role of evolution in shaping cognition.

    [00:11:40] - Explanation of the autonomic nervous system, detailing how its two subsystems with opposite biases work together to regulate bodily functions.

    [00:14:43] - The conversation delves into the nature of problem-solving, exploring how organisms predict and prepare for future states. 

    [00:22:23] - The concept of hyperbolic discounting in cognition is examined, analyzing its impact on decision-making and goal pursuit.

    [00:26:20] - Discussion on the role of affordances in predictive processing, exploring how environments offer action possibilities to organisms.

    [00:31:34] - Conversation on the analogy between relevance realization and evolutionary processes, highlighting the dynamic nature of cognitive adaptation​​.

    [00:38:00] - The existential imperative is clarified in the context of the free energy principle, exploring its implications in cognitive science​​.

    [00:53:00] - Consciousness and the subject-object distinction are addressed, challenging traditional cognitive models and exploring interrelational perspectives​​.

    [00:56:35] - Cultural and historical influences on cognitive processes are explored, examining how these factors shape our understanding of cognition​​.

[00:57:13] - John Vervaeke discusses the hermeneutics of suspicion in cognitive science, questioning the distinction between appearance and reality.

    [01:04:49] - The role of perception and its function in cognitive processes are discussed, emphasizing the interconnectedness of perception and cognition​​.

    [01:11:19] - The concepts of ontological and phenomenal self-modeling are delved into, discussing how these models influence cognitive processes​​.

    [01:14:00] - Self-modeling and its cultural variations are discussed, highlighting the diversity in conceptualizing the self across different cultures​​.

    [01:18:16] - Agency and selfhood in cognitive processes are examined, focusing on how these concepts enhance predictive agency in the world​​.

    [01:22:01] - Exploration of self-modeling under flow states and their impact on cognitive processes​​.

[01:35:54] - Analysis of arousal in the context of metamotivational theory, discussing how arousal is framed differently based on goals and motivations.

    [01:38:04] - Discussion of the intersection of philosophical concepts and computational models in cognitive science, emphasizing the importance of integrating these approaches to enhance understanding without oversimplifying complex phenomena.

[01:46:26] - Overview of a talk integrating predictive processing and relevance realization theory, offering insights into their combined impact on cognitive science.

    Delving into the Frontiers of Artificial General Intelligence with Sam Tideman


    John Vervaeke and guest Sam Tideman delve into the intricate world of artificial general intelligence (AGI) and its intersection with healthcare. Sam, an expert in biostatistics, machine learning, and AI, shares valuable insights from his professional experiences, particularly in healthcare system optimization. The conversation navigates the ethical and moral challenges of applying AI in complex environments like emergency departments, the intricacies of predictive modeling, and the broader societal implications of AI, including its energy consumption and public perception. This episode is essential listening for anyone interested in understanding the nuanced interplay between technology, healthcare, and ethics, offering a comprehensive perspective on the current and future potential of AI to transform lives and systems.

    Sam Tideman, an accomplished healthcare data scientist with an MS in Biostatistics, blends his analytical acumen with a passion for theology in his podcast, "Transfigured." The podcast features long-form discussions exploring the identity of Jesus, reflecting Sam's unique intersection of scientific expertise and spiritual inquiry.

Glossary of Terms

    AGI (Artificial General Intelligence): An AI that has the ability to understand, learn, and apply its intelligence to a wide range of problems, much like human intelligence.

Biostatistics: The application of statistics to a wide range of topics in biology.

Resources and References:

    Dr. John Vervaeke: Website | YouTube | Patreon | X | Facebook

Sam Tideman: YouTube

    The Vervaeke Foundation



    John Vervaeke YouTube

    Awakening from the Meaning Crisis - series

    Artificial Intelligence - series

The Crossroads of Predictive Processing and Relevance Realization | Leiden Symposium

    Books, Articles, Publications, and Videos

    Mentoring the Machines: Orientation - Part One: Surviving the Deep Impact of the Artificially Intelligent Tomorrow - John Vervaeke, Shawn Coyne 

    Mentoring the Machines: Origins - Part 2: Surviving the Deep Impact of the Artificially Intelligent Tomorrow - John Vervaeke, Shawn Coyne 

Andersen, B., Miller, M., & Vervaeke, J. (2022). Predictive processing and relevance realization: Exploring convergent solutions to the frame problem. Phenomenology and the Cognitive Sciences.

    Related Resources

Chicagoland Bridges of Meaning Meetup

Chapters with Timestamps

[00:00:00] Introduction of Sam Tideman and Episode Overview

    [00:01:15] Sam’s Background and Intersection with AI 

    [00:04:11] The Role of AI in Healthcare and Emergency Departments 

    [00:14:26] The Limitations of AI in Morally Complex Environments 

    [00:24:34] Discussion on AI's Capability to Predict vs. Normative Decision-Making 

[00:53:06] The Energy Consumption and Environmental Impact of Training AI Models

Timestamped Highlights

    [00:00:00] John opens the discussion by welcoming Sam and introducing the topic of artificial general intelligence (AGI).

    [00:01:15] Sam shares his diverse background, which spans theology, philosophy, and artificial intelligence.

    [00:06:15] The conversation focuses on AI's potential and dangers, setting the stage for the day's discussion.

    [00:09:28] Sam reflects on the complexities he faced while trying to implement AI in emergency department forecasting.

    [00:14:53] Sam points out the practical limitations of AI in real-world applications.

    [00:21:38] Sam criticizes the inflated expectations surrounding AI in healthcare projects.

    [00:26:26] John and Sam discuss how predictive processing and relevance realization can be integrated into AI.

    [00:29:37] They delve into the potential of AI to emulate human qualities like intentionality and care.

    [00:34:11] John emphasizes the need to recognize the limitations of AI in solving complex real-world problems.

    [00:38:30] Sam's parable features an AI model in healthcare that prescribes drugs probabilistically and learns from outcomes, hinting at AI's emerging agency.

    [00:42:10] The feasibility of AI replicating human intuition and judgment in complex scenarios is questioned.

    [00:46:15] John highlights the importance of a multidisciplinary approach to understanding and developing AI.

    [00:49:57] Philosophical aspects of AI, such as intentionality and consciousness, are explored in-depth.

    [00:53:30] Sustainability concerns in AI development, especially compared to the human brain's efficiency, are discussed.

[01:06:40] The episode concludes with a discussion on AI's inability to align with human normativity and the limitations of its social, cultural, and biological understanding.

    Rachel Chalmers, Partner at Alchemist Accelerator, discusses what's required to eliminate bias in startup funding


Rachel Chalmers aspired to be an English professor before being inspired by a technology journal to relocate from Ireland to San Francisco. She has since covered over 1,000 startups as an industry analyst and invested in many more as an investor. Rachel is passionate about representing the under-represented and encouraging all those who have been told they can't live their dreams. This is a must-listen for entrepreneurs who have ever felt disadvantaged by circumstances out of their control. It's also the episode in which I volunteer to chair the Chalmers 2024 campaign.

    Listen and learn...

    1. The burn rate rule of thumb for startups
    2. Why "venture capital wants to perpetuate itself"
    3. The origins of the Silicon Valley myth that "all talent is concentrated in white males"
    4. How to eliminate institutional biases that constrain women and under-represented minorities when raising money and building companies
    5. How Rachel is defining the future workplace where humans and machines collaborate

    References in this episode:

    167 - Eliezer is Wrong. We’re NOT Going to Die with Robin Hanson


    In this highly anticipated sequel to our 1st AI conversation with Eliezer Yudkowsky, we bring you a thought-provoking discussion with Robin Hanson, a professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. 

Eliezer painted a chilling and grim picture of a future where AI ultimately kills us all. Robin is here to provide a different perspective.

    ------
    ✨ DEBRIEF | Unpacking the episode: 
    https://www.bankless.com/debrief-robin-hanson  
     
    ------
    ✨ COLLECTIBLES | Collect this episode: 
    https://collectibles.bankless.com/mint 

    ------
    ✨ NEW BANKLESS PRODUCT | Token Hub
    https://bankless.cc/TokenHubRSS  

    ------
    In this episode, we explore:

    - Why Robin believes Eliezer is wrong and that we're not all going to die from an AI takeover. But will we potentially become their pets instead?
    - The possibility of a civil war between multiple AIs and why it's more likely than being dominated by a single superintelligent AI.
    - Robin's concerns about the regulation of AI and why he believes it's a greater threat than AI itself.
    - A fascinating analogy: why Robin thinks alien civilizations might spread like cancer.
    - Finally, we dive into the world of crypto and explore Robin's views on this rapidly evolving technology.

    Whether you're an AI enthusiast, a crypto advocate, or just someone intrigued by the big-picture questions about humanity and its prospects, this episode is one you won't want to miss.

    ------
    BANKLESS SPONSOR TOOLS: 

    ⚖️ ARBITRUM | SCALING ETHEREUM
    https://bankless.cc/Arbitrum 

    🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
    https://bankless.cc/kraken 

    🦄UNISWAP | ON-CHAIN MARKETPLACE
    https://bankless.cc/uniswap 

    👻 PHANTOM | FRIENDLY MULTICHAIN WALLET
    https://bankless.cc/phantom-waitlist 

    🦊METAMASK LEARN | HELPFUL WEB3 RESOURCE
    https://bankless.cc/MetaMask 

    ------
    Topics Covered

    0:00 Intro
    8:42 How Robin is Weird
    10:00 Are We All Going to Die?
    13:50 Eliezer’s Assumption 
    25:00 Intelligence, Humans, & Evolution 
    27:31 Eliezer Counter Point 
    32:00 Acceleration of Change 
    33:18 Comparing & Contrasting Eliezer’s Argument
    35:45 A New Life Form
    44:24 AI Improving Itself
    47:04 Self Interested Acting Agent 
    49:56 Human Displacement? 
    55:56 Many AIs 
    1:00:18 Humans vs. Robots 
    1:04:14 Pause or Continue AI Innovation?
    1:10:52 Quiet Civilization 
    1:14:28 Grabby Aliens 
    1:19:55 Are Humans Grabby?
    1:27:29 Grabby Aliens Explained 
    1:36:16 Cancer 
    1:40:00 Robin’s Thoughts on Crypto 
    1:42:20 Closing & Disclaimers 

    ------
    Resources:

    Robin Hanson
    https://twitter.com/robinhanson 

    Eliezer Yudkowsky on Bankless
    https://www.bankless.com/159-were-all-gonna-die-with-eliezer-yudkowsky 

    What is the AI FOOM debate?
    https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate 

    Age of Em book - Robin Hanson
    https://ageofem.com/ 

    Grabby Aliens
    https://grabbyaliens.com/ 

    Kurzgesagt video
    https://www.youtube.com/watch?v=GDSf2h9_39I&t=1s 

    -----
    Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.

    Disclosure. From time to time I may add links in this newsletter to products I use. I may receive a commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
    https://www.bankless.com/disclosures