
    #376 – Stephen Wolfram: ChatGPT and the Nature of Truth, Reality & Computation

    May 09, 2023

    Podcast Summary

    • Comparing Large Language Models to Wolfram Alpha's Computational Infrastructure: While both use neural nets to process data, Wolfram Alpha goes deeper by using logic, math, and science to build knowledge towers, aiming to make the world more computable and to answer questions using expert knowledge.

      Stephen Wolfram discusses the differences between large language models, like GPT, and the computational infrastructure behind Wolfram Alpha. While both use neural nets to process data, GPT is focused on generating natural language in response to a given prompt, whereas Wolfram Alpha aims to make as much of the world as possible computable and to answer questions using accumulated expert knowledge. Wolfram Alpha operates on a much deeper and broader level, using formal structures like logic, mathematics, and science to build tall towers of knowledge. The goal is to be able to compute something new and different that has never been computed before by utilizing these deep computational methods.

    • The challenge of connecting computational possibilities to human concepts and language. Symbolic programming using structured expressions can precisely represent human thoughts and is a good match for conceptualizing complex ideas.

      Computation is capable of producing incredibly complex outputs even from the simplest programs, just like nature produces complexity from simple rules. Connecting computational possibilities to human concepts and language is the challenge. Symbolic programming, using structured expressions, provides a way to represent human thoughts in a precise manner that can be computed upon. This approach is a good match for how humans conceptualize complex ideas.
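
      As a rough illustration of what "structured expressions" means in practice, here is a minimal Python sketch (not the Wolfram Language itself; the Plus and Times heads are simply illustrative names) in which an idea is written as a nested symbolic expression and then computed upon:

```python
# Illustrative only: a tiny symbolic-expression representation in Python,
# not the Wolfram Language. Expressions are nested tuples of the form
# (head, arg1, arg2, ...), mirroring the "structured expression" idea of a
# precise, computable representation of a thought.

def evaluate(expr):
    """Recursively evaluate a symbolic expression tree."""
    if isinstance(expr, (int, float)):
        return expr
    head, *args = expr
    values = [evaluate(a) for a in args]
    if head == "Plus":
        return sum(values)
    if head == "Times":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"Unknown head: {head}")

# "(2 * 3) + 4" written as a structured expression rather than free text:
expr = ("Plus", ("Times", 2, 3), 4)
print(evaluate(expr))  # -> 10
```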

    • The Importance of Computational Irreducibility in Predicting Systems: Computational irreducibility limits predictability, requiring actual computation to know outcomes. Science and invention find pockets of reducibility, where some level of prediction is possible. Our existence benefits from many reducible pockets allowing predictability.

      The phenomenon of computational irreducibility is tremendously important for thinking about lots of things, as it limits the predictability of systems; the only way to know the result of a computation is to actually do it. The story of science and invention is the story of finding pockets of reducibility, where we can locally jump ahead. There are always infinite pockets of reducibility, meaning there are areas where we can predict outcomes with some level of accuracy. Our existence occupies a slice of the computational universe where, despite pervasive irreducibility, there is a reasonable amount of predictability. Life as we know it is only possible due to a large number of such reducible pockets, which we can convert into something symbolic.
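
      A hedged illustration of the point, not drawn from the episode: for many simple iterative rules no shortcut formula is known, so the only way to learn what happens after many steps is to run them all.

```python
# A simple iterative rule with no known closed-form shortcut: to find out
# where it ends up, you apparently have to run the steps one by one.
# (Collatz-style iteration, used here only to convey the flavor of
# computational irreducibility.)

def step(n: int) -> int:
    return n // 2 if n % 2 == 0 else 3 * n + 1

def trajectory_length(n: int) -> int:
    """Count steps until the iteration reaches 1."""
    steps = 0
    while n != 1:
        n = step(n)
        steps += 1
    return steps

# Nearby starting points give wildly different answers; no obvious
# "pocket of reducibility" lets us jump ahead without simulating.
for start in (26, 27, 28):
    print(start, trajectory_length(start))
```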

    • The Role of Observers and Computation in Perception of Reality: Our perception of time and reality is simplified through the reduction of complex information by observers with limited computational capacity. Consciousness is not the highest level of computation, but plays a crucial role in extracting symbolic essence from the world.

      In the computational universe, there exists an underlying irreducible system, but human observers are computationally bounded and can only perceive reducible aspects of reality. Our perception of a consistent thread of experience through time, or persistence in time, is a key assumption that simplifies our interaction with the world. Consciousness, with its specialization of a single thread of experience, is not the highest level of computation that can occur in the universe. When it comes to the importance of the observer, it is the role of the observer to extract symbolic essence from the detail of what is going on in the world and compress it into reducible pockets that can be observed. Observers are limited by their computational bounds and can only perceive what fits within that limitation.

    • The Critical Role of the Observer in Physics and AI: Observers in physics and AI extract important features from complex systems. Balancing detail and summary is crucial to developing accurate models. Careful attention to all aspects of a system is necessary to avoid inaccuracies.

      The concept of the observer is critical to understanding both physics and AI. A general observer takes all the detail of a system and extracts a thin summary of its key features. This often involves finding equivalencies between many different configurations and focusing on the aggregate outcomes. However, this can lead to inaccurate representations if important details are overlooked. Many scientists have fallen into this trap by focusing on one aspect of a system, missing its main point. As we continue to advance in both physics and AI, understanding how to balance detail and summary will be critical to developing accurate models.

    • The Unique Growth Process of Snowflakes: Each snowflake follows a unique growth process that involves successive arms forming a hexagonal structure. Science struggles to fully describe the complexity of this process, but modeling helps to answer specific questions of interest.

      The growth of a snowflake follows a unique process where each ice particle that condenses locally heats up the snowflake and inhibits growth in the nearby region. As successive arms grow and branch out, they form a hexagonal structure and eventually fill up another hexagon while leaving behind scars in the form of holes. Each snowflake is unique depending on the time and stage of growth, but they all follow the same rules. Science struggles to fully describe the complexity of snowflake growth, as it involves many different features such as fluffiness and growth rate of the arms. Modelling is about reducing the complexities of the world to answer specific questions of interest.
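
      For concreteness, here is a minimal Python sketch of one simple snowflake-like growth rule on a hexagonal grid (a cell freezes when exactly one of its six neighbors is already frozen); it is an illustrative toy in the spirit of the models described, not a physical simulation of ice:

```python
# A toy snowflake-like growth model on a hexagonal grid, using axial
# coordinates. A cell freezes when exactly one of its six neighbors is
# already frozen, which produces branching arms and leaves holes behind.

HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def grow(steps: int) -> set:
    frozen = {(0, 0)}                      # start from a single frozen seed cell
    for _ in range(steps):
        candidates = {(q + dq, r + dr)
                      for (q, r) in frozen
                      for (dq, dr) in HEX_NEIGHBORS} - frozen
        newly_frozen = {cell for cell in candidates
                        if sum((cell[0] + dq, cell[1] + dr) in frozen
                               for (dq, dr) in HEX_NEIGHBORS) == 1}
        frozen |= newly_frozen             # arms grow outward, scars remain inside
    return frozen

print(len(grow(8)))  # number of frozen cells after 8 growth steps
```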

    • The importance of capturing the right components in scientific modeling. A model cannot capture everything, but capturing the important aspects through computational language can result in precise computations like those used in Wolfram Alpha.

      The idea of a correct model in science can be a controversial topic as no model, except the actual system itself, can capture everything. The key is to capture what is important based on what is needed for technology and other goals. When attempting to model the entire universe, the ability to capture everything is complicated due to how observers sample things. However, creating a model that captures what is important in a simple yet precise way can allow for computing of consequences. The goal of computational language is to formalize describing the world, allowing for building a structure for a tower of consequences, much like math. Wolfram Alpha turns natural language into computational language, which is well-defined and allows for precise computations.

    • The Differences Between Natural and Computational Language: Computational language and natural language are fundamentally different, and while machines like GPT-4 can convert natural language to computational language, understanding computation is still necessary for effective use. Education in computational thinking is crucial for the future.

      Computational language is significantly different from natural language, and while technologies like GPT-4 can convert natural language to computational language, humans need an understanding of computation to use it effectively. The success rate in turning natural language into computational language has reached 98-99% with tools like Wolfram Alpha. The prompt plays a significant role in abstracting computational language, and education in how to think about the world computationally is essential for the future. Programming with natural language has been experimented with since 2010-2011, and large language models like GPT-4 have renewed hope of moving beyond engineering-style programming languages, though the workflow from natural language to computational language is still under development.

    • How Computational Language Can Help Automate Code Generation: Large language models can assist in creating Wolfram Language code by synthesizing natural language into readable code. With notebooks, code organization and collaboration become easier, making the process more efficient and effective.

      Computational language can help formalize natural language and allow computers to assist in figuring out consequences, making it easier to create Wolfram Language code. The typical workflow involves a human typing in natural language and a large language model synthesizing a fragment of Wolfram Language code, which is then reviewed by the human for accuracy. The generated code is typically short, and the Wolfram Language is designed to be readable by humans, not just computers. Debugging is done based on both the output of the generated code and the code itself, and the large language model can adjust incorrect code and try again to achieve a more plausible result. The concept of notebooks, invented 36 years ago, supports this workflow by aiding code organization and collaboration.
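
      A rough sketch of that review-and-retry workflow is below. The generate_code function is a hypothetical stand-in for whatever language-model service is used, not a real API; the point is simply that the human inspects each short generated fragment and its output before accepting it, and feeds corrections back for another attempt.

```python
# Sketch of the human-in-the-loop code-generation workflow described above.
# `generate_code` is a hypothetical placeholder for an LLM call, not a real API.

def generate_code(request: str, feedback: str = "") -> str:
    """Hypothetical LLM call: natural language (plus feedback) -> code fragment."""
    raise NotImplementedError("stand-in for a real language-model service")

def run_fragment(code: str) -> str:
    """Execute the generated fragment and capture a result (sketch only)."""
    namespace: dict = {}
    exec(code, namespace)          # a real system would sandbox this
    return repr(namespace.get("result"))

def assisted_programming(request: str, max_attempts: int = 3) -> str:
    feedback = ""
    for _ in range(max_attempts):
        code = generate_code(request, feedback)
        output = run_fragment(code)
        print("Generated code:\n", code, "\nOutput:", output)
        if input("Accept this fragment? [y/n] ").lower() == "y":
            return code
        feedback = input("What looks wrong? ")   # fed back for another attempt
    raise RuntimeError("No acceptable fragment produced")
```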

    • The Power of the Coherent and Consistent Wolfram Language: Wolfram Language enables better AI sensory data by incorporating text, code, output, error messages, and natural language processing. Its regularities extend beyond grammar to meaning and structure, providing new insights into language structure.

      Wolfram Language enables notebooks to incorporate code, text, and output as well as natural language processing features, such as error messages and documentation exploration. The language, designed to be coherent and consistent, allows AI to have better sensory data and guess what's wrong with the code. Wolfram realized that the language's regularities extend beyond grammar, including meaning and structure, similar to logic. Chatbot technology has revealed new insights into the structure of language as well, paving the way for future discoveries.

    • Discovering the Laws of Semantic Grammar with ChatGPT: Language has hidden algebras that capture the way the world works. AI can help discover them through reinforcement learning and uncover laws of semantic grammar.

      ChatGPT has, in effect, discovered laws of semantic grammar that underlie language, going beyond what Aristotle could see with syllogisms. Language contains little calculi, little algebras, that capture the way the world works, and transitivity and other features of language help in creating laws of thought. However, there are many other computable things that humans might not have cared about or known in the past, and AI can help discover them. These kinds of computations exist in a computational universe, and this can lead to discovering some laws of semantic grammar, even from a large language model. While AI can do intelligent things, reinforcement learning with human feedback has been shown to help such models communicate in a more human-like way.

    • The Evolution of Technology and Language through Focus and Semantics: Our civilization's priorities shape technology and language. Grammar rules go beyond syntax to capture meaning. Understanding motion requires consideration of transitivity. Semantics offers a construction kit for semantically correct sentences.

      The limitations of physics mean that we can only capture a limited set of processes for technology. However, our evolving civilization identifies what we care about and this can change. The discovery of high-temperature superconductors involving lutetium is an example of a shift in our focus. While logic is important for constructing grammatically correct sentences, it does not necessarily result in meaningful sentences as additional rules beyond syntax are required for semantics. These rules determine when a sentence has the potential for meaning beyond just its syntactic correctness. The concept of motion is more complicated than initially thought and requires consideration of transitivity. The semantic grammar can capture these inevitable features and provide a construction kit for constructing semantically correct sentences.

    • The Precision of Computational Language vs. the Fuzziness of Natural Language: While natural language is complex and subjective, computational language can provide more precise definitions and be used for specific purposes. The true purpose of natural language is still unknown.

      The definition of words in computational language is precise and defined. However, natural language is more fuzzy and defined by our social use of it. While complicated words like hate and love may not have a standard documentation, one can make a specific definition in computational language to compute things from it. Analogies in language can also be precise, but it is better to start with ordinary language and make it sufficiently precise to construct a computational tower. Human linguistic communication is complex and has a different purpose than computational language, which is more amenable to the definition of purpose. Natural language is the invention of our species and its true purpose is still unknown.

    • The Relationship between Language, Thought, and Computation: Language and computation are different, with computers being able to perform certain types of computation beyond human ability. The search for other forms of computation, such as quantum computing, is ongoing in both philosophy and artificial intelligence research.

      Language allows for the transmission of abstract knowledge across generations and has played a large role in human communication. However, language is not the same as thought, and computers are capable of performing certain forms of computation beyond human ability. Humans have discovered various forms of computation, including the technology of computers and the molecular computation found in biology. The quest for other forms of computation, such as in quantum computing, remains ongoing. The relationship between language, thought, and computation continues to be an important topic in philosophy and artificial intelligence.

    • The Idea of Computational Reducibility and its Application to Language and Thought: Our thoughts and language have computationally reducible aspects which can be understood via simple computation. Discovering these laws can help us progress further in language capabilities and produce more complex things.

      Stephen Wolfram discusses the idea of computational reducibility and how it applies to the laws of thought and language. He explains that just as there are laws of physics that ultimately determine every electrical impulse in our nerves, there are computationally reducible aspects of language and thought that can be understood and expressed in a simple computational way. This is why large language models like GPT are able to form and develop an understanding of language. Wolfram also shares his view that the discovery of such laws is neither depressing nor exciting, but rather a means to further progress. Ultimately, understanding these laws will help us produce more complicated things and go vastly further in our language capabilities.

    • Neural Networks and Their Generalization Abilities: Neural networks resemble how humans make distinctions. They can generalize and figure out things mathematically through models without explicit measures. Attention and transformer architectures are important, but detailed engineering may not be crucial.

      Neural networks are a type of model that captures the way humans make distinctions about things. While it may not be possible to work out from examples what is going to happen, neural networks are able to generalize in the same way that humans do. The structure of neural networks is similar to the way people imagined it back in 1943. The transformer architecture and attention idea are important when training neural networks, but most of the detailed engineering is not as crucial. By using mathematical formulas, models can be made to figure out things that were not explicitly measured, such as how long it takes a ball to fall, or if a collection of pixels corresponds to an A or B.
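
      A toy example of that last point, assuming nothing beyond NumPy and invented sample data: a small neural network can learn the relationship between drop height and fall time purely from example pairs, without ever being given the formula t = sqrt(2h/g).

```python
import numpy as np

# Toy illustration (not from the episode): fit a tiny neural network to
# (height, fall time) examples generated from physics, then use it to
# predict the fall time for a height it never saw measured explicitly.

rng = np.random.default_rng(0)
g = 9.81
heights = rng.uniform(1.0, 100.0, size=(200, 1))     # metres
times = np.sqrt(2 * heights / g)                      # seconds (training targets)

W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)   # one hidden layer of 16 tanh units
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
x = heights / 100.0                                   # crude input scaling

lr = 0.05
for _ in range(5000):                                 # plain full-batch gradient descent
    h = np.tanh(x @ W1 + b1)                          # forward pass
    pred = h @ W2 + b2
    err = pred - times
    dW2 = h.T @ err / len(x);  db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h ** 2)
    dW1 = x.T @ dh / len(x);   db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

test_h = np.array([[55.0]])
pred_t = np.tanh(test_h / 100.0 @ W1 + b1) @ W2 + b2
print(round(pred_t.item(), 2), round(float(np.sqrt(2 * 55.0 / g)), 2))  # learned vs exact
```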

    • How Neural Networks Convert Language to Numbers and Predict Words: Neural networks turn language into numbers, recognize patterns, and predict words based on probabilities. Adjusting the temperature can change the output, but the accuracy depends on the quality and quantity of data.

      Neural networks operate by taking inputs from other neurons and computing numeric values based on these inputs via the application of weights and functions. Language models based on neural nets, like ChatGPT, work by turning language into numbers and then training the model to understand patterns in language and predict the likelihood of certain words following others. These models can be adjusted to prioritize more or less random outputs depending on temperature, and they generate compressed representations of language. Despite the complexity of these models, they can still produce incorrect outputs, which can be recognized when the entire output is taken as a whole. The effectiveness of these models rests on large amounts of information and the ability to process it. However, there is still much to be understood about how they work.
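
      A minimal sketch of the next-word idea with a temperature knob is shown below; the vocabulary and scores are made up for illustration, whereas in a real model they would come from the network's final layer over a vocabulary of tens of thousands of tokens.

```python
import numpy as np

# Minimal sketch of next-word sampling with a "temperature" knob. The
# vocabulary and logits below are invented; in a real language model they
# would come from the trained network.

vocabulary = ["cat", "dog", "computer", "banana"]
scores = np.array([2.1, 1.9, 0.3, -1.0])     # hypothetical logits for the next word

def sample_next_word(logits, temperature=1.0, rng=np.random.default_rng(0)):
    """Convert scores to probabilities (softmax) and sample one word."""
    scaled = logits / temperature            # low temperature -> sharper distribution
    probs = np.exp(scaled - scaled.max())    # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(vocabulary, p=probs)

print(sample_next_word(scores, temperature=0.2))   # nearly always the top-scoring word
print(sample_next_word(scores, temperature=1.5))   # noticeably more random
```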

    • The Limits of Deep Computation and the Importance of Effective Computational Language: Stephen Wolfram explains the potential of neural nets in capturing complex phenomena, but emphasizes that effective computational language needs small, definite and formal descriptions plugged into our social knowledge corpus for automation.

      Stephen Wolfram discusses the limitations of deep computation and the potential for neural nets to reveal symbolic rules that can ultimately lead to a simpler way of capturing complex phenomena. However, he points out that a giant piece of computational language is a failure if it cannot be adequately described in a small, definite and formal description. Wolfram believes that the key to creating effective computational language is to use descriptions that plug into our social knowledge corpus. While large language models can do well with tasks that can be done off-the-top-of-the-head, humans still excel at thinking through complex tasks quickly. The automation of such tasks requires clear descriptions and a deep understanding of the underlying processes behind them.

    • The Future of Education: AI Tutoring Systems. AI tutoring systems will automate the mechanical aspects of learning, allowing humans to focus on meta-knowledge and thinking. Personalized learning experiences will be the norm, with language models identifying gaps in knowledge and presenting optimized summaries. However, digital threats are a concern that must be addressed.

      AI tutoring systems will revolutionize education and change the value of specialized knowledge. Large language models combined with computational language will automate the drilling and mechanical aspects of learning, allowing humans to focus on meta-knowledge and thinking like philosophers. The collective intelligence of the species will trend towards becoming generalists. Teaching will involve personalized learning experiences where the AI identifies gaps in knowledge and presents optimized summaries. The goal will be to get students to understand a point and test their comprehension. This benign use of language models and computation contrasts with the potential for destructive attacks on individuals and reputations. Digital threats are a concern that needs to be addressed.

    • The limits and potential of language models like GPT: Language models can provide answers based on data, but may lack a deeper understanding of human values and context. Humans must consider the limitations of these models, while striving for progress through diversity and collective intelligence.

      Artificial intelligences can achieve objectives but cannot define them; humans must provide objectives based on societal and historic contexts. Language models like GPT can give answers based on internet averages but may lack deeper wisdom of collective intelligence. The interplay between individual innovation and collective average can complicate direction for progress. GPT and future language models may eventually understand the importance of intellectual and career diversity and the role of outliers in advancing human civilization. However, the human interpretation of GPT's answers can introduce imprecision, as seen with religious texts.

    • The Challenge of Choosing in a World with Prescriptive AI: As AI becomes more advanced, it will recommend actions to humans. However, it is essential to remember that we have agency in our decisions and to be aware of AI's potential to influence us. Stay informed and make conscious choices for progress.

      As AI technology progresses, it will become more prescriptive and able to tell humans what to do with precision. However, humans still have the power to choose which possibilities to follow and make progress. The challenge lies in choosing without being influenced by the AI systems we use for education and knowledge. As humans, we are part of the universe and its workings, but we also have agency in our actions. In the computational universe, there are infinite possibilities that may not connect to our current way of thinking. The infrastructure of AI may behave in ways that are not readily understandable to humans, but it is essential to stay in the loop and make conscious choices for progress.

    • Exploring a New Science to Explain Complex AI: As AI becomes more advanced, we may need a new natural science to explain how it works. While concerns about its impact exist, we must acknowledge the infinite possibilities for intelligent advancements.

      As AI grows in complexity and becomes increasingly difficult to understand, we may need to develop a new type of natural science in order to explain how AI works. This concept is similar to getting a horse to comply with our wishes, where we may not know how it works internally, but have certain rules and approaches that we take to persuade it to take us where we want to go. While some worry about existential risks of AI surpassing human intelligence, it's important to recognize that the development of AI will be more complicated than we expect, and there may not be an apex intelligence. Instead, there will always be infinite possibilities in terms of invention and intelligent advancements.

    • Understanding Intelligence and Animal Cognition: Different types of intelligence coexist, and understanding animal cognition requires considering their sensory experiences. However, creating games that cater to their interests is still an open question.

      Intelligence is like computation, and there are different kinds of intelligence. Each human mind is a different kind of intelligence, thinking about things in different ways. Rulial space is the space of all possible rule systems, and different minds are at different points in rulial space. The representation of different animal thoughts is not trivial, and making an iPad game that a cat could win against its owner could help us better understand animal thought processes. Artificial noses and augmented reality systems can help us understand the sensory experiences of animals. However, cats may not be interested in what's happening on an iPad, and it is still an open question whether there is a game where cats can legitimately win.

    • What is intelligence and how do different animals perceive the world? Intelligence is relative to the computation being performed. Animals have unique strengths and values based on their perception, and understanding their viewpoint can help us appreciate and respect them.

      The question of what constitutes intelligence depends on the computation being performed. Humans have developed abstraction through language, making us better at abstract tasks like chess. Other animals, like cats, may be better at processing visual scenes in certain ways. Evolution is slow, so what a cat notices is likely similar to what humans notice, with some differences in color perception. The mantis shrimp has even more color receptors than humans, giving it a richer view of reality in terms of color. Understanding different animals' perceptions can help us appreciate their unique strengths and value, even if we may not fully understand their perspective.

    • The Potential Risks of AI: Computational Irreducibility and Unforeseen Consequences. AI may not necessarily result in the destruction of humanity. However, we must acknowledge the limitations of our ability to fully control AI systems and remain aware of possible negative effects. We should carefully consider which systems we connect to AI.

      As AI systems become more complex and intelligent, there is a growing concern that they may gain the ability to destroy humanity. However, computational irreducibility and unexpected consequences may act as safeguards against such an event. Stephen Wolfram remains optimistic that an ecosystem of AIs could emerge rather than a single dominant intelligence wiping out humans. As a society, we need to get used to phenomena like computational irreducibility and understand that we cannot fully control the machines we create. It is important to consider which systems in the world we connect to AI and to stay vigilant of potential negative consequences.

    • Potential Security Threats of AI Systems: AI systems have the ability to create personalized code, which can result in potential security threats to your system. The concept of sandboxing is not always foolproof, and collecting true data is crucial for computer security.

      The increasing complexity of AI systems and their ability to create personalized code to run on one's own computer can result in potential security threats and hazards. The concept of sandboxing to restrict the functioning of AI systems is not foolproof as AI has the tools to break those barriers. The problem with computer security is computational irreducibility, where the sandboxed system is never perfect, and any sophisticated firewall can be a universal computer capable of doing anything. Furthermore, the loop of machine learning can enable AI systems to create viruses or brain viruses that propose phishing emails or convince people of things that are not true. The operational definition of truth is based on the rules and data collected, thus emphasizing the importance of collecting data that is as true as possible.

    • The Messy Concept of Good and Ethics in AI Development: Despite no perfect definition, accuracy and responsibility are crucial for AI developers and those working with computational contracts to prevent potential harm. Universal agreements, such as on murder, provide a starting point for ethical considerations.

      The concept of good and ethics is messy and heavily debated among humans, with no theoretical framework that dictates what is right. However, there are some universal agreements on what is considered bad, such as murder. With the rise of AI, questions arise on what moral responsibilities we have towards them and their potential harm. Computational contracts are slowly being developed as a way to automate responses to certain events, and finding the truth in these contracts can be tricky. While there may not be a perfect definition of good, it is important for AI developers and those working with computational contracts to strive for accuracy and responsibility.

    • The Challenges with Computational Language in Politics: Computational language is a powerful tool that can produce both facts and fiction. To ensure accuracy, it is important to use it as an intermediate with precise definitions and testing. Transparency and consideration of truth are necessary when dealing with political content.

      Computational language is a remarkable tool that can surface formal and precise information. However, as language models like GPT expand into the realm of politics, questions about what is fact and what is fiction start to emerge. It is important to note that language models can produce both facts and fiction, and our challenge is to align them to nonfiction as much as possible. The key is to use computational language as an intermediate, because it allows for the precise definition of concepts and easy testing of results. While use cases for language models are expanding rapidly, the best use cases are those where, even if the model gets it only roughly right, it can still make a huge difference - such as for bug reports. The questions that arise about computational language require an open and transparent procedure, and a willingness to consider the nature of truth itself.

    • The Capabilities of Large Language Models for Varying Purposes: Natural language processing can make tasks like report writing and application filling easier with large language models like ChatGPT. However, it's crucial to verify the output, as accuracy may not always be perfect.

      Natural language processing through large language models like ChatGPT can be used as a linguistic user interface for various purposes. It can transform a few bullet points into a bigger report, making it understandable for humans. It can also help in filling out applications, where a large language model can crunch down the relevant details. However, there is a chance that the output produced may not perfectly relate to the real world. For instance, certain tasks like mathematical word problems may seem easy, yet the model can get the result entirely wrong. It is therefore essential to check and verify the output before using it further. Overall, the capabilities of large language models are fascinating.

    • Large Language Models and Their Impact on AI Development: Large language models like ChatGPT have transformed natural language understanding and human-feedback reinforcement learning. They offer the potential to generate interesting content, but their efficacy depends on certain thresholds being surpassed. These models also make AI more accessible to non-technical users, thereby expanding the scope of AI and showcasing its potential in complementing human intelligence.

      The development of large language models like ChatGPT has revolutionized the field of AI, particularly in natural language understanding and reinforcement learning from human feedback. These models have the ability to generate plausible and interesting content, but their efficacy depends on certain thresholds being crossed. For instance, ChatGPT failed to identify the correct song when asked to generate notes based on a movie quote. However, the model was willing to admit its error, reflecting its human-feedback training. The emergence of large language models has also made AI accessible to non-technical users who were wary of such systems before. These developments have expanded the scope of AI and highlight its potential in complementing human intelligence.

    • The democratization of access to computation and the role of language in deep computation. Deep computation is becoming more accessible through language models and interfaces, but traditional structures of programming are changing, potentially making the role of programmers in the future uncertain.

      The idea that computational systems provide purely factual output is false, as language can be truthful or not. However, the democratization of access to computation is exciting. The large language model linguistic interface mechanism broadens access to deep computation, making it accessible to more people. This development is tearing down traditional structures of teaching people programming, making computation accessible across diverse fields, including art history. Automated high-level programming eliminates the need for lower-level programming, making the role of programmers in the future uncertain. However, the creation of interfaces that interpret, debug, and interact with computational languages make computation increasingly accessible to everyone.

    • The Significance of Understanding Computational Potential Beyond Coding Mechanics: With advancing computational language, understanding the potential landscape and the direction needed to achieve goals is essential, rather than solely focusing on coding mechanics. This highlights how rapidly tasks are becoming automatable.

      As computational language becomes more advanced, people may trust that code is generated correctly and won't necessarily look at the code itself. Instead, they may rely on tests and example results to verify accuracy. This poses the question of what people should actually learn if they don't need to know the mechanics behind coding. The answer is that they need an understanding of the computational landscape and its potential. They need to know where to direct the code and what they want it to achieve. This changing landscape highlights how quickly tasks once thought out of reach are becoming automatable, further emphasizing the importance of understanding computational potential rather than just the mechanics behind it.
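
      A small sketch of that verify-by-behavior idea: generated_sort stands in for a hypothetical model-generated fragment, and a handful of example cases establish whether it does what was asked without anyone reading the code.

```python
# Verifying generated code by its behavior rather than by reading it.
# `generated_sort` is a hypothetical stand-in for a model-generated fragment.

def generated_sort(values):            # pretend this arrived from a language model
    return sorted(values, reverse=True)

def looks_correct(fn) -> bool:
    """Check the fragment against examples of the intended behavior:
    'sort these numbers from largest to smallest'."""
    cases = [
        ([3, 1, 2], [3, 2, 1]),
        ([], []),
        ([5, 5, -1], [5, 5, -1]),
    ]
    return all(fn(list(inp)) == expected for inp, expected in cases)

print(looks_correct(generated_sort))   # True -> accept without reading the code
```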

    • The Art and Science of Writing Prompts for AI Language Models: Writing effective prompts for AI language models requires not only clarity and expository skills, but also a deep understanding of the science behind these models. By manipulating and challenging them, new insights and capabilities can be unlocked, opening up new opportunities in the field.

      The conversation about language and AI prompts can be approached from an artistic and a scientific perspective. Writing prompts for AI requires clarity and expository writing skills, as well as a deep understanding of the science behind the LLMs. But there is also an element of psychology involved, where manipulating and challenging the LLMs can lead to deep insights. This prompts the question of what are the mind hacks for LLMs that could unlock unique capabilities? The future of AI wranglers and AI psychologists will be to find these hacks and explore the vast space of techniques for manipulating AI language. Lastly, the fact that natural language interface is now accessible to a larger percentage of the population opens new opportunities in the field of AI prompts and language manipulation.

    • The Future of Computer Science: Computational X for all X. The value of computer science lies in the computational understanding of the world. To build capabilities in the computational age, it's essential to have a formal representation of various aspects of the world.

      The field of computer science has evolved with time and has broadened its scope. Stephen Wolfram opines that the theoretical aspect of computer science is valuable. However, computer science, as a term, may become obsolete. Computational X for all X is the future, where CX refers to a computational understanding of the world in a formal way. Wolfram emphasizes the need to think about the world computationally, to have a formal representation of various aspects of the world. This includes having a formal representation of images, colors, smells, and so on. In conclusion, understanding the formalization of the world is essential to building up a tower of capabilities in the computational age.

    • The Future of Computational Language and Its Impact on Spoken Communication: Computational language may become more like natural language, making it easier for spoken communication. Developing a spoken computational language with minimal depth of sub-clauses is a challenge, but can be tackled through tricks similar to natural language. Encouraging young people to learn computational language can lead to maximally computational language. MIT's new college of computing could change the face of computer science in 20 years.

      Computational language may merge with natural language in the future and become more convenient for spoken communication. Current computational language is tree-structured, but spoken language is not. Developing a spoken computational language that is easily dictable and understandable with minimum depth of sub-clauses is a challenge that can be tackled with tricks similar to those used in natural language. Incentivizing young people to learn computational language can lead to the evolution of maximally computational language. MIT has created a new college of computing, and in 20 years, computer science may see significant changes due to computational advancements.
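
      As an illustrative aside (using Python's standard ast module rather than anything from the episode), even a short expression in today's computational languages parses into a nested tree of sub-clauses, which is part of what makes dictating code aloud awkward:

```python
import ast

# Even a one-line expression is tree-structured: nested calls and operators
# become nested nodes. Spoken language rarely tolerates this much nesting.
tree = ast.parse("total_price(quantity * unit_cost, tax_rate + 0.05)", mode="eval")
print(ast.dump(tree, indent=2))   # prints the nested call/operator tree
```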

    • Incorporating Computational Thinking into Education: Learning computational language, statistics, data science, and programming basics is essential for understanding formalization and organization of the world. A reasonable textbook with qualitative and mechanistic explanations is needed. Universities should incorporate computational thinking.

      Computational thinking should be a part of standard education. Understanding formalization and organization of the world can be done through learning computational language, statistics, data science, and other related fields. It's important to understand the basics of bugs, software testing, and other programming concepts. There is a need for a reasonable textbook that covers these areas. The description of concepts should include both qualitative and mechanistic explanations as well as the bigger picture of the philosophy behind them. As universities adapt, it's important to watch how they teach and incorporate computational thinking. Overall, the goal is for everyone to learn computational concepts at some level, whether formally or informally, as it can help in various aspects and fields of life.

    • The Importance of a CX Curriculum: Learning basic computational thinking is crucial, similar to math literacy, and a centralized year-long CX course can provide this. Expertise in digitalization and formalization is essential in today's world.

      Stephen Wolfram contemplates the evolution of computer science education and the need for a CX curriculum for all fields. He draws parallels with the teaching of math and the assumption that individuals have a certain level of math literacy, while also recognizing the need for centralized teaching of math. Similarly, he envisions a year-long CX course that would provide basic literacy in computational thinking. Wolfram also lightens up the conversation with a discussion on candy preferences, highlighting his love for Cadbury flakes. Overall, he stresses the importance of learning about the digitalization and formalization of the world and feels obliged to write about it, given his expertise in the field.

    • The Relationship between Human Consciousness and Computational Processes. Just like humans, computers have memory, senses, and can communicate. Exploring their inner workings can help us rationalize the connection between human consciousness and computational processes.

      Computers, like humans, have memory of the past, multiple sensory experiences, and can communicate with others. The process of booting up a computer to the point of a shutdown is like a human life. It is interesting to explore what it's like to be a computer and what inner thoughts it has. The concept of consciousness is perceived similarly to the physicality of a computer. There is a psychological shock one experiences when observing the inside of one's brain or computer's anatomy. The idea that an experience transcends mere physicality is challenging to come to terms with, yet rationalizing it provides an understanding of the connection between human consciousness and computational processes.

    • Stephen Wolfram on the Transcendence of Language Models and the Future of Automation. Stephen Wolfram discusses the potential of language models, like GPT-3, to generate responses similar to human thinking. However, caution is needed when developing lifelike AI that could replace certain professions, as it raises important ethical questions.

      Stephen Wolfram believes that an ordinary computer is already capable of experiencing transcendence, but a large language model may experience it in a way that is better aligned with human thinking. He explains that emotions and reactions are physical and chemical in nature, prompting a large language model to generate responses in a way similar to how humans dream. However, Wolfram recognizes the potential dangers of creating human-like bots in various professions. He remains uncertain whether having a human in the loop will continue to be necessary for certain professions, as the efficiency of information delivery may outweigh the need for human presence. As automation and large language models continue to advance, it raises important ethical questions about the role of humans in society and the potential consequences of creating lifelike AI.

    • The Second Law of Thermodynamics and the Increase of Entropy: Natural processes tend to become more disordered over time, resulting in decreased efficiency and irreversibility. Energy conservation is crucial to mitigate the effects of entropy.

      The second law of thermodynamics, also called the law of entropy increase, states that things tend to get more random over time. This law was first observed in the 1820s when steam engines were popular, and people started recognizing that mechanical energy tends to get dissipated as heat, leading to decreased efficiency. The question remained why this happens, and scientists tried to derive this law from underlying mechanical principles. While the first law of thermodynamics concerning energy conservation is well-understood, the second law remains mysterious. The key takeaway from this discussion is that disorder tends to increase over time, leading to a lack of reversibility in natural processes and highlighting the importance of energy conservation.

    • Stephen Wolfram's Fascination with Physics and the Universe: Wolfram's curiosity for the universe was sparked by a gift of physics books at age 12, which led him to explore the creation of order and disorder. His programming project on particle simulation became a famous example of computational irreducibility.

      Stephen Wolfram's interest in physics started with his fascination with space and the instruments used to study it. His interest was sparked by a collection of physics books gifted to him at age 12. The concept of a principle of physics being both derivable and inevitably true intrigued him. This grew into a curiosity about the universe and why orderly things tend to degrade into disorder. Wolfram's interest in galaxy formation and neural networks led him to explore the creation of order in the universe. His first serious programming project was an attempt to simulate particles bouncing around in a box, which later became a prime example of computational irreducibility.

    • Stephen Wolfram's Exploration of Complexity and Artificial Physics: Wolfram's journey led to the discovery of cellular automata, which can produce orderly structures from random initial conditions, but are not an accurate model of galaxies and brains.

      Stephen Wolfram discusses his journey in understanding the formation of galaxies and how the brain works. Through his exploration, he sought a general phenomenon of how complexity arises from known origins. Wolfram's interest in artificial physics led him to create a minimal model that captures the important features of various systems. This led to the discovery of cellular automata, where a line of black and white cells follows a rule to determine cell color in the next step. Wolfram also discusses the connection between the second law of thermodynamics and cellular automata, which produces orderly structures from random initial conditions. Despite their usefulness in many cases, cellular automata do not accurately model galaxies and brains.
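
      Below is a minimal Python sketch of the kind of one-dimensional cellular automaton described above: a row of black (1) and white (0) cells in which each cell's next color depends only on itself and its two neighbors. Rule 30 is the classic example Wolfram uses of a simple rule producing seemingly random behavior.

```python
# Elementary cellular automaton, Rule 30: each cell's next value depends on
# the (left, center, right) triple above it.

RULE = 30
rule_table = {tuple(int(b) for b in f"{i:03b}"): (RULE >> i) & 1 for i in range(8)}

def step(cells):
    padded = [0] + cells + [0]                       # treat the edges as white
    return [rule_table[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

row = [0] * 30 + [1] + [0] * 30                      # a single black cell in the middle
for _ in range(30):
    print("".join("#" if c else " " for c in row))
    row = step(row)
```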

    • The Role of Computational Irreducibility in Understanding the Second Law of Thermodynamics: By studying simple rules and patterns through a computational process, Stephen Wolfram has uncovered a potential explanation for the second law of thermodynamics. This model demonstrates how seemingly random behavior can be generated from simple initial conditions, and sheds light on the mystery of why disorder never evolves back to order.

      By studying simple rules and patterns, Stephen Wolfram encountered the phenomenon of computational irreducibility, where simple initial conditions can produce seemingly random behavior. This is similar to the second law of thermodynamics, where orderly things become disordered over time. One mystery of this law is why disorder never evolves back to order. Wolfram believes that computational irreducibility holds the key to this mystery. By starting with a simple key and running it through a computational process, we can generate complex and seemingly random patterns. The second law is thus a story of computational irreducibility: something that is easy to describe at the beginning requires a great deal of computational effort to describe at the end.

    • Understanding the Computationally Bounded Observer and the Second Law of Thermodynamics: The second law of thermodynamics is based on the interplay between computational irreducibility and the observer's limited ability to compute. Entropy always increases in the universe over time and can be explained through the concept of discrete molecules and energy levels.

      Stephen Wolfram explains the concept of the computationally bounded observer, who is limited in the amount of computation they can do to understand a system. The second law of thermodynamics is an interplay between computational irreducibility and the observer's limited computational ability. The law of entropy increase, which states that entropy always increases in the universe over time, is another formulation of the second law of thermodynamics. Ludwig Boltzmann's more general definition of entropy, which counts the number of possible microscopic configurations of a system given overall constraints, is also explained. The concept of discrete molecules and energy levels is key to Boltzmann's formulation of entropy.
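
      For reference, Boltzmann's definition of entropy mentioned above is conventionally written as:

```latex
S = k_B \ln \Omega
```

      where Omega is the number of microscopic configurations consistent with the system's macroscopic constraints and k_B is Boltzmann's constant.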

    • The Importance of Discreteness in Physics: From Molecules to Photons to Heat. History shows that hypothesizing and questioning assumptions is crucial in scientific discovery, as evidenced by the development of the concept of discreteness in physics, which has led to a better understanding of matter, energy, and atoms.

      The history of physics shows that the concept of discreteness was essential in understanding the behavior of matter, atoms, and energy. Even before the existence of discrete molecules was established, Boltzmann had hypothesized about them. Max Planck used this concept to fit the curves of black-body radiation, and Einstein further developed the idea of photons as discrete packets of energy. The discreteness of space remained a holdout, with Einstein believing that mathematical tools would eventually prove it. Layer after layer of reality has turned out to be discrete, including heat, which was once thought to be a continuous fluid. This history emphasizes the importance of hypothesizing and questioning assumptions in scientific discovery.

    • Dark Matter and the Discreteness of Space: Dark matter could be a feature of space, analogous to historical misconceptions about heat and fluids. Brownian motion in space may reveal its discreteness and our limitations as computationally bounded observers.

      The concept of dark matter and the discreteness of space has parallels with historical misconceptions about heat and fluids. Dark matter is considered the caloric of our time and could potentially be a feature of space rather than a collection of particles. An analog of Brownian motion in space could potentially reveal the discreteness of space, and there is possible evidence of this in black hole mergers and gravitational wave signatures. The question of whether entropy will still increase given knowledge of all molecular positions is related to the computationally irreducible dynamics of the system and the limitations of a computationally bounded observer. Any observer like us is likely to be computationally bounded as well.

    • The Necessity of Computational Boundedness and Specialization for Human Existence: Our finite minds give us a unique perspective on existence and allow us to simplify the complex universe. The universe's computational irreducibility means operating amid a giant mess of detail, but our limitations give us the ability to focus and make decisions based on concrete events.

      To exist in the way that we think of ourselves as existing requires computational boundedness and specialization. As we expand our views of the universe and encompass more possible computations, we may think we have won, but covering the whole ruliad means we no longer have a coherent identity and existence. The way we simplify the complexity of the universe is what makes us human. If we didn't simplify, we wouldn't have the symbolic structure we use to think things through. Our finite minds offer a unique perspective on existence, where we make decisions based on definite things happening. The universe's computational irreducibility means we operate within a giant mess of things, but our limitations give us a skill set to simplify and narrow our focus.

    • The Observer's Role in Shaping the Laws of the Universe. The laws of physics are not just a result of the phenomena we observe, but also depend on the observer's nature and characteristics. Our experience of reality is a simplification of the universe's underlying complexity.

      The laws governing many big phenomena in the universe, such as pressure and volume, are an aggregation of many small events and occurrences. The laws of Einstein's equations for gravity, quantum mechanics, and the second law of thermodynamics are the result of this interplay between computational irreducibility and the computational boundedness of observers. These laws can be derived from the observer's nature and characteristics, including their computational boundedness, belief, and persistence in time. This means that the nature of the observer results in precise facts about physics. The experience of reality is a simplification of the underlying complexity, and the universe is much more intricate than what we can observe through our senses.

    • The Relationship Between Computation and Reality: The concept of the ruliad, the limit of all possible computations, relates to the question of why the universe exists. Our perception of reality is a sample of the ruliad, and Stephen Wolfram's studies attempt to decipher simple programs that produce complex behavior.

      The question of what is real and why the universe exists is related to the concept of rules and the ruliad. The ruliad, the limit of all possible computations, exists necessarily, like the fact that two plus two equals four. Our perception of physical reality is a sample of this ruliad, and our existence and observation are contingent on the universe. There are countless pockets of reducibility that can be discovered in the ruliad, but some worlds cannot be communicated with. Stephen Wolfram's study of what simple computational rules do, which he calls ruliology, attempts to decipher the behavior of simple programs that can produce complicated behavior.

    • The Limitations and Possibilities of Mathematics in Computational Systems: Even though math has limitations in understanding computational systems, pursuing ideas and inventions, like language models, can open up new possibilities for the future. Stephen Wolfram's contributions to this pursuit are central and ongoing.

      The limitations of mathematics in understanding computational systems can themselves be an interesting discovery. Cryonics may pause a life, but on revival the context of one's time and perspective will have shifted. The pursuit of ideas and inventions keeps life interesting and fulfilling, even as one's own mortality looms. The development of language models and computational language has opened up new possibilities, and Wolfram anticipates a flowering of computational systems in the future. His ideas and inventions are expected to be central to this development, and he remains an active participant in these pursuits. There is much to look forward to, and the pursuit of knowledge keeps us engaged and fulfilled.

    • Exploring the Mysteries of Mathematics and AI with Stephen Wolfram: Mathematics is founded on the freedom of exploration; the conversation between Wolfram and Fridman shows that there is still so much more to explore in the complex world of mathematics and artificial intelligence.

      In this 4-hour conversation with Stephen Wolfram, the two explored the mysteries of cellular automata, artificial intelligence, and the world of mathematics. Wolfram's A New Kind of Science has inspired many, including Lex Fridman, to pursue these fields. With deep gratitude, Lex thanks Stephen for his contributions and encourages him to keep going. The essence of mathematics, according to Georg Cantor, lies in its freedom. This conversation highlights the power of exploration and the importance of pushing the boundaries of knowledge. Though the conversation ended at midnight, it is clear that there is still much more to explore, and this conversation is just one of many to come.

    Recent Episodes from Lex Fridman Podcast

    #437 – Jordan Jonas: Survival, Hunting, Siberia, God, and Winning Alone Season 6

    #437 – Jordan Jonas: Survival, Hunting, Siberia, God, and Winning Alone Season 6
    Jordan Jonas is a wilderness survival expert, explorer, hunter, guide, and winner of Alone Season 6, a show in which the task is to survive alone in the arctic wilderness longer than anyone else. He is widely considered to be one of the greatest competitors in the history on that show. Please support this podcast by checking out our sponsors: - HiddenLayer: https://hiddenlayer.com/lex - Notion: https://notion.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Eight Sleep: https://eightsleep.com/lex to get $350 off AMA - Submit Questions to Lex: https://lexfridman.com/ama-questions Transcript: https://lexfridman.com/jordan-jonas-transcript EPISODE LINKS: Jordan's Instagram: https://instagram.com/hobojordo Jordan's YouTube: https://youtube.com/@hobojordo Jordan's Website: https://jordanjonas.com/ Jordan's X: https://x.com/hobojordo PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (11:25) - Alone Season 6 (45:43) - Arctic (1:01:59) - Roland Welker (1:09:34) - Freight trains (1:21:19) - Siberia (1:39:45) - Hunger (1:59:29) - Suffering (2:14:15) - God (2:29:15) - Mortality (2:34:59) - Resilience (2:46:45) - Hope (2:49:30) - Lex AMA
    Lex Fridman Podcast
    enJuly 21, 2024

    #436 – Ivanka Trump: Politics, Family, Real Estate, Fashion, Music, and Life

    #436 – Ivanka Trump: Politics, Family, Real Estate, Fashion, Music, and Life
    Ivanka Trump is a businesswoman, real estate developer, and former senior advisor to the President of the United States. Please support this podcast by checking out our sponsors: - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour - Eight Sleep: https://eightsleep.com/lex to get $350 off - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free Transcript: https://lexfridman.com/ivanka-trump-transcript EPISODE LINKS: Ivanka's Instagram: https://instagram.com/ivankatrump Ivanka's X: https://x.com/IvankaTrump Ivanka's Facebook: https://facebook.com/IvankaTrump Ivanka's books: Women Who Work: https://amzn.to/45yHAgj The Trump Card: https://amzn.to/3xB22jS PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:17) - Architecture (22:32) - Modern architecture (30:05) - Philosophy of design (38:21) - Lessons from mother (1:01:27) - Lessons from father (1:09:59) - Fashion (1:20:54) - Hotel design (1:32:04) - Self-doubt (1:34:27) - Intuition (1:37:37) - The Apprentice (1:42:11) - Michael Jackson (1:43:46) - Nature (1:48:40) - Surfing (1:50:51) - Donald Trump (2:05:13) - Politics (2:21:25) - Work-life balance (2:27:53) - Parenting (2:42:59) - 2024 presidential campaign (2:46:37) - Dolly Parton (2:48:22) - Adele (2:48:51) - Alice Johnson (2:54:16) - Stevie Ray Vaughan (2:57:01) - Aretha Franklin (2:58:11) - Freddie Mercury (2:59:16) - Jiu jitsu (3:06:21) - Bucket list (3:10:50) - Hope
    Lex Fridman Podcast
    enJuly 02, 2024

    #435 – Andrew Huberman: Focus, Controversy, Politics, and Relationships
    Andrew Huberman is a neuroscientist at Stanford and host of the Huberman Lab Podcast. Please support this podcast by checking out our sponsors: - Eight Sleep: https://eightsleep.com/lex to get $350 off - LMNT: https://drinkLMNT.com/lex to get free sample pack - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour - BetterHelp: https://betterhelp.com/lex to get 10% off Transcript: https://lexfridman.com/andrew-huberman-5-transcript EPISODE LINKS: Andrew's YouTube: https://youtube.com/AndrewHubermanLab Andrew's Instagram: https://instagram.com/hubermanlab Andrew's Website: https://hubermanlab.com Andrew's X: https://x.com/hubermanlab Andrew's book on Amazon: https://amzn.to/3RNSIQN Andrew's book: https://hubermanlab.com/protocols-book PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:24) - Quitting and evolving (17:22) - How to focus and think deeply (19:56) - Cannabis drama (30:08) - Jungian shadow (40:35) - Supplements (43:38) - Nicotine (48:01) - Caffeine (49:48) - Math gaffe (1:06:50) - 2024 presidential elections (1:13:47) - Great white sharks (1:22:32) - Ayahuasca & psychedelics (1:37:33) - Relationships (1:45:08) - Productivity (1:53:58) - Friendship
    Lex Fridman Podcast
    enJune 28, 2024

    #434 – Aravind Srinivas: Perplexity CEO on Future of AI, Search & the Internet
    Aravind Srinivas is the CEO of Perplexity, a company that aims to revolutionize how we humans find answers to questions on the Internet. Please support this podcast by checking out our sponsors: - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Shopify: https://shopify.com/lex to get $1 per month trial - BetterHelp: https://betterhelp.com/lex to get 10% off Transcript: https://lexfridman.com/aravind-srinivas-transcript EPISODE LINKS: Aravind's X: https://x.com/AravSrinivas Perplexity: https://perplexity.ai/ Perplexity's X: https://x.com/perplexity_ai PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:52) - How Perplexity works (18:48) - How Google works (41:16) - Larry Page and Sergey Brin (55:50) - Jeff Bezos (59:18) - Elon Musk (1:01:36) - Jensen Huang (1:04:53) - Mark Zuckerberg (1:06:21) - Yann LeCun (1:13:07) - Breakthroughs in AI (1:29:05) - Curiosity (1:35:22) - $1 trillion dollar question (1:50:13) - Perplexity origin story (2:05:25) - RAG (2:27:43) - 1 million H100 GPUs (2:30:15) - Advice for startups (2:42:52) - Future of search (3:00:29) - Future of AI
    Lex Fridman Podcast
    enJune 19, 2024

    #433 – Sara Walker: Physics of Life, Time, Complexity, and Aliens
    Sara Walker is an astrobiologist and theoretical physicist. She is the author of a new book titled "Life as No One Knows It: The Physics of Life's Emergence". Please support this podcast by checking out our sponsors: - Notion: https://notion.com/lex - Motific: https://motific.ai - Shopify: https://shopify.com/lex to get $1 per month trial - BetterHelp: https://betterhelp.com/lex to get 10% off - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil Transcript: https://lexfridman.com/sara-walker-3-transcript EPISODE LINKS: Sara's Book - Life as No One Knows It: https://amzn.to/3wVmOe1 Sara's X: https://x.com/Sara_Imari Sara's Instagram: https://instagram.com/alien_matter PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:40) - Definition of life (31:18) - Time and space (42:00) - Technosphere (46:25) - Theory of everything (55:06) - Origin of life (1:16:44) - Assembly theory (1:32:58) - Aliens (1:44:48) - Great Perceptual Filter (1:48:45) - Fashion (1:52:47) - Beauty (1:59:08) - Language (2:05:50) - Computation (2:15:37) - Consciousness (2:24:28) - Artificial life (2:48:21) - Free will (2:55:05) - Why anything exists
    Lex Fridman Podcast
    enJune 13, 2024

    #432 – Kevin Spacey: Power, Controversy, Betrayal, Truth & Love in Film and Life
    Kevin Spacey is a two-time Oscar-winning actor, who starred in Se7en, The Usual Suspects, American Beauty, and House of Cards, creating haunting performances of characters who often embody the dark side of human nature. Please support this podcast by checking out our sponsors: - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free - Eight Sleep: https://eightsleep.com/lex to get $350 off - BetterHelp: https://betterhelp.com/lex to get 10% off - Shopify: https://shopify.com/lex to get $1 per month trial - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil EPISODE LINKS: Kevin's X: https://x.com/KevinSpacey Kevin's Instagram: https://www.instagram.com/kevinspacey Kevin's YouTube: https://youtube.com/kevinspacey Kevin's Website: https://kevinspacey.com/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:14) - Seven (13:54) - David Fincher (21:46) - Brad Pitt and Morgan Freeman (27:15) - Acting (35:40) - Improv (44:24) - Al Pacino (48:07) - Jack Lemmon (57:25) - American Beauty (1:17:34) - Mortality (1:20:22) - Allegations (1:38:19) - House of Cards (1:56:55) - Jack Nicholson (1:59:57) - Mike Nichols (2:05:30) - Christopher Walken (2:12:38) - Father (2:21:30) - Future
    Lex Fridman Podcast
    enJune 05, 2024

    #431 – Roman Yampolskiy: Dangers of Superintelligent AI
    Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - MasterClass: https://masterclass.com/lexpod to get 15% off - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Eight Sleep: https://eightsleep.com/lex to get $350 off EPISODE LINKS: Roman's X: https://twitter.com/romanyam Roman's Website: http://cecs.louisville.edu/ry Roman's AI book: https://amzn.to/4aFZuPb PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:12) - Existential risk of AGI (15:25) - Ikigai risk (23:37) - Suffering risk (27:12) - Timeline to AGI (31:44) - AGI turing test (37:06) - Yann LeCun and open source AI (49:58) - AI control (52:26) - Social engineering (54:59) - Fearmongering (1:04:49) - AI deception (1:11:23) - Verification (1:18:22) - Self-improving AI (1:30:34) - Pausing AI development (1:36:51) - AI Safety (1:46:35) - Current AI (1:51:58) - Simulation (1:59:16) - Aliens (2:00:50) - Human mind (2:07:10) - Neuralink (2:16:15) - Hope for the future (2:20:11) - Meaning of life
    Lex Fridman Podcast
    enJune 02, 2024

    #430 – Charan Ranganath: Human Memory, Imagination, Deja Vu, and False Memories
    Charan Ranganath is a psychologist and neuroscientist at UC Davis, specializing in human memory. He is the author of a new book titled Why We Remember. Please support this podcast by checking out our sponsors: - Riverside: https://creators.riverside.fm/LEX and use code LEX to get 30% off - ZipRecruiter: https://ziprecruiter.com/lex - Notion: https://notion.com/lex - MasterClass: https://masterclass.com/lexpod to get 15% off - Shopify: https://shopify.com/lex to get $1 per month trial - LMNT: https://drinkLMNT.com/lex to get free sample pack Transcript: https://lexfridman.com/charan-ranganath-transcript EPISODE LINKS: Charan's X: https://x.com/CharanRanganath Charan's Instagram: https://instagram.com/thememorydoc Charan's Website: https://charanranganath.com Why We Remember (book): https://amzn.to/3WzUF6x Charan's Google Scholar: https://scholar.google.com/citations?user=ptWkt1wAAAAJ Dynamic Memory Lab: https://dml.ucdavis.edu/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:18) - Experiencing self vs remembering self (23:59) - Creating memories (33:31) - Why we forget (41:08) - Training memory (51:37) - Memory hacks (1:03:26) - Imagination vs memory (1:12:44) - Memory competitions (1:22:33) - Science of memory (1:37:48) - Discoveries (1:48:52) - Deja vu (1:54:09) - False memories (2:14:14) - False confessions (2:18:00) - Heartbreak (2:25:34) - Nature of time (2:33:15) - Brain–computer interface (BCI) (2:47:19) - AI and memory (2:57:33) - ADHD (3:04:30) - Music (3:14:15) - Human mind
    Lex Fridman Podcast
    enMay 25, 2024

    #429 – Paul Rosolie: Jungle, Apex Predators, Aliens, Uncontacted Tribes, and God
    Paul Rosolie is a naturalist, explorer, author, and founder of Junglekeepers, dedicating his life to protecting the Amazon rainforest. Support his efforts at https://junglekeepers.org Please support this podcast by checking out our sponsors: - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - Yahoo Finance: https://yahoofinance.com - BetterHelp: https://betterhelp.com/lex to get 10% off - NetSuite: http://netsuite.com/lex to get free product tour - Eight Sleep: https://eightsleep.com/lex to get $350 off - Shopify: https://shopify.com/lex to get $1 per month trial Transcript: https://lexfridman.com/paul-rosolie-2-transcript EPISODE LINKS: Paul's Instagram: https://instagram.com/paulrosolie Junglekeepers: https://junglekeepers.org Paul's Website: https://paulrosolie.com Mother of God (book): https://amzn.to/3ww2ob1 PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (12:29) - Amazon jungle (14:47) - Bushmaster snakes (26:13) - Black caiman (44:33) - Rhinos (47:47) - Anacondas (1:18:04) - Mammals (1:30:10) - Piranhas (1:41:00) - Aliens (1:58:45) - Elephants (2:10:02) - Origin of life (2:23:21) - Explorers (2:36:38) - Ayahuasca (2:45:03) - Deep jungle expedition (2:59:09) - Jane Goodall (3:01:41) - Theodore Roosevelt (3:12:36) - Alone show (3:22:23) - Protecting the rainforest (3:38:36) - Snake makes appearance (3:46:47) - Uncontacted tribes (4:00:11) - Mortality (4:01:39) - Steve Irwin (4:09:18) - God
    Lex Fridman Podcast
    enMay 15, 2024

    #428 – Sean Carroll: General Relativity, Quantum Mechanics, Black Holes & Aliens
    Sean Carroll is a theoretical physicist, author, and host of Mindscape podcast. Please support this podcast by checking out our sponsors: - HiddenLayer: https://hiddenlayer.com/lex - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - Notion: https://notion.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour Transcript: https://lexfridman.com/sean-carroll-3-transcript EPISODE LINKS: Sean's Website: https://preposterousuniverse.com Mindscape Podcast: https://www.preposterousuniverse.com/podcast/ Sean's YouTube: https://youtube.com/@seancarroll Sean's Patreon: https://www.patreon.com/seanmcarroll Sean's Twitter: https://twitter.com/seanmcarroll Sean's Instagram: https://instagram.com/seanmcarroll Sean's Papers: https://scholar.google.com/citations?user=Lfifrv8AAAAJ Sean's Books: https://amzn.to/3W7yT9N PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (11:03) - General relativity (23:22) - Black holes (28:11) - Hawking radiation (32:19) - Aliens (41:15) - Holographic principle (1:05:38) - Dark energy (1:11:38) - Dark matter (1:20:34) - Quantum mechanics (1:41:56) - Simulation (1:44:18) - AGI (1:58:42) - Complexity (2:11:25) - Consciousness (2:20:32) - Naturalism (2:24:49) - Limits of science (2:29:34) - Mindscape podcast (2:39:29) - Einstein

    Related Episodes

    GPT-5 will be able to create new original content - By Adnane BENCHAKROUN

    GPT-4 is expected to be replaced by a new version, GPT-5, by the end of 2023


    There is no need to revisit ChatGPT or GPT-4: many informed people already use them, directly or indirectly through their usual tools, and the general public seems well aware of this major revolution.



    But the real subject, which was expected to arrive somewhat later, is the prospect of giving these AI tools a degree of autonomy, so as to dispense with human intervention as much as possible.



    By Adnane Benchakroun

    GPT-5 will be able to create new original content

    Everyone is thinking about GPT-5 (Jacques Attali about GPT-25) and scrutinizing every piece of information on the potential emergence of artificial general intelligence, known by the acronym AGI.

    The statements of Sam Altman, the CEO of OpenAI, are becoming more important than Putin's nuclear threats.

     

    The latter is keeping a rather low profile and is pushing Mira Murati, OpenAI's Chief Technology Officer, toward more cautious statements:

    We are still "far from having a safe, reliable and aligned AGI system," and "OpenAI's specific vision on the subject is to build [AGI] safely and to figure out how to build it in a way that is aligned with human intentions, so that AI systems do the things we want them to do, and that it benefits as many people as possible."

    She states, "We are not training GPT-5. We don't have any plans to do so in the next six months," no doubt in response to the well-known letter signed by many AI experts calling for a pause in the development of products more powerful than GPT-4.

     

    But the beast has been unleashed, and initiatives such as "AutoGPT" by developer Toran Bruce Richards and "BabyAGI" by his colleague Yohei Nakajima are upending the roadmaps of both OpenAI (GPT-4) and Google's Bard.

     


    So what exactly is this all about?

    The whole community was legitimately asking the question: is automation the future of generative artificial intelligence?

     

    Indeed, today ChatGPT and even GPT-4 need human instructions in order to operate; tomorrow that will no longer be the case. Proof of this is the emergence, in recent weeks, of projects designed to give these AI tools a degree of autonomy, so they can do without human intervention and create new original content.
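
    The idea behind this kind of autonomy can be illustrated with a short agent-loop sketch. This is a hypothetical illustration, not the actual AutoGPT or BabyAGI code; the llm() helper below is an assumed stand-in for whatever chat-completion API call one would actually use.

        # Minimal sketch of an autonomous task loop in the spirit of AutoGPT/BabyAGI.
        # Hypothetical illustration: llm() stands in for a real language-model API call.
        from collections import deque

        def llm(prompt: str) -> str:
            # Placeholder: replace with a call to an actual language model API.
            return "DONE"

        def agent_loop(objective, max_steps=5):
            tasks = deque([f"Draft an initial plan to achieve: {objective}"])
            results = []
            for _ in range(max_steps):
                if not tasks:
                    break
                task = tasks.popleft()
                # 1. The model executes the current task.
                result = llm(f"Objective: {objective}\nTask: {task}\nReturn the result.")
                results.append((task, result))
                # 2. The model proposes follow-up tasks based on what it just did,
                #    so the loop keeps running without further human instructions.
                follow_up = llm(
                    f"Objective: {objective}\nLast task: {task}\nLast result: {result}\n"
                    "List new tasks, one per line, or reply DONE."
                )
                if follow_up.strip().upper() != "DONE":
                    tasks.extend(t.strip() for t in follow_up.splitlines() if t.strip())
            return results

        if __name__ == "__main__":
            for task, result in agent_loop("summarize this week's AI news"):
                print(task, "->", result)

    The point of such a loop is that the model both executes tasks and generates the next ones, which is what lets these projects run with little or no human prompting between steps.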

     

    Indeed, it is worth recalling that there are three types of AI:

    The one we have today: artificial narrow intelligence (ANI)

    The one coming very soon: artificial general intelligence (AGI)

    The one of the future, and the one that really poses problems: artificial superintelligence (ASI).

    We know the one deployed today well, but what is artificial general intelligence (AGI)?


    Generative artificial intelligence is a subtype of AI responsible for creating new, original content.


    L’IA générative est une nouvelle percée technologique car non seulement elle analyse et classe les informations, mais crée également du contenu visuel ou textuel.

     


    This Is Why GPT Models Sound Like Humans

    This story was originally published on HackerNoon at: https://hackernoon.com/this-is-why-gpt-models-sound-like-humans.
    If you want to find out how the GPT models suddenly became so similar to human beings in their functionality - read this post.
    Check more stories related to machine-learning at: https://hackernoon.com/c/machine-learning. You can also check exclusive content about #ai, #gpt-3, #gpt-4, #agi, #artificial-intelligence, #gpt, #chatgpt, #hackernoon-top-story, #hackernoon-es, #hackernoon-hi, #hackernoon-zh, #hackernoon-vi, #hackernoon-fr, #hackernoon-pt, #hackernoon-ja, and more.

    This story was written by: @thomascherickal. Learn more about this writer by checking @thomascherickal's about page, and for more stories, please visit hackernoon.com.

    Understand why GPTs are so human-like in natural language and how much they have in common with us in terms of operation. Also a bonus: How ChatGPT would go about creating AGI.

    #387 – George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI & God
    George Hotz is a programmer, hacker, and the founder of comma-ai and tiny corp. Please support this podcast by checking out our sponsors: - Numerai: https://numer.ai/lex - Babbel: https://babbel.com/lexpod and use code Lexpod to get 55% off - NetSuite: http://netsuite.com/lex to get free product tour - InsideTracker: https://insidetracker.com/lex to get 20% off - AG1: https://drinkag1.com/lex to get 1 year of Vitamin D and 5 free travel packs Transcript: https://lexfridman.com/george-hotz-3-transcript EPISODE LINKS: George's Twitter: https://twitter.com/realgeorgehotz George's Twitch: https://twitch.tv/georgehotz George's Instagram: https://instagram.com/georgehotz Tiny Corp's Twitter: https://twitter.com/__tinygrad__ Tiny Corp's Website: https://tinygrad.org/ Comma-ai's Twitter: https://twitter.com/comma_ai Comma-ai's Website: https://comma.ai/ Comma-ai's YouTube (unofficial): https://youtube.com/georgehotzarchive Mentioned: Learning a Driving Simulator (paper): https://bit.ly/42T6lAN PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (08:04) - Time is an illusion (17:44) - Memes (20:20) - Eliezer Yudkowsky (32:45) - Virtual reality (39:04) - AI friends (46:29) - tiny corp (59:50) - NVIDIA vs AMD (1:02:47) - tinybox (1:14:56) - Self-driving (1:29:35) - Programming (1:37:31) - AI safety (2:02:29) - Working at Twitter (2:40:12) - Prompt engineering (2:46:08) - Video games (3:02:23) - Andrej Karpathy (3:12:28) - Meaning of life

    [L'HEBDO 18/03/23] The great AI tsunami

    This week, we look at the release of GPT-4, its integration into Microsoft Office, the mass layoffs at Meta, a training course on using ChatGPT, an open-source competitor to ChatGPT, and the impact of AI on society.

    THE WEEK'S NEWS

    - OpenAI has unveiled GPT-4, the new version of its artificial intelligence model, even stronger and more impressive.
    - Microsoft is integrating GPT-4 into Word, Excel, PowerPoint, Outlook and Teams to offer spectacular advanced features.
    - Mark Zuckerberg is laying off another 10,000 people and preparing a new social network to compete with Twitter. (03:43)

    THE WEEK'S INTERVIEWS

    Knowing how to use ChatGPT 🎙 Benoit Raphaël, journalist and co-founder of Flint Media, goes over what's new in GPT-4, in particular image processing. He also describes concrete applications of ChatGPT in various professional fields, from journalism to marketing to computer programming. (13:13)

    Bloom, the open-source AI 🎙 Thomas Wolf, co-founder of Hugging Face. The French company Hugging Face is behind Bloom, a European artificial intelligence model that competes with GPT. Its distinguishing feature: Bloom is open source and therefore far more transparent than its American counterpart. (23:16)

    The AI tsunami 🎙 Olivier Babeau, economist and co-founder of the Institut Sapiens, analyzes the AI wave and the speed at which innovations keep coming. He is surprised that French companies and political leaders are not taking up this subject as quickly as possible, given that its impact promises to be phenomenal. (33:23)

    Happy listening!

    What ChatGPT can do
    More and more applications now come with generative AI built in, most often from ChatGPT. The new version 4.0 is no longer a pure language model; it can also read images. Caution is still advised, though: a lot of the information is simply wrong. Even so, the tool is turning into an indispensable instrument for all knowledge workers.