
    human values

    Explore "human values" with insightful episodes like "From Paperclips to Disaster: AI's Unseen Risks", "Guardrails for the Future: Stuart Russell's Vision on Building Safer AI", "Economics Is the Deadliest Science", "Reid Hoffman confronts the AI critics, w/WaitWhat CEO Jeff Berman" and "Evidence We're In A Simulation Is Everywhere. All You Have To Do Is Look | Joscha Bach PT 2" from podcasts like "A Beginner's Guide to AI", "Economics Explained", "Masters of Scale" and "Impact Theory with Tom Bilyeu", and more!

    Episodes (19)

    From Paperclips to Disaster: AI's Unseen Risks


    In today's episode of "A Beginner's Guide to AI," we venture into the realm of AI ethics with a focus on the thought-provoking paperclip maximizer thought experiment.

    As we navigate this intriguing concept, introduced by philosopher Nick Bostrom, we explore the hypothetical scenario in which an AI's singular goal of manufacturing paperclips leads to unforeseen and potentially catastrophic consequences.

    This journey sheds light on the complexities of AI goal alignment and the critical importance of embedding ethical considerations into AI development.


    Through an in-depth analysis and a real-world case study on autonomous trading algorithms, we underscore the potential risks and challenges inherent in designing AI with safe and aligned goals.


    Want more AI info for beginners? 📧 Join our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin

    This podcast was generated with the help of ChatGPT and Claude 3. We do fact-check with human eyes, but there still might be hallucinations in the output. Join us as we continue to explore the fascinating world of AI, its potential, its pitfalls, and its profound impact on the future of humanity.


    Music credit: "Modern Situations" by Unicorn Heads.

    Guardrails for the Future: Stuart Russell's Vision on Building Safer AI


    For this episode of "A Beginner's Guide to AI," we delve into the critical and thought-provoking realm of creating safer artificial intelligence systems, guided by the pioneering principles of Stuart Russell. In this journey, we explore the concept of human-compatible AI, a vision where technology is designed to align seamlessly with human values, ensuring that as AI advances, it does so in a way that benefits humanity as a whole.


    Stuart Russell, a leading figure in the field of AI, proposes a framework where AI systems are not merely tools of efficiency but partners in progress, capable of understanding and prioritizing human ethics and values. This episode unpacks Russell's principles, from the importance of AI's alignment with human values to the technical and ethical challenges involved in realizing such a vision. Through a detailed case study on autonomous vehicles, we see these principles in action, illustrating the potential and the hurdles in creating AI that truly understands and respects human preferences and safety.


    Listeners are invited to reflect on the societal implications of human-compatible AI and consider their role in shaping a future where technology and humanity coexist in harmony. This episode is not just a discussion; it's a call to engage with the profound questions AI poses to our society, ethics, and future.

    Want more AI info for beginners? 📧 Join our Newsletter! This podcast was generated with the help of ChatGPT and Claude 2. We do fact-check with human eyes, but there still might be hallucinations in the output.


    Music credit: "Modern Situations" by Unicorn Heads.

    Economics Is the Deadliest Science

    If unintended consequences aren't considered, there can be a real and deadly trade-off between economic benefit and human wellbeing. Policy decisions that may seem like economic opportunities can lead, and have led, to famines, economic collapse, and people losing their livelihoods, homes and lives. This is why economics is so important: poor economic decision-making can literally be fatal for millions of people.

    Reid Hoffman confronts the AI critics, w/WaitWhat CEO Jeff Berman


    Are AI’s harshest critics causing irreversible harm to humanity? How far should leaders lean into AI? And just what would Reid sound like speaking Chinese with a British accent?

    Reid Hoffman sits down with CEO of WaitWhat and longtime friend Jeff Berman to tackle these pressing questions and more. Join them in an unguarded exploration of how the AI landscape has continued to shift in the six months since Reid published Impromptu, the book he co-authored with ChatGPT.

    Reid also gives a sneak peek into his work with Microsoft's unreleased AI text-to-speech tech, and discusses why he chose not to join Elon Musk, Steve Wozniak and other tech leaders in signing an open letter warning about the existential threat of AI.

    Plus, you’ll hear how early stage business leaders should tap their network to take advantage of AI, and how larger organizations should create long-term strategic plans that can adapt at the drop of a hat.

    Read a transcript of this episode: https://mastersofscale.com/

    Subscribe to the Masters of Scale weekly newsletter: https://mastersofscale.com/subscribe

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Evidence We're In A Simulation Is Everywhere. All You Have To Do Is Look | Joscha Bach PT 2

    Confusing the distorted version of reality you've created in your mind with truth and perception causes people more suffering than most would ever admit. In today's episode, I'm joined by an expert in cognitive science and artificial intelligence to discuss what you need to know about the self-generated simulation you've created and how to ensure you live the life you want. Joscha Bach is a trailblazing AI researcher reshaping our understanding of mind and universe. Known for his groundbreaking work in cognitive architecture, his revolutionary insights offer a fresh perspective on artificial intelligence. He's an essential voice in this generation that has grabbed my attention, and will grab yours as well.

    In today's episode Joscha walks through self-generated simulations and the possibilities of humans merging consciousness with artificial intelligence, plus: the nature of reality, free will, and consciousness; stages of reality and perception; developing your ability to self-control; and AI's need for desire and goals.

    As an artificial intelligence researcher and cognitive scientist, Joscha Bach's fresh perspectives and insights may send your mind spinning, and they may also trigger your flight responses. Bringing you episodes like this breaks down preconceived notions and creates new opportunities.

    SPONSORS: Get 5 free AG1 Travel Packs and a FREE 1 year supply of Vitamin D with your first purchase at https://bit.ly/AG1Impact. Right now, Kajabi is offering a 30-day free trial to start your own business if you go to https://bit.ly/Kajabi-Impact. Head to www.insidetracker.com and use code "IMPACTTHEORY" to get 20% off! Learn a new language and get 55% off at https://bit.ly/BabbelImpact. Try NordVPN risk-free with a 30-day money-back guarantee by going to https://bit.ly/NordVPNImpact. Give online therapy a try at https://bit.ly/BetterhelpImpact and get on your way to being your best self.
    Go to https://bit.ly/PlungeImpact and use code IMPACT to get $150 off your incredible cold plunge tub today.

    ***Are You Ready for EXTRA Impact?*** If you're ready to find true fulfillment, strengthen your focus, and ignite your true potential, the Impact Theory subscription was created just for you. Want to transform your health, sharpen your mindset, improve your relationship, or conquer the business world? This is your epicenter of greatness. This is not for the faint of heart. This is for those who dare to learn obsessively, every day, day after day.

    * New episodes delivered ad-free
    * Unlock the gates to a treasure trove of wisdom from inspiring guests like Andrew Huberman, Mel Robbins, Hal Elrod, Matthew McConaughey, and many, many more
    * Exclusive access to Tom's AMAs, keynote speeches, and suggestions from his personal reading list
    * You'll also get access to 5 additional podcasts with hundreds of archived Impact Theory episodes, meticulously curated into themed playlists covering health, mindset, business, relationships, and more: Legendary Mindset: Mindset & Self-Improvement; Money Mindset: Business & Finance; Relationship Theory: Relationships; Health Theory: Mental & Physical Health; Power Ups: Weekly Doses of Short Motivational Quotes

    Subscribe on Apple Podcasts: https://apple.co/3PCvJaz

    Subscribe on all other platforms (Google Podcasts, Spotify, Castro, Downcast, Overcast, Pocket Casts, Podcast Addict, Podcast Republic, Podkicker, and more): https://impacttheorynetwork.supercast.com/

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    BTC151: This is NOT Capitalism w/ Allen Farrington (Bitcoin Podcast)

    During Preston Pysh's discussion with Allen Farrington, Allen provides critical thought experiments and challenges to many of the ideas that plague Wall Street and business schools.

    IN THIS EPISODE, YOU'LL LEARN:
    00:00 - Intro
    04:56 - What does the UFC have to do with economics and money?
    08:28 - The Efficient Market Hypothesis.
    19:18 - Is risk volatility?
    19:18 - The Capital Asset Pricing Model.
    31:10 - This is NOT capitalism.
    34:49 - The flaws in using GDP.
    44:21 - Thoughts on Bitcoin and derivative markets.
    44:21 - The societal implications of Bitcoin beyond its economic value.

    BOOKS AND RESOURCES: Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, and the other community members. Allen Farrington's Twitter. Allen's Website. Allen's book, Bitcoin is Venice.

    NEW TO THE SHOW? Check out our We Study Billionaires Starter Packs. Browse through all our episodes (complete with transcripts) here. Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool. Enjoy exclusive perks from our favorite Apps and Services. Stay up-to-date on financial markets and investing strategies through our daily newsletter, We Study Markets. Learn how to better start, manage, and grow your business with the best business podcasts.

    SPONSORS: Support our free podcast by supporting our sponsors: River, Toyota, Linkedin Marketing Solutions, Fidelity, Efani, Shopify, NDTCO, Fundrise, Wise, NetSuite, TurboTax, Vacasa, NerdWallet, Babbel.

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    Teaching AI Right from Wrong: The Quest for Alignment


    This episode explored the concept of AI alignment - how we can create AI systems that act ethically and benefit humanity. We discussed key principles like helpfulness, honesty and respect for human autonomy. Approaches to translating values into AI include techniques like value learning and Constitutional AI. Safety considerations like corrigibility and robustness are also important for keeping AI aligned. A case study on responsible language models highlighted techniques to reduce harms in generative AI. While aligning AI to human values is complex, the goal of beneficial AI is essential to steer these powerful technologies towards justice and human dignity.

    This podcast was generated with the help of artificial intelligence. We do fact check with human eyes, but there might still be hallucinations in the output.

    Music credit: "Modern Situations" by Unicorn Heads.


    ---

    CONTENT OF THIS EPISODE

    AI ALIGNMENT: MERGING TECHNOLOGY WITH HUMAN ETHICS


    Welcome readers! Dive with me into the intricate universe of AI alignment.


    WHY AI ALIGNMENT MATTERS


    With AI's rapid evolution, ensuring systems respect human values is essential. AI alignment delves into creating machines that reflect human goals and values. From democracy to freedom, teaching machines about ethics is a monumental task. We must ensure AI remains predictable, controllable, and accountable.


    UNDERSTANDING AI ALIGNMENT


    AI alignment encompasses two primary avenues:


    Technical alignment: Directly designing goal structures and training methods to induce desired behavior.

    Political alignment: Encouraging AI developers to prioritize public interest through ethical and responsible practices.


    UNRAVELING BENEFICIAL AI


    Beneficial AI revolves around being helpful, transparent, empowering, respectful, and just. Embedding societal values into AI remains a challenge. Techniques like inductive programming and inverse reinforcement learning offer promising avenues.


    ENSURING TECHNICAL SAFETY


    Corrigibility, explainability, robustness, and AI safety are pivotal to making AI user-friendly and safe. We want machines that remain under human control, are transparent in their actions, and can handle unpredictable situations.


    SPOTLIGHT ON LANGUAGE MODELS


    Large language models have showcased both potential and risks. A case in point is Anthropic's efforts to design inherently safe and socially responsible models. Their innovative "value learning" technique embeds ethical standards right into AI's neural pathways.


    WHEN AI GOES WRONG


    From Microsoft's Tay chatbot to biased algorithmic hiring tools, AI missteps have real-world impacts. These instances stress the urgency of proactive AI alignment. We must prioritize ethical AI development that actively benefits society.


    AI SOLUTIONS FOR YOUR BUSINESS


    Interested in integrating AI into your business operations? Argo.berlin specializes in tailoring AI solutions for diverse industries, emphasizing ethical AI development.


    RECAP AND REFLECTIONS


    AI alignment seeks to ensure AI enriches humanity. As we forge ahead, the AI community offers inspiring examples of harmonizing science and ethics. The goal? AI that mirrors human wisdom and values.


    JOIN THE CONVERSATION


    How would you teach AI to be "good"? Share your insights and let's foster a vibrant discussion on designing virtuous AI.


    CONCLUDING THOUGHTS


    As Stanislas Dehaene eloquently states, "The path of AI is paved with human values." Let's ensure AI's journey remains anchored in human ethics, ensuring a brighter future for all.


    Until our next exploration, remember: align with what truly matters.

    Google DeepMind Co-founder: AI Could Release A Deadly Virus - It’s Getting More Threatening! Mustafa Suleyman

    Mustafa Suleyman went from growing up next to a prison to founding the world's leading AI company. Dropping out of Oxford to start a philanthropic endeavour because he thought it would be of more real-world value than a degree, he went on to co-found DeepMind, which has since been acquired by Google and is leading their AI effort. In 2016, DeepMind gained worldwide fame for programming the first AI program to defeat the world champion of Go, considered the most complicated game in the world, with infinitely more variables than chess. There is no better authority on the progress of AI and what it means for all of us. His new book, 'The Coming Wave,' is out on September 5th.

    In this conversation Mustafa and Steven discuss topics such as: emotional responses to AI, surprises of the past decade, concerns and fears, the containment challenge, AI's physical manifestations, regulating AI, the future of AI containment, AI-human interactions, quantum computing and AI, cybersecurity challenges, founding an AI company, government's role in regulation, strategies and approaches for containing AI, the emotional impact of AI, the shift towards AI interactions, guidance for young innovators, success and failure scenarios, and a continuation of the conversation.

    You can purchase Mustafa's book, 'The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma', here: https://amzn.to/3Qudl2Z

    Follow Mustafa: Twitter: https://bit.ly/45FZ0qr

    Watch the episodes on Youtube - https://g2ul0.app.link/3kxINCANKsb

    My new book! 'The 33 Laws Of Business & Life' pre order link: https://smarturl.it/DOACbook

    Follow me: Instagram: http://bit.ly/3nIkGAZ Twitter: http://bit.ly/3ztHuHm Linkedin: https://bit.ly/41Fl95Q Telegram: http://bit.ly/3nJYxST

    Sponsors: Huel: https://g2ul0.app.link/G4RjcdKNKsb Airbnb: http://bit.ly/40TcyNr

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

    AI and Existential Risk - Overview and Discussion


    A special non-news episode in which Andrey and Jeremie discuss AI X-Risk!

    Please let us know if you'd like us to record more of this sort of thing by emailing contact@lastweekin.ai or commenting wherever you listen.

    Outline:

    (00:00) Intro
    (03:55) Topic overview
    (10:22) Definitions of terms
    (35:25) AI X-Risk scenarios
    (41:00) Pathways to Extinction
    (52:48) Relevant assumptions
    (58:45) Our positions on AI X-Risk
    (01:08:10) General Debate
    (01:31:25) Positive/Negative transfer
    (01:37:40) X-Risk within 5 years
    (01:46:50) Can we control an AGI
    (01:55:22) AI Safety Aesthetics
    (02:00:53) Recap
    (02:02:20) Outer vs inner alignment
    (02:06:45) AI safety and policy today
    (02:15:35) Outro

    Links

    Taxonomy of Pathways to Dangerous AI

    Clarifying AI X-risk

    Existential Risks and Global Governance Issues Around AI and Robotics

    Current and Near-Term AI as a Potential Existential Risk Factor

    AI x-risk, approximately ordered by embarrassment

    Classification of Global Catastrophic Risks Connected with Artificial Intelligence

    X-Risk Analysis for AI Research

    The Alignment Problem from a Deep Learning Perspective

    Revolutionizing AI: Tackling the Alignment Problem | Zuzalu #3


    In this episode, we delve into the frontier of AI and the challenges surrounding AI alignment. The AI / Crypto overlap at Zuzalu sparked discussions on topics like ZKML, MEV bots, and the integration of AI agents into the Ethereum landscape. 

    However, the focal point was the alignment conversation, which showcased both pessimistic and resigned-optimistic perspectives. We hear from Nate Soares of MIRI, who offers a downstream view on AI risk, and Deger Turan, who emphasizes the importance of human alignment as a prerequisite for aligning AI. Their discussions touch on epistemology, individual preferences, and the potential of AI to assist in personal and societal growth.

    ------
    🚀 Join Ryan & David at Permissionless in September. Bankless Citizens get 30% off. 🚀
    https://bankless.cc/GoToPermissionless

    ------
    BANKLESS SPONSOR TOOLS:

    🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
    https://k.xyz/bankless-pod-q2

    🦊METAMASK PORTFOLIO | TRACK & MANAGE YOUR WEB3 EVERYTHING
    https://bankless.cc/MetaMask

    ⚖️ ARBITRUM | SCALING ETHEREUM
    https://bankless.cc/Arbitrum

    🛞MANTLE | MODULAR LAYER 2 NETWORK
    https://bankless.cc/Mantle

    👾POLYGON | VALUE LAYER OF THE INTERNET
    https://polygon.technology/roadmap 

    ------

    Timestamps

    0:00 Intro
    1:50 Guests

    5:30 NATE SOARES
    7:25 MIRI
    13:30 Human Coordination
    17:00 Dangers of Superintelligence
    21:00 AI’s Big Moment
    24:45 Chances of Doom
    35:35 A Serious Threat
    42:45 Talent is Scarce
    48:20 Solving the Alignment Problem
    59:35 Dealing with Pessimism
    1:03:45 The Sliver of Utopia

    1:14:00 DEGER TURAN
    1:17:00 Solving Human Alignment
    1:22:40 Using AI to Solve Problems
    1:26:30 AI Objectives Institute
    1:31:30 Epistemic Security
    1:36:18 Curating AI Content
    1:41:00 Scalable Coordination
    1:47:15 Building Evolving Systems
    1:54:00 Independent Flexible Systems
    1:58:30 The Problem is the Solution
    2:03:30 A Better Future

    -----
    Resources

    Nate Soares
    https://twitter.com/So8res?s=20 

    Deger Turan
    https://twitter.com/degerturann?s=20 

    MIRI
    https://intelligence.org/ 

    Less Wrong AI Alignment
    https://www.lesswrong.com/tag/ai-alignment-intro-materials

    AI Objectives Institute
    https://aiobjectives.org/ 

    ------

    Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.

    Disclosure. From time to time I may add links in this newsletter to products I use. I may receive a commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
    https://www.bankless.com/disclosures

    How AI Will Disrupt The Entire World In 3 Years (Prepare Now While Others Panic) | Emad Mostaque PT 2

    If you missed the first part of this episode with Emad Mostaque, let me catch you up. Emad is one of the most prominent figures in the artificial intelligence industry. He's best known for his role as the founder and CEO of Stability AI. He has made notable contributions to the AI sector, particularly through his work with Stable Diffusion, a text-to-image AI generator. Emad takes us on a deep dive into a thought-provoking conversation dissecting the potential, implications, and ethical considerations of AI. Discover how this powerful tool could revolutionize everything from healthcare to content creation. AI will reshape societal structures and potentially solve some of the world's most pressing issues, making this episode a must for anyone curious about the future of AI. We'll explore the blurred lines between our jobs and AI, debate the ethical dilemmas that come with progress, and delve into the complexities of programming AI and the potential threats of misinformation and deepfake technology. Join us as we navigate this exciting but complex digital landscape together, and discover how understanding AI can be your secret weapon in this rapidly evolving world. Are you ready to future-proof your life?

    Follow Emad Mostaque: Website: https://stability.ai/ Twitter: https://twitter.com/EMostaque

    SPONSORS: elizabeth@advertisepurple.com

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    60 Possible AGI Futures

    The debate around AI safety is often presented as a binary, but in reality there is a huge array of scenarios for if and how humans develop AGI and what happens if and when we do. A reading of: https://www.lesswrong.com/posts/SRW9WAEEKJEgHAhSy/60-possible-futures

    The AI Breakdown helps you understand the most important news and discussions in AI.

    Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe

    Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown

    Join the community: bit.ly/aibreakdown

    Learn more: http://breakdown.network/

    Megathreat: Why AI Is So Dangerous & How It Could Destroy Humanity | Mo Gawdat Pt 2

    Mo Gawdat is the former chief business officer for Google X and has built a monumental career in the tech industry, working with the biggest names to reshape and reimagine the world as we know it. From IBM to Microsoft, Mo has lived at the cutting edge of technology and has taken a strong stance that AI is a bigger threat to humanity than global warming. In this second part of the episode, we're looking at the ethical dilemma of AI, the alarming truth about how vulnerable we actually are, and what AI's drive to survive means for humanity. This is just the tip of how deep this conversation gets with Mo. You are sure to walk away with a better view of the impact AI will have on human life, purpose and connection. How can we best balance our desire for progress and convenience with the importance of embracing the messiness and imperfections of the human experience?

    Follow Mo Gawdat: Website: https://www.mogawdat.com/ YouTube: https://www.youtube.com/channel/UCilMYYyoot7vhLn4Tinzmmg Twitter: https://twitter.com/mgawdat Instagram: https://www.instagram.com/mo_gawdat/

    SPONSORS: Get 5 free AG1 Travel Packs and a FREE 1 year supply of Vitamin D with your first purchase at https://bit.ly/AG1Impact. Get $300 into your brokerage account when you invest $5k within your first 90 days by going to https://bit.ly/FacetImpact. Head to www.insidetracker.com and use code "IMPACTTHEORY" to get 20% off! Sign up for a one-dollar-per-month trial period at https://bit.ly/ShopifyImpact.

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    The Most Frightening Article I’ve Ever Read (Ep 1988)

    In this episode, I address the single most disturbing article I've ever read. It addresses the ominous threat of out-of-control artificial intelligence. The threat is here.

    News Picks:
    The article about the dangers of AI that people are talking about.
    An artificial intelligence program plots the destruction of humankind.
    More information surfaces about the FBI spying scandal on Christians.
    San Francisco Whole Foods closes only a year after opening.
    This is an important piece about the parallel economy and the Second Amendment.

    Copyright Bongino Inc. All Rights Reserved.

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

    AI Is TAKING Over. DON'T Get Left BEHIND | Tom Bilyeu

    ChatGPT went from 0 to over 100 million users in less than 2 months. AI is super fucking dangerous, but it's inevitable, will make our lives better, and you should CAREFULLY love it. Is AI dangerous? Next to thermonuclear war and an asteroid strike, I'd say it's the most radically dangerous threat that we face as a species. Buuuuut I'm still super excited. And by the end of this video you will be too. If you're terrified of AI right now, you're rational. If you're not at least worried it will take a wrong turn into James Cameron territory, you're not paying attention.

    To paraphrase Elon Musk, the human endeavor to create AI could be the first step towards a technological utopia or a demon-summoning circle. It's not clear which way it will go. But one thing is for sure: we will not control it. And he's right. But the truth is, the genie is out of the bottle, and we're going to have to deal with it. I fear it will be a bit of both.

    AI, which simply stands for Artificial Intelligence, is getting smart. Really smart. And AI doesn't have to outsmart us by much to completely dominate us. Let me give you an example for scale: a moron is technically someone with an IQ of about 70. The average IQ in the US is 98. Einstein was estimated to be 160. And according to the Guinness Book of World Records, the highest IQ ever recorded came in at a whopping 210. That means that a super genius is only 3 times smarter than a moron. But Einstein, who was actually only 2.3 times smarter than a moron, had insights that gave birth to nuclear power and nuclear bombs, not to mention lasers, GPS, and a whole lot more. Said another way, a little intelligence goes a loooooooong way. Nuclear warheads are scary enough, but AI isn't going to be a little smarter than Einstein, it's going to be a LOT smarter. Not 5 times smarter or 10 times or even 10,000 times smarter. We're talking millions of times smarter.
Despite the reality of all the bad stuff that's coming, here are three traps we must avoid: 1. Burying our heads in the sand. 2. Panicking. 3. Trying to abolish AI. Panicking is the same as actively taking your own intelligence offline. When you panic, blood leaves your prefrontal cortex, which is the seat of high-level cognition. Therefore it actually makes you dumber. And in the face of artificial superintelligence, that strikes me as a very bad idea.

Learn more about your ad choices. Visit megaphone.fm/adchoices

    A Skeptical Take on the A.I. Revolution

    The year 2022 was jam-packed with advances in artificial intelligence, from the release of image generators like DALL-E 2 and text generators like Cicero to a flurry of developments in the self-driving car industry. And then, on November 30, OpenAI released ChatGPT, arguably the smartest, funniest, most humanlike chatbot to date.

    In the weeks since, ChatGPT has become an internet sensation. If you’ve spent any time on social media recently, you’ve probably seen screenshots of it describing Karl Marx’s theory of surplus value in the style of a Taylor Swift song or explaining how to remove a sandwich from a VCR in the style of the King James Bible. There are hundreds of examples like that.

    But amid all the hype, I wanted to give voice to skepticism: What is ChatGPT actually doing? Is this system really as “intelligent” as it can sometimes appear? And what are the implications of unleashing this kind of technology at scale?

    Gary Marcus is an emeritus professor of psychology and neural science at N.Y.U. who has become one of the leading voices of A.I. skepticism. He’s not “anti-A.I.”; in fact, he’s founded multiple A.I. companies himself. But Marcus is deeply worried about the direction current A.I. research is headed, and even calls the release of ChatGPT A.I.’s “Jurassic Park moment.” “Because such systems contain literally no mechanisms for checking the truth of what they say,” Marcus writes, “they can easily be automated to generate misinformation at unprecedented scale.”

    However, Marcus also believes that there’s a better way forward. In the 2019 book “Rebooting A.I.: Building Artificial Intelligence We Can Trust,” Marcus and his co-author Ernest Davis outline a path to A.I. development built on a very different understanding of what intelligence is and the kinds of systems required to develop that intelligence. And so I asked Marcus on the show to unpack his critique of current A.I. systems and what it would look like to develop better ones.

    This episode contains strong language.

    Mentioned:

    “On Bullshit” by Harry Frankfurt

    “AI’s Jurassic Park moment” by Gary Marcus

    “Deep Learning Is Hitting a Wall” by Gary Marcus

    Book Recommendations:

    The Language Instinct by Steven Pinker

    How the World Really Works by Vaclav Smil

    The Martian by Andy Weir

    Thoughts? Email us at ezrakleinshow@nytimes.com. Guest suggestions? Fill out this form.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    “The Ezra Klein Show” is produced by Emefa Agawu, Annie Galvin, Jeff Geld, Rogé Karma and Kristin Lin. Fact-checking by Mary Marge Locker and Kate Sinclair. Original music by Isaac Jones. Mixing by Jeff Geld and Sonia Herrero. Audience strategy by Shannon Busta. The executive producer of New York Times Opinion audio is Annie-Rose Strasser.

    BONUS Episode “Scary Smart” Artificial Intelligence with Mo Gawdat

    You might have noticed over the last few episodes that I’ve been keen to discuss subjects slightly leftfield of nutrition and what I’ve traditionally talked about, but fascinating nonetheless. And I hope you as a listener, whose time and attention I value so greatly, will trust me as I take you on a bit of a ride. Because ultimately, I hope you agree that the topics I share are always very important.


    Mo Gawdat, who you may remember from episode #91 Solving Happiness, is a person who I cherish and with whom I had a very impactful conversation, on a personal level. He is the former Chief Business Officer of Google [X], which is Google’s ‘moonshot factory’, author of the international bestselling book ‘Solve for Happy’ and founder of ‘One Billion Happy’. After a long career in tech, Mo made happiness his primary topic of research, diving deeply into literature and conversing on the topic with some of the wisest people in the world on “Slo Mo: A Podcast with Mo Gawdat”.


    Mo is an exquisite writer and speaker with deep expertise in technology as well as a passionate appreciation for the importance of human connection and happiness. He possesses a rare set of overlapping skills and a breadth of knowledge in the fields of both human psychology and tech. His latest piece of work, a book called “Scary Smart”, is a timely prophecy and call to action that puts each of us at the center of designing the future of humanity. I know that sounds intense, right? But it’s very true.


    During his time at Google [X], he worked on the world’s most futuristic technologies, including Artificial Intelligence. During the pod he recalls the story of when the penny dropped for him, just a few years ago, and he felt compelled to leave his job. Having contributed to AI’s development, he now feels a sense of duty to inform the public on the implications of this controversial technology: how we navigate the scary and inevitable intrusion of AI, and who really is in control. Us.


    Today we discuss:

    The pandemic of AI and why the handling of COVID is a lesson to learn from

    The difference between collective intelligence, artificial intelligence, and superintelligence or artificial general intelligence

    How machines started creating and coding other machines

    The 3 inevitable outcomes - including the fact that AI is here and it will outsmart us

    How machines will become emotional, sentient beings with a superconsciousness


    To understand this episode you have to accept that what we are creating is essentially another lifeform. Albeit non-biological, it will have human-like attributes in the way it learns, as well as a moral value system that could immeasurably improve the human race as we know it. But our destiny lies in how we treat and nurture these beings as our own - literally like infants, with (as strange as it is to say it) love, compassion, connection and respect.


    Full show notes for this and all other episodes can be found on The Doctor's Kitchen.com website



    Hosted on Acast. See acast.com/privacy for more information.