    ai governance

    Explore "ai governance" with insightful episodes like "Rethinking the lifecycle of AI when it comes to deepfakes and kids", "EP 202: The Holy Grail of AI Mass Adoption - Governance", "Who should be in charge of AI?", "The Winners and Losers Following the OpenAI Power Struggle" and "Babbage: Fei-Fei Li on how to really think about the future of AI" from podcasts like "Marketplace Tech", "Everyday AI Podcast – An AI and ChatGPT Podcast", "Search Engine", "The AI Breakdown: Daily Artificial Intelligence News and Discussions" and "Economist Podcasts", and more!

    Episodes (19)

    Rethinking the lifecycle of AI when it comes to deepfakes and kids

    The following content may be disturbing to some listeners.

    For years, child sexual abuse material was mostly distributed by mail. Authorities used investigative techniques to stem its spread. That got a lot harder when the internet came along. And AI has supercharged the problem.

    “Those 750,000 predators that are online at any given time looking to connect with minor[s] … they just need to find a picture of a child and use the AI to generate child sexual abuse materials and superimpose these faces on something that is inappropriate,” says child safety advocate and TikTokker Tiana Sharifi.

    The nonprofit Thorn has created new design principles aimed at fighting child sexual abuse. Rebecca Portnoff, the organization’s vice president of data science, says tech companies need to develop better technology to detect AI-generated images and commit not to use this material to train AI models.

    EP 202: The Holy Grail of AI Mass Adoption - Governance

    AI governance is a tricky topic that no one seems to know how to approach, and there can be many roadblocks to clear before implementing it properly. Gabriella Kusz, Sr. Fellow at AI2030, joins us to discuss how to create governance for AI mass adoption.

     Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode page
    Join the discussion: Ask Jordan and Gabriella questions on AI governance
    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Timestamps:
    01:30 Daily AI news
    05:00 About Gabriella and AI2030
    08:44 Challenges with AI technology and governance
    12:02 Rapid pace challenges global technical standards creation
    13:26 Ethical AI guidance available from Fintech for Good
    16:41 Parallels between ESG and responsible AI adoption
    22:33 Industry self-regulation precedes formal laws
    26:00 Inclusive, multidisciplinary teams crucial for governance
    27:57 Clear boundaries for leveraging technology for advantage
    33:29 Recognize failures, iterate, and empower AI progress
    36:00 Set up a task force, admit setbacks, make progress

    Topics Covered in This Episode:
    1. Role of corporations and Government in AI Governance
    2. Organization's Governance Structure
    3. Risks of AI and Generative AI
    4. Practical Tips for AI Governance
    5. Ethical and Global Technical Standards

    Keywords:
    ESG, AI, Artificial intelligence, Emerging areas, Pilot period, Return on investment, Risk identification, Risk mitigation, Governance, Legislation, Change management, Generative AI, Inclusion, Education, ongoing learning, Technology usage, Ethical standards, Global technical standards, Fintech for Good, AI 2030, NIST, Taskforce, Failure in AI implementation, Self-education, AI governance, AI applications, Business growth, AI news.

    Who should be in charge of AI?

    This week, the story of a very brief, very absurd revolution at the world’s leading artificial intelligence company, OpenAI. And we try to answer the quite real question that might be animating all of the drama. Check out Casey Newton's newsletter Platformer and our newsletter, too. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

    The Winners and Losers Following the OpenAI Power Struggle

    As the dust settles, NLW looks at who has benefitted and who has lost out after Sam Altman's removal and reinstatement. Spoiler alert: no one comes out better. ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/

    Babbage: Fei-Fei Li on how to really think about the future of AI

    A year ago, the public launch of ChatGPT took the world by storm and it was followed by many more generative artificial intelligence tools, all with remarkable, human-like abilities. Fears over the existential risks posed by AI have dominated the global conversation around the technology ever since.


    Fei-Fei Li, a pioneer who helped lay the groundwork that underpins modern generative AI models, takes a more nuanced approach. She’s pushing for a human-centred way of dealing with AI—treating it as a tool to help enhance—and not replace—humanity, while focussing on the pressing challenges of disinformation, bias and job disruption.


    Fei-Fei Li is the founding co-director of Stanford University’s Institute for Human-Centred Artificial Intelligence. Fei-Fei and her research group created ImageNet, a huge database of images that enabled computer scientists to build algorithms that were able to see and recognise objects in the real world. That endeavour also introduced the world to deep learning, a type of machine learning that is a fundamental part of how large-language and image-creation models work.


    Host: Alok Jha, The Economist’s science and technology editor. 


    Sign up for a free trial of Economist Podcasts+. If you’re already a subscriber to The Economist, you’ll have full access to all our shows as part of your subscription. For more information about how to access Economist Podcasts+, please visit our FAQs page or watch our video explaining how to link your account.



    Hosted on Acast. See acast.com/privacy for more information.


    Sam Altman Fired as OpenAI CEO, Joins Microsoft?

    Sam Altman has been fired as CEO of OpenAI and joined Microsoft this morning, sending the internet into a spiral. Here's everything you need to know about what went down and what it all means for the world.


    -----
    🏹 Airdrop Hunter is HERE, join your first HUNT today
    https://bankless.cc/JoinYourFirstHUNT 


    ------
    BANKLESS SPONSOR TOOLS:


    🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
    https://k.xyz/bankless-pod-q2


    🦊METAMASK PORTFOLIO | MANAGE YOUR WEB3 EVERYTHING
    https://bankless.cc/MetaMask 


    ⚖️ARBITRUM | SCALING ETHEREUM
    https://bankless.cc/Arbitrum


    🔗CELO | CEL2 COMING SOON
    https://bankless.cc/Celo 


    👾GMX | V2 IS NOW LIVE 
    https://bankless.cc/GMX 


    💲 USDV | NATIVE OMNICHAIN STABLECOIN
    https://bankless.cc/usdv  


    ------
    TIMESTAMPS


    00:00 What Happened?
    https://x.com/OpenAI/status/1725611900262588813?s=20 
    https://openai.com/blog/openai-announces-leadership-transition 
    https://x.com/sama/status/1725631621511184771?s=20 
    https://x.com/gdb/status/1725736242137182594?s=20 


    06:25 OpenAI Board Members
    https://twitter.com/ilyasut?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor 
    https://twitter.com/adamdangelo?lang=en 
    https://www.linkedin.com/pulse/tasha-mccauley-wikipedia-joseph-gordon-levitt-wife-ali-raza-3gsbf/?trk=article-ssr-frontend-pulse_more-articles_related-content-card 
    https://cset.georgetown.edu/staff/helen-toner/ 


    11:59 The New CEO
    https://x.com/eshear/status/1726526112019382275?s=20 
    https://x.com/adcock_brett/status/1726484049664057789?s=20 
    https://x.com/EffMktHype/status/1726558177028935770?s=20 
    https://x.com/Jason/status/1726492873775079807?s=20 


    16:32 Microsoft's Role
    https://x.com/satyanadella/status/1726509045803336122?s=20 
    https://x.com/sama/status/1726510261509779876?s=20
    https://x.com/balajis/status/1726515841221681619?s=20 


    22:27 Employee Pushback
    https://x.com/miramurati/status/1726542556203483392?s=20 
    https://x.com/ilyasut/status/1726590052392956028?s=20 


    25:01 The Future of Silicon Valley


    28:03 Decel vs e/acc
    https://x.com/hosseeb/status/1726492186953535764?s=20 


    31:11 Takeaways


    ------
    Not financial or tax advice. See our investment disclosures here:
    https://www.bankless.com/disclosures

    Paul Christiano - Preventing an AI Takeover

    Paul Christiano is the world’s leading AI safety researcher. My full episode with him is out!

    We discuss:

    - Does he regret inventing RLHF, and is alignment necessarily dual-use?

    - Why he has relatively modest timelines (40% by 2040, 15% by 2030)

    - What do we want the post-AGI world to look like (do we want to keep gods enslaved forever)?

    - Why he’s leading the push to get labs to develop responsible scaling policies, and what it would take to prevent an AI coup or bioweapon

    - His current research into a new proof system, and how this could solve alignment by explaining a model's behavior

    - and much more.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Open Philanthropy

    Open Philanthropy is currently hiring for twenty-two different roles to reduce catastrophic risks from fast-moving advances in AI and biotechnology, including grantmaking, research, and operations.

    For more information and to apply, please see the application: https://www.openphilanthropy.org/research/new-roles-on-our-gcr-team/

    The deadline to apply is November 9th; make sure to check out those roles before they close.

    Timestamps

    (00:00:00) - What do we want post-AGI world to look like?

    (00:24:25) - Timelines

    (00:45:28) - Evolution vs gradient descent

    (00:54:53) - Misalignment and takeover

    (01:17:23) - Is alignment dual-use?

    (01:31:38) - Responsible scaling policies

    (01:58:25) - Paul’s alignment research

    (02:35:01) - Will this revolutionize theoretical CS and math?

    (02:46:11) - How Paul invented RLHF

    (02:55:10) - Disagreements with Carl Shulman

    (03:01:53) - Long TSMC but not NVIDIA



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    Hinton and Bengio on Managing AI Risks

    A group of scientists, academics and researchers has released a new framework on managing AI risks. NLW explores whether we're moving to more specific policy proposals. Today's Sponsors: Listen to the chart-topping podcast 'web3 with a16z crypto' wherever you get your podcasts or here: https://link.chtbl.com/xz5kFVEK?sid=AIBreakdown  ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/

    Smart Talks with IBM: Responsible AI: Why Businesses Need Reliable AI Governance

    To deploy responsible AI and build trust with customers, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos discuss AI accountability with Christina Montgomery, Chief Privacy and Trust Officer at IBM. They chat about AI regulation, what compliance means in the AI age, and why transparent AI governance is good for business.

    Visit us at: ibm.com/smarttalks

    Explore watsonx.governance: https://www.ibm.com/products/watsonx-governance

    This is a paid advertisement from IBM.

    See omnystudio.com/listener for privacy information.

    AutoGPT: AI's Awakening? The Promises and Perils of Recursive Self-Improvement

    Today we explore AutoGPT, an AI system that can recursively improve its own natural language skills without human supervision. Through unconstrained learning, AutoGPT can acquire expertise faster than humans. A case study showed AutoGPT mastering chess in just 72 hours by analyzing games. AutoGPT has huge potential but also poses risks like technological unemployment. Key insights:


    • AutoGPT represents a milestone in AI due to recursive self-improvement
    • It points to AI that can learn any textual skill without explicit training
    • Applications could reshape medicine, law, science and more
    • But it also risks automating human jobs and expertise
    • Responsible governance of the technology will be critical


    This podcast was generated with the help of artificial intelligence. We do fact check with human eyes, but there might still be hallucinations in the output.


    Music credit: "Modern Situations by Unicorn Heads"

    It’s imperative – and nearly impossible – to contain artificial intelligence, expert says

    When Mustafa Suleyman co-founded the AI research company DeepMind more than a decade ago, his goal felt ambitious, even a bit far-fetched: to build a machine that could replicate human intelligence. Now, he says, rapid progress in the development of AI means that goal could be met within the next three years, and the implications of that milestone are huge. Suleyman explores those implications in his new book, “The Coming Wave,” which came out this week. Marketplace’s Lily Jamali spoke to Suleyman, now CEO and co-founder of Inflection AI, about a core theme of the book: the question of containment.

    Google DeepMind Co-founder: AI Could Release A Deadly Virus - It’s Getting More Threatening! Mustafa Suleyman

    Mustafa Suleyman went from growing up next to a prison to founding the world's leading AI company. Dropping out of Oxford to start a philanthropic endeavour because he thought it would be of more real-world value than a degree, he went on to co-found DeepMind, which has since been acquired by Google and is leading their AI effort. In 2016, DeepMind gained worldwide fame for programming the first AI program to defeat the world champion of Go, considered the most complicated game in the world, with infinitely more variables than chess. There is no better authority on the progress of AI and what it means for all of us. His new book, 'The Coming Wave,' is out on September 5th.

    In this conversation, Mustafa and Steven discuss topics such as:
    Emotional Responses to AI
    Surprises of the Past Decade
    Concerns and Fears
    The Containment Challenge
    AI's Physical Manifestations
    Regulating AI
    The Future of AI Containment
    AI-Human Interactions
    Quantum Computing and AI
    Cybersecurity Challenges
    Founding an AI Company
    Government's Role in Regulation
    Containing AI: Strategies and Approaches
    Emotional Impact of AI
    The Shift Towards AI Interactions
    Guidance for Young Innovators
    Success and Failure Scenarios
    Continuation of the Conversation

    You can purchase Mustafa’s book, ‘The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma’, here: https://amzn.to/3Qudl2Z
    Follow Mustafa on Twitter: https://bit.ly/45FZ0qr
    Watch the episodes on Youtube - https://g2ul0.app.link/3kxINCANKsb
    My new book! 'The 33 Laws Of Business & Life' pre-order link: https://smarturl.it/DOACbook
    Follow me:
    Instagram: http://bit.ly/3nIkGAZ
    Twitter: http://bit.ly/3ztHuHm
    Linkedin: https://bit.ly/41Fl95Q
    Telegram: http://bit.ly/3nJYxST
    Sponsors:
    Huel: https://g2ul0.app.link/G4RjcdKNKsb
    Airbnb: http://bit.ly/40TcyNr
    Learn more about your ad choices. Visit podcastchoices.com/adchoices

    EP 61: Using AI to Improve Banking and Financial Services

    We know that AI can help us make more money. But what can it do to help improve the way we manage and deal with money? Today, Nazia Raoof, IT Strategy and Strategic Partnerships for Financial Services, joins us as we delve into the world of banking and financial services and explore how AI can be used to improve these industries.

    Newsletter: Sign-up for our free daily newsletter
    More on this: Episode Page
    More on this topic in today's newsletter
    Join the discussion: Ask Nazia and Jordan questions about AI and financial services
    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Time Stamps:
    [00:00:17] Daily AI news
    [00:06:35] ChatGPT as a complementary tool for financial advisors
    [00:09:15] Firms adapting to AI and use cases
    [00:14:33] Embedding AI in platforms and APIs while adding guardrails
    [00:16:47] Generative AI's risk for financial institutions' security
    [00:19:53] Final takeaway

    Topics Covered in This Episode:
    - Risks posed to financial institutions and how banks handle AI
    - Implementation of AI governance committee
    - Functions of the committee in evaluating and implementing AI
    - Understanding of decision makers in the financial industry regarding generative AI
    - Extracting information from PDFs and spreadsheets using generative AI
    - Importance of adapting to using bots in the financial industry
    - Personalized advice and financial education offered by bots
    - Use of bots in websites for loan assistance and account support
    - Guideline establishment and embracing the use of bots by banks
    - Use of embedded AI and ChatGPT in platforms, APIs, and partnerships
    - Focus on implementing guardrails to protect against data breaches
    - Accuracy and limitations of ChatGPT requiring fact-checking
    - Risk of younger generations relying on AI-generated code without understanding its quality
    - JPMorgan's use of ChatGPT and AI in production
    - Development of JPMorgan's platform called Index GPT for investment selection

    Keywords:
    ChatGPT, generative AI, connections, voice cloning, impersonation, security, financial sector, risks, financial institutions, AI governance committee, ethical uses of AI, education, evaluation, decision making, prioritizing, implementation, understanding, power, potential, extraction, PDFs, spreadsheets, availability, bots, personalized advice, financial education, websites, loan assistance, account support, guidelines, embedded AI, platforms, APIs, strategic partnerships, data breaches, sensitive information, accuracy, fact-checking, verification, younger generations, AI-generated code, JPMorgan, production, investment selection, robo advisors, complementary tool, information security, analysis of large data sets, access to information, technology leader, business problems, process automation, strategy, delivery, Eliza, communication between humans and computers, pattern matching, substitution methodology, security measures, fraud attempts, identification, fraud, podcast, generative AI in banking, Meta, Microsoft, Llama Two, Bing Chat enterprise, Xai, artificial g

    EP 56: How To Govern and Properly Use AI

    Is it possible to govern AI or has it already grown to be bigger than we can handle? John Chiappetta, Principal and CEO of HG Technical Consulting, joins us to discuss the importance of defining the audience and setting goals before introducing AI.

    Newsletter: Sign-up for our free daily newsletter
    More on this: Episode Page
    Join the discussion: Ask John and Jordan questions about governing AI
    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup

    [00:00:17] Daily AI news

    [00:03:59] Intro to John Chiappetta

    [00:10:18] Examples of AI in the classroom

    [00:14:03] Cybersecurity is crucial for risk assessments

    [00:17:27] Professors want students to learn, not rely on GPT

    [00:19:34] Defining audiences and conducting research to govern AI 

    [00:22:43] Roadmap to governance of AI

    Topics Covered:

    Importance of defining audience and customers before introducing AI (Defining Audience and Customers)

    Creating a roadmap for AI implementation, setting goals and objectives (Creating a Roadmap)

    Emphasizing the need for continuous research in AI implementation (Continuous Research)

    Involving employees at all levels for diverse perspectives on AI (Employee Involvement)

    Potential benefits of using AI, including better revenue and optimized resources (Benefits of AI)

    Addressing job replacement fears and resource readjustment (Job Replacement and Resource Adjustments)

    Recommendations for companies new to AI (Recommendations for Newcomers)
        - Research and assemble a team
        - Consider AI principles and one applicable aspect

    Acknowledging the benefits of AI for companies that have not considered it (Benefits for Non-AI Companies)

    Not all companies are using generative AI; many are seeking recommendations (Generative AI and Recommendations)

    Importance of taking an AI class and John's agreement (Importance of AI Classes)

    Educators' polarization on AI detection in education space (Educators' Polarization)

    Use of ChatGPT by students in writing papers and issues with professors (Use of ChatGPT in Education)

    Concerns about students becoming dependent on AI tools in education (Concerns in Education)

    Importance of roadmaps and governance in AI implementation (Importance of Roadmaps and Governance)

    Starting the process of mitigating risk in AI and the importance of education (Mitigating Risk in AI)

    Balancing the benefits of AI with ethical frameworks and policies (Balancing Benefits and Ethical Frameworks)

    Keywords:
    implementing AI classes, securing funding, educating teachers, sharing information with the corporate sector, high demand, quick fill-up, enthusiastic students, eager to learn about AI, real-life examples, interactivity, hands-on learning, train models, identify objects, explore own ideas, unleash creativity, US senators, classified senate briefing, governance of AI, Indian CEO, criticism for layoffs, AI support bot, concerns about being replaced by AI, VC billionaire Marc Andreessen, scenarios for AI, Chinese world domination, control and govern AI, guest speaker John Chiappetta, defining audience and customers, creating a roadmap, research, involving employees, potential benefits of using AI, better revenue, improved margins, optimizing resources, readjusting resources, assembling a team

    Will Everyone Have a Personal AI? With Mustafa Suleyman, Founder of DeepMind and Inflection

    Mustafa Suleyman, co-founder of DeepMind and now co-founder and CEO of Inflection AI, joins Sarah and Elad to discuss how his interests in counseling, conflict resolution, and intelligence led him to start an AI lab that pioneered deep reinforcement learning, lead applied AI and policy efforts at Google, and more recently found Inflection and launch Pi. Mustafa offers insights on the changing structure of the web, the pressure Google faces in the age of AI personalization, predictions for model architectures, how to measure emotional intelligence in AIs, and the thinking behind Pi: the AI companion that knows you, is aligned to your interests, and provides companionship. Sarah and Elad also discuss Mustafa’s upcoming book, The Coming Wave (releasing September 12, 2023), which examines the political ramifications of AI and digital biology revolutions.

    No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

    Show Links:
    Forbes - Startup From Reid Hoffman and Mustafa Suleyman Debuts ChatBot
    Inflection.ai
    Mustafa-Suleyman.ai

    Sign up for new podcasts every week. Email feedback to show@no-priors.com
    Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @mustafasuleymn

    Show Notes:
    [00:06] - From Conflict Resolution to AI Pioneering
    [10:36] - Defining Intelligence
    [15:32] - DeepMind's Journey and Breakthroughs
    [24:45] - The Future of Personal AI Companionship
    [33:22] - AI and the Future of Personalized Content
    [41:49] - The Launch of Pi
    [51:12] - Mustafa’s New Book The Coming Wave

    This Philosopher Wants Liberals to Take Political Power Seriously

    America today faces a crisis of governance. In the face of numerous challenges — from climate change, to housing shortages, to pandemics — our institutions struggle to act quickly and decisively. Democratic processes often get captured by special interests or paralyzed by polarization. And, in response, public faith in government has reached a new low.

    For the political philosopher Danielle Allen, this crisis requires a complete transformation of our democratic institutions. “Representation as designed cannot work under current conditions,” she writes. “We have no choice but to undertake a significant project of democracy renovation.” Allen’s most recent book — “Justice By Means of Democracy” — puts forth a sweeping vision of what she calls “power-sharing liberalism,” which aims to place political equality, power and participation at the center of liberal thinking.

    But Allen isn’t just a theorist of liberal governance; she’s actively applying her insights in the real world. As the director of Harvard’s Edmond and Lily Safra Center for Ethics, she’s convened interdisciplinary groups to tackle a range of challenges from building Covid-19 testing infrastructure to innovating in A.I. governance. She was co-chair of the “Our Common Purpose” commission, which put forward over 30 specific policy recommendations for reinventing American democracy. She even ran for governor of Massachusetts.

    So this is a conversation about what it would mean to build a better, more responsive and inclusive government — and the numerous challenges standing in the way of doing that. Along the way, we discuss liberals’ failure to take power seriously, Colorado’s experiments with “plural voting,” Seattle’s efforts to publicly finance elections through “democracy bucks,” Taiwan’s groundbreaking innovations in deliberative democracy, whether most citizens actually want deeper participation in government — or just better results from it, what it would mean to democratically govern AI development and much more.

    Mentioned:

    “Introducing Power-Sharing Liberalism” by Danielle Allen

    “Movement vs. Abundance Progressives” by Misha David Chellam

    How Democracies Die by Steven Levitsky and Daniel Ziblatt

    “Our Common Purpose” Report

    How A.I. Fails Us

    Book Recommendations:

    The Darkened Light of Faith by Melvin L. Rogers

    Life 3.0 by Max Tegmark

    Open Democracy by Hélène Landemore

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” is produced by Emefa Agawu, Jeff Geld, Kristin Lin, and Roge Karma. Fact-checking by Michelle Harris. Mixing by Jeff Geld. Original music by Isaac Jones. Audience strategy by Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero and Kristina Samulewski.

    My View on A.I.

    This is something a bit different: Not an interview, but a commentary of my own. 

    We’ve done a lot of shows on A.I. of late, and there are more to come. On Tuesday, GPT-4 was released, and its capabilities are stunning, and in some cases, chilling. More on that in Tuesday’s episode. But I wanted to take a moment to talk through my own views on A.I. and how I’ve arrived at them. I’ve come to believe that we’re in a potentially brief interregnum before the pace of change accelerates to a rate that is far faster than is safe for society. Here’s why.

    Column:

    “This Changes Everything” by Ezra Klein

    Episode Recommendations:

    Sam Altman on the A.I. revolution

    Brian Christian on the alignment problem

    Gary Marcus on the case for A.I. skepticism

    Ted Chiang on what humans really fear about A.I.

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. 

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    “The Ezra Klein Show” is produced by Emefa Agawu, Annie Galvin, Jeff Geld, Roge Karma and Kristin Lin. Fact-checking by Rollin Hu. Mixing by Isaac Jones. Original music by Isaac Jones and Pat McCusker. Audience strategy by Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser.