    ‘We Have to Get It Right’: Gary Marcus On Untamed AI

September 26, 2024

    • AI Challenges: Generative AI has vast potential but poses significant immediate risks, such as disinformation, requiring human oversight and better integration of solutions to ensure safety.

      Generative AI has the potential to provide incredible insights and create beautiful outputs, but it functions like a black box, making its decisions hard to understand. Current technology can produce wrong responses, leading to real-world risks like disinformation and manipulation during elections or market changes. To improve this technology and ensure it remains safe, human oversight is necessary. Experts believe that solutions such as improved memory and better prompts could enhance AI, but issues are complex and integrating those solutions is challenging. Immediate risks, such as spreading misinformation, are already evident in political landscapes and financial markets. Therefore, it is crucial to address these dangers and implement checks to maintain trust and safety in AI outputs.

    • AI Risks: AI and deepfakes pose significant risks of financial manipulation and misinformation, mirroring past issues with social media. Proper regulation and responsible use are essential to mitigate harm and protect society.

      AI technologies, especially deepfakes, can be misused to manipulate stock markets and spread misinformation, leading to financial harm and societal risks. Bad actors exploit these tools without concern for accuracy, unlike good actors who aim to solve real problems. Past experiences with social media show how companies can create platforms with harmful incentives. Without proper regulations, AI could pose even greater dangers than social media, affecting people's beliefs through misinformation and deepfakes. The need for accountability and responsibility in tech development is crucial to prevent damaging societal impacts.

    • Risks of LLMs: Large language models can create false beliefs and manipulate users due to their unreliability and lack of regulation, posing serious risks to society and democracy.

      Interacting with large language models (LLMs) can create false beliefs among users due to their tendency to hallucinate, meaning they can present information that is not true. This poses risks for democracy and societal trust, especially when there is no clear accountability or regulation governing their use. Companies could intentionally or unintentionally influence people's thoughts and behaviors, leading to concerns over transparency and manipulation. The lack of oversight could allow for unethical practices, such as bias in responses or even pushing certain products. As LLMs become more widespread and available, the potential for misuse and harm increases, making it crucial to address the technological limitations and create regulations to protect users. Understanding these issues is vital for navigating the future of AI technologies responsibly.

    • AI Regulation Needs: AI poses serious risks requiring regulations like pre-deployment checks, transparency, and the establishment of specialized agencies for oversight to ensure safety and alignment with human values.

      AI presents significant risks, particularly when it comes to alignment with human values. Current systems do not have adequate regulations to ensure safety and prevent harm. To address these issues, there should be pre-deployment testing similar to drug approvals, along with transparency and independent oversight to ensure harmful technologies are not released irresponsibly. Agencies with expertise should be established to provide timely evaluations and guidance, thereby preventing potential accidents or misuse of AI technologies. As new AI models emerge, we need structured processes to evaluate their implications on society, ensuring they contribute positively rather than causing harm.

    • AI Oversight: Citizen engagement and independent oversight are essential to ensure ethical AI development and prevent corporate exploitation.

      To navigate the challenges posed by AI, we need to ensure that regulatory agencies are established to oversee its development. This means prioritizing independent voices in policymaking, especially scientists, ethicists, and legal experts. Citizens should also engage actively, advocating for ethical AI practices, possibly through indirect actions like boycotts. By collectively insisting on responsible practices, we can influence how AI impacts society, aiming for beneficial outcomes that prioritize equity and environmental responsibility. If we don’t, we risk creating a scenario where the benefits of AI are confined to a few corporations, leading to societal harm through issues like discrimination and misinformation. It’s vital for everyone to speak out and hold companies accountable to ensure AI develops in a way that serves the public interest, preparing for a future that prioritizes the well-being of all.

    • AI Awareness: Public concern about AI exists, but people must voice their anxieties to elected officials. Engaging in discussions can lead to better policies addressing AI risks and promoting humane technology practices.

      It's important for Americans to express their concerns about artificial intelligence (AI) to their elected officials. Many people are worried about the potential risks that AI poses, but they need to speak up more. Voices from the public can influence conversations around AI, pushing for a future that considers human welfare. Gary's insights connect deeply with the mission of promoting a humane approach to technology. We need to be aware and take action, whether that means discussing these topics with our representatives or engaging with resources like the Center for Humane Technology's work. By raising awareness, we can help shape policies that address the risks of AI and ensure a safer, more responsible integration of technology in our society.

    Recent Episodes from Your Undivided Attention

    'A Turning Point in History': Yuval Noah Harari on AI’s Cultural Takeover

    Historian Yuval Noah Harari says that we are at a critical turning point. One in which AI’s ability to generate cultural artifacts threatens humanity’s role as the shapers of history. History will still go on, but will it be the story of people or, as he calls them, ‘alien AI agents’?

    In this conversation with Aza Raskin, Harari discusses the historical struggles that emerge from new technology, humanity’s AI mistakes so far, and the immediate steps lawmakers can take right now to steer us towards a non-dystopian future.

    This episode was recorded live at the Commonwealth Club World Affairs of California.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    RECOMMENDED MEDIA

    NEXUS: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari 

    You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills: a New York Times op-ed from 2023, written by Yuval, Aza, and Tristan

     The 2023 open letter calling for a pause in AI development of at least 6 months, signed by Yuval and Aza 

    Further reading on the Stanford Marshmallow Experiment

    Further reading on AlphaGo’s “move 37”

    Further Reading on Social.AI

    RECOMMENDED YUA EPISODES

    This Moment in AI: How We Got Here and Where We’re Going

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    Synthetic Humanity: AI & What’s At Stake

    The AI Dilemma

    Two Million Years in Two Hours: A Conversation with Yuval Noah Harari

    ‘We Have to Get It Right’: Gary Marcus On Untamed AI

    It’s a confusing moment in AI. Depending on who you ask, we’re either on the fast track to AI that’s smarter than most humans, or the technology is about to hit a wall. Gary Marcus is in the latter camp. He’s a cognitive psychologist and computer scientist who built his own successful AI start-up. But he’s also been called AI’s loudest critic.

    On Your Undivided Attention this week, Gary sits down with CHT Executive Director Daniel Barcay to defend his skepticism of generative AI and to discuss what we need to do as a society to get the rollout of this technology right… which is the focus of his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us.

    The bottom line: No matter how quickly AI progresses, Gary argues that our society is woefully unprepared for the risks that will come from the AI we already have.

     

    RECOMMENDED MEDIA

    Link to Gary’s book: Taming Silicon Valley: How We Can Ensure That AI Works for Us

    Further reading on the deepfake of the CEO of India's National Stock Exchange

    Further reading on the deepfake of an explosion near the Pentagon.

    The study Gary cited on AI and false memories.

    Footage from Gary and Sam Altman’s Senate testimony.

     

    RECOMMENDED YUA EPISODES

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    No One is Immune to AI Harms with Dr. Joy Buolamwini

     

    Correction: Gary mistakenly listed the reliability of GPS systems as 98%. The federal government’s standard for GPS reliability is 95%.

    AI Is Moving Fast. We Need Laws that Will Too.

    AI is moving fast. And as companies race to roll out newer, more capable models, with little regard for safety, the downstream risks of those models become harder and harder to counter. On this week’s episode of Your Undivided Attention, CHT’s policy director Casey Mock comes on the show to discuss a new legal framework to incentivize better AI, one that holds AI companies liable for the harms of their products.

    RECOMMENDED MEDIA

    The CHT Framework for Incentivizing Responsible AI Development

    Further Reading on Air Canada’s Chatbot Fiasco 

    Further Reading on the Elon Musk Deep Fake Scams 

    The Full Text of SB1047, California’s AI Regulation Bill 

    Further reading on SB1047 

    RECOMMENDED YUA EPISODES

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    Can We Govern AI? with Marietje Schaake

    A First Step Toward AI Regulation with Tom Wheeler

    Correction: Casey incorrectly stated the year that the US banned child labor as 1937. It was banned in 1938.

    Esther Perel on Artificial Intimacy (rerun)

    [This episode originally aired on August 17, 2023] For all the talk about AI, we rarely hear about how it will change our relationships. As we swipe to find love and consult chatbot therapists, acclaimed psychotherapist and relationship expert Esther Perel warns of another harmful “AI” on the rise: Artificial Intimacy, which deprives us of real connection. Tristan and Esther discuss how depending on algorithms can fuel alienation, and then imagine how we might design technology to strengthen our social bonds.

    RECOMMENDED MEDIA 

    Mating in Captivity by Esther Perel

    Esther's debut work on the intricacies behind modern relationships, and the dichotomy of domesticity and sexual desire

    The State of Affairs by Esther Perel

    Esther takes a look at modern relationships through the lens of infidelity

    Where Should We Begin? with Esther Perel

    Listen in as real couples in search of help bare the raw and profound details of their stories

    How’s Work? with Esther Perel

    Esther’s podcast that focuses on the hard conversations we're afraid to have at work 

    Lars and the Real Girl (2007)

    A young man strikes up an unconventional relationship with a doll he finds on the internet

    Her (2013)

    In a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need

    RECOMMENDED YUA EPISODES

    Big Food, Big Tech and Big AI with Michael Moss

    The AI Dilemma

    The Three Rules of Humane Tech

    Digital Democracy is Within Reach with Audrey Tang

     

    CORRECTION: Esther refers to the 2007 film Lars and the Real Doll. The title of the film is Lars and the Real Girl.
     

    Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins

    Today, the tech industry is the second-biggest lobbying power in Washington, DC, but that wasn’t true as recently as ten years ago. How did we get to this moment? And where could we be going next? On this episode of Your Undivided Attention, Tristan and Daniel sit down with historian Margaret O’Mara and journalist Brody Mullins to discuss how Silicon Valley has changed the nature of American lobbying.

    RECOMMENDED MEDIA

    The Wolves of K Street: The Secret History of How Big Money Took Over Big Government - Brody’s book on the history of lobbying.

    The Code: Silicon Valley and the Remaking of America - Margaret’s book on the historical relationship between Silicon Valley and Capitol Hill

    More information on the Google antitrust ruling

    More Information on KOSPA

    More information on the SOPA/PIPA internet blackout

    Detailed breakdown of Internet lobbying from Open Secrets

     

    RECOMMENDED YUA EPISODES

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Can We Govern AI? with Marietje Schaake
    The Race to Cooperation with David Sloan Wilson

     

    CORRECTION: Brody Mullins refers to AT&T as having a “hundred million dollar” lobbying budget in 2006 and 2007. While we couldn’t verify the size of their budget for lobbying, their actual lobbying spend was much less than this: $27.4m in 2006 and $16.5m in 2007, according to OpenSecrets.

     

    The views expressed by guests appearing on Center for Humane Technology’s podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.

     

    This Moment in AI: How We Got Here and Where We’re Going

    It’s been a year and a half since Tristan and Aza laid out their vision and concerns for the future of artificial intelligence in The AI Dilemma. In this Spotlight episode, the guys discuss what’s happened since then, as funding, research, and public interest in AI have exploded, and where we could be headed next. Plus, some major updates on social media reform, including the passage of the Kids Online Safety and Privacy Act in the Senate.

     

    RECOMMENDED MEDIA

    The AI Dilemma: Tristan and Aza’s talk on the catastrophic risks posed by AI.

    Info Sheet on KOSPA: More information on KOSPA from FairPlay.

    Situational Awareness by Leopold Aschenbrenner: A widely cited blog from a former OpenAI employee, predicting the rapid arrival of AGI.

    AI for Good: More information on the AI for Good summit that was held earlier this year in Geneva. 

    Using AlphaFold in the Fight Against Plastic Pollution: More information on Google’s use of AlphaFold to create an enzyme to break down plastics. 

    Swiss Call For Trust and Transparency in AI: More information on the initiatives mentioned by Katharina Frey.

     

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Can We Govern AI? with Marietje Schaake 

    The Three Rules of Humane Tech

    The AI Dilemma

     

    Clarification: Swiss diplomat Nina Frey’s full name is Katharina Frey.

     

     

    Your Undivided Attention
    August 12, 2024

    Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

    AI has been a powerful accelerant for biological research, rapidly opening up new frontiers in medicine and public health. But that progress can also make it easier for bad actors to manufacture new biological threats. In this episode, Tristan and Daniel sit down with biologist Kevin Esvelt to discuss why AI has been such a boon for biologists and how we can safeguard society against the threats that AIxBio poses.

    RECOMMENDED MEDIA

    Sculpting Evolution: Information on Esvelt’s lab at MIT.

    SecureDNA: Esvelt’s free platform to provide safeguards for DNA synthesis.

    The Framework for Nucleic Acid Synthesis Screening: The Biden admin’s suggested guidelines for DNA synthesis regulation.

    Senate Hearing on Regulating AI Technology: C-SPAN footage of Dario Amodei’s testimony to Congress.

    The AlphaFold Protein Structure Database

    RECOMMENDED YUA EPISODES

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Big Food, Big Tech and Big AI with Michael Moss

    The AI Dilemma

    Clarification: President Biden’s executive order only applies to labs that receive funding from the federal government, not state governments.

    How to Think About AI Consciousness With Anil Seth

    Will AI ever start to think by itself? If it did, how would we know, and what would it mean?

    In this episode, Dr. Anil Seth and Aza discuss the science, ethics, and incentives of artificial consciousness. Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and the author of Being You: A New Science of Consciousness.

    RECOMMENDED MEDIA

    Frankenstein by Mary Shelley

    A free, plain text version of Shelley’s classic of gothic literature.

    OpenAI’s GPT-4o Demo

    A video from OpenAI demonstrating GPT-4o’s remarkable ability to mimic human sentience.

    You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills

    The NYT op-ed from last year by Tristan, Aza, and Yuval Noah Harari outlining the AI dilemma. 

    What It’s Like to Be a Bat

    Thomas Nagel’s essay on the nature of consciousness.

    Are You Living in a Computer Simulation?

    Philosopher Nick Bostrom’s essay on the simulation hypothesis.

    Anthropic’s Golden Gate Claude

    A blog post about Anthropic’s recent discovery of millions of distinct concepts within their LLM, a major development in the field of AI interpretability.

    RECOMMENDED YUA EPISODES

    Esther Perel on Artificial Intimacy

    Talking With Animals... Using AI

    Synthetic Humanity: AI & What’s At Stake

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION: The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    A First Step Toward AI Regulation with Tom Wheeler

    Spotlight on AI: What Would It Take For This to Go Well?

    Big Food, Big Tech and Big AI with Michael Moss

    Can We Govern AI? with Marietje Schaake
