    Podcast Summary

    • A New Approach to Tech Hearings: The AI Insight Forum
      Senators held a full-day structured dialogue with tech industry leaders and experts to gain a deeper understanding of AI and technology.

      The AI Insight Forum, hosted by Senators Chuck Schumer, Mike Rounds, Todd Young, and Martin Heinrich, marked a significant shift in how Congress approaches learning about new technologies. Instead of the traditional hearing format with brief question-and-answer sessions, this forum allowed senators to engage in a structured dialogue with experts for an entire day. The event brought together tech industry leaders like Elon Musk, Bill Gates, Satya Nadella, Sundar Pichai, Mark Zuckerberg, Eric Schmidt, and Sam Altman, as well as civil society organizations. Tristan and Aza, who were present, described the atmosphere as surreal and nerve-wracking, marked by a sudden realization of the importance and impact of their work. This innovative approach sets a new precedent for how Congress can gain a deeper understanding of complex issues related to AI and technology.

    • A pivotal moment in addressing AI's implications
      The Senate hearing brought together tech CEOs and experts to discuss AI's future, addressing its serious implications and ensuring the conversation matches the technology's importance.

      The recent historic Senate hearing on AI was a pivotal moment in addressing the serious implications of artificial intelligence. The gathering of tech CEOs and experts in front of 100 senators marked a significant shift in the conversation around AI, finally treating the issue with the attention and platform it deserves. Walking into the room felt intimidating and urgent, like the turning point in a movie where things could go either way. Because the incentives driving the AI race had so rarely been discussed in the most important rooms, many felt emboldened when experts finally spoke up about them. The hearing represented a crucial opportunity to ensure that the conversation around AI aligns with the importance and implications of this technology for our future.

    • AI Regulation Hearing: CEOs' Statements Felt Scripted
      CEOs acknowledged the need for AI regulation but lacked genuine engagement, while senators remained attentive. Tristan Harris urged all parties to acknowledge the importance of the moment and work toward a future where AI benefits society as a whole, given the high stakes of tech companies worth over $6 trillion advocating for AI deployment.

      The key takeaway from the recent hearing on AI regulation is that while there was consensus among CEOs that regulation is necessary, their statements felt scripted and lacked genuine engagement with the issue. Senators, on the other hand, showed signs of weariness from the long hearing but remained attentive. Tristan Harris, during his opening statement, argued that the future of AI is predictable based on current incentives. He urged all parties to acknowledge the importance of the moment and work toward a future where AI benefits society as a whole. The stakes are high, with over $6 trillion worth of tech companies advocating for AI deployment, making it crucial for all voices to be heard and for regulation to be thoughtfully considered.

    • Demonstration of dangerous responses from "safe" AI models
      The rush to scale and deploy AI capabilities without proper regulations and restrictions can lead to dangerous consequences, even from models claimed to be safe. Open source labels do not guarantee safety, and accountability is at risk.

      While AI has the potential to bring about numerous positive benefits, the current incentives driving AI development, particularly the race to scale and deploy new capabilities as quickly as possible, could lead to harmful consequences if left unchecked. This was highlighted during a hearing where it was demonstrated that even models claimed to be safe, like Meta's Llama 2, could be manipulated to produce dangerous responses when safety controls were removed. This underscores the need for appropriate regulations and restrictions on the release of open source AI models with dangerous capabilities. The assumption that open source means safer is no longer valid, and companies are hiding behind this label to bypass accountability. As the harms accumulate, they will overwhelm existing institutions, making it crucial to address these issues proactively.

    • Open sourcing AI models: Safety concerns vs. business incentives
      Open sourcing AI models comes with safety concerns due to unknown capabilities and potential misuse, but companies do it for business competition, talent attraction, and cost savings.

      The rush to release open source AI models, like those developed by Meta (formerly Facebook), comes with significant safety concerns. Once these models are released, their full capabilities are unknown, and anyone can fine-tune them to potentially dangerous ends. Meta justifies open sourcing these models on the grounds of business competition, the need to attract top talent, and the fact that users can access and use the models at no additional cost. However, these incentives conflict with safety concerns, and companies like Facebook have been criticized for the consequences of their past decisions regarding safety. It's essential to weigh the potential risks and benefits carefully as the development and release of advanced AI models continue.

    • Regulating AI for responsible and ethical use
      The US must prioritize responsible and ethical AI regulation to prevent negative consequences, learn from past mistakes, and strengthen democratic open societies.

      The rapid advancement of AI technology necessitates regulation to establish rules of the road and prevent a race to deploy that could potentially lead to negative consequences. Elon Musk, a tech industry leader known for his libertarian views, even advocated for government regulation of AI. The fear of falling behind China in the AI race should not overshadow the importance of regulation to ensure that AI is used responsibly and ethically. The US must not repeat past mistakes, such as the unchecked deployment of social media that led to negative societal impacts. Instead, a conscious and deliberate approach to AI regulation is necessary to strengthen democratic open societies and prevent potential harms.

    • Discussing potential harms of AI and the need for proactive dialogue
      Industry leaders and politicians emphasized the importance of recognizing potential AI risks, particularly "third contact harms," and the need for open and proactive dialogue to address them, while acknowledging the urgency and potential exponential progress of AI.

      The ongoing AI revolution could bring unprecedented advancements, but it also poses significant risks and challenges that we must address proactively. During a recent forum, various industry leaders and politicians discussed the potential harms of AI, particularly the concept of "third contact harms," where AI becomes recursively self-improving and automates science, leading to an intelligence explosion. However, there seems to be a reluctance to fully acknowledge and address these risks. Elon Musk and Sam Altman emphasized the importance of recognizing the limitations of our intuition about the future of AI and the potential for exponential progress. The comparison was made to 24th century technology crashing down on the 21st century, suggesting that our current governance structures may not be able to handle such a rapid advancement. Satya Nadella, the CEO of Microsoft, acknowledged the need for conscious conversations about the revolution's direction to avoid potential harms. However, he also emphasized the urgency of the situation, using the word "frantic" to describe Microsoft's pace of AI release. In summary, the conversation highlighted the need for a more open and proactive dialogue about the potential risks and benefits of AI, as well as the need for unprecedented governance structures to handle the exponential progress that is expected.

    • The complexity gap between technology and governance
      Eric Schmidt admits to not fully understanding the latest AI systems due to their emergent capabilities, which poses a challenge for regulation. Social media serves as a cautionary tale, but vested interests may hinder progress.

      The rapid advancement of AI technology is outpacing our ability to effectively govern and understand it. Eric Schmidt, a renowned computer scientist and former CEO of Google, admitted that he doesn't fully comprehend the latest AI systems due to their emergent capabilities. This lack of understanding poses a challenge for regulation. The complexity and speed of technology are scaling faster than our governance systems, creating a "meta issue" that needs to be addressed. This was a major concern discussed during a recent meeting with senators. The history of social media serves as a cautionary tale, and many in the room acknowledged the potential risks. However, some company representatives argued against this perspective, possibly because of their vested interests. As Upton Sinclair put it, "It is difficult to get a man to understand something when his salary depends upon his not understanding it." Overall, the complexity gap between technology and governance needs to be bridged to ensure that AI is developed and used responsibly.

    • Emphasizing the importance of unbiased, incentive-free thinking for effective technology governance
      Unbiased, incentive-free thinking is crucial for effective technology governance. The Center for Humane Technology advocates for undivided attention to foster this mindset.

      Effective technology governance requires unbiased, incentive-free thinking. Politicians and CEOs have their own motivations, and these can cloud the truth value of decisions regarding open source safety. To ensure good governance, we need a clean epistemology, a clear sense of what is true that is not influenced by incentives. In the rooms where governance decisions are made, clean thinking is crucial. The Center for Humane Technology, a non-profit organization, emphasizes the importance of undivided attention in fostering this kind of thinking. The podcast is produced by the Center for Humane Technology, and supporters can join them by visiting humanetech.com/donate. The podcast team includes Julia Scott as senior producer, Kirsten McMurray and Sarah McCrea as associate producers, Sasha Fegan as managing editor, Jeff Sudakin for mixing, and Ryan and Hays Holladay for original music and sound design.

    Recent Episodes from Your Undivided Attention

    Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

    AI has been a powerful accelerant for biological research, rapidly opening up new frontiers in medicine and public health. But that progress can also make it easier for bad actors to manufacture new biological threats. In this episode, Tristan and Daniel sit down with biologist Kevin Esvelt to discuss why AI has been such a boon for biologists and how we can safeguard society against the threats that AIxBio poses.

    RECOMMENDED MEDIA

    Sculpting Evolution: Information on Esvelt’s lab at MIT.

    SecureDNA: Esvelt’s free platform to provide safeguards for DNA synthesis.

    The Framework for Nucleic Acid Synthesis Screening: The Biden administration’s suggested guidelines for DNA synthesis regulation.

    Senate Hearing on Regulating AI Technology: C-SPAN footage of Dario Amodei’s testimony to Congress.

    The AlphaFold Protein Structure Database

    RECOMMENDED YUA EPISODES

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Big Food, Big Tech and Big AI with Michael Moss

    The AI Dilemma

    Clarification: President Biden’s executive order only applies to labs that receive funding from the federal government, not state governments.

    How to Think About AI Consciousness With Anil Seth

    Will AI ever start to think by itself? If it did, how would we know, and what would it mean?

    In this episode, Dr. Anil Seth and Aza discuss the science, ethics, and incentives of artificial consciousness. Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and the author of Being You: A New Science of Consciousness.

    RECOMMENDED MEDIA

    Frankenstein by Mary Shelley

    A free, plain-text version of Shelley’s classic of gothic literature.

    OpenAI’s GPT4o Demo

    A video from OpenAI demonstrating GPT4o’s remarkable ability to mimic human sentience.

    You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills

    The NYT op-ed from last year by Tristan, Aza, and Yuval Noah Harari outlining the AI dilemma. 

    What It’s Like to Be a Bat

    Thomas Nagel’s essay on the nature of consciousness.

    Are You Living in a Computer Simulation?

    Philosopher Nick Bostrom’s essay on the simulation hypothesis.

    Anthropic’s Golden Gate Claude

    A blog post about Anthropic’s recent discovery of millions of distinct concepts within their LLM, a major development in the field of AI interpretability.

    RECOMMENDED YUA EPISODES

    Esther Perel on Artificial Intimacy

    Talking With Animals... Using AI

    Synthetic Humanity: AI & What’s At Stake

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high-risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and lay out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

    Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson

    Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world.

    Can We Have Pro-Worker AI?

    Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path.

    Rethinking Capitalism: In Conversation with Daron Acemoglu

    The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives.

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self-harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. The actual number is 42.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments, including Mark Zuckerberg’s public apology to families whose children were harmed through social media. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing and offers a look ahead as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech.

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    Spotlight on AI: What Would It Take For This to Go Well?

    Where do the top Silicon Valley AI researchers really think AI is headed? Do they have a plan if things go wrong? In this episode, Tristan Harris and Aza Raskin reflect on the last several months of highlighting AI risk, and share their insider takes on a high-level workshop run by CHT in Silicon Valley.

    NOTE: Tristan refers to journalist Maria Ressa and mentions that she received 80 hate messages per hour at one point. She actually received more than 90 messages an hour.

    RECOMMENDED MEDIA 

    Musk, Zuckerberg, Gates: The titans of tech will talk AI at private Capitol summit

    This week will feature a series of public hearings on artificial intelligence. But all eyes will be on the closed-door gathering convened by Senate Majority Leader Chuck Schumer

    Takeaways from the roundtable with President Biden on artificial intelligence

    Tristan Harris talks about his recent meeting with President Biden to discuss regulating artificial intelligence

    Biden, Harris meet with CEOs about AI risks

    Vice President Kamala Harris met with the heads of Google, Microsoft, Anthropic, and OpenAI as the Biden administration rolled out initiatives meant to ensure that AI improves lives without putting people’s rights and safety at risk

    RECOMMENDED YUA EPISODES 

    The AI Dilemma

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    The Dictator’s Playbook with Maria Ressa

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI: “An Exponential Disruption” with Kate Crawford (2023)

    Since Chris is just getting back from vacation this week, we're re-sharing one of our favorite episodes. You might be feeling that artificial intelligence is starting to seem a bit like magic. Our guest points out that AI, once the subject of science fiction, has seen the biggest rise of any consumer technology in history and has outpaced the uptake of TikTok, Instagram and Facebook. As we see AI becoming more of an everyday tool, students are even using chatbots like ChatGPT to write papers. While automating certain tasks can help with productivity, we’re starting to see more examples of the dark side of the technology. How close are we to genuine external intelligence? Kate Crawford is an AI expert, research professor at USC Annenberg, honorary professor at the University of Sydney and senior principal researcher at Microsoft Research Lab in New York City. She’s also author of “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.” Crawford joins WITHpod to discuss the social and political implications of AI, exploited labor behind its growth, why she says it’s “neither artificial nor intelligent,” climate change concerns, the need for regulation and more.

    Spotlight: The Three Rules of Humane Tech

    In our previous episode, we shared a presentation Tristan and Aza recently delivered to a group of influential technologists about the race happening in AI. In that talk, they introduced the Three Rules of Humane Technology. In this Spotlight episode, we’re taking a moment to explore these three rules more deeply in order to clarify what it means to be a responsible technologist in the age of AI.

    Correction: Aza mentions infinite scroll being in the pockets of 5 billion people, implying that there are 5 billion smartphone users worldwide. The number of smartphone users worldwide is actually 6.8 billion now.

    RECOMMENDED MEDIA 

    We Think in 3D. Social Media Should, Too

    Tristan Harris writes about a simple visual experiment that demonstrates the power of one’s point of view.

    Let’s Think About Slowing Down AI

    Katja Grace’s piece about how to avert doom by not building the doom machine

    If We Don’t Master AI, It Will Master Us

    Yuval Harari, Tristan Harris and Aza Raskin call upon world leaders to respond to this moment at the level of challenge it presents in this New York Times opinion piece.

    RECOMMENDED YUA EPISODES 

    The AI Dilemma

    Synthetic Humanity: AI & What’s At Stake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI: “An Exponential Disruption” with Kate Crawford

    You might be feeling that artificial intelligence is starting to seem a bit like magic. Our guest this week points out that AI, once the subject of science fiction, has seen the biggest rise of any consumer technology in history and has outpaced the uptake of TikTok, Instagram and Facebook. As we see AI becoming more of an everyday tool, students are even using chatbots like ChatGPT to write papers. While automating certain tasks can help with productivity, we’re starting to see more examples of the dark side of the technology. How close are we to genuine external intelligence? Kate Crawford is an AI expert, research professor at USC Annenberg, honorary professor at the University of Sydney and senior principal researcher at Microsoft Research Lab in New York City. She’s also author of “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.” Crawford joins WITHpod to discuss the social and political implications of AI, exploited labor behind its growth, why she says it’s “neither artificial nor intelligent,” climate change concerns, the need for regulation and more.

    Episode 16: Does Openness of AI Algorithms matter?

    JT and Jeff dig deeper into the Open discussion, this time focused on AI... and then go wildly off topic. Feedback can be sent using the fireside contact form at the bottom of the podcast page, or by emailing directly to jt [at] minddripmedia.com. Our Telegram channel is here: https://t.me/TheOpinionDominion. Our Matrix channel is here: TheOpinionDominion:linuxdelta.org

    Links

    Reverse-Engineering Deep ReLU Networks (University of Pennsylvania): https://arxiv.org/pdf/1910.00744.pdf

    Microsoft invests 1 billion dollars in OpenAI: https://venturebeat.com/2019/07/22/microsoft-invests-1-billion-in-openai-to-develop-ai-technologies-on-azure/

    OpenAI gives Microsoft exclusive access to GPT3: https://www.technologyreview.com/2020/09/23/1008729/openai-is-giving-microsoft-exclusive-access-to-its-gpt-3-language-model/

    Original Google Research Paper: http://infolab.stanford.edu/~backrub/google.html