
    deeplearning

    Explore "deeplearning" with insightful episodes like "Ep. 278: How to Think", "What is Digital Life? with OpenAI Co-Founder & Chief Scientist Ilya Sutskever", "A.I. Could Solve Some of Humanity’s Hardest Problems. It Already Has.", "Algorithms Are Everywhere. How You Can Take Back Control" and "Chat about Teaching AI at Stanford with Abigail See" from podcasts like ""Deep Questions with Cal Newport", "No Priors: Artificial Intelligence | Machine Learning | Technology | Startups", "The Ezra Klein Show", "WSJ’s The Future of Everything" and "Last Week in AI"" and more!

    Episodes (15)

    Ep. 278: How to Think


    One of the most important skills for cultivating a satisfying and high-impact life is one we rarely discuss: the art of serious thinking. In this episode, Cal makes the case for serious thinking as a skill you can deliberately cultivate, then describes five specific practices you can put into place to launch a crash course in developing an effective life of the mind.

    Below are the questions covered in today's episode (with their timestamps). Get your questions answered by Cal! Here’s the link: https://bit.ly/3U3sTvo

    Video from today’s episode:  https://youtube.com/calnewportmedia

    Deep Dive: How to Think [4:07]

    - Does reading count as Deep Work? [38:50]
    - How do I make sense of the topics I think about? [41:48]
    - How should I manage boredom in a job without hurting my ability to focus? [45:12]
    - How can I provide quick answers to management if I’m a slow thinker? [50:40]
    - CALL: How should I structure my Deep Holiday? [56:44]

    CASE STUDY: A successful Digital Detox allowing one to think in peace. [59:55]

    CAL REACTS: Does collaboration make us less creative? [1:07:29]

    Thanks to our Sponsors:

    https://www.shopify.com/deep
    https://www.mintmobile.com/deep
    https://www.cozyearth.com (promo code: CAL)
    https://www.mybodytutor.com

    Thanks to Jesse Miller for production, Jay Kerstens for the intro music, Kieron Rees for slow productivity music, and Mark Miles for mastering.

    What is Digital Life? with OpenAI Co-Founder & Chief Scientist Ilya Sutskever

    Each iteration of ChatGPT has demonstrated remarkable step-function capabilities. But what’s next? Ilya Sutskever, Co-Founder & Chief Scientist at OpenAI, joins Sarah Guo and Elad Gil to discuss the origins of OpenAI as a capped-profit company, early emergent behaviors of GPT models, the token scarcity issue, the next frontiers of AI research, his argument for working on AI safety now, and the premise of Superalignment. Plus, how do we define digital life?

    Ilya Sutskever is Co-Founder and Chief Scientist of OpenAI. He leads research at OpenAI and is one of the architects behind the GPT models. He co-leads OpenAI's new "Superalignment" project, which aims to solve the alignment of superintelligences within four years. Prior to OpenAI, Ilya was a co-inventor of AlexNet and Sequence to Sequence Learning. He earned his Ph.D. in Computer Science from the University of Toronto.

    Show Links: Ilya Sutskever | LinkedIn

    Sign up for new podcasts every week. Email feedback to show@no-priors.com. Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @ilyasut

    Show Notes:
    (00:00) - Early Days of AI Research
    (06:51) - Origins of OpenAI & Capped-Profit Structure
    (13:46) - Emergent Behaviors of GPT Models
    (17:55) - Model Scale Over Time & Reliability
    (22:23) - Roles & Boundaries of Open Source in the AI Ecosystem
    (28:22) - Comparing AI Systems to Biological & Human Intelligence
    (30:52) - Definition of Digital Life
    (32:59) - Superalignment & Creating Pro-Human AI
    (39:01) - Accelerating & Decelerating Forces

    A.I. Could Solve Some of Humanity’s Hardest Problems. It Already Has.


    Since the release of ChatGPT, huge amounts of attention and funding have been directed toward chatbots. These A.I. systems are trained on copious amounts of human-generated data and designed to predict the next word in a given sentence. They are hilarious and eerie and at times dangerous.
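    The "predict the next word" objective described above can be sketched with a toy bigram model. This is a deliberately simplified stand-in for the large transformer-based systems the episode discusses, and the corpus and function names here are invented purely for illustration:

```python
from collections import Counter, defaultdict


def train_bigram_model(corpus):
    """Count, for each word, the words that immediately follow it."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            model[current_word][next_word] += 1
    return model


def predict_next_word(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]


# A made-up two-sentence corpus, purely for illustration.
corpus = [
    "deep learning solves hard problems",
    "deep learning predicts protein shapes",
]
model = train_bigram_model(corpus)
print(predict_next_word(model, "deep"))  # prints "learning"
```

    Modern chatbots replace these frequency counts with a neural network trained on billions of sentences, but the training signal, guessing the next token, is the same idea.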

    But what if, instead of building A.I. systems that mimic humans, we built those systems to solve some of the most vexing problems facing humanity?

    In 2020, Google DeepMind unveiled AlphaFold, an A.I. system that uses deep learning to solve one of the most important challenges in all of biology: the so-called protein-folding problem. The ability to predict the shape of proteins is essential for addressing numerous scientific challenges, from vaccine and drug development to curing genetic diseases. But in the 50-plus years since the protein-folding problem was first posed, scientists had made frustratingly little progress.

    Enter AlphaFold. By 2022, the system had predicted the shapes of 200 million proteins, nearly all the proteins known to humans. DeepMind is also building similar systems to accelerate efforts at nuclear fusion, and it has spun off Isomorphic Labs, a company developing A.I. tools for drug discovery.

    Demis Hassabis is the chief executive of Google DeepMind and the leading architect behind AlphaFold. So I asked him on the show to talk me through how AlphaFold actually works, the kinds of problems similar systems could solve and what an alternative pathway for A.I. development could look like.

    Mentioned:

    “The Curse of Recursion” by Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, Ross Anderson

    “DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution” by Billy Perrigo

    Book Recommendations:

    The Fabric of Reality by David Deutsch

    Permutation City by Greg Egan

    Consider Phlebas by Iain M. Banks

    Listen to this podcast in New York Times Audio, our new iOS app for news subscribers. Download now at nytimes.com/audioapp

    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

    This episode of “The Ezra Klein Show” was produced by Rogé Karma. Fact-checking by Michelle Harris with Rollin Hu. Our senior engineer is Jeff Geld. The show’s production team also includes Emefa Agawu, Annie Galvin and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

    Algorithms Are Everywhere. How You Can Take Back Control

    Computer algorithms and artificial intelligence affect more and more of our lives, from the content we’re shown online, to the music we enjoy, to how our household appliances work. But the results these algorithms produce may be changing our world in ways users don’t fully understand. WSJ’s Danny Lewis speaks with psychologist Gerd Gigerenzer, director of the Harding Center for Risk Literacy at the University of Potsdam. He has spent decades studying how people make choices and find patterns when faced with uncertainty, and has some ideas about how to navigate and improve the relationship between AI and our society.

    Further reading:
    - The Backstory of ChatGPT Creator OpenAI
    - New York City Delays Enforcement of AI Bias Law
    - How AI That Powers Chatbots and Search Queries Could Discover New Drugs

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    What a Man With 60,000 Books Can Teach You About Lifelong Learning and Building Your Home Library


    Gary Hoover loves books. Among the nine companies he founded was the bookstore chain Bookstop, which was acquired by Barnes & Noble. He has a personal collection of 60,000 books, so large that he had to purchase an abandoned medical center to house it. And he's the author of his own book, which is about books, called The Lifetime Learner's Guide to Reading and Learning.


    Today on the show, Gary shares how his fascination with books was born in his youth, why the collection he amassed over the decades is almost entirely non-fiction, why he prefers physical books over ebooks, and why getting your hands on old books can be particularly beneficial in enhancing your knowledge of the world. From there we turn to Gary's method for digesting a book, which allows him to glean its most valuable nuggets in just thirty minutes, without having to read it cover to cover. We also talk about whether Gary takes notes on the books he reads, and how to incorporate more serendipity into the way you do your own reading and build your home library.

    Check out the show notes at aom.is/hoover

    #94 – Ilya Sutskever: Deep Learning

    Ilya Sutskever is the co-founder of OpenAI and one of the most cited computer scientists in history, with over 165,000 citations, and to me, one of the most brilliant and insightful minds ever in the field of deep learning. There are very few people in this world who I would rather talk to and brainstorm with about deep learning, intelligence, and life than Ilya, on and off the mic.

    Support this podcast by signing up with these sponsors:
    – Cash App – use code “LexPodcast” and download:
    – Cash App (App Store): https://apple.co/2sPrUHe
    – Cash App (Google Play): https://bit.ly/2MlvP5w

    EPISODE LINKS:
    Ilya's Twitter: https://twitter.com/ilyasut
    Ilya's Website: https://www.cs.toronto.edu/~ilya/

    This conversation is part of the Artificial Intelligence podcast. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 - Introduction
    02:23 - AlexNet paper and the ImageNet moment
    08:33 - Cost functions
    13:39 - Recurrent neural networks
    16:19 - Key ideas that led to the success of deep learning
    19:57 - What's harder to solve: language or vision?
    29:35 - We're massively underestimating deep learning
    36:04 - Deep double descent
    41:20 - Backpropagation
    42:42 - Can neural networks be made to reason?
    50:35 - Long-term memory
    56:37 - Language models
    1:00:35 - GPT-2
    1:07:14 - Active learning
    1:08:52 - Staged release of AI systems
    1:13:41 - How to build AGI?
    1:25:00 - Question to AGI
    1:32:07 - Meaning of life

    #93 – Daphne Koller: Biomedicine and Machine Learning

    Daphne Koller is a professor of computer science at Stanford University, a co-founder of Coursera with Andrew Ng, and the founder and CEO of insitro, a company at the intersection of machine learning and biomedicine.

    Support this podcast by signing up with these sponsors:
    – Cash App – use code “LexPodcast” and download:
    – Cash App (App Store): https://apple.co/2sPrUHe
    – Cash App (Google Play): https://bit.ly/2MlvP5w

    EPISODE LINKS:
    Daphne's Twitter: https://twitter.com/daphnekoller
    Daphne's Website: https://ai.stanford.edu/users/koller/index.html
    Insitro: http://insitro.com

    This conversation is part of the Artificial Intelligence podcast. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 - Introduction
    02:22 - Will we one day cure all disease?
    06:31 - Longevity
    10:16 - Role of machine learning in treating diseases
    13:05 - A personal journey to medicine
    16:25 - Insitro and disease-in-a-dish models
    33:25 - What diseases can be helped with disease-in-a-dish approaches?
    36:43 - Coursera and education
    49:04 - Advice to people interested in AI
    50:52 - Beautiful idea in deep learning
    55:10 - Uncertainty in AI
    58:29 - AGI and AI safety
    1:06:52 - Are most people good?
    1:09:04 - Meaning of life

    #73 – Andrew Ng: Deep Learning, Education, and Real-World AI

    Andrew Ng is one of the most impactful educators, researchers, innovators, and leaders in artificial intelligence and the technology space in general. He co-founded Coursera and Google Brain, launched deeplearning.ai, Landing.ai, and AI Fund, and was the Chief Scientist at Baidu. As a Stanford professor, and with Coursera and deeplearning.ai, he has helped educate and inspire millions of students, including me.

    EPISODE LINKS:
    Andrew's Twitter: https://twitter.com/AndrewYNg
    Andrew's Facebook: https://www.facebook.com/andrew.ng.96
    Andrew's LinkedIn: https://www.linkedin.com/in/andrewyng/
    deeplearning.ai: https://www.deeplearning.ai
    landing.ai: https://landing.ai
    AI Fund: https://aifund.ai/
    AI for Everyone: https://www.coursera.org/learn/ai-for-everyone
    The Batch newsletter: https://www.deeplearning.ai/thebatch/

    This conversation is part of the Artificial Intelligence podcast. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". This episode is also supported by the Techmeme Ride Home podcast. Get it on Apple Podcasts, on its website, or find it by searching "Ride Home" in your podcast app.

    Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 - Introduction
    02:23 - First few steps in AI
    05:05 - Early days of online education
    16:07 - Teaching on a whiteboard
    17:46 - Pieter Abbeel and early research at Stanford
    23:17 - Early days of deep learning
    32:55 - Quick preview: deeplearning.ai, landing.ai, and AI Fund
    33:23 - deeplearning.ai: how to get started in deep learning
    45:55 - Unsupervised learning
    49:40 - deeplearning.ai (continued)
    56:12 - Career in deep learning
    58:56 - Should you get a PhD?
    1:03:28 - AI Fund - building startups
    1:11:14 - Landing.ai - growing AI efforts in established companies
    1:20:44 - Artificial general intelligence

    Kyle Vogt: Cruise Automation

    Kyle Vogt is the President and CTO of Cruise Automation, leading the effort to solve one of the biggest robotics challenges of our time: vehicle autonomy. He is the co-founder of two successful companies (Cruise and Twitch) that were each acquired for $1 billion. A video version is available on YouTube. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube, where you can watch the video versions of these conversations.

    Tomaso Poggio: Brains, Minds, and Machines

    Tomaso Poggio is a professor at MIT and the director of the Center for Brains, Minds, and Machines. Cited over 100,000 times, his work has had a profound impact on our understanding of the nature of intelligence, in both biological and artificial neural networks. He has been an advisor to many highly impactful researchers and entrepreneurs in AI, including Demis Hassabis of DeepMind, Amnon Shashua of Mobileye, and Christof Koch of the Allen Institute for Brain Science. A video version is available on YouTube. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube, where you can watch the video versions of these conversations.

    Top Ten Lessons After Almost a Year - [Invest Like the Best, EP.44]

    A future guest just told me, every band has a song about being in a band, so today I give you my version. I won’t do this often, and only do it this week in case listenership drops due to the holiday—I didn’t want any guest to have a smaller than normal audience. I have now been doing this for almost one year, and have learned a tremendous amount. Since the whole idea behind the show is to learn in public, I am going to share a few of the lessons I’ve learned with you today. I’ll shape it as a top ten list, which ends with a fun story about my recent dinner with Warren Buffett. You’ll notice that many of these are just good business and life lessons applied to something specific: a podcast. I hope you can pull the essence of one or more of these and change how you do things, especially if you create any sort of content as part of your job.

    #13 Pedro Domingos: The Rise of The Machines


    In this interview with AI expert Pedro Domingos, you’ll learn about self-driving cars, where knowledge comes from, and the five schools of machine learning.

     

    Go Premium: Members get early access, ad-free episodes, hand-edited transcripts, searchable transcripts, member-only episodes, and more. Sign up at: https://fs.blog/membership/

     

    Every Sunday our newsletter shares timeless insights and ideas that you can use at work and home. Add it to your inbox: https://fs.blog/newsletter/

     

    Follow Shane on Twitter at: https://twitter.com/ShaneAParrish