
    Breaking Bard + Who Owns Your Face? + Gamer News!

    September 22, 2023

    Podcast Summary

    • The Limits of Technology: Casey's Rat Infestation Experience
      Technology can't always solve real-life problems, adaptability and persistence are crucial, and rats are getting smarter.

      Technology, no matter how advanced, can't always solve every problem. Casey, a tech columnist, shared his experience with a rat infestation in his backyard and his attempt to use a rat bucket, a piece of technology discussed on their podcast before, to solve the issue. However, the rats proved to be smarter than anticipated, and the rat bucket failed to catch any rats. This experience highlights the importance of being adaptable and persistent when dealing with real-life problems, even with the help of technology. Additionally, the conversation touched upon the increasing intelligence of rats and the potential implications for the future. On a different note, Google's AI chatbot, Bard, recently gained the ability to plug into various Google services, making it more useful and integrated for users.

    • New features for Google's AI model Bard
      Bard now allows personal data searches and has a double-check feature to ensure accuracy, enhancing its integration with Gmail and YouTube.

      Google's new AI model, Bard, has introduced several new features, including the ability to search personal data and a double check feature. These advancements mark a significant development in AI technology, allowing it to integrate into popular platforms like Gmail and YouTube. The double check feature addresses a common issue with AI, its tendency to make up information, by highlighting potentially inaccurate responses and allowing users to verify the information through a Google search. While the feature is useful, some may find it inconvenient that it doesn't automatically double check responses. Nonetheless, the potential for more accurate information and improved integration with everyday platforms makes Bard an exciting development in the field of AI.

    • Google Bard's potential and limitations as a research assistant
      Google Bard can answer any question but requires manual verification for accuracy. Its new features for connecting to personal data have limitations and can lead to inaccurate results.

      While Google Bard shows promise as a research assistant, it still has some way to go in providing accurate and reliable information. The technology can provide answers to any question asked, but the verification of the accuracy of those answers requires manual effort. The new features that allow Bard to connect to personal data also have limitations and can lead to inaccurate results. For instance, Bard was unable to correctly attribute quotes to the correct emails during a test. Google acknowledged that this is an early product and that the extensions feature is currently limited to retrieving and summarizing information from emails, but not analyzing their contents. Therefore, while Bard can be helpful in certain tasks, it's important to approach its answers with a critical eye and double-check the information provided.

    • Google Bard's Email Analysis and Assistance Need Improvement
      Google Bard struggles to interpret user requests accurately and provide useful responses when analyzing emails and performing tasks based on that information. It needs further refinement to deliver on its promise.

      While Google's Bard AI has potential, it still has room for improvement when it comes to analyzing emails and performing tasks based on that information. The user, in this discussion, found that Bard struggled with interpreting the user's requests accurately, often providing incorrect or irrelevant information. For instance, when asked to diagnose the user's psychological issues based on their email history, Bard responded with a list of disorders from the DSM and mentioned some irrelevant email topics. Similarly, when asked to help with travel planning by finding train tickets, Bard made mistakes and couldn't access necessary information. However, Bard did show promise when asked to summarize specific emails or respond to them in the user's voice. The user also noted that Bard seemed to lack important context, such as the user's location or current situation, which hindered its ability to provide accurate and useful responses. Overall, while Bard's email analysis and assistance features have potential, they need further refinement to deliver on their promise. Google may be releasing these features early to gather user feedback and improve them over time.

    • Google releases new AI model Bard for user feedback
      Google takes a risky yet determined step in releasing Bard, its new AI model, for user feedback, aiming to improve it and surpass competitors despite its known limitations.

      Google is releasing its new AI model, Bard, despite its known imperfections, to gather feedback from users and potentially surpass competitors. This risky move showcases Google's determination to improve the technology and provide better services. However, the current version of Bard may not excel in complex tasks like psychoanalysis due to its lack of personalized learning from user data. In another context, facial recognition technology, such as Clearview AI, raises significant privacy concerns as it publicly searches and collects vast amounts of personal data, posing potential harm to individuals. These developments highlight the need for societal discussions and guardrails around the use and potential risks of advanced AI technologies.

    • Investigating Clearview AI's secretive facial recognition database
      A journalist uncovers Clearview AI's controversial facial recognition database, raising concerns about privacy and ethics.

      Clearview AI is a secretive company that scraped billions of images from the internet, without consent, to create a facial recognition database it claims works with 99% accuracy. The company came to the attention of investigative journalist Kashmir Hill after she received a tip about its existence, along with a confidential legal memo from a former solicitor general arguing for its legality. Hill was shocked to learn about the company, which was not well known for this technology at the time, and was concerned about its potential for misuse, such as identifying strangers and invading their privacy. Despite limited information available online, Hill investigated further by visiting the company's supposed office address, only to find it was fake. The lack of transparency and the privacy concerns surrounding Clearview AI highlight the importance of ethical considerations in the development and use of advanced technologies.

    • The Controversial Use of Facial Recognition by Clearview AI
      Clearview AI's facial recognition technology raises privacy concerns: the company does not disclose how it obtains data, and individuals have limited control over their information. Illinois is one of the few states with regulations protecting biometric data, but federal legislation is lacking.

      Clearview AI, a controversial facial recognition company, has been marketing itself as a tool for law enforcement to identify criminals. However, the company's technology raises concerns about privacy and consent, as the speaker discovered when they were unable to get information from the company and found their own face had been blocked from the system. The use of facial recognition technology by companies, such as retailers, for identification purposes is increasing, but the legal protections for individuals' biometric information vary greatly depending on the state. For instance, Illinois has a Biometric Information Privacy Act that requires consent and imposes fines for non-compliance. At the federal level, there is currently no legislation regulating the use of facial recognition technology. The speaker's experience highlights the need for clear regulations and transparency around the use of facial recognition technology to protect individuals' privacy and prevent misuse.

    • Facial Recognition: Benefits and Concerns
      The use of facial recognition technology raises ethical concerns and calls for regulation to protect privacy and civil liberties while balancing the potential benefits.

      While facial recognition technology offers potential benefits such as crime solving, it also raises significant concerns regarding privacy and civil liberties. Despite bipartisan agreement on the potential harm of this technology, there seems to be a lack of action from Congress to regulate it. The debate over who should control AI technology also adds complexity to the issue. Clearview AI's success was partly due to open-source software packages, but the ethical decisions made by tech giants like Facebook and Google not to release facial recognition technology are commendable. The book suggests that a balance between ethical use of technology by responsible companies and appropriate regulation is necessary to prevent a dystopian outcome. The ongoing debate around legislation and tech control highlights the need for urgent action to protect individuals' privacy and autonomy in the digital age.

    • Open-source facial recognition technology's ethical concerns
      Open-source facial recognition technology can lead to privacy invasions and misuse by radical actors, highlighting the need for caution and regulation.

      The open-source nature of advanced technologies like facial recognition can lead to their misuse by radical actors or individuals with ill intentions. This was highlighted in the discussion with the example of generative AI, where Google held back on releasing its tools, only for OpenAI to take the lead. Similarly, facial recognition technology, as demonstrated by companies like Clearview AI and PimEyes, can have serious privacy implications. The case of a man using facial recognition for a sexual fetish is particularly alarming, as it shows how easily such technology can be used to invade people's privacy and uncover sensitive information. The listener's story of using facial recognition on dating app profiles to find stolen or inappropriate images further emphasizes the need for caution and regulation in the use of these technologies. Overall, it's crucial to consider the ethical implications and potential consequences of open-source technologies, especially those dealing with personal data and privacy.

    • Privacy concerns in online interactions
      As technology advances, privacy in online interactions becomes a growing concern, with the potential for loss of privacy and harmful consequences. Balancing safety and privacy is crucial.

      As technology advances, particularly in the areas of facial recognition and social media, there is a growing concern about privacy and security, especially when it comes to online interactions. People may feel the need to "google" or reverse image search potential dates or acquaintances to ensure safety and gather intel. However, this normalization of such practices could lead to a loss of privacy and potentially harmful consequences, particularly for individuals with past online activities that could be stigmatized. The use of masks or other means of disguise may become more common as a response to these concerns. It's important to consider the potential repercussions of these trends and strive for a balance between safety and privacy.

    • Unity's New Pricing Model Sparks Controversy in Gaming Community
      Unity's sudden change to charge developers based on installations raises concerns about unexpected costs and potential financial impact, emphasizing the need for clear communication in business practices.

      Unity, a popular game engine used by many developers to build video games, especially in mobile gaming, has recently announced a new pricing model that will charge developers based on the number of installations of their games. This change has sparked controversy and protests from the gaming community, with some developers expressing concern over the unexpected cost increase and potential financial impact on their businesses. The long development cycles of video games and the sudden announcement of these changes have left many developers in a difficult position. Unity's decision has highlighted the importance of transparency and clear communication in business practices, especially in the gaming industry, where developers invest significant time and resources into creating games. The gaming industry, as a whole, continues to shape the way we interact and connect with each other, and it's essential to acknowledge its impact and importance in media and culture.

    • Unity's Unexpected Pricing Changes Spark Controversy
      Unity's sudden pricing changes left developers feeling blindsided, causing anger and backlash, potentially costing larger companies significant money and damaging Unity's reputation.

      Unity, a popular game engine, faced backlash from developers and gamers after announcing unexpected pricing changes. The developers felt blindsided by the changes, which they believed were unfair and uncommunicated. The situation drew comparisons to Reddit's pricing controversy earlier this year, where similar issues arose due to a lack of transparency and communication. Unity's CEO, John Riccitiello, has a history of making inflammatory comments about developers, which added fuel to the controversy. The changes could potentially cost larger companies like Niantic, which makes Pokemon Go, significant amounts of money. The situation led to anger and even death threats, causing Unity to cancel a town hall and close two offices. While some speculate that Unity's move was a response to Apple's introduction of app tracking transparency and the subsequent impact on Unity's ad business, others view it as a self-inflicted wound. Regardless, the controversy has significantly damaged Unity's reputation among its once-loyal user base.

    • Game engine transitions and industry updates
      Unity's recent changes faced backlash, Microsoft leaked plans for a new console and potential acquisitions, and tech companies are showing growing interest in the gaming sector, pointing to increased competition and market changes.

      Game engine transitions can be a significant challenge for game developers due to the complex interplay of various game components and the potential for unforeseen issues during the porting process. Unity, a popular game engine, has faced backlash for recent changes and has responded by softening some of these modifications to ease the transition for developers. Meanwhile, a Microsoft gaming console leak revealed plans for a new console without a disk drive, a new controller, and potential acquisitions, such as Nintendo, which could have significant implications for the gaming industry if such deals were to go through. The leak also underscores the growing interest of major tech companies in the gaming sector, potentially leading to increased competition and potential changes in the market.

    • The Evolving Nature of the Gaming Industry
      Microsoft and Sony acquire gaming studios, Mortal Kombat 11's complex mechanics frustrate older gamers, Casey enjoys team-based shooters and mobile titles, and deepfake technology is improving but not yet perfect, reflecting the diversity of gaming experiences and preferences.

      The gaming industry is seeing significant growth and companies are trying to capitalize on it by expanding their offerings. For example, Microsoft and Sony are making moves to acquire major gaming studios. On the other hand, some games, like the latest Mortal Kombat title, are becoming increasingly complex, which may not appeal to older gamers. Casey shared his recent gaming experiences, including his purchase of Mortal Kombat 11 and the frustration he felt with its intricate mechanics. He jokingly suggested that he may have grown old since the last time he played the game. In contrast, he enjoys playing team-based shooter games like Valorant and trying out new mobile games. Kevin shared an amusing story about deepfaking Casey's voice and having him say things in German, which a listener translated. He admitted that the technology is improving but still has a ways to go when it comes to accurately replicating the content of the voice clips. Overall, the discussion highlights the evolving nature of the gaming industry and the diverse experiences and preferences of gamers. Whether it's the latest complex fighting game or a simple mobile puzzle game, there's something for everyone.

    • Understanding the limitations of AI-generated speech
      AI can generate speech in various languages but lacks the ability to ensure coherence and naturalness, leading to misunderstandings for listeners.

      While AI can help generate speech in various languages, it currently lacks the ability to ensure coherence and make the speech sound natural. A listener who is fluent in Hindi had trouble understanding a previous episode due to the nonsensical English used, which was an accurate translation of the AI-generated speech. The episode was fact-checked by Will Peischel and engineered by Alyssa Moxley, with original music by Marion Lozano, Pat McCusker, Rowan Niemisto and Dan Powell. Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti and Jeffrey Miranda. If you have any feedback or questions, feel free to email us at hardfork@nytimes.com, but please note that Bard, our AI, may not be able to understand it.

    Recent Episodes from Hard Fork

    Social Media Reacts to an Attempted Assassination + Tech Elites for Trump + TikTok's Jawmaxxing Trend


    This week, an assassination attempt for the social media age: what the platforms got right and wrong in the chaotic aftermath. Then we talk with the Times reporter Teddy Schleifer from this week’s Republican National Convention in Milwaukee about the wave of Silicon Valley billionaires stepping up to back Trump. And finally, we talk to The Times’s Styles reporter Callie Holtermann about facial fitness gum, a “jawmaxxing” product targeted at teen boys online.

    Guests:

    Additional Reading:

      We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    July 19, 2024

    Alison Roman Answers Your Hard Questions


    Throw down a picnic blanket, and grab some snacks and drinks: It’s time for some Hard Questions with the food writer, YouTuber and podcaster Alison Roman. We tackle quandaries like, Should you sign away your children’s image rights in order to get them into your preferred day care? Is hacking people for fun ever OK? And does it matter if we’re rude to our digital assistants?

    Guest:

    Additional Reading

    We want to hear from you. Email us at hardfork@nytimes.com

    Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    July 12, 2024

    The Interview: The Netflix Chief’s Plan to Get You to Binge Even More


    We’re off for the Fourth of July, but what’s a better tribute to America than a conversation about the technology that enables us to endlessly stream TV from the couch? This week, we’re bringing you an episode we enjoyed from the recently debuted New York Times podcast The Interview. Lulu Garcia-Navarro interviews Ted Sarandos, co-chief executive of Netflix, about his early days working in a video store, shows to fold your laundry to and the future of the entertainment industry.

    Guest:

    • Ted Sarandos, Co-CEO of Netflix

    Additional Reading

    We want to hear from you. Email us at hardfork@nytimes.com.

    Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    July 05, 2024

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT


    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guest:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming


    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter

    Additional Reading:

    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT


    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholder vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.


    Guests:


    Additional Reading:

     

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out


    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

     

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division

     

    Additional Reading:

     

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check


    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs


    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.

    Guests:

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT


    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google I/O, including the launch of A.I. Overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 17, 2024

    Related Episodes

    How AI Facial Recognition Works


    With the development of increasingly smart artificial intelligence and lots more cameras spread around than ever before, we have reached a critical point in the US and other countries where governments can easily track everyone, everywhere, all the time.


    MSG's facial recognition software, BuzzFeed's deal with Meta, The Creator Economy and more | E1667


    Please take our 90-second audience survey for a chance to win a $50 amazon gift card at www.thisweekinstartups.com/survey


    Molly and Jason chop it up about New York Attorney General Letitia James probing into MSG's alleged use of facial recognition to identify and target specific patrons (1:46). This segues into a great discussion about Meta's $10M deal with BuzzFeed and The Creator Economy as a whole (21:41). We cap off the week with two great interviews, one with LA26 graduate Nevsah Karamehmet of Breath Hub (36:17) and another with Brett Bauman, the creator of PlaylistAI (48:09).


    (0:00) M+J kick off the show

    (1:46) WLITF WTF: Facial recognition software at MSG

    (8:33) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist

    (10:03) More on the NYAG case regarding MSG's facial recognition software

    (20:11) Linode - Apply to Linode's Rise program for up to six figures in discounts at https://linode.com/twist

    (21:41) BuzzFeed's lucrative deal with Meta + Revenue share with creators

    (35:07) Crowdbotics - Get a free scoping session for your next big app idea at crowdbotics.com/twist

    (36:17) LA26 Founder: Nevsah Karamehmet of Breath Hub

    (48:09) Ok Boomer with Brett Bauman


    FOLLOW Nevsah: https://twitter.com/nevsahf

    FOLLOW Brett: https://twitter.com/brettunhandled

    FOLLOW Jason: https://linktr.ee/calacanis

    FOLLOW Molly: https://twitter.com/mollywood

    Navigating the Chatbot Jungle: Cyber Mindful Conversations

    Unveil the fascinating world of AI chatbots with Sandra Estok on this enlightening episode of Cyber Mindful Podcast. From the promises of time-saving interactions to the potential pitfalls of data exploitation, Sandra navigates the intricate landscape of artificial intelligence. Real-life stories underscore the importance of responsible usage and safeguarding personal data.

    Join the conversation as Sandra demystifies AI chatbots, likening them to interactive companions within our devices. Explore the benefits and frustrations of these digital assistants, gaining insights into their vast potential and occasional limitations. The narrative takes a cautionary turn with stories of data exploitation, emphasizing the need for mindful and intentional engagement in the digital realm.

    In this episode, Sandra equips listeners with practical tips for secure chatbot interactions. From verifying platform legitimacy to avoiding oversharing, these rock-solid guidelines ensure a safer and more intentional digital experience.

    Sandra Estok is the founder of Way2Protect™, author of the international bestselling “Happily Ever Cyber!™” book, and the Cyber Literacy series for children. With over 25 years of experience in Cybersecurity, IT, and Data Privacy, Sandra is a Keynote Speaker and Corporate Trainer. Her mission is to empower women to take charge of their cyber safety and have Peace of Mind Online.

    Connect with Sandra:
    Website SandraEstok.com
    Social media handle: @Way2Protect

    Watch Sandra Estok's TEDx talk “What an identity theft victim can teach about cybercrime”: https://www.ted.com/talks/sandra_estok_what_an_identity_theft_victim_can_teach_us_about_cybercrime

    What was the Obama Administration's Artificial Intelligence Legacy?


    Former President Barack Obama was one of the most science and technology oriented presidents, and took a personal interest in the work around Artificial Intelligence, according to Terah Lyons, former policy adviser to the U.S. Chief Technology Officer in the White House Office of Science and Technology Policy (OSTP). Lyons, now the Executive Director at the Partnership on AI, sat down with EY Global Tax Innovation Leader Jeff Saviano and discussed AI / data privacy policy, and the government's role in developing leading technology.