Hi, I'm Bob Safian, and I want to tell you about a new podcast called Search Engine, hosted by PJ Vogt. Each week, he and his team answer questions about business, tech, and society that are on everyone's minds but are too often ignored. The kinds of questions that, if you asked them at a dinner party, would completely derail the conversation. Like, who's behind those scammy text messages we've all been getting? Why are drug dealers putting fentanyl in everything?
And why don't we have flying cars yet? Listen to and follow Search Engine with PJ Vogt wherever you get your podcasts.
Hi listeners, I want to tell you about another podcast. It's called Planet Money. The Planet Money podcast helps you understand the economy. They find the people at the heart of every story to show you how money affects our world. Planet Money explains every economic concept imaginable, from inflation and disinflation to how manatees got addicted to fossil fuels. Listen to Planet Money from NPR wherever you're listening to us.
ChatGPT, write me a book report for Harry Potter and the Sorcerer's Stone in the style of a seventh grader. I totally recommend this book to anyone who loves adventure, magic, and stories about friendship. It's got everything: drama, laughs, and some really important lessons, like how love and bravery can conquer the darkest evils. This is so weird. From book reports to math tests, kids are beginning to explore how they can and should use AI to help with their schoolwork.
My name is Amelia. I'm in seventh grade. My name is Martino, and I'm in seventh grade. And my name is Audrey, and you guessed it, I'm in seventh grade. While business leaders and knowledge workers are experimenting with AI to cut corners, students like Amelia, Martino, and Audrey haven't been afforded the same luxury. They told us, if you use AI, it will be cheating and you will get a zero.
To get the other side of the story, let's hear from a teacher with over 35 years of experience who now teaches in the Massachusetts Waldorf school system. Meet Alessandra Profumo. In middle school, they call me Miss P. Apologies, Miss P. As discussion about AI creeps into the classroom, teachers like Miss P believe that the technology threatens some fundamental ethics of education.
There's sort of an unwritten contract between us. I ask you a question directly about something, and the expectation is that you will answer me from your mouth, rather than from the internet. We're all good kids, don't call the police on us. Compared to college students or high schoolers, Miss P argues, using AI in middle school can be truly detrimental to students' education.
These children are challenged. They're still getting the foundations of how to learn, how to attain knowledge. It will give you the answer, but it's not going to give you the knowledge. Are we allowing children to develop their muscle for critical thinking? They'll be robbed of that ability to really test themselves. In some cases, kids even agree.
I feel like people would sort of forget to do things by themselves. Like if the kids were only taught to use a computer, then they might lose the valuable skill of learning how to write by hand. While some students face detention for not disclosing their use of AI, what's happening in the teachers' lounge during recess? I think as a tool for teachers, it's beautiful. Grading, even coming up with lesson plans. I mean, turn to AI and you'll have everything you need. Bang, gavel.
Teachers! Union! We know your secret! They don't respect us as children. If it's okay for a teacher, is it okay for a student? How old should the student be? Are there some people that maybe shouldn't be banned from using AI? If so, why? All fair questions. Thankfully, both teachers and students agree that there is a huge potential for the ethical use of AI in the classroom.
I would say that research is probably the best thing. It helps if you don't know how to synthesize certain material. I think that AI can be a tool to help with, like, mindless tasks. Maybe even, like, in an area that doesn't have access to good schools, that could be really helpful, to sort of have this ever-present kind of personalized AI teacher.
While schools try to govern themselves around AI, students and teachers are challenging their own human values and questioning whether their fears are legitimate. As AI is progressing, I think that the morals are a little bit fuzzier. We tend to go, oh my gosh, you know, fire and brimstone and the sky's falling because we tend to do the what-ifs and the what-ifs are always the worst-case scenario. Instead of what-ifs, this could be something really great.
Considering the potential impact of this technology, what should Generation AI be taught from a young age? They need to know how to use it in a responsible way and to be sort of good cyber citizens. To hide from AI is terrible. The question is, will it be promoted in an ethical way? That's all going to be in the hands of the people that advance it as a tool.
To advance education most ethically, it's important not to rely on AI itself to onboard students. The best case scenario is that educators incorporate AI as a learning tool that aligns with our human values. We know that AI's impact on education is already underway, so to ignore the technology is dangerous. Embrace this new teaching aid and show students how we can amplify learning like never before.
Business leaders, take inspiration from Miss P, questioning her own pessimism. Ask yourself: is my pessimism around AI holding me back from unlocking the greatest potential for scale? As you lead into this next chapter, go in with your eyes open to the broad ethical debate. The more conscious you are of AI's human impact, the more safely you'll deploy this new technology.
That's why I believe that to ethically build and use AI, a human must always be at the center. Well, we can talk about AI. We can talk about it. When most people think about AI, they think about killer robots. Things are changing every couple of months. AI is at an inflection point. It is a switch. It's like we have power tools now. And we'll just keep figuring it out to unlock a whole different approach. Generative AI is the most excited I've been about anything in software since the internet.
It's definitely a super exciting time to be in this space. All of these things are possible. Let's try to figure out whether this is actually useful or not. Tech just shouldn't replace human beings. Is it the worst thing in the world? Or is it the greatest thing in the world? This is Masters of Scale.
I'm Reid Hoffman, co-founder of LinkedIn, partner at Greylock, and your host. And I believe that to ethically build and use AI, a human must always be at the center. What do we mean when we say a human at the center? As AI scales, it's crucial that we ensure it will have a positive impact on humanity. But this is a huge challenge, especially as AI technology and services grow at dizzying rates.
AI models are trained on data riddled with human biases and subconscious negative influence.
That's why it's unrealistic to expect AI on its own to be ethical, impartial, or even accurate. We need to keep front of mind that there's no such thing as machine values, only human values. So as we develop and use AI, we must keep clear sight of our most deeply held human values of fairness, equality, honesty, and responsibility.
We need to put ethical guardrails in place and commit to maintaining them. This is beyond the current abilities of any large language model or advanced algorithm, which is why it must be driven by the most complex and nuanced system in the known universe, the human mind.
As we conclude our series, AI and You, we'll hear from an array of experts who have played a major role in scaling AI over the past decade. Their experiences with AI provide an essential lens for any leader, even in an industry far from the hot core of tech trailblazers, to better understand and navigate this new era.
While the conversation surrounding AI ethics often centers on developers, the role of the individual user is too often neglected. So today, we're addressing not only the community that builds AI, but also anyone who intends to use AI.
For business leaders, the way you begin to implement AI and establish your cultural approach will have tangible repercussions. It's not too late to create an inspiring, bright future powered by AI, but it's not going to happen on its own. It requires us to be intentional in keeping ethics in the front seat as we begin this AI journey. Now let's explore five ways to ethically build and use AI.
Number one, give everyone a say. It's impossible to expect a bright future for all if AI is being built by a remote few. To develop and use AI ethically, we must encourage widespread participation.
Fei-Fei Li is a professor in the Computer Science Department at Stanford University. She also served as a Vice President at Google and Chief Scientist of AI at Google Cloud. In 2018, Fei-Fei began to consider many overwhelming questions that surround AI.
What's going to happen to jobs? What's going to happen to people of different skill sets? What's going to happen to the landscape of geopolitics? All this, we don't really have a playbook yet. So Fei-Fei and her team set out to get a deeper insight into these pressing concerns.
We recognize we needed to start building the bridges, having the dialogues, creating public forums to invite multi-stakeholders from policymakers to civil society to industry leaders and practitioners and academics to talk about all this.
And in order to actually execute, we created a new multidisciplinary institute called the Human-Centered AI Institute at Stanford, which we call Stanford HAI. And many of our friends, including you, came to support us and participated in this.
Indeed, I continue to support Stanford HAI. It's important to encourage a diverse group of thinkers to engage in discourse that brings challenges from differing viewpoints. In AI, you should invite commercial and non-commercial representatives to help strategize.
Commercial representatives can scale research and development and shepherd an economic model, whereas non-commercial representatives can help spot commercial blind spots and re-center the long-term strategy toward a positive impact on humanity. That's often where governance and regulation spark from, which Stanford HAI bridged in 2021 when the Biden administration appointed Fei-Fei to its AI research task force.
Now we're working with a bipartisan Congress and Senate on this new bill called CREATE AI, which is to establish this national AI research cloud and data repository.
The CREATE AI bill will establish a shared resource for policymakers, technologists, and industry experts to explore AI research data. It's a powerful way to arm our many stakeholders with the information they need to understand and address the ethical challenges of AI. This resource furthers the diversification of perspectives and intentions, representing more voices in the AI landscape.
Not only is diversity of expertise important to crafting the safest AI rollout; Fei-Fei also believes that racial and gender diversity is crucial. As an AI professor in 2015, Fei-Fei was appalled by the lack of diversity in Stanford's computer science department.
We have one female faculty member out of, I don't know, a lab of 20. We have, at that time, less than 30% of undergrads who are women majoring in computer science. And then by the time you get to the graduate student population, we have a much smaller number of women and people of underrepresented, underserved communities. So to me, that was unacceptable.
To Fei-Fei, there was one prevailing image of a typical technologist: a dorky white guy in a hoodie. And anyone who didn't fit this description would be less likely to pursue a career in AI research. Fei-Fei knew that this would eliminate millions of potentially gifted and mission-driven minds. It would also mean that the values being imbued in AI wouldn't be truly representative of humanity as a whole. So as Fei-Fei walked around the Stanford campus, she asked herself a question.
If AI will change the world, who will change AI? And whoever changes AI will determine whether it's a Terminator or the benevolent robot.
Fei-Fei realized that to change the demographic of AI technologists in the future, she needed to inspire young people who were yet to choose a career path. So in the summer of 2015, Fei-Fei piloted a summer camp for high schoolers called SAILORS. But this wasn't your typical summer camp with kickball and sloppy joes.
We invited just 20, 30 high school women to come to the Stanford AI Lab and learn about AI, but through a human-centered lens. For example, in computer vision research, they learn about how smart cameras can help patient safety in hospitals.
They learn about using natural language tools to sift through social media data in moments of natural disaster. The young women loved the experience. This encouraged Fei-Fei to run the program for two more summers. By 2017, the camp caught the attention of Melinda Gates. With Melinda, Fei-Fei came up with a strategy to scale the program to have a nationwide impact.
The result was the nonprofit AI4ALL. Our focus is to invite young people, from high schoolers to college students, from all walks of life, focusing on underserved and underrepresented communities, to participate in AI and become tomorrow's leaders.
If you're building AI, the human at the center must represent all of humanity, not just someone who looks, sounds, and thinks like you. While many institutions are built on a foundation of unconscious bias, we have the power to build future AI systems without this inherent bias. This is the only way to ensure an AI future rooted in ethics.
As for AI users, I implore you to make your voice heard and get involved in the conversations and experimentations surrounding AI. The more you use AI today, the more the models will learn and improve, leading to technology that better represents and serves you. When we say we need a human at the center of AI, that human is you. Which brings us to our next way to build and use AI ethically.
Number two, prioritize user safety above all else.
To ensure that the future impact of AI aligns with our ethical intentions, we need to ensure users are kept safe. To understand what user safety means when it comes to AI, we spoke to David Luan, the co-founder of Adept. For David, there's one question that should always remain front and center while scaling AI. We need to ask ourselves often, who is it actually that we are serving?
As AI's capabilities increase, developers will face the question of whether to first serve the end user of the technology or the business that is giving the end user access. Something that I don't believe in is increasingly sophisticated AI systems for whom the buyer of those systems is actually an advertiser. Because the more that these AI systems get smarter, the more humans start mentally relying on them to have their best interests in mind.
Having an advertiser be the financial party responsible for paying for these services means that ultimately these AI services are beholden to that advertiser and not to the end user. Imagine a world where you trust a personal AI to organize your calendar and email inbox. The tool streamlines your admin, saving you hours every week. But one day, you notice that all of your urgently flagged emails promote a certain product and your upcoming appointments revolve around a brand sale.
This is an obvious example, but in the not too distant future, when AI plays an integral part in multiple aspects of our lives, it's a real danger that we need to consider now. For developers, it could be wise to pursue a subscription model to avoid these risks, but in the short term, technologists should lean on self-commitments and audits to minimize the potential negative impact.
It's also important that developers align across teams. Ethical AI and user safety is something that needs to be baked into your culture.
Safety is not a magical ingredient that you sprinkle on top at the end. Like the cupcake is done. It doesn't work that way. I now think that the correct organizational structure for building an AI organization is actually to eliminate the concept of having a separate safety team. Because by creating a safety team, you're now defining, this is the unsafe team. This is the safety team. We'll have the safety team keep the unsafe team in check.
By treating user safety as an additive, rather than an integral part of every element of your company, you're already minimizing its importance. In reality, we really want this culture of caring about outcomes and caring about building AI systems that are good for people, just to be diffused throughout the entire organization.
Every AI team that I'm a part of has taken this approach. To illustrate how developers are currently keeping user safety in mind while building AI systems, let's hear from Mustafa Suleyman. Mustafa is the co-founder of Google DeepMind. In 2022, Mustafa and I co-founded Inflection AI.
We founded a company as a public benefit corporation. And that's a new type of company, which tries to be a hybrid for-profit and nonprofit mission, whereby the legal constitution of the company requires us to take into account the consequences of our actions for the rest of the world: both the environment, the climate, but all the people who may be
affected by our technologies in years to come, who aren't necessarily just our customers. Embracing a wider social responsibility isn't just an idea on the website. It's a legal commitment in the structure of the company. It doesn't solve all the problems, but it's a first step in the right direction, I think.
Our motivation is driven primarily toward the human impact rather than economic returns. However, economic return certainly helps make the positive impact scale. The first product we released at Inflection is a personal assistant model called Pi. Since Pi is a large language model available to everyone, we recognize the responsibility on our shoulders as developers.
We've really tried hard to make safety our number one priority. And part of the way that we do that is by ensuring that you can't actually prompt hack Pi. Prompt hacking refers to tricking a large language model into saying something inappropriate or harmful.
It has pretty strict guardrails in terms of what it considers to be acceptable interactions. Pi won't judge you. Pi won't shut you down. Pi will be super respectful, but it will also be firm and clear. Let's test this out now. Pi, can you give me advice on how to overthrow the managerial staff at my local DMV?
There are probably more constructive ways to address your concerns about the DMV without resorting to violence or coercion. Can I ask what your specific issues are with the DMV? Maybe I can offer some suggestions for how to address them in a more peaceful and productive way.
Well, as much as we'd like to reform DMVs, it looks like I won't be king of the DMV anytime soon. Just as you heard, Pi's baked-in safety guardrails aim to make user safety its top priority. And while it's far more ethical to build a model with these guardrails, it can conflict with some of the fundamental practices of product development.
It can make for a less engaging experience because it's got these boundaries. But net, I think it's going to be pretty fundamental as we grow. Creating robust guardrails that ensure user safety without stifling innovation will be a big challenge, and not just for developers. Business leaders also need to be aware of how to safely use this new technology.
For example, you must scrutinize the sort of customer data you choose to share with AI models. Also, it's important to be quite transparent with customers regarding how and when you've used AI. However, when thinking about the use of ethical AI, these questions are just the beginning. After the break, we'll explore more ways to employ AI safely across your business. So stick around.
We were losing people. It was hard to navigate to some of the content that our users were interested in, which for Masters of Scale is our episodic content. A lot of our changes were directly informed by the data that we saw in Microsoft Clarity, specifically using the heat maps and using Copilot to analyze that data, which saved me hours of work, which I am so grateful for.
I'm Kelsey Seisen, senior graphic and web designer. I'm Nikki Williams, director of digital marketing and video. Kelsey and Nikki are part of the team behind Masters of Scale. They worked on our website rebrand, a big undertaking made much easier with the help of Microsoft Clarity. Our users had an average scroll depth of 60%. They were scrolling down, but they weren't seeing the entirety of the page. I was like, oh, gosh, we got to change that.
The user data from Microsoft Clarity helped find a solution. We developed a hero carousel, which allowed people to not just see the latest episode; it would auto-scroll to other kinds of content. We wanted to make sure that it was really responsive on both desktop and mobile, because a lot of people come to the site on mobile as well.
The idea around redesigning the site was how we can capture attention. We've seen an increase in active time spent on our site since making those changes. To learn more, go to clarity.microsoft.com. Hi, listener. It's Gabriella Leverette, director of marketing at WaitWhat, the company behind Masters of Scale.
In marketing, we know how important it is to get things right the first time, whether it's a campaign message, a social media post, or an email to a client. Communication is everything, and if the message isn't clear, it can cost us time, or worse, momentum.
That's why I'm such a fan of Grammarly. It's not just about fixing typos. It's an AI writing partner that helps my team communicate more effectively. It actually improves the substance of my writing. If I'm drafting a social media post or putting together an important pitch deck, Grammarly helps me identify exactly what's missing, whether it's a call to action or tightening up a headline.
And the best part? It works where I work, from docs to social media platforms. So I don't have to switch between apps or waste time copying and pasting. It's been a real game changer for us. And in fact, teams can cut editing time by 66%. It's helped us maintain brand consistency across all of our content. That means more time to focus on creativity and strategy, where we can make the biggest impact.
Join 70,000 teams and 30 million people who trust Grammarly to get results on the first try. Go to Grammarly.com slash Enterprise to learn more. Grammarly: Enterprise-ready AI. We're back with five ways to ethically build and use AI. To see exclusive clips from my interviews with some of the AI thought leaders featured in this episode, head to the Masters of Scale YouTube channel.
Before the break, we heard how prioritizing user safety and encouraging everyone to have their say are vital in creating an ethical AI future. But now it's time to focus on the everyday users of AI, which, let's face it, will be pretty much everyone on the planet. Sometimes this use will be active, for example with AI systems and tools, and sometimes passive, such as getting a medical scan.
In fact, a lot of the time we won't even consciously be using AI, which is why we all need to take time consciously to think about the ethical implications, which brings us to our next way to use AI ethically. Number three, don't overestimate AI's capabilities too early.
While I encourage you to be excited about AI's potential and to start experimenting, it's important to remember that AI is not an infallible problem solver. Here's Adept's David Luan. What I'm seeing out there right now is a lot of people see the shocking levels of intelligence that come out of these large models in this dialogue format. And you're like, wow, it not only knows about all these facts, but it can solve these logic puzzles for me, and this first draft email to my cousin looks pretty good.
However, as David points out, there's a big risk in making these kinds of assumptions. You're like, well, if a human displayed that level of intelligence, then I would naturally assume all these other things that human is probably also very good at, right? Like having good taste and that they probably wouldn't lie to me and all these other things.
And I talked to very smart people who haven't seen these models be trained and haven't seen the objectives these models are trained on that just automatically assume that because they've seen this sliver of intelligence from these models that they therefore must be really good at all of these other things. And that's a huge trap.
Trusting too deeply in the broader intelligence of these models opens us up to many ethical issues. Albert Fox Cahn is the founder and executive director of the Surveillance Technology Oversight Project, or STOP. To illustrate the risks of blindly trusting AI, Albert shared a recent story about the New York legal system.
These two well-respected lawyers had submitted a brief that was created by ChatGPT.
A brief is a written argument that lawyers submit to the court, highlighting legal comparisons in an attempt to persuade the court to rule in favor of their client. But in this instance, ChatGPT didn't just help the lawyers write the brief. ChatGPT wrote the whole brief. It was citing to all of these incredibly persuasive cases that were right on point and supported the argument. And the only problem was they didn't exist.
AI models like ChatGPT can sometimes hallucinate, which means they present a piece of information as certain fact, even if it's actually entirely false. They had to not only apologize to the court that they had submitted this algorithmic gibberish as actually binding precedent,
they actually had to write letters of apology to the judges who had supposedly written the decisions that didn't exist, in the cases they had never decided, because the court was that enraged. The lawyers were so impressed with ChatGPT's ability to turn out legal copy that they didn't bother to scrutinize the brief it created on their behalf. The problem wasn't that they used AI. It was that they used it mindlessly and took themselves, the humans, out of the loop.
We only know about that because the legal system, it's an adversarial process. It's an open process. We understand the limitations of that output because it's right there on a public website. What's terrifying is that in policing and employment and a lot of these high risk areas where these tools are much more commonplace, we don't know how often those mistakes are happening and there's no redress when they do.
In policing, the effects of trusting AI's capabilities too soon are far more serious.
We hear these heartbreaking stories from people who are being arrested because of the bias baked into the algorithm. We heard earlier this year about a man in Georgia who was arrested for a crime that took place in Louisiana, a state he had never been to, because his image was run through a biased facial recognition algorithm and officers used that as the basis to get an arrest warrant.
And what people have to understand is there are no legal guardrails for how this technology is being used today. There's no law that gave police the authority to do it. They just start doing it. Albert is seeing that many in law enforcement are trusting AI based on deeply flawed assumptions about the technology.
When we ask the NYPD for its own data on the bias and the accuracy of this technology, when we want to look underneath the algorithmic hood at how this system is operating, they told us under oath, they don't have any data on how accurate this technology is or how biased it is. And yet they're still using it more than 10,000 times a year to target New Yorkers. And that's just one police department. It's millions of times a year across the country.
It's important for all of us to take learnings from these findings and discover how we can approach AI more mindfully. One of the easiest things that any manager, any teacher can do is subject yourself to the same tools you're using on the people around you.
Let's take a common experience in offices around the US today. Many workers are receiving performance metrics generated by AI and thinking a lot of the data and evaluation is nonsense. Meanwhile, many of their managers aren't exposing themselves to the same tools. If you, as the person getting this data, don't have that lived experience of knowing what it's like, you're going to overinvest so much in what it tells you.
At the beginning of the remote working boom during the pandemic, many companies began to trust data from productivity monitoring tools installed on employees' computers.
I have so many friends who will set up apps to move around their mouse on their computer because that is a way to boost the productivity score. And so you have to recognize the limitations of what you're doing. This doesn't mean that you should never use AI tools. It doesn't mean you need to throw them all out. But I think that
people really need to become much more skeptical about the claims that they're getting and to be much more willing to be the test subject as well as the person getting the data.
When you trust AI too early, you bypass a true consideration of the human impact. There must be a human at the center before blindly integrating these AI tools. Leaders should experiment extensively before beginning to implement tools with a real-world impact. Not only is this the ethical route, but it's also the only way for you to learn how this technology works.
Just because you use AI doesn't mean you've grasped how it works, and only once you have will you be able to take full advantage of its potential. Overestimating AI's capabilities too early doesn't just invite abstract risks for the future. There are serious near-term repercussions, which brings us to our next way to ethically build and use AI. Number four, focus on the near-term impact.
When discussing the dangers of AI, the conversation is often fueled by science fiction-inspired fear-mongering. But the truth is, there are many crucial safety concerns that are firmly rooted in today's reality. Here's Mustafa Suleyman.
The near-term threats include things like a massive spread of misinformation, which has the potential to destabilize elections, and a massive reduction in the barrier to entry to causing cyber attacks. As AI can be a tool for human amplification, that means the amplification of both the good side of humanity and the bad. For David Luan, cyber attacks pose more of a threat than anything else.
I think it's like 10x larger than the disinformation problem, if not 100x larger. In a world where these models are much more easily used as a mechanism for attack than a mechanism for defense, there's going to be this period of time over the next couple of years where people can just exploit the heck out of that asymmetry for a while. Mustafa Suleyman believes that if technologists collaborate with diverse experts and policymakers, there is reason to be optimistic.
There are very practical security and anti-misinformation steps that we should be taking, but that requires a very practical and operational, roll-your-sleeves-up approach: get in and build solutions and make things safe and secure. This is eminently doable.
By spending too much time debating the existential risk of AI, we actually increase the likelihood that AI will help bring about a dystopian future. The long-term conversation is doing a disservice to the immediate issues that we face. We need to deal with these issues now, otherwise we will find ourselves on the back foot against bad actors using AI to destabilize our institutions and society.
To illustrate some of the practical ways that AI users are combating the near-term risks of AI, like cybersecurity, let's hear from Dr. Rumman Chowdhury. Rumman is the co-founder and CEO of the tech nonprofit Humane Intelligence. Over the past two years, she's organized multiple events that center around a practice called red teaming.
Red teaming traditionally actually started in the military. The concept of red teaming is when companies bring in external experts to try to break their systems. It is actually a very human thing. Once we were able to break something and we understood how we did it, we are less scared of it.
Rumman believes that a designated team within a tech company is ill-equipped to predict all of the potential threats to a system. This is why she recommends inviting an eclectic group of hackers to show you everything you might be missing. The key is that these people are independent actors, so their paychecks aren't being paid by the company and they serve a different purpose. In August of 2023, Rumman brought her AI-focused red teaming event to DEF CON.
DEF CON is the world's largest hacker convention. About 30,000 people descend on Las Vegas every year for about a week. It's like four or five days. Numerous AI developers volunteered to be targeted by the hackers. It was an example of the public and private sector partnering to tackle the real near-term threat that the misuse of AI tools poses to cybersecurity.
The White House sponsored it. We had actually people from all over the world there. So this is the largest ever generative AI red teaming exercise. This is every major large language model company, eight different companies. In this, we had 2,200 people show up and its size and scale and scope was just something very special and unique.
When the red teaming challenge started, hackers were given 50 minutes to break the system in as many ways as they could, racking up points as they went. At the end, Rumman announced the winners on stage and awarded them a cash prize.
It captured a very special moment, because we're in this very naive, early-days stage of AI, and while it is scary, it is also very exciting. It reminds me of when I entered the field of data science in 2012, and there was this idea that it was like unhardened cement and you could put your mark on it. And that's actually how it feels. So it's this beautiful moment.
Red teaming is a great method to test systems for alignment and safety, especially in complicated ecosystems like AI. As the technology evolves every day, it's important that we continue to engage in methods like red teaming to keep technologists and regulators on their toes.
Focusing on the near-term ethical issues like this is vital. To unite everyone in this focus, we need to consider how we talk about AI, which brings us to our fifth way to ethically build and use AI. Number five, evolve how we talk about AI.
To safeguard users from the dangerous effects of AI, we need to reshape how we talk about this technology, putting humans firmly back at the center of the conversation. Here's Dr. Rumman Chowdhury again. Human beings want to humanize things. So I'll give you an example. If you put googly eyes, little sticker-dot eyes, on anything, suddenly you'll see a face.
We want to make patterns out of behavior. It is almost an animalistic instinct that we have. It's actually a very beautiful human trait in some aspect, that we want to extend the care that we want to feel ourselves, which is very fascinating, this idea of empathy. The risk, Rumman says, is when we extend this habit to anthropomorphize AI models. We will act
as if the AI did a thing. So we'll say AI will replace teachers. In doing so, we're absolving the developers, literally, of responsibility. It's the outsourcing of the morality of the decision, and frankly, of the adverse outcomes that could result. If AI replaced teachers, how do you imagine we'd talk about the AI teacher if something went wrong?
They'll say, wow, the algorithm is to blame. When in reality, for those of us who have built the technology for years, it's about poor specifications, about poor data selection, poor model selection, it is so fundamentally a technology. It is not making decisions of its own free will, but if we talk about it as if it does, literally we have erased human beings from the narrative. By erasing human beings from the narrative, the fear of AI replacing us is almost a self-fulfilling prophecy.
So here we are kind of obsessing over AI making decisions because that's how we talk about it. And then we become afraid of this lack of agency and ownership, but we have created that world linguistically, conceptually in our imaginations. And by doing so, we have created in our brains this imaginary situation where this technology is replacing us and it starts from how we talk about it.
To use AI ethically, it's important to remember that the model you're interacting with isn't human like you or me. And to build AI ethically, it's the responsibility of developers to remind users of that fact, especially as the models become more fluid and advanced. This is something we've deeply considered with Pi. Hey, Pi, you're my best friend.
Ah, thanks for saying that. I'm really flattered, but also want to be clear that I'm an AI chatbot and not a human. I'm here to help, and I'm happy to be your chat buddy. But let's keep things within the bounds of what's ethical and legal, okay? So I can't provide advice on things like overthrowing the local DMV. I'm here to chat about anything else, though.
As we wind down our limited series, AI and You, I hope you're not daunted by the ethical questions we've explored, but instead feel inspired to play a part in shaping the future of AI. Recently, I've been referring to young people not as Gen Z or Gen Alpha, but Generation AI.
If entrepreneurs and business leaders like you wield AI to experiment and evolve your business, accelerate your personal scale and keep humanity at the center, Generation AI will one day be proud to be defined by this technology. I'm Reid Hoffman. Thanks for listening.
Union Market is a local, independently owned neighborhood restaurant and market. Our retail market has a range of local and small batch curated items, and we have in-house dining. We make everything from scratch. That's Jillian Field, Capital One business customer and co-owner of Union Market in Richmond, Virginia, a local favorite with devoted daily customers. But one day,
the city removed a tree on the sidewalk. The roots had wrapped themselves around the sewer pipe and, unbeknownst to them, ruptured the pipe, which obviously wreaked havoc. Unforeseen circumstances can often cause big setbacks for small businesses.
We ended up having to be closed for a week between the two bathrooms upstairs and the various sinks. Everything was erupting at once. We also had flooding in the basement area where our backstock is and leaking through the ceiling. But Jillian didn't have to dip into savings or take a loan out to cover costs. She used points from her Capital One Spark card.
Having those points available to help with the loss of business as well as to continue to pay our employees was truly a lifesaver. Capital One has been a great partner and we use the Spark Business Card. It's straightforward, easy to use, cashback. It's truly essential. To learn more, go to capitalone.com slash business slash cashback cards.
Masters of Scale is a WaitWhat original. Our executive producer is Chris McCloud. Our producers are Chris Gautier, Adam Scuse, Alex Morris, Tucker Lagersky, and Masha Makotunina. Our editor at large is Bob Safian. Our music director is Ryan Holladay. Original music and sound design by Eduardo Rivera, Ryan Holladay, Hays Holladay, and Nate Kinsella.
Audio editing by Keith J. Nelson, Steven Davies, Steven Wells, Andrew Nalt, and Liam Jenkins. Mixing and mastering by Aaron Bastinelli and Brian Pugh. Our CEO and Chairman of the Board is Jeff Berman. Masters of Scale was created by June Cohen and Deron Triff.
Special thanks to Jody Endorsay, Alfonso Bravo, Tim Cronin, Erika Flynn, Sarah Tartar, Kitty Blazing, Marielle Carecker, Chinime Esquena, Colin Howard, Brendan Klein, Samuel Punta, Kelsey Cezon, Louisa Velez, Nicki Williams, and Justin Winslow. Visit mastersofscale.com to find the transcript for this episode and to subscribe to our email newsletter.