Speaking of Psychology is taking a winter break, so we're rerunning some of our favorite episodes from the past. In 2023, I talked to psychologists Daniel Simons and Christopher Chabris about why people fall for Ponzi schemes, phishing scams, and other cons, and what you can do to protect yourself. We hope you will enjoy this episode from the archives. Speaking of Psychology will be back with new episodes in January. Thank you for listening.
From headline-grabbing Ponzi schemes to email phishing identity thieves to fake news stories circulating on social media, it can sometimes seem like the world is full of people who want to deceive us.
What many of these scammers and con artists have in common is that they take advantage of the patterns of thinking and the mental shortcuts that we all use in our daily lives. Most of the time, these mental habits serve us well. After all, most people aren't trying to trick us and it would be difficult to go through life skeptical of everything and everyone. But the same habits of trust and belief that work well most of the time can leave us vulnerable when we encounter a con artist
a false news story or some other type of deceit. So what are the cognitive habits that put us at risk of believing lies and falling prey to scams? Are some people simply more gullible than others or can anyone be conned? When you're offered something that sounds like an amazing opportunity, what questions should you ask to figure out if it's too good to be true? How common is fraud in the worlds of business, science, and elsewhere?
Is technology, including artificial intelligence, making it easier for people to cheat and lie, and to get away with it? And what can we do to protect ourselves and our wallets by spotting scammers before it's too late? Welcome to Speaking of Psychology, the flagship podcast of the American Psychological Association that examines the links between psychological science and everyday life. I'm Kim Mills.
I have two guests today, co-authors of the new book Nobody's Fool: Why We Get Taken In and What We Can Do About It. First is Dr. Daniel Simons, a professor of psychology and head of the Visual Cognition Laboratory
at the University of Illinois Urbana-Champaign, where he also has courtesy appointments in the Department of Advertising and the College of Business. Dr. Simons's research explores the limits of human awareness and memory, the reasons we're often unaware of those limits, and the implications of those limits for our personal and professional lives. Next is Dr. Christopher Chabris, a professor and co-director of the Behavioral Decision Sciences Program at the Geisinger Health System.
Dr. Chabris is a cognitive scientist whose research focuses on decision-making, attention, intelligence, and behavior genetics. Dr. Simons and Dr. Chabris were also co-authors of the 2010 New York Times bestseller The Invisible Gorilla: How Our Intuitions Deceive Us.
They've collaborated on research for more than 25 years and spent nearly a decade working on their new book, which explores how scammers, con artists, and other liars fool us and what we can do to avoid being taken in. Thank you both for joining me today. Thanks for having us on. Great to be here.
You start your book with a quote from James Mattis, the former US Defense Secretary, who said, "Once in a while we can all be fooled by something." He was talking about the fact that he served on the board of the disgraced company Theranos before it was revealed as a fraud. So let's start there. Why is it that we're all vulnerable to being deceived at least some of the time?
I think we tend to think that only the most gullible among us can be deceived. And there's good reason for that. Whenever you hear about a scam or you hear about somebody falling for a con, in retrospect, it seems obvious. And you can think to yourself, oh, yeah, I wouldn't have fallen for that. But most scams aren't targeting you. They're targeting somebody. And when a scam really is targeted at you in particular, one that meets your desires, your wishes, your wants,
you're more likely to fall for it than if it's not. So we all can be subject to being deceived if the targeting is aimed at us and it takes advantage of the way we think and the information we're looking for. So Dr. Chabris, let me ask you, what are some of the most common cognitive shortcuts or thinking patterns that get us into trouble? I think the first one that we talk about is truth bias. So truth bias is the idea that
our default tendency is to think that whatever we hear or read or encounter is true, that people are not always trying to lie to us or deceive us. And that's sort of a precondition for being taken in by any kind of scam or con or fraud. Because if you believed everything you saw was false or questionable or misleading, then
you wouldn't really act on those offers and those promises and those sales pitches and those marketing messages. And truth bias is important for us to have because if we didn't tend to accept what other people said, we'd never be able to make plans with them, have a conversation with them or do anything really other than be skeptical all the time. Once you start with truth bias,
then other cognitive habits we have can get us into further trouble. And I guess one I would highlight to start with is our habit of focus, which refers to our tendency to focus on information that's right in front of us, that we have easily at hand, often that someone has provided to us, and to make decisions based only on that information,
operating essentially under the assumption that that's all that matters or that's all there is. And one of the main ways to avoid being deceived is to think about information that people aren't showing you, or information that's missing or has been withheld or that you're just not thinking about, broadening your focus to include other information besides what the scammer or con artist wants you to be focusing on at that particular moment.
So it's especially easy to fool people when you tell them something that they want to believe anyway, or something that matches their preconceived notions. So how can we fall into the trap of not being willing to re-examine our own beliefs?
This is a general issue: we tend to make predictions about the world. We have to. We have to have expectations for what's going to happen, and we act on those. And most of the time, that's a great thing to do. And that's true for most of these cognitive tendencies. They generally work really well for us. It's only when somebody is hijacking them to take advantage of us that we run into problems.
The problem with predictions and expectations is that when somebody gives us exactly what we're looking for, we don't tend to question it as much as we would if somebody told us the opposite of what we believe. So you're much more likely to forward that story on Facebook or Instagram or wherever if it matches your beliefs.
without stopping to think, hey, is that really true or might it be false? Even in research, we tend to be much more willing to question our findings if it looks like they came out the opposite of what we were expecting than if they came out exactly as we predicted. And that tendency to just not double check ourselves can get us into trouble.
Well, you just mentioned the world of science, and I know in your book you've given several examples of scientists, including psychologists, unfortunately, who have gotten away with fraud and deception in some cases for many years. How common is scientific fraud and how should non-scientists think about this when they're trying to read and make sense of the science news that they may be reading and publications that they look at all the time?
I don't know if we have a good estimate of what the base rate of scientific fraud is because it hasn't all been uncovered. So we can't go back to the last 100,000 publications and count up how many were fraudulent or had other kinds of fabrications or anything like that. So it's probably more common than the detected cases so far would have you believe, but it's probably not as common as some of the biggest critics of scientific fields would want you to believe either.
Most scientists that I know, and I'm sure that Dan knows, and most scientists in the world are trying to get accurate results. They're trying to report what they actually did. They're not deliberately falsifying things. But in a climate where it's reasonable to expect that scientific results that are reported are honest and true and not false,
A few people can get away with inserting themselves into the process and deliberately creating things that will then slip through and get fabricated results into the literature. I think the general public should be skeptical of specific
scientific results if they detect signs like: well, this is based on a very small sample size; it's based on a single study; it hasn't been replicated by independent researchers; it's getting hyped quite a lot in the media, and yet nobody else seems to be able to reproduce these findings; it has a gigantic impact from a very tiny intervention. Those are signs that something might be overblown
or misleading or perhaps even made up. But when you encounter sort of the general considered opinions of large bodies of scientists who've really looked carefully at work and done dozens, hundreds, thousands of studies over years, those are the kinds of things that people should really trust and rely on.
Just to add to that: we tend to have this impression that discoveries happen in a bolt of insight, in a single finding. And that's not how discovery in science works. If we want to make a claim that something is a truth about people, and in psychology that's often what we want to do, we want to say this is a truth about people, their behaviors, their beliefs, their traits, you're not going to typically come to a definitive conclusion about all people from one study, especially very small studies.
Discovery involves sort of incremental understanding of an idea and testing it in a wide range of contexts to make sure it actually applies more broadly. It might be a discovery about one tiny little sample at one time, maybe, but if you don't test whether it generalizes to a lot of contexts and a lot of people, it's not really a broad discovery in the sense that we tend to think of in science.
So I would be wary of single studies. When it's a new study that's told us something we didn't know before and that nobody ever thought of before, that tends to be something that's getting overhyped. Now, a lot of scams involve promising something that's too good to be true, like an investment opportunity that offers unheard-of returns or a company that promises a miracle medical cure.
If you're offered something that sounds like an amazing opportunity, what are the questions to ask yourself to decide if it falls into that too good to be true category?
I can start with that. We all know that when something is too good to be true, we shouldn't believe it. That's something that everybody kind of knows: if it seems too good to be true, it probably is. The problem is that what sounds too good to be true to one person is just good enough for another person. Somebody else is going to find it just plausible enough that they're going to jump in. The challenge in deciding when something's too good to be true is in making sure that
you view it kind of skeptically from the beginning. So if something seems really promising, really amazing, you should take a step back and say, what would I need to know in order to verify that that's actually true? We know that if somebody offers you 50% returns on your investment with a guarantee of no losses, all within a year, it's almost certainly not true, because no investment does that. So you can take a step back and say, what would they have to be doing to be able to make that claim? And as soon as you start questioning it a little bit,
you might realize that now this really actually is too good to be true. I was just going to say, what about the case of Bernie Madoff where it was too good to be true and yet he was paying people? So you didn't really have a reason to question other than, well, the market isn't behaving this well, and yet I'm still getting 15% returns every year. I mean, how do you escape being in a situation like that?
So too good to be true, as Dan said, can mean different things to different people. But there are some signs that are pretty common for things that are too good to be true, or at least that shouldn't be believed. And in the case of Bernie Madoff, what he was offering wasn't really necessarily too good. It was not a really huge rate of return. It was 8% to 12% a year,
which is a little bit above the long-run average of the stock market, but not too much. It's nothing like what the original Ponzi scheme was offering or what a lot of cryptocurrency offerings recently have offered. But the telltale sign was that
it was every single year: there was never a losing year, and there was barely ever even a losing month. He managed to make the returns on his investors' investments fit within that narrow band. So the excessive consistency there was, I think, what trapped people. They thought, this is a sure thing. I will never lose. I'll get the same amount per year, kind of like a Treasury bill or some kind of bond or savings account, but just with a much higher rate of return.
And that was, I think, the innovation, really the true novelty of Madoff's scheme. It's called a Ponzi scheme, but some people call it a Madoff scheme, because he really added this new dimension of not offering people the ridiculous proposition that educated people,
maybe who've been in the stock market for a while, know is unrealistic, the 50% in six months or a year or something like that, but offering something that seems much more reasonable. It wasn't too good to be true in the sense of the total amount of money they made. It was too good to be true in the sense of the consistency with which they made it. So consistency, or the lack of it, is an important sign of fraud that people can pay more attention to than they might otherwise.
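To make that "excessive consistency" point concrete, here is a minimal back-of-the-envelope sketch, an illustration rather than anything from the book or the interview, that asks how likely a long track record with no losing months would be for a legitimate fund. The monthly mean return and volatility below are assumed, roughly stock-index-like values, not Madoff's reported numbers.

```python
# Illustrative sketch (assumed parameters): estimate how often a fund with
# ordinary market-like monthly returns would go `years` years with no losing
# month at all.
import random

def prob_no_losing_month(years=15, monthly_mean=0.008, monthly_sd=0.04,
                         trials=100_000):
    """Estimate the chance that every month over `years` years is positive."""
    clean_runs = 0
    for _ in range(trials):
        # A run counts as "clean" only if every simulated monthly return is positive.
        if all(random.gauss(monthly_mean, monthly_sd) > 0
               for _ in range(12 * years)):
            clean_runs += 1
    return clean_runs / trials

if __name__ == "__main__":
    # Under these assumptions the estimate is effectively zero.
    print(prob_no_losing_month())
```

Under these assumed parameters the estimated probability is effectively zero, which is the intuition behind treating decades of near-perfect consistency as a red flag rather than as reassurance.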
That happens in science too, right? So one of the hallmarks of fraud in clinical trials involves the baseline conditions, when people are measured at the beginning. You'll often do checks to make sure that the people assigned to the treatment group and the placebo group were the same in other ways, so that those differences aren't contaminating the effects of your treatment. So you kind of expect the two groups to be balanced on prior health conditions, for example, or age. And if they're all perfectly balanced, that's weird. That's a red flag, because just by chance, if you randomly assign people to groups, they won't all end up perfectly balanced. You'll end up with a distribution, with groups differing on some factors and not on others. And it's turned out that that's been a major way of finding fraud in a number of disciplines, from anesthesiology to any field that runs clinical trials.
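Here is a minimal simulation sketch of that randomization point, illustrative only and not based on any real trial: under genuine random assignment, it is rare for every baseline covariate to come out nearly identical across groups, so a baseline table that is perfectly matched on everything is itself suspicious.

```python
# Illustrative sketch (assumed settings): simulate random assignment and check
# how often at least one baseline covariate differs noticeably between groups.
import random

def max_baseline_gap(n_per_group=100, n_covariates=10):
    """Simulate one trial's baseline table and return the largest absolute
    difference in covariate means between treatment and control groups.
    Covariates are drawn from a standard normal for simplicity."""
    gaps = []
    for _ in range(n_covariates):
        treatment = [random.gauss(0, 1) for _ in range(n_per_group)]
        control = [random.gauss(0, 1) for _ in range(n_per_group)]
        gaps.append(abs(sum(treatment) / n_per_group -
                        sum(control) / n_per_group))
    return max(gaps)

if __name__ == "__main__":
    # Fraction of simulated trials where at least one covariate differs by more
    # than 0.1 standard deviations at baseline: close to 1 under these settings.
    trials = 1_000
    print(sum(max_baseline_gap() > 0.1 for _ in range(trials)) / trials)
```

With the assumed settings (100 patients per group, ten covariates), nearly every simulated trial shows at least one noticeable baseline difference, which is why a table of perfectly balanced baselines invites scrutiny.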
So your book warns people to be wary of the butterfly effect. What do you mean by that? Well, the old anecdote goes that a butterfly flapping its wings in Brazil can cause a tornado in Texas. And that's based on sort of old, well, not old, but relatively new ideas about chaos theory and attempts to model how small changes in one place can perturb a system so that some dramatic thing happens someplace else.
People tend to be too accepting of the idea that tiny little interventions in the world, not random things like butterflies flapping their wings, but very small, very brief interventions, can have outsized effects. For example,
giving a half-hour or hour-long reflective exercise at the beginning of college can cause big changes in people's GPA and willingness to stay in college and complete the degree and so on, just to give one example of this class of effects. Another one is priming studies, the idea that very brief exposures to stimuli we don't notice or remember encountering
can dramatically change our behaviors or beliefs a few seconds, minutes, or even longer afterward. There's one particularly dramatic example where a study claimed that subliminally seeing the Israeli flag caused people to be more likely to vote, and to vote for centrist parties as opposed to more extreme parties,
days or weeks later, in an actual election. That's just the kind of thing you should look at and ask, how plausible is it that something that happened to you for 150 milliseconds in October changed what you did in the voting booth in November?
And there are many, many studies and stories like this, which suggests there's an inherent appeal to the idea that we could discover these little tiny things that make a vast difference. And if we did discover them, that would be great, right? It's not irrational to say we should want tiny, cheap things that produce huge outcomes. A vaccine is one of those things, right? It's a tiny little injection that you get once that might prevent you from getting a disease for the rest of your life in some cases. It's just that those are rare events in science. They're not the things that are coming out every day in our psychology journals, unfortunately.
And that's a sort of general principle: any time you see a big effect from a small intervention, that's when you should require the strongest evidence. You shouldn't bank on a single study showing something like that, because the odds are good in those sorts of cases that it's not going to replicate in the way we think it will, or it's not going to be as general as was claimed. Those are going to be the effects that are really engaging and exciting and
more likely to be wrong, because it's really rare to have giant effect sizes from tiny little variations, tiny little interventions. So do either of you have a favorite con or scam you came across in researching the book? And I'm using the word favorite in air quotes here. Why is it your favorite, and what can we learn from it?
I'll start with one that I knew about before we started writing the book, but I learned a lot more about it in researching the book, and that's the famous case of John von Neumann. You may have heard of John von Neumann as a famous
economist, mathematician, computer scientist, game theorist, one of really the fathers of sort of modern science. But it was also the name taken by a guy who entered a chess tournament in 1993 under that name. I don't think it was his real name, but he entered a chess tournament under that name.
In 1993, which was 30 years ago, actually 30 years ago this month, in fact, July 1993. And nobody had ever heard of the guy before. He didn't even touch the pieces and move the pieces as though he had ever played in a chess tournament before. He looked like a complete outsider and novice. And yet, he drew with a grandmaster and beat masters and was doing really well in this chess tournament.
And in fact he tied for a prize at the end. He didn't win every game. In some games he would just suddenly stop playing for no reason for 40 minutes and let his time run out. In other games he would try to make illegal moves and just sort of stop. At the end of the tournament, though, he had won enough games to win a prize.
But everybody was suspicious about how this guy could be doing it. They didn't really know exactly how, but they thought, this can't happen. So the tournament director, a guy named Bill Goichberg, who I've actually known since the very first chess tournament I played in, didn't want to give money to this guy who he thought might be cheating somehow, but he didn't know whether he was cheating or how he was cheating. So before writing him a check,
he set up on the chessboard a very simple chess puzzle that a five-year-old who had learned the game a month earlier could probably have solved. And the guy refused to even try it, and he walked out and essentially was never seen again. And the reason I love that con so much: it's generally agreed now that basically what the guy was doing was concealing an earpiece and getting information transmitted to him by an accomplice who was using a computer
to help figure out what moves to play, and who was sending code to him through an earpiece saying, you know, move this piece from here to there. But if the transmission broke down, he would just stop and wait to hear something, and so that didn't help him solve the puzzle at the end. So the lesson that we drew from this, for what you can do yourself, is
that it would be tempting to just write the check and move on, as our habit of efficiency leads us to do, to make quick decisions and so on. But Goichberg thought to ask one more question of this guy and do a little tiny test, to see whether he really was what he purported to be, that is, a chess master. And
in 10 seconds he had his answer, when the guy refused to do it and started arguing and left. So we advise anybody to really think about what questions you could ask. Sometimes it only takes one to get a crucial additional piece of information that will let you decide whether you're being conned or scammed or whether something is legitimate. And that's just my favorite example, because of the whole story that surrounds it.
I could talk about one that I think is more likely to be encountered by you or me, which is what's known as a call center scam. And these are scams that we all have encountered. So if you've gotten calls asking you to call back to extend your car's warranty, the vast majority of those are scams.
Because most of us don't need to do that. And you've probably never called back that number. But anybody who does has selected themselves as somebody who is likely or more likely at least to send money.
It's the same way that the famous Nigerian email scam works: it looks ridiculous on its face to most people, but it doesn't have to convince most people. It only has to convince the people who find it really appealing and who are most likely to fall for that particular pitch. So you can send out millions of emails and only reach a tiny number of people who are going to respond, but that's exactly what you want, right? You only want to interact with the people who are most likely to give you their bank account number.
But call center scams have taken on a new form, which is to put people under huge time pressure and threaten them. So for example, saying, you owe money to the IRS. And unless you pay it right now, the police will come knock down your door. And what they often will ask people to do is go out and buy a prepaid cash card and then read the number over the phone.
Most people aren't going to fall for that, but they're really good at their pressure tactics, so that if you feel like you're in danger, if you feel like you've maybe done something wrong and you're not sure, there's a really strong pressure to just pay up and be done with it. But of course, as soon as you pay up, they're going to be asking you for more money. And I think it's a really interesting case, because this has been a really successful
industrial-sized business fraud that's been happening. But people should keep in mind that no government agency or formal corporation or any organization that's asking you for money will ever ask you to go buy a cash card and read a number over the phone ever. That's always a scam.
Nobody's going to ask you to go out and buy cash cards. The other thing is that there's never going to be a case where somebody's going to call and allow you to pay off if the police are on their way to your door. The IRS doesn't send the police to your door based on a phone call and not paying immediately. There's never that much rush, but these call centers have refined their techniques to put you under a huge amount of pressure so that you want to just make them go away.
And that's been a really effective tactic. So if you get that sort of a call, you can safely hang up, because if it's true, they're going to email you, they're going to send you a letter, they're going to ask you to react in some other way. And you can call the official number: look it up on the web yourself, don't call the number they give you, look it up online, and call the official number and see if it's real. More often than not, it's not.
Dan's story is a great example of, I think, why we fall for this gullibility myth, right? Because we hear a story like this and we think, I would never believe that the IRS is sending the police to my door to collect my tax bill with, you know, Apple iTunes cards or whatever.
Who are these people who fall for this stuff? Well, of course, I'm a scientist, and scientists have been defrauded, but they've been defrauded in a different way from that. They've been defrauded by people who understood how to defraud scientists, by making up plausible research results and getting them past peer review and so on. So everybody can get conned and scammed, but the way is going to be specific to each of us, and people who think they're sophisticated and superior can be tricked in other ways, maybe not by that con. There's something out there for all of us
in the world of scams, unfortunately. I mentioned AI in my introduction as a kind of technology that may be making it easier for people to cheat and lie in all sorts of new ways, such as students using ChatGPT to write their papers or people using AI tools to write fake news stories or doctor photos and videos. Do you think AI is going to lead to a big rise in deceptions and scams of all kinds, or even new kinds?
I think one of the themes that we've run across over and over in researching this work is that the same principles have applied to scams as far back as you can look. So the same sort of taking advantage of our habits and hooking us with the information we find appealing, that's been true going back to the Trojan horse. So there's not anything new in that sense.
There are new variants of old scams that emerge all the time, and AI might allow some new variants of scams to become much more potent. So here's one case where I think it's particularly going to be problematic. There's a scam right now that's been pretty effective, one that I sometimes call an injury scam or a kidnapping scam, where somebody calls up a parent or a grandparent and says, your kid's been in an accident, we need you to send money right away so we can get them treatment at this hospital. It's not actually a hospital.
And again, it's time pressure, preying on fears, preying on your desire to help right away. Often that sort of scam depends on people being targeted because the scammers know their kid isn't home, and they're taking advantage of a little bit of information they've picked up about the parents and the kid on social media. But imagine how powerful that can be if, instead of it being somebody you don't know making that call, it's a synthesized version of your kid's voice.
That makes it that much more believable. So I think a lot of these sorts of things potentially could be amplified as AI capabilities improve and develop. So yeah, that's a concern. And there are things you can do to try and prevent that in advance. One thing I've talked about with my family is having a passcode for our family, so that if we're ever in doubt whether a call is real, you just ask for the passcode. And if they don't give it to you, it's a scam.
That's a good trick. And here we are putting our voices out there on the internet for everyone to copy. One of the things that we do at the American Psychological Association is our IT department sends fake phishing emails to employees. And you're supposed to report them as a training exercise. And of course, there are always people who fall for them. Do these strategies work? I mean, can you really be taught to avoid, to be alert to scams?
I think you can. I mean, you could start by reading our book. But often the specifics of a particular scam will evolve and change over time, right? So these particular phishing emails that the IT department sends are designed to look like the phishing emails that the people who run these business email compromise scams are using these days. They won't perhaps permanently inoculate you against every kind of scam. But I think they're a reasonable form of training, and they also help to
generate information like 25% of your colleagues fell for this thing, which can maybe raise awareness more broadly if the scale of the susceptibility is known. And there have been various research studies that have found surprisingly high percentages of people willing to turn over their passwords and other credentials to these kinds of scams. But they're not a permanent solution. I think to the extent there is a permanent solution, it really involves understanding the patterns of what the scammers are trying to do
and what in particular they're trying to target about our own habits and trying to react to that more. There's also a downside to constantly doing that sort of check, which is it makes you distrust emails that seem to be coming from your own administration and your own company. So if you're constantly getting faked emails, it makes you kind of like, well, should I believe that this actual administrative email is genuine or do I just need to
be suspicious of them all the time? So there's some potential downside to that, but training people to recognize the hallmarks of that sort of scam and that sort of phishing is a really useful thing to do. So to wrap up, I want to ask you both, has working on this book and doing this research changed any of your own habits? Has it made you less trusting or more trusting in life?
I do think that, having looked at such a wide variety of different kinds of cons and scams throughout history, I learned a lot about ones that I never knew about. And it definitely, I would say, has made me more suspicious, hopefully not in a bad way. But also, I think, maybe a little better at recognizing the patterns that you see among these things, so that when you see some new offer, some new thing, a new story or whatever, you can put it into a framework
where you say, oh, I understand what they're doing in this thing. And that makes it easier to classify, easier to remember, easier to think about how you might avoid it or what you might advise somebody else to do. So I think overall, it's been quite a positive, even though on occasion, it has been dismaying to see what people are doing and what people are falling for.
And I think one of the big challenges in this sort of a context is thinking about when you need to be more skeptical, when you need to be more critical, because you can't be a skeptic all the time. That would be a terrible way to go through life, right? You'd be constantly cynical about everybody and everything.
But knowing when it actually matters and thinking about when this could be consequential for me, I think that's something that looking through all of these sorts of different scams over history helps with. It gives you a sense of what the patterns are for big risks and helps you think about, okay, do I need to worry about whether the grocery store has given me exactly the price that they promised on the shelf? Well, if a couple of pennies here and there don't matter to me, I can kind of get a sense of,
is this overall close enough, and I don't necessarily want to check everything in detail. So I can let those sorts of small scams and small deceptions that probably happen unintentionally all over the place go, not worry about them too much, and really think about it when there's something that could be very consequential. Well, Dr. Simons, Dr. Chabris, I want to thank you both for joining me today. This has been really interesting.
Thanks. It's been fun chatting. Yeah, thanks for having us. You can find previous episodes of Speaking of Psychology on our website at www.speakingofpsychology.org or on Apple, Spotify, YouTube, or wherever you get your favorite podcasts. And if you like what you've heard, please leave us a review. If you have comments or ideas for future podcasts, you can email us at speakingofpsychology at apa.org.
Speaking of Psychology is produced by Lea Winerman. Our sound editor is Chris Condayan. Thank you for listening. For the American Psychological Association, I'm Kim Mills.