Misdiagnosing Our Cyberhealth

Why do we ignore information that could improve our ability to predict the odds of a personal cyberattack?

As schools and universities closed across the country, the #ClassOf2020 challenge went viral, with graduates taking to social media platforms such as Facebook, Instagram and Twitter to mark the rite of passage online. Using the hashtag, they posted photographs of themselves in cap and gown, holding their diploma and surrounded by loved ones. Millions of people shared #ClassOf2020 images, which included smiling selfies taken in graduation regalia, proud parents hugging their children, fizzing bottles of champagne and tassels flying high above caps tossed in the air. It was a moment of joy captured amid global crisis.

But these snapshots may also have given cybercriminals valuable insight into the private lives of these recent graduates. Using the hashtag, hackers could have mined the posts for information on the students, from the diploma in their hand to the university painted on their cap and the pets and family members tagged in the background. What few of the posters realized, as the Better Business Bureau has warned, is that the content of these photographs and captions also holds the answers to the security questions designed to protect their accounts. A quick scroll through Instagram, and the answer to “What’s your pet’s name?” or “What high school did you attend?” can go from a shot in the dark to an educated guess. Hackers can cross-reference information posted to these social media campaigns against other publicly available personal data to glean our birth date, hometown and other key facts that can be used to change our passwords and take over our accounts.

We hear a lot about what we can do to be cybersafe. We know to be wary of what we download. We know passwords should be random strings of letters, numbers and symbols. But what we don’t realize is that our own psychology is working against us, lulling us into a false sense of security. We think of ourselves as less vulnerable than others, and we disregard other people’s experiences and testimonies when considering our own risk.
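As a concrete illustration of that advice, here is a minimal Python sketch (my own, not drawn from the article’s sources) of how such a random password can be generated with the standard library’s secrets module:

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Build a password from random letters, numbers and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # e.g., 'k#9Qv;2T...', different every run
```

Unlike the answer to a security question, a string like this cannot be reconstructed from anything posted on social media.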


In three pilot experiments, my colleagues and I investigated how individuals judge their risk of succumbing to a cyberattack. Before reporting their own or others’ likelihood of falling victim to an attempted scam, all subjects received base rates—the percentage of people who clicked on an e-mail link or downloaded an attachment sent by an unknown or suspicious source. Our research found that while participants used these percentages when assessing others’ risk, they largely ignored them when considering their own, suggesting a startling cognitive bias. We misjudge our own vulnerability because we believe that, whatever the rule, we are somehow the exception.

Hackers take advantage of our complacency. In 2017 the Ponemon Institute reported on the cost of data breaches in 13 countries and regions. According to its analysis, U.S. organizations pay the highest price for data breaches, with average annual losses of $4.13 million because of customer turnover, reputation loss and diminished goodwill. When probing why, the researchers estimated that about half of the cost of U.S. breaches was the result of human error or negligence.

How do we prevent this problem? One approach is to motivate people through fear. In a survey of people in China and the U.S., Yan Chen of Auburn University at Montgomery and Fatemeh Zahedi of the University of Wisconsin–Milwaukee found that those with greater fear of falling victim to a serious cyberattack more frequently asked for professional help and took precautions to protect against security breaches. In line with this evidence, some experts argue that the way to heighten fear and awareness of the dangers of cyberattacks is to report shocking statistics about them and the stories of victims’ experiences.

There’s reason to believe this strategy might work. Social learning—in which we observe the good and bad outcomes that others experience—is supposedly one of the most effective forms of education and prevention. We can learn more, and do so faster, by observing others, using their experience to inform our own decisions. If we want to know the likelihood that we could be targeted, we should draw on what other people did and the consequences they faced. But what my team’s research suggests is that when it comes to cyberthreats, we don’t learn from others.

This “scared straight” approach often falls short because, quite often, we simply ignore the message. Seeing the base rates does little to change people’s beliefs about their own vulnerability to an attack. In one of our studies, we recruited 432 adults across the country and had them report the probability that they would respond to different phishing scams that asked them to disclose personal information in exchange for something they wanted. For example, respondents considered how likely they were to click on a link to complete a survey in exchange for a chance to win a new Apple Watch. On average, participants estimated there was a 13 percent chance they would respond as requested. When asked to predict the likelihood of someone else doing so, however, their estimates were 46 percent higher, rising to about 19 percent. The subjects expected other people to be more likely to respond to the request and, as a result, underestimated their own risk.
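To make that arithmetic concrete, here is a quick sanity check (the percentages come from the study above; the code itself is purely illustrative):

```python
# Average self-predicted chance of responding to the phishing request.
self_estimate = 0.13

# Estimates made for other people were 46 percent higher in relative terms.
other_estimate = self_estimate * 1.46

print(f"{other_estimate:.0%}")  # prints "19%", matching the figure above
```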

In assessing their own vulnerability, individuals did not consult the group averages. The participants received true base rates: the percentage of people who clicked a suspicious link, downloaded an attachment from an unknown sender or completed one of the other tasks listed in the e-mail. For every 10 percent increase in the actual likelihood reported in the base rates, self-predictions rose by only 3.5 percent, whereas predictions made about others rose by 8 percent.
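Those figures can be read as the slopes of two simple linear relationships. The sketch below is an illustrative model of that pattern, not the study’s actual analysis:

```python
# Slopes implied by the results: for each point the true base rate rises,
# self-predictions move 0.35 points, predictions about others 0.8 points.
SELF_SLOPE = 0.35
OTHER_SLOPE = 0.80

def predicted_shift(base_rate_rise: float, slope: float) -> float:
    """Expected change in a prediction for a given rise in the base rate."""
    return base_rate_rise * slope

for rise in (10, 20, 30):  # percentage-point increases in the base rate
    print(f"base rate +{rise}: self +{predicted_shift(rise, SELF_SLOPE):.1f}, "
          f"others +{predicted_shift(rise, OTHER_SLOPE):.1f}")
```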

Self-assessments barely tracked the rising group averages, largely because people often didn’t even look at the base rates. Embedded in the frame of the computer monitor were four infrared sensors that tracked each participant’s gaze. Using this technology, we measured how often people looked at the base rates and found that they glanced at the group averages 12 percent less frequently when predicting their own reactions than when predicting other people’s. They considered the general averages useful when thinking about someone else but less important when thinking about themselves.

People often have access to base rates but disregard them. This pattern has led prominent theorists to propose that our disinterest in other people’s experiences reflects a more general cognitive bias. Psychologists Nicholas Epley and David Dunning, for instance, found that when individuals considered whether they were likely to donate to a charity, their predictions did not track population base rates of such donations.

Our belief in our own exceptionalism leaves us ill informed about our vulnerabilities online. So how can we override our tendency to disregard others’ experiences? First, we can assess our risk with an impartial eye, weighing the statistics more heavily than our personal beliefs. We can review security settings on social media and limit access to our posts. We can change our security questions and pick answers that can’t be found on the Internet. And, most important, we can learn from our mistakes. Because once we’ve become one of the statistics, we aren’t likely to overlook them again.