
Feature

Computer scientists investigating new ways to prevent phishing

11 August 2021
Phishing is an expensive problem and neither spam filters nor user training do enough. Could focusing on the circumstances of phishing attacks help?

Phishing attacks are expected to cost the global economy US$20 billion in 2021, and that number is only projected to go up. Within 10 years, global costs related to ransomware – often installed following successful phishing attacks – are projected to balloon to US$265 billion a year. A group of University of Auckland researchers is hoping to change that.

Giovanni Russello, Danielle Lottridge and Yun Sing Koh are all members of the School of Computer Science, but each brings different expertise. Associate Professor Russello, the head of the school, is an expert in cybersecurity and online privacy, while Associate Professor Koh specialises in machine learning and Senior Lecturer Lottridge specialises in human-computer interaction and user experience.

Until now, most work aimed at stopping phishing has focused on technological fixes or on what Russello calls “blame-the-user” approaches. The problem is, neither approach is doing enough.

Image: Danielle Lottridge, Giovanni Russello and Yun Sing Koh

Technological approaches have undeniably had an impact. Spam filters and similar tools stop about 90 percent of malicious emails. But that still leaves 10 percent. Given the sheer volume of email, most people are still confronting potentially dangerous emails on a daily or near-daily basis.

Current user-based interventions aren’t solving the problem either. Certainly, education can help people learn to recognise signs an email may be suspicious. However, 65 percent of companies that have been victims of phishing attacks had previously performed some form of training, says Russello.

Lottridge, Koh, Russello and their colleagues, who include a PhD student, a visiting professor from Canada and three psychology researchers, want to focus on something new: the individuals involved and the circumstances in which they receive phishing attacks.

Different email situations

It’s not hard to imagine situations when you might react differently to emails. On a good day, you might arrive at work well-rested and sip your coffee calmly as you work through a handful of messages. Now imagine arriving frazzled on a Monday morning after a sleepless night and a hairy commute, only to find dozens, maybe hundreds, of emails have piled up since your sick day on Friday. Oh, and you have a meeting shortly that may touch on the contents of some of those emails.

Currently, none of these factors make any difference to your email software, though you might be a lot more likely to hurriedly scan messages in the latter situation – and maybe click on a suspicious link.

Koh, Russello and Lottridge envision a system that would take a back seat in the relaxed scenario but “swoop in for extra support,” as Lottridge puts it, in the high-stress situation. The system would also be personalised, because people react to situations in different ways and need different kinds of support, whether it’s a reminder to slow down when they’re jumpy or auto-translation when they’re tired.

Though the three computer scientists have been examining this area for a few years, they consider themselves to be in the early stages of the project because the field is so new. Other researchers have examined user attributes such as personality, culture and age, but those factors can’t be changed, whereas situations can, says Lottridge.

Email design

Lottridge’s background in user experience – she used to research UX for Yahoo in Silicon Valley – has given her the tools to consider various aspects of email design and how they might influence users.

For example, many email service providers use a “clean” design that emphasises an email sender’s name over their email address. However, display names are easy to fake, whereas there’s a world of difference between your.boss@yourcompany.com and your.boss@q3794pa23xx.com.

That’s not to say the clean design is always wrong, but “if you receive an email from someone you’ve never heard from before, maybe the visual presentation could be changed to make certain things very salient,” says Lottridge.
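
As a rough illustration of the kind of heuristic such a design might draw on, here is a minimal sketch in Python. The contact-history structure and the function are hypothetical, not the researchers’ implementation: the idea is simply that when a familiar display name arrives from an unfamiliar address, the client could make the full address much more salient.

```python
# Hypothetical sketch: decide whether a mail client should make the full
# sender address highly visible instead of showing only the display name.
# The contact-history structure and logic are illustrative assumptions.

def should_highlight_address(display_name, address, known_contacts):
    """Return True if this sender deserves extra visual salience.

    known_contacts maps a lower-cased display name to the set of addresses
    previously seen for that name, built from the user's own mail history.
    """
    seen = known_contacts.get(display_name.lower(), set())
    if not seen:
        return True  # never heard from this name before
    return address.lower() not in seen  # familiar name, unfamiliar address


# Example: "Your Boss" normally writes from yourcompany.com
history = {"your boss": {"your.boss@yourcompany.com"}}
print(should_highlight_address("Your Boss", "your.boss@q3794pa23xx.com", history))  # True
print(should_highlight_address("Your Boss", "your.boss@yourcompany.com", history))  # False
```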


Email alerts as they stand can also be problematic. For example, the University of Auckland’s email system warns you when you try to send an email outside the university domain, even if you’re writing to a trusted contact you’ve emailed hundreds of times. Indiscriminate alerts like that train users to ignore warnings, says Russello.

“I have a lot of empathy for the user,” says Lottridge. “Design is a big part of this. We need to think more about the design around the user instead of just saying they’re the weakest link.”

Monitoring users and their situations

Russello and Lottridge are running a series of experiments in which they set up phishing simulations and monitor participants with sensors that track physiological responses such as heart rate, and with eye trackers that show where they are looking. Koh is using this data to train a machine-learning system to intervene when it’s most useful.
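
A simplified sketch of how such signals might feed a model follows; the feature names, synthetic data and scikit-learn classifier are assumptions for illustration, not the team’s actual pipeline.

```python
# Illustrative sketch only: learn to predict moments of phishing
# susceptibility from physiological and context features. The features,
# synthetic data and model choice are assumptions, not the team's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: [heart_rate_bpm, gaze_dwell_on_sender_ms, unread_email_count]
X = rng.normal(loc=[75, 800, 30], scale=[12, 400, 25], size=(500, 3))
# Synthetic labelling rule: stressed and overloaded -> more susceptible
y = ((X[:, 0] > 85) & (X[:, 2] > 40)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# A deployed assistant could score fresh sensor readings like this and
# "swoop in" with extra support only when the predicted risk is high.
risk = model.predict_proba([[95, 150, 120]])[0][1]
print(f"estimated susceptibility right now: {risk:.2f}")
print(f"held-out accuracy on synthetic data: {model.score(X_test, y_test):.2f}")
```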

“We don’t want to spy on people,” emphasises Russello. “The personalised approach is also a privacy approach.”

“Most of my projects revolve around using machine learning or AI for good,” says Koh. “This fits right in, because it’s about digital well-being, and having control over your data is a big part of digital well-being.”

One approach might be for personalised information to be stored only locally, the way New Zealand’s Covid tracer app stores a user’s location history only on their phone. Another approach might be to collect de-identified personal information with clear, informed consent.
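
As one possible shape for the local-only option, the short sketch below keeps a personal profile in a file on the user’s own machine and never transmits it; the file location and fields are hypothetical.

```python
# Hypothetical sketch of the "store it only locally" option: the personal
# profile lives in a file under the user's home directory and is never
# sent anywhere. The location and fields are illustrative assumptions.
import json
from pathlib import Path

PROFILE_PATH = Path.home() / ".phishing_support" / "profile.json"

def load_profile():
    if PROFILE_PATH.exists():
        return json.loads(PROFILE_PATH.read_text())
    return {"baseline_heart_rate": None, "preferred_support": "slow-down reminder"}

def save_profile(profile):
    PROFILE_PATH.parent.mkdir(parents=True, exist_ok=True)
    PROFILE_PATH.write_text(json.dumps(profile, indent=2))

profile = load_profile()
profile["baseline_heart_rate"] = 72  # updated from on-device measurements only
save_profile(profile)
```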


“People say, ‘I don’t want to be tracked.’ But you’re already being tracked,” says Lottridge. “An academic institution, where we’re constantly having discussions about ethics and privacy, is a great place for this kind of technology to be created.”

Part of the reason for the experiments is to see which variables are most predictive of problems, so less important variables don’t have to be tracked, adds Lottridge.

Not all the trio’s experiments involve biometric data – surveys about distractions during phishing simulations and fully consented tracking of email habits have also provided valuable information.

Next steps

Eventually, Russello, Lottridge and Koh would like to spin out a company to commercialise their research. There is a lot of room to make an impact, says Russello – not only in combating phishing but also in tackling malware and ransomware directly.

Another possible application of their research might be in improving productivity, for example by helping users be aware of when they’re likely to be distracted.

The researchers are also working on expanding their view of issues such as privacy by bringing in Māori and other cultural perspectives, says Russello.

“Basically, we want to change the conversation from blaming people to being able to better understand situations that lead to more susceptibility,” says Lottridge. “Caring for individuals’ health isn’t just the right thing to do, it also may be one of the best ways to keep organisations safe.”
