Social engineering involves getting targets to do what the fraudster wants, such as handing over sensitive information or installing viruses © Alamy

While talk of cyber security breaches may conjure up images of advanced weapons, elite hackers and system-wide disruption, many attacks rely on less technically demanding techniques.

Some are the result of social engineering: the art of getting targets to do what the fraudster wants. That might mean handing over sensitive information such as bank details and passwords, installing viruses or clicking through to a website hosting malware.

“Social engineering is about influencing someone to take an action that may or may not be in their best interest,” says Ian Harris, a computer scientist at the University of California, Irvine. These techniques are not new, he adds. “Now we have the technology that makes it easier to reach people.”

The Nigerian prince scam is the most infamous example of a common social engineering attack known as phishing. “[Phishing attacks] tend to be generic and not targeted,” says Prof Harris. They rely on victims carrying out instructions, often with the offer of a payout in return. Unsophisticated as they are, these emails are easy to send in bulk: if even a few victims hand over their details, phishing can prove lucrative for scammers.

But as the public has grown more wary, fraudsters’ techniques have evolved into a confidence trick arms race. “Not only are we seeing online scams become more prevalent, they’re also becoming more sophisticated,” says Gillian Guy, chief executive of Citizens Advice, a UK consumer charity.

Among the more advanced techniques is spear phishing. Whereas regular phishing relies on the probability of a target clicking on a link, the specialised variant requires a more plausible, personal explanation for why they should carry out a given action.

Marcel Carlsson, a security consultant and researcher, says that the amount of information available online has made planning spear phishing attacks easier. “Back in the day people would go through garbage to find letters,” he explains. “Now you have Google Maps, companies post their information on Facebook and people put personal stuff on social media.”

There are a number of options to mitigate the risk from social engineering attacks. At a basic level, email providers have long attempted to filter potentially harmful messages.

Another approach comes from Prof Harris and Mr Carlsson, who have developed a tool to fight fraudsters. It runs on the premise that every example of social engineering has one of two “punchlines”.

“You either have to ask someone a question about private info, or you tell them to do something,” says Prof Harris. The algorithm has been trained on 100,000 phishing emails to highlight the most common examples of attacks.

“It takes each sentence and detects, if it is a question, whether it has a private answer. If it is a command, it looks it up against its blacklist,” Prof Harris says. Mr Carlsson adds that the system can be customised for different sectors with varying buzzwords to maximise its efficiency.
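In rough terms, that two-punchline check could be sketched as follows. This is a minimal illustration only: it uses simple keyword lists where the researchers describe a classifier trained on phishing emails and sector-specific buzzword lists, and the function names and word lists below are invented for the example.

```python
import re

# Illustrative stand-ins for the trained model and the sector-specific
# "buzzword" lists described in the article. These are not the real lists.
PRIVATE_TOPICS = {"password", "pin", "account", "card number", "date of birth"}
COMMAND_BLACKLIST = {"click", "download", "install", "transfer", "wire", "verify"}


def split_sentences(text: str) -> list[str]:
    """Rough sentence splitter on ., ! and ? boundaries."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def is_question(sentence: str) -> bool:
    return sentence.rstrip().endswith("?")


def asks_for_private_info(sentence: str) -> bool:
    """Punchline 1: a question whose answer would be private."""
    lowered = sentence.lower()
    return any(topic in lowered for topic in PRIVATE_TOPICS)


def is_blacklisted_command(sentence: str) -> bool:
    """Punchline 2: an instruction whose verb is on the blacklist."""
    words = sentence.lower().split()
    return bool(words) and words[0] in COMMAND_BLACKLIST


def flag_social_engineering(text: str) -> list[str]:
    """Return the sentences that hit either punchline."""
    flagged = []
    for sentence in split_sentences(text):
        if is_question(sentence) and asks_for_private_info(sentence):
            flagged.append(sentence)
        elif not is_question(sentence) and is_blacklisted_command(sentence):
            flagged.append(sentence)
    return flagged


if __name__ == "__main__":
    email = ("Your account has been suspended. "
             "Click the link below to restore access. "
             "What is your card number and date of birth?")
    for hit in flag_social_engineering(email):
        print("FLAGGED:", hit)
```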

Mr Carlsson says that while there is a growing awareness of the technical component of cyber security, there is also a need for businesses to improve the human aspect. “Businesspeople think [cyber security] is about IT, but [social engineering] is a risk to a business like any other.”

Prof Harris agrees, pointing out how social engineering often exploits the way society works. “It will always be there, the human element: people are usually eager to help you in their jobs.” One way to mitigate the risk is through “red teaming” — having cyber security professionals pretend to be attackers and test how prepared employees are to repel fraudsters.

The internet is not the only avenue for social engineering, with individuals also facing attacks over the phone and in person. Prof Harris found in his experiments that about one-quarter of his students fell for social engineering attacks over the phone. He says that the next step for their tool is to see whether it can be used to detect social engineering calls.

Even as efforts to fight social engineering improve, artificial intelligence presents a growing threat to the very idea of truth. “Deepfakes are moving so fast,” says Mr Carlsson, referring to videos, photos and audio clips of humans created by algorithms. “You can download some tools on a laptop and create them.”

There is already evidence that social engineers see the potential of deepfakes. The Wall Street Journal reported in August that fraudsters used an audio deepfake to imitate a chief executive. “It’s scary that in the near future it will be hard for you to authenticate what you hear,” Mr Carlsson continues. “What can you trust?”
