How Cybercriminals Use Artificial Intelligence (AI) to Automate Spear Phishing Attacks, and What We Can Do to Protect Ourselves

Hannes Hartung

Hannes Hartung is co-founder and co-CEO of Increase Your Skills GmbH. Founded in 2017, the company has quickly become a major player in the security awareness market and was listed as one of the 100 fastest-growing startups in Germany in 2020. With their unique, interactive platform, their mission is to provide informative and engaging online training and phishing simulations that educate companies and employees on how to prevent, detect, and respond to cyberattacks. In this interview, Mr. Hartung outlines how cybercriminals use AI to automate spear phishing, and what we can do to protect ourselves against such attacks.

Hannes Hartung, co-founder and co-CEO of Increase Your Skills GmbH

Mr. Hartung, at present, there are a multitude of attack vectors in information security. How have these developed over time, and, in your opinion, what is the most common point of entry for cybercriminals?

The biggest challenge for companies lies in the multitude of assets they need to protect. Individual attackers or organizations can pick a target and spend years preparing attacks in advance. As defenders, we must monitor and protect all vectors simultaneously, which presents a considerable challenge for many companies. It is also important to note that the human attack vector remains the greatest threat.

Well, you say that people are the most critical attack vector, and we are all familiar with spam emails sent en masse, as well as phishing attacks. How have these attacks changed recently?

Here, too, attack vectors have increased considerably. Digitally based social engineering attacks range from simple phishing e-mails to SMS attacks (smishing) and telephone attacks (vishing). The cybercriminals' primary aim here is automating attacks in the form of phishing as a service. While AI as a service has long since arrived in commercial environments such as recruiting and sales, attackers are now using artificial intelligence to create very realistic attacks in an automated manner.

How exactly does the artificial generation of these attacks work? Where does AI now support these efforts?

‘Good’ attacks consist of three things: successful analysis of the target, a perfect contextual evaluation, and the creation of an attack derived from the former two elements. The analysis of the target usually takes place using what are known as open-source intelligence analyses: information that is freely available from a variety of sources is used to create an attack profile. These analyses can now be automated by intelligent services such as Humantic AI. The second step is the generation of plausible attack scenarios from the collected data.

In the last step, both analyses are combined and an attack with a specific context is created. By integrating psychological factors, such as invoking authority, manipulating emotions, or applying time pressure, one can create a perfect attack. The text itself is written automatically by autoregressive language models, such as GPT-3. These models use deep learning to create text that appears to have been written by real people.

That sounds intriguing in theory, but can AI keep up with manual attacks?

A research group from Singapore presented the results of a multi-year research project on this topic at the Black Hat conference in the United States in the summer of 2021. As it turned out, the models improved tremendously over time, and current AI-generated e-mails are almost indistinguishable from human text.

Within our own company, we have carried out additional tests, and the results were shocking. The artificially created texts appeared so realistic that they were indistinguishable from texts created manually. Thanks to a preceding OSINT (open-source intelligence) analysis, the texts even contained specific details tailored to the individual test subjects.

If attacks can no longer be differentiated from real e-mails, can we protect ourselves from them at all? If so, how?

The aim should always be to reduce the likelihood of a successful attack; one can never completely rule out success where these attacks are concerned. In information security, we talk about striking a balance between user-friendliness and security. For example, I could avoid using e-mail altogether and thereby eliminate this attack vector; however, this is hardly feasible in a business context. It is therefore recommended to implement a sustainable security awareness strategy.

Ideally, a security awareness management system is integrated directly into the existing information security management system. In addition to technical protection mechanisms, employee behavior must also be made “safer” through training. Regular attack simulations, training through e-learning, live workshops, and poster campaigns can all help here. It is particularly important to train for potential social-engineering attack scenarios repeatedly, in order to embed a sustainable level of security in the organization. The speed with which attackers and their attacks continue to develop is alarming, and companies must therefore remain agile and adapt to current developments at all times.
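To make the "technical protection mechanisms" mentioned above concrete: one common defensive check is to look at a message's authentication results (SPF, DKIM, DMARC) before trusting its apparent sender, since spoofed spear-phishing mail often fails these checks. The sketch below is purely illustrative and not a description of any specific product; it assumes the receiving mail server has stamped an Authentication-Results header (RFC 8601) on the message, and the sample message is invented for demonstration.

```python
import email
from email import policy

def auth_failures(raw_message: str) -> list[str]:
    """Return the authentication mechanisms (spf/dkim/dmarc) that did not pass.

    A message with no Authentication-Results header at all is flagged
    wholesale, since nothing about its origin was verified.
    """
    msg = email.message_from_string(raw_message, policy=policy.default)
    results = msg.get("Authentication-Results")
    if results is None:
        return ["spf", "dkim", "dmarc"]
    lowered = str(results).lower()
    # A mechanism counts as failed unless the header explicitly says "pass".
    return [m for m in ("spf", "dkim", "dmarc") if f"{m}=pass" not in lowered]

# Invented sample: a spoofed "CEO" mail that fails DKIM and DMARC.
sample = (
    "From: ceo@example.com\r\n"
    "Authentication-Results: mx.example.net; spf=pass; dkim=fail; dmarc=fail\r\n"
    "Subject: Urgent wire transfer\r\n"
    "\r\n"
    "Please transfer immediately.\r\n"
)
print(auth_failures(sample))  # → ['dkim', 'dmarc']
```

Such a check is of course only one layer; as the interview stresses, it complements rather than replaces trained, attentive employees.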

Please remember: this article is based on our knowledge at the time it was written – but we learn more every day. Do you think important points are missing, or do you see the topic from a different perspective? We would be happy to discuss current developments in greater detail with you and your company’s other experts, and we welcome your feedback and thoughts.

And one more thing: the fact that an article mentions (or does not mention) a provider does not represent a recommendation from CyberCompare. Recommendations always depend on the customer’s individual situation.