Countless articles have been published about potential attacks using AI – Artificial Intelligence. But is this merely a product of our imagination, or are we actually already being attacked with AI? Which known attacks have truly been carried out with the help of artificial intelligence? In this article, we discuss several attacks in which AI is known to have played a role. Most of these attacks have also been discussed in the Trend, News & Insights reports, which also describe mitigating measures.
Creating artificial video and audio that imitates a real person is becoming increasingly accessible. This has led to a clear trend of attacks in which people are misled because they do not recognize these 'deepfakes.' One of the most striking attacks took place at a large company in Hong Kong, where an employee was the only real person in a Teams video call: the other participants, including the CFO, were deepfakes controlled by attackers. The call was set up after the employee had received a phishing email requesting transactions and had distrusted it. Thanks to the credibility of the deepfakes, the attackers were nevertheless able to steal nearly 24 million euros. A recent Dutch study showed that more than half of the participants could not distinguish between a real clip and an artificial clip of radio DJ Ruud de Wild.
Phishing emails are also becoming more realistic and relevant with the help of AI. Thanks to the learning capabilities of LLMs – Large Language Models – it is now possible to draft emails that match a company's writing style and are tailored to the recipient's context. This increases the credibility of the message and, consequently, the likelihood that the recipient will respond. One example is attacks targeting new employees shortly after they update their LinkedIn profile with their new company, role, and position. Information that previously required extensive research by an attacker is now gathered quickly and efficiently for a targeted attack: attackers use AI tools to continuously monitor changes on platforms like LinkedIn, and as soon as an employee lists a new employer, they become an attractive target. Using knowledge of the company's email address format, a targeted phishing email – spearphishing – is sent, requesting payment of supposedly overdue invoices. To increase urgency and credibility, the CEO is also brought into the conversation, or at least the suggestion of their involvement is created.
Generating large 'database' files filled with 'personal information' is another area where AI excels. In some cases, ransomware attackers have not even bothered to steal actual company data but have simply had it generated, then threatened to leak a 'stolen' database containing information on 50 million users. Here, too, AI's ability to combine data helped the attackers. However, the victim companies were able to demonstrate that the data was invalid, preventing reputational damage.

AI also speeds up 'reverse engineering,' particularly of security patches. Analyzing a patch or a PoC – Proof of Concept – and then building a program that exploits the corresponding vulnerability on unpatched systems is nothing new. With AI, however, the speed at which such exploits become available has increased drastically: recently, an exploit was run against vulnerable systems just 22 minutes after the PoC was released. Notably, the detection measures on the defending side also used AI models to protect the environment. While the specifics of how those models intervened were not shared, it was indicated that the automated intelligence detected the attacks faster than human employees could have updated firewall rules.
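The fabricated-database scam described above often fails on internal consistency: genuinely real identifiers, such as payment card numbers, carry checksums that naively generated digits rarely satisfy. A minimal sketch of this idea, using the well-known Luhn checksum; the naive generator here is a hypothetical illustration, not how any specific attacker's tooling is known to work:

```python
import random

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum
    used by real payment card numbers."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def naive_fake_card(rng: random.Random) -> str:
    """Hypothetical careless generator: 16 random digits, no checksum."""
    return "".join(str(rng.randrange(10)) for _ in range(16))

rng = random.Random(42)
sample = [naive_fake_card(rng) for _ in range(1000)]
invalid = sum(1 for n in sample if not luhn_valid(n))
print(f"{invalid} of 1000 naively generated numbers fail the Luhn check")
```

Roughly nine out of ten random 16-digit strings fail this check, so a defender sampling a few hundred records from an allegedly 'stolen' database can quickly estimate whether it was fabricated rather than exfiltrated.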
This last point shows the positive side of the story: while AI is used in attacks, it is deployed just as often, and just as effectively, for defense. It remains a race, but an ever faster one. Fully autonomous AI-driven attacks still seem a long way off; a human attacker still plays a crucial role.
Stay curious, stay safe!