The S-Unit

Facing AI-driven attacks: facts, risks and practical steps to prepare your organisation

Imagine this: you receive a call from your managing director urgently asking you to transfer money. The voice sounds familiar, but it’s not your director. It’s an AI-generated deepfake. Would you fall for it?

Generative AI (GenAI) is evolving at a rapid pace. Businesses are using the technology to create content, speed up processes and serve customers more effectively. But the same innovations are also being exploited by cybercriminals.

And it's moving fast: According to a recent survey by Gartner It appears that the number of cyberattacks using generative AI (GenAI) is increasing significantly. Nearly one in three organisations reported an attack on the infrastructure of their GenAI applications in the past 12 months. In addition, more than six in ten organisations said they had been confronted with deepfakes used for fraud or deception.

This shows that the threat is not a vision of the future but a reality today. For The S-Unit, it confirms what we already observe in practice: organisations are increasingly facing attacks that have become smarter, faster and more convincing through the use of generative AI.

How generative AI is changing attack methods

The rise of generative AI is expanding the attack surface in several ways.

  • Deepfakes and synthetic content
    Where we once mainly warned against poorly written phishing emails, we now face audio and video files that are almost indistinguishable from the real thing. A fraudster can create a convincing video of a CEO urgently demanding a payment, or mimic a manager’s voice in a phone call. For employees, it is almost impossible to recognise this without proper training or tools.
  • Manipulation of AI models
    Attackers know exactly how to manipulate AI. Through clever and often subtle alterations of prompts or input data, they can twist systems into something no longer reliable. The output can suddenly become misleading, biased or even harmful. For organisations that use AI in customer service, financial processes or decision-making, this poses a significant risk.
  • AI accelerated phishing
    Phishing emails full of spelling mistakes? That’s a thing of the past. With the rise of generative AI, cybercriminals can now produce flawless, persuasive and even personalised messages, faster and in greater numbers. Emails and chats so convincing that employees can hardly tell them apart from legitimate communication. The result: the likelihood of someone falling for them is increasing rapidly.
  • Attacks on the AI infrastructure itself
    It’s not just the output that’s vulnerable: the underlying models, APIs and cloud environments have also become targets. An attack on these can lead to data breaches, downtime or unauthorised access to sensitive information.

Why quick fixes don’t solve AI security risks

Many organisations respond instinctively: purchasing a new product, installing a quick tool or setting up an isolated protocol. But as Gartner analysts emphasise, a rushed approach is often ineffective. The reality is that GenAI risks are not separate from existing threats. They represent an evolution of familiar attacks like phishing, social engineering and manipulation, now amplified by AI.

A solid defence must therefore be balanced: securing the fundamentals, understanding new threats and taking targeted measures in response.
Linda van Ruth-Prijs, CCO The S-Unit

Linda van Ruth

CCO The S-Unit

Practical steps to strengthen your cyber resilience

At The S-Unit, we believe in pragmatic, achievable steps. Organisations don’t have to do everything at once, but they can start building resilience today.

Strengthen the foundation

The fundamentals of cybersecurity remain crucial.

  • Strong authentication: Multi-factor authentication (MFA) is essential. AI makes identity fraud easier, so additional layers of verification are crucial.
  • Networksegmentation: By separating systems, you prevent a successful attack from spreading immediately.
  • Monitoring & detection: AI-driven attacks are often subtle. Continuous monitoring with anomaly detection tools helps to quickly identify irregularities.
Increase knowledge and awareness

Seventy per cent of IT security incidents are caused by employees themselves. They might click on a link in a phishing email, download unsafe software or share (confidential) information with people outside the organisation. They are often insufficiently aware of their role, and cybercriminals exploit this. So, what can organisations do?

  • Training in AI threats: Employees need to know that deepfakes exist, how convincing they can be, and how to recognise the warning signs through awareness training.
  • AI literacy: Establish clear policies for the use of GenAI, including which applications are permitted or prohibited, and how employees should handle output and data responsibly.
  • Simulations and exercises: Just like phishing tests, deepfake or AI attack simulations can help organisations increase their alertness.
Integrate AI-specific security measures

Generative AI requires additional measures.

  • Security by design: Work with developers to ensure security is built into AI applications from the start. Adding it afterwards is more costly and less effective.
  • Specific detectiontools: Invest in solutions that can detect anomalies in AI output, such as manipulated prompts or suspicious synthetic content.
  • Collaboration & sharing knowledge: Share experiences with partners and industry peers. AI attacks are new territory, and learning progresses faster through knowledge exchange.

AI: opportunity and challenge

Generative AI is here to stay. In fact, it’s becoming increasingly integrated into the way we work and communicate. The key is to harness its potential while staying aware of the risks.

Our vision at The S-Unit is that digital resilience starts with trust, insight and collaboration. We aim not only to protect organisations against today’s threats, but also to prepare them for the challenges of tomorrow.

Our advice:

  • Take facts seriously: Attacks using GenAI are not a hype but a reality.
  • Innovate responsibly: Implement a sandbox in which GenAI can be handled safely.
  • Proactive approach: Don’t wait for an incident to occur. Take action now, for example by using monitoring tools designed to detect irregular AI activity.

The question isn’t whether AI will be used against your organisation, but when. Deepfakes, AI manipulation and AI-accelerated phishing are making the threat landscape more complex than ever. Yet there’s good news: with a layered defence, informed teams and security built into AI from the start, you’ll take a strong step towards securing your digital future.

AI-driven attacks won’t stop at the door. Find out how resilient your team is. We’re happy to help! 

Speak to our experts for more information.