With the broad public availability of Artificial Intelligence (AI), or more specifically Large Language Models (LLMs), we suddenly have technologies within reach that, depending on which media outlet you read, will either bring about our downfall or solve all of the world's problems. What is clear is that they can help us, for good and for ill: any tool can be misused, consciously or unconsciously. Various media outlets warn that cybercriminals are becoming even more dangerous thanks to the capabilities of LLMs. But what exactly are these dangers, and what risks does the public availability of AI pose? In this article, we describe some of the offensive dangers of Artificial Intelligence.
A now well-known risk concerns data leaks through AI systems. Because these models learn from their input, what is fed into an LLM can be reused in later queries. Take a simple example in which program code is written with the help of AI and authorization keys are copied and pasted into the prompt: those keys become 'available' in an environment the owner no longer controls. It is possible that they later surface in someone else's AI chat. An attacker only needs to search for them actively, for instance by asking the AI chat to help write code that authenticates with authorization keys within a program; there is a chance that the suggested examples contain real keys. The attacker then still has to validate whether those keys can actually be misused, which, for now, involves a significant amount of manual work and querying.
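From the defender's side, the same observation suggests a simple countermeasure: check code for key-like strings before it is pasted into an AI chat or committed. The sketch below is a minimal, hypothetical example (the patterns and the snippet are assumptions for illustration, not a real secret scanner, which would use far more rules and entropy checks).

```python
import re

# Illustrative patterns only; real secret scanners use many more rules.
KEY_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api|auth)[_-]?key\s*=\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
]

def find_possible_keys(source_code: str) -> list[str]:
    """Return substrings that look like hard-coded authorization keys."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(source_code))
    return hits

snippet = 'api_key = "EXAMPLEEXAMPLEEXAMPLE1234"  # pasted from an AI chat'
for hit in find_possible_keys(snippet):
    print("Possible leaked key:", hit)
```

Running such a check as part of code review or a pre-commit hook keeps keys out of prompts and repositories in the first place, so there is nothing for a model to reproduce later.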
Within information security, AI has become an integral part of defensive systems. AI systems can quickly learn to detect anomalous patterns and decide whether to block or allow a user or program. However, this learning ability, based on large amounts of training data, is also the weakness that attackers can exploit. By feeding AI models misleading data, they can be steered into wrong decisions: intrusion detection systems and malware detectors fail, employees get blocked, attackers are let through, and malicious software gets installed. Because these decisions are made autonomously, there is also the risk that humans are no longer in control.
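The mechanism is easiest to see with a deliberately simplified 'detector'. The sketch below is a hypothetical example, not a real intrusion detection system: it flags requests larger than the mean plus three standard deviations of its training data. If an attacker manages to get oversized samples labelled as normal into that training data, the learned threshold shifts and a genuinely malicious request is waved through.

```python
# Simplified illustration of training-data poisoning against a naive
# anomaly detector that flags requests larger than mean + 3 * stddev.
from statistics import mean, stdev

def train_threshold(request_sizes: list[int]) -> float:
    return mean(request_sizes) + 3 * stdev(request_sizes)

clean_training = [200, 220, 210, 230, 205, 215]          # normal traffic
poisoned_training = clean_training + [5000, 5200, 5100]  # injected "normal" samples

attack_request = 4800  # oversized request the detector should flag

print("clean threshold:   ", round(train_threshold(clean_training)))
print("poisoned threshold:", round(train_threshold(poisoned_training)))
print("attack flagged after poisoning?",
      attack_request > train_threshold(poisoned_training))
```

With the clean training data the attack request is far above the threshold; after poisoning, the threshold has moved so much that the same request passes unnoticed.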
In addition, the ability of AI systems to process large amounts of data and perform repetitive tasks can itself be turned into an attack: think of effortlessly spreading massive amounts of disinformation via social media, or running phishing campaigns without pause. AI can also easily and continuously 'check' publicly reachable systems for vulnerabilities. With the added intelligence, the goal shifts from merely finding vulnerabilities to knowing which ones can actually be exploited.
Although such techniques have been applied for some time, it is the scalability and accessibility of this information that presents a realistic danger. Information about vulnerable systems is now available to a much larger group of malicious actors. Add the support of AI chats in building malicious software to exploit those vulnerabilities, something previously reserved for a limited group, and an unprecedentedly large army becomes capable of carrying out attacks. Some of these individuals may not even realize what they are actually doing. That said, current AIs are not yet advanced enough to write perfect code.
Their learning ability and endless patience also make AI systems highly suitable for phishing campaigns and social engineering. Since the public launch of ChatGPT, phishing emails have become 'better', according to various studies: less generic, more tailored, and very difficult to distinguish from genuine messages. The standard messages of the past are now automatically enriched with specific company details.
The same learning ability makes so-called deepfakes easy to produce, deceiving users through audio or video. Where in the past a message sent in the director's name was a common way to gain trust and prompt (or prevent) a response, it is now possible to place a phone call or leave a voicemail in the director's own voice to gain that trust.
Finally, there is the tendency to hallucinate. Not an offensive capability in itself, but still problematic if you go mushroom hunting with a book generated by AI. In one such book, offered as an e-book through a well-known online store, tasting was recommended as a way to distinguish dangerous mushrooms from harmless ones. How can you reasonably tell which information to trust?
The risks described above are not fundamentally new, but their execution has taken on a new form. Existing protections often still suffice, but additional measures that scale along with the threat are desirable.
The key factors are the speed at which attacks are carried out and the frequency with which the type of attack changes, and mitigating measures need to address both. For example, keep employees informed about emerging attacks more frequently: run a phishing campaign four times a year instead of once, and send short, regular messages on how they can help keep the environment secure. In addition, educate employees on what is acceptable and desirable when using AI within the organization.
Vulnerabilities in systems should be addressed as quickly as possible, ideally immediately after patches are released. Insight into these vulnerabilities can be gained through penetration testing; alternating such tests with other forms, such as Red Team exercises, gives an even better view of the organization's overall security posture.
Audit trails and logs also deserve extra attention. Use a SIEM solution that is configured more specifically than the default rules, continuously monitor its output, and adjust where needed. This is part of an ongoing cycle of monitoring and tuning: know what is happening in your environment, and stay informed about developments in attack techniques so that security measures can be refined accordingly.
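As an illustration of what 'more specific than the default rules' could look like, the sketch below is a hypothetical detection rule written in Python; the field names, window, and threshold are assumptions and would need to be tuned to what is normal in your own environment and translated into your SIEM's own rule language.

```python
# Sketch of a custom detection rule of the kind a SIEM could run on top of
# its default rule set: flag accounts with an unusual burst of failed logins.
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=10)
MAX_FAILURES = 15  # tune to your own baseline

def burst_of_failures(events: list[dict]) -> set[str]:
    """Return accounts with more than MAX_FAILURES failed logins inside WINDOW."""
    per_account = defaultdict(list)
    for event in events:
        if event["action"] == "login_failed":
            per_account[event["account"]].append(event["timestamp"])

    flagged = set()
    for account, times in per_account.items():
        times.sort()
        for i, start in enumerate(times):
            in_window = [t for t in times[i:] if t - start <= WINDOW]
            if len(in_window) > MAX_FAILURES:
                flagged.add(account)
                break
    return flagged
```

The value of such a rule lies less in the code itself than in the habit it represents: reviewing the output regularly, adjusting thresholds as the environment changes, and adding new rules as attack techniques evolve.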
Stay curious, keep developing!