Large Language Models in the Security Operations Centre: 2025 will be the year of the security analyst with AI support

aDvens, one of the leading independent cybersecurity companies in Europe, has identified the use of Large Language Models (LLMs) as one of the key trends in the field of Security Operations Centers (SOC) for the coming year. LLMs are a type of artificial intelligence (AI) capable of understanding and generating text content through machine learning. Platforms such as ChatGPT and similar tools have brought LLMs to public attention, particularly in the past two years. However, AI has long been used in modern SOCs, especially when it comes to identifying potential threats.

The role of a Security Operations Center (SOC) is to detect, analyze, and resolve security incidents in a network or system in real-time. This process requires a certain degree of flexibility – SOC experts must dynamically adapt to different cases and situations. At the same time, a race against time begins once a security incident is identified. A large volume of information needs to be processed and analyzed to determine whether the incident represents a real threat.

“LLMs are the scouts in the SOC. In the case of a potential incident, they serve to map out all possible scenarios and develop the first steps toward different explanatory approaches. In the next step, the human experts decide which of these approaches should be pursued,” says Arthur Tondereau, Data Scientist at aDvens.

The Benefits of LLMs: Natural Language and Their Evolution into AI Agents

The unique feature of LLMs is their ability to understand natural language and translate it into machine-readable language. For example, SOC experts can pose simple questions such as, “What happened one hour before Incident X?” to which an LLM can retrieve and summarize the requested information via interfaces with other SOC tools. This significantly reduces response times to incidents.

This efficiency gain becomes even more pronounced when LLMs are further developed into AI agent-like solutions. These are LLMs trained to perform specific tasks tailored to the individual needs of experts, such as interfacing with a log analysis tool or the SOC’s knowledge base. The LLM or AI agent thus becomes an essential part of the SOC toolkit for identifying, analyzing, and resolving potential threats.

Preventing Hallucinations

At the same time, LLMs remain a tool that cannot and should not replace human expertise. By default, LLMs are designed to provide an answer “at all costs,” which can lead them to fabricate information when in doubt – a phenomenon known as “hallucination,” familiar to users of tools like ChatGPT.

To prevent this, safeguards are required: SOC teams can integrate safety loops into the process to ensure that information and source references are systematically checked for accuracy. Transparency must be guaranteed at every stage of the process. Each query must be visible to the experts, along with the reasoning and derivations made by the LLMs. Finally, human SOC employees remain indispensable, as only teams with the necessary expertise can verify whether the information provided by the LLM is accurate.

Considering Regulatory and Security Aspects

With the increasing integration of LLMs into SOCs, regulatory aspects must also be taken into account. For instance, under the European Union’s AI Act, SOCs that monitor critical infrastructure will themselves be considered critical infrastructures in the future – and must be protected accordingly. These security requirements will also apply to the LLMs used within SOCs, especially since LLMs, like any other system, have vulnerabilities that attackers could exploit. One example of such a vulnerability would be a prompt injection that forces the LLM to ignore suspicious activities.

In the long run, the use of LLMs will become indispensable for SOC providers to meet the demands of their clients. However, this also places the responsibility on each provider to establish the necessary frameworks within their SOCs to integrate LLMs in a way that enhances productivity, ensures security, and complies with all relevant regulations. This way, LLMs can optimally support expert teams.

aDvens GmbH is part of the Provider Directory

Scroll to Top