Artificial intelligence in cybersecurity

In the field of information security, AI-powered technology is still in the early stages of introduction and operation. We can confidently assert that the integration of AI into security tools has clear advantages: reduced employee workloads and faster incident response through automation of routine processes, behavioral analysis of users and systems, and successful detection of previously unknown threats. Today, AI already serves as a co-pilot alongside cybersecurity professionals, complementing and expanding the capabilities of traditional security solutions.
Roman Reznikov
Analyst, Research Group of PT Cyber Analytics

Introduction

In the last few years, the status of artificial intelligence technology has shifted: what was once a technological novelty predicted to conquer and transform the world is now a data-processing tool widely used for handling routine tasks across various industries. In the field of information security, AI-powered technology is still in the early stages of introduction and operation. We can confidently assert that the integration of AI into security tools has clear advantages: reduced employee workloads and faster incident response through automation of routine processes, behavioral analysis of users and systems, and successful detection of previously unknown threats. Today, AI already serves as a co-pilot alongside cybersecurity professionals, complementing and expanding the capabilities of traditional security solutions. As this technology proves to be a reliable and precise tool—and as issues around data, computing power, training, and development are cleared—we will gradually see the transition to a full AI autopilot in cybersecurity. 

Artificial intelligence (AI) refers to a class of computer systems that mimic human cognitive processes. Currently, only a "narrow" AI has been developed, meaning that it can only perform a specific set of tasks. A "general-purpose" AI capable of making its own decisions, comparable to human consciousness, has not yet been created.

Machine learning (ML) combines neural network, deep learning, and natural language processing algorithms to streamline and automate workflows.

In this study, we explore key applications of AI in cybersecurity, and present a heat map of defense tactics and techniques from the MITRE D3FEND matrix that involve artificial intelligence. 

Key roles of AI in cybersecurity

AI technology supports a wide range of tasks in attack prevention, as well as during incident detection, response, and remediation. We have grouped these various applications around three main goals: reducing the workload on cybersecurity personnel; detecting abnormal user, application, and system behavior and enhancing threat detection; and automating security systems.

Reduced workload and assistance to security professionals

AI technology can automate security routines, such as initial processing of security events and other data that employees currently have to analyze manually. Additionally, LLM-based chatbots can provide real-time support in the decision-making process when countering cyberthreats.

Large language models (LLMs) are AI-powered systems designed to process and generate natural language using artificial neural networks trained on huge data sets.

Detection of anomalies and extended detection of threats 

Ensuring information security requires processing many different data streams and arrays. AI-powered systems excel at analyzing large datasets to detect anomalies that serve as indicators of a cyberattack. These anomalies can include deviations from typical user or system behavior, unusual network traffic patterns, and other suspicious events that may indicate, for example, the presence of previously unknown malware. 

Automation of security systems

AI-powered solutions can automate not just threat detection, but also decision-making, response, and incident prevention. The level of automation may vary from suggesting a ready-made response script to acting as a full-fledged autopilot.

AI at every stage of cybersecurity

Prevention

With AI, defense teams can proactively detect, forecast, and prevent cyberthreats, gaining more time to prepare for and repel attacks.

Company security analysis

One of the key requirements in corporate cybersecurity is understanding the organization's infrastructure and its relevant threats. AI-based solutions can help automate security assessments. Generative AI assistants help security professionals analyze threats and risks to the organization by letting them ask questions in natural language. For example, CrowdStrike's Charlotte AI can answer questions about existing vulnerabilities in infrastructure, risk levels, optimal protection strategies, and potential cybercriminals targeting the company. Knowing your own infrastructure helps spot key weak points in the perimeter, including shadow IT, and helps prevent cybercriminals from gaining initial access. Notably, access brokerage is a major service on the dark web, with 21% of listings related to Gulf countries involving buying or selling access.

Pentesting and attack simulation

AI technology is already being used to automate certain steps in penetration testing—especially to find and exploit vulnerabilities. As we noted in our study on AI in cyberattacks, AI can significantly assist pentesters in discovering and exploiting vulnerabilities. This is supported by academic research conducted by various universities, as well as practical experiments. For example, a Positive Technologies engineer used ChatGPT to find an XXE vulnerability in a browser, and Project Zero found a zero-day vulnerability in SQLite in November 2024 using an AI-based tool.

Beyond research, AI technology for pentest automation is either in development or already publicly available. Examples include extensions to the popular BurpSuite tool and standalone tools like the one from the XBOW startup, launched in 2024. XBOW's creators claim their tool can perform as well as a skilled pentester, and during testing, it found several critical CVEs in real-world applications. In early 2025, Positive Technologies launched PentAGI, an AI-driven tool for fully autonomous penetration testing. Tools like this will become increasingly common, speeding up pentesting and allowing testers to automate routine steps and focus manual efforts on more complex tasks. Importantly, while these tools will help white-hat hackers, they could also enable cybercriminals to automate simple attacks. We addressed this in our Q3 2024 cyberthreat report. It is essential to begin proactively defending against attacks that could soon be automated, without waiting for attackers' tools to become more advanced. We recommend paying special attention to vulnerability management processes and participating in bug bounty programs.

Another way to test a company's defenses is through BAS (breach and attack simulation) tools. BAS tools and auto-pentests can potentially apply AI to various subtasks. For instance, PT Dephaze uses generative AI to generate likely passwords for a specific target, analyze text files in a system under testing, and create reports based on the simulation results. 

OSINT for current cyberthreats

AI in OSINT1 makes it possible to efficiently search for, compile, and analyze data to predict threats. AI can be used to tackle specific tasks in determining relevant threats, such as dangerous, trending vulnerabilities that cybercriminals are actively exploiting. Exploiting vulnerabilities remains one of the main attack vectors against companies: it featured, in particular, in every fourth (25%) corporate breach in the Middle East in 2024. Our data indicates that, on average, there is only about a 24-hour window between an exploit emerging and its active use. This highlights the critical importance of promptly identifying trending vulnerabilities and initiating the patching process as soon as possible. By analyzing discussions around vulnerabilities in public sources, such as cybersecurity communities on social media sites, it is possible to detect trending vulnerabilities, both new and long-known ones. Trending topics in such discussions can help forecast which vulnerabilities cybercriminals are likely to target in the near future. This approach is implemented in MaxPatrol VM. Every day, the model generates predictions about vulnerability exploitation based on dozens of parameters, and then sends security experts the top 20 most dangerous ones according to its forecasts.
 


  1. OSINT: open-source intelligence.

In the future, AI technology will be applied to broader tasks related to data gathering and predictive threat detection. By collecting data from a wide range of public sources, we can build a potential threat profile for a company, one that accounts for its infrastructure and suppliers, as well as the current motives and capabilities of cybercriminal groups that may target the organization. Automated predictive analysis not only surpasses manual data collection in speed but can also run continuously, providing security teams with timely insights into emerging threats. This helps them plan effectively, prioritize actions, and implement proactive cybersecurity measures in advance.
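To illustrate the trending-vulnerability idea described above, here is a minimal, hedged sketch in Python: it ranks CVEs by how quickly mentions in public sources grow and whether a public exploit exists. The CVE identifiers, fields, and weights are invented for the example and are far simpler than the dozens of parameters a production model such as the one in MaxPatrol VM would use.

  # Minimal sketch of trending-vulnerability ranking: score CVEs by how fast
  # mentions grow in public sources and whether a public exploit exists.
  from dataclasses import dataclass

  @dataclass
  class VulnSignal:
      cve: str
      mentions_today: int
      mentions_last_week: int
      exploit_published: bool

  def trend_score(v: VulnSignal) -> float:
      growth = v.mentions_today / max(v.mentions_last_week / 7, 1)   # vs. average daily mentions
      return growth * (2.0 if v.exploit_published else 1.0)

  signals = [  # placeholder data, not real CVE statistics
      VulnSignal("CVE-2024-0001", mentions_today=120, mentions_last_week=70, exploit_published=True),
      VulnSignal("CVE-2023-9999", mentions_today=5, mentions_last_week=40, exploit_published=False),
  ]
  for v in sorted(signals, key=trend_score, reverse=True):           # top of the daily forecast
      print(f"{trend_score(v):7.1f}  {v.cve}")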

Code review

AI technology accelerates code reviews, vulnerability detection, detection of potentially malicious snippets, and test generation. AI can also be used for reverse engineering code and detecting hidden malicious functions, for example, in repositories connected to a project. AI can be introduced into both static and dynamic code analysis methods. 

Static analysis focuses on finding potential issues in the code during development without running it. It helps detect vulnerabilities and bugs early, making it cheaper and easier to fix them. Static analysis tools enhanced with AI, such as Snyk, are trained on large volumes of code and can detect issues like SQL injection risks, XSS vulnerabilities, or keys and passwords left in the code. A major advantage of AI is that models act as a "copilot", not just detecting vulnerabilities but also immediately suggesting fixes to developers. 
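As a hedged illustration of the kinds of issues such tools look for, the Python sketch below flags SQL queries built by string formatting and credentials hardcoded in source files. The patterns and the file path are simplistic assumptions made for the example; AI-assisted analyzers such as Snyk combine learned models with far more sophisticated analysis and also suggest fixes.

  # Minimal sketch of checks an AI-assisted static analyzer automates: flagging
  # SQL built by string formatting or concatenation and hardcoded credentials.
  import re

  SQL_CONCAT = re.compile(r'execute\s*\(\s*(f["\']|["\'].*["\']\s*\+|.*%\s)', re.IGNORECASE)
  HARDCODED_SECRET = re.compile(r'(api_key|password|secret)\s*=\s*["\'][^"\']{8,}["\']', re.IGNORECASE)

  def review_source(path: str) -> list[str]:
      """Return human-readable findings for one source file."""
      findings = []
      with open(path, encoding="utf-8") as src:
          for lineno, line in enumerate(src, start=1):
              if SQL_CONCAT.search(line):
                  findings.append(f"{path}:{lineno}: possible SQL injection, use parameterized queries")
              if HARDCODED_SECRET.search(line):
                  findings.append(f"{path}:{lineno}: hardcoded credential, move it to a secrets store")
      return findings

  for finding in review_source("app/db.py"):   # hypothetical project file
      print(finding)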

Dynamic analysis, unlike static, tests code in runtime and is used to detect issues that only appear after the program starts. A key example of AI-enhanced dynamic analysis is fuzz testing, where a program is fed random or malformed data to check for potential crashes or failures. AI technology can help both generate such inputs and expand test coverage. For instance, Google researchers reported that embedding LLMs in fuzzing has helped them discover new vulnerabilities. 
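The fuzzing loop below is a minimal, hedged sketch of the idea: it mutates seed inputs, feeds them to a hypothetical parser, and records crashing inputs for triage. The target function and seeds are invented; AI-assisted fuzzers like those described by Google additionally generate harnesses and seed inputs with LLMs to reach deeper code paths.

  # Minimal sketch of a fuzzing loop: mutate seed inputs, feed them to the target,
  # and save any input that makes it crash for later analysis.
  import random

  def parse_record(data: bytes) -> None:
      """Hypothetical target under test; replace with the real parser."""
      if len(data) > 3 and data[0] == 0xFF:
          raise ValueError("malformed header")   # stand-in for a real crash

  def mutate(seed: bytes) -> bytes:
      data = bytearray(seed)
      for _ in range(random.randint(1, 4)):
          data[random.randrange(len(data))] = random.randrange(256)
      return bytes(data)

  def fuzz(seeds: list[bytes], iterations: int = 10_000) -> list[bytes]:
      crashes = []
      for _ in range(iterations):
          candidate = mutate(random.choice(seeds))
          try:
              parse_record(candidate)
          except Exception:
              crashes.append(candidate)          # keep the crashing input for triage
      return crashes

  print(len(fuzz([b"\x00\x01HELLO", b"\x7fELF\x00\x00"])))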

AI technology has the potential to automate more and more aspects of code testing, thus accelerating the process of finding and patching vulnerabilities before cybercriminals can exploit them. However, it is important to be aware of the potential dangers of relying entirely on AI for testing code: developers' own expertise may decline, resulting, for example, in the release of a vulnerable application if the co-pilot misses a flaw. The most effective approach is a collaboration between AI-powered tools and humans, combining the advantages of automation and expert judgment.

Confidential data control

AI's ability to analyze large volumes of data also helps with processing not just technical information but corporate documents. These documents may contain various types of confidential data, often spread across multiple unstructured fields. Tracking this sensitive data manually is difficult and time-consuming, especially given the sheer volume of documents. AI can efficiently solve this issue by recognizing confidential data regardless of its location in a document, and flexibly adjusting the content based on the viewer's access level and task. For example, it could redact parts of an employee's personal data when opened by an accountant, or replace it with anonymized yet structurally similar information when used by an external service or another AI model for training. Filtering and modifying output data can help prevent data breaches, the most common (54%) consequence of cyberattacks on organizations in 2024. In some regions where cybercriminals focus even more intensely on stealing data—such as the Middle East, where 80% of incidents led to leaks—protecting confidential information becomes an even more critical priority.

However, if documents are standardized and confidential data is always in the same field, using AI may actually complicate the process unnecessarily. But such ideal cases are rare. The best results come from combining traditional data processing methods (for standard cases) with AI-driven solutions (for non-standard, complex cases). 
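A minimal sketch of role-aware redaction, using fixed regular expressions and hypothetical roles, is shown below; an AI-based system would instead locate sensitive data with trained models, handle non-standard documents, and generate structurally similar synthetic values where needed.

  # Minimal sketch of role-aware redaction of personal data in unstructured text.
  import re

  PATTERNS = {
      "phone": re.compile(r"\+?\d[\d\s()-]{8,}\d"),
      "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
      "passport": re.compile(r"\b\d{4}\s?\d{6}\b"),
  }

  # Which data categories each (hypothetical) role is allowed to see.
  ACCESS = {"accountant": {"phone"}, "external_service": set()}

  def redact(text: str, role: str) -> str:
      allowed = ACCESS.get(role, set())
      for category, pattern in PATTERNS.items():
          if category not in allowed:
              text = pattern.sub(f"[{category} removed]", text)
      return text

  document = "Employee: I. Ivanov, passport 1234 567890, phone +7 912 000-00-00, ivanov@example.com"
  print(redact(document, "accountant"))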

Detection

The most common use of AI in cybersecurity is detecting malicious or abnormal activity. AI can analyze vast amounts of data efficiently, helping discover malware in traffic flows, detect abnormal user behavior, filter security events, and flag phishing emails or AI-generated content. 

Processing security events

According to Microsoft, SOC teams receive an average of 4,484 alert triggers per day and spend around three hours manually separating real threats from noise. Meanwhile, Positive Technologies data shows that analyzing each event takes a SOC analyst about 10 minutes. Under these conditions, false positives become a serious issue, wasting employees' time and effort and increasing the risk of missing real threats. False positives often arise because attackers mimic legitimate user activity, especially with "living off the land" techniques, where they rely only on legitimate tools already present on the victim's device. SIEM correlation rules written to catch such activity may then flag normal user behavior as potentially dangerous, triggering false positives. According to the SANS Survey 2024 on Incident Response, 64% of respondents cited false positives as their SOC's biggest challenge.

AI has the potential to solve this problem of SOC overload. AI-powered solutions can triage security events, filtering out likely false positives and highlighting only truly critical incidents for human analysts to review. For instance, one of the functions of MaxPatrol SIEM's BAD (behavioral anomaly detection) ML module is the ability to assess the risk of each security event on a 100-point scale, enriching alerts with additional context to reduce response times. An analyst can then prioritize alerts by risk score and immediately get to work on the most dangerous and urgent cases. According to IBM, AI-powered tools can halve the time it takes to sort and process security events, significantly boosting overall SOC efficiency. We explored these prospects for monitoring and incident response in more detail in our research on autonomous SOCs.
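As a hedged illustration of this kind of triage, the Python sketch below combines a few behavioral signals into a 100-point risk score and sorts alerts so that the most dangerous ones surface first. The signals, weights, and alert fields are assumptions made for the example and do not reflect how the BAD module or any other specific product computes its scores.

  # Minimal sketch of ML-style alert triage: combine weighted behavioral signals
  # into a 100-point risk score and surface the most dangerous alerts first.
  from dataclasses import dataclass

  @dataclass
  class Alert:
      rule: str
      rare_process: float        # 0..1, how unusual the process is for this host
      off_hours: float           # 0..1, activity outside the user's normal hours
      privileged_account: float  # 0..1, whether a privileged account is involved

  WEIGHTS = {"rare_process": 45, "off_hours": 25, "privileged_account": 30}

  def risk_score(alert: Alert) -> float:
      return sum(weight * getattr(alert, signal) for signal, weight in WEIGHTS.items())

  alerts = [
      Alert("powershell_encoded_command", 0.9, 0.8, 1.0),
      Alert("office_macro_spawned_process", 0.4, 0.1, 0.0),
  ]
  for alert in sorted(alerts, key=risk_score, reverse=True):
      print(f"{risk_score(alert):5.1f}  {alert.rule}")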

Behavioral analysis

Machine learning technology can be used to create a normal operation profile for an entity: user, system, or network. Depending on the task, this profile can include various parameters: traffic generated by a host, the standard set of applications used by employees who share the same role, or the system's power consumption. You can use ML models to continuously analyze and compare behavior with normal operation. Any deviations (anomalies) may indicate cybercriminal activity. 

Figure 1. Key stages in the operation of a behavioral analysis system

Behavioral analysis consists of the following key stages: 

  1. Collecting data on the normal behavior of the object, including users, network traffic, and system activity. The broader and more detailed the dataset, the more accurate the behavioral analysis module's verdicts will be. For instance, the BAD module in MaxPatrol SIEM creates a preliminary infrastructure profile after just one week of operation. By that point, typical interactions, tasks, and processes are already becoming clear. After about a month, accuracy improves further as most routine activities have had time to occur within the infrastructure (for example, employees have returned from vacation). The module reaches its highest accuracy after about three months, once a comprehensive dataset has been collected to create a behavioral profile. The data collection period must be long enough to cover a variety of legitimate working modes (remote work, in-office work, weekends, holidays) and all types of tasks performed by employees.
  2. Training the model on the data, unless the model has been pre-trained. 
  3. Defining sensitivity thresholds to detect anomalies based on deviations from normal values. A low sensitivity setting will only detect major deviations, while a high sensitivity setting may flag even minor irregularities as potentially dangerous. In other words, lower sensitivity reduces the number of incidents to analyze but risks missing stealthy attackers, while higher sensitivity can result in more false positives. Determining the optimal sensitivity level is a key aspect of implementing an ML-based solution.
  4. Recognizing behavior and detecting anomalies. Once trained, the model monitors system activity, identifying behavioral patterns and comparing them against the established norm. Various approaches and models may be used in this process. Examples:
    • A statistical approach to detect suspicious activity on a host (a minimal sketch of this approach is shown after this list). If a workstation has always had just one user, and a second one suddenly appears, that is an anomaly. However, if a host with thousands of daily connections suddenly gets one or two more than usual, that is just a minor deviation, not a security incident.
    • A recommendation-based approach to detect atypical processes initiated by a user. Recommender systems were originally created as a class of machine-learning algorithms that recommend products or content to users. Based on the preferences of similar users, such systems should predict items likely to interest a certain customer. In the BAD module, this approach has been adapted for cybersecurity purposes: users take the place of customers, and processes replace products. Users in similar roles tend to run similar processes, so if a coder installs a new IDE (integrated development environment) they have never used before, the recommendation model may still consider this normal. Since other coders have used similar code editors, this is not flagged as anomalous. However, it is a whole different story if, say, an accountant runs reconnaissance utilities on their machine. The recommendation system will determine that this process does not match the user profile, and thus can flag the behavior as anomalous. 

      Figure 2. How collaborative filtering technology works
    • Log analysis for detecting malware during dynamic file analysis. Just like in dynamic code analysis described earlier, dynamic file analysis involves executing the object in an isolated virtual environment, recording its behavior logs, and checking them. AI enhances the capabilities of traditional rule-based checks. For example, in PT Sandbox, behavioral trace analysis helps detect atypical malware.
    • Network traffic filtering based on various criteria and metrics, such as the volume of data transferred between certain network nodes over specified time intervals. For instance, using a User Profiling Rule (UPR), PT NAD detected a large data upload to an external cloud storage service in real traffic on a weekend evening. The activity turned out to be legitimate, but it could have indicated data exfiltration by an attacker.
    • Analysis of the request parameters to detect DDoS attacks. A model trained on normal web application behavior can analyze various aspects of incoming requests, such as header signatures, source geography, and request rate. An anti-DDoS system enhanced with a behavioral analysis module can instantly filter and block potentially malicious requests with anomalous parameters.
  5. Human verification. An information security engineer checks whether the model worked correctly and whether the anomaly actually points to a security incident.
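The sketch below is a minimal, hedged illustration of the statistical approach from step 4 and the sensitivity threshold from step 3: it builds a per-host baseline of the number of distinct daily users and flags observations that deviate from the norm by more than a configurable number of standard deviations. The data and threshold are invented for the example; real behavioral analysis modules build far richer profiles.

  # Minimal sketch: baseline the number of distinct users seen on a host per day,
  # then flag days that deviate from the norm by more than a sensitivity threshold.
  from statistics import mean, stdev

  def build_profile(daily_user_counts: list[int]) -> tuple[float, float]:
      return mean(daily_user_counts), stdev(daily_user_counts)

  def is_anomalous(observed: int, profile: tuple[float, float], sensitivity: float = 3.0) -> bool:
      avg, sigma = profile
      sigma = max(sigma, 0.3)            # floor to avoid division by ~zero on very stable hosts
      return abs(observed - avg) / sigma > sensitivity

  # A workstation that has always had a single user: a second user is an anomaly.
  workstation = build_profile([1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
  print(is_anomalous(2, workstation))    # True

  # A server with thousands of daily connections: one or two extra is a minor deviation.
  server = build_profile([4980, 5020, 5110, 4895, 5045, 4990, 5060])
  print(is_anomalous(5062, server))      # False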

In the future, behavioral analysis technology may evolve to not only track user actions but also build a full biometric profile. A profile like that could include parameters such as typical mouse movements or the rhythm of keystrokes in specific applications. By monitoring user behavior and comparing it against the profile, the security system can verify whether it is the actual employee working at the computer. 

Detecting threats in network traffic 

Artificial intelligence can be used to analyze network traffic not just for behavioral anomalies, but also to detect malware activity and threats in HTTP sessions.

For example, a machine learning model can be trained on indicators typical of malicious content in HTTP traffic. Once trained, the model will scan HTTP sessions for these indicators and flag potential threats. This kind of ML solution helps detect new malware that is left undetected by expert-defined rules. One such model, expected in an upcoming release of PT Sandbox, has already detected several unknown threats during testing. However, it is important to note that ML solutions are not replacing expert rules any time soon. Rather, they will complement and enhance them, improving the ability to detect even new, previously unseen threats.

Another specific example of threat detection in HTTP traffic is detecting web shells. Web shells are malicious command interfaces used for controlling web servers remotely. They must be distinguished from legitimate traffic, and full detection requires analyzing both requests and responses. Detecting web shells at the request stage helps block them before they are uploaded, while analyzing responses helps catch active ones. The ML models that detect web shells in PT Application Firewall are trained on open-source data and examples from the Standoff cyberbattles, which not only boosts detection by about 30% compared to traditional rule-based approaches, but also helps uncover new web shells.

Machine learning can also help detect encrypted communications between malware and C&C servers in network traffic. Malware can encrypt its communication sessions to evade security tools, sometimes using the same methods as legitimate apps like Telegram to bypass restrictions. By learning to analyze indirect signs, such as TCP packet length patterns, it is possible to accurately distinguish between malicious and legitimate traffic. This approach is especially effective because it is hard for attackers to change their basic obfuscation methods. Modifying a tool to evade detection costs attackers time and effort, slowing down their hacking attempts. 

Figure 3. David Bianco's Pyramid of Pain, illustrating the increasing cost and complexity of changing tactics to avoid detection 
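As a hedged sketch of the approach described above, the Python example below derives simple packet-length statistics per session and trains a classifier to separate beacon-like encrypted C&C traffic from legitimate transfers. The toy data, features, and choice of a random forest are assumptions made for illustration, not a description of any product's detection model.

  # Minimal sketch: classify encrypted sessions from indirect features such as
  # TCP packet-length statistics. Toy data and model choice are illustrative only.
  import numpy as np
  from sklearn.ensemble import RandomForestClassifier

  def session_features(packet_lengths: list[int]) -> list[float]:
      lengths = np.asarray(packet_lengths, dtype=float)
      return [lengths.mean(), lengths.std(), lengths.min(), lengths.max(), float(len(lengths))]

  # Per-session packet lengths labeled 1 (C&C beacon) or 0 (legitimate application).
  sessions = [
      ([60, 60, 62, 60, 61, 60], 1),            # small, highly regular beacon-like packets
      ([1460, 980, 1460, 1200, 640, 1460], 0),  # bursty transfer typical of a legitimate app
      ([59, 61, 60, 60, 60, 59], 1),
      ([1500, 1500, 400, 1380, 1500, 900], 0),
  ]
  X = [session_features(lengths) for lengths, _ in sessions]
  y = [label for _, label in sessions]

  model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
  new_session = [60, 60, 61, 60, 60, 62]
  print(model.predict([session_features(new_session)]))   # expected to flag as C&C-like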

Detecting unknown threats

A major strength of AI in cybersecurity is its ability to uncover previously unknown threats. For instance, the BAD module in MaxPatrol SIEM can detect attacks that are not covered by existing correlation rules. AI technology can spot zero-day vulnerability exploits and unknown malware by analyzing anomalies and potentially dangerous patterns of activity. 

The behavioral ML module in PT Sandbox has already shown it can detect novel threats on multiple occasions. For example, during one Standoff cyberbattle, attackers ran malware that launched a chain of 100 subprocesses before doing anything else. The ML model picked up this odd behavior, even though no classic signature for it existed in the product. Static analysis methods struggle to catch brand-new malware that is still unknown to security systems—especially when attackers are constantly switching methods and masking their attacks by obfuscating files or sending them from compromised trusted accounts. Behavioral analysis tools will still flag them as threats, helping keep organizations protected.

Detecting phishing and harmful content

Social engineering remained one of cybercriminals' main methods in 2024: it was used in every second attack on organizations and was the most common (61%) attack method against organizations in the Middle East. Moreover, phishing emails served as the delivery channel in 42% of malware attacks. Cybercriminals are skilled at manipulating victims' emotions to achieve their goals, so employee training alone is not enough to prevent phishing—anyone can make a mistake and overlook a malicious email.

One software-based method for phishing protection is the use of AI technology, which researchers have already shown to be potentially effective. AI can analyze not just the content of an email but also contextual features, such as its length or the properties of attachments. In the early stages of deployment, a system like that could detect suspicious emails and warn the user. Once it reaches a company-defined accuracy threshold, it could begin blocking dangerous messages outright.
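A minimal sketch of this idea, with invented training data and a hypothetical attachment-risk flag, is shown below; it combines text features with a contextual one in a single classifier. A real deployment would require far larger datasets and careful accuracy evaluation before any blocking is enabled.

  # Minimal sketch: combine email text with a contextual feature (risky attachment)
  # in one classifier. Training data and features are invented for illustration.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from scipy.sparse import hstack, csr_matrix

  emails = [  # (text, has_risky_attachment, is_phishing)
      ("Urgent: your account will be blocked, verify your password now", 1, 1),
      ("Agenda for tomorrow's project status meeting attached", 0, 0),
      ("You have won a prize, open the attached invoice immediately", 1, 1),
      ("Quarterly report draft for your review", 0, 0),
  ]
  texts = [text for text, _, _ in emails]
  attachment_flags = [[flag] for _, flag, _ in emails]
  labels = [label for _, _, label in emails]

  vectorizer = TfidfVectorizer()
  X = hstack([vectorizer.fit_transform(texts), csr_matrix(attachment_flags)])
  model = LogisticRegression().fit(X, labels)

  probe = hstack([vectorizer.transform(["Please verify your password urgently"]),
                  csr_matrix([[1]])])
  print(model.predict_proba(probe)[0][1])   # estimated probability the message is phishing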

AI is emerging not only on the defense side—cybercriminals are also eager to use new technology in attacks. In our report, we discussed how the biggest success that cybercriminals have had applying AI is in social engineering. With generative AI, they automate phishing and bot content generation, and are enhancing classic social engineering attacks with deepfakes. AI-generated content is constantly evolving, making it increasingly difficult to spot with the naked eye. AI-powered detection of AI-generated content can help here. Researchers have already demonstrated the potential effectiveness of this technology, and such tools are starting to enter the market. One possible way to make generated content easier to detect would be for legitimate and responsible developers of generative models to agree to embed watermarks into the graphic and audio content their models generate. However, text remains problematic as there is currently no way to reliably mark it—not to mention the potential for cybercriminals to create their own generative tools, free from any restrictions or safeguards.

Another potential application of AI in content recognition is the detection of dangerous websites. With AI-powered technology, the contents of a web page can be analyzed in real time, just as the user is about to visit it. If harmful content is detected, access to the site can be blocked before the user even lands on the page. This kind of system can be used to build protection against phishing sites, potentially dangerous websites, or sites not suitable for viewing on corporate devices.

In any case, it is essential to remain vigilant and learn how to spot social engineering tactics without AI assistance. We recommend that upon receiving any message, email, or call, you ask yourself a few simple questions:

  1. Is this an inconvenient time? Am I on vacation or about to finish my working day? Is it the weekend?
  2. Is the message trying to pressure me with urgency, importance, or authority? Does it communicate something critically important, scary, or beneficial to me personally?
  3. Are there any spelling or punctuation errors? Are any job titles or company names incorrect?
  4. Is the message impersonal, without mentioning names?
  5. Is the text clumsy? Does it contain repetitions?
  6. Does it include attachments, links, or QR codes?

If the answer to any of these questions is "Yes", this may be a phishing message. What to do:

  1. Take a five-minute break and calmly assess the situation.
  2. Verify the information in the message through other channels: contact the sender by phone or email directly, or search the web for the organization's website or the promotional campaign.
  3. If the message seems suspicious, report it to the security department. The security team will guide you on the next steps.

We recommend that security teams conduct internal phishing recognition training and cyberexercises. All employees should be informed that they may receive phishing emails without warning at any time. Employees must avoid clicking links in these emails, instead forwarding them to the security team. Sending such emails from time to time prepares employees for potential real attacks, and their response will be a clear indication of the organization's readiness to defend against phishing. We suggest varying the topics of these phishing emails, referencing global events, local company activities, or universal phishing themes. Knowing about these cyberexercises will encourage employees to treat each message with caution and report any suspicious ones to the security team, which will help catch genuinely harmful emails.

Phishing is not limited to emails, messengers, and websites—scammers have already expanded their arsenal with deepfakes, which means we need to learn to recognize them too. Alongside the basic questions and tips, here are some signs of deepfakes to watch for:

  1. A change in the manner of speaking and sound (especially at junctions between phrases), and unusual vocabulary can be signs of an artificially generated audio track.
  2. In videos, fakes may be detected from unnatural body and facial movements, particularly of the mouth (deepfakes often do a poor job of representing teeth) or eyes (unnatural pupil movement and blinking).
  3. Poor recording or call quality can be an indirect sign of a deepfake. Cybercriminals mask flaws in generated voice and video with supposedly poor connection quality.

It is always best to verify received information through another channel—but a scammer can also be exposed early by asking a verification question that only the real person would know. The question can be anything: what you had for lunch together yesterday, or which TV show you discussed last week. The scammer will not know the answer and will give themselves away.

Response

Support in decision making

AI-driven solutions can significantly reduce response times by providing context for the attack, explaining security system triggers, and offering guidance on priority steps. This kind of support enables security employees to access required incident data much faster, and thus make swifter decisions. Solutions that perform this "co-pilot" role are being developed both by major companies such as Microsoft and by individual developers.

Another promising current avenue is companies training their own cybersecurity LLM assistants. Of course, developing an LLM from scratch is an extremely expensive process that requires highly qualified AI developers. Instead, organizations can use ready-made open-source LLMs and fine-tune them using their own data. A significant additional advantage of this approach is that all sensitive data remains within the company and is not sent to the LLM developer for processing.
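As a hedged sketch of this approach, the example below adapts an open-source model on an internal dataset with parameter-efficient fine-tuning (LoRA) using the Hugging Face transformers, datasets, and peft libraries, so the data never leaves the organization's infrastructure. The model name, dataset file, target modules, and hyperparameters are placeholders, and a real project would add data preparation, evaluation, and safety review.

  # Minimal sketch: fine-tune an open-source LLM on internal security data with
  # LoRA so sensitive data stays on-premises. Names and settings are placeholders.
  from datasets import load_dataset
  from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                            TrainingArguments, DataCollatorForLanguageModeling)
  from peft import LoraConfig, get_peft_model

  base = "open-llm-7b"                       # hypothetical open-source base model
  tokenizer = AutoTokenizer.from_pretrained(base)
  if tokenizer.pad_token is None:
      tokenizer.pad_token = tokenizer.eos_token
  model = get_peft_model(
      AutoModelForCausalLM.from_pretrained(base),
      LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"]),
  )

  # Internal incident reports and playbooks, kept on the company's own servers.
  data = load_dataset("json", data_files="internal_incidents.jsonl")["train"]
  tokenized = data.map(lambda row: tokenizer(row["text"], truncation=True, max_length=512),
                       remove_columns=["text"])

  Trainer(
      model=model,
      args=TrainingArguments(output_dir="soc-assistant-lora", num_train_epochs=1,
                             per_device_train_batch_size=2),
      train_dataset=tokenized,
      data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
  ).train()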

If cybercriminals do manage to achieve their goals in a cyberattack, these assistants could potentially help deal with the aftermath of the incident and suggest optimal steps to mitigate the damage.

Automated response

AI-powered technology can be used to build an automated incident response process. The degree of automation may vary. Automated response can be trained on the specific actions of the security team in certain types of incidents and then repeat those actions only in similar, standard situations. For more complex incidents, an AI-powered system can generate a recommended playbook based on the context of the attacker's actions and the resources targeted or already compromised. This is how, for example, MaxPatrol O2 works. The playbook may either be reviewed, verified, and started by a security employee, or—at the highest level of automation and trust in the system—executed without human approval. 
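A minimal sketch of such graded automation, with invented incident types, playbooks, and a confidence threshold, might look like the example below; it only illustrates the concept, not how MaxPatrol O2 or any other product is implemented.

  # Minimal sketch of graded response automation: map incident context to a
  # playbook and either submit it for analyst approval or execute it directly,
  # depending on the configured level of trust in the system.
  from dataclasses import dataclass

  @dataclass
  class Incident:
      kind: str
      host: str
      confidence: float   # model confidence that the incident is real, 0..1

  PLAYBOOKS = {
      "ransomware": ["isolate_host", "disable_account", "snapshot_disk"],
      "credential_theft": ["reset_password", "revoke_sessions"],
  }

  def respond(incident: Incident, auto_threshold: float = 0.95) -> str:
      steps = PLAYBOOKS.get(incident.kind, ["escalate_to_analyst"])
      if incident.confidence >= auto_threshold:
          return f"executing on {incident.host}: {', '.join(steps)}"
      return f"awaiting analyst approval for {incident.host}: {', '.join(steps)}"

  print(respond(Incident("ransomware", "fin-srv-01", 0.97)))
  print(respond(Incident("credential_theft", "hr-pc-12", 0.80)))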

The future of AI in defense

Heat maps of tactics and techniques as presented in the MITRE D3FEND and ATT&CK matrices 

To evaluate which defensive tactics and techniques are using AI technology, we analyzed the MITRE D3FEND matrix.

Tactics are seven high-level categories of defensive actions, roughly following a chronological order: Model, Harden, Detect, Isolate, Deceive, Evict, and Restore.

Technique categories are the columns in the matrix that list actions and methods linked by a unified security process. For example, the technique category D3-UBA: User Behavior Analysis describes the process of detecting abnormal user behavior compared to a typical work profile.

Techniques are the elements within the columns that describe specific defensive actions and methods. For example, the User Behavior Analysis technique category includes the technique D3-UDTA: User Data Transfer Analysis. This technique refers to analyzing the volume of data transferred by a user, which can be used to detect abnormal, potentially unauthorized activity.

We have created a heat map highlighting which tactics, techniques, and sub-techniques already utilize AI or could potentially use it in the future. Artificial intelligence technology is already being applied across a wide range of information security tasks (four out of seven tactics). 

Figure 4. Heat map of AI applications in MITRE D3FEND techniques

We have divided all the tactics and techniques into three levels, based on whether they use AI-powered technology:

Blue: AI is already in use. This group includes 28% of the techniques, with the Detect tactic being the most covered—one of the strongest and most promising areas for AI use in cybersecurity. This includes analyzing user behavior, network traffic, and actions executed by files. 

Light blue: AI can be applied. This category includes 27% of techniques, where AI is applicable but real-world solutions are still at various stages of development. Examples include the Deceive tactic (AI can generate various traps and decoys or imitate user activity) and the technique category D3-NM: Network Mapping, where AI could in the future be used to gather network information, build a complete map, and detect shadow IT.

Gray: the use of AI is not justified or would not provide significant benefits. The potential for AI use in information security is immense, but even as technology becomes more sophisticated, there remain areas it cannot reach—such as physically verifying the connectivity of network nodes (D3-DPLM: Direct Physical Link Mapping). 

The MITRE D3FEND knowledge graph is closely linked with the MITRE ATT&CK matrix: most defensive techniques are listed next to the attack techniques they defend against. Based on these connections between attack methods and defensive countermeasures, we have created a heat map that aligns defensive techniques enhanced with AI technology with each attack technique. It is immediately clear that 100% of ATT&CK tactics and 65% of techniques will be covered by D3FEND techniques that can incorporate AI. 

Figure 5. Heat map of MITRE ATT&CK tactics and techniques covered by D3FEND techniques where AI can be applied

2025: The year of agents

In early 2025, a clear trend had emerged toward the use of multi-agent systems (MAS), in which several interacting agents are employed to solve a task. The main advantages of such systems over single-agent ones are modularity and specialization in narrow tasks:

  • The modular structure of a multi-agent system makes it easier to develop, test, and update each agent independently, and to replace or add agents without rebuilding the entire system.

  • The specialization of agents in specific tasks in the multi-agent approach means workload can be efficiently distributed among modules with expertise in particular domains. This helps avoid problems seen in single-agent systems where all tasks are directed to a single general-purpose agent.

This trend can also be seen in the field of cybersecurity: in our research on the use of AI in cyberattacks, we already mentioned an experimental multi-agent system for vulnerability exploitation—but there are so many more possibilities here. For example, multi-agent systems can simulate the actions of defenders and attackers to practice incident response scenarios, and different agents can be distributed across the network to detect DDoS-type attacks. 
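The sketch below illustrates the idea of specialized agents coordinated by an orchestrator; the agent roles (triage, enrichment, response recommendation) and their logic are assumptions made for the example, and in practice each agent would typically wrap an LLM or ML model.

  # Minimal sketch of a multi-agent pipeline for incident handling: each agent
  # specializes in one narrow subtask and an orchestrator passes results along.
  class TriageAgent:
      def run(self, alert: dict) -> dict:
          alert["priority"] = "high" if alert.get("privileged_account") else "low"
          return alert

  class EnrichmentAgent:
      def run(self, alert: dict) -> dict:
          alert["asset_owner"] = "finance-team"   # stub for a CMDB or threat-intel lookup
          return alert

  class ResponseAgent:
      def run(self, alert: dict) -> dict:
          alert["recommendation"] = ("isolate host and reset credentials"
                                     if alert["priority"] == "high" else "monitor")
          return alert

  class Orchestrator:
      def __init__(self, agents):
          self.agents = agents

      def handle(self, alert: dict) -> dict:
          for agent in self.agents:
              alert = agent.run(alert)
          return alert

  pipeline = Orchestrator([TriageAgent(), EnrichmentAgent(), ResponseAgent()])
  print(pipeline.handle({"rule": "suspicious_logon", "privileged_account": True}))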

However, the multi-agent approach comes with challenges:

  • A multi-agent system is more complex to configure, since it is necessary to check and refine the results not only of the entire system but of each individual agent;

  • If resource-intensive models are used for each agent, increasing the number of agents will lead to a significant load on computing resources.

Autopilots

The ultimate goal of implementing AI-powered technology in cybersecurity is to create an autopilot. This can both increase the speed of incident response and significantly reduce employees' workloads. This makes autopilot defenses especially relevant in a context of workforce shortages and a growing number of cyberattacks—all the more so as cybercriminals have already begun experimenting with automated attacks using AI. 

Autopiloting can be introduced gradually at different levels for various tasks. At a minimum, an autopilot can be a digital clone of an individual employee or team, learning from their actions and repeating them in similar situations. This system can promptly suggest and carry out standard solutions to routine tasks. The next stage of development is a full-fledged autopilot that detects and stops cybercriminal activity, guided by the expertise embedded in the system. This is how MaxPatrol O2 operates. In the future, we might even see the reverse scenario, where autopilots participate in training cybersecurity newcomers, acting as mentors that respond appropriately to attacks. 

The effectiveness of the AI-powered autopilot concept has already been confirmed by MaxPatrol O2. At the Standoff 13 cyberbattle, the metaproduct prevented hackers from breaking into a replica of Positive Technologies' infrastructure. The autopilot for result-driven security operated in response mode—detecting and stopping attacks before they could damage the digital twins of key Positive Technologies systems.

Cyber-range modeling

With generative AI, it is potentially possible to automate the creation of digital clones of corporate infrastructure. This kind of digital twin can be used as a cyber-range for security testing, attack simulation, and assessing the impact of proposed changes. Generating a digital twin would require significantly less time and money than creating a copy manually, making this a much more accessible method of testing security.

In addition to test cyber-ranges, another promising application of generative AI is creating decoys that imitate user or system behavior (as in the Deceive tactic of the MITRE D3FEND matrix). AI can generate artificial network traffic or simulate user behavior on the network. With time, these decoys will become more realistic—and therefore more effective, as their main purpose is to deceive the attacker and divert their attention.

Problems of AI in security

High expectations for the use of artificial intelligence in information security face a number of issues. AI technology combines enormous potential with demanding requirements for building a finished product: high-quality training data is scarce, and such a product can only be developed by highly skilled and experienced professionals.

Computing power 

AI-powered solutions require significant computing power, which may be unavailable to small or medium-sized businesses for various reasons. In the future, the issue may be addressed through technological advancements in computing optimization and the creation of specialized hardware designed specifically to run AI models. Today, one possible way to address the problem is to use cloud computing for AI-powered solutions. These systems have both their advantages and disadvantages. The main advantage is that resource-light agents remain within the organization's infrastructure, while the heavy computational load for AI modules shifts to the cloud infrastructure. The disadvantages stem directly from the nature of cloud solutions: some confidential data must be sent to third-party servers for processing, and the system's functionality depends on the stability of the internet connection between the company and the cloud. 

AI researchers are working on reducing not only the size of models but also the requirements for their operation. One solution already being implemented in practice is neural network compression: simplifying models while preserving performance. Various methods are used for this, such as quantization2 and knowledge distillation3. In the long term, model compression should improve computational efficiency, thus reducing the hardware requirements for devices running the model. In the security field, this may mean AI module-based solutions can run on personal computers, smartphones, and other low-performance devices. 


  2. Quantization of neural networks is the transformation of numerical values in the model from precise data types with many bits (e.g., float32) to types with fewer bits (e.g., int8).
  3. Knowledge distillation is a machine learning technique in which the knowledge of a larger, more complex "teacher" model is transferred to a smaller, simpler "student" model.
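As a small, hedged illustration of compression by quantization, the PyTorch sketch below applies post-training dynamic quantization so that linear-layer weights are stored as int8 instead of float32; the toy model and file-size comparison are only meant to show the effect, not a production workflow.

  # Minimal sketch: post-training dynamic quantization of linear layers to int8,
  # shrinking the model and lowering the hardware requirements for running it.
  import os
  import torch
  from torch import nn

  model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 2))
  quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

  def size_mb(module: nn.Module) -> float:
      torch.save(module.state_dict(), "tmp.pt")
      return os.path.getsize("tmp.pt") / 1e6

  print(f"float32: {size_mb(model):.2f} MB, int8: {size_mb(quantized):.2f} MB")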

Training data

Regardless of the domain (cybersecurity and beyond), when it comes to AI, the principle of "garbage in, garbage out" is strikingly evident. The performance of an AI-powered solution directly depends on the quality of the training datasets. Collecting and labeling this training data is a major challenge for the entire AI field. To engage manpower for data labeling, special platforms like Toloka are being established. In the field of information security, the issue of training data is especially acute due to the nature of network data: oversaturation with false positives and a small number of actual attacks. In real traffic, security system triggers are rare, and even then, not all are true positives: security solutions may respond to suspicious but non-malicious activity as if it were an attack. As a result, training an AI model on this kind of data may be ineffective. 

One way to collect attack-rich network data is from cyberexercises such as Standoff, or the output of pentesters and attack simulation tools like PT Dephaze. An alternative method of obtaining this data is to turn again to AI and generate a synthetic dataset. Synthetic training data offers several potential advantages over real data: it is cheaper to collect, does not require anonymization or obfuscation to preserve confidentiality, and can be modified or adapted to suit specific tasks. A synthetic training dataset can be used on its own or to enrich real data with specific attack attributes. Nevertheless, it is important to note that generating high-quality synthetic training datasets remains one of the pressing challenges still to be fully resolved. 

Shortage of experts

The shortage of professionals in the field of information security is one of the problems that AI technology could potentially solve. In the foreseeable future, AI will reduce the workload on cybersecurity professionals by automating routine tasks. At the same time, the development and integration of these solutions requires top-tier experts with knowledge of both information security and artificial intelligence. This issue also arises when introducing AI into other professional fields, where experts are needed at the intersection of two domains.

A possible solution to this problem could be hackathons, where teams composed of cybersecurity and AI professionals work together. Exciting security challenges could help attract more AI experts to the industry.

Black box problem

One of the obstacles to the widespread implementation of AI in cybersecurity is the black box problem of complex models. The issues of interpretability (how the answer was obtained) and explainability (why the answer was obtained) are relevant for the entire field of AI technology, not just cybersecurity. Nonetheless, in the field of information security, understanding the reasons behind a system's output is especially important for building trust in it. Moreover, model transparency affects not only the ability to explain an answer but also the speed and reliability of correcting errors. The interpretability and explainability of models may become key factors in the large-scale deployment of AI-powered solutions in security, especially in autopilot mode. AI technology will need not only to prove its reliability but also to support it with predictable responses. The first step toward this has already been made in models with reasoning technology, which allows them to "think" before responding and to reproduce the chain of reasoning. 

AI as a target and source of threats

AI security is an evolving field where researchers still have a great deal of work to do. It is important to understand that every AI module implemented not only enhances or automates a solution's capabilities but also becomes a potential target for attackers. 

We expect that, in the future, cybercriminals will increasingly target embedded AI, including that in security systems. The widespread deployment of AI, for example in website verification tools, may lead to attackers embedding trap exploits in the content or code of phishing sites. Developers of AI-based solutions must analyze in advance the types of attacks their products may face in real-world scenarios and implement defensive measures as early as possible. 

AI models embedded in security processes can both become attack targets themselves and serve as sources of threats. The use of generative AI to automate and accelerate the development and design of IT products, solutions, and modules at every production level, starting with hardware, may soon lead to a rise in both known and new vulnerabilities in information systems that are specific to AI-generated designs. Despite their impressive capabilities, generative AI products are imperfect and, at the current stage of development, they require human supervision and review. Cybersecurity professionals must also be aware of the risks of using open-source models. Cybercriminals can distribute models retrained for malicious activity—for example, embedding backdoors into generated code.

Conclusions

Artificial intelligence technology is becoming firmly established in many professional fields, and information security is no exception. AI can perform various tasks at all stages of cybersecurity, assisting professionals by taking over routine tasks and expanding threat detection capabilities. Gradually, the role of AI technology in security will become more comprehensive: AI will fully assume the role of a co-pilot, and in the future may completely automate certain security tasks. 

Despite the wide range of potential AI applications, humans remain irreplaceable in the field of cybersecurity. Even with a level of technology that far surpasses today's, there will still be many tasks requiring human expertise, such as shaping overall protection strategies, overseeing and managing AI-powered tools, and addressing complex or non-standard issues. It is important to note that in the future, the ability to effectively use AI tools will become a key skill required for cybersecurity professionals. Thus, the technology will not eliminate the need for humans, but will lead to a reduction in team sizes due to increased productivity.

The implementation of AI technology in defense is a logical countermeasure in response to attempts by attackers to exploit new technologies for their attacks. The defensive side must win this race in order to be prepared for cybercriminals' expanding toolkit. However, the development and deployment of new technology must be approached responsibly—risks and threats must be taken into account, new capabilities evaluated carefully, and their strengths applied to appropriate tasks where they can be truly effective. To ensure personal and corporate cybersecurity, we recommend following our general guidelines: they remain relevant and important even amid the rapid evolution of new technology.

About this report

This report contains information on current global cybersecurity threats and defense methods based on Positive Technologies' own expertise, investigations, and reputable sources.

We estimate that most cyberattacks are not made public due to reputational risks. As a consequence, even companies specializing in incident investigation and analysis of hacker group activity are unable to calculate the precise number of threats. Our report seeks to draw the attention of companies and ordinary individuals interested in the current state of information security to the latest methods and motives of cyberattacks leveraging artificial intelligence.

This report considers each mass attack (for example, phishing emails sent to multiple addresses) as a single attack rather than multiple separate incidents. For explanations of terms used in this report, please refer to the Positive Technologies glossary.
