TL;DR: Yes, and there’s a good reason that agentic AI is getting buzz in the cybersecurity space.
Agentic AI has the potential to address critical gaps in cybersecurity operations, particularly in alert triage and investigation, areas where traditional automation has consistently fallen short. However, as with any buzzword, I think it's crucial to cut through the hype and understand both the opportunities and the limitations it introduces.
What is Agentic AI?
Agentic AI refers to artificial intelligence systems that act autonomously as “agents,” capable of carrying out tasks, making decisions, and interacting with tools or external systems without constant human intervention. Unlike traditional AI models that analyze data or execute predefined actions, agentic AI combines advanced frameworks to mimic human decision-making processes, dynamically adapting to new challenges and learning from interactions.
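To make the term concrete, here is a minimal sketch of the loop most agentic frameworks implement under the hood: a model repeatedly chooses between calling a tool and finishing. This is an illustrative assumption of the general pattern, not any particular vendor's API; `call_llm` and the stub tools are hypothetical placeholders.

```python
# Minimal agentic loop: the LLM picks actions until it decides it is done.
# call_llm() is a hypothetical placeholder for any chat-completion API.

def call_llm(messages: list[dict]) -> dict:
    """Placeholder: send messages to an LLM, get back either a
    tool-call request ({"tool": ..., "args": ...}) or a final answer."""
    raise NotImplementedError("wire up your LLM provider here")

TOOLS = {
    "lookup_hash": lambda file_hash: {"verdict": "unknown"},   # stub tool
    "get_process_tree": lambda host: {"processes": []},        # stub tool
}

def run_agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply.get("tool"):                        # model chose an action
            result = TOOLS[reply["tool"]](**reply.get("args", {}))
            messages.append({"role": "tool", "content": str(result)})
        else:                                        # model chose to finish
            return reply["content"]
    return "max steps reached without a conclusion"
```

The key difference from a static playbook is that the branch taken at each step is decided by the model from the evidence so far, not hardcoded in advance.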
AI agents are a natural progression of the technology, building on the capabilities of Large Language Models (LLMs) to handle more complex tasks and take actions autonomously. Many industries are beginning to evaluate the technology, with applications well beyond cybersecurity.
Artificial intelligence researcher Andrew Ng gives a great overview of agentic workflows in about 13 minutes, if you want a more detailed explanation.
For security teams, the bottom line is that AI agents let us finally move beyond static automations like SOAR playbooks and automate context-driven decisions in areas like incident investigation, remediation, and case management.
Why Has Security Automation Failed So Far?
In recent years, security automation has made significant strides and succeeded in certain areas:
- Detection: Next-gen security solutions that adopt AI and machine learning techniques excel at identifying anomalies and flagging potential threats.
- Response and Case Management: Automated workflows and SOAR platforms streamline post-detection actions.
However, the bottleneck lies in alert triage and investigation. This is a major issue for security operations centers (SOCs). Current solutions rely heavily on human decision-making to sift through noisy alerts, correlate evidence, and determine if an incident warrants further action. This gap results in overwhelmed security teams, long response times, and high burnout rates.
The core issue? Triage and investigation require context, critical thinking, and nuanced decision-making: things that traditional automation tools lack.
The Opportunity Agentic AI Brings to Cybersecurity
Agentic AI introduces a way to close this critical gap by enabling agents to:
- Perform contextual investigations autonomously (i.e., without a human prompting each step).
- Make dynamic decisions based on real-time data.
- Interact with security tools and orchestrate responses across external systems.
For SOC teams, AI agents reduce reliance on human analysts for routine triage and investigation tasks. This is a clear opportunity to deploy AI analysts that can work on many tasks simultaneously, processing volumes of data that simply aren't feasible for us mortals. That frees analysts to focus on high-priority incidents and strategic initiatives, increasing the overall efficiency and effectiveness of the SOC.
Limitations of Agentic AI for SOC
So where’s the catch?
While Agentic AI holds tremendous promise for streamlining security operations, its effectiveness is inherently tied to the quality and breadth of the tools and data it can access. Without the right tools and integrations, even the most sophisticated AI agents will falter, leading to poor decision-making, inefficiency, or outright failure.
Limited Tools = Limited Decisions
An AI agent is only as good as the tools at its disposal. For example, imagine an agent tasked with triaging a potentially malicious file but armed only with access to VirusTotal hash checks. This setup severely limits the agent’s capability:
- If the hash is flagged, the agent may incorrectly label the file as malicious without further evidence, leading to unnecessary escalations.
- If the hash is unknown, the agent has no additional tools to analyze the file’s behavior, origins, or context, forcing it to defer the decision or, worse, make an uninformed one.
In such cases, the AI agent effectively becomes a bottleneck, contributing little to actual threat resolution.
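To make the failure mode concrete, here is a rough sketch of that single-tool agent. The VirusTotal v3 files endpoint shown is real, but the triage logic is deliberately naive to illustrate the problem described above; it is not a recommended design.

```python
import requests

VT_URL = "https://www.virustotal.com/api/v3/files/{}"

def vt_hash_check(file_hash: str, api_key: str) -> dict:
    """The agent's ONLY tool: look up a file hash on VirusTotal."""
    resp = requests.get(VT_URL.format(file_hash), headers={"x-apikey": api_key})
    if resp.status_code == 404:                      # hash never seen by VT
        return {"known": False, "malicious_votes": 0}
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return {"known": True, "malicious_votes": stats["malicious"]}

def triage_file(file_hash: str, api_key: str) -> str:
    """Deliberately naive: with one tool, there is only one signal."""
    result = vt_hash_check(file_hash, api_key)
    if not result["known"]:
        return "inconclusive"   # no other tool to fall back on
    if result["malicious_votes"] > 0:
        return "escalate"       # may be a false positive; no way to verify
    return "dismiss"            # may be a false negative; no way to verify
```

Every branch ends in a guess, because the agent has no way to gather the behavioral or contextual evidence that would let it verify its own verdict.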
No Evidence = Bad Decisions
An effective investigation requires gathering and analyzing relevant evidence. Without robust evidence-collection tools, an AI agent could hallucinate conclusions or act on incomplete information. Consider these scenarios:
- Lack of Evidence from Endpoints: An AI agent unable to pull logs, collect files, extract data on suspicious processes, or collect forensic data from a suspect machine will lack the context to assess the severity of a threat. For example, a suspicious process could either be a legitimate update or malware—without endpoint evidence, the AI can’t tell the difference.
- Blind to Email Content: An alert related to a phishing email might include a suspicious QR code or embedded URL. If the AI agent lacks tools to extract and analyze these artifacts, it might dismiss the alert or provide inaccurate insights.
- No Interaction Capability: Many investigations rely on human context. If the AI agent can’t ask an end-user questions (e.g., “Did you recently attempt to log in from a new location?”), it won’t have the context needed to validate or dismiss certain alerts.
Attempting Triage with Minimal Data
AI agents that try to triage alerts based solely on the alert description and a handful of indicators are doomed to fail. Alert descriptions are often vague, providing only a high-level overview that requires additional context to make a meaningful decision. For example:
- An alert might say, “Suspicious process detected on Host A.” Without further details like process behavior, parent processes, network activity, and associated logs, the AI is left guessing.
- Indicators like IP addresses or hashes might appear benign when viewed in isolation but could point to coordinated malicious activity when combined with other evidence.
In such situations, the AI risks either escalating too many false positives or missing genuine threats, undermining trust in its capabilities.
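By contrast, a useful triage agent front-loads an enrichment step that assembles this context before any verdict is attempted. Below is a minimal sketch of that step; the collector functions are hypothetical stand-ins for whatever EDR, SIEM, and network-log APIs your environment actually exposes.

```python
from dataclasses import dataclass, field

# Hypothetical collector stubs: swap in your real EDR/SIEM/NDR client calls.
def fetch_process_tree(host: str) -> list:
    return []  # e.g. EDR API: parent/child processes on the host

def fetch_net_activity(host: str) -> list:
    return []  # e.g. firewall or NDR logs for the host

def search_siem(host: str, minutes: int) -> list:
    return []  # e.g. SIEM query scoped to the alert window

@dataclass
class AlertContext:
    """Everything the agent should see before deciding, not just alert text."""
    alert_description: str
    process_tree: list = field(default_factory=list)
    network_connections: list = field(default_factory=list)
    related_logs: list = field(default_factory=list)

def enrich_alert(alert: dict) -> AlertContext:
    host = alert["host"]
    return AlertContext(
        alert_description=alert["description"],
        process_tree=fetch_process_tree(host),
        network_connections=fetch_net_activity(host),
        related_logs=search_siem(host, minutes=30),
    )
```

Only once an `AlertContext` like this is populated does a verdict become more than a guess.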
How to Get Started with Agentic AI for SecOps
1. DIY with Agentic Frameworks
For organizations with in-house expertise, open-source frameworks like LangGraph, CrewAI, and OpenAI's Swarm offer a starting point. These platforms enable the creation of custom AI agents tailored to specific workflows, and you can pair them with workflow automation tools like n8n to orchestrate complex operations.
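As a rough sketch of the DIY route, here is a two-node triage graph built with LangGraph's `StateGraph` API (the API is still evolving, so treat this as illustrative and check the current docs). The evidence-gathering and decision functions are placeholder stubs you would replace with real integrations and an LLM call.

```python
# pip install langgraph
from typing import TypedDict
from langgraph.graph import StateGraph, END

class TriageState(TypedDict):
    alert: dict
    evidence: list
    verdict: str

def gather_evidence(state: TriageState) -> dict:
    # Stub: replace with real EDR/SIEM integration calls.
    return {"evidence": [f"logs for {state['alert'].get('host', 'unknown')}"]}

def decide(state: TriageState) -> dict:
    # Stub: replace with an LLM call that weighs the collected evidence.
    return {"verdict": "escalate" if state["evidence"] else "inconclusive"}

graph = StateGraph(TriageState)
graph.add_node("gather_evidence", gather_evidence)
graph.add_node("decide", decide)
graph.set_entry_point("gather_evidence")
graph.add_edge("gather_evidence", "decide")
graph.add_edge("decide", END)

app = graph.compile()
print(app.invoke({"alert": {"host": "host-a"}, "evidence": [], "verdict": ""}))
```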
However, the DIY approach comes with challenges, including:
- Building and maintaining robust logic for agents.
- Ensuring integrations are reliable and comprehensive.
- Continuously updating the agent to adapt to evolving threats.
2. Adopt an Autonomous SOC Platform
For organizations looking for a turnkey solution, platforms like Intezer’s Autonomous SOC come preloaded and tested with:
- Strong AI models optimized for security operations.
- Proven logic and workflows to handle alert triage and investigation.
- Seamless integrations with security tools like SIEMs, EDRs, and more.
A proven platform like Intezer eliminates the need for complex setup and delivers trustworthy results, making it ideal for teams that want to deploy agentic AI quickly and effectively.
Where Intezer stands out is in its commitment to not only developing great AI models but also equipping those models with highly advanced tools for collecting evidence and analyzing suspicious forensic data. Intezer’s AI agents can perform tasks such as collecting files and processes from endpoints, conducting deep memory forensics, recursive URL scanning, reverse engineering software, and even gathering direct feedback from end-users.
This comprehensive toolkit ensures that the AI agent has all the context it needs to make accurate and informed decisions. Our combination of cutting-edge AI and exceptional tooling is what makes Intezer a top choice for SOC teams and MSSPs looking for AI solutions that can operate autonomously, support their team, and improve their processes.
How we approach Agentic AI for security operations at Intezer.
What’s Next for Agentic AI in Cybersecurity?
As the field evolves, we can expect to see:
- Smarter Agents with Org-Specific Data: By leveraging retrieval-augmented generation (RAG) and knowledge graphs, future agents will have a deeper contextual understanding of an organization's unique environment, allowing for even more accurate decision-making (a minimal sketch of this retrieval step follows below).
- Better Agentic AI Frameworks: While current frameworks are promising, many remain at the proof-of-concept stage. Future iterations will offer improved functionality, scalability, and ease of use, making it easier for organizations to adopt agentic AI.
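For a sense of what that org-specific retrieval step could look like, here is a minimal, self-contained sketch: embed internal knowledge (asset notes, past incident write-ups), then pull the closest entries into the agent's prompt. The character-count `embed` function is a toy placeholder; a real deployment would use an actual embedding model and vector store.

```python
import math

# Toy org knowledge base: in practice, asset inventories, past incident
# write-ups, runbooks, etc.
KNOWLEDGE = [
    "host-a runs the nightly backup agent backup.exe as SYSTEM",
    "finance team receives frequent DocuSign emails",
]

def embed(text: str) -> list[float]:
    """Toy placeholder embedding; swap in a real embedding model."""
    return [float(text.count(c)) for c in "abcdefghijklmnopqrstuvwxyz"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_context(alert_text: str, k: int = 1) -> list[str]:
    q = embed(alert_text)
    ranked = sorted(KNOWLEDGE, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# The retrieved facts get prepended to the agent's prompt:
print(retrieve_context("suspicious process backup.exe on host-a"))
```

With that fact retrieved, an agent could recognize `backup.exe` on host-a as expected behavior rather than escalating it blindly.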
The AI-Driven SOC Strategy
Agentic AI is more than just a buzzword — it’s a game-changer for cybersecurity. By bridging the gap between detection and response, it is already transforming how some security teams operate. However, success depends on understanding its limitations and implementing it strategically. Whether you choose to build your own agents or adopt a ready-made AI SOC solution like Intezer backed by security experts, the key is to start preparing your SOC for the AI-driven future.
Interested in seeing the Autonomous SOC platform for yourself? Book a demo to learn more about Intezer now.