
Data Security in the Age of Agentic AI

Agentic AI represents a paradigm shift in software development, redefining how humans interact with technology and expanding the range of tasks that can be automated. While generative AI (Gen AI) might produce text, images or code, an agentic AI system extends that capability, using the generated output together with its planning and reasoning capabilities to autonomously pursue goals and complete multistep problem-solving tasks. Agentic systems are also capable of learning from their experiences, taking feedback and adjusting their behavior to achieve a goal.

In March 2025, Georgian, in partnership with research firm NewtonX, surveyed 300 technical leaders on their adoption of agentic AI. The results revealed that 91% of respondents have already implemented or plan to implement agentic AI use cases. Among them, 45% are currently using agentic AI, 19% intend to implement an agentic AI use case by September 2025, and a further 27% are planning a longer-term implementation. Only 9% of surveyed technical leaders reported having no plans to implement agentic AI. As agentic AI adoption accelerates, its implications for cybersecurity appear to be growing in step.

AI now lies at the heart of the battle between attackers and defenders, potentially reshaping the dynamics of cybersecurity. While malicious actors exploit AI to identify vulnerabilities and craft sophisticated attacks, security teams are increasingly relying on AI to detect and neutralize threats. In my view, this dichotomy underscores the need to balance innovation with responsible safeguards.

In this article, I explore the evolving role of agentic AI in cybersecurity, the challenges posed by foundational models, the potential promise of the Model Context Protocol (MCP), and suggestions for securing next-generation intelligent systems. I believe that effectively managing both the risks and rewards of AI has the potential to shape the future of cybersecurity.

AI and Cybersecurity: Attackers vs. Defenders

A ‘good vs. evil’ dichotomy becomes apparent when we consider the roles of attackers and defenders (both enhanced by AI) in cybersecurity today. In my experience, as new technologies emerge, we cybersecurity practitioners not only have to figure out how to protect against novel threats, but we also have to be early adopters in order to combat new threat vectors. Hackers may use AI to create or exploit vulnerabilities, while security professionals leverage AI to detect and respond to threats in real time. Both attackers and defenders use AI to make themselves more efficient and effective.

For instance, attackers are developing increasingly sophisticated methods to bypass AI-based defenses. Techniques like model bypasses (evasion attacks) and model poisoning threaten to undermine AI systems by manipulating their outputs or corrupting their training data. These risks highlight the need for continuous monitoring and robust safeguards in AI-powered security tools.

Conversely, analysts in security operations centers (SOCs) now use AI to automate Tier 1 and Tier 2 incident response processes. SOC teams are beginning to leverage agentic AI to streamline investigations, summarize incidents and execute parallel threat assessments. This approach is intended to allow teams to improve response times, reduce the burden on analysts and spend more time on the sophisticated attacks that still require human review.
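To make "parallel threat assessments" concrete, here is a minimal Python sketch of the pattern using asyncio. The three check functions are hypothetical stand-ins for real threat-intelligence lookups and LLM calls, not any specific SOC product's API.

    import asyncio

    # Hypothetical Tier 1 enrichment checks an agentic SOC workflow might run.
    async def check_ip_reputation(indicator: str) -> str:
        await asyncio.sleep(0.1)  # placeholder for a threat-intel API call
        return f"ip_reputation({indicator}): clean"

    async def check_malware_hashes(indicator: str) -> str:
        await asyncio.sleep(0.1)  # placeholder for a hash-lookup call
        return f"hash_lookup({indicator}): no match"

    async def summarize_incident(indicator: str) -> str:
        await asyncio.sleep(0.1)  # placeholder for an LLM summarization call
        return f"summary({indicator}): low severity"

    async def triage(indicator: str) -> list:
        # Run all assessments concurrently instead of one after another.
        return await asyncio.gather(
            check_ip_reputation(indicator),
            check_malware_hashes(indicator),
            summarize_incident(indicator),
        )

    print(asyncio.run(triage("203.0.113.7")))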

The Exploitation of Foundational Models

On April 23, 2025, Anthropic released a report detailing its detection of malicious actors and its strategy for counteracting them. The company noted several attack vectors, including the misuse of Claude to scrape leaked passwords and usernames associated with security cameras. Attackers leveraged the model to build capabilities for forcibly gaining access to these systems. 

While such techniques can be misused by bad actors, they can also serve legitimate purposes when applied responsibly. For example, benign actors may use similar techniques for cybersecurity research or system testing. This duality highlights the importance of evaluating the full context of AI usage to prevent misuse while fostering innovation. Anthropic’s transparency in exposing these vulnerabilities and its proactive mitigation efforts demonstrate how foundational models can be safeguarded while still enabling beneficial applications. 

Agentic Development Using Model Context Protocol (MCP)

Agentic AI represents the next evolution beyond generative AI, enabling systems to autonomously plan, reason about and execute complex tasks. According to the Georgian and NewtonX survey, 66% of technical respondents prefer some or full human oversight of agentic AI, while 34% are comfortable granting agents greater autonomy. This measured movement toward greater agentic autonomy reflects the high stakes involved in adopting agentic systems.

A pivotal enabler of agentic AI is the Model Context Protocol (MCP), an open protocol that standardizes how applications provide context to large language models (LLMs), functioning as a universal connector—akin to a “USB-C port for AI applications.” MCP’s structured workflow, defined by clear requests and responses between clients and servers, enhances not only interoperability but also security.
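Concretely, MCP messages follow the JSON-RPC 2.0 format. The sketch below shows the shape of a "tools/call" exchange between a client and a server; the tool name and arguments are hypothetical examples, not part of the protocol itself.

    import json

    # A client asks an MCP server to invoke a tool via a JSON-RPC request.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "lookup_cve",  # hypothetical tool exposed by the server
            "arguments": {"cve_id": "CVE-2024-0001"},
        },
    }

    # The server answers with a result tied to the same request id.
    response = {
        "jsonrpc": "2.0",
        "id": 1,
        "result": {
            "content": [{"type": "text", "text": "CVE-2024-0001: summary here"}]
        },
    }

    print(json.dumps(request, indent=2))
    print(json.dumps(response, indent=2))

This request/response pairing is what makes the workflow auditable: every tool invocation is an explicit, inspectable message rather than an opaque internal step.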

In my view, the adoption of MCP is reshaping the software development lifecycle (SDLC), enabling the creation of intelligent systems that are efficient and autonomous while giving developers the opportunity to build more secure AI products. Embedding security measures into MCP workflows aims to mitigate risks while unlocking the transformative potential of agentic AI.

Securing Agentic AI Solutions    

The structured workflow of MCP, characterized by a series of requests and responses between clients and servers, provides, in my view, strategic opportunities to implement security checks at various stages of the interaction. Integrating security measures like intent validation, tool call authorization, data integrity and confidentiality, and communication flow monitoring has the potential to significantly improve the security and resilience of agentic systems built on the MCP framework.
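As one illustration of where such checkpoints can sit, the Python sketch below guards a tool call before it is forwarded to an MCP server. The allowlist, the size heuristic and the role name are assumptions made for the example, not prescriptions.

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("mcp-guard")

    ALLOWED_TOOLS = {"lookup_cve", "summarize_incident"}  # hypothetical allowlist

    def guard_tool_call(agent_role: str, tool_name: str, arguments: dict) -> dict:
        # 1. Tool call authorization: only allowlisted tools may be invoked.
        if tool_name not in ALLOWED_TOOLS:
            raise PermissionError(f"tool '{tool_name}' is not authorized")
        # 2. Intent validation: reject arguments that look out of scope.
        if any(isinstance(v, str) and len(v) > 1000 for v in arguments.values()):
            raise ValueError("oversized argument; possible prompt smuggling")
        # 3. Communication flow monitoring: record every call for later review.
        log.info("role=%s tool=%s args=%s", agent_role, tool_name, arguments)
        return {"name": tool_name, "arguments": arguments}

    print(guard_tool_call("triage-agent", "lookup_cve", {"cve_id": "CVE-2024-0001"}))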

Here are some practical security controls to consider when developing agentic workflows:

SDLC Updates 

  • Call out sections of code generated by AI for increased scrutiny, looking for unintended malware, malicious logic or biased code (a minimal CI-style check is sketched after this list).
  • Review and refactor AI-generated code to ensure that AI tools use up-to-date components and minimize unnecessary data exposure. 
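The following sketch assumes a team convention in which AI-generated regions are tagged with an "# ai-generated" marker comment; the marker and the check itself are hypothetical, but they show how a pre-commit or CI step could route such code for extra review.

    import re
    import subprocess

    AI_MARKER = re.compile(r"#\s*ai-generated", re.IGNORECASE)

    def flag_ai_generated_lines(diff_text: str) -> list:
        # Collect added lines in the diff that carry the AI-generated marker.
        return [
            line for line in diff_text.splitlines()
            if line.startswith("+") and AI_MARKER.search(line)
        ]

    # Inspect the staged changes and surface marked lines for security review.
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    ).stdout
    for hit in flag_ai_generated_lines(diff):
        print(f"needs review: {hit}")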

Access Control

  • Enhance traditional identity and access management (IAM) by implementing dynamic, role-based access controls (RBAC) tailored to the autonomous nature of AI agents. Dividing tasks among multiple specialized AI agents, each with clearly defined roles and responsibilities, can help mitigate risk (see the sketch after this list).
  • A well-defined client-server architecture of MCP, with clear roles for each component, establishes distinct points of interaction where security measures can be effectively implemented and enforced. A thorough understanding of these roles and responsibilities is helpful for identifying potential vulnerabilities and designing robust security controls for agentic systems that utilize MCP.
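Here is a minimal sketch of role-scoped tool access for specialized agents. The role and tool names are illustrative assumptions rather than any framework's API.

    from dataclasses import dataclass

    # Each role is limited to the tools it genuinely needs.
    ROLE_PERMISSIONS = {
        "triage-agent": {"lookup_cve", "summarize_incident"},
        "remediation-agent": {"open_ticket"},
    }

    @dataclass
    class Agent:
        name: str
        role: str

    def authorize(agent: Agent, tool_name: str) -> None:
        allowed = ROLE_PERMISSIONS.get(agent.role, set())
        if tool_name not in allowed:
            raise PermissionError(
                f"{agent.name} (role={agent.role}) may not call '{tool_name}'"
            )

    triage = Agent("soc-bot-1", "triage-agent")
    authorize(triage, "lookup_cve")       # permitted by role
    try:
        authorize(triage, "open_ticket")  # denied: outside this agent's role
    except PermissionError as err:
        print(err)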

Threat Modeling 

  • Extend or adapt traditional threat modeling frameworks to address the unique complexities introduced by AI agents. Focus on identifying and mitigating both security and ethical risks associated with the technology. 
  • Analyze how data is collected, processed, trained, tuned and tested to identify vulnerabilities. Use the product of this analysis to establish where model logging and monitoring activities should be focused (an illustrative stage-by-stage inventory follows this list).
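A simple way to capture the output of that analysis is a stage-by-stage inventory like the sketch below. The threats and monitoring targets listed are examples, not an exhaustive threat model.

    # Pair each pipeline stage with an example threat and a monitoring focus.
    DATA_PIPELINE = [
        {"stage": "collection", "threat": "poisoned sources",    "monitor": "source provenance logs"},
        {"stage": "processing", "threat": "injection via input", "monitor": "sanitization failures"},
        {"stage": "training",   "threat": "model poisoning",     "monitor": "dataset checksums"},
        {"stage": "tuning",     "threat": "backdoored adapters", "monitor": "fine-tune provenance"},
        {"stage": "testing",    "threat": "evaluation gaming",   "monitor": "eval drift metrics"},
    ]

    for step in DATA_PIPELINE:
        print(f"{step['stage']:<11} threat={step['threat']!r} monitor={step['monitor']!r}")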

Rate Limiting 

  • While this control is no different from the rate limiting recommended for traditional web applications, its criticality is heightened by the need for enhanced monitoring of AI agents to detect anomalous behavior.
  • Implement rate limiting and resource restrictions for AI agents interacting through MCP to safeguard against accidental misuse or malicious attacks (a token-bucket sketch follows this list).
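One common way to implement this is a token bucket, sketched below in Python. The rate and burst capacity are placeholder values; real limits should reflect each agent's expected workload.

    import time

    class TokenBucket:
        # Allows short bursts up to `capacity`, then throttles to `rate_per_sec`.
        def __init__(self, rate_per_sec: float, capacity: int):
            self.rate = rate_per_sec
            self.capacity = capacity
            self.tokens = float(capacity)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill tokens for the elapsed time, up to capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    limiter = TokenBucket(rate_per_sec=2, capacity=5)  # placeholder limits
    for i in range(8):
        status = "allowed" if limiter.allow() else "throttled: flag for review"
        print(f"tool call {i}: {status}")

Denied calls are as valuable as allowed ones here: a burst of throttled requests from a single agent is exactly the anomalous behavior the monitoring bullet above is meant to surface.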

MCP Server Management 

  • Just like traditional application servers, the security of agentic systems that rely on MCP depends heavily on the security and trustworthiness of the MCP servers they interact with. Apply traditional supply chain security controls to all MCP servers and their dependencies, including practices such as cryptographic signing to verify the integrity of components, version pinning to ensure the use of known and trusted versions, and thorough package verification (see the integrity-check sketch after this list).
  • Consider self-hosting MCP servers, which allows for greater oversight of the server environment and data access.
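As a minimal sketch of version pinning plus verification, the snippet below compares a server artifact's SHA-256 digest against a pinned value before launch. The artifact path and digest are hypothetical (the digest shown is the SHA-256 of an empty file, so the example runs end to end).

    import hashlib
    from pathlib import Path

    # Pinned digest for the approved server build (hypothetical).
    PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

    def verify_artifact(path: Path, expected_sha256: str) -> bool:
        # Compute the artifact's digest and compare it to the pinned value.
        return hashlib.sha256(path.read_bytes()).hexdigest() == expected_sha256

    artifact = Path("mcp-server.bin")
    artifact.write_bytes(b"")  # stand-in artifact so the example is runnable
    if verify_artifact(artifact, PINNED_SHA256):
        print("digest matches pinned version; safe to launch")
    else:
        print("digest mismatch; refuse to run this MCP server")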

Navigating the Future of Agentic AI in Cybersecurity

Defenders can stay ahead of attackers by automating critical processes and enhancing their overall security posture. However, the same capabilities also empower malicious actors, making it imperative to adopt robust frameworks like MCP and implement comprehensive security measures.

By prioritizing secure coding, dynamic access controls, threat modeling, and transparent workflows, organizations can responsibly harness the power of agentic AI. As we continue to explore the potential of this technology, I believe that collaboration, vigilance, and innovation will be key to ensuring it serves as a force for good in the evolving battle between attackers and defenders.
