Andrew Ash
CISO, Netacea

Andrew Ash is the CISO at Netacea, a cybersecurity company that protects websites, mobile apps, and APIs from business logic attacks such as account takeover, credential stuffing, and fake account creation. Andrew leads strategic efforts to help enterprises detect and mitigate these complex, evolving threats. With a background spanning security operations, threat intelligence, and product strategy, he brings a pragmatic, data-informed approach to risk reduction. He specialises in web traffic behaviour, Agentic AI, and business logic abuse, and plays a central role in shaping defensive operating models, enhancing detection capabilities, and aligning cybersecurity efforts with enterprise goals. Andrew also contributes to the wider industry conversation on how businesses can stay resilient against emerging forms of cyber abuse.

 

Agentic AI refers to a new class of artificial intelligence systems capable of acting independently to complete tasks. Unlike traditional automation, these systems do not follow fixed rules or wait for prompts. They operate with goals, interpret context, and adapt in real time. Their growing presence across digital operations signals a shift in how organisations will build, secure, and govern technology.

Understanding the concept of agency helps clarify why this shift matters. Human agency refers to our ability to act intentionally, to make decisions with purpose, and to reflect on outcomes. It involves forethought, self-regulation, and conscious control over our actions. When humans apply technology to achieve work-related goals, they are exercising this form of agency through intention and awareness.

Material agency, in contrast, describes the capacity of technologies to take action based on their design and programming. These systems may not possess consciousness, but they can influence outcomes through autonomous behaviour. An AI system that interprets data, makes decisions, and acts without ongoing human input is exhibiting material agency.

Agentic AI exists at the intersection of these two forms. It is created and directed by human actors, yet capable of executing tasks and producing outcomes independently. This raises important questions about intention, accountability, and control. The increasing sophistication of AI systems means that material agency now operates at a scale and speed that challenges conventional governance frameworks.

Recognising the interplay between human and material agency is critical. It reframes how we think about responsibility, design, and impact in digital systems. As organisations adopt AI agents, they are not just deploying new tools; they are introducing actors into their systems, with all the complexity that implies.

Agentic AI is shifting from peripheral experimentation to a strategic role. Agents are now embedded in customer interactions, product development, and internal operations, moving beyond support tasks to deliver core outcomes. This evolution is led by so-called Frontier Firms, as highlighted in Microsoft’s 2025 Work Trend Index, which adopt emerging technologies early to reshape digital work.

As Agentic AI integrates into core operations, the focus turns to enabling it safely, sustainably, and at scale. This involves coordinated changes across architecture, security, governance, and talent to ensure autonomous systems operate within clear boundaries.

Governance

As agents begin to act within enterprise systems, conventional governance methods may prove insufficient. Organisations need to define ownership models that reflect the hybrid nature of human and machine-driven outcomes. This includes clarifying which decisions can be delegated, how exceptions are managed, and what level of oversight is proportionate to the risk. While agentic systems operate without intent in the human sense, their ability to generate unpredictable results means organisations must adopt governance structures that anticipate variance and maintain clear accountability. This is not about handing over control, but about designing for shared execution in a way that remains auditable and aligned to organisational standards.

Architecture

Systems must support structured data, API accessibility, and observability. As agentic systems begin to span workflows, they increasingly rely on protocols that enable secure agent-to-agent communication. These machine-to-machine interactions allow systems to delegate tasks, share context, and adapt autonomously. One of the emerging architectural patterns is the agent mesh: interconnected networks of autonomous systems that can influence each other's decisions without clear human supervision. Google's Agent2Agent (A2A) protocol exemplifies how structured protocols can support secure, auditable coordination between such agents, maintaining traceability, control, and accountability across complex digital environments.
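The traceability requirement described above can be illustrated with a small sketch. This is not the real A2A protocol API; it is a hypothetical Python model showing the underlying idea: every task handoff between agents carries a trace identifier and timestamp, and lands in an append-only audit log so human reviewers can reconstruct who delegated what to whom.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import uuid4

@dataclass
class AgentMessage:
    """A single task delegation between two agents, with trace metadata."""
    sender: str
    recipient: str
    task: str
    trace_id: str = field(default_factory=lambda: uuid4().hex)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditLog:
    """Append-only record of inter-agent messages for later review."""
    def __init__(self):
        self.entries = []

    def record(self, msg: AgentMessage):
        self.entries.append(msg)

def delegate(log: AuditLog, sender: str, recipient: str, task: str) -> AgentMessage:
    """Hand a task from one agent to another, logging the handoff."""
    msg = AgentMessage(sender, recipient, task)
    log.record(msg)
    return msg

# Illustrative agent names; any real deployment would use managed identities.
log = AuditLog()
msg = delegate(log, "planner-agent", "billing-agent", "reconcile-invoice-batch")
print(msg.sender, "->", msg.recipient, msg.trace_id)
```

The design choice worth noting is that the log is populated by the delegation function itself, not by the agents: accountability is a property of the communication channel, not of any individual agent's good behaviour.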

Security

As agent meshes and autonomous coordination become more common, security models must evolve to address these new risks. Permission frameworks are essential, ensuring each agent operates within clearly defined boundaries. Organisations must manage agent identities, authorise access based on roles or tasks, and enforce full visibility through logging and telemetry. The emergence of shadow AI, unsanctioned deployments initiated by business units, further expands the risk surface. Without central oversight, these agents may access and disclose sensitive data or act beyond their remit. Establishing discovery protocols, mandatory registration, and continuous monitoring is vital to ensure all agents operate safely within governance and security policy.
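The permission framework described above can be sketched in a few lines. This is a minimal, hypothetical model, not any particular product's API: agents must be registered centrally with an explicit set of allowed actions, every authorization decision is written to telemetry, and an unregistered (shadow) agent is denied by default.

```python
class AgentRegistry:
    """Central register of sanctioned agents and their permitted actions."""
    def __init__(self):
        self._permissions = {}   # agent_id -> set of allowed actions
        self.telemetry = []      # every authorization decision, for monitoring

    def register(self, agent_id: str, allowed_actions: set[str]) -> None:
        """Mandatory registration: only registered agents can act at all."""
        self._permissions[agent_id] = set(allowed_actions)

    def authorize(self, agent_id: str, action: str) -> bool:
        """Deny by default; log the decision either way."""
        allowed = action in self._permissions.get(agent_id, set())
        self.telemetry.append((agent_id, action, allowed))
        return allowed

registry = AgentRegistry()
registry.register("support-agent", {"read_ticket", "draft_reply"})

print(registry.authorize("support-agent", "read_ticket"))    # within remit
print(registry.authorize("support-agent", "delete_account")) # beyond remit
print(registry.authorize("rogue-agent", "read_ticket"))      # unregistered: shadow AI
```

Because the registry denies anything it has not been told about, discovery becomes a monitoring problem: repeated denied requests from an unknown agent ID in the telemetry stream are exactly the signal that a shadow deployment exists.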

Talent

Effective deployment depends on interdisciplinary collaboration. Product owners, data scientists, and infrastructure teams must jointly define where autonomy is appropriate, how outcomes are measured, and how to align agentic systems with existing workflows.

A sensible starting point is with co-pilot models, where agents assist by offering recommendations while humans retain the final decision. This allows organisations to monitor performance, adjust oversight mechanisms, and develop confidence in agentic systems before broadening their application.
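The co-pilot pattern above can be expressed as a simple loop. The agent logic and item names here are invented for illustration: the agent proposes an action for each work item, a human approval function makes the final call, and simple counters track the acceptance rate that would inform when to broaden autonomy.

```python
def agent_recommend(item: str) -> str:
    """Hypothetical agent: proposes an action for a work item."""
    return f"escalate:{item}" if "urgent" in item else f"auto-reply:{item}"

def copilot_process(items, human_approve, metrics):
    """Co-pilot loop: the agent recommends, a human retains the final decision."""
    decisions = []
    for item in items:
        proposal = agent_recommend(item)
        accepted = human_approve(proposal)      # human oversight on every step
        metrics["proposed"] += 1
        metrics["accepted"] += int(accepted)
        decisions.append(proposal if accepted else f"manual:{item}")
    return decisions

metrics = {"proposed": 0, "accepted": 0}
# A reviewer who, for now, trusts auto-replies but overrides escalations.
decisions = copilot_process(
    ["urgent: outage", "password reset"],
    human_approve=lambda p: p.startswith("auto-reply"),
    metrics=metrics,
)
print(decisions)  # ['manual:urgent: outage', 'auto-reply:password reset']
print(metrics)    # {'proposed': 2, 'accepted': 1}
```

The acceptance-rate metric is the point of the exercise: a sustained high rate in a given category is the evidence an organisation needs before delegating that category to the agent outright.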

Agentic AI is not hype. It is a natural extension of current capabilities. It requires a change in thinking, from software as a tool to software as an actor. Organisations that prepare early will be better placed to deploy responsibly and at scale.

Agentic AI will not displace human intelligence, but it will redistribute where it is needed most. It shifts the burden from execution to oversight, from repetition to reasoning, and from managing tasks to managing outcomes.
