Securing Agentic AI in the public sector and aerospace & defense

Agentic AI is reshaping mission-critical government and aerospace & defense operations, demanding new security, identity and governance models to ensure trust and resilience.
Arjun A. Sethi
Chief Growth Officer and Global Head, Public Sector, Aerospace & Defense and PE Practice
6 min read

Agentic AI is emerging as a defining technology for mission-critical sectors. It brings systems that can reason, learn and act independently to enhance decision-making and resilience. For government agencies and the aerospace & defense industry, this evolution presents both opportunity and responsibility.

In this article, based on a fireside chat I participated in at HCLTech’s pavilion during the 2026 World Economic Forum, I share my perspective on how leaders can secure Agentic AI with purpose, trust and accountability, drawing on developments in global standards, regulation and security practices such as the NIST AI Risk Management Framework and the EU AI Act.

Defining Agentic AI and its urgency

Agentic AI is an intelligent system that can act, learn from outcomes and collaborate with other systems or agents toward a goal. It marks an evolution from predictive and assistive AI toward autonomous decision-making, where systems no longer just recommend options but execute them.

In the public sector and in aerospace & defense, the implications are profound. These environments rely on rapid, accurate and secure decisions. Whether coordinating logistics, managing maintenance or supporting complex missions, Agentic AI can enhance performance and resilience by accelerating insight, reducing the latency between sensing and acting and supporting operations in demanding or contested environments.

However, as autonomy increases, so does exposure. When an intelligent system acts independently, its decisions carry weight for mission outcomes, public trust and international norms. Securing these systems from the start is essential to preserve safety, transparency and legitimacy. This imperative is reflected in emerging guidance such as NIST’s AI Risk Management Framework, which emphasizes trustworthy, accountable AI across sectors, and the growing body of work on Agentic AI security and governance in communities like OWASP.

Security and governance challenges

The first challenge is identity. When autonomous systems execute actions, they effectively become non-human entities with access rights and credentials. Managing these identities with precision is as critical as managing human access, especially as the number of machine and agent identities begins to far exceed the number of people in many organizations. Industry trends already show that machine identities are a fast-growing attack vector, requiring stronger lifecycle management and least-privilege controls.
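
To make this concrete, below is a minimal sketch of what a least-privilege, short-lived agent identity could look like. The AgentIdentity type, its fields and the scope names are illustrative assumptions, not a reference to any particular identity product.

```python
# Minimal sketch of a non-human (agent) identity with least-privilege
# scopes and a short credential lifetime. All names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                     # accountable human or team
    scopes: frozenset[str]         # explicit allow-list of permissions
    issued_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    ttl: timedelta = timedelta(hours=1)   # short-lived by default

    def is_expired(self) -> bool:
        return datetime.now(timezone.utc) > self.issued_at + self.ttl

    def can(self, scope: str) -> bool:
        # Deny by default: allowed only if explicitly scoped and unexpired.
        return not self.is_expired() and scope in self.scopes

# Example: a logistics agent limited to read-only inventory access.
agent = AgentIdentity(
    agent_id="agent-logistics-017",
    owner="supply-chain-ops",
    scopes=frozenset({"inventory:read"}),
)
assert agent.can("inventory:read")
assert not agent.can("inventory:write")   # least privilege: never granted
```

The design choice worth noting is the deny-by-default posture: an agent holds no rights except those explicitly granted, and even those expire unless renewed.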

The second challenge is governance. Agentic systems can learn, adapt and use external tools. Oversight must be continuous, not periodic. Every action should be traceable, auditable and explainable. This aligns with AI risk management practices that call for ongoing monitoring, documentation and clear lines of accountability across the AI lifecycle.
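
As an illustration of what traceable and auditable can mean in practice, the sketch below hash-chains agent action records so that after-the-fact tampering is detectable. The entry schema is a hypothetical example, not a prescribed standard.

```python
# Minimal sketch of a tamper-evident audit trail for agent actions:
# each entry records who acted, what and why, and links to the
# previous entry by hash so edits or deletions break the chain.
import hashlib
import json
from datetime import datetime, timezone

def append_audit_entry(log: list[dict], agent_id: str, action: str,
                       rationale: str) -> dict:
    prev_hash = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "rationale": rationale,   # explainability: why the agent acted
        "prev_hash": prev_hash,   # chaining makes tampering visible
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

audit_log: list[dict] = []
append_audit_entry(audit_log, "agent-logistics-017",
                   "reorder part PN-1042",
                   "stock below safety threshold")
```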

The third challenge is the expanded attack surface. In defense, agents may operate across operational and cloud environments, from edge devices to command-and-control systems. In the public sector, they may handle sensitive citizen data and cross-agency systems. Both require adaptive governance models that anticipate rather than simply react to risks. Recent work on agentic security highlights threats such as tool misuse, prompt injection, data exfiltration and “runaway” workflows when agents coordinate at scale.
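
Two of these threats lend themselves to simple mechanical guards. The sketch below combines a tool allow-list (against tool misuse) with a per-agent rate limit (against runaway workflows); the tool names and limits are hypothetical.

```python
# Minimal sketch of two guards on agent tool use: an allow-list and a
# sliding-window rate limit per agent. Names and limits are illustrative.
import time
from collections import deque

ALLOWED_TOOLS = {"search_docs", "read_sensor", "create_ticket"}
MAX_CALLS_PER_MINUTE = 30
_call_times: dict[str, deque] = {}

def guard_tool_call(agent_id: str, tool: str) -> bool:
    if tool not in ALLOWED_TOOLS:
        return False                  # tool misuse: not on the allow-list
    window = _call_times.setdefault(agent_id, deque())
    now = time.monotonic()
    while window and now - window[0] > 60:
        window.popleft()              # drop calls older than one minute
    if len(window) >= MAX_CALLS_PER_MINUTE:
        return False                  # runaway workflow: throttle the agent
    window.append(now)
    return True

assert guard_tool_call("agent-logistics-017", "read_sensor")
assert not guard_tool_call("agent-logistics-017", "delete_records")
```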

Rethinking Zero Trust for Agentic AI

Zero Trust has always been about verification. In the age of Agentic AI, that principle must extend from static entities and users to dynamic behavior. Every interaction, decision and transaction an agent performs needs to be validated in real time against policy, context and intent.

This requires granular control over what an agent can access, context-aware verification of its actions and continuous monitoring of its behavior. In public sector and aerospace & defense contexts, trust is inherently dynamic. It must be earned through each action, not assumed based on status or origin.

NIST’s guidance on Zero Trust Architecture describes a shift from perimeter-based security to protection focused on users, assets and resources. Agentic AI pushes this further: we need Zero Trust models that treat agents as active, adaptive actors whose permissions, sessions and tool use are continually evaluated. This includes verifying not only who the agent is, but what it is doing, where and why, and being able to halt, isolate or constrain it when behavior deviates from policy.
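
A minimal sketch of such a per-action check follows. The policy structure, context fields and thresholds are assumptions chosen for illustration, not a reference architecture.

```python
# Minimal sketch of a per-action Zero Trust gate: every agent action is
# checked against identity, context and recent behavior before it runs,
# and sustained deviation halts the agent rather than a single action.
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"
    HALT = "halt"   # isolate the agent, not just block the action

def evaluate_action(policy: dict, agent_scopes: set[str],
                    action: str, context: dict) -> Verdict:
    # 1. Identity: is the action within the agent's granted scopes?
    if action not in agent_scopes:
        return Verdict.DENY
    # 2. Context: for example, only act inside an approved mission window.
    if context.get("mission_window_open") is not True:
        return Verdict.DENY
    # 3. Behavior: repeated denials suggest drift from intent, so escalate.
    if context.get("recent_denials", 0) >= policy.get("halt_threshold", 3):
        return Verdict.HALT
    return Verdict.ALLOW

verdict = evaluate_action(
    policy={"halt_threshold": 3},
    agent_scopes={"inventory:read"},
    action="inventory:read",
    context={"mission_window_open": True, "recent_denials": 0},
)
assert verdict is Verdict.ALLOW
```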

Practical first steps to securing Agentic AI

The first step is visibility. Organizations must understand where autonomy exists today and where it is planned. That means identifying the data, systems and functions that agents touch, formally or informally, and mapping the workflows where agents are already influencing or executing decisions.

The second step is to establish governance early. Before scaling Agentic AI, it is critical to define who sets objectives for agents, who approves their actions and how exceptions are managed. This includes clarifying roles and responsibilities among mission owners, AI developers, security teams and compliance functions, in line with emerging AI governance and regulatory expectations.

The third step is to begin with controlled pilots that deliver value while allowing teams to test oversight frameworks. Examples include predictive maintenance or document processing: use cases with meaningful impact but bounded operational risk. These pilots should be designed not only to prove technical capability, but also to test logging, monitoring, human-in-the-loop review and fallback mechanisms.
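
One way such a human-in-the-loop gate with a fallback might be wired into a pilot is sketched below; the risk tiers and function names are hypothetical.

```python
# Minimal sketch of a human-in-the-loop gate: low-risk actions proceed,
# high-risk ones require explicit approval, and a safe fallback runs
# when approval is withheld. Action names are illustrative.
HIGH_RISK_ACTIONS = {"dispatch_crew", "ground_aircraft", "release_payment"}

def execute_with_oversight(action: str, request_review, fallback) -> str:
    if action not in HIGH_RISK_ACTIONS:
        return f"executed: {action}"       # bounded, low-risk: proceed
    if request_review(action):             # blocks until a human decides
        return f"executed after review: {action}"
    return fallback(action)                # safe default, e.g. defer

# Example wiring: a reviewer who rejects, and a deferral fallback.
result = execute_with_oversight(
    "ground_aircraft",
    request_review=lambda a: False,
    fallback=lambda a: f"deferred to human operator: {a}",
)
print(result)   # deferred to human operator: ground_aircraft
```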

Finally, organizations must invest in culture. Security, operations and AI teams need to work together. Securing Agentic AI is as much about coordination and shared accountability as it is about technology. Lessons from early deployments of AI in public services and defense already show that multi-disciplinary governance, transparent risk communication and training are essential to achieving both innovation and safety.

Measuring maturity and progress

Maturity in securing Agentic AI is defined by control and learning. Organizations should track how quickly they can detect and respond to deviations in agent behavior, how consistently actions are logged and reviewed, and how effectively they can limit impact when an agent acts outside its policy boundary.

This suggests metrics that combine security operations and AI governance: time to detect anomalous agent activity, proportion of high-risk agent actions subject to review, frequency of policy violations and recovery times when interventions are required.
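
As an illustration, the sketch below computes three of these metrics from a hypothetical agent event log; the event schema and field names are assumptions.

```python
# Minimal sketch of computing agent-security metrics from an event log.
from datetime import datetime

events = [
    {"type": "anomaly_detected", "occurred": "2026-01-10T08:00:00",
     "detected": "2026-01-10T08:04:00"},
    {"type": "high_risk_action", "reviewed": True},
    {"type": "high_risk_action", "reviewed": False},
    {"type": "policy_violation"},
]

def minutes_between(start: str, end: str) -> float:
    delta = datetime.fromisoformat(end) - datetime.fromisoformat(start)
    return delta.total_seconds() / 60

detect_times = [minutes_between(e["occurred"], e["detected"])
                for e in events if e["type"] == "anomaly_detected"]
high_risk = [e for e in events if e["type"] == "high_risk_action"]

print("mean time to detect (min):", sum(detect_times) / len(detect_times))
print("share of high-risk actions reviewed:",
      sum(e["reviewed"] for e in high_risk) / len(high_risk))
print("policy violations:",
      sum(e["type"] == "policy_violation" for e in events))
```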

Beyond metrics, success is about resilience. Systems should be able to explain, contain and correct themselves without compromising mission outcomes or citizen trust. This principle is reflected in AI regulations such as the EU AI Act, which imposes obligations around transparency, human oversight, risk management and post-deployment monitoring for high-risk AI systems, including many public sector and critical-infrastructure applications.

Sector-specific considerations

In the public sector, accountability and transparency are essential. Agents that make or influence decisions must be explainable and compliant with regulatory standards, particularly where they affect access to public services, benefits, law enforcement or fundamental rights. The EU AI Act classifies many public sector uses as “high-risk,” bringing stricter requirements for documentation, oversight and human control. Legacy infrastructure and diverse governance structures across agencies add complexity, but the potential for improved efficiency and responsiveness is significant if these systems are deployed with strong safeguards.

In aerospace & defense, assurance and certification are paramount. Systems operate in high-risk environments where reliability cannot be compromised. Agentic AI can enhance readiness, logistics and threat response, but every action must be predictable and verifiable within established safety, ethical and legal frameworks. Defense communities around the world are already articulating Responsible AI principles and developing implementation pathways to ensure that autonomy is aligned with mission values and international norms.

Across both sectors, autonomy must be balanced with oversight. Control and accountability must evolve together, ensuring that as agents gain more capability, organizations retain the ability to supervise, guide and, when necessary, override their actions.

Designing governance into autonomy

Looking ahead, I see three major shifts. The first shift will be that governance becomes part of design. Guardrails will be built directly into intelligent systems rather than applied externally, so that explainability, logging, policy enforcement and human override are intrinsic features of agent architectures, not afterthoughts.

The second shift will be the rise of connected agent ecosystems. Agents will collaborate across organizations and domains, which will demand interoperable standards and shared trust frameworks. Regulatory initiatives and voluntary codes of practice around AI already point toward a future in which cross-border, cross-sector cooperation on trust and safety becomes the norm.

The third shift will be the rapid growth of non-human identities. Managing and securing these digital actors will become a core capability for every organization, on par with securing human users and traditional infrastructure.

My advice is to embed security and trust into the architecture of autonomy. Do not treat them as afterthoughts. Agentic AI should be designed to enhance human judgment, not replace it. That principle will guide both innovation and responsibility.

Securing Agentic AI is about building systems that act with integrity, transparency and purpose. For the public sector and aerospace & defense industry, the challenge is significant but the opportunity even greater. By grounding innovation in trust and resilience, these sectors can lead the way in shaping how intelligent systems serve humanity with assurance and accountability.
