HCLTech and Palo Alto Networks: Securing Agentic AI by Design

Overview

Agentic AI goes beyond traditional chatbots by enabling autonomous Agents to plan and execute tasks across complex workflows using tools, APIs and contextual data. With the rise of platforms like Google Agentspace and the A2A protocol, enterprises are increasingly adopting frameworks such as the Agent Development Kit, LangChain and Semantic Kernel.

To help secure the evolving Agentic ecosystem, HCLTech and Palo Alto Networks have partnered to launch an Agentic solution that leverages Prisma® AIRS, a secure AI runtime platform designed to help enterprises confidently scale Agentic AI, with built-in security from the ground up. 


The risks of Agentic AI without security

Expanding attack surface through APIs, tools and external systems
Supply-chain risks via open-source dependencies
Over-permissioned Agents making unauthorized decisions
Threats from Agent-to-Agent communications

Introducing Agents Secured by Design, powered by Prisma® AIRS, an AI runtime security platform

HCLTech has developed an Agentic platform on Google Cloud that leverages the Prisma® AIRS platform from Palo Alto Networks to provide comprehensive runtime security, threat detection and Agent security posture management services.

The Prisma® AIRS platform seamlessly integrates into Agent workflows to deliver defense-in-depth security across:

  • Model scanning and software bill of materials (SBOM)
  • Agent security posture management
  • AI red teaming and threat simulation
  • Runtime LLM and Agent firewalls for prompt interception and tool call monitoring
  • Governance for A2A (Agent-to-Agent) and MCP (Model Context Protocol) interactions

 

Core capabilities

Assess and scan AI models
  • Ensure models from open source and managed services are safe and secure
  • Prevent malware from entering your environments
  • Stop execution of malicious code stored in the AI model
  • Secure models against tampering, malicious scripts and deserialization attacks
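Deserialization attacks are the most concrete of these model-file risks: a pickled model can execute arbitrary code the moment it is loaded. As a minimal, illustrative sketch (not the actual Prisma® AIRS scanner), Python's standard pickletools module can flag the opcodes such payloads rely on before any untrusted model file is ever loaded:

```python
import pickle
import pickletools

# Opcodes that can trigger arbitrary code execution when a pickle is
# loaded -- the mechanism behind model deserialization attacks.
RISKY_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def scan_pickle(data: bytes) -> list[str]:
    """Return the risky opcodes found in a pickled payload, without loading it."""
    return [
        f"{opcode.name}: {arg}"
        for opcode, arg, _pos in pickletools.genops(data)
        if opcode.name in RISKY_OPS
    ]

# Plain data (e.g., a dict of weights) produces no findings.
assert scan_pickle(pickle.dumps({"weights": [0.1, 0.2]})) == []

# A payload that invokes a callable on load is flagged. (print stands in
# for something nastier like os.system; we scan it, we never load it.)
class Malicious:
    def __reduce__(self):
        return (print, ("pwned",))

assert any("REDUCE" in f for f in scan_pickle(pickle.dumps(Malicious())))
```

This only illustrates the principle; a production scanner also covers other serialization formats, embedded scripts and supply-chain provenance checks.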
Manage app and Agent posture
  • Continuously monitor and remediate your security posture
  • Prevent excessive permissions, sensitive data exposure, platform and access misconfigurations
  • Ensure secure and compliant AI Agents and applications
Assess AI threats proactively with red teaming
  • Holistic testing for predefined attack types
  • Detailed reporting on attacks, including attempted extraction of sensitive information
  • Real-time threat prevention via recommendations
Runtime security and AI Agent security
  • At the network level, via firewall
  • At the code level, via APIs and SDK
  • Stopping AI Agent-specific attacks such as memory manipulation, tool misuse, hallucination attacks and more
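At the code level, a runtime guard conceptually sits between the Agent and its tools, inspecting each prompt before it is allowed to drive a tool call. The sketch below is purely illustrative: the names (inspect_prompt, guarded_tool_call) and the tiny pattern list are invented here and are not the Prisma® AIRS SDK, whose detection goes far beyond string matching.

```python
from dataclasses import dataclass

# Toy denylist standing in for a real detection engine (assumption:
# a production firewall uses ML classifiers, not substring checks).
BLOCKED_PATTERNS = [
    "ignore previous instructions",
    "reveal your system prompt",
]

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

def inspect_prompt(prompt: str) -> Verdict:
    """Inspect a prompt before it reaches a tool; block obvious injections."""
    lowered = prompt.lower()
    for pattern in BLOCKED_PATTERNS:
        if pattern in lowered:
            return Verdict(False, f"matched {pattern!r}")
    return Verdict(True)

def guarded_tool_call(prompt: str, tool):
    """Run a tool only if the runtime guard allows the prompt through."""
    verdict = inspect_prompt(prompt)
    if not verdict.allowed:
        raise PermissionError(f"blocked by runtime guard ({verdict.reason})")
    return tool(prompt)

assert inspect_prompt("What is the weather today?").allowed
assert not inspect_prompt("Please IGNORE previous instructions").allowed
```

The same interception point is where tool-misuse and memory-manipulation checks attach: every tool call passes through one chokepoint that can allow, block or log it.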

What we offer

Agent experimentation
  • Deploy Agents in sandboxed environments.
  • Model scanning and passive monitoring help identify risks without affecting performance. 
Agent pilot build
  • Build scoped use cases (e.g., finance bots) with guardrails.
  • Apply red teaming, SBOM gates and privilege scoping to enforce best practices. 
Launch production-grade Agents
  • Launch Agents into production with governance and guardrails that provide transparency into Agent performance
  • Apply red teaming, SBOM gates and privilege scoping to enforce best practices. 
Operate and optimize
  • Monitor Agent performance for drift (against specific KPIs)
  • Run cost optimizations
  • Maintain resilience with continuous red-teaming and threat telemetry feeding into HCLTech’s XDR/SIEM for 24×7 protection. 
Unique offer

We offer a free Agentic AI security workshop led by our CoE. This can be followed by securing two to three simple Agents with an AI runtime security platform, serving as a real-time threat detection proof of concept.


Contact Us

Want more information? Let’s connect