Governance gaps and guardrail failures: Managing security in the AI era

As AI evolves, leaders in cybersecurity and governance must ensure visibility, proactive security, collaboration, and strict guardrails to balance innovation with trust, compliance and ethics
 
4 minutes read
Nicholas Ismail
Nicholas Ismail
Global Head of Brand Journalism, HCLTech
4 minutes read
Share
Governance gaps and guardrail failures: Managing security in the AI era

In the evolving world of  (AI), there are many opportunities. However, there are also governance gaps, compliance challenges andvulnerabilities.

During a recent LinkedIn Live panel on the HCLTech Trends and Insights channel, Governance gaps and guardrail failures: Managing security in the AI eramoderated by Jeff Crume, Distinguished Engineer, Master Inventor, and Cybersecurity Architect at IBM, industry experts explored the critical steps organizations must take to protect themselves in the face of accelerating AI adoption.

Visibility first: Understanding AI’s expanding risk surface

AI adoption introduces a new and significantly expanded attack surface. As Crume put it, "AI [is] the new attack surface because that's in fact what it is. Every time we introduce a new technology, it's another window or door that bad guys can exploit."

Understanding the nuances of AI risk starts with visibility. According to Guy Shanny, Global Data and AI Security Leader at IBM, “Companies need to know what AI use they have across the company…because once they have this visibility, they can better protect themselves.”

Shanny emphasized the difference between various AI adopters, including takers, shapers and makers, each facing distinct security, compliance and governance challenges. AI users range from companies that merely consume AI services (takers) and those customizing and integrating AI (shapers) to those creating foundational AI models (makers). Each category faces unique risks, such as prompt injection attacks, sensitive data leakage, data poisoning and inherent biases.

Amit Mishra, Global Lead for Data Privacy and Security Lead at HCLTech, further pointed out a less obvious risk group, the “onlookers”, users within organizations who adopt AI technologies informally, potentially introducing risks unknowingly.

Mishra explained, “These are the folks who are not adding any value into that whole chain. However, they're the curious kind sitting in your enterprise and they're tinkering with the products. So, they are the one who are bringing in lot of this risk by just…irresponsibly, not necessarily with bad intention, but bringing in this kind of model.”

Striking the right balance: Innovation, trust and compliance

In the race toward innovation, organizations often perceive compliance and security as obstacles rather than enablers. Mishra noted a widespread perception that “either you innovate, or you comply,” emphasizing how critical it is for organizations to break free from this false dichotomy. Effective organizations, he says, balance compliance with innovation by providing clear guidance, offering practical tools and embedding risk management early.

As Crume elaborated, IBM's principles for trustworthy AI highlight critical governance pillars including explainability, fairness, robustness, transparency and privacy. These serve as a practical starting point for enterprises seeking to embed responsible practices into AI adoption.

Future-proofing security in the era of AI

Security measures must evolve alongside AI capabilities. Shanny laid out a clear framework for this: first, gain continuous visibility; then implement proactive security through "continuous monitoring, red teaming in the development phase, posture for the underlying infrastructure…and then have as a last line of defense the AI firewall."

Crume added, “I think [an AI firewall is] a powerful architecture because what's going to happen is a lot of people will try to build in all of those protections into each model separately, which will be difficult…versus separating this out into a firewall in front.”

The imperative of cross-functional collaboration

Mishra emphasized the critical role of cross-functional collaboration in managing AI risks effectively. He argues that despite their apparent complexity, “Most of these complex challenges have a very simple and a very logical way of doing it…that answer starts with the governance.”

Strong governance enables teams from different disciplines, such as security, compliance, ethics, and business, to work together more cohesively.

To foster this collaboration, Mishra suggested practical strategies, including clear executive mandates, simplifying compliance through automation, embedding security early in the development process, often called a “shift-left” approach, and facilitating cross-education or "cross-pollination" among teams to build mutual understanding.

Preparing for a powerful future: AI’s next frontier

When discussing the inevitability of AI's growth, Crume predicted that AI will have both "devastating and amazing" impacts on cybersecurity, highlighting threats like highly sophisticated phishing attacks and advanced prompt injections, but also defensive improvements like accelerated threat analysis.

Shanny warned specifically of AI's potential for making cyberattacks significantly more scalable, customized and challenging to defend against. “With AI, you can transform the different malwares at scale and…avoid the detectors, identifications [and] mechanisms,” he explained.

Yet Mishra offered some optimism: "When [AI] becomes 10 times powerful, we'll be exactly where we are today. Because the offense will be 10 times powerful and so will be the defense."

Perhaps the most urgent emerging risk discussed by the panel is autonomous AI agents. Crume cites the example of “EchoLeak,” an AI-driven zero-click email attack that could exfiltrate sensitive data with no human interaction.

“We can't train a user to do anything differently…because they didn't do anything in the first place,” warned Crume, highlighting the critical importance of tightly controlled AI agency.

 

HCLTech recognized as a Leader in Everest Group’s Managed Detection and Response Services PEAK Matrix® Assessment 2025

 

A governance revolution for a new AI reality

The era of AI demands a governance and cybersecurity evolution. Organizations must embrace AI innovation while diligently maintaining transparency, robust guardrails and effective collaboration across teams.

Ultimately, as Crume summarized, “We've got to make sure that our AI serves us and not the other way around.”

AI’s revolutionary potential is clear but securing that revolution is the new governance imperative.

Share On
_ Cancel

Contact Us

Want more information? Let’s connect