As the EU AI Act becomes a global benchmark for AI regulation, it’s clear that practical risk-based governance helps organizations accelerate AI adoption, build trust and realize measurable ROI. Ahead of her session at Gartner’s Symposium in Barcelona on November 10, Dr Heather Domin, Vice President and Head of the Office of Responsible AI and Governance at HCLTech, explores what it takes to turn regulation into advantage.
How the EU AI Act is reshaping global norms around AI governance
Domin calls this “an exciting time,” with the EU AI Act already “helping to shape global norms” far beyond Europe. She points to its extraterritorial pull: global firms “need to comply,” and standards bodies in other regions are “looking for how [they] can align with the EU AI Act.” In her view, the Act is becoming “a global benchmark” that, alongside other major regulations, standards and frameworks, helps multinational businesses set their risk-based north star. That convergence matters: with one consistent reference point, organizations can reduce duplication, simplify decision rights and scale use cases across markets without reinventing their governance each time.
AI governance implementation challenges
The harder problem is execution. Governance “touches virtually every area of the business,” says Domin, and “it’s not necessarily just owned by one group.” Legal, privacy, data governance and technology all have stakes, which means “that coordination often becomes a challenge.” Success depends on “really good communication,” clear metrics for value, and the ability to transform processes “at multi-organizational, multi-line-of-business scale.” On the technical side, there are “differences in how you apply AI governance principles across different platforms and using different tools,” so practices must travel across stacks. The result? Even committed organizations can struggle to turn principles into muscle memory unless roles are explicit and operating models evolve with them.
Risk-based classification of AI systems
“A risk-based classification is important. It’s absolutely the right approach,” says Domin. Crucially, it “puts the focus on the use case…versus the technology itself,” prioritizing oversight “where harm is most likely.” She calls out higher-risk scenarios, such as “surveillance or scoring mechanisms,” and contexts like “finance and healthcare,” where stronger controls are warranted. Yet maturity gaps remain. Because “all of this is so new” and standards are still being finalized, organizations “know where we need to be” but aren’t always ready to operationalize. Citing joint research HCLTech conducted with MIT, she notes that “only 15% of organizations felt really ready to implement controls.” The takeaway: frameworks aren’t the bottleneck; execution is. The priority has now shifted from agreeing on the risk taxonomy to operationalizing it: standardizing processes, tooling and accountabilities so compliance becomes repeatable at scale.
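To make the use-case-first idea concrete, here is a minimal Python sketch of what such a classifier could look like. The four tiers loosely mirror the Act’s categories, but the keyword lists and field names are illustrative assumptions, not a legal mapping.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # sensitive contexts, e.g. finance, healthcare
    LIMITED = "limited"            # transparency duties, e.g. user-facing chatbots
    MINIMAL = "minimal"            # everything else

@dataclass
class AIUseCase:
    name: str
    domain: str    # business context, e.g. "finance", "marketing"
    purpose: str   # what the system does, e.g. "credit scoring"

# Illustrative keyword lists only -- a real mapping needs legal review.
PROHIBITED_PURPOSES = {"social scoring", "untargeted biometric scraping"}
HIGH_RISK_DOMAINS = {"finance", "healthcare", "employment", "education"}

def classify(use_case: AIUseCase) -> RiskTier:
    """Classify by use case and deployment context, not by the model itself."""
    if use_case.purpose in PROHIBITED_PURPOSES:
        return RiskTier.UNACCEPTABLE
    if use_case.domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if "chatbot" in use_case.purpose:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

In this scheme, a credit-scoring use case in finance lands in the high tier because of its context, regardless of which underlying model powers it, which is exactly the point Domin makes.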
Practical steps for high-risk AI systems
If a system lands in the high-risk tier, Domin recommends starting with visibility: “a formal inventory…is usually step one,” mapping which systems fall in scope and what obligations apply. Next, run “formal risk classification and assessments” for each system to identify gaps. Expect foundational lifts: “quality management systems,” “technical documentation” and “data governance or human oversight protocols” may need to be instituted or strengthened. Depending on the obligation, teams should also prepare to “register the system or undergo conformity assessments,” and establish processes for “incident reporting and post-market monitoring.” The through-line is traceability: being able to show how a system was built, why it behaves as it does and how it is continuously managed.
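As a sketch of what that first step could look like in tooling, the snippet below models an inventory entry that maps a system to a working checklist of obligations and surfaces gaps. The checklist paraphrases the steps above; all names and fields are hypothetical.

```python
from dataclasses import dataclass, field

# Working checklist paraphrasing the steps above -- not a complete
# restatement of the Act's high-risk requirements.
HIGH_RISK_OBLIGATIONS = [
    "quality management system",
    "technical documentation",
    "data governance protocol",
    "human oversight protocol",
    "registration / conformity assessment",
    "incident reporting process",
    "post-market monitoring plan",
]

@dataclass
class InventoryEntry:
    system_id: str
    owner: str       # accountable business owner
    risk_tier: str   # output of the risk classification
    evidence: dict[str, str] = field(default_factory=dict)  # obligation -> doc link

    def gaps(self) -> list[str]:
        """Obligations with no recorded evidence: the gap-assessment output."""
        if self.risk_tier != "high":
            return []
        return [o for o in HIGH_RISK_OBLIGATIONS if o not in self.evidence]
```

Because every obligation must point at evidence, the same record doubles as the traceability trail: how the system was built, why it behaves as it does and how it is continuously managed.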
Embedding EU AI Act compliance into day-to-day operations
Governance sticks when it lives in the work. Practically, Domin advises establishing “some sort of cross-functional AI governance committee…[that] can help bring in that cross-functional perspective.” Leadership and ownership are essential: “People have to know what they have to do [and] what they’re responsible to do.” She suggests aligning to ISO/IEC 42001, the international standard for AI management systems, as a backbone for roles, processes and evidence. Just as importantly, compliance checkpoints should be “embedded…directly into the AI lifecycle,” from deciding “which AI systems you’re going to develop,” through to “development [and] deployment” and “monitor[ing] on an ongoing basis.” When guardrails are built into delivery pipelines, they guide teams rather than stop them.
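One way to read “embedded…directly into the AI lifecycle” is as gates in the delivery pipeline: each stage declares the evidence it requires, and work is blocked until that evidence exists. A minimal sketch, with stage names and checks that are illustrative rather than prescribed by the Act or ISO/IEC 42001:

```python
# Required checks per lifecycle stage; names are illustrative assumptions.
LIFECYCLE_GATES = {
    "design":      ["use-case risk classification", "data sourcing review"],
    "development": ["bias evaluation", "technical documentation draft"],
    "deployment":  ["human oversight sign-off", "conformity evidence"],
    "operation":   ["monitoring dashboard live", "incident process tested"],
}

def gate(stage: str, evidence: set[str]) -> None:
    """Block the stage until every required check has recorded evidence."""
    missing = [c for c in LIFECYCLE_GATES[stage] if c not in evidence]
    if missing:
        raise RuntimeError(f"'{stage}' gate blocked; missing: {', '.join(missing)}")
```

Wired into CI/CD, a failing gate reads like a failing test: it tells the team which artifact is missing rather than stopping delivery outright, which is how guardrails guide rather than block.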
Balancing innovation with regulation and governance
The fear that governance slows innovation is widespread and, Domin argues, misplaced. “What we’ve seen over and over again is that actually it’s the opposite,” she says. Done well, governance “drive[s] faster innovation and more successful implementations of AI over the long term,” increasing the odds that “any innovation…deliver[s] the ROI that you hope.”
The key is proportionality: “You don’t want to apply heavy controls for lower-risk systems,” but you do want the “appropriate level of governance for the risk level.” She draws a parallel with security: no one “would…deploy something without those checks,” because we’ve learned the risks of skipping them and the value of keeping them. Over time, she expects this to become “just a given,” and the innovation-versus-control debate to fade.
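Proportionality translates naturally into configuration: the control set grows with the tier, so low-risk systems are not burdened with high-risk ceremony. A brief sketch with placeholder control names:

```python
# Control depth scales with risk tier; the lists are placeholder examples.
CONTROLS_BY_TIER = {
    "minimal": ["inventory entry"],
    "limited": ["inventory entry", "transparency notice"],
    "high":    ["inventory entry", "transparency notice", "human oversight",
                "conformity assessment", "post-market monitoring"],
}

def required_controls(tier: str) -> list[str]:
    # Default unknown tiers to the heaviest set -- fail safe until classified.
    return CONTROLS_BY_TIER.get(tier, CONTROLS_BY_TIER["high"])
```

Defaulting unknown tiers to the heaviest control set keeps the scheme fail-safe until a proper classification is recorded.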
Future-proofing AI strategies for upcoming regulations
To stay ahead of emerging rules, Domin emphasizes adaptability and skills. “We have to remain adaptable,” she says, and cultivate “AI literacy” so the organization maintains a “growth mindset and continuous learning cycle.” As the technology evolves, “you need to learn about Agentic AI now,” and functions must “speak the same language.”
That cross-functional fluency is a precondition for effective governance and responsible adoption. Frameworks should be agile enough to evolve with new standards, using ISO/IEC 42001 as an operating system while adapting to regional expectations, such as the NIST AI Risk Management Framework in the US.
Finally, Domin stresses the value of strategic partnerships and participation in wide stakeholder discussions with policymakers, academics and industry peers. Staying engaged helps organizations spot what’s next and shape it.
Responsible AI: The path to scale
The legally binding EU AI Act marks a turning point where Responsible AI becomes the path to scale, not a detour from it. The playbook Domin outlines is refreshingly concrete: know your systems, classify risk, embed controls in the lifecycle, make roles explicit and invest in the shared language that lets functions collaborate. Do that work now, and compliance shifts from a cost center to a capability that accelerates trust, reduces rework and compounds ROI as regulations and technologies continue to evolve.
Want to find out more on how regulation can unlock innovation? Join Heather Domin at Gartner Symposium, Stage 1 in Barcelona on November 10. Details and session times are available here.