Winning with AI: Building future-ready organizations in the public sector

At WEF 2026, leaders from HCLTech, BCG, Invest India, ServiceNow and CrowdStrike outlined what sets AI leaders apart in the public sector, from trust and governance to operating models and scalable AI
Nicholas Ismail
Global Head of Brand Journalism, HCLTech
7 min read

Key takeaways

  • Scaling AI in the public sector depends on trust, governance and process redesign, not just deployment
  • Public-private collaboration must account for local context, including cultural nuance and sovereignty
  • Data foundations, security and resilience are prerequisites for sustained adoption and citizen confidence
  • Inclusion requires infrastructure, representative data and design choices that reduce complexity for end users
  • Future-ready workforces will need AI fluency alongside critical thinking, curiosity and collaboration skills

During a session titled Winning with AI: Building Future-Ready Organizations at HCLTech’s pavilion during the World Economic Forum, Arjun Sethi, Chief Growth Officer, Strategic Segments at HCLTech, hosted a discussion on what it takes to move from AI experimentation to execution at scale, particularly across government, public services and highly regulated environments. He was joined by Vladimir Lukić, Managing Director & Senior Partner, BCG, Nivruti Rai, MD & CEO, Invest India, Combiz Richard Abdolrahimi, Global Head of Government Affairs & Public Policy, ServiceNow and Algirde Pipikaite, Head, Cyber GRC, CrowdStrike.

Public-private partnership is a prerequisite, not a nice-to-have

The panel suggested that public sector programs don’t succeed through technology alone. They require governments and private sector organizations to “play together,” as Rai put it, because the risk landscape spans far beyond any single actor.

Rai framed the challenge by grounding AI in the familiar complexities of software: “there [are] bugs...there is bias...misinformation…and then there is security issue.” Her point was that AI amplifies known risks at speed and at scale, and as a result, demands shared accountability.

A second layer is context. “Any actuation that AI does has to have a cultural or sovereign nuance,” said Rai. The implication for public sector deployment is clear: as AI becomes embedded in systems that affect citizens, governance can’t be generic. It must reflect local norms, laws, and societal expectations.

Trust is the real contract, and it can be broken quickly

A consistent thread across the panel was that public sector AI adoption ultimately rises or falls on trust: the trust of citizens in their institutions and the trust of institutions in the systems they procure and deploy.

Pipikaite described this as protecting the “contract between the government and the citizen,” particularly because governments hold the most sensitive data: “tax records, financial records, health records.” The expectation is not only that services become faster, but that privacy, security and resilience hold firm, even under sustained pressure.

Abdolrahimi reinforced that the trust gap is also social, not just technical. “There’s a lot of fear, there’s a lot of skeptics,” he said, arguing that for AI “to flourish and really maximize outcomes,” industry needs to “bring consumers of these technologies into the fold…bringing governments, bringing businesses, bringing society together to…build…a common set of principles, standards.”

The practical point beneath both comments is that trust isn’t built through messaging; it’s built through repeated moments where systems work as promised and fail safely when they don’t.

The bottleneck isn’t AI capability. It’s how institutions work

When the conversation turned to what holds back execution, Lukić challenged a common pattern: AI is being deployed as a “silver bullet” rather than as a tool for redesigning how work gets done.

He described a scenario familiar to citizens in many countries: even when AI can perform checks in “milliseconds,” outcomes stay slow because of human review queues and layered processes. “If I apply for an ID, still takes me 20 days…although…all the background checks… can be done…milliseconds,” he said. The problem is not technical throughput; it’s the operating model.

His underlying diagnosis was blunt: “a lot of the tech firms first build a solution, try to push it…it works, but it never solve[s] the bottleneck.” The organizations that scale AI aren’t those that deploy the most tools; they are the ones willing to map workflows, change incentives and redesign service delivery end-to-end.

Policy and regulation can move faster if AI modernizes governance itself

On legal and policy constraints, the panel didn’t pretend the friction will disappear. Pipikaite acknowledged the reality directly: “Are there any issues? I wish there was none.” But she also argued there is a new possibility: AI itself can help governments and institutions keep pace.

“We can review contracts, new proposed policies [and] do comparisons in a matter of seconds,” she said, contrasting this with the previous reliance on an “army of paralegals” that could take weeks. In other words, the governance cycle can be accelerated, without removing human judgment, by automating research-heavy steps and freeing experts to focus on decisions, trade-offs and accountability.

India’s AI pathway: Infrastructure constraints, application strength and collaboration

Rai offered a detailed perspective on India’s trajectory, one defined by rapid application development, a push to strengthen infrastructure and a focus on building ecosystem capacity.

She began with energy. She offered a comparison: “India has 1.5 gigawatt of total installed data centers…it’s essentially one watt per person. In comparison, US has 70 watt per person.” That gap, she argued, shapes what can be deployed at scale and what reliability levels are feasible, especially as AI workloads demand more power and resilience.

Rai also pointed to government actions to support compute access: “[The] government has made access to about 28,000 GPUs,” and emphasized cost as a deliberate enabler: “the GPU usage per hour that the government is offering is much less [than what some hyperscalers] will offer. Why? Because the government wants an ecosystem to be built.”

On talent and implementation, she made a broader claim about India’s position in the global AI stack: “US is leading on infrastructure of AI…and I can tell you, India is leading on the application development of AI.” She tied this to the scale of software talent and the growth of AI work in India: “most of them are working on AI…most of them are building agents.”

But her conclusion wasn’t nationalistic; it was collaborative. In a world of fragmentation, she argued that countries face a choice between aligning with major blocs or pursuing a cooperative third path: “this third choice is the best one.”

Inclusion means outcomes, and outcomes require basics, representative data and simpler systems

When the panel turned to inclusion, Abdolrahimi framed the goal as making AI “truly accessible to all…from the farmers to the blue collars to the white collars to everyone.” He emphasized that “we should be focusing on the outcomes rather than the technology,” pointing to examples like “safe, clean drinking water,” where AI is a means, not an end.

At the same time, Lukić argued that inclusion depends on trust built through basics: if core records are hard to obtain or unreliable, citizens won’t trust AI-based services. “There are too many visible failures,” he said, and “people will just not trust it. They will not use it.”

Pipikaite added another enabling principle: simplicity. “Accessibility and equality will be achieved as well if we think about simplification,” she said, warning that over-complex systems exclude by default. She illustrated this through cyber operations: “on average, you have around 70 tools…there is no way any analyst…will be able to…understand workflows of 70 different tools.” Her takeaway: “Simplification and consolidation…is one of the answers.”

Workforce readiness: AI fluency plus durable human capabilities

The discussion closed on the workforce and what people need to thrive as AI becomes embedded in every sector.

Pipikaite returned to a theme she raised earlier: “critical thinking.” She defined it as “ability to solve problems…navigate complex issues,” and argued foundational skills still matter: “I don’t think we’re going to escape from math.” She also advocated ways to train the mind: “everyone in the world should play chess.”

Abdolrahimi emphasized a broader mindset: “curiosity and…experimentation,” and argued AI literacy must be practical, not abstract: “We should be promoting experimentation…promoting people to be curious.”

Rai’s framing was more direct: “100% of people should know how to use AI,” she said, because if you don’t use it, “you wouldn’t know what it can do…and…if you can be replaced by it.” She emphasized agency: if you can define goals and orchestrate work, “you will be the manager…the architect…and AI becomes your tool.” But “if you are the one who waits for instructions…you will be replaced.”

Lukić tied the thread together with a collaboration lens: learning to “sit in the room with people that have a different point of view” helps people recognize blind spots and detect when tools produce incomplete or biased outputs.

FAQs

What does it mean to scale AI in an organization?
Scaling AI means moving beyond pilots to embed AI into core workflows and operating models, supported by data foundations, governance, talent and change management so the impact holds at enterprise scale.

Why do many AI programs fail to deliver measurable impact?
A frequent issue is deploying AI without redesigning the end-to-end process. If delays are caused by review queues, incentives or fragmented data, AI speed won’t change outcomes unless the operating model changes too.

How can public sector organizations build trust in AI-enabled services?
Trust grows through secure data handling, transparency, resilience and repeated moments of truth, where services work as promised. Without reliable basics, like accurate records and accessible systems, citizen adoption remains low.

How can leaders make AI adoption more inclusive?
Inclusion depends on infrastructure, representative data and simplifying experiences so people can use the services, especially across populations with differing digital access, literacy or language needs.

What skills matter most in an AI-shaped workforce?
AI fluency is becoming a baseline, but durable capabilities, such as critical thinking, problem-solving, curiosity and collaboration, are differentiators, particularly the ability to apply judgment and orchestrate work with AI as a tool.
