Engineering the AI-native enterprise: From connectivity to platform innovation

AI is pushing enterprises to move beyond isolated pilots toward a more disciplined model built on workflows, data quality and governance
10 min read
Nicholas Ismail
Global Head of Brand Journalism, HCLTech

At a panel discussion hosted at HCLTech’s booth during MWC, the conversation centered on a question many enterprises are now confronting: what does it really take to become AI-native? Moderated by Anand Vardhan Priyadarshi, Global Sales Director, HCLTech, the discussion brought together Dr. Darren Shea, AI Telecom Leader, PwC; Adam Spearing, VP of AI GTM, EMEA, ServiceNow; and Ashish Gupta, VP, Strategic Partnerships, Circles, to explore how organizations are moving from AI ambition to operational reality.

What emerged was not a conversation about technology in isolation. It was a broader discussion about business design, governance, data readiness and leadership accountability. Across the dialogue, one theme stood out clearly: the challenge is no longer whether enterprises should adopt AI, but how to do so in a way that is scalable, coherent and trusted.

Why point solutions are no longer enough

One of the clearest messages from the discussion was that many organizations are still approaching AI in fragments. Rather than building a unifying strategy, they are often deploying tools in isolated domains in the hope of generating quick wins.

Shea captured the risks of that approach, saying many businesses have created “a wildflower garden” by sowing AI initiatives everywhere. “It looks beautiful, but it’s a bit of a mess,” he said. For Shea, the problem is not a lack of enthusiasm. It is a lack of structure. Many companies have executive mandates to use AI, demonstrate savings and show ROI, but in the rush to respond, they “snatch” at opportunities without first establishing where AI will create the most value.

His argument was that enterprises need to step back and look at the organization as a whole. That starts with understanding capabilities and processes, not just selecting tools. As Shea put it, many businesses discover at that stage that people have been doing the same tasks for years without anyone ever asking them to write down exactly what they do. Once that work is mapped, duplication becomes visible, inefficient tasks can be stripped out and roles can be redesigned around higher-value oversight.

Spearing reinforced the same point from a different perspective. He argued that organizations are operating at “two speeds”: trying to move very quickly in the short term while also needing a long-term strategic view. “AI is not a product, it’s a framework,” he said. That distinction is important. A product can be bought and deployed. A framework must be designed, governed and continually refined.

Together, those perspectives point to the same conclusion. AI maturity does not come from the number of use cases launched. It comes from whether those use cases fit into a wider operating model.

Why structure can accelerate progress

A recurring misconception addressed by both speakers was the belief that a more structured approach will slow innovation. In practice, they argued, the opposite is often true.

Spearing said organizations need to think carefully about “the governance, the data and the foundations,” because that is what underpins long-term success. He described the challenge as deciding “what technologies I’m going to allow into my environment, how do I stop the proliferation in the wrong way and how do I encourage the proliferation in the right way.” In that sense, governance is not a brake on progress. It is what prevents experimentation from becoming chaos.

“Structured doesn’t necessarily mean slow,” said Shea. In fact, he added, “it actually means fast, once you get everything in place.” That observation is central to the current enterprise debate. Too many organizations still treat speed and discipline as competing priorities. Yet the real risk lies in moving quickly without enough coherence to scale.

Gupta echoed that need for deliberate design from a delivery perspective, arguing that in some cases the most effective route is to “start with a clean sheet approach” rather than endlessly trying to untangle old complexity. For businesses trying to become AI-native, that mindset can help leaders distinguish between modernization that creates real readiness and modernization that simply prolongs technical debt.

The panel suggested a more useful model is one where businesses pursue short-term value while designing for horizon-level change. That means delivering incremental gains, but doing so inside a clear framework for data, risk and ownership. Enterprises that can balance those two horizons are likely to outperform those that chase isolated wins without building the foundation beneath them.

Data becomes the strategic battleground

If AI is the engine, then data remains the fuel. That point is hardly new, but the discussion made clear that in the age of Agentic AI, the quality of that fuel becomes even more critical.

“Data is the fuel,” said Spearing. “If you don’t feed it with high-fidelity data, it’s going to make silly decisions.” That warning reflects the shift now underway. In the first wave of Generative AI adoption, much of the concern centered on hallucinations. In the next phase, the concern is broader and more operational. Poor data does not just produce weak answers. It can produce weak actions.

For many large enterprises, particularly in telecom, this is a longstanding problem. Spearing described environments where organizations have “so much data” and “such a mess of that data,” with systems often “held together with sticky tape” for years. Previous efforts to rationalize those estates were often delayed or deprioritized. AI is now changing that calculus.

Gupta was equally direct about the consequences of poor data discipline. “If you don’t have that in order, then it’s garbage in, garbage out,” he said. He noted that many telecom environments have been built in a siloed way, with “a lot of bandages being put on top over multiple decades,” creating still more fragmentation. In his view, organizations need technology that is “purpose built,” where data is already structured and core capabilities are brought together “to give a clear, unified data in a simple, organized place.”

Spearing argued that now is the moment to put data squarely on the CEO agenda: “We have to sort our data, and that has to be the way forward.” That is a striking marker of how the conversation has shifted. Data modernization is no longer simply a technology issue. It is becoming a strategic precondition for AI adoption.

Shea echoed the same view, observing that many organizations are now using AI “as a forcing function for data.” In other words, AI is not just exposing weaknesses in the data estate. It is finally creating the urgency to fix them.

The enterprise runs on workflows, not silos

Another major theme in the discussion was the need to move beyond traditional functional structures and focus instead on workflows that cut across the business.

Spearing argued that many organizations misunderstand workflow transformation. Some processes are predictable and deterministic, and in those cases, AI may simply help them move faster. But the real transformation, he said, comes in “these non-deterministic workflows,” where conditions vary, exceptions arise and teams have historically had to escalate issues manually.

He illustrated the point with a complex onboarding example involving a large corporate telecom customer. In those situations, the challenge is not just processing tasks more quickly. It is enabling systems to synthesize information, test scenarios and surface choices when something unexpected happens. As Spearing explained, the aim is for the workflow engine to recognize when it has encountered something unfamiliar and then respond intelligently: “I’ve got stuck here. What do you want me to do? Here are our options. Do we go A or B, or have you got another idea?”

That capability matters because so much human effort today is spent on exception handling and firefighting. Spearing argued that if organizations can use AI to manage that complexity, they can redirect people away from daily operational disruption and toward business improvement. “How do we improve the business? How do we improve the quality of service? How do we change that customer experience?” he asked. That is where he sees the real executive-level opportunity.

Gupta extended that idea by shifting the focus from internal systems to customer journeys. “Life has moved on from a system of records to workflows, or system of actions,” he said. From his perspective, enterprises should start not with how systems are arranged, but with the decision process a customer is trying to complete. “The consumer is not thinking how my MarTech is reacting,” he said. “What the consumer is thinking is an entire decision cycle.” That means information, recommendations and execution all must move seamlessly across CRM, analytics, catalog, billing and service systems. As Gupta put it, the critical question is “what’s the customer workflow or decision process that you’re solving for” and then how the underlying systems come together to support it.

He also suggested that this shift may reshape enterprise structures. In one example, he described a mature organization where leadership had identified nine critical processes and assigned executive accountability for them across traditional silos. If a business truly runs on workflows, then end-to-end process ownership may become just as important as traditional departmental leadership.

Governance as the enabler of trust

For all the emphasis on transformation, the panel was equally clear that governance can’t be treated as an afterthought. As AI systems become more autonomous, governance becomes essential to trust, accountability and adoption.

Shea said governance remains “a very important topic” and one of the biggest concerns he hears from clients. He noted that the space is still evolving rapidly and that enterprises are learning more about agents “on a daily basis.” One concern in particular stands out: “agent drift.” As Shea described it, the fear is that “you set the agent up on day one to do something, and then by day 17, it’s doing something else.” That possibility, he said, “scares the life out of my clients.”

For that reason, most organizations still want humans in the loop. Shea described that caution as sensible. Businesses may like the idea of autonomous agents acting on their behalf, but many are not yet ready to let them run without oversight, especially in highly sensitive environments. In telecom networks, for example, he noted that operational teams are particularly cautious. “The network is super paranoid,” he said. “The fastest way to go from five nines to three nines is to release an agent in there.”

At the same time, Shea also pointed to the need for balance. Too much governance can undermine the value case just as surely as too little. As he put it, if a company removes five people from a team and replaces them with agents, it does not want those individuals to govern those agents. The challenge is to create controls that build confidence without negating efficiency gains.

Spearing made a complementary point earlier in the discussion when he said governance should not be reduced to a tick list exercise. In his view, effective AI governance begins with a more strategic question: what is the organization’s appetite for risk, and how will it manage that risk across the business? When governance is approached that way, it can become an accelerator rather than an administrative exercise.

Gupta added that trust has both an enterprise and consumer dimension. “We are the custodian of customers’ data,” he said, which adds “multiple layers of complexity and responsibility” to any deployment of agents or AI applications. His view was that organizations need to move progressively, using copilots and human intervention first, then expanding autonomy as confidence grows.

The human role is changing, not disappearing

The panel also addressed one of the most sensitive dimensions of AI transformation: its impact on roles, responsibilities and organizational structure.

Spearing predicted that enterprises will increasingly shift from traditional departmental design toward structures built around critical business processes. He argued that if a company were building a telecom business from scratch today, it would not organize it in the same way many incumbents have evolved over time. That creates an opportunity to rethink how work is owned and improved.

At the same time, Shea highlighted the anxiety that is already emerging in the middle layers of organizations. “There’s a lot of confusion in there. There’s a lot of angst,” he said. Many managers worry about whether they will lose their jobs, particularly if they do not consider themselves highly technical. In response, he said organizations need to spend more time on reassurance and explanation, helping employees understand what their future role could look like in a hybrid human-machine model.

For many, he suggested, the job may not be fundamentally different. “It’s just you managing a thing, not someone,” he said. The move toward AI-native operations may reduce some roles and redesign others, but it also creates a new management challenge: people supervising systems, exceptions and synthetic workforces rather than only human teams.

Gupta pointed to a similar need for internal education. AI may be mandated from the top, he said, but different functions still need to be “educated and trained on how to use that technology,” including when to trust it and when to question it. In that sense, organizational readiness depends not just on tools and governance but on building intuition across the workforce before businesses can rely more fully on agentic models.

This is why change management cannot sit on the sidelines of AI strategy. The technology conversation must be matched by investment in training, education and confidence-building across the enterprise.

A fundamental redesign is required

The discussion at MWC made clear that the AI-native enterprise will not emerge through scattered experimentation or technology deployment alone. It will be built through a more fundamental redesign of workflows, data strategy, governance and leadership accountability.

The strongest insight from the panel was that AI maturity is not defined by how many tools an organization launches. It is defined by whether those tools are aligned to a coherent operating model. That requires senior sponsorship, disciplined execution and a willingness to rethink long-established ways of working.

This remains an evolving space and few organizations have all the answers yet. But the direction is becoming clearer. Enterprises that pair ambition with structure, and experimentation with trust, will be better placed to move from AI activity to AI advantage.
