Application Management Services (AMS) are being reimagined in the age of GenAI and Agentic AI.
As new capabilities mature, leaders are rethinking operating models, workflows and the relationship between humans and “digital colleagues.” In a recent HCLTech Trends and Insights podcast, Kandasamy “Jam” Ramanujam, Senior Vice President, Digital Business at HCLTech, outlined why change is urgent, where to embed AI across the AMS value stream and how to keep trust, compliance and quality at the center.
Why application management must change, now
Ramanujam frames the urgency plainly: “the rate of change of technology has picked up really well in the last few years.” He notes that “it all started with GenAI…and now it is Agentic AI,” and that customers expect providers “to bring that into their environments so that the benefits…can be harvested really quickly.” That expectation forces a rethink of the operating model itself.
The shift he sees most consistently is from function-based operations to product-aligned operating models. As he put it, organizations investing in AI are far more likely (4x, according to HCLTech’s latest research) to maximize ROI when they “operate on the product-aligned operating model [rather than] the traditional operating model.” The reason: product-aligned AMS is anchored in business outcomes, including customer journeys, experience targets and cost-to-serve, rather than just systems and queues. Executives are asking to “improve the services that are being delivered [and] improve the experience…while keeping the costs low,” and that combination demands a model engineered for continuous learning and automation.
Make Total Experience the North Star
For Ramanujam, AMS must be led by Total Experience: “the experience that is delivered to the end customer…the business leaders and the business users…and the IT users.”
Traditionally, application management was optimized for technical metrics and root-cause fixes. Today, he argues for an experience lens: “imagine if you are able to look at it from an experience lens, [including] personas and journeys connected to the application.” When teams can see which journeys are breaking, “these teams are able to identify the breaks that are there in the experience that is being delivered to the various stakeholders” and route the right kind of change: quick fixes into product backlogs, larger remediations into change backlogs and “anything that needs a transformative change” to senior stakeholders for staged implementation.
GenAI accelerates this by turning telemetry into human-readable narratives of what’s happening, who’s impacted and how severe the experience hit is, so prioritization reflects real-world pain.
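As an illustration only, and not how HCLTech builds this, the sketch below shows one way telemetry that is already mapped to personas and journeys could be condensed into an experience-led narrative. The Signal fields and the commented-out call_llm endpoint are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Signal:
    """A hypothetical telemetry event already mapped to a persona and journey."""
    persona: str      # e.g., "retail subscriber"
    journey: str      # e.g., "online bill payment"
    metric: str       # e.g., "checkout latency p95 (ms)"
    value: float
    threshold: float


def build_narrative_prompt(signals: List[Signal]) -> str:
    """Condense raw signals into a prompt asking for a human-readable incident narrative."""
    bullet_lines = [
        f"- {s.persona} / {s.journey}: {s.metric} = {s.value} (threshold {s.threshold})"
        for s in signals
    ]
    return (
        "Summarize the emerging situation for a service lead. Describe what is happening, "
        "which personas and journeys are impacted, and how severe the experience impact is.\n"
        + "\n".join(bullet_lines)
    )


# narrative = call_llm(build_narrative_prompt(signals))  # call_llm stands in for any GenAI endpoint
```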
Consider HCLTech’s public case studies: GenAI-powered operations ensuring seamless service for millions of subscribers at a telecom company, and automation cutting downtime for an energy organization. Each shows how experience-centric AMS, supported by AI, pre-empts issues and shrinks the window from detection to resolution.
From pilot to production: instrument the end-to-end value stream
Scaling GenAI isn’t about bolting a chatbot onto an ITSM tool. Ramanujam recommends mapping the end-to-end AMS value stream and inserting AI where it creates measurable leverage. “We look at it through the end-to-end value stream…before a ticket is created…when a ticket gets created…assignment…automated or human resolution…and then it gets into a feedback loop.” At “each point in the value stream, there are opportunities [for] AI, GenAI and Agentic AI.”
- Ticket prevention (before a ticket is created): With full-stack observability, AI can spot emerging patterns before users feel pain. GenAI adds a plain-English summary (“a human readable description of a situation that is emerging”), while agentic approaches “help in identifying a root cause around what’s happening.” The result is fewer tickets and better sleep for incident commanders.
- Triage and assignment: When issues do arrive, natural language understanding classifies them, “identifying the intent behind it,” and routes them to the right resolver group, with confidence scoring and suggested playbooks.
- Resolution and execution: Where confidence is high, agentic workflows can propose and, in bounded cases, execute fixes. Ramanujam stresses the importance of thresholds “based on the confidence of the solution,” with automatic escalation to humans whenever those thresholds aren’t met (a minimal sketch of this routing logic follows the list). Agentic AI raises the share of resolutions that can be fully automated.
- The feedback loop: Problem management becomes faster and more rigorous. As he describes it, service leads can use GenAI to analyze problems, “look at what kind of problems are there, what are the opportunities for automation,” and, crucially, tie those opportunities back to the experience lens so the backlog reflects what hurts actual users.
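As a rough sketch of how that confidence-gated routing might look in code, the thresholds, function names and ITSM hooks below are illustrative assumptions, not HCLTech’s implementation:

```python
from dataclasses import dataclass

# Illustrative thresholds; real values would be governed per domain and tuned over time.
AUTO_RESOLVE_THRESHOLD = 0.90
SUGGEST_THRESHOLD = 0.60


@dataclass
class Classification:
    intent: str          # e.g., "password_reset"
    resolver_group: str  # e.g., "identity-ops"
    confidence: float    # 0.0-1.0 score from the classifier
    low_risk: bool       # True if the matched runbook is pre-approved for automation


# Stand-ins for ITSM/automation integrations; a real system would replace these.
def execute_runbook(ticket_id: str, intent: str) -> None:
    print(f"[{ticket_id}] executing pre-approved runbook for '{intent}'")


def assign_with_playbook(ticket_id: str, group: str, intent: str) -> None:
    print(f"[{ticket_id}] assigned to {group} with suggested playbook for '{intent}'")


def escalate_to_human(ticket_id: str, reason: str) -> None:
    print(f"[{ticket_id}] escalated for human review: {reason}")


def route_ticket(ticket_id: str, c: Classification) -> str:
    """Route a classified ticket: automate only when confidence and risk allow it."""
    if c.confidence >= AUTO_RESOLVE_THRESHOLD and c.low_risk:
        execute_runbook(ticket_id, c.intent)
        return "auto_resolved"
    if c.confidence >= SUGGEST_THRESHOLD:
        assign_with_playbook(ticket_id, c.resolver_group, c.intent)
        return "assigned_with_suggestion"
    escalate_to_human(ticket_id, reason="confidence below threshold")
    return "escalated"
```

The structural point is that confidence and risk, not enthusiasm for automation, decide whether an agent acts, suggests or hands off to a human.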
This stitched-in approach turns isolated pilots into an AI-infused operating rhythm.
What’s ready for agentic automation?
Ramanujam breaks readiness down by persona. For the service engineer, the flow is familiar: a ticket is assigned; the engineer works out what it is about, the most likely cause and the best available fix, and then executes. An agent can now shoulder much of that work: “identify the intent…identify what is the root cause, what is the best solution that is available to resolve it,” and, depending on autonomy thresholds, “the solution can be actually executed.”
To make this safe and scalable, he advocates explicit autonomy tiers: view-only recommendations, execute-with-approval and, for standardized, low-risk fixes, fully autonomous actions. The key is that “the thresholds…to identify the confidence is the one which drives the level of autonomy,” and “whenever the threshold is not met…the agent can get the human-in-the-loop,” seeking approval or allowing the engineer to override based on broader context. Pair this with least-privilege access and auditable trails, and you get speed without sacrificing control.
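One way to express those tiers, purely as an illustrative sketch, with tier names, parameters and the threshold value being assumptions rather than a prescribed model:

```python
from enum import Enum


class AutonomyTier(Enum):
    VIEW_ONLY = "recommend only"              # agent proposes; a human decides and acts
    EXECUTE_WITH_APPROVAL = "needs approval"  # agent acts only after explicit sign-off
    FULLY_AUTONOMOUS = "autonomous"           # reserved for standardized, low-risk fixes


def autonomy_for(confidence: float, standardized: bool, low_risk: bool,
                 threshold: float = 0.95) -> AutonomyTier:
    """Map solution confidence and change risk to an autonomy tier.

    The 0.95 threshold is illustrative; in practice each domain would set and
    audit its own thresholds through governance.
    """
    if confidence >= threshold and standardized and low_risk:
        return AutonomyTier.FULLY_AUTONOMOUS
    if confidence >= threshold:
        return AutonomyTier.EXECUTE_WITH_APPROVAL
    return AutonomyTier.VIEW_ONLY  # below threshold: human-in-the-loop by default
```

Whatever the tier, every action would still run under least-privilege credentials and leave an auditable trail.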
Working with digital colleagues: Collaboration patterns that stick
The human–agent relationship is not adversarial; it’s collaborative. “We see the agent as a companion [or] teammate who is doing some of the activities,” says Ramanujam. In monitoring, for example, teams are inundated with signals from dashboards and alerts. Agents can “help in processing some of this and then presenting it and prioritizing what is happening,” so humans focus on judgment, escalation and stakeholder communication.
He also points to specialized, role-aligned agents: when “the engineer needs to analyze a set of logs,” a Log Analyzer agent can “identify the different types of logs that need to be accessed…analyze the issue at hand…summarize…and present it to the engineer.”
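A minimal sketch of what such a role-aligned agent might do, with simple keyword matching standing in for the LLM-driven analysis an actual agent would use:

```python
from typing import Dict, List


def log_analyzer_agent(issue: str, sources: Dict[str, List[str]]) -> str:
    """Illustrative Log Analyzer flow: pick relevant log sources, scan them for
    entries related to the issue and return a short summary for the engineer."""
    # 1. Identify which log types need to be accessed (keyword match as a stand-in).
    relevant = {name: lines for name, lines in sources.items()
                if issue.lower() in name.lower() or name.lower() in issue.lower()}
    # 2. Analyze the issue at hand: collect lines mentioning errors or the issue itself.
    findings = [f"{name}: {line}"
                for name, lines in (relevant or sources).items()
                for line in lines
                if "error" in line.lower() or issue.lower() in line.lower()]
    # 3. Summarize and present to the engineer.
    if not findings:
        return f"No entries matching '{issue}' found across {len(sources)} log sources."
    return f"{len(findings)} relevant entries for '{issue}':\n" + "\n".join(findings[:5])


# Example: log_analyzer_agent("payment timeout", {"payments-api": ["ERROR: gateway timeout after 30s"]})
```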
The test of good collaboration is simple: fewer bridge calls, faster mean time to resolution, cleaner handoffs across shifts and no surprises for risk, audit or security teams.
Measurable impact: Productivity and quality
Ramanujam is clear about the payoff: “we are looking at this to be beneficial in two major areas. One is in improving productivity, and the second one is in terms of increasing the quality of the solution that is being offered.” In practice, those show up as reduced detection time, higher first-time-right rates, fewer priority incidents and more time returned to engineering. The case studies mentioned earlier underscore this arc: telecom operations that remain seamless for millions of subscribers thanks to GenAI-assisted assurance and an energy leader that cuts downtime with automation. Both point to a future where the best incidents are the ones users never notice.
And it’s compounding. “We are beginning to see significant benefits in both areas,” he says, adding that this will “continuously improve,” with “significant results” expected over time as models are tuned, runbooks are codified and autonomy grows in well-governed domains.
Catalysts to a new era
GenAI and Agentic AI are not add-ons to AMS; they are catalysts for a new operating model. By anchoring on Total Experience, instrumenting the end-to-end value stream and establishing clear autonomy tiers with human-in-the-loop guardrails, organizations can prevent more issues, resolve the rest faster and learn continuously.
As Ramanujam observes, the twin benefits are enhanced productivity and improved quality, achieved through digital colleagues working in concert with human experts to deliver reliable, experience-led services. A practical pathway is available: begin with user journeys and service level objectives (SLOs), select a small number of high-impact use cases, establish the necessary guardrails and scale incrementally. Executed effectively, application management becomes not merely AI-assisted but AI-infused, delivering measurable improvements for customers, business users and IT stakeholders.