Scaling AI responsibly: How human-centric learning will define the AI-native enterprise

As AI becomes more autonomous and agentic, enterprises must balance ROI, trust and adoption by redesigning systems where people and AI learn and operate together
9 min read
Nicholas Ismail
Global Head of Brand Journalism, HCLTech

Key takeaways

  • Scaling AI is a systems redesign challenge, not a tool rollout
  • ROI depends on timeframe: infrastructure is long-term, process redesign is near-term
  • Focus beats breadth: a small set of high-impact use cases builds credibility faster
  • Learning must be embedded in workflows to keep up with AI’s pace
  • Equity requires intentional design: language, accessibility and infrastructure matter

As AI evolves from a tool into more autonomous, agentic systems, enterprise leaders are being pushed to rethink what adoption means. It’s no longer enough to deploy models, run pilots or automate isolated tasks. Underneath the transformation sits a broader question: how do organizations redesign workflows, skills and trust so that humans and AI can operate as a coherent system?

That was the central theme of a conversation between David Treat, CTO at Pearson, and Vijay Guntur, CTO and Head of Ecosystems at HCLTech, during an interview with Wired’s Senior Business Editor, Louisa Matsakis, at the World Economic Forum in Davos.

Across education and enterprise technology, both leaders returned to the same idea: AI success depends less on the sophistication of the technology and more on how intentionally people, processes and learning are aligned around it.

Scaling AI responsibly

According to Treat, the CTO playbook is being reframed around systems design rather than technology deployment. In his view, the historic pattern of innovation first, training later doesn’t apply with AI. He described the shift as “a whole new way of thinking,” arguing that organizations must “totally rethink” implementation to be “much more systems oriented around the humans and the technology,” and design for adoption from the outset rather than treating it as an afterthought.

The urgency is amplified by how quickly new capabilities arrive. Treat noted that “the learning curve associated with new technology is [becoming shorter and steeper],” which shifts the challenge from one-time upskilling to continuous adaptation. This marks a critical inflection point: scaling Responsible AI becomes inseparable from building learning systems that evolve at the same pace as the technology itself.

Delivering ROI from AI investments

For Guntur, organizations should treat ROI as something that varies over time. Some investments, he explained, are inherently long-term: “Think about data centers,” he said, pointing to the scale of capital flowing into AI infrastructure. Those bets may take years to pay off.

Near-term returns, however, come from rethinking how work gets done. Organizations need to “not just tweak what you have and put automation into it,” but to “rethink the process and then use AI technology to make it more efficient.”

That distinction matters because it changes what leaders should expect and how they should measure success. Guntur has seen the “shortest” payback “within a year to 18 months” when companies redesign business processes and then apply AI to execute them better. But he reinforces Treat’s point that the returns don’t come from technology alone; organizations must bring people along: “You have to train them on the new processes and new ways of doing things [because] that is where you really start to get benefits.”

Where to invest first with AI

Organizations are better positioned when they build on the workflow and automation foundations laid over the past decade. Before deciding where to apply AI, it is critical to understand “the context of the workflow that you’re…looking to automate,” said Treat, because that context determines where AI reasoning can meaningfully step in. As companies move “from…business logic that we encoded in a very deterministic way to now taking advantage of AI reasoning,” workflow understanding becomes the foundation for deciding where automation should evolve into augmentation.

That thinking extends beyond models and processes to measurement and accountability. It is not enough to know what AI tools exist; organizations must understand “what the AI tools can do and be confident in that,” and be able to “assess it and prove it.” The same rigor applies to people. While leaders have long aimed to put “the right person on the right job,” the shift toward human-AI collaboration requires doing so “in a much more data grounded way,” aligning “human and agent and workflow,” and continuously asking, “How do we measure that? How do we monitor that?”

Early investment priorities aren’t only compute and platforms—they’re also the operational foundations that let humans and AI collaborate safely and measurably.

Why focus matters in AI deployment

If Treat focused on systems and foundations, Guntur focused on execution discipline. The biggest trap, he says, is starting with volume: enterprises collect “a laundry list of about 150 or 200 use cases.” That approach, in his view, is a mistake that produces disappointing results.

Instead, credibility is built through focus and impact. Guntur believes that organizations should focus on a “dozen or less use cases that are really impactful,” because that creates “more belief” and helps change take hold. He grounds this in a concrete domain where he sees immediate impact: software engineering. HCLTech, he explained, has built a platform to optimize the full chain “from a requirement that a product manager has to deploying it.” The reason is that enterprises aren’t only looking to get productivity from AI agents; they’re “thinking about their entire chain of how they build software in the company.”

This view represents a practical playbook: pick a small number of business-critical workflows, drive measurable outcomes and use those wins to create organizational momentum.

From R&D innovation to business impact

Turning innovation into business value is rarely limited by technical readiness. More often, it is constrained by how effectively organizations scale what emerges from R&D. Guntur described an innovation developed in an R&D lab that integrated enterprise systems such as SAP with real-time activity on the manufacturing floor, connecting physical AI with core business platforms. The harder part came after the build.

For that innovation to reach the business, people had to be equipped to sell it and use it. That meant “our sales team needs to be trained at scale,” as well as “training our people who are going to use this innovation in their work.”

Learning infrastructure, in this context, becomes a scaling mechanism. The “only way it can scale is if you can educate our teams on how to take that innovation to the market,” said Guntur. While some training can happen in person, much of it depends on the “ability to use learning systems to scale up,” enabling people to actively “have conversations on what that innovation is.”

The speed of change makes this even harder. AI skills relevant today might not be relevant after a “four-month period.” That compression creates demand for “a real-time education system and platform” that keeps pace with both R&D output and market change.

AI, trust and personalization in real-world learning

AI’s impact becomes most visible when it is placed directly in front of learners and workers, rather than confined to internal experimentation. Treat explained that Pearson moved quickly into customer-facing AI, saying “we were unafraid and raced straight to engaging with customers.” That decision created a feedback engine. If “AI is the new UI,” then interaction data from AI-enabled study tools and teacher workflows becomes a signal of real needs, enabling response on “a rapid development cycle.”

The outcomes are measurable. “85% of students that use our study prep tools in higher education are getting the grade they want or better,” said Treat, alongside “a 7% increase in actual grades and performance” when learners receive “a scaffolded learning science based AI interaction that pairs with the professor’s intent.”

Even usage patterns offer insight. The “highest hit rates between one and 3am” reveal that students need help “exactly when the professors are not around to give it,” and now “they can.”

The same logic applies inside enterprises. Treat described launching a “Communication Coach” that listens to meetings and provides feedback “geared towards their level, their role, their context.” The impact can be immediate. “I was shocked at how much it’s helping me,” he said. The pattern is consistent: build the right guardrails, but “get it out there,” because value accelerates when “the interaction” and “the feedback” loops are active.

Equity, access and Responsible AI adoption

As AI becomes more deeply embedded in education and work, questions of equity become unavoidable. Treat framed equity through personalization and learning science. The ability to personalize allows AI to reach people “in whatever context,” which is “wildly powerful,” especially as access expands. “If you have a smartphone, you can get access,” he said.

But he warned that AI can undermine learning if used carelessly. “It can’t be a teleportation tool,” he said, because if learners simply ask for answers, “then you’re not learning.” Responsible design means grounding AI in “a learning science-based approach, a Socratic Method,” using proven techniques that engage the brain.

Guntur extended the equity lens to structural barriers. Much educational content exists in only “two or three languages,” yet “one of the strengths of AI is to be able to deal with languages,” helping reach “a much wider population.” Accessibility is another frontier: “one in seven in the world has some kind of a disability,” and AI can support learners facing different challenges.

Infrastructure remains a constraint. With “over 25% of the world” still lacking internet connectivity, Guntur emphasized that “the governments have a role to play,” alongside enterprises and content creators, in expanding access.

He added: “Companies like Pearson [have done a] great job in building the content, [and now] we need to find [broader] distribution mechanisms [for a wider] target audience.”

The emergence of an AI-native learning and workforce ecosystem

Looking ahead, the conversation shifted from current implementation to how learning and work may change over the next decade. One of the broken models, said Treat, is the idea of stopping work for skills training, noting that people often rush through training without “actually engaging their brain.” Without struggle, “you’re not going to learn.”

The alternative is learning embedded into daily activity. In this model, organizations can “push micro-learning experiences into the flow” of the platforms people already use. AI can notice friction and suggest something lightweight, such as “this two minute video,” to help someone complete the task they are already working on.

Progress then becomes visible and portable. Skills can be tracked “in the form of credentials,” ideally moving “between organizations.” That portability supports career mobility, helping individuals “envision a career path” and understand “what to learn next.”

Zooming out further, Guntur predicted that AI will follow a trajectory like the internet, but faster. The internet has become embedded into almost every aspect of our lives, and AI will follow “[a] similar pattern in half the time.” Growth will span enterprise, consumer and physical AI, with “autonomous” systems and robotics becoming critical across industries.

The AI-native enterprise will be built on continuous learning

Across both enterprise transformation and education, the conversation returns to the same core truth: AI adoption isn’t a one-off deployment. It’s a system that blends workflows, human capability, feedback loops and trust.

Treat’s emphasis on embedding learning into the flow of work, and grounding AI in learning science, points to a future where skills develop continuously and credentials become more portable and meaningful. Guntur’s focus on timeframe-based ROI, high-impact prioritization and real-time education platforms shows how enterprises can operationalize that future without losing credibility or momentum.

As AI becomes integral to daily life, the organizations that win won’t just build better AI; they’ll build better systems for people to learn, adapt and thrive alongside it.

FAQs

1. How do CTOs scale AI responsibly in an enterprise?
By designing systems that integrate humans, workflows, learning and governance from the start, ensuring adoption, measurement and trust evolve alongside AI capability.

2. What drives the fastest ROI from AI investments?
Redesigning business processes and applying AI to improved workflows can deliver payback within 12-18 months, especially when paired with training and adoption support.

3. Why is focusing on fewer AI use cases important?
Starting with a small number of impactful use cases builds credibility, accelerates organizational belief and avoids disappointment caused by overextending across hundreds of initiatives.

4. What will AI-native learning look like in the future?
Learning will be embedded into the flow of work through micro-learning moments, personalized support, continuous feedback and portable credentials that track skills across roles and employers.

5. How can AI improve equity in learning and reskilling?
AI can reduce language and accessibility barriers, personalize learning to context and expand reach if combined with responsible learning science and infrastructure support like internet connectivity.
