During Sibos 2025, Alan Trefler, Founder and CEO of Pegasystems, discussed HCLTech's recently launched “Future of Payments: AI everywhere. Trust nowhere?” research and what “agentic” really means when applied to payments. The conversation ranged from how to make autonomy predictable, to concrete use cases with Swift, to why measuring AI's impact may already be a settled debate.
The backdrop from the report is that nearly every payments leader is experimenting with AI, with 99% using it in some part of operations or user experience. Yet trust and readiness gaps loom large.
What makes a payment “agentic”?
Trefler anchored on autonomy: the goal is for payments to “simply flow without manual stops,” with checks handled automatically. He described agents as a means to “self-drive” the movement of money. The research reflects this pivot: over half (52%) of organizations expect to operate as autonomous financial services firms within 24 months, and 17% say they already are.
How do banks get to autonomy, fast and safely?
“Autonomy in a regulated, highly sensitive, mission-critical environment,” said Trefler, requires bringing automation “in line with good and reliable practices.” Crucially, banks must be able to explain to customers, staff and regulators how autonomy will operate “completely predictably.” That emphasis on explainability mirrors industry concern: 91% of executives worry about the risks of applying AI in payments, and 47% lack formal AI policies, leaving governance as a priority task.
High-impact use cases today
Pointing to joint work with Swift, Trefler cited “beautiful agentic use cases”, including payment retractions to resolve incorrect instructions and the handling of inter-bank queries. “Things that used to have to go to a person can now be handled agentically and completely reliably.”
These use cases align with what leaders expect autonomy to impact most in the next two years: real-time fraud detection and resolution (51%), intelligent payment routing (47%) and automated compliance/reporting (47%).
Measuring AI’s value: Is the debate over?
“I don’t think any of our clients doubt the value,” said Trefler. He sees benefits as “self-evident,” such as speed, accuracy and greater autonomy, but warned that debating whether the uplift is 11% or 17% misses the wave the industry is riding.
That perspective mirrors adoption patterns: AI is already embedded across payment optimization, dispute handling and personalization, and 82% of executives say AI is the only viable way to balance seamless experience with fraud prevention, even as 60% still judge current fraud-protection tools more ineffective than effective.
Best practices for scaling AI: Workflows, not prompts
“We believe in workflows, not prompts,” said Trefler. Relying on large language models to “magically figure out the workflow in real time,” he said, undermines predictability because LLMs are sensitive to context and small data variations. Instead, use AI at design time to help teams craft and review the collection of workflows; at runtime, use LLMs narrowly “for translating language, for semantics” to select and execute the right, pre-approved workflow.
That split — creative reasoning in design and constrained semantics in production — reduces hallucination risk and preserves control.
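To make the runtime half of that split concrete, here is a minimal sketch, assuming a hypothetical classify_intent() LLM call and an illustrative workflow catalogue (none of these names are Pega's actual API): the model's only job is to map a message onto one of the pre-approved workflows, and anything outside the catalogue is escalated to a person.

```python
# Minimal sketch of "workflows, not prompts": the LLM only classifies an
# incoming request against a fixed catalogue of pre-approved workflows.
# All names, including classify_intent(), are hypothetical illustrations.
from typing import Callable

def retract_payment(request: dict) -> str:
    # Deterministic, pre-approved workflow logic would live here.
    return f"Retraction initiated for {request['payment_id']}"

def answer_interbank_query(request: dict) -> str:
    return f"Query {request['query_id']} routed to the standard response workflow"

# The catalogue is fixed at design time and reviewed by humans.
APPROVED_WORKFLOWS: dict[str, Callable[[dict], str]] = {
    "payment_retraction": retract_payment,
    "interbank_query": answer_interbank_query,
}

def classify_intent(message: str) -> str:
    """Placeholder for a narrow LLM call that maps free text to one of the
    approved workflow keys (semantics only, no free-form planning)."""
    return "payment_retraction" if "retract" in message.lower() else "interbank_query"

def handle(message: str, request: dict) -> str:
    intent = classify_intent(message)
    workflow = APPROVED_WORKFLOWS.get(intent)
    if workflow is None:
        # Anything outside the approved catalogue escalates to a person.
        return "Escalated for manual review"
    return workflow(request)
```

The design choice the sketch tries to show is that predictability comes from the catalogue, not the model: swapping the LLM changes only how well intents are recognized, never which actions the system is allowed to take.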
The right AI at the right phase
Extending this theme, Trefler separated AI into two roles. In the design phase, AI helps teams reason creatively and shape the set of approved workflows. In production, AI is constrained to narrow, semantic tasks, such as interpreting messages or extracting fields, while the system executes the pre-defined workflows deterministically. Rather than letting an LLM decide end-to-end actions, agents are invoked only through this orchestration layer. The approach aligns with industry priorities around transparency, data protection and maintaining customer trust.
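The field-extraction role can be illustrated the same way. The sketch below assumes a hypothetical extract_fields() LLM call and an illustrative schema, not any vendor's interface: the model only produces structured fields, which are validated strictly before a deterministic workflow acts on them.

```python
# Sketch of constraining the model to semantics in production: a hypothetical
# extract_fields() LLM call pulls structured fields out of a free-text message,
# the result is validated against a strict schema, and only then does the
# deterministic workflow run.
from dataclasses import dataclass

@dataclass
class RetractionRequest:
    payment_id: str
    reason: str

def extract_fields(message: str) -> dict:
    """Placeholder for a narrow LLM extraction call; in practice this would
    return JSON constrained to the RetractionRequest schema."""
    return {"payment_id": "PMT-1042", "reason": "incorrect beneficiary"}

def validate(fields: dict) -> RetractionRequest:
    # Hard validation: reject anything the schema does not allow.
    allowed = {"payment_id", "reason"}
    if set(fields) != allowed:
        raise ValueError(f"Unexpected or missing fields: {set(fields) ^ allowed}")
    return RetractionRequest(**fields)

def run_retraction(req: RetractionRequest) -> str:
    # Deterministic, auditable workflow step.
    return f"Retraction of {req.payment_id} submitted ({req.reason})"

print(run_retraction(validate(extract_fields("Please retract PMT-1042, wrong beneficiary"))))
```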
Ecosystem partnerships: Weaving an agent fabric
Asked about partnerships, Trefler looked to an “agent fabric”: the ability to orchestrate agents from Pega, partners like HCLTech and, importantly, customers, within workflow guardrails, so the system achieves outcomes and prevents risks. That orchestration mindset is timely as organizations push ahead: 54% view AI agents as a long-term strategic investment, yet only 18% feel fully prepared to deploy secure agent-pay capabilities today.
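As one way such a fabric might look in code, here is a hypothetical sketch: an AgentFabric orchestrator through which agents from any provider must be invoked, so every call passes the same guardrails and lands in the same audit trail. The provider, agent names and interfaces below are illustrative assumptions, not a description of Pega's or HCLTech's products.

```python
# Hypothetical sketch of an "agent fabric": agents from different providers
# register with one orchestrator and are invoked only through it.
from typing import Protocol

class Agent(Protocol):
    name: str
    allowed_actions: set[str]
    def run(self, action: str, payload: dict) -> dict: ...

class FraudCheckAgent:
    name = "partner_fraud_check"
    allowed_actions = {"score_transaction"}
    def run(self, action: str, payload: dict) -> dict:
        return {"risk_score": 0.12}  # stand-in for a partner-hosted model

class AgentFabric:
    def __init__(self) -> None:
        self._agents: dict[str, Agent] = {}
        self.audit_log: list[tuple[str, str]] = []

    def register(self, agent: Agent) -> None:
        self._agents[agent.name] = agent

    def invoke(self, agent_name: str, action: str, payload: dict) -> dict:
        agent = self._agents[agent_name]
        if action not in agent.allowed_actions:
            # Guardrail: agents can only do what the workflow has approved.
            raise PermissionError(f"{agent_name} is not approved for {action}")
        self.audit_log.append((agent_name, action))
        return agent.run(action, payload)

fabric = AgentFabric()
fabric.register(FraudCheckAgent())
result = fabric.invoke("partner_fraud_check", "score_transaction", {"amount": 950})
```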
From ambition to dependable autonomy
The conversation highlighted a practical path from experimentation to dependable autonomy: define explainable workflows, use AI where it’s strongest and stitch together a partner-rich agent fabric under robust governance.
The urgency is clear: 87% of leaders fear losing customers without instant capabilities, while only 20% have modernized, cloud-native, real-time data systems to support the future they envision. The next two years will belong to firms that close that execution gap, turning agentic promise into predictable, trusted outcomes across the payment lifecycle.