Engineering intelligence at the physical edge

As AI moves into vehicles, appliances and industrial systems, the next challenge is scaling physical intelligence safely, reliably and in real-world conditions
7 min 50 sec read
Anurag Jain
VP, Global head of AI Engineering, HCLTech

Key takeaways

  • Physical AI is reaching an inflection point as intelligence moves into real-world systems
  • The biggest challenge is not the model alone, but scaling across data, systems and environments
  • Safety, trust and reliability matter more at the edge than they do in purely digital AI
  • Simulation, guardrails and full-stack engineering are becoming essential to deployment
  • Long-term success will depend on repeatable platforms, skilled talent and clear business outcomes

As AI evolves from cloud-based insights to real-world actions, enterprises are facing a new challenge: how to deploy intelligence at scale in physical environments.

That was the focus of a recent episode of AI Talk, hosted by Kevin Craine, which explored how organizations are engineering edge-native AI systems that can operate autonomously in the real world. The discussion looked at how successful pilots can be transformed into industrial-scale deployment across industries, with a particular focus on industrializing Physical AI as the bridge between digital intelligence and real-world impact, and on the convergence of edge AI, Generative AI and autonomous agents.

The promise is compelling. In factories, vehicles, appliances and other connected systems, AI at the edge could improve safety, lower latency, increase efficiency and unlock entirely new user experiences. But as the panel made clear, turning that promise into production reality is much harder than moving a software model into the cloud. Physical AI doesn’t live in a clean digital environment. It has to work in messy, regulated and safety-critical conditions where failure has immediate consequences.

That is why the conversation around Physical AI is shifting. The real question is no longer whether the technology is exciting. It is whether enterprises can industrialize it at scale, with the reliability, trust and system integration required for the physical world.

Physical AI is hitting an inflection point

The panel began with a broader question about where Physical AI fits in today’s technology landscape. Anurag Jain, Vice President and Global Head of AI Engineering at HCLTech, explained that “Physical AI is at a pivotal point” within the broader wave of advanced AI, especially as edge capabilities, robotics, mechanical systems and electrical systems begin to converge with software intelligence.

That is what makes this moment different. AI at the edge is no longer just a computer science problem. It increasingly depends on how software, silicon, machines and operational workflows come together in the real world.

Fabio Albanese, Head of Appliance Engineering Platform at Electrolux Group, reinforced that point by contrasting Physical AI with purely software-based AI. “In consumer AI, 90% accuracy can be really impressive,” he said. “But in [a] physical product, 99.9% might not be okay.” In other words, the tolerance for error changes completely once AI moves from a digital setting into a machine that affects safety, performance or physical behavior.

Pontus Fontaeus, Executive Design Director at GAC R&D, brought in another dimension: trust. In many everyday digital experiences, failure is inconvenient but manageable. At the physical edge, that is no longer true. “How much do you trust?” he asked. Whether in an autonomous vehicle, a personal robot or a medical context, users are placing themselves “at the mercy of that system.” That makes trust not just a user experience issue, but a core adoption barrier.

The pilot-to-production gap is still the hardest problem

Despite strong momentum, the panelists were clear that moving from pilots to scaled deployment remains one of the biggest barriers in Physical AI.

Jain framed this around three dimensions: embedding intelligence to create differentiation and new revenue models; evolving platforms through data and contextual intelligence; and delivering a more “human centric” intelligent experience that supports autonomy and operational efficiency.

But in practice, the engineering trade-offs are difficult. Jain said the “number one engineering challenge” in recent projects has been balancing accuracy and optimization. In some cases, model optimization has to be sacrificed to meet the accuracy threshold required in the field. In more critical environments, such as safety-sensitive operations, “there’s no chance of error.”

That challenge becomes even more visible when a solution leaves the lab. Jain described one port safety deployment where the model performed well in testing but struggled in the real environment. The root cause was not the model itself, but poor video input quality caused by aging network cables. “No matter how good the model was, it was not able to give you the right outcomes,” he said.

The lesson was simple and important: Physical AI is “not plug and play.” Enterprises have to think beyond the model and address the wider ecosystem, including data quality, infrastructure, business processes and legacy systems.

Albanese made the same point from the appliance industry. In his view, the biggest obstacle is not the core technology, but “the integration with the legacy system.” In physical environments, even a small failure can lead to “dangers,” “injuries” and real-world consequences. That is why nobody has fully “cracked the code” on scaling Physical AI safely and reliably.

Data, systems thinking and full-stack capability matter most

A recurring theme throughout the discussion was that Physical AI succeeds or fails on the strength of the surrounding system.

Jain argued that enterprises need to focus much more on the data pipeline. “If your data strategy is not right, no matter how good you do, the downstream is not going to work,” he said. In field deployments, data quality remains one of the most persistent and underestimated problems.

He also stressed the need for engineers who can work across the full stack. Physical AI “requires skills from the chip to the business logic,” he said. That includes silicon strategy, middleware, edge deployment, business context and end-user requirements. Organizations cannot simply assemble a few AI specialists and expect industrial-scale deployment to happen in weeks.

Albanese described a similar architectural challenge in connected appliances. For him, the future depends on combining traditional models, sensors, edge AI and cloud AI, each playing different roles within a broader system. The key is to “close the loop between what a system senses, what it decides, and how it acts in the physical world.”

That means separating fast-moving intelligence from slower, more regulated control layers. Model development can move quickly through training, simulation and digital twins, but the “safety envelope” around physical behavior must move more carefully. In his view, enterprises should design systems where AI can recommend or optimize, while a more traditional control layer validates action against constraints, guardrails and fail-safe states.
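The guardrail pattern described above can be sketched in a few lines. This is a hypothetical illustration, not any panelist's implementation: the action fields, limits and fail-safe values are invented for the example. The point is that the hard constraints live in a simple, slow-changing control layer, not in the model.

```python
from dataclasses import dataclass

@dataclass
class Action:
    motor_speed: float   # e.g., RPM requested by the AI optimizer (hypothetical)
    temperature: float   # predicted operating temperature in °C (hypothetical)

# A known-safe fallback state owned by the control layer.
FAILSAFE = Action(motor_speed=0.0, temperature=20.0)

# Hard limits of the "safety envelope" -- illustrative values only.
MAX_SPEED = 1200.0
MAX_TEMP = 85.0

def validate(proposed: Action) -> Action:
    """Accept the AI's proposal only if it stays inside the safety
    envelope; otherwise revert to the fail-safe state."""
    if proposed.motor_speed <= MAX_SPEED and proposed.temperature <= MAX_TEMP:
        return proposed
    return FAILSAFE

# The AI layer may recommend an aggressive setting...
risky = Action(motor_speed=1500.0, temperature=70.0)
# ...but the control layer clamps it before anything physical happens.
safe = validate(risky)
```

The AI layer can be retrained and redeployed frequently without touching `validate`, which is the piece that would be certified against regulatory constraints.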

Simulation-first development is becoming essential

The panel also made it clear that Physical AI can’t be developed through real-world trial and error alone.

Albanese emphasized the importance of digital twins, simulation and staged validation before systems are allowed to operate physically. This is partly about speed, but even more about safety. In regulated sectors, it is critical to test and pre-certify components before they are integrated into production systems.

That same logic is playing out in automotive design, though Fontaeus noted that the industry is still early in fully embedding AI into the process. He spoke about how AI already helps design teams move faster from ideation to visual storytelling, allowing concepts to be animated, simulated and communicated more effectively. But when it comes to edge intelligence in vehicles, he returned to the same concern: whether development timelines are moving faster than organizations can properly validate reliability and user experience.

As vehicle cycles compress from years to much shorter windows, the challenge is not just technical. It is about how fast industries can test, certify and build trust into systems that people rely on every day.

Fontaeus also pointed to the human side of adoption, arguing that “we are still failing on how we interact with machines.” His point was that Physical AI will not succeed on technical capability alone. It must also create experiences that feel intuitive, trustworthy and usable in everyday life.

Convergence is real, but roles must stay clear

The final theme in the discussion was the convergence of edge AI, Generative AI and autonomous agents.

Jain described this as a “very unique and powerful combination” across the pillars of AI engineering. Physical AI brings capabilities in vision, robotics and edge intelligence. Generative AI contributes cognitive reasoning, content generation and hyper-personalization. Agentic systems add orchestration and reuse across increasingly complex workflows. But Jain argued that real value will only come when these pieces are combined with compliance, responsibility and regulatory discipline to create “a holistic impact.”

Albanese offered a practical example of what that convergence could look like in appliances. Edge AI would manage real-time control, sensing vibration, imbalance or temperature drift and making fast local adjustments. Cloud and Generative AI would help interpret behavior, explain anomalies and analyze patterns across product fleets. Autonomous agents could then make higher-level decisions, such as recommending different usage patterns or alerting users when behavior diverges from normal.

The common principle is separation of concerns. Real-time control belongs close to the device. Higher-level interpretation and long-cycle optimization can happen elsewhere. That division is likely to shape how enterprises architect Physical AI systems in the years ahead.
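This division of labor can be sketched as a minimal control loop. All names and thresholds here are invented for illustration: the edge layer reacts immediately to a vibration reading, while readings that exceed an anomaly limit are queued for the slower cloud or agent layer to interpret.

```python
# Illustrative thresholds -- not from any real appliance.
EDGE_VIBRATION_LIMIT = 0.5   # fast local correction above this
ANOMALY_LIMIT = 0.8          # escalate to the cloud layer above this

def edge_control(vibration: float, drum_speed: float) -> tuple[float, bool]:
    """One pass of the edge loop: return (adjusted_speed, escalate).

    The speed adjustment happens locally, with no network round trip;
    only the decision to escalate involves the cloud layer."""
    escalate = vibration > ANOMALY_LIMIT
    if vibration > EDGE_VIBRATION_LIMIT:
        drum_speed -= 100.0  # immediate, fixed-step local adjustment
    return drum_speed, escalate

# Readings forwarded for fleet-level analysis by cloud AI / agents.
anomaly_queue = []

for reading in [0.2, 0.6, 0.9]:
    speed, escalate = edge_control(reading, drum_speed=1000.0)
    if escalate:
        anomaly_queue.append(reading)
```

Only the third reading crosses the anomaly limit and is escalated; the second is handled entirely at the edge. In a real system the queue would feed a cloud pipeline that explains the anomaly and recommends usage changes, as Albanese describes.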

Controlled intelligence will define the winners

The discussion pointed to a clear conclusion. Physical AI is moving from possibility to practical deployment, but success will not come from model innovation alone.

It will come from designing systems that can operate safely in messy environments, integrating with legacy infrastructure, maintaining the right data quality and balancing speed with trust. It will also require enterprises to invest in the right talent. Jain highlighted the need for “the human in the loop” and for stronger AI talent with systems knowledge and domain expertise.

Fontaeus suggested that the industry should think more openly about collaboration, arguing that AI could help create “a global ecosystem” if companies stop trying to reinvent everything in isolation. That kind of openness may become increasingly important as Physical AI matures across industries, platforms and markets.

Perhaps the clearest takeaway came from Albanese, who reminded the audience that “physics wins over software.” In Physical AI, that is the reality every enterprise must design around.

The organizations that succeed will be the ones that treat edge intelligence not as a plug-in feature, but as an engineering discipline, one built on repeatable platforms, clear safety boundaries and a deep understanding of how digital intelligence behaves in the real world.

FAQs

What is physical AI?
Physical AI refers to AI embedded in real-world machines and systems, such as vehicles, appliances, robots and industrial equipment, so they can sense, decide and act in physical environments.

Why is physical AI harder than cloud AI?
Because mistakes at the physical edge can have immediate real-world consequences, especially in environments involving safety, regulation or critical operations.

What is the biggest barrier to scaling edge AI?
The biggest barrier is often not the model itself, but integration across data pipelines, legacy systems, infrastructure and operational workflows.

Why is trust so important in physical AI?
Users are relying on systems that can directly affect safety, mobility, health or daily life, so confidence in reliability and behavior is essential.

How should enterprises approach physical AI deployment?
They should focus on repeatable architectures, strong data foundations, simulation-first development, clear safety guardrails and full-stack engineering capability.
