AI for inclusive growth: Leadership lessons from Davos

What are the practical ways to ensure AI expands opportunity, strengthens resilience and supports a more inclusive, equitable future?
Nicholas Ismail
Global Head of Brand Journalism, HCLTech
7 min read

During a private roundtable at HCLTech’s pavilion at the World Economic Forum Annual Meeting in Davos, C-suite and senior leaders explored how AI can drive broader inclusion across organizations and societies.

Richard Lui, Advisor at Inclusion@work and Anchor at MSNBC, moderated the discussion. The panel featured:

  • David Kenny, Executive Chairman, Nielsen
  • Hiro Iwamoto, World’s first blind sailor to complete a non-stop trans-Pacific crossing
  • Sharon Hague, UK Chief Executive Officer, Pearson
  • Ana Kreacic, Chief Knowledge Officer, Oliver Wyman
  • Abhi Shah, Chief Executive Officer, Board Member, Philanthropist
  • Srimathi Shivashankar, Corporate Vice President and Business Head, EdTech Services, HCLTech

Moving beyond gender diversity, the conversation examined the inclusion of skills, demographics, geographies and the emerging care economy.

Participants discussed how AI design and deployment can create equitable opportunities, enhance workforce resilience and unlock new sources of innovation and value. The session focused on actionable strategies that business leaders can implement to foster inclusive growth while shaping a sustainable, human-centric future.

Below are seven of the most important lessons from the discussion.

1. Inclusion is expanding beyond identity into capability and access

The group challenged the way inclusion is typically framed. Traditional definitions often prioritize demographic representation, but AI is reshaping what advantage looks like and who gets it. A more complete definition of inclusion now includes those who have access to AI tools, know how to use them and benefit from the productivity, learning and economic mobility those tools can enable.

This reframing matters because AI can widen gaps as quickly as it closes them. If AI literacy, coaching and infrastructure remain concentrated in already-advantaged communities, the technology risks creating a new divide: not simply between industries or companies, but between people who can effectively use AI and those who cannot. Participants emphasized that last-mile barriers like connectivity, device access, language and training capacity will determine whether AI becomes a real inclusion accelerator.

Leaders also noted that inclusion must account for varied starting points. Different geographies face different baselines in education systems, labor markets and digital infrastructure. A uniform strategy will not work. Inclusive growth requires segmenting populations by needs and constraints, then designing AI-enabled pathways that improve access to learning, employability and participation at scale.

2. Data representation and model assumptions are where inequity gets encoded

A recurring theme was that AI systems tend to scale whatever is embedded in their data and assumptions, including historical bias and gaps in representation. Participants highlighted that many data sources reflect incomplete participation, inconsistent measurement and policy-driven blind spots. When certain populations are undercounted or excluded, models trained on those datasets can misrepresent reality and amplify inequity.

The group discussed how training and evaluation often fail to reflect the full diversity of end users across race, gender, disability, language, region and socioeconomic background. Even within a single geography, there can be meaningful differences that are not captured by default categories. The risk is not only inaccurate outputs but downstream decisions that affect hiring, lending, health recommendations, education support and resource allocation.

Several leaders stressed that Responsible AI must include ongoing scrutiny of datasets, weighting choices, evaluation methods and drift over time. As models become widely adopted, fewer users may verify outputs, and the perceived authority of AI can reduce healthy skepticism. The practical takeaway was to build organizational best practices around model integration: understanding what data is included, what is missing, what assumptions were made and where blind spots could harm real people.
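To make that scrutiny concrete, here is a minimal Python sketch of one such practice: comparing a dataset’s demographic composition against an external benchmark such as census shares. The column names, benchmark figures and the 80% flagging threshold are illustrative assumptions for this example, not anything prescribed at the roundtable.

```python
# Minimal sketch: auditing a dataset for representation gaps.
# Column names, benchmark shares and the 0.8 threshold are hypothetical.
import pandas as pd

def representation_gaps(df: pd.DataFrame, column: str,
                        benchmark: dict) -> pd.DataFrame:
    """Compare each group's share of the data against an external
    benchmark (e.g., census shares) and flag under-represented groups."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in benchmark.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "dataset_share": round(share, 3),
            "benchmark_share": expected,
            "under_represented": share < 0.8 * expected,
        })
    return pd.DataFrame(rows)

# Example with made-up data: rural users are undercounted.
df = pd.DataFrame({"region": ["urban"] * 80 + ["rural"] * 20})
print(representation_gaps(df, "region", {"urban": 0.55, "rural": 0.45}))
```

The same pattern extends to language, disability status or any other axis an organization decides matters; the point is that representation gets measured rather than assumed.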

3. Accessibility is one of AI’s clearest inclusion wins, and the upside is bigger than accommodation

The roundtable surfaced compelling evidence that AI is already improving day-to-day independence for people with disabilities, particularly through multimodal interfaces like voice, vision and real-time assistance. Participants discussed how consumer AI tools can help navigate digital barriers, interpret visual information, troubleshoot technology when accessibility tools fail and reduce dependence on others for routine tasks.

Crucially, the conversation moved beyond accommodation to capability. When people are equipped with the right tools, many can perform at higher levels in roles that leverage their strengths. Participants shared examples where disability-linked differences, including heightened auditory perception or certain neurodiverse patterns, can become performance advantages when environments are designed to reduce friction and stress.

The implication for business leaders is twofold. First, AI-enabled accessibility should be treated as a strategic productivity lever, not just a compliance obligation or a standalone initiative. Second, inclusive design needs to be built in from the start rather than bolted on later. That means involving diverse users in product testing, prioritizing multimodal interaction and investing in tools that help people operate independently across work and life.

4. Education and skills drive inclusion, but “coaching capacity” is the bottleneck

Participants repeatedly returned to education as the most scalable path to inclusive growth, especially for communities facing poverty, weak infrastructure or limited institutional capacity. AI has potential to personalize learning, provide feedback loops that humans cannot deliver at scale and expand access to instruction in contexts where teacher-to-student ratios are unsustainable.

But the group also flagged a structural constraint: coaching capacity. Many systems lack enough trained educators, and that gap is even larger for special education and inclusive learning support. AI may help bridge parts of this shortage, but only if it is paired with thoughtful implementation, training for educators and safeguards against over-reliance.

Several leaders emphasized that inclusive education is not only a content problem, but an infrastructure and health problem. Attendance, sanitation, clean water, safety and connectivity all shape whether children can participate and benefit. AI strategies that ignore these fundamentals will stall. When those basics are addressed alongside digital enablement, the impact can be dramatic: improved attendance, improved learning outcomes and clearer pathways to higher education and economic mobility.

5. AI is changing how people learn, work and are managed, and that shift is already cultural

Another thread was that AI is altering workforce expectations in subtle but meaningful ways. Participants discussed how AI-based coaching can provide frequent, structured feedback that managers often cannot consistently deliver. That is particularly relevant for communication, confidence and “speaking up,” where cultural norms, language barriers and hierarchical environments can limit participation and advancement.

Leaders noted that the appeal of AI support is not that people want machines to replace humans, but that AI can provide low-friction access, repeatability and psychological safety. The challenge is ensuring that these tools improve capability without diminishing critical thinking or creating dependency. As AI systems are increasingly embedded in classrooms and workplaces, the risk is that users begin to treat outputs as authoritative rather than provisional.

Participants emphasized the need to pair adoption with education in reasoning: how to challenge outputs, how to cross-check and how to ask better questions. In other words, inclusive growth requires not only distributing tools but teaching people how to use them thoughtfully. That becomes a leadership responsibility as much as a technology initiative.

6. Safety is not one thing, and inclusive growth depends on clear guardrails

The conversation on guardrails focused on the reality that “safe AI” varies widely by context. What is acceptable in one society, industry or household may be unacceptable in another. Participants highlighted risks around bias and stereotyping in product design, uneven regulation across countries, and the particular vulnerability of children and young people to harmful digital experiences.

A major concern was that organizations may invest heavily in building models, but underinvest in testing, verification and monitoring. Participants argued that safety needs both technical controls and human discipline: continuous evaluation of training data, red teaming for demographic differences and strong governance around how outputs are used in decision-making. They also discussed the risk of declining verification behavior as AI becomes more normalized, which can quietly increase harm over time.
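As an illustration of what red teaming for demographic differences can look like at its simplest, the sketch below compares a model’s selection rates across groups. The synthetic data, group labels and the four-fifths (0.8) threshold, a common heuristic from US employment practice, are assumptions for the example rather than anything specified in the session.

```python
# Minimal sketch: checking model decisions for demographic disparities.
# Synthetic data; the 0.8 ("four-fifths") threshold is a common heuristic.
import pandas as pd

def selection_rate_ratios(outcomes: pd.DataFrame, group_col: str,
                          decision_col: str) -> pd.Series:
    """Each group's positive-decision rate divided by the highest
    group's rate. Ratios below ~0.8 warrant deeper review."""
    rates = outcomes.groupby(group_col)[decision_col].mean()
    return rates / rates.max()

# Example: 1 = approved, 0 = declined, for two groups of 100 applicants.
df = pd.DataFrame({
    "group": ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 60 + [0] * 40 + [1] * 42 + [0] * 58,
})
print(selection_rate_ratios(df, "group", "approved"))
# Group B lands at 0.70, below the 0.8 heuristic, so it gets flagged.
```

A failing ratio is a signal to investigate, not a verdict; the discipline participants described is running checks like this continuously, not once at launch.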

For leaders, the actionable takeaway was to treat safety and inclusion as operational requirements: clear policies, measurable standards, ongoing audits and explicit accountability. Without that structure, AI can easily reproduce stereotypes, amplify misinformation and widen inequities, even when intentions are positive.

7. Boards and leadership teams can make inclusive growth real through governance, not slogans

Finally, participants discussed what it takes to operationalize inclusive growth at the highest level of governance. The consensus was that inclusion must be designed into strategy, resourcing, measurement and risk management. It can’t be left as a downstream program.

Leaders outlined a board-level approach that starts with how the core business model will change, how the digital strategy will support that shift, who will lead cross-functionally and whether the organization has the right people and culture. From there, it extends into build-versus-buy decisions that include nontraditional partners, capital allocation decisions that balance long-term value with short-term pressure and dashboards that define success in measurable terms.

A key point was that inclusive growth should not be framed as a trade-off against profitable growth. When organizations design AI for broader participation and capability, they can unlock larger addressable markets, stronger talent pipelines and more durable trust. Governance is what turns that potential into execution.

Unlocking new opportunities through inclusive growth

Inclusive growth will not happen by default. The discussion underscored that AI can widen access and capability, but only when leaders treat inclusion, safety and human outcomes as design requirements rather than afterthoughts. The practical path forward combines disciplined governance, representative data and testing, investment in education and coaching capacity, and product development that reflects the communities it serves.

For organizations, the opportunity is not only to reduce inequity, but to unlock new talent, new markets and stronger resilience by making AI work for more people, in more places, in more ways.
