Robert Opp on the UNDP's Global Mission to Build Inclusive A.I.

Robert Opp is calling for stronger digital foundations and diverse datasets to ensure A.I. serves everyone, not just the Global North. Courtesy UNDP

Robert Opp, featured on this year’s A.I. Power Index, is one of the clearest voices urging the world to slow down—not in innovation, but in assumption. As chief digital officer of the United Nations Development Programme, Opp is guiding A.I. and data strategy across more than 170 countries. That global perspective has made him deeply skeptical of the idea that A.I. delivers benefits equally, everywhere. “In reality, the benefits have been distributed unequally across and within countries,” he tells Observer.

When data sets and solutions fail to capture local languages or cultural contexts, Opp warns, A.I. doesn't close gaps; it widens them. His mandate at UNDP is to ensure the opposite: that digital infrastructure, inclusive governance and strong foundations come first, before A.I. solutions are layered on top. That approach is rooted in experience. At the World Food Programme, Opp helped launch ShareTheMeal, a mobile app that has raised more than $40 million to combat hunger. The lesson, that digital platforms succeed when they reduce friction and build trust, now informs how he thinks about embedding A.I. in humanitarian work. Under his leadership, UNDP has piloted A.I. initiatives in agriculture, health and education, demonstrating the technology's potential to directly improve lives when deployed responsibly.

What’s one assumption about A.I. that you think is dead wrong?

A common assumption is that A.I. will automatically deliver benefits everywhere in the same way. In reality, the benefits have been distributed unequally across and within countries. It is highly dependent on context, for instance, on whether people have access to relevant data, affordable compute and the necessary skills. If the data sets and A.I. solutions don’t reflect local realities or languages, A.I. can actually amplify exclusion. What’s missing in the conversation is how to localize A.I. so it addresses local problems.

If you had to pick one moment in the last year when you thought “Oh shit, this changes everything” about A.I., what was it?

If I had to pick one moment in the last year, it would be the MIT report from August showing that 95 percent of companies are seeing “zero ROI” from their generative A.I. investments. That felt like a real turning point, a signal that we might finally be moving past the hype cycle and into a more sober conversation about what A.I. is actually good for. To me, it raised fundamental questions we should all be asking: Why are we building this? Do we know if it works well? Do we know who it works well for? And most importantly, how can we ensure that its benefits contribute to shared prosperity?

It also underscored the urgent need for more rigorous evaluation of A.I. tools—especially in the public sector. Without strong evidence of impact, we risk investing time, money and trust into solutions that don’t deliver. But with the right evaluations in place, we can identify which investments are truly transformative and which are not, ensuring that A.I. is a tool for meaningful progress rather than just another wave of tech hype.

What’s something about A.I. development that keeps you up at night that most people aren’t talking about?

Much of the data used to train A.I. is sourced from the Global North, predominantly in English, and it doesn’t capture local realities, languages or cultural context. Without diverse and inclusive datasets, A.I. will continue to misrepresent and even marginalize entire populations. This issue doesn’t make headlines as much as job displacement or safety risks, but it is fundamental to whether A.I. can actually serve everyone.

ShareTheMeal has raised over $40 million through micro-donations. What did that teach you about how people engage with global problems through digital platforms?

It showed that digital platforms can radically reduce the friction of engagement. When people can act instantly from their phones, they are more willing to participate, even in small ways. And those small actions, aggregated at scale, can generate real impact. But more than the technology, it’s about trust: people engage when the purpose is clear, the impact is transparent and the experience feels human.

You’re leading digital transformation across 170 countries with vastly different tech infrastructure. How do you build A.I. solutions that work in both Silicon Valley and rural Bangladesh?

The starting point is not the technology itself, but the foundations: digital infrastructure, enabling policies and capacity building for people. We focus on helping countries build digital public infrastructure, the equivalent of roads and bridges, like digital ID, payments and data exchanges. Once these are in place, A.I. solutions can be layered on top in ways that are safe, inclusive and relevant to local needs. That way, whether in Silicon Valley or rural Bangladesh, the solution works because the foundations are solid. 

The UN has been pushing “digital public goods” as alternatives to Big Tech platforms. What’s one digital public good that’s actually working at scale, and why?

It’s not about pushing alternatives to tech companies; it’s about opening more choices to countries that are trying to build their digital infrastructures. One digital public good that has been adopted at scale is DHIS2, an open-source, web-based software platform most commonly used as a health management information system (HMIS) but adaptable for other sectors. Originally developed by the HISP Centre at the University of Oslo, it has grown through collaboration with a global network of local HISP groups over the past three decades. DHIS2 is now used as the national HMIS in more than 80 low- and middle-income countries, covering about 3.2 billion people, and is also applied in areas such as logistics and education due to its flexible, customizable design. Its global community-based development model combines international standards with local adaptation, making it both widely implemented and locally owned.

You’ve written about South Africa prioritizing A.I. equity over A.I. advancement. Should developing countries leapfrog the “move fast and break things” phase entirely?

Developing countries don't need to repeat the mistakes of others. They have an opportunity to prioritize equity, inclusion and rights from the start, rather than retrofitting protections later. That doesn't mean slowing down innovation. It means shaping it with guardrails so that A.I. accelerates sustainable development without leaving populations behind. In other words, putting people first.

UNDP works on everything from climate change to poverty reduction. Where is A.I. making the biggest difference in UN programs?

We are seeing promising applications in agriculture, where A.I. provides farmers with real-time feedback on crops. In the health sector, language models are improving access to information, such as on maternal health. And A.I. can transform education by making learning more accessible, personalized and effective, benefiting both educators and students. These are areas where A.I. directly improves lives, but only if countries have the infrastructure, data and governance to make it work.

How do you balance innovation with protecting vulnerable populations when deploying A.I. in countries with limited data privacy laws?

We take a people-first approach. That means supporting countries in building robust data governance frameworks, privacy protections and trust mechanisms alongside deploying new technologies. One example is our AI Trust and Safety Re-imagination Programme, which moves beyond reactive risk management toward proactive, inclusive and context-sensitive approaches to A.I. governance. Drawing on insights from the 2025 Human Development Report, the programme strengthens local enabling environments while complementing global research and policy efforts. By engaging innovators across the public and private sectors, it re-imagines trust and safety frameworks that prioritize equity, anticipate and prevent harm, and ensure A.I. development benefits communities fairly.
