Embracing AI in International Aid

How artificial intelligence is revolutionizing international development and how we must respond

The cacophony of all things AI in 2023 – in which the technology dominated news cycles, investment portfolios and social discourse – continues to reverberate in 2024, perhaps at an even more frenetic pace.

As we absorb the wave of attention, research and opinion on whether AI will spark utopia or dystopia, a growing part of the world is focusing on its possible, profound opportunities.

Today, AI tools are reducing barriers to education, improving flood and earthquake prediction, optimizing energy grids and streamlining government service delivery – all of which can contribute to achieving the UN’s Sustainable Development Goals. And developing nations are cautiously optimistic about AI’s potential to drive social and economic progress.

Alongside recent U.S. and EU actions, other countries are actively adopting AI strategies or policies. Brazil’s AI Strategy, for example, espouses responsible AI coupled with investments to spur innovation – and Brazil is also drafting its first-ever AI legislation. Similar efforts are underway in Morocco, Mexico, Indonesia, Rwanda and Pakistan.

A similar phenomenon is unfolding in the field of international development, albeit at a more tentative pace. For example, practitioners are exploring the use of AI tools to address acute and systemic challenges – documenting war crimes, improving writing skills and literacy, managing climate-related knowledge and assisting with grant writing.

Yet the international development community, by and large, has not taken the necessary steps to holistically understand AI’s profound impact, to embrace developing countries’ optimism and desire for AI-enabled local economies, or to make AI innovation a strategic priority commensurate with its rise. Nor has it fully grasped the specific hurdles facing the Global South, such as AI growth lagging or stalling due to systemic barriers, or the potential harms AI poses for certain populations.

Certainly, threats to privacy and cybersecurity, the rise of readily available and inexpensive digital surveillance and the unchecked challenge to information integrity spurred by AI are true causes for concern. Instead of being stymied by these threats, development practitioners must prioritize addressing AI risk as a means to unlock its full potential.

AI is going to have a global impact whether or not a particular industry – including the field of international development – prioritizes it. What remains unknown, though, is to what end. Will AI’s trajectory be left solely to the discretion of those who build, control and monetize these systems? If so, we risk undermining development gains by amplifying global inequality, concentrating power within a small circle of actors and potentially erasing minoritized cultures by reinforcing ‘digitally dominant’ information.

Regulators and lawmakers are actively building guardrails for the future growth of AI. As they do so, the world is watching to see how AI will affect power dynamics and geopolitics, if not the global world order. The development enterprise must seize the moment to shape responsible AI, not just in individual applications but by strengthening entire digital ecosystems. Such a moment necessitates designing and deploying AI models and tools with diverse stakeholder input and in close collaboration with the communities they are intended to serve, including increased representation of the global majority and marginalized communities in international AI norms-setting, standards-setting and policymaking.

The Stakes – AI and Its Implications for International Development

Even six years ago, long before the overnight rise of generative AI, the technology was forecast to be one of the biggest drivers of economic growth in modern times, projected to add an estimated 16% to global GDP between 2018 and 2030.

Lurking behind these profound opportunities are very real threats. AI engines powering social media platforms often maximize “engagement” by promoting misleading, polarizing content. AI deployments in employment, criminal justice and other fields routinely demonstrate that algorithmic bias is difficult to avoid. These biases compound challenges faced by vulnerable populations, who might find themselves denied critical services, unfairly targeted, or simply stuck with technology that doesn’t work well. 

But the biggest dangers may come from active misuse. Nefarious actors have rapidly adopted AI to undermine human rights and democratic norms. AI tools are accelerating digital repression, facilitating surveillance and social control (e.g., facial recognition) and driving the rapid spread of online information manipulation.

With a record number of elections in 2024 — representing roughly 40 percent of the global population — we are witnessing how AI can threaten election integrity and undermine public confidence in democratic institutions. From AI-generated voice or video content that distorts candidates’ or election authorities’ statements to false AI-generated images that may dissuade voting, undermine trust in election results, or incite violence, the possibilities for misuse appear limitless.

The rosy predictions about AI’s impact on GDP also suggested that the growth would be unequal – a mere 5-15 percent boost for developing countries versus 20-25 percent for developed ones. Adding to this inequality, generative AI’s rapid global reach, spurred by large language models, is being fueled by gargantuan amounts of lopsided data. These models absorb data at internet scale but represent the perspectives of a global minority. When these ‘foundation’ AI models are incorporated into a variety of downstream applications, their biases, blind spots and errors can amplify faster — and more pervasively — than previous generations of AI models. 

Alongside this rise, international development practitioners have tested AI to extend the impact of development and humanitarian assistance. We have learned a lot from this experimentation – that AI can increase farmers’ resilience to climate change, help predict civil unrest, or identify which patients might discontinue medical treatment. AI-enabled systems have applications across sectors, from assisting visually impaired students in Nigeria to deploying AI-enabled drones that detect illegal mining operations to transforming the public sector and service delivery. AI can help document atrocities during times of conflict, direct targeted humanitarian response through better disaster mapping and forecast human displacement after disasters.

Yet across such use cases, critical questions remain. Will AI, for example, be used to empower people, offering agency and voice to those who rarely have it? Or will it be consolidated into the hands of a powerful few, leaving everyone else at their mercy?

As with previous digital technology interludes, ‘elite capture’ of the AI field is a legitimate and formidable concern – and already well underway. A small group of researchers and enterprises have the resources necessary to build increasingly sophisticated foundation AI models, making the field more centralized. This trend away from locally led, contextually relevant, participatory and inclusive AI can have dire consequences — not only for the usefulness of AI globally but also for those who profit in a future economy where AI generates significant wealth. Creating the conditions for fair play now, where AI companies from developing countries can compete in the burgeoning AI market, is critical to realizing AI’s gains and guarding against monopolistic practices that harm consumers.

Even in these early days, we can see how this centralized model optimizes for better services in higher-resourced languages like Italian or Korean, spoken by fewer people, over lower-resourced languages like Nigerian Pidgin, Hausa, or Brazilian Portuguese, spoken by large numbers of people. And this doesn’t account for varying dialects, slang and regional differences within each language. Given market realities, companies will likely optimize for “bigger” languages, like Hindi and Indonesian, but not for “smaller” languages like Quechua or Maori. Ultimately, this could erode thousands of “small” languages – and hence, cultures – from around the world that don’t present a market opportunity for generative AI. While generative AI companies are continually adding languages and fine-tuning their models, development actors should partner with and motivate enterprises to accelerate efforts to ensure this does not happen.
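
One way to make this disparity concrete is tokenizer “fertility” – the number of tokens a model needs to represent the same sentence in different languages. Languages underrepresented in training data tend to fragment into more tokens, which means higher per-request costs and often lower quality. The sketch below is a minimal illustration of this effect, not a rigorous study: it assumes OpenAI’s open-source tiktoken library is installed, and the Swahili sentence is an illustrative stand-in rather than a vetted parallel-corpus translation.

    # A minimal sketch of tokenizer "fertility" across languages.
    # Assumes: pip install tiktoken (OpenAI's open-source BPE tokenizer).
    import tiktoken

    # cl100k_base is the encoding used by several recent OpenAI models.
    enc = tiktoken.get_encoding("cl100k_base")

    # Illustrative parallel sentences; a rigorous comparison would use a
    # vetted parallel corpus such as FLORES-200.
    samples = {
        "English": "Where is the nearest health clinic?",
        "Swahili": "Kliniki ya afya iliyo karibu zaidi iko wapi?",
    }

    for language, sentence in samples.items():
        tokens = enc.encode(sentence)
        # Tokens per character is a rough proxy for how efficiently the
        # tokenizer represents a given language.
        print(f"{language}: {len(tokens)} tokens, "
              f"{len(tokens) / len(sentence):.2f} tokens per character")

Fertility is only a proxy, but it makes the market asymmetry tangible: on today’s commercial models, expressing the same request in a lower-resourced language can simply cost more per sentence.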

If we take this trend forward, it’s reasonable to imagine that these small omissions are imperceptibly introduced into innumerable AI applications over time, leading to big problems like the harmful reinforcement of prejudiced stereotypes. Some of the extractive data collection and labor practices that AI is increasingly built on make this trend even more troublesome. The people doing the manual work of labeling data that feed large models do not influence how models are used and see very little of the profits. This shortsighted approach benefits few in the long run and ends up eroding trust in otherwise helpful AI systems. Our defense cannot be one of ignorance; we have been duly warned and must respond with urgency.

Whose Voice Matters?

Digital technology has become a key part of U.S. foreign policy and, unsurprisingly, AI has gained increased attention in this dialogue. AI is generation-defining not only because of its staggering economic potential, but also because of its profound social, political, human rights and national security implications. What this means for underserved communities — a primary focus of the development industry — cannot be ignored.

The datasets that AI is built on, the research driving AI progress and the teams that build cutting-edge AI tools do not yet represent large parts of the world. Development practitioners and voices from the Global South, many of whom arguably face the brunt of these harms, are rarely part of the conversation around AI development, deployment and governance. Designed in such a manner, AI can replicate and even introduce inequalities at scale in ways that can seem opaque and inevitable. This highlights the need to increase diversity in AI talent pools so that women, ethnic and religious minorities, rural populations and other less-represented groups globally are reflected in AI development and leadership teams.

Fortunately, there are promising enterprises underway that are turning the tide. In Africa, Google built an AI Research Center in Ghana and Carnegie Mellon University has an AI-inspired campus in Rwanda. The World Economic Forum launched its AI for Agricultural Innovation Initiative in India and Colombia. Google and Microsoft have AI research labs in Bangalore, India, as does IBM in São Paulo and Rio de Janeiro, Brazil. These efforts, while led by prominent Western institutions, offer opportunities to address AI’s blind spots, include more voices in shaping AI’s growth and build AI tools that meet local needs.

Prioritizing wider representation is not window dressing. Experts from developing countries have lessons to offer for building responsible AI ecosystems. For example, there is an urgent need to balance data inclusivity with data sovereignty – if we prioritize inclusive datasets without accounting for the interests of data owners and how they might want their data collected and used, we could replicate the current data extractive model and actively build untrustworthy AI. Impacted populations, particularly those historically marginalized, should have a meaningful say in how and where their data are collected and used. Some groups are leading this work, but it does not represent the norm.

It is now evident that AI risks cannot be divorced from AI benefits. As “AI for Good” efforts and funding by philanthropies, multilateral groups and the private sector proliferate, they must incorporate the development and institutionalization of safeguards. Highlighting and amplifying these risks, rejecting normative discussions on AI and centering the views and perspectives of developing countries are essential to avoiding a lopsided global AI landscape and appropriately advancing “AI for Good.” It is fully within our control to address this – and to address it now.

Embracing the Moment

International development is at a crossroads. Which path will the industry take? Will it (a) lean into the full utility of AI while proactively mitigating the profound risks, (b) abdicate AI’s evolution by leaving it to others, or (c) choose to ignore the moment and conclude that it is irrelevant to our work?

We posit that there is a right option, and that is to lean in. If the promise and risks presented by AI seem disconnected from one’s particular field of work, we urge reconsideration. AI is going to become a central part of what we do whether or not we engage.

The development community cannot sit idly by and accept the status quo. We need an urgent shift from a ‘wait-and-see’ approach to a ‘shape-and-act’ posture. We must build on the current momentum – private sector voluntary commitments, philanthropic commitments and principled funding efforts – to guide AI’s trajectory. Engaging is the only way to assert our experience and work with AI developers so they can, in turn, be intentional about aligning profitability with global safeguards.

How Can We Lean In?

First, this will require changing incentive structures within development organizations to facilitate the shift from a project-based to a progress-based approach to AI. Let us bring a new mindset to embrace this new age – rather than being solutionist and trying to match a tool to a discrete problem, we must look at the bigger picture.

Sectoral programs and policies should be designed or revamped with a holistic AI lens, prioritizing the sustainability of AI projects and ecosystems beyond the funding period, by evaluating the following:

● Responsible, rights-respecting AI governance to guide AI development, design and use. We must ensure AI systems strengthen democratic values and human rights, not undermine them.

● Datasets that represent local contexts, both geographically and sectorally. If relevant, abundant and quality datasets are not available, our priority must be to close this gap as a public good – guided by data sovereignty, cybersecurity and privacy.

● Computing infrastructure in countries where we work. If affordable, accessible and quality compute, cloud and data services are not available, our priority must be to unlock these for local data scientists, researchers and AI companies.

● Talent and R&D pipelines in countries where we work. If AI talent and locally relevant skills across data scientists, researchers, civil society and governments are lacking, our priority must be to create long-term investments, grounded in community diversity, that fill these gaps and foster economic opportunities for talent retention in-country.

● Inclusion and innovation landscapes for AI in our areas of work. If the environment around AI does not foster greater inclusion while pushing for cutting-edge innovation, donor organizations must fill this gap, such as by funding inclusive AI labs or incubators.

Second, donors must develop a strategic learning agenda to take stock of what has and has not worked in AI investments, identifying specific risks and opportunities. Even if AI is not a good approach to a problem at present, we need to ask how AI will impact development in the near future and how we can address impacts within program and policy goals. Doing this effectively hinges on leading with AI risk prevention and mitigation across the organizational portfolio to maximize AI’s potential.

Third, this holistic approach requires workforce capacity within our organizations. Deep AI expertise is no longer a “nice to have.” To be effective in today’s digital age, we must understand and respond to the impact of AI. That means bringing on more data scientists and technologists to make sense of the data we collect and make it actionable for those we serve, and adding a policy lens to our program design that interrogates AI’s effect on disparate communities. It might also involve encouraging at least a partial reset, after nearly 80 years of international development as a discipline, while development actors internalize this paradigm shift.

Fourth, international development professionals, industry leaders and civil society — together — need to build better safeguards for AI. For example, consider Human Rights Impact Assessments in AI design, development, deployment and use. This involves building diverse coalitions to ensure voices from public institutions, public interest technologists and civil society are represented in AI product development and global AI dialogues — ultimately designing systems to support social inclusion, human rights, democratic values and economic opportunities in the face of potential job displacements.

At the U.S. Agency for International Development (USAID), we have begun to take some of these steps. USAID’s AI Action Plan articulates key actions – embracing AI responsibly and holistically, strengthening partner country digital ecosystems and creating partnerships to advance a global responsible AI agenda.

USAID is also investing in projects that not only build AI tools for specific problems but also tackle AI’s underlying challenges. Through the Equitable AI Challenge, we funded projects to identify and address gender inequity resulting from AI tools in hiring, credit scoring and education. Partnering with the Mozilla Foundation, we launched Responsible Computing Challenges in South Asia and Africa to shift the training of future AI technologists to better account for the social, ethical and cultural contexts in which their products will work. We are developing a research and learning agenda across sectors – in agriculture, we are exploring alternative models for data governance, the role AI can play in inclusively advancing agri-food systems and assessments of the AI landscape in certain countries. In humanitarian assistance, we are exploring the use of AI for conducting needs analysis. And in media literacy, we are using AI to detect image and video alterations on social media.

Other donors – such as the Gates Foundation, GIZ, IDRC, Sida, the IDB and Omidyar Network – have independent AI funding streams and initiatives. Individually, these efforts may not move the needle globally, which is why we need to work together. Recently, USAID joined a group of donors to create a coordinated push for fundamental, ecosystem-level investments and shifts in AI, starting with Africa. We encourage other donors to join these efforts.

AI will impact lives and livelihoods. To what end is ours to shape.

If we, as development practitioners, are not doing everything possible to anticipate how AI will affect our programs and the communities we serve, we are falling short. We are undermining our own industry and goals. We are doing a disservice to the communities for whom we work. And we are ceding ground to other, perhaps more malign, actors to step into the void.

Our mission is to improve people’s lives and support communities around the world to find success on their own terms. If the current operating model continues — with AI development and deployment controlled by the few and foisted upon the masses — it’s worth asking whether we are losing sight of our mission at a critical time, and whether we are shaping the future to ensure that AI — one of the most promising technologies of all time — can be used responsibly and fully to support democratic values, human rights and economic opportunities for generations to come.

About the Authors

Shachee Doshi

Emerging technologies team lead in USAID's technology division, USAID

Dr. Shachee Doshi is the emerging technologies team lead in USAID's technology division. She focuses on understanding the sociotechnical implications of emerging technology trends in the Global South to guide the responsible use of emerging technologies like AI in international development. Shachee’s role includes shaping policy, research and strategic applications of these technologies, ensuring that their deployment aligns with human rights and considers social impacts. She has a strong academic background in neuroscience, having completed her doctoral training at the University of Pennsylvania, and now integrates scientific research with global development needs.

Christopher Burns

Chief digital development officer (CDDO) for USAID, USAID

Christopher Burns is the chief digital development officer (CDDO) for USAID and the director of the technology division within the Innovation, Technology and Research Hub.

In the CDDO role, he coordinates and tracks programmatic digital development investments across the Agency; represents USAID's programmatic digital technology work across the interagency and with external stakeholders and partners; and guides the adaptation of the Agency's programs as the digital landscape evolves. In the role of Director for Technology, Christopher leads technical teams focused on Digital Finance, Development Informatics, Digital Inclusion, Emerging Technology (such as Artificial Intelligence), Cybersecurity, and advanced data and geospatial analysis – and the role these play in driving an inclusive digital economy. Before USAID, Christopher spent nearly ten years with the Peace Corps.
