
By Gemma Galdon Clavell and Ana Pirela-Ríos
In Latin America and the Caribbean (LAC), artificial intelligence (AI) is increasingly used in everyday decision-making that affects millions of people: scholarship selection, subsidy allocation, social service alerts, biometric identification, even counseling for victims of violence.
But, as the Regional Human Development Report 2025 warns, AI is taking hold in a region of persistent inequalities, and the data that feeds these systems inevitably reflects the biases embedded in society. When algorithms learn from these realities, gender bias stops being a laboratory failure and becomes a development problem: it can exclude those who appear least in the records, such as poor, indigenous, migrant or rural women, further eroding institutional trust.
Yet the same technology that can deepen inequalities can also protect, inform and open opportunities, especially for traditionally excluded groups. The challenge is to reduce this bias and adopt verifiable controls that prioritize equity, in order to expand rights, improve policy targeting and strengthen more inclusive growth.
A “technical” problem that is already a development issue
One of the main uses of artificial intelligence is to identify patterns in large volumes of data in order to optimize decisions. However, models that “average” across diverse populations can disadvantage underrepresented groups and reproduce historical patterns of discrimination. In social protection programs, for example, several LAC countries have incorporated automated models to classify people and allocate benefits, but scoring systems can perpetuate exclusion if they are trained on data in which women or other groups are not equitably represented.
Gender bias surfaces in concrete decisions, and public security offers an equally illustrative case. The region has rapidly adopted biometric and facial recognition technologies, but studies show that false positives fall disproportionately on women, and on racialized women in particular. These identification errors compromise freedoms, can trigger unjust arrests and amplify inequalities.
At the same time, when hiring algorithms replicate masculinized work histories, or when credit models penalize female trajectories according to the criteria of traditional banking, women's opportunities shrink, productivity is lost and entrepreneurship is limited. The region cannot afford technologies that exclude female talent from already segmented markets.
Investing in representative data and strengthening regulatory frameworks for AI, with equity metrics and accountability mechanisms, are key steps toward using this technology responsibly and inclusively. Artificial intelligence can then become an opportunity not only to make decision-making more efficient, but also to broaden the base of beneficiaries of innovation, accelerate digital adoption and promote labor and financial inclusion.
The symbolic level is also worth reviewing: the default feminization of virtual assistants and chatbots, through their names, voices and avatars, reproduces gender hierarchies. This may be justified in specific services, but as a general rule it reinforces stereotypes about women's role in society. Interface design, increasingly used to improve the delivery of public services, is also an element of public policy.
Female leadership: from “outliers” to designers
The principles of non-discrimination, transparency and human oversight already appear in the strategies and frameworks of several countries in the region. The challenge is to translate them into verifiable controls: documenting the demographic composition of the data; evaluating performance by subgroup (women by age, origin, migratory status or rurality); monitoring results after systems are deployed; and requiring mandatory independent audits of high-impact systems (such as those used in social protection, health, justice and security). With these controls, AI becomes auditable and governable.
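To make "evaluating performance by subgroup" concrete, here is a minimal sketch in Python; the data, column names and group labels are invented for illustration, not drawn from any real system. An audit of this kind disaggregates a model's error rates and treats large gaps between groups as the signal to investigate.

```python
# A minimal sketch of a subgroup performance check.
# All data, column names and group labels are hypothetical.
import pandas as pd

# Hypothetical audit log of an automated eligibility model:
# one row per decision, with the model's output and the true outcome.
records = pd.DataFrame({
    "subgroup":   ["urban_women", "rural_women", "urban_men", "rural_men"] * 3,
    "prediction": [1, 0, 1, 1,  0, 0, 1, 0,  1, 1, 1, 0],  # 1 = benefit granted
    "truth":      [1, 1, 1, 1,  0, 1, 1, 0,  0, 1, 1, 0],  # 1 = actually eligible
})

def error_rates(df: pd.DataFrame) -> pd.Series:
    """Error rates a fairness audit would compare across subgroups."""
    tp = ((df.prediction == 1) & (df.truth == 1)).sum()
    fp = ((df.prediction == 1) & (df.truth == 0)).sum()
    fn = ((df.prediction == 0) & (df.truth == 1)).sum()
    tn = ((df.prediction == 0) & (df.truth == 0)).sum()
    return pd.Series({
        # Eligible people wrongly denied: the exclusion described above.
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else float("nan"),
        # Ineligible people wrongly approved or flagged.
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else float("nan"),
        "decisions": len(df),
    })

# Disaggregated results: a much higher false negative rate for one
# subgroup (here, rural_women) is the red flag an auditor would pursue.
print(records.groupby("subgroup")[["prediction", "truth"]].apply(error_rates))
```

In this invented log, eligible rural women are denied two times out of three while urban men are never denied; a disparity of that kind is invisible in an aggregate accuracy number and is exactly what subgroup evaluation is designed to expose.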
Due to historical exclusions and low visibility in formal data, systems tend to classify women as “outliers”: in statistics, observations that are numerically distant from the rest of the data. From a strictly statistical standpoint, data sets containing outliers can lead to erroneous conclusions, so outliers are generally removed. But that logic does not transfer to more subtle contexts, such as credit applications, job vacancies or social programs, where women's characteristics may differ from men's without that difference being grounds for exclusion from selection processes.
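To illustrate the mechanism with a deliberately simplified example (all figures invented), the sketch below applies a standard z-score rule calibrated on records dominated by one kind of trajectory. A profile that is perfectly valid, just different, is flagged as an anomaly and filtered out before anyone looks at it.

```python
# A minimal sketch of how underrepresentation turns valid profiles into "outliers".
# The figures are hypothetical: say, years of uninterrupted formal employment,
# in historical records dominated by male trajectories.
from statistics import mean, stdev

historical = [9, 10, 10, 11, 10, 9, 11, 10, 10, 10]
mu, sigma = mean(historical), stdev(historical)

# New applicants scored against the historical norm.
applicants = {
    "applicant_A": 10,  # matches the dominant profile
    "applicant_B": 3,   # e.g. a trajectory interrupted by unpaid care work
}

for name, years in applicants.items():
    z = (years - mu) / sigma
    # Classic rule of thumb: |z| > 2 is treated as an anomaly and discarded.
    verdict = "flagged as outlier" if abs(z) > 2 else "passes"
    print(f"{name}: z = {z:+.1f} -> {verdict}")
```

Applicant B sits more than ten standard deviations from the historical mean and would be dropped automatically, even though the gap reflects who was recorded, not who is creditworthy or employable.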
Women in the region, however, are not only users of AI; they are also leaders in the creation of solutions: feminist frameworks for AI development, open tools to detect stereotypes in language models, and initiatives that bring a gender perspective to platform work. Placing women at the center, as designers, auditors, regulators and users, improves the technical quality of systems and accelerates their social acceptance. This is, moreover, innovation policy.
In short, reducing gender bias multiplies returns: more precise and legitimate social policies; security compatible with rights; more inclusive and productive labor and financial markets; and greater trust in institutions capable of governing complex technologies. This translates into human development: more real capabilities—health, education, participation, decent work—and more agency to influence one’s own life and environment.
AI is not neutral, but it can be fair. To achieve this, Latin America and the Caribbean needs to embrace a minimum standard already within reach: representative and documented data, equity metrics by subgroup, independent audits and avenues for redress when harm occurs. Reducing gender bias not only opens opportunities for women; it drives development for the entire region.
This article is based on the findings of the Regional Human Development Report 2025, entitled “Under pressure: Recalibrating the future of development”, prepared by the United Nations Development Programme (UNDP) in Latin America and the Caribbean.
Gemma Galdon Clavell. Founder and CEO of Eticas Consulting, an organization dedicated to identifying, measuring and correcting vulnerabilities, biases and inefficiencies in predictive tools and large language models (LLMs).
Ana Pirela-Ríos. Economic Research Analyst at the United Nations Development Programme (UNDP) for Latin America and the Caribbean.
