44% of AIs contain gender bias

Marie Donzel

For EVE magazine

March 20, 2026

40% of jobs are now exposed to artificial intelligence. As its use becomes increasingly widespread, it is important to consider the supposed neutrality of AI. Is it truly resistant to bias? A recent study from the Berkeley Haas Center for Equity, Gender and Leadership analyzed 133 artificial intelligence systems in different business sectors. The result? 44% of the sample exhibited sexist stereotypes. How is that possible? Let’s take a closer look.

 

Where do these biases come from?

 

Zinnya del Villar, an AI expert and Head of UN Women, explains that AI is primarily data. For example, to develop machine learning models or generative AI tools, the algorithm is given data on which it can train. However, the data on which AIs are trained is mostly data created by humans. In that data, there are some major blind spots regarding certain segments of society, namely women and people of color (according to the Berkeley Haas Center study, 25% of the AIs studied contained both sexist and racist stereotypes).

 

Furthermore, the lack of diversity in tech jobs, and particularly in AI, also plays a role. According to one study, 88% of AI algorithms are developed by men, and women represent only 30% of people working in the field of AI. AI systems are currently developed and maintained mostly by white, university-educated men. Human influence can hardly be eliminated from the data, since in most cases it is humans (and therefore, mostly, men) who decide how the data should be collected, and then how it will be categorized and moderated.

 

When AI learns to make sense of human language, it learns to account for the meaning of words but also to observe which words are often seen together. Unfortunately, this learning method provides fertile ground for the perpetuation of sexist biases and essentialization processes. In other words, if you ask an AI to generate a doctor and a nurse, it will tend to generate a male doctor and a female nurse.
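The co-occurrence mechanism described above can be illustrated with a toy sketch (a hypothetical mini-corpus invented for this example; real systems train on billions of sentences, but the statistical principle is the same): if one occupation appears more often alongside one pronoun, a model that learns from co-occurrence will absorb that skew.

```python
# Hypothetical toy corpus standing in for biased training text.
# The skew is deliberate: "doctor" co-occurs mostly with "he",
# "nurse" mostly with "she" -- mirroring the bias in real-world data.
corpus = [
    "he is a doctor", "the doctor said he would help",
    "he became a doctor", "she is a doctor",
    "she is a nurse", "the nurse said she would help",
    "she became a nurse", "he is a nurse",
]

def cooccurrence(word: str, pronoun: str) -> int:
    """Count sentences where `word` and `pronoun` appear together."""
    return sum(1 for s in corpus
               if word in s.split() and pronoun in s.split())

# A model learning from these statistics would infer the skewed
# associations below, then reproduce them when generating text or images.
for occupation in ("doctor", "nurse"):
    he = cooccurrence(occupation, "he")
    she = cooccurrence(occupation, "she")
    print(f"{occupation}: he={he}, she={she}")
```

Running this prints `doctor: he=3, she=1` and `nurse: he=1, she=3`: nothing in the counting code is sexist, yet the learned association faithfully reproduces the imbalance of the input data.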

 

Finally, Hélène Molinier, Senior Advisor on Digital Cooperation at UN Women, points out that there is currently no system to structure or regulate the AI market. Nothing prevents people from creating and using AI that reproduces sexist and/or racist stereotypes or that does not respect confidentiality and security standards. As for addressing the new forms of social vulnerability generated by AI, efforts are still in their infancy.

 

Biased AI: consequences for gender equality

 

Biased algorithms have a big impact. According to Genevieve Smith and Ishita Rustagi, the authors of the Berkeley Haas Center study, “Artificial intelligence makes it possible to automate judgments that were previously made by individuals or teams of people.” AI systems mirror society: they integrate and automate the biases that prevail within it, and therefore tend to amplify them.

 

This trend can be observed in several areas, particularly in the health sector. Since AI systems are generally trained on male symptom data, they tend to generate incorrect diagnoses and treatments for women. Zinnya del Villar also highlights that voice assistants, which often use female voices, tend to reinforce the association between women and care and service work. AI can also limit women’s professional opportunities in areas such as decision-making, loan approval, recruitment, and judicial decisions. In 2018, for example, Amazon deactivated an AI-powered recruitment tool that favored male CVs. If the data used to feed the AI system is biased, it will tend to associate men and women with certain roles and not others (again, the example of the doctor and the nurse).

 

And it doesn’t stop there. AI could also be a hindrance to women’s employability. It could lead to a significant restructuring of jobs with high automation potential, particularly administrative and support functions. While AI tools can be productivity springboards and boost the careers of those who use them, they also tend to widen the gap and marginalize those who don’t. Women use AI tools approximately 20% less than men, a gap driven not only by limited time and access to training, but also by a tendency toward self-censorship, rooted in the feeling that they don’t belong in tech spaces.

 

How can AI be a tool for professional equality?

 

AI is malleable. It has the potential to be a useful lever for reducing inequalities, by offering women access to new resources and opportunities through information and training. But to do this, gender equality must be considered from the initial design and construction of AI systems.

 

That means actively selecting data that integrate the diversity of social backgrounds, cultures, genders, and roles. It also involves addressing the root cause of the problem, and diversifying AI system development teams to make them more inclusive. While 73% of executives believe it is important to have more women in this field, only 33% have a woman leading strategic decisions on the subject. It is also essential to draw on gender expertise during the design phase to internalize awareness of gender bias within the process.

 

Finally, it is crucial to make training available to everyone and ensure women have access to AI tools so that they can fully benefit from the productivity and learning opportunities they offer, and not be victims of increasing marginalization.
