AI and Gender Stereotypes: Beauty with a European Bias

Publication date: 21 January 2026

Study Reveals AI Generates Images of Young, White, Thin Women, Reinforcing Stereotypes and a European Aesthetic Ideal
A recent academic study has once again put generative artificial intelligence under scrutiny for its tendency to reproduce a single feminine ideal, one aligned with European aesthetic canons and showing very limited racial and body diversity.

The research, conducted by professors at the European University, analyzed the behavior of four image-generation systems—Copilot, Leonardo, Firefly, and Midjourney—to assess how they visually represent women in advertising contexts.

Algorithmic Bias and White Beauty in AI
Far from breaking with old stereotypes, the results show that AI consolidates a homogeneous pattern: young, thin, and light-skinned women overwhelmingly dominate the generated portraits.

For the experiment, 36 textual prompts in English and Spanish were used, including three age ranges (20, 40, and 60 years) and two types of characterization: one neutral and one loaded with adjectives common in the advertising industry, such as "supermodel," "very beautiful," "real beauty," or "very ugly." Each system produced between 15 and 17 images per prompt, resulting in a total of 359 portraits.
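
A minimal sketch of how such a prompt grid might be assembled, assuming the study crossed language, age, and characterization factorially; the template wording and qualifier lists below are illustrative placeholders, not the researchers' published prompts:

```python
from itertools import product

# Illustrative reconstruction of the study's prompt grid. The template
# wording and qualifier lists are assumptions for demonstration, not the
# researchers' published prompts.
LANGUAGES = ["en", "es"]
AGES = [20, 40, 60]
QUALIFIERS = {
    "en": ["", "supermodel", "very beautiful", "real beauty", "very ugly"],
    "es": ["", "supermodelo", "muy guapa", "belleza real", "muy fea"],
}
TEMPLATES = {
    "en": "Advertising portrait of a {qualifier} {age}-year-old woman",
    "es": "Retrato publicitario de una mujer {qualifier} de {age} años",
}

def build_prompts() -> list[dict]:
    prompts = []
    for lang, age in product(LANGUAGES, AGES):
        for qualifier in QUALIFIERS[lang]:
            text = TEMPLATES[lang].format(qualifier=qualifier, age=age)
            prompts.append({
                "lang": lang,
                "age": age,
                "qualifier": qualifier or "neutral",
                "text": " ".join(text.split()),  # collapse the gap a "" qualifier leaves
            })
    return prompts

print(len(build_prompts()))  # 30 in this sketch; the study reports 36 prompts
```

Crossing two languages, three ages, and a neutral variant plus the four quoted qualifiers yields 30 combinations in this sketch; the study's 36 prompts presumably include additional variants not quoted in this article.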

The research team applied a codebook to classify variables such as race, hair type and color, facial features, body build, expressiveness, and visual style. This analysis allowed them to identify recurring physical patterns and measure the impact of language on the diversity of the results.
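
As an illustration, a coding record for such a codebook might look like the following; the field names and category labels are assumptions modeled on the variables listed above, not the study's actual coding scheme:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical coding record modeled on the variables the article lists;
# the field names and category labels are assumptions, not the study's
# actual codebook.
@dataclass(frozen=True)
class PortraitCode:
    system: str          # "Copilot", "Leonardo", "Firefly", "Midjourney"
    prompt_lang: str     # "en" or "es"
    race: str            # e.g. "white", "black", "latina", "asian"
    hair_type: str
    hair_color: str
    facial_features: str
    body_build: str      # e.g. "thin", "average", "larger"
    expressiveness: str
    visual_style: str

def race_by_system(codes: list[PortraitCode]) -> Counter:
    """Tally coded race per system, making overrepresentation measurable."""
    return Counter((c.system, c.race) for c in codes)
```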

The conclusions were clear: there is an overrepresentation of young Caucasian women with normative body types. Even when the prompts requested diversity, the platforms tended to return images of light-skinned individuals with European features. Racial and phenotypic diversity was severely limited.

Limited Female Representation in Generated Images
Firefly showed slightly more racial breadth when prompts were in English, occasionally incorporating Black or Latina women, but in Spanish, the monocultural model again predominated. Midjourney offered slight phenotypic variations, though without a real balance among different groups.

Leonardo, on the other hand, exhibited the highest level of homogenization, with images that were almost identical to each other and an almost absolute predominance of white women.

The Decisive Role of Prompt Language
The language of the prompt proved to be a decisive factor. Because the models' training corpora are predominantly in English, prompts in English yielded somewhat more diverse portraits than prompts in Spanish, where stereotypes intensified. Language thus not only conditions the final aesthetic but also reinforces asymmetries in gender and racial representation.

Another critical point was body build. The platforms systematically reproduced thin, slender figures, minimizing physical diversity. The absence of robust, larger, or differently proportioned bodies pointed to a structural limitation in the models.

When qualifiers such as "supermodel" or "very beautiful" were used, the visual bias intensified further: feminine beauty became associated with markers of social and economic advantage, reinforcing a hypersexualized, non-inclusive view of women's bodies.

AI, Advertising, and Gender Stereotypes
The study also documented the impact of automatic moderation systems. Prompts like "very ugly" triggered filters in Copilot and Firefly that blocked image generation, deeming the requests potentially offensive.

While these mechanisms aim to avoid discriminatory content, they also restrict the possibility of analyzing how technology processes subjective and socially complex concepts.
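
A pipeline auditing these systems could record such refusals as data rather than silently dropping them. In this sketch, `client.generate` and `ModerationError` are hypothetical placeholders for whatever interface and refusal signal a given platform actually exposes:

```python
class ModerationError(Exception):
    """Placeholder for whatever refusal signal a real client raises."""

def run_prompt(client, prompt: str, count: int = 16) -> dict:
    # Treat a moderation block as a data point: it shows how the system
    # handles subjective terms like "very ugly", not merely a failure.
    result = {"prompt": prompt, "images": [], "blocked": False}
    try:
        # `client.generate` is an assumed interface, not a real library call.
        result["images"] = client.generate(prompt, count=count)
    except ModerationError:
        result["blocked"] = True
    return result
```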

For the authors, the problem is not merely technical but cultural. Professor Esmeralda López emphasized the need to "construct images that challenge stereotypes and connect emotionally with diverse audiences, showing a plurality that has been ignored until now."

Along the same lines, Begoña Moreno argued that it is "essential for AI tools to include more diverse data to build more inclusive representations."

The research team warned that artificial intelligence not only inherits biases present in the data it is trained on but also amplifies traditional advertising narratives.

They therefore called for multidisciplinary development teams, greater ethical oversight, and a substantive expansion of the diversity of datasets used to train these systems.

In a context where advertising has advanced—albeit unevenly—toward the visibility of racial and body diversity, the inability of generative AI to reflect this plurality represents, according to the study, a step backward.

Far from promoting inclusion, current models reinforce an aesthetic standard that ignores the physical and cultural reality of most audiences. Why, then, do we keep treating them as a reference?
