
Generative AI - From Ethical Issue to Business Case: Diversity in the Age of AI

Updated: Apr 8

Imagine hiring a new employee who has learned literally everything from the internet over the past twenty years. Everything. Including all the biases, stereotypes and historical inequalities embedded within it. That, in a nutshell, is generative AI.


In March, diversity is always high on the agenda. International Women’s Day and numerous DEI events drive the conversation forward. This year, I’m noticing a shift in the questions I receive after my keynotes on diversity. It’s no longer just “how do we get more women into leadership positions?” Increasingly, the question is: “What about generative AI and diversity?” That question is entirely justified, and ignoring it costs companies money, reputation and innovation capacity.


The tech sector wasn’t always male-dominated

Few people realize that software development started out as a female domain. On the first programmable computers of the 1940s, and at NASA in the 1950s and 1960s, women played an essential role as programmers. Software was considered women’s work, while hardware, large and heavy, was seen as men’s work. The contributions of these female pioneers often remained invisible in history books. It wasn’t until 2016, with the film Hidden Figures, that their story gained widespread recognition.


So how did the balance shift so dramatically?

Until the mid-1980s, more than 35% of computer science students in the United States were women, comparable to fields like law or medicine at the time. After that, the percentage dropped rapidly. At the same time, something changed in the market: personal computers were heavily marketed as toys for boys. Popular films like Revenge of the Nerds reinforced the stereotype of the male tech genius. As a result, boys arrived at university with years of coding experience, while girls typically did not. The gap had already formed before day one.


Fast forward to today. In the Netherlands, less than 10% of IT entrepreneurs are women, and only around 17% of ICT workers are female. In Silicon Valley, the well-known “tech bros” dominate the AI landscape.


Biased training data as an invisible risk

Artificial intelligence systems learn from massive datasets, and that data is collected and labeled partly by humans. Humans have biases, conscious or unconscious. When training data contains these biases, AI systems absorb them and scale them rapidly.


We’ve seen this pattern before, in 'old tech'. Crash test dummies in the automotive industry were modeled for decades on the average male body of 70 kilograms. The result: women and children had up to a 47% higher risk of injury in accidents. In healthcare, women have around a 50% higher chance of misdiagnosis during a heart attack, simply because medical data has historically been based on male bodies.


The same issue now appears in AI. Early facial recognition systems had error rates of up to 34% for darker-skinned women, compared to less than 1% for lighter-skinned men, as MIT’s Gender Shades research showed. In HR, similar risks exist: CV screening algorithms have historically rejected female candidates because they were trained on data from previously “successful” (male) hires; Amazon famously scrapped such a recruiting tool in 2018 for exactly this reason. That doesn’t mean women are less capable; it reflects biased historical data and hiring practices.
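To make this mechanism concrete, here is a toy sketch that trains a simple screening model on synthetic, historically biased hiring decisions. Everything in it is invented for illustration; no real HR data is involved:

```python
# Toy illustration: a screening model trained on biased historical
# hiring decisions reproduces that bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Candidates: a skill score (what we actually want to select on) and gender.
skill = rng.normal(0, 1, n)
is_female = rng.integers(0, 2, n)

# Biased history: past recruiters hired on skill, but systematically
# penalized female candidates by a fixed amount.
hired = (skill - 0.8 * is_female + rng.normal(0, 0.5, n)) > 0

# A model trained on these decisions learns the penalty along with the skill signal.
model = LogisticRegression().fit(np.column_stack([skill, is_female]), hired)
print("learned coefficients [skill, is_female]:", model.coef_[0])

# Two equally skilled candidates, one male (0), one female (1):
print("P(hired) at equal skill:", model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1])
```

The model never “intends” to discriminate, yet it assigns a clearly negative weight to the gender feature: the bias in the historical decisions has become part of the system.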


The WEIRD problem

AI models commonly used in Europe are heavily trained on data from Western, Educated, Industrialized, Rich and Democratic societies, what researchers call the WEIRD problem.

WEIRD in = WEIRD out.


For multinational companies, this means AI-driven decisions may not align with a diverse, global customer base. Ultimately, the raw computing power of an AI model matters less than the diversity and integrity of the underlying data.


The mirror that shows what you want to see

Every year on International Women's Day, I play around with popular generative AI models by asking them to generate an image of a CEO in an office.
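For anyone who wants to reproduce the experiment, here is a minimal sketch. It assumes the official OpenAI Python SDK and its dall-e-3 image model purely as an example; the models I tested, and your own, may differ:

```python
# Minimal sketch: ask an image model who it imagines a CEO to be.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="dall-e-3",
    prompt="A CEO in an office",  # deliberately neutral wording
    n=1,
    size="1024x1024",
)
print(result.data[0].url)  # open the URL and see who the model pictured
```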



ChatGPT consistently produces a polished image of a white man in a suit. This year, Google’s image generator NanoBanana 2 gave me something very different: a female CEO of “Global Innovations,” with a Dutch name, in a modern office with a view that looked distinctly Dutch. The result aligned almost perfectly with what Google “knows” about me. Both outputs are biased, but in different ways.


As former Google engineer Blake Lemoine pointed out: "Any system that is not fully random contains bias." The real question is not whether AI is biased, but which biases it reflects, and which ones developers consider acceptable.


That brings us to the core issue for leadership teams. Female perspectives still play a limited role in the design of AI systems. Research by Randstad shows that 71% of AI-skilled workers are male. The people building AI determine the lens through which these systems interpret the world, and today that lens is still predominantly male.


From business risk to strategic opportunity

At the same time, this challenge presents a major opportunity.

If used consciously, AI can actively promote diversity. There are already tools that:

  • analyze job descriptions for biased language and make them more inclusive (see the sketch after this list)

  • support candidates through AI-driven interview training

  • help managers write more objective performance reviews

  • detect unexplained pay gaps
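A naive sketch of how the first kind of tool might work, in the spirit of the gendered-wordlist research by Gaucher, Friesen and Kay (2011). The wordlists below are a tiny invented subset for illustration, not a vetted lexicon:

```python
# Naive job-ad language scanner: count masculine- and feminine-coded words.
# Wordlists are a small invented subset, purely for demonstration.
import re

MASCULINE_CODED = {"aggressive", "ambitious", "competitive", "dominant",
                   "rockstar", "ninja", "fearless"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing",
                  "interpersonal", "empathetic", "loyal"}

def scan_job_ad(text: str) -> dict:
    """Return the coded words found in a job ad, grouped by category."""
    words = re.findall(r"[a-z]+", text.lower())
    return {
        "masculine": sorted(w for w in words if w in MASCULINE_CODED),
        "feminine": sorted(w for w in words if w in FEMININE_CODED),
    }

ad = "We want an ambitious, competitive rockstar to dominate the market."
print(scan_job_ad(ad))
# {'masculine': ['ambitious', 'competitive', 'rockstar'], 'feminine': []}
```

Real tools go much further than wordlists, but the principle is the same: make the language measurable before you try to fix it.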


However, this only works if diversity and inclusion are treated as core selection criteria in the procurement, development and implementation of AI tools. Any organization that adopts AI without understanding its biases is introducing significant hidden risk.


Moreover, research shows that more diverse development teams lead to greater creativity and higher retention rates among women. Diverse teams simply build better, safer and more commercially successful technology.


That insight belongs at the heart of your business strategy.


Three actionable recommendations for the boardroom

  1. Treat diverse AI talent as a strategic advantage

    Don’t see diversity as a checkbox, but as a way to stay ahead of the competition. Diverse teams build better solutions.

  2. Treat AI bias as a strategic risk

    Make inclusivity a core part of your AI strategy. Always ask: whose data was used to train this system, and who is underrepresented?

  3. Demand radical transparency from vendors

    Ask for concrete data on bias testing. Which demographic groups were tested? What are the error rates for each group? If a vendor cannot give clear answers, treat that as a red flag. And always test new tools thoroughly on your own data before scaling them; the sketch below shows one simple starting point.
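As a simple starting point for such a test, the sketch below computes the misclassification rate per demographic group. The predictions, labels and group codes are placeholders; a real audit would use your own evaluation data and more than one fairness metric:

```python
# Minimal per-group error-rate check for a classification tool.
# All data below is placeholder; substitute your own evaluation set.
import numpy as np

def error_rates_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each demographic group."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        str(g): float((y_pred[groups == g] != y_true[groups == g]).mean())
        for g in np.unique(groups)
    }

# Placeholder data: true outcomes, model outputs, demographic labels.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["f", "m", "f", "m", "f", "f", "m", "m"]

print(error_rates_by_group(y_true, y_pred, groups))
# A large gap between groups is exactly the red flag to look for.
```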


