

Are you addressing diversity bias in GenAI? 

Published April 17, 2025 in Brain Circuits • 2 min read

A recent survey by IMD, Microsoft, and Ringier reveals that just 35% of organizations have strategies to address diversity bias in GenAI. Alexander Fleischmann identifies the risks and suggests ways to mitigate bias in the machine.

The risks

GenAI inherits real-world bias and fairness problems, which are reflected and embedded in its training data and design. Left unchallenged, these issues can seriously undermine the reliability and value of GenAI output. Worse, they can deepen existing problems of representation, access, inclusion, and opportunity.

A notable example was Amazon’s AI recruiting tool, which was trained on resumes submitted over a 10-year period when the tech industry was predominantly male. Because the algorithm learned to favor resumes that resembled those of past successful candidates, it perpetuated gender bias in hiring by “preferring” male candidates. (Amazon scrapped the tool in 2018.)
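
To make the mechanism concrete, here is a minimal sketch in Python, using scikit-learn on synthetic data. It illustrates the general failure mode, not Amazon’s actual system: even when gender is excluded from the inputs, a model trained on biased historical hiring decisions learns to penalize a gender-correlated proxy feature.

```python
# Minimal illustration of inherited hiring bias (synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic resumes: a genuine skill score, plus a proxy feature
# (e.g., membership in a women's professional network) correlated with gender.
skill = rng.normal(0.0, 1.0, n)
is_female = rng.integers(0, 2, n)
proxy = ((is_female == 1) & (rng.random(n) < 0.6)).astype(float)

# Historical labels: past recruiters favored male candidates,
# so the bias is baked into the training targets themselves.
hired = (skill + 1.0 * (1 - is_female) + rng.normal(0.0, 0.5, n)) > 1.0

# Gender itself is excluded from the inputs, but the proxy remains,
# and the model learns to down-weight it anyway.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
print("weight on skill:        %+.2f" % model.coef_[0][0])
print("weight on gender proxy: %+.2f" % model.coef_[0][1])  # negative => bias inherited
```

The point of the sketch is that simply dropping the sensitive attribute is not enough; bias re-enters through correlated features.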

Bias in GenAI systems can also damage the bottom line by impacting:

  • Brand reputation
  • Customer satisfaction
  • Competitiveness
  • Decision-making

How to mitigate bias in the machine

Addressing diversity bias in GenAI hinges on people, processes, and technology:

  • Prioritize a robust and responsible AI framework. By embedding principles such as fairness, transparency, and accountability, you establish a foundation of trust and demonstrate ethical leadership.
  • Focus on empowering and diversifying your teams and bringing in DE&I expertise. Diverse, multidisciplinary AI teams working in a psychologically safe environment enhance critical thinking, reveal hidden biases, and ensure your technology serves a broad spectrum of perspectives, ultimately making your products stronger and more inclusive.
  • Ensure that ongoing bias training and education extend across the organization, reaching both developers and users, to drive awareness.
  • Actively question all AI-generated results, remain vigilant to bias, and commit to continuous improvement (one simple audit of this kind is sketched after this list).
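
As a starting point for that kind of scrutiny, the sketch below shows one routine check you could run on AI-assisted decisions: comparing selection rates across groups, a simple demographic-parity audit. It is plain Python; the group labels and the 0.2 flag threshold are illustrative assumptions, not a standard.

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, e.g. ("female", True)."""
    totals, picked = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        picked[group] += int(selected)
    return {g: picked[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Gap between the highest and lowest group selection rates."""
    return max(rates.values()) - min(rates.values())

# Example: audit a batch of AI-screened applications (synthetic data).
audit = [("female", True), ("female", False), ("female", False),
         ("male", True), ("male", True), ("male", False)]

rates = selection_rates(audit)
print(rates)  # roughly {'female': 0.33, 'male': 0.67}
if parity_gap(rates) > 0.2:  # the flag threshold is a policy choice
    print("Warning: review this batch for potential bias")
```

A single metric like this will not catch every form of bias, but running it routinely turns “remain vigilant” from a slogan into a repeatable process.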

Key learning

Working towards responsible AI calls for a sense of shared accountability. This is essential to building and shaping GenAI in a way that earns trust, respects values, and benefits us all.

Authors

Alexander Fleischmann

Equity, Inclusion and Diversity Research Affiliate

Alexander received his PhD in organization studies from WU Vienna University of Economics and Business, researching diversity in alternative organizations. His research focuses on inclusion and how it is measured; inclusive language and images; ableism and LGBTQ+ issues at work; and ways of organizing solidarity. His work has appeared in, among others, Organization; Work, Employment and Society; Journal of Management and Organization; and Gender in Management: An International Journal.
