Deep Dive: AI and Diversity & Inclusion
Business in the Community launched the Responsible AI Lab: a ground-breaking initiative that brought together leaders from business, government, and academia to co-create a comprehensive blueprint for Responsible AI. From this lab, we’ve established a set of actions that all businesses should focus on, our foundational guidance, and four topical deep dives to more thoroughly explore key issue areas. This deep dive explores AI as it relates to diversity and inclusion.
Introduction
Responsible AI in the workplace requires rigorous data bias checks, responsible leadership and the use of certification and fairness metrics. Bias checks help prevent AI systems from deepening existing inequalities, such as those relating to gender and race, while responsible leadership ensures that the use of AI maintains employees' trust and autonomy, protecting workplace culture. Certification and fairness metrics enable organisations to assess whether their AI systems produce biased outcomes, and support transparency, accountability and trust.
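Fairness metrics of the kind mentioned above can be made concrete with a simple check. The sketch below is illustrative only: the candidate data is hypothetical, and the 0.8 threshold follows the "four-fifths rule", one common heuristic for flagging potential adverse impact. Real certification would draw on richer data and multiple metrics.

```python
# Illustrative sketch: checking a screening model's outcomes for group-level
# bias. All names and data here are hypothetical.

def selection_rate(outcomes):
    """Share of candidates marked as selected (1) within a group."""
    return sum(outcomes) / len(outcomes)

def fairness_check(group_a, group_b, threshold=0.8):
    """Compare selection rates between two groups.

    Returns the disparate impact ratio (lower rate / higher rate) and
    whether it clears `threshold` -- the 'four-fifths rule' heuristic.
    Ratios below the threshold flag potential adverse impact and
    warrant a closer audit.
    """
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio >= threshold

# Hypothetical screening outcomes (1 = shortlisted, 0 = rejected)
men   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.7
women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # selection rate 0.4

ratio, passed = fairness_check(men, women)
print(f"disparate impact ratio: {ratio:.2f}, passes four-fifths rule: {passed}")
```

A ratio of roughly 0.57 here would fail the heuristic, prompting a deeper review of the screening process rather than an automatic conclusion of bias.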
Risks and opportunities
A major risk of AI systems is that, without proper oversight, they can inherit and amplify human bias. For example, AI recruitment tools are 30% more likely to filter out candidates over the age of 40¹, and women and ethnic minorities are underrepresented in the leadership roles that benefit most from AI-driven productivity gains. These risks threaten to widen existing pay and progression gaps, displacing already vulnerable groups while rewarding those in positions of power.
Additionally, the integration of AI may create a new form of labour market polarisation that risks exacerbating existing inequalities due to unequal access to AI upskilling. Women are 25% less likely than men to use generative AI tools, and underrepresented groups are less likely to be in roles offering upskilling or tech exposure. There is also a further risk of regional inequalities emerging if AI adoption is concentrated in areas with higher skill levels or more resilient economies. Where jobs are displaced or lost, reskilling and supporting transitions are essential.
AI can also support those who require non-standard working patterns, such as carers, by enabling flexible work schedules, smoothing returns from career breaks and reducing long-term career penalties for caregivers. This especially affects women, who are more likely to take on the majority of unpaid care work and to work part time.
AI also presents strong opportunities to improve inclusion for disabled people through assistive technologies, when deployed thoughtfully. AI-powered tools like real-time speech-to-text transcription and live captioning are transformative for employees who are deaf or hard-of-hearing, ensuring full participation in meetings². However, these benefits depend on inclusive design and fairness testing to ensure that systems do not penalise different working styles and that the full breadth of inclusion needs is considered.
Why does this matter for your business?
If diversity and inclusion are not embedded into your use of AI, your organisation's trust, reputation and long-term performance can suffer. Poorly governed AI systems could undermine your employees' confidence, erode workplace culture and cause public backlash where systems are perceived as unfair, discriminatory or intrusive. In turn, this could harm your employees' wellbeing and engagement, leading to higher turnover and lower productivity, and damaging your organisation's reputation, credibility and ability to innovate.
Actions by maturity level
Adopting
For organisations beginning to introduce AI into decision-making or workplace systems, where the priority is identifying obvious risks and preventing unintended harm.
Embedding
For organisations taking a proactive, evidence-based approach to fairness and behavioural impact as AI becomes embedded in core systems.
Leading
For organisations with established governance that monitor behavioural and inclusion impacts over time and use the evidence to improve equity, accessibility and accountability.
Transforming
For organisations seeking to shape wider norms, standards and accountability frameworks beyond their own operations.
Case studies
Adopting
A recruitment tool developed by Amazon to help rate candidates for software engineering roles was abandoned after it was found to discriminate against women³. Female applicants' ratings were downgraded simply because fewer women had applied historically, so there was less data available to assess them. While the recruitment tool was withdrawn, this example demonstrates a reactive rather than preventative approach to the ethical risks of AI. Organisations need to recognise the unintended consequences of AI and why early bias testing and ethical assessments are essential before deployment.
Embedding
Adobe established AI governance through an internal working group, AI@Adobe, and a review board with a diverse group of members to oversee generative AI creation and exploration. Staff act as "customer zero", testing and guiding new features on generative AI applications such as Firefly⁴. This approach is a clear example of internal governance that embeds transparency and ethics into everyday practice, while ensuring that generative AI enhances rather than replaces human creativity.
Leading
Google's Project Euphonia demonstrates leading practice through data-driven governance and fairness testing. The project addressed bias by training voice-recognition tools with speech data from people with disabilities, reducing recognition errors by over 80%⁵. Google has championed inclusion by involving people with disabilities in the development of AI tools, ultimately increasing the tools' accessibility. By testing how AI systems work for those more likely to be excluded by their use, the project shows how organisations can monitor behavioural and inclusion impacts over time and use the evidence collected to improve equity, accessibility and accountability.
Transforming
Do you think your business is transforming in the area of AI and Diversity and Inclusion? If so, we would love to hear from you and share your story here. Contact your Relationship Manager if you are a BITC Member, or email info@bitc.org.uk.
Endnotes
1. Next-Up, 2024. Ageism in hiring: How AI is crushing talent over 50.
2. Microsoft, 2025. Inclusive innovation: The role of AI in accessibility and neurodiversity. Microsoft Asia Source.
3. Reuters, 2018. Amazon scraps secret AI recruiting tool that showed bias against women.
4. Great Place To Work, 2025. 100 Best Training Workforce AI.
5. techUK, 2024. From barriers to bridges: Harnessing AI’s transformative role for accessibility.
Explore our foundational guidance and other responsible AI deep dives
AI and Ethics, Governance & Strategy
Building trust through transparent and ethical governance of artificial intelligence.
AI and Employment & Skills
Equipping people with the skills and confidence to thrive in an AI-enabled world.
AI and Health & Wellbeing
Protecting autonomy, setting healthy digital boundaries and supporting mental wellbeing.
AI and the Environment
Reducing AI’s environmental footprint while using the technology to support climate and nature goals.
Thank you to our sponsors and contributors
We would like to thank Deloitte and Verizon for sponsoring the Responsible AI framework. We are also grateful to all the organisations, members and academic partners whose generous contributions, insights and expertise have meaningfully shaped the development of this framework, including BITC members; Verizon Business, Deloitte, Grant Thornton, Pinsent Masons and Shoosmiths; Dr Luca Arnaboldi, Dr Mehreen Ashraf, Emre Kazim, Dr Felicia Liu, Zhuang Ma, Roberta Pierfederici and Dr Daniel Wheatley; and Allwyn UK, Cancer Research UK, Good Things Foundation, Macmillan Cancer Support and UKAI.