AI and Health & Wellbeing - Business in the Community

Deep Dive: AI and Health & Wellbeing

Business in the Community launched the Responsible AI Lab: a ground-breaking initiative that brought together leaders from business, government, and academia to co-create a comprehensive blueprint for Responsible AI. From this lab, we’ve established a set of actions that all businesses should focus on, our foundational guidance, and four topical deep dives to more thoroughly explore key issue areas. This deep dive explores AI as it relates to health and wellbeing.

Introduction

AI is reshaping workplace health and wellbeing, creating both opportunities and risks. When implemented responsibly, AI can support employees to better manage workloads and create roles that are more adaptable to fluctuating health needs. However, poorly governed AI can intensify technology-induced stress, overstep digital boundaries and contribute to burnout. Protecting wellbeing requires clear digital boundaries, including respect for the right to disconnect, alongside wellbeing metrics that monitor AI’s impact on stress, autonomy, trust and engagement over time.  

Risks and opportunities

A major risk associated with the use of AI in the workplace is technology-induced stress. AI tools make it easier for employers to monitor their employees by enabling more intensive data collection, and can blur the lines between work and personal life, leading to stress and burnout. Increased workforce monitoring – from tracking keystrokes to using facial recognition1 – can erode trust, affect wellbeing and lead to counterproductive behaviours and high stress. Without clear boundaries and proportionate use, AI can undermine autonomy and psychological safety. There is a need for wellbeing training, community support, and the ability for people to “disconnect” where appropriate.

Furthermore, performance algorithms pose significant diversity and inclusion risks that link directly to wellbeing. Performance algorithms trained on neurotypical or limited data may unfairly penalise those whose work styles differ, such as neurodivergent employees (e.g. those with ADHD) and those who require flexibility, such as caregivers or disabled employees, by flagging them as less productive and possibly causing discriminatory outcomes2. Such consequences can increase anxiety, reduce engagement and deepen existing inequalities, especially where there is a lack of transparency about how AI is used in the workplace. Stakeholder engagement is essential throughout AI design, including involving those with lived experience.

When used responsibly, AI has the potential to create opportunities to support health and wellbeing. It can support flexible schedules, helping to prevent burnout and support better work-life balance, enable smooth returns from career breaks, and reduce long-term penalties for caregivers, especially women. Using AI systems to automate administrative tasks and enable adaptive roles can reduce barriers to retaining highly skilled, experienced employees.

AI also has the potential to identify wellbeing risks and enable earlier, more targeted interventions, provided this is done sensitively and transparently. Overall, AI-enabled workplaces can have a positive effect on health and wellbeing, but it depends on clear digital boundaries, respect for autonomy, and the use of wellbeing metrics that assess the real impact of AI on stress, trust and engagement over time.  

Why does this matter for your business?

By using AI, your business can support manageable workloads, flexibility and psychological safety, which together will sustain your workforce’s productivity, help your business to retain skilled employees and ensure your business adapts to future change. At a societal level, by adopting AI responsibly, your business can help reduce health inequalities and support longer, healthier working lives.

Actions by maturity level

Adopting

For organisations beginning to introduce AI into workplace systems, where the priority is preventing harm and recognising early wellbeing risks.

  • Include mental health in AI deployment checklists — to identify risks of stress, burnout, intrusive monitoring or blurred work-life boundaries before tools are introduced.     
  • Recognise time autonomy in training — to ensure employees understand boundaries around availability, workload expectations and the right to disconnect.   
  • Disclose AI use transparently — to build trust and ensure employees understand where AI may affect workload, monitoring or performance assessment. 

Embedding

For organisations taking a proactive approach to managing wellbeing impacts as AI becomes integrated into core systems and ways of working.

  • Integrate wellbeing metrics into impact assessments — to monitor effects on stress, autonomy, trust and engagement alongside productivity outcomes. 
  • Offer training on healthy AI use — to support employees in using AI tools without intensifying workload, pressure or presenteeism.    
  • Promote a responsible AI culture that prioritises support over surveillance — to reinforce psychological safety and prevent over-monitoring. 

Leading

For organisations embedding accountability and evidence-based governance to ensure AI actively supports employee wellbeing.

  • Use AI for mental health support — to reduce cognitive load, streamline administrative burden and enable earlier, targeted wellbeing interventions with human oversight.   
  • Track time-related wellbeing metrics — to identify patterns such as workload intensity, out-of-hours activity and burnout risk over time. 
  • Create ethical AI offices or governance functions with clear accountability for wellbeing impacts — to ensure oversight, transparency and continuous improvement. 

Transforming

For organisations seeking to shape wider norms, rights and policy frameworks beyond their own operations.

  • Advocate for digital rights and psychological safety — to promote clear standards on boundaries, monitoring practices and employee autonomy in workplaces using AI systems. 
  • Lead policy on long-term wellbeing impacts of AI — to influence sector-wide expectations and ensure AI adoption supports healthier, more sustainable working lives. 

Case studies

Adopting

In 2024, Boeing introduced infrared motion sensors in its offices to track employee presence, but the move sparked significant internal backlash due to concerns over privacy and a lack of transparency. Employees felt the monitoring was intrusive and undermined trust, prompting the company to quickly scrap the initiative. This demonstrates how insufficient consideration of mental health, autonomy and transparency can undermine wellbeing. It also highlights why organisations at the adoption stage must recognise AI’s potential to increase anxiety, disclose AI use clearly and consider burnout and psychological safety before deployment.3

Embedding

RocketAir, a creative agency, uses AI tools to streamline workflows and reduce admin, enabling a four-day workweek without salary cuts and supporting caregivers and flexible work. Wellbeing benefits are built into everyday practice, demonstrating how AI can support workload management and flexible scheduling. AI is used to support employees rather than for intensive surveillance and monitoring.4

Leading

The NHS is using AI to support faster, more accurate diagnoses, integrating AI to support staff in high-pressure environments whilst safeguarding autonomy and wellbeing. For example, AI tools are helping clinicians in their decision making to improve stroke outcomes – potentially tripling survival rates – while keeping human expertise at the centre of care. Outcomes such as treatment times and patient recovery are monitored and evaluated nationally. This example shows how AI can support wellbeing by reducing cognitive load, stress and anxiety whilst retaining human expertise.

Transforming

Do you think your business is transforming in the area of AI and Health & Wellbeing? If so, we would love to hear from you and share your story here. Contact your Relationship Manager if you are a BITC Member, or email info@bitc.org.uk.

Endnotes

1. IGI Global, 2025. Handbook of Research on Remote Work and Worker Well-Being. 

2. New York City Bar Association, 2024. The impact of the use of AI on people with disabilities.

3. Wired, 2024. Your boss wants you back in the office — this surveillance tech could be waiting for you.

4. Business Insider, 2025. AI tools and the 4-day workweek: Can efficiency gains make shorter weeks a reality?.

Explore our foundational guidance and other responsible AI deep dives

Foundational guidance

AI and Ethics, Governance & Strategy

Building trust through transparent and ethical governance of artificial intelligence. 

AI and Employment & Skills

Equipping people with the skills and confidence to thrive in an AI-enabled world.

AI and Diversity & Inclusion

Preventing bias, widening access and ensuring AI supports inclusive workplaces. 

AI and the Environment

Reducing AI’s environmental footprint while using the technology to support climate and nature goals. 

Thank you to our sponsors and contributors

We would like to thank Deloitte and Verizon for sponsoring the Responsible AI framework. We are also grateful to all the organisations, members and academic partners for their generous contributions, insights and expertise, which have meaningfully shaped the development of this framework, including BITC members Verizon Business, Deloitte, Grant Thornton, Pinsent Masons and Shoosmiths; Dr Luca Arnaboldi, Dr Mehreen Ashraf, Emre Kazim, Dr Felicia Liu, Zhuang Ma, Roberta Pierfederici and Dr Daniel Wheatley; and Allwyn UK, Cancer Research UK, Good Things Foundation, Macmillan Cancer Support and UKAI.