Published: 21 February 2026, The English Chronicle Desk, The English Chronicle Online
The UK mental health charity Mind has launched a year‑long inquiry into artificial intelligence (AI) and its impact on mental health, following revelations from a Guardian investigation that exposed potentially dangerous and misleading mental health advice being generated by Google’s AI Overviews. The inquiry, described as the first of its kind globally, aims to examine AI’s growing role in mental wellbeing and put in place safeguards to protect the public.
The Guardian’s reporting revealed that Google’s AI‑generated “Overviews,” which appear atop search results and are viewed by an estimated 2 billion users per month, were providing inaccurate summaries on sensitive mental health topics, including psychosis and eating disorders, that could mislead vulnerable individuals and potentially worsen their conditions. Researchers found examples of the AI presenting unhealthy behaviours as acceptable and oversimplifying complex medical information in ways that obscured crucial context.
In response, Mind’s CEO Dr Sarah Hughes said the charity is prioritising public wellbeing in a digital era where AI increasingly influences people’s search for help and support. The inquiry will bring together mental health professionals, people with lived experience, policymakers and representatives from tech firms to assess risks and recommend standards for ethical and accurate use of AI in mental health contexts.
Rosie Weatherley, Mind’s information content manager, highlighted that while search engines once directed users to rich, credible medical information, the newer AI summaries often replace nuanced expert content with overly simplistic responses that may be misleading or harmful, especially to those in distress. Critics argue AI systems need to prioritise empathetic, evidence‑based guidance rather than generic, high‑level summaries of complex issues.
AI is increasingly being used in contexts related to wellbeing and emotional support, and experts have warned that inaccuracies in AI‑generated advice could prevent people from seeking appropriate help or even reinforce harmful beliefs. Mind’s inquiry is expected to examine these wider implications and investigate whether additional regulation, industry standards or public safeguards are needed to protect users from misleading information.
Google has acknowledged concerns about the reliability of its AI systems, saying it is working to improve the accuracy and safety of its AI responses and stressing that it continues to refine its tools and policy safeguards. The English Chronicle will continue to cover updates from Mind’s inquiry and the ongoing debates over AI’s role in public health and mental wellbeing.