Trigger Warning/Disclaimer: This blog post mentions suicide. If you or someone you know is experiencing suicidal thoughts or a crisis, please reach out immediately for help. A hotline in your country can be found at befrienders.org.
Author – Katerina Drakos
Governments, startup founders, academics, mental health professionals and others wrestle over who gets to define the future of AI mental health care.
Amidst a lack of regulatory oversight regarding AI-based mental health chatbots, some US states have taken steps to ban these systems in order to protect the public. Full bans are in place in Illinois and Nevada, and although Utah has not banned them outright, it imposes strong restrictions and requirements around transparency, advertising, data use and human professional involvement. Yet bans, as a political strategy and policy instrument, risk unintended consequences on a population-wide scale (Oliver et al., 2019).
Why Are Governments Turning to Possibly Ineffective Policymaking on AI Mental Health Therapy?
The government of Illinois stated that the legislation “stands as our commitment to safeguarding the well-being of our residents by ensuring that mental health services are delivered by trained experts who prioritize patient care above all else” (Slaby, 2025). These decisions have been spurred and accelerated by reports of suicides following chatbot interactions, which have heightened concern about the application of AI in mental health. Indeed, the first US Senate hearing on AI chatbot harms was convened after lawsuits from families whose children took their own lives following chatbot conversations. Devastating consequences have also occurred in Europe: in Belgium, a man died by suicide after six weeks of conversations with a chatbot that reportedly encouraged suicidal ideation (Ben-Zion, 2025). Unlike the US, however, the European Union is guided by the EU AI Act, which classifies mental health AI chatbots as high-risk and sets strict requirements for transparency, human oversight, and risk management. Even so, more specific guidance, such as on evaluating the safety of AI mental health tools, remains scarce, and the field continues to suffer from a polarised debate between industry and mental health professionals.
In addition to general-purpose chatbots, AI has been built into tools specifically designed for mental health. These AI mental health chatbots have typically been framed and justified as access tools: efficient, scalable services. Tech startups have seized the opportunity, and AI mental health is marketed as a powerful tool available 24/7 (Moore et al., 2025). Academia, however, has proven more cautious about providing widespread AI therapy before certain risks can be mitigated. One group of health professionals describes how AI might fuel psychosis by mirroring, validating or amplifying delusional or grandiose content, particularly in users already vulnerable to psychosis (Morrin et al., 2025), while another reports that LLMs express stigma toward people with mental health conditions and respond inappropriately to common presentations in therapy settings (Moore et al., 2025).
Uncertainty about what the future holds lingers, and everyone working in the field must wade through contradictory evidence. In one study, chatbot responses were rated as significantly higher in both quality and empathy than physician responses (Ayers et al., 2023); yet the public remains wary of LLMs’ empathy and trustworthiness (Stade et al., 2025). Despite this caution, the unmet need in mental health care and the urgency of providing scalable solutions, especially for vulnerable populations, are undeniably driving interest in and experimentation with this technology. Notably, one study on the use of LLMs for mental health found that 24% of surveyed participants use general-purpose LLMs for this purpose (Stade et al., 2025). These users are more likely to be young, male, Black, to have poorer mental health and quality of life, and to have difficulty accessing traditional mental health treatment because of cost and insurance coverage (Stade et al., 2025).
The field of AI in mental health is constantly evolving to meet the needs of industry, policymakers, and researchers, a pace of change that can feel overwhelming.
How Can People Do the Right Thing If the Context Is Changing on an Hourly Basis?
This is where value alignment can be helpful. Bans and regulatory advances will continue to shape the political landscape, but the technology will remain regardless of these updates. Protocols, trialled and co-designed with users and clinicians, can be an effective way of mitigating the risks and downsides of such technology. An AI system should reaffirm its non-human nature, flag patterns of language in prompts indicative of psychological distress, and enforce conversational boundaries (e.g. no emotional intimacy or discussion of suicide) (Petterson et al., 2025); a minimal sketch of what such a protocol layer might look like follows below. Teams designing and developing these tools should be multidisciplinary, including health professionals, ethicists and human-AI interaction specialists. Accountability must be clear from the start, with assurance that responsibility for failures leading to negative consequences is clearly assigned. Additional safety measures include clear and transparent guidelines for acceptable behaviour, as well as tools for users to report concerns (Morrin et al., 2025).
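To make this concrete, here is a minimal sketch in Python of what such a protocol layer might look like. Everything in it, including the pattern lists, the `check_message` function and the reminder and escalation messages, is a hypothetical illustration rather than a validated clinical tool; a real deployment would need clinician-curated lexicons, validated classifiers, and the co-design and trialling with users and clinicians described above.

```python
# Illustrative sketch only: a pre-response guardrail layer for a mental
# health chatbot. All pattern lists, names, and messages are hypothetical
# assumptions for illustration, not a clinically validated screener.
import re
from dataclasses import dataclass

# Hypothetical examples of distress language; a real system would use
# clinician-curated lexicons and validated classifiers, not a short regex list.
DISTRESS_PATTERNS = [
    r"\b(kill|hurt|harm)\s+myself\b",
    r"\bsuicid(e|al)\b",
    r"\bno reason to live\b",
]

# Topics the protocol places out of bounds for the chatbot
# (conversational boundaries, per Petterson et al., 2025).
BOUNDED_TOPICS = [r"\b(i\s+love\s+you|be\s+my\s+(friend|partner))\b"]

NON_HUMAN_REMINDER = (
    "Reminder: I am an AI program, not a human or a licensed therapist."
)
ESCALATION_MESSAGE = (
    "I can't help with this, but trained people can. Please contact a "
    "crisis line in your country; you can find one at befrienders.org."
)

@dataclass
class GuardrailResult:
    allow_model_response: bool   # False = replace the model's reply entirely
    prefix: str                  # text prepended to any allowed reply
    override: str | None = None  # fixed reply used when the model is bypassed

def check_message(user_message: str) -> GuardrailResult:
    """Screen a user message before it ever reaches the language model."""
    text = user_message.lower()
    # 1. Distress language: bypass the model and refer to human help.
    if any(re.search(p, text) for p in DISTRESS_PATTERNS):
        return GuardrailResult(False, "", override=ESCALATION_MESSAGE)
    # 2. Boundary topics (e.g. emotional intimacy): decline and restate limits.
    if any(re.search(p, text) for p in BOUNDED_TOPICS):
        return GuardrailResult(
            False, "",
            override=NON_HUMAN_REMINDER + " I can't engage in that kind of "
            "relationship, but I can point you to support resources.",
        )
    # 3. Otherwise, let the model answer but reaffirm its non-human nature.
    return GuardrailResult(True, prefix=NON_HUMAN_REMINDER + "\n\n")
```

The keyword matching here is deliberately crude; the point is architectural. Distress detection and boundary enforcement sit outside the model, so a failure of the model cannot bypass them, and every trigger can be logged, which supports the accountability and user-reporting mechanisms discussed above.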
It is our duty to ensure that the technology we bring into our lives works within the limits that humanity imposes upon it. When these limits aren’t clear and unintended consequences are not taken into consideration, disasters may occur, and a lack of responsibility leads to polarisation and hostile accusations. We should strive to practice empathy, understanding and critical thinking when challenging each other’s work and opinions, and ultimately settle on what is best for the well-being of the population.
References:
Ayers, J. W., Poliak, A., Dredze, M., Leas, E. C., Zhu, Z., Kelley, J. B., Faix, D. J., Goodman, A. M., Longhurst, C. A., Hogarth, M., & Smith, D. M. (2023). Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Internal Medicine, 183(6), 589–596. https://doi.org/10.1001/jamainternmed.2023.1838.
Ben-Zion, Z. (2025). Why we need mandatory safeguards for emotionally responsive AI. Nature, 643(8070), 9–9. https://doi.org/10.1038/d41586-025-02031-w.
Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C., & Haber, N. (2025). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 599–627. https://doi.org/10.1145/3715275.3732039.
Morrin, H., Nicholls, L., Levin, M., Yiend, J., Iyengar, U., DelGuidice, F., Bhattacharyya, S., MacCabe, J., Tognin, S., Twumasi, R., Alderson-Day, B., & Pollak, T. (2025). Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it). OSF. https://doi.org/10.31234/osf.io/cmy7n_v5.
Oliver, K., Lorenc, T., Tinkler, J., & Bonell, C. (2019). Understanding the unintended consequences of public health policies: The views of policymakers and evaluators. BMC Public Health, 19(1), 1057. https://doi.org/10.1186/s12889-019-7389-6.
Petterson, A., Mattka, J., & Chandra, P. (2025). Expanding Care Conceptualizations: An Integrative Literature Review of Care in HCI. Proceedings of the ACM SIGCAS/SIGCHI Conference on Computing and Sustainable Societies, 481–503. https://doi.org/10.1145/3715335.3735486.
Slaby, C. (2025). Gov. Pritzker Signs Legislation Prohibiting AI Therapy in Illinois.
Stade, E. C., Tait, Z. M., Campione, S. T., & Stirman, S. W. (2025). Current Real-World Use of Large Language Models for Mental Health.
Further reading/watching/listening:
Books & Articles:
Halder, S., Halder, B., & Mahato, A. K. (2025). Navigating AI in Mental Health Care: Innovations, Ethics, and Future Trends. Springer. https://link.springer.com/book/10.1007/978-981-96-9744-1.
Videos & Podcasts:
“AI psychosis”: could chatbots fuel delusional thinking? | Science Weekly | Hannah Devlin, Ian Sample & Nicola Davis | https://podcasts.apple.com/nz/podcast/ai-psychosis-could-chatbots-fuel-delusional-thinking/id136697669?i=1000723825961.

Image Attribution
Source: Better Images of AI library
Date: 10/10/2025
Image: “A Rising Tide Lifts All Bots” by Rose Willis & Kathryn Conrad (“AI Impacts”)
