
AI therapy chatbots draw new oversight as suicides raise alarm

Despite Trump's efforts to override state laws, legislators press ahead

A young woman asks AI companion ChatGPT for help this month in New York City. States are pushing to prevent the use of artificially intelligent chatbots in mental health to try to protect vulnerable users. (Shalina Chatlani/Stateline)

By Shalina Chatlani

Stateline


States are passing laws to prevent artificially intelligent chatbots, such as ChatGPT, from offering mental health advice to young users, following a string of cases in which people harmed themselves after seeking therapy from the programs.


Chatbots might be able to offer resources, direct users to mental health practitioners or suggest coping strategies. But many mental health experts say that’s a fine line to walk, as vulnerable users in dire situations require care from a professional, someone who must adhere to laws and regulations around their practice.


“I have met some of the families who have really tragically lost their children following interactions that their kids had with chatbots that were designed, in some cases, to be extremely deceptive, if not manipulative, in encouraging kids to end their lives,” said Mitch Prinstein, senior science adviser at the American Psychological Association and an expert on technology and children’s mental health.


“So in such egregious situations, it’s clear that something’s not working right, and we need at least some guardrails to help in situations like that,” he said.


While chatbots have been around for decades, AI technology has become so sophisticated that users may feel like they’re talking to a human. The chatbots don’t have the capacity to offer true empathy or mental health advice like a licensed psychologist would, and they are by design agreeable — a potentially dangerous model for someone with suicidal ideations. Several young people have died by suicide following interactions with chatbots.


States have enacted a variety of laws to regulate the types of interactions chatbots can have with users. Illinois and Nevada have completely banned the use of AI for behavioral health. New York and Utah passed laws requiring chatbots to explicitly tell users that they are not human. New York’s law also directs chatbots to detect instances of potential self-harm and refer the user to crisis hotlines and other interventions.


More laws may be coming. California and Pennsylvania are among the states that might consider legislation to regulate AI therapy.


President Donald Trump has criticized state-by-state regulation of AI, saying it stymies innovation. In December, he signed an executive order that aims to support the United States’ “global AI dominance” by overriding state artificial intelligence laws and establishing a national framework.


Still, states are moving ahead. Before Trump’s executive order, Florida Republican Gov. Ron DeSantis last month proposed a “Citizen Bill of Rights For Artificial Intelligence” that, among many other things, would prohibit AI from being used for “licensed” therapy or mental health counseling and provide parental controls for minors who may be exposed to it.


“The rise of AI is the most significant economic and cultural shift occurring at the moment; denying the people the ability to channel these technologies in a productive way via self-government constitutes federal government overreach and lets technology companies run wild,” DeSantis wrote on social media platform X in November.


‘A false sense of intimacy’


At a U.S. Senate judiciary committee hearing last September, parents shared stories of their children’s deaths following prolonged interactions with artificially intelligent chatbots.


Sewell Setzer III was 14 years old when he died by suicide in 2024 after becoming obsessed with a chatbot.


“Instead of preparing for high school milestones, Sewell spent his last months being manipulated and sexually groomed by chatbots designed by an AI company to seem human, to gain trust, and to keep children like him endlessly engaged by supplanting the actual human relationships in his life,” his mother, Megan Garcia, said during the hearing.


Another parent, Matthew Raine, testified about his son Adam, who died by suicide at age 16 after talking for months with ChatGPT, a program made by OpenAI.


“We’re convinced that Adam’s death was avoidable, and we believe thousands of other teens who are using OpenAI could be in similar danger right now,” Raine said.


Prinstein, of the American Psychological Association, said that kids are especially vulnerable when it comes to AI chatbots.


“By agreeing with everything that kids say, it develops a false sense of intimacy and trust. That’s really concerning, because kids in particular are developing their brains. That approach is going to be unfairly attractive to kids in a way that may make them unable to use reason, judgment and restraints in the way that adults would likely use when interacting with a chatbot.”


The Federal Trade Commission in September launched an inquiry into seven companies making these AI-powered chatbots, questioning what efforts are in place to protect children.


“AI chatbots can effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots,” the FTC said in its order.


Companies such as OpenAI have responded by saying that they are working with mental health experts to make their products safer and to reduce the chances of self-harm among their users.


“Working with mental health experts who have real-world clinical experience, we’ve taught the model to better recognize distress, de-escalate conversations, and guide people toward professional care when appropriate,” the company wrote in a statement last October.


Legislative efforts


With action at the federal level in limbo, efforts to regulate AI chatbots at the state level have had limited success.


Dr. John “Nick” Shumate, a psychiatrist at Harvard University’s Beth Israel Deaconess Medical Center, and his colleagues reviewed legislation to regulate mental health-related artificial intelligence systems across all states between January 2022 and May 2025.


The review found 143 bills directly or indirectly related to AI and mental health regulation. As of May 2025, 11 states had enacted 20 laws that researchers found were meaningful, direct and explicit in the ways they attempted to regulate mental health interactions.


They concluded that legislative efforts tended to fall into four different buckets: professional oversight, harm prevention, patient autonomy and data governance.


“You saw safety laws for chatbots and companion AIs, especially around self-harm and suicide response,” Shumate said in an interview.


New York enacted one such law last year that requires AI chatbots to remind users every three hours that they are not human. The law also requires chatbots to detect potential self-harm.


“There’s no denying that in this country, we’re in a mental health crisis,” New York Democratic state Sen. Kristen Gonzalez, the law’s sponsor, said in an interview. “But the solution shouldn’t be to replace human support from licensed professionals with untrained AI chatbots that can leak sensitive information and can lead to broad outcomes.”


In Virginia, Democratic Del. Michelle Maldonado is preparing legislation for this year’s session that would put limits on what chatbots can communicate to users in a therapeutic setting.


“The federal level has been slow to pass things, slow to even create legislative language around things. So we have had no choice but to fill in that gap,” said Maldonado, a former technology lawyer.


She noted that states have already passed privacy laws, restrictions on nonconsensual intimate images, licensing requirements and disclosure rules.


New York Democratic state Sen. Andrew Gounardes, who sponsored a law regulating AI transparency, said he’s seen the growing influence of AI companies at the state level.


And that is concerning to him, he said, as states try to take on AI companies for issues ranging from mental health to misinformation and beyond.


“They are hiring former staffers to become public affairs officers. They are hiring lobbyists who know legislators to kind of get in with them. They’re hosting events, you know, by the Capitol, at political conferences, to try to build goodwill,” Gounardes said.


“These are the wealthiest, richest, biggest companies in the world,” he said. “And so we have to really not let up our guard for a moment against that type of concentrated power, money and influence.”


Stateline reporter Shalina Chatlani can be reached at schatlani@stateline.org. Stateline is part of States Newsroom, the nation’s largest state-focused nonprofit news organization.

