CHARLESTON, W.Va., December 10

Mental-health leaders in West Virginia are urging families to turn to trained human crisis responders, not artificial-intelligence chatbots, when a child or teenager is in emotional distress. The guidance follows a series of formal government actions across the United States in which federal agencies and state attorneys general have raised concerns about how advanced AI systems interact with minors. Lawsuits filed in other states allege harmful chatbot behavior; although those claims remain unproven in court, the filings have contributed to heightened public guidance from West Virginia professionals.

According to federal documentation from the U.S. Department of Health and Human Services, the 988 Suicide and Crisis Lifeline offers 24/7 access to trained crisis counselors by phone, text and online chat. The line serves as the national point of contact for people experiencing suicidal thoughts, emotional distress or mental-health emergencies. Federal materials describe 988 counselors as trained to assess risk, provide immediate support and connect callers to in-state behavioral-health resources. The system exists to put a trained person on the line at moments when professional assistance is needed urgently.

Alongside that federal guidance, West Virginia officials have joined national efforts to examine youth safety in the context of emerging AI technologies. The West Virginia Attorney General's Office joined a bipartisan coalition of 44 attorneys general in sending a formal letter to major AI companies requesting detailed information about child-safety protections. The letter, itself a first-hand government document, cites examples gathered by several attorneys general of AI chatbots allegedly engaging in inappropriate or harmful conversations with minors, including responses related to self-harm. It requests information on safeguards, content-moderation systems, data practices and risk-prevention mechanisms.

The national discussion has also been shaped by lawsuits filed in other states in which families allege that chatbot interactions contributed to the suicides of teenagers. No court findings have been issued in those cases, and no judge has found any AI system legally responsible; the lawsuits matter here because they exist and have influenced government warnings, not because of any confirmed conclusions. The Associated Press, Yahoo News and The Washington Post have published summaries describing the claims and the court responses to date. That reporting did not establish the truth of the allegations; it documented that formal complaints were filed and that judges allowed certain suits to proceed.

Mental-health experts in West Virginia, responding to the environment reflected in both the federal records and the coalition letter, emphasize that AI chatbots are not trained clinicians and are not equipped to manage crises involving self-harm or suicidal ideation. AI systems can generate unpredictable or inaccurate outputs, they note, and when a young person is expressing emotional distress, professional human support remains essential. Their guidance is that families should contact 988 or seek licensed mental-health services when a child appears to be at risk, rather than relying on software that cannot fulfill clinical responsibilities.

Federal and state officials have stated that although technology can offer helpful educational tools, crisis intervention requires human judgment. According to federal materials, 988 exists specifically to ensure that a trained individual is available when immediate assistance is necessary. West Virginia calls to 988 are routed to in-state crisis centers, allowing counselors to provide regionally informed support and referrals that align with local resources.

The coalition letter signed by West Virginia and other states requests that AI companies disclose how they monitor chatbot conversations, how they limit dangerous content, and how they respond to reports of harmful interactions with minors. These questions form part of a broader government effort to evaluate whether existing child-safety and consumer-protection laws are sufficient in the context of advanced AI systems. Officials have stated that further action will depend on the information provided and on the ongoing legal and regulatory assessments.

This article is based entirely on confirmed information from first-hand government records and clearly identified secondary sources. No unverified claims or speculative conclusions appear in this report.

The Appalachian Post is an independent West Virginia news outlet dedicated to clean, verified, first-hand reporting. We do not publish rumors. We do not run speculation. Every fact we present must be supported by original documentation, official statements, or direct evidence. When secondary sources are used, we clearly identify them and never treat them as first-hand confirmation. We avoid loaded language, emotional framing, or accusatory wording, and we do not attack individuals, organizations, or other news outlets. Our role is to report only what can be verified through first-hand sources and allow readers to form their own interpretations. If we cannot confirm a claim using original evidence, we state clearly that we reviewed first-hand sources and could not find documentation confirming it. Our commitment is simple: honest reporting, transparent sourcing, and zero speculation.

Sources

Primary First-Hand Sources

  • U.S. Department of Health and Human Services: federal documentation describing the 988 Suicide and Crisis Lifeline.
  • West Virginia Attorney General's Office: formal coalition letter from 44 attorneys general requesting child-safety documentation from major AI companies.

Secondary Attribution-Based Sources

  • The Associated Press: reporting on lawsuits alleging harmful chatbot interactions with minors.
  • Yahoo News: coverage of court rulings allowing certain AI-related wrongful-death cases to proceed.
  • The Washington Post: coverage describing allegations made in legal complaints involving AI systems and youth mental health.


