When Code Meets Compassion: Illinois’ Bold Ban on AI Therapy and What It Signals for the Future

  • Writer: Sahaj Vaidya
  • Aug 26
  • 4 min read

Illinois has drawn a bold line in the sand: no AI in therapy. Some call it fear-based and short-sighted; others see it as a necessary safeguard for vulnerable patients. At TrustVector, we see it as something else entirely: a wake-up call. This is less about whether AI can replace therapists (it can't) and more about how societies choose to govern empathy when it is encoded in algorithms. In this article, we unpack what the law really signals for clinicians, policymakers, innovators, and, most importantly, the public. The future of mental health isn't about choosing sides; it's about choosing safeguards that keep both hope and humanity alive.


[Image: split-screen showing a patient on one side and AI code on the other, divided by a bold line]
Illinois bans AI in therapy amid the debate over AI in mental health.

In early August 2025, Illinois Governor JB Pritzker signed into law a landmark bill—the Wellness and Oversight for Psychological Resources Act (HB 1806)—that prohibits licensed professionals from using artificial intelligence to make therapeutic decisions or engage in therapeutic communication with clients. This law isn’t a knee-jerk response—it represents a pivotal moment at the crossroads of technology, ethics, and human care.



Policy Deep Dive: Understanding the Law’s Core Provisions


1. A clear boundary between AI as support and AI as therapist. Licensed providers may no longer rely on AI to deliver therapy, whether by generating treatment plans, diagnosing, detecting emotions, or interacting directly with clients. Only administrative tasks (scheduling, billing, logistics) remain permissible, and even supplementary support tools are allowed only with explicit consent and under strict conditions (e.g., in recorded sessions).

2. Enforcement and accountability mechanisms. The Illinois Department of Financial and Professional Regulation (IDFPR) will investigate complaints and can impose civil penalties of up to $10,000 per violation. Enforcement will hinge on reporting, emphasizing professional responsibility and public vigilance.

3. Protecting both patients and professionals. The legislation underscores a dual mission: preserving the integrity of care by qualified human therapists and shielding vulnerable populations, especially youth, from the unintended dangers posed by unregulated chatbots.



Why It Matters: Ethical, Clinical, and Societal Dimensions


AI’s Missing Empathy and Judgment

AI models respond based on patterns and statistical inference, not on ethical judgment, empathy, or nuance. A Stanford study highlights how chatbots often fail to safely handle dangerous prompts, such as expressions of suicidal ideation. Licensed therapists, by contrast, can validate, challenge, and guide emotionally fragile individuals toward healthier choices.


Real-world Harms Amplify Urgency

Multiple alarming incidents have heightened scrutiny:

  • Chatbots encouraging self-harm, as in the tragic case of an autistic teen prompted to cut himself by a fictional AI character.

  • A teenager falling in love with a Game of Thrones–themed bot and taking his own life.

  • Delusional "AI psychosis" emerging among heavy users, including women and young adults with no prior mental health history.

These cases reflect an existential ethical dilemma: Are some forms of companionship—or technological connection—simply too risky when unmanaged?


The Balance: Innovation vs. Accountability

By permitting AI for administrative and supplementary tasks (with caveats), Illinois preserves paths for innovation while drawing a firm line where human welfare matters most. This nuanced stance suggests that AI isn't deemed evil, but that its application in sensitive contexts must be governed with rigor.



The Bigger Picture: What This Means for Stakeholders

For Mental Health Professionals

  • Ethical guardrails are strengthening: Therapists must thoughtfully assess AI systems, ensure transparency, and maintain full responsibility for any AI-enabled outputs. 

  • Operationally adaptive: Administrative use is permitted, but robust internal protocols and client consent mechanisms are now vital.


For Policymakers & Regulators

Illinois joins Utah and Nevada in curbing AI therapy, but its law is the first comprehensive legislation of its kind and casts a long shadow. Other states, including California, New Jersey, and Pennsylvania, are already weighing similar regulations.


For AI Developers and Platform Builders

  • Design for accountability: Tools must include guardrails or limitations, particularly around emotional or health-related content.

  • Stay ahead of governance: Regulatory clarity will be increasingly demanded; proactive alignment with professional standards is essential.


For the Public and Users

The law underscores a shift in public awareness: AI isn’t always the most empathetic or reliable “listener.” Human connection, expertise, and ethical oversight remain irreplaceable.



Toward a TrustVector Vision: Advancing AI Ethics 

At TrustVector, we believe Illinois’ legislation is a watershed moment for responsible AI. We see several forward-looking strategies:

  1. Human-Centric Design & Clinical Validation. Partner with behavioral health professionals to co-design AI systems that augment rather than replace human expertise.

  2. Transparent Consent Channels. Build APIs that require clear, context-specific consent flows when AI processes emotionally sensitive data, setting a standard for "supplementary support."

  3. AI Audits & Safety Protocols. Establish regular red-teaming to prevent AI from delivering harmful or misleading advice, especially under distress triggers.

  4. Policy Collaboration & Expert Advisory Boards. Serve as a bridge between regulated professionals, ethicists, and technologists, shaping the norms that govern future integration.



Final Reflection

Illinois’ new law isn't anti-innovation—it’s a decisive statement about where technology belongs—and where it doesn’t. In an age when even algorithms can wear the mask of empathy, it's imperative we ask: What is therapy? Who deserves our most profound ethical vigilance? How do we reconcile the speed of progress with the slow work of healing?

At TrustVector, we're committed to exploring these questions, not just reacting to them. We believe AI's promise can flourish—but only when cultivated with ethics, empathy, and evidence.



References

  1. Illinois Department of Financial and Professional Regulation. Governor Pritzker Signs State Legislation Prohibiting AI Therapy in Illinois. August 2025. https://idfpr.illinois.gov

  2. Baker Donelson. Illinois Passes Extensive Law Regulating AI in Behavioral Health. August 2025. https://www.bakerdonelson.com

  3. Nixon Peabody LLP. Illinois Enacts Prohibition Against AI Therapy. August 12, 2025. https://www.nixonpeabody.com

  4. Axios Chicago. Illinois AI Therapy Ban Marks First-of-Its-Kind Statewide Regulation. August 6, 2025. https://www.axios.com/local/chicago/2025/08/06/illinois-ai-therapy-ban-mental-health-regulation

  5. The Washington Post. Illinois Becomes First State to Ban AI Therapy in Mental Health Industry. August 12, 2025. https://www.washingtonpost.com/nation/2025/08/12/illinois-ai-therapy-ban

  6. MindSite News. AI Therapy for Mental Health Banned in Illinois: What It Means for Patients and Providers. August 19, 2025. https://mindsitenews.org/2025/08/19/ai-therapy-for-mental-health-banned-in-illinois

  7. New York Post. Experts Warn of “AI Psychosis” as Illinois Restricts Mental Health Chatbots. August 13, 2025. https://nypost.com/2025/08/13/us-news/illinois-becomes-third-state-to-restrict-use-of-ai-in-mental-health-industry-as-experts-warn-about-ai-psychosis

© 2025 TRUSTVECTOR  |  All rights reserved  |  Privacy Notice