As Illinois becomes the third U.S. state to pass legislation limiting the scope of artificial intelligence for mental health services, two experts from Sam Houston State University have noted that regulation may be necessary to protect users from harm and misinformation.
The Illinois bill, titled the Wellness and Oversight for Psychological Resources Act (HB 1806), was enacted on Aug. 1 amid public concerns over AI platforms providing counseling to users without the proper credentials.
It prohibits companies from offering AI-powered therapy services without direction from a licensed professional recognized by the state, CNN Health reports. The bill additionally stipulates that therapists may use AI tools only for administrative duties such as scheduling patient appointments or billing.
Dixuan Cui is an assistant professor in SHSU’s mass communication department whose interest in AI began in late 2023, after he used ChatGPT for the first time.
With an extensive educational background from Purdue University, including a doctoral degree in virtual reality, Cui found himself using the platform for a variety of work-related tasks. The experience showed him both the good and the bad of its responses, which are based in part on results the algorithm has previously generated.
“For a large language model like ChatGPT… it has been processed through billions of times of training from the internet source,” Cui explained. “Everyone’s using it. And the AI can refer to previous results, and also the responses of the actual user, so they will be able to give more refined answers.”
He added that AI algorithms can also generate responses based on information pulled straight from the internet, and that their large memory databases should not be mistaken for credibility.
Cui considers generative AI to be a toss-up.
“Sometimes AI can give you really good feedback. Sometimes AI can just share feedback that’s literally just copy pasted from online sources. Places like Quora or Reddit, some of this can be their major source, which is not reliable at all,” he said. “It does require double-checking on final results.”
Cui went on to agree with the state of Illinois’ decision to regulate AI platforms delivering therapeutic services.
“It’s a very, very similar case to how we use medications. You should be following a doctor’s advice on using it. Same thing with AI,” Cui said, nodding. “To me, I think people are getting addictive in some parts of AI more than others, and really, there needs to be regulation and clear instructions just like any other technologies.”
Lawmakers’ concern over how limited AI systems should be, given how they interpret information for users, highlights issues even bigger than answer inaccuracy.
Research presented at a June 2025 conference sponsored by the Association for Computing Machinery indicates that AI chatbots fail to recognize implications of suicidal or harmful behavior in user prompts, fueling debate over what should be done to protect vulnerable people on the internet.
Leigh Holman, chair of the counselor education department at SHSU and a licensed professional counselor supervisor, said the government plays an important role in shielding consumers from harm.
With more than 30 years of experience in the mental health field, Holman compared state regulation of AI to the licensure process for counseling providers, seeing it as another way to ensure that clients receive care from well-trained professionals.
Even so, she is not surprised that people have turned to AI for mental health support, especially those who lack the funds or access to reach traditional services.
“We have a severe access problem. Not just in the state of Texas, but all across the nation. All but one county in the state of Texas is a mental health shortage provider area,” Holman explained. “We don’t have enough qualified mental health professionals to meet the needs of people.”
The unmet needs Holman describes, combined with the disturbing AI trends identified by researchers, lead her to believe that those most at risk are people with severe mental health issues, overlapping comorbid disorders or psychosis, as well as children, given their susceptibility.
“What’s coming forward seems to support the fact that it’s dangerous for these populations, and it’s because of the way you have to train these networks,” Holman said. “They can’t pick up on verbal cues. And when people are suicidal or want to hurt someone else, they often drop clues, but they’re not explicit. So it takes a specific kind of training and skill to be able to pick up on it and also intervene correctly.”
While Holman acknowledged that AI is growing and continues to advance, she did not express concern over the longevity of her profession. On the contrary, she remained steadfast in identifying the parts of therapy she thinks AI will not be able to replicate or reproduce.
“Therapy is two human beings in relationship with each other, and one of those is trained to put the other’s needs ahead of theirs. Also to use their knowledge base to analyze what’s happening at the same time that they’re interacting,” Holman said, adding, “there’s a lot going on that people don’t realize.”
At this time, Texas lawmakers have passed only one comprehensive piece of legislation on AI system development: the Texas Responsible Artificial Intelligence Governance Act (HB 149), which goes into effect on Jan. 1.
