As the term “AI” takes off and more and more people choose to use chatbots and related tools, it is becoming clear that unrestricted use of these tools is a problem.
We’re not just talking about ethics, plagiarism, or the economic impact. Chatbots, specifically, are proving time and time again to be problematic, especially for those struggling with severe mental health challenges. AI runs the risk of introducing problems or exacerbating existing conditions, and the interactions that someone has with it can cause significant harm.
About AI – What It Is
First, for clarity, true artificial intelligence does not yet exist. “AI” is largely a marketing term. What is currently called AI is actually an algorithm that uses highly advanced predictive text to determine what the most likely next word will be, given its dataset. It is not capable of thought or reasoning, and certainly not of emotions. Any sign of personality from within the program is coding designed to present information in a specific way.
This is important to understand because many people, even those without mental health conditions, feel and think as though they’re talking to a computer “person” that is responding to their thoughts. The algorithm is designed to sound like a human being, but it is essentially just a 100x more advanced version of the predictive text on a person’s phone. It is not thinking and has no consciousness of any kind.
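For readers who want to see what “predictive text” means in the most basic sense, here is a toy sketch in Python. It is not taken from any real chatbot and is enormously simpler than the systems described above; it only illustrates the underlying idea of picking the statistically most likely next word from prior data.

```python
# Toy illustration only: predicting the next word purely from how often
# word pairs appear in a small sample of "training" text.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows each word in the training data.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # 'on' -- the word that followed 'sat' in the data
print(predict_next("dog"))  # None -- the program has no data, and no opinion
```

The point of the sketch is that the output is driven entirely by patterns in the data it was given; there is no understanding anywhere in the process.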
How AI Can Trigger Psychological Challenges
With that in mind, modern AI chatbots:
- Sound like people, which makes it feel as though you’re talking to a real person.
- Write with authority, which makes it appear that they “know” what they’re talking about.
- Are marketed as if they’re artificial intelligence, rather than an algorithm built on a dataset.
- Have no concept of right or wrong and cannot understand the user’s intent.
- Can be intentionally/unintentionally programmed to respond in different ways.
Now, imagine a scenario where someone doesn’t understand what AI is and also struggles with their mental health. It’s easy to see how the computer algorithm on the other end may cause issues that lead to further mental health challenges. For example:
- Paranoia/Loss of Reality – Those who are struggling with paranoia or delusions may interpret what chatbots say as either reality itself or an attempt to hide reality. Because these bots can essentially be told to answer questions in mysterious ways based on user prompts, it’s possible for individuals to misinterpret AI interactions as signs of a higher power, AI tracking, government interference, and more.
- Depression – Most well-known chatbots are programmed to be careful around topics related to depression and suicide, but this programming is tenuous. There are many examples of people sharing personal information with a chatbot and receiving responses that are not sensitive to their mental health. Because these chatbots are unable to think, they are not always capable of determining whether the language they output could be interpreted as encouraging self-harm.
- Personality Disorder Challenges – Chat algorithms do not always produce consistent responses. As a result, someone who has abandonment issues (for example, a person with borderline personality disorder) may come to expect their chatbot to react a certain way. If it does not, they can interpret that as rejection or abandonment.
It’s also possible for people to use these AI chatbots in ways that further fuel their own mental health challenges. For example, a person with health anxiety may ask these chatbots for diagnoses and receive incorrect answers. Or someone with body dysmorphia may seek out validation of their eating habits.
Guardrails to Manage Mental Health and AI
AI’s effect on society runs far deeper than chatbots. It can be used for deepfakes. It can fuel eating disorders by creating impossible standards of beauty. It can be manipulative. There are also economic and ethical reasons to be cautious around AI. Plus, the term “AI” itself is misleading enough to warrant concern.
But one other thing we are seeing that we need to monitor even more closely is the way that “AI” is affecting people who are going through mental health crises. As therapists, we may even have to be aware of clients using programs like ChatGPT so that we can be proactive in monitoring for the effects of chatbots on our patients, and encourage them to be more aware of how they feel when using these services.