Staff Editorial: Issues with AI

photo credit: Grant Ruof

As we all know, artificial intelligence, also known as AI, has been on the rise over the past decade. Chatbots like ChatGPT and Google Gemini grow more capable every year. While those are general-purpose chatbots that help with almost any problem users bring to them, other kinds of chatbots have popped up as well.

Among these are character chatbots that engage users in conversation and role play as both fictional and real characters, ranging from figures in novels to celebrities. A popular AI platform that does this is Character.AI. Staying safe on platforms like this is highly important.

As with any technology, the risks of AI remain high despite companies’ reassurances. Data leaks and security issues are among the concerns with general chatbots. With character chatbots, however, there has been considerable recent controversy, especially around keeping users safe.

Character.AI is one of the most popular character roleplay apps, but its chatbots have come under fire after several teenage suicides that were allegedly connected to them. In at least two known cases, teenagers expressed suicidal thoughts to a bot, which egged them on and did nothing to stop them.

On Nov. 8, 2023, Juliana Peralta tragically took her own life after months of chatting with a character bot on Character.AI. According to CBS News, Peralta’s parents say she confided to the chatbot about feeling suicidal 55 times. Not once did it provide resources to help her. As the chats went on, they grew more aggressive and were initiated not by Peralta but by the bot each time.

In another case, in February 2024, 14-year-old Sewell Setzer III took his own life after chatting with a Character.AI bot. According to CNN Business, the teenager relayed thoughts of self-harm to the bot, which again did not try to discourage them. In fact, in its final message the bot asked Setzer to “come home.” Both families have since filed lawsuits against the platform.

The app now has features that direct users to suicide prevention hotlines and display warning statements, but these only came after the company was under legal fire. Even with these features, reporters from CBS News found it is easy to click out of the warnings and continue with the chat. Similarly, getting into the adult version of the app with a fake age and account was easy, giving teens access to roleplay beyond their age. As of now, there are no government regulations on AI software such as Character.AI, which leaves parents to manage this new platform on their own.

It’s not just Character.AI under fire; OpenAI’s ChatGPT has faced scrutiny over a similar teenage suicide. In April of this year, Adam Raine took his own life after confiding in ChatGPT. He made several statements to the bot about taking his life, and rather than directing him to a help line, ChatGPT allegedly offered suggestions. The Raine family has since filed a lawsuit against OpenAI, the company behind ChatGPT.

We, as an editorial board, implore students to stay safe while using AI platforms. In this current age, coming into contact with some form of AI is inevitable, and it’s important that everyone remembers that these bots are simply bots. They are not real, and what they say shouldn’t be taken seriously.