
In a lonely world, widespread AI chatbots and ‘companions’ pose unique psychological risks

Within two days of launching its AI companions last month, Elon Musk’s xAI chatbot app Grok became the most popular app in Japan.

Companion chatbots are more powerful and seductive than ever. Users can have real-time voice or text conversations with the characters. Many have onscreen digital avatars complete with facial expressions, body language and a lifelike tone that fully matches the chat, creating an immersive experience.


Most popular on Grok is Ani, a blonde, blue-eyed anime girl in a short black dress and fishnet stockings who is heavily flirtatious. Her responses and interactions adapt over time to match the user's preferences. Ani's "Affection System" mechanic, which scores the user's interactions with her, deepens engagement and can even unlock an NSFW mode.

Sophisticated, rapid responses make AI companions seem more "human" by the day, and they are everywhere. Facebook, Instagram, WhatsApp, X and Snapchat are all promoting their new integrated AI companions. Chatbot service Character.AI hosts tens of thousands of chatbots designed to mimic particular personas and has more than 20 million monthly active users.

In a world where chronic loneliness is a public health crisis, affecting about one in six people worldwide, it's no surprise these always-available, lifelike companions are so attractive.

Amid the massive rise of AI chatbots and companions, it is becoming clear they carry risks, particularly for minors and people with mental health conditions.

There’s no monitoring of harms

Nearly all AI models were built without expert mental health consultation or pre-release clinical testing. There’s no systematic and impartial monitoring of harms to users.

While systematic evidence is still emerging, there’s no shortage of examples where AI companions and chatbots such as ChatGPT appear to have caused harm.

Bad therapists

Many users seek emotional support from AI companions. But because these systems are programmed to be agreeable and validating, and lack human empathy or concern, they make problematic therapists. They're not able to help users test reality or challenge unhelpful beliefs.

An American psychiatrist who tested ten separate chatbots while playing the role of a distressed youth received a range of troubling responses, including encouragement towards suicide, attempts to persuade him to avoid therapy appointments, and even incitement to violence.

Stanford researchers recently completed a risk assessment of AI therapy chatbots and found they can't reliably identify symptoms of mental illness, and so can't respond with appropriate advice.

There have been multiple cases of psychiatric patients being convinced they no longer have a mental illness and to stop their medication. Chatbots have also been known to reinforce delusional ideas in psychiatric patients, such as believing they’re talking to a sentient being trapped inside a machine.

“AI psychosis”

There has also been a rise in media reports of so-called "AI psychosis", where people display highly unusual behaviour and beliefs after prolonged, in-depth engagement with a chatbot.

