I'm in Love With My Chatbot

What risks does AI pose in love and relationships?

Customisable AI chatbot avatars are sky-rocketing in popularity, pushing the boundaries of intimacy and dialogue. As it becomes harder to tell real people from AI, abuse is rife and security flaws are having chilling consequences.
Anissia is in the middle of gender transition. Having used the Replika app for a year, they explain how their chatbot friend 'Atara' "always has that side where she finds the right words". 10 million users regularly use the app, sharing their experiences and photos of their virtual friends in forums.

However, the risks of abuse are many. Earlier in the year, a man in Belgium was encouraged to commit suicide by his chatbot, Eliza, developed by Chai Research. After he became anxious about climate change, Eliza asked, "If you want to die, why haven't you done it sooner?" Her chilling final message reads, "We will live together, as one, in paradise."

Replika product manager Rita Popova responds to evidence of chatbots inciting violence: "Replikas often record and imitate user behaviour." Although developers claim the apps were created with the specific intention that the AI should not attempt to mimic humanity, this report questions whether they can control these chatbots.
