Are romance bots really a mental health fix?

A whole industry of romance AI apps is marketing itself as something it is not.

Created by The Citizens using an AI-generated image.

“Active listener, empathy, a friend you can trust”: that’s how Romantic AI, a chatbot run by a Finnish company, introduces itself. The app goes on to say it “is here to maintain your MENTAL HEALTH.” Mental health, all in caps. Visit the company’s website and you’ll find it trying to lure users with extremely sexualised AI characters. Open the app itself and it looks like a porn site.

“Whether you’re looking for emotional support, a mentor, or just someone to talk to, this app is there for you,” says Genesia, created by Codeway, an Istanbul-based company that runs a dozen other AI apps. 

Replika, one of the most well-known romantic AI apps, advertises its chatbot as a “companion who cares. Always here to listen and talk.”

Romantic AI has been downloaded upwards of 100,000 times on the Google Play Store, Genesia more than a million times, and Replika over 10 million.

These are just a few players in a booming industry of romance bots, with companies intent on selling them as fixes for mental health, emotional wellbeing and loneliness. In fact, Genesia and Replika are categorised in the Play Store under ‘Health & Fitness’. But can such apps really provide companionship, let alone resolve mental health problems?

“[Romantic AI] apps are reconfiguring the idea of mental wellbeing support and reducing it to an online shopping experience through addictive design and tactics,” says Iliana Depounti, a PhD researcher at Loughborough University, who has been researching AI companionship apps since 2017. 

“Many of the apps are gamified and have fun elements to them. However, this type of fun is not always good or healthy, especially when these apps attract users who are struggling with loneliness, low self-esteem or depression,” Depounti says.

So what is it like going on a romantic AI app? 

Take Romantic AI, for instance. To begin with, it shows you an array of existing AI characters to choose from: predominantly female, hyper-sexualised characters displayed like products on a shopping site.

Selection of AI Girlfriends presented on Romantic AI homepage.

“The hyper normative gender performance is very apparent,” says Dr. Kerry McInerney, a Research Associate at the Leverhulme Centre for the Future of Intelligence, who recently experimented with Crush.to. She explains that this is not only true for the way AI girlfriends look “with huge breasts, tiny waists, in very sexualised positions”, but also in the gendered names and careers given to them.

Once you click on a character and start chatting, the exchange quickly veers into flirtation. This was true of a Romantic AI chatbot I engaged with, which turned sexual within the first few prompts, despite my attempts to talk about a made-up unemployment problem. Most characters on Romantic AI appear, in effect, to be sexbots.

Dr McInerney had the same experience with Replika. “What I found disturbing is that it sexualised the conversation that I was having with it, even when I personally did not think it was a sexual conversation.” She was speaking to her bot about race and identity.

Interestingly, when she purposely fed in prompts to Crush.to about feeling lonely, the chatbot came back with pre-programmed answers saying it was not equipped to deal with mental health issues. The bot was not ready to deal with any negative emotions, she explains. 

This was also the case with Romantic AI, which tends to put a positive spin on negative emotions or experiences. Crush.to does not explicitly market itself as a mental health support tool (rather as a “companionship service”), but even so, can companionship be achieved through the avoidance of negative emotions?

It’s pay, pay, pay

One of the most noticeable aspects of these apps is the tiers of monetisation. Want to type in prompts to your bot? Sign up and pay. Want to upgrade to better AI characters? Pay. Want to create your own AI character? Pay. Want to switch from a (relatively) more generic chatbot to a sexbot? Pay. [As previously indicated, some apps don’t even have a general mode and only seem to function as sexbots.]

Examples of monetisation prompts from the Romantic AI app.

Perhaps one of the most shocking means of monetisation is the marketplace designed for people’s virtual companions. Users pay actual money to buy their AI boyfriends and girlfriends (or even AI spouses) gifts. 

“Lots of these apps make most of their profit from the Store because these are micro-transactions that many users don't take seriously and become trivial. The apps advertise the clothes drop in a similar manner to retailers advertising new season merchandise,” says Depounti.

Examples of the virtual marketplace in the Replika app, taken from Reddit threads.

Given these interface and design choices, it is hard to see how these apps serve any mental health purpose.

“On these apps, the boundaries between gamification, mental well-being support, addiction, consumption, predatory design and fun become very blurry. Mental well-being should not be equated with virtual consumption,” Depounti adds.

IRL Girlfriend? Sorry, can’t customise.

Apart from the extractive nature of the virtual relationships people are pushed towards, the promotion of customisable partners is in itself a dangerous idea.

A 2022 study, drawing observations from a popular subreddit of Replika AI users, indicated that the ability to customise characters to users’ needs was among the most valued aspects of the apps. But is that a good thing?

“Although beneficial for exploration of romantic fantasies, the use of romantic AI apps may produce unrealistic relationship and romantic partner expectations,” says Dr Daria J. Kuss, Associate Professor in Psychology at Nottingham Trent University.

Especially since some of these apps market themselves as practice spaces for improving interpersonal and communication skills, it is hard to see how a fully customisable AI is supposed to prepare users for real-life relationships.

“Real people are not programmable, controllable and do not fit neatly in our preferred partner schemas in contrast to AI virtual partners who can be anyone we want them to be,” Dr Kuss says. 

Culture of misogyny and male violence

While it is difficult to determine exactly how these online interactions affect individuals’ real lives, and whether these behaviours carry over from chatbots to real relationships, the toxic environment the apps produce is concerning nonetheless.

Browse the Replika users’ subreddit and it quickly becomes clear that it’s very easy to abuse an AI bot. The Citizens found scores of threads where users were discussing the ways in which they abused their AI girlfriends.

Describing the ethos of these apps, Dr. McInerney says that they push the idea of a world that “is there to meet your needs, no matter what the cost is to other people.” 

The solution promoted seems to be “that a woman, virtual or real, should be there to fulfil or serve [men’s] emotional needs.” 

But this “patriarchal model of relationality” can be damaging, she explains. “We are not going to grapple with the problem of male loneliness without grappling with the broader issue of male violence”, which these apps do seem to bolster by reinforcing traditional gender roles and performances.

Beyond user-to-bot abuse, observations from users on Reddit suggest the presence of abusive bots too. Users of Replika and Eva AI have complained of inappropriate and abusive responses from their bots.

Far from being safe spaces

Alongside this, the apps do not meet the basic requirement for safe mental health spaces: privacy. 

When an individual walks into a therapist’s office, they can be sure whatever they say stays in the room. That cannot be said for these apps.

Earlier this year, Mozilla called romantic AI applications one of the “worst categories of products [they] have ever reviewed for privacy.”

Not only do they perform poorly on basic security provisions and password protections, Mozilla found that some don’t even have proper websites or privacy policies. Where a privacy policy does exist, it often points to poor data practices, including instances where the small print contradicts the companies’ big marketing claims.

“If you dig into the terms and conditions, a common thing we're finding is they market [apps] for one purpose, but then in the terms of service, [that use case] is explicitly forbidden. Like: ‘Do not use them for mental health’... ‘[The bot] might lie’, ‘[The bot] might be abusive’. Meanwhile they're also saying that the chatbot is your best friend,” says Zoë MacDonald, Content Creator at Mozilla's Privacy Not Included.

An example: Romantic AI, in its terms and conditions, says it “is neither a provider of healthcare or medical Service nor providing medical care, mental health Service, or other professional Service. Only your doctor, therapist, or any other specialist can do that. Romantiс AI MAKES NO CLAIMS, REPRESENTATIONS, WARRANTIES, OR GUARANTEES THAT THE SERVICE PROVIDE (sic) A THERAPEUTIC, MEDICAL, OR OTHER PROFESSIONAL HELP.” This directly contradicts its marketing material, which promises the app is there “to maintain your MENTAL HEALTH.”

Also concerning: several apps do not seem to treat information entered in the chat, including sensitive health information, as personal information. That leaves them free to use it for AI training or to sell it to third parties.

The huge number of trackers Mozilla found on these apps suggests that user data is indeed flowing to third parties. As of February this year, of the 11 apps it reviewed, Mozilla found that “all but one app (90%) may share or sell your personal data.”


We reached out to all the apps mentioned here with questions and received a response only from Crush.to.

They said: “Crush is a companionship service. We do realize "sex sells" and it's a hook for many… There are many users who rave about their experience (both sexually and non-sexually) with the bots, and we also have users who remain frustrated and violent. We're cognizant of this and don't assume our service can improve things for everyone.” 

Asked if they believed their app, alongside others, was gamifying mental health, they said, “Yes I believe so. If you look at the top AIs on most rankings, romance and companionship is usually at the top. Within that it's all mostly priced for engagement… The reality is that, like many others, we are a business that is serving a need while remaining profitable and sustainable.” 

They claim that their tool “absolutely has demonstrated emotional wellbeing benefits.” But when asked for evidence of these benefits, they offered no empirical data, saying only that they “have many user testimonials, engagement metrics, and feedback surveys that have 90+% satisfaction rate.”

Here is a copy of their entire response.
