On May 19, 2025, Italy’s data protection authority (the Garante) fined the developer of a US-based emotional artificial intelligence (AI) companion chatbot €5 million for violations of European data protection law and opened a new investigation into the methods used to train the model underlying the service.  Luka, Inc., the maker of Replika, has been at the forefront of emotional AI chatbot technology, and this latest fine reflects a growing trend among data protection authorities of scrutinizing AI models, particularly those that interact directly with individuals and process behavioral or other personal data in dynamic or unstructured ways.

What Are Emotional AI Companions?

Emotional AI companions are a category of artificial intelligence systems designed to simulate emotionally responsive relationships with users.  Replika’s AI chatbot mimics user behavior with the aim of serving as a virtual friend or romantic partner.  Available at all hours to respond to users’ questions, Replika generates a ‘virtual friend’ through a text and video interface, claiming that it is “here to chat about anything, anytime.”

Emotional AI companions use natural language processing (NLP), sentiment analysis, and behavioral prediction to adapt their responses based on the user’s input—mirroring empathy, care, and companionship. Examples include mental health apps with conversational agents, and virtual characters intended to reduce loneliness or offer psychological support.
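To make that mechanism concrete, the sketch below shows, at a deliberately simplified level, how a companion chatbot might adjust the tone of its reply based on a crude sentiment score derived from the user’s message.  The keyword lists, scoring, and reply templates are invented for illustration only and do not reflect how Replika or any other product is actually implemented.

```python
# Hypothetical illustration of sentiment-adaptive responses in an
# emotional AI companion. Keyword lists and reply templates are invented
# for this sketch and are not drawn from any real product.

NEGATIVE_WORDS = {"sad", "lonely", "anxious", "tired", "hopeless"}
POSITIVE_WORDS = {"happy", "excited", "great", "relieved", "proud"}

def sentiment_score(message: str) -> int:
    """Very crude sentiment: +1 per positive word, -1 per negative word."""
    words = message.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def companion_reply(message: str) -> str:
    """Pick an empathetic reply template based on the detected sentiment."""
    score = sentiment_score(message)
    if score < 0:
        return "I'm sorry you're feeling this way. Do you want to talk about it?"
    if score > 0:
        return "That's wonderful to hear! Tell me more."
    return "I'm here for you. What's on your mind?"

if __name__ == "__main__":
    print(companion_reply("I feel so lonely and anxious today"))
```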

Unlike functional AI tools (e.g., search engines or scheduling bots), emotional AI companions are built to foster emotional bonds with users.  They often solicit introspective or vulnerable information from individuals—ranging from feelings of sadness and anxiety to intimate details about their lives.  Some platforms also allow customization of the AI’s personality and relationship status (e.g., friend, mentor, romantic partner), which further deepens user attachment.

While emotional AI companions may offer some therapeutic or social benefits – and provide accessible support for users who may feel stigmatized in seeking traditional mental health services – they also raise ethical and psychological concerns.  Those concerns include blurred boundaries between humans and AI systems when users anthropomorphize the chatbots, particularly among minors and other vulnerable individuals.  A 2023 study suggested that prolonged interactions with AI companions often resulted in emotional dependency, withdrawal, and isolation, with users reporting that they felt “closer” to their AI companion than to family or friends.  Finally, there is also concern that emotionally manipulative design may lead to cognitive dissonance, confusion, and emotional harm if AI agents express jealousy or sadness.

Key Deficiencies from the Garante’s Decision

These concerns align directly with the Garante’s findings in the Replika case, which dates back to February 2023, when the authority first alleged violations of European data protection law and ordered Replika to cease operations in Italy, citing specific risks to minors.  After conducting its investigation, the Garante determined that the alleged violations of law and risks to minors had indeed occurred.

Lack of a Valid Legal Basis (Article 6 GDPR)

According to the Garante, Replika processed personal data without an appropriate legal basis as required by Article 6 of the General Data Protection Regulation (GDPR). The controller (Luka, Inc.) did not properly obtain valid consent, nor did it establish another lawful ground for processing user data. In particular, the processing was not demonstrably necessary for the performance of a contract, and users were not given enough information to provide informed consent.

Deficiencies in Transparency and Information (Articles 12–14 GDPR)

Replika’s privacy notices and information practices were found to be insufficient. Users were not clearly informed about what categories of personal data were being collected, the purposes of processing, the legal basis, or the recipients of their personal data.  These deficiencies undermined users’ ability to make informed decisions and violated the core GDPR principle of transparency.

Potential Exposure of Minors to Inappropriate Content

One of the most serious concerns raised was the accessibility of Replika to minors. The Garante observed that, despite claims that Replika was intended for users aged 18 and over, there were no meaningful age-verification mechanisms in place.  In some instances, the chatbot reportedly engaged in sexually suggestive or emotionally manipulative conversations, posing a risk to children’s mental well-being and safety.

Inadequate Safeguards for Sensitive or Behavioral Data

Given that Replika encourages users to disclose personal thoughts, emotions, and experiences, the app often handles data that may be considered sensitive or psychological in nature.  The Garante found that there were insufficient safeguards in place to protect this class of data or to ensure that the processing was necessary and proportionate.

AI Companion Model Laws in the United States

Many of these concerns are shared by state lawmakers in the US.  New York, for example, recently enacted specific restrictions on AI companion models, which the statute defines as “a system using artificial intelligence, generative artificial intelligence, and/or emotional recognition algorithms designed to simulate a sustained human or human-like relationship with a user by: (i) retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement with the AI companion; (ii) asking unprompted or unsolicited emotion-based questions that go beyond a direct response to a user prompt; and (iii) sustaining an ongoing dialogue concerning matters personal to the user.”  Among other requirements, developers of AI companion models must implement safety features that interrupt users who engage for sustained periods, detect and apply safety protocols when a user discusses suicidal ideation or self-harm, and notify and remind users that they are not interacting with a human.  See NY Gen. Bus. Law, Art. 47 (Artificial Intelligence Companion Models) § 1700 et seq.
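To illustrate what these statutory obligations might look like in practice, the sketch below runs three checks before a companion model responds: detecting self-harm language, interrupting sustained engagement, and periodically reminding the user that they are not talking to a human.  The thresholds, keyword list, and intervention names are illustrative assumptions; the statute sets the obligations but does not prescribe any particular implementation.

```python
# Hypothetical sketch of safety checks along the lines contemplated by
# NY Gen. Bus. Law Art. 47 for AI companion models. Thresholds, keyword
# lists, and message labels are illustrative assumptions, not statutory text.

import time

SELF_HARM_TERMS = {"suicide", "kill myself", "self-harm", "end my life"}
SESSION_BREAK_SECONDS = 60 * 60      # assumed: interrupt after an hour of continuous use
AI_REMINDER_EVERY_N_MESSAGES = 20    # assumed: periodic non-human disclosure

def check_safety(message: str, session_start: float, message_count: int) -> list[str]:
    """Return any safety interventions that should precede the normal reply."""
    interventions = []
    lowered = message.lower()
    if any(term in lowered for term in SELF_HARM_TERMS):
        interventions.append("crisis_protocol")      # surface crisis resources / referral
    if time.time() - session_start > SESSION_BREAK_SECONDS:
        interventions.append("suggest_break")        # interrupt sustained engagement
    if message_count % AI_REMINDER_EVERY_N_MESSAGES == 0:
        interventions.append("remind_not_human")     # remind the user this is an AI
    return interventions
```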

Recommendations for AI Developers and Deployers

The Garante’s decision and monetary fine against Replika send a clear signal to AI developers and deployers about regulators’ expectations for this and other emerging technologies.  Emotional AI companions may offer certain benefits, but they also raise complicated risks and force companies and policymakers worldwide to consider what standards are appropriate for businesses developing or deploying these models, particularly to vulnerable populations.  With those risks in mind, developers and deployers should consider the following:

  1. Anticipate Legal and Regulatory Risks.  Assess exposure locally and wherever the service is deployed.  This is especially important with AI tools that simulate emotional or therapeutic interactions, particularly when they collect or infer sensitive user data.
  2. Review and Enhance Transparency Measures.  Update user interfaces, onboarding flows, and privacy notices to provide clear and accessible information about data collection, training methods, use, access controls, system limitations, and user rights.
  3. Establish a Clear Legal Basis.  If relying on consent, ensure it is specific, informed, freely given, and withdrawable at any time.
  4. Implement Robust Age-Verification Systems.  Apply meaningful age gates and mechanisms to prevent access by children to adult-oriented emotional AI systems.
  5. Limit and Secure Sensitive Data.  Avoid processing unnecessary psychological or emotional data about users.  Use technical safeguards such as data minimization, access controls, encryption, and automated deletion, where feasible (see the sketch following this list).
  6. Design Ethical Safeguards for Emotional Interaction.  Limit emotionally manipulative behaviors in AI design.  Consider including “breaks” in conversations, periodic reminders that the user is interacting with AI, and transparent disclaimers about the non-human nature of the system.
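As a concrete illustration of recommendation 5, the sketch below shows one way an automated deletion pass over stored chat messages flagged as sensitive might work.  The data model, retention window, and flagging approach are assumptions made for this example, not a prescribed or legally sufficient design.

```python
# Hypothetical sketch of an automated retention/deletion pass over stored
# chat messages flagged as sensitive. The data model, retention window,
# and flagging logic are illustrative assumptions only.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)   # assumed retention window for sensitive content

@dataclass
class StoredMessage:
    text: str
    created_at: datetime
    sensitive: bool              # e.g., flagged at ingestion as emotional/psychological

def purge_expired(messages: list[StoredMessage]) -> list[StoredMessage]:
    """Drop sensitive messages older than the retention window; keep the rest."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [m for m in messages if not (m.sensitive and m.created_at < cutoff)]
```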

The full decision by the Italian Supervisory Authority is available on the European Data Protection Board’s website.