Google Play Hosts Deepfake Apps Despite Policy Ban


“Bas do photo upload karo” (“Just upload two photos”), reads a YouTube ad for AI Catch, a generative AI deepfake creation tool. The visuals that follow show a woman clinging to and kissing a man on a rainy road. The app’s core feature is generating deepfake videos, such as two people kissing, from photos of anyone a user uploads.

Beyond enabling non-consensual sexual imagery, the app highlights how AI tools have removed barriers and made the creation of deepfake content alarmingly easy. It also underscores the absence of specific guidelines from the Ministry of Electronics and Information Technology (MeitY) for app stores that host applications enabling the generation of deepfake content.

Why is this a violation of Google Play Store’s Policies?

Despite its own policies, the Google Play Store hosts several applications that enable the creation of deepfake content, including videos of two people kissing intimately. Screenshots are shown below.

Screenshots from the AI Catch App.
[Image Source: MediaNama]
Similar Applications on the Google Play Store that enable deepfake content creation.
[Image Source: MediaNama]

According to Google Play’s AI-Generated Content policy, the following are examples of prohibited AI-generated content:

  • Non-consensual deepfake sexual material
  • Generative AI meant for sexual gratification
  • Content used for bullying or harassment
  • Content promoting harmful or dangerous behaviour
  • Voice or video used to enable scams
  • Demonstrably false election-related material
  • AI-generated official documents enabling dishonesty
  • Malicious code or malware generation

As per Google Play’s Inappropriate Content policies, sexual content is described as follows:

Sexual Content and Profanity

According to Google Play’s policies, apps may not contain or promote sexual content or profanity, including pornography or material meant for sexual gratification. Apps also may not promote or solicit sexual acts for compensation. The policy prohibits content linked to sexually predatory behaviour and any non-consensual sexual material. Limited nudity is allowed only when it is primarily “educational, documentary, scientific, or artistic, and not gratuitous.”

For catalogue-based apps, Google Play allows books or videos with sexual content only if:

  • Such titles make up a minor part of the overall catalogue.
  • The app does not actively promote them, although they may appear in recommendations or broad promotions.
  • The app does not distribute content involving child endangerment, pornographic, or other illegal sexual material.
  • The app restricts minors from accessing sexual content.

If content violates Google Play’s policy but is considered appropriate in a particular region, the app may be offered there but will remain unavailable elsewhere.

Additionally, apps that relate to the following categories, including those that enable the generation of AI-created content, are restricted:

  • Hate speech
  • Violence
  • Violent extremism
  • Sensitive events
  • Bullying and harassment
  • Dangerous products
  • Marijuana
  • Tobacco and alcohol

Why are the MeitY guidelines insufficient to counter deepfake videos?

In November 2025, MeitY released a Standard Operating Procedure (SOP) to curb the dissemination of non-consensual intimate imagery (NCII) online. Under these guidelines, if an individual files a complaint, platforms are required to remove the reported content within 24 hours, in compliance with Rule 3(2)(b) of the IT Rules, which outlines the due-diligence and grievance-redressal requirements for intermediaries.

In addition to these guidelines for the removal of deepfake content, in October 2025, MeitY proposed draft amendments to regulate and include deepfake content within the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021). The draft proposed labelling requirements, including metadata, for “synthetically generated information,” to help viewers distinguish such content from original material.

This framework places technical compliance obligations on significant social media intermediaries (SSMIs) like YouTube and Instagram to ensure adequate technical measures are in place to flag deepfake content.

However, both mechanisms rely heavily on the due diligence of intermediaries and place the burden on victims to pursue remedies for deepfake-powered non-consensual sexual content. They also fail to address guidelines or directives for app stores, which serve as distribution marketplaces for applications that enable the generation of sexually explicit content—content that is already a violation of Google’s own policies.

MediaNama has sent the following questions to Google. We will update the copy once we receive a response:

  • Google Play currently hosts multiple apps that generate deepfake content, including non-consensual and sexually explicit videos. How does Google justify allowing these apps despite explicit violations of its AI-generated content and sexual content policies?
  • Why are apps that clearly violate Google’s policies on sexual content, harassment, and harmful AI outputs still available for download? Does Google plan to strengthen policy enforcement?
  • What technical or human-review processes does Google use to verify whether generative AI apps are producing illegal or non-consensual content?
  • Will Google introduce new, stricter guidelines specifically addressing deepfake-generation apps, beyond its current AI content policies? If yes, what changes are being considered?
