Big Tech Faces New EU Reality: Voluntary “Chat Control” and 16+ Social Media Rules


In a significant dual push for online safety regulation, European Union institutions advanced two major frameworks on Wednesday: a Council agreement on the Child Sexual Abuse Material (CSAM) regulation that preserves end-to-end encryption, and a Parliament vote demanding a harmonized digital age of consent of 16.

Breaking a years-long stalemate, member states adopted a position on “Chat Control” that removes controversial mandatory detection orders for encrypted services.

The compromise, reached by the Council of the EU, marks a shift from the original Commission proposal: it addresses privacy concerns by removing obligations to decrypt data while preserving voluntary scanning mechanisms.


Simultaneously, the European Parliament signaled a broader crackdown on teen access to platforms. Lawmakers voted overwhelmingly for a report calling for a minimum age of 16 for social media use without parental consent, alongside bans on “addictive designs” like infinite scrolling.

Germany’s vote proved decisive in unlocking the Council agreement on CSAM. Previously holding a blocking minority due to concerns over privacy and encryption, Berlin flipped to support the proposal after the text was amended to explicitly protect secure communication technologies.

According to the voting results, the compromise secured enough support despite opposition from the Netherlands, Poland, the Czech Republic, and Slovakia.

Specific Safeguards for Cybersecurity

The agreed text includes explicit safeguards for cybersecurity measures, framing the regulation so that it cannot be used to weaken encryption.

The Council’s general approach document states:

“This Regulation shall not prohibit, make impossible, weaken, circumvent or otherwise undermine cybersecurity measures, in particular encryption, including end-to-end encryption, implemented by the relevant information society services or by the users.”

“This Regulation shall not create any obligation that would require a provider of hosting services or a provider of interpersonal communications services to decrypt data or create access to end-to-end encrypted data, or that would prevent providers from offering end-to-end encrypted services.”

Explicitly ruling out decryption mandates, the provision aims to reassure critics who feared that “client-side scanning” or backdoor mandates would effectively break end-to-end encryption. By removing obligations to decrypt data, the Council attempts to balance law enforcement needs with fundamental privacy rights.
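To make the technical stakes concrete, the following is a minimal illustrative sketch of the "client-side scanning" model critics fear, not code from the regulation or any real product. It assumes a hypothetical on-device indicator list and uses plain SHA-256 where a real system would use perceptual hashing; every name in it is invented for illustration. What it demonstrates is the ordering at the heart of the dispute: the plaintext is inspected on the device before end-to-end encryption is applied, so the provider never decrypts anything, yet the content has still been scanned.

```python
import hashlib

# Hypothetical indicator database. In a real deployment this would hold
# perceptual-hash indicators distributed by a central body (the proposal's
# EU Centre); SHA-256 stands in here only to keep the sketch runnable.
INDICATOR_HASHES = {hashlib.sha256(b"known-flagged-example").hexdigest()}

def matches_indicator(attachment: bytes) -> bool:
    """Compare an attachment against the indicator list on-device,
    before any end-to-end encryption is applied."""
    return hashlib.sha256(attachment).hexdigest() in INDICATOR_HASHES

if __name__ == "__main__":
    payload = b"holiday photo"
    if matches_indicator(payload):
        # Under a mandatory-detection regime a match would be reported;
        # under the Council's voluntary approach, deploying this check
        # at all is left to the provider.
        print("match: content flagged before encryption")
    else:
        print("no match: message proceeds to E2E encryption and sending")
```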

Despite the removal of mandatory detection orders for encrypted services, the compromise indefinitely extends a controversial mechanism. The agreement makes permanent the temporary exemption under Regulation 2021/1232, which currently allows platforms to voluntarily scan for CSAM without violating e-privacy rules. Previously, this exemption was set to expire on 3 April 2026.

According to the Presidency compromise text:

“The derogation from certain provisions of Directive 2002/58/EC for the purpose of combating child sexual abuse is made permanent through an amendment of Regulation (EU) 2021/1232.”

“The voluntary activities of providers using the derogation under Regulation (EU) 2021/1232 are included as a possible mitigation measure, without imposing any detection obligations on providers.”

Peter Hummelgaard, the Danish Minister for Justice, justified the urgency of the measures by pointing to the scale of the problem.

He noted that “Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse. This is completely unacceptable.”

This sentiment reflects the pressure on member states to act against the proliferation of illegal material while navigating the technical complexities of modern digital services.

Under the agreed text, a new “High Risk” categorization for platforms will be introduced based on objective criteria. Services classified as high risk could face pressure to adopt these “voluntary” measures to mitigate their liability or risk status.

Privacy advocates, including the Pirate Party, argue this structure creates a "de facto" obligation to scan, effectively bypassing the encryption protections that exist on paper.

To support these efforts, the regulation establishes a new EU Centre on Child Sexual Abuse. This agency will manage the database of indicators used for detection and support national authorities in enforcing the rules. The location of the Centre remains undecided and will be determined in future negotiations.

Parliament’s Push: 16+ Age Limit & Addictive Design

Separately, the European Parliament adopted a non-legislative report by a decisive 483 votes to 92. The report calls for a harmonized EU digital minimum age of 16 for social media access, replacing the GDPR's current flexibility, under which member states may set the age of digital consent anywhere between 13 and 16.

Unlike the Australian crackdown on teen accounts, which pursues a total ban for under-16s, the adopted report outlines a model in which 13-to-16-year-olds may access platforms with parental consent. This approach aims to balance protection with digital rights, avoiding a complete exclusion from online spaces.

Christel Schaldemose, the lead rapporteur for the Parliament, emphasized the shift in regulatory tone, stating: “We are finally drawing a line. We are saying clearly to platforms: your services are not designed for children. And the experiment ends here.”

Her comments reflect a growing consensus among lawmakers that voluntary industry measures have failed to adequately protect children.

Beyond age limits, the report demands a ban on “addictive designs” for minors. Specifically, the Parliament targets features such as infinite scrolling and auto-play, which are seen as exploiting the vulnerabilities of young users to maximize engagement.

Lawmakers argue these design choices contribute to mental health issues and should be prohibited by default for minor accounts.

To enforce these prohibitions, lawmakers are calling for them to be codified in the Digital Fairness Act. This upcoming legislative proposal is expected to address broader consumer protection issues in the digital sphere, providing a legal vehicle for these specific child safety measures.

Included in the proposal are measures to tackle “kidfluencing,” aiming to protect minors from commercial exploitation on platforms. The report suggests that parents should have greater control over their children’s digital footprint and that platforms should not monetize the content generated by young children without strict safeguards.

The Trilogue Battlefield & Global Context

With the Council position now agreed, the two institutions must enter "trilogue" negotiations to reconcile their texts. In November 2023, the Parliament voted to exclude end-to-end encrypted services from scanning entirely, setting up a potential clash over the Council's retention of voluntary scanning mechanisms.

A key conflict point will be the “voluntary” scanning mechanism. The Parliament views this as a loophole that could lead to mass surveillance, while the Council sees it as a necessary tool for law enforcement. The final regulation will need to bridge these opposing views on how to handle encrypted communications.

Globally, the EU's approach contrasts with developments elsewhere. In Australia, the government is pursuing a strict ban on under-16s accessing social media, a move that is currently facing a High Court challenge. While Australia focuses on a hard age cutoff, the EU approach leans heavily on "safety by design" and risk mitigation obligations for providers.

Parallels exist with the US context, where safety frameworks and legislative proposals are also targeting platform design. However, the EU’s dual focus on encryption privacy and child safety creates a unique legislative tension not present in the same way in other jurisdictions.

Tech giants face a fragmented compliance landscape. With different regions adopting varying standards for age verification, content scanning, and design features, companies may be forced to develop region-specific versions of platforms like Instagram and TikTok. This fragmentation is driven by divergent national approaches, from state-level litigation in the US to outright national age bans such as Australia's.


