TL;DR
- Whistleblower Allegations: More than a dozen former Meta and TikTok employees told the BBC that both companies deliberately weakened content moderation to chase engagement.
- Safety Trade-offs: Meta assigned 700 staff to grow Reels while refusing key child-protection and election integrity hires, according to the investigation.
- Child Safety Failures: TikTok’s internal dashboards prioritised cases involving political figures over reports of harm affecting teenagers, former staff revealed.
- Regulatory Response: The European Commission has opened proceedings against both companies for breaching Digital Services Act transparency rules.
More than a dozen former employees from Meta and TikTok told the BBC that both companies deliberately weakened content moderation to compete for users during the short-form video boom.
Internal documents show the platforms prioritised engagement metrics over protections against violence, sexual exploitation of minors, and extremist content. Together, the two platforms reach more than three billion users. Meta denied the allegations. “Any suggestion that we deliberately amplify harmful content for financial gain is wrong,” a spokesperson said.
The investigation, revealed in the BBC documentary “Inside the Rage Machine,” draws on testimony from current and former staff at both companies. Among its sharper revelations: TikTok’s internal moderation dashboards showed that a case involving a political figure mocked by comparison to a chicken was given higher priority than a report from a 16-year-old in Iraq about sexualised images of herself. Former employees say the disparity reflected deliberate policy choices, not oversight.
Meta’s Engagement Race
At Meta, competitive pressure to match TikTok triggered decisions that insiders say directly eroded safety. Meta launched Instagram Reels in 2020 as its direct competitor to TikTok. According to the BBC investigation, the company assigned 700 staff to grow Reels while refusing safety teams’ requests for just two specialist child-protection roles and ten additional election-integrity positions.
Matt Motyl, a former senior Meta researcher who ran experiments on hundreds of millions of users between 2019 and 2023, warned that when safety goes wrong at a platform serving billions, the consequences are severe.
Competitive pressure to catch TikTok drove a cultural shift inside Meta. “They sort of told us that it’s because the stock price is down,” Tim, a former Meta engineer, told the BBC, describing why engineers were instructed to allow more borderline content through. According to the BBC, a senior vice-president reporting directly to Mark Zuckerberg ordered a halt to limits on harmful-but-legal content that users were engaging with. Senior leaders instructed teams to relax restrictions on material linked to misogyny, conspiracy theories, and inflammatory rhetoric.
Tim told the BBC how competitive panic drove safety trade-offs:
“You’re losing to TikTok and your stock price must suffer. People started becoming paranoid and reactive and they were like, let’s just do whatever we can to catch up.”
Tim, former Meta engineer (via BBC)
Internal Research Contradicts Official Denials
What made these decisions harder to defend was that Meta’s own data documented the harm. According to internal Meta research shared with the BBC, Reels comments showed a 75% higher prevalence of bullying and harassment, 19% more hate speech, and 7% more violence and incitement than the main Instagram feed. A separate internal Facebook study acknowledged that its algorithms presumed users wanted more outrage-driven content, citing “disproportionate engagement” as the driver.
Meta’s own research concluded that its algorithmic systems offered content creators a path that maximised profits at the expense of audience wellbeing, and that financial incentives did not appear aligned with the company’s stated mission. This body of internal evidence positions Meta’s safety failures as a governance problem rather than an engineering one: the company had data identifying the harm and chose growth anyway.
Brandon Silverman, whose social media monitoring tool CrowdTangle was acquired by Facebook in 2016, argued that Meta contributes unnecessarily to polarisation and could reduce its impact with modest changes.
Prior legal action underscores how entrenched this pattern has become. In October 2025, New York City filed a youth mental health lawsuit accusing Meta, Google, and TikTok of fuelling a crisis among young users. Earlier that year, parents protested outside Meta’s Manhattan offices demanding Zuckerberg address online harms.
Multi-state lawsuits dating back to 2023 have similarly accused Meta of exploiting youth with addictive features on Facebook and Instagram.
TikTok’s Safety Failures
Internal systems at TikTok reveal a parallel pattern of deprioritised safety. Dashboards reviewed by the BBC showed that the political figure’s case received higher priority than reports from a 17-year-old cyberbullying victim in France and a 16-year-old in Iraq who flagged sexualised images of herself. Moderation teams were instructed to prioritise cases involving political figures over reports of harm affecting teenagers in order to maintain relationships with governments and avoid regulation, the BBC reported.
Nick, a former TikTok trust and safety team member, described how cases involving minors were treated in the system.
“If you look at the country where this report comes from, it’s very high risk because it’s a minor and it involves sexual blackmail and then you can see the priority here. The urgency is not high.”
Nick, former TikTok trust and safety team member (via BBC)
Nick said daily guilt over his instructions ultimately drove him to speak out. TikTok is also replacing human moderation roles with AI, raising further questions about oversight capacity as safety teams shrink. In 2024, the company laid off hundreds of content moderation staff globally as part of its broader shift toward automated systems.
Ruofan Ding, a former TikTok machine-learning engineer who built the platform’s recommendation engine from 2020 to 2024, illustrated the detachment between engineers and the content they amplified. “To us, all the content is just an ID, a different number,” Ding said. He compared TikTok’s structure to a car where the acceleration team assumed the braking team had safety covered, without anyone verifying that assumption.
According to Ding, rapid engagement-focused updates sometimes introduced more extreme content into user feeds as an unintended side effect. Engineers optimised for watch time and interaction without examining what those metrics meant in practice for vulnerable users. For a platform with over one billion monthly active users, safety was treated as a separate concern rather than an integrated constraint.
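Ding’s description of a ranker that sees only IDs and engagement signals can be made concrete with a short, purely illustrative Python sketch. Nothing below is TikTok code; the field names, weights, and harm threshold are invented for illustration. It contrasts ranking on predicted watch time and interaction alone with the same objective applied under an integrated safety constraint.

```python
# Purely illustrative sketch: not TikTok's actual system. Field names,
# weights, and the harm threshold are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Candidate:
    content_id: int               # to the ranker, content is "just an ID, a number"
    predicted_watch_time: float   # engagement signal the ranker optimises for
    predicted_interactions: float # likes, shares, comments, etc.
    harm_score: float             # hypothetical output of a separate safety model

def engagement_score(c: Candidate) -> float:
    # Optimises watch time and interaction only; harm_score is never consulted.
    return 0.7 * c.predicted_watch_time + 0.3 * c.predicted_interactions

def rank_engagement_only(candidates: list[Candidate]) -> list[Candidate]:
    # The "acceleration team" view: maximise engagement, assume safety is handled elsewhere.
    return sorted(candidates, key=engagement_score, reverse=True)

def rank_with_safety_constraint(candidates: list[Candidate], max_harm: float = 0.5) -> list[Candidate]:
    # Same objective, but safety is applied as an integrated constraint before scoring.
    eligible = [c for c in candidates if c.harm_score <= max_harm]
    return sorted(eligible, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Candidate(101, predicted_watch_time=42.0, predicted_interactions=5.0, harm_score=0.8),
        Candidate(102, predicted_watch_time=30.0, predicted_interactions=2.0, harm_score=0.1),
    ]
    print([c.content_id for c in rank_engagement_only(feed)])         # [101, 102]
    print([c.content_id for c in rank_with_safety_constraint(feed)])  # [102]
```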
Human Cost and Wider Fallout
Algorithmic amplification of harmful content carries real-world consequences that extend beyond platform metrics. Calum, now 19, described being drawn into increasingly angry content from age 14, with videos that energised him in destructive ways and reflected the anger he felt internally. His experience mirrors patterns documented in the US Surgeon General’s 2024 call for social media warning labels, which cited growing evidence of platform-driven harm to adolescent mental health.
Law enforcement has reached similar conclusions. Counter-terror police specialists in the UK have observed the consequences directly, with an anonymous officer telling the BBC that people have grown “more desensitised to real-world violence” and less inhibited about sharing extremist views.
Sarah Wynn-Williams, a former Meta executive, has separately detailed how the company’s internal culture prioritised growth targets over ethical concerns in her recent memoir, reinforcing the pattern described by whistleblowers in the BBC investigation.
Despite these accounts, both companies continue to dispute the severity of their failures. TikTok claims its teen accounts have more than 50 preset safety features automatically enabled, but whistleblower accounts suggest these measures have not prevented systematic deprioritisation of child safety cases. In January 2025, Meta ended third-party fact-checking in favour of community notes, further reducing external oversight of content moderation.
Meanwhile, Meta claimed in May 2025 to have halved takedown errors after its moderation policy shift, framing the change as a net positive for free expression.
Under the EU’s Digital Services Act and the UK’s Online Safety Act, regulators now have new enforcement powers over algorithmic recommendation systems. In October 2025, the European Commission opened proceedings against both companies for breaching DSA transparency rules, threatening major fines.
Whether those frameworks prove sufficient to curb the engagement-over-safety dynamics described by these whistleblowers remains the central question for platform accountability in 2026. With both companies facing simultaneous regulatory pressure in Europe and litigation in the United States, the cost of prioritising engagement over safety may soon be measured not just in user harm but in financial penalties and structural reform mandates.

