New X Feature Accidentally Exposes Foreign Influence MAGA Operations


A new X feature intended to showcase transparency has instead exposed an extensive network of foreign-operated accounts masquerading as American political voices. The “About This Account” feature, rolled out over the weekend, flagged prominent “patriotic” users as operating from Nigeria, Vietnam, and Eastern Europe.

The data dump validates the work of the Trust and Safety teams that owner Elon Musk dismantled, confirming that engagement-based payouts are funding foreign click farms. Faced with the chaotic revelation, X’s Head of Product Nikita Bier admitted the rollout had “rough edges” and promised a fix by Tuesday.

Unmasking the ‘Patriots’

Officially unveiled on November 22, the “About This Account” feature is designed to function as a transparency mechanism, allowing users to verify the authenticity of the profiles they interact with.

Spearheaded by X’s Head of Product Nikita Bier, the tool provided granular data points previously hidden from public view, including the specific country where an account is based and the method used to connect to the platform, whether via the web or a mobile app downloaded from a specific regional store.


Upon its release, Bier championed the update as a key moment for the platform’s trustworthiness, describing it as “an important first step to securing the integrity of the global town square.”


The theoretical utility is clear: users can click on a profile’s signup date and immediately discern if a “concerned American voter” was actually posting from a foreign jurisdiction.

However, the rollout has immediately triggered a cascade of unintended exposures that humiliated some of the platform’s most vocal political communities. Far from validating the authenticity of the user base, the tool peels back the curtain on a vast ecosystem of roleplaying accounts.

High-profile “MAGA” and “patriotic” influencers, who have amassed hundreds of thousands of followers by posting hyper-nationalist US content, are revealed as operating from nations including Nigeria, Bangladesh, Russia, and Vietnam.

The scale of the deception is significant. X’s foreign influence problem is now suddenly quantifiable, with specific heavy hitters in the right-wing ecosystem exposed.

For instance, “MAGA Nation X,” an account with nearly 400,000 followers, is operating from a non-EU Eastern European country.

Other accounts trading on American iconography fared no better: “Native American Soul” is operated out of Kosovo, and “America_First0” out of Bangladesh. Other influential MAGA-related accounts are based in Nigeria.

The chaos is not limited to the exposure of bad actors; the tool’s “rough edges” also generated false positives that fueled further confusion.

Prominent US creators like Hank Green are inexplicably labeled as being based in Japan. This technical instability allows actual bad actors to claim plausible deniability, asserting that their foreign location tags were merely glitches.

The backlash has been swift and bipartisan. While liberal users mocked the exposure of “Russian bots,” prominent right-wing figures who had previously championed Musk’s takeover also expressed outrage.

Ian Miles Cheong, a Malaysian commentator frequently amplified by Musk, complained that the feature amounted to “doxxing” users. Facing a flood of complaints regarding privacy violations and technical inaccuracies, Bier attempted to stem the bleeding.

He attributed many of the location discrepancies to users connecting via VPNs or Starlink terminals, which can route traffic through different countries, and promised a software update to bring accuracy to “nearly 99.99%.”

Overwhelmed by the immediate collapse of the feature’s intended purpose and the resulting user revolt, Bier offered a candid assessment of the weekend’s events on his own profile, simply posting: “I need a drink.” Incidentally, “Bier” is German for beer; no pun intended.


Monetizing the Mob

Driving this surge in inauthentic behavior is X’s “Creator Program,” which pays users based on engagement metrics like views and replies. Security researchers argue that this financial incentive has turned disinformation into a viable business model for operators in low-income regions.

By posting inflammatory US political content, these accounts can generate significant revenue. Joan Donovan, a disinformation researcher at the Critical Internet Studies Institute, notes that “engagement hacking has long been a strategy of media manipulators, who make money off of operating a combination of tactics that leverage platform vulnerabilities.”

The dynamic creates a perverse incentive structure where authenticity is irrelevant, and outrage is the only currency that matters.

Eliot Higgins, founder of Bellingcat, puts it this way: “actors aren’t communicating; they’re staging provocations for yield. The result is disordered discourse: signals detached from truth, identity shaped by escalation, and a feedback loop where the performance eclipses reality itself.”

Although the scale of this is still to be assessed, this is the inevitable outcome of the monetization of the algorithm. Rage-bait becomes a personal revenue model. Nothing about this should be a surprise.


— Eliot Higgins (@eliothiggins.bsky.social) November 23, 2025 at 9:02 PM


Vindication of the ‘Censorship Industrial Complex’

For years, Musk and his allies have argued that previous moderation efforts were part of a “Censorship Industrial Complex,” a narrative pushed heavily in the “Twitter Files.”

Data revealed by the tool confirms that the “foreign influence operations” cited by former Trust and Safety teams were legitimate threats, not ideological fabrications.

As Mike Masnick, editor of Techdirt, points out, “this isn’t a bug in the system. It’s a feature Musk designed, then acted surprised when it attracted exactly the kind of behavior that trust & safety teams used to work to identify and limit.”

With those teams now dismantled, the platform lacks the internal guardrails to manage the very problem its own engineering just made public.

A Pattern of Platform Instability

The X location rollout failure fits a broader trend of controversial product launches and quality control issues at X under Musk’s leadership.

Recent analysis revealed a systematic algorithmic bias boosting right-wing content in the UK, mirroring the foreign interference patterns seen here.

Similarly, the launch of Grokipedia was marred by reliance on blacklisted sources, further degrading the platform’s information quality.

Even core technical features are affected: the recent encrypted chat rollout shipped with critical security flaws, with X admitting to vulnerabilities that leave user metadata exposed.
