Europe’s Orwellian war on ‘disinformation’


Digital censorship has been underway in the E.U. for nearly a decade, but last week Brussels took it to a new level.  The European Commission unveiled the latest weapon in its arsenal — the European Democracy Shield and the accompanying E.U. Strategy for Civil Society — billed as a plan to “protect democracy,” a descriptor that usually means anything but that.  The Shield is justified largely as a response to Russia’s information operations and, increasingly, to China’s gray-zone warfare.  One of its premises is that member-states cannot handle these threats on their own.

E.U. leaders say the initiative is designed to “enhance democratic resilience across the Union,” safeguarding elections, media, and “civil society” from hybrid threats, deepfakes, and foreign interference. Yet in practice, the program’s stated goal to “ensure that the right information reaches citizens” sounds more like a system of control than one of protection. Under initiatives like these, those who challenge approved narratives on climate, health, food security, or immigration may be branded as “risk sources.”

The centerpiece is a new European Center for Democratic Resilience in Brussels, branded by some as a kind of “European Ministry of Truth.”  The Center will act as a hub linking E.U. institutions, member-states, and candidate countries, pooling expertise and building early-warning systems to detect “destabilization operations” and coordinated disinformation campaigns that are almost undetectable, at least for the average consumer.  The framework stitches together existing and forthcoming laws, notably the Digital Services Act (DSA), the A.I. Act, and the European Media Freedom Act, all spotlighted in President von der Leyen’s political guidelines and this year’s State of the Union address.

It is worth recalling that the DSA, adopted in 2022, gave Brussels sweeping powers over online activity.  That same year, the Commission updated the Code of Practice on Disinformation (originally adopted in 2016).  The updated version comprises the Code of Conduct on Disinformation and the Code of Conduct on Countering Illegal Hate Speech Online+, and it went into effect on July 1, 2025.

As of January 2025, Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), including Facebook, Google Search, TikTok, Bing, Instagram, LinkedIn, Microsoft, and several others, were signatories (17 in total).  The Commission stated in February 2025, “The Code will become a relevant benchmark for determining DSA compliance regarding disinformation risks for the providers of VLOPs and VLOSEs that adhere to and comply with its commitments.”  The Code of Conduct was developed to counter illegal hate speech and disinformation and to facilitate “voluntary codes of conduct to contribute to further transparency for actors in the online advertising value chain.”

Under the Shield, VLOPs can be pulled into a DSA “crisis and incidents protocol” whenever the E.U. identifies a large-scale hybrid threat or disinformation operation.  The E.U. defines hybrid threats as a “mixture of coercive and subversive activities, conventional and unconventional methods (i.e. diplomatic, military, economic, technological), which can be used in a coordinated manner by state or non-state actors to achieve specific objectives while remaining below the threshold of formally declared warfare.”  The bloc adopted a Joint Communication on Countering Hybrid Threats in 2016.

The Shield is not limited to E.U. member-states.  Candidate countries, including Ukraine, are explicitly invited to plug in.  The Shield comes with “early warning” mechanisms for upcoming elections and referendums.  It also hosts a Stakeholder Platform with a plan to create an “Independent European Network” of “fact-checkers,” researchers, and media organizations, arguably gatekeepers.  This network is meant to work alongside the European Digital Media Observatory (EDMO), which will upgrade its monitoring and analytical capacity to provide “situational awareness” during elections and crises — essentially a live dashboard for “disinformation” trends.

Although the Democracy Shield formalizes a government-level framework for “information defense,” the same mission has been pursued for years through a sprawling network of U.S.-based university labs, think-tanks, and private contractors that work hand in hand with tech platforms and government agencies.  These entities rarely describe themselves as censorship operations.  Instead, they speak of “cognitive structure,” “cognitive infrastructure,” “information integrity,” “resilience,” or “harm reduction,” even as they perform narrative control and coordinated content moderation, often under the banner of protecting critical infrastructure, especially elections.  In 2022, then-DHS secretary Alejandro Mayorkas took aim at ordinary Americans who questioned the 2020 election, redefining them as potential threats to the nation’s critical infrastructure and security.

Recall that the Biden administration experimented with a Disinformation Governance Board under the Department of Homeland Security.  That board was scrapped after intense backlash over civil liberties concerns, but many other organizations and institutions continue the work of managing online narratives.  Some that are still operating are sampled below:

Like their European counterparts, these U.S.-based operations rely on bureaucratic euphemisms and polished branding to obscure their roles as gatekeepers of online discourse.  Some deploy “risk” labels — such as those used by NewsGuard and the Global Disinformation Index (GDI) — to choke off advertising revenue rather than remove content outright.  Others produce OSINT-style briefings for platforms or policymakers instead of maintaining public flag lists.  Many rely on dashboards, narrative maps, or case summaries instead of overt moderation logs, making their influence on content decisions far harder to trace, as seen with the DFRLab and Graphika.  The institutions below illustrate how these systems operate and the mechanisms they use to steer information flows.

With sweeping initiatives like the Shield and its associated components and institutions, the question becomes: Who decides what is true?  When the same bodies that write and enforce policy also fund fact-checking networks and run early-warning systems, the line between neutral monitoring and narrative management blurs or disappears.

Once you build permanent machinery for labeling and throttling “harmful” content, every future commission potentially inherits a loaded weapon.  Today’s anti-Kremlin framework can become tomorrow’s tool for marginalizing domestic opposition or inconvenient journalism.

Legal overreach is a real risk.  Critics note that the Shield leans on the DSA’s most complex and least-tested provisions — risk assessments, systemic risk mitigation, and crisis protocols — that could turn digital competition and safety law into the backbone of political speech governance or government-controlled propaganda.  A system marketed as “protecting democracy” can easily deteriorate into centralized power over speech in the hands of unelected officials and their network of “trusted partners.”


Image: Pezibear via Pixabay, Pixabay License.
