BRUSSELS' LONG WAR ON FREE SPEECH
How the European Union's Digital Services Act Became the World's Most Sophisticated Censorship Machine, and Why Americans Should Be Furious
There is a document you should read. It is 160 pages long, written by the staff of the U.S. House Judiciary Committee, and it contains something remarkable: receipts. Thousands upon thousands of pages of internal corporate communications, produced under congressional subpoena from ten of the world’s largest technology companies (Meta, Google, TikTok, X, Apple, Amazon, Microsoft, and others), documenting in meticulous, damning detail how officials of the European Commission spent a decade quietly pressuring Silicon Valley to silence speech they didn’t like. Not illegal speech. Not dangerous speech. Political speech. Conservative speech. Speech about immigration, COVID-19, gender ideology, and election integrity.
The Committee has now published two interim staff reports. Part I arrived in July 2025 and Part II in February 2026. Together they constitute the most comprehensive public accounting yet of what critics have long suspected, and defenders of the EU have long denied: that the Digital Services Act (DSA) is not a safety regulation. It is a censorship regime. One with global reach, designed and wielded with partisan intent, and one that has already shaped the outcome of elections on both sides of the Atlantic.
This essay summarizes what those reports found, surveys the subsequent evidence that has emerged, and makes the case that every American who cares about the First Amendment should be paying close attention.
Prologue: The Philosopher-King of Censorship
Before we get to Brussels, we should spend a moment in Palo Alto. Because the intellectual scaffolding for everything the European Commission has built, the justifications, the framing, the moral confidence, was articulated with unusual clarity by an American, on American soil, four years before this controversy reached its current boiling point.
On April 21, 2022, former President Barack Obama delivered the keynote address at a symposium titled “Challenges to Democracy in the Digital Information Realm,” hosted jointly by Stanford University’s Cyber Policy Center and the Obama Foundation. It was a polished, hour-long speech, delivered with the former president’s characteristic eloquence and earnest self-assurance. It was also, at its core, a sophisticated argument for why governments must control speech to save democracy, and why people who resist that control are, at best, naive and, at worst, complicit in authoritarianism.
“People like Putin, and Steve Bannon for that matter, understand it’s not necessary for people to believe this information in order to weaken democratic institutions. You just have to flood a country’s public square with enough raw sewage.” - Barack Obama, Stanford University, April 2022
Obama’s central thesis was that the free flow of information, the very thing the First Amendment was designed to protect, had become democracy’s greatest vulnerability. Social media platforms, he argued, were “turbocharging” humanity’s worst impulses, amplifying disinformation, sowing distrust, and creating the conditions in which autocrats flourish. He invoked Myanmar, Ethiopia, Russia, and Hungary. He mentioned Vladimir Putin and Steve Bannon in the same breath as fellow travelers in the project of epistemological destruction. He declared that “people are dying because of misinformation,” citing COVID-19 vaccine hesitancy as Exhibit A.
The solution, in Obama’s telling, was for technology companies to “redesign” themselves under government oversight. Content moderation, he said, does not go far enough. Platforms have a “financial incentive” to keep dangerous content circulating and must be compelled by regulation to go further. He explicitly called for Section 230 reform, the legal provision that shields platforms from liability for user content, and argued that tech must be subject to the same kind of public safety regulation as other industries. Decisions about what is true and what is dangerous, he argued, should not be left solely to private companies. They must be subject to government oversight.
Note what Obama was not saying. He was not calling for government bureaucrats to maintain a list of banned opinions. He framed his argument carefully, acknowledging the First Amendment, praising the “transformative power” of the open internet, and insisting he was merely talking about mitigating the “worst harms” of disinformation. He is too careful a lawyer and too skilled a politician to make the mistake of sounding like a censor.
But the logic of his argument, followed to its conclusion, leads exactly there. If disinformation is an existential threat to democracy, and if platforms cannot be trusted to address it voluntarily, and if government regulation is required to compel them, then someone must be empowered to decide what constitutes disinformation. In a democracy, that someone is ultimately the state. And the state, being composed of human beings with political interests and ideological commitments, will exercise that power in politically interested and ideologically committed ways.
Who decides what is disinformation? Whoever holds the regulatory pen. And the pen is never held by no one.
Obama did not answer this question in his Stanford speech. He glided past it with characteristic grace, offering reassurances about independent oversight, journalistic standards, and citizens’ own responsibility to consume news critically. These are not bad ideas. But they are not answers to the structural problem his own argument creates.
The European Commission answered the question for him. They decided who holds the pen. It is the Commission. And the Commission, as we are about to see, has wielded that pen with a specificity of political purpose that ought to trouble anyone who took Obama’s professed concerns about democracy seriously.
The irony is exquisite and worth sitting with. Obama gave his Stanford speech, warning about the dangers of foreign authoritarian interference in democratic information ecosystems. He specifically named Russia’s manipulation of social media platforms as a threat to Western democracy. He called for regulatory frameworks to protect the integrity of public discourse. And the European regulatory framework that emerged from exactly that ideological tradition, built on exactly those justifications by bureaucrats who share exactly those values, has been used, as extensively documented by the House Judiciary Committee’s evidence, to cancel election results, suppress political opponents, and shape the information environment of European voters in ways that happen to favor the established political order.
Obama’s Stanford speech is the ur-text of the censorship-as-democracy-protection ideology. It is thoughtful, earnest, and wrong in a way that matters enormously. Not because disinformation is not real. It is real. Not because foreign manipulation of social media platforms is not a genuine threat. It is. But because the cure Obama prescribed is more dangerous than the disease, for the same reason that has always made prior restraint on speech dangerous: you cannot give governments the power to define truth without giving them the power to define convenient truth.
The story that follows is what happens when you do.
I. The Machine Is Built in the Dark
The story begins not with the DSA itself, which passed in 2022 (the same year as Obama’s speech justifying this approach), but with a decade of groundwork. Beginning around 2015 and 2016, according to the House Judiciary Committee’s investigation, senior European Commission officials began convening a series of meetings with the major social media platforms. The stated purpose was anodyne: combating “hate speech” and “disinformation.” The actual purpose, the documents suggest, was something far more specific.
Europe was experiencing the same populist insurgency that was rattling establishment politics everywhere in the Western world. Voters angry about mass migration, economic stagnation, and elite condescension were flocking to parties the European press described as “far right”, parties that in most cases simply held views that had been mainstream a generation earlier. The platforms, with their algorithmic indifference to editorial gatekeepers, were giving these movements a megaphone that bypassed state broadcasters and legacy newspapers.
“The Commission worked to censor true information and political speech about some of the most important policy debates in recent history, including the COVID-19 pandemic, mass migration, and transgender issues.” - House Judiciary Committee Staff Report, February 2026
The Commission’s solution was to turn the platforms into enforcers. Over the course of more than 100 closed-door meetings documented in the subpoenaed materials, Commission officials pressed company representatives to tighten their content moderation rules globally, not just within Europe. The trick, the documents reveal, was that they understood a fundamental reality of how platforms work: you cannot easily run separate content moderation regimes for different countries. When Europe demands that a certain type of speech be suppressed, the practical effect is that it is suppressed everywhere.
The COVID-19 pandemic accelerated the process dramatically. In November 2021, the Commission reached out to TikTok, asking how the platform planned to fight “disinformation” about COVID vaccines, specifically in the United States, not in Europe. The Commission requested information about TikTok’s plans to “remove” certain claims about vaccine efficacy targeting American children. This was a foreign government bureaucracy directing an American company to censor American speech on American soil. It happened in an email.
II. The Law That Made It Permanent
The informal pressure campaign was codified and supercharged when the Digital Services Act came into force in 2023. The DSA is, on its face, a platform regulation. It requires large platforms to assess and mitigate “systemic risks” to civic discourse, electoral processes, and public health. It empowers regulators to appoint “trusted flaggers”, approved organizations whose content removal requests must be fast-tracked by platforms. It threatens non-compliant companies with fines of up to six percent of their global annual revenue, a number that, for a company like Meta, could run to billions of dollars.
The DSA does not just regulate Europe. It exports European speech standards to the entire world.
The law’s extraterritorial ambition is the key to understanding why it matters for Americans. Platforms like Facebook, YouTube, X, and TikTok do not, for practical reasons, maintain country-specific content moderation systems. Users travel. VPNs are ubiquitous. The cost and complexity of geographically precise moderation are prohibitive. This means that when the Commission pressures a platform to change its global community guidelines, those changes apply to users in Des Moines as surely as they apply to users in Düsseldorf. And the subpoenaed documents show the Commission did exactly that: at a closed-door May 2025 workshop, officials told platforms that “continuous review of global community guidelines” was a DSA compliance best practice.
The Part I report contains a particularly striking example from a Commission workshop exercise. Regulators labeled a hypothetical social media post reading “we need to take back our country”, a phrase used by politicians across the ideological spectrum, including numerous Democratic Party figures, as “illegal hate speech” that platforms are required to censor under the DSA. This is not a fringe interpretation. This is what Commission officials were teaching platform compliance teams in a training exercise documented in internal corporate files.
III. Elections: The Clearest Evidence
If the broad claims about content moderation policy feel abstract, the election-specific evidence is harder to dismiss. The Part II report identifies at least eight European national elections in which the Commission activated what it called a “rapid response system”, a mechanism through which approved fact-checkers and government-designated “trusted flaggers” can file priority content removal requests against platforms in the days before and after voting.
The elections named in the report span the Continent:
• Slovakia (2023): Platforms reportedly censored statements including “there are only two genders” as hate speech under Commission pressure, removing content that had nothing to do with the election itself.
• The Netherlands (2023 and 2025): Government bodies were granted “trusted flagger” status, enabling faster removal of content ahead of elections won by the populist Geert Wilders.
• France (2024): Pre-election coordination meetings between Commission officials, national regulators, and “left-wing NGOs” discussed which political content should be moderated.
• Ireland (2024 general election and 2025 presidential election): The Irish media regulator hosted “DSA Election Roundtables” with the Commission and platforms. Meta confirmed it had updated its “election risk assessment and mitigations”, with additional moderation steps implemented at regulators’ urging.
• Romania (2024): The most dramatic case, discussed in detail below.
The Commission’s response to all of this has been to call the reports “pure nonsense” and “completely unfounded.” But the documents are not the Committee’s invention. They were produced under legal compulsion by the companies themselves. The question is not whether these meetings happened. The emails prove they did. The question is whether they constitute legitimate election integrity work or partisan interference dressed up in the language of safety.
The answer depends almost entirely on whether you trust the Commission’s judgment about what constitutes dangerous “disinformation”. That trust has been catastrophically eroded by what happened in Romania.
IV. Romania: The Censorship That Cancelled an Election
In November 2024, a political outsider named Călin Georgescu unexpectedly won the first round of Romania’s presidential election with 23 percent of the vote, surging from single-digit polling in a matter of weeks. Romania’s intelligence services, the Constitutional Court, and the European Commission immediately pointed to TikTok: Russian-linked bot networks, they said, had artificially amplified Georgescu’s content and manipulated the platform’s algorithm. On December 6, 2024, the Constitutional Court made history. It cancelled the election results, the first time a European nation had ever done so on grounds of social media interference.
The narrative was clean, compelling, and politically convenient. Georgescu was portrayed as pro-Russian and anti-NATO. His victory would have been an embarrassment for the EU establishment. The interference claim gave authorities the legal basis to void the result and ban him from the re-run election held in May 2025, which was won by a pro-EU candidate.
Then the receipts arrived. TikTok’s own submission to the European Commission, documents produced under the House Judiciary Committee’s subpoena, stated that the company “ha[d] not found, nor been presented with, any evidence of a coordinated network of 25,000 accounts associated with Mr. Georgescu’s campaign.” This is the platform that was accused of being the vector of the interference. It found no evidence of the interference.
TikTok told the European Commission it had found no evidence of the Russian bot network cited by the Romanian Constitutional Court to justify cancelling the election.
Furthermore, Romanian investigative journalism outlet snoop.ro, citing confidential sources from Romania’s own tax authority, reported that at least one of the TikTok influence campaigns had in fact been funded by Romania’s National Liberal Party, a member of the mainstream establishment coalition rather than a foreign government.
None of this led to the reinstatement of the election results. The re-run proceeded. The pro-EU candidate won. And the Commission, which had used the Romanian case as justification for its TikTok investigation under the DSA, has never publicly grappled with TikTok’s own denial of the core factual predicate.
This is the most serious allegation in the entire debate: that the censorship apparatus built under the DSA was used not to protect an election, but to change one. Whether one believes Georgescu was a genuine threat or a legitimate democratic choice, the principle at stake is the same. Governments should not be in the business of deciding which election results to honor based on post-hoc disinformation findings that the relevant platform disputes.
V. The German Paradox
The German federal election of February 2025 is instructive precisely because the evidence confounds the simple censorship narrative, though in ways that raise their own troubling questions.
Ahead of the vote, the Commission conducted stress tests with major platforms and convened roundtables with the German Digital Services Coordinator to discuss “risks.” Germany’s domestic intelligence service warned of Russian disinformation campaigns. A task force was established in the state of Hesse to “analyze and coordinate measures regarding opinions on social media platforms.” State officials warned police officers against membership in regional branches of the AfD, a party polling above 20 percent.
And yet: independent research by Global Witness and the Institute for Strategic Dialogue found that TikTok’s and X’s algorithms were, in fact, amplifying AfD content disproportionately compared with other parties. Elon Musk used X’s platform to openly endorse the AfD, host its leader in a livestream, and tell his 220 million followers to vote for it. The AfD finished second with 20.8 percent. Then, months after the election, Germany’s domestic intelligence agency labeled the entire AfD, the main opposition party in the Bundestag, an “extremist organization” subject to enhanced surveillance. The designation was rapidly suspended pending legal challenge.
What the German case demonstrates is that the DSA’s election integrity framework coexists with other forms of state pressure and does not prevent their use against opposition parties. The AfD was not suppressed on TikTok. It was suppressed in other ways: through intelligence designations, the refusal of other parties to cooperate with it legislatively, and an overwhelmingly hostile media environment. The DSA was simply one instrument in a broader toolkit.
VI. What This Means for Americans
The Trump administration has responded to these findings with unusual vigor. Secretary of State Marco Rubio imposed visa bans on five European officials, including former DSA architect Thierry Breton, describing them as leaders of the “global censorship-industrial complex.” The State Department launched an internal investigation into DSA enforcement in early 2025. The House Judiciary Committee has continued to issue subpoenas, including to Meta, as recently as March 2026.
The administration’s critics, including former U.S. Ambassador to Russia Michael McFaul, have called these responses overblown and politically motivated. But the underlying legal concern is not trivial. The DSA’s researcher-access provision, as enforced in the Commission’s December 2025 fine against X, asserts the right to demand that an American company hand over data on American users to researchers approved by European regulators. This is an extraordinary extraterritorial claim over American citizens’ private information, made by an unelected foreign bureaucracy.
When a European regulator tells a platform to change its “global community guidelines,” it is making content moderation decisions for Iowa as surely as for Ireland.
More broadly, the structural reality of global content moderation means that the First Amendment’s protections are being quietly eroded by foreign regulatory pressure. This is not a hypothetical future threat. The subpoenaed documents show it has already happened: platforms changed their global moderation rules in response to Commission pressure, and those rule changes applied to Americans. The COVID-19 content labeled as misinformation in Brussels was also labeled as misinformation in Baltimore. The immigration discussion that crossed the line into “hate speech” by EU standards crossed that line for American users, too.
The $120 million fine levied against X in December 2025, the first ever under the DSA, was ostensibly about blue checkmark transparency and advertising repositories. But Rubio called it “an attack on all American tech platforms and the American people by foreign governments.” Whether or not you share his confrontational framing, the principle he defends is straightforward: American companies serving American users should not be regulated by unaccountable foreign bureaucrats whose definition of acceptable speech is fundamentally at odds with the First Amendment.
VII. The Censors’ Defense
In fairness, the Commission’s defenders make several arguments that deserve engagement rather than dismissal.
First, they note that the DSA contains no reference to targeting conservative, populist, or right-wing content. Its language is neutral: it addresses “illegal content,” “systemic risks” to civic discourse, and algorithmic transparency. European legal scholars point to the regulation’s first article, which commits to upholding freedom of expression, and argue that the law’s goal is precisely to prevent censorship of political speech, not to enable it.
Second, they argue that the German example, where algorithmic analysis showed the AfD being amplified rather than suppressed, demonstrates that concerns about anti-conservative bias in DSA enforcement are at least partly unfounded. If the machine is supposed to suppress the right, it did a poor job in Germany’s biggest recent election.
Third, they contend that the House Judiciary Committee reports are themselves political documents, produced by the Republican majority under Chairman Jim Jordan, a fierce Trump ally, to attack EU regulations that constrain American tech companies, whose executives have grown close to the Trump orbit. The timing of the reports, the critics note, aligns perfectly with the administration’s tariff negotiations and broader pressure campaign against Brussels.
These are not frivolous objections. The Committee’s framing is unambiguously prosecutorial. It does not wrestle seriously with the possibility that some of what it calls censorship is legitimate moderation of genuinely illegal content. And the political context (Trump, Musk, Jordan, the fight over EU tech regulation) is impossible to separate from the evidentiary claims.
But here is what is not in dispute: the meetings happened. The pressure was applied. The platforms changed their rules. And in Romania, a national election was cancelled on the basis of interference claims that the platform at the center of the story says it could not verify.
VIII. A New Kind of War
There is a framework that makes sense of everything described in this essay. It is not the framework preferred by the European Commission’s defenders, who present the DSA as a straightforward consumer-protection regulation. It is not even the framework most often used by the Commission’s American critics, who tend to reach for First Amendment arguments. The framework that fits most precisely comes from military doctrine. It is called fifth-generation warfare. And once you see the DSA through that lens, it is difficult to unsee it.
Classical warfare, the kind studied at West Point and Sandhurst, is a contest between armies. Second and third-generation warfare added industrial firepower, maneuver, and combined arms. Fourth-generation warfare, the form that defined conflicts in Vietnam, Afghanistan, and Iraq, erased the boundary between soldier and civilian, military and political, battlefield and home front. The state’s monopoly on organized violence was broken.
Fifth-generation warfare goes further still. It erases the boundary between war and peace. In fifth-generation warfare, the primary battlefield is not territory. It is not infrastructure. It is the human mind. The target is not an enemy army but an enemy population’s capacity to perceive reality clearly, to form coherent political judgments, and to act collectively on its own interests. The weapon is information itself, or more precisely, the management of information. Victory is achieved not when the enemy surrenders but when the enemy population can no longer distinguish truth from falsehood, friend from foe, or its own interests from those of its adversaries.
In fifth-generation warfare, the battlefield is the human mind. Victory is achieved when a population can no longer think clearly enough to defend itself.
The concept was developed primarily in the context of non-state actors and foreign adversaries. Russian theorists call their version of it the Gerasimov Doctrine, after General Valery Gerasimov’s 2013 essay arguing that the lines between war and peace have blurred beyond recognition, and that informational, psychological, and political tools are now as decisive as tanks and missiles. Barack Obama himself alluded to this at Stanford, quoting Putin’s alleged insight that you do not need people to believe disinformation. You simply need to flood the public square with enough noise that citizens can no longer know what to believe.
What has received far less attention is the question of what happens when these techniques are deployed not by foreign adversaries but by governments against their own citizens. This is sometimes called reflexive control, a term from Soviet and Russian military psychology describing the manipulation of an adversary’s decision-making process by feeding it a carefully curated picture of reality. The adversary, operating on false or incomplete information, makes decisions that serve the manipulator’s interests while believing it is acting freely. The manipulation is invisible because the target never realizes its information environment has been shaped.
Reflexive control: the manipulation of a population’s decisions by curating the information it receives, so that people choose what the controller wants while believing they are choosing freely.
The DSA’s architecture maps onto this framework with uncomfortable precision. Consider what the law actually does in practice, as documented in the House Judiciary Committee’s evidence. It does not, for the most part, order the deletion of specific pieces of content by government decree. That would be too obvious, too legally vulnerable, too reminiscent of the censorship regimes that Europeans are supposed to have rejected in 1945. Instead, it creates a system of incentives and pressures that cause platforms to curate their own information environments in directions the Commission prefers.
The threat of ruinous fines, up to six percent of global revenue, creates a powerful incentive for platforms to err on the side of over-removal when content is in a gray area. The trusted flagger system grants approved organizations, selected by government regulators, the authority to fast-track content for review in the critical days before elections. The systemic risk provisions require platforms to assess and mitigate broad categories of speech, including entirely legal speech, if regulators determine that such speech poses risks to civic discourse or electoral integrity. Compliance teams, knowing they will be audited and potentially fined, develop an institutional instinct for caution that consistently resolves ambiguity in the direction of less speech rather than more.
The result is an information environment that has been shaped, at the margins, in ways that favor established political narratives over insurgent ones, institutional authority over popular skepticism, and approved experts over unofficial voices. Citizens navigating this environment believe they are encountering a natural marketplace of ideas. They do not know that the marketplace has been quietly reorganized by regulatory pressure applied in closed-door workshops between Commission officials and platform compliance teams. This is reflexive control, exercised not by a foreign adversary but by governing institutions over their own democratic populations.
Citizens believe they are encountering a natural marketplace of ideas. They do not know the marketplace has been quietly reorganized by bureaucrats in closed-door workshops.
The psychological warfare dimension is not incidental to the DSA’s design. It is structural. Psychological operations, in the military sense, achieve their effects by shaping the information environment of a target population. Effective psychological operations are invisible: the target population does not experience them as manipulation but as organic reality. The DSA achieves the same effect through a legal mechanism rather than a military one, but the underlying dynamic is identical. When the Commission labeled the phrase “we need to take back our country” as illegal hate speech in a training exercise for platform compliance teams, it was not merely making a legal determination. It was encoding a political judgment into the informational infrastructure of European society.
The word “infrastructure” is important here. Infrastructure is what you do not notice until it fails. For most of history, governments that wanted to control information had to do it visibly: banning books, shutting down newspapers, arresting editors. These actions were recognizable as censorship and generated corresponding resistance. The genius of the DSA model, whether intended or not, is that it operates at the infrastructure level. It does not tell citizens what to think. It shapes the information environment through which citizens form their own thoughts, without their awareness or consent.
Military psychologists distinguish between first-order and second-order effects in information operations. First-order effects are direct: a piece of false information is believed. Second-order effects are more powerful and more durable: the target population’s epistemic confidence is degraded. It becomes less certain about what is true, more susceptible to official narratives, and more dependent on authoritative sources to interpret reality for it. The Commission’s disinformation framework, in its long-term operation, risks producing exactly these second-order effects on European citizens, not through enemy action but through the actions of their own governments.
This is what makes the Romanian case so significant beyond its immediate facts. Whether Georgescu was genuinely propelled by Russian bots or by domestic political manipulation, the constitutional annulment of the election result sent a message to every voter in Europe: the outcome of an election can be reversed by state authorities citing informational threats that citizens cannot independently verify, using evidence that the platform at the center of the story disputes, in proceedings that are not subject to full public scrutiny. The message, whether received unconsciously or explicitly, is that electoral democracy operates within boundaries set by the state’s informational assessments. That is not democracy. That is managed democracy, the form of governance that Vladimir Putin, the man Obama identified as the master of information warfare, has practiced in Russia for twenty years.
The Commission’s model risks producing the same second-order effects that military psychological operations are designed to achieve: populations that are epistemically dependent on official sources to interpret reality.
None of this requires the European Commission to be consciously engaged in psychological warfare against its own citizens. The most powerful systemic effects are rarely the product of conscious design. The Commission believes, sincerely and not without some evidence, that it is protecting European democracy from genuine threats: Russian interference, algorithmic amplification of extremism, coordinated disinformation campaigns. Its officials do not consider themselves information warriors. They think of themselves as regulators.
But good intentions do not neutralize structural effects. A regulatory framework that systematically disadvantages unofficial speech, that operates through closed-door pressure rather than transparent legal process, that treats political speech as a category of risk to be managed, and that has now demonstrated its willingness to invalidate election results on the basis of disputed informational claims, is functionally indistinguishable from a psychological warfare operation against the political autonomy of European citizens. The instrument is law. The effect is control. The target is thought.
Americans watching this from a distance should resist the temptation to treat it as a foreign problem. The House Judiciary Committee’s evidence establishes that the Commission’s content moderation demands have already reached across the Atlantic and reshaped the global policies of American platforms. Fifth-generation warfare, by definition, does not respect national boundaries. An information environment shaped by foreign regulatory pressure is an information environment shaped by a foreign power, regardless of whether that power considers itself an adversary or a partner.
The Founders designed the First Amendment precisely because they understood that governments would always be tempted to manage the information environment in ways that served the governing class. They had lived under a government that did exactly that, and they built a constitutional firewall against it. That firewall is now being flanked, not by foreign enemies but by allied bureaucracies deploying the language of safety and the mechanisms of law to achieve what tyrants historically achieved through force.
IX. The Bigger Picture
Step back from the partisan noise, and the picture that emerges is genuinely alarming, regardless of one’s political sympathies. Democratic self-governance requires that voters be able to freely receive and discuss information. Social media platforms have become the dominant venue for that discussion. And those platforms are now subject to a regulatory framework, enforced by unelected officials in Brussels, that gives government-approved bodies the power to fast-track content removal requests in the weeks before elections, that defines broad categories of political speech as “systemic risks” requiring mitigation, and that threatens companies with ruinous fines for non-compliance.
Whether this framework is deployed with partisan intent or genuine neutrality, its structure creates a serious vulnerability. Whoever controls the definition of “disinformation” controls what voters are allowed to easily find and discuss online. The EU’s answer is that independent regulators and judicial review provide sufficient safeguards. The answer of the House Judiciary Committee, and increasingly of many European voices as well, from the Polish president who vetoed DSA implementation legislation to far-right MEPs across the Continent, is that those safeguards are insufficient and that the framework itself is the problem.
The right answer probably lies somewhere between these poles. But the conversation cannot happen honestly if the Commission continues to insist that its actions constitute “pure nonsense” rather than engaging with tens of thousands of pages of its own internal communications.
Conclusion: Read the Documents
The two House Judiciary Committee reports are available in full at the links below. They are imperfect documents, prosecutorial in tone, written for political effect, and produced by people with strong institutional interests in their conclusions. Read them with appropriate skepticism.
But read them. Because whatever you think of the Trump administration’s motives, whatever you think of Jim Jordan, whatever you think of Elon Musk, the documents behind those reports are real. The emails between Commission officials and platform compliance teams are real. TikTok’s denial of the Romanian bot network is real. The Irish regulator’s DSA election roundtables are real. The global content moderation rule changes are real.
A foreign, unelected bureaucracy has spent a decade trying to shape what you can say and read online. The question is not whether you like the speech it suppressed. The question is whether you believe unelected foreign officials should have that power at all.
“Freedom of speech is the foundation of all other freedoms. When Brussels decides what is disinformation, it decides what is true. That is not a power any government should hold.”
Primary Sources
The following official documents underpin this analysis:
• House Judiciary Committee: Part II Report (February 3, 2026): https://judiciary.house.gov/sites/evo-subsites/republicans-judiciary.house.gov/files/2026-02/THE-FOREIGN-CENSORSHIP-THREAT-PART-II-2-3-26.pdf
• House Judiciary Committee: Part I Report (July 25, 2025): https://judiciary.house.gov/sites/evo-subsites/republicans-judiciary.house.gov/files/2025-07/DSA_Report&Appendix(07.25.25).pdf
• Part II Press Release: https://judiciary.house.gov/media/press-releases/new-report-exposes-european-commission-decade-long-campaign-censor-american
• Part I Press Release: https://judiciary.house.gov/media/press-releases/foreign-censorship-threat-how-european-unions-digital-services-act-compels
EDITORIAL NOTE
This essay is written as a summary and analysis of the House Judiciary Committee’s findings and related publicly available reporting. It reflects one viewpoint in an actively contested debate. The European Commission and a significant body of European legal scholars dispute the characterizations in the Committee reports. Readers are strongly encouraged to consult primary sources and multiple perspectives before forming conclusions. The factual claims in this essay are sourced from Congressional reports, peer-reviewed research, and reporting from Reuters, Euronews, EU Observer, TechCrunch, Friends of Europe, and other outlets.
This essay was prepared and published in support of my roundtable discussion with Mike Benz at CPAC today.
Many points caught my eye, but one stands out: the ability to discern between false narrative and true information is largely unavailable to the average person. That is why Obama and others with oratorical skill have been such powerful figures in all of this. His ability to con his way through these structures is polished and precise. The inability to see the deeper strategy behind a convincing argument is a vulnerability of vast proportions. Just ask R.F.K. Jr.