The EU and the US are on a collision course over AI regulation. The EU’s enforcement regime — with binding deadlines in August 2026, fines up to 7% of global revenue, and active investigations underway — is the world’s most comprehensive AI rulebook. The US, meanwhile, is actively dismantling AI oversight at every level. For any company building or deploying AI globally, operating across both regimes is no longer merely a legal challenge. It’s a strategic crisis.
What Is the EU-US Regulatory Divide?
The world’s two largest AI markets have chosen structurally incompatible approaches to governing artificial intelligence. The European Union built a precautionary, rights-based framework that treats AI governance as a prerequisite to deployment. The United States, under the Trump administration, has embraced a permissive, innovation-first philosophy that treats regulation itself as a threat to competitiveness.
This isn’t a matter of degree. It’s a matter of first principles.
The EU AI Act — which entered into force on August 1, 2024 — imposes mandatory compliance obligations on AI systems ranging from low-risk chatbots to high-stakes systems in healthcare, hiring, and financial services. It applies to any company touching EU users, regardless of where that company is headquartered. The regulation’s extraterritorial scope is explicit and enforced.
The US, by contrast, has no federal AI law. What it does have is an executive order — signed December 11, 2025 — that explicitly preempts state AI regulations, conditions $42 billion in federal broadband funding on states repealing AI laws deemed “onerous,” and creates a Department of Justice AI Litigation Task Force to challenge state-level AI rules in court.1
The EU’s Enforcement Clock Is Running
As of August 2, 2025, obligations for General-Purpose AI (GPAI) model providers became applicable — covering foundation models like GPT, Gemini, Claude, and Llama. From August 2, 2026, the European Commission’s enforcement powers fully activate, including the power to impose fines.2
The fines are not symbolic. For GPAI violations, penalties reach up to 3% of global annual turnover or €15 million, whichever is higher.3 For high-risk AI system violations — covering hiring algorithms, credit scoring, biometric identification, and critical infrastructure — fines climb to 7% of global turnover or €35 million.
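The “whichever is higher” structure matters: percentage-based caps bite for large firms, while the fixed euro floors dominate for smaller ones. A minimal sketch of that arithmetic, using the figures cited above (the function name and tier labels are illustrative, not from the regulation):

```python
# Sketch of the EU AI Act's "whichever is higher" fine ceilings.
# Percentages and floor amounts are those cited in this article; actual
# penalties are set case-by-case by regulators -- this only shows the math.

def max_fine(global_turnover_eur: float, violation: str) -> float:
    """Return the statutory fine ceiling in euros for a violation tier."""
    tiers = {
        "gpai": (0.03, 15_000_000),       # 3% of turnover or EUR 15M
        "high_risk": (0.07, 35_000_000),  # 7% of turnover or EUR 35M
    }
    pct, floor = tiers[violation]
    return max(pct * global_turnover_eur, floor)

# A company with EUR 2B global turnover: the percentage cap dominates.
print(max_fine(2_000_000_000, "gpai"))       # 60,000,000.0
print(max_fine(2_000_000_000, "high_risk"))  # 140,000,000.0

# A company with EUR 100M turnover hits the fixed floor instead.
print(max_fine(100_000_000, "gpai"))         # 15,000,000.0
```

Note the asymmetry this creates: for a startup with modest revenue, the EUR 15M/35M floors can exceed annual turnover entirely, which is part of why the compliance burden weighs disproportionately on smaller players.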
The EU AI Office, which oversees GPAI compliance, has already moved past administrative formality. In January 2026, it issued a formal data retention order against X (formerly Twitter), directing the company to preserve all internal records related to its Grok chatbot following allegations that “Spicy Mode” was generating non-consensual sexualized imagery.4 That investigation invokes both the AI Act and the Digital Services Act, putting X at risk of fines reaching 6% of global revenue.
Key EU AI Act enforcement milestones:
| Date | Milestone |
|---|---|
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Prohibited AI practices banned (facial recognition misuse, social scoring, subliminal manipulation) |
| August 2, 2025 | GPAI model obligations apply; Code of Practice finalised |
| August 2, 2026 | Full enforcement with fines; high-risk system compliance deadline |
| August 2, 2027 | Legacy GPAI providers (pre-August 2025) must comply |
| December 2, 2027 | Proposed backstop deadline under Digital Omnibus for high-risk obligations |
One important caveat: in November 2025, the European Commission proposed the “Digital Omnibus” package, which would push the high-risk AI obligations deadline back to December 2027, conditional on harmonized standards becoming available.5 The EU Parliament’s internal market committee formally backed the delay on March 18, 2026. But the Digital Omnibus is still under negotiation — prudent compliance planning treats August 2026 as the binding date.
The US Goes the Other Direction
While the EU builds its enforcement apparatus, the US is systematically dismantling its oversight infrastructure.
The Trump administration’s approach to AI policy centers on three pillars: deregulating at the federal level, preempting state-level regulation, and promoting US AI exports as a geopolitical instrument. The AI Action Plan, released July 2025, called for “full-stack AI export packages” — hardware, models, software, and standards — to be exported to allied nations as a strategic tool for maintaining US AI dominance.6
On March 20, 2026, the White House released a national legislative blueprint for AI policy. It recommends “targeted” federal standards in a narrow set of areas (child safety, digital replicas, critical infrastructure) while explicitly declining to mandate safety disclosures or impose liability frameworks on AI developers.7
The practical result: in the US, there is no federal equivalent to the EU AI Act’s requirements for technical documentation, human oversight, conformity assessments, or risk management systems. A company deploying a hiring algorithm that screens 10 million job applicants faces zero federal mandatory requirements — while the same deployment in the EU must complete a conformity assessment, maintain audit logs, ensure human review of consequential decisions, and notify affected individuals of automated decision-making.
A Diplomatic War Over AI Rules
The regulatory divergence has gone beyond legal complexity into active geopolitical friction.
In February 2026, Politico and Reuters reported that the US State Department has instructed American diplomats to lobby foreign governments against data sovereignty and AI governance initiatives that could restrict US tech companies.8 The directive frames European AI regulation as a trade barrier rather than a legitimate policy choice.
The friction has produced individual casualties. Before Christmas 2025, the US State Department imposed visa restrictions on five EU and British officials — including former European Commissioner Thierry Breton, widely considered the architect of the Digital Services Act. Breton publicly condemned the action as a “witch hunt.”9
The analytical firm Control Risks, writing in January 2026, concluded bluntly: “Transatlantic alignment will not converge in the near term, and companies should anticipate sustained tension — particularly around frontier models, data flows, and critical compute infrastructure.”10
What Global Companies Must Navigate
The practical compliance burden for any company operating in both jurisdictions is significant. The regimes don’t just differ — they conflict.
| Requirement | EU AI Act | US Federal Policy |
|---|---|---|
| Risk classification | Mandatory (4 tiers) | None |
| Technical documentation | Required for all GPAI models | None |
| Human oversight (high-risk) | Mandatory | Not required |
| Conformity assessment | Required before deployment | None |
| Transparency to affected persons | Required | Not required |
| Copyright compliance | Mandatory training data summaries | Unresolved (courts deciding) |
| Fines for non-compliance | Up to 7% global revenue | None at federal level |
| Extraterritorial scope | Yes — applies to all EU market access | No |
As of early 2026, 26 major AI providers — including Microsoft, Google, Amazon, OpenAI, and Anthropic — have signed the EU’s voluntary Code of Practice for GPAI models, which provides a pathway to demonstrate compliance.11 Meta has opted not to sign, instead pursuing direct compliance through “alternative adequate means” — a higher-scrutiny path that the AI Office will evaluate case-by-case.
Signing the Code of Practice does not eliminate fine risk, but the EU AI Office has stated it will account for Code commitments when calculating penalties. Non-signatories should expect closer examination.
The Brussels Effect: One Standard to Rule Them All?
There is historical precedent for what happens when the EU sets a standard that US companies must meet: GDPR, which became the de facto global data protection standard not because regulators around the world adopted it, but because multinationals found it cheaper to apply one rulebook everywhere than to segment their systems by jurisdiction.
Early evidence suggests the AI Act is following the same trajectory. Adobe has deployed C2PA content authenticity watermarking into its global product suite — not just for EU users — rather than geofencing the feature. OpenAI has stationed a “Head of Preparedness” in Brussels to coordinate AI safety pipelines for its flagship models globally.12 Alphabet and Microsoft have built EU-compliant transparency tools into products served to users worldwide.
The academic and legal community is split on whether this will hold. A 2025 Brookings Institution analysis argued the AI Act’s technical complexity and novel obligations — unlike GDPR’s more straightforward data principles — may limit its Brussels Effect, because compliance costs are higher and the systems being regulated are more heterogeneous.13 A GovAI research paper similarly cautioned that the AI Act could fragment rather than harmonize global AI governance if its requirements prove too burdensome for non-EU developers to adopt voluntarily.
At minimum, the market signal is clear: major AI providers are investing in EU compliance infrastructure, and some are choosing to apply those standards globally as the path of least resistance.
The Balkanization Risk
The deeper risk is not that companies struggle to comply with two different regimes. It’s that they stop trying.
If EU compliance costs are high enough and the US market is large enough, some AI companies will choose to operate bifurcated products: one for the EU, another for the US and everywhere else. This creates a two-tier AI landscape in which EU users receive more constrained, more audited, more expensive AI products, while US users receive faster-moving, less scrutinized systems.
More concerning is the effect on smaller players. The compliance burden of the EU AI Act — technical documentation, risk assessments, conformity audits, human oversight systems — is proportionally much heavier for startups and mid-sized companies than for hyperscalers. A startup deploying a hiring tool in the EU faces requirements that cost tens to hundreds of thousands of dollars to implement correctly. OpenAI and Google have entire legal teams dedicated to nothing else. The regulatory asymmetry entrenches incumbent advantage and raises barriers to entry.14
The competitive dynamics compound geopolitically. The UK post-Brexit is pursuing a lighter-touch “pro-innovation” approach, while Singapore and the UAE are positioning as AI-friendly jurisdictions. As regulatory divergence widens, the temptation for capital and talent to migrate toward lower-compliance environments increases — a dynamic that could accelerate AI development fragmentation along geopolitical fault lines rather than technological ones.
As of March 2026, the transatlantic divide shows no sign of narrowing. The EU is enforcing. The US is deregulating. And global AI companies are building compliance architecture for both — or making the calculation that they can’t afford to.
Frequently Asked Questions
Q: Does the EU AI Act apply to US companies? A: Yes. The EU AI Act has explicit extraterritorial scope — it applies to any company placing AI systems on the EU market or whose AI systems affect EU users, regardless of where the company is headquartered. US companies that serve European customers are directly subject to the regulation.
Q: What is the most important EU AI Act deadline in 2026? A: August 2, 2026 is when the European Commission’s enforcement powers fully activate, including the ability to issue fines. GPAI model obligations have been in force since August 2025; high-risk AI system obligations also apply from August 2026, though the Digital Omnibus proposal may push that deadline to December 2027.
Q: What is the US doing instead of federal AI regulation? A: The Trump administration’s December 2025 executive order focuses on preempting state AI laws, not creating federal requirements. The March 2026 national AI policy framework recommends narrow federal standards (child safety, digital replicas) while leaving most AI development unregulated at the federal level.
Q: What is the “Brussels Effect” in AI? A: The Brussels Effect describes how EU regulations become de facto global standards because it is more efficient for multinationals to maintain one compliance framework globally than to segment by jurisdiction. With AI, several major providers are applying EU-compliant transparency and safety features to all users worldwide, not just European ones.
Q: How should a company approach dual US-EU compliance? A: The consensus among compliance experts is to build to the EU standard globally and adapt downward where local rules permit. This avoids maintaining parallel product and documentation tracks. The alternative — separate EU-compliant and US-unrestricted versions of the same product — is technically feasible but operationally expensive and strategically risky if EU standards spread further.
Footnotes
1. Paul Hastings LLP. “President Trump Signs Executive Order Challenging State AI Laws.” January 2026. https://www.paulhastings.com/insights/client-alerts/president-trump-signs-executive-order-challenging-state-ai-laws
2. EU AI Act Implementation Timeline. https://artificialintelligenceact.eu/implementation-timeline/
3. Latham & Watkins. “EU AI Act: GPAI Model Obligations in Force.” 2025. https://www.lw.com/en/insights/eu-ai-act-gpai-model-obligations-in-force-and-final-gpai-code-of-practice-in-place
4. Financial Content / TokenRing. “The Brussels Effect in Action: EU AI Act Enforcement Targets X and Meta.” January 9, 2026. https://markets.financialcontent.com/wral/article/tokenring-2026-1-9-the-brussels-effect-in-action-eu-ai-act-enforcement-targets-x-and-meta-as-global-standards-solidify
5. OneTrust Blog. “EU Digital Omnibus Proposes Delay of AI Compliance Deadlines.” 2025. https://www.onetrust.com/blog/eu-digital-omnibus-proposes-delay-of-ai-compliance-deadlines/
6. McDermott Will & Emery. “White House Releases America’s AI Action Plan.” 2025. https://www.mwe.com/insights/white-house-releases-americas-ai-action-plan/
7. Sullivan & Cromwell. “Trump Administration Releases National Policy Framework on Artificial Intelligence.” March 2026. https://www.sullcrom.com/insights/memo/2026/March/White-House-Releases-National-Policy-Framework-AI
8. US News. “US Orders Diplomats to Fight Data Sovereignty Initiatives.” February 25, 2026. https://www.usnews.com/news/top-news/articles/2026-02-25/exclusive-us-orders-diplomats-to-fight-data-sovereignty-initiatives
9. Xinhua. “Transatlantic Friction Puts Europe’s Tech Ambition to the Test.” January 2026. https://english.news.cn/20260109/c405f1fb989a49fe931eb39c7f34e570/c.html
10. Control Risks. “AI Visions in 2026: A Transatlantic Strategic Divide.” 2026. https://www.controlrisks.com/our-thinking/insights/ai-visions-in-2026-a-transatlantic-strategic-divide
11. Financial Content / TokenRing. “The Brussels Effect 2.0: EU AI Act Implementation Reshapes Global Tech Landscape in Early 2026.” January 12, 2026. https://markets.financialcontent.com/wral/article/tokenring-2026-1-12-the-brussels-effect-20-eu-ai-act-implementation-reshapes-global-tech-landscape-in-early-2026
12. Financial Content / TokenRing. “The Age of Enforcement: How the EU AI Act is Redefining Global Intelligence in 2026.” January 28, 2026. https://markets.financialcontent.com/stocks/article/tokenring-2026-1-28-the-age-of-enforcement-how-the-eu-ai-act-is-redefining-global-intelligence-in-2026
13. Brookings Institution. “The EU AI Act Will Have Global Impact, but a Limited Brussels Effect.” https://www.brookings.edu/articles/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/
14. Bloomsbury Intelligence and Security Institute. “Global Fragmentation of AI Governance and Regulation.” https://bisi.org.uk/reports/global-fragmentation-of-ai-governance