The EU AI Act enters full enforcement in August 2026. The Trump administration simultaneously revoked its predecessor’s AI safety framework and is using the courts and federal funding as weapons against state-level AI regulation. Global AI companies now face two irreconcilable compliance regimes—and no international coordination mechanism to bridge them.

What Is Happening Right Now

The split is not coming. It is already here.

On February 2, 2025, the EU AI Act’s prohibition tier became enforceable—banning real-time biometric surveillance in public spaces, emotion recognition in workplaces, and AI-powered social scoring outright.1 On January 20, 2025—two weeks earlier—President Trump revoked Executive Order 14110, the Biden-era framework that had mandated safety testing for high-risk AI models, directed agencies to develop AI risk standards, and established a structured federal oversight approach.2

The directional divergence could not be starker. Brussels is enforcing. Washington is dismantling.

How the EU AI Act Actually Works

The EU’s framework is risk-tiered, not blanket. Understanding the structure matters for practitioners navigating the August 2026 deadline.

Prohibited practices (in force since February 2, 2025): Eight categories are outright banned. These include untargeted scraping of internet or CCTV footage to build facial recognition databases, emotion recognition in the workplace or educational settings (except for narrow medical/safety applications), and AI systems that exploit cognitive vulnerabilities for behavioral manipulation.1 Violations carry fines up to €35 million or 7% of global annual turnover—whichever is higher.

General Purpose AI (GPAI) models (obligations in force since August 2, 2025): Providers of foundation models—OpenAI, Google, Anthropic, Mistral, Amazon, Microsoft—face transparency requirements, technical documentation obligations, and adherence to a Code of Practice. Twenty-six major providers signed the Code in August 2025.3 The EU AI Office operated an informal grace period through early 2026 but will apply fines from August 2, 2026.

High-risk AI systems (Annex III deadline: August 2, 2026): This is where enterprises face the heaviest lift. AI used in employment decisions, credit scoring, educational assessment, biometric identification, law enforcement, and border control must achieve full conformity—quality management systems, technical documentation, conformity assessments, and registration in the EU database.4

The European Commission’s proposed Digital Omnibus package (November 2025) could extend the Annex III deadline to December 2027 and Annex I product-safety AI to August 2028.5 However, the proposal requires approval from the European Council and Parliament—and legal advisors warn against treating the extension as guaranteed. Prudent organizations plan for August 2026.

| Tier | What It Covers | In Force | Max Penalty |
| --- | --- | --- | --- |
| Prohibited | Social scoring, real-time biometrics, emotion recognition in workplaces | Feb 2, 2025 | €35M / 7% global turnover |
| GPAI Models | Foundation model providers (OpenAI, Google, Anthropic, etc.) | Aug 2, 2025 (enforcement Aug 2026) | €15M / 3% global turnover |
| High-Risk (Annex III) | Employment, credit, education, law enforcement AI | Aug 2, 2026 (proposed delay: Dec 2027) | €15M / 3% global turnover |
| Limited Risk | Chatbots, deepfakes, AI-generated content | Disclosure obligations active | Lower fines |
| Minimal Risk | Spam filters, AI in video games | No specific obligations | N/A |
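
For teams that want to track these dates programmatically, the sketch below encodes the three dated tiers from the table in a hypothetical Python helper. The use-case-to-tier mapping and function names are illustrative assumptions, not an authoritative classification; deciding the tier of a real system requires legal analysis, not a lookup table.

```python
# Illustrative sketch only: encodes the dated tiers from the table above.
# The use-case mapping is an assumption for demonstration purposes.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Tier:
    name: str
    in_force: date                # date obligations or enforcement begin
    max_fine_eur: int             # fine ceiling in euros...
    max_fine_pct_turnover: float  # ...or this share of global turnover, if higher

TIERS = {
    "prohibited": Tier("Prohibited practices", date(2025, 2, 2), 35_000_000, 0.07),
    "gpai": Tier("GPAI models (fines apply from Aug 2, 2026)", date(2025, 8, 2), 15_000_000, 0.03),
    "high_risk": Tier("High-risk, Annex III (proposed delay to Dec 2027 is not yet law)",
                      date(2026, 8, 2), 15_000_000, 0.03),
}

# Hypothetical mapping of example use cases named in this article to tiers.
USE_CASE_TIER = {
    "workplace_emotion_recognition": "prohibited",
    "social_scoring": "prohibited",
    "employment_screening": "high_risk",
    "credit_scoring": "high_risk",
}

def days_until_in_force(use_case: str, today: date) -> int:
    """Days before the tier's obligations bite; negative means already in force."""
    tier = TIERS[USE_CASE_TIER[use_case]]
    return (tier.in_force - today).days

if __name__ == "__main__":
    print(days_until_in_force("employment_screening", date(2026, 3, 1)))          # 154
    print(days_until_in_force("workplace_emotion_recognition", date(2026, 3, 1)))  # negative: already banned
```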

How the US Is Moving in the Opposite Direction

The Trump administration’s approach has three distinct vectors.

Revocation without replacement. Executive Order 14179 (January 23, 2025), “Removing Barriers to American Leadership in Artificial Intelligence,” dismantled Biden’s comprehensive AI oversight framework while providing no equivalent safety infrastructure.6 The new order articulates broad aspirations—“sustain and enhance America’s global AI dominance”—but delegates specifics to an AI Action Plan due within 180 days; as of this writing, that plan has produced no binding regulations.

Pre-empting state AI laws. The December 11, 2025 executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” targets state-level AI governance directly.7 Its mechanisms include:

  • An AI Litigation Task Force within the Department of Justice, empowered from January 10, 2026, to challenge state AI laws as unconstitutional burdens on interstate commerce
  • A directive to the FTC to classify state-mandated bias mitigation requirements as per se deceptive trade practices
  • Conditional federal funding: states with “onerous AI laws” risk losing access to broadband infrastructure grants under the BEAD program
  • An FCC proceeding to establish a federal AI disclosure standard that would preempt state equivalents

States pushing back. California (TFAIA, effective January 1, 2026), Texas (TRAIGA, effective January 1, 2026), and Colorado (SB 24-205, effective June 30, 2026) have enacted AI laws with meaningful consumer protections.8 Legal analysts note that, absent congressional action, existing state laws likely remain enforceable in the near term—meaning US companies face a fractured domestic landscape even as the federal government attempts consolidation.

The Compliance Nightmare for Global Companies

For any AI company operating in both markets, the structural problem is now concrete.

The EU AI Act applies extraterritorially: any AI system deployed to EU users, regardless of where the provider is headquartered, falls under its scope. A US company selling an AI-powered HR tool to a European client must register in the EU database, maintain a risk management system, conduct conformity assessments, and disclose technical documentation—all before the system goes live in Europe.
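
To make that pre-launch sequencing concrete, here is a minimal sketch of a hypothetical release-gate check for such a system. The four items mirror the obligations listed above; the field names and gating logic are illustrative assumptions, not a statement of the Act’s full requirements.

```python
# A minimal sketch, assuming a hypothetical release gate for a US provider
# shipping a high-risk AI system (e.g., an HR screening tool) to EU customers.
# Field names and logic are illustrative, not legal advice.
from dataclasses import dataclass

@dataclass
class EUReadiness:
    registered_in_eu_database: bool      # system registered in the EU database
    risk_management_system: bool         # documented, maintained risk management process
    conformity_assessment_passed: bool   # conformity assessment completed for the use case
    technical_documentation_ready: bool  # documentation available for disclosure

def eu_launch_blockers(status: EUReadiness) -> list[str]:
    """Return the obligations still outstanding; an empty list means no blockers."""
    gaps = []
    if not status.registered_in_eu_database:
        gaps.append("register system in the EU database")
    if not status.risk_management_system:
        gaps.append("stand up a risk management system")
    if not status.conformity_assessment_passed:
        gaps.append("complete conformity assessment")
    if not status.technical_documentation_ready:
        gaps.append("finalize technical documentation")
    return gaps

if __name__ == "__main__":
    status = EUReadiness(True, True, False, True)
    print(eu_launch_blockers(status))  # ['complete conformity assessment']
```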

That same US company, operating domestically, faces the inverse pressure: the Trump administration is actively working to invalidate state-level bias testing requirements that partially overlap with EU standards.

The financial reality is stark. Compliance costs for large enterprises managing high-risk AI systems run $8–15 million per deployment, according to industry analysis.9 Gartner projects global AI governance spending will reach $492 million in 2026 and surpass $1 billion by 2030, driven entirely by regulatory divergence.10 Smaller AI providers face an existential calculus: absorb compliance costs, exit the EU market, or consolidate through acquisition.

The market structure implications extend beyond cost. Regulatory divergence creates de facto market segmentation. A model trained on practices acceptable under US law—including certain data acquisition methods or absence of bias auditing—may be categorically prohibited in the EU. Over time, organizations may maintain separate model lineages for separate markets, increasing development costs and introducing consistency risks.
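
One way that per-market split could be wired up is sketched below, assuming a hypothetical deployment router. The per-market policy flags mirror the differences described above (bias auditing, documented data provenance); the lineage names and flags are invented for illustration and are not prescribed by either regime.

```python
# Purely illustrative: separate model lineages maintained per market,
# selected by a hypothetical deployment router. Names and flags are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class LineagePolicy:
    model_lineage: str                # which trained model family serves this market
    bias_audit_required: bool         # pre-deployment bias testing and documentation
    data_provenance_documented: bool  # training-data acquisition documented for regulators

MARKET_POLICIES = {
    "EU": LineagePolicy("hr-screener-eu-v3", bias_audit_required=True, data_provenance_documented=True),
    "US": LineagePolicy("hr-screener-us-v3", bias_audit_required=False, data_provenance_documented=False),
}

def select_lineage(deployment_market: str) -> str:
    """Route a deployment to the model lineage maintained for its market."""
    return MARKET_POLICIES[deployment_market].model_lineage

if __name__ == "__main__":
    print(select_lineage("EU"))  # hr-screener-eu-v3
```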

What Practitioners Need to Know

The GPAI grace period is ending. The AI Office’s informal collaborative period with Code of Practice signatories expires August 2, 2026. Companies like OpenAI and Anthropic that signed the Code have until then to achieve full technical compliance—after which the AI Office will use enforcement powers, including fines.3

The Digital Omnibus delay is not law yet. The proposal to extend Annex III deadlines to December 2027 must pass the European legislative process. Organizations treating the delay as certain are taking compliance risk.5

The US litigation landscape will decide state law survival. The DOJ’s AI Litigation Task Force has authority to challenge state laws in federal court from January 2026. Whether Colorado, California, or Texas AI laws survive constitutional challenge depends on cases that have not yet been filed or decided.

Governance platform investment is not optional. Gartner data indicates organizations deploying AI governance platforms are 3.4 times more likely to achieve high governance effectiveness than those managing compliance through manual processes.10 Compliance complexity across these jurisdictions has outgrown what spreadsheets and manual processes can track.

The Bigger Picture: Will AI Development Balkanize?

The fragmentation is not only regulatory. It reflects genuinely different theories of AI governance.

The EU’s approach assumes AI poses systemic risks that markets cannot self-regulate—hence binding rules, third-party conformity assessments, and direct liability for deployers. The Trump administration’s approach treats regulation as a barrier to competitiveness, betting that American AI dominance is better served by removing friction than by imposing safety floors.

Both positions have internal logic. The EU is trading development speed for legal certainty and consumer trust. The US is trading safety infrastructure for deployment velocity.

What neither position accounts for is the market reality: global AI development does not respect national borders. Foundation models are trained by US companies and deployed by EU enterprises. The compliance architecture required to bridge these regimes will impose overhead on everyone—Gartner pegs AI governance spending at $492 million in 2026 alone—and may ultimately accelerate market concentration toward large players who can absorb it.

By early 2026, more than 72 countries had launched over 1,000 AI policy initiatives, with no harmonization mechanism in sight.9 The Balkanization is not a hypothetical future state. It is the current operating environment.


Frequently Asked Questions

Q: Does the EU AI Act apply to US companies? A: Yes. The EU AI Act has extraterritorial reach—any AI system placed on the EU market or deployed to EU users triggers compliance obligations, regardless of where the provider is headquartered. US companies selling AI tools to European customers must comply.

Q: What changed when Trump revoked Biden’s AI executive order? A: Biden’s EO 14110 mandated safety testing for high-risk models, required agencies to develop AI risk standards, and established structured federal oversight. Trump’s EO 14179 revoked this entirely, replacing it with a pro-innovation directive that explicitly avoids binding safety requirements—delegating specifics to an Action Plan that remains incomplete as of March 2026.

Q: Is the August 2026 EU high-risk AI deadline definitely happening? A: Uncertain. The European Commission proposed the Digital Omnibus package in November 2025 to delay Annex III compliance to December 2027. However, the proposal must pass the European Council and Parliament. Legal advisors consistently recommend treating August 2026 as the binding deadline until a delay is formally enacted.

Q: How should a global AI company structure its compliance program given the divergence? A: The pragmatic approach is to build to EU standards as the global floor. EU requirements are more demanding and more clearly defined than equivalent US state laws. Systems passing EU conformity assessment will satisfy most domestic US requirements where they exist. Maintaining separate compliance architectures per jurisdiction is possible but significantly more expensive—$8–15 million per high-risk deployment according to current industry estimates.

Q: Are US state AI laws enforceable despite Trump’s executive order targeting them? A: Currently yes, absent congressional legislation preempting them. The December 2025 executive order directs the DOJ to challenge state laws in court and the FTC to reclassify state bias requirements—but executive orders cannot unilaterally override state laws. California, Texas, and Colorado AI laws remain active at time of writing, and federal courts have not yet ruled on the constitutional challenges.




Footnotes

  1. European Commission. “Article 5: Prohibited AI Practices.” EU Artificial Intelligence Act. https://artificialintelligenceact.eu/article/5/

  2. Wiley Law. “President Trump Revokes Biden Administration’s AI EO: What To Know.” January 2025. https://www.wiley.law/alert-President-Trump-Revokes-Biden-Administrations-AI-EO-What-To-Know

  3. Latham & Watkins. “EU AI Act: GPAI Model Obligations in Force and Final GPAI Code of Practice in Place.” https://www.lw.com/en/insights/eu-ai-act-gpai-model-obligations-in-force-and-final-gpai-code-of-practice-in-place

  4. Modulos. “352 Days to Compliance: Why EU AI Act High-Risk Deadlines Are Already Critical.” https://www.modulos.ai/blog/eu-ai-act-high-risk-compliance-deadline-2026/

  5. Latham & Watkins. “Digital Omnibus: EU Commission Proposes to Streamline GDPR and EU AI Act.” https://www.lw.com/en/insights/digital-omnibus-eu-commission-proposes-to-streamline-gdpr-and-eu-ai-act

  6. White House. “Removing Barriers to American Leadership in Artificial Intelligence.” Executive Order 14179, January 23, 2025. https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/

  7. Sidley Austin. “Unpacking the December 11, 2025 Executive Order: Ensuring a National Policy Framework for Artificial Intelligence.” https://www.sidley.com/en/insights/newsupdates/2025/12/unpacking-the-december-11-2025-executive-order

  8. King & Spalding. “New State AI Laws are Effective on January 1, 2026, But a New Executive Order Signals Disruption.” https://www.kslaw.com/news-and-insights/new-state-ai-laws-are-effective-on-january-1-2026-but-a-new-executive-order-signals-disruption

  9. Bloomsbury Intelligence and Security Institute. “Global Fragmentation of AI Governance and Regulation.” https://bisi.org.uk/reports/global-fragmentation-of-ai-governance

  10. Gartner. “Global AI Regulations Fuel Billion-Dollar Market for AI Governance Platforms.” February 17, 2026. https://www.gartner.com/en/newsroom/press-releases/2026-02-17-gartner-global-ai-regulations-fuel-billion-dollar-market-for-ai-governance-platforms
