US vs EU AI Playbooks – Deregulation vs Trustworthy‑by‑Design

Lately I’ve been brushing up on the EU AI Act and how it meshes with the GDPR (see last week’s post). During my usual 30‑second LinkedIn pit‑stop (yeah, a bit sad, I should do anything else during my breaks, but I almost always spot something worth sharing or digging into), I stumbled on a post showing off the brand‑new America’s AI Action Plan signed by Trump this year. I’d never really looked at it—only heard snippets—so I unleashed my trusty AI sidekick to pull out the highlights and, hey, pit them against the European flavour. The interesting thing is how different the US and EU visions turn out to be.

Artificial Intelligence (AI) has become a strategic pillar for economic growth, national security and global competitiveness. For this reason, both the United States and the European Union have launched comprehensive plans that pursue two common goals through different approaches:

  • Foster innovation and technological leadership – by ensuring companies have access to compute, research capabilities and favourable markets.
  • Mitigate risks – by safeguarding fundamental rights, security and democratic values.

The “America’s AI Action Plan” (White House, July 2025) was drafted to reinforce U.S. supremacy against Chinese and European initiatives, focusing on deregulation, infrastructure and defence‑industry integration.

In parallel, the European strategy combines the Coordinated Plan on AI (2021 review) with the AI Act (EU 2024/1689), aiming to create a reliable, “human‑centric” single market and to mobilise large‑scale public‑private investment.


1 · Key points of the “America’s AI Action Plan” (2025)

Context – Published on 12 July 2025 by the White House, the Action Plan is the first AI strategy of the second Trump administration. It seeks to restore U.S. technological leadership after Chinese and European moves and reflects the “America First” agenda: deregulation, domestic industry centrality and national security. The tone echoes President Trump’s rhetoric of competition with Beijing and defence of free speech.

Pillar I – Accelerate AI Innovation

  • Federal deregulation (“remove red tape”) – An Executive Order instructs federal agencies to review within 90 days any rules that hamper frontier‑model deployment, eliminating redundant licences and authorising royalty‑free use of public datasets.
  • Regulatory sandboxes and centres of excellence – The Department of Commerce and the FTC will run experimental environments with temporary penalty waivers for AI start‑ups, alongside five NSF‑DOE hubs where universities and companies will co‑develop generative models and evaluation methods.
  • Compute access via NAIRR – The National AI Research Resource budget rises to USD 3 billion/year to provide up to 250 PFLOPS of free compute and curated datasets to researchers and SMEs, complemented by federal cloud credits on AWS, Azure and Google.
  • “Worker‑first” upskilling plan and tax incentives – A tax credit of up to USD 12 000 per employee trained on AI skills and Department of Labor vouchers for community‑college courses, targeting the upskilling of 2 million workers by 2028.

Pillar II – Build American AI Infrastructure

  • Fast‑track permits for data centres and chip foundries – Cuts NEPA reviews from 24 to 6 months, creates one‑stop state permit desks and allocates USD 10 billion DOE guarantees to modular micro‑reactors powering AI clusters.
  • “Build, Baby, Build!” programme – A USD 45 billion package co‑funding 765 kV transmission lines, rural fibre rollout and a 25 % CAPEX credit for <3 nm fabs, adopting federal “zero‑trust” security for data centres.

Pillar III – Lead in International AI Diplomacy & Security

  • Stronger export controls – Extends EAR rules to block GPUs >600 TFLOPS to “countries of concern” and requires licences for foreign cloud services delivering >1 PFLOPS for AI training.
  • Allied alignment & countering Chinese influence – Launches the “Chip‑4 on AI” with Japan, Korea and Taiwan for common standards and secure value chains; sets up a NATO task‑force on dual‑use AI threats.

U.S. financial instruments

  • SBIR/STTR – Phase II ceiling raised to USD 3 million with a fast‑track for generative‑AI solutions in defence, healthcare and logistics.
  • CHIPS R&D and fab credits – Additional USD 25 billion for advanced packaging and LLM‑specific chips; 40 % CAPEX tax credit for fabs operational by 2030.
  • Defense Production Act Title III – Authorises up to USD 10 billion in direct wafer and ASIC orders for federal demand, stabilising the national supply chain.
  • NAIRR pilot – Funds USD 3 billion in compute vouchers and de‑identified synthetic datasets (health, climate) prioritised for HBCUs and rural universities.

2 · The European framework

Context – The European setup relies on two complementary texts: the Coordinated Plan on AI (co‑drafted with the Member States) and the EU AI Act. This architecture mirrors the Union’s polycentric nature: Member States retain room to manoeuvre on implementation and co‑funding, making commitment more diffuse and negotiated than in the U.S. approach. It leads to co‑regulation processes and gradual alignment of national laws.

2.1 Coordinated Plan on AI (2021 Review)

  • “Accelerate – Act – Align” objectives – Calls on Member States to double public AI R&D spending between 2021 and 2027, to launch at least one national testbed by 2025 and to align investment plans with EU Data Spaces.
  • European Data Spaces and cloud federation – Foresees 14 sectoral data spaces (health, energy, mobility, agrifood) based on the Gaia‑X framework, financed via the Digital Europe Programme to guarantee interoperability and technological sovereignty.
  • EUR 20 billion per year investment target – Combines EU funds (Horizon, DEP, InvestEU) with national contributions, seeks a 1:3 private leverage and introduces KPIs on deep‑tech start‑ups and SME AI adoption.
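The leverage target is simple arithmetic: reading "1:3" as public:private (my interpretation of the plan's wording), each public euro is expected to attract three private euros, so a EUR 20 billion annual total splits into EUR 5 billion public and EUR 15 billion private. A minimal sketch:

```python
def split_by_leverage(total_eur: float, private_per_public: float) -> tuple[float, float]:
    """Split an annual investment target into public and private shares,
    given how many private euros each public euro is expected to attract."""
    public = total_eur / (1 + private_per_public)
    return public, public * private_per_public

# EUR 20 billion/year target with 1:3 public-to-private leverage:
public_share, private_share = split_by_leverage(20e9, 3.0)
```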

2.2 AI Act (Regulation EU 2024/1689)

  • Outright prohibitions – Article 5 bans practices such as public social scoring and mass biometric surveillance; national competent authorities can order market withdrawal within 48 hours.
  • Risk classification – Annex III lists eight high‑risk areas; imposes ISO 42001 risk management, high‑quality datasets, human oversight and post‑market auditing via the EU AI Database.
  • Fundamental‑Rights Impact Assessment (FRIA) – Public bodies must publish a rights‑impact analysis before deploying a high‑risk AI system and subject it to public consultation; independent review at least biennially.
  • Fines up to 7 % – The most serious breaches can be fined up to 7 % of global annual turnover or EUR 35 million, whichever is higher, with aggravating factors for recidivism and actual harm.
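The "whichever is higher" mechanics behind such caps are easy to get wrong (it is not the smaller of the two amounts), so here is a minimal sketch; the percentage and absolute cap are passed in as parameters, since the exact tier depends on the breach:

```python
def fine_ceiling(turnover_eur: float, pct_of_turnover: float, absolute_cap_eur: float) -> float:
    """Upper bound of an administrative fine under a 'whichever is higher'
    rule: the larger of the absolute amount and the turnover-based amount."""
    return max(absolute_cap_eur, pct_of_turnover * turnover_eur)

# A firm with EUR 2 billion global turnover at the 7 % / EUR 35 million tier:
# the turnover-based amount (7 % of 2 billion, i.e. EUR 140 million) dominates.
big_firm_cap = fine_ceiling(2_000_000_000, 0.07, 35_000_000)

# A firm with EUR 100 million turnover: the absolute cap of EUR 35 million dominates.
small_firm_cap = fine_ceiling(100_000_000, 0.07, 35_000_000)
```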

2.3 AI Act & GDPR interaction

  • GDPR fully applicable – Recital 82 confirms that the AI Act does not provide a legal basis per se to process personal data; GDPR principles of minimisation, lawfulness and privacy‑by‑design remain in force.
  • Sensitive datasets for debiasing – Allowed only if strictly necessary and subject to a DPIA; require pseudonymisation, homomorphic encryption and deletion of data after training.
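Neither regulation prescribes a specific pseudonymisation technique. As one common approach, a keyed hash turns a direct identifier into a stable pseudonym that cannot be reversed from the dataset alone; the sketch below is illustrative (the record fields and key handling are assumptions, and homomorphic encryption, also mentioned above, would need a dedicated library):

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA-256).
    Without the key, the original identifier cannot be recovered from the
    dataset; the key must live in a separate, access-controlled store."""
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative debiasing workflow: keep the sensitive attribute needed for
# the bias analysis, but replace the direct identifier with a pseudonym.
key = b"store-me-separately"  # hypothetical key, not for production use
records = [{"name": "Alice", "gender": "F"}, {"name": "Bob", "gender": "M"}]
pseudonymised = [
    {"id": pseudonymise(r["name"], key), "gender": r["gender"]} for r in records
]
```

A keyed hash (rather than a plain one) matters here: with an unkeyed hash, anyone could re-identify people by hashing candidate names and comparing.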

3 · Approach comparison

  • Strategic vision – US: technological and military dominance; maximise market competitiveness and counter China. EU: “human‑centric” AI reconciling innovation, fundamental rights and the single market.
  • Regulation – US: deregulation with voluntary guidelines (e.g., NIST) and tax incentives; regulatory review every 3 years. EU: binding legal framework with targeted bans and risk‑proportional requirements; regulation review every 5 years.
  • Data governance – US: prevalence of public “open government” datasets, CLOUD Act protections; strong First Amendment. EU: data sovereignty through European Data Spaces and GDPR principles (minimisation, lawfulness, purpose limitation).
  • Freedom of expression – US: protection of open‑source and “open‑weight” models; no federal watermark mandate. EU: mandatory watermarking for deepfakes, transparency on synthetic content and right to meaningful information.
  • Geopolitics – US: export controls on advanced chips, “Chip‑4” partnerships and NATO task‑force; goal to contain Chinese influence. EU: normative diplomacy, promoting EU standards at UN/OECD level and mutual‑recognition deals with trusted partners.
  • Public funding – US: targeted instruments (CHIPS, SBIR, DPA) designed to mobilise private capital and reduce market barriers. EU: multi‑annual framework programmes (Horizon, DEP, RRF) with mandatory Member‑State co‑funding and EU‑level KPIs.

These are clearly two very different visions of the same challenges.


4 · Funding programmes available

Before we dive into the acronyms and euro‑ or dollar‑denominated billions, a quick orientation: Washington prefers rapid‑fire grants and tax breaks that de‑risk private capital, while Brussels dishes out hefty, multi‑year envelopes that come with strings (and a fair bit of paperwork) but reward cross‑border teamwork.

4.1 United States

  • SBIR / STTR – Competitive grants delivered in three phases (Proof‑of‑Concept, Prototype, Commercialisation) of up to USD 3 million. A fast‑track lane favours generative‑AI solutions for defence, healthcare and logistics.
  • CHIPS and Science Act – R&D and fab credits – USD 52 billion in total (2022‑30), of which USD 25 billion earmarked for R&D in advanced packaging, design and LLM‑oriented chips. A 40 % CAPEX tax credit is available for fabs that start production before 2030.
  • Defense Production Act, Title III – Revolving fund of up to USD 10 billion for direct federal orders of wafers, ASICs and high‑end memory. Advance payments stabilise demand along the strategic supply chain.
  • National AI Research Resource (NAIRR) – USD 3 billion per year to provide up to 250 PFLOPS of free compute, curated or synthetic de‑identified datasets (health, climate, defence) and engineering support. 15 % of compute cycles are ring‑fenced for HBCUs and rural institutions.
  • DARPA “AI Forward” programme – Three‑year challenges worth USD 150 million each on robustness, interpretability, conceptual AI and bio‑security, offering Other Transaction Authority contracts and direct collaboration with national labs.

4.2 European Union

  • Horizon Europe (Cluster 4 “Digital‑Industry‑Space”) – Overall budget of ≈ EUR 95 billion (2021‑27). The co‑programmed AI‑Data‑Robotics Partnership channels ≈ EUR 1 billion per year (50 % industry‑matched) into foundation models, edge‑AI and AI‑enabled manufacturing.
  • Digital Europe Programme (DEP) – EUR 7.5 billion for HPC, AI, cybersecurity and digital skills, including calls for Testing & Experimentation Facilities (TEFs) with grants up to EUR 30 million per testbed.
  • Recovery & Resilience Facility (RRF) – Member States must allocate at least 20 % of their plans to the digital transition. This unlocks ≈ EUR 134 billion for federated cloud, data spaces and AI solutions for the public sector and SMEs.
  • EuroHPC Joint Undertaking – Co‑funds exascale supercomputers (LUMI, LEONARDO, JUPITER). Academic and industrial projects can access cycles either free of charge or at marginal cost via JU calls.
  • Key Digital Technologies JU / IPCEI Micro‑electronics & Communication – Up to EUR 8.1 billion in approved State aid (2023), leveraging ≈ EUR 13.7 billion in industrial co‑investment for chips, sensors and edge‑AI components.
  • European Digital Innovation Hubs (EDIHs) – Network of >150 hubs offering test‑before‑invest, training and matchmaking services. SMEs can receive DEP vouchers of up to EUR 15 000 to experiment with AI.

5 · Conclusions

The United States is opting for speed and industrial supremacy, relying on deregulation, targeted fiscal incentives and a strong geopolitical focus. The goal is to scale AI faster than any competitor while securing the domestic hardware supply chain.

The European Union prioritises a “trustworthy‑by‑design” approach, coupling binding rules with shared, long‑term investment. Although compliance costs will be higher, the EU could gain global normative leadership.

Opportunities for operators – Both jurisdictions offer deep funding pools. Choosing where to base an AI venture depends on the business model and regulatory risk appetite. Companies with expertise in privacy and safety may gain an edge in the EU market, while projects requiring rapid scale‑up or compute‑intensive infrastructure might prefer the U.S. ecosystem.
