The Brussels Effect's Missing Ingredient: Implementation
With the EU AI Act's first enforcement deadline looming and the Omnibus deal scrambling the compliance timeline, the chasm between ambitious rules and actual enforcement capacity is growing fast.
The date is 2 August 2026. On that day, the first tranche of high-risk AI system obligations under the EU AI Act becomes fully enforceable, a moment that compliance teams in financial services, recruitment, education, and critical infrastructure have circled in red for eighteen months. But on 7 May 2026, Parliament and Council negotiators reached a provisional agreement that simultaneously pushed the full high-risk deadline to December 2027 and added a Union-wide ban on AI nudification tools. The deal, reported by Foo Yun Chee at Reuters, marks the culmination of a months-long deadlock inside the Omnibus on AI simplification package, and it reveals something structural about the Brussels Effect that the term's boosters rarely acknowledge: the machinery that produces the regulation is far messier than the label suggests.
The phrase 'the Brussels Effect,' coined by Columbia Law professor Anu Bradford in her 2020 book of the same name, describes the EU's ability to externalise its regulatory standards through market size alone. A company in Seoul or São Paulo that wants access to 450 million European consumers designs its product to comply with EU law; that compliance becomes the default for production runs everywhere. The mechanism is real, and it works. GDPR has spawned copycat legislation from Brazil's LGPD to India's Digital Personal Data Protection Act to California's CPRA. Yet as the AI Act moves from adoption to enforcement, the gaps between what the Brussels Effect carries and what it leaves behind are becoming impossible to ignore. The Stanford 2026 AI Index Report, published in April, captures the widening chasm: AI now scales faster than the institutions built to govern it. The report documents a governance landscape where 32 countries have passed AI-related legislation since 2023, but only a handful have built the regulatory infrastructure to enforce it.
The 7 May agreement, negotiated through the early hours of Thursday morning and first reported in detail by Jedidiah Bracy at the International Association of Privacy Professionals, illustrates the tension neatly. The Parliament and Council agreed to amend the AI Act to clarify its overlap with existing machinery rules, to delay compliance timelines for Annex III high-risk systems by up to sixteen months, and to introduce a blanket prohibition on AI nudification applications. The deal was reached only after an earlier round of talks collapsed in late April, a failure Gyana Swain documented at Computerworld on 29 April, when member states and Parliament could not bridge differences over how far the simplification exercise should go. The Commission's original Omnibus proposal had sought to reduce reporting burdens on SMEs; the Parliament's negotiating team pushed back on what they saw as a dismantling of core protections.
For anyone tracking the file, the procedural calendar now reads as follows: the provisional agreement must be confirmed by COREPER, the Committee of Permanent Representatives of the member states, before the end of May; Parliament's IMCO and LIBE committees are expected to hold a joint vote by mid-June; and the final plenary vote in Strasbourg is pencilled for the July I session. Assuming adoption, the amended text pushes full Annex III high-risk obligations to December 2027, while the transparency requirements for general-purpose AI models remain on track for the original August 2026 date. That staggered timeline is itself a quiet admission that the EU's regulatory ambition outpaces its administrative capacity. National competent authorities in at least eight member states have yet to designate their AI Act enforcement bodies, according to Commission desk officers who spoke on background; in three of those states, the delay is tied to budgetary disputes between ministries that cannot agree on who pays for the new supervisory function.
The gap between text and rhetoric matters here. When Commission Executive Vice President Henna Virkkunen told reporters after the 7 May agreement that the EU is 'setting the global benchmark for trustworthy AI,' she was describing an aspiration that the implementing acts do not yet fully support. The AI Act's Article 6, which defines high-risk classification, delegates significant interpretive authority to the Commission through a cascade of delegated and implementing acts; at least fourteen such acts are anticipated before the December 2027 deadline, according to parliamentary advisors close to the file. Each one opens a new comitology procedure in which member state representatives can amend, delay, or block the Commission's draft. The benchmarking that Virkkunen invokes depends on a pipeline of secondary legislation that has barely begun to move.
'I think the Commission genuinely believed the Omnibus would signal to Washington and Beijing that Europe is serious about innovation. Instead, what the rest of the world saw was an eighteen-month legislative hold-up over machinery-directive overlaps and SME carve-outs. That is not the kind of regulatory clarity that inspires voluntary adoption elsewhere,' said Dr. Aleksandra Kuczerawy, senior researcher at the KU Leuven Centre for IT and IP Law.
What the Brussels Effect does carry, and carries effectively, is broad normative architecture. Risk-based frameworks, human-in-the-loop requirements, transparency obligations, and fundamental rights impact assessments have migrated from EU texts into South Korea's AI Basic Act, into Brazil's draft AI regulation currently before the Senate, and into the voluntary commitments that the US White House extracted from fifteen leading AI firms in 2023 and renewed in 2025. Anu Bradford herself, testifying before the US Senate Committee on Commerce in March 2026, noted that the AI Act's three-tiered risk classification had become 'the de facto global taxonomy' even before the law's enforcement date arrived. In that sense, the Brussels Effect operates as a normative export mechanism regardless of enforcement timelines.
What it does not carry is the operational machinery that makes broad norms into binding rules. The GDPR taught this lesson the hard way: it took five years and a cross-border enforcement crisis before the European Data Protection Board developed the procedural rules needed to make the one-stop-shop mechanism function. National data protection authorities in Ireland, Luxembourg, and, most contentiously, in the newer member states proved systematically under-resourced relative to the caseload generated by hosting Europe's largest tech firms. The AI Act inherits this structural weakness and compounds it. Each member state designates its own market surveillance authority for AI; coordination happens through a newly created European Artificial Intelligence Board, the EAIB, which has no direct enforcement power and can only issue opinions. The implementing act that defines the EAIB's operating procedures has not yet been tabled.
The Export That Isn't
This is where the Brussels Effect narrative and the regulatory reality diverge most sharply. The idea that a company in Singapore or Nairobi preemptively complies with the EU AI Act because it fears market exclusion assumes that the Act's requirements are legible, stable, and enforced. On the current evidence, none of those conditions fully holds. The 7 May Omnibus deal changed compliance dates that had been fixed in the original text for less than two years. The machinery-rule overlap, a technically narrow question about whether AI systems embedded in industrial equipment fall under the AI Act, the Machinery Regulation, or both, consumed six months of legislative bandwidth and very nearly derailed the entire simplification package. If the drafters in Brussels cannot agree on which law applies to an AI-enabled robotic arm, the compliance team in São Paulo faces an impossible forecasting exercise.
The pattern is visible beyond the AI file. When the Digital Services Act entered full application in February 2024, observers predicted a rapid global convergence on platform accountability standards. Three years on, the record is mixed. Australia's Online Safety Act, amended in 2025, borrows the DSA's systemic-risk assessment framework but rejects its very-large-online-platform designation thresholds, setting a domestic bar that captures only five companies instead of the DSA's twenty-three. India's Digital India Act, still in draft, copies the DSA's transparency reporting obligations while pointedly omitting its independent audit requirements. The United Kingdom's Online Safety Act, fully in force as of early 2026, diverges from the DSA on content moderation duties for private communications, a split that Ofcom and the Commission have tried, and so far failed, to bridge through a bilateral memorandum of understanding.
There is a counter-argument, and it is worth taking seriously. The Brussels Effect does not require perfect regulatory coherence to work. It requires only that the cost of non-compliance with EU rules exceeds the cost of maintaining separate production lines or compliance frameworks for different markets. On that metric, the AI Act may prove effective even in its current, amended form. A US-based foundation-model developer that wants to sell into the European market must meet the Act's transparency and risk-management requirements for general-purpose AI systems. Those requirements bite in August 2026; the Omnibus deal did not change that date. And because training separate models for different regulatory environments is, for most firms, economically irrational, the EU's requirements become the de facto global standard for the product. 'That is the core mechanism,' Bradford told the Senate committee. 'It does not depend on foreign legislatures adopting EU law. It depends on companies making rational economic choices.'
The mechanism works only when EU market share is sufficient to drive that calculation. On the general-purpose AI front, the picture is complicated. The largest frontier models are developed by firms whose primary customers and revenue sources are in the United States. OpenAI, Anthropic, Google DeepMind, and Meta all derive more than sixty percent of their revenue from North American clients, according to figures compiled by the Stanford AI Index. Compliance with the EU AI Act is an operating cost they can absorb; it is not a market-access question on which their survival depends. The same cannot be said for European AI startups, for whom the Act's conformity-assessment costs, estimated by the Commission's own impact assessment at €10,000 to €30,000 per high-risk system, represent a genuine barrier. The Brussels Effect, in other words, may discipline American and Chinese multinationals while simultaneously constraining the very European competitors the Omnibus was designed to protect.
What to Watch for After the Summer
Parliament's August recess will not stop the clock. When MEPs return in September, three developments deserve close attention. First, the Commission is expected to table the long-awaited implementing act on general-purpose AI model evaluations, which will define how the AI Office assesses systemic risk in frontier models. The draft has been circulating among desk officers since April. If the Commission holds firm on the draft's approach, any resulting standoff with frontier-model developers will test the Brussels Effect's market-access logic more directly than any trade negotiation since the GDPR adequacy decisions.
Second, the EAIB must convene its inaugural meeting by October. The board's composition, one representative per member state plus a Commission chair, mirrors the European Data Protection Board, whose early years were consumed by procedural disputes that delayed enforcement on several landmark cases. Whether the EAIB can avoid that fate depends on the leadership of its first chair, widely expected to be a Commission official drawn from DG CNECT, and on whether the larger member states, Germany, France, and Italy, choose to invest political capital in making the board functional or to treat it as a coordination forum of secondary importance. The signals from Berlin and Paris are, so far, muted; both governments are preoccupied with domestic political calendars and have assigned AI Act implementation to junior ministries with limited inter-ministerial reach.
Third, the adequacy assessment question looms. Under the AI Act, the Commission may determine that a third country's AI governance framework provides an equivalent level of protection, allowing AI systems certified in that jurisdiction to enter the EU market with reduced additional requirements. The procedure is modelled on the GDPR's adequacy mechanism, which has produced fifteen adequacy decisions since 2018, but only after years of negotiation, and with Japan and South Korea among the most protracted processes. The Commission's AI Office has signalled that it intends to open adequacy discussions with the United Kingdom, Canada, and Singapore by early 2027. The UK's departure from the EU makes that negotiation particularly delicate; equivalence determinations require not just comparable rules but comparable enforcement, and the UK's new Digital Regulation Cooperation Forum does not yet have the statutory powers of the EU's AI Office.
Beneath all of this lies a deeper structural point about what the Brussels Effect actually exports. It exports regulatory text: directives, regulations, implementing acts, delegated acts, guidelines, Q&A documents, and the accumulated interpretive practice of the Court of Justice of the European Union. What it cannot export is the institutional ecosystem that gives that text meaning, the multilingual legal order, the preliminary reference procedure, the interplay between the Commission's enforcement powers and the member states' administrative apparatus, the role of the European Ombudsman, and the dense network of civil-society organisations, trade associations, and academic centres that pressure-test every implementing measure before it reaches the Official Journal. A jurisdiction that copies Article 5 of the AI Act without building a market surveillance authority, without creating an independent administrative review mechanism, and without funding civil-society litigation has not adopted the Brussels Effect; it has adopted a list of prohibited practices that will, in practice, go unenforced.
South Korea's AI Basic Act, enacted in December 2025, offers the clearest recent illustration. The law imports the EU's risk-based classification almost verbatim and establishes a Presidential AI Committee with advisory powers. But it assigns enforcement to existing sectoral regulators, the Personal Information Protection Commission for privacy, the Korea Communications Commission for content, without creating a dedicated AI oversight body with the power to conduct ex officio investigations or impose structural remedies. The result is a framework that looks, on paper, like the EU's AI Act and that functions, in practice, like a coordination mechanism among agencies that were already stretched before AI entered their portfolios. Kwon Young-min, a senior fellow at the Korea Information Society Development Institute, told a Seoul conference in March that 'the institutional transplant has taken, but the immune system, the enforcement architecture, was left behind in the operating theatre.'
For in-house counsel at affected companies, the practical implication is straightforward: regulatory convergence on paper does not equal harmonised compliance on the ground. A multinational that satisfies the AI Act's conformity assessment for a high-risk HR-screening tool may still face separate, non-identical requirements in South Korea, Brazil, India, and California. The Brussels Effect reduces the number of fundamentally different regulatory frameworks a company must navigate; it does not reduce that number to one. The Stanford AI Index documents that, as of April 2026, at least eleven jurisdictions maintain AI-specific transparency requirements that differ materially from the EU's. The global regulatory patchwork has become a patchwork of EU-inspired-but-not-identical regimes, each of which imposes its own documentation, reporting, and audit obligations.
The next checkpoint on the calendar is 2 August 2026. On that date, the general-purpose AI provisions take effect regardless of what happens to the Omnibus implementing legislation; the Commission's AI Office is expected to publish its first set of compliance templates in the last week of July. The real test of the Brussels Effect will not be whether foreign governments pass AI laws that cite the EU's text, many already have, but whether the AI Office, under-resourced and operating without a confirmed permanent director, can establish itself as a regulator that the world's largest technology firms take seriously. If it can, the Brussels Effect may yet prove durable in AI as it has in data protection. If it cannot, the 2020s will be remembered as the decade Europe wrote the rules that everyone else read, and no one else enforced.