Ten Years of the Brussels Effect: Europe's Rulebook Exports and Limits
The EU's AI Act Omnibus deal and California's adoption of transparency rules show that the global spread of European tech regulation is far from automatic, with enforcement gaps and political pushback shaping its real reach.
In the early hours of 7 May 2026, after roughly nine hours of closed-door negotiations, representatives of the European Parliament, the Council of the European Union, and the European Commission reached a provisional agreement on the Omnibus on AI simplification package. The deal pushes the compliance deadline for high-risk artificial intelligence systems under the EU AI Act (Regulation 2024/1689) from August 2026 to December 2027, extends sector-specific obligations for toys and medical devices to August 2028, and inserts an explicit prohibition on so-called nudification applications. The agreement, as The Next Web reported, ends a months-long deadlock that had left compliance officers across three continents staring at a regulatory calendar they could not plan against.
The Omnibus deal is the latest data point in a decade-long experiment in regulatory globalisation that Columbia Law School professor Anu Bradford named, in her 2020 book, The Brussels Effect: How the European Union Rules the World. The core mechanism is straightforward: the European Union is a market of roughly 450 million consumers with a gross domestic product exceeding €16 trillion. When Brussels writes a rule, multinational firms often find it cheaper to adopt that standard globally than to maintain separate production lines, separate compliance teams, and separate product architectures for the European market alone. The effect does not require any foreign legislature to vote; it travels through supply chains, terms of service, and internal corporate policy.
A decade of evidence shows the Brussels Effect is real but narrower than the shorthand implies. It has carried the General Data Protection Regulation (GDPR) into the privacy laws of Brazil, Japan, South Korea, and at least a dozen other jurisdictions. It has carried the Digital Markets Act's (DMA) interoperability logic into competition bills debated in New Delhi, London, and Tokyo. And, as of this year, it has carried the AI Act's transparency architecture directly into California.
The transparency template travels west
On 1 January 2026, California's AI transparency regulations took effect, requiring developers of large-scale generative AI systems to disclose training data summaries and to label AI-generated content. In a legal analysis published in February, the law firm Morrison Foerster noted that the California rules "will bear a striking resemblance" to the EU AI Act's transparency provisions, framing the development explicitly as a "Return of the Brussels Effect." The analysis, carried on JD Supra, observed that enterprises with global footprints would recognise the California requirements as functionally equivalent to what Brussels had already demanded.
California is not alone. The New York RAISE Act and California's SB 53, both advancing through committee in spring 2026, establish standards and reporting requirements for catastrophic AI risks that borrow heavily from the EU AI Act's risk-classification framework, according to reporting from the International Association of Privacy Professionals. South Korea's AI Basic Act, which the National Assembly passed in late 2025 and which entered its implementation phase in early 2026, similarly adopts a risk-tiered architecture, a structure that Korean parliamentary advisors have acknowledged was modelled in part on the EU approach, as the IAPP detailed in a 6 May analysis.
What makes these cases instructive is that none of them required a treaty, a bilateral agreement, or even diplomatic coordination. In each instance, legislators and regulators in the adopting jurisdiction studied the EU text, extracted the provisions that fit their domestic political economy, and discarded the rest. The Brussels Effect, in practice, is not a photocopier; it is a menu.
The DMA tells a parallel story. As The Next Web reported on 23 April, the European Commission is drafting binding measures that would require Google to grant rival AI assistants (such as OpenAI's ChatGPT and Anthropic's Claude) the same operating-system-level access on Android that Google's own Gemini assistant enjoys. The decision, due by July 2026, would mark the first time the DMA has been applied to compel platform-level interoperability for artificial intelligence. Google has not publicly commented on the draft measures; Apple, however, echoed Google's opposition, with the company's chief compliance officer warning that the DMA's review process threatens user privacy, according to reporting carried by MSN.
"The vote extends a period of uncertainty for businesses operating in Europe, which have already faced delays after the EU missed its own deadlines to publish key guidance and changed elements of the law." Robert Hart, reporter, The Verge (26 March 2026)
What the Brussels Effect does not carry
For all the regulatory export happening at the level of statutory text, three things are conspicuously not travelling: enforcement capacity, administrative coherence, and political acquiescence from Washington.
Begin with enforcement capacity. The European Commission has fined United States technology firms more than $7 billion over the past two years under the GDPR, the DMA, and competition law, CNBC reported on 10 April. The figure captures decisions against Meta, Google, Apple, and others, and it represents a level of administrative resolve that no other jurisdiction has matched. Brazil's data protection authority, the ANPD, has issued fewer than two dozen fines since the LGPD entered into force in 2020. India's Data Protection Board is still staffing up. South Korea's Personal Information Protection Commission has been active but operates at a fraction of the Commission's budget. The text may travel; the willingness and ability to enforce it, at scale, does not.
The administrative coherence problem is visible inside the EU itself. The European Parliament's 26 March vote to delay key AI Act deadlines, covered by The Verge, came after the Commission repeatedly missed its own timetables for publishing the harmonised standards and codes of practice that businesses needed to comply. The AI Office, the body charged with overseeing the AI Act's implementation, had not yet issued final guidance on what constitutes a high-risk system when the original August 2026 deadline was four months away. The Omnibus deal of 7 May formalised the delays, but it did not resolve the underlying capacity gap.
The MEPs and committee staff who worked the file acknowledge on the record in committee hearings (though not always in plenary) that the AI Act's original timeline was set before the complexity of the implementing acts was fully understood. The rapporteurs who shepherded the text through the 2023-2024 trilogue negotiations built a regulatory architecture that presumed a standards-development process running in parallel. That process lagged. The result is that the EU's most ambitious digital regulation since the GDPR enters force in slow motion.
Then there is the political pushback from Washington. The Trump administration has escalated its rhetoric against EU digital regulation throughout 2025 and into 2026, framing the fines, the DMA interoperability mandates, and the AI Act's transparency requirements as extraterritorial overreach. The CNBC report cited unnamed administration officials describing the fines as a "tax on American innovation." TikTok, owned by ByteDance, made a last-ditch challenge to its designation as a DMA gatekeeper before the Court of Justice of the European Union on 12 May, Reuters reported. The challenge is procedural in form but strategic in substance: a test of whether the Commission's gatekeeper designation methodology can survive judicial scrutiny at the highest level.
Apple has been no less combative. In a statement that circulated widely in early May, the company's chief compliance officer described the DMA's review process as a privacy threat, according to MacTech, aligning Apple with Google's position that the Commission's interoperability demands would undermine the security architecture of their respective operating systems. The arguments are substantive; they are also a signal that the largest American technology platforms intend to litigate the DMA's application for years, not months.
The Brussels Effect, in Bradford's original formulation, depended on a set of conditions: a large, wealthy single market; a regulatory apparatus with sufficient capacity to write rules and enforce them; and a political environment in which the major trading partners did not retaliate with their own punitive measures. The first condition holds. The second is increasingly stretched. The third is under active pressure.
A less discussed limitation concerns the parts of the EU rulebook that foreign jurisdictions explicitly reject. The AI Act's prohibition on social scoring systems, drafted with China's social credit infrastructure in mind, has been adopted almost nowhere outside Europe. The DMA's requirement that gatekeepers offer rival search engines equal default status on device setup screens has been studied by regulators in Australia and the United Kingdom but has not been legislated. The provisions that travel are the ones that align with an adopting jurisdiction's existing policy preferences; the ones that do not are quietly ignored.
Where does the file stand now? The 7 May Omnibus deal must still be formally adopted by the Parliament in plenary and by the Council. The AI Office has indicated that it expects to publish its long-awaited guidance on high-risk classification by September 2026, giving enterprises roughly fifteen months to prepare for the revised December 2027 deadline. The DMA's Android interoperability decision is expected by July 2026. The Commission's review of the DMA's effectiveness, required under Article 53 of the regulation, is due before the end of the year and will shape whether additional services are designated as core platform services. The TikTok gatekeeper challenge at the CJEU will be argued through the autumn.
What the calendar reveals is a regulatory project that is simultaneously more influential and more constrained than the Brussels Effect label suggests. The EU writes the draft; the world edits it. The next checkpoint is the July DMA decision, the first test of whether the Commission can translate a regulatory principle into a binding, operational order on a competitor's behalf without triggering a trade conflict that makes the 2025 tariff disputes look modest.