UK Regulatory Divergence Sets Global Non-EU Policy Template
As Meta fights Ofcom's fee ruling under the Online Safety Act and the FCA rejects strict AI rules, the UK's regulatory divergence offers a non-EU policy template.
On 7 May 2026, Meta filed a judicial review against Ofcom at the High Court in London, challenging the way the UK communications regulator calculates fees and penalties under the Online Safety Act (OSA). The filing, first reported by Dan Milmo and Aisha Down in The Guardian, contests Ofcom's use of global revenue as the basis for charging regulated platforms, rather than UK-specific turnover. Meta's action is not merely a procedural dispute over accounting methodology; it represents the most significant direct challenge yet to the enforcement architecture of the United Kingdom's flagship post-Brexit digital regulation, and it arrives at precisely the moment policymakers in London are beginning to frame the OSA as a viable export model for jurisdictions that do not wish to adopt the European Union's more prescriptive approach.
The timing matters because the Meta filing is one node in a much larger lattice of developments that, taken together, reveal a British regulatory philosophy consolidating around a distinct identity. It is rules-based but not codified in a single omnibus statute; it is interventionist on competition but restrained on prescriptive AI mandates; and it is, crucially, attracting attention from governments in the Asia-Pacific region and the Gulf states that are searching for a regulatory template less sweeping than the EU AI Act and the Digital Services Act (DSA). Two questions are now being asked in committee rooms and at trade-policy roundtables: can the UK model scale into a third-way regulatory standard, and will it hold together under the strain of well-resourced legal challenges like Meta's?
The Online Safety Act, which received Royal Assent in October 2023 after an unusually protracted legislative journey through both Houses, is the most structurally ambitious piece of platform regulation ever enacted outside the European Union. It imposes statutory duties of care on user-to-user services and search engines, creates a new regulatory regime administered by Ofcom with powers to levy fines of up to ten percent of qualifying worldwide revenue, and introduces criminal liability for senior managers who fail to comply with information requests. The EU's equivalent framework, the DSA, shares many objectives but diverges on enforcement architecture: the European Commission retains direct supervisory authority over the largest platforms and search engines, while Ofcom has been given a more arm's-length mandate under the OSA.
The parliamentary dimension of this emerging strategy received a sharp airing on 24 April 2026, when Chi Onwurah MP, chair of the House of Commons Science, Innovation and Technology Committee, questioned the government's approach to technological sovereignty in a hearing that drew unusually detailed responses from DSIT officials. As reported by Computer Weekly, Onwurah pressed on whether the UK's regulatory divergence was being matched by investment in domestic digital infrastructure, or whether the government was merely building a rulebook without the industrial base to benefit from it. Her intervention matters because the committee exercises significant influence over the legislative timetable for digital regulation, and because it signals growing parliamentary scrutiny of whether regulatory divergence is being pursued as a deliberate competitive strategy or simply as a reactive consequence of Brexit.
Simultaneously, the Competition and Markets Authority (CMA) has been building a portfolio of enforcement actions that underscore the UK's willingness to act where Brussels has been slow to move. On 31 March 2026, the CMA opened a formal investigation into Microsoft's business software ecosystem, citing concerns about licensing practices that may restrict competition in the cloud productivity market, as reported by CNBC. The investigation is being conducted under the Competition Act 1998 rather than the new Digital Markets, Competition and Consumers (DMCC) Act, which received Royal Assent in 2024 and is expected to enter into force in late 2026. The CMA's choice of instrument reflects a deliberate sequencing strategy: establish enforcement credibility under existing powers before activating the more interventionist DMCC regime, which will designate firms with 'strategic market status' and impose conduct requirements analogous to, though not identical with, the EU's Digital Markets Act (DMA).
In financial services regulation, the divergence is subtler but arguably more consequential. The Financial Conduct Authority (FCA) has elected not to introduce a bespoke rulebook for artificial intelligence, opting instead to apply its existing Principles for Businesses and the Senior Managers and Certification Regime (SMCR) to AI-related risks. A legal analysis published by JD Supra on 30 April 2026, authored by the firm's financial regulation practice, notes that the FCA's approach 'contrasts markedly with the EU's AI Act,' which classifies certain financial-sector AI applications as high-risk and imposes mandatory conformity assessments that will become enforceable on 2 August 2026. The FCA's position, the analysis explains, rests on the premise that AI risk is a subcategory of operational risk and does not require a standalone regulatory taxonomy, a view that has found favour with the Monetary Authority of Singapore and the Hong Kong Monetary Authority, both of which have issued AI governance frameworks that emphasise principles over prescriptive rules.
The cross-border divergence phenomenon has now matured to the point where it is attracting attention not only from compliance departments but from shareholder activists and institutional investors. In a JD Supra advisory published on 12 March 2026, Akin Gump Strauss Hauer & Feld LLP argued that 'for much of the past two decades, multinational companies operated under a working assumption that, despite local variation, global regulation was slowly moving toward convergence,' and that this assumption is now 'demonstrably false.' The firm's analysis identifies three categories of cost arising from regulatory divergence: governance overhead as boards struggle to oversee compliance across incompatible regimes, strategic friction as product roadmaps must account for divergent requirements, and a new lever for shareholder activism whereby investors demand disclosures quantifying divergence exposure.
The Environmental, Social, and Governance (ESG) regulatory landscape provides perhaps the cleanest illustration of the multi-speed divergence now under way, because it spans the UK, the EU, and international standard-setters simultaneously. Hogan Lovells, in its UK/EU/International ESG Regulation Monthly Round-Up for April 2026, catalogued a month of parallel developments: the International Sustainability Standards Board (ISSB) advanced its nature-related disclosure standards; Canada formed a new working group on sustainable finance taxonomy; and the UK's Transition Plan Taskforce published its final sector-specific guidance, which is voluntary but widely expected to inform the FCA's forthcoming listing rules. The round-up noted that the EU's Corporate Sustainability Reporting Directive (CSRD) remains the most prescriptive framework globally, and that UK-based firms with EU operations are increasingly running dual compliance programmes, a costly arrangement that lobby groups including TheCityUK have flagged as a competitiveness concern.
Digital finance regulation tells a similar story of fragmentation, with the UK occupying a distinct middle ground. Bloomberg's April 2026 Global Regulatory Brief on digital finance, authored by its Regulatory Affairs Specialists, documented a flurry of activity across multiple jurisdictions: the UK's Prudential Regulation Authority (PRA) issued a consultation on the operational resilience of critical third-party technology providers; the Monetary Authority of Singapore updated its technology risk management guidelines; and the US Securities and Exchange Commission finalised its long-awaited rule on predictive data analytics by broker-dealers. The brief highlighted that the PRA's approach to critical third parties, which will grant the regulator direct oversight powers over cloud providers and data analytics firms that serve systemic UK financial institutions, closely tracks the EU's Digital Operational Resilience Act (DORA), but with a crucial difference: the PRA's framework grants the regulator discretion to designate firms case-by-case, rather than applying a blanket statutory classification, a design choice that the brief characterised as providing 'greater flexibility for firms operating across multiple jurisdictions.'
The Fortune analysis published on 11 April 2026, headlined 'America and Europe have taken different routes on trying to control AI,' crystallised the binary that the UK is actively working to escape. The piece, which ran via Yahoo Finance, contrasted the EU's regulatory-first posture with the United States' more laissez-faire approach, quoting an industry executive who warned that Europe's path leads to 'slow agony.' What the Fortune analysis did not capture, and what the UK's policy apparatus has begun to articulate more explicitly in 2026, is that Britain is attempting to construct a third option: a regime that is enforceable and rights-respecting, but also proportionate and innovation-permissive in ways that the AI Act, at least in industry's assessment, has failed to be.
"The FCA has elected not to introduce a bespoke AI rulebook and has instead applied its existing principles-based framework to AI-related risks, a position that contrasts markedly with the EU's AI Act." (Financial regulation practice, JD Supra analysis, 30 April 2026)
The risk to the UK's emerging model is not that it will be ignored but that it will be contested to the point of paralysis. Meta's judicial review of Ofcom is the most visible example, but it is not the only one. In the payments sector, the FCA announced on 6 May 2026 that it was investigating Mastercard, Visa, and PayPal over suspected anti-competitive conduct, a probe that Reuters reported could serve as a test case for the DMCC Act's market investigation powers once they come online. The cumulative effect of these enforcement actions is a regulatory apparatus that looks energetic from the outside but which, critics argue, is spreading limited resources across too many fronts.
The UK's strategy also faces a structural challenge that the EU does not: the lack of a single market as a bargaining chip. The EU's regulatory influence, often described as the 'Brussels effect,' rests substantially on the fact that any company wishing to access 447 million consumers must comply with EU law, and that many firms find it more efficient to apply EU standards globally than to maintain parallel compliance programmes. The UK, with a domestic market of 68 million, cannot exert the same gravitational pull. Instead, it is pursuing what might be called a 'demonstration effect': proving that its regulatory model produces better outcomes, in the hope that other jurisdictions will adopt similar frameworks voluntarily, creating a de facto standard through alignment rather than coercion. Whether this strategy can succeed depends on outcomes that will not be measurable for years, not months.
Academic comparativists tracking the policy-export question have begun to note an intriguing pattern. The jurisdictions most actively studying the UK's approach, according to research emerging from the Oxford Internet Institute's digital regulation programme, are not mid-sized European neighbours but rather Canada, Australia, Japan, and the Gulf Cooperation Council states, all of which share characteristics that make the EU model unattractive: significant domestic tech sectors they wish to protect, constitutional or political constraints on delegation to supranational bodies, and trade relationships that require regulatory interoperability with both the US and Asian markets. A senior researcher at the programme, who spoke on condition that the work not be cited until peer review is complete, said the UK's 'sectoral, principles-based, regulator-led' model is being studied 'with an intensity that Brussels should not ignore,' precisely because it offers a path to alignment on outcomes without alignment on legal form.
The steel safeguard regulation agreed provisionally by the Council and the European Parliament on 13 April 2026 provides a useful counterpoint, because it shows that on questions of trade defence, the UK and the EU are moving in broadly parallel directions, even if through different legal instruments. The regulation, which addresses the negative trade-related effects of global overcapacity on the EU steel industry, mirrors the UK's own Trade Remedies Authority investigations into steel imports, and both frameworks are responding to the same underlying market distortion: Chinese state-subsidised overproduction. In trade defence, convergence persists because the economic incentives are aligned. In digital regulation, divergence is expanding because the policy preferences have fundamentally parted ways.
What the Meta Filing Reveals About the UK Model's Stress Points
Meta's challenge to Ofcom turns on a technical question: whether the OSA permits the regulator to calculate fees based on a company's global revenue, or whether it must be limited to UK-derived revenue. Ofcom's position, set out in its consultation on fees published in March 2026, is that worldwide revenue is the appropriate metric because the costs of regulating a platform with global infrastructure are not proportionate to its UK revenue alone. Meta's counter-argument, which will be tested in the High Court in the autumn term of 2026, is that the OSA's statutory language does not support this interpretation and that Ofcom is exceeding its statutory powers. The case matters because the fee structure determines the resources available to Ofcom for enforcement, and because a ruling against the regulator would significantly constrain its operational capacity just as the OSA's most demanding duties, including the child safety risk assessment requirements, begin to bite.
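The financial stakes of the revenue-base question are easy to illustrate. The sketch below uses purely hypothetical revenue figures (not Meta's actual turnover) to show how the OSA's ten-percent penalty ceiling scales with the choice of base; the function name and numbers are illustrative assumptions, not anything drawn from the statute or the filing.

```python
def max_osa_penalty(qualifying_revenue: float, cap_rate: float = 0.10) -> float:
    """Upper bound on an OSA fine: up to 10% of qualifying revenue.

    Which revenue figure counts as 'qualifying' (worldwide vs UK-derived)
    is precisely the interpretive question Meta's judicial review contests.
    """
    return qualifying_revenue * cap_rate

# Hypothetical figures for illustration only:
global_revenue = 150e9  # worldwide turnover, USD
uk_revenue = 4e9        # UK-derived turnover, USD

ofcom_reading = max_osa_penalty(global_revenue)  # cap tied to global revenue
meta_reading = max_osa_penalty(uk_revenue)       # cap tied to UK revenue

print(f"Exposure ratio: {ofcom_reading / meta_reading:.1f}x")
```

On these invented numbers the two readings differ by a factor of 37.5, which is why a seemingly dry question of statutory interpretation determines both Ofcom's funding base and platforms' worst-case enforcement exposure.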
The judicial review also exposes a deeper vulnerability in the UK model: the legal system's willingness to entertain substantive challenges to regulatory discretion, a dynamic that is far less pronounced in the EU, where the Court of Justice of the European Union has historically granted significant deference to the Commission's implementing powers and delegated acts. If Meta succeeds in narrowing Ofcom's funding base, the signal to other regulated entities will be clear: the UK's regulatory architecture can be challenged not only on policy grounds but on the procedural and statutory interpretation grounds that English administrative law is uniquely equipped to adjudicate. That is not a weakness per se, but it is a feature of the system that jurisdictions considering the UK model need to understand before they adopt it.
Looking ahead, the next milestone on the calendar is 2 August 2026, when the EU AI Act's high-risk obligations become fully enforceable for Annex III systems, including those used in financial services credit scoring, biometric categorisation, and critical infrastructure management. That date will mark a genuine stress test for the transatlantic and UK-EU regulatory divergence, because firms operating in all three jurisdictions will need to demonstrate compliance with three substantively different AI governance frameworks simultaneously. The UK's Information Commissioner's Office has indicated it will publish updated guidance on AI and data protection in July 2026, and the DSIT is expected to issue its long-delayed white paper response shortly thereafter. The parliamentary committee chaired by Chi Onwurah has scheduled a further evidence session for 15 July 2026, at which the Secretary of State for Science, Innovation and Technology is expected to appear. The answers given in that session will shape whether the UK's regulatory divergence is understood as a coherent strategy or as a series of improvisations that happen to have cohered into something that looks, from a distance, like a plan.