EU AI Omnibus Extends High-Risk Deadlines, Open-Weights Status Unclear
Brussels secured a 16-month extension for high-risk AI rules, but the clear open-weights carve-out that model providers demanded remains absent from the Omnibus deal.
On 7 May 2026, after two failed trilogues and a twelve-hour session that collapsed in late April, negotiators from the European Parliament and the Council finally landed a political agreement on the AI Omnibus, the package of amendments meant to soften the application of the bloc's Artificial Intelligence Act. The headline change is a timeline extension: the obligations on standalone high-risk AI systems listed in Annex III, covering biometrics, education, employment, essential services, law enforcement, and border management, will now apply from 2 December 2027 rather than 2 August 2026. That buys roughly sixteen months for companies sitting on partly built compliance programmes. Rules for AI embedded in regulated products under Annex I are pushed even further, to 2 August 2028.
The extension was framed as practical rather than political. The Commission insists the postponement is a function of unfinished standards work: harmonised standards from CEN-CENELEC and a fuller library of guidance documents are, it argues, the precondition for switching the obligations on. Executive vice-president for tech sovereignty Henna Virkkunen told The Next Web the deal would let companies "focus on building, not on paperwork", positioning the agreement as proof that Europe can keep its rules-based approach while making the rules workable for industry. Smaller firms get more concrete relief: templated technical documentation, lower fees, and easier access to regulatory sandboxes, with simplifications previously available only to SMEs now extended to small mid-cap companies. The intent is to scale obligations to organisational size.
None of this touches the open-weights question. The AI Omnibus adds a long-promised ban on non-consensual intimate imagery, lightens paperwork, and shifts deadlines. But the carve-out that would matter most to the model-release ecosystem, a clear, enforceable definition of what the Act's open-source exemption actually covers and who gets to claim it, remains untouched. The original AI Act, adopted in 2024, includes a free and open-source licence exemption in Article 2. The exemption says that AI systems released under free and open-source licences are carved out from certain obligations, unless they fall into the high-risk category, constitute a general-purpose AI model with systemic risk, or are prohibited outright. The text is there. The interpretation is not.
And interpretation is everything. The most important ambiguity is the one that separates "open weights" from "open source." A model whose weights are published on Hugging Face under, say, Apache 2.0 but whose training data, training code, and evaluation methodology are kept private is not open source in any OSI-compliant sense of the term. It is, in the language that has become standard across the release community, an open-weights model. The EU AI Act's recitals gesture toward this distinction but do not settle it. Recital 102 notes that the exemption should cover licences that allow the AI system to be "freely shared, used, modified and redistributed by anyone." Read narrowly, that covers Apache 2.0 and MIT. Read more strictly still, it might not even cover RAIL licences that include use restrictions, the very licences that many open-weights releases ship with precisely to mitigate downstream misuse.
The practical consequence is straightforward. If you are a startup or a mid-sized enterprise deploying an open-weights model inside the EU after August 2026, you do not actually know whether your deployment falls under the exemption or triggers the full compliance apparatus. If the model counts as a general-purpose AI model, transparency obligations under Article 53 attach to its provider, and releasing or substantially modifying the model can put you in that role. If authorities determine that your "open-weights" release does not qualify as open source because the training data is withheld or because the licence includes a responsible-use clause, the exemption evaporates. The Commission has issued no binding guidance on this. The AI Office, tasked with overseeing GPAI model rules, has not published a definitive interpretative document. What exists is a patchwork of law-firm analyses and position papers, most of which hedge.
This regulatory vacuum is not unique to AI. Across the EU's digital rulebook, industries are pushing for carve-outs from legislation they argue was written before the market matured. Bloomberg reported in April that 39 signatories, including Boerse Stuttgart Group and Nasdaq, asked the European Commission and Parliament to fast-track a standalone review of the distributed ledger technology pilot regime, arguing that Europe is losing ground to the United States. The demand is structurally identical to what the open-weights community wants: a regulatory framework that acknowledges the technology's specific characteristics rather than shoehorning it into rules designed for centralised, proprietary systems. In the DLT case, the argument is that EU legislation governing distributed ledgers treats decentralised infrastructure as though it were a traditional financial intermediary. In the AI case, the argument is that open-weights releases are being evaluated under a compliance model built for closed, API-gated systems where the provider controls every layer of the stack.
The difference is that the DLT industry has a coordinated lobbying apparatus and a clear ask: a bespoke legislative vehicle. The open-weights community has neither. What it has is Mistral, the Paris-based company widely viewed as Europe's only credible rival to OpenAI and Anthropic, which has built a $14 billion valuation on the back of open-weights releases that users can customise and run offline. Forbes reported in April that Mistral's strategy of not being American and not being closed has turned into a genuine commercial advantage, particularly among buyers who care less about benchmark-topping performance than about data sovereignty and deployment flexibility. Mistral releases its models under Apache 2.0 where possible, but the company's own legal exposure under the AI Act remains an open question: its flagship models are general-purpose, and at sufficient scale they could trigger systemic-risk designation, at which point the open-source exemption would no longer apply.
Meta's Llama models face the same uncertainty. Llama is distributed under a community licence that includes a use policy restricting certain applications, which puts it in a grey zone for the exemption. Meta has not publicly detailed how it intends to comply with the GPAI obligations under Article 53 if the exemption is found not to apply. The licence itself is not OSI-approved. The training data is not published. The weights are. That is the definition of open weights, not open source, and the EU AI Act, as currently drafted and unamended by the Omnibus, does not contain the phrase "open weights" anywhere in its text.
The Omnibus deal does at least provide one signal about the direction of travel. By extending the high-risk compliance deadline to December 2027, the co-legislators have created an eighteen-month window during which the AI Office and the Commission can issue the standards, guidance documents, and interpretative communications that will determine how the exemption is applied in practice. That window is also a lobbying window. Companies that want the exemption interpreted broadly, to cover open-weights releases with responsible-use clauses, or to be extended into a standalone open-weights carve-out, will need to make their case before the guidance hardens into enforcement practice.
The risk, from the perspective of the open-weights ecosystem, is that the Commission defaults to a narrow reading. A narrow reading would treat only OSI-compliant, fully open-source releases as qualifying for the exemption. That would exclude virtually every major open-weights release of the past three years, from Llama to Mistral Large to Qwen to DeepSeek. All of these models ship their weights under permissive or community licences while withholding their training data. All of them would, under a narrow interpretation, face the full GPAI transparency regime: documentation of training data, energy consumption reporting, copyright policy disclosure, and downstream risk monitoring. For smaller European AI labs that rely on open-weights releases as their primary distribution strategy, that compliance burden could be existential.
There is a counterargument, pressed by civil-society groups and some academic researchers, that a broad exemption for open weights would gut the Act's accountability provisions. The concern is not hypothetical. Open-weights models, once released, cannot be recalled. If a model is downloaded 50,000 times and then found to produce systematically biased outputs in an employment-screening context, the provider has no mechanism to push a patch. The downstream deployer may have no idea the model carries that risk. A broad carve-out, the argument goes, would create a regulatory loophole through which any company could route its models by publishing weights under a permissive licence while keeping everything else proprietary, a move that satisfies the letter of the exemption while defeating its purpose.
This tension is not going to be resolved by the Omnibus. It was not designed to. The Omnibus is a simplification package, not a structural reform. It pushes deadlines and lightens paperwork. The open-weights question requires something closer to a legislative clarification, and there is no sign that the Commission has the appetite for another round of AI Act amendments before the 2027 deadline arrives. What the open-weights community will be watching for over the next eighteen months is not legislation but guidance: the standards that CEN-CENELEC produces, the AI Office's interpretative communications, and, most importantly, the enforcement posture of national competent authorities when the first open-weights-related cases land on their desks.
A useful checkpoint is the Finextra analysis published on 1 May, which noted that 2 August 2026 remains the date on which the AI Act's high-risk obligations become fully enforceable for any financial services firm using AI in credit scoring, fraud detection, or insurance underwriting. The Omnibus now pushes that to December 2027, but it does not change the underlying structure: if a bank deploys a fine-tuned Llama derivative for customer-facing automation and the model qualifies as high-risk, the bank carries the compliance burden regardless of whether the base model was released under an open-weights licence. The exemption shields the upstream provider, not the downstream deployer. This is the detail that most of the open-weights narrative elides. The carve-out matters for model publishers. It matters far less for the enterprises actually putting models into production inside regulated sectors. For them, the question has never been whether the model is open source. It is whether the use case triggers Annex III.
What happens next depends on how the Commission chooses to spend the time the Omnibus bought. The AI Office has indicated it will prioritise the code of practice for general-purpose AI models, a document that could, if drafted ambitiously, include a section on open-weights governance that clarifies the exemption's boundaries without requiring a full legislative fix. The code of practice is not legally binding, but in a regulatory environment where binding guidance is scarce, a well-drafted code becomes the de facto standard that auditors, lawyers, and national authorities reference. The open-weights community should pay attention to every draft of that document that circulates between now and the end of 2027. The carve-out they want may not come as a carve-out at all. It may arrive as a paragraph in a code of practice that says, in effect, "this is what we mean by open source." Whether that paragraph includes the words "open weights" is the only question that matters.