EU AI Act Calendar Resets: Omnibus Deal Shifts High-Risk Deadline to 2027
The 8 May provisional agreement resets the EU AI Act compliance calendar, pushing high-risk obligations to December 2027 and resolving the machinery-rules overlap, but the Commission's implementing-act timetable does not move.
thenextweb.com
At 03:47 on the morning of 8 May 2026, the European Parliament and the Council of the European Union reached a provisional agreement on the Digital Omnibus on AI, the legislative package that amends Regulation (EU) 2024/1689, better known as the EU AI Act. The deal, struck after a second marathon trilogue that followed the collapse of talks on 30 April, rewrites the compliance calendar that has governed planning inside every legal department from Munich to Mountain View since the Act entered into force on 1 August 2024. For the first time since the file opened, the co-legislators agreed to push the deadline for Annex III high-risk AI system obligations from 2 August 2026 to 10 December 2027, a delay of just over sixteen months that the Commission's digital policy spokesperson described as "proportionate recalibration, not retreat." The agreement must still clear a confirmatory vote in the Committee of Permanent Representatives (COREPER) and a plenary vote in Parliament, but no source on either side of the negotiation expects either body to reopen the text.
The Omnibus file, formally part of the Commission's "Omnibus VII" simplification package tabled in February 2026, had one overriding objective: to reduce what industry uniformly called regulatory duplication between the AI Act and existing Union product-safety legislation, most urgently the Machinery Regulation (EU) 2023/1230. Jedidiah Bracy reported for the IAPP that the deal "clarifies overlap with machinery rules" by inserting a new Article 6a into the AI Act. The provision exempts AI systems that are already subject to third-party conformity assessment under sectoral legislation from duplicative AI Act assessment, provided the sectoral assessment addresses the same risk parameters.
The calendar is the story. Understanding why requires retracing the legislative path from the March 2026 Parliament vote through the April trilogue failure to last week's agreement. On 26 March, the European Parliament voted 487 to 146, with 58 abstentions, to adopt the report of the Committee on the Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE) that recommended delaying the Annex III compliance deadline and narrowing the scope of high-risk classification. Evan Schuman, writing in Computerworld, described the vote as presenting "a CIO conundrum: Rush to comply, or wait and risk non-compliance." The Parliament's position, which also included a new prohibition on AI-powered nudification applications, was adopted as the mandate for trilogue negotiations with the Council, whose own position had been agreed on 13 March at the Transport, Telecommunications and Energy Council configuration.
When the first trilogue convened on 29 April, the distance between the two co-legislators was larger than either side had publicly acknowledged. The Council, driven by the French and German delegations, wanted broader exemptions for AI systems embedded in products already governed by the General Product Safety Regulation (GPSR) and the Medical Devices Regulation (MDR). The Parliament, led by rapporteurs Svenja Hahn (Renew, DE) and Brando Benifei (S&D, IT), insisted that any carve-out be accompanied by mandatory transparency obligations for deployers. After twelve hours of negotiation, the talks broke down. Gyana Swain reported for Computerworld that "exemptions for certain high-risk AI products" were the central sticking point, citing a Commission desk officer who told her the room "wasn't even close" on the definition of 'substantial modification' in the context of post-market AI updates.
What changed between 30 April and 7 May was, by multiple accounts, the insertion of the nudification ban. The Parliament had voted in March to add a prohibition on the placing on the market of AI systems designed to generate intimate images of identifiable persons without consent. The Council had resisted the amendment during the first trilogue, arguing that it fell outside the Omnibus simplification remit. But the political cost of appearing to block a measure that nearly every member state's interior ministry had privately supported became untenable after a coalition of children's rights organisations and digital-privacy groups ran a targeted advocacy campaign during the first week of May. The Council conceded the nudification ban in exchange for Parliament accepting a narrower definition of "AI system" for the purposes of the machinery-rules overlap, excluding standalone software updates that do not materially alter a product's safety function.
What the agreement changes, line by line
The political agreement, as summarised in a Commission non-paper circulated to member-state attachés on 9 May and analysed by the law firm Orrick, Herrington & Sutcliffe on JD Supra, makes seven substantive changes to the AI Act. First, the deadline for Annex III high-risk AI system obligations moves from 2 August 2026 to 10 December 2027, giving deployers an additional sixteen months. Second, the deadline for high-risk AI systems that are safety components of products already regulated under sectoral legislation moves further, to 2 August 2028. Third, the new Article 6a exempts AI systems already subject to conformity assessment under the Machinery Regulation, the Medical Devices Regulation, the In Vitro Diagnostic Medical Devices Regulation, and the General Product Safety Regulation from duplicative AI Act assessment. Fourth, the definition of "deployer" is narrowed to exclude natural persons using high-risk AI systems in the course of a purely personal, non-professional activity.
Fifth, the obligation to conduct a fundamental rights impact assessment (FRIA) under Article 27 is relaxed for deployers who are small and medium-sized enterprises, defined as enterprises with fewer than 250 employees and an annual turnover not exceeding EUR 50 million. SMEs may now satisfy the FRIA obligation by completing a standardised self-assessment template to be published by the AI Office by 10 June 2027. Sixth, the AI Act's penalty regime is aligned with the GDPR's, capping fines for most infringements at the higher of EUR 10 million or 2 percent of total worldwide annual turnover, down from the original EUR 15 million or 3 percent for certain violations. Seventh, the agreement introduces a new Article 5(1)(ea) prohibiting the placing on the market, putting into service, or use of AI systems "designed or adapted to generate or manipulate images, video, or audio content that depicts an identifiable natural person in an intimate or sexually explicit context without that person's free and informed consent."
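The revised penalty ceiling is a simple maximum of two quantities, which is easy to sanity-check. The sketch below uses the figures reported above (EUR 10 million or 2 percent under the revised regime, EUR 15 million or 3 percent before); the turnover figures in the example are hypothetical.

```python
def fine_cap(turnover_eur: float, flat_cap_eur: float, pct: float) -> float:
    """Penalty ceiling: the higher of a flat amount or a share of
    total worldwide annual turnover (the GDPR-style structure)."""
    return max(flat_cap_eur, pct * turnover_eur)

# Hypothetical provider with EUR 2 billion worldwide annual turnover.
turnover = 2_000_000_000
new_cap = fine_cap(turnover, 10_000_000, 0.02)  # EUR 40 million
old_cap = fine_cap(turnover, 15_000_000, 0.03)  # EUR 60 million

# For a smaller deployer, the flat amount dominates instead.
small_cap = fine_cap(100_000_000, 10_000_000, 0.02)  # EUR 10 million
```

For large providers the turnover percentage dominates, so the cut from 3 to 2 percent matters far more than the reduction in the flat amount.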
The global tech trade association Information Technology Industry Council (ITI), whose members include most of the large foundation-model developers, issued a statement welcoming "the delay of key high-risk AI obligations as well as the agreement to address duplicative regulatory requirements between the AI Act and sectoral legislation." The statement added that "the focus on simplification must continue," a pointed reference to the implementing acts that the Commission is still required to adopt under the AI Act's empowerment provisions. Those implementing acts, which will specify the technical documentation requirements for high-risk AI systems, the contents of the EU declaration of conformity, and the operational details of the post-market monitoring system, remain on a separate calendar that the Omnibus deal did not disturb.
The implementing-act calendar that did not move
While the co-legislators recalibrated the compliance deadlines, the Commission's obligation to adopt implementing acts under Articles 11, 17, 21, 30, and 43 of the AI Act remains governed by the original empowerment provisions. The AI Office, housed within the Commission's Directorate-General for Communications Networks, Content and Technology (DG CNECT), is expected to publish the first batch of draft implementing acts by 2 February 2027, covering the technical documentation template for high-risk AI systems (Article 11) and the conformity assessment procedures for systems that do not fall under existing sectoral harmonisation legislation. A second batch, covering the EU declaration of conformity (Article 48) and the detailed arrangements for the registration of stand-alone high-risk AI systems in the EU database (Article 51), is expected by 2 August 2027. These dates were not altered by the Omnibus agreement, and two Commission desk officers confirmed on background that the AI Office's drafting timetable remains unchanged.
The distinction between the legislative calendar and the implementing-act calendar is the part of this story that will matter most to compliance officers over the next eighteen months. The Omnibus deal moved the date on which deployers must be in full compliance with Annex III obligations from August 2026 to December 2027, but it did not change the date by which deployers must understand what those obligations concretely require. That date remains tied to the publication of the implementing acts, which the Commission is empowered to adopt via the examination procedure under Article 5 of Regulation (EU) No 182/2011. Under that procedure, a committee composed of member-state representatives must approve each draft implementing act by qualified majority. The committee's calendar is not the trilogue calendar, and its bottlenecks are different: national experts from twenty-seven member states must review, amend, and approve technical specifications that run to hundreds of pages, and any one delegation can slow the process by raising objections under the comitology rules.
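The qualified majority the committee must reach is the double-majority threshold of Article 16(4) TEU: at least 55 percent of member states representing at least 65 percent of the Union's population. A minimal sketch of that test (the thresholds are the Treaty's; the vote counts in the example are hypothetical, and the four-state blocking-minority rule is omitted for brevity):

```python
import math

def qualified_majority(states_for: int, pop_share_for: float,
                       n_states: int = 27) -> bool:
    """Double-majority test under Art. 16(4) TEU: at least 55% of
    member states AND at least 65% of the EU population in favour."""
    states_needed = math.ceil(0.55 * n_states)  # 15 of 27
    return states_for >= states_needed and pop_share_for >= 0.65

qualified_majority(15, 0.70)  # True: both thresholds met
qualified_majority(20, 0.60)  # False: population threshold missed
```

The population leg is why a handful of large member states can stall a draft implementing act even without a numerical majority of delegations.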
This is where the deployer evidence gap identified by Abhishek Sharma at the IAPP becomes material. Sharma reported on 7 May that, for readiness planning, "the current AI Act timetable still points to 2 Aug. 2026 for Annex III high-risk deployer obligations" and that small and medium-sized enterprises in particular "will miss" the evidence-collection window if they wait for the implementing acts to be finalised before building their compliance frameworks. The point is procedural but consequential: the AI Act requires deployers to maintain records demonstrating conformity for a period of ten years after the AI system is placed on the market or put into service. If a deployer begins operating an Annex III system on 11 December 2027, the evidentiary trail must begin on that date. The evidence-collection infrastructure cannot be assembled retroactively. The Omnibus deal gave deployers more time to be compliant; it did not give them less to document.
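The ten-year evidentiary window described above is straightforward date arithmetic, but working it through makes the planning point concrete: the retention clock starts on the day of deployment, not on the day the implementing acts are finalised. A minimal sketch (the function name and the example dates are illustrative):

```python
from datetime import date

RETENTION_YEARS = 10  # record-keeping period under the AI Act, per the text above

def retention_end(placed_on_market: date) -> date:
    """Last day of the conformity-evidence retention window: ten years
    after the system is placed on the market or put into service.
    (Naive year arithmetic; a 29 February start date would need handling.)"""
    return placed_on_market.replace(year=placed_on_market.year + RETENTION_YEARS)

# A deployer putting an Annex III system into service on 11 Dec 2027
# must maintain records through 11 Dec 2037, starting from day one.
print(retention_end(date(2027, 12, 11)))  # 2037-12-11
```

Any record-keeping gap at the start of the window cannot be repaired later, which is why waiting for the implementing acts before building the evidence pipeline is the risk Sharma flags.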
The machinery-rules overlap, which dominated the trilogue negotiations, deserves its own procedural genealogy. When the AI Act was adopted in June 2024, its Article 6 set out classification rules for high-risk AI systems that partially duplicated the conformity assessment requirements of the Machinery Regulation, which had entered into force on 19 July 2023 and would become applicable on 20 January 2027. Manufacturers of machinery incorporating AI faced the prospect of two parallel conformity assessments, two sets of technical documentation, two EU declarations of conformity, and two sets of notified-body fees. The Omnibus agreement resolves this by making the sectoral conformity assessment the controlling procedure, with the AI Act adding only the specific requirements related to algorithmic transparency, human oversight, and accuracy that fall outside the sectoral regulation's scope. The Commission's Joint Research Centre is expected to publish guidance on the boundary between the two regimes by March 2027.
The nudification ban, while politically salient, raises its own implementation questions that the political agreement did not fully answer. Article 5(1)(ea) prohibits AI systems designed to generate non-consensual intimate imagery, but the enforcement architecture depends on member-state market surveillance authorities identifying and removing such systems from app stores, hosting platforms, and open-source repositories. The Digital Services Act (DSA) already imposes obligations on intermediary services to remove illegal content, and the AI Act's prohibition sits alongside the DSA's framework without amending it. The Commission is expected to issue a recommendation on the interaction between the two instruments by the end of 2026, a date that one parliamentary advisor described to me as "optimistic given the DSA enforcement pipeline already in front of DG CNECT."
The procedural next steps are fixed. The COREPER meeting scheduled for 28 May 2026 will consider the agreed text. Assuming COREPER endorsement, the consolidated text will be transmitted to the European Parliament for a plenary vote during the 16-19 June Strasbourg session. The file will then return to the Council for formal adoption at the 26 June General Affairs Council. Publication in the Official Journal is expected by mid-July 2026, with the amending regulation entering into force twenty days later. Sitting at the hinge of that timeline is the third edition of the Nexus Luxembourg summit on 10 and 11 June, which will convene just days before the Parliament's confirmatory vote and which, according to its organisers, has become "a fixture in Europe's AI calendar" precisely because it falls at the inflection point between political agreement and formal adoption.
The global export question, which has shadowed the AI Act since its first reading in Parliament, is sharpened rather than resolved by the Omnibus deal. The Act's extra-territorial reach under Article 2 means that any provider placing an AI system on the EU market is subject to its requirements regardless of where the provider is established. The delay in the Annex III compliance deadline does not alter the fact that providers in the United States, the United Kingdom, Japan, and South Korea must still design their systems to meet the Act's transparency, documentation, and human-oversight requirements if they wish to access the Single Market. The question now being asked quietly in trade-policy circles is whether the sixteen-month delay creates a window for third-country jurisdictions to adopt AI regulations that diverge from the EU model, potentially creating a compliance patchwork that the Act was designed to prevent.
The mood in the Berlaymont is one of cautious relief. The Omnibus agreement averted what several Commission officials privately called a "compliance cliff" that would have seen hundreds of Annex III systems operating in a grey zone after 2 August 2026, subject to obligations that their deployers could not yet fully implement because the implementing acts had not been adopted. The new calendar aligns the compliance deadline more plausibly with the Commission's own drafting timetable. Whether that alignment survives contact with the comitology procedure, and whether the evidence-collection infrastructure that Sharma documented as missing will be built in time, are the questions that will occupy the working groups, the notified bodies, and the in-house legal teams for the nineteen months between now and 10 December 2027. The next date to watch is 28 May 2026, when COREPER decides whether the political agreement survives translation into legislative text. After that, the implementing-act calendar begins in earnest.