The Council presidency and European Parliament negotiators today reached a provisional agreement to streamline certain rules governing artificial intelligence across the European Union. The deal, announced on May 7, 2026, resolves months of uncertainty that had left companies unsure of precisely when the AI Act's most demanding requirements would apply to their products and services.
The agreement is part of the so-called Omnibus VII legislative package - the seventh in a series of ten omnibus proposals put forward by the European Commission since February 2025. It specifically targets Regulation (EU) 2024/1689, the AI Act, alongside Regulation (EU) 2018/1139 on civil aviation safety. The underlying Commission proposal, registered as COM(2025) 836 final, was transmitted to the Council of the European Union on November 20, 2025.
What the deal changes on timelines
The most immediately significant change involves the dates by which providers of high-risk AI systems must comply with Chapter III of the AI Act - the section covering requirements for data governance, transparency, documentation, human oversight, and robustness.
According to the agreed text, high-risk AI systems classified under Article 6(2) and Annex III - which covers applications such as biometric identification, employment screening, and credit scoring - must comply by December 2, 2027. Systems classified under Article 6(1) and Annex I, meaning AI components embedded in products already governed by existing EU product safety legislation such as medical devices, machinery, lifts, and watercraft, face a later deadline of August 2, 2028.
These dates replace a mechanism the Commission originally proposed in November 2025 that would have linked compliance to a separate Commission decision confirming that harmonised standards and compliance tools were available. Under that structure, a six-month or twelve-month countdown would begin only after the Commission issued such a decision. The European Parliament's joint committee report, adopted on March 18, 2026 with a vote of 101 in favour, 9 against, and 8 abstentions, had already rejected that approach as creating unacceptable uncertainty, and called for fixed dates instead. Today's provisional agreement delivers exactly that.
The Commission's original proposal had itself identified the problem clearly. According to the explanatory memorandum accompanying COM(2025) 836 final, "the delayed preparation of standards, which should provide technical solutions for providers of high-risk AI systems to ensure compliance with their obligations under that regulation, and the delayed establishment of the governance and the conformity assessment frameworks at national level result in a compliance burden that is heavier than expected."
Prohibited practices expanded to cover CSAM and non-consensual intimate imagery
One of the more striking additions in the provisional agreement is a new provision explicitly prohibiting AI practices related to the generation of non-consensual sexual and intimate content or child sexual abuse material (CSAM). This was not in the Commission's original proposal and was added by the co-legislators during negotiations.
The Council's negotiating mandate, approved on March 13, 2026, had already introduced this provision, and the Parliament accepted it as part of the final text. Its inclusion reflects political pressure to address harms that the original AI Act, adopted in June 2024, did not specifically enumerate within its prohibited practices list under Article 5.
SMEs and the new small mid-cap category
The agreement extends several regulatory privileges that the AI Act already grants to small and medium-sized enterprises to a new category: small mid-cap enterprises, or SMCs. According to the proposal's explanatory memorandum, SMCs are defined by reference to Commission Recommendation (EU) 2025/1099 of May 21, 2025.
The rationale is straightforward. According to the Commission's text, "enterprises outgrowing the micro, small and medium-sized enterprises (SME) definition - the 'small mid-cap enterprises' (SMCs) - play a vital role in the Union's economy." They grow faster and innovate more intensively than typical SMEs, but face comparable administrative burdens when they cross the SME threshold. The proposal is designed to prevent a so-called cliff-edge effect, where compliance costs jump sharply the moment a company grows beyond the SME ceiling.
Practical benefits for SMCs and SMEs include the ability to provide technical documentation for high-risk AI systems in a simplified form using a template the Commission is required to establish, a proportionate approach to quality management systems under Article 17 of the AI Act, special consideration in the calculation of penalties under Article 99, and priority access to any AI regulatory sandbox the EU establishes at Union level.
According to the Commission's initial impact assessment, the broader set of simplification measures could produce savings of approximately 297.2 to 433.2 million euros. These are described as initial estimates, and non-quantifiable benefits from a streamlined compliance environment are expected to add further value.
Registration requirements and bias detection rules reinstated
Two provisions that the Commission's original November 2025 proposal had relaxed were tightened again by the co-legislators.
First, the Commission had proposed removing the obligation for providers to register AI systems in the EU database where those providers concluded their systems were not high-risk because they perform only narrow or procedural tasks. The provisional agreement reinstates that registration obligation. Providers who consider a system exempt from high-risk classification under Article 6(3) must still document that assessment and register the system.
Second, on the processing of special categories of personal data for bias detection and correction, the Commission's proposal had sought to allow this more broadly across all AI systems and models. The agreement reinstates what the documents describe as the standard of strict necessity - meaning such sensitive data processing is permitted only to the extent genuinely required for bias detection and mitigation, with appropriate safeguards remaining in place.
AI literacy obligation shifted from providers to governments
Article 4 of the original AI Act placed an obligation on providers and deployers of AI systems to ensure their staff achieved a sufficient level of AI literacy. According to the explanatory memorandum, stakeholder consultations revealed that this horizontal requirement was "ineffective in achieving the objective pursued by this provision" because a single standard cannot suit all types of organisations.
The amendment transforms the obligation. Rather than requiring each company to ensure AI literacy internally, the revised text requires the Commission and member states to encourage and facilitate providers and deployers to take appropriate measures. The European Artificial Intelligence Board is assigned a coordination role, and the Apply AI Alliance is identified as a channel for engaging the wider industry community.
AI Office gets broader supervisory scope
The agreement reinforces the role of the EU's AI Office - the body established within the European Commission to oversee compliance with the AI Act. Under the amended Article 75, the AI Office becomes exclusively competent for the supervision and enforcement of AI systems that are built on a general-purpose AI model where the model and the system are developed by the same provider. It also assumes exclusive competence over AI systems integrated into designated very large online platforms or very large online search engines within the meaning of the Digital Services Act, Regulation (EU) 2022/2065.
This centralisation is significant. Rather than having many national authorities each attempt to supervise highly complex general-purpose AI systems, those responsibilities flow to a single body with concentrated expertise. The provisional agreement also clarifies exceptions: law enforcement, border management, judicial authorities, and financial institutions remain subject to national competent authority supervision rather than the AI Office.
To carry out these expanded tasks, the Commission estimated in the legislative financial statement accompanying the proposal that 53 full-time equivalent positions would be required. Of those, 15 can be covered by internal redeployment. The additional 38 positions would require new resources, to be funded through the Digital Europe Programme.
AI regulatory sandboxes
The provisional agreement postpones the deadline for national competent authorities to establish AI regulatory sandboxes to August 2, 2027 - one year later than the AI Act's original August 2, 2026 date, but four months earlier than the December 2, 2027 deadline the Council had proposed in its March 2026 mandate. This adjustment was made during trilogue negotiations with the Parliament, which pressed for the earlier date.
The AI Office is separately empowered to establish a Union-level AI regulatory sandbox for categories of AI systems within its exclusive supervisory competence. According to the legislative financial statement, the first EU-level sandbox is expected to be operational by 2028.
Transparency marking and generative AI
Providers of generative AI systems - systems that generate synthetic audio, images, video, or text - that were already on the market before August 2, 2026, now have until December 2, 2026 to ensure their outputs are machine-readable and detectable as artificially generated or manipulated. This is a four-month grace period from the baseline August 2, 2026 date, reduced from the six months the Commission originally proposed.
Industrial AI and sectoral legislation overlap
One technically complex area resolved in the agreement involves the interaction between the AI Act's high-risk requirements and existing EU sectoral legislation - for instance, the rules governing medical devices, toys, lifts, machinery, and watercraft. The co-legislators agreed on a mechanism allowing implementing acts to resolve situations where sectoral law contains AI-specific requirements that overlap with the AI Act's provisions, limiting the AI Act's direct application in those specific cases.
The machinery regulation was separately exempted from direct AI Act applicability. Instead, the Commission was empowered to adopt delegated acts under the machinery regulation that would introduce health and safety requirements aligned with the AI Act's high-risk classification criteria.
Context: the broader omnibus programme
This agreement sits within the broader programme of ten omnibus packages the Commission launched in February 2025 in response to calls from EU leaders to reduce the administrative burden on businesses. The programme targets at least a 25 percent reduction in administrative costs for all businesses, equivalent to savings of 37.5 billion euros, and at least a 35 percent reduction for SMEs by 2030, according to the Council's background documentation.
The AI Act itself, Regulation (EU) 2024/1689, entered into force on August 1, 2024. Prohibitions on unacceptable AI practices applied from February 2, 2025. Obligations for general-purpose AI model providers began applying on August 2, 2025, by which point the EU had published its General-Purpose AI Code of Practice and the Commission had released its classification guidelines establishing the 10²³ floating-point-operations threshold for general-purpose model classification.
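For orientation, what the 10²³ FLOP criterion means in practice can be sketched with the widely used "6 × parameters × tokens" training-compute approximation. This heuristic and the example model sizes below are illustrative assumptions, not part of the regulation or the Commission's guidelines:

```python
# Illustrative sketch of the 10^23 FLOP classification threshold.
# The 6*N*D rule of thumb (compute ~ 6 x parameters x training tokens)
# is a common community heuristic, assumed here for illustration only.

GPAI_THRESHOLD_FLOPS = 1e23  # threshold cited in the Commission guidelines


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D heuristic."""
    return 6.0 * n_params * n_tokens


def exceeds_threshold(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds 10^23 FLOPs."""
    return estimated_training_flops(n_params, n_tokens) >= GPAI_THRESHOLD_FLOPS


# A hypothetical 7-billion-parameter model trained on 2 trillion tokens:
# 6 * 7e9 * 2e12 = 8.4e22 FLOPs, just under the threshold.
print(exceeds_threshold(7e9, 2e12))   # False
# A hypothetical 70-billion-parameter model on the same data:
# 6 * 70e9 * 2e12 = 8.4e23 FLOPs, above the threshold.
print(exceeds_threshold(70e9, 2e12))  # True
```

The point of the sketch is that the threshold sits roughly at the scale of today's larger open-weight models; actual classification under the guidelines depends on the Commission's own compute-accounting rules, not this heuristic.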
The Digital Omnibus on AI was announced as a consultation on September 16, 2025, with the formal proposal transmitted to the Council on November 20, 2025. The Council agreed its negotiating position on March 13, 2026. Overnight negotiations in late April had collapsed, briefly leaving the original August 2, 2026 deadline intact. Today's deal resolves that impasse.
What happens next
The provisional agreement must still be formally endorsed by both the Council and the European Parliament. Following that endorsement, the text goes through a legal and linguistic revision before formal adoption as a published regulation. The regulation enters into force on the third day following its publication in the Official Journal of the European Union. Given that Chapter III obligations for the most common category of high-risk systems were originally set to apply from August 2, 2026, the speed of formal adoption will determine whether the new December 2, 2027 date takes legal effect before the original date arrives.
For digital advertising and marketing technology companies, the stakes around these timeline changes have been considerable. AI systems used in audience segmentation, automated bidding, personalisation, and content recommendation may fall within the high-risk categories under Annex III depending on how they interact with employment, credit, or biometric functions. The EU Parliament committee's earlier analysis had already noted this intersection explicitly. The Commission's earlier enforcement framework draft, published March 12, 2026, had shown what investigative powers would look like in practice once enforcement became active. Today's deal does not change those powers - it changes when they bite for the high-risk tier.
Timeline
- August 1, 2024 - AI Act (Regulation (EU) 2024/1689) enters into force, twenty days after publication
- February 2, 2025 - Prohibitions on unacceptable AI practices begin applying
- August 2, 2025 - General-purpose AI model obligations begin applying; EU publishes final GPAI Code of Practice
- September 16, 2025 - Commission opens Digital Omnibus consultation
- November 19-20, 2025 - Commission formally transmits Digital Omnibus on AI proposal (COM(2025) 836 final) to the Council
- March 13, 2026 - Council agrees its negotiating mandate on the Digital Omnibus on AI
- March 18, 2026 - European Parliament joint committee adopts report (A10-0073/2026) backing fixed deadlines, 101-9 vote
- Late April / Early May 2026 - Brussels trilogue talks collapse overnight; original August 2026 deadline briefly stands
- May 7, 2026 - Council presidency and European Parliament reach provisional agreement on Digital Omnibus on AI
Summary
Who: The Council of the European Union presidency (held by Cyprus) and European Parliament negotiators, operating within the broader Omnibus VII legislative package initiated by the European Commission.
What: A provisional agreement to amend the EU AI Act (Regulation (EU) 2024/1689) through the Digital Omnibus on AI, pushing high-risk AI system compliance deadlines to December 2, 2027 and August 2, 2028, adding a prohibition on AI-generated CSAM and non-consensual intimate content, extending SME regulatory privileges to small mid-cap enterprises, reinstating AI system registration requirements, centralising oversight of general-purpose AI-based systems under the AI Office, and adjusting regulatory sandbox timelines.
When: The provisional agreement was reached on May 7, 2026. The original Commission proposal was dated November 19, 2025. Formal adoption by both institutions and publication in the Official Journal are still pending.
Where: The agreement was reached in Brussels, through the trilogue process between the European Parliament, the Council of the EU, and the European Commission. The rules, once formally adopted, apply across all EU member states.
Why: Delays in developing harmonised standards, the late designation of national competent authorities, and the absence of available conformity assessment bodies created a compliance environment that stakeholders and the Commission itself described as more burdensome than originally expected. The agreement trades the original Commission's flexible trigger mechanism for fixed calendar dates, giving businesses and public authorities a defined planning horizon.