Overnight negotiations in Brussels failed to produce an agreement on proposed changes to the EU AI Act through the Digital Omnibus package. The deadline under Regulation (EU) 2024/1689 - August 2, 2026 - remains unchanged. No postponement has been granted. No additional window has opened for organizations that have been waiting on the sidelines. The breakdown was confirmed in a LinkedIn post by Giulio Coraggio, Head of Intellectual Property and partner at law firm DLA Piper, who tracks EU AI governance developments and leads the AILX - Artificial Intelligence Law eXperts community.
The failure of the Brussels talks adds uncertainty to an already complex legislative picture. The Digital Omnibus is a European Commission package introduced on November 19, 2025, which aims to simplify multiple digital regulations simultaneously, including the AI Act, the General Data Protection Regulation, the ePrivacy Directive, and cybersecurity reporting obligations. Its AI-specific provisions had been expected to change compliance deadlines - potentially delaying the high-risk system obligations originally set for August 2026 for Annex III systems and August 2027 for Annex I systems.
What the negotiations were about
The talks that collapsed overnight centred specifically on the AI Act provisions within the Digital Omnibus. As tracked by PPC Land since March 2026, the European Parliament's Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs had already voted 101 to 9, with 8 abstentions, on March 18, 2026, backing a proposal to replace the current deadlines with fixed dates. Under that proposal, high-risk AI systems covered by Annex III - employment screening, credit scoring, biometric identification - would face a new application date of December 2, 2027. Systems under Annex I, covering products governed by existing EU safety legislation such as machinery, medical devices, and lifts, would face a deadline of August 2, 2028.
Those proposed new dates have not been adopted. The trilogue process - the three-way negotiation between the European Parliament, the Council of the EU, and the European Commission - has not yet concluded. Until it does, the original regulation stands.
The underlying Commission proposal, COM(2025)0836, was submitted to Parliament on November 19, 2025, and announced in plenary on January 19, 2026. The Commission had originally proposed linking the date of application of high-risk AI system obligations to a separate decision it would issue once compliance support measures - harmonised standards, common specifications, and guidelines - were ready. Parliament's rapporteurs, Arba Kokalari and Michael McNamara, appointed on January 21, 2026, rejected that structure as creating unacceptable uncertainty.
The current legal position
The AI Act entered into force on August 1, 2024. Obligations rolled out in phases: prohibited practice restrictions began applying on February 2, 2025, and obligations for general-purpose AI models began on August 2, 2025. The next major activation point, August 2, 2026, covers transparency obligations under Article 50 and triggers full enforcement powers for the AI Office over new general-purpose models. Models already on the market before August 2025 face a separate enforcement date of August 2, 2027.
According to Coraggio, the message from these failed negotiations is unambiguous: "the deadline under the AI Act is still 2 August 2026. No delay (at least for now). No additional time to figure things out."
That formulation - "at least for now" - matters. The trilogue is ongoing. A deal remains possible. But as of the latest breakdown, no agreed text exists, no postponement order has been issued, and no formal extension mechanism has been triggered.
Organizational preparedness - the state of play
Coraggio has been consulting directly with clients on these questions. According to the post, "there is a recurring question: should we wait... or move now?" His answer is direct. Organizations that treat the current political uncertainty as a reason to defer preparation are, in his framing, not taking a prudent wait-and-see approach but instead accepting a concrete regulatory risk.
The post identifies three preparation areas. The first is visibility - establishing an accurate internal picture of where AI is actually deployed across an organization. The second is risk prioritisation - identifying which use cases could plausibly fall into the high-risk categories defined under Annex III or Annex I of the AI Act. The third is governance - building compliance structures even in their early, imperfect form.
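The three preparation areas map naturally onto a simple internal register. The sketch below illustrates one way to structure such an inventory and surface plausible high-risk use cases first; the category names, class, and helper function are illustrative assumptions for this article, not anything prescribed by the AI Act or by the post.

```python
from dataclasses import dataclass

# Illustrative Annex III areas named in the article; this is a
# simplified sketch, not a legal classification tool.
ANNEX_III_AREAS = {"employment_screening", "credit_scoring", "biometric_identification"}

@dataclass
class AIUseCase:
    name: str
    owner: str            # business unit responsible for the system
    area: str             # functional area the system operates in
    known_to_compliance: bool = False

def prioritise(inventory: list[AIUseCase]) -> list[AIUseCase]:
    """Order the inventory so plausible high-risk use cases that the
    compliance department does not yet know about surface first."""
    return sorted(
        inventory,
        key=lambda uc: (uc.area not in ANNEX_III_AREAS, uc.known_to_compliance),
    )

inventory = [
    AIUseCase("Marketing copy generator", "Marketing", "content_generation", True),
    AIUseCase("CV screening assistant", "HR", "employment_screening"),
]
ordered = prioritise(inventory)
print(ordered[0].name)  # CV screening assistant
```

Even a register this minimal addresses the visibility and prioritisation steps; the actual legal classification of each use case still requires case-by-case analysis against the Act.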
Several commenters on the LinkedIn post reinforced the point. René Judak, identified as a head of unit covering AI, communication, projects, and strategy, wrote in response that "the real risk is not preparing too early for compliance that may still evolve, but discovering too late that you do not even have control over your AI use-case inventory." Coraggio replied: "there is a rush to implement use cases that are in some instances not even known by the compliance department."
Elodie Flenniau, who describes herself as transitioning to independent practice in AI Governance in May 2026, urged companies to get ready now, warning that "shadow IT and unaware AI integration will be companies' pitfalls" even if preparation starts with a simple inventory of the tools in use. She also identified staff training as a second immediate priority, writing that "the required AI knowledge is what will make the difference."
The political difficulty behind the breakdown
The post does not identify which delegations blocked agreement or what specific provisions remained contested. But Coraggio frames the structural issue clearly: "Europe is still trying to reconcile two legitimate goals: protecting individuals... and staying competitive. And right now, that balance is still fragile."
That tension has been visible throughout the Digital Omnibus process. The Commission's November 2025 proposals on GDPR amendments were designed in part to give AI developers more legal space to train on personal data - a move that privacy advocates and several member states pushed back on. The Netherlands raised formal concerns about the Digital Omnibus privacy provisions in December 2025. Germany had separately pushed for broader data protection simplification. A strong parliamentary majority of 101 votes in the committee stage does not automatically translate into a rapid trilogue conclusion.
One commenter, José Luis Tudela, founder of ANTROPOLOGIC, raised a broader conceptual concern about the regulatory framework itself. According to Tudela, "the EU AI Act is regulating a fiction. It assumes that systems can be bounded, understood, and overseen by a human at the point of decision. Agentic systems break that completely. They do not wait for oversight. They construct reality, shape decisions, and act across time, tools, and environments." Emanuel Celano echoed the practical consensus: "the real risk is not the deadline shifting, but being unprepared when it doesn't."
What August 2, 2026 actually triggers
The August 2026 date carries different significance for different categories of organisation. For providers of general-purpose AI models released after August 2, 2025, full enforcement powers of the AI Office become active on that date. The AI Office can investigate suspected violations under Article 101(1) of the Act, request access to source code, appoint independent experts to conduct model evaluations, and impose fines reaching 3% of global annual turnover or €15 million, whichever is higher. The draft implementing regulation published by the Commission on March 12, 2026, and registered as Ares(2026)2709234, sets out the precise procedural mechanics for those investigations - including how independence of appointed experts is assessed and what happens before a formal proceeding is opened.
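The "whichever is higher" structure means the ceiling scales with company size rather than being a flat cap. A quick sketch of the arithmetic, using hypothetical turnover figures:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper fine limit for general-purpose model violations under
    Article 101(1): 3% of global annual turnover or EUR 15 million,
    whichever is higher."""
    return max(0.03 * global_annual_turnover_eur, 15_000_000.0)

# Hypothetical EUR 2 billion turnover: the 3% branch dominates.
print(max_fine_eur(2_000_000_000))  # 60000000.0
# Hypothetical EUR 100 million turnover: the fixed floor applies,
# since 3% would be only EUR 3 million.
print(max_fine_eur(100_000_000))    # 15000000.0
```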
For organisations deploying AI systems classified as high-risk under Article 6(2) and Annex III - the employment, credit, and biometric identification category - the original regulation sets August 2026 as the application date. It is specifically these obligations that the parliamentary committees proposed extending to December 2027. Those proposals have not yet been adopted, and the overnight breakdown makes their adoption timeline less certain.
Transparency obligations under Article 50 also activate on August 2, 2026. These cover requirements for providers and deployers of interactive AI systems to disclose their AI nature, mark synthetic content, and notify users of emotion recognition and biometric categorisation. The Commission opened a consultation on these guidelines in September 2025, running from September 4 to October 2, 2025.
The competitive and compliance backdrop
The AI Act does not exist in isolation. Organisations in Europe are simultaneously navigating GDPR, the Digital Services Act, and the Digital Markets Act - all of which intersect with AI deployment in different ways. A study tracking GDPR's impact on European venture investment found a 20.63% reduction in EU deals led by US investors and a 13.15% decline in investment amounts following GDPR's May 2018 rollout, representing over $1.58 billion annually in lost US capital flowing to European technology startups. European companies are estimated to spend approximately €16 billion annually on GDPR compliance alone.
These figures inform the political difficulty of the Brussels negotiations. Advocates of delay argue that piling AI Act compliance obligations onto already-stretched organisations risks further reducing European competitiveness. Proponents of the original timeline - or of any timeline at all - counter that extending deadlines without an agreed end date creates a different kind of uncertainty that makes long-term investment planning even harder.
The parliamentary committees made exactly this argument in March 2026. Their report explicitly criticised the Commission's original mechanism for triggering compliance dates through a future Commission decision rather than through fixed calendar dates. According to the explanatory statement of the report, the rapporteurs considered that "a postponement of the application date is necessary, considering the delayed preparation of standards, which are necessary to support compliance, as well as the delayed guidelines, governance and conformity assessment frameworks." Fixing dates - even later dates - provides the planning horizon that organisations need. Leaving dates contingent on a future Commission decision does not.
What marketing and advertising technology organisations face
The marketing and advertising technology sector has particular exposure to the AI Act because AI systems are now embedded throughout core operational workflows - audience segmentation, automated bidding, dynamic creative optimisation, content personalisation, and attribution modelling. Several of these functions either already fall within or could be analysed under the high-risk categories in Annex III, particularly where decisions affect individuals' access to credit, employment screening, or service eligibility.
The Commission's framework for general-purpose AI models uses a threshold of 10²³ floating-point operations during training as the indicative benchmark for classifying a model as general-purpose under the Act. Marketing technology vendors integrating frontier AI models into their platforms carry downstream compliance responsibilities that are shaped by whether their upstream model providers meet those definitions. That structure has been in place since the Commission guidelines of July 18, 2025, and it does not change as a result of the overnight Brussels failure.
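The threshold comparison itself is simple arithmetic. The sketch below uses the common ~6 × parameters × tokens industry heuristic for dense-transformer training compute - an approximation assumed here for illustration, not anything defined in the guidelines - and the model figures are hypothetical.

```python
GPAI_FLOP_THRESHOLD = 1e23  # benchmark cited in the July 18, 2025 guidelines

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the common ~6 * N * D
    heuristic for dense transformers (an approximation, not a legal test)."""
    return 6.0 * params * tokens

# Hypothetical model: 10 billion parameters, 2 trillion training tokens.
flops = estimated_training_flops(10e9, 2e12)
print(flops >= GPAI_FLOP_THRESHOLD)  # True (roughly 1.2e23)
```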
The Belgian Data Protection Authority's 2026-2028 strategic plan explicitly identifies large-scale advertising technology platforms and data brokers as enforcement priorities. The DPA's April 2026 citizen guide on AI and privacy sits in this context - a regulatory authority building public awareness of rights at precisely the moment when the broader legislative framework is under revision.
What happens next
Coraggio's post ends with a question to his professional network: "Are you already acting on the 2 August 2026 deadline... or do you think the market is still expecting a political 'reset'?" It reflects a genuine split in how organisations are approaching the situation. Some are preparing on the assumption the deadline holds. Others are waiting for a political signal that it will shift.
The Brussels breakdown does not rule out a deal. Negotiations can resume. The Digital Omnibus can still proceed through the trilogue. But each failed session narrows the practical window for any agreed delay to be formally adopted and enter into force before August 2, 2026. Even if a deal were struck in the coming weeks, it would still need to move through the remaining legislative steps before taking binding effect. The Commission has not issued any statement formally postponing the deadline.
For organisations that have not yet mapped their AI use cases, the practical consequence is unchanged: the August 2, 2026 date remains the operative compliance horizon unless and until a formally adopted legal text says otherwise.
Timeline
- August 1, 2024 - EU AI Act (Regulation (EU) 2024/1689) enters into force
- February 2, 2025 - Prohibited practice restrictions enter application
- July 10, 2025 - Commission receives final General-Purpose AI Code of Practice
- July 18, 2025 - Commission publishes implementation guidelines, establishing 10²³ FLOP threshold for general-purpose AI model classification
- August 2, 2025 - Obligations for general-purpose AI models begin applying; AI Office assumes supervision responsibilities
- September 4, 2025 - Commission opens consultation on Article 50 transparency guidelines, running until October 2, 2025
- September 16, 2025 - Commission launches Digital Omnibus consultation
- November 19, 2025 - Commission formally submits Digital Omnibus proposal COM(2025)0836 to Parliament
- January 19, 2026 - Commission proposal announced in plenary
- January 21, 2026 - Rapporteurs Arba Kokalari and Michael McNamara appointed for the joint committee report
- March 12, 2026 - Commission publishes draft implementing regulation Ares(2026)2709234 on AI model investigation procedures
- March 18, 2026 - Parliament's joint committees adopt report A10-0073/2026 with 101 votes in favour, 9 against, 8 abstentions, proposing new deadlines of December 2, 2027 (Annex III) and August 2, 2028 (Annex I)
- March 19, 2026 - Report tabled; scheduled for plenary discussion March 26, 2026
- April 9, 2026 - Feedback period for Ares(2026)2709234 closes
- Late April 2026 - Brussels overnight negotiations on Digital Omnibus AI Act changes fail; August 2, 2026 deadline confirmed as unchanged
- August 2, 2026 - Transparency obligations under Article 50 become applicable; full AI Office enforcement powers activate for new general-purpose models
- August 2, 2027 - Compliance deadline for general-purpose AI models placed on market before August 2, 2025
Summary
Who: European Union institutions - the European Commission, European Parliament, and Council of the EU - involved in trilogue negotiations on the Digital Omnibus package, alongside legal experts and organisations subject to the EU AI Act, including technology companies, AI model providers, and marketing technology platforms across all 27 member states.
What: Overnight negotiations in Brussels on proposed changes to the EU AI Act through the Digital Omnibus package failed to produce an agreement. The August 2, 2026 compliance deadline - covering transparency obligations under Article 50, full AI Office enforcement powers over new general-purpose models, and high-risk system obligations under Annex III - remains unchanged. No formal postponement has been issued.
When: The breakdown occurred overnight before the LinkedIn post by Giulio Coraggio, published on or around April 29, 2026. The operative deadline is August 2, 2026. The Digital Omnibus proposal was submitted November 19, 2025; the parliamentary committee vote backing deadline changes took place March 18, 2026.
Where: Brussels, Belgium, at EU institutional level. The regulation applies across all 27 EU member states and affects any organisation placing AI systems on the EU market regardless of geographic origin.
Why: Europe has been attempting to reconcile two objectives - protecting individuals through AI governance and maintaining competitive capacity against non-European AI developers - while also giving organisations time to comply with standards and frameworks that have not been completed on schedule. The failed negotiations reflect unresolved disagreement on how to balance those goals, leaving the original statutory deadline as the default compliance horizon.