The European Parliament's Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs tabled a joint report on March 19, 2026, proposing significant amendments to the EU AI Act through a legislative package known as the Digital Omnibus on AI. Filed under reference A10-0073/2026, the report was formally adopted by the committees on March 18, 2026, following a vote of 101 in favour, 9 against, and 8 abstentions.

The outcome marks a pivotal moment in how Europe intends to apply its landmark AI legislation. Rather than leaving compliance deadlines contingent on a Commission decision - as originally proposed - the committees have backed a plan that replaces uncertainty with fixed calendar dates. That distinction matters enormously for any company currently preparing AI systems for the European market.

What the Digital Omnibus on AI actually proposes

The underlying Commission proposal, referenced as COM(2025)0836, was submitted to Parliament on November 19, 2025, and announced in plenary on January 19, 2026. It targets Regulation (EU) 2024/1689 - the AI Act - alongside Regulation (EU) 2018/1139 on aviation safety, with the stated goal of simplifying implementation and harmonising rules across member states.

The Commission had proposed linking the date of application of high-risk AI system obligations to a separate decision it would issue once compliance support measures - harmonised standards, common specifications, and guidelines - were ready. Only after that decision would a six-month countdown begin. The rapporteurs for the Parliament's joint committee report, Arba Kokalari and Michael McNamara, appointed on January 21, 2026, pushed back against this structure. According to the explanatory statement of the report, they consider that "a postponement of the application date is necessary, considering the delayed preparation of standards, which are necessary to support compliance, as well as the delayed guidelines, governance and conformity assessment frameworks."

Their solution is direct: fix the dates in the legislation itself. For AI systems classified as high-risk under Article 6(2) and Annex III of the AI Act - which covers applications such as employment screening, credit scoring, and biometric identification - the proposed date of application is 2 December 2027. For systems classified under Article 6(1) and Annex I, which concerns products governed by existing EU product safety legislation such as machinery, medical devices, and lifts, the deadline shifts to 2 August 2028. These replace the current deadlines of 2 August 2026 and 2 August 2027, respectively.

Why the timeline matters, and why marketing technology firms should pay close attention

The original AI Act structure - tracked by PPC Land since the Commission's September 2025 consultation launch - already contained staggered deadlines. General obligations and prohibited practices applied from August 2, 2024, and February 2, 2025, respectively. Obligations for general-purpose AI models entered application on August 2, 2025. The European Commission released its general-purpose AI model guidelines on July 18, 2025, establishing the 10²³ floating-point operations threshold as the classification benchmark. High-risk system obligations under Chapters III and IV were then set to apply from August 2026 and August 2027.

The Digital Omnibus now proposes to push those latter deadlines out further. But - and this is the key point the committees are making - it does so with certainty rather than ambiguity. The Commission's original mechanism, under which obligations would activate only after a formal Commission decision confirmed that standards were ready, left businesses in limbo. The Parliament committees argue this structure fails to provide the "legal certainty and predictability" that companies need to plan investment and development cycles.

For advertising technology companies and marketing platforms, which frequently deploy AI systems in audience segmentation, automated bidding, and content personalisation, the implications are direct. Tools that analyse employment history or financial data as signals for audience targeting may fall within high-risk categories. Systems using biometric data, even in aggregate or anonymised form, could attract scrutiny under Annex III. PPC Land has previously reported how multiple regulatory frameworks are converging, noting that the AI Act obligations for high-risk systems overlap significantly with GDPR requirements on automated decision-making and special category data.

Scope changes: the AI Office's supervisory reach

One of the more technical but significant adjustments in the report concerns the supervisory jurisdiction of the AI Office. The Commission's original proposal gave the AI Office powers to monitor AI systems built on general-purpose AI models where those systems and their underlying models come from the same provider or from companies within the same group of undertakings.

The Parliament committees have refined this. According to the report, the AI Office's supervisory scope now explicitly excludes AI systems "related to products covered by the Union harmonisation legislation listed in Annex I and AI systems referred to in Annex III, point 2." The second point of Annex III covers AI systems used as safety components in the management and operation of critical infrastructure, including road traffic, and the exclusion reflects a policy judgement that sectoral regulators - not the AI Office - should retain primary oversight over those applications.

The report also refines the provision on records of incidents. Providers and deployers of general-purpose AI models with systemic risk must document incidents in a "consistent manner," according to the amended text. What counts as systemic risk remains anchored to the 10²⁵ FLOP training threshold established in the original AI Act, and the Commission published detailed implementation rules for general-purpose model supervision on March 12, 2026, opening a feedback window until April 9, 2026.
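To illustrate how the two training-compute thresholds mentioned above relate, here is a minimal sketch. The 10²³ and 10²⁵ FLOP figures are the thresholds cited in this article; the function and tier names are illustrative shorthand, not terms from the regulation's text.

```python
# Hedged sketch of the two training-compute thresholds cited in the article.
# Function and label names are illustrative, not taken from the AI Act.

GPAI_THRESHOLD_FLOPS = 1e23   # classification benchmark for general-purpose AI models
SYSTEMIC_RISK_FLOPS = 1e25    # threshold anchoring the systemic-risk presumption

def classify_by_training_compute(flops: float) -> str:
    """Return an illustrative tier label for a model's training compute."""
    if flops >= SYSTEMIC_RISK_FLOPS:
        return "general-purpose AI model with systemic risk"
    if flops >= GPAI_THRESHOLD_FLOPS:
        return "general-purpose AI model"
    return "below general-purpose threshold"

print(classify_by_training_compute(3e25))  # above the systemic-risk threshold
print(classify_by_training_compute(5e23))  # above the classification benchmark only
```

A model above 10²⁵ FLOPs triggers both the general-purpose classification and the incident-documentation duties described above; one between the two thresholds is classified as general-purpose without the systemic-risk obligations.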

SMEs and small mid-cap enterprises: targeted relief proposed

A recurring concern throughout the AI Act's development has been its disproportionate burden on smaller companies. The Digital Omnibus addresses this directly. The report notes that 99.8% of all companies in the European Union qualify as small and medium-sized enterprises. It then introduces a new category for the AI Act's purposes: small mid-cap enterprises (SMCs), defined by reference to Commission Recommendation 2025/3500/EC of May 21, 2025.

SMCs, according to the explanatory logic in the recitals, grow faster and innovate more intensively than typical SMEs, but still face compliance burdens comparable to smaller firms when they cross the SME threshold. The proposal extends several SME-specific measures in the AI Act to SMCs as well. This is intended to prevent the so-called "cliff-edge" effect, where a company's compliance burden increases sharply the moment it outgrows the SME definition.

For the marketing technology sector, this distinction has practical relevance. Many adtech companies and measurement platforms operating across European markets remain mid-sized by global standards yet face regulatory requirements designed with large enterprise resources in mind. Research tracked by PPC Land has documented how regulatory compliance costs are not borne uniformly across company sizes, with smaller ventures absorbing disproportionate burdens.

AI literacy: from obligation to encouragement

Article 4 of the current AI Act imposes a mandatory obligation on all providers and deployers of AI systems to ensure that their staff maintain a sufficient level of AI literacy. The committees propose softening this obligation substantially. According to the report's explanatory statement, "experience shared by stakeholders reveals that a solution imposing stringent obligations to ensure a sufficient level of AI literacy is not suitable for all types of providers and deployers."

The proposed amendment shifts the language from obligation to support. Providers and deployers would be required to "support AI literacy" of relevant staff rather than "ensure" it. The European Commission would be tasked with promoting AI literacy more broadly, with public-private partnerships listed as one possible vehicle. The AI Office would retain responsibility for issuing guidance on practical implementation.

This is a meaningful distinction for any organisation deploying AI tools in marketing operations. The original formulation created enforceable obligations with potential sanctions. The revised text is softer, though companies that ignore AI literacy development entirely would still face reputational and governance risks as regulators develop expectations around responsible deployment.

Bias detection and special category data

One of the more technically complex amendments concerns the processing of special categories of personal data - data relating to race, ethnicity, political opinions, religious beliefs, health, or sexual orientation - for the purposes of detecting and correcting bias in AI systems.

Under the existing AI Act, Article 10(5) allows high-risk AI system providers to process such data in strictly defined circumstances for bias detection. The Commission proposed extending this legal basis to providers and deployers of other AI systems and models, not just high-risk ones. The committees have retained and refined this extension, tying it explicitly to the legal basis established in Article 9(2)(g) of the GDPR, which permits processing of sensitive data where there is "substantial public interest."

The report's amendment specifies that this legal basis is subject to the same conditions and safeguards as apply under the existing Article 10(5). For marketing technology providers, the provision has practical implications: AI tools that optimise ad targeting across demographic segments may require access to sensitive data signals for bias auditing, and the AI Act's legal basis provides a route that the GDPR alone does not clearly accommodate. The European Commission proposed broader GDPR changes for AI development through the Digital Omnibus in November 2025, and the AI-specific bias detection provision sits alongside those wider proposals.

Transparency obligations and generative AI: the November 2026 window

The report addresses the transparency marking obligations under Article 50(2) of the AI Act, which require providers of generative AI systems - those producing synthetic audio, image, video, or text - to ensure their outputs are detectable as machine-generated. The European Commission opened its formal consultation on Article 50 guidelines on September 4, 2025.

The Digital Omnibus originally proposed giving providers whose systems were placed on the market before August 2, 2026, a six-month transitional period - running until February 2, 2027 - to comply with the marking obligations. The Parliament committees' amendment tightens this. According to the amended text, providers of such systems would need to comply with Article 50(2) by 2 November 2026 - three months earlier than the Commission proposed.

For companies running AI-generated content at scale in advertising, including dynamic creative optimisation tools, product description generators, and automated copywriting systems, the November 2026 date represents an earlier practical deadline. Watermarking standards, detection protocols, and disclosure mechanisms all need to be in place before that point.

What happens next

The report was tabled on March 19, 2026, three days before this article was published, following its adoption in committee on March 18. According to a LinkedIn post by Luis Alberto Montezuma, a data and privacy policy commentator, the European Parliament is scheduled to discuss and approve the report on the Digital Omnibus on AI in plenary on March 26, 2026. Once Parliament's mandate is approved in plenary, negotiations with the Council of the EU can begin.

Those negotiations - conducted through the trilogue process, three-way talks between Parliament, the Council, and the Commission - will determine the final shape of the regulation. Member states hold significant influence over the outcome. The Netherlands, for instance, raised formal concerns about the Digital Omnibus privacy proposals in December 2025, identifying risks to automated decision-making protections and special category data rules. Germany has separately pushed for broader data protection simplification.

The committee vote of 101 in favour represents a strong majority across political groups, suggesting Parliament enters the trilogue with a relatively unified position. The nine votes against and eight abstentions indicate residual dissent, but the numbers reflect cross-party convergence on the core logic: delay the deadlines, but fix them in law rather than leaving them to Commission discretion.

For the marketing technology community, the regulatory path is clarifying, even if the final text remains some months away. High-risk AI system obligations will not apply in August 2026 if the committees' position prevails. Companies now have a clearer window - potentially until December 2027 for Annex III systems - to prepare conformity assessments, notified body engagements, and technical documentation. That window is longer than originally expected, but the committees are explicit that it comes with conditions: the Commission must ensure that "measures in support of compliance are in place in due time to avoid further application delays."

Timeline

- August 2, 2024: General obligations under the AI Act begin to apply
- February 2, 2025: Prohibitions on certain AI practices take effect
- July 18, 2025: European Commission releases general-purpose AI model guidelines
- August 2, 2025: Obligations for general-purpose AI models enter application
- September 4, 2025: Commission opens formal consultation on Article 50 transparency guidelines
- November 19, 2025: Commission submits the Digital Omnibus on AI proposal, COM(2025)0836
- January 19, 2026: Proposal announced in Parliament's plenary
- January 21, 2026: Rapporteurs Arba Kokalari and Michael McNamara appointed
- March 12, 2026: Commission publishes implementation rules for general-purpose model supervision
- March 18, 2026: Joint committee report adopted, 101 votes to 9 with 8 abstentions
- March 19, 2026: Report A10-0073/2026 tabled in Parliament
- March 26, 2026: Scheduled plenary discussion and approval
- November 2, 2026: Proposed compliance deadline for Article 50(2) transparency marking
- December 2, 2027: Proposed date of application for Annex III high-risk systems
- August 2, 2028: Proposed date of application for Annex I high-risk systems

Summary

Who: The European Parliament's Committee on the Internal Market and Consumer Protection (IMCO) and Committee on Civil Liberties, Justice and Home Affairs (LIBE), acting jointly under Rule 59 of Parliament's Rules of Procedure, with rapporteurs Arba Kokalari and Michael McNamara.

What: A joint committee report (A10-0073/2026) proposing amendments to the EU AI Act through the Digital Omnibus on AI legislative package. Key changes include replacing a Commission-triggered compliance deadline mechanism with fixed dates of December 2, 2027 (Annex III high-risk systems) and August 2, 2028 (Annex I high-risk systems), extending SME protections to small mid-cap enterprises, softening AI literacy obligations, refining the AI Office's supervisory scope, and tightening the transparency marking deadline for generative AI to November 2, 2026.

When: The report was adopted in committee on March 18, 2026, with 101 votes in favour, 9 against, and 8 abstentions. It was tabled on March 19, 2026, and is scheduled for European Parliament plenary discussion on March 26, 2026. The underlying Commission proposal was submitted on November 19, 2025.

Where: The European Parliament in Brussels, operating under the ordinary legislative procedure at first reading. The regulation applies across all 27 European Union member states and affects any organisation placing AI systems on the EU market, regardless of geographic origin.

Why: The committees determined that the Commission's original proposal - which linked high-risk AI system compliance deadlines to a future Commission decision - created unacceptable legal uncertainty. Standards, guidelines, and conformity assessment frameworks have not been completed on schedule, making the existing August 2026 and 2027 deadlines practically unworkable for industry. The committees chose to provide certainty through fixed statutory dates while also addressing gaps in the original text related to SME treatment, bias detection, AI literacy obligations, and AI Office supervisory jurisdiction.
