The European Commission this week published a draft implementing regulation that spells out, for the first time, the precise procedural steps Brussels will follow when investigating and fining providers of general-purpose AI models under the EU AI Act. Registered as reference Ares(2026)2709234 and dated March 12, 2026, the document opens a four-week public feedback window running until April 9, 2026, at midnight Brussels time. Commission adoption is planned for the second quarter of 2026.
The draft, which has not yet been formally adopted, carries an important disclaimer: according to the document, any views expressed "are the preliminary views of the Commission services and may not in any circumstances be regarded as stating an official position of the Commission." Its significance, however, is hard to overstate. For the first time, companies building and deploying large AI models can read exactly how European authorities intend to knock on their door, request their source code, and ultimately issue a fine.
Why this draft matters now
The AI Act - Regulation (EU) 2024/1689 - entered into force on August 1, 2024, and laid down obligations for general-purpose AI model providers that began applying on August 2, 2025. Full enforcement powers for the AI Office are scheduled to become active in August 2026 for new models, and in August 2027 for models already on the market before the August 2025 deadline. This implementing regulation is the procedural engine that will power those enforcement powers - it does not create new substantive obligations, but determines how the Commission will act when it suspects a violation has occurred.
The AI Act itself authorizes the Commission to fine providers of general-purpose AI models under Article 101(1). Fines can reach 3% of global annual turnover or €15 million, whichever is higher. The draft regulation fleshes out the procedure that precedes any such fine.
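The "whichever is higher" rule in Article 101(1) can be illustrated with a minimal sketch; the helper name `max_fine_eur` is hypothetical and nothing in the regulation prescribes a formula beyond the two-limb ceiling:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative ceiling under Article 101(1) of the AI Act:
    the greater of 3% of worldwide annual turnover or EUR 15 million."""
    return max(0.03 * global_annual_turnover_eur, 15_000_000)

# A provider with EUR 2 billion in turnover: 3% = EUR 60 million, which exceeds the floor.
large = max_fine_eur(2_000_000_000)
# A provider with EUR 100 million in turnover: 3% = EUR 3 million, so the EUR 15 million floor applies.
small = max_fine_eur(100_000_000)
```

The floor matters mainly for smaller providers: below roughly €500 million in turnover, the fixed €15 million limb exceeds the percentage limb.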
Access to models: APIs, weights, and source code
Chapter II of the draft addresses evaluations - the mechanism by which the Commission assesses whether a model meets its obligations or poses systemic risks. Article 92(1) of the AI Act gives the Commission authority to conduct these evaluations. Article 92(2) allows it to appoint independent experts to act on its behalf. Article 92(3) gives it the right to demand access to the model itself.
The draft implementing regulation clarifies what that access can look like. According to the text, a Commission decision requesting access "shall specify the technical means, components, and conditions by which the provider shall provide such access." The list of possible access types is broad. It covers access through application programming interfaces, internal access, access to source code, access to model weights, access to the infrastructure used for hosting the model, and - strikingly - "access to inspect and modify system state interaction with the model, and all levels of access granted to employees of the provider."
That last clause will attract attention. Granting the Commission the same access level as internal employees goes considerably further than conventional regulatory inspections. The document also specifies that the Commission may require providers "to disable and remove any logging measures that could track or record the Commission's access to the general-purpose AI model, to the extent necessary to ensure the integrity and confidentiality of the evaluation process." In practice, this means a company under evaluation may be required to switch off its own monitoring of what the regulator examines.
Early public feedback published on the European Commission's website signals concern about this provision. AI & Partners, a Netherlands-based business association, submitted feedback on March 12, 2026 - the same day the draft was published. The group argued, according to the published feedback, that the phrase "all levels of access granted to employees" sets "an inconsistent, provider-dependent standard" and called for replacement language: "access necessary and proportionate to the evaluation objectives." The association also raised concerns under Article 47 of the EU Charter of Fundamental Rights regarding the absence of any mechanism to contest access terms before compliance is required.
Independent experts: conflict-of-interest rules and a 12-month look-back
When the Commission appoints an independent expert to conduct an evaluation, Article 3 of the draft sets out how independence is assessed. The Commission must check for shared ownership, governance, management, personnel, or resources between the expert and any AI provider. Critically, it must also examine contractual relationships between the expert and the provider "over the 12 months prior to the evaluation." Once appointed, the expert must remain independent for the entire duration of the appointment.
Confidentiality obligations are strict. Experts must commit to "maintaining the confidentiality, integrity and availability of sensitive information" they encounter, in line with Article 78 of the AI Act. The Commission is required to check, before appointing an expert, whether that person has access to appropriate internal and external information security protocols.
Article 4 addresses selection. The standard route is an open and transparent selection procedure. Two exceptions apply. First, the Commission may appoint experts directly if they are already members of the scientific panel established under Article 68 of the AI Act. Second, it may appoint experts via a procedure under Article 167 of the EU's financial regulation, Regulation (EU, Euratom) 2024/2509, which covers service contracts and grants.
Opening and closing proceedings: interim measures included
Chapter III covers the formal mechanics of proceedings. The Commission may open proceedings whenever it suspects conduct listed in Article 101(1) of the AI Act. Notably, it does not need to open proceedings before exercising investigatory powers - Article 5(2) of the draft makes clear that the Commission can gather information and take measures before a formal proceeding begins.
Interim measures are explicitly provided for. Article 5(3) states that "before opening proceedings," the Commission may order interim measures "on grounds of urgency due to a risk of serious damage to health, safety requirements, or other grounds relating to the public interest," including preventing a general-purpose AI model from being made available on the market, based on a preliminary finding of infringement. This power to effectively pull a model off the market before a formal finding is made will be particularly notable for the marketing technology sector, where campaigns built on third-party AI tools could be disrupted if the underlying model is suspended.
Proceedings can be closed if the Commission finds no grounds for a decision. But closure is not permanent. Article 6(2) states the Commission may reopen proceedings "at any time," including when previously requested measures turn out to be ineffective or when there is a "significant change to the systemic risks posed by the general-purpose AI model concerned at Union level." For companies that receive a closure decision, this creates ongoing uncertainty rather than a clean bill of health.
When proceedings are opened in response to a downstream provider's complaint under Article 89(2) of the AI Act, the complainant must be given the opportunity to express its views before the Commission closes the case.
Right to be heard: 14-day minimum and a 50-page cap
Chapter IV introduces procedural safeguards for providers who receive preliminary findings. Article 7 requires the addressee to respond within a time limit set by the Commission "that shall be no less than 14 days." The 14-day figure is a floor, not a ceiling; the Commission sets the actual deadline in each case. Extensions are possible, but only on a reasoned request submitted before the original deadline expires.
The annex to the implementing regulation contains notable constraints on submissions. Written observations "shall not exceed 50 pages in length," formatted in A4 size in a standard font such as Times New Roman, Courier, or Arial, at minimum 12-point in body text and 10-point in footnotes, with single line spacing and margins of at least 2.5 centimetres. The maximum density is 4,700 characters per page. Annexes accompanying the observations do not count toward the page limit, provided they serve a "purely evidential and instrumental function and are proportionate in number and length."
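Taken together, the annex limits imply a hard cap on submission size: 50 pages at a maximum density of 4,700 characters per page. A hypothetical pre-flight check (the function name and interface are illustrative, not part of the draft) might look like this:

```python
# Annex limits on written observations, per the draft implementing regulation.
MAX_PAGES = 50
MAX_CHARS_PER_PAGE = 4_700  # maximum density per A4 page

def within_annex_limits(text: str, pages: int) -> bool:
    """Return True if a submission respects both the 50-page cap and the
    total character budget implied by the per-page density limit.
    Annexes with a purely evidential function are excluded from the count."""
    if pages > MAX_PAGES:
        return False
    return len(text) <= pages * MAX_CHARS_PER_PAGE

# The absolute ceiling works out to 235,000 characters:
TOTAL_BUDGET = MAX_PAGES * MAX_CHARS_PER_PAGE
```

Evidential annexes fall outside this budget, so the cap constrains argument, not exhibits.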
All submissions must be in an official language of the European Union and signed with a qualified electronic signature complying with Regulation (EU) 910/2014, commonly known as eIDAS. The Commission must acknowledge receipt without delay.
Access to the file: tiered disclosure with a three-year tail
Article 8 establishes a layered system for providing the addressee with access to the Commission's file. Access cannot be granted before preliminary findings are notified. At that point, the addressee receives non-confidential versions of all documents mentioned in the findings.
Full, unredacted access goes further. According to the draft, such access is granted "only to a limited number of specified external legal and economic counsel and external technical experts engaged by the addressee and whose names shall be communicated to the Commission in advance." These persons must not be employees of the addressee. They are bound by terms of disclosure set out in a Commission decision.
The rules carry a significant restriction even after a case ends. If a specified external counsel or expert subsequently enters into an employment relationship with the addressee "or with other undertakings active on the same markets as the addressee during the investigation or during the three years following the end of the Commission's investigation," they must promptly inform the Commission. They must also provide assurances that they no longer have access to the confidential documents. The three-year post-investigation tail is aimed at preventing confidential competitive information from flowing back into the market through revolving-door arrangements.
Documents obtained through file access "shall only be used for the purposes of the relevant proceedings" or of related judicial and administrative proceedings concerning the AI Act.
Limitation periods: five years to fine, five years to collect
Articles 10 and 11 introduce limitation periods. The Commission has five years from the date the infringing conduct was carried out to adopt a fining decision. For continuing or repeated conduct, the five-year clock starts on the day the conduct stops. Several types of investigative action interrupt the clock and reset it: requests for documentation, requests for access to conduct model evaluations, invitations to structured dialogue, and the formal opening of a proceeding.
A hard ceiling applies regardless of interruptions. The limitation period "shall expire at the latest on the day on which a period equal to twice the limitation period has elapsed" without a fine being imposed - that is, after ten years. If a Commission decision is appealed to the Court of Justice of the European Union, the clock is suspended for the duration of proceedings.
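The interaction between the restarting five-year clock and the ten-year absolute ceiling can be modeled in a simplified sketch. This is an assumption-laden illustration, not legal advice: it treats each interrupting act as restarting the clock from that date, applies the ceiling from the end of the conduct, and ignores suspension during court proceedings:

```python
from datetime import date
from typing import Iterable

LIMITATION_YEARS = 5      # five years to adopt a fining decision
ABSOLUTE_CAP_YEARS = 10   # twice the limitation period, regardless of interruptions

def add_years(d: date, years: int) -> date:
    # Shift a date by whole years (Feb 29 clamps to Feb 28).
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        return d.replace(year=d.year + years, day=28)

def fining_deadline(conduct_end: date, interruptions: Iterable[date]) -> date:
    """Illustrative model of the draft's limitation rules: each interrupting
    act (documentation request, evaluation access request, structured-dialogue
    invitation, opening of proceedings) restarts the five-year clock, but the
    deadline can never pass the ten-year absolute ceiling."""
    clock_start = max([conduct_end, *interruptions], default=conduct_end)
    rolling = add_years(clock_start, LIMITATION_YEARS)
    ceiling = add_years(conduct_end, ABSOLUTE_CAP_YEARS)
    return min(rolling, ceiling)

# Conduct ends August 2, 2026, with no interruptions: deadline is August 2, 2031.
# A documentation request on January 1, 2033 would restart the clock to 2038,
# but the ten-year ceiling binds, capping the deadline at August 2, 2036.
uninterrupted = fining_deadline(date(2026, 8, 2), [])
capped = fining_deadline(date(2026, 8, 2), [date(2033, 1, 1)])
```

The practical consequence: repeated interruptions can stretch the window, but never beyond a decade from the end of the conduct, absent a suspension pending appeal.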
Once a fine decision has been issued, the Commission has five years to enforce it. That enforcement period is similarly subject to interruption and suspension rules, with suspension applying when time to pay has been allowed or when a national or European court has suspended enforcement.
Digital-first transmission and the eIDAS requirement
Article 14 mandates that all document transmission to and from the Commission occur by digital means. Qualified electronic signatures under eIDAS are required. Documents are deemed received on the day the Commission sends an acknowledgment of receipt, not on the day of transmission. Fallback to registered mail or hand delivery is permitted only "under exceptional circumstances which make transmission by digital means impossible or exceedingly difficult."
For real-time or near-real-time information shared through APIs, the Commission "shall define the methods and duration of such sharing of information" - a provision directly relevant to the API access rights described in Article 2.
The implementing regulation enters into force on the twentieth day following publication in the Official Journal of the European Union and applies in its entirety across all member states. Commission adoption is currently planned for the second quarter of 2026, meaning providers could face live procedural rules before the year is out.
Context: a framework years in the making
The AI Act itself was adopted on June 13, 2024, and published in the Official Journal on July 12, 2024. The General-Purpose AI Code of Practice - a voluntary compliance framework - was finalized and received by the Commission on July 10, 2025, with nearly 1,000 participants involved in its development. The Commission released detailed guidelines on July 18, 2025, specifying that models exceeding 10²³ floating-point operations during training and capable of generating language or images qualify as general-purpose AI models subject to the Act's obligations for such models. Industry response to these voluntary frameworks has been uneven, with companies such as Google, Microsoft, OpenAI, and Anthropic committing to the code while Meta declined.
For advertising and marketing technology companies, the regulatory picture has steadily grown more complex. The Commission opened a formal antitrust probe into Google's use of publisher and YouTube content for AI purposes in December 2025, illustrating how AI Act enforcement intersects with broader competition law activity. The EU AI Act's prohibitions on influence and manipulation, clarified in February 2025, set boundaries with direct implications for AI systems used in advertising targeting and optimization.
The public consultation on the implementing regulation remains open until April 9, 2026. Five pieces of feedback had already been published on the Commission's website as of March 13, 2026 - one day after publication. Broader engagement from the marketing technology sector, model providers, and downstream deployers is likely as the April 9 deadline approaches.
Timeline
- June 13, 2024 - The European Parliament and Council adopt Regulation (EU) 2024/1689 (the EU AI Act), laying down harmonised rules on artificial intelligence.
- July 12, 2024 - The AI Act is published in the Official Journal of the European Union.
- August 1, 2024 - The AI Act enters into force.
- July 10, 2025 - The European Commission receives the finalized General-Purpose AI Code of Practice, developed with nearly 1,000 participants.
- July 18, 2025 - The Commission publishes 36-page guidelines establishing the 10²³ FLOP threshold for general-purpose AI model classification. Microsoft signals it will sign the voluntary code; Meta announces it will not.
- August 2, 2025 - AI Act obligations for general-purpose AI models begin applying; the AI Office assumes supervision responsibilities.
- September 4, 2025 - The Commission launches a consultation to develop guidelines and a code of practice for AI transparency under Article 50.
- December 9, 2025 - The European Commission opens a formal antitrust probe into Google's use of publisher and YouTube content for AI purposes.
- March 12, 2026 - The Commission publishes draft implementing regulation Ares(2026)2709234, opening a feedback period running until April 9, 2026.
- April 9, 2026 - Feedback period closes at midnight Brussels time.
- Second quarter 2026 - Commission adoption of the implementing regulation is planned.
- August 2, 2026 - AI Act enforcement powers for new general-purpose AI models are scheduled to become fully active.
- August 2, 2027 - Compliance deadline for general-purpose AI models placed on the market before August 2, 2025.
Summary
Who: The European Commission, providers of general-purpose AI models placed on the EU market, downstream AI system providers, and independent technical experts who may be appointed to conduct model evaluations.
What: A draft implementing regulation (Ares(2026)2709234) setting out the detailed procedural arrangements for how the Commission will evaluate general-purpose AI models under Article 92 of the EU AI Act and how it will conduct enforcement proceedings, including the issuance of fines, under Article 101. The draft covers access rights to model weights and source code, independent expert selection and conflict-of-interest rules, the opening and closing of proceedings, interim measures, the right to be heard, access to the Commission's file, and five-year limitation periods for fines and their enforcement.
When: Published March 12, 2026, with a feedback period open until April 9, 2026, and Commission adoption planned for the second quarter of 2026. The regulation will enter into force 20 days after publication in the Official Journal.
Where: The regulation applies across all EU member states and affects any provider placing general-purpose AI models on the EU market, regardless of where the provider is located.
Why: The AI Act authorized the Commission to evaluate and fine general-purpose AI model providers but did not specify the procedural steps for doing so. This implementing regulation fills that gap, providing legal certainty for providers and ensuring procedural safeguards - including the right to be heard and access to the file - are clearly defined before enforcement powers become fully active in August 2026.