Google commits to EU AI code amid compliance concerns

Google and other US tech giants prepare for European AI regulation amid a mixed industry response.

AI neural networks connecting with European institutional buildings representing EU AI Act compliance and regulation.

On July 30, 2025, Google announced its commitment to sign the European Union's General Purpose AI Code of Practice. The announcement, made by Kent Walker, President of Global Affairs at Google and Alphabet, marks a significant step toward compliance with the bloc's AI Act requirements.

"We will join several other companies, including U.S. model providers, in signing the European Union's General Purpose AI Code of Practice," Walker stated in the announcement. This decision positions Google alongside Anthropic, Microsoft, and OpenAI in supporting the voluntary framework ahead of August 2, 2025 enforcement deadlines.

The signing represents more than symbolic cooperation. According to Walker's statement, the code implementation aims to "promote European citizens' and businesses' access to secure, first-rate AI tools as they become available." This access carries substantial economic implications: AI adoption could boost Europe's economy by an estimated 8% (€1.4 trillion) annually by 2034.

However, Google's commitment comes with significant reservations about regulatory implementation. Walker expressed concerns that "the AI Act and Code risk slowing Europe's development and deployment of AI." The company specifically cited concerns about departures from EU copyright law, approval delays, and requirements exposing trade secrets as potential barriers to European model development.

The voluntary code emerged from extensive multi-stakeholder development involving nearly 1,000 participants. The European Commission officially received the finalized framework on July 10, 2025, addressing transparency, copyright, and safety obligations under Articles 53 and 55 of the AI Act. The three-chapter structure establishes voluntary guidelines for AI model providers ahead of mandatory compliance requirements taking effect August 2, 2025.

Industry response reveals stark divisions over European regulatory approaches. Microsoft president Brad Smith indicated on July 18 that his company would likely sign the voluntary guidelines, describing them as "an opportunity for industry engagement with regulators." In contrast, Meta announced its refusal to participate through Chief Global Affairs Officer Joel Kaplan, who criticized "legal uncertainties for model developers" and measures that "go far beyond the scope of the AI Act."

Anthropic announced its intention to sign on July 21, stating the company's commitment to "working with the EU AI Office and safety organizations to ensure the Code remains both robust and responsive to emerging technologies." These divergent approaches highlight varying strategies for managing regulatory relationships across global markets.

The framework establishes specific technical thresholds for determining when AI models qualify as general-purpose systems under EU regulation. Models exceeding 10²³ floating-point operations (FLOP) during training and capable of generating language, images from text, or video from text face mandatory compliance obligations.

Documentation requirements form a cornerstone of the regulatory framework. Providers must maintain current technical information for downstream providers and regulatory authorities throughout each model's lifecycle. Copyright compliance receives particular attention, requiring policies addressing Union copyright law throughout the model lifecycle and implementation of rights reservation protocols under Article 4(3) of Directive (EU) 2019/790.

The safety chapter applies specifically to models with systemic risk capabilities: those exceeding 10²⁵ FLOP during training or designated by the Commission based on reach and capabilities. These high-capacity models face enhanced obligations, including comprehensive risk assessments, mitigation measures, and governance frameworks.
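For illustration only, here is a minimal Python sketch of how a provider might screen a model against these two compute cutoffs; the 10²³ and 10²⁵ FLOP thresholds come from the framework described above, while the function name, inputs, and category labels are hypothetical.

```python
# Illustrative screening against the two AI Act compute thresholds.
# The 1e23 and 1e25 FLOP cutoffs come from the framework described in
# the article; everything else here is a hypothetical sketch.

GPAI_THRESHOLD_FLOP = 1e23           # general-purpose model threshold
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # systemic-risk threshold

def classify_model(training_flop: float, generates_content: bool) -> str:
    """Return a rough regulatory category based on training compute."""
    if training_flop > SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "general-purpose AI model with systemic risk"
    if training_flop > GPAI_THRESHOLD_FLOP and generates_content:
        return "general-purpose AI model"
    return "below the general-purpose compute threshold"

# Example: a text-generating model trained with 3e24 FLOP
print(classify_model(3e24, generates_content=True))
# -> general-purpose AI model
```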

European AI regulation is becoming increasingly complex for digital advertising, with marketing organizations facing new compliance requirements for AI-powered advertising tools, optimization systems, and customer targeting applications. The framework creates information flows between AI model providers and downstream system developers, facilitating assessment of model capabilities and limitations relevant to advertising applications.

Companies deploying AI systems for customer data analysis, personalization engines, or automated decision-making must consider enhanced compliance obligations when evaluating technology partners. The enforcement timeline provides marketing organizations with adjustment periods to evaluate current AI tool usage and develop compliant workflows.

Enforcement mechanisms include information requests, model evaluations, mitigation measures, and financial penalties of up to 3% of global annual turnover or EUR 15 million, whichever is higher. The AI Office assumes supervision responsibilities beginning August 2, 2025, with full enforcement powers taking effect in August 2026. The graduated implementation schedule allows companies to assess existing technology partnerships and plan transitions if necessary.
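As a worked example of how that penalty ceiling behaves, here is a short, purely illustrative Python sketch (the "whichever is higher" rule follows the AI Act's penalty provisions; the function name is hypothetical):

```python
def gpai_penalty_cap(global_annual_turnover_eur: float) -> float:
    """Maximum fine: 3% of global annual turnover or EUR 15 million,
    whichever is higher. Illustrative sketch, not legal advice."""
    return max(0.03 * global_annual_turnover_eur, 15_000_000)

print(gpai_penalty_cap(2_000_000_000))  # 60,000,000.0 (3% of EUR 2bn)
print(gpai_penalty_cap(100_000_000))    # 15,000,000 (fixed floor applies)
```

The fixed EUR 15 million floor binds only below roughly EUR 500 million in annual turnover; above that point, the 3% figure sets the cap.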

Open-source exemptions apply to models released under free licenses allowing access, modification, and distribution without monetization. However, these exemptions exclude models classified as having systemic risk, ensuring the most powerful systems maintain full regulatory oversight regardless of licensing arrangements.

Microsoft's support for the voluntary code contrasts with Meta's resistance, creating potential competitive implications as European regulators assess industry cooperation. OpenAI and Mistral have already signed the voluntary code, demonstrating varied industry approaches to European compliance.

The multi-stakeholder development process involved four working groups addressing different aspects of AI governance. Working Group 1 focused on transparency and copyright under chairs Nuria Oliver (ELLIS Alicante Foundation) and Alexander Peukert (Goethe University Frankfurt). Working Groups 2-4 addressed risk identification, technical mitigation, and governance under additional academic leadership.

Training compute estimation requires providers to account for all computational resources directly contributing to parameter updates. This includes pre-training, synthetic data generation, fine-tuning, and other capability-enhancing activities. The Commission provides two estimation approaches: hardware-based tracking of GPU utilization and architecture-based calculation of operations per parameter.
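The article does not reproduce the Commission's formulas, but a widely used architecture-based rule of thumb for dense transformer models, assumed here purely for illustration, puts training compute at roughly 6 FLOP per parameter per training token, summed across all capability-enhancing phases:

```python
def estimate_training_flop(n_parameters: float, n_tokens: float) -> float:
    """Architecture-based estimate for dense transformers: ~6 FLOP per
    parameter per processed token. A common rule of thumb, assumed here;
    the Commission's guidance may prescribe different accounting."""
    return 6 * n_parameters * n_tokens

# Example: 70B parameters over 15T tokens (pre-training plus fine-tuning)
total = estimate_training_flop(70e9, 15e12)
print(f"{total:.1e} FLOP")  # 6.3e+24 -- above 1e23, below the 1e25 cutoff
```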

Denmark positioned itself as a regulatory leader by becoming the first EU member state to adopt national legislation implementing AI Act provisions on May 8, 2025. The Danish framework establishes three national competent authorities, with the Agency for Digital Government serving multiple oversight roles.

Google's commitment emphasizes working with the AI Office to ensure proportionate and responsive implementation. Walker stated the company "will be an active voice in supporting a pro-innovation approach that leads to future investment and innovation in Europe that benefits everyone." This collaborative stance reflects the complex balance between regulatory compliance and innovation objectives.

The framework addresses modification scenarios where downstream actors alter existing models. Modifications utilizing compute exceeding one-third of the original model's training compute trigger provider obligations for the modifying entity. This relative threshold aims to identify substantial changes warranting separate regulatory oversight while avoiding excessive burden on minor adjustments.
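A minimal sketch of that one-third test (all names illustrative):

```python
def modification_triggers_obligations(original_flop: float,
                                      modification_flop: float) -> bool:
    """True when a downstream modification uses more than one third of
    the original model's training compute, triggering provider
    obligations for the modifying entity (illustrative sketch)."""
    return modification_flop > original_flop / 3

# Example: a 4e23-FLOP fine-tune of a model trained with 1e24 FLOP
print(modification_triggers_obligations(1e24, 4e23))  # True (4e23 > ~3.3e23)
```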

Member States and the Commission will assess the code's adequacy in the coming weeks, with potential approval through implementing acts providing "general validity within the Union." The AI Office retains authority to develop common implementation rules if the code proves inadequate or cannot be finalized by required deadlines.

Provider obligations extend beyond initial deployment to encompass the entire model lifecycle. Companies must maintain compliance from initial training through deployment, updates, and retirement. This lifecycle approach ensures ongoing regulatory alignment as AI capabilities evolve and applications expand across European markets.

The code's completion reflects the compressed development schedule mandated by Article 56 of the AI Act, which required finalization by May 2, 2025. The process included three drafting rounds with working group meetings and provider workshops between October 2024 and April 2025, demonstrating the intensive coordination required for multi-stakeholder regulatory development.

Complaint mechanisms must allow rightsholders to submit "sufficiently precise and adequately substantiated complaints concerning non-compliance" with copyright commitments. Providers must respond within "reasonable time" unless complaints are "manifestly unfounded" or identical to previously addressed issues, creating ongoing obligations for copyright dispute resolution.

The regulatory framework reflects European priorities of establishing global AI governance standards while maintaining technological competitiveness. Implementation timelines accommodate existing industry practices while ensuring compliance with fundamental rights protections and safety requirements throughout AI system deployment.


Key Terms Explained

AI Act: The European Union's comprehensive regulatory framework governing artificial intelligence systems. The Act establishes mandatory obligations for AI providers operating within European markets and creates the world's first comprehensive AI regulation, setting standards for transparency, safety, and fundamental rights protection. It distinguishes between different risk categories of AI systems, with general-purpose AI models facing specific requirements under Articles 53 and 55. Implementation begins August 2, 2025, with graduated enforcement timelines extending through 2027.

General-Purpose AI Models: AI systems capable of performing diverse tasks across multiple domains, distinguished from narrow AI applications designed for specific functions. The EU framework defines general-purpose models as those exceeding 10²³ floating-point operations during training and capable of generating language, images from text, or video from text. Companies developing or deploying these models face comprehensive documentation, transparency, and safety obligations under European regulation.

Code of Practice: The voluntary compliance framework developed through multi-stakeholder processes involving nearly 1,000 participants, providing standardized approaches for meeting AI Act requirements. This three-chapter document addresses transparency, copyright, and safety obligations, offering AI providers structured pathways to demonstrate regulatory compliance. Companies adhering to approved codes potentially reduce administrative burden and enforcement scrutiny while maintaining operational flexibility.

Compliance Requirements: Mandatory obligations encompassing documentation, transparency, safety assessments, and ongoing monitoring throughout AI model lifecycles. These requirements include maintaining technical specifications for downstream providers, implementing copyright protection policies, and conducting risk assessments for high-capability systems. Marketing organizations must understand these obligations when selecting AI-powered advertising tools and evaluating technology partnerships.

Training Compute: The computational resources, measured in floating-point operations (FLOP), used during AI model development, serving as the primary metric for determining regulatory classification. Models exceeding 10²³ FLOP during training qualify as general-purpose systems subject to EU obligations, while those surpassing 10²⁵ FLOP face additional systemic risk requirements. This technical threshold creates clear boundaries for regulatory applicability across different model categories.

European Commission: The executive branch of the European Union responsible for implementing and enforcing AI Act provisions through designated authorities and coordinated oversight mechanisms. The Commission received the final Code of Practice on July 10, 2025, and maintains authority to approve voluntary frameworks through implementing acts. Member States coordinate with Commission guidance to establish national competent authorities and enforcement procedures.

Transparency Obligations: Requirements for AI providers to maintain comprehensive documentation and disclosure protocols throughout model development and deployment phases. These obligations facilitate information flows between upstream model providers and downstream system developers, enabling informed decision-making about AI tool capabilities and limitations. Transparency measures particularly impact marketing applications involving customer data analysis and automated decision-making processes.

Systemic Risk Models: High-capability AI systems exceeding 10²⁵ FLOP during training or designated by the Commission based on reach and impact potential. These models face enhanced obligations including comprehensive risk assessments, mitigation measures, governance frameworks, and ongoing monitoring requirements. The systemic risk classification ensures continued regulatory oversight of the most powerful AI systems regardless of licensing arrangements or distribution methods.

Copyright Compliance: Policies and technical measures addressing European Union copyright law throughout AI model lifecycles, including rights reservation protocols under Directive (EU) 2019/790. Providers must implement complaint mechanisms allowing rightsholders to submit substantiated concerns about non-compliance with copyright commitments. These requirements particularly impact marketing applications generating creative content, advertising copy, or visual materials.

Enforcement Timeline: The graduated implementation schedule providing adjustment periods for different market participants, with AI Act obligations taking effect August 2, 2025. Full enforcement powers become applicable in August 2026 for new models and August 2027 for existing models placed on markets before the initial deadline. This timeline allows marketing organizations to evaluate current AI tool usage, assess technology partnerships, and develop compliant workflows.

Summary

Who: Google, represented by Kent Walker (President of Global Affairs), announces commitment to sign the EU's General Purpose AI Code of Practice alongside other major technology companies including Microsoft, Anthropic, and OpenAI, while Meta refuses participation.

What: Google's commitment to sign the voluntary EU AI compliance framework that addresses transparency, copyright, and safety obligations for general-purpose AI models, with potential economic benefits of €1.4 trillion annually for Europe by 2034.

When: The announcement was made on July 30, 2025, with the voluntary code finalized on July 10, 2025, and mandatory compliance requirements taking effect August 2, 2025.

Where: The commitment applies to Google's AI operations within the European Union market, with the regulatory framework governing general-purpose AI model providers regardless of their geographic location.

Why: Google seeks to maintain European market access while expressing concerns that overly restrictive regulations could slow AI development and deployment in Europe, emphasizing the need for proportionate implementation that balances innovation with compliance requirements.