Microsoft to sign EU AI code while Meta refuses compliance
Microsoft president signals company will likely sign voluntary AI Act guidelines while Facebook parent cites legal uncertainties.

Microsoft president Brad Smith confirmed on July 18 that the technology company will likely sign the European Union's voluntary code of practice for general-purpose artificial intelligence models, while Meta Platforms announced it would refuse participation in the compliance framework.
According to Smith's remarks to Reuters, Microsoft sees the voluntary guidelines as an opportunity for industry engagement with regulators. "I think it's likely we will sign. We need to read the documents," Smith stated during an interview on July 18. The Microsoft president emphasized his company's goal of finding "a way to be supportive" while welcoming "the direct engagement by the AI Office with industry."
Subscribe to the PPC Land newsletter ✉️ for stories like this one. Receive the news every day in your inbox. Free of ads. 10 USD per year.
The divergent responses highlight the technology sector's fractured approach to European artificial intelligence regulation. The code of practice, published on July 10 by the European Commission, aims to provide legal certainty for companies developing general-purpose AI models ahead of mandatory enforcement beginning August 2, 2025.
The comprehensive framework addresses three primary areas: transparency obligations, copyright compliance, and safety measures for AI systems. Model providers must implement detailed documentation requirements throughout the development lifecycle, establish copyright compliance policies, and conduct systemic risk assessments for models exceeding computational thresholds.
According to the code's technical specifications, general-purpose AI models are defined as systems capable of performing various tasks across multiple domains. The framework establishes specific computational thresholds measured in floating-point operations to determine which models fall under regulatory scope. Models requiring more than 10²³ FLOPs during training must comply with basic obligations, while systems exceeding 10²⁵ FLOPs face enhanced systemic risk requirements.
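The tiered thresholds above can be sketched in code. The classification against 10²³ and 10²⁵ FLOPs comes from the framework; the `6 × parameters × tokens` estimate is a common community heuristic for dense transformer training compute, not part of the EU rules, and the model figures below are purely illustrative.

```python
# Sketch: classify a model against the AI Act's compute thresholds.
# Thresholds are from the code of practice; the FLOP estimate is a
# rough heuristic (6 * parameters * training tokens), an assumption
# here rather than an EU-mandated formula.

GPAI_THRESHOLD = 1e23            # basic general-purpose obligations
SYSTEMIC_RISK_THRESHOLD = 1e25   # enhanced systemic risk requirements


def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * parameters * training_tokens


def classify(flops: float) -> str:
    """Map estimated training compute to the framework's obligation tier."""
    if flops > SYSTEMIC_RISK_THRESHOLD:
        return "systemic-risk obligations"
    if flops > GPAI_THRESHOLD:
        return "basic GPAI obligations"
    return "below regulatory scope"


# Hypothetical example: a 70B-parameter model trained on 2T tokens.
flops = estimate_training_flops(70e9, 2e12)  # 8.4e23 FLOPs
print(classify(flops))  # basic GPAI obligations
```

Under this heuristic, crossing the systemic-risk line would require roughly a hundredfold more training compute than the basic threshold, which is why only the largest frontier models face the enhanced requirements.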
Documentation obligations require providers to maintain current technical information for downstream providers and regulatory authorities. The code specifies that providers must "publish summaries of the content used to train their general-purpose AI models and put in place a policy to comply with EU copyright law."
The framework's Safety and Security chapter establishes protocols for incident reporting and risk mitigation. Providers must implement continuous monitoring systems to identify potential malfunctions or security vulnerabilities. The code defines serious incidents as malfunctions leading to events specified in Article 3(49) of the AI Act, requiring immediate notification to regulatory authorities.
Meta's refusal to participate signals broader industry concerns about regulatory overreach. Chief global affairs officer Joel Kaplan criticized the code in a LinkedIn post on July 18, stating that Meta "won't be signing it" due to "legal uncertainties for model developers" and measures that "go far beyond the scope of the AI Act."
Kaplan cited support from 45 European companies sharing similar concerns about regulatory impact on AI development. "We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," he wrote.
The statement extends warnings Kaplan made earlier in 2025, when he indicated Meta would seek Trump administration intervention if Brussels imposed unfair penalties. These tensions reflect broader transatlantic technology policy disputes as companies navigate competing regulatory frameworks.
OpenAI and Mistral have already signed the voluntary code, demonstrating varied industry approaches to European compliance. The disparate responses suggest technology companies are adopting different strategies for managing regulatory relationships across global markets.
The AI Act's implementation schedule creates graduated compliance requirements for different market participants. According to Commission guidance, enforcement becomes applicable "one year later as regards new models and two years later as regards existing models," the latter referring to models placed on the market before August 2025.
The framework includes specific exemptions for models released under free and open-source licenses meeting Article 53(2) conditions. However, these exemptions do not apply to general-purpose AI models with systemic risk capabilities, ensuring continued oversight of the most capable systems.
Commission documentation indicates that providers implementing adequate codes of practice demonstrate compliance with Articles 53(1) and 55(1) obligations. The Commission may approve codes through implementing acts, providing general validity within the Union and streamlined compliance pathways for signatories.
Transitional provisions require existing model providers to take "necessary steps" for compliance by August 2, 2027. This extended timeline allows companies to adapt current systems while ensuring new models meet regulatory requirements from the August 2025 effective date.
Marketing implications shape industry adaptation
The regulatory framework creates significant implications for marketing technology providers and agencies utilizing AI-powered tools. Documentation requirements enable better assessment of model capabilities when selecting systems for campaign optimization, content generation, and audience targeting applications.
Compliance pathways developed through the code of practice provide marketing teams with clearer guidelines for evaluating AI tool selection. The transparency obligations require model providers to disclose training data sources and system capabilities, enabling more informed technology adoption decisions.
Copyright compliance measures particularly impact content creation workflows. The framework requires providers to implement policies addressing Union copyright law throughout model lifecycles, potentially affecting AI-generated marketing materials and campaign content.
Risk assessment requirements for high-capability models may influence enterprise software selection. Companies deploying AI systems for customer data analysis, personalization engines, or automated decision-making must consider enhanced compliance obligations when evaluating technology partners.
The enforcement timeline provides marketing organizations with adjustment periods to evaluate current AI tool usage and develop compliant workflows. The graduated implementation schedule allows companies to assess existing technology partnerships and plan transitions if necessary.
International coordination challenges emerge
The code of practice represents one component of broader European efforts to establish global AI governance frameworks. The multi-stakeholder development process involved nearly 1,000 participants, demonstrating the complexity of achieving consensus across diverse industry interests.
Member States and the Commission will assess the code's adequacy in coming weeks, with potential approval through implementing acts providing "general validity within the Union." The AI Office retains authority to develop common implementation rules if the code proves inadequate or cannot be finalized by required deadlines.
The framework's voluntary nature during initial phases provides companies with opportunities to influence regulatory development through participation. However, mandatory enforcement beginning in 2025 ensures eventual compliance regardless of voluntary code adoption.
International coordination efforts extend beyond European initiatives. The framework aligns with broader global AI governance developments, including the G7 Hiroshima AI Process and various national AI strategies, though the Expert Group argues current efforts remain insufficient given the challenge scale.
The divergent company responses to the voluntary code foreshadow potential compliance challenges as mandatory requirements take effect. Technology companies must balance innovation objectives with regulatory obligations across multiple jurisdictions, creating complex strategic considerations for global AI development programs.
Timeline
- July 10, 2025: European Commission officially receives final General-Purpose AI Code of Practice
- July 17, 2025: AI Office invites providers to sign the GPAI Code of Practice
- July 18, 2025: Microsoft president indicates likely signing while Meta announces refusal to participate
- August 1, 2025: Signatories publicly listed ahead of mandatory enforcement
- August 2, 2025: AI Act obligations for general-purpose AI models take effect
- August 2026: Enforcement becomes applicable for new models
- August 2027: Enforcement becomes applicable for existing models placed on market before August 2025
Key Terms Explained
Campaign Optimization
Campaign optimization refers to the systematic process of improving advertising performance through data analysis and strategic adjustments. In the context of AI regulation, campaign optimization tools must now comply with transparency requirements that affect how algorithms process user data and make targeting decisions. The documentation obligations under the EU framework require marketing teams to understand the underlying AI models powering their optimization platforms, enabling more informed decisions about tool selection and compliance strategies.
Content Generation
Content generation encompasses the automated creation of marketing materials using artificial intelligence systems. The EU code of practice directly impacts this area through copyright compliance requirements, as AI models used for content creation must implement policies addressing Union copyright law throughout their operational lifecycle. Marketing teams utilizing AI-generated content must now consider the training data sources and potential copyright implications of their chosen platforms.
Audience Targeting
Audience targeting involves the strategic selection and segmentation of potential customers based on behavioral, demographic, or psychographic data. The regulatory framework affects audience targeting through enhanced transparency obligations that require AI model providers to disclose system capabilities and data processing methods. This transparency enables marketing professionals to better evaluate targeting tool effectiveness while ensuring compliance with evolving privacy and AI governance requirements.
Risk Assessment
Risk assessment in marketing contexts involves evaluating potential negative outcomes from AI-powered marketing activities. The EU framework establishes specific risk assessment requirements for high-capability AI models, particularly those exceeding computational thresholds that trigger systemic risk obligations. Marketing organizations must now consider these enhanced compliance requirements when selecting AI tools for customer data analysis and automated decision-making processes.
Compliance Pathways
Compliance pathways represent the structured approaches organizations can take to meet regulatory requirements while maintaining operational effectiveness. The voluntary code of practice creates multiple compliance pathways for AI model providers, which indirectly affects marketing teams through tool availability and feature sets. Understanding these pathways helps marketing professionals anticipate changes in AI tool capabilities and plan technology adoption strategies accordingly.
Technology Adoption
Technology adoption describes the process by which organizations integrate new technological solutions into their operational workflows. The graduated enforcement timeline for AI regulation affects technology adoption decisions by providing adjustment periods for evaluating current AI tool usage and developing compliant workflows. Marketing teams must balance innovation opportunities with compliance obligations when adopting new AI-powered marketing technologies.
Model Capabilities
Model capabilities refer to the specific functions and performance characteristics of AI systems used in marketing applications. The documentation requirements under the EU framework mandate disclosure of model capabilities, enabling marketing professionals to make more informed decisions about tool selection for specific use cases. Understanding model capabilities becomes crucial for assessing whether AI tools meet both performance requirements and compliance obligations.
Regulatory Frameworks
Regulatory frameworks encompass the comprehensive set of rules, guidelines, and enforcement mechanisms governing AI development and deployment. The EU AI Act represents a landmark regulatory framework that affects marketing technology through compliance requirements for general-purpose AI models. Marketing organizations must navigate these frameworks to ensure their AI-powered activities remain compliant while maximizing business effectiveness.
Data Processing
Data processing involves the collection, analysis, and utilization of information for marketing purposes using AI-powered systems. The EU code of practice affects data processing through enhanced transparency and documentation requirements that govern how AI models handle training data and user information. Marketing teams must understand these requirements to ensure their data processing activities align with regulatory expectations.
System Integration
System integration refers to the process of combining different AI tools and platforms to create cohesive marketing technology stacks. The regulatory framework affects system integration through requirements for downstream AI system compliance and provider accountability. Marketing organizations must consider how integration decisions impact overall compliance posture and ensure that combined systems meet regulatory obligations across the entire technology stack.
Summary
Who: Microsoft president Brad Smith signals likely company participation in EU voluntary AI compliance framework while Meta Platforms chief global affairs officer Joel Kaplan announces refusal to sign code of practice.
What: The European Commission's voluntary General-Purpose AI Code of Practice establishes transparency, copyright, and safety obligations for AI model providers ahead of mandatory AI Act enforcement.
When: Smith's comments came July 18, 2025, one day after the AI Office invited providers to sign the code published July 10, with mandatory compliance beginning August 2, 2025.
Where: The framework applies to providers placing general-purpose AI models on the European Union market regardless of provider location, with particular focus on high-capability systems.
Why: The code provides voluntary compliance pathways ahead of mandatory enforcement while addressing transparency needs, copyright protection, and systemic risk management for increasingly capable AI systems.