Legal experts warn AI terms of service may prove unenforceable in court

New research argues that licensing restrictions on AI models face significant legal hurdles, casting doubt on tech firms' ability to enforce them.

In an academic paper published on December 9, 2024, Stanford Law Professor Mark A. Lemley and Princeton University Professor Peter Henderson argue that artificial intelligence companies' terms of service are built on legally uncertain ground, potentially undermining efforts to control how their technology is used.

According to the research paper, titled "The Mirage of Artificial Intelligence Terms of Use Restrictions," major AI companies like OpenAI, Google, and Meta commonly attach restrictive terms to both their models and model outputs. These terms typically prohibit activities ranging from creating competing AI models to spreading disinformation.

However, the researchers found that companies face significant challenges in enforcing these restrictions. "We argue that there is little basis for a company to claim IP rights in anything its generative AI delivers to its users," the paper states. While AI companies sell access to their systems, the researchers note, it remains unclear what legal rights those companies actually hold in their models or outputs.

The analysis highlights several key barriers to enforcement. First, model creators likely hold no copyright in their model weights, the core numerical parameters that determine how AI systems process information. These weights, often numbering in the billions, are produced by automated training processes rather than direct human authorship, and copyright protects only works of human authorship.

Additionally, most companies explicitly disclaim ownership of AI-generated outputs. OpenAI's terms of service, for example, assign any potential rights in outputs to users while still attempting to restrict how those outputs can be used competitively.

The paper examines legal precedent across multiple U.S. circuit courts, finding that attempts to enforce restrictions through contract law may be preempted by the federal Copyright Act in many jurisdictions. Recent cases such as ML Genius Holdings LLC v. Google LLC suggest courts are increasingly skeptical of attempts to use terms of service to control the copying of uncopyrightable content.

For open-source or "open-weight" models that publicly release their parameters, enforcement faces even steeper hurdles. Traditional open-source licensing depends on underlying copyright protection; without it, restrictions on downstream use become legally difficult to maintain.

The research carries implications for ongoing policy debates. While companies like Meta have promoted their terms of service as key tools for preventing AI misuse, the paper suggests such terms may prove ineffective. The National Telecommunications and Information Administration cited licensing restrictions in its July 2024 recommendations on AI safety, but the authors argue policymakers should be cautious about relying on terms of dubious enforceability.

The findings also raise questions about market competition. Many AI companies' terms prohibit using their outputs to train competing models, even as those same companies train their own systems on publicly available data. Without a clear legal basis for such restrictions, the researchers say, this asymmetry deserves scrutiny.

The 68-page analysis, published as a Princeton University Program in Law & Public Affairs Research Paper, has received 433 downloads and 1,467 abstract views since its December 2024 release. Neither OpenAI nor any other major AI company had publicly responded to the findings as of publication.