Google and Meta sue California over social media age restrictions law

Google, YouTube and Meta filed separate lawsuits November 13 challenging California Senate Bill 976 provisions restricting personalized feeds for minors.

Google, YouTube and Meta filed separate federal lawsuits on November 13, 2025, challenging California Senate Bill 976's restrictions on personalized content feeds for users under 18, marking a significant confrontation between tech platforms and state regulators over First Amendment rights and content curation.

The complaints, filed in the United States District Court for the Northern District of California, target specific provisions requiring parental consent before minors can access personalized feeds and mandating default settings that disable such features. Both companies argue these restrictions violate their editorial rights to curate third-party content and burden minors' ability to access speech.

Google and YouTube filed their complaint as Case No. 5:24-cv-07886, while Meta filed separately as Case No. 3:25-cv-09792. Both cases name California Attorney General Rob Bonta as defendant in his official capacity.

Senate Bill 976, signed by Governor Gavin Newsom in September 2024 and officially known as the Protecting Our Kids from Social Media Addiction Act, defines "addictive feeds" as websites where user-generated content is "recommended, selected, or prioritized for display to a user based ... on information provided by the user, or otherwise associated with the user or the user's device." The law was scheduled to take effect January 25, 2025, but enforcement of the challenged provisions was temporarily enjoined while the NetChoice appeal described below was pending.

YouTube argues recommendation system protects minors

Google's 23-page complaint emphasizes that YouTube is the most-viewed television distribution platform and a leading destination for music and podcasts, with a library of more than 20 billion videos. More than 500 hours of video are uploaded to the platform every minute by creators spanning more than 100 countries and 80 languages.

The company argues its personalized recommendations reflect "YouTube's judgment as to how to arrange and prioritize videos to ensure that each user can find videos that are relevant, trustworthy, and valuable for that particular user." These recommendations incorporate YouTube's values about appropriate content, particularly for younger audiences.

According to the complaint, YouTube's Recommender System was created by YouTube engineers who continue to refine it to deliver "relevant, enjoyable, valuable, and high-quality content for a particular user at a particular moment." The system uses numerous signals ranging from basic information like language and location to complex factors like what new genre of video a user might enjoy based on preferences of similar users.
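
The complaint describes the Recommender System only at this level of generality. As a purely illustrative sketch of what multi-signal ranking means, consider the toy Python example below; every class, signal, and weight in it is hypothetical and stands in for a proprietary system the complaint does not detail.

```python
# Toy sketch of signal-based ranking, loosely in the spirit of the
# complaint's description. All names and weights are invented.
from dataclasses import dataclass, field

@dataclass
class Video:
    video_id: str
    language: str
    topic: str

@dataclass
class UserContext:
    language: str
    watched_topics: set = field(default_factory=set)
    similar_user_topics: set = field(default_factory=set)  # tastes of similar users

def score(video: Video, user: UserContext) -> float:
    """Combine a basic signal (language) with taste-based signals."""
    s = 0.0
    if video.language == user.language:
        s += 1.0  # basic signal: language match
    if video.topic in user.watched_topics:
        s += 2.0  # past engagement with this topic
    elif video.topic in user.similar_user_topics:
        s += 1.0  # a new genre inferred from similar users' preferences
    return s

def recommend(candidates: list[Video], user: UserContext, k: int = 5) -> list[Video]:
    """Rank candidates for one user at one moment and keep the top k."""
    return sorted(candidates, key=lambda v: score(v, user), reverse=True)[:k]
```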

For children and teens specifically, YouTube has developed quality principles in consultation with experts that prioritize content demonstrating healthy habits, promoting critical thinking, and including life lessons with positive characters. The system actively deprioritizes heavily commercial content, material encouraging dangerous activities, and sensational or misleading information.

YouTube also ensures teens are not overexposed to content that may be harmful if viewed repeatedly, including material displaying social aggression, portraying delinquency, depicting teens as cruel or malicious, or comparing physical features and idealizing certain body types. Videos falling within certain categories like nudity and sexually suggestive content, violent or graphic content, and vulgar language are age-gated and not recommended to users under 18.
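
Read together, these passages describe two distinct mechanisms: hard exclusion of age-gated categories from recommendations to minors, and soft deprioritization of other categories. A minimal sketch of that two-tier logic, using invented category names and an invented demotion factor, might look like this:

```python
# Hypothetical two-tier teen filter: age-gated categories are excluded
# outright for minors, sensitive categories are demoted but not removed.
AGE_GATED = {"nudity_or_sexually_suggestive", "violent_or_graphic", "vulgar_language"}
DEPRIORITIZED_FOR_TEENS = {
    "heavily_commercial", "dangerous_activities", "sensational_or_misleading",
    "social_aggression", "idealized_body_types",
}

def eligible_for_user(category: str, user_age: int) -> bool:
    """Age-gated categories never enter a minor's recommendation pool."""
    return user_age >= 18 or category not in AGE_GATED

def adjusted_score(base_score: float, category: str, user_age: int) -> float:
    """Demote, rather than remove, sensitive categories for users under 18."""
    if user_age < 18 and category in DEPRIORITIZED_FOR_TEENS:
        return base_score * 0.5  # invented demotion factor
    return base_score
```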

Meta details editorial control across three platforms

Meta's 30-page complaint describes how Facebook, Instagram and Threads enable billions of people worldwide to share ideas and discuss important topics including politics, public health and social issues through curated feeds. The company contends its content moderation policies and Community Standards reflect editorial judgments about appropriate content, implemented through both human reviewers and algorithms designed by Meta employees.

According to the complaint, Meta presents each user with "a continually updating stream of other users' posts," prioritized through algorithms operating at scale. The selection and ranking are based on users' expressed interests and past activities, as well as more general features of the communication or its creator.

Meta's Community Standards detail 23 separate policies restricting specific categories of content related to coordinating harm and promoting crime, dangerous organizations and individuals, fraud and scams, restricted goods and services, violence and incitement, adult sexual exploitation, bullying and harassment, child sexual exploitation, human exploitation, suicide and self-injury, eating disorders, adult nudity and sexual activity, hateful conduct, privacy violations, violent and graphic content, authentic identity representation, cybersecurity, inauthentic behavior, misinformation, spam, intellectual property infringement, and locally illegal content.

In the second quarter of 2025 alone, according to the complaint, Meta took action on and removed 6.7 million pieces of content on Facebook and 10 million pieces on Instagram that violated its suicide, self-injury and eating disorder policy; 4.1 million pieces on Facebook and 3.3 million on Instagram violating its bullying and harassment policy; and 165 million pieces on Facebook and 93.2 million on Instagram violating its spam policy.

Meta emphasizes its Feeds are "not feeds whose algorithms respond solely to how users act online—giving them the content they appear to want, without any regard to independent content standards." The company's Community Standards make "a wealth of user-agnostic judgments about what kinds of speech, including what viewpoints, are not worthy of promotion."
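
That description amounts to a two-layer editorial model: user-agnostic Community Standards applied to all content first, then personalized ranking of whatever survives. A hypothetical sketch of such a pipeline, with invented function names, is below.

```python
# Sketch of a two-layer feed pipeline: a user-agnostic policy layer
# followed by a personalized ranking layer. Names are hypothetical.
from typing import Callable

def build_feed(posts: list, user: object,
               violates_standards: Callable[[object], bool],
               personalized_score: Callable[[object, object], float],
               k: int = 25) -> list:
    """Drop policy-violating posts, then rank the remainder for this user."""
    eligible = [p for p in posts if not violates_standards(p)]  # user-agnostic layer
    eligible.sort(key=lambda p: personalized_score(p, user), reverse=True)  # personalized layer
    return eligible[:k]
```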

Supreme Court precedent cited extensively

Both companies cite the Supreme Court's 2024 decision in Moody v. NetChoice extensively throughout their complaints. The court held that "deciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own" that receives First Amendment protection.

According to Google's complaint, this ruling established that platforms' curation decisions remain protected when implemented through algorithms that "simply act as a tool to implement a conscious human choice." The complaint quotes the court's recognition that "the presentation of an edited compilation of speech generated by other persons is a staple of most newspapers' opinion pages" and similar curated content "falls squarely within the core of First Amendment security."

Meta's complaint emphasizes the Supreme Court's statement that a platform's activity "is not unlike traditional media curated by human editors." The company argues its Feeds receive First Amendment protection because they represent Meta's "expressive choices" in deciding which user-generated content to display and how to organize it.

California's exemptions create content-based restrictions

California's law exempts feeds from coverage if platforms curate content only based on information "not persistently associated with the user or user's device" or search terms without persistent association. The Act also exempts feeds displaying "exclusively the next media in a preexisting sequence from the same author, creator, poster, or source" and feeds operated primarily for cloud storage purposes.

According to Meta's complaint, these exemptions create content-based and speaker-based restrictions favoring certain editorial choices over others. The law exempts websites where "interactions between users are limited to commercial transactions or to consumer reviews of products, sellers, services, events, or places."

The company notes that video and music streaming services like Netflix and Hulu can continue offering personalized feeds without restriction, since they primarily distribute provider-generated rather than user-generated content. The Act thereby "elevates provider-generated content over user-generated content," treating services differently based solely on the type of speech they primarily host.

Google's complaint emphasizes that feeds curated according to California's prescribed editorial standards—such as chronological ordering or single-creator sequences—receive exemption from the law's requirements. This means "any service can continue to provide the very same feeds that the State asserts are purportedly harmful if other aspects of the overall website mean those feeds are not a 'significant part' of the website."
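
Taken together, the definition quoted earlier and these exemptions can be read as a coverage test. The sketch below expresses one possible reading of that test as a predicate; it is an interpretive illustration of the statutory language quoted in the complaints, with invented field names, not a statement of what the Act actually requires.

```python
# One possible reading of SB 976's "addictive feed" coverage test,
# based on the definition and exemptions quoted in the complaints.
# Field names are invented; this is illustrative, not legal analysis.
from dataclasses import dataclass

@dataclass
class FeedDescription:
    uses_persistent_user_info: bool          # info "persistently associated" with user or device
    driven_only_by_current_search: bool      # search terms without persistent association
    preexisting_sequence_same_source: bool   # "next media in a preexisting sequence"
    primarily_cloud_storage: bool
    commerce_or_reviews_only: bool           # interactions limited to transactions or reviews

def is_covered_addictive_feed(feed: FeedDescription) -> bool:
    """True if the feed appears to fall within the Act's restrictions."""
    if not feed.uses_persistent_user_info:
        return False  # curation not tied to persistent user information
    if feed.driven_only_by_current_search:
        return False  # exemption for unstored search terms
    if feed.preexisting_sequence_same_source:
        return False  # exemption for sequential playback from one source
    if feed.primarily_cloud_storage:
        return False  # exemption for cloud storage services
    if feed.commerce_or_reviews_only:
        return False  # exemption for commerce- and review-only sites
    return True
```

On this reading, identical content can fall inside or outside the Act depending solely on which branch of the test a service's feed design triggers, which is the crux of the companies' content-based and speaker-based objections.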

Specific examples demonstrate practical impacts

Google identifies numerous specific examples where California's restrictions would prevent YouTube from serving relevant content to teenage users. The platform could not recommend additional science experiment videos to teens who demonstrated interest in such content through previous viewing. Similar restrictions would apply to local scholarship information, California trade school content, college and university information, military recruitment materials, local volunteering opportunities, content from the San Francisco Chronicle, and posts from California political officials.

According to the complaint, if a teen searches for "San Francisco Giants" content, YouTube might be permitted to show such content briefly. However, "it may be impermissible to display such content to the same user again outside of their specific search" because doing so would rely on information persistently associated with that user.

The restrictions would also prevent platforms from suppressing repetitive content. According to Meta's complaint, if a teen has already seen a breaking news story or a sound clip from a recent debate, "the Act would preclude Meta from choosing not to show the exact same content to the same teen twice" because making that choice requires accounting for the user's previous interactions with media.

Meta points to similar concerns with Instagram's Teen Accounts, which provide Limited Content, Ages 13+, and More Content settings developed by Meta personnel based on their judgment regarding appropriate content for different restriction levels. Meta defaults teens to the Ages 13+ setting, but teens may elect the more restrictive Limited Content setting, or with parental permission, opt for the More Content setting.

According to the complaint, "the Act would restrict Meta from disseminating curated compilations of third-party speech while accounting for individual users' preferences as between the Limited Content and Ages 13+ content settings" because doing so relies on "information provided by the user" without qualifying for the Act's exceptions.

The law would also interfere with Meta's ability to de-prioritize content based on users' expressed preferences. For example, "the Act would restrict Meta from de-prioritizing content that it predicts will be of low value to a particular user based on their content preference settings across Facebook, Instagram, and Threads." Users might indicate they want to see less political content or sensitive material about certain topics, but the Act's restrictions on using information "provided by the user" could prevent Meta from honoring these preferences.

Companies argue law is seriously underinclusive

Both complaints argue California's law is seriously underinclusive because it permits minors to access the same content through alternative means while restricting access through personalized feeds. According to Google's complaint, "parental consent does not actually address the harm to minors that the State asserts" because "if a child or teen is suffering severe mental or physical harm from social media as the State claims (but does not prove), then one parent authorizing that activity to continue would not solve the issue."

The law permits websites to show precisely the same content to minors if the minors search for it, "so long as one parent ... says it's OK," or if the website qualifies for any of the Act's exceptions. Meta's complaint notes that California Attorney General Rob Bonta has judicially admitted that "the Act ... allows platforms to show the same content to [minors] through different means"—in other words, through different State-preferred editorial choices.

Both companies emphasize the law entirely exempts personalized feeds on countless websites that primarily disseminate first-party and non-user content. According to Meta's complaint, this means "Facebook, Instagram, and Threads may not display personalized feeds to minors by default, but services that do not display user-generated content as a significant part of their service—like video and music streaming services—can continue to offer feeds, including personalized feeds, and curate them however they wish without restriction."

The complaints also argue the restrictions are seriously overinclusive. Meta's complaint notes "the Act's restrictions sweep in services—like Facebook, Instagram, and Threads—that voluntarily engage in substantial content moderation efforts, or voluntarily provide tools allowing parents and guardians to set daily time limits for their teens or limit their use during select days and hours, among many available supervision tools."

Both companies emphasize the law's one-size-fits-all approach treats all minors identically "from websites' youngest users to seventeen-year-olds, regardless of differences in those minors' developmental stage." This overbreadth is particularly significant given that both platforms already prohibit users under 13 from creating accounts.

Extensive parental controls already exist

Meta emphasizes it offers supervision tools on Facebook and Instagram allowing parents to set daily time limits for teens or limit use during select days and hours; set reminders to close the app; see which accounts their teen follows, which accounts follow their teen, and which accounts their teen has blocked; view content topics their teen has chosen to see; remove teens' ability to see, leave, or receive comments under posts; see their teen's settings for account privacy and messaging; see and change their teen's content settings; and approve or deny their teen's requests to relax default protection and privacy settings.

According to the complaint, Meta provides publicly accessible and free educational resources for interested parents to learn how to use these tools through its Family Center and various help documentation. The company notes that by virtue of teens' Instagram accounts historically being linked to Threads accounts, these supervision tools were also available for the Threads service until approximately November 2025.

Google similarly highlights YouTube Kids for users under 13 and Supervised Experiences allowing parents to supervise and manage children's activity on YouTube. Through these features, parents can block specific channels; adjust their child's permissible content-level settings; review, pause, or clear their child's watch history; view a teen user's number of uploads, subscriptions, and comments; and receive notifications when their teen uploads a video or starts a livestream.

The platform has also implemented digital wellbeing tools including default settings that turn off YouTube's autoplay feature, send "take a break" reminders, or remind users to go to bed. YouTube notifications are silent by default for all mobile app users on eligible devices between 10:00 PM and 8:00 AM.
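
The overnight notification window is one of the few mechanically precise behaviors described in the complaint. A minimal sketch of such a quiet-hours check, assuming device-local time and a window that wraps past midnight, could read:

```python
# Minimal quiet-hours check for the 10:00 PM to 8:00 AM window the
# complaint describes. Assumes device-local time; names are invented.
from datetime import time

QUIET_START = time(22, 0)  # 10:00 PM
QUIET_END = time(8, 0)     # 8:00 AM

def notifications_silenced(now: time) -> bool:
    """True inside the overnight window [22:00, 08:00), which wraps midnight."""
    return now >= QUIET_START or now < QUIET_END
```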

Both complaints note that beyond platform-specific tools, parents have numerous options for restricting or limiting their children's internet access generally. Cell carriers and broadband providers offer tools to block certain apps and sites, ensure children contact only trusted individuals, and restrict screen time during certain hours. Many wireless routers offer parental control settings allowing parents to block certain websites, set content filters, monitor website visits, turn off internet at particular times, pause internet access for specific devices or users, and limit time spent on particular websites.

Additional parental controls exist at the device level. iPhones and iPads enable parents to approve or decline app download requests, limit device time, choose which applications children can use, set age-related content restrictions, filter online content, and control privacy settings. Google and Microsoft offer similar parental controls for devices using their operating systems. Numerous third-party applications also allow parents to control and monitor children's use of internet-connected devices and online services.

NetChoice litigation preceded individual lawsuits

The lawsuits follow a complex procedural history involving NetChoice, an internet trade association representing both companies. NetChoice filed its lawsuit challenging SB 976 on November 12, 2024. Before the Act's January 25, 2025 effective date, the federal court granted a preliminary injunction in part but declined to enjoin most of SB 976's provisions.

According to both complaints, the court rejected NetChoice's facial challenge to the personalized-feed restriction because NetChoice had "not made a record ... to show facial unconstitutionality." The court also concluded that NetChoice lacked associational standing to raise an as-applied challenge for its members, reasoning that "NetChoice's individual members" would need "to participate in this lawsuit" for the court to decide the as-applied First Amendment claims.

NetChoice appealed and the Ninth Circuit initially granted a temporary injunction barring the State from enforcing the Act during the appeal. After expedited briefing and argument, the Ninth Circuit affirmed the lower court's ruling on November 6, 2025, holding that NetChoice lacked associational standing to mount an as-applied challenge to the personalized-feed provisions and that NetChoice failed to show those provisions were facially unconstitutional.

The court of appeals specifically noted that the lower court had observed "there is little question" that NetChoice's members, including Google and YouTube, "would otherwise have standing to sue in their own right" to challenge the Act's constitutionality. The Ninth Circuit denied NetChoice's petition for rehearing and rehearing en banc on November 6, 2025, and issued its mandate on November 13, 2025—the same day Google, YouTube and Meta filed their individual lawsuits.

Broader regulatory context

California's restrictions occur amid broader regulatory developments affecting social media platforms and technology companies. Governor Newsom signed SB 976 as part of a package of legislation targeting child online safety in September 2024. According to a Governor's Office press release issued October 13, 2025, Newsom subsequently signed additional legislation in October 2025 to "further strengthen the state's protections for children online and create safeguards for new and emerging technology, such as artificial intelligence."

The October 2025 package included SB 243, which requires chatbots to disclose they are AI and tell minors every three hours to "take a break"; AB 1043, which requires device makers like Apple and Google to implement tools to verify user ages in their app stores; AB 56, which requires social media platforms to add labels warning users of potential mental health risks; and AB 621, which heightens penalties for companies whose platforms distribute deepfake pornography.

In his statement announcing the October signings, Governor Newsom said: "Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids. We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability."

New COPPA rules took effect June 23, 2025, strengthening children's online privacy protections at the federal level. The amendments significantly expanded disclosure requirements for operators and mandated written comprehensive security programs with specific elements. According to FTC documentation, violations can result in civil penalties of up to $43,792 per violation.

Google tightened advertising rules for minors in January 2025, consolidating five distinct policies into a comprehensive Ad protections for children and teens policy hub. The policies align with various child-directed regulations including the Children's Online Privacy Protection Act, the Age Appropriate Design Code, and the Australia Online Safety Act.

Instagram adopted PG-13 movie rating standards for Teen Accounts in October 2025, filtering mature content for users under 18. Meta's announcement indicated the company reviewed its age-appropriate content guidelines against content standards informing age-appropriate movie ratings "with the goal of having teens see content that is generally similar to what they would see in an age-appropriate movie."

Enforcement authority and potential penalties

California Attorney General Rob Bonta has exclusive enforcement authority under the Act. According to the statute, the law "may only be enforced in a civil action brought in the name of the people of the State of California by the Attorney General."

Both complaints note that Bonta has publicly pursued enforcement against Meta and other platforms under various laws. Google's complaint specifically references litigation Bonta filed against Meta in October 2023 "advancing claims related to minors' online welfare." Meta's complaint notes the attorney general "has publicly pursued enforcement against Meta under other laws."

According to Google's complaint, Bonta has stressed his desire to enforce SB 976 as soon as possible, arguing in opposition to NetChoice's motion for a stay pending appeal that the State was "pressing that the Court adjudicate those elements of SB 976 that were originally intended to go into effect on January 1, [2025,] so that the State may proceed to enforce them."

The lawsuits emphasize that platforms face irreparable First Amendment injury if forced to comply with the restrictions. Google's complaint states: "The loss of First Amendment freedoms, for even minimal periods of time, unquestionably constitutes irreparable injury." Both companies argue this injury is imminent because California "has not suggested that the ... law will not be enforced."

Google's complaint brings four counts: violation of the First Amendment as incorporated by the Fourteenth Amendment (Count I); void for vagueness under the Fourteenth Amendment (Count II); equitable relief (Count III); and declaratory judgment (Count IV). The void for vagueness claim applies only to the Default-Restriction Provisions found in sections 27002(b)(2) and 27002(b)(4).

The vagueness challenge centers on the term "verified parent," which the Act does not define. According to the complaint, "the term 'verified parent' does not have a readily understood plain meaning. And verifying that an individual is the parent of a minor user encompasses a wide range of options that impose an equally wide range of burdens on YouTube, parents, and minor users."

Google argues the Act obligates YouTube to restructure its site to implement default settings without providing "appropriate guidance or standards from the State on the meaning of 'verified parents.'" The company contends this vagueness "permits Defendant to engage in arbitrary enforcement against YouTube."

Meta's complaint brings a single count alleging violation of the First Amendment as incorporated by the Fourteenth Amendment, challenging the personalized feeds restrictions as applied to Facebook, Instagram and Threads. The company emphasizes it "does not bring this complaint lightly" but does so "now that the Ninth Circuit has ruled that Meta's individual participation is necessary if an as-applied challenge" is to proceed.

Both companies request declarations that the challenged provisions violate the First Amendment; permanent injunctions prohibiting enforcement against their platforms; entry of judgment in their favor; attorneys' fees and costs under 42 U.S.C. § 1988(b) for successful Section 1983 claims; and such other relief as the court deems proper and just.

Summary

Who: Google LLC, YouTube LLC and Meta Platforms Inc. filed lawsuits against California Attorney General Rob Bonta in his official capacity as enforcement authority for Senate Bill 976, which Governor Gavin Newsom signed in September 2024.

What: The companies challenge provisions requiring parental consent before minors access personalized feeds (Section 27001(a)) and mandating default settings that either disable personalized feeds entirely (Section 27002(b)(4)) or limit access to one hour daily (Section 27002(b)(2)), arguing these restrictions violate First Amendment rights to curate third-party content and disseminate speech while burdening minors' constitutional right to access information.

When: Both complaints were filed November 13, 2025, the same day the Ninth Circuit issued its mandate in the NetChoice litigation determining the trade association lacked standing for as-applied challenges to the personalized feed provisions, clearing the way for individual companies to bring their own cases.

Where: The cases were filed in the United States District Court for the Northern District of California, with venue in San Jose Division due to companies' principal places of business in Santa Clara County (Google and YouTube) and San Mateo County (Meta), where the injuries giving rise to the actions have been and will continue to be suffered.

Why: The platforms contend California's law unconstitutionally burdens their editorial activity by mandating specific approaches to organizing content while restricting their ability to use personalized information to make speech more effective for individual users, particularly minors. They argue the law substitutes government control for both parental supervision and platform judgment through seriously underinclusive and overinclusive restrictions that fail to serve the State's asserted interests through the least restrictive means available.