A federal court ordered Google to share its search data with rivals in September 2025. Figuring out how to actually do that fairly is a harder problem than the order itself acknowledges - and a new legal paper argues the answer was worked out by computer scientists decades ago.
Giovanna Massarotto, a lecturer at the University of Pennsylvania Carey Law School, published "Algorithmic Remedies for Google's Data Monopoly" on SSRN on August 14, 2025, with a last revision date of November 27, 2025. The paper, forthcoming in the Harvard Business Law Review, has attracted 2,680 abstract views and 339 downloads on SSRN. Massarotto announced the publication via LinkedIn, drawing more than 107 reactions and engagement from legal and advertising professionals. The paper's central argument is direct: when courts order companies to share indivisible digital resources, they face the same coordination problem that computer science solved in the 1960s through algorithms governing shared access to databases and processors. That problem is called mutual exclusion.
The antitrust backdrop
Two federal antitrust cases form the legal foundation for the paper. On August 5, 2024, Judge Amit P. Mehta of the U.S. District Court for the District of Columbia found that Google had abused its monopoly in the online search market, violating Section 2 of the Sherman Act. Then, on September 2, 2025, the same judge imposed data-sharing as the primary remedy - requiring Google to grant so-called "Qualified Competitors" a one-time snapshot of the Google Search Index and access to search syndication services on commercial terms.
PPC Land reported on the September 2, 2025 ruling at the time, noting that the court ordered Google to make available "certain search index and user-interaction data, though not ads data, as such sharing will deny Google the fruits of its exclusionary acts and promote competition." Google subsequently filed an appeal in January 2026 seeking to pause the data-sharing mandates, characterizing them as threats to user privacy and competitive innovation.
The second case concerns ad tech. On April 17, 2025, Judge Leonie Brinkema of the U.S. District Court for the Eastern District of Virginia found Google had willfully acquired and maintained monopoly power in the publisher ad server and ad exchange markets, violating Sections 1 and 2 of the Sherman Act. A remedies trial ran from September 22 through early October 2025, with closing arguments on November 17. PPC Land has tracked both the remedies trial witness preparations and the final competing proposals filed by the DOJ and Google in November 2025. Data sharing features prominently among the DOJ's demands in both cases.
State-level proceedings add further complexity. The Ohio Attorney General sued Google in June 2021 seeking to have Google Search classified as a common carrier or public utility under Ohio common law - a case now before the Ohio Court of Appeals. Texas, joined by fifteen other states and Puerto Rico, separately alleges that Google unlawfully dominated digital advertising markets.
Why scale is the core of the problem
Both judges placed data scale at the center of their findings. According to Massarotto's paper, Judge Mehta found that users enter nine times more queries on Google than on all rivals combined - and on mobile devices, that multiplier rises to 19 times. "Armed with its scale advantage, Google continues to use that data to improve search quality," Mehta observed, according to the paper. Judge Brinkema similarly noted that "scale is a crucial factor for ad tech companies' ability to compete because of the importance of big data analytics for optimizing ad tech services and the significant network effects that exist in programmatic advertising."
This scale was not accidental. Google paid Apple $20 billion in a single year to secure Google Search as the default search engine on Apple devices - a practice the court found illegal. Similar exclusionary agreements existed with Motorola, Samsung, AT&T, Verizon, and Mozilla. The agreements allowed Google to collect query volumes that rivals could not replicate, creating what the paper describes as a self-reinforcing engine. As PPC Land documented in its coverage of the DOJ's final remedies proposals, the government characterized this as "unlawfully gained scale advantages" requiring structural correction.
Google's former CEO himself acknowledged the dynamic: "Scale is the key. We just have so much scale in terms of the data we can bring to bear," according to the complaint cited in the paper.
The resource allocation problem courts haven't solved
Massarotto's paper identifies a critical gap in how courts have approached data-sharing remedies. Ordering a company to share data is one thing; deciding who gets access, in what order, and under what rules is quite another. The paper draws a parallel to a century of antitrust experience with physical infrastructure. In the 1912 Terminal Railroad case, the Supreme Court required a railroad monopolist to allow competing lines access to its St. Louis bridge and switching facilities on non-discriminatory terms - but left the implementation to the association itself, giving no technical guidance.
More than a century later, similar questions arise with Google's data centers and search indexes. The Google Search Index stores approximately 100 million gigabytes of web content as of 2025, according to the paper. Duplicating that infrastructure would not only be enormously expensive - Google has disclosed plans to spend approximately $75 billion on data centers in 2025 alone - but would also raise privacy and environmental concerns that the paper argues make copying less efficient than sharing.
The paper points to the European experience as a cautionary illustration. The Digital Markets Act entered into force in November 2022 and became binding for designated gatekeepers including Google in March 2024. Rivals have since claimed Google is intentionally failing to implement data sharing properly. In March 2025, the European Commission opened non-compliance investigations against Alphabet under the DMA. PPC Land has covered this regulatory pressure extensively, including 18 civil society groups urging the Commission in March 2026 to act on Google's search non-compliance. Centralized approaches - where Google itself controls implementation - appear, according to the paper, to create conditions for discrimination.
The mutual exclusion parallel
Here the paper introduces its main theoretical contribution. In 1965, Dutch computer scientist Edsger Dijkstra formalized a problem he called mutual exclusion: how do multiple processes share a resource that only one can use at a time? The classic example is an office printer. Ten people share it but only one print job can run at a time; a system must decide whose job goes next. Databases face the same constraint at scale. If multiple processes write to the same database simultaneously without coordination, the data can be corrupted and the system can fail.
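The coordination failure Dijkstra formalized can be shown in a few lines of code. The sketch below is illustrative only and is not drawn from the paper: a lock serves as the mutual exclusion primitive, and four threads increment a shared counter, with the lock guaranteeing that only one thread at a time executes the critical section.

```python
import threading

counter = 0
lock = threading.Lock()  # the mutual exclusion primitive

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:        # only one thread at a time may enter
            counter += 1  # critical section: the shared resource

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: the lock serializes every update
```

Without the lock, interleaved updates could silently lose increments - the software analogue of two print jobs writing to the printer at once.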
According to the paper, "the legal challenge of sharing indivisible company's resources is a version of a problem computer science solved decades ago: the mutual exclusion problem." Computer scientists spent sixty years developing algorithms that solve this problem while satisfying two core requirements: efficiency and non-discrimination. Those are precisely the requirements antitrust law demands when mandating facility sharing.
The analogy is not superficial. Databases and data centers are indivisible in the same functional sense as a railroad bridge or an electrical grid. Only so many queries can be served at once. Data centers cannot process an infinite number of simultaneous requests without degrading performance or corrupting outputs. Even the most advanced commercial databases face coordination constraints when managing concurrent access.
Three algorithmic approaches
Massarotto develops a framework of three approaches derived from distributed computing research, each with parallels in existing legal mechanisms.
The token-based approach works by creating a unique digital object - a token - that circulates among participants in a network. Only the participant currently holding the token can access the shared resource. In law, the paper draws an analogy to a deed or title document, or to the exclusive access period in a time-share property. Token-based systems can implement fairness by broadcasting token requests across all participants rather than routing them through a hierarchy. Hierarchical tree structures, the paper notes, risk recreating the same single-point-of-failure problems as centralized systems and can disadvantage participants lower in the hierarchy.
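A minimal simulation can make the token mechanism concrete. The sketch below is illustrative and not taken from the paper: the participant names are hypothetical, and for simplicity the token rotates around a ring rather than being broadcast on request, which is the fairness refinement the paper emphasizes.

```python
from collections import deque

class TokenRing:
    """Illustrative token-based mutual exclusion: only the current
    token holder may access the shared resource."""

    def __init__(self, participants):
        self.ring = deque(participants)  # front of the deque holds the token
        self.log = []

    def holder(self):
        return self.ring[0]

    def access(self, participant, resource):
        if participant != self.holder():
            raise PermissionError(f"{participant} does not hold the token")
        self.log.append((participant, resource))

    def pass_token(self):
        self.ring.rotate(-1)  # token moves to the next participant

# Hypothetical network echoing the parties described in the cases.
ring = TokenRing(["Google", "DOJ", "rival-1", "rival-2"])
ring.access("Google", "search-index")
ring.pass_token()
ring.access("DOJ", "search-index")  # DOJ now holds the token
```

Any participant that tries to access the resource without the token is refused - which is the whole point of the design.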
The permission-based approach draws on the Bakery algorithm developed by Leslie Lamport in 1974. Each process takes a number - like a bakery queue ticket - and the process with the smallest number gets access first. Timestamps establish a total order of events and determine priority when two requests arrive simultaneously. The paper notes parallels to the public land registry, which assigns priority based on recording order, and to patent law's first-to-file rule. In a Google context, this approach would automate authorization: a rival seeking access to Google's data facilities would need authorization from all other stakeholders in a distributed network, with timestamps resolving conflicts. No single party - including Google itself or the DOJ - would hold unilateral priority-setting power.
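The ticket-taking logic can be sketched briefly. This is an illustrative, single-threaded simulation of the Bakery algorithm's numbering scheme, not Lamport's full shared-memory protocol: each requester takes a number one greater than any outstanding ticket, and lexicographic (ticket, id) ordering resolves any tie deterministically.

```python
class BakeryQueue:
    """Illustrative sketch of Lamport-style Bakery numbering: take a
    ticket one greater than any outstanding ticket; the smallest
    (ticket, id) pair goes first, so ties break deterministically."""

    def __init__(self):
        self.tickets = {}  # requester id -> ticket number

    def request(self, requester):
        # "take a number": one more than any ticket currently held
        self.tickets[requester] = max(self.tickets.values(), default=0) + 1

    def next_in_line(self):
        # lexicographic (ticket, id) order: first come, first served
        return min(self.tickets, key=lambda r: (self.tickets[r], r))

    def release(self, requester):
        del self.tickets[requester]

q = BakeryQueue()
q.request("rival-1")
q.request("rival-2")
assert q.next_in_line() == "rival-1"  # first come, first served
q.release("rival-1")
assert q.next_in_line() == "rival-2"
```

The requester names are hypothetical; the point is that priority is set by the numbering rule itself, not by any single party's discretion.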
The quorum-based approach requires a process to obtain approval from a majority or supermajority of participants before accessing the shared resource. The paper draws an analogy to corporate board decision-making. The advantage is decentralization: if one participant in the network stops responding, a decision can still be reached. Quorum systems are also scalable and reduce the volume of messages exchanged among participants compared to permission-based systems, making them more efficient at large scale. The trade-off is complexity: quorum systems can produce deadlock if overlapping quorum sets generate conflicting requests. Maekawa's algorithm, the first quorum-based mutual exclusion algorithm, uses timestamps to establish priority order when conflicts arise.
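A majority-vote sketch illustrates the quorum idea. This simplified model is illustrative only: every free voter grants to the requester, a strict majority admits it, and release frees the voters. It deliberately omits the timestamp-based conflict resolution Maekawa's algorithm uses to avoid deadlock.

```python
class QuorumCoordinator:
    """Illustrative majority-quorum access control: a requester enters
    the critical section only after a strict majority of voters grant it."""

    def __init__(self, voters):
        self.voters = list(voters)
        self.granted_to = {}  # voter -> requester it has currently granted

    def request(self, requester):
        grants = 0
        for v in self.voters:
            if v not in self.granted_to:  # a free voter grants immediately
                self.granted_to[v] = requester
                grants += 1
        return grants > len(self.voters) // 2  # strict majority required

    def release(self, requester):
        for v in [v for v, r in self.granted_to.items() if r == requester]:
            del self.granted_to[v]

qc = QuorumCoordinator(["v1", "v2", "v3", "v4", "v5"])
assert qc.request("rival-1")      # all five voters free: majority granted
assert not qc.request("rival-2")  # voters committed to rival-1: denied
qc.release("rival-1")
assert qc.request("rival-2")      # freed voters now grant rival-2
```

Note the decentralization the paper highlights: no single voter can block access on its own, and the system keeps functioning if one voter is absent from the majority.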
The paper presents a trade-off table summarizing the three approaches. Permission-based systems use timestamps to guarantee non-discrimination through first-in, first-served mechanisms. Quorum-based systems reduce message complexity and achieve fairness through decentralized decisions. Token-based systems are efficient but can be unfair if requests are routed hierarchically rather than broadcast to all participants.
Applying the framework to Google's cases
In the Google Search case, the paper argues that the judge's data-sharing remedy as currently structured is both inefficient and potentially discriminatory. Requiring rivals to receive a one-time snapshot of the Search Index assumes they have the same incentives and resources to build data infrastructure as Google. DuckDuckGo, for instance, explicitly does not store user data as part of its business model. Microsoft, which operates Bing, by contrast runs more than 400 data centers globally. Treating these two competitors identically in a data-copying requirement is neither efficient nor non-discriminatory, the paper argues. The alternative - sharing access to Google's existing data facilities rather than requiring duplication - is the approach the paper advocates, with one of the three algorithmic frameworks governing access.
Judge Mehta established a Technical Committee to oversee compliance during a six-year period, consisting of experts in software engineering, information retrieval, artificial intelligence, economics, and behavioral science. The paper suggests this committee is a natural vehicle for implementing one of the three algorithmic frameworks. Under a token-based system, the committee could manage a broadcasting mechanism by which rivals request the token from all network participants simultaneously - Google, the DOJ, and qualified competitors - with sequence numbers tracking the order of access requests. Under the permission-based approach, the automated distribution of authorization rights would remove the need for any single party to set priorities. Under the quorum approach, the committee could define the quorum composition and thresholds.
For the Ad Tech case, the same three approaches apply, with one adjustment: the relevant data is the ad exchange and DoubleClick for Publishers datasets rather than the Search Index. The paper identifies scale as equally central in the ad tech context. According to Judge Brinkema's findings cited in the paper, scale drives competitive advantage in programmatic advertising through network effects and big data analytics.
Pricing the shared access
The paper devotes a section to the question courts consistently struggle with: how much should competitors pay for access to mandated infrastructure? Drawing on the telecommunications sector's experience with network unbundling, the paper argues that market forces should drive price-setting wherever possible, with baseball-style arbitration as a dispute resolution mechanism. Under that format, used previously in the Comcast/NBC Universal merger consent decree, each party submits its best final offer and an arbitrator selects one of the two - a mechanism that discourages extreme proposals.
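The final-offer mechanic is simple enough to state as code. The decision rule below - the arbitrator picks whichever offer sits closer to its own independent valuation - is a standard stylization of how final-offer arbitration works, not a rule specified in the Comcast/NBC Universal decree or the paper; the dollar figures are hypothetical.

```python
def final_offer_arbitration(offer_a, offer_b, arbitrator_estimate):
    """Pick whichever final offer sits closer to the arbitrator's own
    independent valuation; splitting the difference is not allowed."""
    if abs(offer_a - arbitrator_estimate) <= abs(offer_b - arbitrator_estimate):
        return offer_a
    return offer_b

# Hypothetical annual access fees, in millions of dollars.
assert final_offer_arbitration(300, 150, 260) == 300  # monopolist's offer is closer
assert final_offer_arbitration(300, 150, 190) == 150  # rival's offer is closer
```

Because an extreme offer is almost certain to lose to a moderate one, both sides are pushed toward reasonable proposals before the arbitrator ever rules.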
For reference, Apple paid Google $300 million for Google Cloud storage services in 2021, according to the paper, and Spotify moved its data to Google's servers in 2016. Google's own publicly available pricing for cloud storage services provides a market baseline. The paper argues that setting data-access fees too low would eliminate Google's incentive to maintain data quality; setting them too high would make the remedy ineffective. The FCC's experience regulating network unbundling under the 1996 Telecommunications Act - which spawned years of legal battles before regulatory and judicial bodies - illustrates the dangers of entrusting price-setting entirely to an administrative agency.
Why this matters for the marketing and advertising industry
The advertising industry sits at the center of both antitrust cases. Google's ad tech business - publisher ad servers, the AdX exchange, and the DoubleClick infrastructure - was found to have generated supracompetitive fees for more than a decade, according to court findings cited in PPC Land's reporting. PPC Land has also tracked the cascade of follow-on litigation from publishers and ad tech firms, including PubMatic and Dotdash Meredith, seeking damages based on the established monopoly findings.
For programmatic advertisers and publishers, the question of how data-sharing remedies get implemented is not abstract. If rivals gain meaningful, non-discriminatory access to Google's search index, user interaction signals, and ad data on workable technical terms, the competitive dynamics in search advertising could shift. Smaller search engines could narrow the quality gap, potentially making them viable alternatives for advertisers currently concentrated on Google. If the European experience repeats itself - rivals gaining nominal access but encountering systematic technical friction - the competitive landscape would change little.
The paper notes that AI development is now inseparable from this question. AI models require large training datasets, and the same data that flows through Google's search and ad systems is also the raw material for AI development. As the paper observes, the more users rely on AI tools like ChatGPT, the more they give private data to companies and reduce publicly available data. Requiring data-sharing with verified competitors could therefore have implications that extend beyond traditional search competition into the emerging AI infrastructure market.
Google has announced plans to appeal the September 2025 search remedy ruling, a process that legal analysts expect could extend into 2027 or 2028 before appellate courts reach final rulings. In the ad tech case, a final ruling on remedies from Judge Brinkema is still pending. The algorithmic framework Massarotto proposes would be relevant to both outcomes - and could, the paper argues, give courts new technical tools for assessing whether Google is complying with sharing obligations in a genuinely non-discriminatory way, rather than relying solely on economic price analysis.
Timeline
- June 2011: FTC opens formal antitrust investigation into Google's search advertising practices
- January 2013: FTC investigation closes; Google agrees to provide patent access to competitors on fair terms
- October 2020: DOJ files antitrust lawsuit against Google targeting search distribution agreements
- June 2021: Ohio Attorney General sues Google seeking common carrier or public utility classification
- January 2023: DOJ and state attorneys general file antitrust lawsuit against Google targeting ad tech markets
- August 5, 2024: Judge Mehta rules Google illegally maintained a monopoly in online search
- March 7, 2025: DOJ submits final proposed remedies including Chrome divestiture and data-sharing requirements
- April 17, 2025: Judge Brinkema rules Google violated antitrust law in the publisher ad server and ad exchange markets
- August 14, 2025: Massarotto paper on algorithmic remedies first posted to SSRN
- August 16, 2025: Witness lists filed for Google ad tech remedies trial starting September 22
- August 26, 2025: Federal court decision on Google antitrust remedies expected to transform AI landscape
- August 29, 2025: Ohio Court of Appeals issues ruling in Ohio v. Google common carrier case
- September 2, 2025: Judge Mehta imposes data-sharing as primary remedy in Google Search case; Chrome divestiture rejected
- September 4, 2025: PPC Land reports on Glue data system and RankEmbed sharing requirements
- September 22 - October 2025: Ad tech remedies trial in Eastern District of Virginia
- November 3-17, 2025: Final post-trial briefs and closing arguments in ad tech remedies case
- November 27, 2025: Massarotto paper revised on SSRN
- January 16, 2026: Google files appeal seeking to pause data-sharing remedies
- March 2026: 18 civil society groups urge EU Commission to act on Google's DMA non-compliance
Summary
Who: Giovanna Massarotto, Lecturer at the University of Pennsylvania Carey Law School and affiliate of the school's Center for Technology, Innovation and Competition, is the author of the paper. The primary parties affected are Google LLC, the U.S. Department of Justice, state attorneys general in Ohio and Texas, and competitors seeking access to Google's data infrastructure, including DuckDuckGo and Microsoft Bing.
What: A 54-page legal paper forthcoming in the Harvard Business Law Review proposes applying three algorithmic frameworks from computer science - token-based, permission-based, and quorum-based approaches - to implement data-sharing obligations imposed on Google in two federal antitrust cases. The paper argues that courts ordering data sharing have not addressed the practical question of how access to indivisible digital infrastructure can be managed in a technically efficient and non-discriminatory way. The mutual exclusion problem from computer science, studied since 1965, offers a rigorous theoretical foundation for doing so.
When: The paper was first posted to SSRN on August 14, 2025, and last revised on November 27, 2025. It responds directly to the September 2, 2025 search remedy ruling and the ongoing ad tech remedies proceedings in Virginia.
Where: The legal proceedings are in U.S. federal courts - the District of Columbia for the search case and the Eastern District of Virginia for the ad tech case. State proceedings are pending in Ohio and Texas. The paper is published through SSRN and is forthcoming in the Harvard Business Law Review.
Why: The paper argues that existing antitrust and legal frameworks have not developed adequate tools for implementing data-sharing obligations in digital markets. Without a clear coordination mechanism, courts either leave implementation to the monopolist - creating conditions for discrimination - or burden enforcement agencies with technical decisions beyond their expertise. Drawing on algorithmic principles from distributed computing, the paper offers a neutral technical framework that courts could use to evaluate whether data-sharing obligations are being met efficiently and fairly. The outcome of Google's antitrust cases will, according to the paper, shape data-driven markets including AI for decades, much as the Microsoft antitrust consent decree of 2001 shaped the personal computer industry.