Germany tests algorithmic transparency through landmark enforcement cases
Four German legal actions against X, TikTok, Amazon, and Meta probe platform algorithms under the DSA, GDPR, AI Act, and competition law, establishing precedents for democratic accountability.
Germany has emerged as a testing ground for algorithmic accountability through four landmark cases targeting major technology platforms. On November 24, 2025, EU DisinfoLab published research mapping how German courts and regulators are deploying the Digital Services Act, GDPR, AI Act, and competition law to expose how platform architectures shape visibility, influence user behavior, and impact democratic processes.
The report examines legal actions against X, TikTok, Amazon, and Meta that collectively probe the inner workings of algorithmic systems ranging from recommender design and data access to market control and tracking infrastructures. These cases demonstrate willingness to challenge the opacity that has long characterized platform operations, particularly regarding how algorithms amplify or suppress information.
Algorithmic amplification represents the process by which systems selectively elevate the visibility, reach, or spread of certain content. Unlike the technical term "algorithm," which has broadly accepted definitions, algorithmic amplification lacks standardized meaning across industry and academia. Each platform applies its own logic for curation and ranking through what EU DisinfoLab describes as an ever-changing "recipe" with largely undisclosed ingredients.
According to the research, algorithmic amplification can constitute a systemic risk under DSA provisions by distorting information flows and amplifying disinformative or manipulative content. The report positions access to platform data and transparency in recommender systems as essential prerequisites for independent scrutiny and democratic oversight.
Subscribe PPC Land newsletter ✉️ for similar stories like this one
Data access establishes accountability foundation
In February 2025, Berlin-based organizations Democracy Reporting International (DRI) and Gesellschaft für Freiheitsrechte sued X under DSA Article 40(12), demanding access to publicly available engagement data ahead of Germany's federal election. The Regional Court of Berlin issued an injunction on February 6 ordering X to provide data access through the election period.
X challenged the decision, claiming German courts lacked jurisdiction given the platform's Irish headquarters. The court found in May 2025 that DRI could bring claims in Germany, establishing a precedent that EU national courts may hold platforms accountable for data-access failures regardless of where they are headquartered.
The requested metrics, including likes, shares, reach, and real-time engagement, form the raw material of amplification mechanisms. These indicators serve as inputs that recommender and ranking systems use to determine which content gets surfaced, prioritized, or hidden. Without access to this data, tracing how visibility distributes across political content or identifying when narratives receive algorithmic boosts becomes impossible.
Although the lawsuit centers on data access rather than algorithms directly, analyzing engagement surges, echo chamber dynamics, or virality patterns in released data provides insight into algorithm-led processes. The decision reinforces that platforms whose algorithms shape political communication must submit to independent examination.
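To illustrate why researchers need these engagement metrics, consider a minimal sketch of how a feed ranker might combine them into a visibility score. The linear form and the weights below are assumptions for demonstration; real platform ranking formulas are undisclosed, which is precisely the opacity the lawsuit targets.

```python
# Illustrative sketch only: the weights and linear scoring form are
# assumptions, not any platform's actual (undisclosed) ranking formula.

def visibility_score(likes: int, shares: int, reach: int,
                     w_likes: float = 1.0, w_shares: float = 3.0,
                     w_reach: float = 0.01) -> float:
    """Combine engagement signals into a single ranking input."""
    return w_likes * likes + w_shares * shares + w_reach * reach

def rank_posts(posts: list[dict]) -> list[dict]:
    """Order posts by descending score, as a feed ranker might."""
    return sorted(posts,
                  key=lambda p: visibility_score(p["likes"], p["shares"], p["reach"]),
                  reverse=True)

posts = [
    {"id": "a", "likes": 120, "shares": 5, "reach": 10_000},
    {"id": "b", "likes": 40, "shares": 60, "reach": 2_000},
]
ranked = rank_posts(posts)
```

Even in this toy version, post "b" outranks "a" because shares are weighted more heavily; without access to the underlying counts, an outside observer could never reconstruct or audit such a trade-off.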
Class actions target manipulative recommendation design
On February 5, 2025, Dutch consumer foundation SOMI filed four cross-border class actions in Germany against TikTok and X, represented by German law firm Spirit Legal. Germany's Federal Office of Justice added the TikTok case to the official register for collective lawsuits, allowing users and small businesses to join the action seeking several billion euros in compensation.
The complaints claim both platforms unlawfully process personal data including sensitive information and minors' data while deploying manipulative, addictive product designs that fuel recommendation algorithms. The filings position the cases as early tests of DSA, GDPR, and AI Act provisions.
Under GDPR, the claimants argue TikTok and X rely on invalid consent mechanisms, handle sensitive data categories without proper legal basis, profile minors, and fail basic principles including transparency, purpose limitation, and data minimization. Under DSA Articles 34-35, the suits claim both platforms breach duties to assess and mitigate systemic risks related to election integrity and minor protection while failing to grant researchers sufficient data access for independent scrutiny of recommender operations.
The AI Act provisions frame certain features such as recommendation and profiling systems as high-impact AI requiring governance to prevent manipulation and ensure accountability, particularly where they influence minors or public debate. Courts must clarify how these provisions apply as the Act gradually takes effect.
At the center sits the claim that TikTok manipulates young users by feeding recommendation algorithms with sensitive data, while X uses sensitive data without a valid legal basis to power its algorithm. Unlawful data collection and processing fuel opaque recommender systems that determine which voices and narratives receive amplification online.
The complaints describe tracking and profiling feeding ranking algorithms that optimize for watch time, clicks, or shares. When these systems use sensitive traits or child data, they can amplify attention-grabbing content such as disinformation, deepfakes, or polarizing material while filtering out alternative views. This produces visibility gaps users cannot detect or challenge.
Recent European Parliament findings highlighted TikTok's risk profile around addictive design, data practices, and election-related harms. These lawsuits question whether the data platforms use and behaviors they optimize for are lawful, fair, and safe while demonstrating how DSA, GDPR, and AI Act work complementarily toward unified remedy of making algorithms auditable, transparent, and accountable.
Competition case exposes algorithmic pricing power
In June 2025, Germany's Bundeskartellamt issued a preliminary assessment finding Amazon's price-control mechanisms likely breach national and EU competition law. The authority argues the platform monitors sellers' prices both on its marketplace and elsewhere using automated checks and algorithmic enforcement.
When prices fall outside Amazon's acceptable range, affected listings may lose visibility or advertising eligibility, or face removal. The Federal Cartel Office (FCO) characterizes this practice as restricting sellers' pricing freedom and limiting competition both on and beyond Amazon's platform.
Under Section 19a(2) of German competition law, companies with exceptional market power face stricter rules against self-preferencing and exploitative practices. Amazon received this classification in 2022, confirmed by the Federal Court of Justice, meaning it must not use dominance to disadvantage others.
The investigation places Amazon's algorithmic systems, particularly automated price-checking and ranking mechanisms, at the core of the alleged abuse. These systems operationalize Amazon's notion of price fairness by constantly comparing listed prices against external websites. Sellers whose offers are judged too expensive may see algorithmic downgrading of visibility, removal from advertising slots, or exclusion from the Buy Box default purchase option.
Offers fitting Amazon's internal thresholds receive amplification through higher search results placement, increased Buy Box wins, and prominent recommendation featuring. This algorithmic amplification and suppression dynamic means the system actively penalizes deviation rather than simply rewarding competitive pricing.
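A hypothetical sketch of an automated price-conformity check of the kind the FCO describes makes the dynamic concrete. Amazon's actual thresholds and logic are not public; the tolerance value and the outcome flags here are illustrative assumptions.

```python
# Hypothetical illustration of an automated price-conformity check;
# Amazon's real thresholds and enforcement logic are undisclosed.

def price_conformity(listing_price: float, external_prices: list[float],
                     tolerance: float = 0.05) -> dict:
    """Flag a listing whose price exceeds the best external price by
    more than `tolerance`; flagged offers lose visibility privileges."""
    best_external = min(external_prices)
    conforming = listing_price <= best_external * (1 + tolerance)
    return {
        "conforming": conforming,
        "buy_box_eligible": conforming,  # non-conforming offers lose the Buy Box
        "ad_eligible": conforming,       # and advertising placements
    }

# A listing priced above the tolerance band around the cheapest
# external offer is automatically penalized.
result = price_conformity(21.99, [19.99, 20.49])
```

The key point the FCO raises is that the threshold and the penalty are set unilaterally and applied automatically, so a seller's commercial fate hinges on parameters they can neither see nor contest.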
According to EU DisinfoLab's research, because recommendation criteria remain opaque and automated, sellers cannot predict or contest listing treatment. The FCO views this as non-transparent governance strengthening Amazon's control over pricing and market visibility, creating dependency where sellers must conform to algorithmic expectations for commercial viability.
Many merchants report an inability to set independent prices even on their own websites without risking a loss of visibility on Amazon. This chills discounting elsewhere, as sellers align broader pricing strategies with Amazon's algorithmic thresholds. Over time, this may reduce price diversity across e-commerce ecosystems, entrenching Amazon as the de facto arbiter of fair pricing.
The FCO frames these mechanisms as structural market control achieved through algorithmic conformity. While Amazon positions its algorithms as optimizing efficiency and protecting consumers from uncompetitive prices, the competition authority suggests this creates higher overall prices, reduced choice as listings above certain thresholds get hidden, and algorithmic steering controlling what users see based on the platform's commercial logic rather than user preferences.
Court challenges Meta's tracking infrastructure
In July 2025, the Regional Court of Leipzig ruled Meta Platforms violated GDPR by using tracking technologies including Meta Pixel and software development kits embedded across third-party websites and apps. These tools collected extensive user data from individuals not logged into Facebook or Instagram and transmitted it to Meta without valid legal basis.
The court found that technical identifiers combined with behavioral data allowed Meta to personally identify and profile users, breaching EU privacy law. The ruling fits broader European regulatory efforts to limit cross-site tracking and data aggregation feeding large-scale profiling systems.
The decision rests on GDPR Articles 6 and 9, which require a lawful basis and explicit consent for processing personal or special-category data. Meta's tracking systems exceeded these limits: users faced tracking across domains without informed consent, while logged-out and non-Meta users still had data collected and linked to unique identifiers.
Although the case primarily addresses data protection, it holds clear implications for algorithmic transparency and amplification. Tracking technologies like Meta Pixel create the data infrastructure fueling algorithmic recommendation, advertising, and ranking systems. Each data point regarding what users view, click, or abandon feeds models determining which content or ads appear, to whom, and how often.
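The mechanics can be sketched generically: tracking events fired from many sites, all keyed to the same persistent browser identifier, accumulate into a single behavioral profile. This is an illustration of the general pattern, not Meta's actual pipeline; the site names and event labels are invented.

```python
from collections import defaultdict

# Generic illustration of cross-site tracking: events keyed to one
# browser identifier accumulate into a behavioral profile. This is
# not Meta's actual pipeline; site names and actions are invented.

events = [
    {"browser_id": "x1", "site": "shop.example", "action": "view_product"},
    {"browser_id": "x1", "site": "news.example", "action": "read_article"},
    {"browser_id": "x1", "site": "shop.example", "action": "add_to_cart"},
]

profiles: dict[str, list[str]] = defaultdict(list)
for e in events:
    # Each event links a site visit to the same persistent identifier,
    # regardless of whether the user is logged in anywhere.
    profiles[e["browser_id"]].append(f'{e["site"]}:{e["action"]}')
```

The Leipzig court's objection maps onto exactly this linkage step: once disparate visits are joined under one identifier, the aggregate profile reveals far more than any single site interaction, and doing so without consent is what breached Articles 6 and 9.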
By ruling against Meta's data-collection methods, the court indirectly challenges inputs driving amplification. Without such profiling data, algorithms have less capacity to microtarget or prioritize content based on inferred emotions or preferences. The decision acts on algorithmic amplification's foundation layer even without directly addressing recommender bias or content visibility.
The judgment highlights persistent opacity problems: users and regulators still lack visibility into how collected behavioral data transforms into algorithmic outputs. Strong data-protection safeguards represent preconditions for digital freedom, where users must understand and control when their data fuel algorithmic profiling, especially when those profiles drive amplification and targeting that remain largely invisible.
Integrated enforcement bridges regulation and evidence
The German cases collectively demonstrate how DSA, GDPR, AI Act, and competition law form mutually reinforcing frameworks for algorithmic accountability. Each addresses different transparency, data protection, and risk management aspects.
Oliver Marsch from AlgorithmWatch differentiates between amplification resulting from the "logic of the algorithm" and "disproportionately" high visibility relative to baseline expectations. In the first case, specific content receives high visibility simply by following the system's logic, even when the outcome is undesirable. In the second, content receives visibility far above the expected baseline, potentially explained by deliberate tweaking of the factors driving algorithmic decision-making.
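The second case lends itself to a simple detection sketch: compare a post's observed reach against a baseline expectation and flag large deviations. The baseline choice (an account's median historical reach) and the threshold are illustrative assumptions, not a method attributed to AlgorithmWatch.

```python
from statistics import median

# Illustrative sketch of flagging "disproportionate" visibility relative
# to a baseline. The median-reach baseline and 10x threshold are
# assumptions for demonstration, not an established audit method.

def amplification_ratio(observed_reach: int, historical_reach: list[int]) -> float:
    """Observed reach relative to the account's typical (median) reach."""
    return observed_reach / median(historical_reach)

def disproportionate(observed_reach: int, historical_reach: list[int],
                     threshold: float = 10.0) -> bool:
    """Flag content whose reach exceeds the baseline by `threshold`x."""
    return amplification_ratio(observed_reach, historical_reach) >= threshold

# A post reaching 250,000 against a typical reach near 20,000 stands out.
flagged = disproportionate(250_000, [18_000, 20_000, 22_000])
```

This also shows why the Berlin data-access case matters: without the historical reach figures, the baseline cannot be computed and disproportionate amplification cannot be distinguished from ordinary algorithmic logic.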
AI Forensics director Marc Faddoul describes algorithmic amplification as the "editorial function" of platforms pursuing "engagement maximization," often rewarding attention-grabbing, financially rewarding, or new content through mechanisms including user interactions, content recency, and geo-specificity that mirror journalism principles.
According to EU DisinfoLab's research, algorithmic amplification has far-reaching repercussions. It can heighten users' exposure to disinformation and malicious campaigns, bias visibility toward certain political viewpoints, and create economic disparities as platform visibility increasingly mediates access to income and opportunity.
Triggering, polarizing, and often inaccurate content travels faster and farther across platforms as negative and emotive stimuli elicit stronger reactions that algorithms reward. Malign actors exploit these mechanisms through Coordinated Inauthentic Behavior, strategically manipulating platform dynamics to amplify content.
Recommender systems reinforce this pattern by prioritizing inflammatory or sensational material generating engagement. While disinformation may originate from multiple sources, engagement-optimized algorithmic systems often propel such narratives into mainstream visibility, shifting debate from free speech to free reach.
The challenge involves tackling amplification dynamics in ways protecting free expression while reducing distortions, focusing on transparency, user choice, and systemic risk mitigation rather than expanding platforms' discretion over lawful content. Under DSA Articles 34 and 35 and Recital 79, algorithmic amplification itself constitutes a systemic risk as integral platform design while also exacerbating other risks including illegal content spread and fundamental rights erosion.
Germany's enforcement actions illustrate how national actors can serve as early enforcers testing EU law in concrete contexts, influencing regulatory interpretation and setting precedents shaping future understandings of systemic risk, algorithmic responsibility, and digital governance across the Union.
The research emphasizes that meaningful reform may require rethinking foundational platform architecture rather than implementing incremental algorithmic tweaks. Moving away from global social network models toward spatial or group-based models could make interactions more local and less globally interconnected.
For marketing professionals, these developments signal increased regulatory scrutiny affecting how platforms handle algorithmic systems. When platforms face potential fines and operational restrictions, advertising ecosystems can experience disruption. Changes to content moderation or compliance measures could affect ad delivery, targeting capabilities, and overall campaign performance.
Timeline
- October 19, 2022: Digital Services Act enters into force across European Union
- April 25, 2023: European Commission designates seventeen entities as Very Large Online Platforms
- February 17, 2024: DSA becomes fully operational for all platforms
- October 2, 2024: European Commission requests information from YouTube, Snapchat, and TikTok about recommender systems
- February 5, 2025: SOMI files cross-border class actions against TikTok and X in Germany
- February 6, 2025: Berlin Regional Court orders X to provide data access through election period
- May 2025: Court establishes precedent allowing EU national courts to hold platforms accountable for data-access duties
- May 2, 2025: Irish DPC fines TikTok €530 million over data transfers to China
- June 2025: Bundeskartellamt issues preliminary assessment on Amazon's price-control mechanisms
- July 4, 2025: Leipzig Regional Court awards €5,000 for Meta Business Tools GDPR violations
- October 24, 2025: EU finds TikTok and Meta in breach of DSA transparency obligations
- November 24, 2025: EU DisinfoLab publishes algorithmic amplification research examining German enforcement
Summary
Who: German courts and regulatory authorities including the Regional Court of Berlin, Regional Court of Leipzig, and Bundeskartellamt took action against major platforms X, TikTok, Amazon, and Meta. Civil society organizations Democracy Reporting International, Gesellschaft für Freiheitsrechte, and SOMI initiated several legal challenges. EU DisinfoLab authored the research analysis.
What: Four landmark legal cases tested algorithmic accountability across multiple regulatory frameworks. The cases addressed data access for election research, manipulative recommendation systems targeting minors, algorithmic pricing controls affecting market competition, and unlawful tracking technologies feeding profiling systems. The actions collectively probe how platform algorithms shape content visibility, user behavior, and democratic processes.
When: The legal actions occurred between February and July 2025, with the Berlin court ruling on data access in February and May, SOMI filing class actions in February, Bundeskartellamt issuing Amazon assessment in June, and Leipzig court ruling on Meta tracking in July. EU DisinfoLab published its comprehensive research on November 24, 2025.
Where: The enforcement actions took place in Germany through courts in Berlin and Leipzig and regulatory proceedings by the Bundeskartellamt. The cases have implications across the European Union's 27 member states as they interpret DSA, GDPR, AI Act, and competition law provisions applicable throughout the bloc.
Why: The cases aim to establish transparency and accountability for algorithmic systems that shape online visibility and democratic discourse. They address concerns that opaque recommendation algorithms amplify disinformation, manipulate user behavior, restrict market competition, and process personal data unlawfully. The enforcement tests whether existing EU regulations can effectively govern platform power while protecting fundamental rights including privacy, free expression, and fair competition. Germany serves as testing ground for integrated regulatory approaches bridging data protection, platform governance, AI oversight, and competition enforcement.