On April 2, 2026, Paligo published an interactive data investigation that maps the full infrastructure stack powering every AI response - from the single Dutch company that builds the machines every advanced chip depends on to the documentation quality that determines whether the answer is right or wrong.

The investigation, published at paligo.net/ai-supply-game/, is structured as a five-act walkthrough covering supply chain dependencies, energy and water consumption, model training costs, inference economics, and what Paligo calls the content layer - the one part of the stack most organisations actually control. The project draws on public data and presents it through live counters, a disruption simulator, and side-by-side comparisons of AI answers generated from structured versus fragmented documentation.

The timing is pointed. AI infrastructure spending is accelerating globally, and questions about its sustainability - financial, physical, and geopolitical - are becoming harder to ignore.

A supply chain concentrated into very few hands

The investigation opens with what Paligo describes as a dependency chain most people have never mapped. According to the findings, every advanced AI chip on earth depends on a single category of lithography machine, made by one company: ASML, headquartered in the Netherlands. There is no alternative supplier for extreme ultraviolet lithography equipment. According to the investigation, ASML ships around 50 EUV machines per year. That figure represents the entire global supply.

The concentration does not stop at lithography. According to Paligo's research, TSMC manufactures over 90% of the world's most advanced semiconductors. The next largest producer is not close in volume or capability. The combination of a single lithography supplier and a single dominant fabricator means that geopolitical or operational disruption at either node cascades through the entire AI chip stack with no immediate alternative pathway. The investigation built a disruption simulator around precisely this logic. Users can trigger scenarios - a Taiwan conflict, an ASML export ban, an energy grid failure - and watch the model illustrate how each event propagates across the dependency chain.
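The cascade logic the simulator illustrates can be sketched as a breadth-first walk over a dependency graph. The nodes and edges below are illustrative assumptions drawn from the investigation's narrative, not Paligo's actual model:

```python
# Minimal sketch of disruption propagation across an AI supply chain.
# Node names and edges are illustrative, not Paligo's actual simulator data.
from collections import deque

# Each node lists the downstream layers that depend on it.
DEPENDENCIES = {
    "ASML EUV lithography": ["TSMC fabrication"],
    "TSMC fabrication": ["GPU supply"],
    "GPU supply": ["Inference capacity"],
    "Energy grid": ["Inference capacity"],
    "Inference capacity": ["AI-powered products"],
}

def cascade(trigger: str) -> list[str]:
    """Breadth-first walk of everything affected by a disruption at `trigger`."""
    affected: list[str] = []
    queue = deque([trigger])
    seen = {trigger}
    while queue:
        node = queue.popleft()
        affected.append(node)
        for downstream in DEPENDENCIES.get(node, []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return affected

print(cascade("ASML EUV lithography"))
```

The point the graph makes is structural: with single suppliers at the top two nodes, every trigger scenario reaches the product layer, because there is no branch to route around.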

For the marketing and advertising technology community, this is not an abstract geopolitical concern. It has direct implications for the availability and pricing of the compute underpinning AI-powered targeting, bidding, creative generation, and measurement tools. The Dutch grid crisis documented by PPC Land in July 2025 illustrates one dimension of this fragility: ASML itself is headquartered in Eindhoven, a region where the grid operator TenneT has been rationing electricity, with no significant new capacity expected until 2027.

The scale of what it takes to keep a model running

The second section of the investigation focuses on the environmental and physical footprint of AI inference - not training, but the ongoing cost of serving responses. The numbers are large. According to Paligo's data, there are more than 11,800 data centers globally, with 45% located in the United States. Global data center electricity consumption, driven primarily by AI, is projected to reach 945 terawatt-hours annually by 2030 - roughly the entire current annual electricity consumption of Japan.

The 945 TWh projection is not unique to this investigation. PPC Land reported the same International Energy Agency figure in coverage of Europe's grid infrastructure gap, which noted that global data center consumption is expected to more than double to that level by 2030, driven primarily by AI. The European dimension of that growth represents a serious infrastructure challenge: the IEA estimates Europe currently accounts for approximately 15% of global data center electricity consumption at 70 TWh in 2024, a figure projected to grow by more than 45 TWh by 2030.

Water consumption is a less-discussed part of the same equation. According to Paligo's investigation, running a large language model requires keeping thousands of GPUs at stable operating temperatures. Cooling systems across major providers evaporate millions of litres of water per day to achieve this. The investigation presents live counters for electricity and water consumption, allowing users to watch the figures accumulate in real time rather than encounter them as static numbers.

Meta's infrastructure plans put the scale in concrete terms. The company's announced investment in AI data center clusters, reported by PPC Land following a July 14, 2025, announcement, includes the Prometheus facility in New Albany, Ohio, delivering over 1 gigawatt of capacity - equivalent to powering approximately 750,000 homes continuously - and the Hyperion facility in Richland Parish, Louisiana, scaling to 5 gigawatts over several years. These are not edge cases. They are the direction the industry is moving.

Gartner's November 2024 analysis projected that power shortages would restrict 40% of AI data centers by 2027 - suggesting that energy availability, rather than algorithmic capability, may become the binding constraint on AI development within the current planning horizon.

The cost of training and the compounding cost of inference

The investigation's third section addresses the economics of model development, and makes a distinction that tends to get lost in public discussion: the cost of training a model and the cost of serving it are not the same number, and the second is arguably more consequential.

According to Paligo's data, training GPT-4 cost over $100 million. That is a large number, but it is a one-time expenditure. Serving GPT-5 requires over 200,000 GPUs operating continuously. According to the investigation, inference costs are currently doubling every quarter. The infrastructure required to keep a deployed model answering questions now exceeds, in ongoing cost terms, what it took to build it in the first place.

The model's relationship with knowledge compounds this problem. According to the investigation, once training is complete, a model is frozen. Everything it knows reflects a snapshot of information from a fixed point in the past. When asked about something current, specific, or proprietary, it has two options: retrieve the information from an external source, or produce a plausible-sounding answer based on outdated patterns - what the industry calls hallucination.

The research into why language models hallucinate, published by OpenAI and Georgia Tech researchers on September 4, 2025, and covered by PPC Land, found that models hallucinate in part because they are effectively rewarded for guessing when uncertain, rather than abstaining. The same study noted that 77% of businesses surveyed by Deloitte express concerns about AI hallucinations affecting their operations. The Paligo investigation frames the frozen-model problem as upstream of hallucination: a model without access to current or structured external content will hallucinate more, not less, regardless of its underlying capability.

PPC Land's coverage of AI models faking understanding adds a related dimension. Research published on June 26, 2025, by scientists from MIT, Harvard, and the University of Chicago found that large language models frequently define concepts correctly when benchmarked but fail to apply the same knowledge in practical scenarios - a pattern the researchers called "potemkin understanding." The gap between demonstrated and operational capability has direct implications for any organisation deploying AI in customer-facing or operational contexts.

The content layer: the one variable organisations control

The fourth and most operationally relevant section of the Paligo investigation concerns what the company calls the content layer - the documentation, product information, and institutional knowledge that an AI system retrieves when generating a response. Every other layer of the stack requires billions of dollars and industrial-scale engineering. The content layer is something most organisations already own, and most have allowed to deteriorate.

According to the investigation, most organisations carry years of accumulated content debt: conflicting PDFs, outdated wikis, and fragmented documentation spread across systems that were never designed to talk to each other. That debt was manageable when it primarily affected internal processes. AI has made it a live liability. When a retrieval-augmented system pulls from fragmented or contradictory documentation, the model produces unreliable outputs - not necessarily because the model is weak, but because the content it is retrieving is inconsistent.

The investigation demonstrates this with a side-by-side comparison: the same question answered by an AI drawing from structured documentation, and answered again from fragmented documentation. The difference in output quality is presented as the core argument for investment in content infrastructure - specifically, in structured content systems that tag, version, and organise information in machine-readable formats.
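The mechanism behind that comparison can be shown with a toy retrieval step: when a corpus contains contradictory passages, the retriever surfaces the contradiction and the model has no reliable basis for choosing between them. The documents and keyword matcher below are illustrative assumptions, not Paligo's actual demo:

```python
# Toy illustration of structured vs. fragmented retrieval sources.
# Corpora and the naive keyword matcher are illustrative assumptions.

STRUCTURED = {
    "warranty-policy": "The warranty period is 24 months.",
}
FRAGMENTED = {
    "old-pdf-2019": "The warranty period is 12 months.",
    "wiki-page": "The warranty period is 24 months.",
}

def retrieve(corpus: dict[str, str], term: str) -> list[str]:
    """Return every passage mentioning `term`; contradictions show up as multiple hits."""
    return [text for text in corpus.values() if term in text]

print(retrieve(STRUCTURED, "warranty"))   # one consistent passage
print(retrieve(FRAGMENTED, "warranty"))   # two contradictory passages
```

The structured corpus yields one authoritative passage; the fragmented one hands the model two conflicting answers, which is the failure mode the side-by-side comparison is built to demonstrate.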

Paligo, the company behind the investigation, is itself a provider of cloud-based Component Content Management Systems (CCMS). Founded in 2015 and operating across 37 countries with customers in software, technology, manufacturing, and life sciences, the company markets structured authoring as preparation for AI deployment. According to the company's website, customers report creating 12 versions of documents in under an hour using content reuse features, compared to the better part of a week for 3 versions using previous workflows. Paligo claims up to 75% savings on content reuse, up to 25% time saved reviewing, and up to 90% translation efficiency - figures drawn from its own ROI calculator based on industry research.

The structured content argument is not confined to vendors with an interest in promoting it. The Gracenote case, covered by PPC Land in February 2026, illustrates how structured, persistently identified content has become a critical input for AI-powered systems. Gracenote's database assigns TMS IDs - persistent, standardised identifiers - to more than 50 million entertainment titles across 260+ streaming catalogs. Samsung and Google both signed agreements for access to this infrastructure within fifteen days of each other in February 2026. The underlying logic - that AI answers are only as good as the structure of the information they retrieve - runs directly parallel to what Paligo's investigation argues.

The Gracenote MCP Server, launched September 3, 2025, took this further by connecting LLMs directly to continuously updated entertainment data, enabling real-time cross-checking of responses against verified content before they reach users. The technical approach is a direct response to the frozen-model problem Paligo identifies.

Why this matters for marketers and ad tech professionals

The Paligo investigation is framed around a question with implications well beyond documentation management. AI has become infrastructure for digital marketing - for campaign bidding, creative generation, audience modelling, and increasingly for the search interfaces through which consumers encounter brands. The reliability of those systems depends on the entire stack the investigation maps.

At the chip level, geopolitical concentration creates pricing and availability risk that is difficult to hedge. At the energy level, the 945 TWh projection and the data center rationing already under way in parts of Europe suggest physical constraints are not a distant scenario. At the model level, inference costs doubling quarterly translate directly into the economics of AI-powered advertising tools. At the content layer, fragmented documentation degrades AI outputs in ways that affect how products are described, how support questions are answered, and how accurately AI systems represent organisations to their customers.

The investigation's disruption simulator is designed to make the dependency visible rather than abstract. A Taiwan conflict disrupts TSMC, which disrupts chip supply, which delays or raises the cost of the GPUs that run inference, which affects every platform that has built AI features into its product. An ASML export ban produces a similar cascade. An energy grid failure triggers data center capacity constraints that have already begun appearing in operational form in the Netherlands and elsewhere.

For ad tech specifically, the structural question is whether the current pace of AI infrastructure investment is sustainable, or whether the cost curves - inference doubling quarterly, energy demand doubling by 2030, training costs already exceeding $100 million per frontier model - will force consolidation, rationing, or architectural shifts before the decade is out.

Timeline

  • 2015: Paligo founded in Sweden, subsequently expanding to customers in 37 countries across software, technology, manufacturing, and life sciences.
  • November 2024: Gartner predicts power shortages will restrict 40% of AI data centers by 2027.
  • January 13, 2025: Dutch grid crisis reported, with more than 11,900 businesses awaiting electricity network connections in the Netherlands; TenneT confirms no significant new capacity in Eindhoven until 2027.
  • June 26, 2025: MIT, Harvard, and University of Chicago researchers publish "Potemkin Understanding in Large Language Models", documenting gaps between AI benchmark performance and practical application.
  • July 13, 2025: PPC Land reports on the European AI energy infrastructure gap, citing IEA projections of 945 TWh data center consumption by 2030.
  • July 14, 2025: Meta CEO Mark Zuckerberg announces investment in AI infrastructure, including Prometheus (1+ gigawatt) and Hyperion (5 gigawatt) data center clusters.
  • September 3, 2025: Gracenote launches its Video MCP Server, connecting LLMs to a continuously updated entertainment database to prevent hallucinations.
  • September 4, 2025: OpenAI and Georgia Tech researchers publish "Why Language Models Hallucinate", finding statistical causes behind AI false responses.
  • February 10, 2026: Google signs agreement with Gracenote for access to structured entertainment metadata infrastructure.
  • February 25, 2026: Samsung signs agreement with Gracenote; PPC Land covers both deals as evidence that structured content has become critical input for AI-powered systems.
  • April 2, 2026: Paligo publishes interactive data investigation at paligo.net/ai-supply-game/, tracing the full infrastructure dependency chain behind AI responses and demonstrating the impact of content structure on output quality.

Summary

Who: Paligo, a Swedish cloud-based Component Content Management System provider founded in 2015, with customers in 37 countries.

What: An interactive data investigation published at paligo.net/ai-supply-game/ mapping the full infrastructure stack behind AI responses - covering chip supply chain dependencies, data center energy and water consumption, model training and inference costs, and the quality of the content AI systems retrieve when generating answers. Key figures include ASML shipping around 50 EUV lithography machines annually as the sole global supplier, TSMC manufacturing over 90% of advanced semiconductors, projected global data center electricity consumption of 945 TWh by 2030, and inference for GPT-5 requiring over 200,000 GPUs.

When: Published on April 2, 2026.

Where: Online at paligo.net/ai-supply-game/. Paligo is headquartered in Solna, Sweden.

Why: The investigation argues that most organisations focus on the highly capitalised layers of the AI stack - chips, compute, models - while neglecting the content layer they already own. Fragmented or outdated documentation increases the likelihood of AI hallucination and degrades the quality of AI-generated responses. The broader argument is that AI answers depend, ultimately, on whether the information they retrieve has been structured properly.
