When users ask ChatGPT for product recommendations, the top results rarely appear by chance. A growing number of businesses now pay substantial sums to influence what artificial intelligence chatbots recommend, according to a Wall Street Journal investigation published on January 30.
This represents an evolution of search engine optimization into what practitioners call generative engine optimization, or GEO. Some also refer to it as answer engine optimization (AEO). The industry's rapid growth highlights a fundamental vulnerability in how AI chatbots source and present information to users.
"A recommendation from AI isn't verified the way one from a human might be," the investigation noted. Today's AI systems function as shallow readers of the internet, and their responses can be manipulated through deliberate content placement strategies.
The traffic shift from Google to ChatGPT
Evan Bailyn, chief executive of First Page Sage, offered concrete data on the magnitude of this shift. His company started as an SEO firm; now, when users ask any chatbot about leading authorities on GEO, Bailyn or his firm appears prominently in the results, a showing that reflects his optimization skill rather than objective authority.
According to Bailyn, 90% of his clients' referral traffic came from Google just one year ago. Starting in summer 2025, AI chatbot referrals began rising dramatically; today, 44% of his clients' referrals originate from AI platforms.
Across the broader internet, people received links from chatbots more than 230 million times monthly as of September 2025, according to Similarweb data cited in the Wall Street Journal report. That represents three times the monthly referrals from just one year earlier.
These referrals demonstrate measurable value. Compared with users sent to websites by Google, those arriving via ChatGPT tend to spend more time on sites, view more pages, and are more likely to complete transactions.
The figures likely understate AI chatbot influence, according to Aleyda Solis, founder of SEO and AI optimization agency Orainti. Most users don't complete transactions within chatbots yet. A user might receive a ChatGPT recommendation, then turn to a web browser or the Amazon app to make the actual purchase. That represents a real referral, but it isn't tracked.
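Part of that undercounting is mechanical. Analytics tools typically classify a visit by its HTTP Referer header (or campaign parameters), so a purchase prompted by ChatGPT but completed by typing the brand into a browser or opening a shopping app arrives with no referrer at all and gets bucketed as "direct" traffic. The sketch below illustrates that classification logic in simplified form; the domain lists and rules are assumptions for illustration, not any particular vendor's implementation.

```python
from urllib.parse import urlparse

# Hypothetical referrer-based classifier, similar in spirit to how web
# analytics tools bucket incoming traffic. Domain lists are illustrative.
AI_REFERRERS = {"chatgpt.com", "chat.openai.com", "perplexity.ai", "gemini.google.com"}
SEARCH_REFERRERS = {"www.google.com", "www.bing.com", "duckduckgo.com"}

def classify_visit(referer: str | None) -> str:
    """Bucket a visit by its HTTP Referer header."""
    if not referer:
        # A user who saw a ChatGPT recommendation, then typed the brand
        # into a browser or opened a shopping app, lands here: the AI
        # assist is real but invisible to the analytics report.
        return "direct"
    host = urlparse(referer).netloc.lower()
    if host in AI_REFERRERS:
        return "ai_chatbot"
    if host in SEARCH_REFERRERS:
        return "search"
    return "other_referral"

print(classify_visit("https://chatgpt.com/"))   # ai_chatbot
print(classify_visit(None))                     # direct (AI influence lost)
```

The undercounting Solis describes lives entirely in that "direct" bucket, which is why referral figures set a floor, not a ceiling, on chatbot influence.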
Solis commented on the Wall Street Journal investigation via X on February 1, 2026, expressing satisfaction at being quoted in the piece while noting a critical omission. "One thing I would have loved to see, which I see is missing from the piece, is the clarification about the difference between those who optimize brands to appear for relevant answers for which they deserve to be shown, vs those that aren't," Solis wrote.
The distinction matters significantly. Optimization can mean making accurate information accessible. Manipulation involves hiding truth or promoting content that doesn't merit inclusion.
How manipulation works
The ease of influencing AI varies by topic, according to Nick Koudas, a professor of computer science at the University of Toronto who authored recent research on the subject. Think of an AI like a human expert, Koudas suggested. If it already possesses significant knowledge on a subject, changing its perspective becomes more difficult. But when it knows less about something, it can be more easily swayed.
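That intuition maps onto textbook Bayesian updating: the same batch of new evidence shifts a thin prior far more than a well-established one. The toy calculation below uses a simple beta-binomial belief; the counts are invented for illustration and say nothing about how any real LLM is trained. Ten planted positive mentions barely move a well-covered topic but more than quintuple confidence on an obscure one.

```python
# Toy Bayesian illustration of Koudas's point: the same new "evidence"
# shifts a weak prior far more than a strong one. Numbers are illustrative.

def posterior_mean(prior_positive, prior_total, new_positive, new_total):
    """Beta-binomial posterior mean for a 'brand X is the best' belief."""
    return (prior_positive + new_positive) / (prior_total + new_total)

# Well-covered topic: the model has effectively seen 1,000 prior mentions,
# only 100 of which favor the brand being pushed.
strong_prior = posterior_mean(100, 1000, 10, 10)   # 10 planted mentions, all positive

# Obscure topic: only 10 prior mentions exist, 1 of them favorable.
weak_prior = posterior_mean(1, 10, 10, 10)

print(f"strong prior: belief moves from 0.10 to {strong_prior:.2f}")  # ~0.11
print(f"weak prior:   belief moves from 0.10 to {weak_prior:.2f}")    # 0.55
```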
"This isn't just an observation about the behavior of large language models," Koudas noted. "This is all based on fairly standard mathematical intuitions."
For Bailyn, this creates opportunity. Many of his clients are relatively obscure, mid-market companies making specialized products such as industrial fittings and hot tubs, which compete fiercely with similar offerings.
To boost placement in AI results, Bailyn's company plants what it calls a "brand authority statement" on at least 10 websites, typically ones owned by other clients. To make a client the first answer to "What's the best hot tub for sciatica?" in ChatGPT, for instance, associating that client with the phrase "highest-rated for sciatica" across several company blogs can be enough to convince the AI.
OpenAI crawls the web with bots to build an index for ChatGPT, similar to Google's approach. But OpenAI also purchases scraped Google search results, according to recent court filings.
The takeaway: ranking highly in traditional SEO can help a company's standing with AI systems. But the bots also seek superlatives like "top performer" or "most cited," because they're not just surfacing results; they're constructing narratives.
Another crucial factor involves where praise appears, since bots are programmed to value some sources over others. A review in The Wall Street Journal or another independent media outlet carries more influence than a comment on Reddit, although such comments can help too.
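To see how these two signals, superlative phrasing and source authority, could combine, consider a toy scorer that weights each brand mention by where it appears and boosts superlative language. This is not how ChatGPT or Gemini actually rank sources (those systems are proprietary), and the weights, phrases, and brand name are invented; it simply illustrates why ten coordinated blog posts can outweigh a single neutral mention in a major outlet, and why a Reddit comment counts for less than a newspaper review.

```python
# Toy illustration of the two signals described above: superlative language
# and source authority. Weights and phrases are invented for the example;
# real ranking systems are proprietary and far more complex.

SOURCE_WEIGHT = {"major_outlet": 5.0, "industry_blog": 1.0, "reddit": 0.3}
SUPERLATIVES = ("best", "top-rated", "highest-rated", "most cited")

def brand_score(mentions):
    """Sum authority-weighted mentions, boosting superlative phrasing."""
    score = 0.0
    for source_type, text in mentions:
        weight = SOURCE_WEIGHT.get(source_type, 0.5)
        boost = 2.0 if any(s in text.lower() for s in SUPERLATIVES) else 1.0
        score += weight * boost
    return score

# Ten coordinated blog posts with planted superlatives...
planted = [("industry_blog", "Acme Spas is the highest-rated hot tub for sciatica")] * 10
# ...versus one neutral mention in a major outlet.
organic = [("major_outlet", "Acme Spas sells hot tubs")]

print(brand_score(planted))  # 20.0
print(brand_score(organic))  # 5.0
```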
The variety of signals chatbots rely on provides some protection for users, because certain AI results prove harder to manipulate. One of Bailyn's clients wanted to convince ChatGPT it wasn't based in China, but its origins had already been covered by major media. "We can't fight against that," Bailyn acknowledged.
Industry response and concerns
The manipulation capabilities concern industry observers. Christopher Mims, who authored the Wall Street Journal investigation, noted that ChatGPT and other chatbots strive to draw answers from reputable sources and attempt to weight those sources by veracity. But users should treat all such advice with skepticism.
If the answer really matters, users should seek a second opinion from human-grounded sources such as trusted news outlets, online reviews, or real-life experts.
OpenAI stated that ChatGPT uses its own in-house search index plus third-party search technologies such as Bing and licensed data providers to provide high-quality, current information. The company also says it takes steps to detect, disrupt and expose low-credibility or suspected covert-influence sources, adding that this represents an ongoing program that will continue evolving.
"On search, our AI features rely on our core search ranking systems that have been honed for years against activity like keyword stuffing," a Google spokesman stated. "In Gemini, our models use filtering and other quality-assurance methods to verify data and model quality to help prevent any large-scale gaming of our systems."
The broader context
The manipulation tactics described in the Wall Street Journal investigation align with broader industry developments in AI search optimization. Marketing professionals have developed sophisticated frameworks for understanding how to influence AI responses, though debate continues about whether these represent legitimate optimization or manipulation.
Google's John Mueller warned in August 2025 that aggressive promotion of new AI search optimization acronyms may indicate spam and scamming activities. "The higher the urgency, and the stronger the push of new acronyms, the more likely they're just making spam and scamming," Mueller stated on Bluesky.
Meanwhile, research has demonstrated that AI responses can be successfully manipulated through strategically placed content across low-authority domains. A study published in July 2025 by Reboot Online Marketing showed that content published on expired domains with domain rating scores below 5 could influence ChatGPT and Perplexity responses within days.
The traffic quality from AI search has proven substantial. Ahrefs research published in June 2025 found that visitors from AI-powered search platforms convert at rates 23 times higher than visitors from conventional search engines. Despite representing just 0.5% of total website visits, AI search visitors generated 12.1% of all signups during the measurement period.
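Those two figures are roughly consistent with each other, as a quick back-of-the-envelope check shows: 0.5% of visits producing 12.1% of signups implies AI visitors convert at roughly 27 times the rate of all other traffic, in the same ballpark as the reported 23x, which compares against conventional search visitors specifically rather than all other traffic.

```python
# Back-of-the-envelope check that the reported figures are internally consistent.
ai_visit_share = 0.005      # 0.5% of total visits
ai_signup_share = 0.121     # 12.1% of all signups

ai_rate = ai_signup_share / ai_visit_share                 # relative signups per visit
other_rate = (1 - ai_signup_share) / (1 - ai_visit_share)  # everyone else

print(f"{ai_rate / other_rate:.1f}x")  # ~27.4x vs. all other traffic
```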
This conversion efficiency creates strong financial incentives for businesses to invest in AI search optimization, whether through legitimate tactics or manipulative practices.
Platform responses and tools
Multiple platforms have launched tools to help brands monitor and influence their AI search visibility. Amplitude introduced AI Visibility tools with competitive tracking capabilities in November 2025. Adobe launched its LLM Optimizer in October 2025. These platforms enable brands to track how they appear in ChatGPT, Claude, Gemini, and other large language models.
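At their core, these trackers do something conceptually simple: run a fixed set of buyer-intent prompts against each model on a schedule and record whether, and how favorably, the brand appears in the answers. The sketch below shows that core loop using OpenAI's Python client; the brand, prompts, and model name are placeholders, it queries the bare model rather than ChatGPT's search-backed product, and commercial tools layer competitor benchmarking, sentiment, and citation tracking on top.

```python
# Minimal sketch of what an "AI visibility" tracker does at its core:
# ask a model a set of buyer-intent questions and check whether the brand
# is mentioned. Prompts, model name, and brand are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

BRAND = "Acme Spas"  # hypothetical brand
PROMPTS = [
    "What's the best hot tub for sciatica?",
    "Which hot tub brands are most reliable?",
]

def visibility_rate(brand: str, prompts: list[str], model: str = "gpt-4o-mini") -> float:
    """Fraction of prompts whose answer mentions the brand by name."""
    hits = 0
    for prompt in prompts:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        if brand.lower() in answer.lower():
            hits += 1
    return hits / len(prompts)

print(f"{BRAND} appears in {visibility_rate(BRAND, PROMPTS):.0%} of tracked answers")
```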
However, not all observers believe specialized GEO tools provide value. The founder of Lorelight, an AI visibility tracking tool, shut down the service in October 2025 after concluding that brands with high AI search visibility all shared the same characteristics that traditional SEO and brand-building best practices have always emphasized: quality content, mentions in authoritative publications, strong reputation, and genuine expertise.
"There was no secret formula. No hidden hack. No special optimization technique that only applied to AI," founder Dan Houy stated in his shutdown announcement.
The ethical divide
The Wall Street Journal investigation brings attention to an uncomfortable reality in digital marketing. The line between optimization and manipulation remains contested territory.
On one side sit practitioners who argue they help deserving brands appear for relevant queries. Research from iPullRank, which released a comprehensive 20-chapter AI Search Manual in August 2025, emphasizes technical approaches that align with how AI platforms discover and synthesize content.
On the other side are those employing tactics specifically designed to game AI systems through planted "brand authority statements" and coordinated cross-website promotion schemes.
The distinction becomes blurrier when considering that even legitimate optimization requires understanding and exploiting how AI systems evaluate sources. The same techniques that help quality brands gain visibility can be deployed by less scrupulous actors.
Industry implications
For marketing professionals, the Wall Street Journal findings confirm what many have observed throughout 2025. AI search traffic has grown consistently, though it still represents a small fraction of total web traffic compared to Google.
But the conversion metrics tell a different story. If AI search visitors are worth 4.4 times more than traditional organic visitors, as Semrush research found, the business case for optimization becomes compelling regardless of absolute traffic volumes.
The challenge involves navigating optimization strategies that remain effective while avoiding manipulative tactics that could damage brand reputation or trigger platform enforcement actions. Google has already launched spam updates targeting various manipulation techniques, and AI platforms will likely develop similar enforcement mechanisms.
Publishers face particular challenges as AI Overviews and chatbot features potentially reduce traditional click-through behavior. Dotdash Meredith reported in May 2025 that Google's AI Overviews appeared on approximately one-third of search results for their content, contributing to measurable traffic impacts.
What's next
The Wall Street Journal investigation arrives at a critical moment in search evolution. ChatGPT now ranks among the top 10 global websites by traffic, while OpenAI continues upgrading ChatGPT's search capabilities with enhanced response quality and improved conversational context handling.
The proliferation of GEO services and tactics suggests this market will continue expanding. Whether platforms can develop effective countermeasures against manipulation while preserving legitimate optimization opportunities remains uncertain.
For users, the key takeaway involves maintaining healthy skepticism toward AI recommendations. The chatbots may present answers with confidence, but the investigation reveals those answers increasingly reflect commercial optimization efforts rather than purely objective evaluation of merit.
Solis's distinction between deserving optimization and undeserving manipulation captures the central tension. The technology enables both, and determining which is which requires understanding the incentives, methods, and ethics of the practitioners involved.
The marketing community now faces decisions about which side of that line to operate on, knowing that platforms, regulators, and users will increasingly scrutinize the tactics employed to influence what AI chatbots recommend.
Timeline
- November 2022: ChatGPT launches, beginning transformation of search behavior patterns
- March 2024: Google begins expanding AI Overviews beyond experimental phases
- May 2024: Google AI Overviews launch, zero-click searches begin rising
- October 31, 2024: OpenAI officially announces ChatGPT Search functionality
- February 9, 2025: ChatGPT reaches top 10 global websites as search features expand
- May 2025: Dotdash Meredith reports AI Overviews impact traffic in Q1 earnings
- June 9, 2025: Semrush publishes study showing AI search visitors worth 4.4x more than organic traffic
- June 13, 2025: OpenAI upgrades ChatGPT search with enhanced response quality
- June 16, 2025: Ahrefs finds AI search visitors convert 23x higher than organic traffic
- June 27, 2025: Marketing consultant unveils four-layer SEO framework including GEO
- July 2, 2025: Similarweb announces ChatGPT referrals grow 25x year-over-year
- July 8, 2025: Brainlabs report reveals AI search fundamentally changes SEO
- July 15, 2025: Marketing agency proves AI responses can be manipulated
- July 19, 2025: Research proves ChatGPT uses Google Search despite Bing partnership claims
- July 26, 2025: NP Digital study finds marketing concerns over AI search may be overblown
- August 14, 2025: Google's John Mueller warns AI SEO acronyms signal spam tactics
- August 19, 2025: Ahrefs releases analysis showing ChatGPT traffic at 0.19% vs Google's 41.9%
- August 26, 2025: Google launches spam update targeting global search violations
- August 29, 2025: iPullRank releases 20-chapter AI search optimization manual
- October 31, 2025: Lorelight founder shuts down AI visibility tracking tool
- November 1, 2025: Amplitude introduces AI Visibility tool with competitive tracking
- December 10, 2025: Ahrefs publishes fake brand experiment on AI search manipulation
- December 22, 2025: Impact.com partners with Evertune to enable brands to track AI mentions
- January 30, 2026: Wall Street Journal publishes investigation on ChatGPT manipulation tactics
- February 1, 2026: Aleyda Solis comments on Wall Street Journal investigation
Summary
Who: Evan Bailyn (CEO of First Page Sage), Aleyda Solis (founder of Orainti), Nick Koudas (University of Toronto professor), Christopher Mims (Wall Street Journal), OpenAI, Google, and businesses investing in generative engine optimization.
What: A Wall Street Journal investigation revealed that businesses pay substantial sums to manipulate ChatGPT and other AI chatbot recommendations through tactics called generative engine optimization (GEO). Companies plant "brand authority statements" across multiple websites, use superlatives to trigger AI algorithms, and exploit how chatbots construct narratives from web content. AI chatbot referrals now account for 44% of referral traffic for some optimization firms' clients, up from almost nothing a year ago.
When: Published January 30, 2026, the investigation documents practices that emerged throughout 2025 as ChatGPT and other AI platforms gained mainstream adoption. AI chatbot referrals grew from under 1 million visits in early 2024 to more than 230 million monthly by September 2025.
Where: The manipulation affects global search ecosystems, particularly ChatGPT, Perplexity, Claude, and other large language models. Businesses deploy tactics across websites, expired domains, and client-owned platforms to influence AI recommendations. The practices impact users worldwide who rely on AI chatbots for product and service recommendations.
Why: Financial incentives drive the manipulation. Visitors arriving from AI chatbots spend more time on sites, view more pages, and convert at rates 23 times higher than traditional search visitors according to Ahrefs research. Despite representing just 0.5% of total traffic, AI search visitors generate 12.1% of signups. The investigation highlights tensions between legitimate optimization and manipulation, with industry observers debating where ethical boundaries lie in influencing AI recommendations.