OpenAI CEO Sam Altman this week directly challenged Anthropic's multimillion-dollar Super Bowl advertising campaign criticizing ChatGPT's pivot toward advertising revenue, characterizing the commercials as "clearly dishonest" during a live interview on the TBPN tech talk show. The comments arrived hours after OpenAI launched GPT-5.3-Codex, a coding model the company positions as "the best coding model in the world" with significantly improved speed and interaction capabilities.
The confrontation between the two leading artificial intelligence companies intensified on February 5 as Anthropic prepared to air two Super Bowl commercials targeting OpenAI's January 16 announcement that advertisements would appear in ChatGPT responses. Altman acknowledged finding the advertisements "funny" before pivoting to substantive criticism of what he characterized as deceptive marketing tactics deployed against a competitor's clearly stated advertising principles.
"Using a deceptive ad to criticize deceptive ads feels, I don't know, something doesn't sit right with me about that," Altman stated during the interview, which aired live on February 5. The response marked OpenAI's most direct public engagement with Anthropic's aggressive consumer marketing campaign, which represents a departure from typical enterprise software positioning by directly attacking competitor business practices.
OpenAI's advertising principles versus Anthropic's characterization
The fundamental dispute centers on how advertisements will integrate with ChatGPT's conversational interface. OpenAI's stated principles, according to Altman, explicitly prohibit sponsored content from appearing within the large language model response stream itself. "Our most important principle for ads says that we won't do exactly this; we would obviously never run ads in the way Anthropic depicts them," Altman explained.
According to OpenAI's January announcement, advertisements will appear "at the bottom of answers in ChatGPT when there's a relevant sponsored product or service based on your current conversation." The company emphasized that separate AI systems will evaluate whether conversations demonstrate commercial intent before surfacing advertisements, with placements occurring only after users show clear interest or purchase intent.
The technical implementation differs substantially from Anthropic's commercial depiction. OpenAI maintains that advertisements will be clearly labeled and separated from organic answers, appearing as distinct placements rather than integrated within conversational responses. The distinction matters because conversational AI interfaces blur traditional boundaries between editorial content and sponsored placements that exist in search advertising.
Altman characterized the advertising format as potentially "dystopic, like [a] bad sci-fi movie" if implemented as Anthropic's commercials suggested. "We are not stupid. We respect our users. We understand that if we did something like what those ads depict, people would rightfully stop using the product," he stated. The emphasis on user trust reflects OpenAI's positioning that sustainable advertising models require maintaining clear separation between organic responses and commercial content.
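The gating described above — a separate intent check, with any ad appended below the answer and clearly labeled — can be sketched in miniature. Everything here (the function names, the keyword heuristic, the threshold) is an illustrative assumption, not OpenAI's implementation.

```python
def commercial_intent_score(conversation: list[str]) -> float:
    """Stand-in for a separate intent-classification model."""
    signals = ("buy", "price", "recommend", "best deal")
    hits = sum(any(s in turn.lower() for s in signals) for turn in conversation)
    return min(1.0, hits / max(len(conversation), 1))

def render_response(answer: str, conversation: list[str], threshold: float = 0.5) -> str:
    """Append a labeled placement below the answer only when intent is clear."""
    if commercial_intent_score(conversation) >= threshold:
        ad = "[Sponsored] Example placement relevant to this conversation"
        return f"{answer}\n\n---\n{ad}"  # appended below, never inside, the answer
    return answer
```

The key structural property the article describes is preserved: the organic answer string is never modified, and the sponsored unit only ever appears after it, behind a separate gating decision.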
Scale and accessibility arguments
Beyond technical implementation details, Altman framed the advertising debate within broader questions about AI accessibility and business model sustainability. The CEO emphasized that ChatGPT maintains significantly larger scale than Anthropic's Claude assistant, stating that "more Texans use ChatGPT for free than total people use Claude in the US."
According to data OpenAI reported in October 2025, ChatGPT reached 800 million weekly users, providing substantial scale advantages for advertising-based revenue models. The platform processed approximately 2.5 billion messages daily by July 2025, representing roughly 29,000 interactions per second. These usage levels create economic conditions that enable OpenAI to subsidize free tier access through advertising revenue while maintaining premium subscription options without advertisements.
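The per-second figure follows directly from the reported daily message count; a quick sanity check:

```python
# 2.5 billion messages per day, converted to a per-second rate
daily_messages = 2_500_000_000
seconds_per_day = 24 * 60 * 60  # 86,400
per_second = daily_messages / seconds_per_day
print(round(per_second))  # 28935, i.e. roughly 29,000 messages per second
```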
Altman characterized Anthropic as serving "an expensive product to rich people" while OpenAI pursues broader accessibility through advertising-supported free tiers alongside paid subscriptions. The positioning reflects fundamental strategic differences between the two companies regarding how artificial intelligence capabilities should reach different market segments.
ChatGPT's free tier provides substantial functionality to users who cannot afford Pro, Business, or Enterprise subscriptions. OpenAI established a baseline CPM of approximately $60 for advertising placements, pricing that rivals inventory costs for premium broadcast events including Las Vegas Sphere displays and NFL game broadcasts. The high CPM reflects the platform's contextual targeting capabilities and user intent signals that traditional advertising placements cannot match.
GPT-5.3-Codex capabilities and mid-turn interaction
The interview coincided with OpenAI's launch of GPT-5.3-Codex, representing what Altman characterized as "a very big step forward" for coding applications. The model incorporates feedback from the previous GPT-5.2-Codex release while delivering substantial improvements across multiple dimensions including programming intelligence, processing speed, personality refinement, and computer use capabilities.
The most distinctive capability involves mid-turn interaction, enabling users to provide feedback and course corrections while the model executes long-running tasks. "People are starting to use these tools for very long pieces of work at one time, you know, multi-hour tasks," Altman explained. The ability to steer models during execution addresses a fundamental limitation of one-shot generation approaches where specification errors or environmental changes can derail extended work sessions.
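One way to picture mid-turn interaction is a long-running loop that drains a feedback channel between steps, so corrections take effect without restarting the run. This queue-based sketch illustrates the concept only; it is not a description of how Codex implements it.

```python
import queue

def run_long_task(steps, feedback: queue.Queue):
    """Execute steps in order, folding in any feedback that arrives mid-run."""
    instructions = []  # corrections accumulated so far
    results = []
    for step in steps:
        while not feedback.empty():          # pick up mid-turn feedback, if any
            instructions.append(feedback.get())
        results.append(step(instructions))   # each step sees corrections so far
    return results

# A user drops a correction into the queue while the "task" is underway
fb = queue.Queue()
fb.put("use the staging database, not prod")
out = run_long_task([lambda ins: f"step ran with {len(ins)} correction(s)"], fb)
print(out)  # ['step ran with 1 correction(s)']
```

The design choice mirrors the limitation the article names: in a one-shot approach, a specification error discovered mid-run forces a restart, whereas a feedback channel lets the remaining steps absorb the correction.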
Altman compared the capability to workplace dynamics where interrupting a co-worker making a mistake represents efficient collaboration rather than rudeness. "If you see a co-worker making a mistake and you don't interrupt them that's rude, right? Like it's deeply inefficient," he noted. The analogy reflects OpenAI's framing of AI models as collaborative team members operating within human-directed workflows rather than autonomous systems requiring perfect upfront specifications.
The technical achievement reflects broader industry progress toward agentic systems capable of managing complex, multi-step workflows. Altman suggested that users will increasingly "feel like they're managing a team of agents" as AI capabilities advance, with workers "operating at a higher and higher level of abstraction" as models handle increasingly sophisticated task decomposition and execution.
Expert users detected the model deployment before OpenAI's public announcement. "It was funny as we were deploying it this morning, a couple of very extreme experts at using these models noticed and said, 'Man, something's really different with Codex,'" Altman recounted. The observation suggests that capability improvements register immediately with experienced practitioners even without explicit communication about model updates.
Forward-deployed engineering and enterprise adoption
OpenAI's forward-deployed engineers serve critical roles in translating model capabilities into enterprise implementations, according to Altman's description of their responsibilities. These technical specialists address fundamental questions companies face when adopting AI systems, including infrastructure integration, security protocols, data protection measures, and agent orchestration approaches.
"You go into a company that is not AI native and say, okay, you've said you want to deploy AI. They really are not sure what to do, how do I hook this up to my systems, do I need to fine-tune a model on my codebase, how do I think about orchestrating agents and using things from different companies," Altman explained. The role encompasses both technical implementation and strategic guidance as organizations navigate complex decisions about AI integration across existing technology stacks.
Data security concerns dominate enterprise conversations, according to Altman. Companies want assurance that AI co-working agents will not access unauthorized information or create context exploitation vulnerabilities. "How do I know that these AI co-working agents are not going to go access a bunch of information and share it in ways they shouldn't or get a context exploit or something like that?" represents a common question forward-deployed engineers address.
The Frontier platform represents OpenAI's enterprise offering for organizations deploying multiple agents and complex workflows. Altman characterized the system as enabling companies to "connect to an AI platform so that you can use all these agents and workflows and everything else you want." The infrastructure provides standardized access to OpenAI's capabilities while addressing security, compliance, and integration requirements that enterprise deployments demand.
Compute bottlenecks and infrastructure constraints
Asked about the primary bottleneck constraining AI development, Altman identified chips as the current limiting factor, though he noted that energy and compute constraints alternate as binding limitations. "It goes back and forth. Right now again it's chips," he stated, acknowledging that different resources constrain progress at different times depending on infrastructure buildout cycles.
The chip shortage particularly affects training runs for advanced models and deployment capacity for inference workloads. Altman suggested that "normal capitalism may solve it" but indicated that coordinated societal investment in wafer capacity expansion would accelerate progress. "Somehow deciding as a society that we are going to increase the wafer capacity of the world and we're going to fund that and we're going to get the whole supply chain and the talented people we need to make that happen would be a very good thing to do," he noted.
The infrastructure challenges extend beyond individual companies to encompass global semiconductor manufacturing capacity. Expanding fabrication capacity requires multi-year investment cycles, specialized equipment, and technical talent that cannot scale rapidly through market mechanisms alone. OpenAI's resource requirements reflect broader industry patterns as multiple companies pursue AI capabilities requiring massive computational resources.
Codex Desktop adoption and workflow transformation
The Codex Desktop application came as "somewhat of a surprise" to Altman in terms of adoption velocity and user enthusiasm. The application provides a dedicated interface for coding tasks with improved ergonomics compared to web-based interactions. "The 10% of polish of the experience of using these models, especially when there's so much capability overhang, goes an extremely long way to what you can build and how you interact with this stuff," he observed.
Altman described building an autocompleting to-do list as an example of Codex Desktop's practical applications. The system accepts task descriptions in natural language, attempts autonomous completion for straightforward items, requests clarification for ambiguous instructions, and maintains traditional manual completion options for tasks requiring human judgment. "An interface like that where all the stuff you want to do, you just sort of explain to a computer or your AI and it tries to go off and do them" represents Altman's vision for general knowledge work automation.
The application currently requires reasonable technical proficiency but will eventually support broader audiences as capabilities mature. "Obviously, we'll find a version of this product that can do other knowledge work tasks and control your computer and things like that where you don't have to be" technically sophisticated, Altman explained. The evolution reflects OpenAI's trajectory toward general-purpose work agents accessible to non-technical users.
Mobile integration remains incomplete but planned. "Of course, we should have an ability to kick off new tasks from mobile and we'll do that," Altman confirmed. The unified backend architecture will enable users to interact with their AI assistant across multiple surfaces including desktop, mobile, and web interfaces with shared context and task continuity.
Voice mode limitations and model improvements
Current voice mode implementations suffer from pausing issues that degrade user experience during real-time conversations. Asked whether the problem requires new hardware or model capabilities, Altman attributed it primarily to model limitations. "We need new model. We may need some new hardware too, but mostly we just need a new model," he stated, predicting that OpenAI will deliver "a great voice mode by the end of this year."
The technical challenge involves maintaining conversational flow without artificial pauses that interrupt natural dialogue rhythms. Voice interactions demand lower latency than text-based exchanges while managing complex turn-taking dynamics that humans navigate effortlessly but AI systems struggle to replicate. Solving these challenges requires model architectures optimized for real-time audio processing rather than text generation approaches adapted to voice interfaces.
Revenue models and advertising implementation timeline
OpenAI's advertising implementation timeline remains deliberately vague, with Altman acknowledging that "we haven't even started the test yet. We start the test soon." The cautious approach reflects the company's recognition that advertising formats require iterative refinement before broader deployment. "It's going to take us some number of iterations to figure out the right ad unit, the right kind of, the right way this all works," he explained.
The delayed implementation follows internal reorganization: OpenAI issued a "code red" directive on December 2, 2025, shifting company resources toward improving ChatGPT's core functionality while postponing advertising rollout and other revenue initiatives. Altman told employees that newer projects including advertising, AI agents for health and shopping, and a personal assistant called Pulse would be delayed as the company addressed competitive pressure from Google's Gemini AI.
Asked whether he wished OpenAI had started advertising earlier, Altman defended the prioritization decisions. "We have gone from like not a company, you know, three years and three months ago or something like that. We were like a research lab and now we are like a pretty big company with a lot of products," he noted. The rapid scaling created numerous competing priorities that required strategic trade-offs about resource allocation across product development, infrastructure buildout, and revenue generation initiatives.
OpenAI's financial pressures create urgency around diversifying revenue streams beyond subscriptions. The company must grow revenue to roughly $200 billion to turn a profit in 2030, according to financial projections reported in December 2025. The advertising business provides a potential path toward that scale, particularly given ChatGPT's 800 million weekly users and high engagement levels.
Software industry transformation and API-first strategies
Asked whether "software is dead," Altman characterized the industry as "different" rather than obsolete. The transformation involves fundamental changes to how software gets created, distributed, and monetized. "What software, like how you create it, how you're going to use it, how much you're going to have written for you each time you need it versus how much you'll want sort of a consistent UX. That's all going to change," he explained.
Traditional SaaS companies face volatility as markets reassess valuations based on AI-driven disruption potential. "There have been a number of these big sell-offs of SaaS stocks over the last few years as these models have rolled out," Altman noted. He expects continued turbulence as some companies demonstrate durability while others struggle to adapt to agent-based consumption patterns.
The observation that "every company is an API company now whether they want to be or not" reflects how AI agents will interact with software services. Rather than human users navigating graphical interfaces, agents will programmatically access functionality through APIs to accomplish tasks on behalf of users. This shift rewards companies that expose clean, well-documented programmatic interfaces while challenging those dependent on user interface friction and engagement metrics.
Altman cited a recent conversation with Uber CEO Dara Khosrowshahi as exemplifying the right strategic approach. Khosrowshahi recognized that consumers want to order Uber rides via their preferred AI agent regardless of potential impacts on Uber's advertising business. "The consumer wants to order an Uber via their preferred agent. You should let them, otherwise you're going to have other problems," Altman summarized. The customer-centric perspective acknowledges that platform companies must adapt to agent-mediated transactions rather than attempting to preserve legacy business models through technical restrictions.
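The shift Altman describes can be pictured as an agent fulfilling a request through a service's programmatic interface rather than its app. The RideAPI class and the keyword routing below are hypothetical stand-ins for illustration, not Uber's actual API or a real agent loop.

```python
class RideAPI:
    """Hypothetical ride service exposing its functionality to agents."""
    def request_ride(self, pickup: str, dropoff: str) -> dict:
        return {"status": "requested", "pickup": pickup, "dropoff": dropoff}

def agent_handle(request: str, api: RideAPI) -> dict:
    # A real agent would parse the request with a model; keyword routing
    # stands in for that here.
    if "ride" in request.lower():
        return api.request_ride(pickup="current location", dropoff="airport")
    return {"status": "unhandled"}

print(agent_handle("book me a ride to the airport", RideAPI())["status"])  # requested
```

The point of the sketch is that the service's value flows through `request_ride`, not through its user interface — which is why a clean, well-documented programmatic surface becomes the asset in agent-mediated transactions.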
Space-based data centers and infrastructure speculation
Asked about the timeline for space-based data centers providing meaningful compute for OpenAI, Altman expressed skepticism about near-term viability. When prompted about two-year, three-year, five-year, and ten-year horizons, he declined to provide specific predictions. "I wish Ethan luck," he stated, referencing entrepreneurs pursuing orbital computing infrastructure.
The response reflects practical constraints around launch costs, power generation, thermal management, and network latency that make terrestrial data centers more economical for foreseeable AI workloads. While space-based computing may eventually address specific use cases requiring unique orbital characteristics, the fundamental economics favor ground-based infrastructure for training and inference operations that constitute OpenAI's core compute requirements.
Timeline and context
The February 5 interview occurred during a compressed period of competitive maneuvering between OpenAI and Anthropic over advertising business models and market positioning. Anthropic announced on February 4 that Claude would remain advertisement-free across all subscription tiers, establishing principles prohibiting sponsored content from influencing responses. The company simultaneously launched its multimillion-dollar Super Bowl campaign featuring commercials directly criticizing OpenAI's January 16 announcement that advertisements would appear in ChatGPT.
OpenAI's advertising infrastructure development accelerated throughout 2025. The company posted a job listing on September 24, 2025, seeking a Growth Paid Marketing Platform Engineer to develop campaign management tools, establish integrations with major advertising platforms, construct real-time attribution pipelines, and implement experimentation frameworks. The technical scope encompasses backend API development, data pipeline construction, and service deployment supporting campaign management functions, attribution tracking, and advertising spend optimization.
The Information reported in December 2025 that OpenAI's AI models could prioritize sponsored content to ensure it appears in ChatGPT responses, representing a significant departure from subscription-focused revenue models. Employees considered multiple approaches including sponsored information receiving preferential treatment in responses, disclosure labels indicating sponsored results, and advertisements appearing only after conversations progress in specific directions.
Industry analyst Debra Aho Williamson predicted in December 2025 that ChatGPT would reach 1 billion weekly users by the end of 2025 and begin displaying advertisements in 2026. She expects new ad formats to launch on AI-driven platforms including Google AI Overviews, Microsoft Copilot, and Amazon's Rufus, effectively turning AI chats into new media channels for advertisers seeking purchase-oriented conversation placements.
Implications for marketing professionals
The advertising debate between OpenAI and Anthropic represents more than competitive positioning between two AI companies. The business model decisions these platforms make will fundamentally shape how marketers reach audiences as conversational AI increasingly mediates consumer information discovery and purchase decisions.
OpenAI's emphasis on advertising-supported free access creates a mass-market channel potentially comparable to how search and social platforms evolved into advertising destinations. The $60 CPM pricing with 800 million weekly users represents billions in potential annual advertising revenue that could enable OpenAI to subsidize free tier capabilities, invest more aggressively in model development, and compete on accessibility rather than premium features.
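At a $60 CPM (cost per thousand impressions), even modest per-user ad exposure compounds to that scale. The impressions-per-user figure below is a purely illustrative assumption, not a reported number:

```python
cpm_usd = 60                    # reported baseline cost per 1,000 impressions
weekly_users = 800_000_000      # weekly user figure reported in October 2025
impressions_per_user_week = 5   # hypothetical assumption for illustration

weekly_impressions = weekly_users * impressions_per_user_week
annual_revenue = weekly_impressions / 1_000 * cpm_usd * 52
print(f"${annual_revenue / 1e9:.1f}B per year")  # $12.5B under these assumptions
```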
Anthropic's advertising prohibition reflects different strategic priorities emphasizing enterprise subscriptions and developer adoption over mass-market reach. The company's $13 billion Series F funding provides runway for an advertising-free strategy, though the $183 billion post-money valuation will eventually require returns that may pressure the business model. Anthropic acknowledged potential future changes through language stating "Should we need to revisit this approach, we'll be transparent about our reasons for doing so."
The contextual targeting capabilities that ChatGPT advertisements will enable differ substantially from traditional display or search advertising. According to StackAdapt CTO Yang Han, the platform can target users at precise moments based on conversation content, with ChatGPT maintaining memory of user interactions that potentially reveals intent signals more valuable than search queries or browsing behavior. When someone discusses marathon training, the system knows not just that they searched for running shoes but the specific context around injury prevention, pace goals, or upcoming races that inform purchase decisions.
Recent research from WordStream found that one in five AI responses for PPC strategy contain inaccuracies, highlighting the importance of grounding AI systems in accurate, real-time business data. The finding underscores risks that advertisers face when platforms prioritize sponsored content within AI-generated responses where attribution and accuracy become difficult for users to verify.
For marketing professionals, the immediate implications involve monitoring how OpenAI's advertising test evolves and whether Anthropic maintains its advertising prohibition as competitive dynamics intensify. The platforms' different approaches create strategic choices about where to allocate resources based on whether brands prioritize reach through advertising placements or credibility through sponsoring advertising-free experiences.
Timeline
- September 24, 2025 - OpenAI posts job listing for Growth Paid Marketing Platform Engineer to build advertising infrastructure
- December 2, 2025 - OpenAI issues "code red" directive postponing advertising implementation to focus on core ChatGPT improvements
- December 2025 - The Information reports OpenAI exploring sponsored content prioritization in ChatGPT responses
- January 16, 2026 - OpenAI confirms advertising tests for ChatGPT free and Go tiers with approximately $60 baseline CPM
- February 4, 2026 - Anthropic announces Claude will remain ad-free and launches Super Bowl advertising campaign targeting OpenAI
- February 5, 2026 - Sam Altman characterizes Anthropic's Super Bowl ads as "dishonest" during TBPN interview
- February 5, 2026 - OpenAI launches GPT-5.3-Codex coding model with mid-turn interaction capabilities
- February 8, 2026 - Anthropic's two Super Bowl commercials scheduled to air during Super Bowl LX at Levi's Stadium in Santa Clara, California
Summary
Who: Sam Altman, CEO of OpenAI, responded to Anthropic's competitive advertising campaign during a live interview on the TBPN tech talk show hosted by John Coogan and Jordi Hays. The interview addressed Anthropic's multimillion-dollar Super Bowl commercials criticizing OpenAI's decision to introduce advertising in ChatGPT.
What: Altman characterized Anthropic's Super Bowl advertisements as "clearly dishonest" in their depiction of how OpenAI will implement ChatGPT advertising, emphasizing that OpenAI's principles explicitly prohibit placing ads within the LLM response stream. The interview coincided with OpenAI's launch of GPT-5.3-Codex, a coding model featuring mid-turn interaction capabilities enabling users to provide feedback during execution of multi-hour tasks. Altman defended OpenAI's advertising-supported free tier as enabling broader accessibility compared to Anthropic's premium-only model, noting that more Texans use ChatGPT for free than total people use Claude in the United States.
When: The interview aired on February 5, 2026, one day after Anthropic's announcement that Claude would remain advertisement-free and three weeks after OpenAI confirmed plans to test advertising within ChatGPT. The GPT-5.3-Codex launch occurred the same day, with the Super Bowl commercials scheduled to air on February 8.
Where: The conversation took place during a live broadcast of TBPN, a tech talk show streaming on X and YouTube from 11 AM to 2 PM Pacific Time. The program recently featured other technology executives including Mark Zuckerberg and Satya Nadella. OpenAI's advertising test will initially appear in ChatGPT's free and Go subscription tiers, while Pro, Business, and Enterprise subscriptions will remain ad-free.
Why: The confrontation reflects fundamental strategic differences between artificial intelligence platforms pursuing divergent business models and market positioning. OpenAI emphasizes accessibility through advertising-supported free tiers alongside paid subscriptions, while Anthropic positions Claude as a premium product without advertising across all tiers. The debate carries significant implications for marketers as conversational AI platforms establish monetization approaches that will shape how brands reach audiences through AI-mediated interactions. OpenAI faces financial pressure to reach roughly $200 billion in revenue by 2030, making advertising diversification strategically necessary beyond subscription revenue alone.