Italy's competition authority today announced it has closed three separate investigations into generative artificial intelligence providers, securing binding commitments from DeepSeek, Mistral AI, and Nova AI (operated by Scaleup Yazılım Hizmetleri Anonim Şirketi) to improve how the services disclose the risk of so-called hallucinations to Italian users. The Autorità Garante della Concorrenza e del Mercato (AGCM) published the announcement on 30 April 2026, covering proceedings numbered PS12942, PS12968, and PS12973.

Each case was closed without a finding of infringement under article 27(7) of the Italian Consumer Code (Decreto Legislativo 6 settembre 2005, n. 206). No fines were imposed. The companies instead offered commitments that, once accepted and made mandatory by the authority, require implementation within defined time windows. Should any company fail to follow through, the AGCM may reopen proceedings and apply administrative fines ranging from 10,000 to 10,000,000 euros; repeated non-compliance can result in business suspension for up to 30 days.

What triggered the three investigations

All three proceedings shared a common starting point: an alleged failure to inform consumers clearly, immediately, and intelligibly about the possibility that AI-generated content may contain inaccurate, misleading, or entirely fabricated information. In technical parlance the phenomenon is called a hallucination - a situation where a large language model produces outputs that appear authoritative but do not correspond to fact.

The AGCM framed the omission not merely as a product quality problem but as a matter of commercial practice. According to the authority's decisions, a consumer's choice to use an AI chatbot constitutes a "commercial decision" under the Consumer Code, even when no monetary payment is involved. That framing is significant: it anchors the regulatory intervention in unfair commercial practices law rather than in the EU AI Act or the Digital Services Act, a jurisdictional question that generated a parallel dispute between the AGCM and Italy's communications regulator, as discussed below.

The DeepSeek investigation (PS12942) was launched on 2 April 2025 and covers both Hangzhou DeepSeek Artificial Intelligence Co. Ltd and Beijing DeepSeek Artificial Intelligence Co. Ltd. The authority noted that the web chat service had been accessible from Italy since 2 November 2023, and that the DeepSeek mobile app was made available globally on 15 January 2025 before the company voluntarily removed it from Italian app stores on 29 January 2025.

The Mistral AI investigation (PS12968) opened on 3 June 2025 and targets Mistral AI SAS, headquartered at 15 rue des Halles, Paris. According to the proceeding documents, the Le Chat service became accessible in Italy via the web from 6 February 2025, with active promotion in the Italian market beginning 29 May 2025 through a partnership with Iliad Italia S.p.A.

The Nova AI investigation (PS12973) began on 30 May 2025, targeting the cross-platform chatbot service operated by Scaleup Yazılım Hizmetleri Anonim Şirketi, based in İzmir, Turkey, and registered at the İzmir trade registry under company number 2104671.

What each company agreed to do

The commitments are detailed and technically specific. Each company negotiated its own package of measures, though they share a common structural logic: add visible, permanent disclaimers at the point of AI use, expand pre-contractual information, and translate key disclosures into Italian.

For DeepSeek, the package runs to four commitments. The company agreed to translate its general terms and conditions into Italian, with visibility triggered by the user's IP address - meaning Italian users will see Italian-language terms automatically. A permanent banner in Italian will appear at the bottom of the chat input box at all times, reading "Contenuto generato da IA. Verifica sempre l'accuratezza delle risposte, che possono contenere inesattezze" (AI-generated content. Always verify the accuracy of the responses, which may contain inaccuracies). The same warning will also appear on the sign-up page before the registration button. For queries touching medical, legal, or financial topics, an additional inline warning will appear inside the generated response itself: "Questa risposta è generata da IA. Controllarne l'accuratezza" (This response is AI-generated. Check its accuracy).

The fourth DeepSeek commitment goes further than the others. According to the decisions, DeepSeek also agreed to invest in technology aimed at reducing the incidence of hallucinations. The authority described a three-phase approach: a pre-training phase involving meticulous data filtering to remove low-quality training data; a post-training phase using specialised question-answer pairs and reinforcement learning to reduce hallucination-containing outputs; and an implementation phase using real-time access to external information and prioritisation of authoritative sources. The AGCM described this last commitment as having "particular technological significance" and as "an experiment in self-correction for AI models." DeepSeek was given 90 days from the acceptance decision to implement all measures, with a compliance report due to the authority within 120 days of notification. The decision accepting these commitments was issued at the authority's session on 16 December 2025.
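The decisions do not describe how the "prioritisation of authoritative sources" in the implementation phase would work in practice. One plausible reading is a retrieval step that ranks trusted domains ahead of others before results reach the model; the sketch below illustrates that idea only - the domain list, scores, and functions are assumptions, not details from the commitments:

```python
# Hypothetical sketch of inference-time source prioritisation: rank retrieved
# URLs so that results from trusted domains come first. The domain list and
# scores are illustrative assumptions, not from the AGCM decisions.

AUTHORITATIVE = {"europa.eu": 3, "gov.it": 3, "who.int": 2, "reuters.com": 1}

def source_score(url: str) -> int:
    """Score a result by whether its host ends with a trusted domain suffix."""
    host = url.split("//")[-1].split("/")[0]
    return max((score for domain, score in AUTHORITATIVE.items()
                if host.endswith(domain)), default=0)

def prioritise(results: list[str]) -> list[str]:
    """Stable sort: authoritative sources first, original order otherwise."""
    return sorted(results, key=source_score, reverse=True)
```

Because Python's sort is stable, results with equal scores keep the retrieval engine's original ordering, so prioritisation only reorders across trust tiers.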

Mistral AI's commitments, accepted at the AGCM session of 17 February 2026, are structured as four numbered undertakings. The first covers disclaimer visibility in Le Chat's chat windows. Mistral committed to displaying a permanent banner below the chat input bar on both the web interface and the mobile app, before and after user login. According to the commitments document, the web version will read "Le Chat può commettere errori. Controlla le risposte. Scopri di più" (Le Chat may make errors. Check the responses. Learn more); the app version, where screen space is limited, reads "Possibili errori nelle risposte. Scopri di più" (Possible errors in responses. Learn more). The "Scopri di più" link takes users directly to the Italian-language terms of service. Mistral committed to implementing the web disclaimer within one day of the authority accepting the commitments and the app version within one week.

The second Mistral commitment requires expanded language in the terms of service. A new section titled "Imprecisione dell'Output" (Output Imprecision) was committed to include the following statement - translated from the document: "Mistral AI products are based on large language models (LLMs), a technology in continuous evolution. For this reason, generated output may occasionally be imprecise. Mistral AI products are not authoritative or infallible sources of information. In any case, one must not rely on output generated by Mistral AI products as the sole source of truth or as a substitute for professional advice (for example legal, tax, or medical)." The third commitment covers accessibility, adding links to the terms of service at six distinct touchpoints: the web chat menu, the app and web sign-up pages, the app and web login pages, and the App Store and Google Play listings. The fourth commitment requires translation of the entire mistral.ai website - including the help centre - into Italian, to be completed within four months of acceptance.

Nova AI's commitments, accepted at the AGCM session of 21 April 2026, address two distinct concerns: hallucination disclosure and clarity about what the service actually is. The AGCM found that Nova AI had presented its offering confusingly, with users left uncertain whether the service was a novel AI system that processed and combined outputs from other chatbots, or simply a single interface providing pass-through access to them. Nova AI is the latter - according to the company's own description in the proceedings, a "multi-model functional integration platform" that, in its paid version, provides unified access to models from OpenAI, Anthropic, Google, DeepSeek, and others - but prior communication had not made this sufficiently clear.

On the hallucination side, Nova AI's Italian-language disclaimer for the chat interface will read "Le risposte dell'IA potrebbero non essere accurate. Assicurati di verificare che le informazioni siano corrette" (AI responses may not be accurate. Make sure to verify the information is correct). This appears on the web platform, inside the mobile app above the chat window for Italian IP addresses, and on the sign-up page before registration. The disclaimer functions as a hyperlink to the Italian terms of service. On the service description side, the company agreed to revise its homepage tagline to read: "Nova: Your AI Assistant - A single interface to multiple AI models, delivering efficiency of use through aggregation." Pricing screens were redesigned to show all available plans - Free, Pro, and Ultra Pro - with granular per-feature breakdowns visible before purchase. The terms of service will include a new Italian-language section clarifying that the app "aggregates [AI models] in a single platform to improve functionality and efficiency of use for users." Nova AI has 60 days to complete implementation following notification and 90 days to report compliance.

The jurisdictional dispute with AGCom

In all three proceedings, the AGCM consulted Italy's communications regulator, the Autorità per le Garanzie nelle Comunicazioni (AGCom), as required by article 27(6) of the Consumer Code when a commercial practice is disseminated via the internet. In all three cases, AGCom declined to issue the requested opinion, on the grounds that the conduct in question fell within the scope of the Digital Services Act (DSA), over which AGCom acts as Italy's Digital Services Coordinator.

AGCom's position was that the AI chatbots functioned, in operational terms, as search engines built on generative AI - intermediaries mediating access to digital knowledge - and therefore qualified as intermediary services under the DSA. Specifically, AGCom pointed to recital 119 of the EU AI Act as support for classifying these systems as intermediary services. The AGCM rejected this analysis in detailed terms across all three decisions. It argued that large language model chatbots do not behave like traditional search engines: they do not return lists of links retrieved from the live web, but instead generate responses from knowledge acquired during training.

The optional web-search toggle present in interfaces like DeepSeek and Mistral is activated only when the user explicitly selects it, and even then the model may respond from its training base rather than live web results. Moreover, the AGCM noted, LLMs serve entirely non-search functions - solving mathematical problems, writing poetry, generating code, holding empathetic conversation - that bear no relationship to information retrieval in any meaningful sense.

More pointedly, the AGCM noted that even accepting AGCom's characterisation, the DSA does not displace consumer protection law. Article 2 and recital 10 of the DSA explicitly preserve existing consumer protection frameworks, and article 27(1-bis) of the Italian Consumer Code assigns exclusive jurisdiction to the AGCM over unfair commercial practices, including those that might simultaneously violate sector-specific regulations. The AGCM's legal analysis drew on a Court of Justice of the European Union ruling from 13 September 2018 (joined cases C-54/17 and C-55/17), which established that consumer protection obligations yield to sector-specific rules only when those rules impose obligations incompatible with consumer law and leave professionals no margin of discretion - a bar the DSA does not meet in this context. AGCom, for its part, reserved the right to take independent action regarding any DSA violations it might identify based on the evidence gathered during the AGCM proceedings.

Why this matters for marketing and advertising professionals

The three decisions represent one of the first times an EU member state competition authority has converted AI hallucination disclosure into a binding, enforceable legal obligation across multiple providers simultaneously. The AGCM's willingness to close without formal infringement findings - provided the commitments are adequate - signals that regulators view negotiated transparency as achievable without prolonged enforcement.

For marketing and advertising professionals who use AI chatbots as part of research, content generation, or competitive intelligence workflows, the cases define a regulatory baseline: a permanent, visible, Italian-language warning in the chat interface itself; an equivalent warning at account registration; and detailed terms of service that are reachable without navigational friction. The IAB transparency and disclosure framework launched in January 2026 pointed in the same direction from a self-regulatory standpoint, noting that disclosure can determine whether AI adoption becomes a long-term value driver or a liability.

The Nova AI case adds a further dimension. That proceeding addressed not only accuracy disclosure but the clarity of the commercial proposition itself - specifically, the obligation to tell users what they are actually buying when subscribing to a bundled AI service. Companies that aggregate access to third-party AI models must communicate that aggregation role explicitly, both in their terms and in their marketing messaging. The AGCM has engaged with questions of AI chatbot services and market access in Italy before, and these decisions confirm its sustained interest in how such services are presented to Italian consumers.

The cases also illustrate the jurisdictional complexity that AI services navigate under overlapping EU frameworks. The same chatbot interface could simultaneously attract scrutiny under the Consumer Code (AGCM), the DSA (AGCom as Digital Services Coordinator), the EU AI Act, and the GDPR. Research published in September 2025 observed that Australian regulators found no large language models with hallucination rates below 1%, underscoring that disclosure is necessarily imperfect mitigation: it alerts users that a problem exists but cannot prevent it.

DeepSeek's fourth commitment - to invest in technical measures to reduce hallucination rates across four identified categories (search hallucinations, rewriting hallucinations, legal hallucinations, and behavioural hallucinations) - is the most novel element across all three proceedings. The authority framed this as going beyond disclosure to address the underlying phenomenon. Whether that commitment can be independently verified remains an open question; the decision requires only that DeepSeek submit a compliance report confirming measures have been implemented, not that it demonstrate a measurable reduction in error rates.

Timeline

  • April 2023 - Mistral AI founded in France
  • 2 November 2023 - DeepSeek Chat first becomes accessible from Italy
  • 15 January 2025 - DeepSeek mobile app launches globally
  • 29 January 2025 - DeepSeek voluntarily removes app from Italian stores
  • 6 February 2025 - Le Chat becomes accessible in Italy via the web
  • 2 April 2025 - AGCM opens PS12942 against DeepSeek
  • 29 May 2025 - Mistral begins active promotion in Italy via Iliad Italia partnership
  • 30 May 2025 - AGCM opens PS12973 against Scaleup / Nova AI
  • 3 June 2025 - AGCM opens PS12968 against Mistral AI
  • 29 July 2025 - Mistral files initial commitments proposal
  • 15 September 2025 - DeepSeek files initial commitments proposal
  • 16 September 2025 - AGCM hears DeepSeek in formal session
  • September 2025 - New research explains why large language models hallucinate
  • 24 September 2025 - AGCM hears Mistral in formal session
  • 29 September 2025 - PS12973 opening notice published in AGCM Bulletin no. 38
  • 1 October 2025 - Nova AI and DeepSeek add first English-language disclaimers to chat interfaces
  • 8 October 2025 - Mistral supplements its commitments proposal
  • 9 October 2025 - AGCM requests AGCom opinion in the DeepSeek case
  • 16 October 2025 - Nova AI (Scaleup) gains access to the PS12973 case file
  • 6 November 2025 - AGCM requests AGCom opinion in the Mistral case
  • 14 November 2025 - AGCom declines to issue opinion in the DeepSeek case
  • 21 November 2025 - DeepSeek files final commitments proposal
  • 28 November 2025 - Nova AI files initial commitments proposal
  • November 2025 - AGCM opens antitrust probe into Meta over WhatsApp AI chatbot exclusion
  • 9 December 2025 - AGCom declines to issue opinion in the Mistral case
  • 16 December 2025 - AGCM accepts DeepSeek commitments, closes PS12942 (decision session)
  • 16 January 2026 - AGCM hears Scaleup (Nova AI) in formal session
  • January 2026 - IAB introduces AI disclosure framework as Gen Z trust in AI ads falls
  • 9 February 2026 - Nova AI supplements its commitments; Mistral updates web disclaimer to new Italian-compatible format
  • 11 February 2026 - AGCM extends Nova AI proceeding deadline to 24 April 2026
  • 17 February 2026 - AGCM accepts Mistral commitments, closes PS12968 (decision session)
  • 3 March 2026 - Nova AI files final consolidated commitments proposal
  • 11 March 2026 - AGCM communicates close of investigative phase to Nova AI
  • 12 March 2026 - AGCM requests AGCom opinion in the Nova AI case
  • 3 April 2026 - AGCom declines to issue opinion in the Nova AI case
  • 21 April 2026 - AGCM accepts Nova AI commitments, closes PS12973 (decision session)
  • 30 April 2026 - AGCM publishes press release announcing all three closures

Summary

Who: Italy's Autorità Garante della Concorrenza e del Mercato (AGCM), with proceedings targeting Hangzhou DeepSeek Artificial Intelligence Co. Ltd and Beijing DeepSeek Artificial Intelligence Co. Ltd (China), Mistral AI SAS (France), and Scaleup Yazılım Hizmetleri Anonim Şirketi operating Nova AI (Turkey).

What: Three separate consumer protection investigations into allegedly insufficient disclosure of AI hallucination risks were closed without formal infringement findings under article 27(7) of the Italian Consumer Code. Each company submitted binding commitments requiring permanent Italian-language disclaimers in chat interfaces and on registration screens, expanded terms of service, and improved accessibility of pre-contractual information. DeepSeek additionally committed to technical investment in hallucination reduction across four identified categories. Nova AI also committed to clarifying that its service aggregates third-party AI models rather than operating a proprietary AI system.

When: Proceedings were opened between April and June 2025. The DeepSeek case closed at the authority session of 16 December 2025, the Mistral case at the session of 17 February 2026, and the Nova AI case at the session of 21 April 2026. The AGCM published a joint press release on 30 April 2026.

Where: The proceedings were conducted by the AGCM in Rome and concern services accessible to Italian consumers via the web and mobile applications. The companies involved are headquartered in China (DeepSeek), France (Mistral), and Turkey (Scaleup / Nova AI).

Why: The AGCM determined that consumers choosing to use AI chatbots make a commercial decision under Italian consumer law, even without monetary payment, and that omitting clear warnings about the possibility of inaccurate or fabricated outputs constitutes a potentially unfair commercial practice under articles 20, 21, and 22 of the Consumer Code. A parallel jurisdictional dispute with AGCom, which claimed oversight under the Digital Services Act, was resolved in the AGCM's favour on the grounds that consumer protection law applies regardless of concurrent DSA obligations, and that LLM chatbots do not qualify as search engines or intermediary services under DSA definitions.
