The UK government today published a national consultation asking citizens, parents, children, and technology companies to help shape the future of children's online safety. Titled "Growing up in the online world: a national conversation," the document - reference CP 1528 - was presented to Parliament by the Secretary of State for Science, Innovation and Technology, Liz Kendall, by Command of His Majesty. The consultation closes on 26 May 2026, giving participants nearly three months to respond.
The scope is broad. It addresses whether social media platforms should face a legally mandated minimum age, whether the digital age of consent should be raised beyond 13, what design features on platforms should be restricted for younger users, how AI chatbots should be treated, and whether school mobile phone guidance should become statutory. The geographic scope is the United Kingdom as a whole; internet policy remains a reserved matter, while education policy is devolved.
Why this matters now
The timing follows months of political pressure. According to the consultation document, on 16 February 2026, the Prime Minister announced that the government would take new legal powers to lay the groundwork for immediate action once the consultation concludes - powers specifically designed to allow it to act "within months, not years" of receiving evidence, rather than waiting for new primary legislation to pass through Parliament.
Liz Kendall was direct in the consultation's foreword. "My very first act as Secretary of State was to strengthen the UK's Online Safety Act," she wrote, adding that the government is "taking new legal powers so that we can act immediately to implement the findings of this consultation." The emphasis on speed is notable; online safety consultations have historically been slow to translate into enforceable rules.
This is also not the first step in UK regulation of this space. The Online Safety Act 2023 already required platforms to implement age verification for the most harmful content - including pornography and suicide material - and gave Ofcom enforcement powers that came fully into force in 2025. Since then, Ofcom has opened investigations into companies responsible for over 90 services. This consultation is positioned as the next layer on top of that existing framework.
The social media minimum age question
The most politically prominent question in the consultation is whether a legal minimum age should be set for social media access. There is no current minimum age in law, although most platforms set their own threshold at 13. The consultation asks whether that threshold should be mandated at 13, raised to 16, or set somewhere in between.
The numbers in the underlying evidence are striking. According to the consultation document, Ofcom's 2025 Parents and Children: Media Use and Attitudes Report found that 81% of 10-12-year-olds use at least one social media app or site, despite most platforms setting a voluntary minimum age of 13. Meanwhile, 86% of the same age group had their own account on a social media, messaging or video sharing platform.
Over 180,000 families have signed the Smartphone Free Childhood Parent Pact, committing to delay smartphones until at least 14 and social media until 16 - a grassroots movement that has grown substantially in only two years. The teachers' union NASUWT has called on the government to tighten legislation so that technology firms face fines for allowing children under 16 onto their platforms.
Yet the consultation also documents countervailing arguments. Civil society organisations including the NSPCC, the Molly Rose Foundation, and the Internet Watch Foundation have raised concerns that a hard ban could create a "cliff edge" for older teens, drive children to less well-regulated platforms, and make children less likely to seek adult help if they encounter harmful content - particularly if they have taken steps to circumvent a minimum age limit.
Australia serves as the main international reference point. Since 10 December 2025, age-restricted social media platforms in Australia have been required to take "reasonable steps" to prevent under-16s from having accounts. Platforms that fail to comply face penalties of up to A$49.5 million. The Australian eSafety Commissioner has formally assessed 10 services as age-restricted platforms and a further 10 as outside scope. The consultation document notes that parents in Australia have reportedly been widely supportive of the approach.
The government is explicit about one aspect: "No parent, carer or child will be fined or punished if these safeguards are circumvented by children."
The digital age of consent
Separate from the social media minimum age question is the age of digital consent - currently set at 13 in the UK under UK GDPR. This is the age at which a child can independently consent to an online platform processing their personal data for purposes such as profiling for targeted advertising.
The consultation examines whether this threshold should be raised. Several European countries have already set it higher: 14 in Austria, Italy and Spain; 15 in France; 16 in Germany and Ireland. For children below the threshold, platforms are required to make reasonable efforts to obtain consent from someone with parental responsibility - for example through authentication via a registered credit card or a referral to a parent's email.
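To make the patchwork concrete, here is a minimal sketch of how a service might encode these thresholds when deciding whether parental consent is needed. The ages are those cited in the consultation; the structure, function names and the fallback default of 16 (the GDPR ceiling where a member state has set no lower national threshold) are illustrative assumptions, not anything the document prescribes.

```python
# Digital ages of consent cited in the consultation. Below the threshold,
# a platform must make reasonable efforts to obtain consent from someone
# with parental responsibility before processing a child's personal data
# for purposes such as profiling for targeted advertising.
DIGITAL_AGE_OF_CONSENT = {
    "UK": 13,
    "AT": 14,  # Austria
    "IT": 14,  # Italy
    "ES": 14,  # Spain
    "FR": 15,  # France
    "DE": 16,  # Germany
    "IE": 16,  # Ireland
}

def requires_parental_consent(age: int, country: str) -> bool:
    """True if consent must come from someone with parental responsibility.

    Unknown jurisdictions fall back to 16, the GDPR ceiling where no
    lower national threshold has been set.
    """
    return age < DIGITAL_AGE_OF_CONSENT.get(country, 16)

# A 14-year-old can consent independently in the UK but not in Germany.
assert requires_parental_consent(14, "UK") is False
assert requires_parental_consent(14, "DE") is True
```

If the UK threshold were raised, only the table entry would change; the operational burden sits in the "reasonable efforts" step - credit card authentication or a parental email referral - that the consultation describes.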
The document cites research suggesting that children's understanding of how their personal data is collected, tracked, profiled and monetised develops gradually across adolescence, and that "there is no clear developmental milestone at age 13 that would justify the current threshold." Adding a parental check for 13-15-year-olds could, the document argues, reduce the risk of personal data being used in ways children in that age group might not understand.
However, the consultation also notes risks in raising the age. It could limit access to useful services such as educational technology that rely on consent to process children's data. It could also widen existing digital inequalities, since parents with less confidence in digital processes might inadvertently restrict their children's access more than intended.
Platform design features under scrutiny
A substantial portion of the consultation deals with specific design features - what the document calls "persuasive" or "compulsive" mechanisms - that may extend children's time online. These include infinite scrolling, autoplay, affirmation features such as likes and comments, alerts and push notifications, and content recommendation algorithms.
The evidence cited is detailed. Ofcom's Children's Passive Online Measurement tool found that children aged 8-14 spend nearly three hours a day online across smartphone, tablet and computer. Research also found that over 60% of 8-14-year-olds used their smartphone, tablet or computer between 11pm and 5am at least once over a four-week period, with associated sleep disruption. Additionally, 33% of 8-17-year-olds agreed that their own screentime was too high.
A 2025 Danish study by the Competition and Consumer Authority provides a concrete data point on interventions. Examining 269 teenagers aged 13-17 across TikTok, Snapchat and Instagram, the study used "nudge" techniques - including forcing a 6-second pause before accessing a platform and asking children to declare how much time they planned to spend on it. The result was a 31% to 36% drop in usage, equivalent to around an hour a day less for a child who had previously spent 3 hours a day on social media.
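The headline figure is easy to verify. A quick check of the arithmetic, taking the 3-hours-a-day baseline cited in the study:

```python
# Sanity check on the Danish study's figures: a 31-36% reduction applied
# to a baseline of 3 hours (180 minutes) of daily social media use.
baseline_minutes = 3 * 60

for reduction in (0.31, 0.36):
    saved = baseline_minutes * reduction
    print(f"{reduction:.0%} less usage -> roughly {saved:.0f} minutes saved per day")

# 31% less usage -> roughly 56 minutes saved per day
# 36% less usage -> roughly 65 minutes saved per day
```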
The consultation asks whether such features should be age-restricted for children aged 13-15, whether mandatory daily screentime limits and nighttime curfews should apply, and whether nudge techniques should be required of platforms rather than offered voluntarily. TikTok and Instagram already have voluntary screentime limit features; the question is whether these should become mandatory and harder for children to dismiss.
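How a nighttime curfew would be enforced is an implementation question for platforms, but the core check is simple. The sketch below uses the 11pm-5am overnight window from Ofcom's research as its example; the function and its defaults are illustrative assumptions, not a feature of any platform or a proposal in the consultation.

```python
from datetime import time

def in_curfew(now: time, start: time = time(23, 0), end: time = time(5, 0)) -> bool:
    """Return True if `now` falls inside a nighttime curfew window.

    Handles windows that wrap past midnight, such as 11pm-5am - the
    overnight period highlighted in the research cited above.
    """
    if start <= end:
        return start <= now < end
    # Wraps past midnight: inside if after the start OR before the end.
    return now >= start or now < end

assert in_curfew(time(23, 30)) is True   # 11:30pm - blocked
assert in_curfew(time(4, 59)) is True    # 4:59am - blocked
assert in_curfew(time(7, 0)) is False    # 7:00am - allowed
```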
The consultation also asks about specific high-risk functionalities. Livestreaming is one focus. The NSPCC's 2018 survey of 40,000 children aged 7-16 found that 24% had livestreamed, and 6% of those received requests to remove or change their clothes. TikTok already requires users to be at least 18 to host a live stream. The consultation asks whether further restrictions - on disappearing messages, location sharing, stranger-pairing features, and the ability to send and receive images containing nudity - should be mandated below certain ages.
AI chatbots: a new regulatory frontier
The consultation dedicates a full chapter to AI chatbots, an area the document acknowledges is relatively new regulatory territory. According to Nominet polling in 2026, 58% of young people aged 8-17 said that AI makes their lives better, and 48% said AI is an important part of their everyday life. A separate Internet Matters study found that 64% of 9-17-year-olds use AI chatbots, with 23% having turned to chatbots for advice, and one in eight - 12% - saying they use them because they have no one else to talk to.
The consultation frames chatbot risks around both content and interaction design. As the document notes, "emotional dependence is an emerging harm related to chatbots and one which children may be especially susceptible to." Research on US users suggests that long sessions and persistent, anthropomorphic design can lead children to form what psychologists call parasocial attachments to chatbots - sometimes treating them as friends or intimate partners.
Under the existing Online Safety Act, AI chatbots fall within scope only if they enable user-to-user sharing or search the live internet. Those that simply draw from an underlying model without such features are currently outside the Act's main protections - except where they can generate pornographic content, in which case highly effective age assurance is required. The government has now committed to introducing new powers to bring out-of-scope chatbots under duties to protect users from illegal content.
The parallels with regulatory activity in other jurisdictions are substantial. PPC Land has documented the FTC's orders issued in September 2025 to seven AI chatbot companies - including Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap and X.AI - requiring detailed reports on child safety practices. Attorneys General from 44 US jurisdictions sent a formal letter in August 2025 to 12 major AI companies demanding enhanced child protection. Brazil's government demanded that Meta remove sexual chatbots with child-like personas. The UK consultation arrives, therefore, within a globally coordinated, if still fragmented, regulatory moment.
Age assurance: the technical challenge
Any restriction on children's access to social media or platform features depends on reliable age assurance - the ability to verify or estimate a user's age. The consultation is candid about the limitations. Ofcom currently estimates that 7.8 million UK visitors per day access adult services with age assurance already in place. Current techniques are generally effective at distinguishing adults from under-18s. However, fewer solutions exist to distinguish, for example, a 14-year-old from a 16-year-old, and facial age estimation technologies perform less accurately at younger ages.
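The difficulty is easiest to see as a decision problem. Facial age estimation returns an estimate with an error margin, and users whose estimate falls within that margin of the threshold must be escalated to a stronger check such as document verification. The sketch below is purely illustrative - the margin values are assumptions chosen to make the point, not figures from the consultation or from Ofcom.

```python
def age_gate(estimated_age: float, threshold: int, margin: float) -> str:
    """Decide access from a facial age estimate with an error margin.

    Illustrative logic only: `margin` stands in for the estimator's
    typical error, which is larger at younger ages.
    """
    if estimated_age >= threshold + margin:
        return "allow"      # comfortably above the threshold
    if estimated_age < threshold - margin:
        return "deny"       # comfortably below the threshold
    return "escalate"       # too close to call: require hard verification

# Distinguishing adults from under-18s tolerates a wide margin...
print(age_gate(25, 18, margin=3))   # allow
# ...but separating a 14-year-old from a 16-year-old does not: estimates
# within a couple of years of the threshold all land in the grey zone.
print(age_gate(15, 16, margin=2))   # escalate
```

With an 18+ threshold, a few years of error still leaves most adults clearly on one side; with a threshold of 16 and younger faces, far more users land in the grey zone and fall back to heavier verification - precisely the gap the consultation highlights.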
The VPN question is also addressed at length. According to the consultation document, VPN usage more than doubled in the UK following the rollout of highly effective age assurance requirements, rising from around 650,000 daily users before 25 July 2025 to a peak of over 1.4 million in mid-August 2025 - data PPC Land has covered directly in its reporting on the Online Safety Act's implementation. By the end of 2025 it had declined to around 980,000 daily users. Internet Matters found that 8% of children had used a VPN in the past twelve months, with 66% of those using it to protect their personal data.
The age verification debate has also played out internationally. Following UK implementation, the EU announced plans in mid-2025 for its own age verification system, and the FTC in the US published an enforcement policy in February 2026 giving conditional protection to platforms that collect children's data solely for age verification purposes.
The UK consultation asks whether government could improve the interoperability of age checks across services - including device-level options so protections are simpler to apply and easier for parents to trust. The Australian Age Assurance Technology Trial, conducted by UK-based company Age Check Certification Scheme, found that "while there is no one-size-fits-all solution, age assurance can be done in Australia privately, efficiently and effectively" - though it also highlighted accuracy challenges for younger age groups.
Mobile phones in schools
The consultation's third major chapter concerns enforcement of mobile phone policies in schools. According to data cited in the document, 99.8% of primary schools and 90% of secondary schools had a mobile phone policy in place. Despite this, 58% of secondary school pupils reported that mobile devices were used in lessons when they were not supposed to be, according to Department for Education data. NASUWT data shows that over 50% of secondary school teachers cite mobile phone distraction as one of their most significant day-to-day behavioural concerns.
Simply having a mobile phone in proximity can draw pupils' attention away from learning, and once distracted, pupils can take up to 20 minutes to refocus, according to research cited in the document. The government has already published strengthened guidance intended to make all schools mobile phone-free environments, and since April 2025 Ofsted has examined mobile phone policies during inspections. The consultation now asks whether the current non-statutory guidance should be made statutory - giving schools a legal duty to follow it unless there are good reasons not to.
Screentime guidance and pilots
Running in parallel with the consultation, the government has announced several live pilots testing specific interventions with 13-15-year-olds, including a social media ban, curfews, and defined daily limits. These will inform decisions by showing implementation challenges and real-world impacts on children's sleep, mood and physical activity.
The Department for Education will separately publish a call for evidence on screentime guidance for 5-16-year-olds. This builds on guidance already being developed for under-5s, which was commissioned in January 2026 with publication planned for April. A major screentime trial, co-led by Dr Lewer and Professor Orben, will involve around 4,000 students from ten Bradford secondary schools across years 8, 9 and 10. The trial launches in spring and summer 2026, with data analysis expected to be completed by early summer 2027.
The evidence base on mental health impacts remains mixed, the document acknowledges. A 2025 consortium review by UK academics led by Professor Amy Orben found "a lack of high quality causal evidence linking children's mental health and wellbeing and their use of digital technologies" but noted that "this doesn't mean a causal link doesn't exist," pointing to associative evidence. The UK Chief Medical Officers' 2019 study similarly found associations between screen-based activities and increased risk of anxiety or depression.
What it means for the marketing community
The consultation carries significant implications for advertisers and marketing professionals. Any minimum age requirements for social media would directly affect the audiences available on those platforms, as well as the data that platforms can legally process for targeting purposes. Raising the digital age of consent from 13 would reduce the pool of users for whom consent-based data processing - including personalised advertising - is available without parental authorisation.
The consultation addresses this connection in its discussion of "addiction" and compulsive design. One proposed intervention is restricting the ability of services to advertise to children, or age-restricting features that enable in-service purchases such as shops or loot-boxes. The document notes that business models built around keeping users online for longer are a central concern: "thinking about these questions through the lens of the business model can be helpful."
Meta's tightening of teen account restrictions from April 2025 - including mandatory blurring of images suspected of containing nudity in direct messages, and restrictions on Instagram Live for under-16s - reflects how platforms have proactively adjusted ahead of formal regulation. Those changes already affected how brands can interact with younger demographics across Meta's platforms. Mandatory rules going further would create substantially greater constraints.
Bluesky's age verification implementation for UK users in July 2025 and X's age assurance measures from the same period show how compliance obligations have already been cascading through the platform landscape. For marketing professionals, the UK consultation signals that the next phase of regulation will not stop at content - it will reach into the structural features that platforms use to drive engagement, the data pipelines that power advertising, and the design choices that determine how long and how intensively children use these services.
Timeline
- 26 October 2023 - UK Online Safety Act receives Royal Assent, establishing the foundation for children's online safety regulation.
- 17 January 2025 - Age verification duties for Part 5 services (pornography platforms) take effect under the Online Safety Act.
- 8 April 2025 - Meta introduces additional Instagram Teen Account protections, restricting Live streaming for under-16s and making nudity-image blurring mandatory in direct messages.
- 10 July 2025 - Bluesky announces UK age verification implementation for users under 18 in response to Online Safety Act obligations.
- 25 July 2025 - UK Online Safety Act's age assurance requirements take full effect; Proton VPN records a 1,400% surge in UK signups within hours.
- 27 July 2025 - EU announces plans to follow the UK with its own age verification system by the end of 2026.
- 3 August 2025 - X implements age assurance measures following Online Safety Act compliance requirements, drawing criticism for placing verification behind a premium paywall.
- 25 August 2025 - 44 US Attorneys General sign a formal letter to 12 major AI companies demanding enhanced child protection measures.
- 10 September 2025 - FTC issues special report orders to seven AI chatbot companies, including Alphabet, Meta, OpenAI and Snap, requiring details on child safety practices.
- 10 December 2025 - Australia's minimum age restrictions for social media take effect, requiring platforms to prevent under-16s from having accounts.
- 5 February 2026 - UK ICO fines MediaLab £247,590 for failing to implement age assurance measures on Imgur.
- 16 February 2026 - UK Prime Minister announces new legal powers to enable fast implementation of consultation findings.
- 25 February 2026 - FTC publishes an enforcement policy statement giving conditional protection to platforms collecting children's data solely for age verification.
- 2 March 2026 - UK Department for Science, Innovation and Technology publishes "Growing up in the online world: a national conversation" (CP 1528), opening the formal consultation period.
- 26 May 2026 - Consultation closes; the government has committed to swift action on the findings.
Summary
Who: The UK Department for Science, Innovation and Technology, led by Secretary of State Liz Kendall, presented the consultation to Parliament. It is addressed to the general public, including parents, children, civil society organisations, technology companies, educators and researchers across the United Kingdom.
What: A national public consultation - document reference CP 1528 - covering five core areas: understanding how children use technology; interventions for safer, more positive experiences (including potential social media minimum ages, feature restrictions and curfews); effective compliance and enforcement; preparing children for a digital future; and supporting families. The consultation poses over 50 specific questions and also references live pilots of restrictions with 13-15-year-olds.
When: Published on 2 March 2026. The consultation is open until 26 May 2026. The government has committed to acting on its findings within months, supported by new legal powers announced by the Prime Minister on 16 February 2026.
Where: The geographic scope is the United Kingdom. Internet policy is a reserved matter. Responses can be submitted online at the SmartSurvey platform or via email to [email protected].
Why: The government argues that the Online Safety Act 2023, while establishing a strong baseline, does not go far enough to address how platform design, features, and business models affect children's daily lives. The consultation is positioned as the next stage of online regulation - moving beyond illegal and harmful content to the structural and commercial factors that shape how children experience digital environments. Internationally, Australia has already implemented a social media ban for under-16s, and regulatory pressure from the US, EU and other jurisdictions has mounted, particularly around AI chatbots and age verification. The government wants to gather evidence before acting, but has made clear it intends to act quickly.