ICO publishes guidance on profiling tools for online safety compliance
Data protection watchdog clarifies requirements for trust and safety systems ahead of UK regulatory changes.

The Information Commissioner's Office published comprehensive guidance on July 30, 2025, addressing data protection obligations for organizations deploying profiling tools in trust and safety systems. The 12-section document establishes compliance frameworks for user-to-user services implementing behavioral analysis technologies to meet Online Safety Act requirements.
According to the ICO, profiling tools analyze user characteristics, behavior, interests or preferences through automated processing of personal data. These systems support trust and safety operations including detection of grooming behavior, terrorism, violent extremism, bullying, harassment, fraud, scams, spamming, and fake accounts. The guidance applies to both organizations developing internal profiling systems and third-party providers offering trust and safety technologies.
The regulator defines profiling tools as systems that assign ratings or scores to users, such as risk scores or reputation indicators showing likelihood of terms of service violations or bot account probability. Tools may retain and continuously update analysis outputs or produce temporary evaluations for specific decision points.
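By way of illustration only, here is a minimal sketch of how a service might represent those two kinds of output, distinguishing retained scores from point-in-time evaluations. The structure and names are hypothetical, not drawn from the guidance:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ProfilingOutput:
    """Hypothetical record of a profiling tool's output for one user."""
    user_id: str
    score_type: str          # e.g. "bot_probability", "tos_violation_risk"
    score: float             # 0.0-1.0; a statistical estimate, not a fact
    retained: bool           # True: stored and continuously updated;
                             # False: temporary, for one decision point
    generated_at: datetime

# A retained reputation score, periodically refreshed
reputation = ProfilingOutput("u-123", "tos_violation_risk", 0.12,
                             retained=True,
                             generated_at=datetime.now(timezone.utc))

# A one-off evaluation used only at signup, then discarded
signup_check = ProfilingOutput("u-456", "bot_probability", 0.87,
                               retained=False,
                               generated_at=datetime.now(timezone.utc))
```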
Technical requirements establish processing boundaries
Organizations must identify lawful processing bases before deploying profiling tools, with legal obligation and legitimate interests representing the most relevant options for trust and safety applications. According to the guidance, legal obligation applies when processing supports compliance with Online Safety Act duties, provided the personal information processing remains necessary and proportionate to achieve compliance.
The legitimate interests basis requires organizations to conduct three-part assessments identifying their legitimate interest, demonstrating processing necessity, and balancing interests against user rights and freedoms. Organizations using profiling for terms of service enforcement or Online Safety Act compliance may establish legitimate interests, but must demonstrate compelling justification for privacy-intrusive methods.
Contract serves as a lawful basis when profiling tools are integral to core service delivery and represent proportionate methods for achieving organizational purposes. However, consent becomes relevant primarily when organizations require user permission for storage and access technologies under Privacy and Electronic Communications Regulations.
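A hedged sketch of how an organization might document its chosen lawful basis per purpose, including the three-part legitimate interests test described above. Field names and structure are illustrative assumptions, not the ICO's specification:

```python
from dataclasses import dataclass

@dataclass
class LegitimateInterestsAssessment:
    """Hypothetical three-part test record (structure is illustrative)."""
    purpose_test: str      # what is the legitimate interest?
    necessity_test: str    # is profiling necessary to achieve it?
    balancing_test: str    # do user rights and freedoms override it?

@dataclass
class ProcessingRecord:
    purpose: str
    lawful_basis: str      # "legal_obligation", "legitimate_interests",
                           # "contract", or "consent"
    lia: LegitimateInterestsAssessment | None = None

record = ProcessingRecord(
    purpose="Detect fake accounts to enforce terms of service",
    lawful_basis="legitimate_interests",
    lia=LegitimateInterestsAssessment(
        purpose_test="Keeping the service free of fake accounts",
        necessity_test="Manual review alone cannot operate at this scale",
        balancing_test="Intrusion limited to account metadata; documented",
    ),
)
```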
Data protection impact assessments become mandatory
The ICO requires data protection impact assessments before deploying profiling tools due to high-risk processing characteristics. These systems collect information at significant scale, potentially leading to unwarranted intrusion and loss of control over personal data. They make decisions with significant effects on users, potentially causing discrimination, reputational damage, or financial harm through loss of income or employment.
Organizations must document processing nature, scope, context and purposes while assessing necessity and proportionality of planned operations. Risk identification must consider likelihood and severity of potential harm to user rights and freedoms, with mitigation measures documented for each identified risk.
According to the guidance, "Given your processing is likely to be high risk, you must carry out a data protection impact assessment (DPIA) prior to processing personal information in your profiling tools." The requirement extends to all profiling deployments regardless of organizational size or user volume.
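As a rough illustration of the documentation a DPIA involves, here is a hypothetical skeleton capturing nature, scope, context, purposes, and per-risk mitigations. All field names are assumptions for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str       # harm to user rights and freedoms
    likelihood: str        # e.g. "remote", "possible", "probable"
    severity: str          # e.g. "minimal", "significant", "severe"
    mitigation: str        # documented measure addressing this risk

@dataclass
class DPIA:
    """Illustrative DPIA skeleton; field names are hypothetical."""
    processing_nature: str
    processing_scope: str
    processing_context: str
    purposes: str
    necessity_and_proportionality: str
    risks: list[Risk] = field(default_factory=list)

dpia = DPIA(
    processing_nature="Automated scoring of account behavior",
    processing_scope="All active accounts, scored daily",
    processing_context="Trust and safety moderation pipeline",
    purposes="Detect fake accounts under Online Safety Act duties",
    necessity_and_proportionality="Less intrusive alternatives assessed",
)
dpia.risks.append(Risk(
    description="False positive leads to wrongful account suspension",
    likelihood="possible", severity="significant",
    mitigation="Human review before suspension; appeal route provided",
))
```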
Privacy by design principles shape system architecture
Data protection by design and default requirements mandate privacy considerations throughout profiling tool development and operation. Organizations must implement appropriate technical and organizational measures designed to effectively implement data protection principles while integrating necessary safeguards into processing operations.
The guidance states organizations must "make data protection an essential component of the core functionality of your profiling tools" while processing only the personal information necessary for specified purposes. Users must be given information that makes it easy to understand how their personal information is used in profiling systems.
For children's data processing, the ICO requires conformance with the Children's Code, which recommends switching profiling off by default unless compelling reasons justify activation. Examples include profiling to meet legal or regulatory requirements, prevent child sexual exploitation or abuse, or support age assurance functions.
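A minimal sketch of what default-off profiling for child accounts could look like in configuration, assuming hypothetical setting names. The permitted purposes mirror the examples the ICO gives:

```python
# Hypothetical defaults reflecting the Children's Code recommendation that
# profiling is off by default unless a compelling reason justifies it.
PROFILING_DEFAULTS = {
    "adult_account": {"profiling_enabled": True},
    "child_account": {
        "profiling_enabled": False,   # off by default
        "permitted_purposes": [       # exceptions named in the guidance
            "legal_or_regulatory_requirement",
            "prevent_child_sexual_exploitation_or_abuse",
            "age_assurance",
        ],
    },
}

def profiling_allowed(account_type: str, purpose: str) -> bool:
    """Allow profiling for children only for a permitted purpose."""
    settings = PROFILING_DEFAULTS[account_type]
    if settings["profiling_enabled"]:
        return True
    return purpose in settings.get("permitted_purposes", [])

assert profiling_allowed("child_account", "age_assurance")
assert not profiling_allowed("child_account", "ad_personalization")
```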
Transparency obligations address user understanding
Organizations must inform users about profiling tool deployment through privacy notices meeting UK GDPR Article 13 requirements. Information must include processing confirmation, purposes, lawful bases, retention periods, third-party sharing arrangements, user rights, and automated decision-making details affecting users.
The guidance emphasizes additional transparency considerations given profiling tools' intrusive nature. Organizations should consider providing information about decision types made through tools and automated technologies utilized, balanced against risks of malicious users circumventing detection systems.
According to the document, "Transparency is closely linked to fairness. You are unlikely to be treating users fairly if you do not tell them about how you use their personal information." Organizations must use varied communication techniques including dedicated website areas, signup information, user dashboards, and moderation action notifications.
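To make the notice contents concrete, here is a small illustrative checklist that flags required disclosures missing from a published notice. The item wording paraphrases the requirements above; the function itself is hypothetical:

```python
# Hypothetical checklist of the Article 13 items listed above, plus the
# communication surfaces the article describes.
NOTICE_ITEMS = [
    "confirmation that profiling takes place",
    "purposes of the processing",
    "lawful bases relied on",
    "retention periods",
    "third-party sharing arrangements",
    "user rights and how to exercise them",
    "details of automated decision-making affecting users",
]

COMMUNICATION_SURFACES = [
    "dedicated website privacy area",
    "information shown at signup",
    "user dashboard",
    "notifications attached to moderation actions",
]

def notice_gaps(published_items: set[str]) -> list[str]:
    """Return required items missing from a published notice."""
    return [item for item in NOTICE_ITEMS if item not in published_items]

print(notice_gaps({"purposes of the processing", "retention periods"}))
```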
Special category and criminal offense data require additional conditions
Processing special category information through profiling tools requires Article 9 conditions beyond lawful bases. This applies when systems use race, ethnic origin, political opinions, religious beliefs, trade union membership, genetic data, biometric identification data, health data, sex life or sexual orientation information as inputs or generate such inferences.
The substantial public interest condition under Schedule 1 of the Data Protection Act 2018 provides the most relevant framework for trust and safety applications. Organizations may rely on the conditions for preventing or detecting unlawful acts, meeting regulatory requirements, or safeguarding children and individuals at risk, each of which typically requires an appropriate policy document.
Criminal offense information processing requires Article 10 conditions and Schedule 1 compliance. This encompasses suspicion or allegations of criminal activity alongside formal convictions and offenses. Organizations must demonstrate processing occurs under official authority control or domestic law authorization through specific Schedule 1 conditions.
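A hedged sketch of a pre-processing guard that flags inputs or inferences falling into special categories so an Article 9 condition can be recorded first. The category names and the check itself are illustrative, not a compliance mechanism from the guidance:

```python
# Hypothetical guard: before a field is used as a profiling input or an
# inference is stored, check whether it is special category data, which
# would require an Article 9 condition (and usually a policy document).
SPECIAL_CATEGORIES = {
    "race", "ethnic_origin", "political_opinions", "religious_beliefs",
    "trade_union_membership", "genetic_data", "biometric_id_data",
    "health_data", "sex_life", "sexual_orientation",
}

def requires_article_9(field_name: str,
                       documented_conditions: set[str]) -> bool:
    """True if the field needs an Article 9 condition not yet recorded."""
    return (field_name in SPECIAL_CATEGORIES
            and field_name not in documented_conditions)

# Flags a health-data inference with no documented condition
assert requires_article_9("health_data", documented_conditions=set())
```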
Accuracy principles balance statistical precision with factual correctness
The guidance distinguishes between data protection accuracy requirements and statistical accuracy concepts. Organizations must take reasonable steps to ensure that personal information used and generated by profiling tools is factually correct and, where necessary, kept up to date.
Statistical accuracy concerns how frequently AI systems produce correct predictions measured against properly labeled test data. Organizations must consider whether statistical accuracy is adequate for their intended purposes; 100% accuracy is not required, but they should evaluate how often incorrect assessments may occur and what impact they would have on users.
According to the document, profiling tool outputs often represent predictions about behavioral likelihood rather than factual user information. Organizations should ensure records indicate outputs constitute "statistically informed guesses, rather than facts" to prevent misinterpretation as definitive user characteristics.
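One illustrative way to encode that distinction is to mark every output as an inference with an attached confidence level, as in this hypothetical wrapper:

```python
from dataclasses import dataclass

@dataclass
class ScoredInference:
    """Illustrative wrapper marking an output as an estimate, not a fact."""
    user_id: str
    label: str                 # e.g. "likely_spam_account"
    probability: float         # statistical confidence, not ground truth
    is_inference: bool = True  # records the guidance's distinction that
                               # outputs are "statistically informed
                               # guesses, rather than facts"

    def describe(self) -> str:
        return (f"user {self.user_id}: {self.label} "
                f"(estimated probability {self.probability:.0%}; "
                f"an inference, not a verified fact)")

print(ScoredInference("u-789", "likely_spam_account", 0.91).describe())
```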
User rights create additional compliance obligations
Profiling tool deployment triggers enhanced user rights under data protection legislation. Responses to subject access requests must provide confirmation of personal information processing, copies of input and output data, and information about moderation decisions generated through profiling analysis.
The guidance notes that responses should exclude non-personal commercial or confidential information and address situations where data relates to more than one person. Organizations must consider whether they can comply with a subject access request without disclosing third-party information, or whether consent or reasonable disclosure circumstances apply.
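A simplified sketch of assembling such a response while withholding records that would disclose another person's information without consent. Field names and logic are assumptions for illustration, not the ICO's specification:

```python
# Hypothetical assembly of a subject access response for a profiled user:
# include their inputs, outputs, and moderation decisions, and withhold
# records that would reveal another person's information absent consent.
def build_sar_response(user_id: str, records: list[dict]) -> dict:
    included, withheld = [], 0
    for rec in records:
        if rec.get("mentions_third_party") and not rec.get("third_party_consent"):
            withheld += 1   # multi-person data: assess before disclosing
            continue
        included.append(rec)
    return {
        "subject": user_id,
        "processing_confirmed": True,
        "records": included,    # input data, outputs, moderation decisions
        "withheld_third_party_records": withheld,
    }

records = [
    {"type": "profiling_output", "score": 0.12},
    {"type": "report_about_user", "mentions_third_party": True},
]
print(build_sar_response("u-123", records))
```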
Rectification requests require organizations to reconsider the accuracy of information even when they previously validated system functioning. Users objecting to legitimate interests processing may compel cessation unless the organization demonstrates compelling legitimate grounds that override the user's interests, rights and freedoms, or the processing supports the establishment, exercise or defense of legal claims.
Article 22 restrictions apply to automated decision-making
Solely automated decisions producing legal or similarly significant user effects require Article 22 compliance when profiling supports such determinations. The guidance defines solely automated decisions as those "taken without any meaningful human involvement" while legal or similarly significant effects include financial impact, exclusion, discrimination, or substantial influence on user behavior or choices.
Organizations relying on domestic law authorization, contract necessity, or explicit consent exceptions must implement appropriate safeguards protecting user rights, freedoms and legitimate interests. Domestic law authorization includes Online Safety Act compliance where solely automated decision-making represents the most appropriate implementation method.
Required safeguards include enabling human intervention, allowing users to express viewpoints, and providing decision contestation mechanisms. Online Safety Act complaints processes may satisfy these requirements when properly implemented alongside data protection law obligations.
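A minimal sketch of how those safeguards might gate enforcement, assuming hypothetical types: decisions with significant effects queue for human review, and contested decisions route to a complaints process:

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    user_id: str
    action: str                        # e.g. "account_suspension"
    significant_effect: bool           # legal or similarly significant?
    user_statement: str | None = None  # user's expressed viewpoint
    human_reviewed: bool = False
    contested: bool = False

def apply_decision(decision: AutomatedDecision) -> str:
    # Significant effects require a route to meaningful human involvement.
    if decision.significant_effect and not decision.human_reviewed:
        return "queued_for_human_review"
    if decision.contested:
        return "escalated_to_complaints_process"
    return "enforced"

d = AutomatedDecision("u-123", "account_suspension", significant_effect=True)
print(apply_decision(d))   # queued_for_human_review
```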
Marketing technology implications drive industry preparation
The guidance affects programmatic advertising platforms, customer data platforms, and marketing automation systems utilizing systematic monitoring, profiling, or large-scale personal data processing. Organizations operating comprehensive analytics solutions tracking user behavior across digital touchpoints likely require data protection officer appointments under emerging regulatory frameworks.
Recent coverage on PPC Land examined ICO's consent or pay model guidance, demonstrating the regulator's comprehensive approach to balancing commercial viability with privacy protection. The profiling guidance extends this framework to trust and safety technologies increasingly deployed across digital platforms.
Marketing professionals must prepare for enhanced regulatory visibility into data processing practices as UK authorities continue coordinating with European counterparts despite post-Brexit regulatory divergence. Cross-border operations require navigation of both UK reporting requirements and European transparency obligations under varying jurisdictional frameworks.
Timeline
- July 30, 2025: ICO publishes profiling tools for online safety guidance
- June 19, 2025: Data (Use and Access) Act becomes law, with review implications for the guidance
- January 23, 2025: ICO confirms consent or pay models for publishers meeting data protection standards
- July 2021: ICO investigates real-time bidding compliance with GDPR requirements
Key Terms Explained
Data Protection Impact Assessments (DPIAs): These mandatory evaluations identify and mitigate privacy risks before deploying profiling tools. Organizations must document processing nature, scope, context, and purposes while assessing necessity and proportionality. The ICO requires DPIAs due to profiling's high-risk characteristics including large-scale data collection, significant user effects, and potential discrimination risks.
Online Safety Act: This UK legislation establishes duties for user-to-user services to protect users from illegal content and harmful behaviors. The Act creates legal obligations that may justify profiling tool deployment under data protection law's legal obligation basis, provided processing remains necessary and proportionate for compliance.
Personal Information: Any data relating to identified or identifiable individuals, including user-generated content, account details, activity data, and behavioral inferences. Profiling tools process extensive personal information across input, analysis, output, and application stages, requiring careful consideration of data minimization and accuracy principles.
Legitimate Interests: A lawful processing basis requiring three-part assessment of organizational interests, processing necessity, and balancing against user rights. Organizations using profiling for terms of service enforcement or safety compliance may establish legitimate interests but must demonstrate compelling justification for privacy-intrusive methods.
Special Category Information: Sensitive personal data including race, political opinions, religious beliefs, health data, and sexual orientation requiring additional Article 9 processing conditions. Profiling tools may process this information directly as inputs or generate such inferences, necessitating substantial public interest conditions and appropriate policy documents.
Automated Decision-Making: Processing that produces decisions without meaningful human involvement, potentially triggering Article 22 restrictions when creating legal or similarly significant effects. Organizations must implement safeguards including human intervention rights, viewpoint expression opportunities, and decision contestation mechanisms.
Trust and Safety Systems: Organizational processes protecting users from harmful experiences through detection and prevention of grooming, terrorism, fraud, harassment, and other malicious activities. These systems increasingly rely on profiling tools to analyze user behavior patterns and characteristics for proactive threat identification.
Profiling Tools: Technologies using automated processing to evaluate user characteristics, behavior, interests, or preferences for predictive analysis. These systems assign risk scores, reputation indicators, or behavioral classifications to support moderation decisions and safety measures across digital platforms.
Processing: Any operation performed on personal data including collection, analysis, storage, sharing, and decision-making. Profiling involves extensive processing across multiple stages from input data gathering through output application, requiring comprehensive compliance with data protection principles.
User Rights: Individual entitlements under data protection law including access, rectification, objection, and protection from solely automated decision-making. Organizations deploying profiling tools must facilitate these rights through appropriate response procedures, accuracy reconsideration processes, and decision contestation mechanisms.
Summary
Who: The Information Commissioner's Office published guidance for organizations deploying profiling tools in trust and safety systems, including user-to-user services and third-party providers.
What: Comprehensive guidance addressing data protection obligations for profiling tools that analyze user characteristics, behavior, interests or preferences through automated processing to support Online Safety Act compliance.
When: Published July 30, 2025, with review scheduled due to Data (Use and Access) Act implementation on June 19, 2025.
Where: UK organizations using profiling for trust and safety purposes, with implications for cross-border operations requiring coordination between UK and European privacy frameworks.
Why: Organizations require clear compliance frameworks for behavioral analysis technologies detecting grooming, terrorism, fraud, and other harmful online activities while protecting user privacy rights and freedoms.