Workday faces expanded AI discrimination lawsuit

Judge Rita Lin ruled July 29 that Workday must include HiredScore AI features in a discrimination case affecting millions of job applicants nationwide.

Middle-aged Black man receives AI job rejection from Workday platform amid discrimination lawsuit

On July 29, 2025, U.S. District Judge Rita Lin expanded the scope of the high-profile AI bias lawsuit against Workday Inc. to include applicants processed through HiredScore artificial intelligence features. The Northern District of California ruling requires the HR software giant to provide a comprehensive list of customers who enabled HiredScore AI features by August 20, 2025.

The decision marks the latest development in Mobley v. Workday, Inc., a closely watched case that has become a landmark challenge to AI-powered hiring tools. Derek Mobley, an African-American man over 40 who identifies as having anxiety and depression, filed the original complaint on February 21, 2023, alleging systemic discrimination through Workday's AI screening algorithms.

According to court documents filed on July 10, 2025, the parties had disagreed on whether applicants subjected to HiredScore AI features should be included in the collective action. Workday argued that HiredScore was "a separate product, built on a wholly separate technology platform" that was acquired in April 2024, more than a year after the original complaint was filed.

Judge Lin rejected Workday's position, determining that the HiredScore AI features are part of "Workday, Inc.'s job application platform" as defined in the collective certification order from May 16, 2025. The court noted that material differences in scoring algorithms between HiredScore features and Workday's Candidate Skills Match system would be "best addressed at the decertification stage."

Technical details emerge about AI discrimination claims

The lawsuit centers on allegations that Workday's AI-powered tools systematically discriminate against job applicants based on race, age, and disability. Mobley claims that since 2017, he applied for more than 100 positions at companies using Workday's screening technology and was rejected every time.

Court documents reveal telling details about the automated nature of these rejections. In one instance documented in the case, Mobley received a rejection email at 1:50 a.m., less than one hour after submitting his application at 12:55 a.m. Judge Lin accepted this rapid turnaround as evidence of automated decision-making rather than human review.

Workday's AI systems include multiple components. The company's Candidate Skills Match feature compares skills extracted from resumes to job requirements and assigns matching scores. HiredScore, acquired in April 2024, includes two AI features: Spotlight, which matches candidate information to job requisitions, and Fetch, which surfaces internal employees or previously unsuccessful candidates for alternate roles.

According to the joint letter filed on July 16, 2025, these systems operate differently. Spotlight "considers the job title and description, the location of the role and the candidate, minimum years of industry-specific experience, education level and major, and more," while Candidate Skills Match focuses primarily on skills alignment.
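Neither Workday nor HiredScore has published the internals of these scoring systems, and nothing below should be read as their actual method. As a rough sketch of how a matcher of this general shape could blend such factors into a single ranking number, consider the following Python example; every field name, weight, and value in it is hypothetical.

```python
# Illustrative sketch only: Workday has not published how Candidate Skills
# Match or Spotlight compute their scores. Every field name and weight below
# is hypothetical; the point is the general shape of a resume-to-requisition
# scorer that blends several signals into a single ranking number.

def skills_match_score(candidate_skills: set[str], required_skills: set[str]) -> float:
    """Fraction of the employer's required skills found on the resume."""
    if not required_skills:
        return 0.0
    return len(candidate_skills & required_skills) / len(required_skills)

def blended_score(candidate: dict, job: dict) -> float:
    """Loosely mirrors the factor list from the joint letter: title,
    location, and experience signals layered on top of skills alignment."""
    skills = skills_match_score(set(candidate["skills"]), set(job["required_skills"]))
    title = 1.0 if candidate["last_title"] == job["title"] else 0.0
    location = 1.0 if candidate["city"] == job["city"] else 0.0
    experience = min(candidate["years_experience"] / job["min_years"], 1.0)
    # Hypothetical weights; a deployed system would tune these on historical
    # outcomes, which is exactly where plaintiffs argue bias can enter.
    return 0.4 * skills + 0.2 * title + 0.1 * location + 0.3 * experience

candidate = {"skills": {"sql", "python"}, "last_title": "Analyst",
             "city": "Charlotte", "years_experience": 6}
job = {"required_skills": {"sql", "python", "tableau"}, "title": "Analyst",
       "city": "Charlotte", "min_years": 4}
print(f"match score: {blended_score(candidate, job):.2f}")  # match score: 0.87
```

The design point that matters for the litigation is the final line of the scoring function: whatever the weights, the system compresses each candidate into one number that can be used to advance or reject applicants at scale.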

Massive scale of potential impact

The scope of the collective action has expanded dramatically since its preliminary certification in May 2025. Judge Lin's May 16 order granted certification for a nationwide collective of individuals aged 40 and older who applied through Workday's platform since September 24, 2020, and were denied employment recommendations.

Workday operates one of the world's largest HR technology platforms, serving over 11,000 organizations worldwide with millions of job listings processed monthly. The company's own court filings suggest the collective could include "hundreds of millions" of job applicants.

The marketing community has been watching the case closely, as it represents a test case for algorithmic accountability in automated decision-making systems. PPC Land has extensively covered how similar AI bias issues affect advertising and marketing automation platforms, where targeting algorithms can inadvertently discriminate against protected groups.

The case highlights broader concerns about AI systems perpetuating historical biases present in training data. If a company's existing workforce lacks diversity, AI trained on employee data might favor candidates with similar demographic characteristics to current staff members.
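A minimal synthetic sketch makes that mechanism concrete. The example below assumes nothing about Workday's actual models; it trains a plain logistic regression on randomly generated, deliberately skewed "historical hires" and shows the model penalizing a proxy for age even though age itself is never a feature.

```python
# Synthetic demonstration of training-data bias; no real hiring data is used.
# Even with no "age" feature, a model trained on historically skewed hires can
# learn to penalize older applicants through a proxy such as graduation year.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
years_since_grad = rng.integers(0, 40, n).astype(float)  # proxy for age
skill = rng.uniform(0, 1, n)                             # job-relevant signal
# Biased historical labels: past recruiters favored recent graduates.
hired = ((skill > 0.5) & (years_since_grad < 10)).astype(int)

X = np.column_stack([years_since_grad, skill])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# Two equally skilled candidates, 5 versus 30 years out of school:
probs = model.predict_proba([[5.0, 0.9], [30.0, 0.9]])[:, 1]
print(probs.round(3))  # the older candidate's predicted "hire" odds collapse
```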

The Mobley case breaks new ground by pursuing direct liability against an AI technology vendor rather than just the employers who use the tools. On July 12, 2024, Judge Lin denied Workday's motion to dismiss, ruling that the company could potentially be held liable as an "agent" of employers under federal anti-discrimination laws.

This "agent theory" of liability represents a significant expansion of traditional employment discrimination law. The Equal Employment Opportunity Commission filed an amicus brief supporting the novel approach, arguing that AI vendors should face accountability for discriminatory outcomes.

The court distinguished Workday's alleged role from that of "a simple spreadsheet or email tool," suggesting that the degree of automation and decision-making authority was relevant to determining liability. This reasoning could have implications for other AI vendors across industries.

Legal experts note that proving age discrimination in AI-driven hiring remains challenging because algorithmic decisions are often opaque. However, the Mobley case has moved forward by focusing on disparate impact theory rather than requiring proof of intentional discrimination.
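The arithmetic behind a disparate impact showing is straightforward. The numbers below are hypothetical, but the test is the EEOC's real four-fifths rule of thumb from the Uniform Guidelines (29 C.F.R. § 1607.4(D)): a selection rate below 80 percent of the most-favored group's rate is treated as evidence of adverse impact, with no intent required.

```python
# Hypothetical numbers, real arithmetic: the four-fifths rule flags adverse
# impact when one group's selection rate falls below 80% of the rate of the
# most-favored group. Intent to discriminate plays no role in the test.

applicants_under_40, advanced_under_40 = 1000, 200   # 20% advance
applicants_over_40, advanced_over_40 = 1000, 90      # 9% advance

rate_under_40 = advanced_under_40 / applicants_under_40
rate_over_40 = advanced_over_40 / applicants_over_40
impact_ratio = rate_over_40 / rate_under_40

print(f"impact ratio: {impact_ratio:.2f}")  # 0.45, far below the 0.80 threshold
```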

Workday's defense and industry response

Workday has consistently denied the discrimination allegations. In court filings, the company maintains that it "does not screen prospective employees for customers" and that its technology "does not make hiring decisions." A company spokesperson stated that Workday's AI capabilities "look only at the qualifications listed in a candidate's job application and compare them with the qualifications the employer has identified as needed for the job."

The company argues that its AI systems "are not trained to use—or even identify—protected characteristics like race, age, or disability." In March 2025, Workday announced it received two third-party accreditations for its "commitment to developing AI responsibly and transparently."

Industry observers point to Amazon's experience as a cautionary tale. In 2014, Amazon developed a machine-learning hiring tool that showed bias against female candidates because it was trained primarily on male employees' resumes. The company disbanded the project in 2017 after unsuccessful attempts to eliminate the bias.

Recent University of Washington research published in October 2024 found significant racial and gender bias in how three state-of-the-art large language models ranked resumes, with systems favoring white-associated names and male candidates. The study revealed unique patterns of discrimination, including that AI systems "never preferred what are perceived as Black male names to white male names."

Broader implications for automated hiring

The Workday litigation occurs against a backdrop of rapidly expanding AI adoption in recruitment. An estimated 99% of Fortune 500 companies now use some form of automation in their hiring processes, according to recent industry data. Some studies suggest 492 of the Fortune 500 companies used applicant tracking systems in 2024.

The case also unfolds amid shifting federal policy on AI regulation. President Trump's January 2025 executive order, "Removing Barriers to American Leadership in Artificial Intelligence," led the EEOC to remove guidance documents about responsible AI use in hiring from its website. However, existing federal anti-discrimination laws still apply to AI-powered employment decisions.

New York City implemented the first major AI hiring regulation in 2023, requiring bias audits for automated employment decision tools. Other jurisdictions are considering similar measures as concerns about algorithmic discrimination grow.

The legal landscape remains complex. While federal agencies may have reduced their AI oversight focus, private lawsuits under established civil rights laws continue to proceed. The Age Discrimination in Employment Act, which protects workers over 40, permits disparate impact claims without proof of intent to discriminate.

Next steps and timeline

The court has established clear deadlines for moving the case forward. Workday must provide its list of customers who enabled HiredScore AI features by August 20, 2025. If the company can definitively determine that certain customers who enabled the features did not receive scores or screen candidates based on the AI, those customers may be excluded from the list.

The parties continue to work through the discovery process, exchanging information and evidence to prepare for trial. A case management conference was held on July 9, 2025, to discuss the notice plan for potential collective action members.

The class certification schedule has been extended: the motion for class certification is due January 16, 2026, and the opposition and motion to decertify are due March 13, 2026. The class certification hearing is scheduled for June 2, 2026.

The case represents one of the first large-scale tests of AI hiring tools in federal court. Its outcome could establish important precedents for both AI vendor liability and the application of civil rights laws to algorithmic decision-making across industries.

Timeline

  • February 21, 2023: Derek Mobley files original class action complaint against Workday alleging AI discrimination
  • January 19, 2024: Judge grants Workday's motion to dismiss with leave to amend
  • February 20, 2024: Mobley files amended complaint redefining Workday's role under anti-discrimination law
  • April 9, 2024: EEOC files amicus brief supporting plaintiff's novel AI vendor liability theory
  • April 29, 2024: Workday completes acquisition of HiredScore, integrating AI talent orchestration technology
  • July 12, 2024: Judge denies Workday's second motion to dismiss, allowing case to proceed under "agent" theory
  • February 6, 2025: Mobley files motion for conditional certification of collective action
  • May 16, 2025: Judge grants preliminary certification allowing nationwide collective action for age discrimination claims
  • July 9, 2025: Case management conference held regarding notice plan for collective action
  • July 16, 2025: Parties submit joint letter regarding HiredScore scope dispute
  • July 29, 2025: Judge orders expansion of collective to include HiredScore AI features; Workday must provide customer list by August 20, 2025

The AI hiring paradox

Despite promises of efficiency and objectivity, artificial intelligence in recruitment has created a broken system where qualified candidates can't find jobs and employers struggle to identify suitable hires. The technology meant to solve hiring challenges has instead generated new forms of dysfunction that affect millions of job seekers and thousands of companies.

Research indicates that 88% of employers believe they are losing qualified candidates due to AI screening systems that filter out applicants who don't submit "ATS-friendly" resumes with specific keywords and formatting. Meanwhile, 70% of resumes that don't match algorithmic criteria are immediately removed from databases without human review, creating a system where technical resume optimization matters more than actual qualifications.
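The screening mechanism itself is often no more sophisticated than keyword counting. The sketch below uses an entirely hypothetical keyword list and threshold, but it shows how two descriptions of the same experience can fare differently before any human reads them.

```python
# Illustrative only: a crude keyword screen of the kind many ATS pipelines are
# reported to apply before a human sees a resume. The keyword list and the
# pass threshold here are hypothetical.
REQUIRED_KEYWORDS = {"project management", "stakeholder", "agile", "kpi"}

def passes_ats_screen(resume_text: str, min_hits: int = 3) -> bool:
    """Count how many required keywords appear verbatim in the resume."""
    text = resume_text.lower()
    return sum(kw in text for kw in REQUIRED_KEYWORDS) >= min_hits

# The same experience described in different words never reaches a recruiter:
print(passes_ats_screen("Led agile delivery, tracked KPIs with stakeholders"))  # True
print(passes_ats_screen("Ran cross-team projects against quarterly targets"))   # False
```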

The volume problem has worsened rather than improved. Online job platforms have made applying so frictionless that the average job posting now receives 250 or more applications, but only four to six candidates typically receive interviews. This flood of applications has made human recruiters more dependent on AI filtering, creating a vicious cycle where automation becomes necessary to manage the volume that automation itself helped create.

Job seekers report increasingly frustrating experiences with automated systems. More than 92% of applicants never complete their applications when faced with lengthy, repetitive forms across multiple platforms. Those who do complete applications often face immediate rejections from AI systems programmed with impossibly narrow criteria or contradictory requirements that no human candidate could satisfy.

The "black box" nature of AI decision-making compounds these problems. Candidates receive generic rejection emails with no explanation of why they were eliminated, making it impossible to improve future applications. Companies, meanwhile, often don't understand how their own AI tools make decisions, leading to situations where qualified internal candidates are rejected for roles they could easily perform.

AI systems frequently prioritize irrelevant factors over job-relevant skills. University of Washington research documented cases where resume screening tools favored candidates who mentioned "baseball" over those who listed "softball," despite the job having nothing to do with sports. These arbitrary correlations demonstrate how AI can make hiring decisions based on statistically significant but meaningless patterns in training data.

The technology has also created new forms of discrimination that are harder to detect and challenge than traditional bias. Older workers find themselves systematically excluded by algorithms that prioritize recent graduation dates or specific technology keywords. Candidates from non-traditional educational backgrounds or career paths are filtered out by systems trained on narrow definitions of success.

Companies report that AI-selected candidates often lack the soft skills, cultural fit, or practical problem-solving abilities that matter most for job performance. The focus on keyword matching and quantifiable metrics has led to hiring processes that excel at identifying candidates who are good at gaming algorithmic systems rather than those who would excel in the actual roles.

The result is a hiring ecosystem where both sides are dissatisfied. Employers complain about receiving too many unqualified applications while missing strong candidates who don't fit algorithmic profiles. Job seekers spend countless hours optimizing resumes for machines rather than showcasing their actual capabilities to humans. The technology that promised to streamline hiring has instead created additional layers of complexity and frustration for everyone involved.

Industry data shows that companies using AI tools experience 40% lower turnover rates, suggesting the technology may help with retention. However, this statistic may reflect the difficulty of getting hired rather than better job matches—employees may stay longer simply because finding new positions has become more challenging in an AI-dominated job market.

PPC Land explains

AI Discrimination: The practice where artificial intelligence systems produce biased outcomes that unfairly disadvantage certain groups based on protected characteristics like race, age, or disability. In the Workday case, plaintiffs allege that hiring algorithms systematically screen out older applicants, people of color, and individuals with disabilities without human oversight. This form of discrimination is particularly insidious because it can operate at massive scale while appearing objective, making it difficult for affected candidates to identify or challenge the bias.

Disparate Impact Theory: A legal doctrine that allows discrimination claims even when there was no intent to discriminate, focusing instead on whether a policy or practice disproportionately affects protected groups. Under this theory, Mobley doesn't need to prove Workday intentionally designed its AI to discriminate against older workers—only that the system's outcomes have a statistically significant adverse effect on people over 40. This approach has become crucial for AI bias cases because proving intentional algorithmic discrimination is often impossible given the opacity of machine learning systems.

Collective Action: A legal procedure similar to a class action lawsuit but typically used for employment cases under laws like the Age Discrimination in Employment Act. In Mobley v. Workday, the court certified a nationwide collective representing potentially hundreds of millions of job applicants over age 40 who were denied employment recommendations through Workday's platform since September 2020. Unlike class actions where members are automatically included, collective actions require individuals to "opt in" to participate in the lawsuit.

HiredScore AI Features: Workday's acquired talent orchestration technology that includes two main components: Spotlight, which matches candidate information to job requirements, and Fetch, which surfaces internal employees or previously unsuccessful candidates for alternate roles. Workday acquired HiredScore in April 2024, and the court's July 29 ruling expanded the lawsuit scope to include applicants processed through these systems, despite Workday's argument that they operate on different technology platforms than its original AI tools.

Algorithmic Bias: The systematic prejudice embedded in artificial intelligence systems that leads to unfair treatment of certain groups. This bias often stems from training data that reflects historical discrimination or from design choices made by developers. In hiring contexts, algorithmic bias can manifest when AI systems favor resumes with certain keywords, penalize gaps in employment history, or make assumptions based on demographic proxies like zip codes or school names that correlate with protected characteristics.

Employment Agency Liability: A legal theory under which companies that help employers find workers can be held directly responsible for discrimination, even if they don't make final hiring decisions. Judge Lin rejected this approach for Workday but allowed the case to proceed under an "agent" theory, where Workday could be liable for discrimination as a representative of the employers using its tools. This distinction is crucial because it expands potential liability beyond traditional employer-employee relationships to include technology vendors.

Candidate Skills Match: Workday's original AI feature that compares skills extracted from candidate resumes with job requirements and assigns matching scores to suggest how well applicants fit specific positions. The system analyzes resume content to identify relevant skills and experience, then ranks candidates based on alignment with employer-specified criteria. Plaintiffs argue this seemingly objective process actually perpetuates bias by favoring certain types of experience or educational backgrounds that correlate with demographic characteristics.

Automated Resume Screening: The use of artificial intelligence to review, rank, and filter job applications without human oversight. These systems can process thousands of applications in minutes, identifying keywords, assessing qualifications, and making preliminary decisions about which candidates advance in the hiring process. The Mobley case highlights concerns about these tools when rapid rejection times—such as receiving a denial at 1:50 a.m. within an hour of applying—suggest purely algorithmic decision-making without human review.

Protected Characteristics: Legally protected attributes under federal anti-discrimination laws, including race, color, religion, sex, national origin, age (for those 40 and older), and disability status. The Workday lawsuit alleges that AI hiring tools discriminate based on multiple protected characteristics simultaneously, creating what researchers call "intersectional bias" where the effects on individuals with multiple protected identities may be different from the sum of individual biases. This complexity makes detecting and addressing AI discrimination particularly challenging.

Training Data Bias: The phenomenon where artificial intelligence systems inherit prejudices present in the historical data used to train them. If past hiring data reflects discriminatory practices—such as predominantly male workforces in certain roles—AI systems may learn to replicate these patterns by favoring similar candidates. This creates a feedback loop where historical discrimination becomes embedded in automated systems, potentially amplifying bias rather than eliminating it. The issue is particularly problematic because training data bias can be subtle and difficult to detect without comprehensive auditing.

Summary of the article

Who: Derek Mobley and a growing collective of job applicants over age 40 are suing Workday Inc., a major HR software company serving over 11,000 organizations worldwide.

What: The lawsuit alleges that Workday's AI-powered hiring tools systematically discriminate against applicants based on race, age, and disability, with the court recently expanding the case to include HiredScore AI features acquired in April 2024.

When: The original complaint was filed February 21, 2023, with the most recent significant ruling on July 29, 2025, expanding the lawsuit's scope to potentially affect hundreds of millions of job applicants.

Where: The case is proceeding in the U.S. District Court for the Northern District of California before Judge Rita Lin, with implications for employers and job seekers nationwide.

Why: The case challenges whether AI vendors can be held directly liable for discriminatory outcomes and represents a landmark test of how civil rights laws apply to automated hiring systems that allegedly perpetuate historical workplace bias.