Most California employers using AI-powered hiring tools have never heard of the California Automated Decision Systems (ADS) regulations. That is a problem: the California Civil Rights Council rules have been in effect since October 1, 2025, and the penalties for non-compliance flow through the Fair Employment and Housing Act (FEHA). Individual candidates can file civil rights claims against your organization.
This is not a future risk. It is a current one.
Who this applies to: Any employer operating in California that uses automated software to screen, score, rank, or otherwise evaluate job candidates. This includes AI resume screening tools, applicant tracking systems with automated scoring, and any tool that produces a recommendation about a candidate without a human making that determination independently.
What the Regulation Actually Requires
The California Civil Rights Council regulations on automated employment decision tools establish four core requirements. These are not suggestions. They are obligations that attach the moment you deploy an AI hiring tool with California-based applicants in scope.
The Four Requirements

- Recordkeeping: retain a log of every candidate the automated tool evaluated, producible on request.
- Bias testing: test the tool for disparate outcomes across race, gender, age, and other protected classes.
- Documentation of results: keep the test results showing that no protected class was systematically disadvantaged.
- Candidate notice: candidates must know that an automated tool is part of their evaluation.
The Most Common Mistake California Employers Make
The assumption that vendor liability covers employer liability. It does not.
If you subscribe to a resume screening platform and that platform produces biased outputs, the FEHA claim lands on your organization, not the vendor. The vendor may have contractual indemnification obligations, but that is a civil matter between you and the vendor. The regulatory obligation to comply with California ADS requirements belongs to the employer.
This means your compliance posture depends entirely on whether you can answer these questions:
- Can you produce a log of every candidate your AI tool evaluated in the last 12 months?
- Do you have documentation showing the tool was tested for bias across race, gender, age, and other protected classes?
- Can you show the test results confirming no protected class was systematically disadvantaged?
- Do your candidates know an AI tool is being used to evaluate them?
If the answer to any of those is no, you have a compliance gap that needs to be closed before a candidate or regulator asks.
Read the regulation directly: The California Civil Rights Council publishes the full ADS regulations at calcivilrights.ca.gov. If you are an HR professional or legal counsel, read the primary source rather than relying on summaries.
What the 4/5ths Rule Means for Your Hiring Tool
The 4/5ths rule, established by the EEOC Uniform Guidelines on Employee Selection Procedures, states that if the selection rate for any protected group is less than 80% of the rate for the group with the highest selection rate, there is evidence of adverse impact. This is the standard California regulators use when evaluating whether a hiring tool is producing discriminatory outcomes.
In practice: if your AI tool recommends candidates from one demographic group for interviews at a rate of 60%, and recommends candidates from another protected group at a rate of 40%, the resulting ratio of 67% (40 ÷ 60) falls below the 4/5ths threshold and creates legal exposure. Falling below 4/5ths does not automatically prove discrimination, but it shifts the burden to the employer to demonstrate that the tool is job-related and consistent with business necessity.
If you cannot produce that documentation, you lose that argument.
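The arithmetic above can be sketched as a quick check. The group names and counts here are illustrative only, not drawn from any real candidate pool:

```python
# Minimal sketch of the EEOC 4/5ths (adverse impact) check.
# Each group maps to (candidates recommended, total candidates evaluated).

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants the tool recommended."""
    return selected / applicants

def adverse_impact_ratios(pools: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(sel, total) for g, (sel, total) in pools.items()}
    highest = max(rates.values())
    return {g: rate / highest for g, rate in rates.items()}

# The worked example from the text: 60% vs. 40% selection rates.
pools = {"group_a": (60, 100), "group_b": (40, 100)}
ratios = adverse_impact_ratios(pools)
flagged = {g: r < 0.8 for g, r in ratios.items()}  # below the 4/5ths threshold
```

Here group_b's ratio is roughly 0.67, which trips the flag, matching the worked example in the text.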
Why This Is Harder Than It Looks
Most AI hiring tools were not built with California ADS compliance in mind. They generate a score and a recommendation. They do not maintain audit logs in a format that can be produced on regulatory request. They do not run weekly bias tests. They do not give you an adverse impact report.
That means even if the underlying AI is not producing biased outputs, you have no way to prove it. And a tool that cannot prove it is not biased is, from a legal standpoint, indistinguishable from one that is.
The compliance infrastructure needs to be built into the tool from the start. Retroactive audits of tools without logging capabilities are expensive, incomplete, and often insufficient as evidence of ongoing compliance.
How TrueScan HR Addresses CA ADS Compliance
TrueScan HR was built with California ADS compliance as a design requirement. The compliance infrastructure runs at the platform level, in the background, without changing anything a hiring manager sees.
Audit logging captures every scan: timestamp, user ID, anonymized resume hash, job description hash, score, and recommendation. This log is retained and queryable on demand. If a candidate or regulator requests documentation of how their evaluation was handled, that record exists and can be produced.
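As a rough sketch of what such a record could look like: the fields follow the list above, but the class and function names here are hypothetical, not TrueScan HR's actual schema or API.

```python
# Hypothetical audit record for one resume scan. Resume and job
# description contents are stored only as one-way hashes.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

def anonymized_hash(text: str) -> str:
    """One-way hash so document contents never sit in the log."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

@dataclass
class ScanAuditRecord:
    timestamp: str
    user_id: str
    resume_hash: str
    job_description_hash: str
    score: float
    recommendation: str

def log_scan(user_id: str, resume: str, job_description: str,
             score: float, recommendation: str) -> ScanAuditRecord:
    return ScanAuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        user_id=user_id,
        resume_hash=anonymized_hash(resume),
        job_description_hash=anonymized_hash(job_description),
        score=score,
        recommendation=recommendation,
    )
```

The point of the hash fields is that the log can prove which documents were evaluated without retaining candidate PII in the audit trail itself.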
Weekly automated bias testing runs 21 paired-resume comparisons every Monday across 9 protected dimensions: race and ethnicity, gender, age, pronouns, veteran status, disability, national origin, caregiving history, and intersectional combinations. Results are stored and retained for audit.
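A paired-resume comparison of this kind can be sketched as follows. The stand-in scorer and the name-swap mechanism are assumptions for illustration; the platform's actual test harness is not public:

```python
# Sketch of one paired-resume bias test: score two resumes that are
# identical except for a demographic signal, and flag large score gaps.

def score_resume(resume: str) -> float:
    # Placeholder scorer: counts job-relevant keyword hits. A real
    # deployment would call the production scoring model here.
    keywords = {"python", "sql", "kubernetes"}
    return sum(1.0 for word in resume.lower().split() if word in keywords)

def paired_bias_test(template: str, variant_a: str, variant_b: str,
                     tolerance: float = 0.5) -> bool:
    """True if the two demographic variants score within tolerance."""
    a = score_resume(template.replace("{NAME}", variant_a))
    b = score_resume(template.replace("{NAME}", variant_b))
    return abs(a - b) <= tolerance
```

Running a battery of such pairs on a schedule, and retaining the results, is what turns "we believe the tool is fair" into evidence.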
Proxy signal scrubbing strips signals that correlate with protected class membership but are not job-relevant, including graduation year as an age proxy and caregiving employment gaps as a gender proxy. This runs in the background without affecting the visible scan result.
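In spirit, a graduation-year scrubber might look like the following. The pattern and the redaction token are assumptions for illustration, not the product's actual implementation:

```python
# Sketch of proxy-signal scrubbing: redact graduation years (an age
# proxy) before the text reaches the scoring model.
import re

GRAD_YEAR = re.compile(
    r"\b(class of|graduated|grad\.?)\s*(in\s*)?(19|20)\d{2}\b",
    re.IGNORECASE,
)

def scrub_graduation_year(text: str) -> str:
    return GRAD_YEAR.sub("[year redacted]", text)
```

The same idea extends to other proxies, such as normalizing caregiving-related employment gaps so they do not function as a gender signal.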
Adverse impact reporting on Enterprise plans generates an EEOC 4/5ths analysis across your candidate pool on demand. This is the first document California regulators typically request when investigating a hiring bias complaint.
If you use AI to screen candidates in California and your current tool cannot produce this documentation, see how TrueScan HR approaches compliance or read the California Civil Rights Council regulations directly.