- posted: Mar. 26, 2026
- Employment Law, Discrimination
Artificial intelligence tools have become increasingly popular among employers seeking efficient ways to manage high volumes of job applications. These technologies promise to streamline recruitment, but they also carry significant legal risk, particularly the risk of disparate impact discrimination. Although vendors of AI-driven solutions may claim their products are bias-free, employers remain legally accountable for any discriminatory effects. The law does not shield employers from liability simply because discrimination was caused by a vendor’s algorithm rather than by intentional human decision-making.
Automated recruiting tools can unintentionally amplify bias in several ways:
Data-driven bias occurs when algorithms are trained on historical hiring data, which may reflect past preferences for certain groups, thus perpetuating existing inequalities.
Proxy variables such as ZIP code, college attended or unexplained gaps in employment may appear neutral but are often closely correlated with protected characteristics like race, ethnicity or socioeconomic status; a simple audit sketch follows this list.
Automated scoring systems that analyze an applicant’s tone, facial expressions or speech patterns risk disadvantaging candidates with disabilities, neurodivergent individuals or those with limited English proficiency.
The tendency to over-rely on automation can lead employers to treat algorithmic output as objective, even when they lack a meaningful understanding of how the model makes its recommendations.
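To make the proxy-variable concern concrete, here is a minimal Python sketch of the kind of first-pass audit an employer might run on its own applicant-pool data. Everything in it, including the ZIP codes, group labels and counts, is invented for illustration: it simply tabulates how group composition varies across a facially neutral feature, which is the basic signal that the feature could stand in for a protected characteristic.

```python
# Minimal sketch of a proxy-variable audit. It tabulates how the
# composition of protected groups varies across a facially neutral
# feature (a hypothetical ZIP code). All data below is invented.

from collections import Counter

# Hypothetical audit sample: (zip_code, protected_group) pairs.
applicants = [
    ("94566", "group_a"), ("94566", "group_a"), ("94566", "group_b"),
    ("94566", "group_a"), ("94601", "group_b"), ("94601", "group_b"),
    ("94601", "group_b"), ("94601", "group_a"),
]

# Count group membership within each ZIP code.
by_zip = {}
for zip_code, group in applicants:
    by_zip.setdefault(zip_code, Counter())[group] += 1

# A sharp skew in group shares by ZIP means a model using ZIP can
# recover the protected characteristic it was never shown directly.
for zip_code, counts in sorted(by_zip.items()):
    total = sum(counts.values())
    shares = {g: round(n / total, 2) for g, n in sorted(counts.items())}
    print(zip_code, shares)
```

A skew like this is only a screening signal; larger samples and formal statistical testing are needed before drawing any conclusion about a given feature.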
California law offers robust protections against discriminatory hiring practices. Under the Fair Employment and Housing Act (FEHA), a seemingly neutral practice disproportionately impacting a protected group is enough to trigger scrutiny. FEHA prohibits both intentional and unintentional discrimination, requiring employers to ensure their screening tools do not have a disparate impact on protected groups. The state’s Civil Rights Department (CRD) is increasingly focused on the risks of algorithmic discrimination and automated decision-making.
The California Privacy Rights Act (CPRA) further mandates transparency and places limits on fully automated hiring decisions. Employers must also uphold obligations to provide reasonable accommodations under both the ADA and FEHA, particularly when hiring tools measure attributes such as speech, movement or cognitive functions. Emerging local ordinances and developing federal guidance emphasize transparency, fairness and the ability to audit algorithmic tools.
Employers can take the following practical steps aimed at minimizing legal exposure:
Conduct a pre-deployment impact assessment to determine whether an AI tool disproportionately screens out protected groups; a minimal sketch of one such check appears after this list.
Demand transparency from vendors, including requesting validation studies, data sources and clear explanations of the algorithm’s functioning.
Review and update job-related criteria regularly to ensure the tool only measures skills and qualifications relevant to the position.
Implement human oversight so automated scores do not dictate hiring decisions.
Provide reasonable accommodations, such as alternative assessments for applicants with disabilities.
Thoroughly document all audits, vendor communications and internal decision-making processes.
Train HR and hiring managers to critically interpret algorithmic recommendations and avoid overreliance on automated tools.
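As one illustration of what a pre-deployment impact assessment can look like, the following Python sketch applies the four-fifths (80%) rule from the EEOC’s Uniform Guidelines on Employee Selection Procedures to hypothetical screening counts. The group names and numbers are assumptions for demonstration only; a real assessment would use the employer’s actual applicant-flow data.

```python
# Minimal sketch of a four-fifths (80%) rule check drawn from the
# EEOC Uniform Guidelines. All group names and counts below are
# hypothetical, for illustration only.

def adverse_impact_ratios(outcomes):
    """outcomes maps each group name to (applicants, selected)."""
    rates = {g: sel / apps for g, (apps, sel) in outcomes.items() if apps > 0}
    benchmark = max(rates.values())  # highest selection rate among groups
    return {g: rate / benchmark for g, rate in rates.items()}

# Hypothetical results from an AI resume-screening tool.
results = {
    "Group A": (200, 80),  # 40% advance past the screen
    "Group B": (150, 42),  # 28% advance past the screen
}

for group, ratio in adverse_impact_ratios(results).items():
    flag = "potential adverse impact" if ratio < 0.8 else "within guideline"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

An impact ratio below 0.8 does not by itself establish unlawful discrimination, but it is the conventional threshold for flagging a selection procedure for closer review.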
While AI holds the promise of making hiring more efficient, this benefit is secured only when paired with strong human oversight and a rigorous compliance framework. By proactively addressing both legal obligations and ethical considerations and seeking guidance from experienced employment counsel, employers can support equitable hiring practices and protect themselves from liability.
Garcia & Gurney, A Law Corporation in Pleasanton, California guides employers throughout Alameda and Contra Costa counties in all aspects of federal and state legal compliance. Call 925-468-0400 or contact us online to schedule a consultation.