

California’s Proposed AI Employment Decision Rules: October 2024 Update

In recent years, California has been at the forefront of artificial intelligence (AI) and automated decision-making systems (ADS) regulation, particularly in employment practices. The California Civil Rights Division (CRD) has been actively refining its proposed rules to address concerns about the use of these systems in hiring, promotion, and other employment decisions. In response to public testimony during a July hearing, the CRD issued revised rules in October 2024 that provide clearer definitions of key terms such as “automated decision system,” “agent,” and “employment agency.”

Revised definitions in focus

One of the most significant updates in the CRD’s October revision of the proposed rules revolves around definitions of key terms that directly impact employers who rely on AI and ADS in their employment practices.

Automated Decision System (ADS)

In the revised rules, the definition of an ADS has been expanded to include a “calculation process that makes a decision or facilitates human decision-making about an employment benefit.” This broad definition encompasses systems that use AI, machine learning, algorithms, statistics, and other data processing tools.

While the original definition provided a general framework, the October version goes further in specifying the activities in which these systems might engage, such as screening, assessment, classification and making recommendations. The rules make it clear that the ADS can facilitate decisions about hiring, promotions, wages, benefits and other employment-related matters. However, ambiguity remains around the term “facilitating” human decision-making, raising questions about whether systems that assist but do not fully automate decisions fall under the rules.

For example, an AI tool used to verify educational credentials may flag discrepancies between an applicant’s claimed degree and the degree they actually earned. Although the tool does not make the final decision, it can influence the human decision maker. This raises questions as to whether such a tool, which aids but does not supplant the decision-making process, qualifies as a regulated ADS. Absent further clarification, employers should recognize that automated systems used for even seemingly minor employment decisions may fall within this scope.

The proposed rule specifically excludes technologies such as word processing and spreadsheet software, website hosting, and cybersecurity tools. However, there is still room for interpretation as to whether basic automation processes, such as simple if/then workflows, fall within the scope of these regulations.

Agent

The definition of “agent” has also been notably clarified. An agent is now defined as any person who acts on behalf of an employer to perform a function traditionally performed by the employer, a concept central to the revised definition. This includes key functions such as candidate recruitment, selection, hiring, promotion, and decisions related to salary, benefits, or leave. By framing it this way, the revised rule refines the scope of who can be considered an agent, ensuring that third parties who step in to perform these traditionally employer-driven activities, such as vendors who use AI to screen resumes or otherwise assist with hiring, are subject to the same compliance standards.

This expanded definition emphasizes that even when employers outsource administrative functions, such as managing payroll or benefits, these providers can still be considered agents if they perform tasks typically handled by the employer. Employers should therefore closely examine their partnerships with third-party providers, particularly those using AI or machine learning, to ensure compliance with the revised definition. This reinforces the need for employers to recognize that delegating traditionally employer-led functions to third parties does not relieve them of regulatory obligations.

Employment agency

The definition of an employment agency has been refined to include any entity that undertakes, for compensation, services to identify, select, and procure job applicants or employees. The revised rules emphasize “screening” as a crucial step in procuring applicants, positioning it as a key function of employment agencies.

This revision emphasizes the distinction between screening resumes for specific keywords or criteria and the broader candidate selection process, aligning more closely with the concept of procuring applicants. However, the proposed rules lack a clear definition of what “screening” entails, creating potential uncertainty for employers, particularly those who rely on third-party providers for background checks. Without explicit guidance on whether background checks fall within the definition of screening, employers may have difficulty determining their compliance obligations.

Considerations for Employers Using AI in Criminal Background Checks

The CRD’s revised proposed rules place a strong emphasis on preventing discriminatory practices when using AI and ADS in employment decisions, including the sensitive area of criminal background checks. Employers must ensure that the use of these systems adheres to the same legal standards as human decision-making, in particular the requirement that criminal records can only be considered after a conditional offer of employment has been made.

The rules require that any ADS used in criminal background checks operate transparently. Employers must provide applicants with the reports and decision-making criteria used by the system, ensuring compliance with anti-discrimination laws. In addition, employers must periodically conduct anti-bias tests of these systems and keep records of these tests, along with any data used, for at least four years.

The focus on transparency and fairness aligns with broader trends such as the White House Blueprint for an AI Bill of Rights and the EEOC’s guidance on algorithmic fairness. Employers should be diligent in auditing their AI systems to avoid disparate impacts on protected classes, especially in decisions involving criminal records. They must ensure that the criteria used by an ADS are job-related and consistent with business necessity, and consider less discriminatory alternatives where available.

CRD is seeking public input

The California Civil Rights Division continues to refine its proposed rules on automated employment decision systems, and now is the time for employers to get involved in the process. The CRD is accepting written comments on the latest changes until November 18, 2024. This is a critical opportunity for employers to help ensure that AI and ADS regulations are clear, practical, and reflective of modern employment practices.

Comments can be emailed to [email protected]. For more information and to review the proposed changes, visit the CRD’s webpage at calcivilrights.ca.gov/civilrightscouncil.

Parting thoughts

October’s revisions to the CRD’s proposed rules on automated decision-making systems represent a significant step in California’s efforts to regulate AI in employment practices. For employers, this means taking a closer look at how AI and ADS are used, particularly in recruitment and criminal background checks. The expanded definitions of ADS, agent, and employment agency require careful consideration of both the use of technology and relationships with third-party providers. As these rules continue to evolve, staying informed and proactively assessing compliance will be critical to navigating California’s future employment AI regulations.