EU: Artificial Intelligence Act finalized; will impact HR policies 


May 21, 2024

 

The European Union’s (EU) Artificial Intelligence (AI) Act was finalized on May 21, 2024, establishing a risk-based approach to AI regulation. The Act will enter into force 20 days after its publication in the EU’s Official Journal, and its measures will be phased in, generally applying within two years. For example, prohibited AI systems will be banned within six months, and compliance for “low,” “medium” and “high risk” AI systems will be required within 24 months.

This summary focuses on aspects of the Act relevant to employment/human resource (HR) matters. According to the Act, “[t]hroughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such [AI] systems may perpetuate historical patterns of discrimination.” Other workplace decisions, such as the allocation of tasks based on individual behavior, personal traits or characteristics, and the monitoring or evaluation of individual workers are also covered.

Highlights

  • The Act includes a comprehensive definition of AI systems applicable to several different types of applications.
  • General Purpose AI (GPAI) Models cover foundation AI models, such as OpenAI’s ChatGPT, that serve a variety of purposes and can be deployed directly or integrated into other AI systems. The Act lists the obligations for providers of GPAI models and classifies them according to their systemic risk.
  • The Act categorizes AI systems into four tiers of risk according to the sensitivity of the data involved, and how the AI system is used. Stricter measures will apply to AI applications with higher risks, and AI applications that pose particular threats to ethics, privacy and fundamental rights will be prohibited. Two of the four risk tiers — “prohibited” and “high risk” AI systems — have particular relevance to certain HR management processes. 
  • “Prohibited” AI applications include biometric categorization based on sensitive characteristics (such as racial or ethnic origin, political opinions, trade union membership, religious beliefs or sexual orientation), social scoring that evaluates and scores individuals based on their social behavior or personal characteristics, and systems that infer emotions in workplaces or educational institutions. 
  • “High risk” AI systems are divided into two groups (Annex II Systems and Annex III Systems), and each group is subject to specific governance and technical measures. Annex III Systems are AI systems used for specific purposes, such as employment and HR systems, employment evaluation or recruitment systems, biometric identification systems, and educational and vocational training or evaluation systems. The use of “high risk” AI systems will be strictly regulated and will be subject to assessment by the EU’s AI Office and registration in an EU database. The Act allows the European Commission (commission) and AI Office to develop practical classification guidance within 18 months of the Act’s entry into force.
  • AI systems that process personal data will be subject to both the Act and the EU’s General Data Protection Regulation.
  • The Act will have extra-territorial effect, impacting non-EU providers of AI systems or models that are placed into the EU, organizations that put AI systems into service in the EU, and situations where an AI system that is not located in the EU is used within the region.
  • Enforcement will be provided by an AI Office (established in February 2024) and an AI Board that will be established at the EU level, in addition to surveillance authorities in each EU member state. Member states must designate one national supervisory authority to participate in the European AI Board.
  • Financial penalties will apply to breaches involving “prohibited” AI — up to 7% of global annual turnover or €35 million, whichever is greater. The penalty for failure to comply with the requirements for high-risk systems will be up to 3% of global annual turnover, and 1.5% (or €7.5 million) for supplying incorrect, incomplete or misleading information.
  • The commission will launch the AI Pact to foster early implementation of the Act. The pact will help companies prepare by encouraging them to voluntarily communicate the processes and practices they will use to support AI compliance.
