Governance & Compliance

EU AI Act and HR: What Applies to AI in Personnel Decisions from August 2026

HR AI is high-risk under the EU AI Act. What this means, which obligations apply, and how the Decision Layer meets the requirements architecturally.

Gosign GmbH · 6 min read

The Classification: HR AI is High-Risk

The EU AI Act classifies AI systems used in employment, worker management, and access to self-employment as high-risk (Annex III, No. 4). This specifically covers:

- AI systems for recruiting and candidate selection
- AI systems that influence promotion, termination, task assignment, or performance monitoring
- AI systems that affect working conditions, including salary adjustments, classifications, and shift planning

In short: almost every AI agent that prepares, supports, or makes decisions in HR processes falls under the high-risk category.

The Deadlines

Since February 2025, prohibitions on unacceptable AI practices have been in force – including social scoring and manipulative techniques. From August 2026, the full obligations for high-risk AI systems take effect. The transition period is not generous: companies that are not building governance structures today will not be compliant in August 2026.

What Is Specifically Required – and How the Decision Layer Fulfils It

The following requirements apply to every operator of a high-risk AI system in the HR domain:

Article 9 – Risk Management System: The EU AI Act requires a continuous risk management system that identifies, assesses, and mitigates risks. In the Decision Layer, this is implemented through Confidence Routing: every agent decision is automatically evaluated by confidence and risk category. High risk or low confidence leads to escalation to a human. Thresholds are configurable and documented.
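The routing logic described above can be sketched in a few lines. This is an illustrative sketch, not the actual Decision Layer API: the class, field names, and threshold values are assumptions chosen to mirror the article's description.

```python
from dataclasses import dataclass

@dataclass
class AgentDecision:
    confidence: float   # model confidence in [0, 1]
    risk_category: str  # e.g. "low", "medium", "high"

# Configurable, documented thresholds (illustrative values)
CONFIDENCE_THRESHOLD = 0.85
HIGH_RISK_CATEGORIES = {"high"}

def route(decision: AgentDecision) -> str:
    """Return 'auto' for autonomous execution, 'human' for escalation."""
    if decision.risk_category in HIGH_RISK_CATEGORIES:
        return "human"  # high risk always escalates to a human
    if decision.confidence < CONFIDENCE_THRESHOLD:
        return "human"  # low confidence escalates to a human
    return "auto"

# A medium-risk decision with low confidence is escalated
print(route(AgentDecision(confidence=0.72, risk_category="medium")))  # → human
```

Because the thresholds live in configuration rather than in the agent, they can be documented and audited independently of any model change.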

Article 10 – Data Governance: Versioned rule sets in the Decision Layer ensure that the data basis of every decision is traceable. Collective agreements, works council agreements (Betriebsvereinbarungen), and compliance rules have versions, validity dates, and scopes. During an audit, it is traceable which rule set in which version applied at the time of the decision.
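Resolving "which rule set applied at the time of the decision" amounts to a temporal lookup over versioned records. The following is a minimal sketch under assumed field names; the Decision Layer's actual data model is not public.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RuleSet:
    name: str        # e.g. a collective agreement or works council agreement
    version: str
    valid_from: date # inclusive
    valid_to: date   # exclusive
    scope: str       # e.g. a plant or business unit

def ruleset_at(rulesets: list[RuleSet], name: str, when: date) -> RuleSet:
    """Return the rule set version that was valid at decision time."""
    for rs in rulesets:
        if rs.name == name and rs.valid_from <= when < rs.valid_to:
            return rs
    raise LookupError(f"no version of {name} valid on {when}")

history = [
    RuleSet("shift-plan-agreement", "1.0", date(2024, 1, 1), date(2025, 7, 1), "plant A"),
    RuleSet("shift-plan-agreement", "2.0", date(2025, 7, 1), date(2099, 1, 1), "plant A"),
]

# An audit of a decision made in March 2025 resolves to version 1.0
print(ruleset_at(history, "shift-plan-agreement", date(2025, 3, 12)).version)  # → 1.0
```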

Article 12 – Record-Keeping Obligations: The audit trail in the Decision Layer generates a complete, immutable data record for every decision: input, model, rule set, confidence, routing decision, result, timestamp. Automatically, not compiled after the fact.
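One common way to make such a record immutable in practice is a hash chain: each record carries the hash of its predecessor, so any later edit breaks the chain. The sketch below is an assumption about how this could be built, with the record fields taken from the article (input, model, rule set, confidence, routing, result, timestamp); it is not the Decision Layer's actual format.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only, hash-chained decision log (illustrative)."""

    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64  # genesis link

    def append(self, **fields) -> dict:
        record = {
            **fields,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._records.append(record)
        self._last_hash = record["hash"]
        return record

    def verify(self) -> bool:
        """Recompute the chain; any tampering breaks a link."""
        prev = "0" * 64
        for rec in self._records:
            if rec["prev_hash"] != prev:
                return False
            body = {k: v for k, v in rec.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if rec["hash"] != hashlib.sha256(payload).hexdigest():
                return False
            prev = rec["hash"]
        return True

trail = AuditTrail()
trail.append(input="salary adjustment request", model="llm-x",
             ruleset="pay-scale v2.0", confidence=0.91,
             routing="auto", result="approved")
print(trail.verify())  # → True
```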

Article 13 – Transparency: Every agent decision is traceable in the Auditor Portal. Works councils, data protection officers, and auditors can view the decision path. No black box.

Article 14 – Human Oversight: Human-in-the-Loop is an architectural principle in the Decision Layer, not an optional setting. For defined decision types – discrimination potential, co-determination topics, value thresholds – the architecture enforces human review. An agent cannot bypass this review.
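"Architectural principle, not an optional setting" can be read as: the review guard sits in the execution path itself, so no agent-side flag can skip it. A minimal sketch, with decision types and the value threshold invented for illustration:

```python
# Decision types that always require human review (illustrative set)
MANDATORY_REVIEW_TYPES = {"termination", "promotion", "co-determination"}
VALUE_THRESHOLD_EUR = 10_000

class HumanReviewRequired(Exception):
    """Raised when execution is attempted without the required approval."""

def execute(decision_type: str, value_eur: float, human_approved: bool = False) -> str:
    needs_review = (decision_type in MANDATORY_REVIEW_TYPES
                    or value_eur >= VALUE_THRESHOLD_EUR)
    if needs_review and not human_approved:
        # The guard runs inside execution, so the agent cannot bypass it.
        raise HumanReviewRequired(decision_type)
    return f"executed: {decision_type}"

print(execute("shift-swap", value_eur=0))   # low-impact: runs autonomously
try:
    execute("termination", value_eur=0)      # always escalated
except HumanReviewRequired as e:
    print("escalated to human:", e)
```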

Article 15 – Accuracy, Robustness, and Cybersecurity: Bias monitoring systematically checks for discriminatory patterns. Confidence thresholds ensure that the agent only decides autonomously with sufficient certainty. Model-agnostic design enables switching the language model without changing the governance logic.
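As one example of what a systematic bias check can look like, the widely used "four-fifths rule" compares selection rates across groups. The article does not specify the Decision Layer's actual monitoring method, so this is a generic illustration:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def disparate_impact(outcomes: dict[str, tuple[int, int]],
                     threshold: float = 0.8) -> bool:
    """Flag if any group's selection rate falls below
    `threshold` times the highest group's rate (four-fifths rule)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return any(rate / best < threshold for rate in rates.values())

# Group B is selected at half the rate of group A -> flagged for review
print(disparate_impact({"A": (40, 100), "B": (20, 100)}))  # → True
```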

What This Means for HR Departments

Companies that use or plan to use AI in HR processes today must build governance structures by August 2026. This specifically means:

- Documented decision logic for every AI-supported HR process
- Technically enforced Human-in-the-Loop mechanisms for decisions with personnel impact
- Tamper-proof audit trails that make traceable how every decision was reached
- Bias monitoring that detects and reports discriminatory patterns

In Germany, the requirements of the Works Constitution Act (Betriebsverfassungsgesetz) add to this: works councils have a co-determination right for technical facilities that monitor the behaviour or performance of employees (§ 87(1) No. 6 BetrVG). AI agents in HR processes fall under this category.

The Decision Layer addresses both requirement blocks – EU AI Act and German co-determination law (Mitbestimmungsrecht) – in one architecture.

Schedule a call – We’ll show you which of your HR processes fall under the high-risk category and how the Decision Layer meets the requirements.


Frequently Asked Questions

Do HR AI systems fall under the EU AI Act?

Yes. AI systems used in employment, worker management, and access to self-employment fall under Annex III No. 4 of the EU AI Act and are classified as high-risk AI systems.

When do the high-risk obligations take effect?

The obligations for high-risk AI systems under the EU AI Act apply from August 2026. Prohibited AI practices have been banned since February 2025.

What happens if a company fails to meet the obligations?

Fines of up to 35 million euros or 7% of global annual turnover for prohibited practices; breaches of the high-risk obligations can reach 15 million euros or 3% of turnover (Article 99). Additionally, reputational risks and potential liability claims.

How does the Decision Layer meet the AI Act requirements?

The Decision Layer maps the requirements architecturally: risk management through Confidence Routing, data governance through versioned rule sets, transparency through audit trail, human oversight through Human-in-the-Loop.

Which process should your first agent handle?

Talk to us about a concrete use case.

Schedule a call