
EU AI Act: What HR Departments Must Do Now

The EU AI Act directly affects HR processes. Risk classification, bias monitoring, human oversight – what is now mandatory and how to prepare.

Gosign · 8 min read

What the EU AI Act Means for HR

The EU AI Act classifies AI systems by risk. AI systems used in human resources – recruiting, performance management, payroll decisions, time tracking – fall under the high-risk category. This is not interpretation: employment and worker management are explicitly listed as high-risk use cases in Annex III of the Act.

For HR departments, this means: Every AI tool that influences decisions about employees must meet specific requirements. Not eventually, but now.

The Four Requirements That Affect HR

Risk Classification

Every AI system must be classified: minimal, limited, high or unacceptable risk. HR systems that prepare or influence decisions are typically high-risk. This doesn’t just apply to recruiting AI, but also to Document Agents processing sick leave certificates or Knowledge Agents answering compliance questions.

Bias Monitoring

High-risk AI systems must be tested for bias. For HR this means: Does the agent treat part-time employees differently than full-time? Are there systematic differences by location, gender, age? This must be documented and regularly reviewed.
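What such a check can look like in practice: compare positive-outcome rates across groups and flag any group that falls notably below the best-performing one. The sketch below uses the common "four-fifths" heuristic as the threshold; the group labels and the decision log are illustrative, not a prescribed method from the Act.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the positive-outcome rate per group from logged decisions."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the 'four-fifths' heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Illustrative log: (group label, AI-prepared decision was positive)
log = [("full_time", True)] * 80 + [("full_time", False)] * 20 \
    + [("part_time", True)] * 50 + [("part_time", False)] * 50

print(disparate_impact(log))  # flags "part_time" (rate ratio 0.625)
```

Running this check on every release and storing the result is exactly the kind of regular, documented review the Act expects.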

Human Oversight

Human oversight is not optional, it is mandatory. Every high-risk AI system must be designed so that a human can monitor, intervene and override it. In practice this means: Human-in-the-loop must be built into the architecture, not bolted on afterwards.
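Built into the architecture, that can be as simple as a hard gate: the agent only ever produces a recommendation, and nothing executes without a recorded human decision. A minimal sketch (field names and the `Recommendation` type are illustrative, not a prescribed interface):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Recommendation:
    subject: str                      # employee or applicant ID
    action: str                       # what the agent proposes
    rationale: str                    # explanation shown to the reviewer
    approved: Optional[bool] = None   # None until a human has decided
    reviewer: Optional[str] = None
    decided_at: Optional[datetime] = None

def review(rec: Recommendation, reviewer: str, approve: bool) -> Recommendation:
    """The human can accept or override; either way is recorded."""
    rec.approved = approve
    rec.reviewer = reviewer
    rec.decided_at = datetime.now(timezone.utc)
    return rec

def apply_decision(rec: Recommendation) -> str:
    """Refuse to act on any recommendation without recorded human approval."""
    if rec.approved is not True:
        raise PermissionError("no human approval recorded; refusing to act")
    # ... here the downstream HR process would be triggered ...
    return f"'{rec.action}' for {rec.subject} approved by {rec.reviewer}"
```

The point of the design: there is no code path from the agent's output to an effect on the employee that bypasses `review` – that is what "built in, not bolted on" means.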

Transparency and Documentation

Affected employees must be informed that AI systems are being used. Technical documentation must be comprehensive: what data is processed, what logic is applied, what decisions are prepared. An audit trail is mandatory.
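One way to make an audit trail tamper-evident rather than just a plain log file is to chain entries by hash, so any later edit to history breaks verification. A sketch under the assumption of an append-only JSON-lines file; the field names are illustrative:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_event(path: str, event: dict, prev_hash: str = "0" * 64) -> str:
    """Append one event as a JSON line; each entry carries the hash of the
    previous one, so tampering with earlier entries becomes detectable."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "prev": prev_hash,
        **event,
    }
    line = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256(line.encode()).hexdigest()
    with open(path, "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return digest  # pass into the next call to continue the chain

def verify_chain(path: str) -> bool:
    """Recompute the chain and confirm every entry points at its predecessor."""
    expected = "0" * 64
    with open(path, encoding="utf-8") as f:
        for raw in f:
            record = json.loads(raw)
            if record["prev"] != expected:
                return False
            expected = hashlib.sha256(raw.rstrip("\n").encode()).hexdigest()
    return True
```

Each event would record what the documentation requirement asks for: which data was processed, which logic was applied, and which decision was prepared.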

What Most Companies Get Wrong

Three typical mistakes. First, the legal department evaluates tools retroactively instead of building the architecture to be compliant from the start. Second, AI tools are declared as “merely supportive” even though they de facto prepare decisions. Third, works councils (Betriebsräte) are informed only when the system is running, instead of being integrated into the governance structure.

Governance by Design Instead of Retroactive Compliance

The right approach is not: build first, audit later. It’s: Governance by Design. This means compliance requirements flow into the architecture from day 1. Logging, versioning, explainability, Human-in-the-loop – these are not features, they are architectural principles.

For collaboration with the works council, this means: When you build a works council-ready architecture from the start, the works agreement becomes an accelerator, not a blocker.

Concrete Next Steps for HR

First: Inventory all AI systems used in HR – officially and unofficially. Second: Risk-classify each system according to EU AI Act categories. Third: Gap analysis – where is logging, bias monitoring, human oversight missing? Fourth: Architecture decision – retrofit or rebuild? Fifth: Involve the works council.
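Steps one to three lend themselves to a simple machine-readable inventory instead of a spreadsheet. A minimal sketch – the system names, control fields, and `gap_report` helper are illustrative assumptions, not a mandated format:

```python
from dataclasses import dataclass

RISK_LEVELS = ("minimal", "limited", "high", "unacceptable")

@dataclass
class AISystem:
    name: str
    purpose: str
    risk: str                         # step 2: one of RISK_LEVELS
    has_logging: bool = False         # step 3: which controls exist?
    has_bias_monitoring: bool = False
    has_human_oversight: bool = False

def gap_report(inventory):
    """Step 3: list the missing controls for every high-risk system."""
    gaps = {}
    for system in inventory:
        assert system.risk in RISK_LEVELS
        if system.risk != "high":
            continue
        missing = [name for name, present in [
            ("logging", system.has_logging),
            ("bias monitoring", system.has_bias_monitoring),
            ("human oversight", system.has_human_oversight),
        ] if not present]
        if missing:
            gaps[system.name] = missing
    return gaps
```

The output of `gap_report` is precisely the input for step four: per system, the list of gaps tells you whether retrofitting is realistic or a rebuild is cheaper.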

At Gosign, EU AI Act readiness is part of every agent development. Risk classification, bias testing, complete logging and Human-in-the-loop are not add-ons but the foundation of our architecture.


Frequently Asked Questions

Do HR agents fall under the EU AI Act?

Yes. AI systems used in human resources that influence decisions about employees fall under the high-risk category of the EU AI Act.

What happens with non-compliance?

Fines of up to 35 million euros or 7% of global annual revenue for prohibited AI practices; breaches of the high-risk obligations can reach 15 million euros or 3%. On top of that: reputational risk and potential management liability.

Do existing AI tools need to be retrofitted?

Yes. Transition periods apply for high-risk systems – most obligations for Annex III use cases take effect on 2 August 2026. New systems must be compliant from the start. Retrofitting is significantly more expensive than building correctly from the beginning.

Which process should your first agent handle?

Talk to us about a concrete use case.
