Governance & Compliance

EU AI Act 2026: What Applies Now, What's Coming, What You Must Do

Status February 2026: prohibitions active, AI literacy mandatory, high-risk deadline in six months. Timeline, obligations, action items.

Gosign 11 min read

The World’s First Comprehensive AI Law

The EU AI Act has been in force since August 2024. It is the world’s first comprehensive legislation for regulating artificial intelligence — and it applies to every organization that develops, deploys, or provides AI systems. In February 2026, we are in the middle of the implementation phase: two deadlines have already passed; the next arrives in six months. Organizations that have not prepared have a problem. Organizations that prepare now still have time.

This article provides a sober overview of the current state: what already applies, what is coming when, which obligations affect your organization, and what you should do in the next 90 days.

Timeline: Five Milestones

The phased implementation of the EU AI Act spans three years. Each milestone activates different obligations.

August 2024          February 2025         August 2025         August 2026          August 2027
    │                    │                    │                   │                    │
    ▼                    ▼                    ▼                   ▼                    ▼
Entry into Force   Prohibited AI        GPAI Obligations    High-Risk Systems    Remaining
                   Practices +          Take Effect         Must Be Compliant    Provisions
                   AI Literacy
                   ACTIVE                ACTIVE              IN 6 MONTHS          2027

For organizations, this means: two stages are already legally binding. The third — and for many enterprises the most critical — takes effect in six months. Preparation typically requires four to six months. Time is short.

What Applies Now

Prohibited AI Practices (since February 2025)

Since February 2, 2025, certain AI applications have been fully prohibited in the EU. This covers:

  • Social scoring: AI systems that evaluate individuals based on their social behavior and derive disadvantages in unrelated contexts.
  • Manipulative AI: Systems that manipulate human behavior through subliminal techniques — such as dark patterns that coerce purchasing decisions or consent.
  • Real-time biometrics in public spaces: Real-time biometric identification is fundamentally prohibited. Narrowly defined exceptions exist for law enforcement in cases involving serious crimes, counterterrorism, and missing persons searches — each requiring judicial authorization.
  • Emotion recognition in the workplace and educational institutions: AI systems that detect emotions of employees or learners are impermissible.
  • Predictive policing based on individual characteristics: Risk assessments for criminal behavior based solely on personal attributes.

Penalties: Violations of the prohibition provisions are punishable by fines of up to 35 million euros or 7 percent of global annual turnover — whichever amount is higher.

For most enterprises, these prohibitions demand no direct action, because the described applications rarely occur in a business context. But the review is mandatory: verify that none of your AI systems falls under these categories.

AI Literacy (since February 2025)

In parallel with the prohibitions, the AI literacy obligation under Article 4 has been in effect since February 2025: all persons who operate, deploy, or use AI systems must possess a sufficient level of AI competence. The competence must be appropriate to the respective context — a developer requires deeper knowledge than an end user who uses a chatbot.

What this means in practice:

  • Training obligation: Organizations must be able to demonstrate that their employees have been trained.
  • Documentation obligation: Training content, participant lists, and refresh intervals must be documented.
  • Context appropriateness: Training must match the role. A generic 30-minute e-learning module is insufficient for decision makers who select and take responsibility for AI systems.
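The documentation duty above can be captured in a minimal record structure. This is a sketch under the assumption that a simple in-house registry suffices — the field names are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    """One AI literacy training event, covering the documented items above."""
    topic: str                 # training content covered
    audience_role: str         # role the depth was tailored to (developer, end user, ...)
    participants: list[str]    # participant list
    held_on: date
    refresh_after_months: int  # planned refresh interval

    def next_refresh_due(self) -> date:
        # naive month arithmetic: roll the month forward, clamp the day to 28
        total = self.held_on.month - 1 + self.refresh_after_months
        return date(self.held_on.year + total // 12, total % 12 + 1,
                    min(self.held_on.day, 28))

record = TrainingRecord(
    topic="Prompting basics and risk awareness",
    audience_role="end user",
    participants=["a.meyer", "b.schulz"],
    held_on=date(2026, 2, 10),
    refresh_after_months=12,
)
print(record.next_refresh_due())  # 2027-02-10
```

Whatever the tooling, the point is the same: per event, record what was taught, to whom, at what depth, and when the refresh is due — so the obligation is demonstrable on request.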

The AI literacy obligation is frequently underestimated because it does not impose high technical requirements. But it is already enforceable. And it applies to every organization that uses AI — regardless of the risk class of the system. More on the organizational implications can be found in the article Works Council & AI Literacy: The Organizational Questions.

GPAI Obligations (since August 2025)

Since August 2025, the transparency and documentation obligations for General-Purpose AI models (GPAI) have been in effect. These primarily concern the providers of language models, not the organizations that use them. But as a deployer — an organization that uses a GPAI model in its own applications — you have obligations:

  • Usage notices: If your application generates content that could be mistaken for human-created, you must label it accordingly.
  • Transparency toward users: Individuals interacting with an AI system must be informed of that fact.
  • Governance infrastructure: You must be able to document which GPAI models you deploy, in which context, and with which safeguards.

The GPAI obligations require a clean inventory: Which AI models do you deploy? From which provider? In which application? With which risk classification? This information forms the foundation for the high-risk compliance that takes effect in six months.

What Arrives in Six Months: High-Risk Systems (August 2026)

The high-risk deadline of August 2, 2026, is the most critical deadline in the EU AI Act for most organizations. From that date, all AI systems falling under Annex III must be fully compliant. The requirements are extensive.

Which Systems Fall Under High Risk?

Annex III of the EU AI Act defines eight areas in which AI systems are classified as high-risk. The most relevant for enterprises:

  • Employment, personnel management, and access to self-employment: AI systems for job postings, candidate selection, performance evaluation, promotion decisions, and terminations.
  • Creditworthiness and insurance: Automated credit scoring, risk scoring.
  • Biometric identification: Facial recognition, voice identification — including in non-public spaces.
  • Critical infrastructure: AI systems in energy, water, transportation, telecommunications.
  • Education and vocational training: Automated exam grading, access control to educational institutions.

Requirements for High-Risk Systems

If any of your AI systems is classified as high-risk, you must meet the following requirements by August 2026:

  1. Risk management system: A documented system for identifying, analyzing, and mitigating risks throughout the entire lifecycle of the AI system.
  2. Data governance: Requirements for quality, representativeness, and accuracy of training data. When using pre-trained models: documentation of data provenance and fine-tuning.
  3. Technical documentation: Comprehensive documentation of the system prior to deployment — architecture, training procedures, performance metrics, testing procedures, limitations.
  4. Record-keeping obligations: Automatic logging of all relevant events to ensure the traceability of decisions.
  5. Transparency: Instructions for deployers that enable proper use.
  6. Human oversight: Technical measures that enable effective human monitoring. The Decision Layer is an architecture that implements precisely this requirement.
  7. Accuracy, robustness, cybersecurity: The system must reliably deliver its declared performance and be protected against manipulation.
  8. Conformity assessment: For certain categories, an assessment by a notified body (conformity assessment body) is required. For others, a self-assessment is sufficient.

Penalties: Violations of the high-risk obligations are punishable by fines of up to 15 million euros or 3 percent of global annual turnover.
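The "whichever is higher" rule behind both penalty tiers (up to €35 million or 7% for prohibited practices, up to €15 million or 3% for high-risk violations) reduces to a simple maximum:

```python
def max_fine(global_turnover_eur: float, fixed_cap_eur: float,
             turnover_share: float) -> float:
    """Upper bound of the fine: the higher of the fixed cap and the turnover share."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# Prohibited practices: up to €35M or 7% of global annual turnover
print(max_fine(1_000_000_000, 35_000_000, 0.07))  # 70000000.0 — the 7% dominates
# High-risk violations: up to €15M or 3%
print(max_fine(100_000_000, 15_000_000, 0.03))    # 15000000.0 — the fixed cap dominates
```

The practical consequence: for large enterprises the turnover percentage, not the fixed cap, sets the real exposure.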

Particular Relevance for HR

The HR department is the business area where AI applications most frequently fall under the high-risk category. This is due to Annex III, Number 4 — Employment and Personnel Management. The classification covers:

Automated screening of applications: high-risk. Any AI system that pre-sorts, evaluates, or filters job applications falls under the high-risk category. Regardless of whether the final decision is made by a human. The pre-selection alone is regulated.

AI-assisted performance evaluations: high-risk. When AI systems analyze performance data and derive evaluations or prepare evaluations from them, that is high-risk. This applies even to systems that only issue recommendations.

Predictive attrition: high-risk. AI systems that predict which employees are likely to leave the organization process personal data to derive employment decisions. That is high-risk.

Automated shift optimization: potentially high-risk. If an AI system creates shift schedules while processing individual preferences, performance data, or health information, it may fall under high-risk. The classification depends on the specific scope of data involved.

For HR departments, this means: inventory all AI systems used in employment contexts. Review the classification. Begin compliance preparation — the six months until August 2026 are tight for achieving full high-risk conformity. Further information on the interplay of AI and HR can be found at HR & AI Agents.
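As a first triage aid, the HR examples above can be mapped to their Annex III status. This mirrors only the article's own examples and is a deliberate simplification — real classification needs legal review, and the use-case names are illustrative:

```python
# Simplified mapping of the HR use cases discussed above to their Annex III status.
HR_CLASSIFICATION = {
    "applicant_screening": "high-risk",     # pre-sorting/filtering applications
    "performance_evaluation": "high-risk",  # AI-derived or AI-prepared evaluations
    "predictive_attrition": "high-risk",    # predicting who is likely to leave
    "shift_optimization": "depends",        # high-risk if individual data is processed
}

def classify_hr_use_case(use_case: str, uses_personal_data: bool = True) -> str:
    status = HR_CLASSIFICATION.get(use_case, "unknown")
    if status == "depends":
        return "high-risk" if uses_personal_data else "review-needed"
    return status

print(classify_hr_use_case("applicant_screening"))        # high-risk
print(classify_hr_use_case("shift_optimization", False))  # review-needed
```

Anything that returns "unknown" or "review-needed" belongs on the review list, not off it.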

Digital Omnibus: Possible Deferral

The European Commission proposed a Digital Omnibus package at the end of 2025 that could, among other things, defer the high-risk deadlines of the EU AI Act to December 2027. The package also addresses a simplification of reporting obligations and a raising of thresholds for SMEs.

Current status: The Digital Omnibus package is a Commission proposal. It must be adopted by the European Parliament and the Council. As of February 2026, the legislative process is not complete. There is no guarantee that the package will be adopted in the proposed form. There is no guarantee it will be adopted at all.

The recommendation: Plan with August 2026. If the Digital Omnibus does bring a deferral, you have gained additional time. If it does not, you are prepared. Compliance planning that bets on an uncertain deadline extension is an avoidable risk.

Practical Recommendation: Start an AI System Inventory

Regardless of whether your AI systems fall under high-risk or not: the first step is always the same. You need a complete inventory of all AI systems in your organization.

What Must Be Captured

For each AI system, document:

  • System designation and description: What does the system do? Which process does it support?
  • Provider and model: Which AI model is being used? From which provider? Cloud API or self-hosted?
  • Your organization’s role: Are you the provider, deployer, or both?
  • Risk classification: Does the system fall under one of the categories in Annex III (high-risk)? Under the prohibition provisions in Article 5? Or is it a system with limited risk?
  • Affected individuals: Which persons are affected by the system’s decisions or outputs?
  • Data processing: What data does the system process? Personal data? Trade secrets?
  • Safeguards: What technical and organizational measures are implemented? Human oversight? Audit trail?
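The fields above translate directly into a record per system. A minimal sketch — whether this lives in a spreadsheet, a GRC tool, or code is secondary; the field names below simply mirror the list:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of the AI system inventory, mirroring the fields listed above."""
    name: str                    # system designation
    description: str             # what it does / which process it supports
    provider: str                # model provider
    model: str                   # which AI model; cloud API or self-hosted
    org_role: str                # "provider", "deployer", or "both"
    risk_class: str              # "prohibited", "high-risk", "limited", "minimal"
    affected_persons: str        # who is affected by decisions or outputs
    data_categories: list[str]   # e.g. personal data, trade secrets
    safeguards: list[str]        # human oversight, audit trail, ...

inventory = [
    AISystemRecord(
        name="CV screening", description="pre-sorts incoming applications",
        provider="ExampleAI", model="cloud API", org_role="deployer",
        risk_class="high-risk", affected_persons="job applicants",
        data_categories=["personal data"],
        safeguards=["human oversight", "audit trail"],
    ),
]
high_risk = [s.name for s in inventory if s.risk_class == "high-risk"]
print(high_risk)  # ['CV screening']
```

A structured inventory like this also answers the August 2026 question mechanically: filter on the risk class and you have the list of systems that must be compliant.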

Timeline

An AI system inventory for a mid-sized organization is achievable in four to eight weeks. The effort depends on the number of systems, the state of documentation, and internal coordination. Start with the obvious systems — the officially procured AI tools — and then expand to shadow AI: AI systems that employees use independently without IT knowledge.

The inventory is not a one-time task. It must be continuously updated as new systems are added, existing systems are modified, and regulatory assessments evolve. The governance infrastructure must be designed so that the inventory remains a living document.

Summary: What You Should Do Now

  1. Review the prohibition provisions. Ensure that none of your AI systems falls under the practices prohibited since February 2025.
  2. Fulfill the AI literacy obligation. Document training for all AI users in your organization. The obligation is in effect now.
  3. Create an AI system inventory. Capture all AI systems, their providers, deployment context, and risk classification. Timeframe: four to eight weeks.
  4. Identify high-risk systems. Review the HR area, credit decisions, and automated processes with direct impact on individuals in particular.
  5. Begin high-risk compliance. For systems falling under Annex III: establish risk management system, data governance, technical documentation, and human oversight. Deadline: August 2026.
  6. Do not plan based on the Digital Omnibus. The possible deferral is a bonus, not a planning basis.

📘 Enterprise AI Infrastructure Blueprint 2026 – Article Series


All articles in this series: Enterprise AI Infrastructure Blueprint 2026


Gosign supports organizations with EU AI Act compliance — from system inventory to conformity assessment. If you want to know where your organization stands, talk to us.

Book a consultation — 30 minutes to assess your compliance status.


Frequently Asked Questions

Which AI practices have been banned since February 2025?

Social scoring, manipulative AI systems, real-time biometrics in public spaces (with narrow exceptions for law enforcement), emotion recognition in the workplace and educational institutions, and predictive policing based solely on personal attributes. Penalties: up to €35 million or 7% of global annual turnover.

Are AI applications in HR considered high-risk?

Almost always. Automated screening of job applications, AI-assisted performance evaluations, predictive attrition, and automated shift optimization fall under the high-risk category (Annex III). Compliance deadline: August 2026.

What does the AI literacy requirement mean?

Since February 2025, all persons operating or using AI systems must have a sufficient level of AI competence. Companies must be able to demonstrate training measures.
