HR & People Operations

Works Council & AI Literacy: The Organizational Questions

Why AI projects fail on organization, not technology. Co-determination as a design requirement and mandatory training since 2025.

Gosign · 10 min read

Technology Is Rarely the Problem

When AI projects fail in organizations, the cause is almost never the technology. The models work. The APIs are stable. The infrastructure is available. What fails is the organization: the works council (Betriebsrat) blocks deployment because it was informed too late. Employees do not use the new tools because they were not trained. Roles and responsibilities shift faster than the organization can adapt.

This article addresses the two organizational questions that accompany every AI deployment: How do you gain the works council as an ally? And how do you fulfill the legally mandated AI literacy requirement?

Works Council and Co-determination (Mitbestimmung) — From Blocker to Enabler

Why the Works Council Must Be Involved

In German companies, the works council (Betriebsrat) has a co-determination right (Mitbestimmungsrecht) under Section 87(1)(6) of the Works Constitution Act (Betriebsverfassungsgesetz) for any technical system capable of monitoring employee behavior or performance. AI systems almost always fall under this provision: they process usage data, log interactions, and their decisions directly or indirectly affect working conditions. Similar employee representation structures exist in many other European countries, where comparable consultation and co-determination (Mitbestimmung) requirements apply.

This is not an obstacle; it is a design framework. The works council does not have the right to prevent AI — it has the right to shape the conditions of its deployment. That is a distinction frequently overlooked in practice.

The problem arises not from co-determination (Mitbestimmung) itself, but from the timing at which the works council is brought in. In most failed AI projects, the works council is only informed once the technical decision has already been made. It receives a presentation of finished results, has no opportunity to influence the architecture, and responds with the only instrument available: rejection.

Architecture as the Answer

The works council’s questions are legitimate and predictable: What data does the AI system process? Who has access? Is performance data captured? Who makes the final decision — the AI or a human? Can decisions be traced?

A Decision Layer answers these questions technically, not just in a company agreement (Betriebsvereinbarung) on paper. The architectural layer defines:

  • Decision boundaries: What the AI may prepare, what a rule engine decides, what a human must approve.
  • Human-in-the-loop: Technically enforced human approval for defined decision classes. The review step cannot be bypassed, because the architecture does not permit it.
  • Audit trail: Every AI decision is logged — which model, which input, which output, which rule applied, whether a human was involved.
  • Role-based access control (RBAC): Role-based permissions prevent unauthorized access to sensitive data or functions.

Company Agreements as System Constraints

The decisive advantage of a well-designed AI architecture: company agreements (Betriebsvereinbarungen) are not merely documented on paper but implemented as technical rules in the system.

First: Company agreements as configurable rule sets. What the company agreement states — for example, “Performance data may not be evaluated without the employee’s consent” — is implemented as a constraint in the Decision Layer. The system cannot bypass the agreement because the rule is technically enforced.

Second: Transparency through a complete audit trail. The works council can trace at any time which decisions the AI system made, on what basis, and whether human approvals were granted. No black box, no trust question.

Third: RBAC prevents uncontrolled access. Only defined roles have access to specific AI functions. A team lead can use the chatbot but not the performance analysis. An HR manager can view the performance analysis but cannot export raw data. Permissions are granularly configurable.

Fourth: No profiling without explicit authorization. The architecture technically ensures that evaluations relating to individuals can only be conducted with explicit authorization — configured in the rule set.
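The four mechanisms above can be expressed together as configuration the system checks on every call. A minimal sketch, assuming a simple rule format — all role names, function names, and rule keys below are hypothetical:

```python
# Company agreement clauses as a configurable rule set.
AGREEMENT_RULES = {
    # "Performance data may not be evaluated without the employee's consent"
    "performance_analysis": {"requires_consent": True},
}

# RBAC: which roles may call which AI functions.
ROLE_PERMISSIONS = {
    "team_lead": {"chatbot"},
    "hr_manager": {"chatbot", "performance_analysis"},  # no raw-data export
}

def is_allowed(role: str, function: str, employee_consent: bool = False) -> bool:
    # RBAC check: unknown roles and unlisted functions are denied by default.
    if function not in ROLE_PERMISSIONS.get(role, set()):
        return False
    # Agreement check: the clause is enforced technically, not just on paper.
    rule = AGREEMENT_RULES.get(function, {})
    if rule.get("requires_consent") and not employee_consent:
        return False
    return True

print(is_allowed("team_lead", "performance_analysis"))        # False: not in role
print(is_allowed("hr_manager", "performance_analysis"))       # False: no consent
print(is_allowed("hr_manager", "performance_analysis", True)) # True
```

Because the agreement lives in configuration rather than prose, changing a negotiated clause means changing a rule entry — and the audit trail records which rule applied to each decision.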

Practical Recommendation: The Architecture Workshop

Invite the works council to the architecture workshop — not to the results meeting. Half a day in which the works council understands how the Decision Layer works, what data is processed, and how company agreements are technically implemented saves months of negotiations.

Experience shows: when the works council understands the architecture and sees that its concerns are not merely heard but technically implemented, it transforms from a potential blocker into an active supporter. The works council does not want to prevent AI. It wants to ensure that employee rights are preserved. A transparent architecture provides exactly that assurance.

AI Literacy: The Legal Obligation Since February 2025

Article 4 of the EU AI Act obligates all providers and deployers of AI systems to ensure that their employees possess a sufficient level of AI competence. The obligation has been in effect since February 2, 2025, and applies to every organization that deploys AI — regardless of size and regardless of the risk class of the AI system.

The phrase “sufficient level of AI competence” is deliberately open-ended. It must be interpreted in context: a board member who decides on deploying an AI system needs different competencies than a clerk who uses a chatbot. But both need competencies. And both must be documented.

Penalties: The EU AI Act attaches no dedicated fine to Article 4; violations of the AI literacy obligation are addressed through the Act's general enforcement framework, whose highest tier reaches 35 million euros or 7 percent of global annual turnover. In practice, initial enforcement for literacy-only violations is likely to involve warnings and orders — but the legal exposure is real.

Who Must Be Trained?

All persons who use, operate, or make decisions about AI systems in any capacity. This includes:

  • Board and executive management (decision responsibility)
  • Department heads and team leads (usage responsibility)
  • Specialists and clerks (operational use)
  • IT and development (technical operations)
  • Works council (co-determination (Mitbestimmung) responsibility)

What Must Be Covered?

Content must be context-appropriate. At minimum, four competency areas are recommended:

  1. Foundational understanding of how AI works: How does a language model function? What is the difference between analysis and decision? What can AI do, what can it not?
  2. Hallucination recognition: Language models generate plausible-sounding but factually incorrect statements. Users must be able to critically evaluate results.
  3. Responsible use: What data may be entered? What must not? What happens to inputs? Where are the boundaries of appropriate use?
  4. Data protection: What personal data may be processed? What consents are required? What data leaves the corporate network?

How Must Compliance Be Documented?

The EU AI Act requires proof that training has taken place. This means:

  • Documentation of training content and materials
  • Participant lists with date and signature
  • Regular refreshers (recommended: annually; event-driven upon significant system changes)
  • Differentiation by role and responsibility

A generic 30-minute webinar does not meet the requirements. Training must be role-specific, address the organization’s particular AI systems, and include interactive elements that verifiably assess understanding.
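The documentation requirements above can be operationalized as a simple training registry. A minimal sketch, assuming an illustrative record format — the person names, module names, and field names are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical training records; field names are illustrative.
records = [
    {"person": "a.meier", "role": "team_lead", "module": "responsible_use",
     "completed": date(2025, 3, 1), "signed": True},
    {"person": "b.klein", "role": "board", "module": "ai_fundamentals",
     "completed": date(2024, 1, 10), "signed": True},
]

REFRESH_INTERVAL = timedelta(days=365)  # recommended annual refresher

def needs_refresh(record: dict, today: date) -> bool:
    """A record is stale if unsigned or older than the refresh interval."""
    return (not record["signed"]) or (today - record["completed"] > REFRESH_INTERVAL)

today = date(2025, 6, 1)
stale = [r["person"] for r in records if needs_refresh(r, today)]
print(stale)  # ['b.klein'] — last training more than a year ago
```

A registry like this makes the annual-refresher recommendation checkable: instead of hoping everyone re-trained, HR can list exactly who is overdue, per role and per module.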

Practical Recommendation: Enterprise AI Portal as Competency Development

AI literacy does not have to be delivered exclusively through formal training programs. A well-designed Enterprise AI Portal with built-in guidance, disclaimer notices, and feedback mechanisms is already part of competency development.

When employees interact with AI in a controlled environment that explains what the model is doing, flags limitations, and guides responsible use, they build competency through practice — not through presentations. The portal becomes both a productivity tool and a training instrument.

This does not replace formal training requirements. But it supplements them in a way that is continuous, contextual, and directly relevant to employees’ actual work.

Conclusion: Organization Decides

The two organizational questions — co-determination (Mitbestimmung) and competency — are not side issues in AI deployment. They are the main issues. The technology is available, affordable, and capable. The question is whether your organization is equipped to use it.

A works council that understands the architecture becomes a supporter. Employees who are trained use the tools productively. And an organization that addresses these dimensions proactively gains a competitive advantage that no model upgrade can replace.


📘 Enterprise AI Infrastructure Blueprint 2026 – Article Series

← Previous: EU AI Act 2026: What Applies Now, What’s Coming, What You Must Do
Next →: Agent Orchestration: n8n, Camunda, and Alternatives Compared

All articles in this series: Enterprise AI Infrastructure Blueprint 2026


Gosign supports organizations with the organizational dimensions of AI deployment — from works council negotiations to AI literacy programs. If you want to know how to prepare your organization, talk to us.

Book a consultation — 30 minutes to address your organizational questions.

Works Council AI Literacy Co-determination EU AI Act HR Company Agreement

Frequently Asked Questions

Must the works council (Betriebsrat) be involved in AI deployment?

Yes. In German companies, the works council has co-determination rights for any AI deployment affecting working conditions. An architecture with a Decision Layer makes AI works-council-ready: transparent decisions, architecturally enforced human-in-the-loop, auditable rule logic.

What does AI literacy mean under the EU AI Act?

Since February 2025, all AI users must have sufficient AI competence. This includes understanding how AI works, recognizing hallucinations, responsible use, and data protection. Non-compliance is addressed under the Act's general enforcement framework, with fines reaching up to €35 million or 7% of global annual turnover at the top tier.

Which process should your first agent handle?

Talk to us about a concrete use case.

Schedule a call