
Human-in-the-Loop - Architectural Principle, Not Checkbox

Human-in-the-Loop for AI agents means architecturally enforced human review, not optional approval. Confidence Routing, escalation rules, bias checks.

Bert Gogolin
CEO & Founder · 7 min read

The Problem: HITL as a Marketing Claim

Nearly every AI vendor claims their solution has "Human-in-the-Loop." In practice, this often means: somewhere in the process there is an approval button that a human can click, but does not have to.

That is not Human-in-the-Loop. That is optional approval that falls away under time pressure. When a clerk processes 200 documents per day and 195 of them are correct, they will eventually approve all of them without review.

Real Human-in-the-Loop is an architectural principle. It means: For defined decision types, the agent physically cannot act autonomously. The workflow pauses. The target system is not contacted. Only after human review and documented approval does the process continue.

HITL as a Technical Architectural Principle

In the Gosign architecture, Human-in-the-Loop is implemented in the Decision Layer. The decision about when a human is involved is based on three criteria:

Confidence Routing: Every agent decision has a confidence score. If the confidence falls below the defined threshold, escalation happens automatically. The threshold is configurable per tenant.

Risk Classification: Certain decision types are always escalated regardless of confidence. Decisions with potential for discrimination, co-determination matters, value threshold breaches.

Rule-Based Mandatory Escalation: New rule sets being applied for the first time always go through human review during the introduction phase. Only after a validated learning phase does the agent switch to autonomous mode for that specific rule.

The Human-in-the-Loop requirement is technically enforced. There is no workaround, no shortcut, no admin override. The agent cannot bypass the escalation.
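The three routing criteria can be sketched in code. This is a minimal illustration, not Gosign's actual implementation; the names `AgentDecision`, `route`, and the threshold default are assumptions. The point is structural: every escalation check runs before the autonomous branch, so there is no code path that skips them.

```python
from dataclasses import dataclass

# Decision types that always escalate, regardless of confidence
# (illustrative list, not the actual configuration).
ALWAYS_ESCALATE_TYPES = {"compensation", "co_determination", "discrimination_risk"}

@dataclass
class AgentDecision:
    decision_type: str       # e.g. "compensation"
    confidence: float        # 0.0 .. 1.0
    rule_id: str
    rule_is_validated: bool  # False during a rule's introduction phase

def route(decision: AgentDecision, confidence_threshold: float = 0.90) -> str:
    """Return 'escalate' or 'autonomous'.

    The escalation checks come first and unconditionally; there is no
    admin-override parameter by design.
    """
    # 1. Risk classification: certain decision types always escalate.
    if decision.decision_type in ALWAYS_ESCALATE_TYPES:
        return "escalate"
    # 2. Rule-based mandatory escalation: first-time rules go to a human.
    if not decision.rule_is_validated:
        return "escalate"
    # 3. Confidence routing: below the per-tenant threshold, escalate.
    if decision.confidence < confidence_threshold:
        return "escalate"
    return "autonomous"
```

Note that a compensation decision escalates even at 94% confidence, which is exactly the behavior the practice example below relies on.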

What HITL Looks Like in Practice

An HR agent processes a request for a special payment. The Document Agent reads the request. The Knowledge Agent checks the applicable works council agreement. The Decision Layer evaluates:

Result: Special payment eligible for approval per works council agreement Section 12, Paragraph 3. Confidence: 94%. Risk: low.

However: The decision involves a compensation component. The HITL configuration defines that compensation decisions are always escalated, regardless of confidence. The workflow pauses.

The responsible clerk sees in the dashboard: The request, the agent’s proposal, the applied rule in its current version, the confidence score, the escalation reason. They review, confirm, or correct. Their decision is documented in the Audit Trail, including the information that this was a Human-in-the-Loop decision.

HITL and the Works Council

Human-in-the-Loop is the technical answer to an organizational requirement: co-determination. Works councils have co-determination rights when AI systems are introduced, covering technical monitoring systems and workplace design.

The Decision Layer transforms works council agreements into technical constraints. When a works council agreement states: “Decisions about performance reviews may not be made fully automatically,” this is implemented as a HITL rule in the Decision Layer. The agent cannot bypass this rule.

The result: The works council can verify that its requirements are technically enforced, not merely organizationally promised.

HITL and the EU AI Act

The EU AI Act requires human oversight for high-risk AI systems (Art. 14). HR processes explicitly fall under the high-risk category: recruiting, performance reviews, promotion decisions, compensation, termination.

Human-in-the-Loop as an architectural principle fulfills the EU AI Act’s requirements for human oversight. It is not sufficient to have a human in the process who could theoretically intervene. The EU AI Act demands effective human oversight, meaning: The human must be able to understand the decision, must be able to stop it, and their intervention must be documented.

The Decision Layer documents for every HITL decision: Who reviewed? When? What was the agent’s proposal? What was the human decision? Do they align or diverge?
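The documented fields map naturally onto a record structure. A minimal sketch, assuming a record shape that the actual Audit Trail schema may well differ from:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical audit record; field names are illustrative.
@dataclass(frozen=True)
class HitlAuditRecord:
    reviewer: str                  # who reviewed
    reviewed_at: str               # when (ISO 8601, UTC)
    agent_proposal: str            # what the agent proposed
    human_decision: str            # what the human decided
    escalation_reason: str
    is_hitl_decision: bool = True  # flagged explicitly in the trail

    @property
    def diverged(self) -> bool:
        """Did the human decision differ from the agent's proposal?"""
        return self.agent_proposal != self.human_decision

record = HitlAuditRecord(
    reviewer="clerk-4711",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
    agent_proposal="approve_special_payment",
    human_decision="approve_special_payment",
    escalation_reason="compensation decisions always escalate",
)
```

Making the record immutable (`frozen=True`) reflects the audit requirement: once written, a HITL decision record should not be editable.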

The Limits: What HITL Does Not Solve

Human-in-the-Loop is not a solution for all governance problems. Specifically:

HITL does not solve the problem of bias in training data. If the language model is systematically biased, a clerk will not detect this from individual cases. That requires statistical bias monitoring across all agent decisions.

HITL does not scale linearly. If the agent makes 10,000 decisions per day and 20% are escalated, you need resources for 2,000 manual reviews. HITL thresholds must be calibrated so that the escalation rate remains manageable without compromising governance.
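The capacity arithmetic above can be made explicit. The decision volume and escalation rate are the article's numbers; the minutes-per-review figure is an assumption for illustration:

```python
def reviews_per_day(decisions_per_day: int, escalation_rate: float) -> int:
    """Daily manual reviews implied by a given escalation rate."""
    return round(decisions_per_day * escalation_rate)

def reviewers_needed(decisions_per_day: int, escalation_rate: float,
                     minutes_per_review: float = 3.0,
                     minutes_per_reviewer_day: float = 480.0) -> float:
    """Full-time reviewers needed to keep up with the escalation queue.

    minutes_per_review is an assumed average, not a source figure.
    """
    total = reviews_per_day(decisions_per_day, escalation_rate) * minutes_per_review
    return total / minutes_per_reviewer_day

# Article's example: 10,000 decisions/day at a 20% escalation rate
# yields 2,000 manual reviews per day.
```

Even a modest 3 minutes per review then implies double-digit full-time reviewers, which is why threshold calibration matters.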

HITL is one building block of the governance architecture, alongside Audit Trail, bias monitoring, rule set versioning, and Cert-Ready Controls.

More on this: Co-Determination and AI

Book a consultation - We will show you what Human-in-the-Loop looks like in your architecture.


Frequently Asked Questions

What does Human-in-the-Loop mean in AI?

Human-in-the-Loop (HITL) refers to the architecturally enforced involvement of a human in certain AI decisions. In the enterprise context, it is a technical architectural principle that ensures defined decision types cannot reach the target system without human review.

When must a human intervene?

In decisions with potential for discrimination, in co-determination matters, in decisions above defined value thresholds, when new rules are applied for the first time, and when the agent's confidence is low.

Is Human-in-the-Loop required by the EU AI Act?

Yes. The EU AI Act requires human oversight for high-risk AI systems. HR processes such as recruiting, performance reviews, and compensation decisions fall under this category.

Which process should your first agent handle?

Talk to us about a concrete use case.

Schedule a call