EU AI Act Readiness

Our architecture addresses the central requirements of the EU AI Act as design principles. This is an architectural statement, not a conformity certificate.

Architecture, Not Checklist

The EU AI Act (Regulation (EU) 2024/1689) regulates AI systems in the European Union. For enterprise AI agents making automated or semi-automated decisions in business-critical processes, four areas are particularly relevant.

The Gosign architecture addresses these areas as design principles -- not as a retroactive compliance layer.

EU AI Act Architecture Mapping

Transparency (Art. 13): The Decision Layer documents every decision path. The audit trail shows input → model → assessment → outcome.
Human Oversight (Art. 14): Human-in-the-Loop is architecturally enforced. For risk decisions, a human decides; this is a design decision, not an option.
Explainability (Art. 13(1)): Every decision includes its reasoning: which data, which model, which assessment logic, which result, which alternatives.
Risk Management (Art. 9): The Governance Layer provides bias monitoring, confidence tracking, and anomaly detection. Cert-Ready Controls generate audit evidence automatically.
Data Quality (Art. 10): Input validation in the Agent Layer. Data provenance documented in the audit trail.
Record-Keeping (Art. 12): Complete audit trail with timestamps, input hashes, model versions, and decision paths. Immutable and exportable.
Technical Robustness (Art. 15): Multi-model capability (no single point of failure), fallback strategies, and versioned models with rollback.

How the Architecture Enforces Compliance

Compliance is not a subsequent audit, but a result of the architecture. The following mechanisms ensure that EU AI Act requirements are technically met.

1. Decision Layer: Transparency and Explainability

The Decision Layer documents every decision path completely. For every agent decision: input data, model used, model version, confidence score, assessment logic, result, and rejected alternatives. This documentation is created automatically as a byproduct of the decision, not as a retroactive record.
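The record described above can be sketched as a small data structure that is filled in at decision time. The following Python sketch is purely illustrative; the field names and the `record_decision` helper are assumptions for this page, not the actual Gosign API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json

@dataclass(frozen=True)
class DecisionRecord:
    """Illustrative shape of an automatically generated decision record."""
    input_hash: str               # SHA-256 of the input payload
    model: str                    # model identifier
    model_version: str            # pinned model version
    confidence: float             # confidence score in [0, 1]
    reasoning: str                # assessment logic that was applied
    outcome: str                  # the decision taken
    rejected_alternatives: tuple  # alternatives considered and discarded
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_decision(payload: dict, model: str, version: str, confidence: float,
                    reasoning: str, outcome: str, alternatives: tuple) -> DecisionRecord:
    # The record is created as a byproduct of deciding, not reconstructed later.
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return DecisionRecord(digest, model, version, confidence,
                          reasoning, outcome, alternatives)
```

Because the record is produced in the same step as the decision itself, there is no separate documentation step that could be skipped or forgotten.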

2. Human-in-the-Loop: Human Oversight

Human-in-the-Loop is architecturally enforced, not optionally configured. The Decision Layer routes decisions automatically based on confidence score and risk category. For risk decisions, a human must review and approve. The agent cannot bypass this step. Escalation criteria are transparent and stored in the system.
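Confidence- and risk-based routing of this kind can be sketched in a few lines. The risk categories and the confidence threshold below are hypothetical values for illustration; they are not taken from the Gosign system.

```python
from enum import Enum

class Route(Enum):
    AUTO_APPROVE = "auto"
    HUMAN_REVIEW = "human"

# Assumed example values; in the described architecture, escalation
# criteria are transparent and stored in the system.
HIGH_RISK_CATEGORIES = {"personnel", "credit", "essential_services"}
CONFIDENCE_THRESHOLD = 0.90

def route_decision(risk_category: str, confidence: float) -> Route:
    # Risk decisions always escalate to a human; the agent cannot bypass this.
    if risk_category in HIGH_RISK_CATEGORIES:
        return Route.HUMAN_REVIEW
    # Low-confidence decisions escalate as well.
    if confidence < CONFIDENCE_THRESHOLD:
        return Route.HUMAN_REVIEW
    return Route.AUTO_APPROVE
```

Note that the risk-category check comes first: a high-confidence score never overrides the escalation rule for risk decisions.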

3. Governance Layer: Risk Management and Monitoring

The Governance Layer continuously monitors all agent activities. Bias monitoring detects systematic biases. Confidence tracking identifies model degradation. Anomaly detection reports unexpected decision patterns. Cert-Ready Controls with automatic evidence generation ensure all audit evidence is available at any time.
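Confidence tracking for model degradation can be illustrated with a rolling-window check: if the mean confidence over recent decisions falls noticeably below an expected baseline, the model is flagged. The window size, baseline, and drop tolerance below are assumed values for illustration only.

```python
from collections import deque
from statistics import mean

class ConfidenceTracker:
    """Illustrative rolling-window check for model degradation."""

    def __init__(self, window: int = 100, baseline: float = 0.90, drop: float = 0.10):
        self.scores = deque(maxlen=window)  # only the most recent scores count
        self.baseline = baseline            # expected mean confidence
        self.drop = drop                    # tolerated deviation before flagging

    def observe(self, score: float) -> bool:
        """Record a confidence score; return True if degradation is flagged."""
        self.scores.append(score)
        window_full = len(self.scores) == self.scores.maxlen
        return window_full and mean(self.scores) < self.baseline - self.drop
```

A flagged degradation would then feed the same escalation path as any other anomaly: the finding is logged as evidence and surfaced for human review.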

4. Audit Trail: Record-Keeping

The audit trail captures every decision with timestamps, input hashes, model versions, and complete decision paths. The record is immutable and exportable in JSON, PDF, and CSV. System metadata such as models, versions, and deployment purpose is stored in structured form and available for registration obligations.
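One common way to make such a trail tamper-evident is hash chaining: each entry's hash covers the previous entry's hash, so altering any past record invalidates everything after it. The following is an illustrative sketch of that general technique, not Gosign's actual implementation.

```python
import hashlib
import json

GENESIS = "0" * 64  # chain anchor for the first entry

class AuditTrail:
    """Illustrative append-only, hash-chained decision log."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> None:
        prev = self.entries[-1]["chain_hash"] if self.entries else GENESIS
        body = json.dumps(record, sort_keys=True)
        chain = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append({"record": record, "chain_hash": chain})

    def verify(self) -> bool:
        """Recompute the chain; any modified entry breaks verification."""
        prev = GENESIS
        for entry in self.entries:
            body = json.dumps(entry["record"], sort_keys=True)
            if hashlib.sha256((prev + body).encode()).hexdigest() != entry["chain_hash"]:
                return False
            prev = entry["chain_hash"]
        return True

    def export_json(self) -> str:
        return json.dumps(self.entries, indent=2)
```

The chain makes tampering detectable rather than impossible; operational immutability additionally depends on storage-level controls such as write-once media or restricted access.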

Risk Classification

Classification of an AI system into an EU AI Act risk class depends on the specific use case:

Potentially High-Risk (depending on context): AI agents that prepare or influence personnel decisions (Art. 6, Annex III No. 4); AI agents used in creditworthiness assessment; AI agents influencing access to essential services.

Not High-Risk (typically): Document Agents classifying documents without making personnel decisions, Knowledge Agents providing information without making decisions.

The Gosign architecture is designed for the strictest requirements. When an agent is deployed in a high-risk context, the architectural prerequisites are already in place.

Compliance Position

EU AI Act compliant by design.

Our architecture is designed from the ground up for EU AI Act requirements.

Transparency, explainability, and human oversight are structurally unavoidable, not optionally configurable.

Every decision is automatically documented, not retroactively reconstructed.

The system is prepared for high-risk requirements, regardless of the specific use case classification.

Cert-Ready Controls ensure that audit evidence is available and exportable at any time.

Distinction

This page describes architectural measures, not legal conformity. Actual EU AI Act compliance depends on the specific deployment context, risk classification, and legal assessment on a case-by-case basis.

Gosign delivers the technical architecture addressing EU AI Act requirements. Legal assessment and formal conformity declaration are the responsibility of the operator and their legal advisors.

Frequently Asked Questions about the EU AI Act

Do Gosign AI agents fall under the EU AI Act?

AI agents making automated decisions in HR, finance, or other business-critical areas are subject to EU AI Act requirements depending on their use case and risk classification. The Gosign architecture is designed to meet these requirements.

What risk class do the agents have?

Risk classification depends on the specific use case, not the agent itself. AI agents in HR (e.g., salary or personnel decisions) may be classified as high-risk AI. The architecture is designed for the strictest requirements.

What is the difference between EU AI Act compliance and readiness?

Compliance is a legal assessment depending on the specific deployment context. Readiness means the architecture has the technical prerequisites for meeting the requirements already built in.

Talk to us about EU AI Act compliance.

Compliant by design. Not retroactively. Not optional. Architecturally.

Book a Meeting