Definition

The EU AI Act (Regulation (EU) 2024/1689) is the European Union's comprehensive legal framework for artificial intelligence, establishing risk-based obligations for providers (developers), deployers, importers, and distributors of AI systems placed on or used in the EU market.

Key Takeaways

  • Entered into force on 1 August 2024; most obligations apply from 2 August 2026.
  • Risk-based framework: prohibited AI, high-risk AI, limited-risk AI, and minimal-risk AI.
  • High-risk AI systems face mandatory technical documentation, testing, and conformity assessment.
  • Article 12 requires high-risk AI systems to support automatic recording of events (logs) over their lifetime.

EU AI Act — Definition and Compliance Overview

The EU AI Act establishes risk-based legal requirements for AI systems in the EU. Learn the risk classification system, key obligations for high-risk AI, and compliance timeline.

High-Risk AI Classification

Annex III of the EU AI Act lists high-risk AI use cases including: biometric identification, critical infrastructure management, education (determining access or outcomes), employment (CV screening, monitoring), essential services (credit scoring, insurance risk), law enforcement, migration and border control, and administration of justice. High-risk AI systems face a full set of obligations including technical documentation (Article 11), data governance (Article 10), logging (Article 12), and conformity assessment (Article 43).
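
The Annex III categories above can be pictured as a simple lookup, as in the following sketch. This is purely illustrative: the category names and the `is_high_risk` helper are assumptions for this example, not the Act's legal text, and real classification depends on the specific conditions and exemptions in Article 6 and Annex III.

```python
# Illustrative sketch only: Annex III high-risk use-case categories as a
# lookup table. Category keys are this example's naming, not legal wording,
# and this omits the Article 6 conditions that refine classification.
ANNEX_III_CATEGORIES = {
    "biometric_identification",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border_control",
    "administration_of_justice",
}

def is_high_risk(use_case: str) -> bool:
    """Return True if the use case falls in an Annex III category (sketch)."""
    return use_case in ANNEX_III_CATEGORIES

print(is_high_risk("employment"))      # True: CV screening is an Annex III area
print(is_high_risk("spam_filtering"))  # False: not an Annex III category
```

A real compliance workflow would treat this as a first-pass screen, followed by legal review of the Article 6 classification rules.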

Article 12 — Logging Obligations

Article 12 requires high-risk AI systems to be designed with automatic event-recording (logging) capabilities over the system's lifetime. Logs must capture the system's operation and enable post-incident traceability; under Article 19, providers must retain automatically generated logs for at least six months, unless applicable Union or national law (including sector-specific regulation) provides otherwise. These logging requirements directly correspond to decision logging and audit trail infrastructure.
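
One common way to build the kind of tamper-evident audit trail this implies is a hash-chained, append-only log. The sketch below is a minimal illustration under assumed names (`DecisionLog`, the event fields); it is not the Act's prescribed format, and production systems would add durable storage, signing, and access control.

```python
import hashlib
import json
import time

class DecisionLog:
    """Minimal append-only, hash-chained decision log (illustrative sketch)."""

    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value for the chain

    def record(self, event: dict) -> dict:
        # Each entry commits to the previous entry's hash, so any later
        # modification of an earlier entry breaks the chain.
        entry = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": self.last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash; tampering with any entry returns False.
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: e[k] for k in ("timestamp", "event", "prev_hash")}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.record({"system": "cv-screener", "decision": "shortlist", "input_ref": "app-123"})
log.record({"system": "cv-screener", "decision": "reject", "input_ref": "app-124"})
print(log.verify())  # True; editing any stored entry would make this False
```

The hash chain gives tamper evidence, not tamper prevention: it lets an auditor detect after the fact that a record was altered, which is the traceability property Article 12 logging is meant to support.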

CertifiedData.io provides cryptographic certification infrastructure for synthetic datasets and AI artifacts, producing tamper-evident records for audit and EU AI Act compliance.