World's First AI Regulation

EU AI Act
Complete Guide

The European Union's AI Act is the world's first comprehensive legal framework for artificial intelligence. Understand the risk-based approach, compliance deadlines, and what it means for your AI systems.

High-risk compliance deadline: August 2, 2026

Risk-Based Classification System

The EU AI Act categorizes AI systems into four risk tiers, each with different compliance requirements and penalties.

Unacceptable Risk (Prohibited)

Deadline: February 2025

AI systems that pose an unacceptable risk to safety, livelihoods, and rights.

Examples:

  • Social scoring by governments
  • Real-time biometric surveillance in public spaces
  • Manipulation of vulnerable groups
  • Predictive policing based on profiling

Maximum Penalty:

€35M or 7% of global annual turnover, whichever is higher

High Risk

Deadline: August 2026

AI systems that significantly impact health, safety, or fundamental rights.

Examples:

  • Medical devices and diagnostics
  • Employment and HR decisions
  • Credit scoring and financial access
  • Educational assessment systems

Maximum Penalty:

€15M or 3% of global annual turnover, whichever is higher

Limited Risk

Deadline: August 2025

AI systems subject to transparency obligations, such as disclosing to users that they are interacting with AI.

Examples:

  • Chatbots (must disclose AI interaction)
  • Emotion recognition systems
  • Deepfake generators
  • Biometric categorization

Maximum Penalty:

€7.5M or 1% of global annual turnover, whichever is higher

Minimal Risk

Deadline: No deadline

AI systems with no specific regulatory requirements under the Act.

Examples:

  • AI-powered spam filters
  • Recommendation engines
  • AI in video games
  • Inventory management AI

Maximum Penalty:

None. Voluntary codes of conduct are encouraged.
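The four tiers and their penalty ceilings can be sketched as a simple lookup. This is an illustrative model, not legal logic: the tier names, figures, and the "higher of fixed amount or revenue share" rule paraphrase the structure described above, and real classification always requires reading the Regulation itself.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RiskTier:
    name: str
    fine_eur: int    # fixed ceiling in euros (0 = no administrative fine)
    fine_pct: float  # ceiling as a share of global annual turnover

# Figures as presented in this guide; verify against Article 99 of the Act.
TIERS = {
    "prohibited": RiskTier("Unacceptable Risk (Prohibited)", 35_000_000, 0.07),
    "high":       RiskTier("High Risk", 15_000_000, 0.03),
    "limited":    RiskTier("Limited Risk", 7_500_000, 0.01),
    "minimal":    RiskTier("Minimal Risk", 0, 0.0),
}

def max_penalty(tier: RiskTier, global_turnover_eur: float) -> float:
    """Penalty ceiling: the higher of the fixed amount or the turnover share."""
    return max(tier.fine_eur, tier.fine_pct * global_turnover_eur)
```

For a company with €1B global turnover, a prohibited-practice violation is capped by the percentage (7% = €70M), while a smaller company with €100M turnover hits the fixed €35M ceiling instead.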

Compliance Timeline

The EU AI Act is being implemented in phases. Know your deadlines.

August 2024

EU AI Act enters into force

February 2025

Prohibited AI practices banned

August 2025 (Action Required)

Transparency obligations for limited-risk AI; obligations for general-purpose AI models begin

August 2026 (Action Required)

Full compliance required for high-risk AI systems

August 2027

Extended deadline for general-purpose AI models already on the market before August 2025
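A deadline tracker for the phased timeline above can be sketched in a few lines. The specific days (the 2nd of each month) are assumed from the Act's entry-into-force date of August 1, 2024, with obligations applying from August 2 of the relevant year; confirm the exact dates for your obligations.

```python
from datetime import date

# Milestones from the phased timeline above (assumed day-level dates).
MILESTONES = {
    "prohibitions_apply": date(2025, 2, 2),
    "transparency_and_gpai": date(2025, 8, 2),
    "high_risk_compliance": date(2026, 8, 2),
}

def days_remaining(deadline: date, today: date) -> int:
    """Days left until a milestone; negative means it has already passed."""
    return (deadline - today).days
```

For example, on February 4, 2026 the high-risk deadline is 179 days away, matching the countdown shown at the top of this guide.
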

High-Risk AI Requirements

If your AI system is classified as high-risk, you must implement these mandatory requirements. ISO 42001 maps directly to most of these obligations.

01

Risk Management System

Establish, implement, document, and maintain a continuous risk management system throughout the AI lifecycle.

Clause 6.1 - Risk Assessment
02

Data Governance

Ensure training, validation, and testing datasets are relevant, representative, and, to the extent possible, free of errors and complete.

Clause 7.2 - Data Management
03

Technical Documentation

Maintain comprehensive technical documentation demonstrating compliance before market placement.

Clause 7.5 - Documented Information
04

Record-Keeping

Automatic logging of events to ensure traceability throughout the AI system's operation.

Clause 9.2 - Internal Audit
05

Transparency

Provide clear information to deployers about capabilities, limitations, and intended use.

Clause 8.4 - Communication
06

Human Oversight

Design systems to allow effective human oversight during operation.

Clause 8.1 - Operational Planning
07

Accuracy & Robustness

Ensure appropriate levels of accuracy, robustness, and cybersecurity.

Clause 8.3 - AI System Development
08

Quality Management

Implement a quality management system ensuring ongoing compliance.

Clause 4.4 - AI Management System
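The record-keeping requirement (item 04) is the most directly codeable of the eight. A minimal sketch of an append-only audit log is shown below; the event kinds, field names, and JSON-lines format are illustrative assumptions, since the Act prescribes traceability rather than a specific schema.

```python
import json
from datetime import datetime, timezone


class AuditLog:
    """Minimal append-only event log sketch for AI-system traceability.

    Event kinds like "inference" or "override" are this example's own
    naming, not terms defined by the EU AI Act.
    """

    def __init__(self) -> None:
        self._records: list[str] = []  # one JSON line per event, never mutated

    def record(self, kind: str, **detail) -> None:
        """Append a timestamped event; UTC timestamps aid later audits."""
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "kind": kind,
            **detail,
        }
        self._records.append(json.dumps(entry, sort_keys=True))

    def by_kind(self, kind: str) -> list[dict]:
        """Retrieve all events of one kind, e.g. for an internal audit."""
        return [r for r in map(json.loads, self._records) if r["kind"] == kind]
```

In production this would write to durable, tamper-evident storage, but the shape (structured, timestamped, queryable events) is what supports both the Act's record-keeping obligation and ISO 42001 internal audits.
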

ISO 42001 + EU AI Act = Complete Coverage

Implementing ISO 42001 addresses 90%+ of EU AI Act high-risk requirements. Get certified to demonstrate compliance.

See Full Mapping
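The clause references scattered through the requirements list above can be collected into one lookup. Note this reproduces this guide's own mapping, not an official EU AI Act to ISO 42001 crosswalk.

```python
# Requirement-to-clause pairs as given in the list above (this guide's
# mapping; verify against the standard before relying on it).
ISO_42001_CLAUSE_MAP = {
    "Risk Management System": "6.1 Risk Assessment",
    "Data Governance": "7.2 Data Management",
    "Technical Documentation": "7.5 Documented Information",
    "Record-Keeping": "9.2 Internal Audit",
    "Transparency": "8.4 Communication",
    "Human Oversight": "8.1 Operational Planning",
    "Accuracy & Robustness": "8.3 AI System Development",
    "Quality Management": "4.4 AI Management System",
}

def clause_for(requirement: str) -> str:
    """Look up the ISO 42001 clause this guide maps a requirement to."""
    return ISO_42001_CLAUSE_MAP[requirement]
```
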

Don't Wait Until August 2026

Start your EU AI Act compliance journey today. Get a free assessment of your AI systems and a roadmap to compliance.


Kevin A

CISSP, CISM, CCSP, AWS Security Specialist

Principal Security & GRC Engineer

Kevin is a security engineer turned GRC specialist. He focuses on mapping cloud-native infrastructure (AWS/Azure/GCP) to modern compliance frameworks, ensuring that security controls are both robust and auditor-ready without slowing down development cycles.