The EU AI Act is the world’s first comprehensive legal framework regulating artificial intelligence. The Act introduces a risk-based approach in which transparency, safety, explainability and human oversight are central. Organisations that develop or deploy AI are given clear obligations around data quality, governance, monitoring and documentation. The impact is significant, reaching from high-risk systems to generative AI and everyday business processes in which algorithms play a role. Below, we summarise the key elements of the EU AI Act in ten practical bullet points, so you can quickly see what this means for your organisation.
- Risk-based approach
AI systems are classified into four risk categories: prohibited (“unacceptable risk”), high-risk, limited risk (subject to transparency obligations), and minimal or no additional risk. Organisations must classify each system in their portfolio and apply the corresponding requirements; a sketch of such a classification register follows this list.
- Scope & extraterritorial impact
The rules apply not only to companies within the EU, but also to those outside the EU that place AI systems on the EU market or whose systems are used within the EU. This means that multinationals headquartered outside the EU must also comply if they supply (or resupply) into the EU.
- Obligations for providers, deployers, importers and distributors
Different parties in the value chain have different responsibilities: providers carry the heaviest obligations (conformity, documentation, post-market monitoring), deployers must use systems in line with their instructions and ensure human oversight, and importers and distributors must verify conformity before placing systems on the market. Internally, this translates into shared duties: IT ensures technical compliance, finance/risk functions handle reporting and audit trails, and management and IT share responsibility for governance.
- High-risk AI systems
For systems listed in Annex III, or used as safety components of products covered by EU harmonisation legislation, additional stringent requirements apply: risk management, monitoring, data quality, technical documentation, conformity assessments, and regular review of models.
- Transparency obligations for ‘lower-risk’ AI applications
For AI systems with limited or mainly transparency-related risks, obligations include clearly informing users that they are interacting with AI, labelling AI-generated or manipulated content (such as deepfakes), and providing information about how the system works and what its limitations are.
- Algorithms with systemic impact / general-purpose AI
Special rules apply to general-purpose AI (“foundation”) models. Providers of such models must maintain technical documentation and ensure that downstream users understand the models’ capabilities and limitations; for models posing systemic risk, they must additionally perform model evaluations, adversarial testing, incident reporting and cybersecurity measures.
- Compliance & enforcement
Member States must designate national supervisory authorities, and an EU AI Office will be established at European level. Harmonisation is encouraged via standards, and compliance will be actively supervised, with sanctions (fines) for violations.
- Sanctions – substantial amounts
Fines can be significant: for the most serious violations they can run up to €35 million or 7% of global annual turnover, whichever is higher, with lower tiers for other infringements (see Article 99 of the Act for the full schedule). The potential impact of non-compliance is considerable; a worked example follows this list.
- Phased entry into force & transitional periods
The obligations apply in phases after entry into force: bans on prohibited practices come first (after six months), followed by the rules for general-purpose AI (after twelve months), most high-risk obligations (after 24 months) and requirements for high-risk systems embedded in regulated products (after 36 months). Important: use this transition period to prepare; a timeline sketch follows this list.
- Support for SMEs & innovation
“Regulatory sandboxes” will be set up, along with dedicated support, standards, tools and templates. Where possible, smaller companies will receive some relief or phased obligations.
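
To make the classification point concrete, here is a minimal Python sketch of an internal AI register mapped to the Act’s risk tiers. The RiskTier and AISystem names, fields and example entries are our own illustrative assumptions, not terminology mandated by the Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Illustrative labels for the Act's four risk categories."""
    PROHIBITED = "unacceptable risk"     # banned practices
    HIGH_RISK = "high risk"              # Annex III uses, safety components
    LIMITED_RISK = "transparency risk"   # disclosure duties apply
    MINIMAL_RISK = "minimal or no risk"  # no additional obligations


@dataclass
class AISystem:
    """One entry in a hypothetical internal AI register."""
    name: str
    purpose: str
    tier: RiskTier
    owner: str  # accountable business function


register = [
    AISystem("cv-screening", "rank job applicants", RiskTier.HIGH_RISK, "HR"),
    AISystem("support-bot", "customer chat assistant", RiskTier.LIMITED_RISK, "Service"),
]

for system in register:
    print(f"{system.name}: {system.tier.value} (owner: {system.owner})")
```

Keeping such a register is not itself required by the Act, but classifying every system is the natural first step before the tier-specific obligations can be applied.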
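
For the sanctions point, a small worked example of how the fine-cap arithmetic plays out. The €35 million / 7% top tier and the lighter treatment of SMEs come from Article 99; the function itself and the turnover figures are hypothetical.

```python
def fine_cap(turnover_eur: float, fixed_eur: float = 35e6,
             pct: float = 0.07, sme: bool = False) -> float:
    """Fine cap: a fixed amount versus a share of worldwide annual
    turnover. The higher of the two applies; for SMEs and start-ups
    the lower applies instead (Article 99(6))."""
    pct_based = pct * turnover_eur
    return min(fixed_eur, pct_based) if sme else max(fixed_eur, pct_based)


# Multinational with EUR 2 bn turnover: 7% = EUR 140 m, above the EUR 35 m floor.
print(f"{fine_cap(2_000_000_000):,.0f}")         # -> 140,000,000
# SME with EUR 20 m turnover: 7% = EUR 1.4 m, below EUR 35 m.
print(f"{fine_cap(20_000_000, sme=True):,.0f}")  # -> 1,400,000
```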
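
And for the phased timeline, a short sketch that prints the main application milestones. The dates reflect the published application schedule (the Act entered into force on 1 August 2024); the labels are our own shorthand.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

# Main application milestones; labels are our own shorthand.
MILESTONES = {
    date(2025, 2, 2): "bans on prohibited practices apply",
    date(2025, 8, 2): "general-purpose AI obligations apply",
    date(2026, 8, 2): "general application, incl. most high-risk obligations",
    date(2027, 8, 2): "high-risk systems embedded in regulated products",
}

for day, label in sorted(MILESTONES.items()):
    months = (day.year - ENTRY_INTO_FORCE.year) * 12 + day.month - ENTRY_INTO_FORCE.month
    print(f"{day:%d %b %Y} (~{months} months after entry into force): {label}")
```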