The EU AI Act is no longer a future concern. As of February 2026, enforcement is active, investigations are underway, and the penalties are severe — up to 7% of global annual revenue for the most serious violations. After years of debate and preparation, the world's most comprehensive AI regulation has teeth.
What's Now Enforceable
The Act's enforcement is phased, with different requirements taking effect at different times. As of early 2026, the following are actively enforced:
- Banned AI practices — Social scoring systems, real-time biometric surveillance (with narrow exceptions), and AI that exploits vulnerable groups are prohibited. Violations carry the Act's maximum penalty: up to €35 million or 7% of worldwide annual turnover, whichever is higher
- Transparency obligations — AI systems that interact with people must disclose they are AI. Deepfake content must be labelled. Chatbots must identify themselves as non-human
- High-risk system requirements — AI used in hiring, credit scoring, law enforcement, and critical infrastructure must meet strict documentation, testing, and human oversight requirements
For a comprehensive breakdown of what's required, see our EU AI Act compliance guide.
The First Investigations
The European AI Office has opened its first formal investigations, though specific targets haven't been publicly named. Reports suggest the initial focus is on:
- Generative AI transparency — Whether major chatbot providers adequately disclose AI interaction to users
- Training data compliance — Whether companies properly documented the data used to train foundation models
- High-risk classification — Whether AI systems used in recruitment and lending have completed mandatory conformity assessments
The pattern follows the GDPR playbook: start with high-profile investigations to establish precedent and signal seriousness.
What Developers Must Do
If you're building AI applications that serve EU users — which includes most global SaaS products — the requirements apply to you regardless of where your company is based.
The practical minimum, with illustrative code sketches for these steps after the list:
- Classify your AI systems by risk level using the Act's framework
- Document training data sources and maintain records of model evaluations
- Implement disclosure mechanisms — If users interact with AI, tell them
- Build human oversight into high-risk applications — A human must be able to review and override AI decisions
- Conduct bias testing before deployment for any system that affects people's rights or opportunities
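A minimal sketch of the first two steps: an internal inventory that triages each system into the Act's risk tiers (unacceptable, high, limited, minimal) and keeps training-data and evaluation records alongside it. The tier names come from the Act itself; everything else here — the domain set, the record fields, the `classify` helper — is an illustrative assumption, not a legal determination. Borderline cases belong with counsel, not a lookup table.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """The Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # banned outright, e.g. social scoring
    HIGH = "high"                  # hiring, credit, law enforcement, ...
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no specific obligations


# Illustrative only: real classification follows the Act's annexes,
# and anything ambiguous needs human legal review.
HIGH_RISK_DOMAINS = {
    "hiring", "credit_scoring", "law_enforcement", "critical_infrastructure",
}


@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (step 2's paper trail)."""
    name: str
    domain: str
    tier: RiskTier
    training_data_sources: list[str] = field(default_factory=list)
    evaluation_reports: list[str] = field(default_factory=list)
    last_reviewed: date | None = None


def classify(domain: str, interacts_with_users: bool) -> RiskTier:
    """First-pass triage; escalate anything uncertain to legal review."""
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if interacts_with_users:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# Example: a CV-screening model is high-risk regardless of user interaction
screener = AISystemRecord(
    name="resume-screener-v3",
    domain="hiring",
    tier=classify("hiring", interacts_with_users=False),
    training_data_sources=["internal-ats-exports-2019-2024"],
)
assert screener.tier is RiskTier.HIGH
```

Keeping classification and record-keeping on the same structure means the documentation trail regulators ask for falls out of the inventory itself.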
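For the disclosure step, the mechanism can be as simple as a notice delivered before the first model response and a flag on synthetic media. The session structure and function names below are hypothetical; the Act requires that users are informed, not any particular implementation.

```python
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)


def open_chat_session(user_id: str) -> dict:
    """Start a session with the disclosure shown before any AI output."""
    return {
        "user_id": user_id,
        "messages": [{"role": "system_notice", "text": AI_DISCLOSURE}],
    }


def label_generated_media(metadata: dict) -> dict:
    """Tag synthetic content so downstream surfaces can display a label."""
    return {**metadata, "ai_generated": True}
```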
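Human oversight for high-risk systems means a model output is a recommendation until a person confirms or overrides it. A sketch under that assumption, with hypothetical names throughout: decisions land in a pending state, and nothing takes effect without a reviewer.

```python
from dataclasses import dataclass
from enum import Enum


class ReviewStatus(Enum):
    PENDING = "pending_review"
    APPROVED = "approved"       # reviewer agreed with the model
    OVERRIDDEN = "overridden"   # reviewer substituted their own decision


@dataclass
class HighRiskDecision:
    subject_id: str
    model_output: str      # e.g. "decline_application"
    model_rationale: str   # oversight is meaningless if this is opaque
    status: ReviewStatus = ReviewStatus.PENDING
    final_output: str | None = None


def apply_review(decision: HighRiskDecision, reviewer_choice: str) -> HighRiskDecision:
    """Nothing takes effect until a human confirms or overrides the model."""
    decision.status = (
        ReviewStatus.APPROVED
        if reviewer_choice == decision.model_output
        else ReviewStatus.OVERRIDDEN
    )
    decision.final_output = reviewer_choice
    return decision
```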
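Bias testing can start with something as blunt as a demographic parity check: compare favourable-outcome rates across groups and flag large gaps for review. The 0.2 threshold below is a policy choice for illustration, not a figure from the Act, and parity is only one of several fairness metrics worth running before deployment.

```python
def selection_rates(outcomes: dict[str, list[int]]) -> dict[str, float]:
    """Favourable-outcome rate per group; each outcome is 1 or 0."""
    return {group: sum(v) / len(v) for group, v in outcomes.items() if v}


def demographic_parity_gap(outcomes: dict[str, list[int]]) -> float:
    """Largest gap in favourable-outcome rate between any two groups."""
    rates = selection_rates(outcomes).values()
    return max(rates) - min(rates)


# Example: outcomes from a hiring model, split by self-reported group
results = {"group_a": [1, 1, 0, 1, 0], "group_b": [0, 1, 0, 0, 0]}
gap = demographic_parity_gap(results)
if gap > 0.2:  # threshold is a policy choice, not a legal standard
    print(f"Flag for review: parity gap of {gap:.2f}")
```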
Understanding the broader AI landscape helps put these requirements in context. FreeAcademy's AI Essentials course covers the foundational concepts, including responsible AI development, and their guide on AI skills that make you irreplaceable treats regulatory literacy as a key differentiator.
The Global Ripple Effect
The EU AI Act is doing what the GDPR did for data privacy: setting a global standard. Companies building for international markets are implementing EU-compliant practices everywhere rather than maintaining separate systems. The UK's AI Safety framework and the US executive order on AI both draw heavily from the EU's risk-based approach.
Whether you view this as necessary protection or regulatory overreach, the compliance reality is the same. The enforcement has begun, the fines are real, and the time to prepare was last year.