One Year Into the EU AI Act and the AI World Looks Different
- lee6782
- Jun 6

In August 2024, a quiet but defining moment in the history of artificial intelligence unfolded. There were no fireworks or sweeping headlines, no viral announcements or glowing product demos, just a formal entry in the European Union’s Official Journal. And with that, the EU AI Act, the world’s most comprehensive regulatory framework for artificial intelligence, became law.
This was the culmination of years of debate, negotiation, revision, and political compromise. But now, with the Act officially in force, businesses across Europe and beyond need to be ready.
Back in 2021, when the European Commission first proposed the AI Act, AI was exciting but still relatively niche. Few could have predicted how quickly tools like ChatGPT, Midjourney, and other general-purpose AI systems would burst into public consciousness just a year later, rewriting assumptions about creativity, labour, and even intelligence itself.
What began as a cautious regulatory framework for managing algorithmic decision-making soon found itself at the heart of a global debate: How do we regulate something that’s evolving faster than we can define it?
To its credit, the EU stuck to its course. Over the next three years, the AI Act was revised and restructured, incorporating provisions for general-purpose AI, introducing stricter governance for high-risk systems, and banning outright those applications considered antithetical to EU values—like biometric surveillance in public spaces and AI systems designed to manipulate human behaviour.
It was not perfect. But it was ambitious. And more importantly, it was actionable.
What Happens Now?
The AI Act officially came into force on 1 August 2024. But this isn’t a flick-the-switch moment. The EU has opted for a phased implementation, giving stakeholders time to adapt, assess, and align.
Here’s what the timeline looks like:
| Date | Milestone |
| --- | --- |
| 1 August 2024 | The Act becomes law. This is the starting pistol. |
| 2 February 2025 | The ban on "unacceptable risk" AI applications becomes enforceable. These include systems used for social scoring, real-time biometric tracking, and manipulative targeting. |
| 2 August 2025 | General-purpose AI (GPAI) models must meet transparency and disclosure obligations. |
| 2 August 2026 | The full Act becomes applicable, including strict conformity assessments, documentation, and oversight for high-risk AI systems. |
Why This Matters
If you're reading this from the Isle of Man, Singapore, London, or Silicon Valley, you may be tempted to breathe easy. But here’s the catch: The EU AI Act applies extraterritorially. If your AI system is used in the EU, or affects EU citizens, you’re within scope, regardless of where you're based.
And this isn’t theoretical. The Isle of Man, for instance, has positioned itself as a hub for digital innovation. From blockchain to online gambling, companies here are developing and exporting products globally, including into the EU.
The question isn’t “Will this affect me?” The question is, “When, and how much?”
Accountability
One of the Act’s most significant contributions is its risk-based approach:
| Risk tier | What it means |
| --- | --- |
| Unacceptable risk | Systems that are banned outright. No grey area. |
| High risk | Systems used in critical domains like healthcare, education, law enforcement, or finance. These must meet rigorous standards for transparency, safety, data governance, and human oversight. |
| Limited risk | Systems like chatbots must disclose their nature as AI systems. |
| Minimal risk | Think AI-driven spellcheckers or content filters. No specific obligations. |
This structure allows innovation to flourish while ensuring accountability where it matters most. And in sectors like online gambling—where AI is increasingly used for anti-money laundering (AML) checks, fraud detection, and responsible gambling controls—this matters a great deal.
So, What Should You Be Doing?
If you're running an AI-driven business, here’s your strategic checklist for the next 12–24 months:
1. Audit your AI systems. Map out where and how you're using AI. Which tools are in-house? Which are third-party? What data do they process?
2. Classify according to risk. Use the EU's risk tiers to classify your systems. If any qualify as “high-risk,” begin preparing for conformity assessments.
3. Develop a compliance roadmap. Get ahead of the timelines. Start documenting processes, data sources, and risk mitigation strategies now.
4. Revisit transparency. Are your customers and users aware when they’re interacting with AI? That’s going to matter—a lot.
5. Stay close to guidance. The European Commission and the newly established AI Office will be releasing detailed guidance. Subscribe, follow, engage.
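As a first pass at the audit-and-classify steps above, here is a minimal Python triage sketch. The four risk tiers come from the Act itself, but everything else—the `AISystem` fields, the keyword-to-tier mapping, and the example inventory—is an illustrative assumption, not the Act's legal test (the real criteria live in Article 5 and Annex III, and a classification like this is no substitute for legal review).

```python
from dataclasses import dataclass

# The four tiers of the Act's risk-based approach.
UNACCEPTABLE = "unacceptable"
HIGH = "high"
LIMITED = "limited"
MINIMAL = "minimal"

# Illustrative keyword buckets only — placeholder assumptions,
# not the Act's actual legal criteria.
BANNED_USES = {"social scoring", "real-time biometric tracking",
               "manipulative targeting"}
HIGH_RISK_DOMAINS = {"healthcare", "education", "law enforcement", "finance"}
USER_FACING = {"chatbot", "virtual assistant"}

@dataclass
class AISystem:
    name: str        # internal system name
    use_case: str    # what the system does
    domain: str      # sector it operates in

def classify(system: AISystem) -> str:
    """Assign a provisional risk tier: triage input, not legal advice."""
    if system.use_case in BANNED_USES:
        return UNACCEPTABLE
    if system.domain in HIGH_RISK_DOMAINS:
        return HIGH
    if system.use_case in USER_FACING:
        return LIMITED
    return MINIMAL

# A hypothetical inventory, as step 1 (the audit) would produce.
inventory = [
    AISystem("fraud-screen", "fraud detection", "finance"),
    AISystem("support-bot", "chatbot", "customer service"),
    AISystem("spell-fix", "spellchecker", "productivity"),
]

for s in inventory:
    print(f"{s.name}: {classify(s)}")
# fraud-screen: high
# support-bot: limited
# spell-fix: minimal
```

Any system landing in the "high" tier is your signal to start the conformity-assessment paperwork early; anything matching a banned use should have been decommissioned before the February 2025 deadline.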