As the EU AI Act begins to take shape across the European market, many business leaders breathe a sigh of relief: we finally have a legal framework — now it’s just a matter of complying. But that assumption hides a deeper problem.
The law sets strict rules, but it doesn’t provide step-by-step instructions. Instead, it relies on technical standards: detailed “how-to” manuals that explain what compliance looks like in practice. If a company follows them, it is presumed to be in line with the law. That makes these standards essential for product development, legal certainty, and market access in the EU.
There’s just one problem: the standards don’t exist yet.
In 2023, the EU tasked a group of experts — under the CEN-CENELEC JTC 21 committee — with drafting around 35 technical standards covering everything from risk management and data quality to transparency, cybersecurity, and human oversight. These were originally due in April 2025, then delayed to August 2025, and now are unlikely to be ready before 2026.
That’s a massive problem. Once drafted, these standards must still go through review, approval, and official publication — likely in early 2026 — giving companies only 6 to 8 months to digest, implement, and validate dozens of new requirements before enforcement begins for high-risk systems in August 2026. For smaller teams without legal departments or compliance infrastructure, this timeline is unrealistic.
Meanwhile, the standards — which are critical to demonstrating compliance — are often locked behind paywalls. Many cost €100 to €300 per document, creating a two-speed system: one for companies that can afford to buy legal certainty, and another for those left to guess. This clashes with a 2024 ruling of the EU Court of Justice, which held that harmonised standards forming part of EU law must be freely accessible. But that ruling is being resisted by standardisation bodies that depend on document sales for revenue.
There’s also the issue of who is writing these standards. Large tech firms and consultancies dominate the committees. Startups, SMEs, and civil society groups are vastly underrepresented. This creates a dangerous imbalance: those with the most market power are shaping the very rules they will be judged by.
To bridge the gap, the Commission is developing a voluntary Code of Practice for General-Purpose AI (GPAI), but it’s just a temporary fix. It doesn’t solve the core issue: the longer the delay, the harder it becomes to implement the law fairly.
Europe has promised ethical, trustworthy and inclusive AI. That promise rings hollow if access to legality depends on your budget and your Brussels influence.