
The EU Digital Omnibus: What the Proposed Delay Means for AI Act Compliance

The European Commission's Digital Omnibus package proposes pushing high-risk AI deadlines to December 2027. But transparency and literacy obligations remain unchanged — and they're already here.


The delay that isn't really a delay

On February 26, 2026, the European Commission published its Digital Omnibus package — a sweeping set of proposed amendments aimed at simplifying EU digital regulation. Among the most discussed changes: a proposal to extend the high-risk AI system compliance deadline from August 2026 to December 2027.

The immediate reaction from many companies was relief. "We have more time." LinkedIn posts celebrated the reprieve. Compliance teams slowed down. Some organizations paused their AI governance programs entirely.

That reaction is understandable. It's also wrong.

The Digital Omnibus is a proposal, not a law. It must pass through trilogue negotiations between the Commission, Parliament, and Council. That process takes months — often many months. There is no guarantee the delay will be adopted as proposed, or at all. And even if it passes exactly as written, several critical AI Act obligations are entirely unaffected.

What the Omnibus actually proposes

The Digital Omnibus package addresses multiple EU regulations simultaneously — the AI Act, GDPR, the Cybersecurity Act, and others. For the AI Act specifically, the key proposed change is straightforward: push the compliance deadline for high-risk AI systems (Article 6 and Annex III) from August 2, 2026 to December 31, 2027.

High-risk systems are the heart of the AI Act's regulatory framework. These are AI systems used in areas like employment, education, creditworthiness, law enforcement, and critical infrastructure. If your company deploys AI in any of these domains, the high-risk requirements — conformity assessments, risk management systems, data governance, human oversight, technical documentation — are the heavy obligations.

A 16-month extension on those requirements is significant. It gives companies more time to implement quality management systems, conduct conformity assessments, and build the technical documentation the Act requires.

But here's what many people miss: the Omnibus does not touch the obligations that are already in force or coming into force in 2026.

What is NOT delayed

Three critical timelines remain exactly where they are:

Article 4 — AI Literacy (already active since February 2, 2025). Every organisation that deploys or develops AI must ensure their staff have "sufficient AI literacy." This isn't a future obligation. It's been in force for over a year. If you haven't addressed it, you're already behind.

Article 5 — Prohibited AI practices (already active since February 2, 2025). The outright bans on social scoring, manipulative AI, emotion recognition in workplaces and schools, and untargeted facial recognition scraping are in force. These aren't changing.

Article 50 — Transparency obligations for certain AI systems (August 2, 2026). This is the one that catches most companies off guard. Article 50 requires that anyone deploying AI systems that interact with people must disclose that interaction. Chatbots must be labelled as AI. Deepfakes must be marked. AI-generated content must be identifiable. This deadline is August 2026, and the Omnibus does not propose changing it.

For companies using customer-facing AI — chatbots, virtual assistants, AI-generated marketing content — the August 2026 deadline is real and approaching fast.

The trilogue timeline

Even if you're focused on the high-risk extension, it's worth understanding the political process. The Omnibus proposal enters trilogue negotiations where three EU institutions must agree on the final text. Historically, digital regulation trilogues have taken anywhere from six months to over two years.

The AI Act's own trilogue took roughly nine months, but it was a high-priority file with strong political momentum. The Omnibus is a broader, more complex package touching multiple regulations. Some provisions — particularly the proposed GDPR changes — are politically contentious.

A realistic timeline suggests the Omnibus could be finalized by late 2026 or early 2027. That means companies won't have legal certainty about the high-risk extension until well into 2026. Planning around an unconfirmed delay is a gamble.

There's also a possibility that the December 2027 date gets shortened during negotiations. The European Parliament has historically pushed for faster implementation timelines on AI regulation. The final date could land anywhere between the original August 2026 and the proposed December 2027.

The strategic mistake: treating delay as permission to wait

We've seen this pattern before. When the AI Act's final text was published in mid-2024, many companies treated the two-year implementation period as a reason to defer action. Then Article 4 (literacy) and Article 5 (prohibited practices) kicked in on February 2, 2025 — catching organisations that hadn't been paying attention.

The same dynamic is playing out now with the Omnibus. Companies that pause their compliance programs will find themselves scrambling if:

  • The trilogue shortens the extension
  • Article 50 transparency deadlines arrive in August 2026 as scheduled
  • National authorities begin enforcement on literacy and prohibited practices (several member states are already setting up their AI offices)
  • Customers and partners begin asking for AI governance documentation as part of procurement processes

That last point is often underestimated. Regulatory deadlines matter, but market pressure frequently arrives earlier. Large enterprises are already including AI Act compliance questions in their vendor assessments. Being able to demonstrate a mature governance posture is becoming a competitive advantage — not just a legal requirement.

What to do now

The practical advice is the same whether the Omnibus passes or not:

1. Address AI literacy immediately. Article 4 is already in force. Document your training approach, ensure role-specific education, and keep records. This is the lowest-effort, highest-certainty compliance action you can take today.

2. Prepare for Article 50 transparency by August 2026. Audit every customer-facing AI touchpoint. Chatbots, AI-generated content, virtual assistants, automated emails — all need clear AI disclosure. This deadline is firm.
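One way to make that disclosure systematic is to attach it at the application layer rather than relying on each team to remember it. A minimal sketch, assuming a simple in-house chat pipeline — the message structure and disclosure wording here are illustrative, not taken from the Act:

```python
from dataclasses import dataclass

# Illustrative notice text; the Act requires disclosure but does not prescribe wording.
AI_DISCLOSURE = "You are chatting with an AI assistant."

@dataclass
class ChatMessage:
    role: str              # "assistant" or "user"
    text: str
    ai_generated: bool = False

def with_disclosure(messages: list[ChatMessage]) -> list[ChatMessage]:
    """Prepend a one-time AI disclosure to a conversation if it isn't already there."""
    if messages and messages[0].text == AI_DISCLOSURE:
        return messages  # already disclosed; don't repeat the notice
    notice = ChatMessage(role="assistant", text=AI_DISCLOSURE, ai_generated=True)
    return [notice] + messages
```

The point of centralising the notice in one function is auditability: one place to show a regulator, one place to update if the required form of disclosure is clarified in guidance.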

3. Inventory your AI systems now. Even if high-risk deadlines shift to 2027, you need to know what you're working with. A comprehensive AI inventory is the foundation for every other compliance activity. You can't assess risk, implement governance, or train staff on systems you don't know about.
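An inventory doesn't need dedicated tooling to start — structured records are enough to drive the other steps. A sketch under stated assumptions: the fields and the simplified risk tiers below are illustrative (the Act's actual classification rules are more nuanced than any enum), and `transparency_scope` is a hypothetical helper for producing a first-cut Article 50 audit list:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Simplified tiers for triage only; real classification follows the Act's criteria.
    PROHIBITED = "prohibited"
    HIGH = "high"
    TRANSPARENCY = "transparency"   # Article 50-type obligations
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    vendor: str
    purpose: str
    customer_facing: bool
    risk_tier: RiskTier
    owner: str                      # an accountable person, not just a team name

def transparency_scope(inventory: list[AISystem]) -> list[AISystem]:
    """First cut of systems likely to need AI disclosure by August 2026."""
    return [s for s in inventory
            if s.customer_facing or s.risk_tier is RiskTier.TRANSPARENCY]
```

Even this much gives you something concrete to hand to legal, to cite in vendor questionnaires, and to grow into risk classification and documentation later.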

4. Build governance infrastructure incrementally. Don't try to achieve full high-risk compliance in one push. Start with the inventory, add risk classification, then layer on documentation and oversight processes. This iterative approach works regardless of the final deadline.

5. Monitor the trilogue. The negotiation positions of the Parliament and Council will signal where the final text lands. If Parliament pushes back on the extension, the timeline could tighten significantly.

The bottom line

The Digital Omnibus may give companies more time on high-risk AI obligations — but that time isn't confirmed, and it doesn't apply to the obligations that matter most right now. AI literacy is already mandatory. Transparency deadlines are unchanged. And the companies that treat this moment as a reason to accelerate — not decelerate — will be the ones best positioned regardless of what the trilogue produces.

The worst outcome isn't strict regulation. It's being caught unprepared when the deadlines arrive — whether that's August 2026 or December 2027.

If you're not sure where your organisation stands, start with what you actually use. Run a free AI systems scan to see your current exposure and get a clear picture of what needs attention now versus what can wait.

Want to go deeper?

We explore the frontier of AI-built software by actually building it. See what we're working on.