Two weeks ago, the EU Council and Parliament reached a provisional deal that pushed the AI Act's biggest enforcement wave back by sixteen months. That sounds like a win for everyone behind on compliance. It is not. The 2 August 2026 deadline still triggers a long list of obligations, and the part of the law that moved still becomes binding on 2 December 2027. Eighty days is a short window if your AI inventory is still a guess and your transparency wiring is still a wishlist.
This is the map we use at StudioMeyer to think about the law, the dates, and the engineering work that has to happen between now and the end of next year. We host AI products in Frankfurt and we build memory systems for European customers. We wrote this map once for ourselves, and we are publishing it here because most of what has been written about the AI Act this month is either too legal to be useful or too vague to be wrong. We also offer dedicated advisory engagements for teams that want help mapping their systems to the law, so the second half of this article describes how we approach that work in practice.
The deadlines that already happened
The Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and is being switched on in phases. Two of those phases are already behind us.
On 2 February 2025, the prohibited practices in Article 5 became enforceable. Social scoring, manipulative subliminal techniques, untargeted facial image scraping, biometric categorisation that infers sensitive attributes, and emotion recognition in workplaces and schools are all banned outright. The fine for breaking these rules is up to €35 million or 7 percent of global annual turnover, whichever is higher. On the same date, the AI literacy obligation in Article 4 turned on, requiring providers and deployers to make sure their staff understand the systems they ship and use.
On 2 August 2025, the rules for general-purpose AI models in Articles 51 to 56 became binding. Foundation model providers (Anthropic, OpenAI, Google, Mistral, Meta, and others) now have to publish a training data summary, maintain technical documentation, share information with downstream providers, and follow a copyright compliance policy that respects the Text and Data Mining opt-out. The voluntary GPAI Code of Practice published on 10 July 2025 is the Commission's preferred route to demonstrate compliance and reduce administrative burden, and most major providers have signed it.
If your team uses Claude or GPT-4 through an API, you do not inherit the model provider's obligations. You inherit the obligations of being a deployer or, more often, a provider of the system you built on top of it.
The shift that happened on 7 May 2026
For the past year, every compliance article ended with the same sentence: 2 August 2026 is the date the rest of the Act becomes enforceable. That sentence is now partly wrong.
On 7 May 2026, the Council and Parliament announced a provisional political agreement on the Digital Omnibus, a targeted simplification package the Commission proposed in November 2025. The reason was prosaic. Harmonised standards are not finished, notified bodies are not in place, and most member states are behind on designating their competent authorities. The Commission decided that demanding compliance with a framework that does not yet have the supporting infrastructure was setting the law up to fail.
If the agreement is formally adopted, four things change. Annex III high-risk AI systems (standalone systems used in employment, credit, education, biometrics, critical infrastructure, law enforcement, migration, and justice) become enforceable on 2 December 2027 instead of 2 August 2026, as confirmed by both Hogan Lovells and Orrick. Annex I high-risk AI inside regulated products (medical devices, vehicles, machinery, lifts) moves from 2 August 2027 to 2 August 2028. The Article 50(2) watermarking obligation for providers of AI-generated content shifts from 2 August 2026 to 2 December 2026. And the deadline for member states to set up an AI regulatory sandbox moves from August 2026 to August 2027, with a parallel EU-level sandbox operated by the AI Office for SMEs, start-ups, and small mid-caps.
Two important caveats. The Omnibus is a provisional agreement, not adopted law. It still has to go through formal trilogue confirmation and publication in the Official Journal. Until then, the current legal baseline remains 2 August 2026. And every obligation that is not on the postponement list still hits in August.
What still triggers on 2 August 2026
The deployer-facing parts of Article 50 are not delayed. If your chatbot speaks to people, every conversation must start with a disclosure that the user is interacting with AI. If your product generates synthetic content, the output must be marked. If a deepfake leaves your system, the recipient must be told. Voice agents that ring real callers are bound by the same rule, even though the underlying compute lives in California.
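To make that concrete, here is a minimal sketch of the disclosure-first pattern in Python. The Conversation class, field names, and metadata shape are our illustration; the Act requires the disclosure and the marking, not any particular format.

```python
from dataclasses import dataclass, field

AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "You can ask for a human agent at any time."
)

@dataclass
class Conversation:
    messages: list = field(default_factory=list)
    disclosed: bool = False

    def send(self, text: str, ai_generated: bool = True) -> None:
        # Disclose the AI nature before the first AI-generated reply.
        if ai_generated and not self.disclosed:
            self.messages.append({"role": "system", "text": AI_DISCLOSURE})
            self.disclosed = True
        message = {"role": "assistant" if ai_generated else "human_agent", "text": text}
        # Mark synthetic output so downstream consumers can detect it.
        if ai_generated:
            message["meta"] = {"ai_generated": True}
        self.messages.append(message)
```

The point of putting the disclosure inside the send path, rather than in a welcome screen, is that it fires no matter which entry point started the conversation.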
GPAI enforcement powers also become fully active. The European AI Office gains the ability to request information from model providers, demand access to the models themselves, and issue corrective measures or recalls if a foundation model violates Articles 53 to 55. The penalty ceiling for GPAI breaches is €15 million or 3 percent of global turnover.
National competent authorities must be designated and operational. Penalty regimes have to be written into national law. And the entire enforcement framework around Annex III, even with the Omnibus delay, has to be ready to switch on, because December 2027 is closer than it looks.
Risk tiers in plain language
The Act sorts every AI system into one of four risk tiers, and your tier decides your work.
Unacceptable risk is the Article 5 list above. No deployment in the EU, no exceptions.
High risk is the long list in Annex III plus AI embedded in regulated products. If your system makes or shapes decisions in employment, credit, education, essential services, law enforcement, biometrics, justice administration, or migration, you are likely high risk. The obligations are heavy. Documented risk management (Article 9), training data governance (Article 10), Annex IV technical documentation (Article 11), automatic logging with at least six months of retention (Article 12), transparency to deployers (Article 13), human oversight that allows intervention and override (Article 14), and demonstrated accuracy, robustness, and cybersecurity (Article 15). For deployers in this tier, a fundamental rights impact assessment is required before first use (Article 27).
Limited risk is the chatbot tier. Your obligation is Article 50 transparency, which is short to read and not short to implement well. Tell the user they are talking to AI at the start of the conversation, give them a path to a human if the conversation goes off the rails (a sketch of that trigger follows after the tiers), and label any AI-generated content the system emits.
Minimal risk is everything else. Spam filters, recommendation engines that do not touch protected decisions, autocomplete, and the long tail of internal tooling. No specific AI Act obligations, although GDPR and sector law still apply.
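Back to the limited-risk tier for a moment: the path to a human is the piece teams most often leave implicit. Here is a minimal sketch of an escalation trigger, assuming a hypothetical handoff hook somewhere in your stack; the phrases and the threshold are illustrative and need tuning per product and language.

```python
# Phrases that signal the user wants out of the bot loop (illustrative).
HANDOFF_PHRASES = ("human", "agent", "person", "mensch", "mitarbeiter")

def should_escalate(user_text: str, unresolved_turns: int, limit: int = 3) -> bool:
    # Escalate when the user asks for a human, or when the bot has
    # failed to resolve the issue for `limit` consecutive turns.
    asked_for_human = any(p in user_text.lower() for p in HANDOFF_PHRASES)
    return asked_for_human or unresolved_turns >= limit
```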
The boundary case that catches most teams is the AI agent that takes actions on a user's behalf. A customer support chatbot is limited risk. The same chatbot wired to a CRM that can refund payments, send emails, or delete records is closer to high risk. The classification follows the consequences, not the model.
What this means for AI agent builders
We build agents for a living. The Article 12, 14, and 15 requirements are the ones that change how you write code, not just how you write policy.
Article 14 requires that an operator can interrupt the agent. In our base agent class, that translates to iteration limits, hard timeouts, and a kill switch the operator can fire mid-execution. Tool calls that do anything irreversible (sending email, moving money, deleting data, calling an external API that costs real money) need an explicit human approval step. The phrase the Act uses is "effective human oversight", and effective is doing the heavy lifting.
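Here is a minimal sketch of those controls in Python. This is not our actual base agent class; the tool names and the approve callback are illustrative stand-ins.

```python
import threading
import time

# Tools whose effects cannot be undone (illustrative names).
IRREVERSIBLE_TOOLS = {"send_email", "refund_payment", "delete_record"}

class OversightStop(RuntimeError):
    """Raised when an oversight control halts the run."""

class AgentRun:
    def __init__(self, max_iterations: int = 20, timeout_s: float = 120.0):
        self.max_iterations = max_iterations
        self.deadline = time.monotonic() + timeout_s
        self.kill_switch = threading.Event()  # the operator can set this mid-run

    def check_controls(self, iteration: int) -> None:
        if self.kill_switch.is_set():
            raise OversightStop("operator fired the kill switch")
        if iteration >= self.max_iterations:
            raise OversightStop("iteration limit reached")
        if time.monotonic() > self.deadline:
            raise OversightStop("hard timeout exceeded")

    def call_tool(self, name: str, args: dict, approve) -> dict:
        # Irreversible actions require an explicit human yes before execution.
        if name in IRREVERSIBLE_TOOLS and not approve(name, args):
            raise OversightStop(f"human approval denied for {name}")
        return {"tool": name, "status": "executed"}  # dispatch to the real tool here
```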
Article 12 requires automatic logging of events over the system's lifetime. That means every tool call, every LLM round-trip, every decision branch, every input the agent saw, and every output it produced. Logs must be retained long enough to support post-market monitoring and incident reporting (Articles 72 and 73), and deployers must keep their copies for at least six months under Article 26.
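One shape such a log can take is a JSON-lines append, sketched below; the field names are our illustration, not a format the Act prescribes.

```python
import json
import time
import uuid

def log_event(path: str, kind: str, payload: dict) -> None:
    """Append one event to a JSON-lines audit log."""
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "kind": kind,  # e.g. tool_call, llm_round_trip, decision, input, output
        "payload": payload,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

An append-only file per run keeps the write path trivial; retention and rotation policy is where the six-month clock gets enforced.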
Article 15 requires robustness. Input validation that catches prompt injection, output validation that stops hallucinated data from propagating, resource limits that prevent token-bombing and runaway loops, and adversarial testing against known agent failure modes. None of this is novel security engineering. The change is that for high-risk agents, it is now legally required, with documentation.
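A sketch of the resource-limit half of this, with illustrative thresholds; real prompt-injection and output checks are model- and domain-specific and sit behind these blunt guards.

```python
MAX_INPUT_CHARS = 8_000        # blunt guard against token-bombing
MAX_TOOL_CALLS_PER_RUN = 30    # caps runaway agent loops
MAX_TOKENS_PER_RUN = 200_000   # hard ceiling on LLM spend per run

def validate_input(text: str) -> str:
    if len(text) > MAX_INPUT_CHARS:
        raise ValueError("input exceeds size limit")
    return text

class RunBudget:
    def __init__(self) -> None:
        self.tool_calls = 0
        self.tokens = 0

    def charge(self, tool_calls: int = 0, tokens: int = 0) -> None:
        # Called after every tool call and LLM round-trip.
        self.tool_calls += tool_calls
        self.tokens += tokens
        if self.tool_calls > MAX_TOOL_CALLS_PER_RUN or self.tokens > MAX_TOKENS_PER_RUN:
            raise RuntimeError("run budget exhausted: possible runaway loop")
```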
Germany: the Bundesnetzagentur takes the wheel
For teams in DACH, the practical question is who knocks on your door if something goes wrong. On 11 February 2026, the German Federal Cabinet approved the KI-MIG draft bill (KI-Marktüberwachungs- und Innovationsförderungsgesetz), and the answer is the Bundesnetzagentur, the Federal Network Agency. It will serve as Germany's primary market surveillance authority, notifying authority, and single point of contact under the Act.
The BNetzA is not building this from scratch. It already runs market surveillance for the Radio Equipment Directive and the Ecodesign rules, and it coordinates the German implementation of the Digital Services Act. A new internal body, the independent AI Market Surveillance Chamber, will handle sensitive cases (law enforcement, border management, justice), and a Coordination and Competence Centre called KoKIVO will pool AI expertise across sectors.
Two things matter in the meantime. The KI-Service Desk inside the BNetzA has been operational since July 2025 and is one of the few live SME compliance support channels in the EU. And BaFin retains sector-specific authority for high-risk AI directly tied to regulated financial activities, so banks and insurers will face two supervisors, not one. The KI-MIG is still going through Bundestag and Bundesrat, with second and third readings expected before the summer recess.
How we keep our own AI products compliant
We run our products on European infrastructure. Memory MCP and the rest of our SaaS surfaces are hosted on Hetzner in Frankfurt, not on AWS Frankfurt, which sits under the US CLOUD Act regardless of the data centre. The distinction matters more than most procurement teams realise.
Our code is open where it can be. The MCP server implementations for memory, CRM, GEO, and crew live on the studiomeyer-io GitHub organisation under MIT, so customers can read what we do with their data before they sign anything. Multi-tenant isolation runs at row level with explicit tenant IDs threaded through every query, and a static test in CI breaks the build if a handler forgets to include the tenant filter.
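A simplified sketch of what such a CI guard can look like is below. The table names and directory layout are hypothetical, and a production version would parse the code properly rather than regex-match string literals.

```python
import pathlib
import re
import sys

TENANT_TABLES = ("memories", "decisions", "observations")  # illustrative names
STRING_LITERAL = re.compile(r'"""(.*?)"""|"([^"\n]*)"', re.S)

def check(root: str = "src/handlers") -> int:
    failures = 0
    for path in pathlib.Path(root).rglob("*.py"):
        source = path.read_text(encoding="utf-8")
        for match in STRING_LITERAL.finditer(source):
            sql = match.group(1) or match.group(2) or ""
            # A SELECT on a tenant-scoped table must mention tenant_id.
            if "SELECT" in sql.upper() and any(t in sql for t in TENANT_TABLES):
                if "tenant_id" not in sql:
                    print(f"{path}: query missing tenant_id filter")
                    failures += 1
    return failures

if __name__ == "__main__":
    sys.exit(1 if check() else 0)
```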
The memory product itself is built around an audit trail. Decisions, learnings, and entity observations carry a source, a date, and a confidence score. That is useful for the AI engineer who wants to know why a memory was stored, and it is the same shape of artefact the AI Act asks for when it talks about traceability and post-market monitoring. We did not build it for compliance. We built it because we got tired of memories that lied to us. The compliance fit is a bonus.
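For illustration, the shape of such a record; the field names here are ours for this sketch, not necessarily the Memory MCP schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MemoryRecord:
    content: str       # the decision, learning, or observation itself
    source: str        # where it came from: conversation id, document, tool run
    recorded_at: str   # ISO 8601 timestamp
    confidence: float  # 0.0 to 1.0, how much we trust it
    tenant_id: str     # row-level isolation key
```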
Every chatbot we ship discloses its nature on first contact, and the Article 50 transparency wiring is shared across our products through a single library. We also maintain a DACH Legal Playbook inside the Academy that walks through GDPR, AI Act, and German implementation overlaps for the teams who work with us.
How we help customers comply
Most of the EU AI Act work for a small or mid-sized AI shop is not legal work. It is engineering and documentation. We offer dedicated advisory engagements for teams that want to stop guessing where they stand.
A discovery workshop maps every AI system in your stack to its risk tier. We have done this for chatbots that turned out to be high-risk agents, and for elaborate ML pipelines that turned out to be minimal-risk infrastructure. The classification is half the battle. The other half is knowing which obligations attach.
For deployers and providers heading into high-risk territory (the December 2027 cliff if the Omnibus passes, August 2026 if it does not), we set up the Annex IV technical file, the Article 9 risk management process, and the Article 12 logging that an audit team can actually read. We integrate the Article 14 human oversight pattern into your agent framework instead of bolting it on afterwards.
For chatbot and voice-agent teams, we wire the Article 50 transparency disclosure into your existing UX without breaking conversation flow, build the human escalation path, and document the result for your file.
For teams whose AI memory or knowledge layer sits on a US cloud and now has to move, we have done the Hetzner Frankfurt migration on our own products and on customer systems. The trade-offs (latency, region pinning, DPA chains, Schrems II evidence) are concrete and known.
Engagements range from a half-day classification workshop to a full audit-readiness package with documentation, logging, and oversight patterns deployed in code. We do not sell certifications, and we do not pretend to be lawyers. We sit between the legal advice you already have and the systems you actually have to ship.
The one thing to do this week
Pick the single highest-risk AI system you have in production or in development, and write down its classification under Annex III in one paragraph. If you cannot finish the paragraph, that is the gap. If the paragraph is easy, do it for the next system. The shape of an EU AI Act compliance programme is not different from the shape of any other engineering programme. The first thing you build is a list of what you have. Everything else follows.
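If a concrete starting point helps, an inventory entry can be as plain as the record below; the fields are our suggestion, not anything the Act mandates.

```python
# One hand-filled entry per system is enough to start (illustrative values).
SYSTEM_INVENTORY = [
    {
        "name": "support-chatbot",
        "purpose": "answers billing questions; can trigger refunds via the CRM",
        "annex_iii_match": "none on its face, but the refund authority needs review",
        "working_tier": "limited risk, pending review of the refund tool",
        "owner": "platform team",
    },
]
```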
The Digital Omnibus may give you sixteen more months on the heavy work. The deployer transparency obligations, the GPAI enforcement, and the national supervisor switches do not wait. 2 August is still a date that matters. So is 2 December if your training data summary is incomplete or your watermarking logic is not in code yet. And December 2027 sounds far until you start writing a fundamental rights impact assessment from scratch.
If you want help reading your stack against the law, we are here. The map is easier to draw with two pairs of eyes on it.
