AI Act Compliance Is a System Property

How AI Act compliance is shaped by built-in control and system design


Natasza Mikołajczak


Michał Nosowski

February 19, 2026 · 12 minute read


The EU AI Act is often explained through categories: prohibited uses, high-risk systems, transparency obligations, documentation duties, and so on.

This makes sense. It is, after all, what the regulations explicitly state. But it’s also a cognitive trap.

The AI Act is not so much about the technology itself as it is about responsibility. More precisely, it regulates whether an organization can prove control over how an AI system is trained, supervised, withdrawn, and, most importantly, operated.

Unfortunately, most organizations can’t.

From GDPR to the AI Act: how the compliance model has changed

GDPR changed how organizations think about personal data. It introduced ideas like lawfulness, purpose limitation, and accountability, and it pushed companies to formalize how data processing decisions were made.

The AI Act builds on that logic, but applies it to AI systems. 

As Michał Nosowski, a partner at Bytelaw law firm who provides professional legal support for IT companies, puts it:

“The AI Act does not introduce a new legal principle. Accountability has always been present in EU law. What changes is the level of technical proof required to support it.”

At a very basic level, it sets one expectation: if you use highly developed AI solutions (so-called high-risk systems), you should be able to explain how they work, what data they rely on, and who is responsible for the outcomes they produce. Compliance has moved away from simply showing that something was done correctly, and toward being able to show how and why it was done in the first place.

That difference might sound subtle, but it has real consequences. It’s also one of the reasons so many organizations are now struggling to feel confident about their AI Act compliance.

“We did everything right” is no longer a defence

If you read the AI Act carefully, one thing comes up again and again: traceability over time.

Logs, records, monitoring, and documentation are more than just paperwork. They’re how regulators assess whether safeguards actually existed (and worked) while the system was running. That includes how outputs were produced and how oversight was exercised.

The issue is that many AI systems generate outputs without keeping enough context to explain them later.

Michał Nosowski:

“Legal risk doesn’t only appear when something goes wrong. It also appears when an organization can’t demonstrate that the right precautions were actually in place, even if the outcome itself looks fine. It’s not about being right, or even about trying to be right. It’s about being able to show the exact process that led to a decision.”


Why policies and reviews alone are not enough for AI Act compliance

In response to the regulation, many organizations have tried to make their AI more compliant through policies, access rules, and review processes. That work matters and shouldn’t disappear. It just doesn’t solve the problem on its own.

The issue is that these controls sit around the system, not inside it. Most AI solutions were not built to carry control as part of how they run, but under the AI Act, responsibility attaches at the moment a system is deployed.

Michał Nosowski:

“When an AI system influences a decision, the AI Act assumes that someone must be able to answer for that decision. That obligation cannot be delegated away through contracts or policy statements. It sits with the organization that deployed the system.”

At first glance, that can feel strict, but it’s internally consistent: compliance is more than just proving what happened; it’s also about being able to show what didn’t happen, that is, demonstrating that nothing inappropriate occurred.

If you can’t reconstruct it, you can’t defend it: why control needs to be built in

To make AI compliance hold up in practice, control has to be built into the system itself. A control plane provides that foundation by defining data access boundaries, recording system behaviour, and preserving the context behind decisions. The system only functions within those limits, and that is what turns compliance into a property of the system.
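To make this more concrete, here is a minimal sketch of what such a control plane could look like around a single model call. The names (ControlPlane, allowed_sources, model_call) are illustrative only, not any product’s actual API; the point is simply that the boundary is enforced and the context is recorded as part of normal execution, not added afterwards.

    # Illustrative only: a hypothetical control plane around one AI model call.
    import json
    import datetime
    from typing import Callable

    class ControlPlane:
        def __init__(self, allowed_sources: set[str], audit_log_path: str):
            self.allowed_sources = allowed_sources  # data-access boundary
            self.audit_log_path = audit_log_path    # append-only interaction record

        def run(self, user: str, prompt: str, sources: list[str],
                model_call: Callable[[str, list[str]], str]) -> str:
            # 1. Enforce the boundary: the call only runs within the permitted limits.
            blocked = [s for s in sources if s not in self.allowed_sources]
            if blocked:
                raise PermissionError(f"Data sources not permitted: {blocked}")

            # 2. Execute the model call inside the controlled context.
            output = model_call(prompt, sources)

            # 3. Preserve the context behind the output so it can be reconstructed later.
            record = {
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "user": user,
                "prompt": prompt,
                "sources": sources,
                "output": output,
            }
            with open(self.audit_log_path, "a", encoding="utf-8") as log_file:
                log_file.write(json.dumps(record) + "\n")
            return output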

Take a look at DeepFellow. It’s a private AI control layer that sits between AI models, data sources, and the organization’s users. Its role is to make sure that every AI interaction is bounded, observable, and accountable by default.

From a compliance perspective, this translates into a few concrete capabilities.

  1. Control over training, validation, and test data
    The AI Act requires organizations to manage and document the data used to train and validate AI systems, particularly for high-risk use cases. With control layers like DeepFellow, training and test datasets remain internal and inspectable.

    How does this help companies stay compliant?
    ☑︎ You can clearly show what data was used and why.
    ☑︎ Data minimisation and suitability rules can be applied consistently.
    ☑︎ You’re not dependent on opaque, externally managed training pipelines.
  2. Logging and traceability of system operation
    The AI Act requires logging of relevant events during AI system operation to enable traceability and post-hoc review. DeepFellow generates operational logs as part of normal system execution. Things like interactions, data access, and system behaviour are all internally recorded and stay available for inspection.

    How does this help companies stay compliant?
    ☑︎ You can reconstruct how the system behaved at a specific point in time.
    ☑︎ Regulatory reviews don’t depend on third parties supplying or interpreting logs.
    ☑︎ What’s documented matches how the system actually behaves.

    This creates an audit trail that doesn’t require relying on external providers to supply or interpret logs.

    As Michał Nosowski puts it, "for high-risk AI systems, the AI Act requires logging, traceability, and effective human oversight. The key question is whether the organization can independently produce that evidence. If that depends on a vendor, the evidentiary burden is hard to discharge."
  3. Transparency and explainability at system level
    For high-risk AI systems, the AI Act requires outputs to be understandable in the context in which they were produced. DeepFellow preserves decision-relevant context across interactions. That makes it possible to explain not just what the system produced, but how it got there.

    How does this help companies stay compliant?
    ☑︎ You can answer regulatory and audit questions with system-level evidence.
    ☑︎ You can defend decisions after the fact without manually rebuilding missing context.
    ☑︎ Technical explainability lines up with legal transparency expectations.
  4. Enforceable human oversight
    The AI Act requires that AI systems remain subject to effective human oversight throughout their lifecycle. DeepFellow and similar control layers make oversight real. AI systems are constantly monitored, making intervention possible at any moment. This means oversight happens while the system runs, which is exactly when responsibility applies (a minimal sketch of such a review gate is shown after this list).

    How does this help companies stay compliant?
    ☑︎ Intervention is both immediate and demonstrable.
    ☑︎ Evidence of oversight is automatically generated.

    For DPOs, all of this means fewer situations where compliance depends on assurances or retrospective analysis.

    For CTOs, it means governance that can truly scale with the system.
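To illustrate points 2 and 4 above, here is a rough sketch of a review gate: outputs that match a review condition are held for a human decision while the system runs, and both the intervention and routine releases end up in the same event log. The names are hypothetical and do not come from DeepFellow; they only show the pattern of oversight that is enforced and evidenced at runtime.

    # Illustrative only: a hypothetical runtime review gate with evidence logging.
    import json
    import datetime
    from typing import Callable, Optional

    def log_event(path: str, event: dict) -> None:
        # Timestamp and append every event, so behaviour can be reconstructed later.
        stamped = {"timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(), **event}
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(stamped) + "\n")

    class OversightGate:
        def __init__(self, log_path: str, needs_review: Callable[[str], bool]) -> None:
            self.log_path = log_path
            self.needs_review = needs_review  # predicate: which outputs need a human decision

        def release(self, output: str,
                    reviewer_decision: Optional[Callable[[str], bool]] = None) -> Optional[str]:
            if self.needs_review(output):
                # Oversight happens during operation: the output is held until a human
                # approves or rejects it, and that decision itself becomes evidence.
                approved = bool(reviewer_decision and reviewer_decision(output))
                log_event(self.log_path, {"event": "human_review", "approved": approved,
                                          "output": output})
                return output if approved else None
            log_event(self.log_path, {"event": "auto_release", "output": output})
            return output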

Achieving system-level control under the EU AI Act

Once you look at AI through the lens of the AI Act, most implementation choices collapse into a single design question: where is control enforced while the system is running?

At this point, discussions often turn into feature comparisons between AI tools. The thing is, that’s not really helpful. In practice, different AI deployment approaches lead to very different compliance postures under the AI Act.

The table below summarises how these differences tend to play out at system level.

[Table: Architectural Choice as Regulatory Destiny: Control & Compliance under the AI Act]

What the AI Act ultimately forces organizations to decide

It is tempting to frame the AI Act as a new obstacle: a regulatory burden layered onto otherwise functional systems. But the weaknesses exposed by the AI Act existed long before the regulation; they were simply easier to ignore.

From the beginning, AI was deployed with a quiet assumption that responsibility could be dealt with later. The thing about later is it always comes sooner than expected. 

If an AI system acts, someone must be able to answer for it. Some organizations can already show how their systems behave, who governs them, and how they deal with oversight. Others rely on explanations that only exist once something goes wrong.

How your organization responds when the time comes depends entirely on you and the AI systems you choose, so choose wisely.

Authors


Natasza Mikołajczak

Writer and marketer with 4 years of experience writing about technology. Natasza combines her professional background with training in social and cultural sciences to make complex ideas easy to understand and hard to forget.


Michał Nosowski

An attorney-at-law, focused on data law, intellectual property law, and contract law. A partner in Bytelaw law firm, providing professional legal support for IT companies. He is fascinated by the interconnections between law and the world of IT. He works closely with entrepreneurs from the IT sector, AI companies and start-ups.

