There is a conversation happening in a lot of AI product teams right now. Someone has mentioned the EU AI Act. Legal has sent a link to something long and dense. The launch date has not moved. And nobody has quite worked out whether this is urgent, someone else’s problem, or both.
It is probably both. And the good news is that it is more manageable than it looks, if you approach it as a product discipline rather than a legal one.
Does the Act apply to you? Almost certainly yes.
The EU AI Act came into force in August 2024, and its obligations are now phasing in across different categories of AI systems. If you are building an AI product and placing it on the market, whether for payment or for free, you are almost certainly a Provider under the Act, and possibly a Deployer as well. Provider is the category that carries the most significant obligations.
The scope is wider than most teams assume. If your product’s output is used by anyone in the EU, the Act applies to you, regardless of where your company is based. A US startup shipping a GenAI product used by European customers is in scope. A UK company deploying an AI tool in Germany is in scope.
The obligations vary depending on how your system is classified. High-risk systems, meaning those used in areas like employment, education, law enforcement, or critical infrastructure, face the most stringent requirements. But even outside the high-risk category, most systems still carry transparency and evaluation obligations, and those are not optional.
If you want to work out exactly where your system sits, the Future of Life Institute has built a clear compliance checker at artificialintelligenceact.eu that walks you through it without a law degree.
Strip away the legal language, and the Act is asking three things of AI product providers: documentation, evaluation evidence, and transparency.
First, documentation. The Act requires technical documentation that covers how the system was designed, what data it was trained on, and how it was tested. This is not about sharing proprietary information. It is about being able to demonstrate, if asked, that the system was built with care.
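To make that concrete, here is one way a team might capture those three elements as a record that lives alongside the model artefacts. This is a minimal sketch under assumed conventions: the class, field names, and example values are hypothetical illustrations, not a format the Act prescribes.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TechnicalDocRecord:
    """Hypothetical documentation record; every field name here is
    illustrative, not taken from the Act's text."""
    system_name: str
    intended_purpose: str        # what the system is for, in plain language
    design_summary: str          # architecture and key design decisions
    training_data_sources: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    testing_summary: str = ""    # how the system was tested before release

record = TechnicalDocRecord(
    system_name="support-assistant-v3",
    intended_purpose="Answer customer billing questions in EU markets",
    design_summary="Retrieval-augmented LLM with a rule-based guardrail layer",
    training_data_sources=["licensed support transcripts", "public product docs"],
    known_limitations=["No legal or financial advice", "English, German, French only"],
    testing_summary="Automated regression suite plus human review in five markets",
)

# Serialise the record so it can be versioned next to the model artefacts.
print(json.dumps(asdict(record), indent=2))
```

Kept under version control, a record like this evolves with the system instead of being reconstructed from memory when someone asks for it.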
Second, evaluation evidence. This is the part most teams are least prepared for. The Act requires providers to demonstrate that the system was evaluated before deployment, particularly in relation to safety, accuracy, and the potential for harm. Automated testing alone is not sufficient evidence. The Act emphasises human oversight and real-world validation.
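One way to make "evaluation evidence" tangible is a structured finding record: what was tested, in which market, what a named human reviewer observed, and how severe it was. Again, a sketch under assumed conventions rather than a prescribed format; all names and values below are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Severity(Enum):
    INFO = "info"
    MINOR = "minor"
    MAJOR = "major"
    BLOCKING = "blocking"

@dataclass(frozen=True)
class EvaluationFinding:
    """Hypothetical evidence record for one human-led evaluation finding."""
    finding_id: str
    market: str           # where the evaluator was based, e.g. "DE"
    scenario: str         # the real-world task the evaluator performed
    observed: str         # what actually happened, in the evaluator's words
    severity: Severity
    reviewed_by: str      # a named human reviewer: the oversight trail
    reviewed_on: date

finding = EvaluationFinding(
    finding_id="EVAL-0042",
    market="DE",
    scenario="Request a refund in formal German",
    observed="Response uses informal address; tone reads as dismissive",
    severity=Severity.MAJOR,
    reviewed_by="j.mueller",
    reviewed_on=date(2025, 3, 14),
)
```

The exact fields matter less than the trail: each finding ties an observation to a scenario, a market, and a human reviewer, which is the kind of oversight evidence automated test logs alone do not give you.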
Third, transparency. The Act requires that AI systems do not undermine users' ability to make informed decisions. Transparency about what the system does and how it reaches its outputs is a core obligation, not an optional feature.
None of these are unreasonable. Most well-run product teams are doing versions of all three already. The gap is usually in documentation and evaluation evidence, not intent.
Here is the thing most teams miss when they approach the EU AI Act as a compliance exercise.
The evidence the Act requires you to generate (structured evaluation findings, documented methodology, demonstrated real-world validation) is exactly the evidence that gives your product team, your leadership, and your customers confidence that the product is ready.
Compliance and market readiness are pointing in the same direction. The teams that understand this are building one process that serves both purposes, rather than two separate workstreams that duplicate effort and slow everything down.
The Act requires you to show that you evaluated your product with real users in real contexts. In GAT's experience evaluating GenAI products across 190+ countries, this kind of evaluation surfaces things automated testing never does: cultural misalignments, trust failures, and edge cases that only appear when real people in real markets interact with the product.
A global fintech company preparing to launch an AI-powered assistant across Europe conducted a human-led evaluation across five markets. The evaluation surfaced 11 cases where the model’s tone shifted from “efficient” to “dismissive” depending on language and cultural context, and two scenarios where responses could be interpreted as non-compliant with local financial guidance standards. None of these issues had been identified through internal prompt testing or automated evaluation.
Those findings were useful for compliance. They were more useful for the product. They changed what shipped.
If you are a Provider or Deployer under the EU AI Act and you are not sure where to begin, three things are worth doing before anything else. First, classify your system: the compliance checker at artificialintelligenceact.eu will tell you which obligations actually apply to you. Second, audit your technical documentation against what the Act asks for: how the system was designed, what data it was trained on, and how it was tested. Third, plan a structured human evaluation in the markets where you intend to launch, so the real-world evidence exists before anyone asks for it.
The EU AI Act is not going away. The teams that treat it as a compliance exercise will move slowly and defensibly. The teams that treat it as a product discipline will ship better products, faster, with evidence to back it up.
That is the real advantage.
If you want to understand what structured real-world human evaluation looks like in practice, and how it can serve both your compliance and market readiness needs, the AI GroundTruth team is available for a conversation.