Europe relaxes AI regulations: pragmatism or strategic risk?

In a recent broadcast on BNR Nieuwsradio, I discussed an important development in European AI regulation. The European Council has decided to delay part of the AI Act while also easing certain requirements. That may sound like a technical adjustment, but it says a lot about Europe’s position in the global AI race.

Delayed rules for high-risk AI systems

The strictest rules within the AI Act apply to so-called high-risk applications, such as AI used in healthcare, the labor market, education and government. These are the areas where AI has a direct impact on people.


The implementation of these rules is now postponed until the end of 2027, and in some cases even until 2028. The official reason is that the necessary standards and evaluation frameworks are not yet ready. Without clear guidelines, enforcement would create confusion rather than clarity.


At the same time, small and medium-sized companies are being given more room to prepare for compliance.


More flexibility for companies, more pressure from industry

This shift can be seen as pragmatism. But there is more at play.


In recent months, there has been strong pressure from industry and from member states such as France and Germany, which are keen to protect their own AI companies in an increasingly competitive landscape dominated by the United States and China.


Europe has long been known as the global rulemaker in digital policy. Now it faces a growing tension between protecting citizens and staying competitive.


Balancing protection and innovation

Good regulation has clear benefits. It protects citizens and provides companies with legal certainty. That can even become a competitive advantage if Europe positions itself as a trusted AI region.


At the same time, there are real downsides. Regulation can slow down innovation, especially for startups and smaller companies that do not have large legal teams.


In a fast-moving market like AI, every month of delay allows other regions to move ahead.


Not a simple relaxation, but a shift in priorities

Although it may seem that Europe is becoming less strict, the reality is more nuanced. On some fronts, regulation is being delayed or softened. On others, it is becoming stricter.


The European Commission is working on new transparency requirements, ensuring that users can identify AI-generated content. There are also proposals to restrict harmful applications, such as generating non-consensual sexual content or abusive material.


This is not a move toward less regulation, but rather a shift in priorities.


2026 will be the year of implementation

The real challenge lies not in drafting regulation, but in implementing it.


The coming period will focus on execution. Companies need to understand what is expected of them, regulators must be able to enforce the rules, and clear, practical guidelines are required.


The key question is simple: can organizations actually work with these rules in practice?


What this means for your organization

For organizations working with AI, this is a critical moment.


The conversation is shifting from abstract principles to practical application. What do you need to do, by when, and how can you remain compliant without slowing down innovation?


AI regulation is no longer just a legal topic. It has become a strategic issue.


Are you looking for a keynote that helps your audience understand what these changes mean for your sector and organization? Feel free to get in touch.
