EU Lawmakers Reach Provisional Deal on AI Regulations

After nearly 36 hours of intense negotiations, EU lawmakers reached a provisional agreement on the landmark AI Act, set to take effect in 2026.

Daniel Moretti

May 10, 2026 · 3 min read


The provisional agreement, struck after nearly 36 hours of intense negotiations, caps years of wrangling over the landmark AI Act, which is set to take effect in 2026. The extensive regulation could impose fines of up to 35 million euros or 7% of a company's global turnover for non-compliance, according to Reuters.

The European Union aims to foster innovation with this new AI Act. Yet, its stringent compliance requirements and potential for hefty fines may stifle smaller European tech companies, according to industry reports from European startup associations.

While the EU's AI Act marks a significant step towards ethical AI, its true impact on innovation and global competitiveness remains uncertain as implementation challenges emerge.

The Core of the EU's AI Rulebook

  • The agreement bans real-time biometric identification in public spaces, with narrow exceptions, according to the European Parliament.
  • High-risk AI systems, like those in critical infrastructure or employment, will face strict obligations, according to the EU Commission.
  • General-purpose AI models, including platforms like ChatGPT, will face transparency requirements, according to a statement from the negotiators.

The Act categorizes AI systems by risk level, imposing varying degrees of scrutiny to protect fundamental rights in sensitive applications. While aimed at safety, the 'high-risk' classification will disproportionately burden European SMEs with compliance costs that could exceed their annual revenue.

Navigating the New Regulatory Landscape

Fines for non-compliance with the EU AI Act could reach 7% of a company's global annual turnover or 35 million euros, whichever is higher, according to the draft text. These unprecedented penalties establish regulatory compliance as a primary competitive differentiator, effectively creating a barrier to entry for AI startups that lack substantial legal and auditing resources.

Some industry groups warn that these compliance burdens could stifle innovation. Legal experts, conversely, argue that the Act's broad definitions of 'AI system' and 'high-risk' are necessary to future-proof it against evolving technology. That ambiguity, however, creates legal uncertainty, making it difficult for developers to innovate without fear of retrospective non-compliance, according to tech industry lobbyists.

Civil society groups welcome the new protections but note loopholes, particularly around biometric surveillance, according to digital rights NGOs. The Act's broad 'high-risk' definition, coupled with fragmented enforcement across member states, means European tech companies will likely prioritize regulatory avoidance over groundbreaking innovation. The risk is ceding leadership in critical AI sectors to less regulated markets.

What Happens Next? Your Questions Answered

Implementation in 2026

The EU AI Act is expected to take full effect in 2026, following a two-year implementation period, according to Marketingprofs. During this time, member states will establish national supervisory authorities, and companies will adapt their AI systems to meet stringent compliance requirements.

Support for Smaller Companies

The Act includes a 'sandbox' approach, allowing small and medium-sized enterprises (SMEs) to test AI systems under supervision, according to the EU Commission. However, despite intentions to foster innovation, bureaucratic complexity and legal uncertainty will likely deter early-stage European startups from utilizing these mechanisms, pushing them towards less regulated markets.

Enforcement

Member states will establish national supervisory authorities to enforce the EU AI Act, according to the draft text. Fragmented interpretation across 27 member states would undermine a unified European digital single market for AI, creating additional complexity and cost for cross-border companies.

The EU's ambition to be a global standard-setter for ethical AI, while laudable, appears to trade immediate European innovation capacity for long-term regulatory influence. By 2026, that strategy could leave European AI startups like Aleph Alpha lagging behind global competitors, with larger firms better equipped to absorb the regulatory burdens.