EU Parliament Adopts Position on AI Act Simplification: High-Risk AI Rules Delayed to 2027
In March 2026, the European Parliament adopted its negotiating position on the AI Act's "Digital Omnibus Simplification," delaying compliance deadlines for high-risk AI systems from August 2026 to August 2027. Key factors behind the delay include prohibitive compliance costs for SMEs (€500,000–€2 million), lagging technical standard development, and competitive pressure from the US and China. The package also introduces expanded regulatory sandboxes and simplified procedures for SMEs. The final version still requires trilogue negotiations with the EU Council.
Legislative Background
In March 2026, the European Parliament formally adopted its negotiating position on the AI Act's "Digital Omnibus Simplification Act," signaling a major adjustment to the EU's original AI regulatory timeline. The most significant change is the postponement of compliance requirements for high-risk AI systems to 2027, reflecting the EU's attempt to balance technological innovation with regulatory burden.
The EU AI Act, the world's first comprehensive AI regulation, was officially passed in 2024. It employs a risk-based regulatory framework: unacceptable-risk AI systems (such as social scoring) are completely banned; high-risk AI systems (including medical diagnostics, recruitment screening, and judicial assistance) must meet strict transparency, data governance, and human oversight requirements; while limited-risk and minimal-risk AI systems face lighter regulation.
Reasons for the Delay
The decision to postpone high-risk AI compliance to 2027 stems from multiple factors. First, European AI companies have consistently reported that compliance costs are prohibitively high. According to a DIGITALEUROPE survey, small and medium-sized AI enterprises need to invest between €500,000 and €2 million to meet high-risk system compliance standards — a particularly heavy burden for startups.
Second, technical standard development has fallen behind schedule. The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) are responsible for developing the AI Act's technical standards, but as of March 2026, several critical standards remain in draft form.
Third, international competitive pressures have intensified. The United States has significantly relaxed AI regulation under the Trump administration, while China is accelerating AI industrialization through its "AI+" strategy. The EU is concerned that an overly strict regulatory timeline could drive AI talent and investment overseas. French President Macron has repeatedly argued in public that "regulation should not become an obstacle to innovation."
Core Content of the Digital Omnibus Simplification Act
The adopted negotiating position includes several important adjustments:

1. The compliance deadline for high-risk AI systems moves from August 2026 to August 2027, giving companies an additional 12 months to prepare.
2. An expanded "regulatory sandbox" program allows companies to test high-risk AI applications in controlled environments without immediately meeting full compliance requirements.
3. Simplified compliance procedures for SMEs, including reduced documentation requirements and free compliance consulting services.
4. Clearer tiered regulatory standards for general-purpose AI (GPAI) models, with more specific transparency and safety-assessment requirements for systemic-risk models (such as those exceeding 100 billion parameters).
Stakeholder Reactions
The European tech industry broadly welcomed the decision. Mistral AI's CEO stated that "the delay is a pragmatic choice that gives the industry necessary breathing room." The German AI Startup Association also issued a supportive statement, arguing that the additional time will help companies better understand and meet compliance requirements.
However, AI ethics and civil rights organizations expressed concern. European Digital Rights (EDRi) criticized the delay as "a compromise favoring corporate interests at the expense of fundamental rights protection." EDRi pointed out that during the extended period before high-risk AI rules take effect, consumers may face risks from opaque AI decision-making, particularly in sensitive areas such as credit assessment, job screening, and medical diagnostics.
Global Regulatory Landscape
The EU AI Act's adjustment is also influencing global AI regulatory trends. Japan and South Korea are developing their own AI regulations referencing the EU framework, and the delay may encourage Asian countries to adopt more gradual regulatory approaches. While the United States lacks unified federal AI legislation, state governments in California, Colorado, and New York are advancing similar risk-based regulatory frameworks.
China has taken a different path, regulating specific AI applications through a series of targeted regulations (such as deep synthesis management provisions and generative AI management measures) rather than adopting an EU-style comprehensive legislative approach.
Future Outlook
The Parliament's negotiating position still requires trilogue negotiations with the EU Council, with agreement on a final version expected in the second half of 2026. Whatever the final timeline, the EU AI Act remains the world's most ambitious AI regulatory framework, and its implementation experience will serve as an important reference point for other countries and regions.