EU to delay ‘high risk’ AI rules until 2027 after Big Tech pushback

By Supantha Mukherjee and Bart H. Meijer

BRUSSELS/STOCKHOLM (Reuters) - The European Commission proposed on Wednesday streamlining and easing a slew of tech regulations, including delaying some provisions of its AI Act, in an attempt to cut red tape, head off criticism from Big Tech and boost Europe’s competitiveness.

The move comes after the EU watered down some environmental laws following blowback from business and the U.S. government. Europe’s tech rules have faced similar opposition, though the Commission has said the rules will remain robust.

“Simplification is not deregulation. Simplification means that we are taking a critical look at our regulatory landscape,” a Commission official said during a briefing.

‘HIGH RISK’ AI USE IN JOB APPLICATIONS, BIOMETRICS

In a ‘Digital Omnibus’, which must still be debated and voted on by European countries, the Commission proposed delaying the EU’s stricter rules on the use of AI in a range of areas seen as high risk, to December 2027 from August 2026.

That includes AI use in biometric identification, road traffic applications, utilities supply, job applications and exams, health services, creditworthiness and law enforcement. Consent for pop-up ‘cookies’ would also be simplified.

The Digital Omnibus, or simplification package, covers the AI Act, which became law last year, the landmark privacy legislation known as the General Data Protection Regulation (GDPR), the e-Privacy Directive and the Data Act, among others.

Proposed changes to the GDPR would also allow Alphabet’s Google, Meta, OpenAI and other tech companies to use Europeans’ personal data to train their AI models.

(Reporting by Supantha Mukherjee in Stockholm and Jan Strupczewski and Foo Yun Chee in Brussels)
