On April 7, CDT Europe and a group of civil society organizations raised concerns in an open letter to European Commission Executive Vice-President Virkkunen and Commissioner McGrath, following the Commission's recent decision to withdraw the Artificial Intelligence Liability Directive (AILD). The organizations stressed the need for a liability framework to help individuals seek compensation for damages caused by AI systems; without one, victims would face significant difficulty proving that an AI system caused them harm.
At a hearing before the European Parliament's JURI Committee, Executive Vice-President Virkkunen defended the AILD withdrawal, citing the goals of reducing overlapping obligations and simplifying compliance for businesses. She stressed that the AI Act should be fully implemented before new legislation is introduced.
Following the hearing, Axel Voss and Brando Benifei, rapporteurs for the Directive and the AI Act respectively, expressed their concern in a joint letter. They pointed to the gaps the withdrawal leaves for victims of AI-related harm and suggested that the European Commission include a revised proposal in the upcoming Digital Omnibus Package.
On April 9, the European Commission introduced the AI Continent Action Plan to facilitate AI development across the EU. The strategy focuses on computing infrastructure, data, regulatory simplification, and attracting talent. Notable components include a Data Union Strategy and measures to ease compliance burdens for AI developers. The plan also foresees an AI Act Service Desk, to be launched in July 2025, providing compliance support to startups and SMEs, as well as a Cloud and AI Development Act expected by early 2026.
The plan further announced five consultative processes, including a public consultation on the Data Union Strategy and calls for evidence on various AI initiatives; these remain open for stakeholder submissions until June 2025.
Additionally, the European Commission has opened a public consultation on guidelines for general-purpose AI (GPAI) models, aiming to clarify issues such as model definitions, provider roles, and open-source exemptions. The guidelines will complement the Code of Practice on GPAI, with both the guidelines and the final Code expected by August 2025.
In other EU AI news, the Irish Data Protection Commission has opened an investigation into Grok AI, developed by xAI. Meanwhile, MEPs have warned the Commission against weakening its definition of open-source AI, urging it to clarify that certain models do not qualify as open source under existing guidelines.
Spain's draft AI bill has been criticized for exempting public authorities from fines, potentially weakening AI safeguards. Critics advocate for stricter measures, including penalties for officials.
The next AI Pact webinar, promoting understanding of AI regulations, is set for May 27.