Bastiaan Bruyndonckx
Information Communication Technology
Information Governance & Data Protection
Telecommunications, Media & Technology
Commercial law
Dispute Resolution
Intellectual Property (IP)
bastiaan.bruyndonckx@lydian.be
The rapid development and deployment of artificial intelligence (AI), its increasing use in almost every aspect of our lives and the new regulatory framework present businesses around the world with numerous challenges. Yesterday, one of the most ambitious law-making projects, the AI Act, which aims to harmonise the protection of citizens from this rapidly evolving technology and to encourage research and industrial capacity in this regard, was approved by a vast majority in the European Parliament. This ground-breaking regulation will significantly reshape how businesses and organisations in Europe use AI.
The popular buzzword had already caught the attention of lawmakers a few years ago. For several years, the European Commission (EC) had been researching the topic, leading to the first proposal in April 2021, followed by the positions of the Council (December 2022) and Parliament (June 2023). After a political agreement on the text had been reached on 9 December 2023, finally, on 13 March 2024, the European Parliament approved the proposed AI Act.
The prolonged legislative process for the AI Act is unsurprising, given its ambitious aims. Regulating AI requires careful consideration of ethical, technological, and societal factors. The extended timeline reflects the complexity of crafting effective legislation to address evolving challenges.
In this article, we delve into the crucial components of the AI Act and its implications for businesses operating within the EU.
The ambition of the AI Act is to establish a framework that is future-proof. In a world where technology is constantly evolving, defining what constitutes an ‘AI system’, and thereby determining the scope of application of the AI Act, proved one of the most challenging tasks. The European legislator finally opted for a broad and easily understandable definition that emphasises the autonomy of the system and its ability to adapt.
An ‘AI System’ is defined by the AI Act as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The obligations laid down in the AI Act are not only imposed on developers. Instead, the AI Act captures the entire value chain, making providers, distributors and deployers equally subject to its scope of application. It therefore applies to a wide range of actors, including:
The scope of application is particularly broad given the extraterritorial effect of the AI Act: actors established in a third country are also subject to the AI Act to the extent their AI systems affect persons located within the EU. Hence, the AI Act is likely to have a similar effect on worldwide AI regulation as the GDPR has had on the worldwide development of data protection regulation.
The final text of the AI Act lists several exclusions to its scope of application. In particular, the AI Act shall not apply to:
The AI Act follows a risk-based approach, identifying different risk categories and establishing obligations for AI systems based on their potential risks and level of impact.
The AI Act prohibits certain AI practices that are considered a clear threat to the safety, livelihoods, and rights of people, including:
High-risk AI systems will be subject to a comprehensive mandatory compliance regime.
An AI system could be considered high risk when deployed in any of the following areas as described in Annex III:
Moreover, the AI Act recognises that a product may be both an AI system and another type of product at the same time. Where such products are already subject to certain EU legislation, as is the case for medical devices, vehicles, planes, toys, etc., the AI Act provides that AI systems constituting such products are considered ‘high-risk’ AI systems. Similarly, where an AI system is used as a safety component of a product that is subject to such EU legislation and that product is required to undergo a third-party conformity assessment before being placed on the market or put into service in the EU, the AI system serving as a safety component of that product will automatically be considered a ‘high-risk’ AI system.
High-risk AI systems will be subject to specific obligations, including:
Besides the specific obligations for high-risk AI systems, the AI Act imposes additional transparency obligations on providers and deployers of certain AI systems vis-à-vis natural persons:
So-called general-purpose AI models, i.e. AI models that display significant generality, are capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and can be integrated into a variety of downstream systems or applications, must meet certain requirements, including:
Additionally, for general-purpose AI models with systemic risk, additional requirements are imposed, including:
Interestingly and in line with the EC’s ambition to make the EU a worldwide leader in AI and to support SMEs, while the AI Act aims at laying down a regulatory framework for AI, it also contains measures in support of innovation. Such measures include (i) AI regulatory sandboxing schemes, (ii) measures to reduce the regulatory burden for SMEs and start-ups and (iii) real-world testing.
Non-compliance with the rules laid down in the AI Act can give rise to strict enforcement measures, including administrative fines, warnings and non-monetary measures. The penalties provided for must be effective, proportionate and dissuasive and must take into account the interests of SMEs, including start-ups, and their economic viability.
The fines that can be imposed are even higher than those under the GDPR: administrative fines can reach up to EUR 35 000 000 or, in the case of an undertaking, up to 7% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
As there are still some errors in the different language versions, the AI Act will be subject to a corrigendum procedure and a linguistic review. It is expected to be finally adopted before the end of the legislative term, but still needs to be formally endorsed by the Council.
The AI Act will enter into force twenty (20) days after its publication in the Official Journal. Its provisions will apply in stages, with the AI Act becoming fully applicable 24 months after its entry into force. For some topics, the application date will differ (mostly being shorter), such as for:
If you seek further insight into the specifics of the AI Act and the obligations it imposes, Lydian is hosting a webinar on 28 March 2024 at 11:30 dedicated to this new regulatory framework. Join us for a deep dive into the AI Act by subscribing here.
liese.kuyken@lydian.be