
EU AI Act is on its way!

Mar 22, 2024

    Arttu Ahava

    The author, Arttu Ahava, is a European trade mark and design attorney and lawyer working with Berggren Oy, Finland’s leading IPR service provider. The author has been involved in the preparation of the revised Trademarks Act and has assisted clients in connection with a similar revision process carried out with regard to EU trademarks.

    The AI Act is almost ready

    At the tail end of a long, multi-stage legislative process, the Artificial Intelligence Act (AI Act) of the European Union is finally nearing completion as the European Parliament approved the latest version on March 13, 2024. The Council still needs to approve the final version before its publication in the Official Journal of the European Union.

    The act will enter into force 20 days after its publication and will be fully applicable 24 months later, with some exceptions. For example, regulations concerning generative AI will take effect after just nine months, whereas certain obligations related to high-risk AI systems will become applicable only after 36 months.

    The Commission initially published the AI legislative proposal on April 21, 2021. Due to the rapid development of AI, the original proposal became somewhat outdated during its legislative process – for instance, it did not include any rules on generative AI, which is unsurprising as those applications were only in the development phase at the time. Rules regarding generative AI were incorporated along the way, and the legislative proposal has evolved significantly. The EU has felt a strong need to act as a trailblazer and set a framework for AI systems while some Member States try to protect their own growing AI business ecosystems from overly strict regulation. In December 2023, the Parliament and the Council reached a consensus on AI legislation, after which the process has progressed rapidly.

    The AI Act aims to mitigate risks of AI to society and individuals

    The goal of the AI Act is to protect fundamental rights from harmful effects of AI, to promote innovation, and to improve the functioning of the internal market by creating a common legal framework for AI. The act introduces new obligations for both developers and users of AI applications and includes a four-tier risk classification for AI systems. These obligations are based on the potential risks of AI and the extent of their impacts.

    The AI Act classifies AI systems into four risk levels based on the concrete risk they pose: unacceptable risk, high risk, limited risk, and minimal risk. Harmful AI applications that pose an "unacceptable risk" endangering users' rights are completely prohibited. These include, for example, biometric classification systems based on sensitive characteristics. In principle, the use of biometric identification systems by law enforcement authorities is prohibited. The AI Act includes an exhaustive list of exceptions to this main rule, each subject to strict protective measures. One such situation is the prevention of a terrorist attack. In this respect, the legislative proposal has changed – as a result of strong lobbying, the original proposal's virtually complete ban on real-time biometric identification (e.g., facial recognition by AI) has been relaxed, though the outcome, with all its exceptions, is complicated and hard to interpret or apply.

    Core area of the act: high-risk systems

    The AI Act primarily regulates high-risk AI systems that may cause harmful effects to users' safety or threaten their fundamental rights. High-risk AI systems must meet several requirements regarding risk management before they can be put on the market or taken into use.

    High-risk systems include, among others, safety components of machines or medical devices, safety systems for various vehicles, AI systems used as medical or diagnostic devices, safety systems for critical infrastructure (e.g. water and electricity distribution), and HR and recruitment systems.

    The AI Act will particularly affect AI systems that fall into the high-risk category. The providers of these systems must evaluate and manage risks themselves, maintain usage logs, be transparent and precise, and ensure proper supervision. Users have the right to file complaints about AI systems and to receive meaningful explanations of decisions based on high-risk AI systems that affect their rights.

    Changes to the (presumably) final version of the act

    Compared to the Commission's original proposal, the final proposal includes new elements due to the development of AI:

    1. The proposal includes new rules for high-impact general-purpose AI models (commonly used generative AI models like ChatGPT) that may pose systemic risks. These rules also apply to high-risk AI systems.
    2. A revised governance system, which includes some implementation requirements at the EU level.
    3. The prohibition list for AI systems posing an "unacceptable risk" has been expanded, but the use of biometric remote identifiers by law enforcement authorities in public spaces is possible with certain protective measures.
    4. Developers of risk-prone AI systems are required to evaluate the realization of fundamental rights before deploying AI systems.


    Interestingly, there is also a requirement to label content generated by generative AI, for example in the metadata of a picture or video. Even as the use of generative AI has become more widespread, marking AI-generated content has by no means become common practice, so this requirement may affect how generative AI is used, at least in the EU market.

    What's next?

    The AI Act introduces many new obligations, especially for those using AI in high-risk areas. Although the transition period is lengthy, it is crucial to determine as soon as possible whether your business uses AI in such areas. Under the AI Act, even machine-learning applications or other "conventional" AI systems that have been in use for years may suddenly fall into the high-risk category, requiring compliance actions from their developers. We at Berggren are happy to assist you with any AI-related questions.

    The blog is written in cooperation with Legal Trainee Katariina Kokkonen.
