Wed. Feb 12th, 2025
artificial intelligence, network, programming
The development of AI proceeds. Photo by geralt on Pixabay

Brussels, 3 February 2025

As of Sunday, 2 February, the first rules under the Artificial Intelligence Act (AI Act) started to apply. These cover the definition of an AI system, AI literacy obligations, and a very limited number of prohibited AI use cases that pose unacceptable risks in the EU.

To facilitate innovation in AI, the Commission will publish guidelines on the definition of an AI system. These aim to assist industry in determining whether a software system constitutes an AI system.

The Commission will also release a living repository of AI literacy practices gathered from AI systems’ providers and deployers. This will encourage learning and exchange among them while ensuring that users develop the necessary skills and understanding to effectively use AI technologies.

To help ensure compliance with the AI Act, the Commission will publish guidelines on the prohibited AI practices that pose unacceptable risks to citizens’ safety and fundamental rights.

These guidelines will explain the legal concepts and provide practical use cases, based on stakeholder input. They are not binding and will be updated as necessary.

The Commission has also launched several initiatives to promote innovation in AI, from the AI Innovation Package supporting start-ups and SMEs to the upcoming AI Factories, which will provide access to the massive computing power that start-ups, industry and researchers need to develop their AI models and systems.

More information and the full text of the AI Act are available from the European Commission.

 


AI Act background

The AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence) is the first-ever comprehensive legal framework on AI worldwide. It addresses the risks of AI, positions Europe to play a leading role globally, and aims to foster trustworthy AI in Europe.

The AI Act sets out a clear set of risk-based rules for AI developers and deployers regarding specific uses of AI. The AI Act is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package, the launch of AI Factories, and the Coordinated Plan on AI. Together, these measures guarantee safety, fundamental rights and human-centric AI, and strengthen uptake, investment and innovation in AI across the EU.

To facilitate the transition to the new regulatory framework, the Commission has launched the AI Pact, a voluntary initiative that seeks to support the future implementation of the Act, engage with stakeholders, and invite AI providers and deployers from Europe and beyond to comply with the key obligations of the AI Act ahead of time.

Why do we need rules on AI?

The AI Act ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes.

For example, it is often not possible to find out why an AI system has made a decision or prediction and taken a particular action. It may therefore become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.

Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.

A risk-based approach

The AI Act defines four levels of risk for AI systems:

Pyramid showing the four levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk. Source – EU Commission
Unacceptable risk

All AI systems considered a clear threat to the safety, livelihoods and rights of people are banned. The AI Act prohibits eight practices, namely:

  1. harmful AI-based manipulation and deception
  2. harmful AI-based exploitation of vulnerabilities
  3. social scoring
  4. individual criminal offence risk assessment or prediction
  5. untargeted scraping of the internet or CCTV material to create or expand facial recognition databases
  6. emotion recognition in workplaces and education institutions
  7. biometric categorisation to deduce certain protected characteristics
  8. real-time remote biometric identification for law enforcement purposes in publicly accessible spaces
High risk

AI use cases that can pose serious risks to health, safety or fundamental rights are classified as high-risk. These high-risk use cases include:

  • AI safety components in critical infrastructures (e.g. transport), the failure of which could put the life and health of citizens at risk
  • AI solutions used in educational institutions that may determine access to education and the course of someone’s professional life (e.g. scoring of exams)
  • AI-based safety components of products (e.g. AI applications in robot-assisted surgery)
  • AI tools for employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment)
  • certain AI use cases used to grant access to essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan)
  • AI systems used for remote biometric identification, emotion recognition and biometric categorisation (e.g. an AI system used to retroactively identify a shoplifter)
  • AI use cases in law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence)
  • AI use cases in migration, asylum and border control management (e.g. automated examination of visa applications)
  • AI solutions used in the administration of justice and democratic processes (e.g. AI solutions to prepare court rulings)

High-risk AI systems are subject to strict obligations before they can be put on the market:

  • adequate risk assessment and mitigation systems
  • high quality of the datasets feeding the system to minimise risks of discriminatory outcomes
  • logging of activity to ensure traceability of results (a minimal example sketch follows this list)
  • detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance
  • clear and adequate information to the deployer
  • appropriate human oversight measures
  • high level of robustness, cybersecurity and accuracy
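
To make the traceability point more concrete, below is a minimal sketch of how a provider or deployer might log each automated decision for later review. It assumes a Python-based system; the function, the field names (such as model_version and human_overseer) and the example CV-screening call are purely illustrative assumptions, not a format prescribed by the AI Act.

    import json
    import logging
    import uuid
    from datetime import datetime, timezone

    # Illustrative audit logger: each prediction is recorded with a unique
    # event ID, a timestamp, the model version, the inputs and the output,
    # so individual results can later be traced and reviewed.
    logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                        format="%(message)s")

    def log_prediction(model_version: str, inputs: dict, output, operator: str) -> str:
        """Append one traceability record per automated decision and return its ID."""
        event_id = str(uuid.uuid4())
        record = {
            "event_id": event_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "human_overseer": operator,  # who exercises human oversight
        }
        logging.info(json.dumps(record))
        return event_id

    # Hypothetical example: logging a CV-screening score
    log_prediction("cv-screener-1.3", {"applicant_id": "A-102"}, {"score": 0.82}, "hr_reviewer_7")

Because every record carries a unique event ID, model version and timestamp, a specific outcome, such as a rejected application, can later be traced back to the inputs and system that produced it.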
Transparency risk

This refers to the risks associated with a need for transparency around the use of AI. The AI Act introduces specific disclosure obligations to ensure that humans are informed when necessary to preserve trust. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can take an informed decision.

Moreover, providers of generative AI have to ensure that AI-generated content is identifiable. In addition, certain AI-generated content should be clearly and visibly labelled, namely deep fakes and text published with the purpose of informing the public on matters of public interest.
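
As an illustration only, the sketch below shows one way a provider might make generated text identifiable: a visible disclosure notice for readers plus machine-readable provenance metadata for downstream tools. The wrapper class, field names and label wording are assumptions made for the example; the AI Act does not mandate this particular format.

    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class GeneratedContent:
        """Illustrative wrapper pairing generated text with provenance metadata."""
        text: str
        generator: str      # which model produced the content
        generated_at: str   # ISO timestamp
        ai_generated: bool = True

    def label_for_publication(content: GeneratedContent) -> str:
        """Return the text with a visible AI-disclosure notice and embedded metadata."""
        notice = "[This text was generated by an AI system.]"
        metadata = json.dumps(asdict(content))
        # The visible notice addresses the labelling obligation for the reader;
        # the comment block keeps the provenance machine-readable for other tools.
        return f"{notice}\n\n{content.text}\n\n<!-- ai-provenance: {metadata} -->"

    article = GeneratedContent(
        text="Summary of today's council meeting ...",
        generator="example-llm-v2",
        generated_at=datetime.now(timezone.utc).isoformat(),
    )
    print(label_for_publication(article))

In practice, providers may rely on standardised watermarking or content-provenance schemes rather than an ad-hoc format like this one.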

Minimal or no risk

The AI Act does not introduce rules for AI that is deemed minimal or no risk. The vast majority of AI systems currently used in the EU fall into this category. This includes applications such as AI-enabled video games or spam filters.

How does it all work in practice for providers of high-risk AI systems?

Step-by-step process for declaration of conformity

Once an AI system is on the market, authorities are in charge of market surveillance, deployers ensure human oversight and monitoring, and providers have a post-market monitoring system in place. Providers and deployers will also report serious incidents and malfunctioning.

A solution for the trustworthy use of large AI models

General-purpose AI models can perform a wide range of tasks and are becoming the basis for many AI systems in the EU. Some of these models could carry systemic risks if they are very capable or widely used. To ensure safe and trustworthy AI, the AI Act puts in place rules for providers of such models. This includes transparency and copyright-related rules. For models that may carry systemic risks, providers should assess and mitigate these risks.

The AI Act rules on general-purpose AI will become effective in August 2025. The AI Office is facilitating the drawing-up of a Code of Practice to detail these rules. The Code should represent a central tool for providers to demonstrate compliance with the AI Act, incorporating state-of-the-art practices.

Governance and implementation

The European AI Office, established in February 2024 within the Commission, oversees the AI Act’s enforcement and implementation in the EU Member States. It will also be responsible for supervising the most powerful AI models, so-called general-purpose AI models. EU Member States supervise the rules for AI systems and are due to establish supervisory authorities by 2 August 2025.

The AI Act’s governance will be steered by three advisory bodies:

  • the European Artificial Intelligence Board, composed of representatives from the EU Member States
  • the Scientific Panel, composed of independent experts in the field of AI
  • the Advisory Forum, representing a diverse selection of stakeholders, both commercial and non-commercial

This multistakeholder governance will ensure a balanced approach to the implementation of the AI Act.

Next steps

The AI Act entered into force on 1 August 2024 and will be fully applicable two years later, on 2 August 2026, with some exceptions:

  • prohibitions and AI literacy obligations have applied since 2 February 2025
  • the governance rules and the obligations for general-purpose AI models become applicable on 2 August 2025
  • the rules for high-risk AI systems – embedded into regulated products – have an extended transition period until 2 August 2027

Source – EU Commission

 
