The EU aims at human-centric and ethical Artificial Intelligence (AI). Photo by Tumisu on Pixabay
Brussels, 11 May 2023

To ensure a human-centric and ethical development of Artificial Intelligence (AI) in Europe, MEPs endorsed new transparency and risk-management rules for AI systems.

On Thursday, the Internal Market Committee and the Civil Liberties Committee adopted a draft negotiating mandate on the first ever rules for Artificial Intelligence with 84 votes in favour, 7 against and 12 abstentions. In their amendments to the Commission’s proposal, MEPs aim to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly. They also want to have a uniform definition for AI designed to be technology-neutral, so that it can apply to the AI systems of today and tomorrow.

Risk-based approach to AI – Prohibited AI practices

The rules follow a risk-based approach and establish obligations for providers and users depending on the level of risk the AI can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics).

MEPs substantially amended the list to include bans on intrusive and discriminatory uses of AI systems such as:

  • “Real-time” remote biometric identification systems in publicly accessible spaces;
  • “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
  • Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
  • Predictive policing systems (based on profiling, location or past criminal behaviour);
  • Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

High-risk AI

MEPs expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights or the environment. They also added to the high-risk list AI systems used to influence voters in political campaigns, as well as the recommender systems used by social media platforms with more than 45 million users under the Digital Services Act.

General-purpose AI – transparency measures

MEPs included obligations for providers of foundation models – a new and fast evolving development in the field of AI – who would have to guarantee robust protection of fundamental rights, health and safety and the environment, democracy and rule of law. They would need to assess and mitigate risks, comply with design, information and environmental requirements and register in the EU database.

Generative foundation models, like GPT, would have to comply with additional transparency requirements, like disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training.

Supporting innovation and protecting citizens’ rights

To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law promotes regulatory sandboxes, or controlled environments, established by public authorities to test AI before its deployment.

MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.

Quotes

After the vote, co-rapporteur Brando Benifei (S&D, Italy) said:

“We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level. We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe”.

Co-rapporteur Dragos Tudorache (Renew, Romania) said:

“Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy and safe. We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement.”

Next steps

Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.

Source – EU Parliament


EU regulation addresses concerns of AI critics
Brussels, 11 May 2023

For the ECR Group, the adopted text on artificial intelligence (AI) is a good starting point for further negotiations. The ECR Group wants AI systems to be trustworthy and human-centred; their development, training and marketing should be transparent and based on a sound assessment of risks to safety, health and human rights. Following today’s vote in the European Parliament’s Internal Market and Civil Liberties Committees, ECR IMCO Shadow Rapporteur Kosma Złotowski is particularly pleased that ECR proposals made it into the compromise text, among them regulatory sandboxes (safe, closed spaces for testing European AI-based innovations) and measures helping SMEs use the technology.

“We ensured that AI-based products and services entering the European market will be safe for users. Public authorities using advanced algorithms must take the utmost care to ensure that decisions made using AI can be understood by the public and that the process is transparent”, Złotowski said.

ECR Shadow Rapporteur in the Civil Liberties Committee Rob Rooken said:

“The developments in the AI world are going very fast and will have a lot of impact on our lives. We are probably underestimating how big of an impact that will be. With the adopted AI Act today, the European Parliament has made an effort to protect the fundamental rights of EU citizens.”

During the negotiations, Złotowski emphasised the enormous potential of artificial intelligence.

“Artificial intelligence can help in many areas of life and in many sectors of the economy. It is worth investing in and improving this technology in the EU. If we are realistic about a shorter working week, we need to increase our productivity, and this is possible through the use of AI-based tools,” he said.

“In recent months, we have heard many extreme views on the implications of the widespread use of artificial intelligence and calls for distrust of this technology. I hope that this legislation will address these concerns, although it is clear that it will need to be reviewed and updated as it develops,” Złotowski concluded.

The draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.

Source – ECR Group – Email


Don’t fear AI, but regulate risks
Brussels, 11 May 2023
The EPP Group wants clear standards for a human-centred approach to Artificial Intelligence (AI), based on European ethical standards and democratic values. Its message: don’t fear AI, but regulate its risks. Europe must have guardrails in place to ensure that powerful new AI systems, such as ChatGPT, are developed and deployed responsibly. This is the stance the EPP Group took today when the joint Committees on Civil Liberties (LIBE) and on the Internal Market and Consumer Protection (IMCO) voted on the planned EU Artificial Intelligence Act (AI Act).

“The AI Act is the right step to ensure that AI is used for the benefit of our citizens and to strengthen European democratic values in the global market”, said Axel Voss MEP, who negotiated the law on behalf of the EPP Group in the LIBE Committee. “However, some people seem to have a fear-driven approach to AI, and this stifles the opportunities of the new technology. The EPP Group wants a harmonised and flexible regulatory environment that takes into account all needs and prevents unnecessary administrative burdens for SMEs and start-ups. The EU must create a framework that boosts innovation. I want this law to also strengthen Europe as an industrial location for new technologies. So far, our industry has still not had the chance it needs to keep up with the USA or China”, Voss added.

Deirdre Clune MEP, who negotiated the law on behalf of the EPP Group in the IMCO Committee, highlighted:

“This is a world first and a ground-breaking piece of legislation. It could become the de facto global standard for regulating Artificial Intelligence, ensuring that such technology is developed and used in a responsible, ethical manner, while also supporting innovation and economic growth. The EU will require that high-risk AI meets technical fairness and safety requirements. AI uses that pose an unacceptable risk, such as social scoring, will be prohibited.”

“From the start, the EPP Group wanted to address the challenges and potential risks of so-called foundation models upon which AI systems such as ChatGPT are based by providing clear rules and creating a framework for the sharing of necessary information along the AI value chain. I am extremely pleased that our proposal to address these models was included in the final text”, Clune said.

“As the EPP Group, we would like to maintain the possibility for law enforcement to use biometric recognition when searching for victims of crime, such as missing children, when preventing imminent threats such as terrorist attacks, or in criminal investigations”, Voss emphasised.

Source – EPP Group
