Sun. Oct 6th, 2024

The European Artificial Intelligence (AI) Office kicks off the process of drawing up the first Code of Practice for general-purpose AI models under the AI Act. Nearly 1,000 attendees, including general-purpose AI model providers, downstream providers, industry, civil society, academia, and independent experts, take part in the online Plenary to help develop the Code of Practice. The meeting has a working character and is therefore open only to eligible stakeholders who signed up via the EU Survey by 25 August 2024.

Ahead of their publication in autumn, the AI Office presents initial results from the multi-stakeholder consultation on the Code of Practice, which received almost 430 submissions.

The Code of Practice aims to facilitate the proper application of the AI Act’s rules for general-purpose AI models, including transparency and copyright-related rules, the systemic risk taxonomy, risk assessment, and mitigation measures. The process will involve four working groups, each meeting three times to discuss the drafts, led by chairs and vice-chairs: independent experts selected following a call for expression of interest. The list of the chairs and vice-chairs of the four working groups is available.

The final version of the Code of Practice will be published and presented in a closing plenary, expected in April 2025. More information about the development of the first General-Purpose AI Code of Practice is available online.

Source – EU Commission Digital Strategy



Meet the Chairs leading the development of the first General-Purpose AI Code of Practice

Brussels, 30 September 2024

Hundreds of participants are set to attend the kick-off Plenary for the development of the first Code of Practice for general-purpose AI models under the AI Act.

Organised by the EU AI Office, this event marks the beginning of a collaborative effort involving general-purpose AI model providers, industry organisations, academia, and civil society, aiming to craft a robust framework. The Plenary will focus on outlining the working groups, timelines, and the expected outcomes, including the initial insights of a multi-stakeholder consultation with almost 430 submissions.

Chairs and Vice-Chairs play pivotal roles in shaping the first General-Purpose AI Code of Practice. These experts, drawn from diverse backgrounds in computer science, AI governance and law, will guide the process across four working groups. Their leadership is central to developing and refining drafts that address transparency, copyright, risk assessment, mitigation measures, and the internal risk management and governance of general-purpose AI providers. Selection criteria prioritised expertise and independence, as well as geographical diversity and gender balance.

For example, the co-chairs of the working group on transparency and copyright bring a unique combination of expertise. One has a deep background in European copyright law, with over 25 years of experience, while the other offers extensive knowledge in AI transparency, backed by a PhD from MIT and leadership in human-centric AI research.

The diversity of the Chairs’ specialisations ensures comprehensive, state-of-the-art attention to technical, legal and governance considerations.

The Chairs and Vice-Chairs will synthesise input from participants and lead iterative discussions between October 2024 and April 2025, ensuring a comprehensive and effective Code of Practice. The final draft is expected to be presented in a closing plenary by April 2025.

Working Group 1: Transparency and copyright-related rules

Co-Chair (Transparency) Nuria Oliver (Spain): Nuria Oliver is the Director of the ELLIS Alicante Foundation and holds a PhD in AI from MIT. She has 25 years of research experience in human-centric AI, spanning academia, industry, and NGOs. Nuria is an independent board member of the Spanish Supervisory Agency of AI, a member of the International Expert Advisory Panel to the Scientific Report on the Safety of Advanced AI, and a Fellow of IEEE, ACM, EurAI, and ELLIS. She is also the co-founder and vice-president of ELLIS.

Co-Chair (Copyright) Alexander Peukert (Germany): Alexander Peukert is a Professor of Civil, Commercial, and Information Law at Goethe University Frankfurt am Main. With over 25 years of experience, he is a leading expert on European and international copyright law, focusing recently on the intersection of copyright and artificial intelligence. He has been a member of the Expert Committee on Copyright of the German Association for the Protection of Intellectual Property (GRUR) since 2004 and is a founding member of the European Copyright Society, which he chaired in 2023/2024.

Vice Chair (Transparency) Rishi Bommasani (US): Rishi Bommasani is the Society Lead at the Stanford Center for Research on Foundation Models, part of the Stanford Institute for Human-Centered AI. He researches the societal impact of general-purpose AI models, advancing the role of academia in evidence-driven policy. His work has won several scientific recognitions and been featured in the Atlantic, Euractiv, Nature, New York Times, Reuters, Science, Wall Street Journal, and Washington Post.

Vice Chair (Copyright) Céline Castets-Renard (France): Céline Castets-Renard is Full Professor of Law at the Civil Law Faculty, University of Ottawa, and holder of the Research Chair on Accountable AI in a Global Context. Her research focuses on the regulation and governance of digital technologies and AI from an international and comparative law perspective. She is an expert in AI law, personal data and privacy law, digital copyright law and platform regulation. She also studies the impact of technologies on human rights, equity and social justice.

Working Group 2: Risk identification and assessment, including evaluations

Chair Matthias Samwald (Austria): Matthias Samwald is an Associate Professor at the Institute of Artificial Intelligence at the Medical University of Vienna. His research focuses on harnessing AI to accelerate scientific research, transform medicine, and contribute to human well-being, while ensuring that these AI systems are safe and reliable in their operation.

Vice-Chair Marta Ziosi (Italy): Marta Ziosi is a Postdoctoral Researcher at the Oxford Martin AI Governance Initiative, where her research focuses on standards for frontier AI. During her PhD at the Oxford Internet Institute, she worked on algorithmic bias and collaborated on projects at the intersection of AI policy, fairness, and standards for large language models. She is also the founder of AI for People, a non-profit organization dedicated to ensuring that technology serves the public good.

Vice-Chair Alexander Zacherl (Germany): Alexander Zacherl is an independent Systems Designer. At the inception of the UK AI Safety Institute, he helped build the technical research team and the autonomous systems evaluations team. Previously, he worked at DeepMind on simulations and human interaction environments for multi-agent reinforcement learning research.

Working Group 3: Technical risk mitigation

Chair Yoshua Bengio (Canada): Recognised worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is most known for his pioneering work in deep learning, earning him the 2018 A.M. Turing Award, “the Nobel Prize of Computing,” with Geoffrey Hinton and Yann LeCun. He is Full Professor at Université de Montréal, and the Founder and Scientific Director of Mila – Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as Senior Fellow and acts as Scientific Director of IVADO.

Vice-Chair Daniel Privitera (Italy and Germany): Daniel Privitera is the founder and Executive Director of the KIRA Center, an independent AI policy non-profit based in Berlin. He is the Lead Writer of the International Scientific Report on the Safety of Advanced AI, which is co-written by 75 international AI experts and supported by 30 leading AI countries, the UN, and the EU.

Vice Chair Nitarshan Rajkumar (Canada): Nitarshan Rajkumar is a PhD candidate researching AI at the University of Cambridge. He was previously Senior Policy Adviser to the UK Secretary of State for Science, Innovation and Technology, a role in which he co-founded the AI Safety Institute. Prior to that, he was a researcher at Mila in Montréal, and a software engineer at startups in San Francisco.

Working Group 4: Internal risk management and governance of General-purpose AI providers

Chair Marietje Schaake (Netherlands): Marietje Schaake is a Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN’s High-Level Advisory Body on AI. Between 2009 and 2019 she served as a Member of the European Parliament, where she worked on trade, foreign and tech policy. She is the author of The Tech Coup.

Vice Chair Markus Anderljung (Sweden): Markus Anderljung’s research focuses on AI regulation, responsible development of cutting-edge AI, and compute governance, among other topics. He is an Adjunct Fellow at the Center for a New American Security and a member of the OECD AI Policy Observatory’s Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist, GovAI’s Deputy Director, and a Senior Consultant at EY Sweden.

Vice Chair Anka Reuel (Germany): Anka Reuel is a Computer Science Ph.D. candidate at Stanford University. Her research focuses on technical AI governance. She conducts research at the Stanford Trustworthy AI Research Lab and the Stanford Intelligent Systems Laboratory. She’s also a Geopolitics and Technology Fellow at the Belfer Center at Harvard Kennedy School.

AI Act: How to participate in the drawing-up of the first General-Purpose AI Code of Practice

The European AI Office has opened a call for expression of interest to participate in the drawing-up of the first general-purpose AI Code of Practice.

The European AI Office invites eligible general-purpose AI model providers, downstream providers and other industry organisations, other stakeholder organisations such as civil society organisations or rightsholders organisations, as well as academia and other independent experts to express their interest to participate in the drawing-up of the Code of Practice.

The Code will be prepared in an iterative drafting process by April 2025, 9 months from the AI Act’s entry into force on 1 August 2024. The Code will facilitate the proper application of the rules of the AI Act for general-purpose AI models.

You can express your interest by 25 August 2024, 18:00 CEST, through this application form.

Timeline of the AI Code of Practice drafting process. Source: EU Commission

At the same time, the AI Office launched a multi-stakeholder consultation on trustworthy general-purpose AI models under the AI Act. The consultation is an opportunity for all stakeholders to have their say on the topics covered by this Code of Practice.

The Code of Practice will detail the AI Act rules for providers of general-purpose AI models and general-purpose AI models with systemic risks. These rules will apply 12 months after the entry into force of the AI Act. Providers should be able to rely on the Code of Practice to demonstrate compliance.

The AI Office facilitates an iterative drafting process to ensure that the Code of Practice effectively addresses the AI Act rules. This includes transparency and copyright-related rules for all general-purpose AI models as well as a systemic risk taxonomy, risk assessment and mitigation measures. It is an inclusive and transparent approach which benefits from the input of all relevant stakeholders.

Interested and eligible general-purpose AI model providers and stakeholders will be part of a Code of Practice Plenary. The AI Office will verify eligibility on the basis of the expressions of interest and confirm participation to respective stakeholders.

The Plenary will be structured in four Working Groups on specific topics. Participants will be free to choose one or more Working Groups they wish to engage in. Meetings are conducted exclusively online. Following a kick-off Plenary in September, the participants will convene three times virtually for drafting rounds between September 2024 and April 2025 with discussions organised in Working Groups. Participants can express comments during each of those meetings or within two weeks in writing.

The AI Office will appoint Chairs and, as appropriate, Vice-Chairs for each of the four Working Groups of the Plenary, responsible for synthesising submissions from the consultation and plenary participants. Interested independent experts can apply for such a role.

As the main addressees of the Code, providers of general-purpose AI models will be invited to dedicated workshops with the Chairs and Vice-Chairs to help inform each iterative drafting round, in addition to their Plenary participation. The AI Office will ensure transparency of these discussions, for example by drawing up meeting minutes and making them available to all Plenary participants.

The final version of the first Code of Practice will be presented in a Closing Plenary, expected to take place in April 2025, and published. The Closing Plenary gives general-purpose AI model providers the opportunity to state whether they envisage using the Code.

Iterative drafting process including stakeholders. Source – EU Commission

After publication of the Code, the AI Office and the AI Board will assess its adequacy and publish this assessment. The Commission may decide to approve the Code of Practice and give it general validity within the Union by means of an implementing act. If the Code of Practice is not deemed adequate, the Commission will provide common rules for the implementation of the relevant obligations.

For all relevant information, please read the Call for Expression of Interest carefully.

You can get in touch with the European AI Office for inquiries related to the call for expression of interest through its functional mailbox.

Downloads
  • Call for Expression of Interest – General-Purpose AI Code of Practice
  • Graphics – Code of Practice Drafting
  • Graphics – Code of Practice Timeline

Source – EU Commission Digital Strategy
