San Francisco, 20 November 2024

The EU AI Office is participating today and tomorrow in the inaugural meeting of the International Network of AI Safety Institutes, held in San Francisco. The gathering marks a milestone in global cooperation on AI safety. The meeting is organised around three tracks.

In Track 1, the Network will discuss initiatives to mitigate risks associated with synthetic, AI-generated content, focusing on digital content transparency techniques as well as safeguards at the AI model and system level to reduce and prevent harmful or illegal synthetic outputs such as child sexual abuse material. Participants will discuss principles and initial practices. While technical methods are still evolving, best practices can foster stakeholder alignment on transparency and on mitigating the risks arising from synthetic content, and can build trust with target audiences. Complementary approaches – including normative, educational, regulatory, and market-based measures – are essential.

Input from stakeholders during the meeting will refine this work, setting the stage for discussions at the Paris AI Action Summit in February 2025.

Track 2 will focus on the evaluation and testing of foundation models. The objective is to build a shared understanding of how to conduct evaluations, with the eventual goal of complementary testing among Network members. AI Safety Institutes will present a prototype of a joint testing exercise, which will lay the foundation for discussing how to expand and evolve this work in preparation for the AI Action Summit in France.

In Track 3, the Network will endorse a Joint Statement on Risk Assessment of Advanced AI Systems. This technical document, coordinated by the EU AI Office with the UK, outlines the evolving science and practice of risk assessment. It aims to establish a shared technical basis to support the network’s ongoing efforts to create comprehensive and effective risk assessment strategies for advanced AI systems.

The joint statement introduces six key aspects essential for robust risk assessment:

  • Actionable insights
  • Transparency
  • Comprehensive scope
  • Multistakeholder involvement
  • Iterative processes
  • Reproducibility

Track 3 will discuss how this work could be advanced further ahead of the AI Action Summit in France in February.

Find further information about the International Network of AI Safety Institutes.

Joint Statement on Risk Assessment of Advanced AI Systems

International Network of AI Safety Institutes – Download

Source – EU Commission Digital Strategy

Background on the first meeting of the International Network of AI Safety Institutes

AI safety institutes launched the International Network of AI Safety Institutes in San Francisco. Their Mission Statement reflects their goals of advancing AI safety research, testing, and guidance.

On 20 and 21 November 2024, AI safety institutes and government-mandated offices from Australia, Canada, the European Commission, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States are convening in San Francisco for the first meeting of the International Network of AI Safety Institutes.

Building on the Seoul Statement of Intent toward International Cooperation on AI Safety Science, released during the AI Seoul Summit on 21 May 2024, this initiative marks the beginning of a new phase of international collaboration on AI safety.

The Network brings together technical organisations dedicated to advancing AI safety, helping governments and societies better understand the risks posed by advanced AI systems, and proposing solutions to mitigate these risks. The Network members also stress in their Mission Statement that “international cooperation to promote AI safety, security, inclusivity, and trust is vital to addressing these risks, driving responsible innovation, and expanding access to the benefits of AI worldwide.”

Beyond addressing potential harms, the institutes and offices involved will guide the responsible development and deployment of AI systems.

Goals and priorities of the network

The International Network of AI Safety Institutes will serve as a forum for collaboration, bringing together technical expertise to address AI safety risks and best practices. Recognising the importance of cultural and linguistic diversity, the Network will work towards a unified understanding of AI safety risks and mitigation strategies.

It will focus on four priority areas:

  • Research: Collaborating with the scientific community to advance research on the risks and capabilities of advanced AI systems, while sharing key findings to strengthen the science of AI safety.
  • Testing: Developing and sharing best practices for testing advanced AI systems, including conducting joint testing exercises and exchanging insights from domestic evaluations, as appropriate.
  • Guidance: Facilitating shared approaches to interpreting test results for advanced AI systems to ensure consistent and effective responses.
  • Inclusion: Engaging partners and stakeholders in regions at all stages of development, by sharing information and technical tools in accessible ways to broaden participation in AI safety science.

A commitment to global cooperation

Through this Network, the members commit to advancing international alignment on AI safety research, testing, and guidance. By fostering technical collaboration and inclusivity, they aim to ensure that the benefits of safe, secure, and trustworthy AI innovation are shared widely, enabling humanity to fully realise AI’s potential.

Downloads

Mission Statement International Network of AI Safety Institutes – Download

Source – EU Commission
