
21 May 2024

From:

Department for Science, Innovation and Technology, The Rt Hon Michelle Donelan MP and The Rt Hon Rishi Sunak MP

Published

21 May 2024

Global leaders agree first international network of AI safety institutes

  • New agreement between 10 countries and the EU will help establish an international network of publicly backed AI Safety Institutes, after the UK launched the world’s first last year.
  • Nations will forge a common understanding of AI safety and align their work on research, standards and testing.
  • The newly signed Seoul Declaration sees leaders commit to work together to make sure AI advances human wellbeing and helps to address the world’s greatest challenges in a trustworthy and responsible way.

A new agreement between 10 countries plus the European Union, reached today (21 May) at the AI Seoul Summit, has committed nations to work together to launch an international network to accelerate the advancement of the science of AI safety.

The “Seoul Statement of Intent toward International Cooperation on AI Safety Science” will bring together the publicly backed institutions, similar to the UK’s AI Safety Institute, that have been created since the UK launched the world’s first at the inaugural AI Safety Summit – including those in the US, Japan and Singapore.

Coming together, the network will build “complementarity and interoperability” between their technical work and approach to AI safety, to promote the safe, secure and trustworthy development of AI.

This will include sharing information about models, their limitations, capabilities and risks, as well as monitoring specific “AI harms and safety incidents” where they occur and sharing resources to advance global understanding of the science around AI safety.

This was agreed at the leaders’ session of the AI Seoul Summit, bringing together world leaders and leading AI companies to discuss AI safety, innovation and inclusivity.

As part of the talks, leaders signed up to the wider Seoul Declaration which cements the importance of enhanced international cooperation to develop AI that is “human-centric, trustworthy and responsible”, so that it can be used to solve the world’s biggest challenges, protect human rights, and bridge global digital divides.

They recognised the importance of a risk-based approach to governing AI in order to maximise the benefits and address the broad range of risks it poses, and to ensure the safe, secure, and trustworthy design, development, deployment, and use of AI.

Prime Minister, Rishi Sunak, said:

AI is a hugely exciting technology – and the UK has led global efforts to deal with its potential, hosting the world’s first AI Safety Summit last year.

But to get the upside we must ensure it’s safe. That’s why I’m delighted we have got agreement today for a network of AI Safety Institutes.

Six months ago at Bletchley we launched the UK’s AI Safety Institute. The first of its kind. Numerous countries followed suit and now with this news of a network we can continue to make international progress on AI safety.

Technology Secretary Michelle Donelan said:

AI presents immense opportunities to transform our economy and solve our greatest challenges – but I have always been clear that this full potential can only be unlocked if we are able to grip the risks posed by this rapidly evolving, complex technology.

Ever since we convened the world at Bletchley last year, the UK has spearheaded the global movement on AI safety and when I announced the world’s first AI Safety Institute, other nations followed this call to arms by establishing their own.

Capitalising on this leadership, collaboration with our overseas counterparts through a global network will be fundamental to making sure innovation in AI can continue with safety, security and trust at its core.

Deepening partnerships with AI safety institutes and similar organisations is an area of work the UK has already kickstarted through a landmark agreement with the United States earlier this year. The UK’s AI Safety Institute is the world’s first publicly backed organisation of its kind, with £100 million of initial funding. Since it was created, a number of other countries have launched their own AI Safety Institutes, including the US, Japan and Singapore, all of which have signed the commitments announced today.

Building on November’s Bletchley Declaration, the newly agreed statement recognises safety, innovation and inclusivity as interrelated goals, and advocates for embracing socio-cultural and linguistic diversity in AI models.

These follow the freshly announced “Frontier AI Safety Commitments” from 16 AI technology companies, which set out that the leading AI developers will take input from governments and AI Safety Institutes in setting the thresholds at which they would consider risks unmanageable. In a world first, the commitments have been signed by AI companies from around the world, including the US, China, the Middle East and Europe.

The Seoul Declaration and the Seoul Statement of Intent on AI Safety Science can be found here.

It has been signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the United States of America and the United Kingdom.

On 21 and 22 May, the United Kingdom and the Republic of Korea will host the AI Seoul Summit. It will bring together international governments and select global industry, academia and civil society leaders for discussions across two days.

It builds on the inaugural AI Safety Summit hosted by the United Kingdom at Bletchley Park in November last year and will be one of the largest ever gatherings of nations, companies and civil society on AI.

On day one, President Yoon Suk Yeol of the Republic of Korea and Prime Minister Rishi Sunak co-chaired a virtual session for world leaders on innovation and inclusivity, as well as commitments made at Bletchley.

On day two, Minister of Science and ICT, H.E. Lee Jong Ho of the Republic of Korea, and the Secretary of State for Science, Innovation and Technology, Michelle Donelan, will co-chair a ministers’ session with representatives from countries, the European Union and the UN, alongside key figures from industry, academia and civil society looking at AI safety, sustainability and resilience.

Source – UK Government

U.S. Secretary of Commerce Gina Raimondo Releases Strategic Vision on AI Safety, Announces Plan for Global Cooperation Among AI Safety Institutes

  • As AI Seoul Summit begins, Raimondo unveils Commerce’s goals on AI safety under President Biden’s leadership.
  • Raimondo announces plans for global network of AI Safety Institutes and future convening in the U.S. in San Francisco area, where the U.S. AI Safety Institute recently established a presence.

Today, as the AI Seoul Summit begins, U.S. Secretary of Commerce Gina Raimondo released a strategic vision for the U.S. Artificial Intelligence Safety Institute (AISI), describing the Department’s approach to AI safety under President Biden’s leadership. At President Biden’s direction, the National Institute of Standards and Technology (NIST) within the Department of Commerce launched the AISI, building on NIST’s long-standing work on AI. In addition to releasing a strategic vision, Raimondo also shared the Department’s plans to work with a global scientific network for AI safety through meaningful engagement with AI Safety Institutes and other government-backed scientific offices, and to convene the institutes later this year in the San Francisco area, where the AISI recently established a presence.

U.S. Secretary of Commerce Gina Raimondo said:

“Recent advances in AI carry exciting, life-changing potential for our society, but only if we do the hard work to mitigate the very real dangers of AI that exist if it is not developed and deployed responsibly. That is the focus of our work every single day at the U.S. AI Safety Institute, where our scientists are fully engaged with civil society, academia, industry, and the public sector so we can understand and reduce the risks of AI, with the fundamental goal of harnessing the benefits.”

“The strategic vision we released today makes clear how we intend to work to achieve that objective and highlights the importance of cooperation with our allies through a global scientific network on AI safety. Safety fosters innovation, so it is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”

US Commerce Department AI Safety Institute Strategic Vision

The strategic vision released today, available here, outlines the steps that the AISI plans to take to advance the science of AI safety and facilitate safe and responsible AI innovation. At the direction of President Biden, NIST established the AISI and has since built an executive leadership team that brings together some of the brightest minds in academia, industry and government.

The strategic vision describes the AISI’s philosophy, mission, and strategic goals. Rooted in two core principles—first, that beneficial AI depends on AI safety; and second, that AI safety depends on science—the AISI aims to address key challenges, including a lack of standardized metrics for frontier AI, underdeveloped testing and validation methods, limited national and global coordination on AI safety issues, and more.

The AISI will focus on three key goals:

  1. Advance the science of AI safety;
  2. Articulate, demonstrate, and disseminate the practices of AI safety; and
  3. Support institutions, communities, and coordination around AI safety.

To achieve these goals, the AISI plans to, among other activities, conduct testing of advanced models and systems to assess potential and emerging risks; develop guidelines on evaluations and risk mitigations, among other topics; and perform and coordinate technical research. The U.S. AI Safety Institute will work closely with diverse AI industry, civil society members, and international partners to achieve these objectives.

Launch of International Network of AI Safety Institutes

Concurrently, today Secretary Raimondo announced that the Department and the AISI will help launch a global scientific network for AI safety through meaningful engagement with AI Safety Institutes and other government-backed scientific offices focused on AI safety and committed to international cooperation. Building on the foundational understanding achieved by the Republic of Korea and our other partners at the AI Seoul Summit through the Seoul Statement of Intent toward International Cooperation on AI Safety Science, this network will strengthen and expand on the AISI’s previously announced collaborations with the AI Safety Institutes of the UK, Japan, Canada, and Singapore, as well as the European AI Office and its scientific components and affiliates, and will catalyze a new phase of international coordination on AI safety science and governance. This network will promote safe, secure, and trustworthy artificial intelligence systems for people around the world by enabling closer collaboration on strategic research and public deliverables.

To further collaboration within this network, the AISI intends to convene international AI Safety Institutes and other stakeholders later this year in the San Francisco area. The AISI has recently established a Bay Area presence and will be leveraging the location to recruit additional talent.

Source – US Department of Commerce (via email)
