Paris, 7 February 2025
The Organisation for Economic Co-operation and Development (OECD) today launched the first global framework for companies to report on their efforts to promote safe, secure, and trustworthy AI. The initiative monitors the application of the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems, a central component of the Hiroshima AI Process launched during Japan’s G7 Presidency.
For the first time, companies will be able to provide comparable information on their AI risk management actions and practices – such as risk assessment, incident reporting and information sharing mechanisms – fostering trust and accountability in the development of advanced AI systems. Some of the world’s largest developers of advanced AI systems have contributed to this initiative and were instrumental in its pilot phase, testing its features and ensuring its effectiveness. Leading AI developers, including Amazon, Anthropic, Fujitsu, Google, KDDI CORPORATION, Microsoft, NEC Corporation, NTT, OpenAI, Preferred Networks Inc., Rakuten Group Inc., Salesforce, and Softbank Corp., have already pledged to complete the inaugural framework.
“The OECD is committed to promoting transparency, comparable reporting and co-operation among global stakeholders, ultimately building trust in AI systems,” OECD Secretary-General Mathias Cormann said. “Enabling companies to share their practices and demonstrate their focus on safety, accountability, and transparency will contribute to the responsible development, deployment and use of advanced AI systems.”
By aligning the reporting framework with multiple risk management systems, including the Hiroshima Code of Conduct, the OECD aims to promote interoperability and consistency across international AI governance mechanisms.
Organisations developing advanced AI systems are invited to submit their inaugural reports by 15 April 2025, after which submissions are accepted on a rolling basis. Reporting organisations are welcome to update their reports annually.
Source – OECD
G7 reporting framework – Hiroshima AI Process (HAIP) international code of conduct for organizations developing advanced AI systems
As part of the G7 Hiroshima AI Process, the G7 launched a voluntary Reporting Framework to encourage transparency and accountability among organizations developing advanced AI systems. The results will facilitate transparency and comparability of risk mitigation measures and contribute to identifying and disseminating good practices.
The OECD, informed by leading AI developers, supported the G7 in developing this reporting framework as a monitoring mechanism to facilitate the application of the Hiroshima AI Process International Code of Conduct for Organizations Developing Advanced AI Systems.
The Reporting Framework, launched on 7 February 2025, is a direct outcome of the G7 Hiroshima AI Process, initiated under the Japanese G7 Presidency in 2023 and further advanced under the Italian G7 Presidency in 2024. It builds on the Hiroshima AI Process International Code of Conduct for Organizations Developing Advanced AI Systems, a landmark initiative to foster transparency and accountability in developing advanced AI systems. At the G7’s request and in line with the Trento Declaration, the OECD was tasked with identifying mechanisms to monitor the voluntary adoption of the Code.