Washington, D.C., March 25, 2025
Responding to the need to propel AI innovation by developing AI standards more quickly while encouraging openness and collaboration that draw on a wide range of expertise, NIST is launching its AI Standards Zero Drafts Pilot Project.
As discussed at NIST’s AI symposium in September, this initiative will pilot a new process of distilling stakeholder views into “zero drafts”—thorough but preliminary proposals that will be submitted into the private sector-led standardization process to be developed into voluntary consensus standards.
NIST has proposed initial topics related to testing, evaluation, verification, and validation; AI design and architecture concepts; and transparency documentation, among others. NIST solicits public and private sector input on priority topics and scoping. For each topic selected by NIST based on this feedback, the agency anticipates releasing a concept paper for public comment. NIST will then propose and iteratively revise a draft document to be submitted into the formal standardization process.
More details are available on a dedicated NIST web page. Please send any suggestions via email to ai-standards@nist.gov.
Input and Listening Sessions Welcome
NIST’s new AI Standards Zero Drafts project will pilot a process to broaden participation in and accelerate the creation of standards, helping standards meet the AI community’s needs and unleash AI innovation.
The project, discussed at NIST’s AI symposium in September, will distill stakeholder views on topics with a science-backed body of work into “zero drafts”: preliminary, stakeholder-driven drafts of standards that are as thorough as possible. These drafts will then be submitted into the private sector-led standardization process to be developed into voluntary consensus standards.
The Need
As NIST has gathered input on AI standardization needs, stakeholders have repeatedly emphasized two problems, which are especially challenging to address simultaneously:
- AI standards need to be developed expeditiously to address urgent needs, prevent fragmentation of governance frameworks, and keep up with AI advances—while still maintaining the rigor of the process and the quality of the resulting standards.
- AI standards demand a wide range of expertise and perspectives. Stakeholders seek to address many needs via AI standards, so it is important for standards to draw on multi-disciplinary perspectives from the many kinds of organizations and stakeholders that develop, use, research, or are affected by AI systems.
NIST’s Solution: Convening the Community to Develop Zero Drafts
NIST seeks to expand participation in AI standards development and help standards developing organizations (SDOs) achieve consensus more quickly through the following process:
- NIST proposes topics and solicits stakeholder input on which topics to prioritize and how to scope them.
- For each topic selected by NIST based on this feedback:
  - NIST releases a concept paper outlining a proposed direction for a standard;
  - Based on the concept paper and broad stakeholder input, NIST proposes an initial draft standard;
  - NIST iterates on drafts based on further rounds of input; and
  - The resulting document is submitted to SDOs via established processes as a proposal for formal standardization.
Zero drafts will be thorough, high-quality documents reflecting stakeholder inputs, but will likely still change during the formal standardization process.
Initial Topics
The first several zero drafts will be approached as pilots. They will address a subset of the topics below, which are based on stakeholder-identified priorities expressed to NIST and adjusted to minimize overlap with existing standards projects.
Documentation about system and data characteristics for transparency among AI actors
- Contents of model, data, and/or system cards, such as model details, performance measurements, training data, evaluation data, and intended use (a minimal machine-readable sketch follows this list).
- Standardized mechanisms and practices for formatting, presenting, sharing, and/or accessing documentation.
- Application of existing documentation practices for securing information technology supply chains to AI systems.
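As an illustration of the first two bullets, the sketch below shows what a minimal machine-readable model card might look like. The field names follow the examples in the list above, but the schema, class name, and values are hypothetical; this is not a proposed NIST format.

```python
# Hypothetical sketch of a machine-readable "model card" record, assuming
# the fields named in the bullet list above; not a NIST-endorsed schema.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    model_name: str
    model_details: str                      # architecture, version, training procedure
    intended_use: str                       # in-scope and out-of-scope uses
    training_data: str                      # description of, or pointer to, dataset documentation
    evaluation_data: str                    # datasets behind the reported measurements
    performance: dict = field(default_factory=dict)  # metric name -> value


card = ModelCard(
    model_name="example-classifier-v1",     # illustrative values only
    model_details="Transformer encoder, 110M parameters, fine-tuned for sentiment.",
    intended_use="English product-review sentiment; not for medical or legal text.",
    training_data="Public product reviews collected through 2023.",
    evaluation_data="Held-out review set plus an out-of-domain social-media sample.",
    performance={"accuracy": 0.91, "f1": 0.89},
)

# A standardized interchange format could be as simple as structured JSON.
print(json.dumps(asdict(card), indent=2))
```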
Methods and metrics for AI testing, evaluation, verification, and validation (TEVV)
- Application of well-established TEVV methods (e.g., for establishing construct validity, performing sensitivity analysis, or field testing) to TEVV for generative AI.
- Approaches for translating heterogeneous benchmark scores into meaningful scores or rankings for a given use case (one possible aggregation scheme is sketched after this list).
- Methods for preventing training data from becoming “contaminated” with canonical test outputs and detecting when contamination has occurred.
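For the benchmark-translation bullet, the sketch below illustrates one possible aggregation scheme: min-max normalize each benchmark across candidate models, then combine the normalized scores with use-case-specific weights. The benchmark names, scores, and weights are invented for illustration, and many other schemes are possible.

```python
# A minimal sketch of one way to turn heterogeneous benchmark scores into a
# single use-case ranking: min-max normalize each benchmark across models,
# then apply use-case-specific weights. All names and numbers are illustrative.
from typing import Dict

# Hypothetical raw scores per model; the benchmarks use different scales and directions.
raw_scores: Dict[str, Dict[str, float]] = {
    "model_a": {"qa_accuracy": 0.82, "latency_ms": 120.0, "toxicity_rate": 0.03},
    "model_b": {"qa_accuracy": 0.78, "latency_ms": 45.0,  "toxicity_rate": 0.01},
    "model_c": {"qa_accuracy": 0.88, "latency_ms": 300.0, "toxicity_rate": 0.05},
}

# For this use case, lower is better for latency and toxicity.
lower_is_better = {"latency_ms", "toxicity_rate"}
# Hypothetical weights expressing what matters in the deployment context.
weights = {"qa_accuracy": 0.5, "latency_ms": 0.2, "toxicity_rate": 0.3}


def normalize(metric: str, value: float) -> float:
    """Min-max normalize a score across models; flip direction if lower is better."""
    values = [scores[metric] for scores in raw_scores.values()]
    lo, hi = min(values), max(values)
    scaled = 0.5 if hi == lo else (value - lo) / (hi - lo)
    return 1.0 - scaled if metric in lower_is_better else scaled


def use_case_score(model: str) -> float:
    """Weighted sum of normalized benchmark scores for the chosen use case."""
    return sum(w * normalize(m, raw_scores[model][m]) for m, w in weights.items())


for model in sorted(raw_scores, key=use_case_score, reverse=True):
    print(f"{model}: {use_case_score(model):.3f}")
```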
Maps of concepts and terminology regarding AI system designs, architectures, processes, and actors
- Clarification of the “AI stack”—the layers of technology and resources used to build AI applications, including the roles, responsibilities, and processes involved in each layer across the AI lifecycle.
- Reference architectures or design patterns for AI systems to establish shared understanding of AI system components and their relationships (a toy machine-readable representation is sketched after this list).
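To suggest how such a concept map might be made machine readable, the sketch below encodes a small, hypothetical slice of the AI stack as components with layers, responsible actors, and dependencies. The layer and role names are placeholders, not an established or NIST-endorsed taxonomy.

```python
# A minimal sketch of a machine-readable "AI stack" concept map. The layer and
# role names are hypothetical placeholders, not an established taxonomy.
from dataclasses import dataclass
from enum import Enum
from typing import List


class Layer(Enum):
    COMPUTE = "compute and infrastructure"
    MODEL = "foundation or task model"
    ADAPTATION = "fine-tuning and integration"
    APPLICATION = "end-user application"


@dataclass
class StackComponent:
    name: str
    layer: Layer
    responsible_actors: List[str]   # roles accountable for this component
    depends_on: List[str]           # names of components one layer down


stack = [
    StackComponent("GPU cluster", Layer.COMPUTE, ["infrastructure provider"], []),
    StackComponent("base LLM", Layer.MODEL, ["model developer"], ["GPU cluster"]),
    StackComponent("domain fine-tune", Layer.ADAPTATION, ["system integrator"], ["base LLM"]),
    StackComponent("customer chatbot", Layer.APPLICATION, ["deployer", "operator"], ["domain fine-tune"]),
]

# Walking the dependency edges yields a simple reference-architecture view.
for component in stack:
    deps = ", ".join(component.depends_on) or "none"
    print(f"{component.layer.value}: {component.name} "
          f"(actors: {', '.join(component.responsible_actors)}; depends on: {deps})")
```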
Technical measures for reducing risks posed by synthetic content
- A taxonomy of approaches and terms to refer to these approaches (e.g., digital content transparency, provenance data tracking, signed metadata, watermarking), with a toy example of signed metadata sketched after this list.
- Methods and metrics for evaluating and reporting the effectiveness of such measures.
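As a concrete illustration of one approach named above, signed metadata, the sketch below binds provenance metadata to a content hash with a keyed signature so that tampering with either the content or the metadata is detectable. It uses a shared-secret HMAC from the Python standard library for brevity; deployed provenance schemes typically rely on public-key signatures and certificate chains, and all names and values here are illustrative.

```python
# A minimal sketch of signed provenance metadata: sign the metadata together with
# a hash of the content so tampering with either is detectable. Shared-secret HMAC
# is used for brevity; real schemes generally use public-key signatures.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key-not-for-production"   # hypothetical signing key


def sign_provenance(content: bytes, metadata: dict) -> dict:
    """Attach a signature covering both the content hash and its metadata."""
    record = dict(metadata, content_sha256=hashlib.sha256(content).hexdigest())
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute the signature and confirm neither content nor metadata changed."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if unsigned.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("signature", ""), expected)


image_bytes = b"...synthetic image bytes..."
record = sign_provenance(image_bytes, {"generator": "example-model", "created": "2025-03-25"})
print(verify_provenance(image_bytes, record))                  # True
print(verify_provenance(image_bytes + b"tampered", record))    # False
```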
Invitation for Input
NIST welcomes input about:
- The process proposed above;
- Prioritization of topics and scopes for zero drafts, including others not listed above that may be of higher priority;
- The needs that standards on these topics could address;
- The best ways to scope zero drafts so that they cover material that is well-circumscribed, mature enough for standardization, and well-distinguished from existing standards initiatives; and
- Ideas to be incorporated into NIST’s initial concept papers on priority topics, including relevant resources or organizational experiences.
Please send any suggestions via email to ai-standards@nist.gov.
NIST also welcomes offers from organizations to host listening sessions. Such sessions would be valuable at any stage of the process. Those interested in hosting a listening session can email ai-standards@nist.gov.
Source – U.S. NIST