SUMMARY:
The National Institute of Standards and Technology (NIST), an agency of the United States Department of Commerce, in support of efforts to create safe and trustworthy artificial intelligence (AI), is establishing the Artificial Intelligence Safety Institute Consortium (“Consortium”). The Consortium will support the collaborative development of a new measurement science, enabling the identification of proven, scalable, and interoperable techniques and metrics to promote the development and responsible use of safe and trustworthy AI, particularly for the most advanced AI systems, such as the most capable foundation models. NIST invites organizations to submit letters of interest describing technical expertise and products, data, and/or models that can enable the development and deployment of safe and trustworthy AI systems through the AI Risk Management Framework (AI RMF).
This notice is the initial step for NIST in collaborating with non-profit organizations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI. Many of these challenges were identified in the Executive Order of October 30, 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) and the NIST AI RMF Roadmap. Much of this research will center on evaluations of, and approaches toward, safer and more trustworthy AI systems. Participation in the Consortium is open to all interested organizations that can contribute their expertise, products, data, and/or models to its activities. Selected participants will be required to enter into a consortium Cooperative Research and Development Agreement (CRADA) with NIST. At NIST's discretion, entities that are not permitted by law to enter into CRADAs may be allowed to participate in the Consortium under a separate non-CRADA agreement.