On February 8, 2024, the Biden Administration announced the establishment of the US AI Safety Institute Consortium (AISIC), calling it the “first-ever consortium dedicated to AI safety.” The move came a day after the appointment of a key White House aide as the Director of the new US AI Safety Institute (USAISI) at NIST.
The AISIC Members
The AISIC counts more than 200 companies and organizations as members, spanning industry giants like Google, Microsoft, Amazon, OpenAI, Cohere, and Anthropic as well as research institutions, civic groups, academic bodies, local governments, and nonprofits.
According to a post on the NIST website, the AISIC is the largest assembly of test and evaluation teams established to date, with a primary focus on laying the groundwork for a new measurement science in AI safety. It operates under the guidance of the USAISI and contributes to key objectives outlined in President Biden’s Executive Order, such as formulating guidelines for red-teaming, capability assessments, risk management, safety protocols, security measures, and watermarking synthetic content.
Origins of the Consortium
The creation of the AISIC was first announced on October 31, 2023, as part of President Biden’s AI Executive Order. Interested organizations were encouraged to participate in the consortium by offering their expertise, products, data, and models to support its initiatives.
Contributions to the Consortium
Members of the Consortium play a vital role by contributing to various guidelines set by NIST:
- Developing new standards and practices to promote safe, secure, and trustworthy AI development and deployment.
- Creating benchmarks for evaluating AI capabilities that could pose potential risks.
- Implementing secure development practices for generative AI, with a focus on dual-use foundation models and privacy-preserving machine learning.
- Ensuring the availability of testing environments.
- Enhancing red-teaming and privacy-preserving machine learning practices.
- Establishing guidelines for authenticating digital content and defining AI workforce skills criteria.
Overall, the AISIC represents a significant step towards advancing AI safety and fostering collaboration among leading industry players and stakeholders.
Selected participants are required to pay an annual fee and enter into a Consortium Cooperative Research and Development Agreement with NIST.
