
Exciting News: White House Announces AI Safety Consortium with Over 200 Top Firms for Testing and Evaluation

On February 8, 2024, the Biden Administration announced the establishment of the US AI Safety Institute Consortium (AISIC), labeling it as the “first-ever consortium dedicated to AI safety.” This move came a day after the appointment of a key White House aide as the Director of the new USAISI at NIST.

The AISIC Members

The AISIC boasts a membership of over 200 companies and organizations, spanning from industry giants like Google, Microsoft, Amazon, OpenAI, Cohere, and Anthropic to various research institutions, civic groups, academic bodies, local governments, and nonprofits.

According to a post on the NIST website, the AISIC is the largest assembly of test and evaluation teams established thus far, with a primary focus on laying the groundwork for a new measurement science in AI safety. It operates under the guidance of the USAISI and contributes to key objectives outlined in President Biden’s Executive Order, such as formulating guidelines for red-teaming, capability assessments, risk management, safety protocols, security measures, and watermarking synthetic content.

Origins of the Consortium

The creation of the AISIC was announced on October 31, 2023, as part of President Biden’s AI Executive Order. Interested organizations are encouraged to participate in the consortium by offering their expertise, products, data, and models to enhance its initiatives.

Contributions to the Consortium

Members of the Consortium play a vital role by contributing to various guidelines set by NIST:

  1. Developing new standards and practices to promote safe, secure, and trustworthy AI development and deployment.
  2. Creating benchmarks for evaluating AI capabilities that could pose potential risks.
  3. Implementing secure development practices for generative AI, with a focus on dual-use foundation models and privacy-preserving machine learning.
  4. Ensuring the availability of testing environments.
  5. Enhancing red-teaming and privacy-preserving machine learning practices.
  6. Establishing guidelines for authenticating digital content and defining AI workforce skills criteria.

Overall, the AISIC represents a significant step towards advancing AI safety and fostering collaboration among leading industry players and stakeholders.

Selected participants are required to pay an annual fee and enter into a Consortium Cooperative Research and Development Agreement with NIST.

More details on the US AI Safety Institute Consortium and its mission are available on the NIST website.
