ByteDance Researcher Mistakenly Included in AI Safety Groupchat, US Standards Body Finds

© Reuters. FILE PHOTO: A person arrives at the offices of TikTok after the U.S. House of Representatives overwhelmingly passed a bill that would give TikTok’s Chinese owner ByteDance about six months to divest the U.S. assets of the short-video app or face a ban.

Unwelcome Addition to American AI Safety Group Chat

According to the U.S. National Institute of Standards and Technology (NIST), a researcher from ByteDance, the Chinese owner of TikTok, was mistakenly included in a group chat for American artificial intelligence safety experts last week. The researcher was inadvertently added to a Slack instance meant for discussions among members of NIST’s U.S. Artificial Intelligence Safety Institute Consortium.

NIST stated that the researcher was included as a volunteer by a consortium member. However, upon realizing that the individual was an employee of ByteDance, the consortium promptly removed the researcher for violating its code of conduct on misrepresentation.

The presence of a ByteDance researcher within the consortium raised concerns due to the ongoing national debate surrounding TikTok and its potential security risks related to the Chinese government. The U.S. House of Representatives recently passed a bill requiring ByteDance to divest from TikTok to avoid a nationwide ban, although the fate of this ultimatum in the Senate remains uncertain.

Role of the AI Safety Institute Consortium

The AI Safety Institute, established under NIST, aims to assess the risks associated with cutting-edge artificial intelligence technologies. Its consortium comprises numerous major American tech companies, universities, AI startups, and NGOs, including Reuters’ parent company, Thomson Reuters. One of the primary objectives of the consortium is to develop guidelines for the safe deployment of AI programs and assist AI researchers in identifying and addressing security vulnerabilities in their models.

As of now, the consortium’s Slack instance has approximately 850 members engaged in advancing AI safety measures and addressing emerging challenges.

(This story has been refiled to add the dropped word ‘Consortium’ to the name of the AI body in paragraph 2)

