Stanford University researchers say AI ethics practitioners report lacking institutional support at their companies.
The ethical development of artificial intelligence (AI) is falling short at tech companies, which have promised to prioritize it but continue to put performance metrics and product launches first, according to a new report from Stanford University researchers.
Even though many companies have published AI principles and hired social scientists and engineers to work on AI ethics, the report from Stanford’s Institute for Human-Centered Artificial Intelligence found that ethical safeguards are not being prioritized inside these firms.
“Companies often ‘talk the talk’ of AI ethics but rarely ‘walk the walk’ by adequately resourcing and empowering teams that work on responsible AI,” researchers Sanna J Ali, Angele Christin, Andrew Smart, and Riitta Katila write in the report, titled Walking the Walk of AI Ethics in Technology Companies.
The experiences of 25 “AI ethics practitioners” revealed that workers promoting AI ethics feel unsupported and isolated within large organizations, facing indifference or even hostility from product managers who put productivity and launch timelines ahead of ethical concerns.
Concerns about the speed of AI development, and about ethical issues ranging from the use of private data to racial discrimination and copyright infringement, have grown since the release of platforms such as OpenAI’s ChatGPT and Google’s Gemini.
Employees also said that ethical issues are often considered only late in the development process, when it is difficult to make changes to new apps or software, and that ethics work is frequently disrupted by team reorganizations.
“Metrics around engagement or the performance of AI models are so highly prioritized that ethics-related recommendations that might negatively affect those metrics require irrefutable quantitative evidence,” the report said.

