Here’s the scoop: A new certification scheme for copyright-compliant AI is now in place, but ChatGPT and other text generators won’t make the cut.
Called Fairly Trained, this initiative comes at a time when generative AI companies are facing serious backlash. Many of their tools, from OpenAI’s chatbots to Stability AI’s art generators, are trained on copyrighted content scraped from the web. The systems then churn out countless creations based on this data, often resulting in clear derivations of the original material, infuriating creators and copyright holders. And the controversial practice has been openly admitted by GenAI leaders.
But hang on, is this even legal? Well, companies claim they’re covered by the “fair use” doctrine, which permits transformative and socially valuable use of copyrighted content. But not everyone is buying it, least of all Fairly Trained’s CEO, Ed Newton-Rex, who left Stability AI over its use of copyrighted content. Now, he’s on a mission to change things by certifying companies that license their training data.
By doing so, Fairly Trained aims to create a fairer world for human creators and provide transparency for consumers to make informed decisions about using GenAI. The certification indicates which companies prioritize creator consent and which ones don’t.
So, what’s driving this mission? According to Newton-Rex, “GenAI poses an existential threat to creative industries,” prompting him to launch Fairly Trained and certify nine GenAI organizations in its inaugural batch. However, there’s a notable gap in the program—text. Newton-Rex revealed that no major text generation model currently meets the certification’s requirements.
