Stability AI audio head resigns over company’s copyright stance

Ed Newton-Rex, vice-president of audio at Stability, said that he can only support generative AI that doesn’t ‘exploit’ creators by training models on their work.

A senior executive at Stability AI, the start-up behind the AI-powered text-to-image generator Stable Diffusion, has resigned from his post because he disagrees with the company’s opinion that training generative AI models on copyrighted works is “fair use”.

In a long resignation post on X yesterday (15 November), Ed Newton-Rex, who led the audio team at Stability AI, said that despite his colleagues having a more “nuanced” view on this issue than some competitors, he wasn’t able to change the “prevailing opinion” on fair use at the company.

“There are lots of people at Stability who are deeply thoughtful about these issues,” wrote Newton-Rex, who joined the company last November and became vice-president of audio in February.

“I’m proud that we were able to launch a state-of-the-art AI music generation product trained on licensed training data, sharing the revenue from the model with rights-holders. I’m grateful to my many colleagues who worked on this with me and who supported our team.”

Newton-Rex claimed that his disagreement was made clear when Stability responded to a US Copyright Office call for public comments on generative AI by saying that its development is an “acceptable, transformative and socially beneficial use of existing content that is protected by fair use”.

In this context, the fair use argument holds that training an AI model on copyrighted works does not infringe the copyright in those works, and that such training can therefore be carried out without permission or payment.

“This is a position that is fairly standard across many of the large generative AI companies, and other big tech companies building these models – it’s far from a view that is unique to Stability. But it’s a position I disagree with,” explained Newton-Rex, who is also a scout for a16z.

He said his disagreement stems from one of the factors the US Congress set out for determining whether an act of copying is fair use, which, according to him, is “the effect of the use upon the potential market for or value of the copyrighted work”.

“Today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use.”

The use of copyrighted works to train generative AI models has been a contentious issue since the beginning of the year. In January, Getty Images sued Stability for allegedly stealing copyrighted content to train Stable Diffusion.

The stock image supplier claimed Stability AI “unlawfully copied and processed” millions of copyright-protected images for its own commercial benefit and “to the detriment of content creators”.

Last year, an analysis of 12m images used to train Stable Diffusion found that around 47pc were sourced from only 100 domains, with the largest number of images coming from Pinterest. This also suggested that some of these training images could be copyright protected.

“Even enthusiastic proponents of generative AI are finding the positions taken by some AI companies to be extreme.”

— Future of Music Coalition (@future_of_music) November 15, 2023

Getty Images also issued a ban on the upload and sale of AI-generated images on its platform due to “open questions” around the copyright of such images, along with uncertainty surrounding the data these AI models are trained on.

In August this year, a US district court judge said that human authorship is a “bedrock requirement of copyright” and that artwork generated by AI cannot be copyrighted.

The judge was presiding over a lawsuit against the US Copyright Office after it refused to register a copyright for Stephen Thaler for an image he generated using AI.

“I’m a supporter of generative AI. It will have many benefits – that’s why I’ve worked on it for 13 years. But I can only support generative AI that doesn’t exploit creators by training models – which may replace them – on their work without permission,” added Newton-Rex.

“I’m sure I’m not the only person inside these generative AI companies who doesn’t think the claim of ‘fair use’ is fair to creators. I hope others will speak up, either internally or in public, so that companies realise that exploiting creators can’t be the long-term solution in generative AI.”

