Stability AI’s Head of Audio Resigns Due to Disagreement Over Fair Use

Ed Newton-Rex, the former head of Stability AI’s audio team, has resigned over a disagreement about whether training generative AI models on copyrighted works qualifies as ‘fair use’.

In a public statement, Newton-Rex said, “I am in disagreement with the company’s perspective that training generative AI models using copyrighted works falls under ‘fair use’.”

Newton-Rex acknowledged the thoughtful approach taken by many individuals at Stability AI and applauded the company’s support in creating Stable Audio, an AI music generator trained on licensed data that shares revenue with rights holders. Even so, he noted that the company’s prevailing view on fair use remained unchanged.

US Copyright Office Stance on Fair Use in Generative AI

The issue of fair use in the context of generative AI has gained significant attention, particularly when the US Copyright Office sought public input on the matter.

Stability AI, along with numerous other companies, took part in the discussion. In its 23-page submission, Stability AI argued that AI development is a transformative and socially beneficial use of existing content that is protected by fair use.

Fair use is a legal principle that permits limited use of copyrighted material without necessitating permission from the rights holders.

Newton-Rex points out that a crucial factor in determining fair use is the effect of the use on the potential market for, or value of, the copyrighted work.

Newton-Rex’s perspective is that contemporary generative AI models have the capacity to generate works that directly compete with the copyrighted material they are trained on. This challenges the notion that training AI models in this manner can be classified as fair use.

The Moral Implications of AI Training Without Consent

Beyond the fair use argument, Newton-Rex believes that training generative AI models without obtaining permission is morally wrong.

He expresses concern that billion-dollar companies are utilizing creators’ works without permission, potentially jeopardizing their livelihoods.

Despite his disagreement with Stability AI’s stance on fair use, Newton-Rex remains a supporter of generative AI, having dedicated 13 years to the field.

However, he emphasizes that his support is contingent on generative AI practices that do not exploit creators by training models on their work without permission.

Newton-Rex expresses hope that others within generative AI companies will join in discussing the fair use matter and advocate for a change in how creators are treated during the development of generative AI technology.

AI Giants Advocate for Fair Use

In addition to Stability AI, prominent AI companies, including Meta, Google, and OpenAI, have submitted their perspectives to the US Copyright Office.

They assert that training AI models with copyrighted material falls under fair use and does not infringe upon the rights of copyright holders.

Meta drew a comparison between generative AI and historical technological advancements like printing presses, cameras, and computers.

Meta also argued that imposing high licensing fees for AI training data could impede the progress of generative AI.

Both Google and OpenAI advocated for a flexible interpretation of fair use and cautioned against hasty legislation that might hinder innovation and restrict the full potential of AI technology.
