U.S. Copyright Office Report Backs Licensing for AI Training and Rejects Compulsory Models
On Friday, the office voiced its support for licensing copyrighted material used to train "commercial" AI models. The next day, the head of the Copyright Office was fired by President Trump.

On Friday afternoon, the U.S. Copyright Office released a report examining copyright and generative AI training, which supported the idea of licensing copyrighted works when they are used to train commercial AI models.
On Saturday (May 10), the nation’s top copyright official – Register of Copyrights Shira Perlmutter – was terminated by President Donald Trump. Her dismissal came shortly after the firing of the Librarian of Congress, Carla Hayden, who had appointed and supervised Perlmutter. In response, Rep. Joe Morelle (D-NY) of the House Administration Committee, which oversees the Copyright Office and the Library of Congress, said he feels it is “no coincidence [Trump] acted less than a day after [Perlmutter] refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models.”
The report was largely seen as a win for copyright owners in the music industry, and it staked out three key positions: the Office’s support for licensing copyrighted material when a “commercial” AI model uses it for training, its dismissal of compulsory licensing as the right framework for a future licensing model, and its rejection of “the idea of any opt-out approach.”
The Office affirms that in “commercial” cases, licensing copyrights for training could be a “practical solution” and that using copyrights without a license “[goes] beyond established fair use boundaries.” It also notes that some commercial AI models “compete with [copyright owners] in existing markets.” However, if an AI model has been created for “purposes such as analysis or research – the types of uses that are critical to international competitiveness,” the Office says “the outputs are unlikely to substitute” for the works on which they were trained.
“In our view, American leadership in the AI space would best be furthered by supporting both of these world-class industries that contribute so much to our economic and cultural advancement. Effective licensing options can ensure that innovation continues to advance without undermining intellectual property rights,” the report reads.
While it is supportive of licensing efforts between copyright owners and AI firms, the report recognizes that most stakeholders do not voice support “for any statutory change” or “government intervention” in this area. “The Office believes…[that] would be premature at this time,” the report reads. Later, it adds: “we agree with commenters that a compulsory licensing regime for AI training would have significant disadvantages. A compulsory license establishes fixed royalty rates and terms and can set practices in stone; they can become inextricably embedded in an industry and become difficult to undo. Premature adoption also risks stifling the development of flexible and creative market-based solutions. Moreover, compulsory licenses can take years to develop, often requiring painstaking negotiation of numerous operational details.”
The Office notes the perspectives of music-related organizations, like the National Music Publishers’ Association (NMPA), the American Association of Independent Music (A2IM), and the Recording Industry Association of America (RIAA), all of which oppose the idea of a future compulsory or government-controlled license for AI training. The music industry already operates under a compulsory license for mechanical royalties, which lets the government set the rates for one of the types of royalties earned from streaming and sales.
“Most commenters who addressed this issue opposed or raised concerns about the prospect of compulsory licensing,” the report says. “Those representing copyright owners and creators argued that the compulsory licensing of works for use in AI training would be detrimental to their ability to control uses of their works, and asserted that there is no market failure that would justify it. A2IM and RIAA described compulsory licensing as entailing ‘below-market royalty rates, additional administrative costs, and… restrictions on innovation’… and NMPA saw it as ‘an extreme remedy that deprives copyright owners of their right to contract freely in the market, and takes away their ability to choose whom they do business with, how their works are used, and how much they are paid.’”
The Office leaves it up to copyright owners and AI companies to figure out the right way to license and pay for training data, but it does explore a few options. These include “compensation structures based on a percentage of revenue or profits,” and if the free market fails to produce workable licensing solutions, the report suggests that “targeted intervention such as ECL [extended collective licensing] should be considered.”
ECL, which is employed in some European countries, would allow a collective management organization (CMO) to issue and administer blanket licenses for “all copyrighted works within a particular class,” much as the music industry already does through organizations like The MLC (The Mechanical Licensing Collective) and performing rights organizations (PROs) such as ASCAP and BMI. The difference between ECL and traditional collective licensing, however, is that under an ECL system the CMO can license on behalf of rights holders who have not affirmatively joined it. Though these ECL licenses would still be negotiated in a “free market,” the government would “regulat[e] the overall system and exercis[e] some degree of oversight.”
While some AI firms expressed concerns that blanket licensing by copyright holders would lead to antitrust issues, the Copyright Office sided with copyright holders, saying “[the] courts have found that there is nothing intrinsically anticompetitive about the collective, or even blanket, licensing of copyrighted works, as long as certain safeguards are incorporated— such as ensuring that licensees can still obtain direct licenses from copyright owners as an alternative.”
This is a “pre-publication” version of a forthcoming final report, which will be published in the “near future without any substantive changes expected,” according to the Copyright Office. The Office noted this “pre-publication” was pushed out early in an attempt to address inquiries from Congress and key stakeholders.
It marks the Office’s third report about generative AI and its impact on copyright since it launched an initiative on the matter in 2023. The first report, released July 31, 2024, focused on the topic of digital replicas. The second, from Jan. 29, 2025, addressed the copyrightability of outputs created with generative AI.