In copyright law, the enormous opportunities presented by artificial intelligence pose great challenges to legal systems. If those systems fail to make an active effort to keep up with the latest technological developments, the road to harmonizing the law with progress could be an uphill one. Fortunately, in both the European Union and the United States, the discussion on this issue is as lively as it is productive.

Let’s start with the fundamentals. As things stand, in both America and Europe, copyright has at its inescapable core a human intellectual effort manifested through the expression of free and creative choices. It is, consequently, expressions that are protected, not ideas. One cannot register the idea of a book, a film or an image, only the words, frames and pixels that make them up. And those individual expressions must be produced by human beings. Consider, for example, the case in 2018 when a photo taken by a macaque was declared to be in the public domain.

But back to artificial intelligence. In 2022, the image Théâtre D’opéra Spatial, created by artist Jason M. Allen using the AI program Midjourney, took first place in a competition held during the Colorado State Fair. Allen attempted to register the artwork with the U.S. Copyright Office (USCO), the federal agency for copyright protection, but his request was denied. Allen wrote again to USCO, explaining that he had used software, including Photoshop, to modify the image created by Midjourney. In response, the agency argued that Allen could protect the specific parts of the artwork on which he had personally intervened, but not the image as a whole. The Copyright Office’s final decision came last Sept. 5: “The work cannot be registered.”

Allen maintains that he will appeal: “I will fight with all my might.” But the decision on Théâtre D’opéra Spatial fits into a legislative and judicial storyline that has been developing for years around the relationship between works generated by AI models and the laws protecting intellectual property.

Take the case of Stephen Thaler, who appealed to the U.S. District Court for the District of Columbia after USCO denied his request to register an image generated by the Creativity Machine program, created by Thaler himself. In her ruling issued last August, Judge Howell sided with the Copyright Office, holding that copyright cannot attach to AI-generated works and cannot be transferred to third parties, including the creator of the model that generated the image.

The issue seems settled, then. If the object that constitutes the artwork is generated by an AI model, it cannot be protected. According to some experts, this is how it should be; according to others, such a clear-cut solution would risk undermining interest and investment in these technologies. Compromises could be explored.

In 2020, the European Commission set out to do just that. In its report Trends and Developments in Artificial Intelligence, it identified a set of criteria for granting authorship protection to a work created with the help of AI, including the need for human intellectual effort and free and creative choices on the part of the author. A distinction was drawn, one that will surely set the standard in this debate, between “AI-generated output” and “AI-assisted output.” The former is a product created wholly or primarily by artificial intelligence without significant human intervention; the latter is output influenced or enhanced by artificial intelligence in which the human element remains fundamental.

A work could be protected, then, if AI were used as a tool or resource to support or enhance human work, and provided that the final decisions, though made on the basis of information provided by AI, were determined by the human being.

But how can this principle be established concretely within our legal systems? Many experts fear so-called “digital novelism,” that is, the tendency to legislate specifically on the latest technological phenomena rather than relying on principles already established and settled in our jurisprudence. It might be preferable, instead, to draw on the general principles of copyright law already in force, reinterpreting them in light of developments as they occur. But whether we choose to give new interpretations to existing principles or to intervene with ad hoc legislation, AI raises questions never before addressed.

Two main sources of contention between authors and producers have already emerged in the audiovisual field around these new creative models. The first is copyright in training material. As mentioned several times before, AI models need vast amounts of pre-existing works, writings, and scripts, which they ingest and rework in order to produce their own “original” output. The rights holders of the material used by the AI clamor for compensation and royalties, while the systems’ architects claim they are unable to determine which authors are owed which payments, because the structure of these models is too complex and opaque to trace outputs back to their sources. After all, any flesh-and-blood author can be, and is, influenced by his or her peers, but, plagiarism aside, that mechanism is known and accepted by all.

The second is intellectual property as it plays out in film production contracts. A shrewd producer could come up with an idea for a film, dash off a first draft using an AI program, and only then hire a screenwriter to work on a “ready-made” script. Obviously the screenwriter in this case could command far less at the contract stage, not having been asked to invent a story or build a script from scratch; he is reduced to something like an editor, working much harder for much less. For this reason, during the WGA screenwriters’ and SAG-AFTRA actors’ strikes in Hollywood, one of the strikers’ demands was that guild members not be put to work on scripts produced through artificial intelligence.

This is the conflict of interests at stake. On the one hand is the protection of the fruits of human inventiveness and ingenuity; on the other, the safeguarding of technological progress, with its consequent economic and – why not? – cultural benefits. The resolution is not easy, either substantively or formally. Where is the dividing line between the use of a new, albeit very powerful, creative tool and authorship itself? How best to represent this new reality within our legal systems? Artificial intelligence is an interdisciplinary subject par excellence; consequently, it will be necessary to engage as many perspectives and areas of expertise as possible to enrich the discussions surrounding it and to inform the decisions that flow from them.
