By: Elliot Hangos
Text-to-image is the creation of art from a text description. Text-to-image technology uses AI to read and comprehend user-inputted text and convert that text into a unique image. Like the Google search interface, the software presents users with nothing more than a search bar. After entering a prompt, users can download the resulting image and scroll through related images produced by prompts with similar words.
More impressive than text-to-image is text-to-video, exemplified by OpenAI's CLIP. AI artist Glenn Marshall used CLIP to create a short film, "The Crow." Marshall uploaded video frames from a live-action film into CLIP, supplied a narrative so the software would understand what to generate, and voila! The film, designed partly by Marshall and partly by AI, went on to win the Jury Award at the 2022 Cannes Short Film Festival.
This raises several questions. If one can simply think art into existence, who owns it? The user? The AI? The copyright holder of the works the art is based on? And can someone be sued for infringement over art generated by such technology?
OpenAI is an AI research and deployment company founded in 2015. Its mission is to build safe AI, and the company explicitly states that its "primary fiduciary duty is to humanity" — a goal clearly meant to address modern-day fears about AI and its future capabilities.
Because AI software extends so easily across borders, it is helpful to first examine both the US and EU landscapes regarding AI legislation. While the two are relatively distinct in other areas of technology law, the US and EU hold similar views on the regulation of AI. Although the US has been slower than the EU to update its laws, the Federal Trade Commission ("FTC") has recently added AI-related issues to its agenda. The current rules are concerned mainly with antitrust regulation and the misuse of data. Questions of ownership, however, will likely need to be addressed, since AI systems are accessible from anywhere with an internet connection.
Digital images have drawn criticism since their introduction, and their marketplaces have proven volatile. Owners and potential buyers have faced unreliable market expectations due to limited public interest and criticism over the ease of duplicating digital images. For example, when NFTs first gained popularity but still maintained their mystique, there were questions as to who actually owned the art, because non-owners could simply screenshot the works. OpenSea, the largest NFT marketplace, has argued otherwise: the only way to truly "own" an NFT, it contends, is by purchasing it. That said, NFTs are not AI-generated.
Typically, the images on OpenSea are created by real, human artists. Like the eager entrepreneurs making music on SoundCloud in 2016, the NFT market welcomed many aspiring artists, who quickly realized how difficult it was to stand out. Most could not sustain the popularity they had expected since NFTs first emerged, although many artists who built a following through the various digital art marketplaces did earn a steady income. So what about art created by AI, or by a mixture of artist and AI? Fortunately, US law offers at least a partial answer on the copyrightability of AI-created art.
For a work of art to receive copyright protection, there must be authorship. Under the U.S. Copyright Act, the question of authorship is whether an individual or individuals created a work fixed in a tangible medium of expression. The Copyright Office goes so far as to exclude works "produced by machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author." Essentially, under current law, purely AI-generated art cannot be copyrighted and therefore has no owner.
Understandably, the courts require the human-authorship element to legitimize art. Without this bright-line rule, any user could profit off the ingenuity of a "text-to-video" AI search. For the more technical questions, however, such as the copyrightability of art created from a mixture of human touch and AI magic, Congress has yet to deliver a clear resolution. Congress established the AI Caucus in 2017 to work alongside the Copyright Office and equip our governing body with the knowledge needed to tackle such gaps in legislation. The Office's most striking response to digital art came against one of the most popular NFT projects, when it denied copyright protection for 10,000 digital images made with input from both humans and AI. Our legal and social mindset has seemingly reached a consensus that "purely human created" art should be held in higher regard.
I empathize with the art community's contention that this surplus of art is diluting the quality of art collections, and I understand the community's concerns about plagiarism. However, I dispute arguments that works like Marshall's film "The Crow" and Kris Kashtanova's book "Zarya of the Dawn" are less deserving of their recognition because of their AI components. Of writing the 18-page book, Kashtanova said, "[t]here is no way I could have written certain poems without A.I., but there's no way the A.I. could have written them without me." The best thing we can do now is prepare for a landscape in which AI art is perhaps valued with the same regard as human art.
Student Bio: Elliot Hangos is a second-year law student at Suffolk University Law School. He is a staffer on the Journal of High Technology Law. Elliot received a Bachelor of Business Administration in Marketing and Sports Management from The George Washington University.
Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.