When Steven Spielberg brought Stanley Kubrick’s unfilmed screenplay “A.I. Artificial Intelligence” to the screen in — of all ironic years — 2001, most of us had never before heard the acronym. In science fiction parlance it represented an ominous harbinger of things to come — the prospect of a “Terminator”-level tipping point where humanity finds itself at risk of being replaced by its own artificial creations.
Twenty-three years later, “A.I.” is no longer science fiction or theory. The great android replacement theory has failed to materialize — but our incarnation of A.I. is widely deemed a threat just the same. So much so that for a solid year it has been the single greatest stumbling block in film industry labor negotiations — the central issue during last year’s SAG-AFTRA and WGA strikes, and a major factor in the ongoing negotiations with both Teamsters and animators. In a shockingly short period of time, A.I. has become the most ubiquitous — and the most misunderstood — acronym in the world.
Precisely what A.I. can and cannot do, and what it may and may not end up doing, remains a topic of considerable debate. A.I. integrations are now routinely used to “clean up” video, audio and still images, while millions of people use ChatGPT to reduce their research and writing workload. At the same time, A.I.’s foibles have been the stuff of headlines, from the fiasco of the first “Megalopolis” movie trailer to some profoundly embarrassing incidents involving Google’s Gemini and Adobe’s Firefly.
Ultimately, the real concern for creatives pertains to “likeness rights” and to the copyrighted intellectual property that may be used to “train” the machine learning algorithms that constitute the basic building blocks of all A.I.
To help separate fact from fiction from speculation, I went to the most authoritative source I know — veteran attorney Mark Lee of the law firm Rimon. An expert in the field of intellectual property, specifically as it pertains to the entertainment industry, Mark has done far-reaching work on behalf of artists, authors and athletes and the protection of their work and likenesses. In addition to contributing to “right of publicity” statutes in California, Ohio and Pennsylvania, he is the author of “Entertainment and Intellectual Property Law,” which is regularly updated and may be purchased from Thomson Reuters.
Given the complexity and seriousness of the subject, and the fact that A.I. will increasingly become a part of all our daily lives for the foreseeable future, I proposed a two-tiered approach: an exchange of questions and answers over email, furnished below, followed by a free-flowing podcast conversation which you can watch or listen to above.
W.M.
Wade Major: Is it fair to say that “AI” is being used as a blanket brand for a wide variety of machine learning algorithms and language models which are all fundamentally different tools?
Mark Lee: The answer may depend on how you define “different,” but I don’t think so. All AI tools differ from traditional algorithms in the same essential way. Traditional algorithm-based programs are all ultimately and utterly predictable. Responses to stimuli have all been worked out in advance. They will give the same answer to the same question every time.
In contrast, AI programs exhibit dynamic behavior, and can adapt and evolve as they are trained on more and better data. Answers to questions will differ, and often improve, as the AI learns how to better answer them from the input it is provided. And the AI decides how to do it.
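To make the distinction concrete, here is a minimal sketch in Python — purely illustrative and not drawn from the interview; all names and data are hypothetical. A traditional rule-based program returns the same answer every time, while even a toy “learner” changes its answer as it is trained on examples.

```python
# Illustrative toy code only: real AI systems are vastly more complex.

def traditional_rule(question: str) -> str:
    """A traditional algorithm: every response is worked out in advance."""
    answers = {"capital of france": "Paris"}
    return answers.get(question.lower(), "I don't know.")

class TinyLearner:
    """A toy 'learning' program: its answer to the same question can change
    as it is trained on more examples (here, a simple majority vote)."""

    def __init__(self) -> None:
        self.seen: dict[str, dict[str, int]] = {}

    def train(self, question: str, answer: str) -> None:
        counts = self.seen.setdefault(question.lower(), {})
        counts[answer] = counts.get(answer, 0) + 1

    def answer(self, question: str) -> str:
        counts = self.seen.get(question.lower())
        if not counts:
            return "I don't know yet."
        return max(counts, key=counts.get)  # the most frequently seen answer wins

model = TinyLearner()
print(model.answer("capital of france"))   # "I don't know yet."
model.train("capital of france", "Paris")
print(model.answer("capital of france"))   # "Paris" -- the answer changed with training
```

The deterministic function will answer identically forever; the learner’s behavior depends entirely on what it has been fed, which is the crux of the training-data disputes discussed below.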
Which is not to say the answer will always be right. In my field, the law, AI is already famous for giving wrong answers to legal questions, and making up non-existent statutes or case law to support wrong arguments in legal briefs. Some courts have already issued rules barring use of AI in pleadings and motions for this reason. But that is a different issue, and beyond the scope of your question.
WM: When “AI” entered the conversation over artists’ rights during the WGA and SAG-AFTRA strikes, there were wildly conflicting stories as to what was actually at stake. Can you clarify what the specific concerns are for actors and writers? And how do their concerns dovetail with those of the 200 recording artists who recently signed an open letter calling for protection from AI?
ML: The stakes were, and are, huge. AI “deepfake” technology can make it appear that an actor or musician rendered a performance in a film, television program, or music video he or she did not actually render. Photorealistic, digitally animated performances by Tom Cruise, or Clark Gable, or Sharon Stone, or Marilyn Monroe, became a very real possibility. The only impediment is legal, and given the current state of privacy and right of publicity laws, that protection is very uncertain. The Motion Picture Association of America has argued for decades that producers and studios have a First Amendment right to create such “deepfake” performances without the permission of the actor or his or her estate.
Similarly, AI can now write scripts, untouched by human hands, that these virtual actors can perform. They won’t contain truly “new” plots, since they will recombine elements from pre-existing works, but people have been arguing for years that there are only two, or four, or seven basic plots, so that does not create an insurmountable obstacle to AI-generated scripts. And in my opinion, most of the time copyright law is unlikely to prevent the creation of those new scripts, though the issue is being tested in court now.
In this technologically exploding and legally uncertain environment, any contractual protections the guilds can obtain through collective bargaining are an important bulwark against the unauthorized, free commercial use of a person’s identity or creativity, at least for guild members.
WM: In your view, were the guilds able to make gains on heading off the threat of AI? Or will they need to substantially revisit the subject when the current contracts expire?
ML: The SAG-AFTRA agreement requires a performer’s consent at each step of the generative AI process, or when a member’s performance will be imitated by a synthetic performer. The WGA agreement says AI can’t write or rewrite literary material or be considered source material, while a writer can choose to use AI when performing writing services, with company disclosure and consent. So, I believe the guilds did make important gains.
However, I suspect they will still need to substantially revisit the subject in the next rounds of negotiations. Exponential improvements in AI’s capabilities over the next five to ten years will create presently unanticipated problems and opportunities that both sides will likely have to address.
WM: What are the current legal challenges where AI is concerned for copyright holders concerned that their work is being ingested and used without their permission? Can you shed any light on the current spate of class action lawsuits by writers against OpenAI and Meta for copyright infringement?
ML: AI hoovers up huge amounts of data to help it “learn” how to respond to prompts. It does that by copying the data into the AI’s database. In most settings, that data is copyrighted by others, and the AI companies are massively copying hundreds, or thousands, or millions of works, virtually always without permission.
A potentially infringing “copy” obviously is made to the extent those works are uploaded without permission, but the AI creator has a significant “fair use” argument. Google, for example, has for decades used spiders to electronically scan and copy the millions of websites that populate the Google search engine. Google doesn’t seek permission from those website owners before doing that, though a website can “opt out” if it chooses. That massive copying has uniformly been held to be fair use.
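The “opt out” Lee mentions is conventionally implemented through a site’s robots.txt file. As a rough sketch, assuming a hypothetical crawler name (“ExampleBot”) and placeholder URLs, a well-behaved crawler checks that file before copying a page, using Python’s standard library:

```python
# Sketch of the robots.txt opt-out convention: a compliant crawler consults
# a site's robots.txt before fetching pages. "ExampleBot" and the URLs are
# placeholders, not real crawler names or sites.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

if parser.can_fetch("ExampleBot", "https://example.com/some-article"):
    print("Site permits crawling; a compliant crawler may copy the page.")
else:
    print("Site has opted out for this user agent; a compliant crawler skips it.")
```

Note that this is a voluntary convention: nothing in the file technically prevents a crawler that chooses to ignore it, which is part of why the legal questions matter.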
Similarly, almost 10 years ago, an appellate court held that Google’s efforts to copy every book in the English language to create a searchable database, and to provide digital copies of those books to libraries, was fair use. Authors Guild v. Google, Inc., 804 F.3d 202 (2d Cir. 2015). The court so ruled for several reasons, but a significant one was that Google’s output was restricted, which meant that the public could not view copies of the entire copied works, and therefore Google’s copies did not act as substitutes for the copied works in the marketplace. If Google can do that for its search engine, the argument goes, AI companies should be able to do the same for their AIs, because the AI’s output is also restricted, and the public generally cannot obtain a copy of the underlying work from the AI.
Will that argument be successful? We don’t know. A number of cases have been brought around the country by copyright owners, but they are all in their early stages, and so far the early decisions have not been illuminating, focusing instead on, for example, the particular language used in a particular complaint rather than the main legal issue. I personally suspect fair use may be a big obstacle for most copyright owners, since the output the AIs generate in response to prompts will seldom or never contain copyrightable content that can be traced to an underlying, copied work. And if it doesn’t or can’t be so traced, the argument will go, the fair use argument should prevail, because the AI’s output, like Google’s output, does not act as a substitute for the underlying works.
WM: Are the copyright infringement concerns different for writers versus visual artists or recording artists? If so, how?
ML: At present, copyright concerns are similar for writers and recording artists, since both are trying to protect their copyrighted works. Actors’ concerns are usually different, since they generally don’t own a copyright in their recorded performances and thus usually can’t allege copyright infringement when those performances are loaded into an AI. Actors usually must rely on other laws, such as the right of publicity, to protect their performances.
WM: India was the first to issue a legal ruling in this area when actor Anil Kapoor won a case against AI use of his likeness in a deepfake. Is that case a bellwether? Are “likeness rights” a possible way the issue might be tackled elsewhere?
ML: Right of publicity or name, image and likeness law is a way to address the issue, but there are uncertainties there, too. The right of publicity is a creature of state law, and varies from state to state. Many states presently have no clear right of publicity law that protects performances. Further, courts traditionally have been reluctant to recognize right of publicity claims involving “expressive” uses of someone’s identity in media, music or motion pictures on First Amendment grounds, though there are exceptions.
The major exception involves “performances.” Taking someone’s “performance” without permission has been held by the Supreme Court and various lower courts to violate the right of publicity and not constitute protected speech in a few states. This exception is limited to the use of an artist’s “performance” only; taking and using a performance in connection with biographical or other material about the artist would remain protected speech under existing case law. And some states, such as New York, limit their right of publicity to uses “for purposes of trade,” commonly thought to be limited to merchandise, which would mean that any performance in expressive works such as recorded music, television, motion pictures, or other media may not violate those statutes. Finally, in states whose right of publicity laws do attempt to protect against unauthorized exploitation of performances, attempts to use right of publicity law are already under attack in the courts on copyright preemption and First Amendment grounds. How courts will ultimately decide those attacks in the AI setting is presently unclear.
My bottom line is, current state right of publicity law is not well-suited to address unauthorized deepfake AI technology.
WM: Is it correct to say that in the end this will all probably require congressional action? And if so, what partisan and/or legislative hurdles need to be cleared for the legislation to be meaningful?
ML: To the extent we want to effectively regulate or prevent AI’s unauthorized use of others’ names, likenesses, and performances, state and/or federal legislation will probably be necessary. In fact, Tennessee already has enacted legislation it calls the ELVIS Act, which seeks to impose liability on generative AI companies and internet platforms that use a person’s likeness and voice without permission. Whether that legislation will survive a legal attack is uncertain.
Both Congress and several other states, including California, Kentucky, Illinois and Louisiana, have analogous legislation in various stages of consideration, though none has been enacted yet, and it is unclear when, or if, any will be.
WM: If there is limited action in only a few countries and territories, or if laws are not perfectly aligned, what are the chances that non-signatories will simply become rogue AI states and undermine the legislative attempts to control AI?
ML: Internationally-based intellectual property infringement has long been a problem, whether it involves AI technology or traditional copyright or trademark infringement originating from or in another country. Other countries’ status as signatories to international copyright or trademark conventions has not prevented them from hosting companies and/or computer or online platforms that engage in widespread infringement in the US.
Thus, while limited action on AI in only a few countries could complicate the international legal landscape, even universal agreement and widespread multinational treaties on how AI should be regulated may not materially reduce abuses.
WM: While AI has been described by some as a “plagiarism engine,” it has elsewhere been used to identify plagiarism, resulting in the firings of a number of college academics. Could it end up that AI actually polices the copyright infringements of other AI?
ML: AI can, and I’m sure will, help detect plagiarism, but that is not likely to solve the infringement problem. Attempts to use technology to locate, catch and cure infringement have been underway for years, but have not succeeded in stopping it. If anything, the rise of the Internet has increased infringement by making distribution of other people’s intellectual property easier. I see no reason to think that AI technology will succeed where others failed in that effort.
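For a sense of how such detection works at its simplest, here is an illustrative Python sketch — not any particular product’s method, and the sample texts are invented. It scores the overlap of word n-grams (“shingles”) between two texts, the basic idea behind many text-matching tools:

```python
# Toy text-matching sketch: real plagiarism detectors are far more
# sophisticated, but many build on this core idea of n-gram overlap.

def shingles(text: str, n: int = 3) -> set:
    """Break a text into overlapping word n-grams ('shingles')."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Share of shingles the two texts have in common (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "the quick brown fox jumps over the lazy dog near the river"
suspect = "the quick brown fox jumps over the lazy dog by the river"
print(f"similarity: {jaccard_similarity(original, suspect):.2f}")  # substantial overlap despite the edit
```

As Lee notes, the hard part is not scoring overlap but doing anything enforceable with the result at Internet scale.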
WM: A great deal was made of Google’s Gemini and Adobe’s Firefly producing racially inaccurate AI images of historic figures. The media made the case that this showed AI was not only deeply flawed but also that its results were too easily subject to manipulation by engineers. As the companies move forward and adjust their algorithms, what are the real takeaways for creators and copyright holders where AI engines like Gemini and Firefly are concerned?
ML: To misquote Shakespeare, the fault lies not in our AI technology, but in ourselves. To cite another common but true cliché about the output of computer programs generally: “Garbage in, garbage out.” With AI, shortcomings in the information provided to it, in how that information was chosen, in the structure of users’ prompts, and in the technology itself, which can so easily present incorrect, misleading or outright false information as “fact,” mean that AI output should not be confused with truth. Often it won’t be the truth at all, but a hallucination that AI technology makes appear even better than the real thing.
WM: In the current unregulated environment, what advice would you give to creators to protect their copyrighted material, and to performers and recording artists to protect their work and likenesses?
ML: Do everything you can to protect your work through copyright notices, copyright registration, provisions in your online terms of use, and NDAs or other agreements. Join any guild you can that seeks to protect the intellectual property of its members. Consider whether to join classes of people bringing class action lawsuits to prevent unauthorized exploitation of your work.
And then, recognize it’s not enough.
WM: If you had a crystal ball, where do you see this issue shaking out in the next ten or twenty years?
ML: I don’t know if it will be shaken out in the next ten to twenty years, and if it is, I have no idea how it will be. I don’t have confidence in anyone who claims they do. AI’s capabilities are increasing so rapidly, and legal mechanisms to regulate them are in such turmoil, that it is presently unclear whether, and how, this will be resolved, if it is resolved at all. We are cursed with living in interesting times.
I hope it will ultimately be resolved in a way that allows modern technology to enhance our lives, while protecting uniquely human creativity and originality. Whether that happens is up to us.