The First Battles in the Legal War Over the Future of AI Have Begun
The first lawsuits are ripening; AI firms begin offering legal coverage.
As of this writing, Sam Altman is still out as CEO of OpenAI, the maker of ChatGPT. The Economist depicts this firing as a battle in the war between AI “boomers” and “doomers.”
The doomers believe AI threatens humanity and must be constrained. The boomers believe this threat is overstated and that AI progress should be turbocharged to fuel productivity increases and wealth generation.
The battle over control of OpenAI is odd because of the quirky nature of its corporate governance. The parent company is a nonprofit that owns a for-profit subsidiary. Accordingly, the nonprofit board doesn’t answer to the investors in the for-profit subsidiary. Thus, the board can act for societal-benefit reasons. But we don’t yet know whether concern over AI safety was its reason for firing Altman.
While this corporate drama plays out, the real war over the future of AI is being fought in courtrooms. This war is over whether copyright and the right of publicity will kill AI. This war has caused some AI makers to offer some legal coverage for some AI products.
Legal Threats to AI
There are two copyright threats to AI. The bigger threat concerns training data.
AI systems need massive amounts of training data to tune themselves sufficiently to do their magic. It is widely believed that many large AI systems, such as ChatGPT, have been trained on material copied off the Internet without the permission of the copyright owners. These AI makers have largely been silent about their training data sources.
Various content creators and licensors have filed lawsuits against the makers of AI systems, contending such copying and use without their permission is copyright infringement. Beyond that, the same content creators sometimes contend that the AI output is too similar to their creative works and, thus, is also copyright infringement.
Beyond these copyright claims, some celebrity writers and performers claim AI systems violate their publicity rights. The right of publicity is the right to control the use of your name, image, and likeness for a commercial purpose.
Many AI systems will generate content in the style of a famous author or artist. Some AI makers claim to have implemented “guardrails” to prevent such mimicry, but those guardrails may not be effective.
Numerous lawsuits have been filed attacking AI systems. The plaintiffs include famous authors such as John Grisham and George R. R. Martin and performers such as actress Sarah Silverman. Also, Getty Images sued because it fears image-generating AIs will kill its stock-photo business.
These lawsuits are just reaching the stage where courts are making preliminary rulings. So far, no court has granted definitive victory to either side.
IMO, AI is Likely to Beat the Legal Challenges
In my opinion, the AI makers have the better argument. I believe the courts will ultimately hold that it is fair use to train AI systems on other people’s copyright-protected material found on the Internet without their permission.
I base this on how AI systems work. These systems don’t store and regurgitate copies of the material they ingest during training. They just study those copies to map relationships between words or other data to set “weights” in their neural networks. These tuned neural networks later produce amazing output in response to prompts.
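For readers who want a concrete picture of that distinction, here is a deliberately toy sketch. It is written in Python with made-up sample text, and the names in it (training_texts, generate, and so on) are purely illustrative. It shows the principle described above: training reduces the ingested text to a table of numeric weights describing how words relate to one another, and generation then samples from those weights rather than retrieving stored copies. Real systems are vastly more complex, and this is not how any vendor’s product actually works; it is only an illustration of the idea.

```python
# Toy illustration: "training" stores statistical relationships between
# words as numeric weights, not copies of the source text.
from collections import defaultdict
import random

training_texts = [  # stand-ins for ingested training material
    "the court held the use was fair",
    "the court held the claim failed",
]

# "Training": count how often each word follows another, then normalize
# the counts into weights (here, simple next-word probabilities).
counts = defaultdict(lambda: defaultdict(int))
for text in training_texts:
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1

weights = {
    word: {nxt: n / sum(following.values()) for nxt, n in following.items()}
    for word, following in counts.items()
}

# What the "model" retains is only this table of numbers, not the texts.
print(weights["held"])  # e.g. {'the': 1.0}

# Generation: sample from the weights to produce a new word sequence.
def generate(start, length=6):
    out = [start]
    for _ in range(length):
        options = weights.get(out[-1])
        if not options:
            break
        out.append(random.choices(list(options), list(options.values()))[0])
    return " ".join(out)

print(generate("the"))
```

The point of the sketch is simply that what survives training is a set of numbers describing patterns, which is why the fair-use argument focuses on how the material is used rather than on whether copies are kept.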
As for output, except in rare cases, the AI output in response to a prompt shouldn’t be similar enough to any single piece of training data to constitute copyright infringement.
Content creators are mad because they fear AI will reduce or eliminate the demand for their creative services. Why buy a new novel by your favorite author when AI can generate one in that author's style – perhaps a novel that fits your niche interest?
Yet, “in the style of” probably is not the basis for a winning copyright infringement claim or right of publicity claim. You generally are free to create something “in the style of” a famous author or artist, provided your work is not substantially similar to any single creation of that person.
Some AI Makers Are Offering Legal Coverage
Still, the multiplicity of these lawsuits and the possibility that some might succeed have driven most major AI players to begin offering copyright-infringement protection to their customers. This coverage is just emerging. Important details remain to be revealed.
So far, the following companies have announced some copyright-infringement coverage for some of their AI services: OpenAI (the maker of ChatGPT and DALL-E), Google, Microsoft (which reportedly owns 49% of OpenAI’s for-profit subsidiary), IBM, Amazon, Adobe, Getty Images, and Shutterstock.
But some important players are not yet providing coverage, such as Stability AI, Midjourney, and Anthropic. Also, I haven’t seen any indication that Apple, Facebook, or Twitter (now X) is providing coverage.
Is the Coverage Good Enough?
Is the copyright-infringement coverage from the major AI players meaningful and reliable? It’s too soon to say.
The coverage offered so far is limited to specific corporate AI products and is not provided for widely used consumer services. For example, neither ChatGPT nor Google Bard is covered. That exclusion applies to both the free version of ChatGPT and the Plus version, which is available for a $20-per-month subscription.
Also, in some cases, such as with OpenAI, it’s unclear whether a copyright infringement claim over training data is covered.
In addition, some companies have not publicly released the contractual language of the coverage. They have released only PR statements about it.
Finally, some coverage promises contain qualifiers that may be impractical to meet. For example, Google limits its coverage to unmodified outputs. What if you use the output as a starting point for your work and make modifications, but the infringement arises from Google’s output, not your modifications?
Some companies, such as Microsoft, condition coverage on the customer using their guardrails, content filters, and other safety systems. I can’t say whether those protective features undercut the value of using the AI.
The bottom line for such copyright coverage is that you have to do your homework. You need your AI vendor to tell you which specific AI products are covered. You should also have an attorney review the contractual coverage language and give you an opinion on how protective it is. Don’t rely on just a press release touting the coverage.
Overall, the time has come to watch the AI legal battlefield, for as George R.R. Martin famously wrote, “Winter is coming.”
Written on November 21, 2023
by John B. Farmer
© 2023 Leading-Edge Law Group, PLC. All rights reserved.