Since the public launch of ChatGPT late last year, I have been inundated with solicitations from companies offering new AI tools for business contracting. Are they ready for prime time?
These AI products generally claim they can do three things: summarize a contract, review a contract for certain kinds of terms (e.g., limitation on liability), and suggest amendments to a contract based on policies you load into the system (e.g., your company is the vendor, and you wish to limit its maximum liability to the total amount paid).
The prospect of AI improving contracting is tantalizing: Contracting costs could drop significantly because less attorney time would be required, and deals could move faster. The AI might spot issues a lawyer might miss, do better at keeping contract language consistent, and provide insights from analyzing the user's uploaded past contracts.
Technologically, AI for contracting has strengths and weaknesses that make it promising for some situations but unlikely to be helpful (or even counterproductive) in others.
Consider this analogy: Would you feel comfortable riding in the back seat of a self-driving car with no one behind the wheel on a road trip mixing highway and urban driving? I wouldn’t – too risky. That’s analogous to using AI contracting software with no human oversight.
How about if there is a person behind the wheel, but that person has no driving experience? That’s also too risky. It would be like having someone doing contracting with the AI tool who has no legal training or experience in contract law.
How about if the person behind the wheel is an experienced driver? Now you’re safe, maybe safer than just having a human driver, but is it worthwhile to pay the premium price for the car’s self-driving technology on top of paying the driver?
The analogy isn’t perfect, but the point is that safe usage of AI, whether in cars or contracts, will produce only marginal savings.
With that expectation set, let’s look at how well AI tools should do at various contracting tasks. The offerings in this space are so new that we can’t yet say whether any are now worthwhile. Still, based on talking with people who have expertise in AI technology, here are some predictions on usefulness.
Contract Summarization
Of the three things a company might call on an AI contracting product to do (contract summarization, review for critical terms, and language revision), the AI will be weakest in summarizing.
Technologically, to perform summarization, the AI will review the contract to try to ascertain which words are most important and then eliminate unimportant parts to generate a summary.
In doing so, it might make critical mistakes. It might fail to resolve indefinite pronouns or other indefinite words, such as a stand-alone “this” that refers to something elsewhere in the contract. It might omit a key negation term (such as “not”) because of its placement. It might misreport a key quantitative value (e.g., a monetary amount, or the number of days to do something) because the summary cuts out key details or qualifiers.
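To make the negation risk concrete, here is a toy, purely illustrative sketch of frequency-based extractive summarization, the “keep the important parts, cut the rest” approach described above. It is not how any commercial product actually works, and the contract text, function name, and scoring are all invented for the example. Notice that the summarizer keeps the indemnity sentence but drops the “shall not apply” carve-out, reversing the contract’s practical effect:

```python
import re

def naive_extractive_summary(text, max_sentences=1):
    """Toy summarizer: keep only the highest-scoring sentences,
    scored by average word frequency across the document."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = re.findall(r'[a-z]+', text.lower())
    freq = {}
    for w in words:
        freq[w] = freq.get(w, 0) + 1

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq.get(t, 0) for t in tokens) / max(len(tokens), 1)

    kept = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Preserve original sentence order in the output.
    return " ".join(s for s in sentences if s in kept)

contract = (
    "The Vendor shall indemnify the Customer for all claims. "
    "This obligation shall not apply to claims arising from the Customer's negligence."
)
print(naive_extractive_summary(contract))
# The carve-out sentence (containing "not") is cut, so the summary
# wrongly suggests an unlimited indemnity.
```

A modern large language model summarizes more capably than this toy, but the failure mode is the same in kind: compression can silently drop a qualifier or negation that changes the legal meaning.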
Thus, it’s risky to rely on an AI-generated summary to report the critical aspects of a contract accurately. You can reduce that risk by letting the AI generate a longer summary, but that might defeat the purpose of getting a summary.
Contract Revision
The AI should do well at identifying important contractual terms and suggesting revisions to bring them in line with the contracting policies you input into the system. Using various tools, including computerized thesauruses, it should rarely fail to flag the types of terms you want found or to substitute language reflecting your preset policies.
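The flag-and-substitute workflow these products describe can be sketched as follows. This is a deliberately simplified, deterministic stand-in (a real product would use machine learning rather than regular expressions), and the policy names, patterns, and contract text are all invented for illustration:

```python
import re

# Hypothetical policy book: each policy has a pattern that flags a
# clause type and preset replacement language to suggest.
POLICIES = [
    {
        "name": "liability cap",
        "pattern": re.compile(r"liab\w*", re.IGNORECASE),
        "suggested": ("Vendor's aggregate liability shall not exceed "
                      "the total fees paid under this Agreement."),
    },
]

def review(contract_text):
    """Return (clause, policy name, suggested language) for each flagged clause."""
    clauses = [c.strip() for c in contract_text.split(".") if c.strip()]
    hits = []
    for clause in clauses:
        for policy in POLICIES:
            if policy["pattern"].search(clause):
                hits.append((clause, policy["name"], policy["suggested"]))
    return hits

contract = ("Vendor shall deliver the software by June 1. "
            "Vendor shall be liable for all damages of any kind.")
for clause, name, suggestion in review(contract):
    print(f"[{name}] {clause}\n  -> {suggestion}")
```

The sketch also shows where the risk lies: a clause the policy book never anticipated, or one phrased unusually, is simply never flagged, and nothing in the output reveals the omission.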
But are you willing to trust the computer not to miss or mishandle crucial issues? An attorney can check the replacement language quickly by reviewing the redlined changes. But verifying that the AI flagged all the important contractual terms would require the attorney to review the whole document, which cuts into the savings.
Thus, your savings will depend on your risk tolerance. Perhaps have the attorney spend more time with important contracts.
Best Situations for AI Usage
AI contracting will work best when the types of contracts processed are commonplace and the stakes are not high. The type of contract must be commonplace for the AI’s training data to give it adequate guidance.
If the contracting situation is rare, such as a contract containing custom terms not frequently found in contracts, the AI may struggle. It might do poorly at flagging important terms because it doesn’t know what is important in uncommon situations. It also might do a poor job of suggesting revised language, again because its training data contains few comparable examples.
Beyond that, the AI might be helpful to a contracts lawyer as a brainstorming buddy. For example, if a contracts lawyer asks an AI to draft a contract of a particular type, the AI might suggest contract terms the lawyer wouldn’t think of or remember to include.
AI Won’t Help with Custom Business Terms
An AI likely will provide no or perhaps negative value concerning custom business terms. A contract is not just boilerplate, such as termination rights, limitation of liability, confidentiality, and indemnity and defense obligations. The contract usually contains business terms, such as prices, timelines, and descriptions of deliverables and services.
Unfortunately, those business terms are often vague or incomplete because they are written by business managers rather than lawyers. The AI isn’t going to be materially helpful in identifying or fixing those inadequacies. When contractual relationships fail, a frequent cause is the lack of clarity and detail in business terms.
Consider Bias
If the training data used by the AI consists of a corpus of contracts slanted toward one side of a deal, the AI might do poorly in suggesting revisions appropriate to the interests of the other side of the deal.
For example, if you represent a borrower in negotiating a commercial loan agreement, and if the AI learned on commercial loan agreements drafted by lenders, it might do a poor job of flagging important terms and suggesting appropriate revisions that protect the borrower.
It should still do a good job of handling specific contracting policies you input into the AI, such as striking any confession-of-judgment provision. But, due to training bias, the AI might miss issues you didn’t include in your contracting policies. It’s hard to think of everything a biased contract might throw at you.
Overall, the AI may deliver enough value to justify its cost in routine contracting situations, where key terms can be flagged and revised in line with specific policies imported into the system. It may deliver less value, or even negative value, in custom contracting situations, especially concerning custom business terms.
Written on April 19, 2023
by John B. Farmer
© 2023 Leading-Edge Law Group, PLC. All rights reserved.