If you spend any time on law-related social media, you’ve probably encountered chatter about ChatGPT, an AI chatbot system built on OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), a language model that uses deep learning to produce human-like text. (Yes, we’re talking artificial intelligence.)
People have experimented with ChatGPT by asking it to do all sorts of things. (To pick an example at random, “Write the transcript of an NPR interview with SpongeBob about the rumors he’s dating Patrick Star.”) But inevitably, some have wondered what the implications are for drafting contracts.
If you’re looking for optimal drafting, the answer is easy: stay away from ChatGPT in particular and artificial intelligence in general.
ChatGPT would create a draft contract by cobbling together an amalgam of relevant language in its vast trove of text. What it comes up with would reflect whatever patterns happen to predominate, but as a matter of quality control generally, what predominates isn’t necessarily the best or the most relevant. In many contexts that might not matter much, but contracts aren’t a promising candidate for that kind of expediency—too much is at stake. Furthermore, traditional contract language is a dysfunctional mess, so it’s inconceivable that ChatGPT would somehow stumble its way into optimal drafting. (Browsing my book A Manual of Style for Contract Drafting, my articles, and my blog posts will give you a sense of the dysfunction.)
As my own experiment, I asked ChatGPT what the difference is between representations and warranties. The answer was a bland recitation of a version of the addled conventional wisdom I debunked in this 2015 article.
A Leap Into the Void
Another shortcoming of ChatGPT is that nothing about it encourages the leap of faith that’s required to use someone else’s contract language. Here’s some of what Nir Golan says about that in this LinkedIn post:
When it comes to the output/legal content created by ChatGPT, I do wonder if this trust/reliability issue will be a blocking point/challenge for legal users/lawyers here also. In the legal context, the contract clause/template or summary does not need to be “correct” but I believe it needs to be reliable, thorough, and trustworthy content coming from a legal expert.
So what’s the alternative? Expertise! It would be a simple matter to offer contract templates that are automated, customizable, annotated, clear, and concise. Here’s what Brad Newman says about that in this LinkedIn post:
GPT-3-enabled tools may be great at explaining concepts, finding relevant concepts, and perhaps analyzing and generating discrete contractual provisions or briefings. However, I think web form-based document assembly tools like ContractExpress will continue to have their place in a lawyer’s automation toolkit.
The efficiency of filling in a few text fields and flipping a few boolean switches and generating precise results based on a carefully maintained form … I just don’t see GPT-3 making that process better.
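The assembly model Newman describes can be sketched in a few lines. This is a hypothetical illustration only (the function, field names, and clause text are invented, not ContractExpress's actual product): a few text fields and a boolean switch deterministically select pre-approved language.

```python
# Minimal sketch of form-based document assembly: a few fields and a
# boolean switch deterministically produce vetted contract language.
# All clause text and parameter names here are hypothetical illustrations.

def assemble_payment_clause(buyer: str, seller: str,
                            net_days: int,
                            include_late_fee: bool) -> str:
    clauses = [
        f"{buyer} shall pay {seller} no later than {net_days} days "
        f"after receiving an invoice."
    ]
    if include_late_fee:  # boolean switch selects a pre-approved clause
        clauses.append(
            "Any amount not paid when due will accrue interest at 1% per month."
        )
    return " ".join(clauses)

print(assemble_payment_clause("Acme", "Widgetco", 30, include_late_fee=True))
```

The point of the design is that every possible output was drafted and reviewed in advance; the user's choices only select among vetted alternatives, which is what a generative model can't guarantee.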
Similarly, after considering what one might do with ChatGPT and AI generally, Ryan Samii (of the startup Standard Draft) said in this tweet, “Standardized contracts (along with a more structured approach to drafting and negotiating them) is the much more sensible and straightforward approach.”
Copy-and-Paste By Another Name
ChatGPT has people worried. For example, there’s this tweet by Paul Kedrosky:
I am so troubled by what I see everywhere all at once with ChatGPT in the last few days. College and high school essays, college applications, legal documents, coercion, threats, programming, etc.: All fake, all highly credible.
— Paul Kedrosky (@pkedrosky) December 8, 2022
In other contexts, the concern might be justified, but traditional contracts are already dysfunctional—it’s hard to see how ChatGPT could make things worse. If you’re satisfied with cranking the handle of the copy-and-paste machine, you have no reason to look down your nose at ChatGPT.
In a LinkedIn post, Andrew Stokes, CEO of The Law Machine, described his experiment in using ChatGPT to draft a payment provision for a construction contract. In the course of discussing the shortcomings, here's how he assessed it: “But frankly, nothing significantly worse than stuff I’ve seen from junior lawyers over the years.”

And on Twitter, Jason Morris suggests that if you blindly use precedent contracts, ChatGPT might represent an improvement:
People saying using GPT3 for legal drafting risks over-reliance, when they learned to draft by copying utterly random precedents word for word.
The problem is not that you will become overly reliant, because you already are, just on an astronomically smaller data set.
— Jason Morris (he/him)💻⚖️🇨🇦 (@RoundTableLaw) December 7, 2022
If ChatGPT has a chance of being a better option than what you’re currently doing, you should think long and hard about your choices.
If AI has serious limitations for contract drafting, it follows that it also has serious limitations for contract review. I wrote about that in this 2021 post and this 2019 post on LegalSifter’s blog.
But, you sputter, aren’t you chief content officer of LegalSifter, an AI-and-contracts company that, among other things, helps with contract review? Yes, that’s right, but the big difference with what we do is that we don’t just look for patterns based on traditional contract dysfunction. Instead, we deploy expertise (for now, mostly my expertise) in training our AI. Here’s what I say about that in the 2019 post:
By contrast, LegalSifter relies on expertise. We build “sifters”—algorithms that look for specific contract concepts—and bundle them in document types targeted at different transactions (leases, sponsored research agreements, services agreements, hotel agreements, and so on) and different users (buyers, sellers, landlords, tenants, and so on). The expertise comes into play in deciding what issues to look for, in determining how those issues are expressed in contracts (so we can instruct the technology accordingly), and in deciding what to tell users about those issues.
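The quoted description of sifters can be sketched as follows. This is only an invented illustration of the shape of the idea (concept names, patterns, and advice text are all hypothetical, and a real system would use trained models rather than regular expressions): each sifter looks for one contract concept and, when it finds it, tells the user what an expert would flag.

```python
import re

# Hypothetical sketch of a "sifter": a rule that looks for one contract
# concept and surfaces expert guidance about it. All names, patterns,
# and advice strings here are invented for illustration.

SIFTERS = {
    "indemnification": {
        "pattern": re.compile(r"\bindemnif(?:y|ies|ication)\b", re.I),
        "advice": "Check the scope of the indemnity and any cap on liability.",
    },
    "governing_law": {
        "pattern": re.compile(r"\bgoverning law\b|\bgoverned by the laws\b", re.I),
        "advice": "Confirm the chosen jurisdiction is acceptable.",
    },
}

def sift(contract_text: str) -> dict:
    """Return the advice for each concept found in the contract text."""
    return {
        name: sifter["advice"]
        for name, sifter in SIFTERS.items()
        if sifter["pattern"].search(contract_text)
    }

hits = sift("This agreement is governed by the laws of New York.")
```

The expertise lives in three places the code makes visible: choosing which concepts to look for, encoding how contracts express them, and deciding what to tell the user when they appear.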
No More GIGO, Thanks
When it comes to contracts, the legal profession is in the habit of looking to the latest buzzword to save it from itself. AI is only the most recent buzzword. But if you want to improve your contract process, as opposed to just seeming to improve it, you’ll need real expertise, not bogus conventional wisdom. It won’t be enough to just use the word excellence a lot. And you’ll have to be willing to change. Otherwise, you’re doomed to endless garbage in, garbage out.
6 thoughts on “ChatGPT Won’t Fix Contracts”
As an engineer turned operations management consultant before becoming an attorney, I have often been heard to say, again and again, “Don’t automate anything you can’t do flawlessly manually.”
There seems to be a widespread childish faith in technology, and an apparent dearth of people who have seen and understood the message in Disney’s “Sorcerer’s Apprentice” cartoon from Fantasia. The further you get from people who actually work with technology, the worse the disease seems to be, and most lawyers fit that bill.
Garry Kasparov has done a lot of good thinking and writing on this subject. Real experts can incorporate technology into their practices and increase their natural advantages over others lacking that expertise. The opposite is not true: people who lack the expertise cannot expect to specify or implement technology to make up the difference. The Dunning-Kruger effect is a universal truth: if you aren’t good at something, you likely don’t know you’re not good at it, because if you knew what good was, you would be able to see the difference between what you do and what good is. So if you’re a great contract drafter, you can probably increase your advantages over others by using technology to supplement your skill; but an average contract drafter will not be able to use the same technology to the same effect.
A good friend of mine many years ago, also an engineer, had his own phrase for what you (insightfully) suggest above, John. He told me never to engineer around stupidity. (He’d go on about how it just encourages them to come up with more creative ways to be stupid. “Them” in that case was radio disc jockeys. But I’ve seen that the principle applies broadly.)
So the notion that we cannot get a machine to do that which we have not done ourselves yet rings true for me.
I read your article and it was very informative. Its case against GPT-based drafting seems to focus on pre-trained models such as ChatGPT that use the so-called “astronomically smaller [contract-related] data set,” as was quoted. But GPT-3 (not 3.5 or 4 but maybe that’s coming?) can be trained with arbitrary data sets; and certainly they can be trained with an astronomical amount of data—that’s fundamental to their function. Do you not see potential from creating fine-tuned models with your and others’ expertise?
Yes, I see that potential. I noticed that Casey Flaherty recently said “maybe, someday, an LLM will be used to apply the sixth edition of Ken’s MSCD to every contract destined for EDGAR.” But LLMs are parasitic: for them to emulate quality, that quality must first exist. Currently, it doesn’t. Once it does, we can resume this conversation!
I think I see! The quote, “traditional contract language is a dysfunctional mess,” probably sums it up best. Fine-tuning for drafting would require a massive amount of hands-on model training as you do with review, and there are a lot of bad contract examples that need to be explicitly excluded. Take care