In this post last month, I responded to this blog post by Kingsley Martin by considering the extent to which artificial-intelligence analysis of a set of precedent contracts can, by itself, allow you to create optimal contract language. I concluded that it cannot. I’m now going to continue gnawing on that bone; if you found my previous post a drag, you might want to sit this one out.
Kingsley responded to my post by appending this comment to his own blog post. It includes the following:
“What is” also captures the degree of conformity (or divergence) in a set of documents. This degree of conformity is commensurate with the degree of consensus among the drafters. My experience has shown that we can move to standards (and hopefully to optimal standards) slowly and incrementally. I have yet to find a situation where a law firm or corporate legal department will adopt vastly different drafting conventions compared to their current practice.
So Kingsley acknowledges that what he calls “automated transaction analysis” doesn’t determine what’s clearest; it determines what’s standard. And he says that it can be easier to sell lawyers on what’s standard than on what’s best—no surprise there.
To write clearly, you have to know what works best, and I’ve set myself the task of figuring that out for purposes of contract drafting. And the prevalence of a given usage has no bearing on how clear it is.
To assess the difference between what’s standard and what’s best, and where the boundary lies between automated analysis and editorial control, let’s consider how that might play out in practice.
Imagine that you’re looking to add an efforts provision to a contract. If you use automated transaction analysis to explore how efforts provisions are handled in a broad set of precedent contracts, you’ll likely find that they use a whole menagerie of efforts variants. (I have a sense of what the results might be because I performed a primitive version of such an analysis: I looked at what efforts provisions were used, and how often, in contracts filed on the SEC’s EDGAR system in a single month. That analysis is contained in this 2004 article; don’t bother reading the rest of it, as it’s out of date.)
So your automated analysis tells you the frequency with which your precedent contracts use the phrases best efforts, reasonable efforts, commercially reasonable efforts, reasonable best efforts, good-faith efforts, and so on. What the heck do you do then? If you’re relying solely on automated analysis, your only option is to go with the most frequently used variant, as being the most “standard.”
But that would be reckless, because different terminology necessarily carries with it the potential for different meaning. You have to know what the implications are of efforts terminology, and you’re not going to get that from automated analysis.
If you want someone to tell you how best to handle efforts provisions, I suggest you consult chapter 7 of MSCD. (If you prefer your analysis piecemeal, search for “efforts” on this site and on the mothballed AdamsDrafting blog.) MSCD makes a series of recommendations regarding efforts provisions, including the most obvious one: The notion that a best efforts obligation is more onerous than one using reasonable efforts is unworkable, so don’t use best efforts unless you want to sow seeds of confusion.
The limitations of automated analysis for purposes of distinguishing between efforts provisions are representative of the limitations of automated analysis generally in the absence of strong editorial control. Automated analysis can be very valuable for establishing what you want to say in a contract, but determining how best to say it is where strong editorial control comes in.
And if you’re not careful, automated analysis can have you barking up the wrong tree even when it comes to what you say in a contract. “Successors and assigns” provision, anyone?