Automated Review of Contracts: Some Thoughts on LawGeex’s AI-Versus-Humans Study

I’m a contract-drafting guy, but I have to acknowledge that drafting contracts might not be the most annoying part of the day-to-day contracts process.

Assume that Acme does ten deals with ten different companies in which it drafts the contracts using its templates. Then assume that instead it does those deals using the other guy's drafts. Odds are that in the second scenario Acme would end up doing a lot more work than in the first: producing a draft from a template it's familiar with is likely to take less time than reviewing a draft produced by the other side.

So it’s not surprising that software aimed at reviewing the other side’s drafts should now be attracting attention. The two names I’m familiar with are LegalSifter and LawGeex. They’re welcome innovations.

LawGeex recently disseminated this study that made the following claim:

In a landmark study, 20 experienced US-trained lawyers were pitted against the LawGeex Artificial Intelligence algorithm. The 40-page study details how AI has overtaken top lawyers for the first time in accurately spotting risks in everyday business contracts.

That prompted a bunch of hyperventilating articles, including this one.

I’m sure the reported results reflect what happened. I know something about confidentiality agreements, having spent around a year building an automated one. (It’s described in this LinkedIn article.) So I had a look at the LawGeex study, and it prompted the following thoughts. My intention isn’t to criticize, but to offer some context.

Humans Are Fallible

Yes, humans are fallible. In terms of contract drafting, my presumption is that everything is bad, and I've offered on this blog many examples of that. When that's not the case, I'm pleasantly surprised. So I would expect a comparable dynamic to apply when it comes to review. But the circumstances of LawGeex's study are a worst-case scenario. If a company is paying attention, it would give those reviewing confidentiality agreements some sort of checklist against which to measure what they're reviewing. By contrast, those taking part in LawGeex's test had to rely only on their experience and native wit.

Granularity

Second, the issues flagged by LawGeex are very broad. For example, one issue was the presence of a no-soliciting provision. In my automated confidentiality agreement, the no-soliciting provision is customizable up the wazoo, starting with whether it covers just hiring or both hiring and soliciting. Simply flagging no-soliciting provisions doesn't get one very far.
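To make the point concrete, here's a hypothetical sketch, in Python, of the difference between merely flagging a provision and capturing the parameters a reviewer has to assess. The attribute names are mine, not LawGeex's:

    from dataclasses import dataclass

    # Coarse review: a single yes/no flag.
    coarse_result = {"has_no_soliciting_provision": True}

    # Granular review: the parameters that determine whether
    # the provision is acceptable.
    @dataclass
    class NoSolicitingProvision:
        covers_hiring: bool         # prohibits hiring the other side's personnel
        covers_soliciting: bool     # also prohibits soliciting them
        duration_months: int        # how long the restriction lasts
        general_ads_carveout: bool  # are responses to general job postings excluded?

    provision = NoSolicitingProvision(
        covers_hiring=True,
        covers_soliciting=True,
        duration_months=18,
        general_ads_carveout=False,
    )

Whether a provision is acceptable turns on parameters like these, not on its mere presence.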

Necessarily Limited Scope

Third, inevitably, LawGeex’s list of issues wasn’t comprehensive. To select an example at random, it doesn’t include flagging instances of the word proprietary, something I wrote about in this 2010 post. And their list doesn’t cover general drafting issues, such as whether something should be expressed as a condition and not as an obligation, and I doubt it ever will. That’s why review by software should support review by a person, not replace it.

What Comes Next

Fourth, spotting issues is only the first part of what the technology does: LawGeex then suggests edits based on a company's pre-defined legal policies. I'd be interested to know the level of detail involved, in terms of both what the software can read and the edits it suggests. But that would require a demo; for purposes of this post, I'm just looking at their study.
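By way of illustration only, here's a hypothetical sketch of what mapping flagged issues to pre-defined policies might look like. It doesn't purport to show how LawGeex actually works; the issue names and policies are made up:

    # A made-up company policy for two issues, each with a fallback edit
    # to suggest when a draft falls outside the policy.
    POLICY = {
        "no_soliciting_duration_months": {
            "max_acceptable": 12,
            "suggested_edit": "Reduce the no-soliciting period to 12 months.",
        },
        "governing_law": {
            "acceptable": {"New York", "Delaware"},
            "suggested_edit": "Change the governing law to New York.",
        },
    }

    def suggest_edits(flagged: dict) -> list:
        # Compare each flagged issue against policy and collect suggested edits.
        suggestions = []
        duration = flagged.get("no_soliciting_duration_months")
        if duration is not None and duration > POLICY["no_soliciting_duration_months"]["max_acceptable"]:
            suggestions.append(POLICY["no_soliciting_duration_months"]["suggested_edit"])
        law = flagged.get("governing_law")
        if law is not None and law not in POLICY["governing_law"]["acceptable"]:
            suggestions.append(POLICY["governing_law"]["suggested_edit"])
        return suggestions

    # A draft with an 18-month restriction governed by California law
    # draws both suggested edits.
    print(suggest_edits({"no_soliciting_duration_months": 18, "governing_law": "California"}))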

Whom Do You Trust?

And fifth, my biggest question about the new crop of "AI" technologies isn't the technology per se, it's the human expertise the technology incorporates. That concern applies to all services that address contract content. I'm toying with the slogan "Editorial expertise is the new black box." In the case of services that offer contract templates, if I don't know who prepared a template, I'm not going to trust it. Even if I do know, I'll be skeptical unless given good reason not to be. Relying on someone's contract language requires a leap of faith, which is why I know that I have to not only be an expert but also appear to be an expert. The same goes for services that assist with review.

To its credit, LawGeex identifies the “team of prestigious law professors and veteran lawyers” that prepared the list of issues that forms the basis for the test. But I happened to spot that the heading for one of the identified issues was “Exclusion—Public Domain.” That’s a little worrisome: as I note in this 2010 post, the phrase in the public domain “has no bearing on how widely available any given information is. Instead, it means that the information isn’t protected by intellectual-property rights and so can be used by anyone free of charge. That would represent an irrationally narrow exclusion from the definition of ‘Confidential Information’ ….” So LawGeex’s team flubbed by using that phrase, albeit just in a heading. Might they have missed other stuff? Just as those performing an old-fashioned review are likely to be fallible, the experts giving instructions to AI might be fallible too. [Updated 13 September 2022: This take on public domain is insufficiently nuanced. For my revised take, see this 2021 blog post.]

Most people shouldn’t find that sort of problem disconcerting, as most of us would be grateful to have the benefit of the collective expertise of LawGeex’s team, even if they’re fallible.

Other Kinds of Contracts

It’s not surprising that LawGeex’s report features confidentiality agreements. They’re the cockroach of the contract world—ubiquitous, annoying, and apparently indestructible. And you see the same issues in contract after contract. It will be interesting to see how LawGeex and its competitors do when it comes to reviewing more fluid kinds of contracts.

This category of product has the potential to make contract review quicker and more effective. Let’s see whether the technology and the underlying expertise are up to the job. And, to quote this post, let’s see whether the intended users give a ****.

About the author

Ken Adams is the leading authority on how to say clearly whatever you want to say in a contract. He’s author of A Manual of Style for Contract Drafting, and he offers online and in-person training around the world. He’s also chief content officer of LegalSifter, Inc., a company that combines artificial intelligence and expertise to assist with review of contracts.

4 thoughts on “Automated Review of Contracts: Some Thoughts on LawGeex’s AI-Versus-Humans Study”

  1. I am equally sceptical of AI contract review. However, I am also interested to know how it can produce efficiencies. The question is, how does a lawyer work with AI tech without performing the entire review him- or herself, as would otherwise be the case? Because if that's what they are going to do, what is AI adding, other than cost and some additional peace of mind?

    The one aspect in which I differ with Ken is that he ignores the capacity of these systems to be taught. Out of the box they may be general and high-level, but they can be trained on the points Ken identifies as criticisms.

    • Hello Martin,

      In my view, if the AI contract review combines human and computer input, it may make more sense and be less error-prone. If it's just software (i.e., computers alone), it may take a long time and a tremendous amount of training before it can be anywhere near accurate for all the scenarios that arise across the many types of contracts that exist today.

      Hopefully, the amazing improvement in AI technologies will get us to that less error-prone state sometime in the near future.

      Regards
      Nazar @ CMx Contract Team

  2. Machine learning is nifty, and I’m all for springing reviewers out of the NDA salt mines. But these are tools for paving the road to contract Hell, not digging us out of the mess we’re in.

    Technology shouldn’t accommodate lawyers’ wont to produce pointless variations of common transactional forms. The future I want for my clients isn’t one where every lawyer writes their own two-bit NDA, and fancy software helps me recognize it for standard without having to bill for reading. I want a future where technology helps lawyers make, share, and improve standards that they can refer to and reuse as a shared vocabulary.

    That’s what I’m after with rxnda.com. The NDA forms I wrote for that service aren’t the point. The point is that when you receive a request to sign through RxNDA the second time, or the hundredth time, you already know what the form says. You don’t need to run a redline. You don’t need to run a machine learning algorithm. You know. Because sending an NDA request sends not just a form and a request to sign, but a message, certified independently: This is the published form. Nobody’s tampered with that form since the last time you saw it.

    Those kinds of guarantees allow folks to work with RxNDA forms like words, as units of meaning. Haven’t seen that form before? Look it up by name and edition. Read its definition. Once it’s part of your vocabulary, you can read and write with it fluently. You can use it one time or twenty.

    That’s related to, but different from, “standardization”. When an industry group publishes a form, they give it a name and perhaps a version, fix its text, and also tell parties doing that kind of business that it’s the form they should use. RxNDA only names and fixes. On the Internet, with computers, anybody can publish, name, and fix forms. You don’t have to be an industry group, and nobody has to listen to you. They might, if your form is good.

    I’ve a mind to apply this approach to pieces of contracts, too. That’s the point of my work on Common Form and “contract components”, on which RxNDA is built.
