You Want to Measure Quality in Contracts? Without a Style Guide, You’re Nowhere

I noted with interest this post by Ken Grady on Seyfarth Shaw’s Seytlines blog, particularly as last year I did a Q&A with Ken on this blog (here).

Ken’s post is about quality in contracting. He starts by discussing the limitations of determining quality by proxy. As he says, “Trusting the brand, versus trusting metrics that measure desired characteristics, is measuring something by proxy.” According to Ken, the alternative is to measure quality itself:

I’ll walk through a few quality measures that are easy to introduce when working with documents. These aren’t the only quality measures and I’m not even going to argue they are the best. These are a starting point.

Ken then proposes the following quality measures:

  • Readability, of the sort measured by, for example, the Flesch-Kincaid test (see the sketch after this list).
  • Accuracy, namely avoiding typos, incorrect dates, inaccurate cross-references, and the like.
  • Custom, which appears to relate to setting targets for key points in the contracts process.
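
As an aside (this isn’t from Ken’s post): the Flesch-Kincaid grade-level formula reduces a passage to two inputs, average sentence length and average syllables per word. Here’s a minimal Python sketch, with a deliberately naive syllable counter and an invented sample sentence, that shows how mechanical the test is:

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count runs of vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

sample = ("The Indemnifying Party shall promptly reimburse the Indemnified Party "
          "for all losses arising out of any third-party claim.")
print(round(flesch_kincaid_grade(sample), 1))
```

Nothing in those two inputs distinguishes a clear, well-organized provision from a short but ambiguous one.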

Ken continues as follows:

The quality metrics I discuss above are just the beginning. We can add other metrics, such as whether a contract results in a dispute, the severity of the dispute, the cost of a contract over its lifecycle, the number of times a contract must go back and forth between drafters before completion, and so on.

But I suggest that Ken’s measures of quality are simply another way to measure quality by proxy:

  • As I discussed in this post way back in 2006, using readability tests is a crude way to assess contract prose.
  • One certainly wants to avoid mistakes, but that part of the process is like running final checks after a car comes off the production line: it ensures that the car conforms to specifications, but it doesn’t measure the quality of the car as designed.
  • As for failing to meet contract-process deadlines, that’s a symptom of a problem rather than a problem itself. The same goes for any disputes that arise.

The only way to measure the actual quality of contract prose—broadly speaking, not what you say but how you say it—is to compare it to your organization’s style guide. What, you don’t have a style guide? That pretty much guarantees that your contracts contain an inconsistent mishmash of the dysfunctional usages that characterize traditional contract language.

But any old style guide won’t do. For example, a style guide that reflects the dysfunction of traditional contract language wouldn’t be of any use. Furthermore, to be effective, a style guide has to be more than just a few pages—there’s a reason why MSCD is more than 500 pages. But it’s not realistic to expect any organization to create its own comprehensive style guide. That’s why I’ve put together a model “statement of style” (here), an example of a short document that an organization can use to say that it’s adopting a style guide based on MSCD.

When it comes to substance, determining quality is tougher. It’s not susceptible to quick checks; there’s no alternative to getting the input of subject-matter experts and working through multiple drafts.

But measuring the quality of contract prose is the place to start. And if you don’t have a style guide for your contract language, you’re not serious about quality.

About the author

Ken Adams is the leading authority on how to say clearly whatever you want to say in a contract. He’s author of A Manual of Style for Contract Drafting, and he offers online and in-person training around the world. He’s also chief content officer of LegalSifter, Inc., a company that combines artificial intelligence and expertise to assist with review of contracts.

5 thoughts on “You Want to Measure Quality in Contracts? Without a Style Guide, You’re Nowhere”

  1. I agree that KG’s quality measures seem like further proxy measures, though that may depend partly on how you define the ultimate objective. His seem to focus on commercial colleagues as the customer, which downplays the role of the in-house lawyer as a protector of the company’s strategic objectives (which may include (a) winning in court before a judge with no jury – a very different audience for the contract than the commercial manager – or (b) planning ahead so that disputes are avoided by the parties’ future decision makers, who may not have been involved in negotiating the agreement).

    They are good proxies for the commercial colleague as customer. A commercial client may not know whether a contract is legally accurate but they will see typos and may give undue weight to them as an indicator of the drafter’s quality. This is one reason why I was brought up to produce clean, typo-free documents, and teach that to the next generation.

    By contrast, I don’t think compliance with MSCD will cut much ice with a typical commercial manager, and this may be more relevant as a measure if one widens the criteria of quality as in (b) above.

    Reply
  2. Ken:

    While I agree that readability is a crude measure, it is at least a measure. The challenge with using MSCD as a standard is that no one has created a measure for deviation from the standard that would be comparable to the readability tests embedded in Word. It could be done, of course, but it would be really complicated. (Imagine scoring on use of the word “will.” You get a malus for each use where it means “has a duty to” and a bonus for each use where it is really a future tense.) One could come up with a rubric for scoring against just the categories-of-language chapter in MSCD, but that would be yet another proxy.

    Another proxy might be to redline the document using Word, bringing it into conformity with MSCD. Word’s track changes feature can count the number of insertions, deletions, and moves you make. If all you were doing was MSCD compliance, that would give you a rough metric. But it would be very rough. For example, if you fix problems with two adjacent phrases by deleting both as one deletion and inserting a single replacement, that gets counted as one insertion and one deletion, not two. But I’d say that the number of changes would roughly track the extent of deviation from MSCD.
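
    To illustrate the counting idea only: here is a minimal Python sketch that just locates each “shall” and “will” with surrounding context for a reviewer to score by hand. The heuristic and sample text are hypothetical, not an actual MSCD scoring tool, and the bonus/malus call would still be a human judgment.

    ```python
    import re

    def flag_verb_candidates(text: str):
        """Locate each 'shall'/'will' with nearby context; a reviewer still has to
        decide whether the use expresses an obligation, futurity, or something else."""
        hits = []
        for m in re.finditer(r"\b(shall|will)\b", text, flags=re.IGNORECASE):
            start, end = max(0, m.start() - 40), min(len(text), m.end() + 40)
            hits.append((m.group(0), text[start:end].replace("\n", " ")))
        return hits

    draft = ("The Supplier will deliver the Goods by the Delivery Date. "
             "This agreement shall be governed by the laws of New York.")
    for word, context in flag_verb_candidates(draft):
        print(f"{word!r}: ...{context}...")
    ```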

    Chris

    Reply
    • Maybe someday I’ll create an MSCD equivalent of WordRake. But more generally, I don’t think that having a standard requires that one have a metric for compliance with that standard.

      Reply
      • Ken:

        Sure, having a standard does not require having metrics measuring deviation from standard. But Ken Grady’s entire article is about metrics, not standards.

        In your post, you say, “The only way to measure the actual quality of contract prose—broadly speaking, not what you say but how you say it—is to compare it to your organization’s style guide.” But your suggestion is incomplete. All you have said is that you ought to start with a standard. Fine, I have one. Now how do I measure deviation from it? Without that, you are not fairly rebutting Ken Grady’s point (part of which is that measuring quality in legal services is all about using proxies to approximate a real measurement).

        Chris

        Reply
        • Yes, Ken’s article was about metrics. But it suggested that the metrics discussed were a direct measure of quality.

          Regarding deviating from MSCD, that’s something you assess line by line, then fix. You can assess it, even give it a grade. After all, that’s what I do when I teach. But it’s not amenable to some quick metric.

          Reply
