Earlier this year I redrafted a complex commercial agreement and sent it off to the client. I received in response a comment that I hadn’t expected at all—that the readability score for my draft was rather low.
This caused me to scratch my head—I’d never given a moment’s thought to readability tests. So I did some rooting around online. (Here’s the relevant Wikipedia page.)
My client was referring to the Flesch Reading Ease test. It’s intended to indicate how difficult a piece of writing is to understand—the higher the score, the easier it is to read. Scores of 90-100 are considered easily understandable by an average 5th grader, whereas the Harvard Law Review apparently scores in the low 30s. Many government agencies require that certain documents meet or exceed a stated minimum score. For example, most states require that insurance forms score 40-50.
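For the curious, the Flesch Reading Ease score is computed from a simple formula: 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word). Here’s a rough Python sketch. Note that the syllable counter is a naive heuristic of my own devising, so the scores it produces won’t match Word’s exactly—the formula is standard, but every implementation counts syllables a little differently.

```python
import re

def count_syllables(word):
    # Naive heuristic: count groups of consecutive vowels,
    # subtracting one for a trailing silent "e".
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    # Standard Flesch formula: higher scores mean easier reading.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Feed it a plain-English sentence and you get a score well above 90; feed it a mouthful of contract boilerplate (“Notwithstanding the foregoing, indemnification obligations hereunder shall survive termination.”) and the score plummets—the formula punishes long sentences and polysyllabic words, and contracts are full of both.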
Microsoft Word will show readability statistics whenever you spell-check a document if you select the appropriate box (in “Options,” under the “Spelling & Grammar” tab). (Along with the Flesch Reading Ease score you’re also given the score under another test, the Flesch-Kincaid Grade Level test, which is only relevant in education.)
I duly checked the Flesch Reading Ease score for the draft I had sent my client, and it was in the mid-30s. I didn’t think that was any particular disgrace, but I needed some frame of reference to compare it against.
For another client I had redrafted a “golden parachute” termination agreement. I dubbed it “RMA Widgets,” and it’s become something of a showpiece—I tend to offer it as an example of the glories that can be wrought when you redraft, MSCD-style, a mainstream corporate agreement. I checked the readability of the “before” and “after” versions of RMA Widgets—they were 14.5 (before) and 23.6 (after).
I had two reactions to these scores. On the one hand, I was gratified that my many hours of slaving over RMA Widgets had made it almost twice as easy to read. On the other hand, a score of 23.6 is nothing to crow about. But any improvements I might be able to make would, at this stage, be entirely marginal and would have next to no effect on the readability score. RMA Widgets is about as readable as I can make it.
I checked the scores for the “before” and “after” versions of another contract I redrafted, with the same result—17.6 (before) and 25.3 (after). It would appear that the mid-30s is, by my standards, as good as it gets.
Rather than concluding that I in fact stink at contract drafting, I’m inclined to attribute the mediocre “after” readability scores to an unavoidable feature of contracts—long sentences. I’m thinking in particular of sentences that are made up of enumerated clauses: they can go on and on, and would doubtless do a number on any Flesch Reading Ease score, but they nevertheless make any contract much easier to read, particularly when tabulated.
So now that I’ve been introduced to readability scores, do I think they serve any purpose in contract drafting? I suggest that if the Flesch Reading Ease score of any contract is in the teens, it’s likely that you’re dealing with a product of mainstream contract drafting, with all the deficiencies that that entails. Applying to it the recommendations in MSCD would doubtless increase its readability score. But once you have a halfway decent sense of what constitutes clear drafting, readability scores lose any significance. In particular, it would be pointless to tweak one’s drafting to goose the readability score of a contract.
But I suppose that if one were touting the benefits of an MSCD-style redrafting, one could point to the increase in the readability score as a simple indication of the cumulative impact on readability. Darn—I should have tried that on the client who raised the issue in the first place. (As it happens, the client promptly forgot about the entire subject.)