If AI Gets One Thing Wrong, It Might Get Other Stuff Wrong

Recently I saw this assessment of a contract:

This assessment was generated by artificial intelligence. Or more specifically, by Claude, from Anthropic.

I have no objection to what it says. Instead, what’s notable about it is how many abstract nouns it uses. I’ve highlighted them in gray.

Abstract nouns are nouns for things you can’t see, hear, smell, touch, or taste. They quickly make your prose wordy and bureaucratic. They also allow you to play hide-the-actor, so you end up saying, for example, Upon notification of the incident instead of When Acme notifies Widgetco of the incident. Generally, you’re better off using verbs: many abstract nouns are just verbs in disguise (notification hides notify), which is why misuse of abstract nouns is also referred to as using buried verbs. (Abstract nouns, bad! Verbs, good!) For a short account of why you should be wary of using abstract nouns, see the first page of this Bryan Garner article in the Michigan Bar Journal.
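
If you want a rough, automated first pass at spotting candidates, here’s a minimal Python sketch that flags words ending in suffixes that often signal nominalizations. To be clear, this is my own illustration, not anything from Garner or from Claude: the suffix list and length cutoff are arbitrary choices, and a crude suffix check will produce false positives and misses, so treat the output as candidates for human review, not a verdict.

```python
import re

# Suffixes that often signal a nominalization (a "buried verb"):
# notification (notify), advancement (advance), progression (progress).
# A heuristic only; this is not real part-of-speech analysis.
SUFFIXES = ("tion", "sion", "ment", "ance", "ence", "ancy", "ency", "ity")

def flag_abstract_nouns(text: str) -> list[str]:
    """Return words whose endings suggest a buried verb."""
    words = re.findall(r"[A-Za-z]+", text)
    return [w for w in words if w.lower().endswith(SUFFIXES) and len(w) > 6]

sample = ("Upon notification of the incident, advancement of "
          "service delivery progression will occur.")
print(flag_abstract_nouns(sample))
# ['notification', 'advancement', 'progression']
```

Run on a sentence built from the examples in this post, it flags notification, advancement, and progression, which are the same words a human editor would likely query first.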

I’ve long been attuned to the use of abstract nouns, wherever they lurk. In consulting-firm white papers. In flight-attendant announcements. In contracts. I even recall warning my then-preteen daughter of the perils of abstract nouns. So when I say that that Claude assessment is the most egregious use of abstract nouns I can recall seeing, that’s saying something. Note in particular the clunky advancement and the noun pileup service delivery progression.

If this post alerts you to the perils of abstract nouns, so much the better. But I have a more specific reason for writing this post.

AI chooses what words to use based on the massive datasets it has been trained on. It would appear from Claude’s assessment of that contract that Claude has decided that abstract nouns are the way to go. In other words, Claude is replicating the dysfunction on display in the prose it has digested. I immediately spotted the abstract nouns in that Claude extract, but there’s no reason to assume that what Claude and other AI brands produce is otherwise free of dysfunction. Instead, we can assume that AI is prone to replicating other forms of dysfunction too. Winkling out those other instances of replicated dysfunction might require a lot more work, of the sort most users of AI might not be inclined or equipped to do.

About the author

Ken Adams is the leading authority on how to say clearly whatever you want to say in a contract. He’s the author of A Manual of Style for Contract Drafting, and he offers online and in-person training around the world. He’s also head of Adams Contracts, a division of LegalSifter that is developing highly customizable contract templates.
