[Revised on 11 September 2025 to change the title and the text. The original title was If AI Gets One Thing Wrong, It Might Get Other Stuff Wrong.]
Recently I saw on LinkedIn, in this post by Kara Dowdall, the following assessment of a contract:

This assessment was generated by artificial intelligence. Or more specifically, by Claude, from Anthropic.
I have no views on what it says. Instead, what caught my eye is how many abstract nouns it uses. I’ve highlighted them in gray.
Abstract nouns are nouns for things you can’t see, hear, smell, touch, or taste. They quickly make your prose wordy and bureaucratic. They also allow you to play hide-the-actor, so you end up saying, for example, Upon notification of the incident instead of When Acme notifies Widgetco of the incident. Generally, you’re better off using verbs: the verb notifies is buried inside the noun notification, which is why misuse of abstract nouns is also referred to as using buried verbs. (Abstract nouns, bad! Verbs, good!) For a short account of why you should be wary of using abstract nouns, see the first page of this Bryan Garner article in the Michigan Bar Journal.
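If you’re curious how you might spot likely offenders mechanically, here’s a minimal sketch in Python. It’s a blunt heuristic of my own devising, not a real grammar checker: not every word ending in -tion hides a verb, and some abstract nouns carry no telltale suffix.

```python
# A blunt heuristic for flagging likely abstract nouns (nominalizations).
# It over-counts (not every -tion word hides a verb) and under-counts
# (abstract nouns like "progress" carry no telltale suffix).
import re

SUFFIXES = ("tion", "sion", "ment", "ance", "ence", "ity", "ness")

def flag_abstract_nouns(text: str) -> list[str]:
    """Return words ending in suffixes that often signal buried verbs."""
    words = re.findall(r"[A-Za-z]+", text)
    return [w for w in words if w.lower().endswith(SUFFIXES)]

sample = ("Upon notification of the incident, advancement of the "
          "service delivery progression will occur.")
print(flag_abstract_nouns(sample))
# ['notification', 'advancement', 'progression']
```

A real tool would use part-of-speech tagging rather than suffix matching, but even this crude check catches advancement and progression, two of the offenders in the Claude assessment.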
I’ve long been attuned to the use of abstract nouns, wherever they lurk. In consulting-firm white papers. In flight-attendant announcements. In contracts. I even recall warning my then-preteen daughter of the perils of abstract nouns. That Claude assessment is one of the more egregious examples of overuse of abstract nouns I can recall seeing. Note in particular the clunky advancement and the noun pileup service delivery progression.
AI chooses which words to use based on the massive datasets it has been trained on. Judging by that assessment, Claude has decided that abstract nouns are the way to go. In other words, Claude is replicating the dysfunction on display in what it has digested.
Given that business writing in general tends to use more abstract nouns than is healthy, you’d expect AI generally (and not just Claude) to replicate that dysfunction. As a small piece of evidence to that effect, I offer the AI prose featured in this 2024 blog post: it includes a long list of abstract nouns.
If this post alerts you to the perils of abstract nouns, so much the better. But I have a more specific reason for writing this post.
A “prompt” is natural-language text describing the task that an AI should perform; “prompting” is the act of submitting a prompt to the AI. Prompting is often an iterative process, with the user revising their prompt in response to answers offered by the AI.
You might revise your prompt to reflect nuances you hadn’t thought of. But you might also, or instead, find yourself revising your prompt to keep the AI from behaving erratically. In comments on my LinkedIn post (here) about the original version of this post, it was suggested that you could use prompts to address excessive use of abstract nouns.
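To make that concrete, here’s roughly what such a revision might look like, as a minimal sketch using Anthropic’s Python SDK. The model name and file name are placeholders, and the corrective instruction in the second call is just one way you might word it.

```python
# Sketch of the prompt-revision loop described above, using Anthropic's
# Python SDK (pip install anthropic; assumes ANTHROPIC_API_KEY is set).
import anthropic

client = anthropic.Anthropic()

def assess(contract_text: str, extra_instruction: str = "") -> str:
    """Ask the model to assess a contract, optionally adding a corrective instruction."""
    prompt = f"Assess the following contract:\n\n{contract_text}"
    if extra_instruction:
        prompt += f"\n\n{extra_instruction}"
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder; use whatever model is current
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

contract = open("contract.txt").read()  # placeholder file name

# First pass: the plain prompt.
first_answer = assess(contract)

# Second pass: the revised prompt, patching over the abstract-noun habit.
second_answer = assess(
    contract,
    "Prefer verbs to abstract nouns: say 'when Acme notifies Widgetco', "
    "not 'upon notification of the incident'. Avoid nominalizations "
    "such as 'advancement' and 'progression'.",
)
```

The second answer may well read better. But note what the revision is doing: it’s compensating, after the fact, for a habit the model absorbed from its training data.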
But that poses a problem. If you ask AI a question, that implies (1) that the AI is capable of providing an answer you can rely on and (2) that you aren’t in a position to answer the question yourself. If you have to revise your prompt to address something dysfunctional in an answer offered by the AI, that means the AI is unreliable. You might be able to spot something as basic as overuse of abstract nouns, but if the AI is erratic in one way, it might be erratic in other, more problematic ways that you aren’t equipped to spot. For example, I’ve often written about AI replicating the dysfunction of mainstream contract language.
What further complicates matters is AI’s tendency to make mistakes (known unhelpfully as “hallucinations”).
The risk of AI replicating dysfunction and making mistakes explains anecdotal evidence to the effect that in addition to using AI, lawyers are also doing work the way they’ve always done it, because they don’t trust the AI to get it right.
So revising a prompt in an attempt to reverse AI dysfunction shouldn’t be considered a standard prompting maneuver. Instead, it’s a sign you’re on shaky ground.