Although it’s early in the game, I’m willing to believe that generative artificial intelligence, by whatever name (“large language models”, GPT-4, and so on), will have a significant effect on the world of work. But it’s not relevant to what I do, because for now, contracts are largely immune to a generative-AI takeover.
For one thing, mainstream contract drafting is a stew of dysfunction. In this 2022 post, I explain that if you train AI on dysfunction, the AI will only be able to replicate that dysfunction.
But today’s post looks at a different issue—how generative AI compares to using document-assembly software for creating contracts.
I was prompted to do this post by an exchange on Twitter yesterday. Someone published a poll asking which is better, creating contracts using document-assembly software such as Contract Express or manually replacing placeholders in a Word document. Someone said in reply, “This won’t age well. GPT-3 will upend this sector in months.” To which I responded, “I’m betting it won’t.”
Consider how a sophisticated document-assembly template works. The user is presented with a questionnaire that asks the user to supply information or choose from among alternatives. The questionnaire offers the user guidance in tackling questions. If the questionnaire is intended for a broad group of users, it could offer extensive customization. The questionnaire for the automated confidentiality agreement I built 15 years ago using Contract Express is of that sort: depending on the extent to which you need bells and whistles, you could end up answering 70 or more questions. Presumably that’s why for years, Thomson Reuters used a screenshot from my questionnaire on their Contract Express web page. (The image above is a screenshot from that questionnaire.)
So a sophisticated document-assembly template would go a long way toward allowing users to create a contract that optimally addresses their needs, while offering control and transparency.
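To make the questionnaire-plus-template model concrete, here's a toy sketch in Python. This is purely illustrative — Contract Express works nothing like this internally, and every field name and clause in it is invented for the example — but it shows the two jobs a template does: selecting among alternative clauses and filling in placeholders, both driven by the user's answers.

```python
# Toy sketch of questionnaire-driven document assembly. All names and
# clause text are invented; a real system (e.g., Contract Express) is
# far more elaborate, with conditional questions, validation, and
# guidance baked into the template.

from dataclasses import dataclass, field

@dataclass
class Question:
    key: str                                      # variable used in the template
    prompt: str                                   # what the user is asked
    guidance: str = ""                            # help text shown to the user
    choices: list = field(default_factory=list)   # empty => free-text answer

# The questionnaire encodes the drafter's expertise: which questions to
# ask, in what order, with what guidance. A real system walks this list
# to build the interview the user sees.
QUESTIONNAIRE = [
    Question("disclosing_party", "Name of the disclosing party?"),
    Question("receiving_party", "Name of the receiving party?"),
    Question("term_years", "Confidentiality term, in years?",
             guidance="Common terms run from 2 to 5 years."),
    Question("mutual", "Is the agreement mutual?", choices=["yes", "no"]),
]

def assemble(answers: dict) -> str:
    """Build contract text from questionnaire answers: pick among
    alternative clauses, then fill in the placeholders."""
    if answers["mutual"] == "yes":
        opening = ("Each party may disclose Confidential Information "
                   "to the other party.")
    else:
        opening = (f'{answers["disclosing_party"]} may disclose '
                   f'Confidential Information to {answers["receiving_party"]}.')
    term = (f'The obligations under this agreement expire '
            f'{answers["term_years"]} years after the date of this agreement.')
    return opening + "\n" + term

draft = assemble({
    "disclosing_party": "Acme Corp",
    "receiving_party": "Widgetco",
    "term_years": "3",
    "mutual": "no",
})
print(draft)
```

Even this crude version shows where the leverage is: the expertise lives in the questionnaire and the clause logic, and every user who answers the questions gets the benefit of it.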
If you wanted to create the same kind of contract using generative AI, you'd have to approximate a document-assembly questionnaire with a whole series of prompts. Besides being laborious, that would require extraordinary expertise. The whole point of document assembly is that you build expertise into the template, so users can draw on it. You'd have to do the same sort of thing with generative AI if you wanted to give users access to that expertise.
That’s presumably why in this LinkedIn post speculating about “A simple interface that lets non-expert user generate high quality first draft of legal contract using natural language,” Danish start-up lawyer Kristian Holt assumes that one element would be “Implementation of human reinforcement feedback from legal experts with adequate skill level.” It’s not obvious how you’d achieve that with generative AI.
So document-assembly technology is a great way of making contract-drafting expertise available to users. Generative AI? Not so much. So why is all the buzz about generative AI? Why aren't we talking more about document assembly?
Because document assembly has been a notorious underachiever. That has nothing to do with shortcomings in the technology. Instead, it reflects that building a document-assembly system requires rummaging around in the entrails of contracts; in a copy-and-paste world, few people have the expertise and stomach for that. And most organizations can’t achieve the economies of scale required to make a document-assembly system cost-effective.
Doubtless plenty of people would be inclined to prostrate themselves at the altar of generative AI, but it isn't currently a plausible way to draft contracts. If someone were to build a document-assembly library of sophisticated templates of commercial contracts, we could forget about asking generative AI to draft contracts. (This isn't entirely hypothetical.) But generative AI might play useful ancillary roles.