Clients pay you (not an AI) to get it right, not wrong.

Ask yourself the following questions before using any Artificial Intelligence Tool in the course of performing legal work

The one-pager list of questions and prompts below is intended to build on the emerging slate of guidance regarding the spread of AI Tools in legal practice, best collated here by the Law Society of New South Wales.

Questions 1 and 2 (for all readers)

  1. Are you a qualified lawyer? If yes, advance to question 3; if no, go to question 2.
  2. Are you a law student?

If yes, do not advance further down the checklist until graduation, but read on until question 3.

Just as digital calculators become permissible in high school once sound mental arithmetic skills are acquired and demonstrated, AI tools should only be used once the base research, reading and legal writing skills are mastered. AI, and ChatGPT in particular, is not a primary or secondary source of law.

A trained lawyer needs only their own brain to perform the key task of Facts, Issues, Law, Conclusion. If you are, or have become, reliant on current AI tools to perform the tasks of legal reasoning and legal problem solving, then you’ll battle in practice. That said, students ought be taught the use of AI tools for administrative and practice management purposes: whatever it is that top-tier firms are having their summer clerks use AI tools for, there’s a case to argue that law schools ought think about teaching it.

When the first non-LLM AI tools hit the legal services market, students then ought be taught how to build precedent documents with them. Creators of these post-LLM AI tools will need to create a version for students to be trained on; they won’t be able to keep a limited-permissible-use AI Tool behind a paywall for professionals only. The future and current lawyer needs to be trained on how to handle client queries which are heavily ChatGPT’d. This being the case, knowing how to fact-check and law-check ChatGPT without using ChatGPT is an essential skill: accessing primary and secondary sources of law, reading them, understanding them, applying them to a set of facts, then drawing a conclusion and drafting your advice.

One suspects that the incoming tranche of summer clerks on Phillip Street will be doing a lot of fact and law checking of AI output and building the precedent suite of the future: an in-silica and ever-evolving livin’ thing of legal document creation. Beats being in the State Archives at Kingswood in 40 degrees looking through the sorts of materials that, now mostly digitised, could be ordered and sorted by AI. You’d also miss out on that heck-yes moment when a colleague found the needle in the haystack, the thing that gave the client’s claim legs. Then again, that took five straight days; today’s bots could probably have done it in five seconds.

It’s just not the same as reviewing the Halsbury’s or any encyclopaedia of precedents and clicking through the profile and professional experience of the drafter. It’s not the same as being a graduate and having a partner introduce you to a precedent document you’ll be working on that either they themselves drafted or a well-known legal wordsmith did. The existing discipline of maintaining precedent suites is the intergenerational handing down of key written works of substantive law, and also a kind of ‘lore of the profession’.

What are we but the professional in-vivo descendants of Chancery (the Office of Writing Things Down), feeling around for a way to co-exist with the in-silica fabrication of literally one duotrigintillion synaptic carbon-based impulses involved in creating the common law/civil law/Western Legal Tradition? We’ll need to co-exist, because the time-savings and productivity improvements flowing from AI tools on the job will command it.

When your 10,000 hours are about half-way up and you’ve been working with precedent documents for a while, you build up your own clause library. You chance your arm at drafting bits of your own. You get the hang of it. You can’t be an effective member of any litigation team if you can’t sit at a keyboard and bang out the word-perfect operative bits of a deed on the fly. But will this really matter when an in-silica intelligence can bang it out word-perfect, but faster and cheaper?

For wordsmiths and other professional descendants of Chancery (copywriters, journalists, creatives, etc.), AI tools are already doing a lot of that.

A final-form precedent wholly drafted by the current Lazenby-esque AI lacks provenance as the output of a person who advises “the document is in order for execution.” If it didn’t originate with a human, it can’t trace its lineage into what humans created: the common law. Is the process of training up an AI bot on an existing precedent suite (built by humans) enough to say that the documents produced by the in-silica brain have a substantive connection with the law? Our upright ape brains create legally binding agreements because we draw on our knowledge of the law to draft them. What knowledge is a bot drawing on, and what intent is it bringing to the crafting of the words of a document? Machines can’t give legal advice. It is the final human-lawyer deep read of any AI output that rectifies this ‘document provenance’ issue, so those final proofers need to know the law.

The legal drafting labours of people’s past being fed into algorithms of numbers/symbols stored in a place called “the cloud” doesn’t feel like a workflow that the Office of Writing Things Down would approve of. For future AI capabilities, each fresh piece of work can arguably be spun as the sum of the wisdom of thousands of past (but updated daily) legal minds. A current, living human legal mind still needs to do a final check though and that human needs an LLB because LLMs don’t get it right (see Hallucinatory Cases). They often get things wrong (unless you XXE the crap out of them).

Whether AI-generated precedents are correct or not – that is the question. Both students and practising lawyers need an above-average mastery of English and a sound knowledge of what the law is. Why? Because clients pay you to get it right, not wrong.

Questions 3-10

  3. Is the AI Tool one that has been especially created for use in the legal profession?

If yes, continue. If the AI Tool is merely ChatGPT, stop using it. Clients pay us to get things right, not wrong, and ChatGPT gets things wrong. It is a large language model (LLM) that works via linguistic probability and guesswork. ChatGPT cannot think and cannot reason. It is not intelligent. It is not a lawyer. You are.

  4. Do you already know how to do the task you’re contemplating asking an AI Tool to perform for you?
  5. Do you know the subject matter with a strong degree of familiarity, and are you capable of detecting any inaccuracy or error in what the AI Tool produces?
  6. Whether it pertains to paid professional fee-earning work, educational work or pro bono work, would the person receiving the benefit of the legal work expect that YOU did the work, or the AI Tool?
  7. If using an AI Tool for professional legal work, has AI use been disclosed in your retainer agreement? How specific and precise is your wording? Do you identify the AI Tool used and the scope of work performed whilst using it?
  8. Where you are using AI Tools in the provision of professional legal services, in addition to disclosing use of AI Tools to your client, are you ensuring any claimed improvements in productivity (via AI use) are reflected in the final invoices to your client?

A legal practitioner ought never charge for ‘time on the tools’ when it is AI Tools doing the purported ‘legal work’ for them.

Lastly, when use of AI Tools results in deliverables or advices that are just dead-set wrong, the practitioner ought be responsible for the colleague’s fees incurred in fixing the mistakes and errors caused by a failure to know the difference between AI-generated content that looks about right and content that is right.

Wallumedegal Country, 28/9/24