
1 minute read

GPT CGT fail - another AI cautionary tale!

The legal profession has collectively been getting very excited about AI this year. Used properly, it has huge potential to support lawyers in delivering legal services and to reduce costs for clients. A huge number of products in testing or early release promise to transform the way we conduct legal research, analyse legal arguments and carry out disclosure, to list just a few. Many of these use GPT-4 or similar technology, but applied across legal document databases or private data sets rather than the open web, with all the inaccurate junk that sits out there alongside the reliable information. This approach has the potential to reduce the sort of “hallucinations” that sometimes occur if you ask the publicly available version of ChatGPT to answer a legal question. However, we are a long way from an infallible machine that can reliably answer any legal question with accuracy. These tools still need to be given the right prompts, and the results need to be checked and refined as carefully as you would check anything handed to you by a trainee lawyer.

Last week gave us another lesson in the dangers of over-reliance on AI and of not properly checking the results. In Harber v Commissioners for HMRC [2023] UKFTT 1007 – an otherwise unexciting appeal against a penalty for failure to notify liability to capital gains tax – a litigant in person appears to have used an AI tool to find helpful caselaw in support of their appeal. The AI tool duly obliged. The only problem was that the caselaw did not actually exist. Appeal dismissed!

This is perhaps less egregious than the US case of Mata v Avianca, in which two qualified lawyers relied on summaries of fake cases supplied to them by ChatGPT and then, when challenged by the judge, doubled down by getting ChatGPT to supply them with the full (but still fake) judgments. It nonetheless highlights the risks of assuming that AI is somehow omniscient and infallible.

We are on the cusp of having some very exciting new tools at our disposal. But they are just that. Tools. And they will only be as good as the humans operating them.

Tags

technology, dispute resolution, artificial intelligence