Everyone’s wondering whether artificial intelligence (AI) is out to steal their jobs, or at least to make their industry’s job market increasingly competitive. Lawyers are certainly no exception. Various researchers and businesses are testing the extent to which AIs can perform key legal services, including document review and drafting, advice provision, and even arguing cases in court. In this blog, we ask how successfully AI can provide legal services and look at what could limit its success.
What is AI?
‘Artificial intelligence’ is a broad term that covers various types of computer programmes designed to perform in ways that mimic human cognition and intelligence. These range vastly in complexity. The AIs currently in vogue are generally ‘machine learning’ models, which use algorithms to learn from many examples of a specific type of content (eg written information or images). This learning process is referred to as ‘training’ an AI. Trained AIs can make predictions and extrapolations based on their learning to, for example, generate new content in response to a user’s prompt. Of particular interest in the legal services industry are large language models (LLMs), which train on and generate text.
How might an AI’s success as a legal services provider be limited?
It can simply be wrong
Providing valuable legal services, whether advice, documents, advocacy, or other services, always requires accuracy. Provide incorrect legal information or ineffective contracts and a legal services provider’s clients are going to be very unhappy and may suffer serious losses (which may, in turn, be passed on to the legal services provider if it has provided some sort of guarantee to the client). So are LLMs up to the task?
In short, no (or at least not yet). LLMs can simply be wrong. An LLM’s output is informed entirely by the data it was trained on, and that training data may be out of date, limited (ie missing necessary information), or just plain wrong. AIs that train on data available online are also vulnerable to data poisoning, ie a malicious party intentionally exposing an AI to incorrect or confusing data. In any of these situations, an LLM may learn incorrect patterns from its training data, which it will then reproduce in its output and present to users as fact.
Even if an AI’s training data is entirely correct, it may still produce incorrect output. AI creates output based not on a deep, nuanced understanding of a topic, but on statistical patterns and probabilities: the answer suggested by the patterns the AI has identified in its training data is the answer the AI will give. Sometimes such answers are wrong because, in trying to provide useful output, the AI essentially invents something that it predicts should fit. This is sometimes referred to as the AI ‘hallucinating’.
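The statistical nature of this process can be illustrated with a deliberately tiny sketch. This toy word-frequency model is not a real LLM (the training sentences are invented for illustration), but it shows the core idea: the model picks whichever continuation was statistically most common, with no understanding of what the words mean.

```python
from collections import Counter

# Toy illustration (not a real LLM): predict the next word purely by
# how often it followed the previous word in the 'training' text.
training_text = (
    "the contract is valid . the contract is signed . "
    "the contract is valid . the notice is short ."
).split()

# Count which word follows each word in the training data.
follows = {}
for prev, nxt in zip(training_text, training_text[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def predict(word):
    # Return the statistically most common continuation -- the model
    # has no idea what a 'contract' actually is, only what usually
    # came next in its data.
    return follows[word].most_common(1)[0][0]

print(predict("contract"))  # -> "is"
print(predict("is"))        # -> "valid" (seen twice, vs once each for others)
```

If the training text had contained a wrong statement often enough, the model would repeat it just as confidently.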
These inaccuracies leave a gap in the market for human legal professionals to fill. AI does not (yet?) appear able to match the practical experience, commercial expertise, and ability to check reliable sources that a knowledgeable human lawyer can offer.
AI can perpetuate societal biases
As well as being simply incorrect, inaccurate AI output can be harmful. If an AI is trained on data that contains systematic biases (eg assumptions about individuals belonging to certain population groups), the AI will absorb these biases and incorporate them into its output without ethical evaluation, perpetuating them by presenting users with bias-informed output. This could affect legal services provision. For example, if an AI is used to create a contract that allocates risk between different groups (eg in insurance situations), and the risk data it relies on was originally collected via a biased procedure (eg the data collector made unconscious judgements about survey participants belonging to certain population groups), the resulting contract may allocate risk unfairly.
AI can over-generalise
Epidemiology often relies on the concept of ‘generalisability’. This refers to the extent to which the outcomes of statistical analyses of health data within a given sample (ie a small group within a population) can accurately be said to apply to the general population from which the sample was taken. If a finding is over-generalised (ie you assume it applies to the general population when the sample is in fact qualitatively different from that population), applying the data to the general population will produce inaccurate results.
AI faces an analogous issue when it relies on proxy measurements in its output. Using a proxy measurement involves using data about one matter to provide information about another, on the assumption that the first matter maps onto the second for the purposes of the data. For example, an AI might be trained on data that includes a study finding that businesses with more experience (however measured) as subcontractors in their industries are less likely to cause main contractors losses related to subcontracting agreements. An LLM may then write a main contractor a new subcontracting agreement that includes a limitation of liability it considers commercially appropriate, based on the length of time the new subcontractor has been operating in its industry. But if the subcontractor has been operating in the industry for a long time without accumulating much actual experience (eg because it had a very small market share for a long time and is only now expanding), the degree of risk the main contractor has agreed to take on could be inappropriately large, all because the AI conflated amount of experience with length of time. In relying on proxy data in such ways, AIs can provide legal services based on inaccurate information that may lead to costly missteps for clients.
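To make the proxy problem concrete, here is a hypothetical sketch. The function names, thresholds, and numbers are all invented for illustration; the point is simply that a proxy-based rule and a direct measure of the same firm can disagree.

```python
# Hypothetical sketch: a model that uses 'years in industry' as a
# proxy for 'experience' will mis-score a firm whose market share
# (and therefore actual experience) was tiny for most of those years.

def risk_from_years(years_in_industry: int) -> str:
    # Proxy-based rule of the kind a model might learn from its data:
    # longer-established subcontractors looked lower-risk on average.
    return "low" if years_in_industry >= 10 else "high"

def risk_from_experience(projects_delivered: int) -> str:
    # What actually matters here: hands-on experience.
    return "low" if projects_delivered >= 50 else "high"

# A firm 15 years in the industry but with only 5 delivered projects:
print(risk_from_years(15))      # -> "low"  (the proxy says safe)
print(risk_from_experience(5))  # -> "high" (the direct measure says risky)
```

The proxy rule is not wrong on average, which is exactly why it is dangerous: it fails silently on the individual cases where the proxy and the real quantity come apart.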
AI use could break data protection laws
LLMs are trained on large volumes of data. This data may, whether accidentally or through an AI developer’s ignorance of data protection law, include people’s personal data (ie information about them from which they can be identified). The UK’s data protection laws restrict how such data can be processed (eg used and stored). If such data is inadvertently included in an AI’s output, for example to illustrate a point in generated legal advice, its use may infringe data protection law.
Businesses may also input personal data that they control into an AI. For example, they may input their customers’ details when asking an LLM to generate contracts for use with those customers.
Businesses in either of these situations remain responsible for their own data protection compliance and will need to take into account the risks specifically associated with processing data using AI. For example, the UK’s Information Commissioner’s Office (ICO) has recently released guidance on factors that businesses should consider when processing data using AI, particularly when carrying out a Data protection impact assessment (DPIA). For more information on data protection compliance, read Complying with GDPR.
Does AI have a place in legal services provision?
Despite the multiple limitations outlined above, AI models do offer a lot of promise for enhancing legal services provision. Their speed and computing power offer time and cost efficiencies that, if used in the right way, may provide clients with more cost-effective legal solutions.
For example, AI models can be used to expedite the initial stages of tasks like contract and document drafting. This can cut down the time a human lawyer takes to complete a task, as they may only need to check and improve upon clauses that have been created for them. Less complex and nuanced legal tasks could be automated even further, for example, answering people’s legal questions by signposting them to more reliable, human-maintained sources. Innovations like this could greatly increase the accessibility of legal services.
Better yet, AI enterprises and users are constantly working to mitigate some of the issues highlighted above, meaning AI may become increasingly reliable and better able to assist with legal services provision. Measures include:
- training models on high-quality maintained data sets designed specifically for legal services purposes
- putting in place guardrails, ie programmed interventions that minimise the chances of an LLM hallucinating or producing inappropriate output
- using ‘red teams’ within an AI business, ie groups who purposefully try to make a model behave poorly to identify its vulnerabilities, so that these may be overcome
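As a rough illustration of the guardrail idea from the list above, a minimal sketch might look like the following. The blocked phrases and function are hypothetical, and real guardrails are far more sophisticated (eg trained classifiers and citation checks), but the principle is the same: output is checked against rules before it reaches the user.

```python
# Hypothetical sketch of a simple output guardrail: before a model's
# answer reaches the user, screen it against basic rules and block
# anything that looks like overconfident or misleading legal advice.

BLOCKED_PHRASES = [
    "guaranteed to win",
    "no need for a lawyer",
]

def apply_guardrail(model_output: str) -> str:
    lowered = model_output.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        # Refuse rather than risk passing on an unreliable claim.
        return "[Blocked: output may contain unreliable legal claims]"
    return model_output

print(apply_guardrail("You are guaranteed to win this case."))  # blocked
print(apply_guardrail("This clause limits liability."))          # passed through
```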
AI technology is developing so quickly that various industry representatives have called for a slowdown. Whatever is next, we can expect it to be more powerful and, although perhaps more dangerous, to offer even greater potential for creating efficiencies and accessibility in the legal market. As long as LLMs’ limitations are taken into account, tackled, and mitigated, the future of AI in the legal services industry looks positive.
That’s not to say AI doesn’t pose issues in other areas of the legal world. For an example, read our blog post on AI and copyright law.
If you run a business and want to work with AI, either to help you with legal needs or to help you provide services yourself, consider Asking a lawyer for legal assistance to ensure your advice or documents are checked for accuracy and you have contracts in place to protect your business’s endeavours.