Imagine a future in which legal AI becomes sufficiently developed to engage in legal reasoning. No longer will the ‘robots’ be relegated to dry document management and due diligence work – we might see robo-lawyers giving clients legal advice, making case outcome predictions, evaluating legal arguments or even assisting judges. Technologically, this bold vision of the future remains far-off. However, the idea of ‘thinking’ Lawtech is not at all far-fetched.
Of all the ways AI could be utilised in the legal industry, the applications that pose the most complex and interesting questions are those involving an element of legal reasoning. If we are to successfully develop Lawtech products with legal reasoning capabilities, we must first identify and engage with some more fundamental questions about the nature of law and about how AI systems learn.
This article focuses not on whether AI could ever think like a lawyer, but rather on what such a claim would mean, and what some of the implications would be for our understanding of the law.
The first issue relates to whether we see legal AI as being employed in a primarily descriptive or normative way. Descriptive applications of AI are less controversial: an example would be using software to analyse a huge volume of past cases to identify patterns in decisions. It is fairly clear that AI can offer significant advantages over human analysts in efficiency and comprehensiveness. What's more, because it operates outside the framework of assumptions within which we work, AI-driven analysis may bring to the fore factors that we are not even aware are relevant to legal decision-making.
A more substantive claim is that, in addition to describing the state of the law and of legal decision-making, legal AI might be able to tell us what the law, or the outcome of a particular case, should be. This would be to ascribe a normative role to AI systems. An example would be using software to analyse a huge volume of past cases in order to suggest how future cases should be decided. Here, actual decisions might be compared against the AI system's predictions, with those predictions serving as a benchmark for correctness.
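To make the contrast concrete, the following is a minimal sketch of the two uses just described, assuming Python with scikit-learn. The case features, past outcomes and the 'new case' are entirely invented for illustration; a real system would work with far richer representations of each case.

```python
# Minimal sketch: a toy model of past case outcomes, used descriptively
# (which patterns are associated with success?) and as a 'benchmark'
# against which a new decision can be compared. All data are invented.
from sklearn.linear_model import LogisticRegression

# Each past case: [claim value (in thousands), written contract?, prior breaches]
past_cases = [
    [50, 1, 0], [120, 1, 2], [15, 0, 0], [200, 1, 1],
    [30, 0, 1], [80, 1, 0], [10, 0, 0], [150, 1, 3],
]
outcomes = [1, 1, 0, 1, 0, 1, 0, 1]  # 1 = claimant succeeded (invented)

model = LogisticRegression(max_iter=1000).fit(past_cases, outcomes)

# Descriptive use: inspect which factors the model associates with success.
features = ["claim_value", "written_contract", "prior_breaches"]
print(dict(zip(features, model.coef_[0].round(2))))

# 'Benchmark' use: compare an actual decision with the model's prediction.
new_case = [[90, 1, 0]]
predicted = int(model.predict(new_case)[0])
actual_decision = 0  # suppose the court in fact found for the defendant
print("model prediction:", predicted, "| actual decision:", actual_decision)
```

Whether a divergence between the prediction and the actual decision tells us anything about the correctness of that decision is precisely the normative question at issue.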
This claim would require some kind of acceptance of the objectivity of legal reasoning (if we deny that AI systems can engage in ‘subjective’ analysis), which, in turn, would have implications for our understanding of legal decision-making. On this view, there is always a right answer, and it is one that is objectively discoverable. Promoting normative applications of legal AI would require commitment to such a claim, particularly in the context of judicial decision-making.
Another set of issues arises from the fact that the AI systems likely to be used for legal reasoning are typically set up to 'learn' from the data they are exposed to. Examples from other fields include Google's use of neural networks to identify images and Microsoft's well-intentioned but short-lived chatbot Tay, which attracted controversy for the extreme and bizarre viewpoints it rapidly adopted from its interactions with users. Three interrelated concerns may be identified.
The first concern is that the data may itself be biased. If, for example, the legal system disproportionately penalises or discriminates against women, ethnic minorities, Toyota-drivers, cat-owners or any other arbitrary group, AI learning processes will struggle to exclude those biases from the frameworks they develop. We might argue that such problematic (or 'wrong', depending on the position you adopt on the nature of law) decisions will naturally be filtered out by virtue of their statistical rarity, but where the biases are systemic, it is hard to see how legal AI trained on data from our own legal systems could avoid the taint.
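The point can be illustrated with a deliberately crude simulation, again assuming Python with scikit-learn and using wholly invented data: if historical decisions were harsher towards, say, cat-owners for no legally relevant reason, a model trained on those decisions will reproduce that harshness.

```python
# Toy illustration of bias propagation: historical outcomes are generated
# with a built-in bias against cat-owners, and the trained model learns it.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)
X, y = [], []
for _ in range(1000):
    severity = random.randint(1, 5)    # legally relevant factor
    cat_owner = random.randint(0, 1)   # legally irrelevant factor
    # Biased historical practice: cat-owners are more likely to be penalised
    p_penalty = 0.15 * severity + (0.25 if cat_owner else 0.0)
    X.append([severity, cat_owner])
    y.append(1 if random.random() < p_penalty else 0)

model = LogisticRegression().fit(X, y)

# A clearly positive weight on cat ownership: the model has faithfully
# absorbed the bias present in its training data.
print("severity weight: ", round(model.coef_[0][0], 2))
print("cat-owner weight:", round(model.coef_[0][1], 2))
```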
The second concern echoes the statistician's perennial lament that correlation does not equal causation. Even if AI systems can correctly and accurately identify patterns in the data, it does not follow that the right inferences will be drawn from those patterns. This highlights a divergence between pragmatic and principled views of law: even if it were statistically true that members of a particular group are more likely to commit crimes, it does not follow that the law should treat an individual from that group as more likely to have committed a crime.
A useful illustration of both the first and second concerns can be found in the problems associated with the use of the COMPAS risk assessment algorithm by certain US courts to assist in sentencing decisions. While the algorithm's developers have sought to reassure users that certain 'impermissible' factors such as gender are not included in the calculations, there remains a clear risk that irrelevant characteristics are taken into account indirectly, on the basis of misleading statistical correlations drawn from the data pool.
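That risk can be sketched in the same toy setting (invented data, scikit-learn assumed): even where an impermissible attribute is deliberately excluded from the model's inputs, a correlated 'proxy' feature, such as a postcode indicator, can let the learning process recover much of the same signal.

```python
# Toy illustration of the proxy problem: the protected attribute is excluded
# from the inputs, but a correlated postcode feature carries its signal.
import random
from sklearn.linear_model import LogisticRegression

random.seed(1)
rows, labels = [], []
for _ in range(2000):
    protected = random.randint(0, 1)                 # never shown to the model
    postcode = protected if random.random() < 0.8 else 1 - protected
    severity = random.randint(1, 5)
    # Historical outcomes were biased against the protected group
    p = 0.1 * severity + (0.3 if protected else 0.0)
    labels.append(1 if random.random() < p else 0)
    rows.append([severity, postcode])                # protected attribute omitted

model = LogisticRegression().fit(rows, labels)
print("postcode weight:", round(model.coef_[0][1], 2))  # positive: the bias leaks back in
```

Excluding the impermissible factor from the inputs is therefore no guarantee that it has been excluded from the decision.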
The preceding discussion shows that questions about the capability and application of Lawtech are not limited to practical matters of technical capability and cost-effectiveness. The possibility of AI systems engaging in legal reasoning raises deeper questions about how we conceptualise legal decision-making, broadening out into questions about the operation of precedent and analogy in the law.

We are also left with difficult questions of responsibility. When something goes wrong, who takes the blame: the AI itself, its developers, or those who apply it? In what sense can an AI system even be wrong? What will our standards of correctness be if legal AI is used to decide the outcomes of cases? Such questions will need to be answered if legal AI is to be successfully employed in the judicial context.

In the commercial context, these concerns are somewhat softened. There is clear scope for predictive and descriptive applications of Lawtech, particularly for clients who want pragmatic advice on the viability of claims and who do not need to wrangle with jurisprudential questions about the nature of law.

The debate is still developing, and it will be interesting to see which flashpoints emerge as the most pressing in the near future.