In July 2018 the Automated and Electric Vehicles Act 2018 received Royal Assent, marking the first step into a new era of AI-driven cars.
The Act foreshadows the systemic change that the new technology will bring about. It already removes the main pillar of liability in accident cases: reliance on the tort of negligence. The system put in place is a no-fault compensation system (“NFCS”), making the insurer liable for any damage. The insurers will then be able to sue the manufacturers under the Consumer Protection Act or in negligence.
Inspired by these developments, this article will cover three main points. First, it will analyse the relevant provisions of the Act and discuss its rationales. Second, it will raise relevant issues relating to the Act's interplay with the Consumer Protection Act. Finally, it will analyse the case of “Level 3”1 autonomous vehicles.
The Act itself does not define what an “automated vehicle” is. Instead, it gives the Secretary of State the power to compile a list of vehicles that are “designed or adapted to be capable, in at least some circumstances or situations, of safely driving themselves”2, to which the provisions will then apply. This description fits Level 4 and Level 5 vehicles under the SAE classification3: highly advanced vehicles capable of performing all driving tasks without supervision by the driver. The Act, however, does not cover Level 3 vehicles, such as the new Audi A8, which are already available for purchase.
Under Section 2 of the Act, for the vehicles identified by the Secretary of State, the insurer will be liable for any damage caused to third parties, and even for injuries caused to the driver (liability for the vehicle itself and the goods carried is excluded by Section 3).
As Schellekens4 points out, the Act converts the liability system, the last one in Europe to ascribe liability for road accidents on the basis of fault, into an NFCS. The rationales are multiple but, as he argues5, the protection of victims is the main one.
Victims' protection and the influence of compulsory insurance have always been the main drivers behind the strict application of negligence to road accidents, as shown in Nettleship v Weston6. Nevertheless, the law has traditionally tended to avoid strict or no-fault liability. Ultimately, two factors must have pushed Parliament to make such a bold move.
First, there is the sheer illogicality of applying a fault-based system to situations where human input is minimal, if present at all. Even though negligence does not ascribe moral blame, as seen in Nettleship, a meaningful causal relationship is still required. In the case of self-driving cars, however, there is no meaningful sense in which the accident was “caused” by the driver.
The second factor, and the more relevant one, is the difficulty of recovering under the Consumer Protection Act 1987 (CPA). Cases under that Act are generally few7, owing to the harshness with which the test is applied. In addition, as we will see, applying the CPA to self-driving cars will prove even more problematic. Insurers are thus in a much better position to deal with the claims and can more easily internalise the costs if those claims fail.
The first issue is whether a claim could be brought under the CPA at all. Even if some crashes will be caused by mechanical failures, a sizeable proportion will likely be caused by software. In those cases, it is important to ascertain whether software is a product. Under the current law the position is unclear. Clerk and Lindsell8 argue that software should be seen as a product only if it is supplied on a physical medium (e.g. on a CD), whereas “over the air” updates should not count. That position is untenable, as arbitrary distinctions would follow: if A receives the software over the air and B receives it by going to a dealership, only B's software will count as a product for the purposes of the CPA.
One solution would be to treat the software as an integrated part of the car. As De Bruyne has argued, software updates should be considered part of the vehicle's maintenance9. The car itself would thus be deemed the defective product.
Further issues arise, however. To deem a product defective, the test in section 3(1) of the CPA must be applied: “there is a defect in a product (…) if the safety of the product is not such as persons generally are entitled to expect”. What, then, are users entitled to expect from a self-driving car? Surely “no accidents at all” would be too high a standard. Yet the standard should nonetheless be higher than that expected of a human driver. How much higher remains unclear.
Another issue is that Artificial Intelligence (AI) develops as it goes: even if the product is tested, there is no way to predict how it will act in novel situations, or why it takes certain decisions. It is essentially a black box. Some decisions will clearly be wrong: hitting a pedestrian instead of hitting a car. On the other hand, there will be difficult cases: should the car hit a pedestrian, or hit a brick wall and injure everyone inside it? What is the proper decision we expect the AI to make?
In the case of human drivers, the conundrum is avoided because the standard of care is adapted to account for emergencies10. In the case of AVs, however, no direct comparison can be drawn. As a recent report by Norton Rose Fulbright highlights, “the (…) reaction by AV software follows from a deliberate decision (…) [of the] software [to] react in that way to that situation.”11 The courts will thus likely have to grapple with difficult moral questions in such cases. A strong possibility would be to hold manufacturers liable whatever the outcome of these difficult cases. This would align with the victim-protection rationale. Given that such cases will be rather few, the risk of deterring development through this form of “strict liability” should not be overestimated.
A final issue relevant to the application of the test is the cost-benefit analysis. As decided in Wilkes v DePuy International Ltd,12 such an analysis is relevant when applying the test: “assessment of its safety will necessarily require the risks involved in use of that product to be balanced against its potential benefits.”13 The main reason self-driving cars are being introduced is that they promise significant benefits, most importantly fewer crashes. If a self-driving car works flawlessly 80% of the time, reducing the number of accidents, but in the remaining 20% causes major ones, would it be deemed defective? The social utility is significant, and this will likely be an argument against holding manufacturers liable. The practical consequence will be an incentive to develop better technology in the long run, but a potential disincentive to use safer technologies in the meantime. Whether this is satisfactory will be for the courts to decide.
As previously noted, Level 3 vehicles are covered by the ordinary negligence and CPA principles. Given their paradoxical nature, however, these principles are likely to prove insufficient to allocate liability. Indeed, this legal uncertainty is part of the reason why the technology has not yet been fully released.
Level 3 cars are a problematic case: on the one hand, they are supposed to be autonomous and handle driving tasks on their own, while at the same time requiring the driver to be ready to take back control immediately if something goes wrong. They are thus a prime example of why legislation is needed to supplement the existing principles.
First, the basic claim would be one in negligence against the driver. For instance, it could be argued that the driver was negligent in not taking back control of the car quickly enough when required. How the standard of care will be applied, however, is unclear. It might be difficult to prove that the reasonable person would have reacted two seconds earlier, or that getting slightly distracted for a second is careless while the car is in self-driving mode. In addition, following the potential shift towards conceptualising car crashes as product liability cases14, judges may be reluctant to find negligence in these borderline situations. That would force victims to rely on a CPA claim, the very option the AEVA tries to circumvent in the first place.
Beyond the issues already highlighted, however, there are some Level 3-specific ones that can further hinder a claim under the CPA.
For instance, Level 3 cars are advertised as autonomous vehicles. Section 3(2)(a) allows the purposes for which the product has been marketed to be taken into account. An argument could therefore be run that, given the advertisements, a user could reasonably expect a certain lapse of concentration to be acceptable. The courts will thus have to navigate carefully the discrepancy between the real capabilities of Level 3 cars and the way they are promoted.
As has been shown, even though the 2018 Act is a step in the right direction and brings much-needed changes, the work of the legislature has only just begun. Further statutes are required in order to fully “harmonise” the advent of self-driving cars with existing legal principles. It is to be hoped that more such Bills will be introduced, so that when the technology is ready to meet the market, it will be “greeted” by clear and sensible laws.