Updated: Oct 7
Is Artificial Intelligence a Substantial Threat to the Law in the Twenty-First Century?
There has been an epochal shift from the traditional industries established by the Industrial Revolution, in which hand production methods gave way to machines, to a post-Industrial Revolution economy based upon information technology, widely known as the Digital Age. Lord Sales has referred to computational machines as ‘transformational due to their mechanical ability to complete tasks…faster than any human could’. The twenty-first century has seen an enhancement in human innovation, and the world of law is being forced to change. Legal practice has become more technology-centric, allowing for law in theory and in practice to keep abreast of society. This article explores how technology, specifically AI, has evolved through the Digital Age.
Firstly, it will explore how the evolution of AI has warranted a cataclysmic shift in the law. Then, in chapter two, it will illustrate the challenges which AI has posed and which it has the potential to create for the law. In so doing, it will identify how AI could pose a substantial threat to the law. In chapter three, however, solutions to the issues that AI poses will be addressed and analysed.
Undoubtedly, AI can be a substantial threat to the law. Nonetheless, this article aims to illustrate that human creativity must not be underestimated. If used correctly, AI could change how law functions in the twenty-first century for the better. This article explores various theoretical aspects of how AI and the law interact with society, focusing in particular on Lessig’s Law of the Horse and the more recent concept of the Law of the Zebra. It further treats the concept of technological exceptionalism and how this theory has allowed for the progressive evolution of AI.
McGinnis and Pearce argue that machine intelligence and AI will cause a ‘great disruption’ in the market for legal services. This article will explore this concept of disruptive innovation, suggesting that the disruption McGinnis and Pearce allude to will be more significant in scale than initially anticipated. It will explore the ethical, moral, and social issues associated with AI, investigating how AI has the potential to pose a problem to the law. As an extension of the moral, social, and ethical issues presented, this article will offer an insight into AI’s autonomy as regards the law. Finally, the problems of foreseeability and transparency will be discussed in terms of the substantial issues AI poses to the law.
The idea of the robot judge will be addressed, identifying how this could materialise. Its benefits and challenges will subsequently be critically assessed in terms of the threat to legal practice in the twenty-first century. Additionally, the practicalities of the robot judge will be assessed, suggesting that it is an unnecessary fear and a potential gift. As part of chapter two, the idea of AI systems being granted a more respected legal personality will be explored. The arguments presented will allude to the sophistication of current AI technology and issues surrounding liability. The importance of the concept of legal personality will be stressed, demonstrating that society must be cautious as to who is granted legal rights of personhood.
Chapter three will present innovative solutions to the problems assessed in chapter two. This section will set out a detailed model that allows for the comprehension of the legal disruption caused by AI and its associated technologies.
The practice of law is constantly evolving. A wise solution to combat any threats of AI is for humankind to evolve alongside technology and work in tandem to allow the practice of law to become more effective and modern. Traditional society has shown that technology is one of the most significant enablers of positive change. Picker highlights this in his commentary on the agricultural and industrial revolutions, in which similar evolutions occurred. With regard to these, Picker shows that technology has allowed for the ‘creation and modification…of international law throughout history’. In line with this, society must reconcile itself with the inevitable changes AI will bring to both law and broader society.
AI and associated technologies are only a threat to the law if those involved in the practice and creation of law allow it. The twenty-first century has induced a wave of innovation. This article will demonstrate that, whilst AI is not the greatest threat to legal practice in the twenty-first century, as Surden has explained, ‘knowing the strengths and limitations of AI technology is crucial for the understanding of AI within the law’. If understood correctly, with respect to its creation and subsequent implementation in legal practice and beyond, AI could potentially be the greatest gift to the law through technical understanding, enhanced education, and a new, more flexible legislative framework.
Chapter 1: The Revolution of AI in the Law
Kronman suggests that law is a non-autonomous discipline, such that human input is required, but other components are just as essential for its functionality. It has become increasingly apparent that technology and AI are crucial parts of this multi-functional composition. Accordingly, this section will explain the evolution of AI from its origins, and critically assess how technology has been used in the practice of law in the twenty-first century.
The law is influenced by changing social norms in society and contributes to broader social structures. The emergence of AI is undoubtedly changing case predictability and the interpretation of legal data. Technology is evolving, and legal-expert systems can be seen as less valuable than more advanced technologies such as predictive coding and machine learning. This chapter aims to illustrate the development of AI in law and how the future of law could potentially develop.
AI—Evolution in Legal Practice
The concept of AI is enigmatic. At present, the term has no official legal definition. Russell and Norvig have rightly linked the speculation concerning its capabilities with its lack of a precise definition, which is particularly concerning for a technology so prevalent in society and the law. Following McCarthy, this article will define AI as ‘the science and engineering of making intelligent machines, especially intelligent computer programs’, as ‘related to the similar task of using computers to understand human intelligence’.
Alarie has commented on the power of AI and its ability to provide financial sustainability and productivity. If technology is available to improve how the law is implemented and practised, then it is only natural for it to be utilised. However, there is much trepidation regarding the changes that AI has brought and will continue to bring. The obscurities of AI and its unpredictable nature have led some, such as Leith, to believe that AI is a substantial threat to the law. However, the evolution of AI has also, by those such as Stoskute, been seen to revolutionise the law for the better in terms of both client satisfaction and practice efficiency.
Sergot has been a particularly prominent commentator on AI and the law. He demonstrated that AI could use computational reasoning to interpret statutes through ‘Prolog’, a coding language. In doing so, he illustrated the application of technical rules and procedures in the interpretation of rules and laws. Sergot’s Prolog work encodes letters and words so that they can be processed computationally, allowing statutes and other laws to be interpreted; the relationships established between these encoded rules allow conclusions to be drawn. This early use of basic computational methodology can be understood as a prediction of the future impact of AI. Expanding upon Sergot’s findings, Susskind’s investigation into technology and written law promotes the concept of a symbiotic pathway developing between lawyers and technology, allowing the ‘digital lawyer’ to be conceived. Susskind’s and Sergot’s research thus proves complementary, both positioning law and technology symbiotically and foreshadowing the future of legal practice.
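The deductive style of Sergot’s Prolog work, best known for formalising the British Nationality Act 1981 as a logic program, can be loosely illustrated in modern terms. The sketch below is a hypothetical simplification, not his actual formalisation: statutory conditions become if-then rules, and conclusions are derived mechanically from established facts (the rule names and conditions here are invented for illustration).

```python
# A minimal sketch of rule-based statutory reasoning in the spirit of
# Sergot's Prolog formalisations. Each rule pairs a conclusion with the
# facts required to establish it; these rules are hypothetical.
RULES = [
    ("acquires_citizenship", ["born_in_uk", "parent_is_citizen"]),
    ("parent_is_citizen", ["mother_is_citizen"]),
    ("parent_is_citizen", ["father_is_citizen"]),
]

def derive(facts):
    """Forward-chain over the rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conclusion, conditions in RULES:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

derived = derive({"born_in_uk", "mother_is_citizen"})
print("acquires_citizenship" in derived)  # True
```

Real Prolog offers unification and backward chaining rather than this simple forward pass, but the core idea is the same: once legal conditions are expressed as formal rules, a machine can apply them exhaustively and consistently.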
At present, AI’s primary use in the law takes the form of legal-expert systems. A legal-expert system is a domain-specific system that employs a particular branch of AI to mimic human decision-making processes in the form of deductive reasoning. Technology is, however, evolving, and legal-expert systems are becoming less valuable than the more advanced technologies available.
It should be noted that AI has the purpose of assisting lawyers and does not have any form of recognisable legal personhood in the court of law. One of the main benefits of AI in the form of legal technology at present is e-discovery, described by Baker as a means of organising complex information centred around a given legal problem. Recent court rulings have shown progression in allowing the use of AI in the court of law, as seen in Irish Bank Resolution Corporation Limited and Ors v Sean Quinn and Ors. In the Irish Bank Resolution case, the ruling favoured predictive coding to aid the e-discovery process, in which electronic de-duplication reduced the documents for disclosure to 3.1 million. Within the Irish Bank Resolution ruling, Fullam J specifically references the paper ‘Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review’. This indicates that AI processes can yield superior results as measured by recall, precision, and the F-measure. Similarly, in Pyrrho Investments Limited v MWB Property Limited, the decision was made that AI benefits legal services. Lawyers must view this technical progression as bettering the practice of law. Whilst the second part of this article identifies substantial challenges AI poses to the law, the following section explores the concept of technological exceptionalism in the sense that AI is not an option but an unavoidable obligation.
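The three measures cited in the technology-assisted review literature are simple to state: recall is the share of truly relevant documents the review found, precision is the share of flagged documents that were truly relevant, and the F-measure balances the two. A short sketch, with entirely hypothetical document sets, makes the arithmetic concrete:

```python
def review_metrics(predicted_relevant, actually_relevant):
    """Recall, precision, and F1 for one document-review pass.
    Documents are represented by illustrative IDs."""
    predicted = set(predicted_relevant)
    actual = set(actually_relevant)
    true_pos = len(predicted & actual)
    recall = true_pos / len(actual) if actual else 0.0
    precision = true_pos / len(predicted) if predicted else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return recall, precision, f1

# Hypothetical review: 10 truly relevant documents; the reviewer flags 8,
# of which 6 are actually relevant.
r, p, f = review_metrics(predicted_relevant=range(8),
                         actually_relevant=list(range(6)) + [20, 21, 22, 23])
print(r, p)  # 0.6 0.75
```

The point the cited paper makes is that predictive coding can score higher on these measures than exhaustive manual review, not merely that it is faster.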
Technological Exceptionalism and the Law
AI has undoubtedly made a substantial impact on the implementation and practice of law, manifesting practically in terms of contract management and data analysis. However, there are concerns from scholars such as Cowls and Floridi surrounding whether AI is negatively impacting how the law is created and implemented. This takes into consideration the theory of technological exceptionalism. Calo has described technological exceptionalism to be ‘when [a technology’s] introduction into the mainstream requires a systematic change to the law’. This concept aligns with the idea that AI impacts law more substantially than the other modalities of regulation, such as social norms, financial markets, and architecture. The creation of new laws, and how society interprets existing laws, for the facilitation and control of AI is necessary to ensure the maintenance of the Rule of Law. Calo argues that the law is ‘catching up with the internet [and AI]’. For instance, the Electronic Communications Privacy Act passed in 1986 interacts poorly with a post-Internet environment in part because of this legislation’s assumptions about how communications work.
If this is true, it is arguable that technology and AI could be seen as a substantial threat to the law in the twenty-first century in terms of case prediction and descriptive ability. If the law is indeed catching up with the internet and technology, then there are inadequacies in our current legislative framework, as noted by the Communications Committee of the UK House of Lords. Nonetheless, Davis has argued that although AI can become better than humans at describing and predicting the law, AI will not be able to address the value judgements involved in how the law should be interpreted. Lawyers are still needed and are not in any imminent danger of becoming useless in present-day society. Bues and Matthaei have similarly made the case that ‘lawyers are needed to process convoluted sets of facts and circumstances...and to consider moral, legal rights and render reasoned opinions’. Of greater concern than AI’s possible replacement of human lawyers is its regulation. At present, there is no legislative framework to regulate the use of AI.
Chapter 2: Accelerated Technology in Law—is AI an Insurmountable Threat?
Lyria Bennett Moses demonstrates how new technologies, including AI, are challenging existing laws and how they are practised. This notion of continuous change is unwelcome to some in legal practice and other industries. Issues associated with AI and emerging technologies within the legal sphere stem from a lack of foreseeability, opacity, and the human inability to compete with AI technology and its computational ability. Huang and Rust have commented on the ongoing human concern of jobs being replaced by AI and technology. This chapter will identify the challenges the Digital Age has induced in the law in the twenty-first century. Additionally, it will explore the social and ethical issues legal systems continue to face concerning AI’s regulation in terms of autonomy and potential unpredictability.
AI and Emerging Technologies—Human Replacement?
As chapter one outlined, there have been many developments in legal technology and AI in the twenty-first century. In 2017, the legal technology industry saw an investment of $233 million across 61 countries. Many technological and legal scholars, such as Moses and Susskind, argue that innovation is of paramount importance and that a revolution of legal technology is essential for increased efficiency and productivity in legal practice.
This technical revolution poses risks to the human aspects of the law. Susskind and Susskind have predicted the inevitability of technological unemployment and replacement to come in the legal profession. They suggest that legal practice is about the provision of knowledge, and the technological capabilities of AI can offer this more efficiently than humans. This notion calls to the fore AI’s greater capacity to understand and predict law, as opposed to its lesser capacity to interpret and evaluate it. If the Susskinds are correct, then AI could be the most substantial threat to the practice of law in the twenty-first century. McGinnis and Pearce support this view by stressing the ‘high value’ placed on legal information; if there is greater importance placed on legal information than other more trivial forms of information, then AI technology could replace humans in information finding and analysis. However, if the legal profession takes heed of AI’s lower capacity than humans to interpret, create and evaluate the law, then this appears less of an issue.
Nevertheless, there are also many opposing views suggesting that AI could greatly assist the practice of law in the twenty-first century, such as Brescia’s proposals around alternative cost-effective access to justice facilitated by technology and the removal of unnecessary human labour when it comes to document review and contract formation. Similarly, Levine, Park, and McCornack argue that AI technology offers superior lie detection and probability prediction services. The following section will assess the challenges and dangers of AI with more advanced autonomous capabilities.
Autonomy, Artificial Intelligence, and the Law
One of the main problems with AI is that it can act as an autonomous system beyond human control. This autonomy sets AI aside from all other earlier technologies and causes moral, social, and legal problems. AI now has the potential to drive cars, diagnose diseases, and design buildings. If AI currently can perform complex tasks autonomously, it poses the question of what is next in terms of the digital capabilities of AI, in particular as concerns the metrics of autonomy, foreseeability, and causation.
Foreseeability is the notion of knowing or being able to guess something before it happens. This concept is deemed important in law as it allows new laws to be created before issues occur. However, AI and autonomous technologies pose a substantial threat to the concept of foreseeability. AI first illustrated its ability to think autonomously through a computer program playing chess in 1958, and in 1997 IBM’s Deep Blue computer successfully beat a grandmaster. This example can be seen as a positive technological breakthrough for AI and its autonomy to make decisions without human input. However, there is also clear risk involved regarding how else this technology could be used.
This notion of autonomy creates barriers in terms of foreseeability. Autonomy is difficult to manage and control, and could become a genuine problem if AI is used in the law in the same manner. In law, the issue is not the newly apparent creative nature of AI but the lack of foreseeability. In the aforementioned example, the system’s actions were unprecedented and unforeseeable. As Calo notes, ‘truly novel affordances tend to invite re-examination of how we live’. AI in the Digital Age is precisely such a novel affordance, and a high-risk one. The lack of foreseeability forces a need to look closely at AI from a legal perspective as a preventative measure to protect the Rule of Law. In terms of law, issues could be found in unjust case predictions, as AI systems have no moral compass, and in human replacement in the actual practice of law. Although evidence concerning foreseeability is limited in legal practice, there is a potential foreshadowing of substantial risk to legal practice in the twenty-first century.
The idea that AI and associated technologies pose a substantial threat to legal practice may be valid. However, it is crucial to recognise that AI may also be identified as a risk not exclusive to law. Peter Huber has looked critically at the capabilities of AI and suggested that it can be seen as a ‘public risk’, defined as a ‘centrally or mass-produced [threat]…outside the risk bearer’s direct understanding or control’. Due to the lack of understanding of the abilities of AI, coupled with the challenge of assigning social, legal, or moral responsibility to AI, it could be considered a ‘public risk’. Therefore, all humankind must be cognisant of this threat. This article assesses whether AI is a substantial threat to the law in the twenty-first century. Whilst it is clear there are many risks to the law, one could conversely view the law as the most substantial threat to AI in the twenty-first century. The law provides safeguards to control and regulate the abilities of AI in order to protect society. Nonetheless, whilst the law can offer safeguards in AI regulation, this also poses many challenges within the realm of law, such as the introduction of the ‘robot judge’.
The Robot Judge—A Threat to Law?
The range of tasks and activities in which humans are superior to computers is becoming ‘vanishingly small’. Today, machines perform manual tasks once performed by humans, but they also perform tasks that require thought. Dworkin has spoken about the concept of computer programmes predicting the outcome of cases more effectively than humans. He poses the question of whether, if this happened in the practice of law, it would render lawyers obsolete. If given greater autonomy, AI could lead to legal obsolescence in terms of legal description and prediction. Sorensen and Stuart and, later, Moses suggest that AI could lead to the obsolescence of human legal functionality due to the cost-effective nature of AI. This poses a moral issue regarding the Universal Declaration of Human Rights and the International Covenant on Economic, Social and Cultural Rights. These internationally recognised instruments convey a premise of secure employment for all their signatories. If AI eventually exceeds human capacity, some jobs will inevitably become obsolete. If political leaders, scientists, and lawyers do not address this situation of opacity and discretion, then the demise of the human lawyer may become a reality. At present, the answer is unknown. However, Dworkin’s question will now be applied to the prospect of a permanent robot judge.
In a study carried out in October 2016 regarding the use of a ‘Robot Judge’, an AI judge was able to reference 584 cases from the European Court of Human Rights on privacy law. Aletras and Preoţiuc-Pietro commented regarding this study on the algorithm’s 79% accuracy (meaning the correct prediction was made) and its ability to predict instantly. Barton has also described this technological move toward the robot lawyer as changing from monotonous criminal defence to intelligent defence. Whilst Barton’s comment is exclusive to criminal law, it could also be extrapolated to other areas such as human rights, family law, or contract law, posing a more dominant issue if AI systems take over. If society believed that a robot lawyer would offer more accurate predictions and more intelligent case analysis, then perhaps this form of technology could pose the greatest threat to the law in the twenty-first century. However, this threat may be restricted to administrative law. AI machine learning technology performs probabilistic analysis of any given legal dispute using case law precedents. This does not take into consideration evaluative and creative input to judicial decision-making. Attention must also be given to the importance of advocacy and the influence it provides with respect to case outcomes. AI systems could be viewed as devoid of human creativity (explored in chapter three). Therefore, the threat may not be as substantial as some, like Huber, perceive it to be.
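The probabilistic analysis described above can be made concrete with a toy illustration. This is not the 2016 study’s actual pipeline (which used a support vector machine over textual features); it is a minimal naive Bayes classifier, with invented training snippets, showing how word frequencies in past judgments labelled ‘violation’ or ‘no_violation’ can be used to score a new case summary:

```python
import math
from collections import Counter

def train(cases):
    """cases: list of (text, label). Returns per-label word counts and totals."""
    counts, totals = {}, Counter()
    for text, label in cases:
        words = text.lower().split()
        counts.setdefault(label, Counter()).update(words)
        totals[label] += len(words)
    return counts, totals

def predict(counts, totals, text):
    """Pick the label whose word distribution best explains the text."""
    vocab = {w for c in counts.values() for w in c}
    scores = {}
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))  # class prior
        for w in text.lower().split():
            # Laplace smoothing so unseen words do not zero out the score
            score += math.log((counts[label][w] + 1) /
                              (totals[label] + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical, drastically simplified "past cases"
past_cases = [
    ("surveillance without judicial oversight", "violation"),
    ("detention prolonged without review", "violation"),
    ("interference prescribed by law and proportionate", "no_violation"),
    ("restriction pursued a legitimate aim", "no_violation"),
]
counts, totals = train(past_cases)
print(predict(counts, totals, "prolonged surveillance without oversight"))
```

The sketch also exposes the limitation noted in the text: the model only re-weights patterns present in its training data; it contributes no evaluative or creative judgement of its own.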
Scholarly opinion suggests that there is more to legal analysis, judgement, and interpretation than swift computational analysis. Additional challenges associated with the robot lawyer include the use of human morality to make life-changing decisions in the court of law. Many scholars, including Bogosian, note that the unpredictability and ever-changing nature of the law lends itself to a variety of likelihoods with any given legal issue, for example how criminal law should be enforced. These situations require a sense of moral judgement as to how certain laws should be interpreted. Human judgement is required to analyse the context of a particular case. In the UK, for example, Section 25 of the Offences Against the Person Act 1861 speaks to a jury determining whether the Defendant is guilty or not guilty. Currently, no robot jury has been implemented in the UK, which implies that a jury must take human form. Henderson comments on the necessity of humans being involved in the court of law’s decision-making process, stating that it is ‘intrinsically important to retain a human in the loop’. Nonetheless, technological input could be beneficial in certain legal practice areas.
It could be argued that a robot lawyer might be useful in certain situations of non-contentious law. However, there remains a need for a human lawyer and human jury in more complex areas of law. In this scenario, it is possible that AI and humankind could work collaboratively to make the law more extensive and concise in its practice. This supports the idea that AI is not a substantial threat to law but rather a form of assistance.
Legal singularity is defined as ‘AI becom[ing] more…sophisticated, [such that] it will reach a point where AI itself becomes capable of recursive self-improvement. Simply put, technology will learn how to learn’. Singularity in law would entail a world where all legal outcomes are perfectly predictable. In 1993, Vinge predicted that super-intelligence would continue to advance at an incomprehensible rate. Although this prediction is taken seriously, the concept of super-intelligence is yet to be achieved by AI. If deep-learning technologies could produce artificial super-intelligence, it would make possible one of Dworkin’s most controversial and compelling theories: that there is one correct answer to any legal question. If this theory became a reality, it would be accurate to state that AI poses a substantial threat to the law. However, there remain at present many limitations to AI. Machine learning and AI systems cannot know what patterns or predictions exist outside their training data. Transparency is essential in terms of the design and purpose of AI; a further problem is opacity, the human inability to identify how the software functions. However, this could be combated in the pre-design and post-design development of AI. Humans must learn to comprehend and fully understand the inner workings of AI and technology, in order to minimise the threat to society and the practice of law.
AI’s Legal Personality - Legal Ethics
Legal personality is the crux of any legal system in terms of representation and determination of rights. The idea of an AI machine or algorithm possessing its own legal personhood was first presented by Lawrence Solum in 1992. Solum’s rationale for granting this official legal personality was to allow for liability when there is any form of wrongdoing. The arguments in favour of granting AI its own legal personality are typically framed in instrumental terms, with comparisons to juridical persons such as corporations. Implicit in those arguments, or explicit in their illustrations, is the idea that as AI systems approach the point of indistinguishability from humans, as measured, for instance, by the Turing Test, they should be entitled to a status comparable to natural persons. Until 2017 the idea of granting an AI machine its own form of legal personality was speculative. However, in late 2017 Saudi Arabia granted citizenship, a form of personhood, to a ‘humanoid robot’ named ‘Sophia’. This was widely seen as a step towards granting machines a more autonomous legal personality. Furthermore, the European Parliament brought forward a resolution to contemplate the establishment of ‘legal status…so at least the most sophisticated of robots…[would be] responsible for making good any damages that they may cause’.
As seen in this proposal by the European Parliament and the development in Saudi Arabia, the transfer of legal personality to AI machines is clearly possible. However, whether it is ethically and morally correct is a different question. If this became a reality, it would contribute to the narrative that AI could pose a substantial threat to law, given the increased legal power it would confer on AI. AI systems cannot be punished as any human would. Edward, First Baron Thurlow, observed in the eighteenth century that a corporation has ‘no soul to be damned, nobody to be kicked’; this statement resonates today in the moral and ethical implications of granting AI legal personality. Lawmakers must be cognisant that human qualities may be ascribed to machines with natural language processing capabilities. There are further concerns regarding AI resembling human attributes because of the lack of understanding of how these qualities originate in humans. To avoid AI being granted excessive legal personhood, it may be appropriate to grant a limited juridical personality to AI in certain scenarios, such as contract law. This would allow AI to be limited in what it can do and enable the law to become more transparent and effective concerning work carried out or decisions made by AI machines.
Ethical and Social Issues of AI and the Law
According to Gravett, the practice of law has been ‘relatively shielded for the past 50 years’. Subsequently, the ethics, morals, and social implications of law have generally been unchanged due to its protected status. However, it is clear from the novel questions of legal personality AI raises that there may be newfound ethical, moral, and social issues associated with AI. Arguably, the new world of law will differ significantly from traditional law. According to Johnson, AI has the potential to pose issues concerning its hypothetical ability to kill and launch cyber or nuclear attacks. New regulations that respect the Rule of Law and the current legal order must be implemented. This concept of new laws will be discussed in the next section in terms of the Law of the Horse and the newly suggested Law of the Zebra.
The Law of the Horse or the Law of the Zebra?
The late 1970s saw the beginning of the Digital Age. Legal scholars and technical experts debated whether the internet and the use of technology deserved their own regulation. An example of this can be seen in Easterbrook’s early discussion concerning the need to regulate technology. Easterbrook suggested that existing law should simply be applied to the internet and technology, including AI. How, however, would one know whether specific features of pre-existing law were sufficient to regulate technology? This inspired Lessig’s argument that the legislative framework in place in the UK at the time fell short, which saw him conceive of the ‘Law of the Horse’. The Law of the Horse allowed for tech-specific laws to be drafted to narrow the gaps in pre-existing laws concerning technology. A primary example is the Controlling the Assault of Non-Solicited Pornography and Marketing Act in the USA. This Act focuses on the regulation of digital communication, making it more difficult for AI algorithms to send emails without human intervention. While this can address certain issues concerning AI in terms of its autonomy, there has been a movement towards the ‘Law of the Zebra’: a usurpation of this traditional method, especially in contract law.
The Law of the Zebra has been described as ‘an undesirable body of law where technological exceptionalism triumphs over traditional legal paradigms’. With the prevalence of technology in law and broader society, lawmakers risk rendering long-standing traditional contract law irrelevant due to its inability to regulate AI. This poses a substantial threat to traditional law in that technology and AI take precedence, in the engineering of legislation and how it is practised, over the conventional black-letter law that laid the foundations of our global legal society. The cases of International Airport Centers LLC v Citrin and Douglas v US District Court exemplify the incompatibility of existing laws, with the courts favouring technology and departing from the precedents of traditional contract law. However, Andrea Matwyshyn argues for the notion of ‘restrained technological exceptionalism’: the ‘Law of the Horse’ is necessary, but it is essential to restrict technology in its dictation of existing laws. As Matwyshyn states, society must ‘maintain the status quo of human relations’. If society allows AI not only to change how the law is practised but also how the law is interpreted, then the threat of AI replacing humans and superseding long-standing human thought becomes much higher.
Nonetheless, the ‘Law of the Zebra’ simply poses a potential threat. At present, the ‘Law of the Horse’ is implemented where required. If AI is addressed at its roots in development and through effective solutions such as regulatory legislation and a legal disruption model, then the benefits of AI may outweigh its apparent risks. The next chapter of this article will signpost and explain solutions to the issues of AI and the law. If humanity embraces AI with an open mind, it is arguable that AI does not pose a threat to the law in the twenty-first century but rather a substantial aid.
Chapter 3: Solutions—AI and the Law, A Better Future
Innovation—Solutions to AI and the Law
Chesterman has suggested that instead of AI posing a considerable threat to legal practice, legal and technological innovation will progress in tandem, in line with the previously outlined symbiotic relationship between law and technology. This collaboration will lead to an alternative business model that supports both law and technology. This hybrid revolution has arguably already started. Two universities located in Northern Ireland have recently implemented new postgraduate degrees: namely, Queen’s University Belfast with its Law and Technology postgraduate course, and the University of Ulster with Corporate Law and Computing. Ciaran O’Kelly stated in association with these new programmes that they are ‘designed to prepare [one] for a career on the interface of legal practice and technology’. Through education and the development of new skills for our future lawyers, Chesterman may be proven correct if the skills taught in these programmes are implemented to facilitate a smooth transition into a new era of law and technology working in collaboration. Dana Remus further supports this notion of collaboration. Remus suggests that technology is changing the practice of law rather than replacing it. She believes that AI will only impact repetitive tasks that require little thought, such as document review or contract formation. However, Remus’ findings are based on the current capabilities of AI, and society has seen huge developments in AI and the practice of law. For instance, a LawGeex study showed that an AI programme could review a Non-Disclosure Agreement with 94% accuracy, compared to a human lawyer with 85% accuracy. Although this is impressive, the use of AI for such purposes remains, at present, limited.
Innovation and the creativity of the human mind can allow for solutions to be introduced in the practice and formation of law across the globe. Horowitz has compared AI to historically enabling technologies such as electricity and the internal combustion engine. Society and lawmakers have learned to regulate these vital technologies, making it plausible that the same form of continuous regulation could be implemented for AI. This section aims to propose solutions to some of the challenges previously mentioned, demonstrating that AI can be the greatest gift to the practice and interpretation of the law rather than a threat. The introduction of a legal disruption model will be suggested as a solution to the challenges that AI and associated technologies pose, or have the potential to pose, to the creation, interpretation, and implementation of laws. This section will further evaluate how best to introduce a legislative framework to curtail the challenges associated with AI.
A New Legislative Framework
It is vital to address both legal scholarship and legal and regulatory responses to AI, tackling problems at their core rather than reactively. The legal disruption model could be one solution to the threat that AI poses to law, as it identifies the most fundamental issues in terms of regulation. The ambiguous and unpredictable nature of AI requires new sui generis rules to deal with issues of conduct, application, and implementation in the present and the future, making this model highly applicable.
AI is still relatively new, meaning that legislators are still coming to terms with its potential implications and how it should be regulated. At present, the European Commission is trying to push through new legislation concerning the regulation of AI. Legal scholars such as Susskind believe that laws, by their very nature, must be technology-agnostic to ensure that future technology will still be subject to an overarching legal framework. To achieve this, however, a more in-depth understanding must first be reached. This section explores how a legislative framework could be implemented internationally and nationally to regulate and control AI. Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, has commented regarding AI that ‘trust is a must’. New legislation is imperative to ensure this trust is in place. The new AI regulation that the European Commission has proposed aims to ensure the safety of its citizens and to offer proportionate and flexible rules addressing the specific risks posed by AI systems. If the European Commission is successful with this legislation, its member states will be better equipped to combat the challenges AI poses, with assurances that AI is used safely, benefitting society and the law.
Currently, however, laws made to combat the issues of AI are reactive rather than preventative. If preventative legislative measures were put in place to anticipate the issues posed by AI, many problems would never arise: AI and technology would not come to be seen as a threat and could be used productively. The issue of ‘DeepFakes’ and the legislation introduced to combat it provides an illuminating example. A ‘DeepFake’ is online content, such as an AI-generated face-swap in a video, produced by a Generative Adversarial Network-based technique and often indistinguishable from content created by a human. To combat DeepFakes, domestic law was introduced in China and the USA to impose sanctions for the misuse of this technique. In Virginia in the USA, for example, the state incorporated this regulation into law through the ‘DeepFake’ Civil Harassment Bill; in China, similar rules and regulations have been imposed. This is a step in the right direction in curtailing the issues that AI poses to society and the law, demonstrating that this type of issue could be solved in a macrocosmic manner. Had the problem been addressed at the preliminary stages of the technology’s creation, with the nexus between AI, the law, and regulation more comprehensively understood, it would no longer be an issue in law.
Conclusion—An Issue with Manageable Solutions
The use of AI in creating and implementing the law is arguably endless and could completely transform the twenty-first-century legal landscape. Despite the negative speculation from Dworkin and Moses, AI will fortunately not replace most lawyers’ jobs, at least in the short term. However, this article has highlighted some of the most substantial threats to legal practice and legal interpretation in the twenty-first century. These include the prospect of AI progressing so far beyond what the human mind can fathom that it possesses a level of autonomy that cannot be controlled by legal or technological means.
The idea of the robot judge becoming a reality in twenty-first-century legal society has been addressed in detail. However, as illustrated, if developed and maintained correctly and within the desired scope of control and manageability, the robot lawyer is a novelty that could prove useful, rather than threatening, in some aspects of law. The complex, newfound ‘Law of the Zebra’ and the prospective issue of legal singularity have been identified as key issues that lawyers face globally. This has led to the conclusion that areas of traditional black letter law must be maintained: law has layers of complexity that exceed computational comprehension, necessitating human input, thought, and creativity.
The solutions examined in the latter section of this chapter illustrate that legal practitioners can use AI effectively and allow the law to develop alongside it. This article has recognised a need for a new conceptual model for understanding legal disruption in the twenty-first century. If an innovative legislative framework were developed and implemented, it could combat the challenges posed by new technologies such as AI.
Human society is bound by cognitive limitations, meaning the law could use AI and its ‘brute force calculation speed’ to better itself. Whilst factual foreseeability and unforeseen functionality pose a substantial threat, these issues can be overcome by human innovation. In the 1940s, the writer and futurologist Isaac Asimov laid down his three laws of robotics. Their modern adaptation for the legal profession, the three laws of legal robotics, holds that technology will not replace lawyers, but lawyers who can use and understand technology will replace those who cannot; that lawyers who act like robots will be replaced by robots; and that lawyers who combine technology with the creativity of the human mind to embrace AI will allow the law to develop positively in the twenty-first century. AI should not be feared but embraced. Future lawyers must be proactive, informed, and educated in all areas of AI to optimise how it can improve the law. AI is arguably not a substantial threat to the law in the twenty-first century if handled accordingly through innovative legislation, education, and an open mind.
Jamie Donnelly graduated from Queen’s University Belfast with a degree in Law and a master’s degree in Law and Technology. He is currently training to become a solicitor. Jamie has a keen interest in the intersection between law and technology and in how Artificial Intelligence is changing legal practice.
1. Yun Hou, Guoping Li, and Aizhu Wu, ‘Fourth Industrial Revolution: technological drivers, impacts and coping methods’ (2017) 27 Chin. Geogr. Sci. 626–637.
2. Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2nd edn, W W Norton & Company 2016).
3. Lord Sales, ‘Algorithms, Artificial Intelligence and the Law’ (2020) 25(1) Judicial Review.
4. Lawrence Lessig, ‘The Path of Cyberlaw’ (1995) 104(7) The Yale Law Journal 1743-1755.
5. Andrea M Matwyshyn, ‘The Law of the Zebra’ (2013) 28 Berkeley Tech LJ 155.
6. John O McGinnis and Russell G Pearce, ‘The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in The Delivery of Legal Services’ (2014) 82(6) Fordham Law Review. See generally Willem Gravett, ‘Is the Dawn of the Robot Lawyer upon us? The Fourth Industrial Revolution and the Future of Lawyers’ (2020) 23(1) PER / PELJ.
7. Colin B Picker, ‘A View from 40,000 Feet: International Law and the Invisible Hand of Technology’ (2001) 232 Cardozo Law Review 149, 156. See generally Ryan Calo, Robot Law (1st edn, Edward Elgar Publishing 2016).
8. Picker (n 7).
9. Harry Surden, ‘Artificial Intelligence and Law: An Overview’ (2019) 35 Ga St U L Rev 1305.
10. Anthony Kronman, The Lost Lawyer: Failing Ideals of the Legal Profession (Belknap Press of Harvard University Press 1995).
11. See generally Vivienne Artz, ‘How “intelligent” is artificial intelligence?’ (2019) 20(2) Privacy and Data Protection.
12. Peter Norvig and Stuart Russell, Artificial Intelligence: A Modern Approach (1st edn, Prentice Hall 1995).
13. John McCarthy, ‘What is Artificial Intelligence’ (2007) <http://www-formal.stanford.edu/jmc/whatisai/whatisai.html> accessed 8 September 2021.
14. Chay Brooks, Cristian Gherhes, and Tim Vorley, ‘Artificial intelligence in the legal sector: pressures and challenges of transformation’ (2020) 13(1) Cambridge Journal of Regions, Economy and Society 135-152.
15. Philip Leith, ‘The application of AI to law’ (1988) 2(1) AI & Soc.
16. Laura Stoskute, ‘How Artificial Intelligence Is Transforming the Legal Profession’ in Sophia Bhatti and Susanne Chishti (eds), The LegalTech Book: The Legal Technology Handbook for Investors, Entrepreneurs and FinTech Visionaries (John Wiley & Sons Inc 2020) 27.
17. Marek Sergot et al, ‘The British Nationality Act as a logic program’ (1986) 29(5) Commun ACM 370–386.
18. Laurence White and Samir Chopra, A Legal Theory for Autonomous Artificial Agents (1st edn, University of Michigan Press 2011); see also ibid.
19. ibid.
20. ibid.
21. Richard Susskind, ‘Expert Systems in Law: A Jurisprudential Approach to Artificial Intelligence and Legal Reasoning’ (1986) 49(2) The Modern Law Review 168-194.
22. Johannes Dimyadi et al, ‘Maintainable process model driven online legal expert systems’ (2019) 27 Artificial Intelligence and Law 93–111.
23. Jamie J Baker, ‘Beyond the Information Age: The Duty of Technology Competence in the Algorithmic Society’ (2018) 69 S C L Rev 557.
24. IEHC 175.
25. Olayinka Oluwamuyiwa Ojo, ‘The Emergence of Artificial Intelligence in Africa and its Impact on the Enjoyment of Human Rights’ (2021) 1(1) African Journal of Legal Studies.
26. Irish Bank Resolution Corporation Limited and Ors vs Sean Quinn and Ors EWHC 256 (Ch). Maura R Grossman, ‘Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review’ (2011) 17(3) Richmond Journal of Law and Technology.
27. Irish Bank Resolution Corporation Limited and Ors vs Sean Quinn and Ors EWHC 256 (Ch).
28. ibid.
29. Josh Cowls and Luciano Floridi, ‘Prolegomena to a White Paper on Recommendations for the Ethics of AI’ (2018) <https://ssrn.com/abstract=3198732> accessed 9 September 2021. See generally Luciano Floridi et al, ‘AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’ (2018) 28 Minds & Machines 689–707.
30. Quoted in Meg Leta Jones, ‘Does technology drive law: the dilemma of technological exceptionalism in cyberlaw’ (2018) Journal of Law, Technology & Policy 249, 251. See also Andrew D Selbst, ‘Negligence and AI’s Human Users’ (2020) 100 BU L Rev 1315.
31. Lawrence Lessig, Code: Version 2.0 (Basic Books 2006).
32. Summary of argument in Jones (n 30) 255.
33. Orin S Kerr, ‘The Next Generation Communications Privacy Act’ (2014) 162 U Pa L Rev 373, 375, 390. See generally Ryan Calo, ‘Artificial Intelligence Policy: A Primer and Roadmap’ (2017) 51(399) University of California Journal.
34. House of Lords, Select Committee on Communications, ‘Regulating in a digital world’ (2019) <https://publications.parliament.uk/pa/ld201719/ldselect/ldcomuni/299/299.pdf> accessed 9 September 2021.
35. Joshua Davis, ‘Artificial Wisdom? A Potential Limit on AI in Law (and Elsewhere)’ (2019) 72(1) Oklahoma Law Review 51-89. See also Susan Morse, ‘When Robots Make Legal Mistakes’ (2019) 72(1) Oklahoma Law Review.
36. Micha-Manuel Bues and Emilio Matthaei, ‘LegalTech on the Rise: Technology Changes Legal Work Behaviours, But Does Not Replace Its Profession’ in Kai Jacob, Dierk Schindler, and Roger Strathausen (eds), Liquid Legal: Transforming Legal into a Business Savvy, Information Enabled and Performance Driven Industry (Springer 2017) 94.
37. Lyria Bennett Moses, ‘Recurring Dilemmas: The Law’s Race to Keep Up With Technological Change’ (2007) 21 University of New South Wales Faculty of Law Research Series <http://www.austlii.edu.au/au/journals/UNSWLRS/2007/21.html> accessed 3 July 2018.
38. Adrian Zuckerman, ‘Artificial intelligence – implications for the legal profession, adversarial process and rule of law’ (2020) 136(1) Law Quarterly Review 427-453.
39. Ming-Hui Huang and Roland T Rust, ‘The Service Revolution and the Transformation of Marketing Science’ (2014) 33(2) Marketing Science 206–221.
40. The Law Society, ‘Horizon Scanning: Artificial Intelligence and the Legal Profession’ (2018) <https://www.lawsociety.org.uk/topics/research/ai-artificial-intelligence-and-the-legal-profession> accessed 9 September 2021.
41. Daniel Susskind and Richard Susskind, The Future of the Professions (Oxford University Press 2015).
42. ibid.
43. ibid.
44. McGinnis and Pearce (n 6) 3041.
45. Raymond Brescia et al, ‘Embracing Disruption: How Technological Change in the Delivery of Legal Services Can Improve Access to Justice’ (2015) 78 Alb L Rev 553.
46. Timothy R Levine, Steven A McCornack, and Hee Sun Park, ‘Accuracy in detecting truths and lies: Documenting the “veracity effect”’ (1999) 66(2) Communication Monographs 125.
47. Ozlem Ulgen, ‘A “human-centric and lifecycle approach” to legal responsibility for AI’ (2021) 26(2) Communications Law 97-108.
48. Matthew Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies and Strategies’ (2016) 29(2) Harvard Journal of Law and Technology.
49. Legal Information Institute, ‘Foreseeability’ (Cornell Law School Website, August 2021) <https://www.law.cornell.edu/wex/foreseeability> accessed 6 August 2021.
50. Larry Greenemeier, ‘20 Years after Deep Blue: How AI Has Advanced Since Conquering Chess’ (Scientific American, 2 June 2017) <https://www.scientificamerican.com/article/20-years-after-deep-blue-how-ai-has-advanced-since-conquering-chess/> accessed 6 August 2021.
51. Ryan Calo, ‘Robots in American Law’ (2016) <http://ssrn.com/abstract=2737598> accessed 8 September 2021.
52. Scherer (n 48). See also Brandon Perry and Risto Uuk, ‘AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk’ (2019) 3(2) Big Data and Cognitive Computing 26.
53. Benjamin Alarie, Anthony Niblett, and Albert Yoon, ‘Law in the Future’ (2016) 66(4) University of Toronto Law Journal 423-428.
54. Ronald Dworkin, ‘Hard Cases’ (1975) 88(6) Harvard Law Review.
55. Moses (n 37). See also Jesper B Sørensen and Toby E Stuart, ‘Ageing, Obsolescence, and Organisational Innovation’ (2000) 45(1) Administrative Science Quarterly 81-112.
56. ibid.
57. ibid.
58. Nikolaos Aletras et al, ‘Predicting judicial decisions of the European Court of Human Rights: A Natural Language Processing perspective’ (2016) 93(2) PeerJ Computer Science.
59. ibid.
60. Benjamin H Barton and Stephanos Bibas, Rebooting Justice: More Technology, Fewer Lawyers, and the Future of Law (Encounter Books 2017) 89–90.
61. Raffaele Giarda, ‘Artificial Intelligence in the administration of justice’ (Lexology, 12 February 2022) <https://www.lexology.com/library/detail.aspx?g=6aa0d4f0-3f67-4bd1-b352-a6d7b700f9e2> accessed 22 March 2022.
62. Perry and Uuk (n 52) 26.
63. Dworkin (n 54).
64. Kyle Bogosian, ‘Implementation of Moral Uncertainty in Intelligent Machines’ (2017) 27 Minds & Machines 591–608.
65. Offences Against the Person Act 1861, s 25.
66. Stephen E Henderson, ‘Should Robots Prosecute or Defend?’ (2019) 72(1) Oklahoma Law Review.
67. Wim de Mulder, ‘The legal singularity’ (KU Leuven Centre for IT & IP Law, 19 November 2020) <https://www.law.kuleuven.be/citip/blog/the-legal-singularity/> accessed 10 August 2021.
68. Vernor Vinge, ‘Technological Singularity’ (1993) <https://frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html> accessed 6 August 2021.
69. Daniel Goldsworthy, ‘Dworkin’s Dream: Towards a Singularity of Law’ (2019) 44 Alt LJ 286, 289. See also Robert F Weber, ‘Will the “Legal Singularity” Hollow out Law’s Normative Core?’ (2020) 27 Mich Tech L Rev 97.
70. IBM Cloud Education, ‘What is Machine Learning?’ (IBM, 15 July 2020) <https://www.ibm.com/cloud/learn/machine-learning> accessed 25 August 2021.
71. Lawrence B Solum, ‘Legal Personhood for Artificial Intelligences’ (1992) 70 NC L Rev 1231. See also Simon Chesterman, ‘Artificial intelligence and the limits of legal personality’ (2020) 69(4) International & Comparative Law Quarterly 819-844.
72. Solum (n 71).
73. Huma Shah and Kevin Warwick, ‘Passing the Turing Test Does Not Mean the End of Humanity’ (2016) 8 Cognitive Computation 409–419.
74. Ioannis Kougias and Lambrini Seremeti, ‘The Legalhood of Artificial Intelligence: AI Applications as Energy Services’ (2021) 3 Journal of Artificial Intelligence and Systems 83–92.
75. Ugo Pagallo, ‘Vital, Sophia, and Co.—The Quest for the Legal Personhood of Robots’ (2018) 9(9) Information 230.
76. European Parliament Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) (European Parliament, 16 February 2017), para 59(f).
77. Quoted in Mervyn King, Public Policy and the Corporation (Chapman and Hall 1977) 1.
78. Cf Luisa Damiano and Paul Dumouchel, ‘Anthropomorphism in Human–Robot Co-evolution’ (2018) 9 Frontiers in Psychology 468; Simon Chesterman, ‘Artificial intelligence and the limits of legal personality’ (2020) 69(4) International & Comparative Law Quarterly 819-844.
79. Eleanor Bird et al, ‘The ethics of artificial intelligence: Issues and initiatives’ (2020) <https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf> accessed 25 August 2021.
80. Gravett (n 6).
81. Geoffrey C Hazard Jr, ‘The Future of Legal Ethics’ (1991) 100 Yale LJ 1239.
82. Arthur J Cockfield, ‘Towards a Law and Technology Theory’ (2003) 30 Man LJ 383.
83. James Johnson, ‘Artificial intelligence & future warfare: implications for international security’ (2019) 35(2) Defence & Security Analysis 147-169.
84. Frank Easterbrook, ‘Cyberspace and the Law of the Horse’ (1996) 1(1) University of Chicago Legal Forum.
85. Lawrence Lessig, ‘The Law of the Horse: What Cyberlaw Might Teach’ (1999) 11(2) Harvard Law Review 501-549.
86. Andrea M Matwyshyn, ‘The Law of the Zebra’ (2013) 28 Berkeley Tech LJ 155.
87. ibid.
88. Anna Johnston, ‘The ethics of artificial intelligence: start with the law’ (Salinger Privacy, 19 April 2019) <https://www.salingerprivacy.com.au/2019/04/27/ai-ethics/> accessed 6 August 2021.
89. International Airport Center LLC v Citrin F.3d 418, 420 (7th Cir 2006).
90. Douglas v US District Court 495 F.3d 1062 (9th Cir 2007).
91. Ryan Calo, ‘Robotics and the Lessons of Cyberlaw’ (2014) 103(3) California Law Review.
92. Matwyshyn (n 86).
93. Simon Chesterman, We, the Robots? Regulating Artificial Intelligence and the Limits of the Law (Cambridge University Press 2021).
94. Frank Levy and Dana Remus, ‘Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law’ (2017) 30(3) Georgetown Journal of Legal Ethics.
95. Queen’s University Belfast, ‘Law and Technology’ <https://www.qub.ac.uk/courses/postgraduate-taught/law-technology-llm/#overview> accessed 6 August 2021.
96. University of Ulster, ‘Corporate Law and Computing’ <https://www.ulster.ac.uk/courses/202122/corporate-law-computing-and-innovation-27258#secsummary> accessed 6 August 2021.
97. Quoted (n 95).
98. Chesterman (n 93).
99. Levy and Remus (n 94). See generally Frank Levy, ‘Computers and populism: artificial intelligence, jobs, and politics in the near term’ (2018) 34(3) Oxford Review of Economic Policy 393–417.
100. ‘LawGeex Hits 94% Accuracy in NDA Review vs 85% for Human Lawyers’ (The Artificial Lawyer, 26 February 2018) <https://www.artificiallawyer.com/2018/02/26/lawgeex-hits-94-accuracy-in-nda-review-vs-85-for-human-lawyers/> accessed 6 August 2021.
101. Matthijs Maas, ‘International Law Does Not Compute: Artificial Intelligence and the Development, Displacement or Destruction of the Global Legal Order’ (2019) 20(1) MelbJlIntLaw 29-56.
102. See generally Heike Felzmann et al, ‘Towards Transparency by Design for Artificial Intelligence’ (2020) 26 Sci Eng Ethics 3333–3361.
103. Hin-Yan Liu et al, ‘Artificial intelligence and legal disruption: a new model for analysis’ (2020) 12(2) Law, Innovation and Technology.
104. Maas (n 101).
105. See generally European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ (2021) <https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206> accessed 25 August 2021.
106. Susskind and Susskind (n 41).
107. European Commission, ‘Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence’ (21 April 2021) <https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682> accessed 25 August 2021.
108. ibid.
109. Konstantin Pantersev, ‘The Malicious Use of AI-Based DeepFake Technology as the New Threat to Psychological Security and Political Stability’ in Hamid Jahankhani and Jaime Ibarra (eds), Cyber Defence in the Age of AI, Smart Societies and Augmented Humanity (Springer 2020) 37. See also Adi Robertson, ‘Virginia’s “Revenge Porn” Laws Now Officially Cover Deepfakes’ (The Verge, 1 July 2019) <https://www.theverge.com/2019/7/1/20677800/virginia-revenge-porn-deepfakes-nonconsensual-photos-videos-ban-goes-into-effect> accessed 29 July 2021.
110. Liu et al (n 103).
111. Edvinas Meskys et al, ‘Regulating Deep Fakes: Legal and Ethical Considerations’ (2020) 15(1) Journal of Intellectual Property Law & Practice.
112. Patrick Hayes, ‘The Frame Problem and Related Problems in Artificial Intelligence’ in Nils J Nilsson and Bonnie Lynn Webber (eds), Readings in Artificial Intelligence (Morgan Kaufmann 1981) 223-230.
113. Hanoch Dagan, ‘The Realist Conception of Law’ (2007) 57(3) The University of Toronto Law Journal.
114. Yavar Bathaee, ‘The Artificial Intelligence Black Box and the Failure of Intent and Causation’ (2018) 31(2) Harvard Journal of Law and Technology 898-927.
115. Lee McCauley, ‘AI Armageddon and the Three Laws of Robotics’ (2007) 9(1) Ethics and Information Technology 153-164. See also ‘The Three Laws of Legal Robotics’ (The Lawyer, 29 July 2021) <https://www.thelawyer.com/knowledge-bank/white-paper/the-3-laws-of-legal-robotics/> accessed 24 August 2021.