Law is a tool that intersects with almost, if not, all aspects of life, including education, social interaction, and business. In an era where technological advancement is inevitable, questions arise as to what extent these changes affect the law, and whether the law can catch up to regulate what remains largely unregulated: Artificial Intelligence (AI).
Haenlein and Kaplan define AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.”1 This means the abilities AI is expected to have are similar to human capabilities, such as visual perception, speech recognition, decision-making, and translation between languages. Self-driving cars and chatbots, for instance, are powered by AI.
In the field of law, AI usually takes the form of contract drafting, contract review, contract analytics, litigation prediction, and legal research, offering acceleration and efficiency to the legal industry.
The existence of AI in the field of law raises multiple questions we should all be asking. Can AI one day be expected to supervise and enforce the contracts it has written?
Up to this point, AI systems offer only the limited capabilities mentioned above. Yet this does not rule out the possibility that one day AI will be able to supervise and enforce a written contract. Consider the use of AI in a transaction agreement: with the data it has gathered, AI software could readily detect which party failed to implement the agreement, or even determine whether the contracting parties performed dishonestly. Without error, AI could also identify which clause a party has breached and what penalties or compensation that party should bear for the breach. These are not fanciful musings, given the exponential rate of development in AI.
One of the current leading AI systems is Kira Systems,2 an AI software that claims to identify, extract, and analyse content in legal contracts and documents with unparalleled accuracy and efficacy. The system is currently used by global law firms such as Hogan Lovells, Gorrissen Federspiel, Osler, Chapman and Cutler LLP, Loyens & Loeff, Nagashima Ohno & Tsunematsu, and many more.
Other notable legal AI start-ups include Leverton, ContractPod, and ThoughtRiver, among others.3 With more legal AI start-ups emerging and more law firms using AI in their practice, the next question follows: what are the moral boundaries when it comes to law enforcement by AI?
Unlike judges, who listen to both parties before rendering a decision, AI appears simply to strip away the noise and clutter, rendering a decision based on live data while calculating risks. No ‘good faith’ will be considered, nor the true intentions behind the parties’ deeds. Gustav Radbruch postulated that the idea of law is defined through a triad of justice, utility, and certainty.4 Can it be guaranteed that decisions based on data and AI’s considerations alone fulfil that idea of law?
Some jurisdictions have made efforts to create frameworks governing the unabated growth of AI. Singapore’s PDPC released the first edition of its Model AI Governance Framework in 2019, updated in 2020;5 Brazil and Scotland recently launched their AI strategies; and the European Commission published its draft regulation for AI. Yet none of these initiatives has been promulgated into law.6 In 2019, 42 countries supported the OECD Principles on AI. These principles are not binding on member countries, but they convey a sense of urgency: a shared vision and framework to contain the growth of AI, and reasons why AI should be overseen.
Whilst the debate is ongoing, the technology keeps evolving. More questions will emerge as long as developers keep pushing the limits of AI to satisfy the never-ending curiosity about just how far AI can really go. At the end of the day, when technological advances are unavoidable, AI will either integrate with the law or abolish the law altogether. SPU
Philosophical Review, Vol. 53, No. 1, (January 1944), pp. 23-45.
human-centric. See PDPC, “Singapore’s Approach to AI Governance”, https://www.pdpc.gov.sg/Help-and-Resources/2020/01/Model-AI-
on Advanced Technology and Artificial Intelligence to review and advise the Governor, the Legislature, and other interested parties on the use and development of advanced technology and artificial intelligence. The United Kingdom has a guidance regarding Artificial Intelligence provided by the Department for Digital, Culture, Media and Sport, on data ethics and the Alan Turing Institute, on responsible design and implementation of AI systems. In China, the regulation of AI is governed mainly by the State Council of the PRC’s July 8, 2017 “A Next Generation Artificial Intelligence Development Plan” (State Council Document No. 35).