Dena Anee (Regular Writer)
Our intelligence is what makes us human, and AI is an extension of that quality
For centuries, the advancement of technology has prompted fears of rising unemployment. That in and of itself is hardly surprising. In the 19th century, the first industrial revolution ushered in machines that mechanised the production of goods. The second revolution brought about the introduction of electricity and mass production. Then came the third revolution, which created information technology and automation, and brought the world closer together by shifting from mechanical and analogue electronic technology to digital electronics. We are now living through the early stages of the fourth industrial revolution: one which encompasses new technologies, bringing together digital, physical, and biological systems.
We are already seeing significant progress in the development of Artificial Intelligence (AI). In some areas, AI has already surpassed human intellect. Former world chess champion Garry Kasparov was defeated in 1997 by IBM's 'Deep Blue'; IBM's Watson beat the two best human players on the quiz show 'Jeopardy!'; and in 2016, 'AlphaGo' – a program developed to play the board game Go and trained on over 30 million moves – defeated the reigning European champion and went on to beat one of the world's strongest players, Lee Sedol.
As a result of these technological advancements, it is estimated that by 2030, 800 million jobs worldwide could be displaced by automation. In light of this worrying statistic, this article will consider the more pressing question in the legal sphere: are lawyers safe from, or susceptible to, the risk of being displaced by technological advancements?
What is Artificial Intelligence?
The term “Artificial Intelligence” was coined in 1956 by John McCarthy, an American computer scientist who played a vital role in the development of intelligent machines in the 1950s. McCarthy – widely regarded as the father of AI – believed that:
every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
In essence, AI can mimic human behaviour by carrying out operations and solving problems that typically require human intelligence. AI machines work not only by replicating human behaviour, but also by observing patterns, understanding, and building intelligent entities.
AI encompasses a broad scope of technologies and, much like Russian Matryoshka nesting dolls, consists of various layers and sub-divisions. One of these layers is Natural Language Processing (NLP). NLP establishes programs that understand the nuances of human language by ascertaining which words are being used and precisely what those words refer to; as a result, such programs can process language accurately. Other subdivisions of AI include Deep Learning (DL) and Machine Learning (ML). While DL learns by using deep neural networks modelled loosely on the human brain, ML learns from experience by finding patterns in input data and making predictions accordingly. ML, in particular, underpins several prominent technologies we use today, such as Google Translate and navigation apps.
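The idea of ML as pattern-finding can be sketched in a few lines. The example below is a minimal nearest-neighbour classifier: it "learns" from labelled examples and predicts the label of new input by similarity. The data (word counts of an email mapped to spam labels) are entirely hypothetical and chosen only for illustration.

```python
# A minimal sketch of ML as pattern-finding: a nearest-neighbour
# classifier predicts the label of a new point from its closest
# labelled example. All data here are invented for illustration.

def nearest_neighbour(training, query):
    """Return the label of the training example closest to the query."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(training, key=lambda ex: distance(ex[0], query))
    return label

# Hypothetical training data: (features, label) pairs, e.g. counts of
# two trigger words in an email mapped to "spam" / "not spam".
examples = [((5, 1), "spam"), ((4, 2), "spam"),
            ((1, 6), "not spam"), ((0, 5), "not spam")]

print(nearest_neighbour(examples, (5, 0)))  # → spam
```

Production systems use far richer models and vastly more data, but the principle – generalising from observed patterns rather than following hand-written rules – is the same.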
So, how far has AI developed? Some industries have already begun to trial and implement a form of NLP system known as “chatbots”, which simulate natural human interaction and therefore allow people to communicate with the machine through ordinary speech or instant messaging. In 2018, the National Health Service (NHS) in Britain trialled ‘Babylon’, a chatbot which aims to place healthcare in the hands of patients. Babylon uses algorithms to gauge and assess an individual’s medical condition and provides advice to patients based on its analysis. In the legal industry, ‘Billy Bot’, a junior clerk bot, can carry out the tasks of a junior clerk, even going so far as running conflict checks or connecting clients to barristers’ chambers or solicitors.
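At their simplest, chatbots of this kind map what a user says to an appropriate canned response. The toy sketch below does this with keyword matching; real systems such as Babylon or Billy Bot use far more sophisticated NLP, and the rules and replies here are invented purely for illustration.

```python
# A toy rule-based chatbot: match a keyword in the user's message and
# return a canned reply. Keywords and responses are hypothetical.

RULES = {
    "barrister": "I can connect you with a barristers' chambers.",
    "conflict":  "Let me run a conflict check for you.",
    "symptom":   "Please describe your symptoms in more detail.",
}

def reply(message):
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand. Could you rephrase?"

print(reply("I need a barrister for a contract dispute"))
# → I can connect you with a barristers' chambers.
```

Modern chatbots replace the keyword table with statistical models that infer the user's intent, but the overall loop – interpret the message, select a response – is the same.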
Advancements in legal technology
In recent years, we have seen a greater push for the use of AI in the legal profession, with a variety of Magic Circle, Silver Circle and other large City law firms investing in and adopting AI systems. In 2016, Addleshaw Goddard, Clifford Chance, and DLA Piper all announced deals with a Canadian AI provider, Kira Systems (Kira), for a machine-learning system which can extract data from large volumes of unstructured documents. In the same year, legal information provider Thomson Reuters collaborated with IBM Watson to launch a data privacy advisor. This AI question-answering system was trained to analyse and understand the nuances of legal definitions in order to assist privacy professionals in law firms with data privacy laws, compliance and regulations.
What has become evident, however, is that to date the legal industry has largely used AI systems merely as supplemental tools. At scale, AI can usefully take over the high-volume work which lawyers are often begrudgingly required to carry out. In real estate, for example, AI is used to facilitate title and lease reviews; in mergers and acquisitions, the due diligence process; in litigation, the review of large bundles of documents for disclosure; and in commercial law, the reviewing and drafting of commercial contracts. As a result of this innovative shift, the more routine, low-cost tasks could be handled automatically by AI, transforming legal work and offering lawyers the opportunity to focus on more complex tasks or client engagement.
The Challenges of AI
The increasing role of AI in the legal profession presents some practical and abstract challenges. Some of these concern how AI systems will be controlled and maintained so as to avoid liability. Others include how responsibility and liability will be assigned, and to what extent humans will be held accountable when AI fails to perform for a client. Will this give rise to AI disputes and, if so, how will duty of care, liability and accountability be ascertained and proved? Ultimately, will these concerns give birth to a new, niche area of law?
For present and future lawyers, it is vital to understand some of the risks which could arise from AI and, fundamentally, how these risks could be mitigated.
Regulating AI — Privacy
One of the main concerns regarding the implementation of AI is how personal information and data will be regulated, and the possibility of infringements of the privacy of individuals. This challenge arises due to the way in which AI machines work: their ability to analyse and collect information in order to create new data. These features present some key challenges in the data protection world.
As previously discussed for Keep Calm Talk Law, the General Data Protection Regulation (GDPR), introduced in May 2018 by the European Union (EU), imposed stringent rules on how data should be collected, used and stored. The fundamental principles of the GDPR could pose challenges for those who use AI systems, in particular data controllers, who have a responsibility to ensure that personal data is not misused. To avoid accountability for the misuse of data, companies that use AI will need to conform to the principles underpinning the GDPR and ensure the AI systems they implement are 'GDPR-compliant'.
However, it does not end there: the EU is expected to introduce a new ePrivacy Regulation (ePR), which will repeal the Privacy and Electronic Communications Directive (the ePrivacy Directive). The objective of the ePR is to harmonise the protection of personal data in electronic communications, helping to ensure the regime remains future-proof for emerging technologies such as AI.
Ultimately, these restrictions could create limitations for ML and DL systems, since such systems learn only from vast inputs of data, often personal data. The difficulty will therefore be leveraging the use of AI machines whilst simultaneously ensuring adherence to privacy regulations. Moreover, data subjects (users whose personal data is being collected, held or processed) may be less inclined to share their personal data if there is a potential for it to be mishandled. In light of this, data subjects will need to consent to the use and collection of their data for automated decision-making. To help their clients, lawyers will need to understand the challenges such regulations will bring for companies which adopt AI.
Ethics — “The Algorithmic Bias”
Ethical standards play a crucial part in the legal profession and place lawyers under considerable pressure. With these new technologies, the question of how lawyers can use AI whilst still maintaining ethical standards is pressing, particularly with regard to bias. The reality is that, while AI machines can learn from, process, and make predictions about data, we cannot always predict what biases those machines will absorb. In a similar vein to privacy regulations, AI systems must be used in a way which adheres to the anti-discrimination laws and directives imposed by the EU.
Hypothetically speaking, if a bank's mortgage-application system has been trained on data biased against a specific gender, this could give rise to discrimination claims if the bank refuses applications from that gender as a result of the biased outputs its AI system produces. With this in mind, in a client-facing profession such as the law, algorithmic bias could lead to many complications, particularly if AI systems are used in algorithmic decision-making. While algorithmic bias cannot be completely avoided, law firms using AI systems will need to ensure appropriate measures are taken to identify and correct it; this will at least ensure that algorithmic bias in AI systems is minimised.
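One such "appropriate measure" is a routine audit of the system's decisions for disparate outcomes between groups. The sketch below, using invented decision records, computes approval rates per group; a large, unexplained gap between groups would flag the system for closer review.

```python
# A minimal bias audit: compare a model's approval rates across groups.
# The decision records below are entirely hypothetical.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs, approved being 1 or 0.
    Returns the approval rate for each group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        approved[group] += outcome
    return {g: approved[g] / totals[g] for g in totals}

records = [("A", 1), ("A", 1), ("A", 0),
           ("B", 0), ("B", 0), ("B", 1)]
rates = approval_rates(records)
print(rates)  # a large gap between groups would warrant investigation
```

Disparity in outcomes is only one of several fairness measures, and a gap does not by itself prove unlawful discrimination; it is a trigger for human scrutiny of the model and its training data.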
Conclusion – Will AI replace lawyers?
While some argue that AI machines can only replicate human behaviour and have not yet reached the level of human intelligence, the reality is that AI is advancing at a rapid rate. Indeed, Ray Kurzweil, Director of Engineering at Google, predicts:
By 2029, computers will have human-level intelligence.
What will this mean for the legal industry? BakerHostetler, a large US law firm, believes that ‘emerging technologies like cognitive computing can help enhance the services we deliver to our clients’. In 2016, the firm hired ‘ROSS’ – the ‘world’s first artificially intelligent attorney’ – which is built upon IBM’s Watson and uses ML to act as a legal researcher alongside the lawyers in the firm’s bankruptcy division.
The AI evolution could blur the line between humans and machines: an idea that is compelling for some and frightening for others. However, it is submitted that the increasing use of AI within the legal industry should not be viewed as a threat to lawyers; rather, it is a tool to increase efficiency while creating opportunities for lawyers to do more. Within the next decade, we will probably not be fazed by the idea of sharing our offices with non-human AI systems, working alongside us to make well-informed decisions. Legal regulators will play a big part in driving this change forward, and their challenge is to ensure that law firms use AI systems in a way which does not conflict with the SRA Code of Conduct and regulatory principles. Lawyers and future lawyers must embrace, anticipate and adapt to this change. Only then can we innovate and remain competitive in an increasingly demanding environment.