Digital services have frequently been in collision — if not out-and-out conflict — with the rule of law. But what happens when technologies such as deep learning software and self-executing code are in the “driver’s seat” of legal decisions?
How can we be sure next-gen ‘legal tech’ systems are not unfairly biased against certain groups or individuals? And what skills will lawyers need to develop to properly assess the quality of the justice flowing from data-driven decisions?
While entrepreneurs have been eyeing traditional legal processes for some years now, with a cost-cutting glimmer in their eye and the word ‘streamline’ on their lips, this early phase of legal innovation pales in significance beside the transformative potential of AI technologies that are already pushing their algorithmic thumbs into legal processes — and perhaps shifting the line of the law itself in the process.
But how can legal protections be safeguarded if decisions are automated by algorithmic models developed on discrete data-sets — or flowing from policies administered by being embedded on a blockchain?
These are the sorts of questions that lawyer and philosopher Mireille Hildebrandt, a professor at the research group for Law, Science, Technology and Society at Vrije Universiteit Brussels in Belgium, will be engaging with during a five-year project to investigate the implications of what she terms ‘computational law’.
Last month the European Research Council awarded Hildebrandt a grant of €2.5 million to conduct foundational research with a dual technology focus: artificial legal intelligence and legal applications of blockchain.
Discussing her research plan with TechCrunch, she describes the project as both very abstract and very practical, with a staff that will include both lawyers and computer scientists. She says her intention is to come up with a new legal hermeneutics — so, basically, a framework for lawyers to approach computational law architectures intelligently; to understand limitations and implications, and be able to ask the right questions to assess technologies that are increasingly being put to work assessing us.
“The notion is that the lawyers get together with the computer scientists to understand what they’re up against,” she explains. “I want to have that dialogue… I want lawyers who are preferably analytically very sharp and philosophically very interested to get together with the computer scientists and to truly understand each other’s language.
“We’re not going to develop a common language. That’s not going to work, I’m convinced. But they must be able to understand what the meaning of a word is in the other discipline, and to learn to play around, and to say okay, to preserve the complexity in both fields, to shy away from trying to make it all very simple.
“And then, after seeing the complexity, to be able to explain it in a way that the people that really matter — that is us citizens — can make decisions both at a political level and in everyday life.”
Hildebrandt says she included both AI and blockchain technologies in the project’s remit because the two offer “two very different types of computational law”.
There is also of course the chance that the two will be applied in combination — creating “an entirely new set of dangers and opportunities” in a legal tech setting.
Blockchain “freezes the future”, argues Hildebrandt, acknowledging that of the two it’s the technology she’s more skeptical of in this context. “Once you’ve put it on a blockchain it’s very difficult to change your mind, and if these rules become self-reinforcing it would be a very costly affair, both in terms of money but also in terms of effort, time, embarrassment and uncertainty, if you would like to change that.
“You can do a fork but not, I guess, when governments are involved. They can’t just fork.”
That said, she posits that blockchain could at some point in the future be deemed an attractive mechanism for states and companies to settle on a less complex system to determine obligations under global taxation law, for example. (Presuming any such accord could be reached.)
Given how complex legal compliance can already be for Internet platforms operating across borders and intersecting with different jurisdictions and political expectations, there may come a point when a new system for applying rules is deemed necessary — and putting policies on a blockchain could be one way to respond to all the chaotic overlap.
Though Hildebrandt is cautious about the idea of blockchain-based systems for legal compliance.
It’s the other area of focus for the project — AI legal intelligence — where she clearly sees major potential, though also of course dangers too. “AI legal intelligence means you use machine learning to do argumentation mining — so you do natural language processing on a lot of legal texts and you try to detect lines of argumentation,” she explains, citing the example of needing to judge whether a specific person is a contractor or an employee.
“That has huge consequences in the US and in Canada, both for the employer… and for the employee, and if they get it wrong the tax office may just walk in and give them an enormous fine plus claw back a lot of money which they may not have.”
As a consequence of confusing case law in the field, academics at the University of Toronto developed an AI to try to help — by mining lots of related legal texts to generate a set of features within a specific situation that could be used to check whether a person is an employee or not.
“They’re basically looking for a mathematical function that connects input data — so lots of legal texts — with output data, in this case whether you are either an employee or a contractor. And if that mathematical function gets it right in your data set all the time, or nearly all the time, you call it high accuracy, and then we test on new data, or data that has been kept apart, and you see whether it continues to be very accurate.”
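What Hildebrandt describes is the standard supervised-learning workflow: fit a function on labeled examples, then measure accuracy on data the model has never seen. The sketch below is a deliberately minimal, hypothetical illustration of that train/test idea — the features (`sets_own_hours`, `own_tools`) and labels are invented stand-ins, not real case data and not the Toronto team’s actual method.

```python
from collections import Counter

# Each case: (features extracted from a legal text, ground-truth label).
# Features and labels here are hypothetical illustrations only.
cases = [
    ({"sets_own_hours": True,  "own_tools": True},  "contractor"),
    ({"sets_own_hours": True,  "own_tools": False}, "contractor"),
    ({"sets_own_hours": False, "own_tools": False}, "employee"),
    ({"sets_own_hours": False, "own_tools": True},  "employee"),
    ({"sets_own_hours": True,  "own_tools": True},  "contractor"),
    ({"sets_own_hours": False, "own_tools": False}, "employee"),
]

train, test = cases[:4], cases[4:]  # hold out data the model never saw

def fit(train_cases):
    """'Learn' a mapping from feature combinations to the majority label."""
    votes = {}
    for features, label in train_cases:
        key = tuple(sorted(features.items()))
        votes.setdefault(key, Counter())[label] += 1
    return {k: c.most_common(1)[0][0] for k, c in votes.items()}

def accuracy(model, eval_cases):
    """Fraction of cases where the learned mapping predicts the true label."""
    hits = sum(
        model.get(tuple(sorted(f.items()))) == label
        for f, label in eval_cases
    )
    return hits / len(eval_cases)

model = fit(train)
print(accuracy(model, train))  # accuracy on the data it was fitted on
print(accuracy(model, test))   # accuracy on held-out data
```

On real case law the features would come from NLP over legal texts and the model would be far richer; the point of the toy is only the split itself — the held-out score is the one that deserves scrutiny.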
Given AI’s reliance on data-sets to derive algorithmic models that are used to make automated judgement calls, lawyers are going to need to understand how to approach and interrogate these technology structures to determine whether an AI is legally sound or not.
High accuracy that isn’t the product of a biased data-set can’t just be a ‘nice to have’ if your AI is involved in making legal calls about people.
“The technologies that are going to be used, or the legal tech that is now being invested in, will require lawyers to interpret the end results — so instead of saying ‘oh wow, this has 98% accuracy and it outperforms the best lawyers!’ they should say ‘ah, ok, can you please show me the set of performance metrics that you tested on. Ah, thank you, so why did you put these four in the drawer because they have low accuracy?… Can you show me your data-set? What happened in the hypothesis space? Why did you filter those arguments out?’
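Her point about the “set of performance metrics” can be made concrete: accuracy alone can flatter a useless model. The toy numbers below are invented purely for illustration — a classifier that labels all 100 cases “employee” when only 2 are actually contractors scores 98% accuracy while never finding a single contractor.

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision and recall from confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

# A degenerate 'model' that predicts "employee" for everyone:
# it produces no true or false positives for the contractor class.
acc, prec, rec = metrics(tp=0, fp=0, fn=2, tn=98)
print(acc)  # 0.98 -- looks impressive
print(rec)  # 0.0  -- recall on contractors is zero
```

This is exactly why a vendor quietly putting the low-scoring metrics “in the drawer” is the thing a lawyer should learn to ask about.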
“This is a conversation that really requires lawyers to become interested, and to have a bit of fun. It’s a very serious business because legal decisions have a lot of impact on people’s lives, but the idea is that lawyers should start having fun in interpreting the outcomes of artificial intelligence in law. And they should be able to have a serious conversation about the limitations of self-executing code — so the other part of the project [i.e. legal applications of blockchain tech].
“If someone says ‘immutability’ they should be able to say that means that if, after you have put everything on the blockchain, you suddenly discover a mistake, that mistake is automated and it will cost you an incredible amount of money and effort to get it repaired… Or ‘trustless’ — so you’re saying we should not trust the institutions but we should trust software that we don’t understand, we should trust all sorts of middlemen, i.e. the miners in permissionless blockchains, or the other types of middlemen who are in other types of distributed ledgers…”
“I want lawyers to have ammunition there, to have solid debates… to actually understand what bias entails in machine learning,” she continues, pointing by way of example to research being done by the AI Now Institute in New York to investigate disparate impacts and treatments related to AI systems.
“That’s one specific problem but I think there are many more problems,” she adds of algorithmic discrimination. “So the purpose of this project is to really get together, to get to understand this.
“I think it’s extremely important for lawyers, not to become computer scientists or statisticians, but to genuinely get their finger behind what’s happening and then to be able to share that, to truly contribute to legal method — which is text oriented. I’m all for text, but we have to, kind of, make up our minds about when we can afford to use non-text regulation. I would actually say that that’s not law.
“So what should the balance be between something that we can really understand, that is text, and these other methods that lawyers are not trained to understand… and that citizens do not understand either.”
Hildebrandt does see opportunities for AI argumentation mining to be “used for the good” — saying, for example, that AI could be applied to assess the calibre of the decisions made by a particular court.
Though she also cautions that a lot of thought would need to go into the design of any such systems.
“The stupid thing would be to just give the algorithm a lot of data, train it, and then say ‘hey yes, that’s not fair, wow that’s not allowed’. But you could also truly think deeply about what sort of vectors you have to look at, how you have to label them. And then you may find out that — for instance — a court sentences much more strictly because the police are not bringing the simple cases to court; it’s a very good police force and they talk with people, so if people have not done something really terrible they try to solve that problem in another way, not by using the law. And then this particular court gets only very heavy cases and therefore gives far heavier sentences than other courts whose police or public prosecutor send them all the light cases.
“To see that, you should not only look at legal texts of course. You have to look also at data regarding the police. And if you don’t do that then you can have very high accuracy and a totally nonsensical outcome that doesn’t tell you anything you didn’t already know. And if you do it another way you can sort of confront people with their own biases and make it interesting — challenge certain things. But in a way that doesn’t take too much for granted. And my idea would be that the only way this is going to work is to get a lot of different people together at the design stage of the system — so when you are deciding which data you’re going to train on, when you are developing what machine learners call your ‘hypothesis space’, so the type of modeling you’re going to try to do. And then of course you should test five, six, seven performance metrics.
“And this is also something that people should talk about — not only the data scientists but, for instance, lawyers, and also the citizens who are going to be affected by what we do in law. And I’m absolutely convinced that if you do that in a smart way you get much more robust applications. But then the incentive structure to do it that way is maybe not obvious. Because I believe legal tech is going to be used to reduce costs.”
She says one of the key concepts of the research project is legal protection by design — opening up other interesting (and not a little alarming) topics, such as what happens to the presumption of innocence in a world of AI-fueled ‘pre-crime’ detectors?
“How can you design these systems in such a way that they offer legal protection from the first minute they come to the market — and not as an add-on or a plug-in? And that’s not just about data protection but also about non-discrimination of course, and certain consumer rights,” she says.
“I always think that the presumption of innocence has to be connected with legal protection by design. So this is more on the side of the police and the intelligence services — how are you able to help the intelligence services and the police to buy or develop ICT that has certain constraints built in which make it compliant with the presumption of innocence, which is not easy at all because we likely have to reconfigure what the presumption of innocence is.”
And while the research is part abstract and solidly foundational, Hildebrandt points out that the technologies being examined — AI and blockchain — are already being applied in legal contexts, albeit in “a state of experimentation”.
And, well, this is one tech-fueled future that really must not be unequally distributed. The risks are stark.
“Both the EU and national governments have taken a liking to experimentation… and where experimentation stops and systems are actually already implemented and impacting decisions about your and my life is not always so easy to see,” she adds.
Her other hope is that the interpretation methodology developed through the project will help lawyers and law firms to navigate the legal tech that’s coming at them as a sales pitch.
“There’s going to be, clearly, a lot of crap on the market,” she says. “That’s inevitable; this is going to be a competitive market for legal tech and there’s going to be good stuff, bad stuff, and it will not be easy to decide what’s good stuff and bad stuff — so I do believe that by taking this foundational perspective it will be easier to know where you have to look if you want to make that judgement… It’s about a mindset, and about an informed mindset on how these things matter.
“I’m all in favor of agile and lean computing. Don’t do things that make no sense… So I hope this will contribute to a competitive advantage for those who can skip methodologies that are basically nonsensical.”