Automated Legal Advice

Automation and artificial intelligence are making their mark in the legal industry. Lawyer-bots give rudimentary legal advice online while AI tools are performing document review tasks.

Some say this rise of the robot lawyers is having a positive impact: removing much of the document drudge work and freeing up humans to focus on more thoughtful and intricate tasks. Others worry that there will be less work for the next generation of law students, and that automation might push humans out of their jobs and replace the nuanced field of law with cold analytical algorithms that have little sympathy for the human condition.

So where does the future of legal work lie along this spectrum?

In this podcast episode we hear from the Regulating Automated Legal Advice Technologies research team, who have set out to answer this question. The talk focuses on their discussion paper Current State of Automated Legal Advice Tools.

Hit play to hear the episode, or find the podcast on iTunes, Stitcher, Castbox, and other good podcast outlets.

Episode recorded: 7 June 2018

Speakers: A/Prof Tim Miller, Prof Julian Webb, Katie Miller

Recorded by: Adam Lodders

Produced by: Kate Murray

Audio effects and music by: Netpunk, Hisoul, Symphoid from Audio Jungle and Setuniman from FreeSound

  • Full Transcript

    KATE MURRAY: Automation and artificial intelligence are making their mark in the legal industry. Lawyer-bots give rudimentary legal advice online while AI tools are performing document review tasks.

    Some say this rise of the robot lawyers is having a positive impact: removing much of the document drudge work and freeing up humans to focus on more thoughtful and intricate tasks. Others worry that there will be less work for the next generation of law students, and that automation might push humans out of their jobs and replace the nuanced field of law with cold analytical algorithms that have little sympathy for the human condition.

    So where does the future of legal work lie along this spectrum?

    A research team at the University of Melbourne’s Networked Society Institute have set out to answer this question. I’m Kate Murray, welcome to Networked Society Stories.

    Their project is titled Regulating Automated Legal Advice Technologies, and it not only maps the landscape of how these technologies are being used, but examines the technical and regulatory barriers facing automated legal services, in order to develop appropriate policy, regulation and practice settings. Recently the team released a discussion paper, and that's where I'll be taking you today: to the presentation of this paper at the University of Melbourne.

    We will hear from two of the researchers, first Associate Professor Tim Miller of Computing and Information Systems. Tim will talk about artificial intelligence and its relation to the legal industry. Next we’ll hear from Professor Julian Webb of the Melbourne Law School and Director of the Legal Professions Research Network. He will delve deeper into the findings of the paper and questions for further exploration.

    Following these two speakers, I will pop in again and introduce Katie Miller, who was the discussant for the event. This is also a good point to hit pause and grab yourself a cuppa if you need a break. Now, the team use the shorthand term ALAT to talk about these technologies in general. That is an acronym spelt A-L-A-T, and it stands for Automated Legal Advice Tools. That's just an important snippet to know. Without further ado, let's head into the lecture theatre where Tim Miller is starting his presentation.

    TIM MILLER: Thank you very much! Okay, so, I'm nominally the AI person on this project, so I wanted to talk a little bit about what AI actually means, and what is it capable of, and what could it be capable of in the future.

    So we've kind of got this working definition of AI as something that learns things, reasons about things, and acts in a rational manner. And this is kind of a very loose definition of artificial intelligence that I think most people in the field would agree with.

    Okay? But, in the focus of ethics and legal advice here, it's a bit of a moving target, because we're not really sure what things are going to be capable of in five or ten years. So while I could stand here and say, "I don't think we're actually going to be able to do this in 20 years," I'd probably give it a fifty-fifty chance that some lab somewhere would prove me wrong in five. Okay? So, we're not really sure what it is, and a lot of the ethical and legal considerations that people are discussing are around superintelligence: what happens if we build something that's as smart as a human being?

    But I think there are a lot more important things happening here and now that we need to think about in the legal space, with the tools that people are actually providing at this particular moment.

    What's interesting here is that we're talking a lot about what's going to happen in the future of AI. But I want to question whether we're at the start of a revolution in AI or at the end of a revolution in AI. The biggest hype around right now is deep learning. It's a very powerful technique in machine learning that surprised a lot of people by what it can solve just by finding correlations in data. But the breakthrough paper on deep learning, the backpropagation paper co-authored by Geoff Hinton, was published in 1986, and that's still the dominant method.
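    To make that reference concrete for readers, here is a minimal sketch of the backpropagation idea: push inputs forward through a network, then push the error backwards via the chain rule to update the weights. The tiny two-layer network, the XOR data, and the hyperparameters below are illustrative assumptions, not anything from the talk.

```python
# A minimal backpropagation sketch on XOR; the architecture and settings
# are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> output weights
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the error back through the chain rule.
    d_out = (out - y) * out * (1 - out)  # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # gradient at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```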

    Now, what we know in technology is that from the start of a revolution to the end is around about 30 years. And that end is signalled when research comes out of labs and into companies. And we can see a lot of companies here. Noon here is from Silverpond, which is a deep learning company transitioning this into the market.

    So this is the hype. A lot of the things we're seeing now, you might have seen: Elon Musk saying, "In 20 years robots will be able to do everything better than us." But we could see this before, in Herb Simon's famous quote from '65: that within 20 years, robots would be able to do everything that we can do.

    Okay, and similarly John McCarthy, who's the father of AI, said in '69 that they had started the Stanford AI Project in '65 with the plausible goal of building a human-intelligence machine within a decade. But as we see, the hype curve comes up and we very quickly run into limitations we couldn't have seen before.

    And so, where are we on the hype curve? This isn't my hype curve, I've taken it from somewhere else, and what we see with hype is this kind of curve in technology. Early on, people don't think much about it, and then right at the top of the curve, people think this will keep going at an exponential rate. And then very quickly we lose enthusiasm as we see, "Ah, actually, there are limitations that we couldn't have possibly seen before." But in the end, it actually does a lot better than what we thought during the winter.

    Okay? And we see this a lot with AI technology. This is probably six months old, but we can probably see deep learning at the top tip of the curve; in the last few months I've started to see a lot of, sort of, not negative press about deep learning, but people going, "Ah, actually, there are a lot of limitations that we didn't think about five years ago." And now they're turning to "deep reinforcement learning is going to be the next best thing," but we'll probably see a similar fate. So the question is: should we regulate on this hype curve? Or, if we're completely wrong and it turns out AI will be able to do everything we can in 20 years, how do we regulate that?

    Okay, so in the actual report, we kind of went round and thought a bit about trying to classify what ALATs could be, and how we do it, and we kind of came up with this really simple model of different ways of looking at artificial intelligence. Okay, and there's kind of two ways that artificial intelligence works. We either come up with knowledge ourselves and we encode it into a piece of software, or we learn it from data. Okay? The first one's very hard and requires a lot of human expertise. The second one requires a lot of data, but if you have the data, it's relatively straightforward in comparison.

    And then we can divide it on another axis here, where we look at finding correlations between things, which is the dominant method in machine learning right now, versus actually finding the causes of things. And what we find is, from left to right, the right hand side is far more powerful.

    So what we saw as early AI was up in this top quadrant, but right now, where we are is reasoning by association; this is what machine learning is all about. And these kinds of powerful things that can reason about causal information in data are somewhat of a dream right now, but people are pushing, "this is where machine learning needs to go." But we don't see many applications in that space here.
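    As one way of picturing the two axes Tim describes (where the knowledge comes from, and whether the system reasons by correlation or causation), here is a minimal sketch of the resulting four quadrants. The example entries are hypothetical placeholders, not classifications taken from the report.

```python
# A sketch of the two-axis view of AI systems described above. The example
# entry in each quadrant is a hypothetical placeholder, not the report's.
from enum import Enum

class KnowledgeSource(Enum):
    HAND_ENCODED = "knowledge encoded by human experts"
    LEARNED = "knowledge learned from data"

class Reasoning(Enum):
    CORRELATIVE = "finds associations between things"
    CAUSAL = "reasons about the causes of things"

# Hypothetical examples for each quadrant.
QUADRANTS = {
    (KnowledgeSource.HAND_ENCODED, Reasoning.CORRELATIVE): "hand-built scoring checklists",
    (KnowledgeSource.HAND_ENCODED, Reasoning.CAUSAL): "classic expert systems (early AI)",
    (KnowledgeSource.LEARNED, Reasoning.CORRELATIVE): "today's machine learning, incl. deep learning",
    (KnowledgeSource.LEARNED, Reasoning.CAUSAL): "causal ML, still largely aspirational",
}

def classify(source: KnowledgeSource, reasoning: Reasoning) -> str:
    """Look up the illustrative example for a given quadrant."""
    return QUADRANTS[(source, reasoning)]

print(classify(KnowledgeSource.LEARNED, Reasoning.CORRELATIVE))
```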

    In the report, we've classified a lot of the automated legal advice technologies we could find using this framework; you can read about those in the report. And now I'm going to hand over to Julian.

    JULIAN WEBB: Oh, thank you Tim. So, let's focus on the bit that we said we were going to focus on, which is the regulatory side: the regulation, particularly, of legal advice. There are lots of other issues, obviously, around the regulation of legal technologies, lots of conversation around cyber security, data protection, particularly the GDPR at the moment, and various things like that. But that's not our concern. What we were focusing on was much more the professional regulatory conversation around legal advice, and the developments in this space in terms of the kinds of technologies that we're starting to see emerging, which are possibly taking us a little bit closer to the robo-lawyer type of issues.

    We took a deliberately broad definition of legal advice. And I think this is in itself quite an important basic point to make: that legal advice is not just about the technique, it's not just about knowing the law. In fact, what's going on when we talk about legal advice is a much broader conceptualisation, one that involves a mix of not just legal information and commercial information, but also soft skills and awareness of the client's situation. And I think, again, one way in which this is obviously very important is in terms of raising the question about just what is going to be technologically replicable.

    So if we're talking about the automation of legal advice, what is it we're actually talking about? Which of these elements? And that, I think, is possibly one of the most fundamental questions for practice going forward.

    The benefits of legal automation in this space are of course quite well known. And of course, if they're right, then there could be some quite significant benefits here in terms of dealing with what is seen internationally as a growing access-to-justice gap, and the way in which, for firms as well, it may open up the missing middle of the market. We're seeing some of the tech developers, to use an example that is, I suppose, in a sense non-controversially outside the jurisdiction, a firm like Riverview Law, using technology on a whole range of process techniques to open up access for medium-sized and smaller enterprises to work that has not historically been viable. So there are advantages on both sides there, and also in terms of the pure output process and the opportunities for cheaper and quicker legal advice as well.

    When we start to focus in on AI specifically, there has been some recognition, and the Law Society of New South Wales's recent report is a good example of this, that AI in and of itself raises issues over and above other forms of automation; that there are regulatory and ethical issues that, as the Law Society of New South Wales said, require investigation and guidance. They didn't tell us much more than that. And that again, I think, is great news for us, but it does give us an indication that this is an area where there has not really been a lot of deep thinking, or at least not publicly a great deal of deep thinking. Yet.

    In terms of thinking about what the regulation of legal advice involves, we came up with a number of ways of thinking about this in the paper. One example is that we actually gave it a tripartite typology: the regulation of the intrinsic quality of advice, the processes of advice-giving, and the capacity to deliver legal advice. I'm not going to talk about all of those in the few minutes we've got here; I'm just going to focus on a few, what I would suggest are particularly key points. And one of those is around the issue of capacity to give legal advice. And this is, in essence, one of the meta-regulatory questions for us here, because legal profession regulation does currently control who is in a position to give legal advice. There is a fundamental but quite complex distinction between legal information and advice, which is not directly based in legislation but comes through case law on the scope of legal practice.

    What it does do, to use what is I think in some ways a bit of an American term for this, is set out a protection against the unauthorised practice of law, UPL. Now that's actually, in some ways, very important. If you think about this in consumer protection terms, then UPL regulation is a broad way of potentially protecting consumers from dodgy practices. But it's potentially quite a blunt instrument. It has, on the other side of the scale, potential monopoly effects and consequences. And it raises some real issues, which I'm going to conclude with in this part of the presentation tonight, in terms of its impact on access and its impact on disruption of the market for legal services. So, in a way, regulation in this regard can be seen as something of a barrier to innovation and a barrier to disruption. The line between information and advice is not a bright one, so that in itself is part of the problem.

    So we have questions here at a technical level about where we draw the line, but even larger than that, we need perhaps to be asking whether that line is actually justifiable. And I think that is particularly interesting because one of the questions that the FLIP Inquiry raised was whether in fact legal information ought to be brought within the regulatory domain. And that's something that we highlight in the report as potentially problematic because of the public-interest, public-good qualities that certainly primary legal information has, in the sense that there is a kind of public right to know. And so there are some really quite big issues potentially around the extension of regulatory restrictions to legal information. We concluded on this slide here that the information-advice distinction, therefore, not surprisingly, is problematic. Ultimately, we suggest there is at the very least a risk of failure, if not already a failure, to engage with the question of what regulation in the public interest requires in a climate of declining access to law.

    And I think, again, that is ultimately the big regulatory question that we have to address in this area. One of the other things we picked up was this theme of quality and, quite implicitly, quality control. What challenges does the automation of legal advice create regarding the quality of legal advice? There are lots of debates as to the extent to which there is, in a way, a subtle handover of control, and even a handover of the interpretation and construction of law, from lawyers to programmers; the question of the interfacing between law skills and programming skills; and the role of some of the larger agendas of people working in this area. I mean someone who'll be known to a number of you in the room, people like Stephen Wolfram, who actually has a very strong agenda for creating a real lawyer-esque code, converting law into this binary right/wrong model. That's not as unproblematic, at all sorts of levels, as Stephen Wolfram tends to think it is. So there are real issues here in terms of what it means to convert law into code.

    Who's doing it, and for what purpose? And at the same time, we do know, and many of the studies are showing this, numerous studies have been done on the ability of technology to manage quite basic agreements, such as the studies on non-disclosure agreements and the higher level of quality that the technology seems to be producing compared with human agents in those areas. Who determines and who assures accuracy? There are liability issues here that are also quite complex. And in terms of another dimension of the access debate here, when we're talking about the quality of work these tools can do, what is good enough quality? Do we need to show that the tech does better on average than human agents? How much better, and does it matter? When is good enough actually good enough in terms of allowing these systems into the wild? There is another question which has been raised in the US as well, which is the question of competence, and the way in which technological innovation raises some quite interesting challenges for what constitutes lawyer competence.

    The American Bar Association has now adapted its definition of lawyer competence to include a requirement to be, in effect, up to date and aware of what technology can do. It has not gone to the extent of saying, "You must use the latest technology." But there is now a requirement of technological awareness within the professional definition of what constitutes competent service. Should we be having that same conversation here? When might a failure to use technology, or even more crucially, given the kinds of technologies that we're talking about with AI, a failure to understand what the technology does and what it means when you get a result out of a system, become incompetent? That last question, I think, is really quite a fundamental one in the context of something that Tim knows a lot more about than I do, which is the explainability or black-box problem that we have with AI: the fact that with deep learning systems, where you have, in effect, self-learning and in a way self-regulating algorithms, we don't actually have a deep understanding, ultimately, of how a particular result is arising.

    And if we look at some of the things that legal AI is being used for, things like analysis of bail applications and so on, which is the kind of work that Northpointe is doing, then some really important and interesting questions are starting to arise about the way in which AI is potentially replicating things that we might not actually want replicated: patterns of discriminatory decision-making, for example. If we look at some of the legal analytics software that's coming, which is saying, "We can help you rate your lawyer; we can identify, for a given range of matters, how good a lawyer is and which quadrant that lawyer is in," then that sounds really great, but it could also have some really interesting unintended consequences. So if you're in the bottom 25% by success rate as counsel, how might you decide to change your case selection practices?

    What if you have a reputation for taking the unwinnable cases because you think there is an ethical principle, that's what your job is fundamentally about. What price a cab rank rule in those sorts of circumstances?

    Underneath all of this, there is, I think, a really fundamental point, which is the extent to which AI, in these sharper-end technologies, is actually changing the risk environment; it may be changing it in some good ways, but it may also be changing it in some negative ways. How ALATs themselves might transform regulatory risk is, again, another substantial question for us in the research, and also, I think, for us as a professional community. In terms of assessing the risk of ALATs, we need to be thinking about the accuracy of system coding and learning, and levels of transparency and explainability; and, as we raised in the paper, should we be talking about explainability standards for legal AI?

    We need to think about the risk of negative outcomes from its use. We need to think about the consequences of negative outcomes from its use, as well as balancing those, in a proper cost-benefit sense, against the positives. And if the positives outweigh the risks, then that raises what is perhaps the most interesting regulatory question of all, and that's at the bottom of the slide there: why does an ALAT need to be managed or utilised within regulated legal practice? If the technology is appropriately designed and is representing legal outcomes appropriately, do we need a lawyer? In other words, should we be saying, not just "let's regulate legal information", but that there may be circumstances where it's no longer appropriate to regulate legal advice in the way that we have done? It may actually be that, so long as we have a set of technology standards, who delivers the service becomes rather more extraneous. And that really is, I think, a very fundamental question in terms of the scope of legal services regulation. To what extent does technology, does AI, have the potential to disaggregate access to law from access to lawyers?

    KATE MURRAY: That was Professor Julian Webb and Associate Professor Tim Miller presenting their latest discussion paper titled Current State of Automated Legal Advice Tools which can be found on the Networked Society Institute website.

    Next we will hear from the Executive Director, Legal Practice, with Victoria Legal Aid, Katie Miller. Miller is not involved in the research project, but she was invited to give a response to the research paper and presentation. Here is Katie Miller.

    KATIE MILLER: So my purpose here is just to offer some opinions and comments. I'm not involved in the research project, but I am delighted that the research project is happening. So in 2015, when I published my very basic paper around disruption, innovation and change in the legal profession, my objective was really to try and get a handle on what was actually happening. And in Australia there was not a lot happening. There was some stuff happening, and I saw my task as really just trying to bring that to the surface. But to really find out, I had to go to America to see what was happening there.

    Fast forward to today, three years later, and I think that just speaks to the pace of change. Our problem now is that there is so much happening that it's actually really difficult to get a grasp on it. So I think the first reason that this paper is really valuable is that it does try to corral and contain and classify what is actually happening. So I think the paper, if nothing else, is really useful to people who have been hearing that there's something going on around robot lawyers and stuff, but they're not really sure what. I think it's a useful starting point for people to see just what is available at the moment, and also some indications, I guess, of how far this could be taken. So I think it's useful for that reason, number one.

    The second is that it obviously does raise a lot of regulatory issues. Again, going back to 2015 when I wrote my report, you know, you write these things, you always want a nice neat number. I decided that my nice neat number would be 10. I didn't actually have 10 points to make, so point number 10 just became: look, there are so many different questions around here for regulation, for education, and I do not have time to look at these things, so I'm just gonna blurt them all down on a page and leave them to someone else to deal with.

    I found that people didn't exactly rush in to that invitation, but I am really delighted that the Networked Society Institute, I do love that name, has so taken up that challenge, because I think it's something people have really been crying out for.

    And I think that until these questions start to be explored, that is in itself a bit of a brake on innovation and disruption, because I think that the people with the legal knowledge go, "Well, I'm just not sure how it works in the regulatory space, so I'm just not gonna go there." But equally, I think it means that for people who don't come from that legal paradigm, who might be more on the technology side, we've got sort of the opposite risk: they just go, "Well, you know, there's no regulatory issues here, so off I go." And what that starts to mean is that rather than having an integration of law and technology, which is really where the best potential lies, what we will end up having is a divergence, where technology goes off on its merry frolic and lawyers sort of sit there frozen, too scared to do anything, and we lose the benefits of both.

    And this was illustrated to me when I was at a conference where somebody was showing off one of their wonderful automated legal advice tools. A question was asked, "What has been the reception from regulators, what are their views on it?" And the response was, "Well, I haven't engaged, and I don't need to, because I've read all the advices on what's legal advice and what's legal information. And I'm fine. I know this is not providing legal advice." And that to me rings some warning bells, because as a lawyer I know that we don't actually have a really clear definition of what legal advice is.

    I think that for many decades we've been trying to avoid it because it's just too hard. And I guess this is another area where the paper is really useful: it actually starts to confront some of those issues.

    And this is not just a technology issue. I think this is something that's been challenging the legal profession and the sector since really about the '80s, when we started having specialised areas of law where people sort of went, "You know what? I don't need a lawyer, I want the specialist business knowledge of how law applies to this part of society." And so you had conveyancers developing, you had your industrial relations experts, people who weren't lawyers but were using their specialised knowledge of, I guess, legal information to essentially do things that otherwise lawyers could do.

    I think that's where we first started seeing the blurring. And we didn't really confront it at the time. I think technology means we can no longer hide from it. So I think another reason that the paper's really useful is that you know we actually do sort of say, "Well what is legal advice? What is legal information?"

    I note the comment in the FLIP report, where the New South Wales Law Society was saying, "Well, maybe we should just regulate legal information." This kind of brings me to, I don't know which point I'm up to, probably three: why we need academics to be involved in this conversation. Because if left to our own devices, lawyers will sort of fall back to the paradigm that we know: if there is a problem, regulate it. And that's not necessarily going to help us here. I think that if we try to regulate legal information, quite frankly that would just be too difficult a task. I don't think that's going to help us. That's just going to dig the hole even deeper. But look, you know, this project will tell me if I'm right or wrong on that point.

    The fourth reason I think the paper is really useful is this sort of classification. So as I said before, there's so much out there at the moment. I think that we've had real trouble distinguishing the hype from the reality. I think we've had real trouble in sort of grabbing hold of "what is this?", and it's all just become this amorphous "it's just technology". And I think that the classification helps to draw some parameters, give us some criteria, so that we can actually start discussing where lines should be drawn, if at all. I mean, the other thing I would say is that lawyers love a good classification. So I think that it also speaks another language that will resonate with lawyers, certainly this administrative lawyer. So I think that is great.

    I've been talking on Twitter about this paper, and there is something about the classification process that's been called out by Kate Fazio of Justice Connect. She made the point that the classification framework is great, but it does raise the question of, "How do we engage with that, though?" Like, what information do we rely on? Do we look for something objective, where we say, this is when we will say you are in one of the four quadrants? Or do we rely on sort of self-assessment, which is really relying on the technology companies to tell us whether they are correlative or causative, whether they are hand-coding or machine learning or whatever else? So I think that's a really good insight, and I'm sort of assuming that somewhere through the project, that question of "How do we actually know what these things are doing, who do we rely on, and how much do we just take technologists at their word?" will come up. I think it is a valid issue.

    What else have I got? Now I'm just gonna run through a couple of thoughts I had in listening to the presentation tonight.

    Julian was asking the question of quality control and when is good enough good enough? I think again that is an old problem in the legal profession, one that we never quite grappled with, and technology just forces us to grapple with it because it's becoming more complicated. So, that's always been a problem, because we have an information asymmetry between clients and lawyers, and what lawyers consider quality is not necessarily the same thing as what clients do. But how do you decide, you know? Who gets to decide that sort of issue? Technology just means that we have a third component, so we have information asymmetry between client, lawyer and technologist. So it's just become more complicated, and again, it's why the paper's needed, because we need to grapple with this.

    In the discussion about professional competence, we're talking a lot about what lawyers need to know about tech, and I think they are worthy, valid questions. I think we also need to start talking about what technologists who wish to engage in the legal sector need to know about law. I think that there is something more than just the words in the statute book. We know that law comes from multiple sources. How much should technologists be required to engage with that to be able to have a quality product?

    And probably the last thing I will say is around the ability to give reasons, and look, this is something that I am absolutely keen on. I'm an administrative lawyer, so of course I love the ability to have reasons and to pick them apart. But, you know, I also need to be, I guess, honest here. The fact is that we've never had 100% transparency in law. We've never been able to break open the judge's head and find out what they're really thinking. The way we have increased our transparency and understanding of how law operates and how decisions are made has actually taken decades of legal academic thinking, and I would also say advocacy from lawyers, particularly in the community legal sector and the legal assistance sector more broadly. It's taken that sort of real challenging thinking to realise that what we had thought was very impartial and very fair and very equal was actually anything but, and had been shaped by societal prejudices and discrimination.

    I think that some of the discussion around the black-box problem is actually tapping in to an anxiety that we've gotten to this point where finally we understand that things aren't as simple and as black and white as law can sometimes suggest. And I think there's this fear that we are going to lose the momentum that we have gained, and we'll go right back to the beginning, where we have something that ostensibly is impartial and fair and equal, but it then takes academics all this work to find out that actually, no, it's not.

    So, they're really just some observations. Look, I think this is a fantastic project. I'm really pleased that it's happening. I think it's something that's being presented in a very accessible way. I mean, I literally read the paper on the Saturday night while I was watching the footy and tweeting about it. So I encourage you to get beyond the two-page summary and actually read the paper. It's a good paper. And I also really like that it's been deliberately designed to invite in the thoughts and considerations of people who are interested in this area, because I do think these are difficult questions. I think we are going to need to go back to first principles in a way, and I think that the more minds on it, the better and stronger our solutions will be. So, thank you.

    KATE MURRAY: That was Katie Miller from Victoria Legal Aid finishing up that event. You have been listening to Networked Society Stories on Automated Legal Advice. You can find out more about this project and other research on our website, networkedsociety.unimelb.edu.au, and you can find us on Twitter @MelbNSI.

    I’m Kate Murray, thanks for listening!

More Information

Kate Murray

83445335