Stanford CodeX FutureLaw Conference -- Summary, Part One

I stumbled across the Stanford CodeX #FutureLaw conference last year… mere days AFTER it occurred.  I made sure to attend this year.  CodeX strives to “promote progress in legal technology,” and its annual FutureLaw conference provides an opportunity for legal tech innovators from across the globe to meet and learn ways to “advance legal tech to solve fundamental problems in the field.”

LawGeex quickly summarized several of the key tweets and trends; my top take-aways are below.

The agenda addressed the general future and “re-invention” of law as we know it, covering:

  • Predictive analytics in law,
  • Mechanized legal analysis and the systems that create it,
  • Pros and cons of chatbots,
  • How law schools and clients are approaching the changing legal landscape.

Keynote – Reinventing Law
Professor Gillian Hadfield of USC’s Gould School of Law delivered the keynote in which she identified five rules:

  1. Change the conversation
  2. Don’t leave it to the lawyers
  3. Change the rules
  4. Catalyze and fund research
  5. Invest in legal innovation

The list echoes conversations I heard at the January Marketing Partner Forum produced by Thomson Reuters’ Legal Executive Institute.  Their CodeX FutureLaw summary is here, where they noted the quick popularity of point number two, “Don’t leave it to the lawyers.”  This interests me as someone who straddles the linear world of law and the creative world of marketing.  In the future, lawyers will be less the sole lead or driver of an institution (as in the traditional, historic firm, where non-lawyer actors and activity sit at the outermost periphery) and instead will share in a collaborative system, where each player’s role may ebb and flow between lead and follower.  Thus, the future (which is already here, by the way) cannot be left only to the lawyers; it demands interconnected minds and ideas for innovation and survival.

Lawyer Extinction?
The extinction of a particular practice would be a rare event, although an attorney’s inability to adapt may create that ominous result.  So, where does adaptability intertwine with the Future of Law discussion?  

Panel:  The Perils & Promise of Predictive Analytics in Law -- Prof. Daniel Martin Katz, Chicago Kent College of Law; John Nay, CEO, Skopos Labs; Gipsy Escobar, Measures for Justice; Josh Becker, CEO, Lex Machina; Dera Nevin, eDiscovery Counsel, Proskauer Rose.


A major advantage of recent technological innovation has been the increased speed and accuracy of processing data – the result of Artificial Intelligence (AI).  With AI, we can feed a machine a set of data, teach it a set of rules, and ultimately produce predictive analytics for a variety of areas of law.  The downside, or at least the cautionary point, arises when human biases are factored into the data fed to a machine.  Panelist Gipsy Escobar of Measures for Justice reminded the crowd that data can “reveal latent biases or information we don’t have…” and thus uncover issues to anticipate or even new causes for concern.  Add to this the problem of predictive variables, where a model can change what actually happens… subsequently making the original model wrong.
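The feed-it-data, learn-the-patterns workflow the panel described can be sketched in a few lines.  This is a minimal illustration with a deliberately skewed, entirely hypothetical dataset (the group labels and outcomes are invented), showing Escobar’s point: a model trained on biased history simply reproduces the bias.

```python
# Minimal sketch: a predictor trained on historical outcomes reproduces
# whatever patterns -- including biases -- the training data contains.
from collections import defaultdict

def train(records):
    """Learn outcome counts per feature from (feature, outcome) pairs."""
    counts = defaultdict(lambda: [0, 0])  # feature -> [negatives, positives]
    for feature, outcome in records:
        counts[feature][outcome] += 1
    return counts

def predict(counts, feature):
    """Predict the majority outcome historically seen for this feature."""
    neg, pos = counts[feature]
    return 1 if pos > neg else 0

# Hypothetical, deliberately skewed history: group "A" got a favorable
# outcome 3 times out of 4, group "B" only 1 time out of 4.
history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 0), ("B", 0), ("B", 0), ("B", 1)]
model = train(history)
print(predict(model, "A"), predict(model, "B"))  # prints: 1 0 -- the skew survives
```

Nothing in the model is “wrong” in a technical sense; it faithfully mirrors its inputs, which is exactly why latent bias in the inputs matters.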

Panelist Dera Nevin, e-Discovery Counsel with Proskauer Rose, noted, “Data-driven law can make things transparent.  This is a fantastic opportunity.”  She continued, saying we should always examine the ends (of a given objective) and ask, “Are the ends based on logic or emotion?”  Escobar said, “Probability will spit out false positives, and as a society, we must determine the consequences when that occurs.”  Co-panelist, Josh Becker of Lex Machina added, “In the medical world, machines may have an error rate of 1- to 3-percent, but that is less than with humans where there is a 20- to 25-percent error rate reading mammograms, so the machines are helping.”
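Becker’s error-rate comparison is easy to sanity-check with back-of-the-envelope arithmetic, taking the upper end of each quoted range over a hypothetical 1,000 readings:

```python
# Per 1,000 mammogram readings, at the upper end of the rates Becker quoted:
readings = 1000
machine_errors = readings * 3 // 100   # 3% machine error rate -> 30 errors
human_errors = readings * 25 // 100    # 25% human error rate -> 250 errors
print(machine_errors, human_errors)    # prints: 30 250
```

Roughly an order of magnitude fewer errors, which is the substance of his “the machines are helping” point.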

Data-driven and rule-based approaches; the Stanford/Ford autonomous car.


Follow the Rules…?
The Rule Systems panel carried the pro/con debate further as they discussed computational law and compared rule-based and data-driven approaches.  For example, during a demonstration of a study related to Stanford’s efforts with autonomous cars, the audience saw the car come to a complete stop for an indefinite amount of time – because it had been programmed to follow essential driving rules.  Facing the dilemma of a double-yellow-striped line to its left and an obstruction in its lane ahead, the car simply stopped safely shy of the obstruction, unwilling to break the rule of Not Crossing the Double Yellow Line.  This aided the car’s programmers in mapping philosophical and mathematical frameworks to help the car identify instances where a standard rule (like the double yellow line) might be violated.
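The stalled car reads like a textbook rule-conflict deadlock.  A minimal sketch, with hypothetical rules standing in for the car’s actual programming, shows why a purely rule-based controller with no rule ranking has stopping as its only legal move:

```python
# Hypothetical rule set: every rule must hold; there is no ranking
# that would let the controller break one rule to satisfy another.
RULES = [
    ("never cross a double yellow line",
     lambda s: s["action"] != "cross_left"),
    ("never hit an obstruction",
     lambda s: not (s["action"] == "forward" and s["obstruction_ahead"])),
]

def choose_action(state):
    """Return the first candidate action that violates no rule."""
    for action in ("forward", "cross_left", "stop"):
        candidate = dict(state, action=action)
        if all(check(candidate) for _, check in RULES):
            return action
    return "stop"  # stopping is always the fallback

# Clear road: proceed.  Blocked lane: stop indefinitely, because both
# alternatives to stopping break a rule.
print(choose_action({"obstruction_ahead": False}))  # prints: forward
print(choose_action({"obstruction_ahead": True}))   # prints: stop
```

Escaping the deadlock requires exactly what the panel described: a framework that identifies when a standard rule may be violated, i.e., ranking or relaxing rules rather than treating them all as absolute.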

Michael Mills, Co-Founder Neota Logic, explained, “We who write at the code level don’t care about the tradeoff in formulating a law as a rule or a standard because we are writing for implementation – the audience wants an operational, useful answer.”  He added that successful systems generally require a hybrid of “rules-based” approaches and those using tailored algorithms.

The early afternoon introduced three innovators and the chatbots they created.  First was Joshua Browder, @jbrowder1, a young Brit who developed the DoNotPay bot after receiving parking tickets.  To date, according to Browder, his bot has handled 257,000 parking tickets and has saved drivers $7 million.  He has expanded his bot to other areas of law and most recently is assisting refugees seeking asylum in the US.

Browder believes government will become more efficient with the rise of technology.  “Tech and chat bots will have a big impact on the law.  I’m only 20, and there are thousands of other programmers working on similar issues,” he said.

Kevin Xu, @KevinSXu, a Stanford 3L, discussed his healthcare chatbot, Hilbert.  His example showed how users can input natural-language questions to obtain information to triage an issue and to explore their healthcare insurance status – such as total spend to date for the year – all in responsive real time.

Two young Russian developers built Visabot after their own experience with US Immigration.  Because a large portion of immigration lawyers’ work is form-driven, Visabot helps process initial communications with a client and can schedule appointments for consultations and even handle invoicing.

These three fresh chatbot examples were next challenged with the proposal from Joshua Lenon, @JoshuaLenon – that we are about to enter (doom & gloom music here) the "Reign of Tech Terror."  Lenon is the attorney in residence with Clio, and was also part of the team that launched Airport Lawyer earlier this year, a web app that connected travelers affected by the initial Trump US travel ban with attorneys who were available to assist them.

Lenon opened with an example of the dilution of legal offerings today.  He said, “People were asked if they needed help, and they said no because they knew what they were doing – because they had Googled.  So now, more money is spent on the top line (advertisement) of Google.  The public can’t tell good legal advice from bad legal advice…thus, we will see a proliferation of bad legal advice.”  

People worry that computers will get too smart and take over the world, but the real problem is that they are too stupid and they have already taken over the world.
— Pedro Domingos, author The Master Algorithm

Lenon said chatbots fail to address three key areas:
  1. Geography,
  2. Choices and consequences, and
  3. Evidence.

He explained, saying that most people search for local information, but chatbots are global.  So, interactions with bots may not produce ideal help.  Plus, it is difficult to change your answer with current bot systems.  This issue is easily remedied, as Lenon demonstrated.  (It just requires designers to consider the option for changing responses and allowing users to navigate backwards as necessary.)
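The back-navigation remedy can be sketched as a simple answer stack.  The question names and the "back" keyword below are my own illustrative assumptions, not Lenon’s implementation:

```python
# Sketch of a bot intake flow where the user can revise an earlier answer:
# answers live on a stack, and typing "back" pops the most recent one.
def run_intake(questions, inputs):
    """Walk through questions in order; 'back' undoes the last answer."""
    answers = []
    for reply in inputs:
        if reply == "back":
            if answers:
                answers.pop()          # step backwards and re-ask
        elif len(answers) < len(questions):
            answers.append(reply)
    return dict(zip(questions, answers))

# The user mistypes a city, backs up, and corrects it.
result = run_intake(["name", "city"], ["Ada", "Pari", "back", "Paris"])
print(result)  # prints: {'name': 'Ada', 'city': 'Paris'}
```

As Lenon suggested, the fix costs the designer little more than deciding up front that responses are revisable rather than final.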

That leaves his third concern – evidence.  Lenon remarked, “Will we ask people to save every bit of text and information to prove their decisions?”  He concluded, “Chatbots may do triage, but may not be the panacea.  They may lay groundwork for legal follow-up.”  

Lenon’s summary bolsters my point of view:  the breadth of an attorney’s legal-related work may shift when automation steps in to handle routine and repeatable tasks – and the results of automation will better and more efficiently inform an attorney for the legal work to be done.  It will be more a blend of artificial intelligence and human intelligence – where the humans design and direct machines.  ...If the human adapts to this new setting.

More in Part Two:  Future Law from law schools, legal services, and in-house counsel.  

19 April 2017 - updated to correct the spelling of Joshua Lenon's last name, and to correctly identify Joshua Browder as creator of DoNotPay.

 APRIL 26:  Spring Legal Mindstorm -- Remarkable changes are coming to the field of law, and with greater speed than any changes in the past five years.  

This afternoon seminar will provide an ethics CLE session (0.50 pending) followed by interactive discussion regarding the new technologies and changes facing legal practice today and the future of law, including:  Artificial Intelligence, Blockchain, Chatbots, Smart Collaboration, the Death of Expertise, Non-Lawyers, and more.

Register now -- price goes up on April 25th.

Flip Cat Consulting works with law firms, practice groups, individual attorneys, and other professionals to design marketing and business development strategy.  We work onsite or remotely, from specific projects to global change management.  Contact us to arrange a free consultation.