Nicolas Economou

Chairman & CEO
H5

Nicolas Economou is the Chairman and CEO of H5, a Silicon Valley-based firm that, as early as the late 1990s, pioneered the application of artificial intelligence to Big Data analytics in litigation, regulatory investigations, and compliance undertakings. In addition to leading H5, Nicolas contributes actively to advancing the dialogue on public policy challenges at the intersection of law, science, and technology. Working with Harvard Law Professor Arthur R. Miller and Georgetown University Law Center, he led the design of a groundbreaking summit on law, technology, and policy, where he spoke alongside Justice Stephen Breyer. Nicolas has been featured in Forbes magazine and is the author of a number of published articles on issues relating to information risk management, technology, and the practice of law. He has spoken before legal audiences at a wide variety of conferences and organizations, including Stanford Law School, the American Bar Association, and the American Intellectual Property Law Association. He directs H5’s participation in The Sedona Conference’s Electronic Document Retention and Production Working Group. He has also served on both corporate and non-profit boards, and was a member of the Law and Judiciary policy committee for Barack Obama’s presidential campaign. Nicolas holds a B.A. in international public law and political science from the Graduate Institute of International Studies of the University of Geneva (Switzerland) and an M.B.A. in finance from the Wharton School. He chose to forgo completion of an M.P.A. at Harvard’s Kennedy School in order to found H5.

The Challenge

The AI Initiative’s mission is to foster and guide a public policy dialogue and a set of frameworks leading to voluntary codes of conduct, national and international regulations, and a societal consensus aimed at helping humanity harness the advances of Artificial Intelligence while protecting the dignity of the person. This mission is particularly germane in the realm of law. The functioning, and even our very conception, of the legal system are likely to be transformed by scientific developments in general and AI in particular. Some of the challenges that will arise may seem distant, but many of them have perceptible early manifestations:

In the longer term

How will the legal system incorporate AI in the provision of legal services? Will there be a time when AI “lawyers” are, for example, a complement to or replacement for overworked public defenders? How will society respond to the possibility of AI “judges” that can demonstrably produce faster, more equitable, and more uniform decisions than human judges can? Is it intrinsically improper to have human disputes adjudicated by AI, even if the system-wide outcomes are more equitable than humans can deliver? Would it be appropriate, even ethically prescribed, to entrust certain legal tasks solely to “AI lawyers” if they are proven to be generally superior to humans?

Early manifestations

AI is already used to replicate and automate the work of lawyers in certain fact-finding tasks, in particular electronic discovery. A groundbreaking study conducted under the aegis of the U.S. National Institute of Standards and Technology (NIST) demonstrated that automated assessments of relevancy and responsiveness by sophisticated AI systems could, in the hands of scientifically trained experts, achieve greater accuracy and speed than human attorneys. This raises today the somewhat futuristic question outlined above: if AI systems are demonstrably superior to human attorneys at certain aspects of legal work (e.g., responsiveness assessments), what are the ethical and professional implications for the practice of law?
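In highly simplified form, the automated responsiveness assessments described above work by learning from a small set of attorney-labeled “seed” documents and then ranking the unreviewed population by likelihood of responsiveness. The toy sketch below uses a naive Bayes text classifier to illustrate the idea; the document texts, labels, and scores are entirely illustrative assumptions, not a description of any actual review system.

```python
# Toy sketch of technology-assisted document review: a naive Bayes
# classifier ranks unreviewed documents by how likely they are to be
# responsive, based on a few attorney-labeled seed documents.
# All texts and labels here are invented for illustration.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(seed_set):
    """seed_set: list of (text, is_responsive) pairs."""
    counts = {True: Counter(), False: Counter()}
    docs = {True: 0, False: 0}
    for text, label in seed_set:
        docs[label] += 1
        counts[label].update(tokenize(text))
    return counts, docs

def responsiveness_score(text, counts, docs):
    """Log-odds that the document is responsive (higher = more likely)."""
    vocab = set(counts[True]) | set(counts[False])
    total = {c: sum(counts[c].values()) for c in (True, False)}
    score = math.log(docs[True] / docs[False])  # prior odds
    for token in tokenize(text):
        # Laplace smoothing so unseen tokens do not zero out the score.
        p_resp = (counts[True][token] + 1) / (total[True] + len(vocab))
        p_nonr = (counts[False][token] + 1) / (total[False] + len(vocab))
        score += math.log(p_resp / p_nonr)
    return score

seed = [
    ("merger pricing discussion with counsel", True),
    ("quarterly pricing strategy for the merger", True),
    ("lunch menu for the office party", False),
    ("office party rsvp list", False),
]
counts, docs = train(seed)
ranked = sorted(
    ["pricing terms of the proposed merger", "party decorations order"],
    key=lambda d: responsiveness_score(d, counts, docs),
    reverse=True,
)
print(ranked[0])  # the merger document ranks first
```

In practice, systems of this kind iterate: attorneys review the highest-ranked documents, their decisions are fed back as new training labels, and the ranking improves, which is why expert oversight of the training process matters so much to the measured accuracy.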

In the longer term

How will the legal system protect privacy and respect human dignity when artificial intelligence and human intelligence are fused; when implanted technologies routinely enhance our ability to perform intellectual, logical, or work tasks, track our biorhythms and emotional state, and, perhaps, record our thoughts and dreams? Where will society draw the line between the need to adjudicate legal disputes based on facts and the need to protect the dignity (and therefore the essence) of being human?

Early manifestations

With the increasing amount of information gathered about us from our cellphones, browsing patterns, and social networking sites, the growing use of wearable technology, and the advent of the Internet of Things (IoT), discovery can already extend into the personal sphere in ways far more intrusive than ever before: an estimated 6.4 billion interconnected “things” make an uncomfortable amount of information potentially available.

In the longer term

How will the legal system incorporate into its assignment of guilt or innocence the question of human-machine interaction, when our mental faculties are supplemented by nanocomputers and our decisions are influenced by algorithms? Who is at fault for an error of judgment that incorporates both human intelligence and artificial intelligence in an interwoven continuum?

Early manifestations

In rudimentary form, the question of delineating responsibility in human-computer interactions already arises in errors related to health care decisions, the use of avionics, data analytics, and other domains.

In the longer term

Will scientific and technological progress, along with “H.I. / A.I.” complementarity and interconnectedness in a vast IoE (“Internet of Everything”), create a data explosion so uncontrollable as to make fact discovery entirely cost-prohibitive? Will the legal system inevitably become the preserve of the very wealthy and the technologically savvy? How can the “just, speedy, and inexpensive determination of every action” be secured in a world drowning in data well beyond anything we can conceive today? Or will AI become a democratizing force, facilitating access to justice by replacing expert human labor? And if that occurs, will litigation itself then proliferate uncontrollably? And if it does, will AI judges become a necessity? Can AI (and science) help solve the fundamental challenges that AI (and science) are creating? And if the legal system relies increasingly on AI to meet legal challenges, at what point does human justice become uncomfortably controlled by machines?

Early manifestations

The advent of electronically stored information (“ESI”) such as e-mail and corporate documents is already a manifestation of science and technology stretching the limits of the legal system: the cost of discovery is wielded as a weapon, either to impose prohibitive costs on poorly funded opponents or to create what some describe as extortionate pressure to settle rather than face unbearable discovery costs. AI is increasingly being applied to meet this challenge, in particular to reduce the cost and duration of discovery. Just fifteen years ago, the use of AI for such tasks was met with incredulity by legal practitioners and courts.

In the longer term

Will the interconnectedness of people, communications, ultra-high-quality video-recording devices in public spaces, face-recognition software, and all objects across a vast “IoE” enable automated and systematic policing and enforcement of laws and regulations? How far would that intrude into our daily lives? Will jaywalking be immediately followed by a “ping” on your mobile device announcing a fine? Will exceeding the speed limit by 5 miles an hour automatically do the same? What about an automated notification from an AI legal authority that, based on all available facts across the IoE, a posting you are about to publish would be deemed legally defamatory and would trigger a fine (which you might appeal)? Where does this vision cross from the fair and egalitarian to the dictatorial and unlivable?

Early manifestations

GPS and in-car systems that record a vehicle’s position, speed, and mechanical data are increasingly available, as are cameras in cities across the world recording the movement of people and goods. These are currently used selectively, typically after the fact, to determine the circumstances of particular actions or incidents.

Beyond these examples, there are areas where science and AI have not yet made their first inroads, but where similar opportunities and challenges arise. Consider the formulation of laws and regulations:
  • How will society respond to the inevitable development of automated means of identifying the need for, and creating, new laws and regulations?
  • Or to the ability to model in advance all the possible consequences of a law or regulation across a wide range of possible scenarios?
  • Or to the ability to use such evolving knowledge to create self‐optimizing laws and regulations?
  • When does the human legislative process become so clearly inferior to an AI process that society might become comfortable with AI‐developed legislation?
These are just a few examples of the sorts of challenges at the intersection of science, the law, and society that legislators, academics, technologists, and civil society at large will have to grapple with.

The Opportunity

An AI Initiative Chapter on AI and the Law

Addressing these challenges requires a societal consensus on the functioning of the legal system in a world we are barely beginning to envision. Such a consensus can only result from an active, deliberate, inclusive, long-term dialogue among stakeholders in government and civil society.

The AI Initiative’s Chapter on AI and the Law will develop an agenda for this public dialogue, including speaker series, thought-leadership summits, policy papers and briefings, op-eds, media interviews and articles, and collaborative research projects.