With the incredible success and global attention computational antitrust has received, CodeX Affiliated Faculty Professor Thibault Schrepel and CodeX Fellow Dr. Lance Eliot share their views on computational antitrust and its implications for the future of AI and law.
Three Implications of Computational Antitrust by Prof. Thibault Schrepel
Our seminal article defines computational antitrust as “a new domain of legal informatics which seeks to develop computational methods for the automation of antitrust procedures and the improvement of antitrust analysis.” I see three implications: one short-term, one medium-term, and one long-term.
- Short term: Computational antitrust will push antitrust law to become fairer and more effective. The project documents current possible uses of computational tools (e.g., performing network analysis and/or training machine learning systems on existing case law to understand it better). It also imagines tomorrow’s solutions (e.g., simulating merger operations with agent-based modeling, operationalizing commitments). In turn, agencies will detect anti-competitive practices in greater numbers and analyze mergers better. Meanwhile, computational antitrust will provide companies with new tools to ensure compliance and play by the rules.
- Medium term: New procedural rules will emerge to oversee the use of computational tools and ensure they do not distort fundamental principles. The project sets out to create them. The field requires researching how to protect the right of defense of small companies that lack the resources to mount a competing analysis. It also entails compelling agencies to explain computational outputs and constraining the monitoring of companies, the transmission of data, and new investigative powers.
- Long term: The use of computational tools will lead to a substantive modification of antitrust law. Today’s antitrust law is static. It relies on an algebraic method dating from the 17th century, mainly because research has not given agencies the theoretical and technical tools to operationalize a more dynamic antitrust law. Computational antitrust will remedy this. Modern computational capacities will make it possible to take the market context (economic, legal, social, architectural) into account by scrutinizing the impact of a change in the ecosystem. Computational antitrust will lead to a rethinking of the competitive approach.
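To make the short-term idea of “simulating merger operations with agent-based modeling” concrete, here is a minimal sketch in which firm agents repeatedly best-respond to one another in a textbook Cournot market, before and after a merger. The linear demand curve, symmetric costs, and best-response rule are illustrative assumptions of mine, not the project’s actual model or tooling.

```python
# Inverse demand p = A - Q, constant marginal cost COST (hypothetical values).
A, COST = 100.0, 10.0

def equilibrium_price(n_firms, rounds=200):
    """Let n firm agents repeatedly best-respond (Cournot quantities)
    and return the resulting market price."""
    q = [1.0] * n_firms  # each agent's starting output guess
    for _ in range(rounds):
        for i in range(n_firms):
            others = sum(q) - q[i]
            # Best response to rivals' combined output under linear demand.
            q[i] = max(0.0, (A - COST - others) / 2.0)
    return A - sum(q)

pre = equilibrium_price(3)   # three independent firms
post = equilibrium_price(2)  # two of them merge into one price-setter

print(f"pre-merger price:  {pre:.2f}")   # ~32.50
print(f"post-merger price: {post:.2f}")  # ~40.00
```

Even this toy model reproduces the qualitative effect an agency would scrutinize: removing one independent decision-maker raises the equilibrium price. Richer agent-based models would add heterogeneous costs, entry, and behavioral pricing rules.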
These different implications require serious academic research. The computational antitrust project aims to initiate, consolidate, and distribute this research in open access. Over 60 antitrust agencies have joined our researchers, and several of them have implemented their first solutions. The future of antitrust 3.0 is bright.
Response: Artificial Intelligence Dovetails Judiciously into Computational Antitrust by Dr. Lance Eliot
The exemplary and notably ground-breaking work of the Computational Antitrust Project, led by my CodeX colleague Professor Thibault Schrepel, has established a new and robust foundation for understanding and reshaping the future of antitrust law and its concomitant enforcement.
Within the three vital implications he affirms as underpinning computational antitrust, there is a demonstrable role for Artificial Intelligence (AI), which he indeed identifies as an integral part of the set of computational tools powering these endeavors. AI will serve in many capacities within the law and the practice of law, inexorably intertwining with adjudication and jurisprudence.
Some illuminating examples of AI in the antitrust realm include:
- Machine Learning (ML) and Deep Learning (DL) are being used to analyze prior antitrust cases and ascertain the patterns underlying the variables that characterize antitrust instances. These tools can then be reused to help determine whether newly suspected antitrust behaviors are being exhibited in the marketplace and are therefore worthy of antitrust legal scrutiny.
- Natural Language Processing (NLP) is being employed to examine textual narratives to ferret out whether antitrust activities might be underway. This is done by exploring a wide variety of narrative-oriented sources, including memoranda, public announcements, tweets, and the like.
- Knowledge-Based Systems (KBS) are useful for seeking to encode essential rules of conduct that can then be used to ascertain whether organizational actions appear to violate antitrust provisions. This is akin to the law-as-code trending direction.
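The Knowledge-Based Systems item above can be illustrated with a toy law-as-code sketch: conduct rules encoded as predicates evaluated against a company’s described actions. The rule names, the fact schema, and the triggering conditions are hypothetical illustrations of mine, not real legal tests.

```python
# Hypothetical conduct rules encoded as predicates over reported facts
# (a toy illustration of the law-as-code idea, not actual legal criteria).
RULES = {
    "possible_price_fixing": lambda a: a["coordinated_pricing_with_rival"],
    "possible_market_allocation": lambda a: a["agreed_territory_split"],
    "possible_predatory_pricing": lambda a: a["dominant"] and a["price_below_cost"],
}

def flag_conduct(actions):
    """Return the names of all rules the described conduct appears to trigger."""
    return [name for name, rule in RULES.items() if rule(actions)]

# Hypothetical fact pattern for one company under review.
conduct = {
    "coordinated_pricing_with_rival": False,
    "agreed_territory_split": True,
    "dominant": True,
    "price_below_cost": True,
}

print(flag_conduct(conduct))
# -> ['possible_market_allocation', 'possible_predatory_pricing']
```

The point of the sketch is the architecture, not the rules themselves: encoded provisions make the trigger for human legal scrutiny explicit, inspectable, and testable.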
In my research work and published papers, which focus overall on the merging of AI and the law, I have posited that for the specific role of AI in antitrust research and everyday practice, a useful framework consists of applying AI throughout the devised lifecycle of antitrust.
As such, AI can be valuably infused into these six major stages or phases of the antitrust lifecycle:
- Detection: Use of AI to identify potential antitrust violations
- Assessment: AI for ascertaining if there is a civil or criminal case of prosecutorial merit
- Investigation: Establishing (or not) via AI the case to support an asserted antitrust violation
- Recommendation: AI indicating whether a formal civil or criminal suit seems viable
- Prosecuting: Leveraging AI while aiding or carrying out an antitrust case in our courts
- Implementation: AI aiding the required judgment monitoring and enforcement
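The six stages above can be sketched as a simple gated pipeline, in which the AI output at each stage (under human review) determines whether a matter proceeds to the next. The stage names follow the list; the pass/fail mechanics are an illustrative placeholder of mine, not a prescribed implementation.

```python
from enum import Enum, auto

class Stage(Enum):
    """The six antitrust lifecycle stages, in order."""
    DETECTION = auto()
    ASSESSMENT = auto()
    INVESTIGATION = auto()
    RECOMMENDATION = auto()
    PROSECUTING = auto()
    IMPLEMENTATION = auto()

def run_lifecycle(ai_outputs):
    """Advance stage by stage; stop at the first stage whose AI-assisted
    (human-reviewed) output does not support proceeding further."""
    completed = []
    for stage in Stage:
        completed.append(stage)
        if not ai_outputs.get(stage, False):
            break  # the matter does not proceed past this stage
    return completed

# Example: detection and assessment support the case; investigation does not.
result = run_lifecycle({Stage.DETECTION: True, Stage.ASSESSMENT: True})
print([s.name for s in result])
# -> ['DETECTION', 'ASSESSMENT', 'INVESTIGATION']
```

Modeling the lifecycle as an ordered pipeline makes it easy to attach a different AI tool (ML classifier, NLP scanner, knowledge-based checker) to each stage while keeping a human decision-maker at every gate.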
These stated uses of AI are embedded within a context of humans and machines working side by side, as it were. I mention this salient point because we are not yet at a juncture of AI systems attaining a semblance of autonomous capacity. For now, the idea is that these would be AI-based computational tools, developed and tuned for the antitrust realm, for active hands-on use by antitrust analysts, regulators, enforcers, researchers, and others.
Few such AI-infused tools exist today in the antitrust space, which suggests there is ample opportunity and challenge ahead in crafting and fielding these sorely needed AI-assisting capabilities.