CodeX Computational Antitrust Project at the OECD
Conferences usually end with reports. This one began with one. On June 18, 2025, the Computational Antitrust CodeX project launched its fourth cross-agency report at the OECD (Paris, France). The event gathered regulators, scholars, practitioners, and technologists to debate one deceptively simple question: can antitrust law absorb the methods of computational law without dissolving the guarantees that make it law?
The launch was less a ceremony than a lively discussion about where computational antitrust has already entered enforcement, and what this entry means for the future of antitrust. It was, as one participant put it, “a birthday party for computational antitrust.” The metaphor fits. What began as an idea only a few years ago now stands on its own feet. And the conference recording (available on YouTube) makes one thing clear: computational antitrust is no longer a distant horizon. It is already here.
A movement
When we started this project four years ago, “computational antitrust” sounded speculative, almost like an academic curiosity. Today it has become a real movement. More than 75 antitrust agencies, scholars and practitioners from around the world contribute to the project. We publish academic articles, organize workshops, convene conferences, and host a podcast where heads of agencies and leading thinkers share their experiences. None of this would exist without the extraordinary work of the team behind the project, who keep it alive year-round.
Among our activities, we also publish a yearly report that takes stock of the field. For each edition, we invite our partner agencies to submit a short memo describing where they have deployed computational tools, the obstacles they face (legal, technical, institutional…) and the directions they intend to pursue. The exercise creates a rare public archive, a snapshot of how agencies experiment and learn in real time.
This year’s report is the largest so far with 25 contributions. It documents how antitrust agencies are not only developing tools, but also assembling infrastructures and institutions to support enforcement. It shows not just what works, but also where agencies struggle. Failures, constraints, budget pressures, and recruitment difficulties are all part of the story. And that honesty is itself progress. Law learns by experimenting, not by pretending.

Global snapshot
Together with my colleague (and now Dr.!) Teodora Groza, I opened the panel with a summary of this year’s report, structured around three main pillars.
Tools. Bid-rigging remains the poster child. Peru’s competition authority has developed algorithms that analyze procurement data to flag suspicious bidding patterns. Brazil’s CADE uses machine learning systems to uncover collusive behavior in public tenders. But Lithuania has also introduced natural language models to trace connections among firms and individuals, while Korea has piloted dashboards that visualize anomalous pricing patterns across entire industries. Together, these projects show how agencies are increasingly treating procurement and market data as raw material for automated screening of collusion.
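The agencies’ systems are not public, but the basic logic of automated collusion screening can be sketched in a few lines. A classic bid-rigging screen flags tenders whose bids cluster suspiciously tightly, measured by a low coefficient of variation. The data, threshold, and function names below are illustrative assumptions, not any agency’s actual system:

```python
from statistics import mean, stdev

def coefficient_of_variation(bids):
    """Ratio of bid dispersion to the average bid; unusually low
    values suggest bids were coordinated rather than independent."""
    return stdev(bids) / mean(bids)

def flag_suspicious_tenders(tenders, threshold=0.05):
    """Return tender IDs whose bids cluster unusually tightly.
    `threshold` is an illustrative cutoff, not a legal standard."""
    return [
        tender_id
        for tender_id, bids in tenders.items()
        if len(bids) >= 3 and coefficient_of_variation(bids) < threshold
    ]

# Toy procurement data: tender ID -> submitted bids.
tenders = {
    "T-001": [100.0, 101.0, 100.5],  # suspiciously tight cluster
    "T-002": [80.0, 120.0, 95.0],    # healthy dispersion
}
print(flag_suspicious_tenders(tenders))  # → ['T-001']
```

Real systems layer many such screens (bid rotation, market sharing, identical line items) over millions of tenders, but the principle is the same: turn procurement data into statistical anomalies worth a case handler’s attention.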
Infrastructures. Agencies are building the backbones on which computational antitrust rests. Cloud systems are the most visible battleground: Italy develops sovereign clouds to guarantee security and autonomy, while others, like Pakistan, rely on commercial providers such as Microsoft Azure. France has invested in digitizing its entire archive of decisions since 1988, now made accessible on Hugging Face. Canada is moving in the same direction, with advanced systems for data management and internal knowledge sharing. These are stable, durable systems that make computational antitrust possible.
Institutions. Perhaps the most striking development is organizational. Dedicated digital units are being set up, often as hybrids between legal, economic, and technical expertise. Forensic departments are evolving into AI and data science labs. Recruitment profiles have diversified. Data scientists, computer scientists, and engineers are now joined by behavioral psychologists. This shows computational antitrust is not sustained by machines alone, but by institutions learning to build and staff new teams around them.
The report is rich in detail across the board. Peru and Brazil built ML systems to screen procurement; Lithuania’s authority trained NLP tools in over 100 languages; Chile monitors 360,000 news articles; Colombia tracks supermarket and flight prices; Pakistan detects fake discounts; Taiwan parses time-series price shifts; Italy insists on sovereign cloud systems; Canada develops instant case summaries and chatbots; France digitized every decision since 1988 and shared them on Hugging Face; Poland fine-tuned GPT-4 to detect dark patterns; Singapore unveiled its AI Verify toolkit and merger bot; Austria and Luxembourg built foundation models; Malaysia launched case-tracking systems. From Malawi to Catalonia, from Japan to Slovenia, the breadth of experimentation shows computational antitrust is no longer niche.

Panel perspectives
The panel discussions brought these developments to life.
Susana Campuzano Fernández (CNMC, Spain) detailed the technical evolution of Brava. What began as a screening tool for suspicious bids has matured into a system that does more than raise red flags. It explains its reasoning, distinguishes between types of collusion and makes its logic intelligible not just to data scientists, but to case handlers and, eventually, judges. The CNMC now draws on a database of more than seven million tenders and has integrated explainability techniques such as LIME and SHAP to show which variables drive the system’s conclusions. In practice, this means case handlers can interrogate the tool, see why a bid is flagged, and prepare evidence with a clearer chain of reasoning.
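Brava itself is not public, but the intuition behind explainability techniques like SHAP can be illustrated with a toy attribution: perturb one input at a time and measure how the risk score moves. The scoring function, weights, and feature names below are invented for illustration, and the leave-one-out method is a simplification of what SHAP actually computes (which averages over feature coalitions):

```python
def collusion_risk(features):
    """Toy risk score for a flagged bid (invented weights,
    not the CNMC's actual model)."""
    return (0.6 * features["bid_similarity"]
            + 0.3 * features["shared_director"]
            + 0.1 * features["late_submission"])

def feature_contributions(score_fn, features, baseline=0.0):
    """Leave-one-out attribution: set each feature to a baseline
    value and record how much the score drops. This captures the
    core idea of SHAP-style explanations in simplified form."""
    full_score = score_fn(features)
    contributions = {}
    for name in features:
        perturbed = dict(features, **{name: baseline})
        contributions[name] = full_score - score_fn(perturbed)
    return contributions

flagged_bid = {"bid_similarity": 0.9, "shared_director": 1.0, "late_submission": 0.2}
for name, delta in sorted(feature_contributions(collusion_risk, flagged_bid).items(),
                          key=lambda kv: -kv[1]):
    print(f"{name}: {delta:+.2f}")
```

An output like this, ranking which variables drove the flag, is what lets a case handler interrogate the tool and present a chain of reasoning rather than a bare score.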
Lukas Cavada (Austrian Competition Authority) described Vienna’s institutional transformation. What had long been a forensic unit has been refashioned into a broader data analytics hub. The team now includes six specialists, recruited with expertise in data science and AI, and backed by a budgetary commitment amounting to one-third of the agency’s resources. Tools alone do not deliver; people do. By investing in training, recruitment, and integration between technical experts and case handlers, Austria is trying to ensure that computational methods actually reach the front line of enforcement.
Despina Pachnou (OECD) raised the essential question of scale. Large agencies can afford to dedicate staff and budget to computational work; smaller ones often struggle. So how should a small agency allocate scarce resources between investing in digital tools and pursuing traditional casework? The question goes beyond technology. It forces agencies to think about priorities and sequencing. Not every agency can run a digital lab, yet no agency can afford to ignore the computational turn.
Antonio Capobianco (OECD) widened the lens to the international scene. Computational methods, he noted, are spreading across jurisdictions of all sizes, from Malawi to Catalonia, and international cooperation has become “contagious.” Yet he also pointed to the limits of the current wave. Most tools remain concentrated on cartels and bid-rigging, the low-hanging fruit of computational enforcement. Why not extend these methods to merger control or abuse of dominance? The challenge ahead is to move beyond the familiar and test computational methods on the harder cases that define antitrust in digital markets.
Together, these interventions balanced optimism with realism. Computational antitrust is not magic. It is institutional labor.
Lessons for computational law
Though the report (and the OECD event) spoke the language of antitrust, its grammar belongs to computational law. What antitrust agencies are assembling are not mere “tools,” but fragments of legal order expressed in code. Compliance once lived exclusively in statutes and case law; today it also lives in algorithms that monitor bids, models that flag risks, dashboards that repackage raw data into evidence.
This leads to “better enforcement.” But it also signals a shift in the ontology of law itself, from text to system, from words on paper to executable functions. Norms that once awaited adjudication now live inside infrastructures that act in real time. This migration brings with it a paradox. The more law becomes computable, the more it must reckon with the classic guarantees of legality, i.e., explainability, due process, accountability. A model that cannot explain itself before a judge is no more law than a secret statute. Seen from this angle, computational antitrust is not a niche experiment on collusion detection. It is a stress test for the legal order itself.
Why this all matters
Enforcement without computational capacities risks obsolescence in digital markets. Yet blind reliance on algorithms risks technocracy and opacity. That is the paradox. Law must become computable without ceasing to be accountable. Computational antitrust is becoming one of the testbeds where this paradox is played out, case by case, dataset by dataset.
The OECD launch left me with one conviction. The frontier is no longer about adoption. It is about institutionalization. Agencies are already building tools and infrastructures. The question now is how to integrate them into workflows, budgets, training, and accountability structures.
The day closed not with data points, but with a reception beneath the ceilings of the Château de la Muette, a fitting reminder that computational law is made not only by code and datasets, but also by conversations around a glass.

Further reading:
- The full report: “Computational Antitrust Worldwide: Fourth Cross-Agency Report”
- Thibault Schrepel, “Computational Antitrust: Evidence From 25 Antitrust Agencies” (Network Law Review, 2025)
- Video recording of the OECD launch: YouTube
- The Computational Antitrust project webpage
