Computational Presumptions Applied to AI Markets
Abstract
Digital regulators worldwide are imposing sweeping bans on data combinations to eliminate data asymmetries and learning effects. These interventions reveal a critical disconnect: ex ante prohibitions on combining data across services fundamentally undermine the functioning of any large language model (LLM). Regulations designed for traditional platforms are now being applied to downstream AI markets, where reinforcement learning, model drift, and the distinction between across-user and within-user learning create fundamentally different competitive dynamics. As a consequence, ex ante regulations risk stifling innovation while failing to address consumer harm. The paper argues for computational presumptions that use privacy-preserving techniques as measurable compliance mechanisms. By replacing blunt prohibitions with architectural safeguards grounded in privacy-utility thresholds, regulators can neutralize lock-in effects while preserving the data flows essential to AI advancement and improvement.