California’s Disclosure Gambit: What SB 53 Reveals About Our Relationship with (Potentially) Dangerous Technology
The Story Regulations Tell
For most of human history, dangerous technologies were controlled by containing the humans who wielded them, with walls and armies holding back external threats while the threat of execution disciplined would-be traitors. Then the twentieth century produced technologies whose danger lay not (only) in physical objects but in knowledge itself, knowledge that ultimately could not be imprisoned, exiled, or executed. The response to this change was to supplement physical containment with a novel kind of regulatory framework built on inspections, verification protocols, and institutional monitoring. We see that play out prominently in nuclear regulation, which requires safety committees, reporting structures, and quality assurance programs that seek to prevent meltdowns.
California’s Transparency in Frontier Artificial Intelligence Act (SB 53), signed into law on September 29, 2025, represents the state’s attempt to write the next chapter of this story for a technology that is not just “dangerous” but one the law itself recognizes as potentially “catastrophic.” That framing forces us to ask whether the law is enough. I don’t think it is.
I: The Disclosure Gambit
SB 53 aims at “frontier AI” models, which the statute defines as foundation models trained using more than 10²⁶ floating-point operations, a threshold no current foundation model meets, though one that could, if you subscribe to the Ray Kurzweil camp, be crossed sooner than most expect. In practice, the law will apply to large frontier developers, namely OpenAI, Anthropic, Google DeepMind, Meta, and Microsoft, the cohort of organizations with the resources (which vastly exceed the statute’s $500 million gross revenue threshold) and the ambition to train models at this scale.
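To give the 10²⁶ figure some texture, a common back-of-envelope approximation (not anything the statute prescribes) estimates dense-transformer training compute as roughly 6 × parameters × training tokens. The sketch below applies that rule of thumb to purely hypothetical model sizes to show where the statutory threshold sits; none of the figures refer to any actual developer’s models.

```python
# Back-of-envelope comparison against SB 53's 10^26 FLOP threshold.
# Uses the widely cited ~6 * N * D approximation for dense-transformer
# training compute (N = parameters, D = training tokens).
# All model sizes below are hypothetical illustrations.

SB53_THRESHOLD_FLOPS = 1e26

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * parameters * tokens

hypothetical_runs = {
    "70B params on 15T tokens": training_flops(70e9, 15e12),
    "400B params on 30T tokens": training_flops(400e9, 30e12),
    "1T params on 60T tokens": training_flops(1e12, 60e12),
}

for run, flops in hypothetical_runs.items():
    status = "above" if flops > SB53_THRESHOLD_FLOPS else "below"
    print(f"{run}: ~{flops:.1e} FLOPs ({status} the threshold)")
```

On this rough arithmetic, only the largest hypothetical run crosses 10²⁶ FLOPs, which is consistent with reading the threshold as aimed at training runs somewhat beyond the publicly estimated scale of today’s largest models.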
SB 53 imposes three principal obligations. First, these developers must disclose how they incorporate national and international standards into their AI development processes. Second, they must publish reports before deploying new or substantially modified frontier models, describing the capabilities, intended uses, limitations, and results of their risk assessments. Third, they must report safety incidents to the California Office of Emergency Services within fifteen days of discovery, or within twenty-four hours if the incident poses an imminent danger to public health or safety.
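To make the two reporting clocks concrete, here is a minimal sketch of how a compliance team might encode them; the statute mandates no tooling, and the function name and structure are assumptions added purely for illustration.

```python
from datetime import datetime, timedelta

# Illustrative encoding of SB 53's two incident-reporting windows:
# fifteen days from discovery by default, twenty-four hours when the
# incident poses an imminent danger to public health or safety.
# The helper below is a hypothetical sketch, not statutory language.

def reporting_deadline(discovered_at: datetime, imminent_danger: bool) -> datetime:
    """Latest time a report may be filed with the CA Office of Emergency Services."""
    window = timedelta(hours=24) if imminent_danger else timedelta(days=15)
    return discovered_at + window

if __name__ == "__main__":
    discovered = datetime(2026, 3, 1, 9, 0)
    print(reporting_deadline(discovered, imminent_danger=False))  # 2026-03-16 09:00:00
    print(reporting_deadline(discovered, imminent_danger=True))   # 2026-03-02 09:00:00
```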
The disclose, publish, describe, and report requirements share a common structure that exposes the statute’s underlying assumption and its weakness: it expects companies to articulate what they are doing rather than prove that what they are doing actually works to prevent the potentially catastrophic harms the statute ostensibly addresses. Does that approach work? Let’s put it this way: it would not have prevented Chernobyl, but it would have created an appearance of regulatory oversight.
II: The Gap Analysis
The AI Life Cycle Core Principles (AILCCP) is a compendium of 37 principles that constitute the framework for responsible AI oversight. No law, however, will engage all 37 principles, or even all of the subsets any single principle contains, as we will shortly see. The question is which principles a given statute prioritizes and what those choices reveal about legislative assumptions and political constraints.
Viewed through the analytical lens of the AILCCP, SB 53 shows incomplete alignment with four principles: Governance, Safety, Accountability, and Data Stewardship. Averaging the four scores below (2, 2, 2, and 1) yields 1.75 out of 5. Let’s take a closer look.
Governance (Score: 2 of 5)
Governance is the single most important principle in the AILCCP framework. Without it, no other principle can materialize. The principle contains fifteen distinct requirements (the “subsets”) that center on creating and maintaining policies, procedures, and processes that enable organizational alignment with the other 36 principles. It would make sense, then, to see AI laws focusing heavily on this principle.
SB 53 goes only as far as requiring large frontier developers to describe their alignment with this principle. It could have set the tone, but instead remains silent about well-known, basic measures such as board-level and senior-executive oversight and a commitment to continuous learning and improvement. Merely requiring a description of alignment reveals a reluctance to get into the weeds.
Safety (Score: 2 of 5)
Within the Safety principle we find sixteen distinct requirements. They coalesce around resistance to attacks and threats through robust protective measures, continuous monitoring, and incident response capabilities while maintaining data integrity and system resilience. SB 53 addresses Safety through its reporting requirements, which must include summaries of catastrophic risk assessments conducted by the developer. But the statute does not define what constitutes an adequate assessment, does not mandate specific testing protocols or evaluation methodologies, and does not prohibit deployment of models that fail whatever safety evaluations the developer chooses to conduct.
The contrast with the vetoed SB 1047 is instructive. That bill contained (among other things) deployment prohibitions for models that failed defined safety tests, creating a performance standard rather than merely a disclosure requirement. The choice to leave such provisions out of SB 53 was political rather than technical, reflecting a judgment by policymakers that disclosure would be sufficient, or perhaps all they could realistically muster.
Will disclosure prove sufficient? The evidence from other domains where disclosure regimes have been deployed suggests good reasons for skepticism. Financial disclosure requirements did not prevent the 2008 crash, environmental impact statements have not halted ecological degradation, and nutritional labeling has not solved the obesity epidemic. Disclosure is an effort to discipline conduct, but it only works when paired with appropriate enforcement mechanisms and performance standards that convert disclosed information into behavioral constraints.
Accountability (Score: 2 of 5)
Accountability contains seventeen distinct requirements. SB 53 addresses the principle solely through its whistleblower protections and incident reporting requirements. The whistleblower provisions allow covered employees to disclose information to the Attorney General if they have reasonable cause to believe a developer’s activities pose a substantial danger to public health or safety.
Alignment with Accountability requires much more: traceability, clear ownership, responsive legal mechanisms, and comprehensive oversight of AI. It also requires institutional infrastructure for investigating reported concerns, mechanisms for tracing harm to specific decisions and actors, and enforcement capacity sufficient to impose meaningful consequences on those responsible for failures.
SB 53’s civil penalty of up to $1 million per violation provides some enforcement leverage, though this amount may prove trivial relative to the revenues and market capitalizations of the companies subject to the statute. More significantly, SB 53 creates no private right of action, meaning that injured parties cannot sue directly under the statute and must instead rely on the Attorney General to pursue violations on their behalf. In a state where the Attorney General must balance AI enforcement against hundreds (if not more) of other priorities spanning consumer protection, antitrust, environmental law, and civil rights, this concentration of enforcement authority in a single office creates bottlenecks that may prove consequential.
Data Stewardship (Score: 1 of 5)
Data Stewardship is a principle that addresses the responsible collection, use, storage, and sharing of data throughout the AI lifecycle, recognizing that model behavior depends heavily on the data used to train and fine-tune it. SB 53 does not engage this principle, as disclosure reports need not deal with training data provenance, data quality measures, data handling practices, or the processes by which developers identify and address problematic content in training corpora.
This gap is significant. Frontier model behavior emerges from training data in ways that are often difficult, if not impossible, to predict and harder to reverse. This means that models trained on biased, incomplete, toxic, or otherwise problematic data will exhibit corresponding behaviors regardless of what disclosures their developers release.
III: What SB 53 Reveals
California chose disclosure over capability requirements, over performance standards, and over organizational infrastructure. But why? One answer lies in political economy. Remember who the frontier developers really are: they are among the most valuable and influential organizations in human history. They employ sophisticated lobbyists, fund academic research, and profoundly shape public discourse about their own regulation. Disclosure requirements impose costs on these companies and create compliance burdens that consume executive attention, but they do not threaten business models or constrain commercial freedom in the way that deployment prohibitions or performance standards would.
But there is another, deeper answer. Disclosure regimes reflect a particular theory of oversight: that information, once made available, will discipline behavior through mechanisms that do not require direct state intervention. Publish the data, and markets will punish bad actors by withdrawing capital and customers; disclose the risks, and stakeholders will demand reform through shareholder proposals and public pressure; create reporting requirements, and reputational concerns will motivate compliance even without aggressive enforcement. This theory has a distinguished intellectual pedigree stretching back through securities regulation, environmental law, and consumer protection, and it has enabled regulatory action in domains where more intrusive intervention would have been politically impossible.
The question SB 53 forces us to ask is whether we believe this theory of disclosure applies to frontier AI, a technology whose potential for catastrophic harm the statute explicitly acknowledges. Do we believe that disclosing catastrophic risk assessments will prevent catastrophic risks from materializing? Do we really believe that publishing policies, procedures, and processes will create the control structures capable of constraining some of the most powerful commercial actors? Do we really believe that disclosure, standing alone without performance standards or institutional infrastructure or meaningful enforcement, constitutes adequate control for technologies that could cause mass casualties or billions of dollars in damage?
I do not, and the AILCCP framework illuminates why. Effective alignment with the principles we reviewed requires demonstrated capabilities, verified compliance, and institutional infrastructure capable of converting principles into real constraints. SB 53 provides the foundation for disclosure of intentions. It’s a start, but not the destination.
What’s Next
It’s unclear. If California’s gambit on disclosure pays off, SB 53 will take its seat on a distinguished podium, one where proportionate, innovation-friendly responses to emerging technology serve as a model for other jurisdictions seeking to balance safety with economic dynamism. If it doesn’t, the statute has conveniently supplied the vocabulary for its own post-mortem.