The Future of Regulation: Using AI to Score AI Laws

What Was Scored

When Illinois enacted HB 1806, the Wellness and Oversight for Psychological Resources Act, I was curious: how does the Act align with the AI Life Cycle Core Principles? To find out, I asked ChatGPT 5 to write a prompt instructing the model to reference the AI Life Cycle Core Principles database. I explained how the Act should be scored (you can read about that at the bottom) and then ran some tests. After a few rounds of tweaking, the prompt worked.

You can skip ahead to the analysis, but if you are unfamiliar with the AI Life Cycle Core Principles and the database that was created from it, I recommend taking a couple of minutes to read through the first three introductory sections.

First, About the AI Life Cycle Core Principles

Most people interested in AI have read and heard about trustworthy AI, safe AI, reliable AI, secure AI, and so on. We intuitively acknowledge that these are important principles. At the same time, most of us would be hard-pressed to provide a compelling explanation of why. An explanation that goes beyond a vague “well, it’s common sense,” beyond dystopian sci-fi scenarios, and beyond the circular reasoning that “powerful technology should be [trustworthy, safe, reliable, secure] because it’s powerful.”

We need to do better. What these principles mean is critical. They shape the public narrative around AI. They drive legislation, regulation, standards, and best practices, creating an intricate framework that determines research priorities, funding decisions, and technical architectures. This framework determines which AI approaches get developed and which get abandoned. It influences how systems are designed, tested, and deployed. We need a solid grasp of these principles.

The AI Life Cycle Core Principles (AILCCP) project was launched to do precisely that.

I ran deep, detailed surveys, mining every tidbit of information I could find from the most authoritative sources on AI: the G7, OECD, UNESCO, ISO, IEEE, NIST, FTC, G20, and APEC. I worked to determine which principles were mentioned most frequently, and which were missing. I collated and cataloged them, working to define, identify, and simplify what they mean, in essence operationalizing them. Since the project began, I have also added rigorous, highly granular, substantive discussion to further enhance and clarify them. The project has grown steadily; today it contains 37 AI life cycle core principles and more than 14,000 words.

As the AILCCP project grew, I gradually realized that while this work was encyclopedic, that very comprehensiveness made it unwieldy. I wanted to make it more accessible. My first step was to transform it into a relational database. The conversion was elaborate and time consuming, but watching structured data slowly emerge from a massive pile of words was genuinely exciting and taught me even more about what I had set out to do. To make it all even more accessible, I then tested it with a pre-trained AI model to perform context-aware analysis.

We will take a closer look at what that means shortly.

About the AILCCP Database

The AILCCP database provides a comprehensive, structured, repeatable framework for using pre-trained AI to measure how relevant subject matter (AI laws, contracts, policies, procedures, processes, and other AI-related items) aligns with the core principles identified in the AILCCP.

The AILCCP database breaks down the 37 core principles. It maps them into granular metadata across 10 AI life cycle phases, maps them to nearly 40 global standards from ISO, IEEE, and NIST, and adds over 20 targeted controls addressing more than 10 categories of AI-related considerations, along with hundreds of keywords. The database also cross-references relevant guidance from the FTC, SEC, and FDA. All told, it parses 14,000 words into structured data chunks. The project is ongoing, with most of the work now centered on the database itself: enriching records, setting up new fields, adding tables, creating more connections, and so on, rather than building out the Notes section in the AILCCP.
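To make the relational structure concrete, here is a minimal sketch of what such a schema could look like, using Python’s built-in sqlite3 module. The actual AILCCP table and column names are not published, so every identifier below is hypothetical and purely illustrative.

```python
import sqlite3

# Hypothetical sketch of an AILCCP-style relational schema: principles,
# life cycle phases, standards, and keywords, linked by mapping tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE principle (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,          -- e.g. 'Accountability'
    definition TEXT
);
CREATE TABLE lifecycle_phase (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL           -- one of the 10 AI life cycle phases
);
CREATE TABLE standard (
    id INTEGER PRIMARY KEY,
    designation TEXT NOT NULL    -- e.g. 'ISO/IEC 23894:2023'
);
-- Many-to-many mappings between principles, phases, and standards
CREATE TABLE principle_phase (
    principle_id INTEGER REFERENCES principle(id),
    phase_id INTEGER REFERENCES lifecycle_phase(id)
);
CREATE TABLE principle_standard (
    principle_id INTEGER REFERENCES principle(id),
    standard_id INTEGER REFERENCES standard(id)
);
CREATE TABLE keyword (
    principle_id INTEGER REFERENCES principle(id),
    term TEXT NOT NULL           -- hundreds of keywords in the real database
);
""")

# Populate one principle-to-standard mapping and query it back
conn.execute("INSERT INTO principle (id, name) VALUES (1, 'Accountability')")
conn.execute("INSERT INTO standard (id, designation) VALUES (1, 'ISO/IEC 23894:2023')")
conn.execute("INSERT INTO principle_standard VALUES (1, 1)")
row = conn.execute("""
    SELECT p.name, s.designation
    FROM principle p
    JOIN principle_standard ps ON ps.principle_id = p.id
    JOIN standard s ON s.id = ps.standard_id
""").fetchone()
print(row)  # ('Accountability', 'ISO/IEC 23894:2023')
```

The mapping tables are what make cross-referencing possible: a single query can walk from a principle to every standard, phase, or keyword connected to it.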

AILCCP as the Context-Aware Analytical Framework

This database framework is the foundation for context-aware analysis using pre-trained AI models. It works by providing the AI with both the curated database as the exclusive reference material and an input document such as legislation, contracts, or policies for integrated interpretation and insight extraction. After testing multiple models, including ChatGPT 5 Thinking, Manus, Gemini Pro 2.5, and Anthropic’s Claude Opus 4.1, I found that Claude Opus 4.1 currently delivers the most robust results. I expect this to change over time, so ongoing testing will be necessary.

The model systematically analyzes the subject matter against the AILCCP database, identifying areas of strong alignment while calculating quantitative scores for each principle. More importantly, as you will see below, the model pinpoints specific gaps and provides targeted recommendations for improvement based on the reference material. The model employs a weighted, quantitative scoring methodology that ensures analyses are both reproducible and defensible, transforming subjective assessments of AI principle alignment into measurable, actionable intelligence. (The scoring methodology is explained in the Appendix.)
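The pairing of exclusive reference material with an input document can be sketched as a simple prompt-assembly step. The wording below is illustrative only; the actual prompt used for this analysis is not reproduced here.

```python
def build_analysis_prompt(database_export: str, input_document: str) -> str:
    """Assemble a context-aware analysis prompt from the AILCCP database
    export and the document to be scored. Illustrative wording only."""
    return (
        "You are scoring the document below against the AI Life Cycle Core "
        "Principles (AILCCP). Use ONLY the AILCCP database as reference "
        "material; do not rely on outside knowledge.\n\n"
        "=== AILCCP DATABASE ===\n" + database_export + "\n\n"
        "=== INPUT DOCUMENT ===\n" + input_document + "\n\n"
        "For each principle, return a 0-5 score, a brief rationale, the "
        "sections relied on, and the mapped standards."
    )

prompt = build_analysis_prompt("<database rows>", "<text of HB 1806>")
```

Keeping the database as the sole reference is what makes runs repeatable: the same database export and the same input document produce the same analytical frame, regardless of which model consumes the prompt.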

Use Cases

Apart from analyzing AI legislation, the AILCCP database can be applied to AI regulatory analysis, AI contractual risk assessments, AI policy audits, and investor due diligence in M&A deals involving AI.

The contents of the report below were not modified.


——–

AI Analysis of the Illinois HB 1806 – Wellness and Oversight for Psychological Resources Act

Act Sections

Section 1 Short title
“This Act may be cited as the Wellness and Oversight for Psychological Resources Act.”
Section 5 Purpose
“The purpose of this Act is to safeguard individuals seeking therapy or psychotherapy services by ensuring these services are delivered by qualified, licensed, or certified professionals.”
Section 10 Definitions
“In this Act: ‘Administrative or supplementary support’ means tasks performed to assist a licensed professional in the delivery of therapy or psychotherapy services…”
Section 15 Prohibition on unauthorized therapy services
“(a) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.”
Section 20 Enforcement and penalties
“(a) Any individual, corporation, or entity found in violation of this Act shall pay a civil penalty to the Department in an amount not to exceed $10,000 per violation…”
Section 25 Exceptions
“This Act does not apply to the following: (1) religious counseling; (2) peer support; and (3) self-help materials…”
Section 99 Effective date
“This Act takes effect upon becoming law.”

Main Scoring Table

| Principle | Score (0–5) | Brief Rationale | Sections | Maps to Standard |
|---|---|---|---|---|
| Accountability | 5 | The Act establishes clear accountability by requiring licensed professionals to maintain full responsibility for all AI interactions and outputs, with enforcement through civil penalties up to $10,000 per violation. Licensed professionals must review and approve all AI-generated recommendations. | Section 15(a), Section 15(b)(3), Section 20 | ISO/IEC 23053:2022, ISO/IEC 23894:2023, ISO/IEC 38507:2022 |
| Human-Centered | 5 | The Act mandates that therapy services must be conducted by licensed professionals and prohibits AI from making independent therapeutic decisions or directly interacting with clients. Human professionals retain complete control over therapeutic processes. | Section 15(a), Section 15(b) | ISO/IEC 23053:2022, ISO/IEC TR 24028:2020 |
| Safety | 4 | The Act’s stated purpose is to safeguard individuals seeking therapy services, with specific restrictions on AI use to protect consumers from unqualified providers and unregulated AI systems. While strong on protective measures, it lacks specific safety testing requirements. | Section 5, Section 15 | ISO/IEC 23053:2022, ISO/IEC 23894:2023, ISO/IEC TR 24028:2020 |
| Security | 4 | Provisions protect consumers from unqualified and unregulated AI systems through licensing requirements and restricted AI use. The Act would benefit from specific technical security measures and data protection standards. | Section 5, Section 15 | ISO/IEC 27001:2022, ISO/IEC 27017:2015, ISO/IEC 27018:2019 |
| Trustworthy | 3 | Establishes trust through licensed professional requirements and enforcement mechanisms. To fully achieve trustworthiness, the Act needs provisions for transparency and explainability of AI systems used in therapy. | Section 15, Section 20 | ISO/IEC TS 8200:2024, NIST AI RMF 1.0 |
| Governance | 3 | Creates governance structure through Department oversight and enforcement authority. A comprehensive AI governance framework with clear roles, responsibilities, and oversight mechanisms would strengthen this principle. | Section 20 | ISO/IEC 38507:2022, ISO/IEC 23053:2022 |
| Reliability | 2 | Implicit reliability through professional oversight requirements. The Act would benefit from explicit provisions for AI system reliability testing, validation procedures, and performance monitoring. | Section 15(b) | ISO/IEC 25010:2023, ISO/IEC TR 29119-11:2020 |

Potential Gaps and Future Legislative Opportunities

| Principle | Recommendation | Maps to Standard |
|---|---|---|
| Transparency | Require licensed professionals to provide written disclosure when AI systems are used in therapy services, including clear explanation of the AI’s specific purpose, capabilities, and limitations in the therapeutic process. Mandate that clients receive understandable information about how AI tools assist in their treatment. | ISO/IEC 23053:2022, ISO/IEC 23894:2023 |
| Consent | Establish explicit requirements for obtaining informed patient consent before using AI systems in therapy sessions, including consent for any recordings, transcriptions, or AI-assisted data processing. Ensure patients can opt out of AI-assisted services while maintaining access to traditional therapy. | ISO/IEC 27018:2019, ISO/IEC 29184:2020 |
| Privacy | Implement comprehensive data protection requirements for AI systems used in therapy, including specifications for data collection, storage, sharing, and retention. Mandate privacy impact assessments for AI tools processing sensitive mental health information. | ISO/IEC 27018:2019, ISO/IEC 29184:2020 |
| Fairness | Require assessment and mitigation of potential biases in AI systems used for therapy support, ensuring equitable treatment across all patient demographics. Mandate regular audits to identify and address discriminatory patterns in AI-assisted therapeutic recommendations. | ISO/IEC TR 24027:2021, ISO/IEC 23053:2022 |
| Bias | Establish requirements for bias testing and monitoring of AI systems, including documentation of training data sources and demographic representation. Require corrective measures when bias is detected in AI-assisted therapeutic tools. | ISO/IEC TR 24027:2021, ISO/IEC 23894:2023 |
| Explainability (XAI) | Mandate that AI systems used in therapy provide explanations for their outputs that licensed professionals can understand and communicate to patients. Require documentation of AI decision-making processes in patient records. | ISO/IEC 23053:2022, ISO/IEC TR 29119-11:2020 |
| Data Stewardship | Establish comprehensive requirements for responsible management of patient data used by AI systems, including data quality standards, access controls, and audit trails. Define clear data ownership and portability rights for patients. | ISO/IEC 38505-1:2017, ISO/IEC 27001:2022 |
| Equity | Require assessment of AI system accessibility and availability across different socioeconomic groups to ensure equitable access to AI-enhanced therapy services. Mandate provisions to prevent AI from widening healthcare disparities. | ISO/IEC 30071-1:2019, ISO/IEC Guide 71:2014 |
| Accessibility | Mandate that AI-assisted therapy platforms meet accessibility standards for users with disabilities, including compatibility with assistive technologies and multiple communication modalities. | ISO/IEC 30071-1:2019, WCAG 2.1 |
| Ethics | Establish an ethical framework for AI use in therapy, including principles for beneficence, non-maleficence, autonomy, and justice. Require ethics review for new AI implementations in mental health services. | IEEE 7000-2021, ISO/IEC 23894:2023 |
| Interpretability | Require AI systems to provide outputs in formats that licensed professionals can interpret and validate against clinical judgment. Mandate documentation of AI reasoning processes for clinical review. | ISO/IEC TR 29119-11:2020, ISO/IEC 23053:2022 |
| Fundamental Rights | Explicitly protect patients’ rights to human-provided therapy, informed choice about AI use, and recourse for AI-related harms. Establish clear liability frameworks for AI-assisted therapy outcomes. | ISO/IEC 23894:2023, IEEE 7000-2021 |
| Accuracy | Establish accuracy standards for AI systems used in therapy support, including validation against clinical guidelines and regular accuracy testing. Require disclosure of error rates and limitations to professionals and patients. | ISO/IEC 25022:2016, ISO/IEC TR 29119-11:2020 |
| Metrics | Establish measurable performance indicators for AI systems in therapy, including accuracy rates, error detection, and patient outcome tracking. Require regular reporting of AI system performance to regulatory authorities. | ISO/IEC 25022:2016, ISO/IEC 25023:2016 |
| Robust | Implement requirements for AI system robustness testing, including stress testing, adversarial testing, and failure mode analysis. Ensure AI systems maintain performance under various conditions and edge cases. | ISO/IEC TR 29119-11:2020, ISO/IEC 25010:2023 |

Summary

The Illinois Wellness and Oversight for Psychological Resources Act demonstrates a strong commitment to Accountability, Human-Centered design, Safety, and Security in AI use for therapy services. The Act effectively establishes human oversight requirements and enforcement mechanisms.

However, significant gaps exist in critical areas including Transparency, Consent, Privacy, Fairness, and Bias prevention. These gaps represent opportunities for future legislative enhancement to create a more comprehensive regulatory framework for AI in mental health services. The Act provides a solid foundation focused on professional oversight but would benefit from additional provisions addressing patient rights, data protection, and algorithmic accountability to fully align with international AI standards.

————————–

Scoring Methodology

This analysis applies the AI Life Cycle Core Principles Legislative Scoring Methodology (v7), which evaluates legislation using structured criteria, weighted scoring, and zero-score guardrails.

Evidence Components:

  • Keyword Evidence (10% weight): Distinct keyword matches from principle definitions, capped at saturation threshold of 5
  • Definition Alignment (20% weight): Semantic match between operative provisions and principle definitions
  • Obligations in Verified Sections (40% weight): Enforceable provisions using “shall,” “must,” “may not,” etc., capped at 2 sections
  • Enforcement Strength (30% weight): Enforcement mechanisms that apply to principle obligations

Scoring Formula:

Raw Score = 5 × (0.40 × O_evidence + 0.30 × E_p + 0.20 × D_evidence + 0.10 × K_evidence)

Scores are rounded half-up (4.5→5, 4.4→4) and capped at 1 if no operative provisions exist.

Zero-Score Guardrails:

  • Transparency: Forced to 0 if no written disclosure of AI use AND no disclosure of AI’s specific purpose
  • Consent: Forced to 0 if no requirement for patient consent for recordings/transcriptions or AI-assisted processing
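The formula, rounding rule, and caps above can be expressed compactly in code. This is a sketch under stated assumptions: the component values (O_evidence, E_p, D_evidence) are taken as already normalized to [0, 1], and how each is derived from the legislative text is a judgment the model makes, not shown here.

```python
import math

def raw_score(o_evidence: float, e_p: float, d_evidence: float,
              keyword_matches: int, has_operative_provisions: bool) -> int:
    """Sketch of the v7 legislative scoring formula. Components are
    assumed normalized to [0, 1]; keyword evidence saturates at 5."""
    # Keyword Evidence (10% weight), capped at the saturation threshold of 5
    k_evidence = min(keyword_matches, 5) / 5
    raw = 5 * (0.40 * o_evidence + 0.30 * e_p
               + 0.20 * d_evidence + 0.10 * k_evidence)
    # Round half-up (4.5 -> 5, 4.4 -> 4); note Python's built-in round()
    # uses half-to-even, so floor(x + 0.5) is used instead
    score = math.floor(raw + 0.5)
    # Cap at 1 when the act contains no operative provisions
    if not has_operative_provisions:
        score = min(score, 1)
    return score

# Example: strong obligations and enforcement, good definition alignment,
# saturated keywords -> raw = 5 * 0.95 = 4.75, rounded half-up to 5
print(raw_score(1.0, 1.0, 0.75, 5, True))  # -> 5
```

The zero-score guardrails are applied afterwards as overrides: for principles such as Transparency and Consent, the computed score is forced to 0 whenever the specified disclosure or consent provisions are absent, regardless of what the formula produces.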