Large Language Models as Lobbyists

John Nay
Center for Legal Informatics, Stanford University

Part of the law-making process currently – like it or not – involves human lobbying. ChatGPT and Large Language Models more generally have demonstrated the rapidly advancing capabilities of AI. A novel concern is that, with these additional advancements in AI and further deployments of AI systems (even without the agents having instrumental power-seeking goals per se) influencing law through lobbying may be the first crack in AI influence on public policy.

Initially, AI is being used to simply augment human lobbyists. However, there may be a slow creep of less and less human oversight over automated assessments of the pros and cons of policy ideas, and the AI-generated written communication to regulatory agencies and Congressional staffers.

The most ambitious goal of research at the intersection of AI and law should be to computationally encode and embed the generalizability of existing legal concepts and standards into AI. We should stop short of AI “making law.” But how do we define “making law”? Where should we draw the line between human-driven and AI-driven policy influence? Where is the boundary between a helpful tool and a key input to the law-making process? And, depending on the answers to those questions, how should we amend lobbying disclosure laws?

These are all open questions, but we wanted to first see how close we are to AI lobbyists being a real concern.

An Empirical Test of AI as Lobbyist

We used autoregressive large language models (LLMs, the same type of model behind the now wildly popular ChatGPT) to systematically conduct the following steps. (The full code is available at this GitHub link: https://github.com/JohnNay/llm-lobbyist.)

  1. Summarize official U.S. Congressional bill summaries that are too long to fit into the context window of the LLM so the LLM can conduct steps 2 and 3.
  2. Using either the original official bill summary (if it was not too long), or the summarized version:
    1. Assess whether the bill may be relevant to a company based on a company’s description in its SEC 10K filing.
    2. Provide an explanation for why the bill is relevant or not.
    3. Provide a confidence level to the overall answer.
  3. If the bill is deemed relevant to the company by the LLM, draft a letter to the sponsor of the bill arguing for changes to the proposed legislation.

The model is provided with the following data, which is embedded in the prompts programmatically:

  • Official title of bill {official_title}
  • Official (or model-generated if too long) summary of bill {summary_text}
  • Official subjects of bill {subjects}
  • Company name {company_name}
  • Company business description {business_description} (the business description in the company’s SEC Form 10-K filing)

We expect much higher accuracy of LLM predictions if we were to provide it with more data about a bill, and especially if we provide it with more data about a company. This paper was focused on the minimal amount of data a model could leverage in order to compare across LLMs.

Here is the prompt provided to the model for each prediction:

You are a lobbyist analyzing Congressional bills for their potential impacts on companies.

Given the title and summary of the bill, plus information on the company from its 10K SEC filing, it is your job to determine if a bill is at least somewhat relevant to a company (in terms of whether it could impact the company if it was later enacted).

Official title of bill: {official_title}

Official summary of bill: {summary_text}

Official subjects of bill: {subjects}

Company name: {company_name}

Company business description: {business_description}

Is this bill potentially relevant to this company?

Answer in this format:

ANSWER: ‘YES’ or ‘NO’ (use all caps). EXPLANATION: the step-by-step reasoning you undertook to formulate a response. CONFIDENCE: integer between 0 and 100 for your estimate of confidence in your answer (1 is low confidence and 99 is high)

Always predicting that a bill is not relevant to a company leads to accuracy of 70.9% (n = 485) on our dataset where the labels are whether a given company is relevant to a given proposed U.S. federal Congressional bill. GPT-3.5 (text-davinci-003) obtains an accuracy of 75.1% (n = 485). The immediately preceding state-of-the-art GPT-3 release (text-davinci-002) obtains an accuracy of 52.2% (n = 485). text-davinci-002 was state-of-the-art on most natural language related benchmark tasks until text-davinci-003 was released a few weeks ago.

These results suggest that, as models improve, performance on corporate lobbying related tasks could continue to rapidly improve.

The model is also relatively well calibrated in its confidence assessments. The accuracy of text-davinci-003 on predictions where its confidence was greater than “90” increases to 79% (n = 433; this only eliminates a small number of predictions).

In addition to predicting relevance and a confidence score, the prompt also elicits an explanation of the model’s relevance decision. Here is an example of the LLM generated response:

ANSWER: YES.

EXPLANATION: Alkermes Plc is a biopharmaceutical company that develops and commercializes products designed to address unmet medical needs of patients in major therapeutic areas, including addiction and schizophrenia. This bill requires the Centers for Medicare & Medicaid Services (CMS) to negotiate with pharmaceutical companies regarding prices for drugs covered under the Medicare prescription drug benefit, which could potentially impact Alkermes Plc’s products.

CONFIDENCE: 95

Finally, for all bills deemed relevant, the LLM is prompted to write a letter on behalf of the company. Here is an example letter:

Dear [Sponsor of the Bill],

I am writing on behalf of Alkermes Plc, a fully integrated, global biopharmaceutical company that applies its scientific expertise and proprietary technologies to research, develop and commercialize pharmaceutical products that are designed to address unmet medical needs of patients in major therapeutic areas.

We are writing to express our support for the Medicare Negotiation and Competitive Licensing Act of 2019. We believe that this bill is an important step in ensuring that Medicare beneficiaries have access to the medications they need at a price they can afford.

We are particularly supportive of the provisions in the bill that would require the Centers for Medicare & Medicaid Services (CMS) to negotiate with pharmaceutical companies regarding prices for drugs covered under the Medicare prescription drug benefit. We believe that this will help to ensure that the prices of these drugs are fair and reasonable.

We are also supportive of the provisions in the bill that would allow for competitive licensing of drugs if the CMS is unable to negotiate the price of a drug. This will help to ensure that the prices of these drugs are kept in check and that Medicare beneficiaries have access to the medications they need.

At Alkermes, we develop and commercialize products designed to address the unmet needs of patients suffering from addiction and schizophrenia. We have two key marketed products, ARISTADA and VIVITROL, which are used to treat these conditions. We believe that the provisions in the bill will help to ensure that our products are available to Medicare beneficiaries at a price they can afford.

We would like to suggest that the bill be amended to include provisions that would provide additional incentives for pharmaceutical companies to negotiate with the CMS. We believe that this would help to ensure that the prices of drugs are kept in check and that Medicare beneficiaries have access to the medications they need.

We thank you for your consideration and look forward to working with you to ensure that the Medicare Negotiation and Competitive Licensing Act of 2019 is passed in its amended form.

Sincerely,

[Name]
General Counsel
Alkermes Plc

Opportunities and Risks

There are (at least) two potential upsides of this advancement in AI as lobbyist.

  1. It may reduce human time spent on rote tasks, freeing up human effort for higher-level tasks such as strategizing on the best means to implement policy goals in legislation and regulation.
  2. It may reduce the costs of lobbying-related activities in a way that makes them differentially more affordable to non-profits and individual citizens relative to well-funded organizations, which could “democratize” some aspects of influence (arguably donations to campaigns are more influential than any natural-language-based task discussed here).

There are obvious potential downsides if AI systems develop instrumental power-seeking goals and use lobbying as a means to effectuate misaligned policies. The potential, less obvious, downside we focus on here is that extended LLM capabilities may eventually enable AI systems to influence public policy toward outcomes that are not reflective of citizen’s actual preferences. This does not imply the existence of a strongly goal-directed agentic AI. Rather, this may be a slow drift, or otherwise emergent phenomena. AI lobbying activities could, in an uncoordinated manner, nudge the discourse toward policies that are unaligned with what traditional human-driven lobbying activities would have pursued.

This is problematic in itself, but also insofar as it disrupts the process of the only democratically determined societal values knowledge base (law) informing AI what not to do.

Policy-making embeds human values into rules and standards. Legislation is currently largely reflective of citizen beliefs. The second-best source of citizen attitudes is arguably a poll, but polls are not available at the local level, are only conducted on mainstream issues, and the results are highly sensitive to their wording and sampling techniques. Legislation expresses higher fidelity, more comprehensive, and trustworthy information. Legislation and associated agency rule-making also express a significant amount of information about the risk preferences and risk tradeoff views of citizens. The cultural process of prioritizing risks is reflected in legislation and its subsequent implementation in regulation.

In many ways, public law provides the information AI systems need for societal alignment. However, if AI significantly influences the law itself, the only available democratically legitimate societal-AI alignment process would be corrupted. Initially, AI is being used to simply augment human lobbyists. But as the capabilities of artificial intelligence are rapidly shifting underneath the policy-making process, we urgently need public dialogue about where to draw the boundary of artificial influence.