Governing AI: A General Perspective on the Proposed Brazilian Artificial Intelligence Bill of Law

(This article was published on April 4, 2023, in The AI Journal. See the original here.)

There are elements of artificial intelligence (“AI”) regulation that must, naturally, be localized and customized to a given jurisdiction.  And there are elements that are universal.

Although commissioned, or prompted, by the Brazilian Senate’s recent AI Bill of Law, this Note is centered on the latter—the “universal” or international elements.  We all face the same dilemma in regulating a quicksilver-like object which we can neither fully see, nor readily define, nor entirely capture.

There is no bottle to put the genie back into.  Brazilians—like people from all nations—will continue to make copious and increasing use of AI products, effects, and detritus (secondary applications) of all kinds—whether overtly and explicitly, or in the shadow of the law.  The technology’s power, along with its social effect, will generally compound (increase at a non-linear rate); and it will do so in highly novel ways, good and bad.

Thus, as with certain astronomical phenomena, today's compounding analytic engines constitute a novel phenomenon beyond which we cannot see.  Humans have never had automated systems this capable.  (And, again, that novelty compounds.)  We are confronted with a kind of "AI Event Horizon".

Because we cannot see what AI will become, just around the corner, neither can we see, precisely, the best way to regulate it.  We are trying to put a frame around something we cannot see.

But humility is its own razor, and it may help guide the Senate's work.

I.  We can leverage and transcribe existing legal precedents onto new technologies as they arise.  (In other words, we can use adaptive and incremental common law, applying new and existing legal codes.)

II.  We can leverage professional and civil society groups to help create an AI auditing ecosystem—one which supports the intent of the Senate while maintaining the economic efficiency and adaptability of law and of secondary, implementing codes.  National law can create broad principles and boundaries, while empowered and managed technical groups implement legislative intent with speed, scale, and agility.  Nation states frequently do this with great effect and efficiency, in everything from financial and brokerage licensing to electrical and building safety codes.

III.  The Senate, and other branches, should finance, develop, and leverage public and private "Governance AI" to balance the speed and scale of commercial and civil AI.  This can be an efficient and socially productive means of regulating the AI explosion.

BACKGROUND DEFINITIONS

Before jumping into specific recommendations and concepts, it would be wise to lay out some background definitions.  Not everyone will agree with the below.  Some will disagree vehemently.  But—as with a merger or other complex contract with defined terms—definitions may make the subsequent analysis easier.

What is AI?  AI is, essentially, software that does analysis.

Law: The art of regulating human behavior, state, and relations.

Engineering: The art of getting things done.

Legal Engineering: The art of getting legal things done, and things done legally.

AI Event Horizon:  A metaphor for the difficulty of predicting (and regulating) the effects of a novel, compounding intelligence phenomenon.

For all of the above, see generally: On Legal AI, Part I, Chapter 5.

1.  The Present Regulatory Dilemma

Brazil already regulates various forms of analysis in different contexts—from housing, to medicine, to finance, etc., whether regarding discrimination, proportionality, or other metrics and legal filters.  We need not invent new wheels for every machination.  Indeed, to some extent, we can use the "dynamic-steady state" metaphor for adapting existing systems to new phenomena.

The relatively new and accelerating problem with AI relates to the speed and scale at which it conducts analysis (whether such analysis be "generative", informationally "extractive", or "predictive").  Manual analysis will never keep up.  For this reason, the infrastructure of legal regulation—not the content (which remains, as always, as the people will it, via the Senate and other instruments of government)—needs to mirror the speed and scale of AI.

There is only one way to do that: by leveraging AI to regulate AI.  Individual human regulators, acting alone and engaging in manual regulation of AI events, are likely to fail.  Creating AI to help audit and regulate AI should be required on three levels:

(1) Implementers.  Those creating AI for practical purposes (e.g., commercial and civil society AI used by consumers).

(2) Audit.  Independent audit groups—with professional licensing requirements, to forcefully encourage independence and ethical auditing behavior—should be empowered to audit the algorithmic and implementation details of implementers.  By using technology themselves, plus human cross-checks and federal disclosure requirements (see, e.g., securities laws in Brazil, the US, and elsewhere), legal-technical auditors can analyze commercial AI activity with speed, diligence, and acuity.  (The author suggests requiring that attorneys be part of these auditing groups, as, traditionally, such requirements have helped deter corruption and financial abuse in the US and elsewhere.  Nothing is a guarantee, naturally.  And the auditors themselves will need monitoring and accountability.)

(3) Government.  Government entities themselves will need AI tools to audit legal compliance—either because independent auditors, implementers, or others have failed to ensure compliance; or because the Senate and other bodies are truly dealing with novel and material issues that require government investigation.

This nested, overlapping series of Governance AIs—and the Legal Engineering control systems embedded in AI systems by all implementers—has a chance of modulating and adaptively regulating AI implementations and secondary phenomena.  (Remember that implementers can evade "AI"-fixated laws merely by adding de minimis human involvement before the analysis gets to humans.  There may be other evasions that the Senate bill and other regulatory models will need to adapt to.)  Government AI can operate selectively, where needed.  Independent AI auditors (using legal AI) may make periodic, regular compliance tests painless and relatively cheap for the vast majority of AI implementers, of all sizes.  Implementer control requirements—as alluded to in the present bill—can make AI development both safer and more productive (for both the implementer and the ultimate clients).

The reasons that this nested system of regulatory controls may work better than, say, the algorithmic repository approach of the People's Republic of China ("PRC") are several.  First, the PRC's requirement that all AI implementers deposit their algorithms (that is, the heart of their AI systems) into a government AI bank is a taking on a massive scale, albeit of trade secrets.  It puts the entire business or organization of every depositor at risk.  A repository of that scale would be enormously valuable to the government, and to any government agent.  Corruption, misuse, and "algorithmic embezzlement" may follow.  Indeed, all government agents engaged on the repository side would gain enormously valuable, inextractable competitive know-how about these systems.  Misuse is inevitable.  Second, algorithmic deposits do not necessarily tell the government how the AI is being used.  The same algorithm may be used in both a discriminatory and a non-discriminatory manner, depending on how it is applied, and on which data feeds have been "silenced" or activated.  A savvy independent auditor, experienced in both the law and AI engineering, could pick this up.
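
To illustrate why a deposited algorithm alone reveals so little, consider a minimal, purely hypothetical sketch (the function and feed names below are invented for illustration, not drawn from any actual system or standard):

```python
# Hypothetical sketch: the same scoring algorithm, applied two ways.
# Whether the deployment is discriminatory depends not on the deposited
# algorithm itself, but on which data feeds are activated at runtime.

def credit_score(features: dict[str, float], weights: dict[str, float]) -> float:
    """A generic weighted-sum scorer: identical in both deployments."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

applicant = {"income": 0.8, "debt_ratio": 0.3, "neighborhood_code": 0.9}

# Deployment A: the protected-proxy feed is "silenced" (weight zero).
lawful_weights = {"income": 1.0, "debt_ratio": -0.5, "neighborhood_code": 0.0}

# Deployment B: the very same algorithm, with the proxy feed activated.
discriminatory_weights = {"income": 1.0, "debt_ratio": -0.5, "neighborhood_code": -0.7}

print(credit_score(applicant, lawful_weights))          # ~0.65
print(credit_score(applicant, discriminatory_weights))  # ~0.02

# An inspector reading only the deposited algorithm (credit_score) sees
# nothing wrong in either case; only the live configuration reveals the
# difference. Hence the need to audit implementations, not just code.
```

The deposited artifact is identical in both deployments; only the runtime configuration, which a live audit can inspect and a static repository cannot, distinguishes the lawful from the unlawful use.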

For this reason, my group would recommend that the Brazilian government legally and financially promote and/or sponsor a three-tiered, nested and overlapping system of Governance AI—Implementer, Independent Auditor, Government & Standards-Setting.  Creating such an ecosystem need not be expensive.  (And it could be radically less expensive, on the compliance side, than regulatory alternatives.)  First, national technical-legal standards can reduce the costs of compliance enormously—much like well-designed and locally/professionally enforced electrical codes.  Second, good software—while rare—has a high fixed cost to develop but a low marginal cost to re-use, approaching zero for each additional use or copy.  The government could even enable the creation of national source code to help AI implementers comply efficiently—with penalties for spoofing, and with security controls.  Brazil has a vast and highly sophisticated Computer Science and legal ecosystem that can add academic rigor, and broad, independent testing, at civil scale.
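
As a purely hypothetical sketch of the "national source code" idea (every name and check below is invented for illustration; no such national codebase exists), such shared code might expose a single, reusable compliance interface that implementers run against their own systems:

```python
# Hypothetical sketch of a shared, government-published compliance module.
# All names (ComplianceReport, run_basic_checks, the check list) are invented
# for illustration only.
from dataclasses import dataclass, field

@dataclass
class ComplianceReport:
    system_name: str
    passed: bool
    findings: list[str] = field(default_factory=list)

def run_basic_checks(system_name: str, active_feeds: set[str],
                     prohibited_feeds: set[str],
                     human_review_enabled: bool) -> ComplianceReport:
    """Run a minimal battery of checks an implementer could self-apply,
    and an independent auditor could re-run verbatim to verify."""
    findings = []
    illegal = active_feeds & prohibited_feeds
    if illegal:
        findings.append(f"Prohibited data feeds active: {sorted(illegal)}")
    if not human_review_enabled:
        findings.append("No human review step configured for adverse decisions")
    return ComplianceReport(system_name, passed=not findings, findings=findings)

# Example: an implementer self-test, re-runnable by an auditor or regulator.
report = run_basic_checks(
    system_name="loan-scorer-v2",
    active_feeds={"income", "debt_ratio", "neighborhood_code"},
    prohibited_feeds={"neighborhood_code", "race_proxy"},
    human_review_enabled=True,
)
print(report.passed)    # False: a prohibited proxy feed is active
print(report.findings)
```

Because the same code could run identically at each tier (implementer self-tests, independent audits, and government spot checks), the marginal cost of each additional compliance run approaches zero, as the author notes above.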

A three-tiered Governance AI ecosystem is the primary recommendation of this Note.  Such an ecosystem can apply and realize the Senate's intent far more adeptly, adaptively, and at lower cost, in complement to legislation.

ACKNOWLEDGEMENTS

Special thanks to Professor Juliano Maranhão (Universidade de São Paulo) and his team, for prompting the original comment, and, particularly to Anthony Novaes (Universidade Presbiteriana Mackenzie) for coordinating and translating this Note, and for his leadership and scholarship generally.

Author

Joshua Walker is the author of “On Legal AI” [Full Court Press, US, 2019; Brazilian Portuguese version: Thomson Reuters Brazil, 2021] and the CEO of System.Legal, a full-service legal technology and AI consultancy for the private sector, attorneys, civil society, and governments globally. Previously, he co-founded, architected, and led Lex Machina, the leading analytic platform for US litigation—relied upon by all three branches of the US government, attorneys, press, and myriad scholars worldwide. Walker also co-founded CodeX (Stanford Center for Legal Informatics). He received his undergraduate degree, m.c.l., from Harvard College and his J.D. from The University of Chicago Law School, where he was a Cornerstone Scholar.