No. 104: Embracing the Artificial Intelligence (‘AI’) Revolution: Spotlight on AI Regulation in Europe, the United Kingdom, and the United States


Publish Date:
July 18, 2023
Publication Title:
TTLF Working Papers
Stanford Law School
Working Paper
  • Diana Milanesi, Embracing the Artificial Intelligence (‘AI’) Revolution: Spotlight on AI Regulation in Europe, the United Kingdom, and the United States, TTLF Working Papers No. 104, Stanford-Vienna Transatlantic Technology Law Forum (2023).

Although the development of artificial intelligence (‘AI’) started decades ago, the exponential growth in computing power, the increased availability of data, and the progress in algorithms have only recently helped lower the barriers to entry for developing and using AI technologies and systems, thus opening a new era for AI-driven innovation.
AI has the capacity to complement, assist, and empower people in almost every field of human endeavor and is creating remarkable opportunities across our society and economy, with the potential to foster unprecedented developments and radically transform every sector in it – from banking and financial services, to healthcare, manufacturing, energy, agriculture, and education among others.
Across a variety of sectors, AI is being used to improve customer experience, deliver more intelligent offerings, and provide more resilient and tailored services and products to customers. More individuals and businesses are turning to AI technologies and systems to increase productivity, speed of execution, and efficiency, and to remain competitive in their markets. AI is also utilized to improve access to information and facilitate connectivity, thus helping overcome barriers such as accessibility and language. In addition, AI is being applied to address fundamental challenges in the scientific and medical fields, with the potential to revolutionize medical detection, diagnosis, and treatment of diseases, drug discovery and development, and clinical trials, and to significantly increase their accuracy and efficiency. Furthermore, AI is set to positively impact the future of mobility and can materially contribute to fighting cyber-attacks by improving the accuracy and speed of threat detection and incident response. Similarly, AI can help tackle climate change by accelerating emissions reduction, increasing energy efficiency, and improving the use of renewable energies.
And these are only a few examples of its advantages, as AI’s use cases continue to evolve and expand at an exceptional speed, and its value proposition is gaining momentum and is increasingly brought to life.
However, alongside the remarkable benefits that it offers, AI also poses novel challenges, amplifies existing risks, and creates new risks. Growing concerns from across the industry, academia, and the general public have arisen in connection with a lack of transparency around how certain AI systems make decisions and collect and process personal data, as well as various biases and discrimination which may be introduced at different stages of an AI system’s lifecycle. In addition, AI experts have highlighted the danger of AI technologies and systems being misused to spread fake news and online misinformation. They have also voiced caution about the risks that AI technologies may pose to financial services and markets if unleashed unfettered, including the risks of such technologies being abused to affect the integrity, price discovery, transparency, and fairness of markets, or in ways that could hinder industry competition or exacerbate market volatility. In more extreme scenarios, AI practitioners have warned that AI technologies and systems may be exploited to enable harmful behavior, jeopardize peace and security, and threaten human rights and safety, if they are designed and managed without effective guardrails and safeguards.
These issues, among others, call not only for broad conversation and animated debate on AI, but also for a decisive response and effective action. Moreover, they highlight the complicated ethical questions associated with the development and deployment of AI, which do not always have clear answers.
As is the case with many emerging technologies, the establishment of regulatory and compliance frameworks has lagged behind the rise of AI. To date, the development and deployment of AI technologies and systems have largely been unregulated. Certain existing regulatory provisions and technical standards, including data protection laws, have been interpreted and enforced to capture the multiple and rapidly evolving uses of AI. However, because they were first introduced for other purposes and were not explicitly written with AI in mind, the result is often a very uncertain, fragmented, and at times inconsistent patchwork of legal and regulatory requirements and standards that various actors in the AI ecosystem need to navigate. This, in turn, has the potential to undermine clarity, confidence, and trust in AI and may fail to prevent wrongful misuse, overuse, or harmful exploitations of AI technologies and systems.
Yet, things are set to change as over the past twelve months an increasing number of leading AI experts from the scientific community, academia, and industry have begun calling for more regulation, effective guardrails, and restrictions, as well as agreed-upon standards for the development and use of AI to prevent unwarranted harm. In response, governments and regulators around the world are now fiercely debating the pros and cons of regulating AI and have recently started taking action targeted at AI technologies and systems. These actions include advancing new AI-related legislative and policy proposals, considering new AI-related technical and governance standards, issuing multiple pieces of guidance on AI, launching public consultations, establishing expert working groups, and promoting collaborative research with industry and academia to deepen the understanding of the impact of AI, promote a better appreciation of its associated benefits, and enable more effective management of its risks.
Among them, legislators and policymakers in Europe, the United Kingdom, and the United States have been ramping up their activities to develop blueprints to govern AI and are now shaping their approach to AI regulation. Efforts to regulate AI across these three geographic areas are ongoing at the date of this publication, yet the approaches being taken appear to differ significantly. Although there seems to be a consensus that existing rules and standards are insufficient, and at times even inadequate, to deal with the complexities, challenges, and risks that AI poses, there is also considerable divergence on what new AI regulatory frameworks should look like.
Although there is of course no single or perfect approach, it is likely that effective AI regulatory frameworks, policies, and standards will need to be proportionate and forward-looking to keep pace with the incredible speed of development of AI and help AI reach its full potential, while also mitigating its risks and addressing its critical challenges. In addition, they will likely require calibration of relevant obligations between different actors and careful risk assessments of AI systems and technologies, with an enhanced focus on outcomes, governance, and processes.
Thoughtful AI regulatory approaches have the potential to significantly influence the speed of further AI-driven advancements, propel human-centric innovation, and positively shape the trajectory of AI's transformation of the global economy and society. On the other hand, overly prescriptive AI regulatory frameworks may have unintended consequences, including stifling innovation and inhibiting further growth. Getting the balance right is necessary to support a thriving AI ecosystem, while also ensuring the safe and responsible development and use of AI.
Because of the complexities of AI and its pervasive impact and international reach, AI commands a broadly shared sense of responsibility, accountability, and leadership alignment. It is therefore imperative that governments and regulators strengthen their international efforts and cooperation, and work closely with a large, diverse, and inclusive community of AI practitioners across industry, academia, and civil society to address AI's challenges more effectively and fully harness the power of AI.
While governments and regulators are jostling to take the lead on AI regulation, many renowned AI experts and leaders in industry and academia are taking steps to foster responsible AI through a number of initiatives, including by pioneering best practices in the design and deployment of AI systems, prioritizing AI research and development that assists and benefits people and society, articulating and implementing effective AI-related processes and governance, encouraging responsible behavior, and establishing principles and industry standards to guide further advancements and groundbreaking work in AI.
Looking ahead, if designed and implemented boldly and responsibly, AI-driven innovation can be extremely impactful and has the capacity to positively revolutionize the lives of millions of people around the world. The coming months will provide more opportunities for dialogue, debate, and action on AI. All of the above stakeholders are called upon to continue to engage constructively and play a critical role in helping the global economy capitalize on the breakthrough of AI to fuel further growth and innovation, and in supporting our society in embracing AI technologies and systems safely, and in ways that respect fundamental rights and protect shared values.