How Disruptive Is DeepSeek? Stanford HAI Faculty Discuss China’s New Model
(Full article originally published by Stanford University Human-Centered Artificial Intelligence on February 13, 2025.)

In recent weeks, the emergence of China’s DeepSeek — a powerful and cost-efficient open-source language model — has stirred considerable discourse among scholars and industry researchers. At the Stanford Institute for Human-Centered AI (HAI), faculty are examining not merely the model’s technical advances but also the broader implications for academia, industry, and society globally.
Central to the conversation is how DeepSeek has challenged preconceived notions about the capital and computational resources necessary for serious advancements in AI. The clever engineering and algorithmic innovation demonstrated by DeepSeek may empower less-resourced organizations to compete on meaningful projects. That engineering, combined with open-source weights and a detailed technical paper, fosters the kind of open environment that has driven technical advances for decades.
While the open-weight model and detailed technical paper are a step forward for the open-source community, DeepSeek is noticeably opaque when it comes to privacy protection, data sourcing, and copyright, adding to concerns about AI’s impact on the arts, regulation, and national security. The fact that DeepSeek was released by a Chinese organization emphasizes the need to think strategically about regulatory measures and geopolitical implications within a global AI ecosystem where not all players have the same norms and where mechanisms like export controls do not have the same impact.
DeepSeek has reignited discussions of open source, legal liability, geopolitical power shifts, privacy concerns, and more. Below, Julian Nyarko, Professor of Law at Stanford Law School and Stanford HAI Associate Director, offers his perspective on what DeepSeek means for the field of artificial intelligence and for society at large.

LLMs are a “general purpose technology” used in many fields. Some companies create these models, while others use them for specific purposes. A key debate right now is who should be liable for harmful model behavior — the developers who build the models or the organizations that use them. In this context, DeepSeek’s new models, developed by a Chinese startup, highlight how the global nature of AI development could complicate regulatory responses, especially when different countries have distinct legal norms and cultural understandings. While export controls have been thought of as an important tool to ensure that leading AI implementations adhere to our laws and value systems, the success of DeepSeek underscores the limitations of such measures when competing nations can develop and release state-of-the-art models (somewhat) independently. The open-source nature of DeepSeek’s releases further complicates the question of legal liability. With the models freely available for modification and deployment, the idea that model developers can and will effectively address the risks posed by their models may become increasingly unrealistic. Instead, regulatory focus may need to shift toward the downstream consequences of model use — potentially placing more responsibility on those who deploy the models.