Policy Practicum: AI For Legal Help
Current Offerings
Policy Practicum: AI For Legal Help (809E): AI for Legal Help is a two-quarter, hands-on course where law, design, computer science, and policy students team up with legal aid organizations and court self-help centers to take on one of the biggest challenges in tech today: using AI to expand access to justice. Students will work directly with real-world partners to uncover where AI could make legal services faster, more scalable, and more effective--while ensuring it's safe, ethical, and grounded in the realities of public service. From mapping workflows to spotting opportunities, from creating benchmarks and datasets to designing AI "co-pilots" or system proposals, students will help shape the future of AI in the justice system. Along the way, they will learn how to evaluate whether AI is the right fit for a task, design human--AI teams that work, build privacy-forward and trustworthy systems, and navigate the policy and change-management challenges of introducing AI into high-stakes environments. By the end, policy lab teams will have produced a substantial, real-world deliverable--such as a UX research report, benchmark dataset, evaluation rubric, system design proposal, or prototype concept--giving students practical experience in public interest technology, AI system design, and leadership engagement. This is the opportunity to create AI that works for people, in practice, where it's needed most. Students from across the university are invited to apply via Consent of Instructor. Students may enroll in up to two quarters, ideally consecutively, so that the research teams are consistent. This is a Cardinal Course certified by the Haas Center for Public Service. Cross-listed with the d.school (DESIGN 809E).
Past Offerings
Policy Practicum: AI For Legal Help (809E): AI for Legal Help is a two-quarter, hands-on course where law, design, computer science, and policy students team up with legal aid organizations and court self-help centers to take on one of the biggest challenges in tech today: using AI to expand access to justice. Students will work directly with real-world partners to uncover where AI could make legal services faster, more scalable, and more effective--while ensuring it's safe, ethical, and grounded in the realities of public service. From mapping workflows to spotting opportunities, from creating benchmarks and datasets to designing AI "co-pilots" or system proposals, students will help shape the future of AI in the justice system. Along the way, they will learn how to evaluate whether AI is the right fit for a task, design human--AI teams that work, build privacy-forward and trustworthy systems, and navigate the policy and change-management challenges of introducing AI into high-stakes environments. By the end, policy lab teams will have produced a substantial, real-world deliverable--such as a UX research report, benchmark dataset, evaluation rubric, system design proposal, or prototype concept--giving students practical experience in public interest technology, AI system design, and leadership engagement. This is the opportunity to create AI that works for people, in practice, where it's needed most. Students from across the university are invited to apply via Consent of Instructor. Students may enroll in up to two quarters, ideally consecutively, so that the research teams are consistent. This is a Cardinal Course certified by the Haas Center for Public Service. Cross-listed with the d.school (DESIGN 809E).
Sections
- 2025-2026 Winter (Schedule No Longer Available)
Policy Practicum: AI For Legal Help (809E): Can AI increase access to justice by helping people resolve their legal problems in more accessible, equitable, and effective ways? What risks does AI pose for people seeking legal guidance, and what technical and policy guardrails should mitigate them? In this course, students will work in teams, each of which will have a partner organization from the justice system with an interest in using AI to improve its services. Partner organizations include frontline legal aid and court groups seeking to improve their ability to help people dealing with evictions, criminal justice problems, debt collection, domestic violence, and other legal problems. Using human-centered design practices, students will help their partners scope out exactly where AI and other interventions might serve both providers and clients, what quality benchmarks should guide any new intervention, and what datasets and other projects could jumpstart a new technology initiative. Building on this design work, teams will establish guidelines to ensure that any new AI project is centered on the needs of people and developed with a careful eye toward ethical and legal principles. This multi-stakeholder policy research will then turn toward creative, design-driven technology development. Student teams will build a demonstration project to determine whether AI can accomplish the legal tasks they have identified in their design research. They will consult with subject matter experts to evaluate the AI's performance and go through iterative development cycles to refine their intervention to better meet the quality benchmarks they have established. Student teams will present their design research and technical system demo to their partners and a broader audience for critical discussion of next steps for the projects.
Teams might continue working on these efforts after the quarter, or might help their partner move toward other sustainable models for further developing, deploying, and maintaining AI-powered projects that enhance their legal services. What students learn about responsible AI development with public interest partners will be useful to others working on AI for community agencies, government and civic tech, and high-stakes legal services. Students will be required to complete ethics training for human subjects research, which takes approximately two hours through the online CITI program. The students' final report will contribute to policy and technology discussions about the principles, benchmarks, and risk typologies that can guide the ethical development of AI platforms for access to justice. Students may, but are not required to, enroll in both the Fall and Winter quarters of the class. The class may be extended to Spring quarter, depending on the issues raised. Elements used in grading: Attendance, Performance, Class Participation, and Written Assignments. CONSENT APPLICATION: To apply for this course, students must complete and submit a Consent Application Form available from the SLS Registrar at https://registrar.law.stanford.edu/. Cross-listed with the d.school (DESIGN 809E).
Sections
- 2024-2025 Spring (Schedule No Longer Available)
Policy Practicum: AI For Legal Help (809E): Can AI increase access to justice by helping people resolve their legal problems in more accessible, equitable, and effective ways? What risks does AI pose for people seeking legal guidance, and what technical and policy guardrails should mitigate them? In this course, students will work in teams, each of which will have a partner organization from the justice system with an interest in using AI to improve its services. Partner organizations include frontline legal aid and court groups seeking to improve their ability to help people dealing with evictions, criminal justice problems, debt collection, domestic violence, and other legal problems. Using human-centered design practices, students will help their partners scope out exactly where AI and other interventions might serve both providers and clients, what quality benchmarks should guide any new intervention, and what datasets and other projects could jumpstart a new technology initiative. Building on this design work, teams will establish guidelines to ensure that any new AI project is centered on the needs of people and developed with a careful eye toward ethical and legal principles. This multi-stakeholder policy research will then turn toward creative, design-driven technology development. Student teams will build a demonstration project to determine whether AI can accomplish the legal tasks they have identified in their design research. They will consult with subject matter experts to evaluate the AI's performance and go through iterative development cycles to refine their intervention to better meet the quality benchmarks they have established. Student teams will present their design research and technical system demo to their partners and a broader audience for critical discussion of next steps for the projects.
Teams might continue working on these efforts after the quarter, or might help their partner move toward other sustainable models for further developing, deploying, and maintaining AI-powered projects that enhance their legal services. What students learn about responsible AI development with public interest partners will be useful to others working on AI for community agencies, government and civic tech, and high-stakes legal services. Students will be required to complete ethics training for human subjects research, which takes approximately two hours through the online CITI program. The students' final report will contribute to policy and technology discussions about the principles, benchmarks, and risk typologies that can guide the ethical development of AI platforms for access to justice. Students may, but are not required to, enroll in both the Fall and Winter quarters of the class. The class may be extended to Spring quarter, depending on the issues raised. Elements used in grading: Attendance, Performance, Class Participation, and Written Assignments. CONSENT APPLICATION: To apply for this course, students must complete and submit a Consent Application Form available from the SLS Registrar at https://registrar.law.stanford.edu/. Cross-listed with the d.school (DESIGN 809E).
Sections
- 2024-2025 Winter (Schedule No Longer Available)
Policy Practicum: AI For Legal Help (809E): Can AI increase access to justice by helping people resolve their legal problems in more accessible, equitable, and effective ways? What risks does AI pose for people seeking legal guidance, and what technical and policy guardrails should mitigate them? In this course, students will work in teams, each of which will have a partner organization from the justice system with an interest in using AI to improve its services. Partner organizations include frontline legal aid and court groups seeking to improve their ability to help people dealing with evictions, criminal justice problems, debt collection, domestic violence, and other legal problems. Using human-centered design practices, students will help their partners scope out exactly where AI and other interventions might serve both providers and clients, what quality benchmarks should guide any new intervention, and what datasets and other projects could jumpstart a new technology initiative. Building on this design work, teams will establish guidelines to ensure that any new AI project is centered on the needs of people and developed with a careful eye toward ethical and legal principles. This multi-stakeholder policy research will then turn toward creative, design-driven technology development. Student teams will build a demonstration project to determine whether AI can accomplish the legal tasks they have identified in their design research. They will consult with subject matter experts to evaluate the AI's performance and go through iterative development cycles to refine their intervention to better meet the quality benchmarks they have established. Student teams will present their design research and technical system demo to their partners and a broader audience for critical discussion of next steps for the projects.
Teams might continue working on these efforts after the quarter, or might help their partner move toward other sustainable models for further developing, deploying, and maintaining AI-powered projects that enhance their legal services. What students learn about responsible AI development with public interest partners will be useful to others working on AI for community agencies, government and civic tech, and high-stakes legal services. Students will be required to complete ethics training for human subjects research, which takes approximately two hours through the online CITI program. The students' final report will contribute to policy and technology discussions about the principles, benchmarks, and risk typologies that can guide the ethical development of AI platforms for access to justice. Students may, but are not required to, enroll in both the Fall and Winter quarters of the class. The class may be extended to Spring quarter, depending on the issues raised. Elements used in grading: Attendance, Performance, Class Participation, and Written Assignments. CONSENT APPLICATION: To apply for this course, students must complete and submit a Consent Application Form available from the SLS Registrar at https://registrar.law.stanford.edu/. Cross-listed with the d.school (DESIGN 809E).
Sections
- 2024-2025 Autumn (Schedule No Longer Available)
Policy Practicum: AI For Legal Help (809E): The policy client for this project is the Legal Services Corporation (https://www.lsc.gov/). The project works closely with the Legal Services Corporation's Technology Initiative Grant Program (https://www.lsc.gov/grants/technology-initiative-grant-program) to research how the public interacts with AI platforms when seeking legal assistance, and to develop a strategy for mitigating risks, ensuring quality, and enhancing access to justice on these platforms. Can AI increase access to justice by helping people resolve their legal problems in more accessible, equitable, and effective ways? What risks does AI pose for people seeking legal guidance, and what technical and policy guardrails should mitigate them? In this course, students will conduct research to identify key opportunities and risks around the public's use of AI to deal with common legal problems like bad living conditions, possible evictions, debt collection, divorce, or domestic violence. Especially with the launch of new AI platforms like ChatGPT, Google Bard, and Bing Chat, more people may turn to generative AI platforms for guidance on their legal rights, options, and procedures. How can technology companies, legal institutions, and community groups responsibly advance AI solutions to benefit people in need? Students will explore these questions about AI and access to justice through hands-on interviews, fieldwork, and design workshops with stakeholders throughout the justice system. They will run interview sessions online and on-site at courts to hear from community members about whether they would use AI for legal help and to brainstorm how an ideal AI system would behave. Students will also observe how participants use AI to respond to a fictional legal problem, to assess how the AI performs and to understand how people regard the AI's guidance.
Students will be required to complete ethics training for human subjects research, which takes approximately two hours through the online CITI program. They will then conduct community interviews according to an approved IRB research protocol. Students will synthesize what they learn from these community interviews, observations, and brainstorming sessions in a presentation to legal and technical experts. They will hold a multi-stakeholder workshop to explore how their findings may contribute to technical and legal projects that develop responsible, human-centered AI in the legal domain. Students will develop skills in facilitating interdisciplinary policy discussions about how technology and regulation can be developed alongside each other. The students' final report will contribute to policy and technology discussions about the principles, benchmarks, and risk typologies that can guide the ethical development of AI platforms for access to justice. Students are asked to enroll in both the Fall and Winter quarters of the class. The class may be extended to Spring quarter, depending on the issues raised. Elements used in grading: Attendance, Performance, Class Participation, and Written Assignments. CONSENT APPLICATION: To apply for this course, students must complete and submit a Consent Application Form available from the SLS Registrar at https://registrar.law.stanford.edu/.
Sections
- 2023-2024 Winter (Schedule No Longer Available)
Policy Practicum: AI For Legal Help (809E): The policy client for this project is the Legal Services Corporation (https://www.lsc.gov/). The project works closely with the Legal Services Corporation's Technology Initiative Grant Program (https://www.lsc.gov/grants/technology-initiative-grant-program) to research how the public interacts with AI platforms when seeking legal assistance, and to develop a strategy for mitigating risks, ensuring quality, and enhancing access to justice on these platforms. Can AI increase access to justice by helping people resolve their legal problems in more accessible, equitable, and effective ways? What risks does AI pose for people seeking legal guidance, and what technical and policy guardrails should mitigate them? In this course, students will conduct research to identify key opportunities and risks around the public's use of AI to deal with common legal problems like bad living conditions, possible evictions, debt collection, divorce, or domestic violence. Especially with the launch of new AI platforms like ChatGPT, Google Bard, and Bing Chat, more people may turn to generative AI platforms for guidance on their legal rights, options, and procedures. How can technology companies, legal institutions, and community groups responsibly advance AI solutions to benefit people in need? Students will explore these questions about AI and access to justice through hands-on interviews, fieldwork, and design workshops with stakeholders throughout the justice system. They will run interview sessions online and on-site at courts to hear from community members about whether they would use AI for legal help and to brainstorm how an ideal AI system would behave. Students will also observe how participants use AI to respond to a fictional legal problem, to assess how the AI performs and to understand how people regard the AI's guidance.
Students will be required to complete ethics training for human subjects research, which takes approximately two hours through the online CITI program. They will then conduct community interviews according to an approved IRB research protocol. Students will synthesize what they learn from these community interviews, observations, and brainstorming sessions in a presentation to legal and technical experts. They will hold a multi-stakeholder workshop to explore how their findings may contribute to technical and legal projects that develop responsible, human-centered AI in the legal domain. Students will develop skills in facilitating interdisciplinary policy discussions about how technology and regulation can be developed alongside each other. The students' final report will contribute to policy and technology discussions about the principles, benchmarks, and risk typologies that can guide the ethical development of AI platforms for access to justice. Students are asked to enroll in both the Fall and Winter quarters of the class. The class may be extended to Spring quarter, depending on the issues raised. Elements used in grading: Attendance, Performance, Class Participation, and Written Assignments. CONSENT APPLICATION: To apply for this course, students must complete and submit a Consent Application Form available from the SLS Registrar at https://registrar.law.stanford.edu/.
Sections
- 2023-2024 Autumn (Schedule No Longer Available)