
Policy Practicum: AI For Legal Help

Current Offerings

Policy Practicum: AI For Legal Help (809E): Can AI increase access to justice by helping people resolve their legal problems in more accessible, equitable, and effective ways? What risks does AI pose for people seeking legal guidance, and what technical and policy guardrails should mitigate them? In this course, students will work in teams, each paired with a partner organization from the justice system that is interested in using AI to improve its services. Partner organizations include frontline legal aid and court groups interested in using AI to improve their ability to help people dealing with evictions, criminal justice problems, debt collection, domestic violence, and other legal problems. Using human-centered design practices, students will help their partners scope out exactly where AI and other interventions might serve both providers and clients, what quality benchmarks should guide any new intervention, and what datasets and other projects could jumpstart a new technology initiative. Building on this design work, teams will establish guidelines to ensure that any new AI project is centered on people's needs and developed with a careful eye toward ethical and legal principles. This multi-stakeholder and policy research will then turn toward creative, design-driven technology development. Student teams will build a demonstration project to determine whether AI can accomplish the legal tasks they have identified in their design research. They will consult with subject matter experts to evaluate the AI's performance and go through iterative development cycles to refine their intervention to better meet the quality benchmarks they have established. Student teams will present their design research and technical system demo to their partners and a broader audience for critical discussion of the next steps for the projects.
Teams might continue their projects after the quarter, or might help their partners move toward other sustainable models for further developing, deploying, and maintaining AI-powered projects that enhance their legal services. The students' lessons about responsible AI development with public interest partners will be useful to others working on AI for community agencies, government and civic tech, and high-stakes legal services. Students will be required to complete ethical training for human subjects research, which takes approximately 2 hours through the CITI program online. The students' final report will contribute to policy and technology discussions about the principles, benchmarks, and risk typologies that can guide the ethical development of AI platforms for access to justice. Students may, but are not required to, enroll in both Fall and Winter quarters of the class. The class may be extended to Spring quarter, depending on the issues raised. Elements used in grading: Attendance, Performance, Class Participation, and Written Assignments. CONSENT APPLICATION: To apply for this course, students must complete and submit a Consent Application Form, available from the SLS Registrar at https://registrar.law.stanford.edu/. Cross-listed with the d.school (DESIGN 809E).

Sections

Policy Practicum: AI For Legal Help | LAW 809E Section 01 Class #30166

  • 3 Units
  • Grading: Law Honors/Pass/Restrd Cr/Fail
  • Enrollment Limitations: Consent 16
  • Graduation Requirements:
    • EL - Experiential Learning Requirement for Law Degree
  • Learning Outcomes Addressed:
    • LO3 - Ability to Conduct Legal Research
    • LO4 - Ability to Communicate Effectively in Writing
    • LO5 - Ability to Communicate Orally
    • LO6 - Law Governing Lawyers/Ethical Responsibilities
    • LO7 - Professional Skills

Notes: Class meets at the d.school, room TBA by instructors.

  • 2024-2025 Winter
  • Fri

Past Offerings

Policy Practicum: AI For Legal Help (809E): Can AI increase access to justice by helping people resolve their legal problems in more accessible, equitable, and effective ways? What risks does AI pose for people seeking legal guidance, and what technical and policy guardrails should mitigate them? In this course, students will work in teams, each paired with a partner organization from the justice system that is interested in using AI to improve its services. Partner organizations include frontline legal aid and court groups interested in using AI to improve their ability to help people dealing with evictions, criminal justice problems, debt collection, domestic violence, and other legal problems. Using human-centered design practices, students will help their partners scope out exactly where AI and other interventions might serve both providers and clients, what quality benchmarks should guide any new intervention, and what datasets and other projects could jumpstart a new technology initiative. Building on this design work, teams will establish guidelines to ensure that any new AI project is centered on people's needs and developed with a careful eye toward ethical and legal principles. This multi-stakeholder and policy research will then turn toward creative, design-driven technology development. Student teams will build a demonstration project to determine whether AI can accomplish the legal tasks they have identified in their design research. They will consult with subject matter experts to evaluate the AI's performance and go through iterative development cycles to refine their intervention to better meet the quality benchmarks they have established. Student teams will present their design research and technical system demo to their partners and a broader audience for critical discussion of the next steps for the projects.
Teams might continue their projects after the quarter, or might help their partners move toward other sustainable models for further developing, deploying, and maintaining AI-powered projects that enhance their legal services. The students' lessons about responsible AI development with public interest partners will be useful to others working on AI for community agencies, government and civic tech, and high-stakes legal services. Students will be required to complete ethical training for human subjects research, which takes approximately 2 hours through the CITI program online. The students' final report will contribute to policy and technology discussions about the principles, benchmarks, and risk typologies that can guide the ethical development of AI platforms for access to justice. Students may, but are not required to, enroll in both Fall and Winter quarters of the class. The class may be extended to Spring quarter, depending on the issues raised. Elements used in grading: Attendance, Performance, Class Participation, and Written Assignments. CONSENT APPLICATION: To apply for this course, students must complete and submit a Consent Application Form, available from the SLS Registrar at https://registrar.law.stanford.edu/. Cross-listed with the d.school (DESIGN 809E).

Sections

Policy Practicum: AI For Legal Help | LAW 809E Section 01 Class #29189

  • 3 Units
  • Grading: Law Honors/Pass/Restrd Cr/Fail
  • 2024-2025 Autumn
    Schedule No Longer Available
  • Enrollment Limitations: Consent 16
  • Graduation Requirements:
    • EL - Experiential Learning Requirement for Law Degree
  • Learning Outcomes Addressed:
    • LO3 - Ability to Conduct Legal Research
    • LO4 - Ability to Communicate Effectively in Writing
    • LO5 - Ability to Communicate Orally
    • LO6 - Law Governing Lawyers/Ethical Responsibilities
    • LO7 - Professional Skills

Notes: Cross-listed with the d.school (DESIGN 809E). Class meets in Studio 2 at the d.school.


Policy Practicum: AI For Legal Help (809E): The policy client for this project is the Legal Services Corporation (https://www.lsc.gov/). This project works closely with the Legal Services Corporation's Technology Initiative Grant Program (https://www.lsc.gov/grants/technology-initiative-grant-program) to research how the public interacts with AI platforms to seek legal assistance, and to develop a strategy for mitigating risks, ensuring quality, and enhancing access to justice on these AI platforms. Can AI increase access to justice by helping people resolve their legal problems in more accessible, equitable, and effective ways? What risks does AI pose for people seeking legal guidance, and what technical and policy guardrails should mitigate them? In this course, students will conduct research to identify key opportunities and risks around the public's use of AI to deal with common legal problems like bad living conditions, possible evictions, debt collection, divorce, or domestic violence. Especially with the launch of new AI platforms like ChatGPT, Google Bard, and Bing Chat, more people may turn to generative AI platforms for guidance on their legal rights, options, and procedures. How can technology companies, legal institutions, and community groups responsibly advance AI solutions to benefit people in need? Students will explore these questions about AI and access to justice through hands-on interviews, fieldwork, and design workshops with different stakeholders throughout the justice system. They will run interview sessions online and on-site at courts to hear from community members about whether they would use AI for legal help and to brainstorm how the ideal AI system would behave. Students will also observe how participants use AI to respond to a fictional legal problem, to assess how the AI performs and to understand how people regard the AI's guidance.
Students will be required to complete ethical training for human subjects research, which takes approximately 2 hours through the CITI program online. They will then conduct community interviews according to an approved IRB research protocol. Students will synthesize what they learn from these community interviews, observations, and brainstorm sessions in a presentation to legal and technical experts. They will hold a multi-stakeholder workshop to explore how their findings may contribute to technical and legal projects to develop responsible, human-centered AI in the legal domain. Students will develop skills in facilitating interdisciplinary policy discussions about how technology and regulation can be developed alongside each other. The students' final report will contribute to policy and technology discussions about the principles, benchmarks, and risk typologies that can guide the ethical development of AI platforms for access to justice. Students are asked to enroll in both Fall and Winter quarters of the class. The class may be extended to Spring quarter, depending on the issues raised. Elements used in grading: Attendance, Performance, Class Participation, and Written Assignments. CONSENT APPLICATION: To apply for this course, students must complete and submit a Consent Application Form, available from the SLS Registrar at https://registrar.law.stanford.edu/.

Sections

Policy Practicum: AI For Legal Help | LAW 809E Section 01 Class #33078

  • 3 Units
  • Grading: Law Honors/Pass/Restrd Cr/Fail
  • 2023-2024 Winter
    Schedule No Longer Available
  • Enrollment Limitations: Consent 16
  • Graduation Requirements:
    • EL - Experiential Learning Requirement for Law Degree
  • Learning Outcomes Addressed:
    • LO2 - Legal Analysis and Reasoning
    • LO4 - Ability to Communicate Effectively in Writing
    • LO5 - Ability to Communicate Orally
    • LO6 - Law Governing Lawyers/Ethical Responsibilities
    • LO7 - Professional Skills


Policy Practicum: AI For Legal Help (809E): The policy client for this project is the Legal Services Corporation (https://www.lsc.gov/). This project works closely with the Legal Services Corporation's Technology Initiative Grant Program (https://www.lsc.gov/grants/technology-initiative-grant-program) to research how the public interacts with AI platforms to seek legal assistance, and to develop a strategy for mitigating risks, ensuring quality, and enhancing access to justice on these AI platforms. Can AI increase access to justice by helping people resolve their legal problems in more accessible, equitable, and effective ways? What risks does AI pose for people seeking legal guidance, and what technical and policy guardrails should mitigate them? In this course, students will conduct research to identify key opportunities and risks around the public's use of AI to deal with common legal problems like bad living conditions, possible evictions, debt collection, divorce, or domestic violence. Especially with the launch of new AI platforms like ChatGPT, Google Bard, and Bing Chat, more people may turn to generative AI platforms for guidance on their legal rights, options, and procedures. How can technology companies, legal institutions, and community groups responsibly advance AI solutions to benefit people in need? Students will explore these questions about AI and access to justice through hands-on interviews, fieldwork, and design workshops with different stakeholders throughout the justice system. They will run interview sessions online and on-site at courts to hear from community members about whether they would use AI for legal help and to brainstorm how the ideal AI system would behave. Students will also observe how participants use AI to respond to a fictional legal problem, to assess how the AI performs and to understand how people regard the AI's guidance.
Students will be required to complete ethical training for human subjects research, which takes approximately 2 hours through the CITI program online. They will then conduct community interviews according to an approved IRB research protocol. Students will synthesize what they learn from these community interviews, observations, and brainstorm sessions in a presentation to legal and technical experts. They will hold a multi-stakeholder workshop to explore how their findings may contribute to technical and legal projects to develop responsible, human-centered AI in the legal domain. Students will develop skills in facilitating interdisciplinary policy discussions about how technology and regulation can be developed alongside each other. The students' final report will contribute to policy and technology discussions about the principles, benchmarks, and risk typologies that can guide the ethical development of AI platforms for access to justice. Students are asked to enroll in both Fall and Winter quarters of the class. The class may be extended to Spring quarter, depending on the issues raised. Elements used in grading: Attendance, Performance, Class Participation, and Written Assignments. CONSENT APPLICATION: To apply for this course, students must complete and submit a Consent Application Form, available from the SLS Registrar at https://registrar.law.stanford.edu/.

Sections

Policy Practicum: AI For Legal Help | LAW 809E Section 01 Class #31697

  • 3 Units
  • Grading: Law Honors/Pass/Restrd Cr/Fail
  • 2023-2024 Autumn
    Schedule No Longer Available
  • Enrollment Limitations: Consent 16
  • Graduation Requirements:
    • EL - Experiential Learning Requirement for Law Degree
  • Learning Outcomes Addressed:
    • LO2 - Legal Analysis and Reasoning
    • LO4 - Ability to Communicate Effectively in Writing
    • LO5 - Ability to Communicate Orally
    • LO6 - Law Governing Lawyers/Ethical Responsibilities
    • LO7 - Professional Skills

Notes: Class meets in Studio 2 at the d.school.
