Use of Generative AI Technology
Generative AI tools (e.g., ChatGPT) are increasingly used in legal practice and can support learning in beneficial ways. However, these tools carry risks: they may produce incorrect responses, they may inhibit learning, and some uses would constitute professional malpractice in the provision of legal services. Law school instructors should therefore exercise discretion in choosing the AI policy appropriate to their learning goals.
Individual course instructors are free to set their own policies regulating the use of generative AI tools in their courses, including allowing or disallowing some or all uses of such tools, provided that the permitted uses of AI do not authorize students to contravene standard academic norms concerning plagiarism and accuracy. Plagiarism includes using an idea obtained from AI without attribution or submitting AI-generated text verbatim without quotation marks; accuracy norms require verifying assertions in submitted work. In coursework that involves attorney-client representation, providing legal advice or services, or that would otherwise constitute the practice of law, AI use must comply with all applicable rules of professional responsibility. Course instructors must state these policies in their course syllabi and clearly communicate them to students. Students who are unsure of policies regarding generative AI tools are encouraged to ask their instructors for clarification. This policy applies to all Stanford Law School courses, including clinics.
In the absence of a course-specific AI policy set by the instructor, students may use generative AI tools to support their learning and to aid in the development or refinement of their own ideas, provided they do not use such tools to generate content that is then presented as their own work. The use of generative AI while taking an exam, or to draft or revise any portion of submitted work (e.g., drafting or revising the text of a paper or clinic work product; generating citations; fixing citation format), is not permitted unless (1) fully disclosed by the student to the instructor in advance of its use, and (2) explicitly authorized by the instructor in writing prior to the student’s use of the tool. In all cases, the student’s use of AI cannot contravene standard academic norms concerning plagiarism and accuracy, and in coursework involving the practice of law, AI use must comply with all applicable rules of professional responsibility. Students who are unsure whether a particular AI use is permitted should default to asking the instructor and disclosing the use.
Citations to sources that do not exist will raise a presumption of generative AI use. Unauthorized use of AI tools may result in a lower grade, including a grade of F, and/or a referral to Stanford’s Office of Community Standards.
The following examples illustrate the application of this policy in the absence of a course-specific Generative AI policy:
- A student uses an AI tool to organize her study outline in preparation for an exam in a doctrinal course. This use would not be prohibited.
- A student uses AI to fix the Bluebook citation format of a paper and discloses this in advance. This involves the use of AI to revise submitted work, so it must be authorized by the instructor in writing in advance of the student’s use of AI.
- A student uses AI to prepare for being on panel in class. This use is not prohibited.
- A student uses AI to write a paper or to produce clinic work product and discloses this in advance of using such tools. This involves the use of AI to draft or revise submitted work, so it must be authorized by the instructor in writing in advance, and it cannot contravene standard academic norms concerning plagiarism and accuracy, or relevant professional standards for legal practice.
- A student uses AI to write a paper or to produce clinic work product and does not disclose in advance. This is a violation of the AI policy and could result in an F and/or a referral to Stanford’s Office of Community Standards.
- A student uses AI during an exam and does not disclose this in advance. This is a violation of the AI policy and could result in an F and/or a referral to Stanford’s Office of Community Standards.