The Legal and Policy Implications of Computer-Generated Child Sex Abuse Imagery
The advent of generative AI has given computer users the power to create realistic-looking visual content on consumer-grade hardware. With that new capability has come the misuse of AI-generated images for abusive purposes, including the creation of computer-generated child sex abuse material (CG-CSAM). While CSAM is illegal (and online platforms must report it when they detect it on their services), “virtual child pornography” is protected speech under current Supreme Court precedent. The coming rise of photorealistic CG-CSAM will therefore pose difficult legal, practical, and policy challenges for both online platforms and the U.S. government. This talk will review the existing federal legal regime governing CSAM in the U.S., analyze the constitutional status of CG-CSAM, and suggest policy interventions to combat the proliferation of this new kind of abusive online content.
Come hear from Riana Pfefferkorn, a Research Scholar at the Stanford Internet Observatory and an affiliate of the Center for Internet and Society at SLS. Her research focuses on encryption policy in the U.S. and abroad; she also studies online trust and safety, cybersecurity, and novel forms of electronic surveillance by law enforcement. Before coming to Stanford, Riana was an attorney in private practice. She is a graduate of the University of Washington School of Law and Whitman College.
Lunch included for those who RSVP.