The Human Rights Dimensions of Generative AI: Guiding the Way Forward
The widespread deployment of generative AI over the last year has underscored the urgent need to examine the human rights dimensions of AI. This panel explores the relevance of human rights frameworks to the tech sector and to governments as they consider how to understand and regulate the technology, seizing its opportunities while mitigating its risks.
Many have called for companies and States to do more to ensure that AI is used to enhance, not infringe upon, human rights – and to do so quickly. But practical ways forward to rein in the risks of AI without stifling innovation remain elusive. Ethical and human rights frameworks, including the UN Guiding Principles on Business and Human Rights (UNGPs), offer important guidance, but uptake among technology firms and government regulators remains limited.
The UN Human Rights Office works with members of the tech sector to explore these challenges – in particular, how the human rights risks associated with tech products and services can be prevented and mitigated by applying a human rights lens during the design and development stages of AI-enhanced products. The Office's B-Tech Project highlights the importance of applying the UNGPs in the tech sector, noting that the adverse effects of generative AI are not merely broad negative societal impacts but direct impacts on internationally protected human rights.