Content Moderation and the Interests of Vulnerable Groups: A Comparison between the Children’s Online Privacy Protection Act (US) and the Audiovisual Media Services Directive (EU)

Research project

Catalina Goanta

Internet platforms have been called the ‘new governors’ of online speech (Klonick, 2018). As ‘functional sovereigns’ (Pasquale, 2017) who have seemingly taken over the functions of the state in setting and enforcing rules, platforms develop self-regulatory mechanisms such as contractual frameworks or dispute resolution systems to govern the activity of their users. The leniency of internet governance during the past decades, at both the global and European levels, has contributed to the consolidation of de facto private legal orders that provide consumers with either less protection than mandated by law, or less protection than the law ought to have mandated, had the ensuing legal harms been more transparent to lawmakers.

As a category of highly vulnerable users, children are directly affected by such harms, as they are said to have difficulty recognizing manipulative techniques which may affect their immediate interests. To protect these interests, both EU and US legislators have responded with (mandatory) statutory rules aimed, on the one hand, at platforms (the US model) and, on the other, at commercial actors using the platforms for their business models (the EU model). This research project sets out to investigate the similarities and differences between the two regulatory approaches, and to critically reflect on their strengths and weaknesses in offering vulnerable groups effective recourse to protective regimes.