Problems in Ethical Debates around Content Moderation 

Rob Simpson (UCL) 


What norms of content moderation should be used to govern social media sites and other sites of online public discourse? And who should set those norms? Governments? Self-regulating tech companies? Some kind of hybrid governance structure? My aim in this paper isn’t to settle these questions, but to identify some worrisome tendencies in the suddenly-mushrooming scholarly literature that seeks to tackle them, e.g. in political philosophy, applied ethics, law, and public policy. The problems I’m trying to diagnose all involve a lack of critical depth in our normative appraisal of the technologies under discussion. Roughly, these technologies are treated as benign tools whose misuse we must try to prevent. The alternative view, which is more prevalent in mainstream discourse – that the tools are malign in some more inherent way, one resistant to regulatory mitigation – is not so much rejected in the emerging ethical scholarship as ignored. I argue that philosophers especially should take this alternative view more seriously, and I consider some explanations of why it is currently being sidelined.



The Institute of Philosophy hosts a regular workshop series entitled ‘The Practical, the Political, and the Ethical’. The series was created in 2015 by Véronique Munoz-Dardé (UCL) and Hallvard Lillehammer (Birkbeck) to discuss work in progress from visiting speakers. This year the series is convened by Elise Woodard (KCL) and Michael Hannon (Nottingham). Talks normally last 45 minutes (with no pre-circulation of the paper), followed by discussion. All are welcome.