
The Lawlessness of Content Moderation

Photo of phone with social media apps by Nathan Dumlao

Over recent decades, social media have provided digital spaces for sharing ideas and opinions online, as well as useful platforms for working or even dealing with emergencies. This has been brought into sharp relief during the current global pandemic. Nevertheless, social media as a flourishing framework for democratic values is not beyond the influence of unaccountable forms of governance. This risk is posed not only by governments increasingly relying on discretionary online censorship and shutdowns, but also by the lawlessness of content moderation.

Online platforms are free to decide how to display and organise online content based on the algorithmic processing of users’ content and data. The information uploaded by users is processed by automated systems that determine (or at least suggest to human moderators) which content to remove, according to non-transparent standards. Furthermore, these processes leave users without access to any remedy against decisions about online content. The lawlessness of content moderation, and the rise and dominance of these platforms, is not just the result of the power that platforms can exercise over online content. Another contributing factor is the vast implementation of AI technologies in content moderation, which has allowed the proliferation of opaque and largely unaccountable computational standards of protection.

As business actors, online platforms tend to design their content moderation processes around business purposes rather than the public interest. Whilst several social media platforms claim to represent a global community enhancing rights and freedoms transnationally, they do not fight hate speech and other harmful content for its own sake; the profitability of these activities also matters. Content moderation is indeed a constitutional function. The primary incentive of social media is to create an attractive environment where users gather to connect and share content, with the added (often woefully unrealised) impetus to make these spaces safe, welcoming, and accessible to all. Yet, in creating a virtual space for public engagement, social media serves private interests. For instance, platforms profit from offering tailored advertising services. This ‘content moderation paradox’ explains why social media commits to protecting free speech on the one hand, while on the other moderating content to attract advertising revenues. This intersection between a safe environment and business interests leads to a mix of incentives under which content moderation activities are not always aligned with both goals.

This framework, driven by profit maximisation, has led to troubling consequences, especially in peripheral areas where platforms have little incentive to moderate content. Examples in which social media has played a determinant role include: the Capitol Hill events in the United States; mass atrocities between Christians and Muslims in the Central African Republic; religious attacks in Sri Lanka, including the 2019 Easter Sunday church and hotel bombings; and the use of Facebook to incite violence against Myanmar’s minority Muslim population, which has heightened concerns about the role of social media in the perpetration of genocide. Even in situations of conflict, these actors are free to determine how to moderate hate and disinformation according to their own ethical, business and legal frameworks. There is no evidence of how (and perhaps whether) they moderated content in these cases. This lack of transparency is compounded by the fact that AI technologies have not been trained to detect hate speech in many of the languages spoken in the regions concerned.

Facebook’s recent move to set up an independent oversight board can be read as a first step towards institutionalising mechanisms of redress. Despite being a positive step towards accountability and transparency in content moderation, the board is also another example of the path towards the privatisation of the protection of human rights online. Whilst Facebook’s board will increase awareness, it is a relatively small move in the field considering the very few cases it is likely to consider. In the case of online hate and conflicts, the board would not represent the multifaceted nuances of different cultural traditions, especially in African contexts. Moreover, the board will act as a reactive body, precluding the possibility of monitoring the dissemination of online hate as it develops and escalates.

These challenges open a new research agenda for studying the lawlessness of content moderation. This agenda can enrich the study of platform governance and of the dissemination of online hate and disinformation, whose effects reach beyond digital boundaries. This is precisely one of the goals of the ConflictNet project, a comprehensive research programme studying the uses and impact of social media on people and communities affected by violent conflicts.

About the Author

Photo of Giovanni De Gregorio

Dr Giovanni De Gregorio

Postdoctoral Research Fellow, Centre for Socio-Legal Studies, University of Oxford

Giovanni De Gregorio is a postdoctoral researcher at the Programme in Comparative Media Law and Policy, Centre for Socio-Legal Studies, University of Oxford. Within the ERC ConflictNet project, his research focuses on online speech and platform governance. He is interested in digital constitutionalism, internet law and human rights.
