

Exploring Facebook as a Research Tool

[Image: A hand holding a smartphone. Photo by Dawid Sokołowski on Unsplash]

What do you do when you see hate speech or mis/dis-information online? Do you report it? Or do you ignore it and keep scrolling? Our online behaviour is mediated by our understanding of who governs social media and of their responsibility for protecting online spaces. It is also predicated on our sense of how dangerous this content might be. Yet we are exposed to different degrees of hateful content or mis/dis-information online depending on where we live. In the global north, much of this content is filtered out before it reaches our feeds, thanks to automated content moderation systems. In the global south, however, these automated systems are far less effective, if they work at all. The algorithms cannot capture nuance in online content, nor can they flag content in many local languages and dialects. The burden of moderation thus largely falls to users in these contexts, who become the “new” governors of social media. In conflict-affected nations, the inequalities in online content moderation are amplified (see the failures of Facebook in Ethiopia, for example).

As part of a wider ConflictNet research package on the politics of flagging, our research team sought to explore the online behaviour of social media users in South Africa, Ethiopia, and Kenya. We were particularly focused on answering: what do these users consider hate speech and mis/dis-information, and what do they think happens when they report content? In mapping the different ways we could reach social media users, we initially considered hiring a survey company like Qualtrics to run our survey. While survey companies are an essential resource, we were concerned about the limited depth of their reach in our research countries. We then came across a University of Glasgow research project centred on exploring the use of Facebook as a research tool. This project used Facebook ads to circulate survey links to increase response rates. As we aimed to engage specifically with social media users in our research countries, this presented a sound research option: using social media to reach social media users made logical sense. The issue, however, was that despite its potential, there was very little literature or guidance on how to use Facebook as a research tool in African contexts. So we initially ran a pilot study using Facebook ads in Kenya, setting minimal ad parameters (i.e. we purchased very low-cost ad placements on Meta).
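The post does not specify whether our campaigns were configured through the Ads Manager interface or programmatically, so purely as an illustration, the sketch below shows what a low-budget, broadly targeted survey-recruitment campaign for Kenya might look like if set up via Meta's Marketing API using the official facebook-business Python SDK. The account ID, access token, budget, and campaign names are placeholders rather than the project's actual settings, and the ad creative carrying the survey link would be created as a separate step.

```python
# Hypothetical sketch: a low-budget, broadly targeted survey-recruitment campaign
# for Kenya via Meta's Marketing API (facebook-business SDK). Account ID, token,
# and budget values are placeholders, not the project's actual settings.
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount

FacebookAdsApi.init(access_token="YOUR_ACCESS_TOKEN")
account = AdAccount("act_<AD_ACCOUNT_ID>")

# A traffic campaign whose ads simply point respondents at the survey link.
campaign = account.create_campaign(params={
    "name": "Survey recruitment - Kenya pilot",
    "objective": "OUTCOME_TRAFFIC",
    "status": "PAUSED",                 # review before any spend occurs
    "special_ad_categories": [],
})

# "Minimal" parameters: country-level targeting only, adults, small daily budget.
ad_set = account.create_ad_set(params={
    "name": "Kenya - broad targeting",
    "campaign_id": campaign["id"],
    "daily_budget": 500,                # minor currency units (e.g. 5.00 USD)
    "billing_event": "IMPRESSIONS",
    "optimization_goal": "LINK_CLICKS",
    "bid_strategy": "LOWEST_COST_WITHOUT_CAP",
    "targeting": {
        "geo_locations": {"countries": ["KE"]},
        "age_min": 18,
    },
    "status": "PAUSED",
})
print("Created ad set", ad_set["id"])
```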

Within 24 hours, we had more survey responses than we thought we would get in the two weeks we had planned to run the Kenyan ads. We scaled up the survey to run in Ethiopia and South Africa shortly thereafter. While we were collecting more data than we could have hoped for to answer our research questions, another much more profound question emerged: how (and why) are Facebook ads targeting different participants? Despite using the same ad parameters across the countries, the Meta Ads Manager highlighted drastically different demographic profiles of respondents (n.b. these demographic profiles are given at an overview level, not at an individual participant level). One country had ads mostly reaching middle-aged men, while the other two reached a more balanced gender and age distribution. Significantly, these profiles did not map onto existing research on who social media users are in these countries. Our ads were reaching different participants than we expected, given existing data on who uses social media in these countries. We reached out to Meta advertising to try to understand how the ads were working. How do they target potential participants? Where does the ad appear in their feed? We could not get a clear answer to these questions, but we were told that the ad algorithm for all of Africa is trained on the ad algorithm used in Europe, which raises a whole new set of questions that we are still exploring.
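The demographic profiles we describe are the aggregate breakdowns that Meta's reporting exposes per campaign. As an illustration only, and assuming the same SDK as in the earlier sketch, the snippet below shows one way those age and gender breakdowns could be pulled programmatically and tallied so that reach can be compared across the Kenyan, Ethiopian, and South African campaigns; the campaign names and date range are placeholders.

```python
# Hypothetical sketch: pulling the aggregate age/gender breakdowns Meta reports
# per campaign, so reach can be compared across countries. Date range and account
# details are illustrative placeholders.
from collections import defaultdict
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount

FacebookAdsApi.init(access_token="YOUR_ACCESS_TOKEN")
account = AdAccount("act_<AD_ACCOUNT_ID>")

rows = account.get_insights(
    fields=["campaign_name", "reach", "clicks"],
    params={
        "level": "campaign",
        "breakdowns": ["age", "gender"],   # aggregate buckets only, never per-user
        "time_range": {"since": "2024-01-01", "until": "2024-01-14"},
    },
)

# Tally reach by campaign and demographic bucket to compare country profiles.
profile = defaultdict(int)
for row in rows:
    key = (row["campaign_name"], row["age"], row["gender"])
    profile[key] += int(row["reach"])

for (campaign_name, age, gender), reach in sorted(profile.items()):
    print(f"{campaign_name:30s} {age:>6s} {gender:>8s} {reach:>10d}")
```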

Overall, there are many issues and limitations to using these ads, and we are pausing to reflect on the biases they introduce into our material. The ads mediate how we answer the question of what people do when they see hate speech and mis/dis-information on social media: who the ads reach shapes the answers we receive. We are still exploring the full depth of these tensions by conducting focus groups in the research countries to understand how participants engaged with the survey and how the ad platforms targeted them. While we are still in the early stages of analysing our data, there does appear to be some (tentatively proposed) benefit to using ads in African contexts: for instance, they can reach participants who we know use social media but who are otherwise very hard to reach.


About the Author

Portrait of Caitlyn McGeer

Dr Caitlyn McGeer

Postdoctoral Researcher, Centre for Socio-Legal Studies, University of Oxford

Caitlyn McGeer is a postdoctoral researcher working on the ERC-funded ConflictNet project within the Programme in Comparative Media Law and Policy at the Centre for Socio-Legal Studies. She focuses on the use of artificial intelligence and social media in conflict settings in Africa. Caitlyn specializes in anticipatory action for conflict prevention and hate speech and disinformation in conflicts. Her research interests deal with gender and technology, human rights, and security.
