Online abuse experts from Middlesex University explain why social media content moderators have a crucial role to play after Russia invaded Ukraine
*If you are a content moderator and want to get in touch, please email: r.spence@mdx.ac.uk
The invasion of Ukraine by Russian forces again underlines the power and reach of big technology companies such as Meta (formerly known as Facebook), YouTube and TikTok. It is through their platforms that much of the world will learn about and react to the growing crisis. These companies wield more power than is often recognised: the ability to keep up or take down content, ultimately shaping public opinion as the war progresses. Gone is their ability to remain passive or neutral in ongoing global crises. These organisations are critical agents in the dissemination and transfer of (mis)information, whether active or passive, and their decisions can greatly influence how events are perceived, regardless of how they have truly unfolded.
The recognition of social media’s power to influence and persuade society has led to companies such as Meta and Google facing pressure from governments on all sides of the conflict to either ban or remove content they view as misleading. Russia has banned Instagram and accused Meta of being an ‘extremist’ organisation, whilst European leaders have put pressure on social media platforms to block Russian state-controlled media. Ukraine has gone so far as to appeal directly to social media companies to block their services in Russia. It is a double-edged sword: if companies do too much, they may face accusations of censorship and of blocking free speech, but doing too little may leave them open to accusations of undermining democracy and human rights. Much of the content published on these platforms is generated by their users (user-generated content, or UGC) and is often unregulated, requiring continuous monitoring. Social media companies can partly rely on artificial intelligence (AI) to assist, but ultimately it is their content moderators (CMs) who are at the coal face in shaping how the conflict is perceived to play out. They are the ones who monitor posted content and apply their company’s rules defining what is and is not accepted. CMs, or ‘First Digital Responders’ as they are sometimes known, are the individuals who protect us from exposure to harmful and traumatic content.
At the best of times, content moderators are under pressure to view and respond to high volumes of content with accuracy. Workers whose performance dips below certain levels are at risk of losing their jobs. In the current climate, where company performance is heavily scrutinised by governments and regulatory bodies, they find themselves at the centre of highly charged political debates. This puts pressure on companies to demonstrate that they can police themselves and use the technology at their disposal as a force for good. Delivering on these goals, however, falls to the frontline moderators, on whom the pressure is only likely to increase. Every moderation error may result in genuine posts being removed, accounts being suspended for unclear reasons, or fake posts being left untouched, spreading misinformation and false narratives viewed by millions.
We can assume that content moderators are currently being exposed to, and overwhelmed by, war footage emerging from the conflict in Ukraine. This is likely to include violent and bloody content which they will have to watch, analyse and judge: is it genuine, or part of the swathes of disinformation they are being asked to identify? This is difficult to do, especially as techniques for producing fake footage have become increasingly sophisticated; often individuals or organisations with specialist knowledge are needed to identify fakes. Content moderators are a global workforce, often hired as contractors and paid minimum wage, and it is unfair to expect them to understand every subtle cultural difference in a complex conflict.
There will no doubt be a lag between the tsunami of content they are moderating and the development of official policy on where freedom of speech and expression end and censorship begins. This will be followed by a further wait whilst decisions are translated into actionable policies for content moderators. For instance, Twitch has recently announced updated policies regarding channels that spread misinformation, and Facebook has instituted a temporary change in policy that allows users in some countries to post content that is usually forbidden. This is just one part of a complex process, with reports that policies are often developed in stages or adapted on the fly. In part, this is because situations evolve and posts can be unclear, allowing multiple interpretations of the same information. This inevitably increases the opportunity for disagreements about moderation decisions and adds to moderator uncertainty.
These imprecise processes do not help content moderators faced with reviewing content and making rapid decisions. They may find they are left to carry out their tasks with little official guidance or support, all while conscious of the threat of losing their low-paid jobs if they get things wrong. For example, should a violent video that would normally be removed remain publicly available because of the political importance of highlighting realities on the ground in Ukraine? Should videos or posts that could be used to identify and track troop movements be removed? Are videos falsely claiming to be from the current conflict misinformation that needs removing? These are challenging questions during a very difficult time.
Despite the illusion these platforms give of being places for free speech, they wield their power to carefully curate content according to internal policies driven by corporate concerns. As such, the processes social media platforms use to decide whether posts should stay up, or which accounts can remain active, are frequently opaque to their users and to those who work outside the organisation. In the days ahead, as the conflict continues, hopefully this will not also be the case for their content moderators.
Dr Ruth Spence is a Research Fellow at the Centre for Child Abuse and Trauma Studies (CATS) at Middlesex University. Ruth uses quantitative and online methodologies to research trauma and attachment, working with partners in the third sector, police, and industry. She is currently project manager on a research study funded by the Technology Coalition to investigate the impacts of the role on content moderators.
Dr Elena Martellozzo is an Associate Professor in Criminology at the Centre for Child Abuse and Trauma Studies (CATS) at Middlesex University. Elena has extensive experience of applied research within the criminal justice arena. Her research includes online stalking, children and young people’s online behaviour, the analysis of sexual grooming, and police practice in the area of child sexual abuse. Elena has emerged as a leading researcher and global voice in the fields of child protection, victimology, policing and cybercrime. She is a prolific writer and has participated in highly sensitive research with the police, the IWF, the NSPCC, the OCC, the Home Office and other government departments. Elena has also acted as an advisor on child online protection to governments and practitioners in Italy (since 2004) and Bahrain (2016) to develop a national child internet safety policy framework.
Jeffrey DeMarco is Senior Lecturer in Psychology and Senior Fellow with the Centre for Abuse and Trauma Studies (CATS) at Middlesex University. His expertise has generally focused on the behavioural understanding of those who are at high risk of exploitation and abuse, applying care and support to those who may be vulnerable to being drawn into crime and deviance. The majority of his work explores the intersection between psychology and the online space, including work for the European Commission on enhancing the policing of online sexual abuse; investigating youth justice systems’ responses to digital risks for UNICEF across the MENA region and eastern Africa; improving partnerships between local communities and the military in conflict zones using social media, including in Iraq and Afghanistan; and assessing the psychopathology of adolescent victims/offenders of many forms of cybercrime. He is a Fellow of the Royal Society of the Arts and the Assistant Director, Knowledge & Insight at Victim Support.