The International Press Institute (IPI) has published a new resource for online moderators of news outlets to address abuse that takes place on social media. The tools and strategies for managing debate on Facebook and Twitter are part of IPI’s Newsrooms Ontheline platform, which collects best practices for countering online harassment against journalists.
Newsrooms worldwide use social media platforms to reach a wider audience, generate public debate around particular issues and, ultimately, build a community. Unlike on their own comment forums, however, newsroom moderators seeking to establish a healthy debate on social media and protect journalists from online abuse must also navigate the rules and policies of the platforms themselves, which are largely outside their control.
The resources published on Newsrooms Ontheline provide an overview of how newsroom moderators can best approach social media moderation and employ the tools and options provided by the platforms to respond effectively to online abuse, smears and threats against the media outlet or individual journalists. These measures stem from in-depth interviews with experts on audience moderation at several leading news outlets in Europe.
Due to the changing nature of this topic – new policies, new types of harassment, etc. – these strategies are subject to constant review and update. The current resources focus on the role of moderators and the tools and guidelines provided by Twitter and Facebook, given the central importance of both channels to news outlets.
Most of the experts consulted by IPI highlighted the following tools for moderating abusive messages on Twitter (a brief sketch of how some of these actions can be applied programmatically follows the list):
- Muting: When it comes to online abuse in violation of both the media outlet’s own and Twitter’s community standards, moderators tend to mute rather than block accounts. This option dilutes the direct impact of the abuse as the target will no longer receive notifications from the muted account. It also prevents a possible angry backlash as the muted user has no knowledge of the muting. Finally, muting allows moderators to still see content produced by muted accounts and therefore remain vigilant to any potential credible threats against the media outlet or a journalist.
- Blocking: Moderators tend to block accounts – including bots and real users – that persistently spam or push scams. Otherwise, moderators generally treat blocking as a last resort, since blocked accounts are notified when they are blocked and a backlash may follow. In addition, because the moderator can no longer access the blocked account, it becomes more difficult to monitor any imminent threat.
- Reporting: Moderators generally report tweets or accounts to Twitter that disseminate potentially credible and imminent threats or contain violent imagery.
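For moderation teams dealing with a high volume of accounts, the muting and blocking steps above can also be applied programmatically. The snippet below is a minimal sketch, not a definitive implementation: it assumes a Python environment with the `requests` library, a session already authenticated with user-context OAuth credentials, and the mute and block endpoints as documented in Twitter's API v2 (`POST /2/users/:id/muting` and `POST /2/users/:id/blocking`); endpoint availability and access tiers have changed over time and should be verified against current documentation. Reporting has no comparable public endpoint and remains a manual step.

```python
import requests

API_BASE = "https://api.twitter.com/2"


def mute_account(session: requests.Session, moderator_id: str, target_id: str) -> bool:
    """Mute an abusive account: the target receives no notification and its
    tweets remain visible to the moderator for continued monitoring."""
    resp = session.post(
        f"{API_BASE}/users/{moderator_id}/muting",
        json={"target_user_id": target_id},
    )
    resp.raise_for_status()
    return resp.json().get("data", {}).get("muting", False)


def block_account(session: requests.Session, moderator_id: str, target_id: str) -> bool:
    """Block an account: reserved for persistent spam/scam accounts, since the
    blocked user can discover the block and the account becomes harder to monitor."""
    resp = session.post(
        f"{API_BASE}/users/{moderator_id}/blocking",
        json={"target_user_id": target_id},
    )
    resp.raise_for_status()
    return resp.json().get("data", {}).get("blocking", False)


def handle_abusive_account(session, moderator_id, target_id, persistent_spam=False):
    """Apply the escalation the experts describe: mute by default, block only
    persistent spam/scam accounts; credible threats are reported manually."""
    if persistent_spam:
        return block_account(session, moderator_id, target_id)
    return mute_account(session, moderator_id, target_id)
```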
On Facebook, moderators make use of the following tools (a short sketch of how several of these actions map to the Graph API follows the list):
- Delete a comment when it contains aggressive or threatening content or derogatory words and insults. This is done to promote a healthy public discussion. Criticism, no matter how harsh, is permitted, however.
- Hide a comment with abusive content. Moderators generally consider this less effective than deleting, as the user and the user’s friends can still see the content in question, even if others cannot.
- Ban a user from the media outlet’s Facebook Page when the user has repeatedly posted hateful or abusive comments, even after being warned by the moderators. This is done to remove a user who is seen as persistently undermining the values of a healthy discussion and the open community that the media organization aims to build.
- Remove a user from the Page as a warning to deter further abusive comments. This is less consequential than banning, as the user can like or follow the Page again.
- Disable/turn off comments, although this feature is only available on video posts. This is done when the moderation team does not have the resources to moderate the flow of comments on a video or live stream.
- Block words and set the strength of the profanity filter.
- Report a post or a Page that the moderator believes has breached both Facebook’s and the media outlet’s own community standards. Facebook’s stated policy is that it will then assess whether the content or Page should be removed.
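For teams moderating comments at scale, several of these actions (hiding, deleting and banning) are also exposed through Facebook’s Graph API. The snippet below is a minimal sketch under stated assumptions rather than a definitive implementation: it assumes Python with the `requests` library, a Page access token with the relevant permissions, and the comment `is_hidden` field, comment deletion and the Page `blocked` edge as documented in the Graph API; field names, edge names, API versions and required permissions change over time and should be checked against current documentation.

```python
import requests

# API version is an assumption; use whichever version the Page app targets.
GRAPH = "https://graph.facebook.com/v19.0"


def hide_comment(comment_id: str, page_token: str) -> dict:
    """Hide an abusive comment: still visible to its author and their
    friends, but hidden from everyone else."""
    resp = requests.post(
        f"{GRAPH}/{comment_id}",
        data={"is_hidden": "true", "access_token": page_token},
    )
    resp.raise_for_status()
    return resp.json()


def delete_comment(comment_id: str, page_token: str) -> dict:
    """Delete a comment outright: used for threats, insults and other
    content that breaches the outlet's own standards."""
    resp = requests.delete(
        f"{GRAPH}/{comment_id}",
        params={"access_token": page_token},
    )
    resp.raise_for_status()
    return resp.json()


def ban_user(page_id: str, user_id: str, page_token: str) -> dict:
    """Ban a repeat offender from the Page via the 'blocked' edge;
    reserved for users who keep posting abuse after warnings."""
    resp = requests.post(
        f"{GRAPH}/{page_id}/blocked",
        data={"user": user_id, "access_token": page_token},
    )
    resp.raise_for_status()
    return resp.json()
```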
Since 2014, the International Press Institute (IPI) has been systematically researching online harassment as a new form of silencing critical, independent media. Our work has unveiled patterns of online attacks, analysed the emotional and professional impact on journalists, and collected best practices for newsrooms to address the phenomenon.