The aim of the system, which has yet to be named, is to create a suite of tools that highlight relevant comments, potentially through word-recognition software. It would also categorise and rank commenters according to their previous contributions. A highlighting system would keep comments constructive and allow journalists to interact directly with reader comments. While this may not eliminate ‘trolling’, it could diminish the incentive for users to be the loudest voice in the comment section.
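The article does not specify how such word recognition would work. One minimal, purely illustrative approach (the function name and scoring rule here are hypothetical, not part of the project) is to score a comment by how much of its vocabulary overlaps with the article it responds to:

```python
def relevance_score(comment: str, article: str) -> float:
    """Fraction of the comment's words that also appear in the article.

    A crude overlap measure: higher scores suggest the comment is
    on-topic; a score near zero suggests it may be irrelevant.
    """
    article_words = set(article.lower().split())
    comment_words = comment.lower().split()
    if not comment_words:
        return 0.0
    return sum(w in article_words for w in comment_words) / len(comment_words)
```

A real system would likely use stemming, stop-word removal or more sophisticated language models, but the principle of comparing comment text against article text is the same.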
Fiona Martin, who researches online news comments at the University of Sydney in Australia, has enthusiastically welcomed the Mozilla partnership. “The creation of an open source, shareable commenting and social media tool that can be plugged into existing content management systems would be a great initiative for helping users and small publishers manage their interaction,” Dr Martin said.
The project will run for two years and will be funded by a $3.89 million grant from the John S. and James L. Knight Foundation. The Washington Post and The New York Times will incorporate the tools developed as part of the project into their websites, and extend them to other publishers and bloggers, allowing them to build their own sites in the way publishing platforms such as WordPress do. The project, which has dozens of members, will be led by Mozilla’s Dan Sinker, director of the Knight-Mozilla OpenNews initiative, which specialises in developing digital news tools.
“A move to standardise posting and other user-generated content functionality would be useful,” Dr Martin said. “My research has shown that most major mastheads have developed their own idiosyncratic approach to enabling commenting and user interaction, and this can confuse and frustrate people who go looking for functions that exist on one website but aren’t found on another,” she said.
But Dr Martin says there are potential downsides to tools that identify ‘super-users’ if the system is not implemented transparently with audiences. “I can see the appeal of automation to help identify ‘superusers’. The problem is automated ranking tools and verified commenter systems create class systems. They disadvantage one-off and casual users. If the rules publishers introduce for users to be given greater visibility are not transparent, then those rules will only cause anger and, potentially, disengagement,” she said.
A report published by the World Editors Forum in 2013, titled ‘Online Comment Moderation: Emerging Best Practices’, identified the current models of moderation as pre-publication and post-publication processes. These typically involve moderation staff either approving each comment individually or assessing comments that have been flagged by other users; some publications combine the two. This allows publications to ensure reader commentary is contributing to the story. It is, however, time-consuming, and newsrooms may lack the resources for it, in which case, problematically, there may be no moderation at all.
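The two models the report describes can be sketched in a few lines of code. This is an assumption-laden illustration, not the report's or the project's actual implementation: the class names and the flag threshold are invented for the example.

```python
from dataclasses import dataclass

FLAG_THRESHOLD = 3  # hypothetical: flags needed before a comment is pulled for review


@dataclass
class Comment:
    author: str
    text: str
    approved: bool = False
    flags: int = 0


class ModerationQueue:
    """Pre-publication: staff approve before anything appears.
    Post-publication: everything appears immediately, and flagged
    comments are pulled back for staff review."""

    def __init__(self, pre_moderated: bool):
        self.pre_moderated = pre_moderated
        self.pending: list[Comment] = []    # awaiting staff review
        self.published: list[Comment] = []  # visible to readers

    def submit(self, comment: Comment) -> None:
        if self.pre_moderated:
            self.pending.append(comment)    # held until staff approve
        else:
            comment.approved = True
            self.published.append(comment)  # publish now, review only if flagged

    def approve(self, comment: Comment) -> None:
        comment.approved = True
        self.pending.remove(comment)
        self.published.append(comment)

    def flag(self, comment: Comment) -> None:
        comment.flags += 1
        if comment.flags >= FLAG_THRESHOLD and comment in self.published:
            comment.approved = False
            self.published.remove(comment)
            self.pending.append(comment)    # back to staff for review
```

The trade-off the report identifies is visible in the sketch: pre-moderation puts staff time in front of every comment, while post-moderation only spends it on comments readers flag.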
There is also the option of blocking specific users if they break the guidelines of a publication’s comment section. This can be done by account or, in the case of users who create multiple accounts, by IP address. Moderators can ban or closely monitor the activity of flagged users, or hide their comments from everyone except the offending user.
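These three sanctions can be sketched as follows. Again, this is an illustrative sketch under invented names, not any publication's actual system:

```python
class UserSanctions:
    """Account bans, IP bans, and 'shadow' hiding, where a user's
    comments remain visible only to that user."""

    def __init__(self) -> None:
        self.banned_accounts: set[str] = set()
        self.banned_ips: set[str] = set()
        self.shadow_hidden: set[str] = set()

    def ban_account(self, account: str) -> None:
        self.banned_accounts.add(account)

    def ban_ip(self, ip: str) -> None:
        # Catches users who evade account bans by registering new accounts
        # from the same address.
        self.banned_ips.add(ip)

    def shadow_hide(self, account: str) -> None:
        self.shadow_hidden.add(account)

    def may_post(self, account: str, ip: str) -> bool:
        return account not in self.banned_accounts and ip not in self.banned_ips

    def comment_visible(self, author: str, viewer: str) -> bool:
        # A shadow-hidden user still sees their own comments, so they may
        # not realise they have been sanctioned.
        if author in self.shadow_hidden:
            return viewer == author
        return True
```

The last method captures why hiding comments only from other readers is sometimes preferred to an outright ban: the offending user has less reason to create a new account.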
The report suggests guidelines that publications could apply when moderating a comment section:
- Don’t post content that is offensive or abusive. Most publications specifically cite racism, homophobia and sexism, as well as hate speech.
- Don’t post content that is illegal: most publications refer to defamation, libel and pornography.
- Don’t post irrelevant, off-topic content.
- Don’t swear.
- Don’t post content that is badly written, in all caps, riddled with abbreviations, or misspelled to the point of illegibility.
- Don’t write excessively long comments.
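Some of these guidelines require human judgement, but a few can be checked mechanically before a comment ever reaches a moderator. As a sketch (the thresholds below are hypothetical, not from the report):

```python
MAX_LENGTH = 2000      # hypothetical cap on comment length
MAX_CAPS_RATIO = 0.7   # hypothetical: flag comments that are mostly capitals


def guideline_violations(text: str) -> list[str]:
    """Return the mechanically checkable guidelines a comment breaks."""
    violations = []
    if len(text) > MAX_LENGTH:
        violations.append("excessively long")
    letters = [c for c in text if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > MAX_CAPS_RATIO:
        violations.append("written in all caps")
    return violations
```

Automated checks like these can cut the volume reaching human moderators, leaving judgement calls such as relevance and abuse to staff.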