In the U.S., Section 230 of the Communications Decency Act (CDA) protects news organizations and other platforms such as Yelp and Twitter from lawsuits based on user content, in much the same way that phone companies aren't responsible for salacious phone calls, the Guardian noted. But if sites take an active role in policing user content, that protection may no longer apply. While legal protections vary dramatically from country to country, the "safest" practice tends to be general passivity in comment moderation, intervening only when a user reports an offensive comment.
A petition with more than 100,000 signatures asked Twitter to take a more active role in monitoring and removing inappropriate tweets, after news broke that activist Caroline Criado-Perez had received a deluge of threatening and harassing tweets, including many that promised rape, at a rate of about 50 per hour for 12 hours, Metro reported. Criado-Perez began receiving the abusive tweets shortly after her campaign for women's representation on British banknotes led to Jane Austen being named the new face of the £10 note.
Criado-Perez reported the abuse to police, who have since arrested a suspect. But she and others, including U.K. law enforcement, were disappointed by Twitter's response (or, rather, lack thereof). Del Harvey, Twitter's senior director of trust and safety, has since responded with a blog post announcing a "report tweet" button that will soon roll out to Twitter's desktop and Android versions. The feature was added to the iPhone app and mobile site about three weeks ago. Previously, users had to fill out a form to alert Twitter to individual cases of abuse, a process that seemed impractical while Criado-Perez was "drowning in rape threats," as she tweeted. Facebook has a similar "report" button for all content on its site.
This is not the first time the issue of moderation has come up for Twitter. In 2012, British journalist Guy Adams' Twitter account was temporarily suspended after a series of tweets criticizing NBC's Olympics coverage. Twitter later apologized — not for suspending Adams' account but for acting on a staffer's recommendation rather than discovering Adams' tweets through the official moderation channels, Gigaom reported. "We should not and cannot be in the business of proactively monitoring and flagging content, no matter who the user is," Twitter's general counsel Alex Macgillivray responded. Twitter was also forced to take a more active role in policing content in France, where it was asked to hand over account details for anti-Semitic tweeters, and in Germany, where it had to block a neo-Nazi account.
News organizations have been struggling to promote engaging conversations in comment sections while avoiding offensive contributions. While legal considerations can make a strong case against active moderation, more than half of respondents in an NPR survey said they'd prefer comments to be pre-screened. Moreover, a recent study found that comments affect readers' opinions of news events, heightening the importance of weeding out inaccurate and offensive posts.
But the sheer magnitude of comments — some sites, such as ESPN.com, get as many as 6.9 million comments in a month — means it’s difficult and in some cases impossible to pre-screen all of them. NPR is giving it a shot, Poynter reported, but other sites are pursuing alternatives to such aggressive moderation.
For instance, several news outlets, including USAToday.com and ChicagoTribune.com, have made Facebook login mandatory for commenting. Removing anonymity makes users more accountable for their words, so comment threads tend to "civilize" themselves without much intervention from the news organization. The shift has been linked to a drop in comment quantity but an increase in discussion quality, Poynter reported.
Twitter's decision to introduce a "report tweet" button seems sensible. But although news outlets themselves might be better protected by relying on user flagging rather than aggressive comment moderation, some may not find such a response adequate.