Converged communicators currently enrolled in RTV4403 (that’s Media Criticism) have spent recent months discussing the problem of online trolling and the different approaches to addressing it. Now Twitter is rolling out new policies to combat trolls on the microblogging network.
As reported by Issie Lapowsky of Wired, Twitter is taking new steps to deter trolls (sometimes called twolls when they operate in the Twittersphere). Significantly, the service is launching a tool that can automatically flag Tweets that are likely to be abusive, based on comparisons with past behavior and context. Posts that promote violence are also likely to draw flags. Lapowsky writes that these changes are a sign of Twitter’s continued push to become a mainstream business: shareholders want to see more users, and Twitter hopes to attract them by promoting its no-trolling policies. In Lapowsky’s words:
The thinking now seems to be that it’s better to alienate destructive users if it means holding onto the good ones.
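The Wired report doesn’t detail how the flagging tool works under the hood, but the general idea, weighing a tweet’s text against the posting account’s history, can be sketched in a few lines of Python. Everything in the sketch below is an invented illustration: the toy lexicon, the signal names, and the thresholds are assumptions, not anything Twitter has disclosed.

```python
# Illustrative sketch only: a toy scorer that combines text signals with
# account history, loosely in the spirit of the "past behavior and context"
# approach described in the Wired report. All names and weights are invented.

VIOLENT_TERMS = {"kill", "hurt", "attack"}  # toy lexicon, not Twitter's

def flag_for_review(tweet_text, account):
    """Return True if the tweet should be queued for human review."""
    words = set(tweet_text.lower().split())
    score = 0.0

    # Context signal: language that promotes violence draws a flag outright.
    if words & VIOLENT_TERMS:
        score += 0.6

    # Past-behavior signals: previously reported or brand-new accounts look riskier.
    if account.get("report_count", 0) > 3:
        score += 0.3
    if account.get("account_age_days", 9999) < 7:
        score += 0.2

    return score >= 0.5

# Example: a week-old account with prior reports posting a threat gets flagged.
print(flag_for_review("I will hurt you", {"report_count": 5, "account_age_days": 2}))
```

Even a toy scorer like this hints at the trouble ahead: the signals are proxies for intent, not proof of it, which is exactly where false positives come from.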
As converged communicators who have taken Communication Law and Ethics (MMC3200) know, though, restricting behaviors like trolling is easier said than done. Automated screening systems have a distressing rate of false positives, and one-size-fits-all measures stumble more often than not. In a noted example showing that these debates are far from new, America Online (now AOL) banned the word breast from its service in 1995, purging content ranging from support groups for breast cancer survivors to fried chicken recipes before ending the experiment. Twitter, of course, is a private company, not a government agency, so it’s largely free to regulate content on its site. And many users agree that online trolling is a problem. But that doesn’t mean Twitter can’t face a PR backlash if its automatic filters don’t perform as expected.
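To make the false-positive problem concrete, here is a toy version of the kind of blunt keyword filter AOL used. The wordlist and sample posts are invented for illustration, but the failure mode is the same one that swept up the cancer-survivor groups and recipe posts.

```python
# Toy keyword filter illustrating the false-positive problem.
# The blocklist and sample posts are invented; the point is that a
# blunt text match can't tell abuse from a support-group announcement.

BLOCKLIST = {"breast"}  # the single word AOL famously banned in 1995

def is_blocked(post):
    """Flag any post containing a blocklisted word, regardless of context."""
    return any(word in BLOCKLIST for word in post.lower().split())

posts = [
    "Breast cancer survivors meet Tuesdays at 7pm",     # legitimate, but blocked
    "Try this buttermilk fried chicken breast recipe",   # legitimate, but blocked
    "You are all idiots and should leave this network",  # abusive, but missed
]

for post in posts:
    print(is_blocked(post), "-", post)
```

Twitter’s new tool is presumably far more sophisticated than a single-word blocklist, but the lesson carries over: a filter that can’t read context will punish the users it was meant to protect while letting plenty of genuine abuse through.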