As a rapidly expanding social media outlet with many millions of active users worldwide, Twitter sees its share of content that might be deemed offensive, tasteless, inflammatory, or otherwise controversial. Although very little content is specifically banned on Twitter (rules are slightly tighter for advertisers, as described here), the service does place some regulations on media types. Twitter’s Help Center spells out some key points:
- Users posting “sensitive content,” described by Twitter as content involving “nudity, violence, or medical procedures,” are expected to select a check mark as a warning to others.
- Users viewing tweets with sensitive content will normally see the box pictured above; to access the media, they can simply click the View button.
- If viewers see sensitive tweets that were not marked as such by their creator, they may flag the tweets for review by Twitter administrators.
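The workflow those three points describe amounts to a simple decision rule: the poster's checkbox plus the viewer's preference determine what appears, and an unmarked-but-sensitive tweet can be escalated to administrators. The sketch below is an illustrative model of that logic only; the class names and settings are hypothetical, not Twitter's actual API.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    # True when the poster selected the "sensitive content" check mark
    marked_sensitive: bool

@dataclass
class ViewerSettings:
    # True when the viewer has opted to see sensitive media without a warning
    show_sensitive_media: bool

def media_display(tweet: Tweet, viewer: ViewerSettings) -> str:
    """Decide how a tweet's media appears to a given viewer."""
    if tweet.marked_sensitive and not viewer.show_sensitive_media:
        # Viewer sees the warning box with a "View" button instead of the media
        return "warning"
    return "visible"

def flag_for_review(tweet: Tweet, viewer_thinks_sensitive: bool) -> bool:
    """Unmarked tweets that a viewer deems sensitive go to administrators."""
    return viewer_thinks_sensitive and not tweet.marked_sensitive
```

Note that the burden is distributed: the poster supplies one bit of metadata, the viewer supplies one preference, and human review is invoked only when the two disagree.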
Ethically, Twitter’s regulation tries to balance the rights of content creators, the rights of content viewers, and its own interests as a private company seeking to maximize its usage and revenue. This task is difficult. Most people view creativity as basically good, but even the most ardent supporter of free speech will recognize that unrestricted expression can also serve harmful ends. Because Twitter controls its own operation, it has the right to regulate how people use its resources; after all, people who are dissatisfied with Twitter’s content restrictions are free to leave the network and choose an alternative channel. Nonetheless, since Twitter naturally wishes to keep as many users satisfied as possible, the service must sift through many individual cases while weighing the conflict between its principles of promoting expression and preventing abuse.
The challenge of regulating online media is even more difficult when considering the issue of context. Within the last two months, Jacksonville has experienced a controversy surrounding the use of a photograph with partial nudity as part of an exhibition at the Museum of Contemporary Art. Several City Council members objected vehemently to the exhibition, denouncing it as pornography, while many supporters defended it as a work of art celebrating motherhood and family. Even recognizing the number of layers (including a heavy dose of politics) in the dispute, the issue underscores a valuable point. If human interpreters with a basically common linguistic and cultural background disagree about whether an image constitutes sensitive content, what chance do computer algorithms have?
Those who work regularly with computers know how erratic computer-based content shields can be. At the Florida Times-Union, for example, an internal system designed to prevent obscenities from being published takes a “better safe than sorry” approach. Terms that are non-controversial in their context are sometimes blocked, requiring editors to restore them manually before printing. In some cases, strings of letters within a word or even spanning adjoining words can trigger the automatic censor. And since computers can recognize the content of words much more easily than that of images, Twitter would likely arouse worldwide consternation if it attempted to regulate media using software, wrongly blocking a multitude of legitimate posts as false positives.
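The false-positive problem described above (often nicknamed the Scunthorpe problem) is easy to reproduce with a naive substring filter. The sketch below is purely illustrative and is not the Times-Union's actual system; it uses a single deliberately mild blocked term to show how innocent words trip the censor, and how a word-boundary match reduces, without eliminating, the false positives.

```python
import re

BLOCKED = {"ass"}  # a single example term for illustration

def naive_censor(text: str) -> str:
    """Block any text containing a blocked string anywhere, even inside
    innocent words -- the 'better safe than sorry' approach."""
    lowered = text.lower()
    for term in BLOCKED:
        if term in lowered:
            return "[BLOCKED - editor review required]"
    return text

def word_boundary_censor(text: str) -> str:
    """Match only whole words, which lets 'class' and 'assembled' through."""
    for term in BLOCKED:
        if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            return "[BLOCKED - editor review required]"
    return text

# The substring filter blocks an entirely innocent sentence:
print(naive_censor("The class assembled on time."))          # blocked
# The word-boundary version passes it unchanged:
print(word_boundary_censor("The class assembled on time."))  # passes
```

Even the improved version fails on genuine obscenities split across word boundaries or disguised with punctuation, which is why such systems still route edge cases to human editors.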
Twitter may have found an effective middle ground between a heavy-handed approach that snuffs out expression and a totally unrestricted, Wild West-style philosophy in which anything goes. Instead of a comprehensive policy that covers every conceivable type of posted material, Twitter adopts a largely voluntary regulatory framework. If people posting potentially controversial media mark their posts, most potential sources of dispute will be prevented. Viewers, in turn, can configure their settings to determine whether tweeted media items flagged as sensitive will appear on their screens. Though the policy may not satisfy everyone, it is largely effective in keeping questionable material off the screens of those who wish to avoid it, while not unduly burdening media creators. Twitter’s method illustrates the type of compromise, imperfect but perhaps inevitable, that will mark the continued debates about content in new media.