Instagram Raises Troll Shields With Anti-Harassment Comment-Filtering Tools
Social networking platforms like Instagram, which Facebook acquired in 2012 for $1 billion, face the unenviable task of balancing organic community growth against policing users whose comments cross a line that isn't always clearly drawn. Through rules dictating what can and can't be posted, online services often play the part of judge and jury, but Instagram is now passing the gavel to its users.
That's not to say Instagram is abandoning its policies on what users are and are not allowed to post. Those will remain, but recognizing that certain rhetoric offends some people and not others, the company has decided to let users filter their own content streams as they see fit, including disabling comments altogether.
"Our goal is to make Instagram a friendly, fun and, most importantly, safe place for self expression," Instagram's head of public policy, Nicky Jackson Colaco, told The Washington Post. "We have slowly begun to offer accounts with high volume comment threads the option to moderate their comment experience. As we learn, we look forward to improving the comment experience for our broader community."
Instagram has already been testing the new comment-filtering tools on accounts that receive a high volume of comments, and it's rumored that pop singer Taylor Swift may have used them during her online feud with Kim Kardashian and Kanye West. That hasn't been confirmed, but either way, Instagram is looking to expand the feature to a wider audience, starting with high-profile accounts.
Dealing with trolls has been a persistent problem for social networks, especially Twitter, where users recently turned Microsoft's impressionable teen chatbot into a hate-spewing racist. Instagram is now bigger than Twitter, having amassed over 500 million monthly active users as of June.