Instagram Swiftly Removes Swift’s Snakes – A Conspiracy?
In the last couple of weeks, Twitter and Instagram have taken gutsy steps to curb trolling on their platforms. In a move to penalize those posting hateful content, Twitter banned the controversial Breitbart blogger Milo Yiannopoulos for harassing actor Leslie Jones, while an Instagram software test unintentionally landed in the snakepit of the Taylor Swift / Kanye West feud.
In the Instagram case, the company released a software update that removed the snake images fans of Kanye West had posted by the hundreds on Swift's feed to accuse her of lying about her knowledge of lyrics about her in a West song. When the snakes vanished, those fans accused Instagram of conspiring with Swift. No conspiracy, the company responded, just a test of new comment moderation filters.
Both moves are part of new policies aimed at using machine learning software to reduce harassment, abuse, and bullying. Twitter recently posted, "We are continuing to invest heavily in improving our tools and enforcement systems to better allow us to identify and take faster action on abuse as it's happening and prevent repeat offenders." The company is also beefing up its account verification process by tying interactions to users' real identities.
Instagram is raising the bar even higher, announcing a set of tools that will allow users to self-moderate their feeds. Users will be able to turn moderation on, have Instagram filter out offensive content, program the tool to filter out content they select (as Swift may have been allowed to do), or turn off comments entirely.
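Instagram has not published how these filters work, but the user-facing options map onto a simple moderation pipeline. The sketch below is a hypothetical illustration of that pipeline, not Instagram's code: the CommentFilter class, its settings, and the default blocklist are all assumptions made for the example.

```python
# Hypothetical sketch of a self-moderation pipeline like the one Instagram
# describes. Class names, settings, and the blocklist are invented for
# illustration; this is not Instagram's implementation.

DEFAULT_BLOCKLIST = {"snake"}  # stand-in for a platform-maintained list


class CommentFilter:
    def __init__(self, enabled=False, custom_terms=None, comments_allowed=True):
        self.enabled = enabled                       # "turn moderation on"
        self.custom_terms = set(custom_terms or [])  # user-selected content
        self.comments_allowed = comments_allowed     # "no comments at all"

    def allows(self, comment: str) -> bool:
        """Return True if the comment should appear on the user's feed."""
        if not self.comments_allowed:
            return False
        if not self.enabled:
            return True
        text = comment.lower()
        blocked = DEFAULT_BLOCKLIST | self.custom_terms
        return not any(term in text for term in blocked)


# A user configured roughly the way the article suggests Swift's account
# may have been: moderation on, with a custom term added.
feed_filter = CommentFilter(enabled=True, custom_terms={"🐍"})
print(feed_filter.allows("Great photo!"))  # True
print(feed_filter.allows("🐍🐍🐍"))          # False
```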
Yahoo is also expressing enthusiasm for its own machine learning system, recently saying, "In 90% of test cases, it was able to correctly identify an abusive comment — a level of accuracy unmatched by humans, and other state-of-the-art deep learning approaches." If posted to Instagram, that statement would get the snake treatment from online community managers: no machine learning system will ever match the accuracy of a trained team of human moderators.
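Yahoo has described its system only at a high level, so to make the "90% of test cases" figure concrete, here is a minimal, hypothetical sketch of how an abusive-comment classifier could be trained and scored with scikit-learn. This is a generic bag-of-words baseline, not Yahoo's deep learning approach, and the toy dataset and every identifier below are invented; accuracy on a handful of examples says nothing about a production system.

```python
# Minimal sketch: train a text classifier and measure the share of test
# comments it labels correctly. Toy data is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

comments = [
    "you are a liar and a fraud", "go away nobody wants you here",
    "great song, congrats on the release", "loved the new video",
    "worst person alive, delete your account", "this made my day, thank you",
    "you should be ashamed of yourself", "such a talented artist",
]
labels = [1, 1, 0, 0, 1, 0, 1, 0]  # 1 = abusive, 0 = acceptable

X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=0.25, random_state=0)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

# "Correctly identified an abusive comment in N% of test cases" is, at its
# simplest, accuracy on a held-out test set:
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```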