Anti-Free Speech! Instagram to Deboost ‘Potentially Harmful Content’ Before It Spreads

Instagram has written in clear terms that the company's automated systems will reduce the spread of content not only for violating the platform's rules, but merely for having the potential to offend viewers.

The shrinking violets and pearl-clutchers of Instagram’s user base are in for a treat: Instagram announced it will restrict the flow of “potentially” offensive content before it even has the chance to reach their feeds. “If our systems detect that a post may contain bullying, hate speech or may incite violence, we’ll show it lower on Feeds and Stories of that person’s followers,” Instagram declared in a Jan. 20 blog, “How We Address Potentially Harmful Content on Feed and Stories.” “We’re constantly improving our systems to be as precise as possible, not only to help remove harmful content from Instagram, but to also make our enforcement as accurate as we can.” Instagram also explained it is “always trying to show you content from the accounts you engage with and have the most value to you, while minimizing the likelihood that you come across content that could be upsetting or make you feel unsafe.”
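
To make the down-ranking Instagram describes concrete, here is a minimal sketch of how a “potentially harmful” signal might push a post lower in followers’ feeds without removing it. Instagram has not published its classifier or ranking formula, so the Post fields, the threshold and the demotion factor below are purely hypothetical.

```python
# Hypothetical sketch of demoting (not removing) a post flagged as potentially harmful.
# The classifier score, threshold and demotion factor are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    caption: str
    engagement_score: float   # baseline ranking score from engagement signals
    harm_probability: float   # output of a hypothetical harmful-content classifier

def ranking_score(post: Post, demotion_threshold: float = 0.5,
                  demotion_factor: float = 0.3) -> float:
    """Return a feed-ranking score, demoted if the post looks potentially harmful."""
    if post.harm_probability >= demotion_threshold:
        # The post is not removed; it is simply shown lower in followers' feeds.
        return post.engagement_score * demotion_factor
    return post.engagement_score

posts = [
    Post("harmless vacation photo", engagement_score=0.9, harm_probability=0.05),
    Post("borderline caption", engagement_score=0.9, harm_probability=0.70),
]
for post in sorted(posts, key=ranking_score, reverse=True):
    print(f"{post.caption!r}: {ranking_score(post):.2f}")
```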

One key aspect of the new rules is that Instagram’s systems will attempt to detect similarity to previously censored content: “To understand if something may break our rules, we’ll look at things like if a caption is similar to a caption that previously broke our rules.” In short, if Instagram were to ban a conservative talking point for violating rules on so-called “hate speech,” and users tried to speak it in code as a workaround, the platform could still crack down on the new content at will. 
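
For illustration, a similarity check of that kind could be as simple as comparing a new caption’s wording against captions that were previously removed. The sketch below uses token-overlap (Jaccard) similarity as a stand-in; Instagram has not disclosed what similarity measure or model it actually uses, so the example captions and the 0.6 threshold are invented.

```python
# Hypothetical illustration of flagging a caption because it resembles captions
# that previously broke the rules. Jaccard similarity is a stand-in; Instagram
# has not disclosed its actual method.

def tokens(text: str) -> set:
    return set(text.lower().split())

def jaccard(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

# Invented examples standing in for captions that were previously removed.
previously_removed = [
    "example caption that broke the rules",
    "another caption that was taken down",
]

def resembles_removed_content(caption: str, threshold: float = 0.6) -> bool:
    """Flag a caption if it is sufficiently similar to any previously removed one."""
    return any(jaccard(caption, old) >= threshold for old in previously_removed)

print(resembles_removed_content("example caption that broke the rules again"))  # True
print(resembles_removed_content("a completely unrelated caption"))              # False
```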

Instagram has experimented for years with preemptive strategies to stop offensive posts before they are even published.

Instagram boasted of a then-new artificial intelligence (AI) program in a Dec. 16, 2019, blog about the social media giant’s “long-term commitment to lead the fight against online bullying.” Instagram claimed the AI program “notifies people when their captions on a photo or video may be considered offensive, and gives them a chance to pause and reconsider their words before posting.”

Instagram originally announced this new AI that preempts offensive posts in a July 8, 2019, blog headlined “Our Commitment to Lead the Fight Against Online Bullying.” The photo-sharing giant wrote that the program gives users “a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification. From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect.”
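
As a rough illustration of that “pause and reconsider” flow, the sketch below gates a caption behind an offensiveness check and lets the user undo it before it is published. The keyword list and function names are placeholders; the real feature relies on a machine-learning model Instagram has not published.

```python
# Hypothetical sketch of the "pause and reconsider" flow. The keyword list is a
# placeholder for Instagram's unpublished machine-learning model.
from typing import Callable, Optional

OFFENSIVE_TERMS = {"idiot", "loser"}  # illustrative placeholder list

def may_be_offensive(caption: str) -> bool:
    """Stand-in for the offensiveness model: flags captions containing listed terms."""
    return any(term in caption.lower() for term in OFFENSIVE_TERMS)

def submit_caption(caption: str, confirm_anyway: Callable[[str], bool]) -> Optional[str]:
    """Warn the user before posting a caption flagged as potentially offensive."""
    if may_be_offensive(caption):
        # Give the user a chance to pause, rewrite, or post anyway.
        if not confirm_anyway(caption):
            return None  # the user chose to undo the caption
    return caption  # the caption is posted unchanged

# Example: the user declines to post the flagged caption.
print(submit_caption("you are such a loser", confirm_anyway=lambda c: False))    # None
print(submit_caption("great day at the beach", confirm_anyway=lambda c: False))  # posted as-is
```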

Conservatives are under attack. Contact Meta headquarters at (650) 308-7300 and demand that Big Tech be held responsible for mirroring the First Amendment. If you have been censored, contact us using CensorTrack’s contact form, and help us make Big Tech more accountable.
