Meta Designs New AI to Detect ‘Harmful Content,’ Despite Past AI Problems

Meta is doubling down on its use of artificial intelligence (AI) to detect alleged “harmful content,” even though AI algorithms have been shown to be biased or error-prone in censoring content.

Censorship and moderation AI is really “machine learning (ML) powered automation,” which has repeatedly proven to be lacking, according to Reclaim The Net. “Harmful content can evolve quickly, so we built new AI technology that can adapt more easily to take action on new or evolving types of harmful content faster,” Meta announced. “To tackle this, we’ve built and recently deployed Few-Shot Learner (FSL), an AI technology that can adapt to take action on new or evolving types of harmful content within weeks instead of months.” The announcement specifically cited COVID-19 vaccination information as an example of potential “misleading or sensationalized information” targeted by the new AI system. The AI will censor text and images in more than 100 languages, including alleged “hate speech,” according to Meta.

The glaring issue is that Meta will be defining what “harmful content” is, and its platform Facebook has a track record of highly biased censorship. Not only that, Facebook’s algorithms have proven defective in the past. In October, Facebook users were sharing an inspirational meme featuring an image of a daisy growing in a sidewalk and the sentence, “Stand up for what you believe in, even if it means standing alone.” The meme received a “sensitive content” censorship restriction. Even Democratic Rep. Alexandria Ocasio-Cortez has asserted that algorithms can be biased.

There are multiple similar instances of Facebook’s AI algorithms making astonishing alleged errors or mistakes. The Wimborne Militia, a historical reenactment organization, had its Facebook page disabled twice because the algorithm reportedly mistook it for a militia group; Facebook previously admitted error in the group’s case. Facebook also blocked Rachel Enns from sharing a fundraising page for a wheelchair van that would benefit two girls living with rare and life-threatening conditions. Facebook initially didn’t respond to journalist queries, but later admitted that it had made a mistake and now allows users to share the fundraiser. Facebook also reportedly censored a discussion of gardening tools by WNY Gardeners in July, after the platform apparently mistook the word “hoe” for a disparagement of women rather than the name of a garden tool.

Conservatives are under attack. Call Facebook Headquarters at 1-650-308-7300 to demand that Big Tech abide by the First Amendment. If you have been censored, contact us using CensorTrack’s contact form. Please help us hold Big Tech accountable.
