Facebook’s transparency report shows when it does and doesn’t remove content

Before you sign up for Facebook, you may want to read this leaked document. Wired and Politico report that it shows the company manually reviews its pages to decide whether to take down specific content. Because the process also relies on algorithms, however, these determinations may not always be accurate; the data is “subject to change in real time,” according to the documents.

Posts you have reported as “sensitive” may not be removed from your page at all, or may be taken down for various reasons; those reports are often accompanied by other, completely innocent content that the system automatically throws out. In some cases, when a user shares a post and a subset of their followers then re-shares it, the post may be removed automatically for violating Facebook’s community standards. Some posts have even been deleted because a user liked, followed, or left comments on someone else’s post.
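To make the re-share case concrete, here is a minimal sketch in Python of how such an automatic rule might behave. The threshold, the function names, and the standards check are all assumptions for illustration, not details from the leaked documents:

```python
RESHARE_THRESHOLD = 0.30  # hypothetical fraction of followers re-sharing

def should_auto_remove(post, followers, resharers, violates_standards):
    """Hypothetical rule: automatically remove a post once a large enough
    subset of the author's followers has re-shared it and it matches a
    community-standards filter."""
    if not followers:
        return False
    reshare_rate = len(resharers & followers) / len(followers)
    return reshare_rate >= RESHARE_THRESHOLD and violates_standards(post)

# Illustrative use: 2 of 5 followers re-shared a post the filter flags.
followers = {"a", "b", "c", "d", "e"}
resharers = {"a", "b"}
print(should_auto_remove("post-1", followers, resharers, lambda p: True))  # True
```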

According to Wired and Politico, Facebook “regularly killed” inappropriate content from the groups’ pages both before and after the start of 2018. The flagged categories covered controversial content from all angles, cited as offensive comments, posts with pictures of women in bikinis, or posts calling for the murder of women. One group was caught using racist slurs, another posted graphic images, and a third made sexually suggestive comments. Other derogatory images were purged from the groups for their discriminatory content, but the same jokes were not removed.

Facebook regularly removed videos and songs for these offenses. As the company reviewed the content, however, some of it was eventually reinstated, and the rate varied widely based on a person’s location. For example, a user in Washington, D.C., might see videos removed 20,000 times per week, while someone in Russia was unlikely to see more than 1,000, fewer than in any other country.

Because of this discrepancy, journalists were able to study those millions of reports and extrapolate from them, determining that less than 25 percent of reported content was removed for violating Facebook’s policy.
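As a rough illustration of the arithmetic behind that figure, here is a minimal Python sketch of computing a removal rate from a dump of report records. The field names and data are entirely hypothetical, not Facebook’s actual schema:

```python
from collections import Counter

# Entirely hypothetical report records; the field names are assumptions.
reports = [
    {"post_id": 101, "reason": "hate_speech", "removed": True},
    {"post_id": 102, "reason": "nudity", "removed": False},
    {"post_id": 103, "reason": "spam", "removed": False},
    {"post_id": 104, "reason": "harassment", "removed": False},
]

removed = sum(1 for r in reports if r["removed"])
print(f"Removal rate: {removed / len(reports):.1%}")  # 25.0%

# Which reported reasons actually led to removals.
print(Counter(r["reason"] for r in reports if r["removed"]))
```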

And some of that content was usually part of a larger “takedown” action. Take, for example, the 2018 Trending Topics on Facebook report. The feature is driven by an algorithm that identifies the most recent items in your stream of friends’ posts, some of which may at some point also be included in the Trending Topics feature. By comparing items seen on Facebook and in other accounts with screenshots of the Trending Topics feature, the team may be able to tell whether an item that keeps appearing is “legitimately trending” on Facebook.
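A minimal sketch of that comparison, assuming it boils down to intersecting item identifiers extracted from feed captures and from Trending Topics screenshots; the function name and the data are illustrative, not taken from the report:

```python
def legitimately_trending(feed_snapshots, trending_screenshots):
    """Return item IDs that appear both in captured feeds and in
    Trending Topics screenshots, i.e. candidates for items that are
    'legitimately trending'."""
    seen_in_feeds = set().union(*feed_snapshots)
    seen_trending = set().union(*trending_screenshots)
    return seen_in_feeds & seen_trending

# Hypothetical captures: item 7 appears in feeds and in a screenshot.
feeds = [{3, 7, 9}, {7, 12}]
screens = [{7, 21}]
print(legitimately_trending(feeds, screens))  # {7}
```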

When a Trending Topic hits your feed, the team informs Facebook of its validity in order to “provide value to our news feed product.” In the Trending Topics on Facebook report, over 95 percent of what was removed was actually “not positive or negative content,” but rather content that was simply part of a Trending Topic. While certain posts or comments may be removed under a “zero tolerance” policy against hate speech or nudity, others may be yanked from the feed for violating the site’s privacy policies or other laws.

What should you do if you want your posts to remain on Facebook?

Sites like Snopes and PolitiFact have come to Facebook’s defense, remaining steadfast that there is no widespread problem with user posts being removed by the platform. But with the release of these documents, and with the tech company attempting to lay its recent corporate spying scandal to rest, what better time to get a look at how often certain pages get flagged and how the site algorithmically determines which content is acceptable?

According to a post on Facebook’s US Community Standards page, the company recommends “content that includes photos, videos, and audio that are clearly of an educational, factual, accurate, or artistic nature; content that includes photos and videos that show real people or animals doing everyday things; and photos or videos of food.” It also recommends videos that “inform, entertain, or promote human dignity and human rights.”

These guidelines and rules should sound familiar to anyone who has been on Facebook before. In 2012, a group of ABC executives banned content that included politically incorrect language. Reddit also previously had strict guidelines banning offensive and unsavory items from its platform.
