The Hand That Feeds You: A Review of Misinformation Policies over Social Media Platforms

(Photo by Tracy Le Blanc)


By Zach Mallender


This article was written by a student of the Honors Praxis Lab 3700-002: Truth, Deception, and Information Disorder. This article is part of a joint issue between the Daily Utah Chronicle and this Praxis class. Other facets of the Praxis Lab’s capstone project can be found on their website.


Digital misinformation has become a prevalent issue in today's world, and social media companies are noticing. Deceptive information saturated Facebook during the 2016 election, making it clear that misinformation is most powerful on social media, where it can spread and fester unchecked. After this revelation, social media companies, particularly Facebook, were initially slow to react, but the damaging effects of rampant troll farms and false advertisements have grown even more dire with the approach of the next presidential election. Over the past few months, many of the most prominent social media companies have updated their policies to combat misinformation on their sites more effectively, although those policies carry some notable exceptions. What follows is a brief summary of the fine print of three platforms whose positions are vital to shaping the role of misinformation in our political future.



Facebook is the most pivotal stronghold of misinformation, both in terms of audience and policy: 2.5 billion people log onto Facebook each month. Facebook now employs third-party fact-checkers and outside experts, such as the WHO, to vet posts labeled as news that may contain false information. Advertisements go through the same vetting process, and ads found to be false are banned from the site.

However, there are holes in Facebook's policies, creating a climate that remains dangerously susceptible to misinformation on a broad scale. "Organic" news pieces, or posts from a user of the site, are not outright banned; instead they are "moved lower on the news feed." Pages that repeatedly post false content have their "distribution reduced" and may be demonetized. Posts and ads created by politicians are completely exempt from the fact-checking rule, meaning their spread will not be reduced or controlled in any way, nor will any warning be added to them. An even broader exception can potentially apply to any significant misinformation event: "In some cases, we allow content which would otherwise go against our Community Standards — if it is newsworthy and in the public interest." This gives Facebook total power over the final decision on what actions are taken once something is categorized as misinformation.



Instagram is another hotspot for misinformation, specifically in the emergence of virtual actors: computer-generated models that pose as real people and spread opinions that companies have paid for. To mitigate this phenomenon, Instagram has a policy that prevents the impersonation of others and requires accurate registration information. Plenty of misinformation is also spread by real users, and policies exist to address that as well. In May, Instagram began using third-party fact-checkers globally to catch misinformation. Anything deemed false has its distribution reduced by removal from the explore and hashtag pages, and is marked with a warning telling viewers that some information in the post has been found false. On top of all of this sits an overarching policy that "you can't do anything unlawful, misleading, or fraudulent." This means that anything deemed misleading constitutes a violation of the terms of use, which could be grounds for removal of the post and potential banning of the account.

Another recent policy change is the linking of Facebook and Instagram fact-checking networks. Anything found false on one platform is automatically marked on the other, and an image matching search looks for any other instances of the offending post. 



Twitter has limited policies in place to combat misinformation and diverts blame for false information onto the creator or poster of the tweet. The only system for directly reporting false information is available solely to users in France, where French law requires an avenue to report misinformation relating to elections. However, Twitter does have policies against large-scale artificial boosting of information, although these target the boosting methods rather than the false information itself. Users can't create overlapping accounts or coordinated networks of bots to boost the visibility of a specific topic.

Clearly, policy and practice vary across these platforms, and no single technique has emerged as better than the others. However, regardless of how each company decides to approach misinformation, they will all be tested as we approach the 2020 election and perhaps the greatest misinformation campaigns we have ever seen.


[email protected]