Social Media’s New Stance: 2020 Election Claims in Ads Stir Debate

Nick Jones
Social media platforms allowing political ads about the 2020 election

In a significant shift, Meta, the parent company of Facebook and Instagram, has announced that it will allow political advertisements on its platforms to question the outcome of the 2020 US presidential election. This change marks a departure from the stringent content moderation policies that social media platforms had implemented in the wake of the 2020 election and the subsequent attack on the US Capitol on January 6, 2021.

Meta’s new policy permits political ads to claim that past elections, including the 2020 presidential race, were rigged. However, it will continue to prohibit ads that “call into question the legitimacy of an upcoming or ongoing election.” Although the policy update has been in effect for over a year, it did not garner widespread attention until recently, when it was reported by The Wall Street Journal.

The company explained that this move aligns with its approach to last year’s midterm elections, during which it vowed to restrict ads targeting users in the United States, Brazil, Israel, and Italy that discouraged voting, challenged the legitimacy of ongoing elections, or prematurely declared victory. Furthermore, Meta made it clear that it would not remove posts from political candidates or regular users alleging voter fraud or claiming that the 2020 election was rigged.

Despite this policy change, Meta maintains its commitment to preventing electoral misinformation that could interfere with people’s ability to participate in voting or the census, such as false claims about election timing.

Unsurprisingly, this decision has sparked criticism, particularly from President Joe Biden’s reelection campaign. The campaign accused Meta of “choosing to profit off of election denialism” and emphasized that Joe Biden’s victory in the 2020 election was clear and fair, regardless of Meta’s stance on the matter.

Pressure on tech companies to combat election misinformation has been mounting since the events of January 6, 2021, when baseless claims about 2020 election fraud fueled the attack on the US Capitol. It’s worth noting that numerous lawsuits attempting to challenge the 2020 presidential election results were dismissed in state and federal courts across the country.

More recently, however, social media platforms have begun to reevaluate their policies on election-related content. Meta, YouTube, and X (formerly known as Twitter) have all reinstated accounts belonging to former US President Donald Trump since late last year. Meta clarified that while it wouldn’t penalize Trump for challenging the 2020 election results, it would prohibit him from casting doubt on upcoming elections.

X also announced its decision to once again allow political advertisements after a previous ban. YouTube, on the other hand, reversed a policy instituted more than two years ago, stating that it would no longer remove content featuring false claims about the 2020 US presidential election. However, the company maintained its stance on prohibiting content that misleads users about how and when to vote or encourages interference with democratic processes.

It’s important to note that YouTube’s policy change regarding 2020 election denialism applies to content and not ad policies. The platform continues to prohibit claims that are “demonstrably false and could significantly undermine participation or trust in an electoral or democratic process.”

Separately, Meta announced that it would require political advertisers worldwide to disclose their use of artificial intelligence in ads, starting next year. This move is part of Meta’s broader initiative to curb “deepfakes” and digitally altered misleading content. Additionally, the company stated that political advertisers would be barred from using its new artificial intelligence tools designed to create text, backgrounds, and other marketing content.

Meta’s decision to allow political ads that question the 2020 election results signifies a change in the company’s content moderation policies. While it permits ads that claim past elections were rigged, it continues to restrict content that undermines the legitimacy of ongoing elections. This shift aligns with the evolving landscape of social media platforms as they grapple with the challenges of political advertising and misinformation.
