Facebook has long resisted calls to scrap political advertising on the platform and to limit targeted messaging, amid fears that the tools might be used to sow discord. The company defends those policies as a way to safeguard free expression and online organizing efforts.

But internally, staffers acknowledged that the cost of those services is that politicians will likely exploit them to spread misinformation and target vulnerable users, according to documents reviewed by The Washington Post as part of the Facebook Papers investigation.

“We will definitely see misinfo from political parties and candidates that we will not fact-check, which will hurt public trust,” read a slide deck from early 2020 assessing product risks, including misinformation in ads. “We also expect custom audiences for political and social issue ads to be used to narrowcast misinfo to vulnerable communities.”

The documents, disclosed to the U.S. Securities and Exchange Commission by Facebook whistleblower Frances Haugen and provided to Congress in redacted form by Haugen’s legal counsel, were reviewed by a consortium of news organizations, including The Post. Together they provide an unparalleled look into how the tech giant weighs tradeoffs between safety and profit.

Digital advertising is a primary revenue driver for Facebook, which gives business partners a plethora of options for targeting users with messages based on demographics and interests. Even political ads, which executives say account for only a small fraction of that revenue, have generated billions of dollars for the social network, according to reports.

Internal documents show Facebook staffers determined that the company's hands-off approach to political ads, and to the targeting of ads and other political content at users, posed significant risks.

A Feb. 25, 2020, slide deck titled “US 2020 Product Risk Assessment — Update” rated the “residual risk” posed by misinformation in Facebook ads as “high,” even if the company managed to “execute perfectly” the interventions it was weighing to mitigate the threat, such as increasing fact-checking by its third-party partners.

The threat rating was based, according to the slide deck, on the severity of the potential harm posed and the likelihood of it “being successfully exploited.”

Misinformation posted by politicians, and ads targeting narrow user segments, were particular vulnerabilities, according to the document.

Separate Facebook Papers documents previously reported by The Post, based on data from 2019, found that misinformation shared by politicians was more damaging than misinformation coming from ordinary users. But the company maintains a policy against fact-checking posts by public officials.

“With the exception of opinions and speech from politicians, ads with political content are eligible for fact checking through our program,” Facebook spokeswoman Dani Lever said in a statement Tuesday. “We don’t believe decisions about political ads should be made by private companies, but reject any ad that violates our rules — including from politicians.”

Facebook faced withering criticism during the 2020 campaign from some liberal lawmakers and candidates for exempting politicians from fact-checking and for declining to significantly limit how narrowly users can be targeted with advertisements, even as rivals such as Twitter did away with political ads altogether. Facebook temporarily suspended new political ads before the November elections and paused all political and issue-based advertising afterward “to avoid confusion or abuse following Election Day,” but it has since resumed them. Facebook said in March it would examine “whether further changes may be merited” to the policy.

Republican officials, including Sen. Ted Cruz of Texas, initially cheered Facebook for declining to ban political ads altogether, calling it a victory for free speech on the internet, but later criticized the company for imposing the temporary ban, part of which began before the election. Political ads are a major fundraising tool for both Democratic and Republican candidates.

Internally, employees said Facebook’s fact-checking partners, which the company relies on to add context to misleading or false posts, were vastly outgunned by the deluge of political misinformation percolating online.

The document estimated that Facebook’s partners were able to fact-check more than 200 posts a day, but that only 11 of those were what it called “viral civic posts” reaching more than 100,000 daily views. The report recommended increasing “fact-checking capacity” to cover 100 of those viral posts a day, a figure “highly based on how repeated the viral civic misinformation is in a given day.” The assessment also acknowledged that Facebook is “not great at detecting Misinfo, especially in Spanish or in media (image/video).”

A separate literature review of outside research on “Targeted Political Content” dated Oct. 15, 2020 — just weeks before the 2020 elections — outlined the potential benefits and risks of allowing political material to be narrowly tailored to a subset of users in ads and other political content.

The document identified four potential benefits, including helping civil society groups organize efficiently, allowing groups to mobilize voters and increasing the diversity of information online. But it also outlined seven potential harms: “Targeted political content can potentially harm people by narrowly delivering divisive appeals to vulnerable audiences; inciting violence; intimidating, discouraging, or misleading voters; creating echo chambers; and decreasing accountability for politicians.” It did not distinguish between organic content and ads.

The review also noted that targeting “can divide communities or even incite violence by delivering outrage-inducing or fearmongering content to susceptible audiences that will have an outsized negative reaction,” such as people who exhibit “politically extreme beliefs,” an “authoritarian” personality or a belief in “racial conservatism.”

Facebook’s Lever said, “Ahead of the US 2020 election, we gave people the ability to see fewer political ads, more control around how political advertisers could reach them, and temporarily paused all political advertising following the election.”

The concerns around political ads and microtargeting have been the subject of intense interest not only externally but also internally, documents show.

In early 2020, when Samidh Chakrabarti, then Facebook’s civic integrity lead, informed employees that the company planned to largely stand pat on its political ads policy, the internal post sparked a back-and-forth with dozens of replies, including over whether the changes Facebook did propose went far enough. (Facebook instead announced steps aimed at boosting transparency and control around political ads.)

The separate 2020 risk assessment identified “reputational risk and employee churn” stemming from the “internal and external perception that we benefited from our position to allow political ads.” The report said the company planned to launch an “internal comms plan to address employee concerns.”

The issue also rose to the company’s highest level.

In Chakrabarti’s post from Jan. 8, 2020, he wrote that Facebook CEO Mark Zuckerberg “made these decisions” on political ads and “concluded that further targeting restrictions would result in far too great collateral damage to get-out-the-vote campaigns and other advocacy campaigns who use these features for vital political mobilization.”
