U.S. lawmakers investigating how Facebook and other online platforms shape users’ world views are considering new rules for the artificial intelligence programs blamed for spreading malicious content.

This legislative push is taking on more urgency after a whistle-blower revealed thousands of pages of internal documents showing that Facebook employees knew the company’s algorithms, which prioritize growth and engagement, were driving people to more divisive and harmful content.

Every automated action on the internet -- from ranking content and displaying search results to offering recommendations or showing ads -- is controlled by computer code written by engineers. Some of these algorithms map simple inputs, such as keywords or video quality, to fixed outputs, while others use artificial intelligence to learn about people and the content they generate, producing more sophisticated sorting.
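The difference matters for regulation: a fixed-rule ranker is easy to audit, while a learned, engagement-optimized one is not. As a rough illustration only -- the Post fields and scoring weights below are invented for this sketch, not drawn from Facebook’s systems -- here is how the two approaches might look in Python:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    age_hours: float
    likes: int
    comments: int
    shares: int

def rank_chronologically(posts):
    """Fixed rule: newest first; nothing is learned about the user."""
    return sorted(posts, key=lambda p: p.age_hours)

def engagement_score(post):
    """Hypothetical engagement objective. The weights are illustrative
    assumptions, not Facebook's actual values; comments and shares are
    weighted more heavily because they tend to drive further interaction."""
    raw = 1.0 * post.likes + 5.0 * post.comments + 10.0 * post.shares
    return raw / (1.0 + post.age_hours)  # discount older posts

def rank_by_engagement(posts):
    """Engagement-optimized ordering: highest predicted interaction first."""
    return sorted(posts, key=engagement_score, reverse=True)
```

In a production system, hand-tuned weights like these would be replaced by machine-learned models trained on enormous volumes of interaction data, which is precisely what makes the resulting ranking the kind of “black box” lawmakers describe.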

Both Republicans and Democrats agree there should be some accountability for tech companies, even though Section 230 of the 1996 Communications Decency Act provides broad legal immunity for online platforms.

While there has been some consensus around updated privacy rules and tech-focused antitrust bills, two week-long recesses next month and fiscal deadlines looming in December mean there is precious little time for concrete action this year.

After lawmakers wrestled with how to write rules allowing or prohibiting certain kinds of speech, an approach that risks running afoul of the First Amendment, regulating the automated algorithms themselves is emerging as a possible strategy.

“The algorithms driving powerful social media platforms are black boxes, making it difficult for the public and policy makers to conduct oversight and ensure companies’ compliance, even with their own policies,” Senator Ed Markey, a Massachusetts Democrat, told Bloomberg. He introduced a bill in May he said would “help pull back the curtain on Big Tech, enact strict prohibitions on harmful algorithms, and prioritize justice for communities who have long been discriminated against as we work toward platform accountability.”

Several senators touted their own algorithm-focused bills while questioning Frances Haugen, the Facebook whistle-blower, when she appeared before Congress earlier this month. While Haugen didn’t endorse any specific piece of legislation, she did say the best way to regulate online platforms like Facebook is to focus on systemic solutions, especially transparency and accountability for the machine-learning architecture that powers some of the world’s biggest and most influential companies.

Senator Richard Blumenthal, the Connecticut Democrat who as chair of the Senate consumer protection subcommittee has led the congressional investigation of Haugen’s allegations, last week invited Facebook Chief Executive Officer Mark Zuckerberg to testify before Congress. Blumenthal, in a statement Monday, identified the machine-learning structure of the company’s platform as a danger not only to users, but also to democracy.

“Facebook is obviously unable to police itself as its powerful algorithms drive deeply harmful content to children and fuel hate,” Blumenthal said. “This resoundingly adds to the drumbeat of calls for reform, rules to protect teens, and real transparency and accountability from Facebook and its Big Tech peers.”

While Haugen’s revelations add to the bipartisan anger directed at big tech companies, asking a gridlocked Congress to regulate a technically complex and fast-moving industry is a tall order. Lawmakers have been discussing potential bills to chip away at Section 230, especially since the issue rocketed to national prominence when former President Donald Trump last year vetoed an unrelated defense bill amid demands that the legal shield be repealed. Congress overrode the veto.

But no one proposal has emerged as a front-runner, and several groups of activists -- even those advocating for new tech regulation -- have pointed out that government regulation of speech risks silencing already marginalized voices.

Hence the focus on algorithms. Proponents of this approach allow that platforms shouldn’t be liable for user-generated content, but argue that they bear responsibility for how their systems are designed to amplify certain kinds of information.

Facebook’s own employees recognized this editorial responsibility, according to the internal documents that Haugen shared with Congress and the Securities and Exchange Commission. In one 2019 report, a Facebook employee laments how “hate speech, divisive political speech, and misinformation on Facebook and the family of apps are affecting societies around the world.”

“We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform,” the document says, describing the objectives designed for the algorithms. “The mechanics of our platform are not neutral.”

A new bill from Representative Frank Pallone, chair of the House Energy and Commerce Committee, would revoke Section 230 protections for any online platform that uses algorithms to amplify or recommend dangerous content.

“Designing personalized algorithms that promote extremism, disinformation, and harmful content is a conscious choice,” said Pallone, a Democrat from New Jersey, in announcing the bill. “And platforms should have to answer for it.”

Markey’s bill, which was also introduced in the House by Democratic Representative Doris Matsui of California, takes a different approach: it leaves Section 230 untouched and instead imposes new requirements on how companies use algorithms. The bill would set safety guardrails for these automated processes and require more transparency for consumers and federal regulators.

Another proposal takes a lighter touch but has bipartisan support in the Senate. That bill, from South Dakota Republican John Thune, Blumenthal and others, would require online platforms to let users turn off the “filter bubble” created by an algorithm so they could see content in chronological order instead.

“The simple solution of the filter bubble really is to give consumers the option, give them the choice, give them the freedom to opt out of an algorithm-manipulated platform,” Thune said in an interview.
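In engineering terms, the opt-out Thune describes could be as simple as a per-user preference that bypasses the ranking model. The sketch below, which reuses the hypothetical helpers from the earlier example, shows one way such a toggle might work; the preference key is an invented name, not any platform’s actual setting:

```python
def build_feed(posts, user_prefs):
    """Assemble a feed, honoring a hypothetical ranking opt-out of the
    kind the Thune-Blumenthal bill contemplates. The preference key is
    illustrative; rank_by_engagement and rank_chronologically come from
    the earlier sketch."""
    if user_prefs.get("algorithmic_ranking", True):
        # Default: the platform's engagement-optimized ordering.
        return rank_by_engagement(posts)
    # Opt-out: a transparent, reverse-chronological feed.
    return rank_chronologically(posts)
```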

Haugen, speaking to the U.K. Parliament on Monday, urged policymakers around the world to act quickly to regulate the artificial intelligence underpinning online platforms’ algorithms.

“We have a slight window of time to regain people’s control over AI,” Haugen said. “We have to take advantage of this moment.”
