Control, Stifle, Censor: Social Media’s Toxic Double-Edged Policies

A debate over online censorship is raging in the US, but corporate content moderation practices have long affected people around the globe.

Illustration by Sam Island

“If you thought disinformation on Facebook was a problem during our election, just wait until you see how it is shredding the fabric of our democracy in the days after.” Bill Russo, deputy press secretary to President-elect Joe Biden, expressed this bitter sentiment in a since-deleted tweet not long after the US presidential election was called in November. It’s a familiar theme — lawmakers, activists, and thought leaders have been delivering one variation or another of it in recent years, particularly since the 2016 election of Donald Trump.

All the talk may finally be coming to a head. Democrats and Republicans alike are reassessing Section 230 of the Communications Decency Act, which relieves “interactive computer services” from being treated as the publisher of third-party content and provides limited immunity if the services make a good faith effort to restrict prohibited material. While many on the left complain about hate speech and misinformation on social media, Republicans charge that conservative ideas and voices are suppressed.

In late October, the CEOs of Facebook, Twitter, and Google testified before the Senate Commerce Committee for a hearing to examine whether Section 230 “has outlived its usefulness in today’s digital age.” In his opening remarks, Sen. Roger Wicker (R-Miss.), the committee’s chairman, acknowledged the importance of the law:

“Section 230 gave content providers protection from liability, to remove and moderate content that they or their users considered to be ‘obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.’ This liability shield has been pivotal in protecting online platforms from endless and potentially ruinous lawsuits.”

He continued: “It has also given these internet platforms the ability to control, stifle and even censor content in whatever manner meets their respective standards. The time has come for that free pass to end.” On this point, Wicker was off the mark. The protection of free speech under the First Amendment of the US Constitution, not Section 230, is what allows tech companies to promote certain types of content over others while setting and enforcing the boundaries of what constitutes “acceptable” content on their proprietary platforms.

But it’s true that these platforms are limiting online voices. Over the years, a wide variety of communities, from LGBTQ Americans to Aboriginal Australians, have complained of censorship at the hands of Big Tech. Though the discussion around platform speech may feel freshly politicized today, the broad and often arbitrary content moderation practices of these companies have long carried profound implications well outside of the US.

Blind trust

In 2004, as Mark Zuckerberg was watching from his Harvard dorm room while his classmates uploaded personal details and photos to Facebook’s early prototype, he had a quick but telling exchange with a friend over Instant Messenger. “Yeah so if you ever need info about anyone at Harvard… Just ask. I have over 4,000 emails, pictures, addresses, SNS… People just submitted it. I don’t know why,” he wrote. “They ‘trust me’… Dumb fucks.”

When Facebook opened its virtual doors to the world beyond colleges and universities two years later, it did so without articulating any “community standards.” Instead, the platform offered only terms of service written in legalese. The terms were lengthy and confusing, as most are, but the fine print was clear: no nudity or graphic violence; nothing hateful, threatening, or intimidating; nothing unlawful, misleading, malicious, or discriminatory — and, just like today, users were required to use their real names.

The rules of other social platforms were similarly simple. “Respect the YouTube community,” read YouTube’s original guidelines. “We’re not asking for the kind of respect reserved for nuns, the elderly, and brain surgeons. We mean don’t abuse the site.” There were a few additional details — graphic or gratuitous violence was explicitly noted — but for the most part, users were free to express themselves as they wished.

Social media quickly revealed itself to be a double-edged sword when it came to managing content. One of the earliest controversies occurred in 2007, when YouTube took down the account of Egyptian journalist Wael Abbas for posting clips that captured extraordinary brutality by local law enforcement. Activists protested the decision, and YouTube eventually restored the account and Abbas’s videos. A few years later, it adjusted its rules to allow for documentary content, even if graphic in nature.

Nicole Wong, Google’s deputy general counsel at the time, recalled to me that the Abbas case was “really distinctive” because it occurred not long after Google had acquired YouTube. “I was trying to wrap my head around what it meant to apply content rules to a video platform,” she remarked. “We were using a lot of the same rules from Blogger because we wanted YouTube to be a robust, free-expression platform, too. It hadn’t really occurred to me how a visual medium would differ.”

But as these companies have scaled to accommodate millions and even billions of users, external pressures have pushed platforms toward policies that are hypocritical, and sometimes even unsettling.

Complications

Over the past decade, three events occurring in quick succession had an outsized impact on the way these platforms operate: the rise of the Islamic State, or ISIS; the harassment campaign known as GamerGate; and the 2016 US election, along with the surge of global populism that accompanied it.

The Islamic State’s sophisticated use of social media to spread propaganda and recruit new members caught the platforms off guard in 2014. Under government pressure, they raced to eradicate the group’s online presence, which ultimately led to the creation of the Global Internet Forum to Counter Terrorism. Founded by YouTube, Twitter, Facebook, and Microsoft, the GIFCT is now an independent nonprofit organization that intermediates between platforms and world governments to identify and block extremist content. This opaque effort is not without costs; preventing terrorists from using platforms at scale has required an increased use of automation, which brings higher error rates.

Around the same time, the GamerGate campaign used hacktivist tactics developed by groups like Anonymous to viciously target women in gaming, as well as anyone who stood up for them. GamerGate agitators coordinated their attacks on platforms like 4chan, then deployed swarms of “sockpuppet” Twitter accounts to send abusive messages. It took pressure from users and advocacy groups for Twitter to take GamerGate seriously, eventually leading to better blocking tools and simpler methods of reporting harassment.

Finally, the 2016 election and the worldwide rise of populism brought a wave of disinformation and hate-filled content across many nations that platforms are still struggling to moderate. Defining and combating hate speech, which isn’t illegal in the US, has been a major challenge, particularly given the extreme impact such speech can have elsewhere in the world.

These events unfolded as a newly public Facebook and Twitter were scrambling to placate shareholders as well as governments, advocacy groups, and sometimes their own employees. The result? Policies that don’t hold up under scrutiny.

You see it today in Facebook’s approach to politicians who break the rules. In September 2019, Nick Clegg, Facebook VP of global affairs and communications and former UK deputy prime minister, wrote on behalf of the company: “If someone makes a statement or shares a post which breaks our community standards, we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm.”

In other words, if hate speech is deemed by Facebook to be in the public’s interest, it stays up. A similar double standard applies to misinformation. A recent report by The Information revealed that Facebook’s own Civic Integrity researchers recommended that the company end its policy of exempting politicians from fact-checking, which can trigger penalties, because they found that users are more likely to believe false information that’s shared by a politician. Facebook rejected the recommendation.

Meanwhile, ordinary users are regularly penalized or suspended from the platform for similar violations. These inconsistencies in policymaking and content moderation reveal themselves through countless examples. Though the removal of content by platforms doesn’t amount to censorship in the legal sense, the effect is ultimately quite similar. Voices are being silenced, often wrongfully, at the hands of an unelected authority. 

A world reckoning

The response to the recent US election also demonstrates a fractured approach to policy. “It’s clear that Facebook and Twitter took election interference in the US seriously,” Dia Kayyali, the Berlin-based associate director for advocacy at Mnemonic, a nonprofit human rights organization, told me.

“Companies need to pour the same level of resources into elections outside of the US, but also into ensuring that their platforms are not being used to instigate or perpetuate genocide and other atrocities in India, Myanmar, and elsewhere,” Kayyali went on. “Instead, companies treat much of the rest of the world as ‘emerging markets’ where they engage with nationalist politicians like [Indian Prime Minister Narendra] Modi and invest enough resources to build their user base, but not enough to understand the complex socio-political and human rights implications of their platforms globally.”

Just recently, social media companies grappling with a torrent of US election disinformation made sudden policy changes in response. Twitter temporarily disabled one-click retweets, prompting users to use the “quote tweet” feature instead in an attempt to get them to reflect on the content they were sharing. The change was implemented only for the US election period, but it affected users worldwide.

While it’s true that politics in the US can have a global impact, it’s equally true that these platforms function as a modern equivalent of the public square for billions of people internationally. The biggest social media companies wield immense power to shape the interpretation of public events, and to control what people can or cannot access or say.

The intense media and platform focus on the US comes at the expense of much of the rest of the world — where violence and unrest, sometimes fueled by social media, have caused real death and destruction. Facebook’s role in ethnic cleansing in Myanmar is well known, but as Kayyali noted, the company is accused of fanning the flames of hatred against marginalized communities, and Muslims in particular, in dozens of other countries from India to Sweden.

This hegemonic approach to platform policy disadvantages other regions of the world. While Americans are rightly concerned about the impact of social media on their elections, armed conflicts have recently raged between Armenia and Azerbaijan over the Republic of Artsakh, as well as in Pakistan, Ethiopia, the Democratic Republic of Congo, Yemen, Mozambique, and Syria; and democracy faces serious threats in the Philippines, Belarus, India, and Brazil. But with companies focused on mitigating the harms posed by American disinformation, social media users in these countries struggle to get platform policymakers to hear their concerns.

For these reasons and many more, as US legislators weigh possible actions, it is essential to recognize that the policies that corporations and governments create at home can have deleterious effects abroad. A law created to limit extremism, hate speech, or disinformation in a democratic society can, when implemented in a more authoritarian context, easily be used against democracy activists.

Companies must also become more transparent about how they implement and enforce platform policies. The Santa Clara Principles on Transparency and Accountability in Content Moderation offer a set of baseline standards for platforms to be less opaque and more accountable to their users, calling on them to publish data about their enforcement of rules, notify users when they remove content, and ensure that every user has the opportunity to appeal.

The early internet may have been incubated in the US, but today Silicon Valley companies exert a far-reaching influence throughout the world — and they need to be accountable for it.
