Designed to Thrive: Communal Approaches to Collective Platforms

Enabling community content moderation through engineering can help address the chronic problems we see on platforms like Facebook.

Illustration by Sam Island: various individuals tend a community garden in the shape of a speech bubble.

We, the denizens of the internet, are in a bind. We want the freedom to speak but also protection from harmful speech — and it’s not clear who can be trusted with these freedoms and protections. The challenges of free speech in the twentieth century, the age of mass communications, have become even more acute in our present age of mass participation. This is particularly true in times of crisis, such as a pandemic, when the matter of a “healthy” public sphere that’s free of vitriol and disinformation is also a matter of public health.

For many people, free speech is not enough. They want the public conversation to answer to higher ideals: to be good, constructive, educational, fair, equitable, just. Complicating matters, the public sphere encompasses not only people and governments, but also corporate bodies, and, increasingly, trans-human agents like AI assistants and chatbots for hire. The online presence of bad-faith actors fostering clickbait, hate speech, and state-sponsored disinformation campaigns further compels the evergreen question: What is to be done?

In the jumbled tangle of opinions about free speech and censorship, the usual political polarities have reversed, if not scrambled entirely. Classically, censorship refers to the curtailment of individual expression by the authorities, not the other way around. The First Amendment to the US Constitution prohibits Congress from “abridging the freedom of speech, or of the press.” In the liberal tradition of John Locke and John Stuart Mill, it is the individual, and by extension the collective and the corporation, that hold positive rights to free speech, along with the negative right not to platform statist propaganda. The positive right implies that people or media outlets can say and publish what they want.

This does not mean, however, that they must say or publish everything. Free speech also involves a negative right, one protecting individuals and outlets equally from outside intervention. At issue is the ability to direct speech as one sees fit. I cannot show up to my local pizza joint, for example, and expect the proprietors to give me floor space for a lecture on computational linguistics. Their choice not to host my lectures is not a violation of my free speech rights, but rather an exercise of their own. In the new upside-down world, the conservative line calls for invasive regulatory interventions to enable the opposite: compelling platforms to host speech they would not choose to carry.

On the other side of the debate, we find repeated appeals for large, private, corporate entities to do more to police their audiences, to take more responsibility for the content on their platforms, and to act more in the public good. Set aside for a moment the obvious dissonance between the calls for increased policing and progressive thought. A corporate entity cannot, by definition, act in the public good, because it has a fiduciary responsibility to its shareholders, not to the public. The two may align for a moment, in that elusive “invisible hand” way, when the interests of the market coincide with the public good. But one does not have to be a Marxist to observe that capital is often out of step with ethics.

The upside-down nature of the proposed solutions indicates a fundamental inadequacy in the terms framing the discussion. Unlike milk, the internet is not homogenized; it is not one uniform place or thing. Few universal truths can be derived from the variety of observed behaviors online. The same can be said about the large, walled-garden conglomerates that are increasingly mistaken for the internet itself. Among the top-ranked websites in the US by traffic, YouTube (2nd), Facebook (3rd), Pornhub (7th), and even Twitter (9th) each contain multitudes of very different online communities, which in turn represent diverse people with diverse values and practices. It makes sense that the standards of a “healthy” online discussion differ greatly from one community to another. What’s appropriate in a thread of tweets among academics may not be among the sex worker Twitterati. They hold divergent values and answer to different authorities.

Yet the debate about online content moderation often tacitly assumes some sort of mythical common standard, a public good that everyone can supposedly agree on. The point of a pluralistic society, however, isn’t to find a single, absolute, dogmatic ideal. It is rather to discover ways of coexisting productively, despite and perhaps even in celebration of our differences. Asking for large government or corporate intervention in the social lives of diverse communities cannot end well, because customs governing expression are rooted in a myriad of specific cultural contexts. A remote authority will always fail to uphold local standards because it neither knows nor cares about them.

Put simply, the standards of wellness emanate from specific communities. In most thriving places, standards and values materialize from within, among peers. Good moderators have a personal investment in the localized common good. Peers have skin in the game; their discussions may not always be pleasant, but they are strongly cohesive, reflecting a tradition of culture and custom.

My research into online communities has taken me to some wonderful, weird, and occasionally disturbing places. I continue to be inspired by the spirit of mutual aid found among the peer-to-peer librarians responsible for several of the largest so-called “pirate” or shadow libraries, which are necessary for preserving global access to scientific information. As a software engineer, I saw first-hand the growth of Stack Overflow from a tiny Q&A resource for coders to a near-ubiquitous network of crowd-sourced knowledge. Recently, my lab has partnered with public health officials to study the rhetoric motivating vaccine hesitancy, which is proliferating in Facebook groups, natural parenting bulletin boards, and YouTube comments.

Recently, watching residents address the Wayne County canvassers’ (election) board in Detroit over Zoom, I was struck not by the political rancor of the proceedings but by their civility. Speakers ceded time to highlight marginalized voices and remained respectful even in disagreement. It was, on the whole, an orderly affair. Despite the challenges of increased online participation, can you imagine an event such as a local election board meeting being policed by underpaid corporate content moderators? Would you install an algorithmic filter into a “smart” microphone at your community board? I hope not. Generally speaking, those who are not present and sharing in a communal task cannot and should not be trusted to articulate or enforce its standards.

Thriving online platforms are pluralistic by design. Take Reddit, for example, a community of communities. The platform encompasses such diverse forums as r/AskHistorians, the “largest public history platform on the web,” and r/MensRights, which the Southern Poverty Law Center includes in a list of sites promoting hate and extremism. More than 700,000 users subscribe to r/LGBT, almost 30 million read r/gaming, and more than five million can be found on r/BlackPeopleTwitter.

The names of these forums alone hint at their disparities. The AskHistorians subreddit “aims to provide serious, academic-level answers to questions about history.” The forum’s rules begin with a section on civility, reminding members “to behave with courtesy and politeness at all times.” Racism, sexism, and all other forms of bigotry are not tolerated. Banned are Holocaust denial, personal insults of any kind, and minor nitpicking of grammar or spelling. All questions must concern events 20 or more years in the past. “Write Original, In-Depth and Comprehensive Answers, Using Good Historical Practices,” the guidelines instruct members, spelling out each of these principles in a separate section.

Compare these values with those of r/TheRedPill, whose rules begin with a directive to stay on the topic of “men’s identity, sexual strategy, and options in the context of our current global culture for the benefit of men.” (Emphasis theirs.) They include “no moralizing” and “no concern trolling,” which involves pretending to support an argument and then expressing disagreement in the form of concerns. “Do not announce that you are a woman,” cautions another. “Your comments and posts should be able to stand on the merit of your ideas alone.” Few historians are likely to enjoy these pills, nor should they. The two groups may as well occupy distant online planets.

Stack Exchange follows a similar philosophy of community moderation. Those using the site passively may overlook the political dimension of the platform, which supports diverse communities ranging from Mathematics to English Language & Usage. Participants in these forums earn reputation for constructive input. Reputation feeds into a graduated system of escalating privileges: users with 2,000 reputation, for example, can “edit any question or answer in the system,” while 10,000 reputation unlocks access to moderation tools. A small number of democratically elected super-moderators guide the discussion on each site’s Meta subforum, where community standards are further discussed and evolve into policies. This political machinery runs smoothly behind the scenes, but its lessons should not be overlooked. Rather than engineering for a specific standard of discussion, the architects of Stack Exchange have designed processes, such as the reputation reward system, election interfaces, and meta-discussion forums, that combine to facilitate the emergence of manifold community standards.
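To make the mechanism concrete, here is a minimal sketch of a reputation-gated privilege system, loosely modeled on Stack Exchange’s graduated privileges. The thresholds and privilege names below are illustrative assumptions, not the platform’s actual schema.

```python
# A minimal sketch of a reputation-gated privilege system, loosely
# modeled on Stack Exchange's graduated privileges. The thresholds
# and privilege names here are illustrative, not the real schema.

PRIVILEGE_THRESHOLDS = {
    15: "flag posts",
    500: "access review queues",
    2_000: "edit any question or answer",
    10_000: "access moderation tools",
}

def privileges_for(reputation: int) -> list[str]:
    """List every privilege unlocked at or below a given reputation."""
    return [
        name
        for threshold, name in sorted(PRIVILEGE_THRESHOLDS.items())
        if reputation >= threshold
    ]

def can(reputation: int, privilege: str) -> bool:
    """Check whether a user's reputation unlocks a given privilege."""
    return privilege in privileges_for(reputation)

print(privileges_for(2_500))
# ['flag posts', 'access review queues', 'edit any question or answer']
assert can(10_000, "access moderation tools")
assert not can(1_999, "edit any question or answer")
```

The design choice worth noticing is that no central authority hands out powers case by case: the community’s own activity, scored by peers, determines who gets which tools.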

Sites like Reddit and Stack Exchange have become immensely popular because of these differences, not despite them. Communally moderated platforms implement interfaces that allow varied standards to emerge organically. They do so by giving participants space to articulate their values (in the form of rules and guidelines) and to elect moderators, and by giving those moderators the technological capacity to monitor and improve the forum discussion. Political organization here is also a matter of technological affordance: of designing interfaces that enable active community moderation.
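As a sketch of what such affordances might look like in code, consider the following. The structure and names are hypothetical, meant only to show the three ingredients named above: rules written by members, moderators elected from within, and moderation power scoped to a single community rather than imposed from above.

```python
# A hypothetical sketch of communal-moderation affordances: each
# community owns its rules and elects its moderators, and moderation
# power stops at the community boundary. All names are invented.

from dataclasses import dataclass, field

@dataclass
class Community:
    name: str
    rules: list[str]                                    # standards written by members
    moderators: set[str] = field(default_factory=set)   # elected from within

    def elect_moderator(self, username: str) -> None:
        """Grant moderation capacity to a member the community chose."""
        self.moderators.add(username)

    def can_moderate(self, username: str) -> bool:
        """Moderation power is local; it does not carry across communities."""
        return username in self.moderators

ask_historians = Community(
    name="AskHistorians",
    rules=[
        "Behave with courtesy and politeness at all times",
        "Questions must concern events 20 or more years in the past",
    ],
)
ask_historians.elect_moderator("alice")
assert ask_historians.can_moderate("alice")

gaming = Community(name="gaming", rules=["Keep posts on topic"])
assert not gaming.can_moderate("alice")   # no power outside her own community
```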

By contrast, centrally governed platforms such as Facebook, YouTube, and Twitter place their diverse constituencies under a single universal regime, one that is neither transparent nor open to democratic revision. As a consequence, they suffer from chronic problems of low-quality engagement (trolling, misinformation, abusive language), prompting backlashes that demand paradoxical political solutions. For the internet to be free, fair, and safe, we must stop imagining a singular vision of freedom, fairness, or safety of the sort that trickles down from corporate governance to the public.

Though the creation of an advisory body such as Facebook’s Oversight Board in response to moderation problems is understandable, it is subject to the same problems of central planning that plague the platforms themselves. The board imagines itself as a kind of democratic institution, modeled after the Supreme Court. According to company documents, it is “obligated to the people who use Facebook — not Facebook the company.” Despite these ambitions, it does little to actually cede oversight powers to the community, nor does it institute the practices, processes, or interfaces required for true community moderation.

Who do you trust to shape the future of a free and fair public space online? Technical design embodies a political ideal, which in time develops into a culture. Imagine a public square on top of a gated hotel: the kind of discussions it could sustain would necessarily be limited by issues of access and accessibility. Similarly, in thinking through the compromise between free and fair speech, we must make visible our existing structural commitments and the kinds of cultures they evoke by design. The design of communal platforms is an emerging art at the intersection of social and software engineering. It can give free speech ample public space, or rather, a multitude of thriving local public spaces, cultivated by their communities.
