By Daniel A. Hanley
Daniel A. Hanley is a policy analyst at the Open Markets Institute.
In the early years of the commercial internet, back when companies like CompuServe and Prodigy dominated service, regulators faced a reckoning. They were forced to determine whether digital platforms should bear traditional publisher liability for user-generated content hosted on their sites.
CompuServe had escaped liability for defamation in 1991 because it made no effort to review or filter its users’ content — the company took an entirely hands-off approach, so it couldn’t claim to have knowledge of what was posted. But in 1995, a court found Prodigy liable in another case because the company had decided to actively moderate its message boards. Because Prodigy policed some of the platform’s content, the court reasoned, it had assumed legal responsibility for all of it.
Concerned that imposing liability on internet companies for moderating user content would cripple the growth of online services, federal lawmakers decided that they should be incentivized to moderate “offensive material” in good faith without the fear of facing the same liability as a newspaper does for the content it publishes. In 1996, Congress enacted Section 230 of the Communications Decency Act. Popularly known as “the law that created the internet,” it established that “interactive computer services” (more commonly known as platforms) would not be treated as publishers of third-party content. This means that Section 230 shields platforms from liability for transmitting, displaying, and filtering (or not filtering, as the case may be) material that they host.
The law attracted little attention over the next two decades, but this has changed significantly in recent years. One reason for the renewed attention to Section 230 is that tech giants like Google and Facebook have grown to be vastly more powerful than legislators imagined when Congress enacted the law in 1996. To deal with network monopolists and other providers of essential services, antitrust enforcers have traditionally implemented policies such as line-of-business restrictions, forced interoperability, and prohibitions on price discrimination. But we still lack policies that prevent undue control and concentrations of private power in this critical sector of the economy. Instead, alarmingly, Section 230 has recently been invoked in court as granting immunity from antitrust claims and allegations of other exclusionary conduct.
Regulators have generally applied a laissez-faire antitrust philosophy to online communications and commerce. This has allowed companies like Facebook and Google to base their entire advertising-supported business models on actively manipulating the news and information that they transmit between users, keeping them addicted to their platforms while pushing misinformation and extremist content. YouTube’s recommendation engine, for example, promotes deceptive and extreme content that the platform monetizes with ads. For years, not only have the dominant platforms failed to adequately police objectionable and harmful content, but they also have earned billions of dollars by algorithmically boosting such material.
What’s worse, because courts have interpreted Section 230’s liability shield so broadly, despite its original purpose, the law has given dominant platforms almost no incentive to moderate even patently objectionable content. Courts have fundamentally subverted the main goals of the law by radically expanding its liability protection to nearly every business with even marginal internet-based operations. Section 230’s authors never expected it to cover every possible activity on the internet, yet today businesses engaged in digital activities that cannot plausibly be considered “free speech” in any traditional sense use the law to shield themselves from liability.
In one case, the Wisconsin Supreme Court affirmed immunity under the law for an online firearms marketplace that allows unlicensed gun sellers to sell firearms to users who cannot pass a background check. The site had facilitated the sale of a handgun to an individual who was legally prohibited from possessing one, and who used it in a mass shooting. As the legal scholars Danielle Citron and Mary Anne Franks have written:
“Section 230 has been read to immunize from liability platforms that knew about users’ illegal activity, deliberately refused to remove it, and ensured that those responsible could not be identified; solicited users to engage in [wrongful] and illegal activity; and designed their sites to enhance the visibility of illegal activity and to ensure that the perpetrators could not be identified and caught.”
The implications of using Section 230 to shield potentially criminal online activities from prosecution are just as problematic for the internet as anything that the law originally aimed to fix.
Because transmitting and displaying user-generated information is an essential aspect of how most internet platforms operate, modifications to Section 230 encompass fundamental questions of how the US should organize its critical digital communications and information infrastructure to promote and protect democracy in the 21st century.
By 2020, support had grown on both the right and left to simply abolish Section 230, with Joe Biden and Donald Trump both calling for its repeal. Trump was moved to do so after Twitter began labeling his tweets about voting fraud as misinformation, while Biden had complained of Facebook and other platforms “propagating falsehoods they know to be false.” But repealing Section 230 would not remedy the stranglehold that Silicon Valley giants have over our networks. On the contrary, many policy experts fear that such a repeal would allow Google and Facebook to retain near-unilateral control of what is accessible to the public on the internet and further cement their dominance.
A full repeal could potentially result in smaller, less powerful technology platforms, as some experts have suggested. But a repeal may also play out in ways that effectively anoint these corporations as perpetual arbiters of speech in the US. To minimize liability and lawsuits, they may decide to preemptively block any content that is flagged, a policy that could sweep up disinformation and legitimate material alike. The tech giants would also have the resources to withstand litigation and to build sophisticated systems for managing user content, while smaller and emerging businesses would not.
A bill introduced in early February by Sens. Mark Warner (D-Va.), Mazie Hirono (D-Hawaii), and Amy Klobuchar (D-Minn.) is an important step for Congress to begin the process of making necessary changes to Section 230, creating a foundation for reforms to the communications sector. As the bill is currently written, it aims to hold platforms liable for harms that result from paid advertisements or other financially enhanced speech, like promoted posts. It specifically imposes liability on platforms that directly or indirectly violate civil rights, antitrust, or harassment laws. Notably, the proposed law also explicitly provides aggrieved parties relief in court to stop violent and defamatory conduct that is facilitated by internet platforms.
Congress’s reform efforts are commendable because they reframe the problem of digital communications beyond just how Google and Facebook operate their platforms. Critics have argued that changes to Section 230 like those proposed by Democrats would endanger innovation or stifle free speech on the internet. These concerns reflect the law’s crucial role in the growth of the web and the importance of crafting the legislative details thoughtfully, but the arguments against any reform at all are unfounded.
Changes to liability will certainly affect the industry, but they can help improve it. Car manufacturers in the 1960s, for example, like internet platforms today, enjoyed near-total immunity from product liability for the injuries and deaths caused by their automobiles. When this liability shield was weakened and automobile companies were held accountable for their products, cars became significantly safer. With the right adjustments, the same can happen with internet platforms. Perhaps a weakened or modified liability shield will properly incentivize them to design adequate tools and systems to inhibit the harmful speech that propagates on their platforms, instead of indiscriminately prioritizing content to maximize ad revenue.
While some speech would undoubtedly be deterred if the liability shield were changed, our current situation is already stifling certain kinds of speech — that of the individuals being harassed, targeted, and harmed by others, who avoid using platforms at all because they have no avenue for redress. As Danielle Citron argues, “A recalibrated § 230 would… do a better job of incentivizing the best-positioned parties to protect against risks to free expression engendered by online abuse.” In fact, changing liability protection may actually help users realize that their speech has consequences and can adversely affect other users. This could improve the quality of speech on online platforms and deter the kinds of speech that we don’t want, like defamation and harassment.
A reassessment of Section 230 is long overdue and necessary, but no final set of changes will please everyone. This is partly because any modifications will require Congress to make difficult tradeoffs in order to properly structure our communications ecosystem so that it adequately fosters essential public values, such as freedom of speech and the right to not be harmed or harassed by others. Polls also show conflicting evidence of the public’s overall desire to make radical changes to Section 230. Fifty-four percent of Americans say that it has done more harm than good, according to a Gallup/Knight Foundation survey, but only about a third favor changing it. A recent Protocol survey of tech workers familiar with Section 230 found that 71 percent of them agreed that it should be reformed, but 65 percent also agreed that “tech companies should not be held liable for the content on their sites and products.”
With many difficult choices ahead, Congress’s current Section 230 reform efforts will likely continue for months. But critically, lawmakers must recognize that reforming Section 230 is not a panacea for all of the ills that are facilitated by tech giants. To fully address the problems existing in our digital ecosystem, it’s vital that Congress maintains its attention on the monopolization of essential communications infrastructure by private corporations.
The control that colossal corporations like Google and Facebook have over American speech effectively makes their rules the de facto national policy on the internet. Changes to Section 230 will have a limited effect on solving this core problem. Congress can more readily begin achieving many of the goals it seeks to implement with modifications to Section 230 by using its long-standing anti-monopoly tools, such as enforcing antitrust laws, restructuring a company’s integrated operations into multiple separate firms, deploying line-of-business restrictions, and prohibiting certain conduct like targeted advertising.
Such anti-monopoly actions have been used for more than a century against all types of businesses that injure competition and worker welfare by using their sheer size and financial capital to dominate markets and set the rules of the game in their favor. Rather than serving the broader public interest, a monopoly effectively operates as a corporate license to be an autocrat. Many of these considerations were analyzed in congressional hearings in 2020, culminating in last October’s landmark 450-page report on Big Tech.
An essential lesson of the American anti-monopoly tradition going back to the founding is that Congress should abolish such aggregations of private power. With this in mind, Congress should waste no time in deploying its arsenal of anti-monopoly tools as it continues to determine the appropriate modifications to Section 230. We can’t afford to wait.