
Social media platforms such as Facebook, Twitter, and YouTube are virtual public squares, allowing individuals to communicate their views to wide audiences. At first, these platforms avoided regulating user-created content. But pressure from politicians, activist corporations, and users led Big Tech to adopt “content moderation” policies. Whether “moderation” is an apt term depends on one’s perspective. In January, Facebook and Twitter suspended President Trump’s accounts indefinitely, and Apple and Amazon cut off services to Parler, Twitter’s right-leaning rival. At the same time, the platforms do little to curb bullying by users on the right and left.

These efforts to limit content on social media are increasingly viewed as a threat to freedom of speech. Many now call for legal restrictions on the platforms’ immense power over public discourse. A number of proposals are on the table, and they raise important questions. Are social media firms more like the New York Times, in which case they are liable for what they publish and entitled to refuse to publish content with which they disagree? Or are they more like the government, forbidden from taking sides? Or are they something new, requiring an adaptation of older laws prohibiting monopolies and anti-competitive practices? Our answers depend on how we define freedom of speech and conceive of the government’s proper role in the liberal order.

We must face an important fact: We cannot take for granted that our fellow citizens are committed to free speech. The cancel culture of the illiberal left brooks no dissent; meanwhile, a growing cohort of conservative intellectuals decries liberalism as a hollow sham, abetting populist closed-mindedness. Both trends put freedom of speech on its heels.

Perhaps rightly so. The standard liberal justifications for freedom of speech are half-truths. One holds that human dignity demands a right to self-expression. This idea has had more purchase in Supreme Court opinions on abortion and same-sex marriage than in opinions on free speech. The self-expression justification can be self-defeating: One person’s expression may require another’s silence.

More influential has been the marketplace of ideas theory. But the notion that we should endorse free speech because “truth will out” relies on two assumptions: The marketplace of ideas is efficient, and people will recognize and choose the truth when they see it. Both assumptions are dubious.

The master theory of free speech in American constitutional discourse, however, has always been that free speech is essential to self-government. As a Supreme Court opinion puts it, “debate on public issues should be uninhibited, robust, and wide-open.” Although this principle does not require the protection of nonpolitical speech, the Court has always recognized the difficulty of distinguishing between speech related to self-government and other forms of speech. In recent decades, the Court has adopted a libertarian approach as the safest course. Yet we have seen that this approach does not necessarily yield more democratic engagement, equal representation, or wise policy.

It is hard to distinguish speech that should be protected from speech that is rightly curtailed, because speech itself is a mixed bag. Self-expression, the search for truth, and self-government are aspects of humanity’s creation in the image of God. God speaks the world into existence and speaks boundaries into being. Adam cooperates with God, using speech to shepherd the created order. Enter the serpent, whose words are sideways from the beginning; they un-create, dissolve boundaries. As a result of the fall, human speech itself becomes twisted, a means of asserting the self and rejecting God’s rule. Words deceive, draw people away from God, and destroy community.

The philosopher Bernard Williams understood that liberal democracy could not endure without a commitment to the twin virtues of truth-telling: sincerity and accuracy. Unfortunately, he underestimated how difficult this would prove to be. Left to ourselves, we too often choose lies over truth.

Yet Christians declare the coming of the Word! Through his Incarnation, death, and Resurrection, the Son of God re-formed humanity according to his image, bringing human beings into conversation with the Father through the Spirit. With Mary, we may now speak the truth necessary to overcome sin’s distortions of speech: “Behold, I am the servant of the Lord; let it be done to me according to your word.”

As Christians, we have good reason to guard ourselves against the inevitable corruption of speech—and its corrupting power. But even the bad is often mixed with some good, an embodiment of humanity’s ineradicable search for truth, intimacy, and meaning. In his powerful case against licensing, John Milton acknowledged that unfettered publication inevitably results in a mixture of wheat and tares. Who is to separate them? “How shall the licensers themselves be confided in, unless we can confer upon them, or they assume to themselves, above all others in the land, the grace of infallibility and uncorruptedness?” Censorship requires judgment, and judgment requires trust. The question is not whether freedom of speech is worth protecting. Nor is there disagreement about whether there ought to be limits. Rather, the central question is this: Whom should we trust to define the borders of free speech?

The government has so far allowed social media platforms and their users to define the boundaries of acceptable speech on those platforms. To stave off governmental interference, most platforms have developed strategies to ensure (or at least appear to ensure) that their content moderation is minimal and politically neutral. Facebook’s Oversight Board offers a prominent example. It operates like a private Supreme Court to adjudicate specific decisions to remove content.

The platforms now find themselves in a vise of their own creation. Founded by entrepreneurial left-libertarians, social media was envisioned as a free-speech utopia. But social media companies use algorithms to sift and deliver content to users, algorithms designed to suit the marketing firms that pay the bills. This business practice has created echo chambers, contributing to the polarization that now threatens Big Tech’s independence. Half of their users want to silence the other half, who cry foul—and appeal to politicians for relief.

The proposed governmental solutions fall into two categories. The first involves direct government regulation of the platforms’ decisions about content. The second involves government regulation of the market to encourage competition, which in theory would indirectly promote freedom of speech.

Officials in both parties have lobbied to repeal Section 230 of the Communications Decency Act of 1996, which immunizes social media platforms from liability for user content they host and for the removal of content deemed objectionable. Getting rid of that immunity would allow users to file civil suits seeking damages caused by social media censorship.

The proponents of this strategy argue that the platforms have leveraged network and scale effects to create virtual public squares that are integral to the way institutions and private persons communicate today. The platforms have unprecedented power over who can say what and to whom. Critics argue that Section 230 licenses these platforms to exercise arbitrary and unrestrained censorship powers. Unlike news corporations, the platforms are unaccountable for publishing harmful content. And unlike the government, they are unaccountable for censorship. This lack of accountability has had significant consequences for public debate. When social media platforms intervene in the political process, as was the case when some companies silenced a presidential candidate who had received nearly 74 million votes, it is hard to conclude that our political discourse is as the Supreme Court wishes: “uninhibited, robust, and wide-open.”

Yet amending Section 230 would almost certainly make matters worse.

If the New York Times publishes content that some individual deems abusive and harmful to him, not only can he sue the author and editors, he can reach into the pockets of the Times. Not so with social media behemoths. By law, they operate at arm’s length from users. They enjoy the same immunity from liability that the government has when it opens the town square to private speakers.

Nixing this immunity would end social media as we know it. Subjecting platforms to liability for user speech would invite entrepreneurial lawyers to pioneer novel theories of speech-caused harm, an idea that already enjoys a great deal of currency in universities. Faced with notions such as “dignitary tort,” the platforms would have to police every post by every user to protect themselves. Social media companies would undoubtedly err on the side of censorship, for they would have everything to lose. Instead of more free speech, we would almost certainly get less. Mark this path: “Here be dragons.”

In addition to removing liability for user-posted content, Section 230 immunizes platforms for censoring content they find objectionable. Senators such as Josh Hawley have this provision in their sights. Several state legislatures are currently considering laws that would punish social media companies for discriminating on the basis of content, a measure that effectively repeals the protection accorded by Section 230.

It is easy to understand the concerns of conservatives. Why authorize corporate behemoths, almost all of which lean left, to decide who gets to talk, and to whom we can listen? The answer is simple: because the alternatives are unconstitutional.

Moderator immunity codifies the bedrock First Amendment right against government-coerced speech. Neither the New York Times nor First Things may be forced to publish political content with which it disagrees. The Supreme Court has referred to this rule as the “fixed star in our constitutional constellation.” Repealing the censorship immunity provision would accomplish nothing on its own, for there is no law forbidding such censorship. Congress would need to replace the provision with specific anti-discrimination rules, and those rules would force private companies to host speech they would not otherwise host. This is the essence of compelled speech, which is forbidden by the First Amendment.

Admittedly, the case law is unclear concerning whether forbidding the platforms from censoring users would constitute compelled speech. The best argument, though, holds that it would.

The question is context-sensitive. As the Supreme Court has said, “Each medium of expression . . . must be assessed for First Amendment purposes by standards suited to it, for each may present its own problems.” For instance, newspapers and journals, whether print or online, have limited space, so they necessarily make decisions about which content to include and which to exclude. Forcing them to include something with which they disagree would violate their First Amendment rights. Yet social media platforms are different from newspapers, some argue, such that forcing them to carry unwanted user content would not amount to “compelled speech.” Their “space” is virtual—and virtually unlimited. There is little chance a reader would take a user’s post to carry the platform’s endorsement. The platforms are designed to allow users to distinguish themselves from one another, and from the platform. Requiring the platforms to include user content would not put words in their mouths in the same way requiring the Times to publish an op-ed by a right-wing politician would.

By contrast, forbidding platforms from censoring user speech would require them to facilitate speech with which they disagree. Under current rules, platforms are free to censor content they find objectionable for any reason. As a result, Facebook, Twitter, YouTube, and the like screen out a great deal of material that most people would rather not see in their feeds: hard-core porn, recordings of violent crime, depictions of child and animal abuse. The platforms also take intermediate steps short of eliminating content by flagging it as unreliable, explicit, or the like. These are forms of censorship. Forcing social media companies to include content they find objectionable would require them to expend resources facilitating it. We might imagine that the government can enforce viewpoint-neutrality with a light touch. But the Supreme Court’s free speech jurisprudence shows that viewpoint neutrality is hard to define and implement. In all likelihood, it would end social media companies’ ability to cultivate a platform culture free of the vilest content, forcing them to allow videos of ISIS-style beheadings and deceitful political propaganda.

Yet not all compelled speech is unconstitutional. The most obvious example is that the government may require marketers of consumer goods to include disclaimers about potential risks and side effects. Moreover, ever since Turner Broadcasting System v. Federal Communications Commission (1994), the Supreme Court has held that Congress may require cable companies to carry local broadcast stations. The Court acknowledged that this requirement involves a measure of compelled speech.

If Congress can require cable companies to carry specific content, why can’t it require platforms to host Donald Trump? There is a difference in “the medium of expression” between cable television and social media platforms. Because of the natural scarcity of the electromagnetic spectrum, broadcast television was already subject to significant governmental regulation. Congress regulated cable companies because they threatened to put broadcast television out of business. The speech the cable companies were required to carry, therefore, was canned, regulated by the FCC, and predictably banal.

A law prohibiting social media platforms from censoring anyone would be quite different. Cable companies were using their clout to shut out a handful of already regulated channels. By contrast, social media platforms are making thousands of judgments every day about whether to host specific user content. In doing so, they engage in a form of expression. When they choose not to host a violent video, racist slur, or “fake news,” they are taking a stand about that content. Speech policies are central to a platform’s ability to foster a distinctive online community. And whereas cable companies threatened to attain monopoly control of television access in some areas, there is no physical limit to the number of social media platforms that can compete for users. The separate speech codes of Twitter, Reddit, and Parler reflect competing bids for potential users. Moreover, users can be on all of the platforms if they wish.

If the cable TV precedent does not apply, one might think that the government could take a targeted approach to eliminating content moderation. There are two possible paths. One would treat the online platforms as government actors, applying the First Amendment to their content moderation decisions and thereby stripping the platforms of their own First Amendment protections; the other would subject the platforms to nondiscrimination norms on the grounds that a social media company functions as a “common carrier.”

Generally speaking, the Bill of Rights and the Fourteenth Amendment apply only to the government. This rule is usually easy to apply, but there are sometimes hard cases, such as whether the Eighth Amendment’s prohibition on “cruel and unusual punishment” should apply to government contractors who operate prisons. It is true that the Supreme Court has occasionally held that the equal protection clause’s prohibition of race discrimination should extend to non-government institutions. The Court’s test for who counts as a “state actor” is functional; it ordinarily boils down to whether “it can be said that the State is responsible for the specific conduct” at issue. The best reading of these cases, then, holds that the Constitution itself does not apply to the non-government actor. Rather, the Constitution requires the government to ensure that private parties do not engage in race discrimination.

This rationale does not apply to online platforms. Not only does the Free Speech Clause not create such a duty; it expressly forbids the government from putting words in the mouths of private speakers. Understandably, therefore, the Court has rarely held that the Free Speech Clause applies to private institutions. It did so in Marsh v. Alabama (1946). In that case, the Court extended the constitutional protection of free speech to a company town because it exercised a government-granted monopoly on space—in this instance, sidewalks—that functioned exactly like a public forum. It is true that platforms have tremendous power over our virtual public squares, but that power is neither exclusive nor the result of governmental favoritism.

Equating social media with sidewalks would be only the start of the judicial adventurism. The Court would have to decide which free speech rules apply to which platforms. Free speech doctrine is riddled with vague exceptions. Whole categories of speech are given conditional protection, while others receive no protection. For instance, under current constitutional doctrine, even if the First Amendment had been applied to Facebook and Twitter, those platforms were probably well within their rights to suspend President Trump’s accounts, so long as he was advocating “imminent lawless action” or “facilitating crime.”

Moreover, this approach is likely unworkable. Applying free speech doctrine to platforms would subject every instance of content moderation, potentially thousands each day, to judicial review. And to which companies would the doctrine apply? Social media platforms only? What about search engines such as Google and Bing? What about platforms that create some of the content they host, such as Netflix? Speech doctrine would become only more byzantine and the federal judiciary’s influence on society more pervasive.

A more modest approach would be to deem the social media platforms “common carriers.” In our legal tradition, common carriers were private parties that operated a government-granted or natural monopoly on public transportation. Because such corporations had exclusive control over an important public good—the flow of goods and people—the courts held that the government could require them to accept all customers on reasonable and nondiscriminatory terms. The United States has long applied common carrier doctrine to modern industries that exercise analogous power over private activity, such as public utilities and telecommunications firms. Internet platforms, the argument goes, exercise a similar monopoly power over private speech, and the government may likewise require them to accept all users without censorship.

The constitutionality of applying neutrality rules to social media cannot rest on the mere fact that the platforms wield great power over an important public good. Many large corporations do. They can be deemed common carriers only if they operate a government-created or natural monopoly. Otherwise, imposing neutrality rules on them would run afoul of the coerced speech doctrine.

The key case is Red Lion Broadcasting Co. v. Federal Communications Commission (1969), in which the Supreme Court held that the Federal Communications Commission could impose the Fairness Doctrine on radio broadcasters. The Doctrine, since repealed, required broadcasters who aired one side of a controversial public issue to present contrasting views. It was a classic case of coerced speech. But the Court ruled narrowly. Central to the Court’s reasoning was that the scarcity of broadcast frequencies places a natural limit on competition. Moreover, the government had controlled the licensing scheme from the beginning, effectively delegating its authority over the spectrum to handpicked private firms. Neither condition applies to social media platforms.

Big Tech has power because of decisions made by private parties—users, customers, and competitors—not because of policies advanced by the government. Whatever oligarchic power the firms exercise over speech is a result of the cumulative decisions of users—which made them the place where everyone wants to be—and, perhaps, of collusion among the firms themselves. Both the power and the possible collusion can and should be addressed directly, rather than through a forced application of common carrier doctrine.

The more straightforward way for the government to protect freedom of speech on platforms is to promote competition. More options would mean that none of the platforms effectively functions as a radio station or, worse, a company town. The suspension of Trump’s accounts by Facebook and Twitter was an exercise of immense power, but it was not the end of the story. Thousands of users flocked to Twitter’s competitor, Parler. What ultimately excluded Trump and his followers from the virtual public square were not decisions by the platforms but decisions by their Big Tech brethren: Apple deleted Parler’s app from its store, and Amazon kicked Parler off its servers. What silenced Parler’s users, including Trump, was the lack of alternatives. This is a very real problem. And it is best addressed by our already existing anti-trust regulatory regime.

The good news is that measures are being taken. Europe has drawn first blood, socking Google with billions of dollars in fines. The Federal Trade Commission and attorneys general from more than a dozen states have sued Facebook for purchasing competitors such as Instagram and WhatsApp. Depending on the success of these anti-trust actions, Congress could also consider creating a special-purpose agency, or retooling an agency such as the Federal Communications Commission, to regulate online platforms with an eye toward promoting competition. This is a more effective approach over the long term than having the courts micromanage content moderation. As James Madison aptly observed, deficits in trustworthiness are best addressed by disaggregating responsibility for judgment.

Promoting competition—rather than regulating content moderation—will reduce the power of any one firm to influence private speech. The more options users have, the more firms will have to compete for users. Some users will gravitate toward affinity platforms, perhaps on ideological grounds. Others will go to the platforms with the biggest networks. Many will go in both directions. The big firms rely on network and scale effects. If subjected to competition, they will have a powerful incentive to promise content moderation policies that are predominantly viewpoint-neutral. Facebook is already signaling its desire to be seen as evenhanded, with its Oversight Board. New companies are offering content moderation services to platforms, promising independent oversight and neutrality. What we need is not government regulation of content moderation, but a pluralist ecosystem of overlapping virtual public squares, one with genuine diversity.

The call to regulate Big Tech arises from an accurate intuition: These firms have too much power over the ways in which we now communicate with one another. One way to frame the problem is that virtual platforms threaten to displace real-world “mediating” institutions—industry, universities, churches, and other private associations that mediate between private life and the state. Members of society also share bonds through participation in many overlapping associations, from employers to churches, social clubs, and softball leagues.

Social media platforms have insinuated themselves into the operations of every other institution, including the family. These days, Facebook sometimes seems a more vital link than family get-togethers. The only solution I can imagine, and it is admittedly a tall order, is for our institutions to reduce their reliance on social media. What about a Day of Silence, on which institutions from Starbucks to ESPN to Harvard University take a day off from social media? This may seem paltry, but its symbolic power would be real. And we as individuals should use social media more sparingly. If we prioritized more direct and deliberate means of electronic communication—such as text chains, email groups, Zoom, and phone calls—to say nothing of in-person conversations, our communities and our mental health would be stronger for it. The most obvious remedy for social media’s abuse of its power over private speech is for its users—institutions and individuals alike—to stop giving the platforms that power in the first place.

Nathan S. Chapman is the Pope F. Brock Associate Professor in Professional Responsibility at the University of Georgia School of Law.