Trump’s latest attack on Section 230 is really about censoring speech
One aspect of the 2020 presidential campaign that isn’t much discussed is the fact that both candidates want to end the internet as we know it. Both President Trump and Joe Biden have called for the end of Section 230 of the Communications Decency Act, which protects tech companies in most cases when their users post something illegal on their platforms.
Trump brought the subject up today when a Twitter account with fewer than 200 followers posted an obviously doctored image of Senate Majority Leader Mitch McConnell dressed up in Soviet military garb, with the caption reading “Moscow Mitch.”
“Why does Twitter leave phony pictures like this up, but take down Republican/Conservative pictures and statements that are true?” the president wanted to know. “Mitch must fight back and repeal Section 230, immediately. Stop biased Big Tech before they stop you!”
He then tagged Republican senators Marsha Blackburn and Josh Hawley, who reliably step up to lodge baseless complaints about systematic bias against their party whenever called upon. (In fact, they introduced something called “the Online Freedom and Viewpoint Diversity Act” on Tuesday, the point of which seems to be to stop social networks from doing so much moderating.)
The reason Twitter (usually) leaves phony pictures like that up is that the United States permits its citizens to speak freely about politicians — even to say mean things about them. Repealing Section 230 would likely have no impact on the tweet in question, because the Twitter user’s speech is protected under the First Amendment.
It might, however, make Twitter legally liable for what its users post — which would lead the company to remove more speech, not less. Whatever repealing Section 230 might achieve, it would not be what the president seems to want.
Anyway, all of this is well known to followers of the long-running Section 230 debates and seemingly impenetrable to everyone else. But if there’s one important lesson from 2020, it’s that long-running debates over expression can sometimes result in clumsy but decisive actions — ask TikTok! And so it’s worth spending a few more minutes talking about what smarter people say ought to be done about Section 230.
As it so happens, there’s a sharp new report out today on the subject. Paul Barrett at the NYU Stern Center for Business and Human Rights looks at the origins and evolution of Section 230, evaluates both partisan and nonpartisan critiques, and offers a handful of solutions.
To me there are two key takeaways from the report. One is that there are genuine, good-faith reasons to call for Section 230 reform, even though they’re often drowned out by bad tweets that misunderstand the law. The one that lands the hardest for me is that Section 230 has allowed platforms to under-invest in content moderation in basically every dimension, and the cost of the resulting externalities has been borne by society at large.
Ellen P. Goodman, a law professor at Rutgers University specializing in information policy, approaches the problem from another angle. She suggests that Section 230 asks for too little — nothing, really — in return for the benefit it provides. “Lawmakers,” she writes, “could use Section 230 as leverage to encourage platforms to adopt a broader set of responsibilities.” A 2019 report Goodman co-authored for the Stigler Center for the Study of the Economy and the State at the University of Chicago’s Booth School of Business urges transforming Section 230 into “a quid pro quo benefit.” The idea is that platforms would have a choice: adopt additional duties related to content moderation or forgo some or all of the protections afforded by Section 230.
The Stigler Center report provides examples of quids that larger platforms could offer to receive the quo of continued Section 230 immunity. One, which has been considered in the U.K. as part of that country’s debate over proposed online-harm legislation, would “require platform companies to ensure that their algorithms do not skew toward extreme and unreliable material to boost user engagement.” Under a second, platforms would disclose data on what content is being promoted and to whom, on the process and policies of content moderation, and on advertising practices.
This approach continues to enable lots of speech on the internet — you could keep those Moscow Mitch tweets coming — while forcing companies to disclose what they’re promoting. Recommendation algorithms are the core difference between the big tech platforms and the open web that they have largely supplanted, and the world has a vested interest in understanding how they work and what results from their suggestions. I don’t care much about a bad video with 100 views. But I care very much about a bad video with 10 million.
So whose job will it be to pay attention to all this? Barrett’s other suggestion is a kind of “digital regulatory agency” whose functions would mimic some combination of the Federal Trade Commission, the Federal Communications Commission, and similar agencies in other countries.
It envisions the digital regulatory body — whether governmental or industry-based — as requiring internet companies to clearly disclose their terms of service and how they are enforced, with the possibility of applying consumer protection laws if a platform fails to conform to its own rules. The TWG emphasizes that the new regulatory body would not seek to police content; it would impose disclosure requirements meant to improve indirectly the way content is handled. This is an important distinction, at least in the United States, because a regulator that tried to supervise content would run afoul of the First Amendment. […]
In a paper written with Professor Goodman, Karen Kornbluh, who heads the Digital Innovation and Democracy Initiative at the German Marshall Fund of the United States, makes the case for a Digital Democracy Agency devoted significantly to transparency. “Drug and airline companies disclose things like ingredients, testing results, and flight data when there is an accident,” Kornbluh and Goodman observe. “Platforms do not disclose, for example, the data they collect, the testing they do, how their algorithms order news feeds and recommendations, political ad information, or moderation rules and actions.” That’s a revealing comparison and one that should help guide reform efforts.
Nothing described here would really resolve the angry debate we have once a week or so in this country about a post that Facebook or Twitter or YouTube left up when they should have taken it down, or took down when they should have left it up. But it could pressure platforms to pay closer attention to what is going viral, what behaviors they are incentivizing, and what harms all of that may be doing to the rest of us.
And over time, the agency’s findings could help lawmakers craft more targeted reforms to Section 230 — which is to say, reforms that are less openly hostile to the idea of free speech. Moscow Mitch will continue to have to take his lumps. But the platforms — at last — will have to take theirs, too.