Opinion | The Best Way to Regulate Social Media Has Been Staring Us Right in the Face
There’s a First-Amendment-friendly way to clean up social media. But tech CEOs won’t like it.
You can’t use a mega-sound system to hold a political rally in front of a hospital in the middle of the night. You can’t pack a theater so full of people that no one can reach the fire exits without being trampled. In the physical world, these kinds of noise control and fire safety regulations uneventfully coexist with our First Amendment free speech and free assembly rights. They’re accepted as common-sense ways to keep us safe and preserve our sanity.
The same ideas can be applied to social media. By bringing the same kind of noise and crowd control to the din that has overrun social media platforms, we can make the internet a more peaceful, reliable and less polarizing place.
And we can do it without the government policing speech. In fact, Congress does not have to do anything. It doesn’t even need to touch Section 230, the now-infamous 1996 law that gives social media platforms immunity from liability for the harmful content — from health care hoaxes to election misinformation to Russian and Chinese state-sponsored propaganda — that has created a world of chaos and division, where so many people don’t believe even the most basic truths. Instead, the Federal Trade Commission and other consumer protection regulators around the world could enforce the contracts the platforms already have with their users.
Meta, the parent company of Facebook and Instagram, already promises users that it will enforce “community standards” that prohibit, among other abuses: inciting violence, “inauthentic” behavior such as setting up fake accounts, promoting suicide, bullying, hate speech, graphic or sexually explicit content, human exploitation and misinformation “that will cause imminent physical harm, health-care misinformation and misinformation about elections and voting.”
In practice, that list reads like a catalog of the harmful content that has flourished on Facebook and Instagram.
Most of the other platforms have similar lists of prohibited content and contracts with their users. These terms of service do not say, “We take our role seriously, but our algorithms encourage a lot of that content, and the volume of it flowing through our platform makes it impossible to prevent much of it from being posted even if we wanted to. Sorry.” Yet that’s basically what they say whenever they embark on another session of their yearslong apology tours testifying in front of Congress and similar tribunals around the world.
The FTC is responsible for protecting consumers, including by suing companies that defraud them by violating the terms of a contract. Section 5 of the law creating the FTC declares that “unfair or deceptive acts or practices in or affecting commerce … are … unlawful,” and empowers the commission to prevent companies from using such deceptive practices. The commission’s Policy Statement on Deception defines “deceptive” practices as a material “representation, omission or practice that is likely to mislead a consumer acting reasonably in the circumstances.” In fact, the FTC has already taken action against Facebook for violating the privacy promises it makes in those same terms of service, imposing a $5 billion penalty in 2019 that a federal court approved the following year.
The FTC’s website explains that “the Commission may use rulemaking to address unfair or deceptive practices or unfair methods of competition that occur commonly, in lieu of relying solely on actions against individual respondents.” Accordingly, the commission could enforce the content promises in these terms of service by promulgating a rule requiring any digital platform to spell out prominently and clearly what content it will and will not allow — and then, as it already does with privacy assurances, making sure the platforms keep those promises.
Spelling out those terms prominently and clearly would mean posting a large chart, on a screen users cannot miss, listing every category of potentially offending content and requiring the platform to check a box indicating whether each category is prohibited or allowed.
In keeping with First Amendment restrictions on government regulation of the content of speech, it would be up to the platforms to decide which content to prohibit — that is, to check or not check each box. A platform that wants to allow misinformation or hate speech could choose to do so. However, it would have to level with its users by declaring that choice in the prominent chart. This would give a stricter platform a competitive advantage in the marketplace; a platform that has to declare in large print that it allows misinformation or hate speech is likely to turn off many potential users and advertisers.
First Amendment protections would prohibit the government from forcing the platforms to prohibit hate speech or most misinformation. Yet nothing stops the proprietors of a platform from making those decisions by defining what they consider hate speech or harmful misinformation and screening it out. That’s called editing, which is protected by the First Amendment when private parties, not the government, do it. In fact, editing is what the authors of Section 230 had in mind when they wrote it; it shielded platforms from liability not only for what they do allow but also for what they choose not to allow. This is why the key provision is titled “Protection for ‘Good Samaritan’ blocking and screening of offensive material.” Let’s make the people who run the platforms act like Good Samaritans.
This would be a logical and content-neutral way for the commission “to address deceptive practices” — in this case, an obvious and widespread failure by the platforms to deliver on the promises made in their contracts with users. The FTC rule would require the platforms to prove that their declarations about what content they will not allow are real, not aspirational. Just as building inspectors make sure a crowded theater or catering hall has adequate access to exits, the FTC rule would require each platform to demonstrate that it is actually capable of screening the volume of content it carries.
If this means that a platform has to cut its profit margins to hire thousands of people to screen all content before it is posted, or that it has to drastically reduce the number of its users or the amount of content they can post, so be it.
We should have learned by now that the ability of anyone, anywhere, to send any kind of video or text message instantly to everyone, everywhere, may be a technological marvel, but it is anything but a positive development. A quieter, less spontaneous online community is superior to the alternative: people live-streaming a murder, summoning rioters to the Capitol or, as happened in the days following the Hamas attack on Israel, flooding Facebook, X and TikTok with hundreds of lurid videos — quickly drawing millions of views worldwide — of the terrorists celebrating as they committed unspeakable atrocities.
The promises from the Silicon Valley witnesses at these now routine congressional hearings to work harder and do better are meaningless because the platforms do not manage the volume and velocity of what gets posted. Instead, they apologize for being overwhelmed by the fire hose of content that is the essence of their business model — and the controllable but uncontrolled source of their bonanza profits.
To enforce this capability requirement, the FTC would have independent auditors review the platforms’ content on a regular basis to determine whether they have proved capable of keeping their promises. The audits should be publicly available. And if the audits demonstrate that a platform’s promises are not being kept, fines or even an order to suspend service would follow.
The FTC has the regulatory authority to proceed on its own, without Congress, to enforce the platforms’ own contractual promises. And you can encourage the commissioners to do just that. The FTC website invites consumers to report fraud. Although that channel is clearly meant for specific complaints about an online scam or an unwelcome robocall, if you think your social media company is not keeping its promises about preventing harmful content, you can report it there, too.