US President Donald Trump threatened to close Twitter down a day after the social-media giant marked his tweets with a fact-check warning label for the first time. He followed this threat up with an executive order that would encourage federal regulators to allow tech companies to be held liable for the comments, videos, and other content posted by users on their platforms.
As is often the case with this president, his impetuous actions were more than a touch self-serving and legally dubious absent a congressionally legislated regulatory framework.
Despite himself, Trump does raise an interesting issue – namely whether and how the US should regulate social-media companies such as Twitter and Facebook, as well as the search engines (Google, Bing) that disseminate their content. Section 230 of the Communications Decency Act largely immunizes Internet platforms from liability as a publisher or speaker for third-party content (in contrast to conventional media).
The statute directed US courts not to hold providers liable for removing content, even if the content is constitutionally protected. On the other hand, it doesn’t direct the Federal Communications Commission (FCC) to enforce anything, which calls into question whether the FCC does in fact have the existing legal authority to regulate social media (see this article by Harold Feld, senior vice-president of the think-tank Public Knowledge, for more elaboration on this point).
Nor is it clear that vigorous antitrust remedies via the Federal Trade Commission would solve the problem, even though FTC chairman Joe Simons suggested last year that breaking up major technology platforms could be the right remedy to rein in dominant companies and restore competition.
In spite of Simons’ enthusiasm for undoing past mergers, it is unclear how breaking up the social-media behemoths and turning them into smaller entities would automatically produce competition that would simultaneously solve problems like fake news, revenge porn, cyberbullying, or hate speech. In fact, it might produce the opposite result, much as the elimination of the “fairness doctrine” laid the foundations for the emergence of a multitude of hyper-partisan talk-radio shows in the US and, later, Fox News.
Given the current conditions, the Silicon Valley–based social-media giants have rarely faced consequences for disseminating misinformation or outright distortion (fake news), and have profited mightily from it.
US lawmakers have made various attempts to establish a broader regulatory framework for social-media companies over the past few years: proposals to extend existing TV and radio ad regulations to them, state privacy legislation such as California's, and congressional hearings at which the CEOs of Facebook, Twitter and Google testified on social media's role in spreading disinformation during the 2016 US election.
But an overarching attempt to establish a regulatory framework for social media has seldom found consensus among the power lobbies in Washington, and, consequently, legislative efforts have foundered.
As the 2020 US elections near, the Republican Party has little interest in censoring Donald Trump. Likewise, Silicon Valley elites have largely seized control of the Democratic Party’s policy-making apparatus, so good luck expecting that party to push hard on regulating big tech, especially if their dollars ultimately help to lead the country to a Biden presidency and a congressional supermajority.
As things stand today, there’s not even a hint of a regulatory impulse in this direction in Joe Biden’s camp. As for Donald Trump, he can fulminate all he likes about Twitter calling into question the veracity of his tweets, but that very conflict is red meat for his base.
Trump wants to distract Americans from the awful coronavirus death toll, which recently topped 100,000, civil unrest on the streets of America’s major cities, and a deep recession that has put 41 million Americans out of work. A war with Twitter is right out of his usual political playbook.
By the same token, social-media companies cannot solve this problem simply by making themselves the final arbiter of fact-checking, as opposed to an independent regulatory body. Twitter attaching a fact check to a tweet from President Trump looks like a self-serving attempt to forestall a more substantial regulatory effort.
Even under the generous assumption that the social-media giants have the financial resources, knowledge and people to do this correctly, it is, as a general principle, not a good idea to let the principal actors of an industry regulate themselves, especially when the arbiter is in effect one person, as is the case at Facebook.
As Atlantic columnist Zeynep Tufekci wrote recently, “Facebook’s young CEO is an emperor of information who decides rules of amplification and access to speech for billions of people, simply due to the way ownership of Facebook shares [is] structured: Zuckerberg personally controls 60% of the voting power.”
At least Zuckerberg (unlike Twitter’s Jack Dorsey) has personally acknowledged that “Facebook shouldn’t be the arbiter of truth of everything that people say online.… Private companies probably shouldn’t be, especially these platform companies, shouldn’t be in the position of doing that.”
As things stand today, existing legal guidelines for digital platforms in the US fall under Section 230 of the Communications Decency Act. The goal of that legislation was to establish some guidelines for digital platforms in light of the jumble of (often conflicting) pre-existing case law that had arisen well before we had the Internet.
The legislation broadly immunizes Internet platforms from any liability as a publisher or speaker for third-party content. By contrast, a platform that publishes digitally can still be held liable for its own content, of course. So a newspaper such as The New York Times or an online publication such as the Daily Beast could still be held liable for one of its own articles online, but not for its comments section.
While the quality of public discourse has suffered greatly from the immunity granted by Section 230, the American public doesn’t have much power to do anything about it.
There is, however, a growing coalition of business interests that have bristled for many years at their inability to hold these platforms accountable for the claims made by critics and customers of their products. These interests have also sought to prevent the expansion of Section 230 into international trade agreements, though it has already seeped into parts of the new United States-Mexico-Canada Agreement.
A New York Times story about the fight explained that “companies’ motivations vary somewhat. Hollywood is concerned about copyright abuse, especially abroad, while Marriott would like to make it harder for Airbnb to fight local hotel laws. IBM wants consumer online services to be more responsible for the content on their sites.”
One should be prepared, at this point, for the sophistication and capacity of Washington’s business lobbies to exploit a national controversy, such as the recent headline-grabbing struggles between Trump and Twitter and Facebook, in the service of a long-term regulatory goal.
US Senator Ron Wyden, an advocate of Section 230, argued that “companies in return for that protection – that they wouldn’t be sued indiscriminately – were being responsible in terms of policing their platforms.”
In other words, the quid pro quo for such immunity was precisely the kind of moderation that is conspicuously lacking today. However, Danielle Citron, a University of Maryland law professor and author of the book Hate Crimes in Cyberspace, suggested there was no quid pro quo in the legislation, noting that “there are countless individuals who are chased offline as a result of cyber mobs and harassment.”
In addition to the intimidation of targeted groups cited by Citron, there are other problems, such as the dissemination of content designed to interfere with the functioning of democracy, as seen in the 2016 presidential election, or to otherwise disrupt society.
This is not a problem unique to the United States. Disinformation was spread during the Brexit referendum, for starters. Another overseas example is featured in a Wall Street Journal article that reported in June, “After a live stream of a shooting spree at New Zealand mosques last year was posted on Facebook, Australia passed legislation that allows social-media platforms to be fined if they don’t remove violent content quickly.”
Likewise, Germany passed its NetzDG law, which was designed to compel large social-media platforms, such as Facebook, Instagram, Twitter and YouTube, to block or remove “manifestly unlawful” content, such as hate speech, “within 24 hours of receiving a complaint but have up to one week or potentially more if further investigation is required,” according to an analysis of the law written by Human Rights Watch.
It is unclear whether Section 230 confers similar obligations in the US.
Public Knowledge’s Harold Feld noted that Section 230’s protection for third-party content does not bar the application of federal or state criminal laws, such as those covering sex trafficking or illegal drugs. But he recognized that this by no means constitutes a complete solution to the problems raised here.
In his book The Case for the Digital Platform Act, he proposes that Congress create a new agency with permanent oversight jurisdiction over social media. Such an agency could “monitor the impact of a law over time, and … mitigate impacts from a law that turns out to be too harsh in practice, or creates uncertainty, or otherwise has negative unintended consequences.”
To maintain ample flexibility and democratic legitimacy, Feld proposed that the agency have the “capacity to report to Congress on the need to amend legislation in light of unfolding developments.”
Regulating the free-for-all on social media is unlikely to circumscribe Americans’ civil liberties or democracy one way or another. The experiment in letting anybody say whatever he or she wants, true or false, and be heard instantly around the world at the push of a button has done less to serve the cause of free speech or enhance the quality of journalism than to turn a few social-media entrepreneurs into multi-hundred-millionaires or billionaires.
We Americans managed to have the civil-rights revolution even though radio, TV and Hollywood were regulated, so there is no reason to think that more robust regulation of social media will throw us back into the political dark ages or stifle free expression.
Even with the dawn of the Internet era, major journalistic exposés have largely emerged from traditional newspapers and magazines, online publications such as the Huffington Post, or curated blogs, not random tweets or Facebook posts. Congress should call Trump’s bluff on social media by crafting regulation appropriate for the 21st century.
That may have to wait until after the 2020 election, but it is a problem that won’t go away.
This article was produced by Economy for All, a project of the Independent Media Institute, which provided it to Asia Times.