Bill to Regulate Social Media Opens Broader Discussion

Colorado State Capitol, Denver

Clarification: This article was updated on April 22 to better reflect comments from CU Law School clinical professor Blake Reid regarding technology companies’ role in distributing news.

A Senate bill that would establish a system to regulate social media content is drawing criticism for clashing with the free expression rights guaranteed by the U.S. and Colorado constitutions. The measure is the latest marker in a growing debate about how best to confront the disinformation that has riven American society.


Senate Bill 132, sponsored by Vail legislator Kerry Donovan, proposes to set up a state agency to investigate and potentially fine social media and digital content providers — including Facebook, Twitter, Twitch and YouTube — that engage in any “unfair or discriminatory digital communications practice,” which would include a variety of speech categories. SB 132 would also require social media and digital content companies to register with the new state agency and prohibit “violations of users’ privacy.” The bill is scheduled for a Tuesday hearing before the Senate State, Veterans & Military Affairs Committee.

“It’s a very interesting bill,” said Derigan Silver, an associate professor at the School of Media, Film, and Journalism Studies at the University of Denver and an adjunct professor at the university’s Sturm College of Law. “The first part is the dissemination of distasteful information — ‘hate speech,’ speech that undermines ‘election integrity,’ ‘conspiracy theories,’ ‘intentional disinformation,’ [and] ‘fake news.’ And the other side of it seems to target information gathering and practices.” SB 132 includes provisions that forbid sharing user personal data, “profiling users based on their personal data,” “selling or authorizing others to use” website visitors’ data “to provide location-based advertising or targeted advertising,” and use of “facial recognition software or other tracking technology.”

Regardless of its novelty, the proposal is likely unconstitutional, according to Steve Zansberg, a Denver media lawyer and leading First Amendment expert. “The bill is rife with constitutional infirmities,” Zansberg said.

DEFINITIONS

The bill’s text neither clarifies the reach of the categories of speech to be regulated nor offers any guidance to help the public understand them. Hate speech, for example, is defined by the Cambridge Dictionary as “public” speech that expresses “hate or encourages violence towards a person or group based on something such as race, religion, sex, or sexual orientation.”

However, social media companies or digital content providers might have their own definitions. Facebook says “hate speech” includes any “direct attack against people on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease.” The company further explains that it defines an attack as any “violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing, and calls for exclusion or segregation.”

Twitter’s hate speech policy looks similar, banning, among other forms of expression, “content that wishes, hopes, promotes, incites, or expresses a desire for death, serious bodily harm, or serious disease against an entire protected category and/or individuals who may be members of that category.” Twitter also proscribes “targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category,” threats of violence, “inciting behavior,” and “hateful imagery.”

“Fake news,” which the bill also addresses, is similarly ill-defined. Brought to recent prominence by former President Donald Trump, the term was first used in the 19th century, according to the Merriam-Webster Dictionary. Before that, reporters were accused of spreading “false news” as early as the 16th century. Today, Merriam-Webster says, the term is “frequently used to describe a political story which is seen as damaging to an agency, entity, or person.” It can also describe distorted or spurious information.

LEGALITY

Definitions aside, in most circumstances an American law cannot validly seek to prevent dishonesty in communications. “The Supreme Court ruled that you do have a First Amendment right to lie,” Silver said, referring to the 2012 decision in United States v. Alvarez. “You have the right to disseminate intentional disinformation.”

The ambiguity of the terms “hate speech” and “fake news,” along with the absence of any precise definition of the phrases “undermine election integrity,” “conspiracy theories,” or even “disinformation,” could condemn the statute to invalidation by a court. Not only is such speech “lawful but awful,” said Blake Reid, a clinical professor at the University of Colorado Law School who specializes in technology policy and telecommunications law; the proposed statute, should it become law, would also likely be ruled too vague or overbroad to survive constitutional scrutiny.

Zansberg pointed to another example of the constitutional problem that could afflict Donovan’s bill: a 1996 federal law that prohibited the electronic transmission of any “obscene or indecent” communication to a minor. “There’s a real problem out there,” he said, referring to disinformation. “No one can deny it. The same [was] true when Congress passed the Communications Decency Act because, in its early days, the Internet was already becoming a bit of a cesspool. Congress wanted to empower, back then, Prodigy and CompuServe and AOL to clean up the Internet, so they created a statute [that] sail[ed] through . . . with practically no discussion. Lo and behold, the Supreme Court struck it down as blatantly unconstitutional.”

Zansberg said a federal court would be unlikely to excuse SB 132’s vagueness on the grounds that the framers of the First Amendment could not have anticipated the rise of the Internet. “They were very intimately aware of political misinformation in election campaigns personally, and that’s well documented,” Zansberg said. “Yes, the technology has changed. And [so has] the speed of transmission and the mass distribution. The idea that one had to have Ben Franklin’s printing press to be a mass disseminator of information has changed. But the problem of hurling epithets and falsehoods in the midst of a political campaign was not foreign to our founding fathers.”

Technology companies are aware of the problems that disinformation and speech intended to incite hate or distrust can pose for society, Reid said. “Pretty much across the board, there is agreement that there is a problem that we — the royal We — need to do something about.” He said the industry has not reached agreement on how to respond to the issue.

“Where there’s not consensus is what ought to be done about that, where responsibility lies, the details of how to approach it, whether it should be backstopped by regulation and at what level, those sorts of questions,” Reid said. “What to do about it is a source of huge contention.”

REGULATION

Reid said it is unclear whether laws regulating content posted to social networks and digital media platforms can solve the problem. Because the First Amendment poses challenges for efforts to regulate objectionable speech, he said, policymakers should instead focus on the technology companies’ business model. “You can’t necessarily go after the speech directly,” Reid said.

Instead, Reid argued, antitrust and privacy protection laws offer a more promising course. “We have a few very large dominant platforms,” he said. “There are not a lot of viable competitors to them, [and] the business model they have and the model they have for structuring our discourse” is one that all users must face and that “proliferates, no matter what.”

To solve that problem, Reid said, policymakers should prioritize efforts to “reduce the dominance of these companies so that we get better architectures that don’t reward users who post disinformation, that don’t reward users that post inflammatory hate speech, that don’t have algorithms that are built all around engagement.” “We ultimately need different platforms . . . that don’t reward that kind of content,” he concluded.

Aside from the business-model approach Reid identified, Silver suggested that effective policymaking to address destructive speech online raises difficult questions about corporate ethics. “We now have a situation where the area of discourse is no longer public streets,” he said. “It’s no longer the park. It’s no longer areas that are controlled by the government. It is largely platforms controlled by private entities that are not constrained by the First Amendment. And so you start getting into all these really difficult ethical arguments about what is the moral and ethical responsibility of these corporations.”

Technology companies, Reid said, should not necessarily be expected to act in the same way journalists do. “I think they are a phenomenon to themselves,” he said. “It interweaves with the dissemination and gathering of news in lots of new ways that I don’t think are quite amenable to … the way what we would traditionally call journalistic ethics operate[s].” He suggested that a more productive approach might be to question whether social media platforms should be viewed as being in the business of distributing news. “It is a little bit hard to think of them and say we should treat Facebook like the New York Times or we should treat Google like NBC News and we should treat the algorithm like Walter Cronkite. I don’t think that analogy makes a lot of sense.”

Silver continued by arguing that, while the Internet has indisputably worsened the disinformation problem, neither technology company executives nor legislators should realistically expect human susceptibility to disinformation, lies, incitement and fantastical tales to fade completely away. “People are not very good about figuring out false information on the Internet,” he said. “We’re actually really terrible about it. And most studies suggest that people for the entire history of humankind have been really bad about figuring out what is false and what is correct.” Silver said humans tend to be influenced by confirmation bias. “We’re much more likely to believe information that confirms our preexisting worldviews,” he said. “And we believe it’s true even when there is obvious evidence that it’s not true.”

The privacy aspects of the bill might be less likely to provoke legal challenges. Reid explained that there is a growing trend of state legislatures seeking to set expectations for how social networking companies and digital media platforms handle consumer data. California, Nevada, and Virginia have enacted comprehensive consumer data privacy laws, according to a March 11 update by the National Conference of State Legislatures. The Colorado General Assembly has also acted in this area, enacting the Protections for Consumer Data Privacy Act in 2018.

Whether those laws violate the Constitution is not clear. A 1997 decision by a New York-based federal court found that state data privacy rules might run afoul of the so-called dormant commerce clause, holding that “states’ jurisdictional limits are related to geography,” but that geography “is a virtually meaningless construct on the Internet.” Reid thinks the rationale for that decision may be obsolete. Technology companies rely on “targeted advertising,” Reid said, and consequently, “they have a tremendous amount of technology invested in being able to understand, with a fairly high degree of precision, where you are coming from.” To him that means “the excuse that used to drive these discussions about the dormant commerce clause” now seems “to ring a lot more hollow.”
