
First Amendment Limits on State Laws Targeting Election Misinformation, Part V

This is Part V in a series of posts discussing the article First Amendment Limits on State Laws Targeting Election Misinformation, 20 First Amend. L. Rev. 291 (2022). The following is an excerpt from the article (without the footnotes, which you will find in the full PDF).

While most of the state statutes we review are likely constitutional, their enforcement will do little to eliminate lies and intimidation in elections, much less stem the flow of misinformation that pollutes public discourse. The problem is simply too big. Any legislative approach to combating election disinformation must therefore be part of a broader strategy to reduce the overall prevalence of misinformation and to mitigate the harms such speech causes.

Part of the challenge stems from the fact that we may be entering what Richard Hasen calls a "post-truth era" of election law, in which rapid technological change and hyperpolarization are "call[ing] into question people's ability to distinguish truth from falsehood." According to Hasen, political campaigns "increasingly take place under conditions of voter distrust and groupthink, made possible through foreign interference and domestic political manipulation through new and increasingly sophisticated technological tools." In response to these profound changes, election law must adapt to account for the ways in which our socio-technical systems breed misinformation. Furthermore, we must recognize that regulating the truth of political campaign speech can only take us so far; there are some things the law simply cannot do on its own.

[A.] The Internet Blind Spot

One of the biggest challenges to election speech regulations is the rise of social media, which have become modern-day public forums for voters to access, engage with, and challenge their elected representatives and fellow citizens. While political misinformation has been with us since the founding of the nation, it spreads especially quickly on social media.

[* * *]

Although the Internet plays an increasingly important role in political communication and public discourse generally, there is currently no national strategy for dealing with online election disinformation. The federal government does not regulate the content of election-related speech outside the broadcast context, and even for the broadcast medium, federal regulations are limited. The transparency of political advertising receives somewhat more federal attention, but here again the law is aimed at advertising distributed by broadcast, cable, and satellite service providers. Although more money is now spent on online advertising than on print and television advertising combined, federal laws imposing disclosure and retention requirements do not currently apply to online political ads.

[* * *]

Further complicating matters, state efforts to reduce election misinformation on social media are limited by Section 230 of the Communications Decency Act, which prohibits the enforcement of state laws that hold Internet platforms liable for the publication of third-party speech (including advertising content). As a result, while states can enforce their election speech laws against the individuals and entities that originally made the prohibited statements, they cannot hold social media companies or other Internet services civilly or criminally liable when such speech is shared on their platforms. Given the large role social media platforms play in spreading and amplifying election misinformation, this leaves much of the battle over election speech beyond the reach of state legislatures.

Both Republicans and Democrats have called for changes to Section 230, but it seems unlikely that Congress will coalesce around legislation stripping election-related harms from the law's protections. Indeed, the two parties' complaints about the law suggest it will remain contentious for the foreseeable future, with one side arguing that Section 230 is to blame for letting social media platforms do too little policing of harmful content, and the other arguing that Section 230 allows platforms to engage in too much speech moderation, motivated by bias against conservative viewpoints. And even if they could agree on the problem they want to solve, there is a danger that congressional efforts to force social media companies to police election misinformation would only make the situation worse.

[B.] The Limits of the Law

Whether or not Congress takes the lead in regulating election speech, the government's efforts to combat election misinformation must be part of a multifaceted strategy. . . . While the government can target narrow categories of false, fraudulent, or threatening speech, the First Amendment sharply limits its ability to regulate false and misleading election-related speech more broadly. This is not to say that state legislatures should throw up their hands at the problem of election misinformation. Both the federal and state governments have a number of policy options that can reduce the prevalence and harmful effects of election misinformation. Two approaches hold particular promise, both less likely than direct regulation to raise First Amendment concerns: (1) increasing transparency about the types and scope of election misinformation reaching voters, and (2) supporting self-regulation by entities that act as conduits for the dissemination of others' speech, particularly social media platforms.

[* * *]

Transparency is not a panacea, however, and there is reason to believe that as the government imposes ever more intrusive recordkeeping and disclosure requirements on media and technology companies, these efforts will face constitutional challenges. Eric Goldman notes that laws requiring online platforms to disclose their content moderation policies and practices are "problematic because they require publishers to detail their editorial thought process[, creating] unhealthy entanglements between government and publishers, which in turn distort and chill speech." According to Goldman, transparency mandates can "affect the content of published content" in the same way that direct restrictions on speech do, and these mandates therefore "should be classified as content-based restrictions and trigger strict scrutiny." He also suggests that requiring platforms to publicly disclose their moderation and content curation practices should be classified as "compelled speech," likewise anathema under the First Amendment.

A recent decision by the Fourth Circuit, Washington Post v. McManus, appears to support these concerns. McManus involved a Maryland law that extended the state's advertising disclosure and recordkeeping regulations to online platforms, requiring them to make certain information available online (such as the purchaser's identity, contact information, and the amount paid) and to collect, retain, and make other information available upon request to the Maryland Board of Elections. In response, a number of news organizations, including The Washington Post and The Baltimore Sun, filed suit challenging the requirements as applied to them. Writing for the court, Judge Wilkinson concluded that the law was a content-based regulation of speech that also compelled speech, and that these features of the law "pose[] a real risk of either chilling speech or manipulating the marketplace of ideas."

[* * *]

The McManus case casts a shadow over state laws that seek to impose broad recordkeeping and disclosure requirements on online platforms. More narrowly tailored transparency laws targeting election misinformation on social media platforms may pass constitutional muster, however. The McManus court did not strike down the Maryland statute; it merely held that the statute was unconstitutional as applied to the plaintiff news organizations. Moreover, as Victoria Ekstrand and Ashley Fox note, "Given the plaintiffs' unique position in the case, it is currently unclear how far, if at all, this opinion extends to online political advertising laws targeting large platforms such as Facebook." Nevertheless, they write, McManus suggests that governments are unlikely to succeed with a broad approach that imposes recordkeeping requirements on all or nearly all third parties that share political advertising online.

No matter what level of First Amendment scrutiny courts apply to mandatory recordkeeping and disclosure laws, the reality is that federal and state governments cannot simply legislate election misinformation out of existence. The government's efforts to ensure free and fair elections must account for, and strive to harness, the influential role that online platforms, especially social media, play in facilitating and shaping public debate. Because these private entities are not state actors, their choices to prohibit election misinformation on their services are not subject to First Amendment scrutiny.

[* * *]

One way the government can facilitate online platforms' efforts to address election misinformation is to preserve Section 230's immunity provisions. These protections give platforms the "breathing space" they need to experiment with different self-regulatory systems for dealing with election misinformation. For example, Section 230(c)(1) allows Internet services to monitor third-party content on their sites without fear of being held responsible for the material they review. This allows social media companies to escape the "moderator's dilemma," in which any attempt to moderate third-party content can result in the company becoming aware of that content's unlawful nature and thus becoming liable for all of the content on its service. To avoid this liability, the rational response would be to forgo third-party content review altogether, creating a strong disincentive for moderation.

Section 230(c)(2) also protects platforms from civil lawsuits arising from their removal of inaccurate information or their banning of users who post such content. While platforms undoubtedly have a First Amendment right to choose what speech and speakers they allow on their services, this provision serves as a highly effective barrier to claims by social media users who have been suspended or banned for violating a platform's acceptable use policies. Indeed, after one of his Twitter posts was flagged as misinformation, former President Donald Trump targeted this very provision in an executive order aimed at limiting platforms' ability to remove or flag controversial speech.

As the states' experience shows, there is no one-size-fits-all approach to dealing with election misinformation. While many believe that social media providers are not doing enough to remove election disinformation from their platforms, others argue that the major platforms are too willing to limit political discussion and ban controversial speakers. The advantage of Section 230 is that it lets platforms take different approaches to this challenging and controversial topic. As Mark Lemley points out, "[t]he fact that people want platforms to do fundamentally contradictory things is a pretty good reason that we shouldn't impose any model on how a platform regulates the content posted there — and therefore a pretty good reason to keep Section 230 intact."
