Episode 13: Why is social media such a corrosive force in politics?

Facebook ads, Twitter bots, TikTok teens—over the last decade, social media has played an increasingly important role in our social and political discourse. This week, we continue our conversation with UVA Media Studies Professor Siva Vaidhyanathan to discuss how social media algorithms can unintentionally help organize radical groups, and why social media platforms don't bear the same liability for content as other publishers—a protection codified in Section 230 of the Communications Decency Act. After that, journalist Brad Kutner takes us into a recent defamation suit in Virginia involving California Congressman Devin Nunes, Section 230, and a Twitter account for a fake cow.

Editor’s Note (7/6/20): Brad Kutner is a reporter for Courthouse News, not the Virginia Mercury as mentioned in the episode.

Episode Transcript:

Nathan Moore: This is Bold Dominion, an explainer for state politics in a changing Virginia. I'm Nathan Moore.

[intro music]

If you're listening to this, you probably have an internet connection. And living in the internet age as you do, it's likely that you also have a social media presence, maybe even multiple. Between Facebook, Twitter, Instagram, Reddit, and TikTok--which I only know about because my kid shows me stuff on that one--a recent Pew survey found that almost three quarters of American adults use at least one form of social media. If you're listening to this, you're also probably aware of how corrosive the social media landscape of our political discourse has become in recent years.

It turns out social media websites like Facebook and Twitter don't have the same kind of legal restrictions and responsibilities that news outlets do. They're exempt from many of the accountability laws that news outlets have to deal with. That's because of something called Section 230 of the Communications Decency Act. In short, the law says internet platforms are not to be treated as the publishers of information provided by their users. Surprisingly, this all brings us back to Virginia, where California representative Devin Nunes is currently suing a fake Twitter account called Devin Nunes' Cow for defamation. We'll get to fake bovine social media accounts in a bit, I promise.

But first, Bold Dominion producer Charlie Bruce talks with Siva Vaidhyanathan. He's a professor of Media Studies at the University of Virginia and author of Antisocial Media: How Facebook Disconnects Us and Undermines Democracy.

Charlie Bruce: As you've written about extensively, there's a lot of fake news or false information circulating right now. And the thing is that people are looking to Twitter and Facebook for live streams and on-the-ground reporting, but we can't trust everything we're seeing. So I wanted to first ask you how you feel about Twitter's new policy of fact-checking information in tweets related to the coronavirus and similar topics.

Siva Vaidhyanathan: I mean, it's bigger than what's colloquially called fake news--which I don't think is a useful category--bigger than disinformation, bigger than misinformation. It's all of the above. And it's really meant not to convince us that untrue things are true, but to wear us down to the point where we don't care what's true. That's really the overall game. See, if you pay attention to the storm rather than any individual raindrop, you'll get a better picture of what the real threat is to democracy.

Again, not just in the United States, but around the world. We are suddenly faced with this global pandemic, where governments, especially the United States government, have been incapable of putting forth a standard, systematic, strong-voiced set of guidelines to help save lives. Given that, and given the fact that the President himself is willing to spread all sorts of nonsense about COVID-19--nonsense about when we might have a vaccine, nonsense about alternative treatments, nonsense about the extent of the virus itself--all of a sudden, all these companies had to deal with this new front of misinformation from a person they could not afford to alienate. But what Twitter did as a policy, and what Twitter would continue to do as a policy, is inconsistent, somewhat incoherent, but certainly experimental.

So in one case, Twitter saw that Trump had tweeted out something that was not true about mail-in voting and the risk of mail-in voting causing some sort of voter fraud. So instead of deleting the tweet as it might have with any of us, or warning us, or deleting our account, Twitter decided to put a little link at the bottom of the tweet saying click here if you want to find out the truth--"more information" is the way they put it. So it wasn't really a fact-check, and it wasn't really effective. And this was really just one tweet. They didn't go back through the President's tweets--that would have taken too long--and they have not repeated this, as far as I know. Then several days later, the President called for violence against protesters--specifically state violence against protesters. This was a step too far for Twitter. It was a very explicit violation of Twitter's rules: no one is allowed to use Twitter to call for violence against anybody. And so Twitter again had a choice. Do they treat Trump like they would treat you or me if we called for violence? Well, no. It's not that they can't do that; they won't. So they put a cover on the tweet that said "This violates Twitter's rules against calls for violence. If you want to see it, click through," and people would click through and see the original tweet. But at that point the user's mind is focused on the fact that it violated the rule against calls for violence. And that's, I think, a really effective way to deal with tweets like this. But again, it has not been repeated. Trump continued to call for violence and celebrate violence in the ensuing weeks, and Twitter did nothing to follow up. So they've not made it a policy; they made it a one-off.

In contrast, Facebook did even less. Mark Zuckerberg criticized Twitter for its move. Mark Zuckerberg said that, you know, companies like that should not be the arbiters of truth, which I actually think is a totally acceptable position. I agree with Zuckerberg on that--not because of the should, but because of the could. I just don't think these companies can do that job, you know, decide what's true and what's not.

CB: You spoke a little bit about the potential for maybe a federal agency to regulate these companies and how they respond to misinformation. Can you tell me why there hasn't been more regulation?

SV: Yeah, it's easy. The First Amendment. The First Amendment prevents the government from telling companies what they can and cannot say or amplify or share or any of that. So it's never gonna happen.

CB: There was a lawsuit, I believe, last year or the year before, trying to classify Facebook as a publisher. And if it were legally classified as a publisher, it would be subject to different kinds of regulations. Wouldn't that be another potential avenue of regulating the company?

SV: There is one provision of federal law, the sole remaining operational part of the Communications Decency Act, which was passed in 1996. So in 1996, Congress saw this internet thing coming and, unsurprisingly, saw what we all saw, which is: "Oh my gosh, there's going to be a lot of pornography coming through here." Congress wanted to make a stand against pornography, wanted to protect children from pornography. So they wrote up this elaborate, clearly unconstitutional law that would have restricted what sorts of content internet service providers provided. Federal courts quickly ruled it unconstitutional, except for one provision. And that's Section 230. What Section 230 does is grant immunity from civil suits and criminal prosecution to any company that, basically, provides content to people through the internet. That's a pretty broad group of companies, right?

It was written at a time when an "internet service provider" meant the company that actually brought the data to your house--in those days, America Online or Prodigy were the major internet service providers. And the idea was that, you know, email was coming through; that was the dominant internet traffic of the day. No one foresaw a YouTube. No one thought there would be such a thing as a Facebook or a Twitter. So that wasn't even in regulators' minds or in Congress's mind when it created the Communications Decency Act.

Now, flash forward a little bit: you've got all these big companies that are able to host our content and--this is really important--decide what is appropriate, and therefore advantage our content and edit our content, because the Communications Decency Act shields them from liability. What that means is, if I libel someone on Twitter--like I call someone a bank robber or a child molester--and that person sues me, Twitter cannot be a party to that lawsuit. It would be so much better for that person to be able to sue Twitter; Twitter has more money than I do. But this absolves Twitter of that risk and of that fear. And that gives Twitter two powers derived from that confidence.

Number one, it means that Twitter doesn't have to hire a bunch of lawyers and people to sit and read every tweet before it goes up. Because, you know, look, Twitter would be impossible if it didn't have that protection from liability. Facebook would be impossible. Google would be impossible without the protection from liability. But the other thing that Section 230 of the Communications Decency Act allows, and in fact makes very clear, is that companies may, and will find reasons to, moderate the content that flows across their platforms. So they are able to build out systems that flag certain terms or certain images and push those off to the side where humans can review them. And all of these companies do this. They don't do it at the scale that is needed, because these companies are just too big--I mean, Facebook has 2.5 billion users. But nonetheless, Section 230 allows them to, encourages them to, moderate content. If we did want to remove that protection and hold Facebook and Twitter and Google to the standard that they have to be legally responsible for all of the stuff we post, they would shut down tomorrow.

CB: Why?

SV: They would be sued out of business within weeks and couldn't defend themselves. To stay in business, they would have to hire as many people to scour the services as they have posting to the services. You know, they would have to hire millions of people to do this work, and it's just not possible. Right?

So again, we have a set of laws that were created for publications, which are managed by people at a reasonable rate--publications or radio stations or TV stations, right? They don't produce content at this scale and were never designed to, nor should they be, which is why we like the fact that there's human editing and judgment built into the system at every level, and we hold those humans legally responsible for what goes in. But if we want systems this big--if we like Google in our lives, and there are reasons to like it--we have to accept that it is not possible to hold Google to that level of responsibility.
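To make the flag-and-review pattern Vaidhyanathan describes concrete, here is a minimal sketch in Python. Everything in it (the watchlist, the Post type, the review queue) is invented for illustration; real moderation systems layer machine-learning classifiers, image hashing, and large human review teams on top of this basic idea.

```python
# Minimal sketch of post-publication content flagging, as described above.
# Posts go live immediately (no pre-publication legal review); an automated
# filter routes a small subset to a queue for human moderators.
from collections import deque
from dataclasses import dataclass

WATCHLIST = {"bank robber", "child molester"}  # hypothetical flagged terms

@dataclass
class Post:
    author: str
    text: str

review_queue: deque = deque()  # posts awaiting human review

def screen(post: Post) -> bool:
    """Flag the post for human review if it contains a watch-listed term."""
    lowered = post.text.lower()
    if any(term in lowered for term in WATCHLIST):
        review_queue.append(post)  # a human moderator decides later
        return True
    return False  # the post stays up untouched

screen(Post("user1", "My neighbor is a bank robber!"))  # flagged
screen(Post("user2", "Lovely weather today."))          # not flagged
print(len(review_queue))  # -> 1
```

Note the asymmetry this captures: nothing is read before it is published, which is exactly what Section 230 makes economically possible.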

CB: Right. And another thing that you and others have written about is that Facebook is not a neutral platform, as Zuckerberg likes to say it is. Something like 66% of people who join white supremacist groups on Facebook do so because the group was suggested to them, and that creates engagement. So why does Zuckerberg say he's just promoting free speech when really Facebook is taking data and encouraging people to become more radicalized?

SV: Yeah, you've just described the core function of Facebook and the core problem of Facebook, which is algorithmic amplification. It works in two different areas. One is your News Feed: Facebook has a constant, constantly changing record of the things you're interested in and the level of engagement you have expressed toward certain people in your life and toward certain issues and sources of content. And Facebook constantly wants to give you more of what you have already shown that you like. This creates, you know, a funnel of information, so that you are constantly barraged with affirming content--content that tells you and reminds you how right you are about everything. But it can also send you down these rabbit holes of extremism.
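Here is a minimal sketch of that engagement funnel, in Python. The affinity scores and post fields are invented; the point is only the shape of the mechanism: rank candidate posts by recorded engagement, and familiar, affirming content rises to the top.

```python
# Minimal sketch of engagement-based feed ranking, as described above.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str

# Toy version of the "constantly changing record" of one user's engagement.
affinity = {
    ("author", "alice"): 0.9,    # you interact with alice constantly
    ("topic", "politics"): 0.8,  # you click on politics posts
    ("topic", "knitting"): 0.1,  # you rarely engage with knitting
}

def score(post: Post) -> float:
    """Higher score = shown first: more of what you already like."""
    return (affinity.get(("author", post.author), 0.0)
            + affinity.get(("topic", post.topic), 0.0))

feed = [Post("bob", "knitting"), Post("alice", "politics")]
feed.sort(key=score, reverse=True)  # the funnel: affirming content first
print([p.author for p in feed])  # -> ['alice', 'bob']
```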

The second area is Groups--capital-G Groups. A lot of these groups are devoted to really dangerous things, and Facebook is their big carrier, their big engine. Radical groups that are antisemitic or anti-Black or anti-women are growing on Facebook as well. And Facebook's algorithms sense something about people who fit the profile of those already active in those groups. That profile could be the town in which they live or other geographic indicators, expressions of politics, expressions of fandom for certain bands or artists, being male, being in their 20s. Those markers all combine for Facebook to say: "Well, there are a lot of people in this group over here--it might be a crazy alt-right group--who exhibit these demographic patterns. Let's advertise this group to everybody else who fits these patterns." And, you know, only 5% might bite at that bait, but that's still a lot of people. And that's how those groups grow--algorithmically, right? It's not like a person at Facebook says: "Let's build up the alt-right groups." It's that the algorithms are designed to look for people who resemble the existing membership and try to bolster it.

So there are Facebook groups devoted to French Bulldogs, and there are Facebook groups devoted to knitting, and those Facebook groups do the same thing. They will profile the members, try to find identical profiles around Facebook, and recommend that you join the French Bulldog group--which is lovely, right? I mean, that's nice. It's the sort of thing where, if you're running this company and you think, well, 2.5 billion people are on Facebook and all of them are super nice people who have lovely little French Bulldogs and knit a lot, then by all means this is a perfectly reasonable system to build. But they never stop and think: "Wait a minute, what if somebody is spreading antisemitic nonsense? Then maybe we don't want this kind of system." They never run the worst-case scenario.
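And here is a minimal sketch of that lookalike recommendation logic, in Python. The traits and threshold are invented; what matters is that the algorithm measures demographic overlap and never inspects what the group is actually about, so it grows a French Bulldog group and an extremist group in exactly the same way.

```python
# Minimal sketch of profile-based group recommendation, as described above.
def overlap(user_traits: set, group_profile: set) -> float:
    """Jaccard similarity between a user's traits and a group's typical traits."""
    if not user_traits or not group_profile:
        return 0.0
    return len(user_traits & group_profile) / len(user_traits | group_profile)

# Traits shared by a group's current members (town, age, fandom, etc.).
group_profile = {"male", "20s", "band_x_fan", "town_y"}

candidates = {
    "user_a": {"male", "20s", "band_x_fan", "knitting"},
    "user_b": {"female", "40s", "french_bulldogs"},
}

# Recommend the group to anyone whose profile is similar enough. Nothing
# here inspects the group's content, only who already belongs to it.
for user, traits in candidates.items():
    if overlap(traits, group_profile) > 0.4:
        print(f"Recommend this group to {user}")  # -> user_a only
```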

CB:  Why wouldn't they want to run through the worst-case scenario when the consequences have been as extreme as they were in 2016?

SV: Well, Facebook had a decade before that of tremendous success, where everyone there was able to live in a bubble and pretend that nothing bad was happening. Meanwhile, those of us who do my job--social media scholars--were documenting the rise of all of these problems long before 2016. But if you were only paying attention to Palo Alto, California, and how nice everything is, you might not have noticed that all of this horrible stuff was going on, and no one at Facebook cared or noticed. Human rights advocates were trying to get the attention of Facebook staff for many years, and didn't really succeed until 2016--or actually 2017, after the election in the United States. That's when people at Facebook suddenly said: "Oh, wait a minute, maybe we're not creating utopia. Maybe the idea of connecting every person around the world and amplifying things algorithmically based on engagement and emotion isn't all that great."

But they're not in a position to undo what they've built, for a couple of reasons. Number one, it is so incredibly profitable. If people keep using the service and money keeps coming in, that's affirmation. So while there might be little things that people complain about--like the death of hundreds of thousands of Rohingya Muslims in Burma, whatever, little things people complain about--the attitude is: "That's gonna happen, and we'll try to work on that on the side. But fundamentally, people love this thing! And it keeps growing and it keeps making more money!" So, from Zuckerberg's point of view--he's a true believer in himself, a true believer in his vision, a true believer in Facebook's potential to make our lives better--he only focuses on the things that affirm his beliefs, even though there has been some level of cognitive dissonance since 2017, some reason for people at Facebook to be shaken in their belief. We've only seen that happen at the lower levels of Facebook. We're starting to see Facebook engineers and other employees say: "Look, I don't like where this company's going. I don't like having to explain to people in bars that I work for Facebook and have them walk away." That's not happening at the top levels of the company, where they are true believers in what they're doing.

[fade up midtro]

NM: Siva Vaidhyanathan is a professor of Media Studies at the University of Virginia and the author of Antisocial Media: How Facebook Disconnects Us and Undermines Democracy. In the second half of this episode, we're going deeper down the rabbit hole of social media law, including why a California congressman is suing a fake cow in Virginia courts. But first, a short break.

You're listening to Bold Dominion, a state politics explainer for a changing Virginia. Visit us online at bolddominion.org. Got a friend who's trying to figure out Virginia state politics? Well, tell them about this show, and then subscribe on Spotify, Apple Podcasts, or wherever fine podcasts are served up. Bold Dominion is a member of the Virginia Audio Collective, online at virginiaaudio.org. Check out UVA Press Presents, one of our sister podcasts, where you'll hear authors in the UVA Press catalog discuss their inspirations, their processes, and their publications. That's all online at virginiaaudio.org.

[fade out midtro]

NM: So if social media platforms affect what type of content we see, and there are very few ways to hold them accountable, then why is U.S. Representative Devin Nunes suing a fake cow? And why is the California congressman pursuing that lawsuit in Henrico County, Virginia? Bold Dominion producer Aaryan Balu spoke with Virginia Mercury courthouse reporter Brad Kutner.

Brad Kutner: So Devin Nunes is a sitting US Congressman from California. He's a staunch Trump ally and conservative who has made it his business, much like Trump, to start wars with the media. And part of that involves suing publications and critics whenever possible. He has at least six active defamation suits around the country. Two, maybe three, are here in Virginia.

The one I've been covering is here in Henrico County, just in the suburbs of Richmond. This suit--it's the more famous one--names Twitter as well as two prominent parody accounts: Devin Nunes' Cow and Devin Nunes' Mom. It was filed in March 2019, and he is asking for $250 million, which is what you do in a lawsuit. He's got Steven Biss, a Charlottesville-based attorney who specializes in defamation cases. Biss has won a couple of high-profile defamation suits. He's also faced some admonishment from the state court and had his law license suspended at one point. A highlight of the first hearing was Biss comparing Twitter to a fire next door with smoke seeping into your house and choking your newborn baby. He is not one to shy away from hyperbole. Whether or not it helps your argument is up to the judge, I suppose.

So he files it in Henrico County. Why is a California congressman suing a California company in Henrico County? Virginia is one of the few states with pretty weak anti-SLAPP laws. A SLAPP is a strategic lawsuit against public participation, and anti-SLAPP laws are designed to protect speakers in the public square when they speak out against those in power, most often politicians. States like California and New York have very strong anti-SLAPP laws, which allow for dismissal of defamation claims before discovery. And that's another argument around this Nunes thing: because he's suing two anonymous Twitter users, one of the requests he's making in the Henrico court is for Twitter to be forced to reveal the identities of these accounts--which, if you have any knowledge of how the First Amendment has worked in this country since it was written, runs up against our incredibly strong protections for anonymous speech, particularly when it's about a public figure, and particularly when it's authored by a fake cow.

Aaryan Balu: How likely is Nunes to win the case?

BK: You know, it's hard to prove defamation against a political figure, because the threshold for defamation for political figures is really high. I could say pretty much whatever I want about Donald Trump, because he is that much in the public square. We as a nation have agreed that while you can't say absolutely anything, you can say pretty darn near whatever you want about public figures, because they're that important--and they should be able to shrug off obviously satirical criticism.

So in states with good anti-SLAPP laws, you can file a motion to dismiss early on, and it will stop the lawsuit from going any further. That has stifled defamation suits in places like California and New York, where those laws are powerful. So why Virginia? A less powerful anti-SLAPP law.

But the other question, in the early days of the suit, was "How Virginia?"--because when you're filing a lawsuit, you still need to find jurisdictional grounds for the case to be brought here. Biss and Nunes argued that Twitter had an LLC of sorts here for a hot minute, that there are Twitter users in Virginia, and that Twitter collects advertising revenue from people in Virginia. The courts are supposed to seek and find justice; whether such a tenuous connection justifies a California congressman suing a California company in Virginia is, again, up to a judge. This judge sided with Nunes and kept the suit here in Henrico County.

AB: So what exactly is Nunes accusing Twitter of?

BK: This lawsuit in particular--it's the one I know the most about--accuses Twitter of bias against Republicans and bias against Trump and Nunes specifically. He talks about shadowbanning a lot, which I think is really funny, because I definitely know the judge has no idea what shadowbanning is. And I'm pretty sure only Reddit users know what shadowbanning is; I don't think anybody else shadowbans. Nunes responded to the judge's lack of knowledge about Twitter by agreeing and saying he doesn't use it or know how it works. So it's a bunch of people who don't know what Twitter is arguing that Twitter is breaking the law.

AB: So what happened in the latest hearing?

BK: So this hearing was interesting, because it's the first time they've gotten to the merits of the case--the actual legal argument for why Twitter should be able to get this case dismissed, and they very much should. That's when you get into Section 230 of the Communications Decency Act. Platforms are allowed to remove content, to ask for specific kinds of content, even negative content; they're just not allowed to write the content themselves. That's kind of the core of this: they cross the Section 230 line when they participate in the actual creation of the speech. But there's a lot of freedom in how they control content more broadly, and that allows them to be protected by this law.

AB: So what's been the ruling so far?

BK: The judge offered some supportive comments to Twitter. Biss started talking about the anti-Trump bias and the anti-Nunes bias and the shadowbanning, and Judge Marshall said that even if there was bias, Section 230 allows them to have that bias. Literally hundreds of court decisions across the country have sided with Twitter and other platforms on this issue--they are protected under this federal law. I mean, that's problematic, because it allows for seemingly arbitrary, targeted moderation.

So you've got, as Nunes and friends complain, conservative voices being targeted more than liberal voices. I might argue that conservative voices have a tendency to speak with more vitriol that might inspire a ban or content removal, but I'm sure that happens on the left as well. And whether or not that's actually happening, Nunes doesn't really have to check it out when filing such a complaint. He just has to make the claim that it's happening.

AB: So one thing that this case is alleging is that Twitter is acting not just as a platform, but as a publisher. So what exactly is that line?

BK: I actually have to give Biss some credit, because I thought this was kind of a neat argument. I don't think he should win, but I thought he threw an interesting wrench in the machine. He argued that Twitter's algorithm is designed to increase the visibility of content, and that this turned Twitter into a contributor to the content, because it had the capacity to amplify defamatory language while decreasing or suppressing the voice of the target of that language.

So I thought that was a neat argument. But again, that's not really against the law. This law was written to protect internet providers from lawsuits. I have no doubt that these laws were designed and written by internet people to make sure they would never get sued--and if you're running a business and you have the capacity to write the laws that regulate you, you jump on it.

Has it been worse for democracy than better? I might tend to agree; I don't even use Facebook anymore. As we were both saying, these companies have an economic interest in keeping you mad. And much as we ask them to self-regulate, we need to step up and self-regulate ourselves--to consider what impact our speech is having on these platforms and whether or not it's worth the degradation of the public square.

NM: Brad Kutner is a courthouse reporter with Virginia Mercury. Thanks to him, and also to UVA Professor Siva Vaidhyanathan, who we heard from earlier in the show. My name is Nathan Moore, and I'm the host of Bold Dominion. Huge thanks to our producers this week, Charlie Bruce and Aaryan Balu. Find the show online at bolddominion.org. Go ahead and subscribe--it's just a click away. Keep social distancing, y'all, and I'll talk with you again in two weeks.
