Has The Supreme Court Even Read Section 230?

There’s some need for me to return to the Section 230 well one more time, partly because I want to say something else but mostly because you need to listen to how Danielle Citron talked about it in a podcast conversation with Slate’s Dahlia Lithwick.

First, me. I didn’t really get into this in my other posts, but there’s a thing Kavanaugh said that’s been sticking in my craw. (Inexplicably and against all evidence, the piece I’m quoting below is titled, “The Supreme Court Actually Understands the Internet”.)

None of the justices appeared satisfied with Schnapper’s reasoning. Justice Brett Kavanaugh summed it up as paradoxical, pointing out that an “interactive computer service,” as referred to in Section 230, has been understood to mean a service “that filters, screens, picks, chooses, organizes content.” If algorithms aren’t subject to Section 230 immunity, then that “would mean that the very thing that makes the website an interactive computer service also means that it loses the protection of 230. And just as a textual and structural matter, we don’t usually read a statute to, in essence, defeat itself.”

First of all, Section 230 doesn’t define “interactive computer service” to mean a service “that filters, screens, picks, chooses, organizes content”. Here’s how it does define it.

The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.

Within that definition, there’s one class of such a service called an “access software provider”, which the law does define as a provider of tools to “filter, screen, allow, or disallow content”, “pick, choose, analyze, or digest content”, or “transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content”.

First, Kavanaugh is defining a wider term by the definition of a narrower subset of that term. Second, algorithms are not “the very thing that makes the website an interactive computer service”; websites existed for quite some time before they were operated through targeted and tailored recommendation math made possible by extensive mining of personal data.

Third, and more importantly, the law doesn’t say anything about protecting all of the actions that might be made possible by tools provided by an access software provider. What it says is that as a subset of interactive computer service, an access software provider won’t be treated as the speaker of someone else’s speech, and that as a subset of interactive computer service, an access software provider won’t be held liable for good-faith efforts to restrict content.

It never says anything at all about liability when it comes to anything else. Not a single other thing.


What you really want to do is go listen to Dahlia Lithwick and Danielle Citron discuss Section 230 and this past week’s Supreme Court arguments (or read the awkward, unedited transcript), because Citron is right on the money on all of this—and also on the absurd notion, repeated more than once by more than one Justice, that there’s such a thing as a “neutral” algorithm.

Citron takes Lithwick through the legal and legislative history of Section 230, and pretty much eviscerates a court that seemed mostly ignorant about the law and mostly uninterested in actually understanding it.

I mean, if the theory works out for the plaintiffs […], [providers] would bear responsibility for their own actions. We would enter a world in which network tools—that these tools and services that engage in exploitation of our data—they’d have to be responsible for some of what they did. They would not be responsible […], actually, for over-filtering. (c)(2) stands, right? We’re not going to lose our marbles here. But it would be a proper interpretation of (c)(1). Finally, we wouldn’t be in this land of over-interpreting (c)(1), right?

But the Justices seem to think they’re out over their skis here, that they can’t interpret the statute. But, you know, as Justice Jackson showed us (she’s brilliant), they can interpret the statute. And if Congress wants [it can] revise the playing field, keeping on this […] super-immunity that we have here. It’s an unqualified immunity, so to speak. It’s like an absolute super-immunity that anything happens on the internet they’re immune from. And that shouldn’t be the case: that wasn’t the point of (c)(1).

And so I hope they listen to Justice Jackson, and heed her lessons because she read the briefs. She understands the stakes. She understands the theory of liability the plaintiffs are pushing. It’s not about treating YouTube as a publisher or speaker for leaving up, failing to remove ISIS videos. The theory of liability isn’t about the ISIS videos and not taking them down. The theory of liability is the business model of YouTube, of using an algorithm to recommend, using our data in a very sophisticated way, to press content.

Lithwick then asks Citron to write the opinion that the Court obviously isn’t going to write, given that they aren’t truly engaging with the actual issues in the case, or even the actual content or intent of the law—none of which is even especially abstruse. Or, at least, shouldn’t be for the highest court in the land.

I could, easily. I feel like I’ve written it in a series of review articles […] where I explain that the over-broad interpretation of the statute has led us to a land that misunderstands Section 230 (c)(1) and (2), and how they operate together. And, you know, we have instructions, a blueprint from Cox and Wyden, and we can go back to the origins. We can go to the language.

So, the decision would read—and I’m imagining this is what Justice Jackson would write—is that […] Section 230 does not immunize YouTube from liability—civil liability—here, because (c)(1) is inapplicable. Here, what’s at issue is YouTube’s own conduct, their algorithmic recommendation system—that they built and make tons of money from—that they use our data and recommend things. This lawsuit isn’t about treating YouTube as a publisher or a speaker for information that they fail to remove or left up. “We out!”, you know?

The “too long, didn’t read” of it is what I’ve been saying here all week: nothing about Section 230 protects providers from their own algorithmic actions. If we think those actions should be protected, then Congress needs to draft a law that says that, because Section 230 ain’t it.

“If they want to write that statute, do it, friends,” says Citron. “But that’s not the statute that was written in 1996.”


Addenda

  1. I’d be remiss if I failed to mention that Cox and Wyden, the authors of Section 230, obviously have read it, and they filed a brief backing Google in this case. I just want to pull out a couple of items.

    First, check out the third footnote.

    The discussion in this brief pertains only to the algorithmic recommendation systems at issue in this case. Some algorithmic recommending systems are alleged to be designed and trained to use information that is different in kind than the information at issue in this case, to generate recommendations that are different in kind than those at issue in this case, and/or to cause harms not at issue in this case. Amici do not express a view as to the existence of CDA immunity in a suit based on the use of algorithms that may operate differently from those at issue here.

    While this basically is just typical legal ass-covering for possible futures, it’s important to take it at face value because, at least for the moment, Cox and Wyden leave open the possibility that, yes, certain algorithmic processes in fact might not be protected by Section 230.

    That means that the general issue of applicability to algorithmic activity is subject to debate, contrary to those commentators who try to make it sound like there’s no there here. It’s entirely possible there is no there in this particular case. I don’t know. What’s not possible, though, is to say there never could be a there in some future case.

    Then there’s this bit from the main text of the brief.

    Petitioners thus seek to hold Google liable for the harms caused by ISIS’s videos, on the ground that YouTube has disseminated that content and, through its recommendation algorithms, made it easier for users to find and consume that content. Petitioners’ claims therefore treat YouTube as the publisher of content that it is not responsible for creating or developing. The fact that YouTube uses targeted recommendations to present content does not change that conclusion; those recommendations display already-finalized content in response to user inputs and curate YouTube’s voluminous content in much the same way as the early methods used by 1990s-era platforms. The principal differences, of course, are the size of the data set the modern system must curate and the speed at which it does so.

    This doesn’t sit right with me. Whether or not this is how the plaintiffs frame it, the question isn’t “is the provider the speaker of the original content” but “is the recommendation its own, separate speech act” and, if so, “is the provider the speaker of the recommendation”. That the provider isn’t involved in the creation or development of the speech being recommended isn’t relevant to those other two questions.

    Cox and Wyden, in fact, don’t simply ignore this question. They tackle it head-on.

    Contrary to the government’s argument, U.S. Br. 26-28, Section 230 does not permit the Court to treat YouTube’s recommendation of a video as a distinct piece of information that YouTube is “responsible” for “creat[ing],” 47 U.S.C. § 230(f)(3). Although amici agree with the government that platforms’ recommending systems could cause harms that become the subject of claims for which there might be no Section 230 immunity, that is the extent of their agreement. The government’s attempt to define the category of non-immune, recommendation-based claims by positing that a recommendation constitutes a new piece of “information” ineligible for immunity finds no support in the statute and would preclude even the possibility of immunity for recommendation-based claims. Moreover, the government’s reasoning—that presenting a video to a YouTube user amounts to an implicit statement by YouTube—would apply equally to all content presentation decisions, not just recommendations.

    First, note how again Cox and Wyden admit there might be some kind of algorithmic action that would not receive Section 230 immunity. They just argue that the algorithmic action here doesn’t count.

    Second, they don’t provide an argument here, simply stating that the idea that “a recommendation constitutes a new piece of ‘information’ ineligible for immunity finds no support in the statute”. An argument would be nice, because it’s at least as true that there’s no support in the statute for the idea that a recommendation isn’t a new piece of information. It’s also untrue on its face that a recommendation algorithm is the same thing as simply “presenting a video”.

    This is what I was talking about before: that there’s arguably a difference between the “host” and “conduit” nature that Cox and Wyden sought to protect through liability protections and the targeted recommendations that rest on extractive data-mining. It’s legitimate to argue that recommendations perhaps aren’t merely “hosting” and “conducting” the speech of someone else.

    If a user were expressly to follow a channel that posts offensive material, the mere act on the provider’s part of showing that user the activity of that channel, which might include recommendations made by that channel, would be the provider acting as “host” and “conduit”. I’m extremely skeptical of, and a bit unnerved by, the suggestion that the provider going out of its way to recommend and suggest additional content somehow isn’t the speech of the provider.

    Cox and Wyden continue their brief by referring repeatedly to “content moderation and curation”.

    Congress sought in Section 230 to afford platforms leeway to engage in the moderation and curation activities that were prevalent at the time, and to encourage the development of new technologies for content moderation by both platforms and users.

    But the statute’s liability provisions say nothing at all about “content moderation and curation” writ large. They say only that a provider isn’t the speaker of someone else’s speech, and that a provider isn’t liable for good-faith efforts at content removal. Not moderation. Not curation. Just removal.

    Cox and Wyden easily could have written “content moderation and curation” in Section 230, but for some reason they saved it for this brief (and, presumably, other writings).

    The provision does not distinguish among technological methods that providers use to moderate and present content, thereby allowing for innovation and evolution over time. Indeed, Congress declared that Section 230 is intended to “encourage the development of technologies which maximize user control over what information is received,” and to “remove disincentives for the development and utilization of blocking and filtering technologies.” 47 U.S.C. § 230(b)(3)-(4). And it broadly defined the “interactive computer service[s]” eligible for immunity, to include platforms that provide “software” or “tools” that “filter,” “choose,” and “display” content, among other things. 47 U.S.C. § 230(f)(2), (4).

    This is so weird. It’s true that the law says Congress seeks “to encourage the development of technologies which maximize user control”, but of course provider-programmed and -deployed recommendation algorithms are not user control. They are, in fact, the exact opposite of user control. What’s more, while the law does define protected services to include those which “provide ‘software’ or ‘tools’ that ‘filter,’ ‘choose,’ and ‘display’ content”, the liability protection provisions say nothing about those broader activities and only protect good-faith content removal.

    Cox and Wyden begin their brief by stating that they intend to “explain the plain meaning” of the statute, and yet they then repeatedly and bizarrely misstate what the statute actually says in its plain language.

    Maybe this is what they meant by the law, but it’s not the law they managed to pass. If they want this to be what the law says, they need to pass a new law.

