The Section 230 Shibboleth
Having been involved in the original fight over the Communications Decency Act as an organizer of one of the earliest grassroots online petition efforts on any issue (later aped more officially by the Center for Democracy and Technology), I wanted to share a few thoughts about Section 230 given this week’s oral arguments before the U.S. Supreme Court.
First things first: I am, thoroughly, a layman. Second things second: I’m exasperated by the reverential idolatry of 47 U.S. Code § 230 that seems to place it beyond debate.
There’s no question that there are bad faith arguments for revising or repealing the law, but there’s also no question that as the means and methods of the internet change we should be re-examining how Section 230 functions. Not with a predetermined or prejudiced eye toward revising or repealing but because circumstances change, and we actively should weigh any potential concerns.
Let me offer up a portion of Ian Millhiser’s coverage for Vox of today’s oral arguments.
Schnapper argued that, while Section 230 does protect social media websites from the mere act of publishing users’ illegal content, it does not permit those websites to recommend such content to others. So if Facebook were to, for example, send an email to one of its users recommending that it click on a defamatory Facebook post, the company could be held liable for such a recommendation.
Under this theory, an algorithm that ranks content based upon what it thinks website users wish to see — so, for example, every Facebook user’s home feed — is no different from such an email recommending a particular Facebook post, and thus is beyond Section 230’s protections.
But, as Chief Justice John Roberts suggested, it’s not entirely clear where to draw the line between content that is “recommended” by a website or other company, and content that is merely organized by that company. Suppose, for example, that a bookseller has a table where it places all the books related to sports. By grouping all the sports-related books together, this bookseller has engaged in the same sort of content organizing that an algorithm might engage in for a website.
This, I think, gets at the core of the current debate, or what the current debate should be, at any rate.
Section 230 ended up in the Communications Decency Act (and survived as its other components kept getting struck down by the courts) at the behest of internet companies, users, and Congressional allies. The idea isn’t some sort of blanket immunity for providers. Instead, it’s meant to encourage active content moderation.
Basically, the premise was this: providers were worried that if they took deliberate action to tamp down problematic or illegal material, it would be viewed in such a way as to make them liable for everything they didn’t catch. The liability provisions of Section 230 specifically make reference to protecting “Good Samaritan” moderation.
The question boils down to whether or not targeted and tailored surfacing of content to a user somehow is outside the provisions of the “Good Samaritan” liability protection the law sought to guarantee.
Millhiser:
Justice Neil Gorsuch, meanwhile, pointed to a provision of Section 230 that can be read to permit websites to “pick, choose, analyze, or digest content” as a reason to permit algorithms to function unmolested. At the very least, Gorsuch suggested at one point, the Court could send the case back down to the lower courts to consider this language.
The language Gorsuch cites here comes from the law’s definitions section.
Interactive computer service
The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.
[…]
Access software provider
The term “access software provider” means a provider of software (including client or server software), or enabling tools that do any one or more of the following:
A. filter, screen, allow, or disallow content;
B. pick, choose, analyze, or digest content; or
C. transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.
Emphasis added for clarity. Looking back at the liability provisions, the law states a couple of different things: that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” and that “no provider or user of an interactive computer service shall be held liable on account of” actions “taken in good faith to restrict access to or availability of material”.
What this means, at least by plain language, is that any software provider engaged in filtering content cannot be held liable for actions taken to restrict content. It doesn’t mean they’re also protected for actions taken to promote content. That language Gorsuch cites is just defining a class of software company whose restrictive actions expressly are protected under the law.
Gorsuch, to my mind, seems to think that the law somehow also protects the act of “picking, choosing, analyzing, or digesting content”, but that’s not what the plain language of the law says. I suppose it’s possible to interpret the targeted and tailored surfacing and promotion of content as a de facto act of restricting access to other content, but that’s sort of a stretch.
I’d think the more straightforward approach would be for providers to lean heavily upon that first part of the “Good Samaritan” provision mentioned above.
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Back then, what mostly was at issue was the fairly straightforward matter of whether or not anyone who allowed other people to post online content through their services was going to be liable for that content should it become problematic. Were hosts and conduits also “publishers” and “speakers” of the content they hosted and conducted?
The question today, then, becomes whether or not an overt act by a provider to surface and promote content through targeted and tailored algorithms is substantively and substantially equivalent to merely hosting and conveying content, as the internet worked in the mid-90s when the law was passed, before the real era of algorithmic online media.
I don’t know, and I don’t have an opinion. I doubt the Supreme Court is the venue to sort it out. Of course, I’m also not sure the current Congress could sort it out either.
What bothers me, though, is the recurrent intimation that Section 230—a law, not a part of the U.S. Constitution—somehow is so sacrosanct that merely agreeing to reconsider a law that’s nearly thirty years old, when the means and methods of content on the internet have changed so dramatically, amounts to “bad faith”.
Reconsidering old law in the light of new facts neither predetermines nor prejudices the outcome. Not engaging with those new facts in light of an old law is the true “bad faith”.
Addenda
-
Naturally, after somehow managing my way through all of that and publishing it, I thought of a simpler way to extract one part of what’s at issue, and it’s a question we really do need to answer.
At what point, if any, does the tailored and targeted algorithmic promotion of someone else’s speech itself become an entirely separate act of speech by the provider of the algorithm?
If it is an entirely separate act of speech completely under the purview of the provider, at what point, if any, do they become liable for using their own speech to promote the offensive or unlawful speech of others?
I don’t see how this is a question we simply can dodge by turning away, holding up a hand, and intoning, “Section 230.”
-
One thing I didn’t get into is the part of the Court’s exploration that I find peculiar. According to Ashley Belanger for Ars Technica, “it became clear that justices had grown increasingly concerned about the potential large-scale economic impact of making any decision that could lead to a crash of the digital economy”.
This seems like a weird concern, and one outside the purview of the Supreme Court: worrying about whether or not a decision will have an adverse economic effect. There are lots of correct decisions on matters that could have an adverse effect on the economy, but is that really a call for a judge to make?