They Pull Me Back In

First the disclosure: I’ve read neither the Mike Masnick post nor the Jaron Lanier and Allison Stanger article that prompted it, so I’m not actually here to respond directly to either of these things. It’s just that 47 U.S. Code § 230 is being discussed again, and I wanted very briefly to return to something I’ve written before.

In less than a week, it will be one year since the Supreme Court arguments about Section 230 and the matter of recommendation algorithms. Over the course of five days, I wrote three blog posts.

Then, as now, I can only ever speak as a lay person with an interest in Section 230 stemming from having been deeply involved in the fight against the overall Communications Decency Act of which it was originally a part. Then, as now, I simply wanted to suggest that much hay was being made out of misreadings of the plain text of the law, and that all too frequently its most vocal defenders (I supported the law in the 1990s and support it today) behave as if it’s inherently bad faith ever to question or even posit its limits.

It is not.

The tl;dr of the law is that it offers two simple protections:

  1. Neither a provider nor a user is considered the publisher or speaker of someone else’s speech.

  2. No provider is liable for problematic material they miss while taking good faith actions to restrict such problematic material.

That’s it. That’s all the law does when it comes to liability protections: exactly and precisely these two specifically enumerated things.

What tripped people up during those cases that went all the way to the Supreme Court was an entirely different part of the law, the part defining who is covered by it. It defines “interactive computer service” (one of the entities receiving the law’s specifically enumerated liability protections) as including any “access software provider that provides or enables computer access by multiple users to a computer server”. It then defines “access software provider” as a provider of software that can “filter, screen, allow, or disallow content”, “pick, choose, analyze, or digest content”, or “transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content”.

Because the law says that an “access software provider” so defined receives protection under the law, some people assume that such a provider is protected for all of those actions: the picking and choosing, the allowing, et cetera. The law doesn’t do that. It simply says that a provider which does these things (1) isn’t considered the publisher or speaker of someone else’s speech, and (2) isn’t liable for material it misses while taking good faith actions to restrict problematic material.

The key word there is “restrict”, taken from the law’s use of the phrase “restrict access to or availability of”. Nowhere does the law ever say anything about promoting or recommending content. It only says that a provider of software which might promote or recommend content (“pick, choose”, “allow”) is protected for any actions taken to restrict content.
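Maybe a toy sketch makes the distinction concrete. Everything below is hypothetical code of my own; no real platform works this simply, and none of these names comes from the law or from any actual system. The point is only that restricting and promoting are different operations:

```python
# A deliberately toy illustration; every name here is hypothetical.

def restrict(posts, is_objectionable):
    """Drop posts flagged as objectionable.

    This is the shape of action the phrase "restrict access to or
    availability of" describes: content becomes less available.
    """
    return [post for post in posts if not is_objectionable(post)]

def recommend(posts, score, limit=10):
    """Rank posts by a scoring function and surface the top few.

    This is promotion: content is picked, ordered, and pushed in
    front of users. Nothing about it restricts anything.
    """
    return sorted(posts, key=score, reverse=True)[:limit]

# Example: moderate first, then build a ranked feed.
posts = ["cat video", "vile screed", "recipe"]
moderated = restrict(posts, is_objectionable=lambda p: "vile" in p)
feed = recommend(moderated, score=len, limit=2)
```

The law’s good faith protection plainly contemplates something like the first function. Its liability protections nowhere mention anything like the second.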

In essence, the issue that came before the Supreme Court should have been whether algorithmic actions taken to promote or recommend content constitute a separate act of publishing or speech.

To make a moral analogy, although not a legal one: we take it as common sense that a review of a movie is a separate speech act from the movie itself. So, too, advertisements for that movie. If the movie happened to be viciously anti-Semitic, we would hold the approving reviewer and the advertiser morally responsible for spreading anti-Semitism.

The question when it comes to Section 230 in cases like last year’s is whether we somehow, quite bizarrely, believe that promoting or recommending content is covered by a liability protection for restricting content. The law only protects, on the one hand, being a mere conduit or host and, on the other, taking good faith actions to restrict.

For all intents and purposes, the bad faith here comes from those who think we can’t question Section 230. As I noted in one of my posts last year, even the law’s actual authors concede in their amicus brief that there might be some algorithmic activity that simply isn’t protected by it. I disagreed with them about whether that was true of the case at hand, but they nonetheless recognize that the law as they wrote it leaves that possibility open.

In the end, algorithmic promotion and recommendation winds up under First Amendment jurisprudence anyway, along with, perhaps, any implicated questions of product liability. (Let’s not even get into the idea that the algorithms are speaking only for themselves.) Under any sane reading of Section 230, it just shouldn’t be protected by the overt text of that law.

There are only two questions for the purposes of Section 230: (1) is the algorithmic promotion and recommendation of content a separate speech act, and (2) is promotion and recommendation covered by the phrase “restrict access to or availability of”? It’s inconceivable to me that the answers, respectively, aren’t (1) yes and (2) no. We can debate the answers to these questions in any particular case. What we can’t do is debate the validity of the questions themselves. The law so reflexively defended by so many leaves those questions entirely unaddressed.