Yesterday I had a few brief thoughts about Section 230 that expanded into a few more, less-brief thoughts. After reading more coverage of the oral arguments in Gonzalez v. Google LLC, there are some things I want to revisit to reemphasize my views.
Justice Kagan is among those on the Court who, observers have indicated, seem wary of gutting the law, though not because she doesn’t understand the issues at hand. Adi Robertson, writing for The Verge:
The fine-line distinctions around Section 230 were a recurring theme in the hearing, and for good reason. Gonzalez targets “algorithmic” recommendations like the content that autoplays after a given YouTube video, but as Kagan pointed out, pretty much anything you see on the internet involves some kind of algorithm-based sorting. “This was a pre-algorithm statute, and everyone is trying their best to figure out how this statute applies,” Kagan said. “Every time anyone looks at anything on the internet, there is an algorithm involved.”
The key takeaway here is the acknowledgment that “this was a pre-algorithm statute”. That’s the entire reason why we need to revisit Section 230 even if the eventual outcome is to maintain it, and the entire reason I think it’s “bad faith” to suggest there’s not even any debate to be had over the law.
At the risk of picking a fight with an actual lawyer, there are a couple of things I wanted to pull from Eric Goldman’s writeup, the first of which is a reference to something argued by Lisa S. Blatt, the lawyer for Google.
Blatt conceded that “endorsements” aren’t covered by 230. That may be true, but it all depends on how “endorsement” is defined. SCOTUS could define it in very unfavorable ways.
This is interesting to me because it sort of gets at what I was arguing yesterday about how a plain-language reading shows the law says a provider “cannot be held liable for actions taken to restrict content” but does not say “they’re also protected for actions taken to promote content”.
Goldman is skeptical of this, in part (it seems to me) because of something he writes later on, in a bit about the discussion at the Court trying to come up with a definition of “publishing” as used by Section 230.
As I indicated yesterday, the law says that no provider “shall be treated as the publisher […] of any information provided by another” information content provider. When the law was written, it was to protect those who acted as “host” or “conduit” for other people’s speech.
To me, prioritization and removal are two sides of the same coin. If a service removes content, it prioritizes the rest. I’m not sure the justices internalized this, but it seems so completely obvious to me.
This is infuriating and maybe a good example of why people have such a hard time with lawyers. Colloquially, he might be right that these are two sides of the same coin. However, when Congress wrote this law, they only addressed one side of it, and did so very expressly and very explicitly.
It seems to me ridiculous on its face to say “if a service removes content, it prioritizes the rest”. It’s at least as likely that they are merely indifferent to the rest. Really, that’s the far more likely interpretation, and it actually is the very behavior the law was designed to enable: it protects that indifference (or ignorance, really), so long as a provider is trying to do something about removing problematic content.
To put it another way, half the point of Section 230 is that a provider is not liable for any inaction around content if they’re taking “good samaritan” removal actions, and the other half is that a provider is not responsible for the speech of another provider.
Had the law’s authors wanted, they could have written it not specifically about content restriction actions but more generally about content moderation actions writ large. They did not do this. Instead they enacted liability protection for “good samaritan” removal action against problematic content even in the presence of inaction against other problematic content.
So the question is simple to define if not simple to answer: if a provider promotes something (“recommends” it), is that an affirmative action falling outside the law’s protection of inaction?
Or, as I had it yesterday, “At what point, if any, does the tailored and targeted algorithmic promotion of someone else’s speech itself become an entirely separate act of speech by the provider of the algorithm?”
(Not for nothing: even if we were to agree with Goldman that the law itself somehow, if silently, considers “prioritization” as just the other side of the “removal” coin, that leaves us with the same unanswered question of whether or not “prioritization” in fact is a “separate act of speech”, since “prioritization” sure seems a lot more like an overt action and a lot less like passively hosting or serving as a conduit.)
All of the above, then, is what sits at the core of Kagan’s acknowledgment that “this was a pre-algorithm statute”.
That “everyone is trying their best to figure out how this statute applies” is exactly what we should be doing, and perhaps should have been doing before it ever became a question put (Kagan again) to “not the nine greatest experts on the internet”.