Tyler Elliot Bettilyon gives a decent overview of Section 230 (it’s taken me days to find a way to read this, since it’s behind the Medium paywall), but his only real recommendation for how to solve abuse on the web seems to be to ape laws in other countries requiring the takedown of certain kinds of offensive or problematic speech.
This kind of law is not familiar to folks in the United States, but Germany adopted a law that took full effect in 2018, requiring social network providers to remove any content that violates Germany’s fairly strict hate speech laws within 24 hours. Following the Islamophobic terrorist attack in Christchurch, New Zealand, Australia passed a law imposing criminal penalties — including possible jail time — on companies that do not “expeditiously” remove “abhorrent violent material.” The legislation’s wording is vague, so the law will probably have to be interpreted by a court before it can be meaningfully enforced. Nevertheless, there is an emerging global movement toward assigning some degree of legal responsibility to technology companies for the interactions their platforms facilitate.
This kind of law is not familiar to folks in the United States because it’s exactly the kind of law that tends to run afoul of the First Amendment. There’s no question that content moderation on mainstream social platforms is an unholy mess right now, but with “conservatives” gunning for political speech they simply dislike, relitigating Section 230 can’t possibly be the way to go.