MIT Technology Review – “Though we probably won’t know until summer, here are some scenarios for how the cases on Section 230 and content moderation could resolve… We shouldn’t read too much into the oral arguments heard this week; they’re not a firm indication of how the court will rule (likely by summer). Still, the questions the justices ask can signal how the court is thinking about a case, and they let us extrapolate what might happen with more confidence. I’ve broken down some of the more probable scenarios below. First, some context. The two cases, Gonzalez v. Google and Twitter v. Taamneh, both deal with holding online platforms responsible for the harmful effects of the content they host. Both were filed by the families of people killed in ISIS terrorist attacks in 2015 and 2017. They differ in many ways, but at their core is a similar claim: that Google and Twitter helped aid terrorist recruitment on their platforms, and thus violated the law. Gonzalez has garnered the most attention for its argument that Section 230 protection shouldn’t extend to recommendation algorithms. If the Supreme Court rules that Section 230 does cover those algorithms, Google has not broken the law; if it rules that it doesn’t, Google could be held liable. The core question is whether the presentation of content (which is protected under the law) is different from the recommendation of content. (I’ve written about why this is actually a really hard distinction, and why experts are so concerned about the unintended consequences of drawing this line legally.) …”
See also EFF – Section 230 is On Trial. Here’s What You Need to Know.