{"id":132751,"date":"2023-05-18T15:26:14","date_gmt":"2023-05-18T15:26:14","guid":{"rendered":"https:\/\/fin2me.com\/?p=132751"},"modified":"2023-05-18T15:26:14","modified_gmt":"2023-05-18T15:26:14","slug":"supreme-court-sidesteps-ruling-on-scope-of-internet-liability-shield","status":"publish","type":"post","link":"https:\/\/fin2me.com\/politics\/supreme-court-sidesteps-ruling-on-scope-of-internet-liability-shield\/","title":{"rendered":"Supreme Court Sidesteps Ruling on Scope of Internet Liability Shield"},"content":{"rendered":"
The Supreme Court said on Thursday that it would not rule on a question of great importance to the tech industry: whether YouTube could invoke, in a case brought by the family of a woman killed in a terrorist attack, a federal law that shields internet platforms from legal responsibility for what their users post.
The court instead decided, in a companion case, that a different law, one allowing suits for “knowingly providing substantial assistance” to terrorists, generally did not apply to tech platforms in the first place, meaning that there was no need to decide whether the liability shield applied.

The court’s unanimous decision in the second case, Twitter v. Taamneh, No. 21-1496, effectively resolved both cases and allowed the justices to duck difficult questions about the scope of the 1996 law, Section 230 of the Communications Decency Act.

In a brief, unsigned opinion in the case concerning YouTube, Gonzalez v. Google, No. 21-1333, the court said it would not “address the application of Section 230 to a complaint that appears to state little, if any, plausible claim for relief.” The court instead returned the case to the appeals court “to consider plaintiffs’ complaint in light of our decision in Twitter.”

The Twitter case concerned Nawras Alassaf, who was killed in a terrorist attack at a nightclub in Istanbul in 2017 for which the Islamic State claimed responsibility. His family sued Twitter and other tech companies, saying they had allowed ISIS to use their platforms to recruit and train terrorists.

Justice Clarence Thomas, writing for the court, said the “plaintiffs’ allegations are insufficient to establish that these defendants aided and abetted ISIS in carrying out the relevant attack.”
That decision allowed the justices to avoid ruling on the scope of Section 230, a provision intended to nurture what was then a nascent creation called the internet.
Section 230 was a reaction to a 1995 decision, Stratton Oakmont v. Prodigy, that held an online message board liable for what a user had posted because the service had engaged in some content moderation. The provision says, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Section 230 helped enable the rise of huge social networks like Facebook and Twitter by ensuring that the sites did not assume legal liability with every new tweet, status update and comment. Limiting the sweep of the law could expose the platforms to lawsuits claiming they had steered people to posts and videos that promoted extremism, urged violence, harmed reputations and caused emotional distress.

The ruling comes as developments in cutting-edge artificial intelligence products raise profound questions about whether laws can keep up with rapidly changing technology.

The case was brought by the family of Nohemi Gonzalez, a 23-year-old college student who was killed in a restaurant in Paris during terrorist attacks there in November 2015, which also targeted the Bataclan concert hall. The family’s lawyers argued that YouTube, a subsidiary of Google, had used algorithms to push Islamic State videos to interested viewers.
A bipartisan group of lawmakers, academics and activists has grown skeptical of Section 230, saying it has shielded giant tech companies from the consequences of disinformation, discrimination and violent content on their platforms.
In recent years, they have advanced a new argument: that the platforms forfeit their protections when their algorithms recommend content, target ads or introduce new connections to their users. These recommendation engines are pervasive, powering features like YouTube’s autoplay function and Instagram’s suggestions of accounts to follow. Judges have mostly rejected this reasoning.

Members of Congress have also called for changes to the law. But political realities have largely stopped those proposals from gaining traction. Republicans, angered by tech companies that remove posts by conservative politicians and publishers, want the platforms to take down less content. Democrats want the platforms to remove more, like false information about Covid-19.