At a Glance
- The U.S. Supreme Court heard arguments on whether YouTube is liable for the ISIS content it recommended to sympathizers.
- Plaintiffs were the family of Nohemi Gonzalez, a 23-year-old U.S. citizen who was killed during an ISIS terrorist attack in 2015.
- Some justices argued the issue is one for Congress, not the Supreme Court.
This week, the U.S. Supreme Court heard oral arguments in a legal case that could have a profound impact on the future of free speech online.
Gonzalez v. Google tackles the issue of whether Section 230(c)(1) of the 1996 U.S. Communications Decency Act provides immunity to online platforms when they recommend content to users that was created by other parties.
The case involves Google’s video platform YouTube and whether it can be sued for promoting content from foreign terrorists. The outcome could have a major impact on AI, including AI-generated content and whether platforms would be responsible for recommending a user’s unlawful content.
The case was brought by the family of Nohemi Gonzalez, a 23-year-old U.S. citizen who was killed during an ISIS terrorist attack in November 2015. Her stepfather, Jose Hernandez, and mother, Beatriz Gonzalez, allege that YouTube and its parent company Google are liable for her death because the platform recommended ISIS recruitment content to users.
“Google affirmatively recommended ISIS videos to users,” the complaint alleges. “Those recommendations were one of the services that Google provided to ISIS. Google selected the users to whom it would recommend ISIS videos based on what Google knew about each of the millions of YouTube viewers, targeting users whose characteristics indicated that they would be interested in ISIS videos.”
Google denies any liability under Section 230, which protects interactive online services such as websites, blogs, and forums from being held legally responsible for what others post on their platforms.
But the question before the High Court is this: when a platform’s algorithms recommend third-party content to users, does Section 230 still protect it, given that the platform plays an active role in pushing that content?
In lower courts, Google has thus far prevailed in this lawsuit.
How the justices argued
The bench’s newest justice, Ketanji Brown Jackson, pressed Google’s attorney, Lisa Blatt, contending that Congress did not intend Section 230 to protect platforms that promote offensive material.
The usually quiet Justice Clarence Thomas was notably engaged in the oral arguments. He had argued in previous cases, including Malwarebytes v. Enigma Software Group and Biden v. Knight First Amendment Institute, that Section 230 grants overly broad immunity to Big Tech.
However, Thomas now questioned why YouTube should be punished if the same algorithm that recommends cooking videos to viewers of cooking shows also recommends ISIS videos to people who were interested in terrorism topics.
Both Justice Brett Kavanaugh and Justice Elena Kagan suggested the issue is best left to Congress, not the Supreme Court. The bench was “not the nine greatest experts on the internet,” said Kagan, drawing a laugh.
Neil Gorsuch was the justice who repeatedly brought up AI generation. The Trump appointee said, “artificial intelligence generates poetry, it generates polemics today. That would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected.”
Justice Samuel Alito focused on what constitutes publishing. He questioned the randomness of Google’s algorithmic decision-making: “If [the algorithm] does anything other than display them purely at random, isn’t it organizing and presenting information to people who access YouTube?”
A full copy of the transcript can be found here. Alternatively, an audio version of the hearing is available.
The justices will now deliberate before issuing a decision, which could come before the court’s recess at the end of June.