The US Supreme Court, Google and the Future of Free Speech
Gonzalez v. Google could make online platforms liable for recommending content to users.
This week, the U.S. Supreme Court heard oral arguments in a legal case that could have a profound impact on the future of free speech online.
Gonzalez v. Google tackles the issue of whether Section 230(c)(1) of the 1996 U.S. Communications Decency Act provides immunity to online platforms when they recommend content to users that was created by other parties.
The case involves Google’s video platform YouTube and whether it can be sued for promoting content posted by foreign terrorists. The outcome could also have a major impact on AI, including AI-generated content and whether platforms would be held responsible for recommending a user’s unlawful content.
The case was brought by the family of Nohemi Gonzalez, a 23-year-old U.S. citizen who was killed in an ISIS terrorist attack in November 2015. Her stepfather, Jose Hernandez, and her mother, Beatriz Gonzalez, allege that YouTube and its parent company, Google, are liable for her death because the platform recommended ISIS recruitment content to users.
“Google affirmatively recommended ISIS videos to users,” the complaint alleges. “Those recommendations were one of the services that Google provided to ISIS. Google selected the users to whom it would recommend ISIS videos based on what Google knew about each of the millions of YouTube viewers, targeting users whose characteristics indicated that they would be interested in ISIS videos.”
Google denies any liability under Section 230, which protects interactive online services such as websites, blogs and forums from being held legally responsible for what others post on their platforms.
But the question before the High Court is this: When a platform’s algorithms recommend content to a user, does Section 230 still protect the platform, given that it plays an active role in pushing that content?
Google has thus far prevailed in this lawsuit in the lower courts.