
Businesses, users, experts defend big tech against algorithm lawsuits

On Thursday, a diverse group of businesses, internet users, academics, and human rights experts defended Big Tech's liability shield in a crucial Supreme Court case over YouTube's algorithms, with some arguing that stripping federal legal protections from algorithm-driven recommendation engines would have a major impact on the open internet.

Among those weighing in at the Court were major tech companies such as Meta, Twitter, and Microsoft, as well as some of Big Tech’s most vocal critics, including Yelp and the Electronic Frontier Foundation. Additionally, Reddit and a group of volunteer Reddit moderators also participated in the case.

What happened. The controversy stems from the Supreme Court case Gonzalez v. Google, which centers on whether Google can be held liable for recommending pro-ISIS content to users through YouTube's algorithm.

Google has claimed that Section 230 of the Communications Decency Act shields it from such litigation. The plaintiffs, family members of a victim killed in a 2015 ISIS attack in Paris, argue that Google can be held liable under a US anti-terrorism law for the recommendations YouTube's algorithm made.

Reddit's filing read:

“The entire Reddit platform is built around users ‘recommending’ content for the benefit of others by taking actions like upvoting and pinning content. There should be no mistaking the consequences of the petitioners’ claim in this case: their theory would dramatically expand Internet users’ potential to be sued for their online interactions.”

Yelp steps in. Yelp, a company with a long history of conflict with Google, argued that its business model depends on providing accurate, non-fraudulent reviews to its users. It warned that a ruling holding recommendation algorithms liable could severely impact its operations by forcing it to stop sorting through reviews, including filtering out those that are fake or manipulative.

Yelp wrote:

“If Yelp could not analyze and recommend reviews without facing liability, those costs of submitting fraudulent reviews would disappear. If Yelp had to display every submitted review … business owners could submit hundreds of positive reviews for their own business with little effort or risk of a penalty.”

Meta’s involvement. Facebook parent Meta argued in its legal submission that if the Supreme Court were to reinterpret Section 230 to protect platforms’ ability to remove content but not to recommend it, the ruling would raise significant questions about what it even means to recommend something online.

Meta representatives stated:

“If merely displaying third-party content in a user’s feed qualifies as ‘recommending’ it, then many services will face potential liability for virtually all the third-party content they host, because nearly all decisions about how to sort, pick, organize, and display third-party content could be construed as ‘recommending’ that content.”

Human rights advocates intervene. New York University’s Stern Center for Business and Human Rights argued that it would be extremely difficult to craft a rule that singles out algorithmic recommendations for liability, and that such a rule would likely lead to the suppression or loss of a significant amount of valuable speech, particularly speech from marginalized or minority groups.

Why we care. The outcome of this case could have significant implications for the way that tech companies operate. If the court were to rule that companies can be held liable for the content that their algorithms recommend, it could change the way that companies design and operate their recommendation systems.

This could lead to more cautious content curation, fewer recommendations surfaced to users, and increased legal costs and uncertainty for these companies.

The post Businesses, users, experts defend big tech against algorithm lawsuits appeared first on Search Engine Land.

