
Let's Talk About [Section 230]

AngelHedgie Registered User regular
Given that it's come up in both the public discourse and in a number of threads here, and the last time I brought it up was over a decade ago, it might be a good idea to separate those discussions out to go over Section 230 of the Communications Decency Act - if only to not cross the streams elsewhere.

So, before we can talk about Section 230, a question must be answered - what is Section 230? While I disagree with some of his conclusions and position, this video by lawyer and legal YouTuber LegalEagle does do a good job of running down the history of the law:

https://www.youtube.com/watch?v=eUWIi-Ppe5k

The short version is this - in the early days of online service providers, a set of rulings established that such providers could be held legally liable for user generated content if they moderated their services, which in turn made such providers reluctant to do so. This was a frankly ludicrous result, so legislators created Section 230 - a law that basically says that an online service provider cannot be held legally liable for user generated content. For example, if someone posted a defamatory comment on these forums, Tube doesn't have to worry that striking the post will open our hosts up to liability. And that is a good thing - these forums probably wouldn't exist without that indemnity.

The devil, unfortunately, lies in the details. Indemnity is a useful tool, but when extended too widely it can become harmful. And Section 230 has increasingly become blanket immunity, which has caused a number of problems. Further, there are (as we just saw) a large number of people arguing in bad faith to "reform" Section 230 for their own purposes. The end result has been a mess from all ends.


Posts

  • AngelHedgie Registered User regular
    So, what are my issues with the law? Basically, they can be summed up in two points: indemnification of acts that we as a society most likely do not want to indemnify (e.g. How You Can Run A Site Devoted To Nonconsensual Pornography Legally (And Why That's A Bad Thing)), and the blurring of lines between hosting and publishing.

    On point one, the main issue is that it turns out that "user created content" is a wide-ranging target that can potentially cover a lot of activities. For example, if a service provider allows potential landlords to use ad filters that violate the Fair Housing Act, are such ads "user created content"? This isn't an idle question - Facebook has argued that they are indemnified by Section 230 when landlords post ads violating the FHA by using the filters Facebook provides. But the signature example here has to be nonconsensual pornography. Sites that specialize in hosting such material are able to protect themselves by following two rules: first, that all images must be provided by end users; and second, that no underage images are allowed (because such images are covered by federal statute, which does have a carveout in Section 230.) Probably the most famous example of this was the disturbingly named "Fappening" - a massive release of nonconsensual pornography (much of which was gathered by breaking into victims' cloud accounts) that was hosted on a subreddit of the same name. When it first surfaced on Reddit, the company's then-CEO wrote an editorial in which he stated that Reddit had no legal requirement to remove it, and thus the subreddit would be allowed to stay. Eventually, one of the victims noted that the images of her in the release were taken when she was underage - at which point Reddit, now facing legal liability, nuked the subreddit from orbit.

    On the second point, the issue here is that while defenders of Section 230 argue that it only applies to hosting and not publication, the reality is that the courts have held that publisher behavior can be indemnified as long as user-submitted content is involved. This is the heart of why Batzel, one of the first major Section 230 cases, is such a bad ruling - the defendant was running a moderated listserv, where he would send out reports of stolen artwork based on tips he received from users. He had editorial control prior to publication - something that has been held to define someone as a publisher in other media (hence why newspapers won't publish defamatory letters to the editor, for example) - yet the Ninth Circuit held that Section 230 applied.

  • Foefaller Registered User regular
    https://youtu.be/w7tZmFfc73Y

    Another good YouTube lawyer video on what Section 230 does, with a dash of product placement for a book that's a good read if you want to go further.

    Most of what I see defending Section 230 basically boils down to "It could end the Internet as we know it!!" Because apparently everyone will need an army of lawyers, rather than just more moderators. (Could even be a new industry: training/certifying moderators to identify and remove content that would put the host in legal trouble.)

    In the short term though, the federal law exception seems to allow some way to fix the biggest problems, even if in a whack-a-mole fashion: make non-consensual porn a federal offense rather than just grounds for a civil suit, for starters.

  • AngelHedgie Registered User regular
    Foefaller wrote: »
    https://youtu.be/w7tZmFfc73Y

    Another good youtube lawyer video on what section 230 does, with a dash of product placement for a good read if you want to go further.

    Most of what I see defending Section 230 has basically been summarized as "It could end the Internet as we know it!!" Because apparently everyone will need an army of lawyers, rather than just more moderators. (Could even be a new industry; training/certifying moderators to identify and remove content that would put the host in legal trouble.)

    In the short term though, the federal law exception seems to allow some way to fix the biggest problems, even if in a wack-a-mole fashion: make non-consensual porn a federal offence rather than just open to lawsuit for starters.

    The problem with the "it could end the Internet as we know it" argument is that for a lot of people, "the internet as we know it" is pretty gooseshit. Saying "you can sue the people who harmed you" is meaningless when online service providers make it functionally impossible to do so. And while you could fix some of the issues through the carveout, others require actually addressing the underlying problem. Allowing Facebook to be indemnified against FHA cases would go a long way towards functionally gutting the FHA, as it would force the government to play whack-a-mole with individual landlords.

  • AngelHedgie Registered User regular
    Ars has posted an "explainer"* about Section 230. The piece is the sort of "rah rah 230" bit I've come to expect from the tech press, but the expert they get to hold up the pro-230 side comes across as an ideologue. As part of the piece, the author discusses a case involving an individual whose ex turned Grindr into a tool of harassment by creating fake profiles stating that this person had rape fantasies and wanted people to go to his apartment to have sex with him. While the accounts would get banned, the ex would just create new accounts, so he eventually sued Grindr, claiming that the platform was defective because it lacked tools to deal with abuse. Ultimately, Grindr successfully made a Section 230 defense and had the case dismissed. The pro-230 expert argues that such stories don't justify a change:
    But Goldman argues that even stories like Herrick's don't provide a compelling reason to change Section 230.

    "This is not a case where that person was anonymous. We know who did it," Goldman told Ars. "The question is what do we need to do to make that bad actor stop and to properly punish him for his bad actions."

    This is an argument I've seen from a lot of 230 defenders - that the law doesn't stop liability, but shifts it to the "proper" party. It sidesteps the point that part of what makes cases like this so damaging is the reach of the platform amplifying the act.

    Near the end, there's a link to an essay discussing the ways indemnification has become a problem. One good point it makes is that online services have changed immensely in the past two decades, and as such a re-evaluation of what should and should not be indemnified is past due.

    *The "explainer" framing annoys me, as if criticism of Section 230 is born out of ignorance. It's a rather backhanded way of dismissing criticism, I find.

  • Phoenix-D Registered User regular
    Grindr went to law enforcement, banned the accounts when they noticed them, specifically checked for this dude's details. Should Tube be liable because we all occasionally get papist PMs?

    You're being kinda vague here despite writing a lot. Because yes, ending 230 - which is what that Ars quote is about - would functionally end the internet as we know it. The only kind of site that functions in that world is a completely unmoderated one.

  • Foefaller Registered User regular
    Phoenix-D wrote: »
    Grindr went to law enforcement, banned the accounts when they noticed them, specifically checked for this dude's details. Should Tube be liable because we all occasionally get papist PMs?

    You're being kinda vague here despite writing a lot. Because yes, ending 230- which is what that Ars quote is about- would functionally end the internet as we know it. The only site that functions there is a completely umoderated one.

    I think it's kinda silly to assume his position is to end 230 altogether. But we're well past the days of just message boards and chatrooms.

    For example, in a perfect world Facebook wouldn't be able to claim indemnity to help people violate the FHA. In fact, one of 230's exceptions is for cases where there is a federal violation. But the case law essentially states you would need to prove that Facebook either directly assisted people in using their ad tools in such a way, or went to extreme lengths to intentionally blind themselves to the possibility of it. Maybe the case in question will come out in a way that makes any changes unnecessary, but some black-letter law about how 230 applies to targeted ads would probably be better.

  • Phoenix-D Registered User regular
    (PS, sorry to the mods for the approval spam)

    Facebook could claim 230 immunized them even in a perfect world. Because people make bad legal arguments all the time :P In reality, Facebook settled those suits after agreeing to changes in their policy. Hardly the stunning problem Hedgie portrays it as here. I don't think Hedgie wants full repeal but I still don't have a firm idea of what he *actually* wants, and he uses arguments against full repeal as if they weren't such.

  • AngelHedgie Registered User regular
    Phoenix-D wrote: »
    Grindr went to law enforcement, banned the accounts when they noticed them, specifically checked for this dude's details. Should Tube be liable because we all occasionally get papist PMs?

    You're being kinda vague here despite writing a lot. Because yes, ending 230- which is what that Ars quote is about- would functionally end the internet as we know it. The only site that functions there is a completely umoderated one.

    Yes, but there's something Grindr could have done that would also have had an impact on this sort of abuse - increasing the friction involved in creating new accounts. The problem is that doing so would impede growth, and to platforms like Grindr, growth is much more important than curtailing abuse (an attitude that has been endemic in social media.)

    As for my position, I would want to see two reforms done to Section 230:
    • First, I want editorial control prior to publication to invalidate Section 230 claims. If you can assert control over what's getting put out, then you are engaging in publication, and should not be protected under Section 230.
    • Second, I'm a fan of this proposal, which would add a requirement for operators to engage in good faith to monitor their sites for illegal conduct to maintain Section 230 protection.

    The problem is that the staunch Section 230 defenders basically view any reform as tantamount to repeal of Section 230. From the Ars piece:
    But Goldman isn't convinced.

    "I see the proposal as functionally equivalent to repealing Section 230," he said in a recent email. In a 2019 analysis, he argued that the Citron and Wittes proposal "would make Section 230 litigation far less predictable, and it would require expensive and lengthy factual inquiries into all evidence probative of the reasonableness of defendant’s behavior." Fear of litigation could cause some online providers to take down lawful speech proactively, harming online freedom of speech.

    First off, the whole point of the exercise is to make Section 230 "less predictable", because the reason it's so predictable is that Section 230 has been made into wide-ranging blanket indemnity for online service providers. If you roll back the blanket, it's going to change what does and doesn't get indemnified. As for the test of reasonableness, we have a situation right now where someone can set up a website geared towards the publication of nonconsensual pornography and do so completely in the legal free and clear as long as they follow two very simple rules. It's also telling that they're concerned about the "chilling effect" of litigation, while not giving a damn about how abuse of the marginalized and dispossessed is currently chilling free speech.
    Phoenix-D wrote: »
    Facebook could claim 230 immunized them even in a perfect world. Because people make bad legal arguments all the time :P In reality, Facebook settled those suits after agreeing to changes in their policy. Hardly the stunning problem Hedgie portrays it as here.

    I read this more as a sign that the tide is turning. A lot of the current scope of Section 230 is based on case law, and it seems that Facebook would rather not roll the dice, lest the courts come back and say that no, Section 230 doesn't mean they get to give landlords the tools to violate the FHA without recourse for doing so.

  • Foefaller Registered User regular
    On your first point, keep in mind that the case law prior to Section 230 held that *any* sort of moderation qualified the site as a publisher.

    Which probably means someone could successfully argue that a profanity filter that kicks in before anyone can see the post would count as editing that occurred prior to "publication."
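
    To make that concrete, here's a rough sketch of what such a pre-publication filter looks like (purely illustrative - the word list and function names are made up, not any real forum's code). The key detail is that the operator's software alters the post before anyone else can see it:

        import re

        # Illustrative only: an automated profanity filter that runs *before*
        # a post becomes visible to anyone else. Word list and names are invented.
        BANNED_WORDS = {"swear1", "swear2"}
        PATTERN = re.compile(
            r"\b(" + "|".join(map(re.escape, BANNED_WORDS)) + r")\b",
            re.IGNORECASE,
        )

        def filter_profanity(text: str) -> str:
            """Replace banned words with asterisks."""
            return PATTERN.sub(lambda m: "*" * len(m.group(0)), text)

        def publish_post(raw_text: str, visible_thread: list) -> None:
            """The site edits the content prior to 'publication' - the crux of the argument."""
            visible_thread.append(filter_profanity(raw_text))

        if __name__ == "__main__":
            thread = []
            publish_post("this swear1 gets filtered before anyone sees it", thread)
            print(thread[0])  # -> "this ****** gets filtered before anyone sees it"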

  • AngelHedgie Registered User regular
    Foefaller wrote: »
    On your first point, keep in mind the case law prior to Section 230 states that *any* sort of moderating qualified the site as a publisher.

    Which probably means someone could could successfully argue a profanity filter that kicks in before anyone can see the post would count as editing that occured prior to "publication."

    Which is why I would expect that the changes would be carefully written to control the scope of the excision. The argument of "what if bad actor A misuses the regulation, better to not regulate at all" is a libertarian bad penny that needs to be knocked down when it pops up.

    The argument that Section 230 is effective because it's "simple" misses the point - it's simple because it indemnifies a wide scope of activity (a scope, mind you, that wasn't conceived of when the law was first written), and that's resulted in the indemnification of activities that we don't want to indemnify.

  • Phoenix-D Registered User regular
    The linked essay isn't really persuasive to me.

    The first example it gives isn't illegal. In the slightest. So right away we're into "think of the children" bullshit. It's just a site that connects people with random strangers to talk to. That is *not* something that needs regulating. The second is mostly fine. The third is just...what, no. Twitter and Facebook aren't police and shouldn't be expected to be so.

    re your Ars comments: chilling effects work both ways. I don't imagine Twitter would be hosting too many posts critical of TERFs without Section 230, or with the proposed changes mentioned there. And lack of predictability is a bad thing. If you want to loosen 230, that's one thing, but loosening it in such a way that the only people able to take advantage of it are rich corporations is a problem.

  • AngelHedgie Registered User regular
    Phoenix-D wrote: »
    The linked essay isn't really persuasive to me.

    The first example it gives isn't illegal. In the slightest. So right away we're into "think of the children" bullshit. It's just a site that connects people with random strangers to talk to. That is *not* something that needs regulating. The second is mostly fine. The third is just...what, no. Twitter and Facebook aren't police and shouldn't be expected to be so.

    re your Ars comments: chilling effects work both ways. I don't imagine Twitter would be hosting too many posts critical of TERFs without section 230, or with the proposed changes mentioned there. And lack of predictibility is a bad thing. If you want to loosen 230, that's one thing, but loosening it in such a way that the only people able to take advantage of it are rich corporations is a problem.

    With regards to Omegle, the point isn't about illegality, but about duty of care. As the author points out, if you were to create a similar service in real life, you would not be able to just get away with a statement of "use at your own risk" - you would be obliged to perform some sort of vetting. And the courts are starting to come around to this online as well - the piece later discusses the Model Mayhem lawsuit, where the site's owners claimed Section 230 protection after being sued by a user who had been raped by predators targeting models on the site - predators whom the owners knew about from disclosures made when they acquired the site - for failing to warn users about them. The courts rejected the claim, pointing out that the law in question had nothing to do with user content, but with the obligation to warn users.

    As for social media and terrorist groups, nobody is asking them to be police, just as nobody is asking Visa or Mastercard to be police by telling them not to process transactions for terrorist groups. And when this gets discussed later in the piece, the author points out that as long as the site can show a good faith effort in taking down accounts made by terrorist groups, then they should remain indemnified - it's only when they are actively turning a blind eye that indemnity should no longer apply.

    As for chilling effects - how would these changes give a TERF legal cause? A lot of the arguments about how altering Section 230 would have a chilling effect seem to be built on the logic that without blanket indemnity, people would be afraid to invest in online service platforms, which to me feels overly reminiscent of the arguments made for tort reform.

  • spool32 Contrary Library Registered User regular
    Foefaller wrote: »
    On your first point, keep in mind the case law prior to Section 230 states that *any* sort of moderating qualified the site as a publisher.

    Which probably means someone could could successfully argue a profanity filter that kicks in before anyone can see the post would count as editing that occured prior to "publication."

    Which is why I would expect that the changes would be carefully written to control the scope of the excision. The argument of "what if bad actor A misuses the regulation, better to not regulate at all" is a libertarian bad penny that needs to be knocked down when it pops up.

    The argument that Section 230 is effective because it's "simple" belies the point - it's simple because it indemnifies a wide scope of activity (a scope, mind you, that wasn't conceived of when the law was first written), and that's resulted in the indemnification of activities that we don't want to.

    I think it's very pollyanna-ish to think we can craft a set of regs that won't reshape the internet in negative ways. How in the world would you define what pre-posting (I'm not saying pre-publication because that's leading the witness) editorial control looks like without causing platforms to just give up and go unmoderated?

  • AngelHedgie Registered User regular
    spool32 wrote: »
    Foefaller wrote: »
    On your first point, keep in mind the case law prior to Section 230 states that *any* sort of moderating qualified the site as a publisher.

    Which probably means someone could could successfully argue a profanity filter that kicks in before anyone can see the post would count as editing that occured prior to "publication."

    Which is why I would expect that the changes would be carefully written to control the scope of the excision. The argument of "what if bad actor A misuses the regulation, better to not regulate at all" is a libertarian bad penny that needs to be knocked down when it pops up.

    The argument that Section 230 is effective because it's "simple" belies the point - it's simple because it indemnifies a wide scope of activity (a scope, mind you, that wasn't conceived of when the law was first written), and that's resulted in the indemnification of activities that we don't want to.

    I think it's very pollyanna-ish to think we can craft a set of regs that won't reshape the internet in negative ways. How in the world would you define what pre-posting (I'm not saying pre-publication because that's leading the witness) editorial control looks like without causing platforms to just give up and go unmoderated?

    Batzel is a great example - you had a moderated listserv where the operator was receiving tips on stolen artwork, who would then select which ones would be published. There was clear editorial control there, and yet the courts indemnified the operator.

    Also, when talking about the internet being affected in negative ways, I have to ask - for whom? There are a lot of people who are getting a raw deal from the internet as it is, and asking them to accept that in service to a nebulous greater good strikes me as cheerleading for Omelas. (This is also one of the big problems with free speech "absolutism".)

  • Foefaller Registered User regular
    edited June 2020
    The chilling effect that seems most likely to me would be raising the bar for participation, particularly for content creators who are trying to make a living on YouTube et al.

    With services where some sort of pre-release editing is effectively unavoidable (dating sites seem like an obvious one), the platform is going to want extra $$ from somewhere to cover whatever falls through the cracks, as well as frivolous suits over content they host but didn't edit (like chatrooms) and therefore still have indemnity for. And the most obvious place to get it would be raising or adding a cost for creating a profile.

    And because nothing occurs in a vacuum:

    https://youtu.be/iVlaEstFkhA

    TL;DR: The government study on the DMCA has been finished and came to the conclusion that, in regards to copyright holders vs platforms like YouTube, the holders have been hosed, and Congress should revisit the law and consider new or additional criteria for platforms to keep their safe harbor provisions. Though it doesn't suggest how, one of the things that popped up was how automated curation all too often lets infringing content show up anyway.

    So there is a non-zero chance that new regulations to prevent copyright infringement would require platforms to perform the pre-public editing that would, under AngelHedgie's suggestion, now open them up to harassment and libel/slander lawsuits. Again, I don't think it would be the end of YouTube, but it would probably be the end of creators like Leonard French being able to upload videos for public viewing without having to pay YouTube for the privilege.

  • Polaritie Sleepy Registered User regular
    Copyright holders get hosed? Pretty sure there's tons of horror stories out there about Youtube blocking or monetizing things based on utterly baseless claims. This is the law where the courts decided "bad faith" basically required literal mustache twirling, isn't it?

  • AngelHedgie Registered User regular
    Polaritie wrote: »
    Copyright holders get hosed? Pretty sure there's tons of horror stories out there about Youtube blocking or monetizing things based on utterly baseless claims. This is the law where the courts decided "bad faith" basically required literal mustache twirling, isn't it?

    Tube's also discussed his personal experiences with how legitimate copyright holders get the runaround from YouTube when trying to assert their claims. It's a shitshow all around.

  • Paladin Registered User regular
    Mmm, I think this thing is going to be a third rail for a while with everybody needing to do stuff online a lot due to social distancing. What are the short term effects of addressing section 230? Will it require a lot of legal stuff taking place?

  • Polaritie Sleepy Registered User regular
    Polaritie wrote: »
    Copyright holders get hosed? Pretty sure there's tons of horror stories out there about Youtube blocking or monetizing things based on utterly baseless claims. This is the law where the courts decided "bad faith" basically required literal mustache twirling, isn't it?

    Tube's also discussed his personal experiences with how legitimate copyright holders get the runaround from YouTube when trying to assert legitimate copyright claims. It's a shitshow all around.

    This sounds like it may be more of a YouTube problem, then, to be honest. Because the DMCA is already ridiculously biased towards copyright holders.

  • spool32 Contrary Library Registered User regular
    edited June 2020
    spool32 wrote: »
    Foefaller wrote: »
    On your first point, keep in mind the case law prior to Section 230 states that *any* sort of moderating qualified the site as a publisher.

    Which probably means someone could could successfully argue a profanity filter that kicks in before anyone can see the post would count as editing that occured prior to "publication."

    Which is why I would expect that the changes would be carefully written to control the scope of the excision. The argument of "what if bad actor A misuses the regulation, better to not regulate at all" is a libertarian bad penny that needs to be knocked down when it pops up.

    The argument that Section 230 is effective because it's "simple" belies the point - it's simple because it indemnifies a wide scope of activity (a scope, mind you, that wasn't conceived of when the law was first written), and that's resulted in the indemnification of activities that we don't want to.

    I think it's very pollyanna-ish to think we can craft a set of regs that won't reshape the internet in negative ways. How in the world would you define what pre-posting (I'm not saying pre-publication because that's leading the witness) editorial control looks like without causing platforms to just give up and go unmoderated?

    Batzel is a great example - you had a moderated listserv where the operator was receiving tips on stolen artwork, who would then select which ones would be published. There was clear editorial control there, and yet the courts indemnified the operator.

    Also, when talking about the internet being affected in negative ways, I have to ask - for who? There are a lot of people who are getting the raw end of the deal with the internet as is, and asking them to accept that in service to a nebulous greater good strikes me as being cheerleading for Omleas. (This is also one of the big problems with free speech "absolutism".)

    Sure, if you grant that the harm from offensive speech outweighs the harm from government restriction on speech, of course you're going to be fine with laws that restrict lots of speech in order to stop the offensive bits. That's part of what makes the argument for change sound easy - you don't think the collateral damage is important or maybe even collateral. I don't grant that premise though, so my answer is 'for everyone'.

    You don't need to modify Section 230 to stop nonconsensual porn on the internet. Firstly because you literally cannot stop that or any other detestable thing being on the internet. Secondly because to the extent you can partially succeed, making the content illegal does the trick.

  • AngelHedgie Registered User regular
    spool32 wrote: »
    spool32 wrote: »
    Foefaller wrote: »
    On your first point, keep in mind the case law prior to Section 230 states that *any* sort of moderating qualified the site as a publisher.

    Which probably means someone could could successfully argue a profanity filter that kicks in before anyone can see the post would count as editing that occured prior to "publication."

    Which is why I would expect that the changes would be carefully written to control the scope of the excision. The argument of "what if bad actor A misuses the regulation, better to not regulate at all" is a libertarian bad penny that needs to be knocked down when it pops up.

    The argument that Section 230 is effective because it's "simple" belies the point - it's simple because it indemnifies a wide scope of activity (a scope, mind you, that wasn't conceived of when the law was first written), and that's resulted in the indemnification of activities that we don't want to.

    I think it's very pollyanna-ish to think we can craft a set of regs that won't reshape the internet in negative ways. How in the world would you define what pre-posting (I'm not saying pre-publication because that's leading the witness) editorial control looks like without causing platforms to just give up and go unmoderated?

    Batzel is a great example - you had a moderated listserv where the operator was receiving tips on stolen artwork, who would then select which ones would be published. There was clear editorial control there, and yet the courts indemnified the operator.

    Also, when talking about the internet being affected in negative ways, I have to ask - for who? There are a lot of people who are getting the raw end of the deal with the internet as is, and asking them to accept that in service to a nebulous greater good strikes me as being cheerleading for Omleas. (This is also one of the big problems with free speech "absolutism".)

    Sure, if you grant that the harm from offensive speech outweighs the harm from government restriction on speech, of course you're going to be fine with laws that restrict lots of speech in order to stop the offensive bits. That's part of what makes the argument for change sound easy - you don't think the collateral damage is important or maybe even collateral. I don't grant that premise though, so my answer is 'for everyone'.

    You don't need to modify Section 230 to stop nonconsensual porn on the internet. Firstly because you literally cannot stop that or any other detestable thing being on the internet. Secondly because to the extent you can partially succeed, making the content illegal does the trick.

    I find it interesting and telling that you say I don't consider the "collateral damage" of my position while ignoring the harm we see with the status quo, where the marginalized routinely have to choose between participation and safety, and how that chills speech. As I've said elsewhere, things like hate speech, nonconsensual pornography, defamation, etc. are not merely "offensive", and trying to mark them as such is a dodge from addressing the problem.

    As for making nonconsensual pornography illegal, states have been doing that, but it's been an uphill battle - in large part because when laws to make it illegal come up, there's an outcry about how such bills will "criminalize innocuous behavior". I can just imagine the shitshow a proposed federal statute banning/criminalizing nonconsensual pornography would create.

  • Paladin Registered User regular
    How chilled is speech on the internet? Cuz I see a lot of protest stuff on Twitter

  • Foefaller Registered User regular
    Polaritie wrote: »
    Polaritie wrote: »
    Copyright holders get hosed? Pretty sure there's tons of horror stories out there about Youtube blocking or monetizing things based on utterly baseless claims. This is the law where the courts decided "bad faith" basically required literal mustache twirling, isn't it?

    Tube's also discussed his personal experiences with how legitimate copyright holders get the runaround from YouTube when trying to assert legitimate copyright claims. It's a shitshow all around.

    This sounds like it may be more of a YouTube problem then, to be honest. Because the DMCA is already ridiculously biased towards copyright holders.

    Yeah, that's sorta what I got from the reading. Basically, YouTube et al have discovered that the more you automate, the less likely you are to lose your safe harbor (since there isn't a real live person doing the curation), so copyright holders get screwed because the platform is going to resist, for as long as it can, any active steps it could take that would be better at stopping infringement, and fair-use users get screwed because "resisting" means giving many holders their own automated tools that are just asking to be abused.

    *Cue Archer: "This is why we can't have nice things."*

    But in any case, I only brought it up to point out that the legal landscape could shift in a way that makes pre-release/pre-public editing of content considerably more frequent than it is now, which would make the idea of losing indemnity for doing so not as clear-cut a suggestion as it is today.

  • Heffling No Pic Ever Registered User regular
    AngelHedgie wrote: »
    First off, the whole point of the exercise is to make Section 230 "less predictable", because the reason it's so predictable is that Section 230 has been made into wide-ranging blanket indemnity for online service providers. If you roll back the blanket, it's going to change what does and doesn't get indemnified. As for the test of reasonableness, we have the situation right now that someone can set up a website geared towards the publication of nonconsentual pornography and do so completely in the legal free and clear as long as they follow two very simple rules. It's also telling that they're concerned about the "chilling effect" of litigation, while not giving a damn about how abuse of the marginalized and dispossessed is currently chilling free speech.

    Making a law less predictable sounds like a good way to cause a lot of things to get tied up in lawsuits, which favors the side with greater funding, and in general to make the internet worse as enforcement by hosts becomes more inconsistent.

    If the issue is non-consensual pornography, then make hosting/publishing/or otherwise being involved with non-consensual pornography illegal. You don't need to completely redo a framework law to address a specific exception.

  • spool32 Contrary Library Registered User regular
    edited June 2020
    spool32 wrote: »
    spool32 wrote: »
    Foefaller wrote: »
    On your first point, keep in mind the case law prior to Section 230 states that *any* sort of moderating qualified the site as a publisher.

    Which probably means someone could could successfully argue a profanity filter that kicks in before anyone can see the post would count as editing that occured prior to "publication."

    Which is why I would expect that the changes would be carefully written to control the scope of the excision. The argument of "what if bad actor A misuses the regulation, better to not regulate at all" is a libertarian bad penny that needs to be knocked down when it pops up.

    The argument that Section 230 is effective because it's "simple" belies the point - it's simple because it indemnifies a wide scope of activity (a scope, mind you, that wasn't conceived of when the law was first written), and that's resulted in the indemnification of activities that we don't want to.

    I think it's very pollyanna-ish to think we can craft a set of regs that won't reshape the internet in negative ways. How in the world would you define what pre-posting (I'm not saying pre-publication because that's leading the witness) editorial control looks like without causing platforms to just give up and go unmoderated?

    Batzel is a great example - you had a moderated listserv where the operator was receiving tips on stolen artwork, who would then select which ones would be published. There was clear editorial control there, and yet the courts indemnified the operator.

    Also, when talking about the internet being affected in negative ways, I have to ask - for who? There are a lot of people who are getting the raw end of the deal with the internet as is, and asking them to accept that in service to a nebulous greater good strikes me as being cheerleading for Omleas. (This is also one of the big problems with free speech "absolutism".)

    Sure, if you grant that the harm from offensive speech outweighs the harm from government restriction on speech, of course you're going to be fine with laws that restrict lots of speech in order to stop the offensive bits. That's part of what makes the argument for change sound easy - you don't think the collateral damage is important or maybe even collateral. I don't grant that premise though, so my answer is 'for everyone'.

    You don't need to modify Section 230 to stop nonconsensual porn on the internet. Firstly because you literally cannot stop that or any other detestable thing being on the internet. Secondly because to the extent you can partially succeed, making the content illegal does the trick.

    I find it interesting and telling that you say that I don't consider the "collateral damage" of my position while ignoring the harm we see with the status quo, where we see the marginalized routinely having to make the choice of participation or safety, and how that chills speech. As I've said elsewhere, things like hate speech, nonconsentual pornography, defamation, etc. are not "offensive", and trying to mark them as such is a dodge from addressing the problem.

    As for making nonconsentual pornography illegal, states have been doing that, but it's been an uphill battle - in large part because when laws to make it illegal come up, there's an outcry about how such bills will "criminalize innocuous behavior". I can just imagine the shitshow a proposed federal statute banning/criminalizing nonconsentual pornography would create.

    Heh, you find it 'telling'? What does it tell you exactly?

    Anyhow, I don't think you ignore the collateral damage, and neither do I. You think it's outweighed by the harm you believe occurs now, or perhaps that it's actually working as intended rather than being collateral damage, and that's a fair position but not one I agree with. I also find wildly unpersuasive the argument that because it might be hard to criminalize nonconsensual porn, we should instead do an easier thing that still doesn't solve the problem but does create new ones for the whole society.

  • Phoenix-D Registered User regular
    Phoenix-D wrote: »
    The linked essay isn't really persuasive to me.

    The first example it gives isn't illegal. In the slightest. So right away we're into "think of the children" bullshit. It's just a site that connects people with random strangers to talk to. That is *not* something that needs regulating. The second is mostly fine. The third is just...what, no. Twitter and Facebook aren't police and shouldn't be expected to be so.

    re your Ars comments: chilling effects work both ways. I don't imagine Twitter would be hosting too many posts critical of TERFs without section 230, or with the proposed changes mentioned there. And lack of predictibility is a bad thing. If you want to loosen 230, that's one thing, but loosening it in such a way that the only people able to take advantage of it are rich corporations is a problem.

    With regards to Omegle, the point isn't about illegality, but about duty of care. As the author points out, if you were to create a similar service in real life, that service would not be able to just get away with statements of "use at your own risk" - they would be obliged to perform some sort of vetting. And the courts are starting to come around to this online as well - the piece discusses later the Model Mayhem lawsuit, where the owners of the aforementioned website had claimed Section 230 protection when a user of the site who had been raped by predators targeting models on the site - predators whom the owners knew about due to disclosures made when they acquired the site - sued the owners for failing to disclose that information publicly. The courts rejected the claim, pointing out that the specific law had nothing to do with user content, but with obligations to warn users.

    As for social media and terrorist groups, nobody is asking them to be police, just as nobody is asking Visa or Mastercard to be police by telling them not to process transactions for terrorist groups. And when this gets discussed later in the piece, the author points out that as long as the site can show a good faith effort in taking down accounts made by terrorist groups, then they should remain indemnified - it's only when they are actively turning a blind eye that indemnity should no longer apply.

    As for chilling effects - how would these changes give a TERF legal cause? A lot of the arguments about how altering Section 230 would have a chilling effect seem to be built on the logic that without blanket indemnity, people would be afraid to invest in online service platforms, which to me feel overly reminiscent of arguments made for tort reform.

    No, they really wouldn't. A library with a bulletin board for meetups isn't obliged to screen whoever's using it. Do you want Tube to have to do background checks on everyone using this site? Because hey, Penny Arcade DMs can be used exactly the same way. And all the Model Mayhem suit required was what Omegle already did.

    And yes, that's pretty much what you're asking. They already go to lengths to deny access. "Good faith attempts" are instead taken as "you must block 100% instantly", which is ridiculous.

    Chilling effects: easy. JK Rowling threatens to sue pretty much everyone who criticizes her. If she could just go to Twitter and say "You're hosting defamatory posts, you are liable", that's a problem. Instead of having to deal with users individually, she can sue Twitter and tie up the case in expensive litigation for years. Twitter could probably handle that. Smaller service providers could not. And since you've said ambiguity is a good thing, there is no way of knowing whether they would win, which means that even if they do, the platform can't rely on anti-SLAPP or recovery of fees. They're just fucked.

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    Phoenix-D wrote: »
    Grindr went to law enforcement, banned the accounts when they noticed them, specifically checked for this dude's details. Should Tube be liable because we all occasionally get papist PMs?

    You're being kinda vague here despite writing a lot. Because yes, ending 230- which is what that Ars quote is about- would functionally end the internet as we know it. The only site that functions there is a completely umoderated one.

    Yes, but there's something that Grindr could have done that would have had impact as well on this sort of abuse - increase the friction on the creation of new accounts. The problem with this is that would impede growth, and to platforms like Grindr, growth is much more important than curtailing abuse (an attitude that has been endemic in social media.)

    What sort of friction? A 30-day waiting period? Full identity verification (because you definitely want to do that in your hookup app)?

    This is a person who created multiple accounts after they were banned. A small signup friction wouldn't have deterred them.

  • AngelHedgie Registered User regular
    Phyphor wrote: »
    Phoenix-D wrote: »
    Grindr went to law enforcement, banned the accounts when they noticed them, specifically checked for this dude's details. Should Tube be liable because we all occasionally get papist PMs?

    You're being kinda vague here despite writing a lot. Because yes, ending 230- which is what that Ars quote is about- would functionally end the internet as we know it. The only site that functions there is a completely umoderated one.

    Yes, but there's something that Grindr could have done that would have had impact as well on this sort of abuse - increase the friction on the creation of new accounts. The problem with this is that would impede growth, and to platforms like Grindr, growth is much more important than curtailing abuse (an attitude that has been endemic in social media.)

    What sort of friction? 30 day waiting period? Full identity verification (because you definitely want to that in your hookup app)?

    This is a person who created multiple accounts after they were banned. A small signup friction wouldn't have deterred them

    I'd go with a token initiation fee (say, $5) - which, yes, brings its own issues (but again, we're talking about what tradeoffs services make.) By the same token, it also brings another point of verification, with the account used to make the payment being checked against a blacklist.

    Saying that his creating multiple accounts is proof that friction wouldn't have stopped him is arguing facts not in evidence. If what enabled the creation of multiple accounts was the lack of friction in doing so, adding some friction changes the opportunity cost and can make doing so less attractive.
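
    To illustrate what I mean (a hypothetical sketch of my own, not anything Grindr actually does - the fee amount, hashing scheme, and function names are all assumptions), the payment method itself becomes a second identifier that can be checked at signup:

        import hashlib

        # Hypothetical sketch: a token signup fee plus a fingerprint of the payment
        # method used as a second check against prior bans. Not any real platform's code.
        BANNED_PAYMENT_FINGERPRINTS = set()

        def payment_fingerprint(card_number: str) -> str:
            """Hash the payment instrument so it can be compared without being stored."""
            return hashlib.sha256(card_number.encode()).hexdigest()

        def create_account(username: str, card_number: str) -> bool:
            """Charge a small fee and refuse signup if the card is tied to a banned account."""
            if payment_fingerprint(card_number) in BANNED_PAYMENT_FINGERPRINTS:
                return False  # same payment method as a previously banned user
            # charge_fee(card_number, amount=5.00)  # hypothetical: the token $5 fee would go here
            return True

        def ban_account(card_number: str) -> None:
            """On a ban, remember the payment fingerprint so new accounts can't reuse it."""
            BANNED_PAYMENT_FINGERPRINTS.add(payment_fingerprint(card_number))

        if __name__ == "__main__":
            assert create_account("harasser", "4111111111111111")
            ban_account("4111111111111111")
            assert not create_account("harasser_v2", "4111111111111111")  # the added friction at work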

  • AngelHedgie Registered User regular
    Phoenix-D wrote: »
    Phoenix-D wrote: »
    The linked essay isn't really persuasive to me.

    The first example it gives isn't illegal. In the slightest. So right away we're into "think of the children" bullshit. It's just a site that connects people with random strangers to talk to. That is *not* something that needs regulating. The second is mostly fine. The third is just...what, no. Twitter and Facebook aren't police and shouldn't be expected to be so.

    re your Ars comments: chilling effects work both ways. I don't imagine Twitter would be hosting too many posts critical of TERFs without section 230, or with the proposed changes mentioned there. And lack of predictibility is a bad thing. If you want to loosen 230, that's one thing, but loosening it in such a way that the only people able to take advantage of it are rich corporations is a problem.

    With regards to Omegle, the point isn't about illegality, but about duty of care. As the author points out, if you were to create a similar service in real life, that service would not be able to just get away with statements of "use at your own risk" - they would be obliged to perform some sort of vetting. And the courts are starting to come around to this online as well - the piece discusses later the Model Mayhem lawsuit, where the owners of the aforementioned website had claimed Section 230 protection when a user of the site who had been raped by predators targeting models on the site - predators whom the owners knew about due to disclosures made when they acquired the site - sued the owners for failing to disclose that information publicly. The courts rejected the claim, pointing out that the specific law had nothing to do with user content, but with obligations to warn users.

    As for social media and terrorist groups, nobody is asking them to be police, just as nobody is asking Visa or Mastercard to be police by telling them not to process transactions for terrorist groups. And when this gets discussed later in the piece, the author points out that as long as the site can show a good faith effort in taking down accounts made by terrorist groups, then they should remain indemnified - it's only when they are actively turning a blind eye that indemnity should no longer apply.

    As for chilling effects - how would these changes give a TERF legal cause? A lot of the arguments about how altering Section 230 would have a chilling effect seem to be built on the logic that without blanket indemnity, people would be afraid to invest in online service platforms, which to me feel overly reminiscent of arguments made for tort reform.

    No, they really wouldn't. A library with a bulletin board for meetups isn't obliged to screen whoever's using it. Do you want Tube to have to do background checks on everyone using this site? Because hey, Penny Arcade DMs can be used exactly the same way. And all the Model Mayem suit required was what Omegl already did.

    And yes, that's pretty much what you're asking. They already go to lengths to deny access. "Good faith attempts" are instead taken as "you must block 100% instantly" which is ridiculous

    Chill effects: Easy. JK Rowling threatens to sue pretty much everyone who criticizes her. If she could just go to Twitter and say "You're hosting defamatory posts, you are liable" this is a problem. Instead of having to deal with uses individually, she can sue Twitter and tie up the case in expensive litigation for years. Twitter could probably handle that. Smaller service providers could not. And since you've said ambiguity is a good thing, there is no way of knowing whether they would win, which means even if they do the platform could not use anti-SLAPP or recovery of fees. They're just fucked.

    The author of the piece I linked outright says that Twitter's current approach should be indemnified:
    By contrast, Twitter likely would enjoy immunity from liability for the delayed removal of ISIL accounts. Depending on the circumstance, the failure to remove specific ISIL accounts might be understood as negligent or reckless conduct falling within the safe harbor immunity. Given the scale of Twitter’s user base (in the hundreds of millions), Twitter should be immunized from liability for failing to remove accounts about which it had not been notified or for removing accounts after a normal review process. The platform is currently engaged in good-faith screening efforts.

    Nobody is asking Tube to do background checks - all that's being asked is that he not turn a blind eye to criminal conduct. The idea that good faith means perfect exclusion is a strawman of your own making that has no basis, and is in fact contradicted by the arguments actually being made.

    The point with the Model Mayhem case was that Internet Brands learned about the specific individuals who raped the plaintiff as part of the disclosure process when they acquired the site, and did nothing to warn users about those specific individuals - hence the lawsuit. Again, the author points out that it's this specific situation in which Omegle should not have protection:
    Imagine that the site is given credible information about a specific sexual predator using its services and decides to do nothing about it. The family of a child exploited by that predator should be able to sue the site for knowingly enabling criminal activity. Even if the site knows that predators are using its services and takes no meaningful action to stop that, it should not be categorically immune from suit related to the decision to make its service available to predators. There is no particular reason, even under current law, to treat the decision to allow predators access to children as the act of a “publisher” or “speaker.” And it certainly isn’t the act of a Good Samaritan.

    As for the chilling effects example, that's not how it would work. Just saying that Twitter or these forums are hosting defamatory material would not be (and should not be) grounds for revoking Section 230 protection - instead, the plaintiff would have to show that the operator acts in bad faith and turns a blind eye to defamatory postings even when shown that they are in fact defamatory. And the defense would still have the right to file anti-SLAPP countersuits where available.

    And no, I didn't say that ambiguity was a good thing - I pointed out that the current state of predictability is based on the courts treating Section 230 as wide-ranging blanket indemnity, and thus rolling back that blanket is necessarily going to introduce some ambiguity initially as the courts figure out where the law stands. But that shouldn't be an argument against doing so - we should not continue indemnifying harmful behavior just because rolling back that protection will require rethinking what is and isn't protected. In fact, I'd argue that we're beginning to see such a realignment - as you noted, Facebook chose to settle the FHA cases rather than go to the courts, which suggests that they're no longer confident in being able to successfully argue a Section 230 defense and would rather not chance it.

    Finally, let's discuss the elephant in the room - the cultural demonization of tort law. We in the US live in a culture that views tort law as illegitimate in many ways - the popular distortion of the Stella Liebeck case is testament to this. That's not to say that there aren't genuine problems with individuals abusing tort law, which is why we need things like stronger anti-SLAPP statutes - but at the same time, the treatment of tort law as purely the tool of unscrupulous individuals is just as problematic.

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    edited June 2020
    Phyphor wrote: »
    Phoenix-D wrote: »
    Grindr went to law enforcement, banned the accounts when they noticed them, specifically checked for this dude's details. Should Tube be liable because we all occasionally get papist PMs?

    You're being kinda vague here despite writing a lot. Because yes, ending 230 - which is what that Ars quote is about - would functionally end the internet as we know it. The only site that functions there is a completely unmoderated one.

    Yes, but there's something that Grindr could have done that would have had impact as well on this sort of abuse - increase the friction on the creation of new accounts. The problem with this is that would impede growth, and to platforms like Grindr, growth is much more important than curtailing abuse (an attitude that has been endemic in social media.)

    What sort of friction? A 30 day waiting period? Full identity verification (because you definitely want to do that in your hookup app)?

    This is a person who created multiple accounts after they were banned. A small signup friction wouldn't have deterred them

    I'd go with a token initiation fee (say, $5) - which, yes, brings its own issues (but again, we're talking about what tradeoffs services make.) But by the same token, it also brings another point of verification, with the account used to make the payment being checked against a blacklist.

    Saying that his creating multiple accounts is proof that friction wouldn't have stopped him is arguing facts not in evidence. If what enabled the creation of multiple accounts was the lack of friction in doing so, adding some friction changes the opportunity cost and can make doing so less attractive.
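    (Mechanically, the blacklist check being described here is simple - the following is a minimal, purely hypothetical sketch, with invented helper names and no real payment-processor API, of remembering the payment method behind a banned account and refusing new signups that reuse it:)

```python
# Hypothetical sketch of the "token fee as a second verification point" idea.
# Nothing here corresponds to a real Grindr (or any platform) API; the card
# token would come from whatever payment processor the service actually uses.
import hashlib

banned_fingerprints: set[str] = set()  # populated whenever an account is banned

def fingerprint(card_token: str) -> str:
    """Reduce a payment token to a stable, non-reversible identifier."""
    return hashlib.sha256(card_token.encode("utf-8")).hexdigest()

def ban_account(card_token: str) -> None:
    """When an account is banned, remember the payment method it used."""
    banned_fingerprints.add(fingerprint(card_token))

def can_sign_up(card_token: str) -> bool:
    """Reject a new signup whose payment method matches a banned account."""
    return fingerprint(card_token) not in banned_fingerprints
```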

    Aren't you usually the one bringing up the fact that lots of people don't have access to credit cards and online payment methods and such? I distinctly remember you coming down on that side before. Should the poor (who are more likely to be minorities) not be able to get access to the hookup app, or Twitter, or PA, or whatever? You also wouldn't be able to accept anything like a prepaid or virtual card, because those could be anybody - and this is supposed to be equivalent to identity verification.

    Never mind that for Grindr specifically, tying a real identity to it on a credit card statement can be an issue, given its very specific purpose.

    Phyphor on
  • Options
    EncEnc A Fool with Compassion Pronouns: He, Him, HisRegistered User regular
    Paying for participation as a means to verify identity sure is two terrible ideas stapled together.

  • Options
    shrykeshryke Member of the Beast Registered User regular
    edited June 2020
    Foefaller wrote: »
    On your first point, keep in mind that the case law prior to Section 230 held that *any* sort of moderation qualified the site as a publisher.

    Which probably means someone could successfully argue that a profanity filter that kicks in before anyone can see the post would count as editing that occurred prior to "publication."

    This is a lot of what the law was about too. Because this was the 90s. So what Section 230 was actually designed to do was let websites at least try to remove porn and swearing from their sites without making them legally liable for porn and swearing on their sites.

    Our concerns about content on the internet have shifted fairly dramatically though and I think it's fair to say that currently things like political content are the thing everyone is now thinking about.

    shryke on
  • Options
    AngelHedgieAngelHedgie Registered User regular
    The Senate is currently holding a hearing over Section 230 with the usual suspects (Dorsey, Zuckerberg, Pichai, etc.) Since it's the GOP running things, it's a thinly veiled attempt to attack Silicon Valley for not being sufficiently obsequious to conservatives (Cruz's "who elected you" rant being a prime example.)

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Options
    AngelHedgieAngelHedgie Registered User regular
    edited February 2023
    So, with the oral arguments for Gonzalez v. Google coming up next week, we're starting to see "OMG The Internet Is Going To Be Destroyed" commentary like this:

    https://youtu.be/hzNo5lZCq5M

    My feeling on this is that we're hitting a point that's been a long time coming, as Section 230 defenders have asserted that nothing is wrong with the law while ignoring the various ways the law is failing. At the heart of the lawsuit is a simple question - should Google's algorithmic curation be viewed as purely mechanical, or is the company placing an editorial thumb on the scale (and thus potentially falling outside the ever-broadening scope of Section 230, which has been extended to cover things like traditional publishing with Batzel)? As we've discussed, a lot of modern algorithmic curation by social media appears to be driven not by user preference, but by the platform seeking to push "engagement" - hence why, if you watch a breadtube video, you'll often get recommended right wing videos The Algorithm sees as potentially "engaging" you (by basically waving a red flag before your eyes.)

    (I view Devon's argument here as particularly disingenuous, given that he is a part owner of Nebula, which was created (as he has pointed out several times in videos) by content creators in response to...YouTube editorial control in the form of algorithmic demonetization of their videos.)

    Edit: In the larger scope of things, the argument put forth here strikes me as having the same sort of flaws that we've seen with free speech "absolutism" - namely, the argument that we are obliged to put up with abuse in the name of the "greater good". Nobody thinks that Stratton Oakmont was a good ruling, and there needed to be a correction. But as the existence of an entire legalized extortion industry built on Section 230 shows, the pendulum has swung the other way, and the premise underpinning the law - that an online service and its users should be considered intrinsically divorced - doesn't really hold up.

    AngelHedgie on
    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Options
    Phoenix-DPhoenix-D Registered User regular
    You'd think after your previous post on the thread you could see the problem there, Hedgie.

    SCOTUS isn't going to rewrite the law to do whatever fantasy land stuff you want. They'd most likely overturn it entirely if they don't pull a weird "230 is overturned but only for deleting posts by conservatives" calvinball bullshit.

    And no, before you go there, none of this is the "fault" of "maximalists" or something like that. 99% of the arguments against 230 are entirely false, so it wouldn't matter what was being said by the other side.

    For example

    There are no editorial exceptions in 230.

  • Options
    HamHamJHamHamJ Registered User regular
    edited February 2023
    Nothing seems to change the basic facts as laid out in that video: liability will result in either shutting down something like YouTube completely, or pre-screening every upload, at which point you actually just have Netflix, not YouTube. So either way, YouTube cannot exist without Section 230.

    I also like his point at the end that, for all the complaining about algorithms, the modern internet would be basically unusable without them. And I find a lot of the asks don't really consider practical reality. Like, how is an algorithm supposed to tell the difference between you engaging with a video in a positive vs negative way? If you don't want to keep being recommended videos you don't like, maybe stop hate watching them, I don't know.

    The one real idea for improvement I have is for personal hardware capability to increase, or the processing requirements to decrease, to the point where you can basically have a recommendation algorithm this powerful running locally and configure it in whatever way you want.
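    (As a purely illustrative sketch of what "configure it in whatever way you want" could mean - with invented feature names, nothing resembling a real recommender - here's local re-ranking where the user, not the platform, sets the weights:)

```python
# Illustrative sketch of a locally run, user-configurable recommender.
# The feature names and weights are made up for the example; a real system
# would score far richer signals (embeddings, watch history, and so on).
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    features: dict[str, float]  # e.g. {"matches_subscriptions": 1.0, "rage_bait": 0.8}

def rank(videos: list[Video], weights: dict[str, float]) -> list[Video]:
    """Sort candidates by a weighted sum that the user controls, not the platform."""
    def score(v: Video) -> float:
        return sum(weights.get(name, 0.0) * value for name, value in v.features.items())
    return sorted(videos, key=score, reverse=True)

# A user who never wants engagement bait can simply weight it negatively:
my_weights = {"matches_subscriptions": 2.0, "similar_to_liked": 1.0, "rage_bait": -5.0}
```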

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Options
    AngelHedgieAngelHedgie Registered User regular
    Phoenix-D wrote: »
    You'd think after your previous post on the thread you could see the problem there, Hedgie.

    SCOTUS isn't going to rewrite the law to do whatever fantasy land stuff you want. They'd most likely overturn it entirely if they don't pull a weird "230 is overturned but only for deleting posts by conservatives" calvinball bullshit.

    And no, before you go there, none of this is the "fault" of "maximalists" or something like that. 99% of the arguments against 230 are entirely false, so it wouldn't matter what was being said by the other side.

    For example

    There are no editorial exceptions in 230.

    This is an argument for blanket indemnification of any and all conduct by an online service provider as long as they can tie said conduct to a user. Which is not just ridiculous, but the reason that an entire legalized extortion industry exists online. Beyond that, the "editorial exemption" you bring up is one of two things:

    * The idea, long enshrined in law, that having editorial control prior to publication necessarily means that you fundamentally take ownership of the content. This is why newspapers don't publish defamatory letters to the editor - they'd be liable for them if they did - and why Batzel is such a horrible, tech-ignorant ruling.
    * The point that algorithmic preference and selection is an action taken by the platform, orthogonal to user content, and thus should be taken as the voice of the platform and not the user. (This is the argument at the heart of Gonzalez.)

    And yes, I don't trust the Supreme Court. But I also don't trust a lot of Section 230 defenders who have attacked the idea that a law written 25 years ago might not have kept up with the evolution of the Internet. Not to mention things like pointing to the federal carveout that keeps you from claiming Section 230 over CSAM as proof that the law isn't blanket indemnification - and then turning around and attacking attempts at creating a federal online harassment statute as "chilling free speech" because it would open up liability for service providers.

    (Then there's the whole treatment of liability as horrible, which is a much larger topic that gets into the idea that the tech industry opposes being ruled.)

    But finally, let me ask you this - should Patrick Tomlinson and his family be the price for the Internet? Because after dealing with an organized campaign of abuse and threats, he tried to handle it the way that many Section 230 advocates said he should - and in response, the courts not only told him that Section 230 meant he wasn't entitled to legal relief, but that he owed the legal fees of the person enabling that abuse.

    That - along with other results causing genuine harm to people - is why I keep pointing out that Section 230 in its current form is fucked, and saying "there are no editorial exemptions" isn't actually an answer to that.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Options
    DoodmannDoodmann Registered User regular

    HamHamJ wrote: »
    Nothing seems to change the basic facts as laid out in that video: liability will result in either shutting down something like YouTube completely, or pre-screening every upload, at which point you actually just have Netflix, not YouTube. So either way, YouTube cannot exist without Section 230.

    I also like his point at the end that, for all the complaining about algorithms, the modern internet would be basically unusable without them. And I find a lot of the asks don't really consider practical reality. Like, how is an algorithm supposed to tell the difference between you engaging with a video in a positive vs negative way? If you don't want to keep being recommended videos you don't like, maybe stop hate watching them, I don't know.

    The one real idea for improvement I have is for personal hardware capability to increase, or the processing requirements to decrease, to the point where you can basically have a recommendation algorithm this powerful running locally and configure it in whatever way you want.

    I disagree. Make the algorithm opt-in, like a Google search, and we wouldn't have any kind of 230 incongruity.

    Also, the second bold is rich - we have an ongoing discussion on this forum about trying to beat the algorithm into submission, only to have the Jordan Peterson floodgates open because you accidentally watched one tertiarily related video.

    Whippy wrote: »
    nope nope nope nope abort abort talk about anime
    I like to ART
  • Options
    AngelHedgieAngelHedgie Registered User regular
    Doodmann wrote: »
    HamHamJ wrote: »
    Nothing seems to change the basic facts as laid out in that video: liability will result in either shutting down something like YouTube completely, or pre-screening every upload, at which point you actually just have Netflix, not YouTube. So either way, YouTube cannot exist without Section 230.

    I also like his point at the end that, for all the complaining about algorithms, the modern internet would be basically unusable without them. And I find a lot of the asks don't really consider practical reality. Like, how is an algorithm supposed to tell the difference between you engaging with a video in a positive vs negative way? If you don't want to keep being recommended videos you don't like, maybe stop hate watching them, I don't know.

    The one real idea for improvement I have is for personal hardware capability to increase, or the processing requirements to decrease, to the point where you can basically have a recommendation algorithm this powerful running locally and configure it in whatever way you want.

    I disagree. Make the algorithm opt-in, like a Google search, and we wouldn't have any kind of 230 incongruity.

    Also, the second bold is rich - we have an ongoing discussion on this forum about trying to beat the algorithm into submission, only to have the Jordan Peterson floodgates open because you accidentally watched one tertiarily related video.

    Again, the problem isn't algorithms in and of themselves - it's that algorithms often contain biases that influence their operation, and as such the platforms that deploy them should be held accountable for those biases. A large part of why we're in Algorithmic Hell currently is because we don't hold those platforms accountable for their behavior.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Options
    spool32spool32 Contrary Library Registered User regular
    edited February 2023
    The core issue here is that there must be a way for Penny Arcade to run a forum without being legally responsible for my bullshit in particular, or yours. There must be a way for platforms to provide an avenue for content availability while simultaneously retaining a) the ability to moderate behavior they don't like, according to whatever standard they set, and b) the shield against being sued into oblivion when they either fail to notice or fail to act on that standard.

    Removal of the shield exposes every modern platform to legal action. It's either the actual real-ass wild west again, or it's going to shut everybody down, or both, honestly. If Section 230 goes and nothing takes its place, we're in for a long decline into fragmentation and broken usability.

    spool32 on