
[Social Media]: The Intersection Of Money, Policy, And Hate


Posts

  • Phoenix-D Registered User regular
    jmcdonald wrote: »
    TryCatcher wrote: »


    Honestly, a good question is what resources countries have against social media manipulation. It's the same reason why Merkel and a good chunk of the EU have come out against Trump's ban: it's not about Trump per se, but about the likes of the Zucc having more power than a national government, and the EU has been spending a lot of effort on, at the least, cutting the power of social media companies over EU citizens.

    "But they are private companies and blah blah". Don't care.

    The POTUS account still exists

    He can't actually use it. Twitter nukes any post made.

  • AngelHedgie Registered User regular
    TryCatcher wrote: »


    Honestly, a good question is what resources countries have against social media manipulation. It's the same reason why Merkel and a good chunk of the EU have come out against Trump's ban: it's not about Trump per se, but about the likes of the Zucc having more power than a national government, and the EU has been spending a lot of effort on, at the least, cutting the power of social media companies over EU citizens.

    "But they are private companies and blah blah". Don't care.

    The "freedom of opinion" argument is a tired old chestnut - yes, you have the right to your opinion, but so does everyone else, and part of their right is to say "I no longer wish to associate with you." I'm also done with the "hate speech is the price of free speech" argument (which is the argument all these leaders are implicitly making by opposing the ban without getting into the details of why Trump was banned.)

    The argument that Zuckerberg has more power than a government is belied by his behavior regarding Elizabeth Warren - if he truly had more power than the government, the idea of her gaining regulatory authority wouldn't fill him with the pants-shitting terror that we've seen.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Phoenix-D Registered User regular
    Phoenix-D wrote: »
    If you think social media is big enough that getting banned is an issue, there's a perfectly cromulent remedy that already exists: antitrust.

    Also, if you only speak up when the fascist is being deplatformed, cry me a river. Twitter bans people all the time. Either they are allowed to or they aren't. No special treatment just because he's president.

    Also, come to think of it, Germany has laws that require people to be blocked from Twitter there.

  • jmcdonald I voted, did you? DC(ish) Registered User regular
    Phoenix-D wrote: »
    jmcdonald wrote: »
    TryCatcher wrote: »


    Honestly, a good question is what resources countries have against social media manipulation. It's the same reason why Merkel and a good chunk of the EU have come out against Trump's ban: it's not about Trump per se, but about the likes of the Zucc having more power than a national government, and the EU has been spending a lot of effort on, at the least, cutting the power of social media companies over EU citizens.

    "But they are private companies and blah blah". Don't care.

    The POTUS account still exists

    He can't actually use it. Twitter nukes any post made.

    Maybe he shouldn’t be inciting violence?

    I have no issue with moderation on clear calls to violence regardless of who is posting it.

  • Polaritie Sleepy Registered User regular
    TryCatcher wrote: »


    Honestly, a good question is what resources countries have against social media manipulation. It's the same reason why Merkel and a good chunk of the EU have come out against Trump's ban: it's not about Trump per se, but about the likes of the Zucc having more power than a national government, and the EU has been spending a lot of effort on, at the least, cutting the power of social media companies over EU citizens.

    "But they are private companies and blah blah". Don't care.

    The "freedom of opinion" argument is a tired old chestnut - yes, you have the right to your opinion, but so does everyone else, and part of their right is to say "I no longer wish to associate with you." I'm also done with the "hate speech is the price of free speech" argument (which is the argument all these leaders are implicitly making by opposing the ban without getting into the details of why Trump was banned.)

    The argument that Zuckerberg has more power than a government is belied by his behavior regarding Elizabeth Warren - if he truly had more power than the government, the idea of her gaining regulatory authority wouldn't fill him with the pants-shitting terror that we've seen.

    Twitter is in a position to exercise an entirely unhealthy amount of soft power against the government, but there is little it can do if the government decides to take direct action, either via legislation or the courts.

    Steam: Polaritie
    3DS: 0473-8507-2652
    Switch: SW-5185-4991-5118
    PSN: AbEntropy
  • TryCatcher Registered User regular
    Phoenix-D wrote: »
    Phoenix-D wrote: »
    If you think social media is big enough that getting banned is an issue, there's a perfectly cromulent remedy that already exists: antitrust.

    Also, if you only speak up when the fascist is being deplatformed, cry me a river. Twitter bans people all the time. Either they are allowed to or they aren't. No special treatment just because he's president.

    Also, come to think of it, Germany has laws that require people to be blocked from Twitter there.

    That's the point that Merkel is making. It's the government that should be dictating terms to social media, not the other way around.

  • AngelHedgie Registered User regular
    TryCatcher wrote: »
    Phoenix-D wrote: »
    Phoenix-D wrote: »
    If you think social media is big enough that getting banned is an issue, there's a perfectly cromulent remedy that already exists: antitrust.

    Also, if you only speak up when the fascist is being deplatformed, cry me a river. Twitter bans people all the time. Either they are allowed to or they aren't. No special treatment just because he's president.

    Also, come to think of it, Germany has laws that require people to be blocked from Twitter there.

    That's the point that Merkel is making. It's the government that should be dictating terms to social media, not the other way around.

    Then the government needs to step the fuck in way before we get to a crisis point. But the bigger point is that this is part of a cycle we have seen over and over: online services turn a blind eye to misconduct, that misconduct leads to a flashpoint event, outcry from the event forces the service's hand and they institute reforms, and then backlash comes from the usual suspects about how said reforms are a "threat to free speech," without a moment's reflection on how the service had enabled harm.

  • reVerse Attack and Dethrone God Registered User regular
    TryCatcher wrote: »
    Phoenix-D wrote: »
    Phoenix-D wrote: »
    If you think social media is big enough that getting banned is an issue, there's a perfectly cromulent remedy that already exists: antitrust.

    Also, if you only speak up when the fascist is being deplatformed, cry me a river. Twitter bans people all the time. Either they are allowed to or they aren't. No special treatment just because he's president.

    Also, come to think of it, Germany has laws that require people to be blocked from Twitter there.

    That's the point that Merkel is making. It's the government that should be dictating terms to social media, not the other way around.

    I think the main problem here is that Twitter et al claim that they are a public platform (i.e. a town square) so they can't be regulated as if they were a publisher, but they also have rules and standards for what they allow on their platform just as a publisher would.

    It would probably be good for everyone to finally clear up that double standard.

  • AngelHedgie Registered User regular
    reVerse wrote: »
    TryCatcher wrote: »
    Phoenix-D wrote: »
    Phoenix-D wrote: »
    If you think social media is big enough that getting banned is an issue, there's a perfectly cromulent remedy that already exists: antitrust.

    Also, if you only speak up when the fascist is being deplatformed, cry me a river. Twitter bans people all the time. Either they are allowed to or they aren't. No special treatment just because he's president.

    Also, come to think of it, Germany has laws that require people to be blocked from Twitter there.

    That's the point that Merkel is making. It's the government that should be dictating terms to social media, not the other way around.

    I think the main problem here is that Twitter et al claim that they are a public platform (i.e. a town square) so they can't be regulated as if they were a publisher, but they also have rules and standards for what they allow on their platform just as a publisher would.

    It would probably be good for everyone to finally clear up that double standard.

    It's not being a "public platform" that protects them, it's that Section 230, being written in the infancy of the Internet Age, created a clear cleavage between user and platform that has never been revisited even as that relationship evolved. And yes, that's something that needs to be revisited, as so much of the issue with social media has been that they are indemnified from user behavior that the platform enables.

  • Phoenix-D Registered User regular
    reVerse wrote: »
    TryCatcher wrote: »
    Phoenix-D wrote: »
    Phoenix-D wrote: »
    If you think social media is big enough that getting banned is an issue, there's a perfectly cromulent remedy that already exists: antitrust.

    Also, if you only speak up when the fascist is being deplatformed, cry me a river. Twitter bans people all the time. Either they are allowed to or they aren't. No special treatment just because he's president.

    Also, come to think of it, Germany has laws that require people to be blocked from Twitter there.

    That's the point that Merkel is making. It's the government that should be dictating terms to social media, not the other way around.

    I think the main problem here is that Twitter et al claim that they are a public platform (i.e. a town square) so they can't be regulated as if they were a publisher, but they also have rules and standards for what they allow on their platform just as a publisher would.

    It would probably be good for everyone to finally clear up that double standard.

    No. This is a completely incorrect framing. Section 230 says nothing about being a town square, and Twitter is not a publisher under the law. There is no "publisher vs. not" distinction; that is crap that has been popularized by bad-faith BS. The law itself says:
    No provider or user of an interactive computer service shall be held liable on account of—
    (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;

  • Paladin Registered User regular
    I think that messing with internet communication will blow up in your face until people can go get drunk at bars again. Whatever you try to do, if you cut a lot of people off during social distancing, you risk paying a heavy price. I'm actually curious to see what that price would be, but I won't be one to tempt it.

    Marty: The future, it's where you're going?
    Doc: That's right, twenty-five years into the future. I've always dreamed of seeing the future, looking beyond my years, seeing the progress of mankind. I'll also be able to see who wins the next twenty-five World Series.
  • Foefaller Registered User regular
    edited January 2021
    reVerse wrote: »
    TryCatcher wrote: »
    Phoenix-D wrote: »
    Phoenix-D wrote: »
    If you think social media is big enough that getting banned is an issue, there's a perfectly cromulent remedy that already exists: antitrust.

    Also, if you only speak up when the fascist is being deplatformed, cry me a river. Twitter bans people all the time. Either they are allowed to or they aren't. No special treatment just because he's president.

    Also, come to think of it, Germany has laws that require people to be blocked from Twitter there.

    That's the point that Merkel is making. It's the government that should be dictating terms to social media, not the other way around.

    I think the main problem here is that Twitter et al claim that they are a public platform (i.e. a town square) so they can't be regulated as if they were a publisher, but they also have rules and standards for what they allow on their platform just as a publisher would.

    It would probably be good for everyone to finally clear up that double standard.

    It's not being a "public platform" that protects them, it's that Section 230, being written in the infancy of the Internet Age, created a clear cleavage between user and platform that has never been revisited even as that relationship evolved. And yes, that's something that needs to be revisited, as so much of the issue with social media has been that they are indemnified from user behavior that the platform enables.

    Not to say it's perfect and shouldn't be altered, but the reason Section 230 exists wasn't so much to protect platforms from liability for user content, but rather so that platforms could regulate user content on their platforms in the first place without automatically becoming liable for *all* the content posted.

    The case law that existed prior had basically set up a scenario that would have turned the entire internet into 8chan, because platforms would be unable to remove anything offensive on their sites without getting sued for said content in the first place.

  • AngelHedgie Registered User regular
    Foefaller wrote: »
    reVerse wrote: »
    TryCatcher wrote: »
    Phoenix-D wrote: »
    Phoenix-D wrote: »
    If you think social media is big enough that getting banned is an issue, there's a perfectly cromulent remedy that already exists: antitrust.

    Also, if you only speak up when the fascist is being deplatformed, cry me a river. Twitter bans people all the time. Either they are allowed to or they aren't. No special treatment just because he's president.

    Also, come to think of it, Germany has laws that require people to be blocked from Twitter there.

    That's the point that Merkel is making. It's the government that should be dictating terms to social media, not the other way around.

    I think the main problem here is that Twitter et al claim that they are a public platform (i.e. a town square) so they can't be regulated as if they were a publisher, but they also have rules and standards for what they allow on their platform just as a publisher would.

    It would probably be good for everyone to finally clear up that double standard.

    It's not being a "public platform" that protects them, it's that Section 230, being written in the infancy of the Internet Age, created a clear cleavage between user and platform that has never been revisited even as that relationship evolved. And yes, that's something that needs to be revisited, as so much of the issue with social media has been that they are indemnified from user behavior that the platform enables.

    Not to say it's perfect and shouldn't be altered, but the reason Section 230 exists wasn't so much to protect platforms from liability for user content, but rather so that platforms could regulate user content on their platforms in the first place without automatically becoming liable for *all* the content posted.

    The case law that existed prior had basically set up a scenario that would have turned the entire internet into 8chan, because platforms would be unable to remove anything offensive on their sites without getting sued for said content in the first place.

    The problem with indemnifying service providers from their moderation decisions is that it turns out "do nothing" is a moderation decision. The Fappening should have been a wake-up call - Reddit refusing to remove images posted nonconsensually on their service until they were legally liable (at which point the subreddit was nuked from orbit) highlighted the problem with how things stood.

  • Foefaller Registered User regular
    Foefaller wrote: »
    reVerse wrote: »
    TryCatcher wrote: »
    Phoenix-D wrote: »
    Phoenix-D wrote: »
    If you think social media is big enough that getting banned is an issue, there's a perfectly cromulent remedy that already exists: antitrust.

    Also, if you only speak up when the fascist is being deplatformed, cry me a river. Twitter bans people all the time. Either they are allowed to or they aren't. No special treatment just because he's president.

    Also, come to think of it, Germany has laws that require people to be blocked from Twitter there.

    That's the point that Merkel is making. It's the government that should be dictating terms to social media, not the other way around.

    I think the main problem here is that Twitter et al claim that they are a public platform (i.e. a town square) so they can't be regulated as if they were a publisher, but they also have rules and standards for what they allow on their platform just as a publisher would.

    It would probably be good for everyone to finally clear up that double standard.

    It's not being a "public platform" that protects them, it's that Section 230, being written in the infancy of the Internet Age, created a clear cleavage between user and platform that has never been revisited even as that relationship evolved. And yes, that's something that needs to be revisited, as so much of the issue with social media has been that they are indemnified from user behavior that the platform enables.

    Not to say it's perfect and shouldn't be altered, but the reason Section 230 exists wasn't so much to protect platforms from liability for user content, but rather so that platforms could regulate user content on their platforms in the first place without automatically becoming liable for *all* the content posted.

    The case law that existed prior had basically set up a scenario that would have turned the entire internet into 8chan, because platforms would be unable to remove anything offensive on their sites without getting sued for said content in the first place.

    The problem with indemnifying service providers from their moderation decisions is that it turns out "do nothing" is a moderation decision. The Fappening should have been a wake-up call - Reddit refusing to remove images posted nonconsensually on their service until they were legally liable (at which point the subreddit was nuked from orbit) highlighted the problem with how things stood.

    Like I said, I'm not saying Section 230 is flawless.

    I'm just saying we need to keep in mind that without it, "do nothing" is the only moderation decision that's proven in court to protect a site from getting sued to oblivion.

    Any successor meant to create a more inclusive social media environment would have to make sure that, no, sites can't just do nothing about offensive content and wash their hands of any responsibility if they feel the demands of the new regulations are too onerous.

  • CelestialBadger Registered User regular
    No moderation is not a serious option for any platform that wants normal people on it. The more libertarian sites that have tried it quickly become cesspools.

  • Phoenix-D Registered User regular
    No moderation is not a serious option for any platform that wants normal people on it. The more libertarian sites that have tried it quickly become cesspools.

    Getting sued because you moderated this and not that is also not a serious option.

  • Foefaller Registered User regular
    edited January 2021
    No moderation is not a serious option for any platform that wants normal people on it. The more libertarian sites that have tried it quickly become cesspools.

    ...and those cesspools are where the racists and alt-righters are conditioned into becoming racists and alt-righters.

    I don't think it would be in the best interest of the internet to unintentionally encourage any site to take that path, even if only a small number of them would, given the choice.

  • spool32 Contrary Library Registered User regular
    Foefaller wrote: »
    reVerse wrote: »
    TryCatcher wrote: »
    Phoenix-D wrote: »
    Phoenix-D wrote: »
    If you think social media is big enough that getting banned is an issue, there's a perfectly cromulent remedy that already exists: antitrust.

    Also, if you only speak up when the fascist is being deplatformed, cry me a river. Twitter bans people all the time. Either they are allowed to or they aren't. No special treatment just because he's president.

    Also, come to think of it, Germany has laws that require people to be blocked from Twitter there.

    That's the point that Merkel is making. It's the government that should be dictating terms to social media, not the other way around.

    I think the main problem here is that Twitter et al claim that they are a public platform (i.e. a town square) so they can't be regulated as if they were a publisher, but they also have rules and standards for what they allow on their platform just as a publisher would.

    It would probably be good for everyone to finally clear up that double standard.

    It's not being a "public platform" that protects them, it's that Section 230, being written in the infancy of the Internet Age, created a clear cleavage between user and platform that has never been revisited even as that relationship evolved. And yes, that's something that needs to be revisited, as so much of the issue with social media has been that they are indemnified from user behavior that the platform enables.

    Not to say it's perfect and shouldn't be altered, but the reason Section 230 exists wasn't so much to protect platforms from liability for user content, but rather so that platforms could regulate user content on their platforms in the first place without automatically becoming liable for *all* the content posted.

    The case law that existed prior had basically set up a scenario that would have turned the entire internet into 8chan, because platforms would be unable to remove anything offensive on their sites without getting sued for said content in the first place.

    This is correct. Just to bring it home: the Glorious Edict could not exist or be enforced here if Section 230 vanished, because it would put the PA megacorp into a position of liability for moderated content.

  • Goumindong Registered User regular
    edited January 2021
    Phoenix-D wrote: »
    No moderation is not a serious option for any platform that wants normal people on it. The more libertarian sites that have tried it quickly become cesspools.

    Getting sued because you moderated this and not that is also not a serious option.

    It really isn't, because the case law in question does kind of have a cutout for good-faith efforts, even if the cutout is not perfect. The claimant had previously notified the host that content was defamatory, and the host removed said content. The content was reposted and was removed after the claimant notified the host again, multiple times. The claimant was more or less arguing that the web host had an obligation to actively search out the defamatory content that had been removed after legal notices, because the content kept being reposted and those responsible were not prevented from doing so.

    Absent 230, the answer might have been "yeah, if you host defamatory content and, after moderating that content multiple times, do not set up structures to prevent it from being posted or to remove it when it is, you're liable" - but we don't really know that, because 230 granted blanket liability protection. It is not certain that the answer would have been "yeah, if you host defamatory content for any length of time you're liable," because even though the judge did indicate that they would have found for the plaintiff, this was after the plaintiff had sought relief via other methods.

    We might note that most large content providers have automated methods to detect and remove content that breaks their ToS, as well as content that breaches copyright, so it's entirely possible that most content hosts would be in compliance right off the bat. And it's not impossible that small content hosts (like this forum) would avoid significant issues, because after receiving enough legitimate complaints they would ban the poster in question, and they actually have methods in place to prevent people who have been banned from remaking accounts and continuing to post.

  • spool32 Contrary Library Registered User regular
    That's purely speculation though, Goumindong, and doesn't really take into account the sensibly risk-averse nature of corps that operate a forum as a side-thing, like PA does.

    There's no upside to trying to keep PA Forums alive in a post-230 world because one mistake ends their business even in a case they might ultimately win.

  • Goumindong Registered User regular
    spool32 wrote: »
    Foefaller wrote: »
    reVerse wrote: »
    TryCatcher wrote: »
    Phoenix-D wrote: »
    Phoenix-D wrote: »
    If you think social media is big enough that getting banned is an issue, there's a perfectly cromulent remedy that already exists: antitrust.

    Also, if you only speak up when the fascist is being deplatformed, cry me a river. Twitter bans people all the time. Either they are allowed to or they aren't. No special treatment just because he's president.

    Also, come to think of it, Germany has laws that require people to be blocked from Twitter there.

    That's the point that Merkel is making. It's the government that should be dictating terms to social media, not the other way around.

    I think the main problem here is that Twitter et al claim that they are a public platform (i.e. a town square) so they can't be regulated as if they were a publisher, but they also have rules and standards for what they allow on their platform just as a publisher would.

    It would probably be good for everyone to finally clear up that double standard.

    It's not being a "public platform" that protects them, it's that Section 230, being written in the infancy of the Internet Age, created a clear cleavage between user and platform that has never been revisited even as that relationship evolved. And yes, that's something that needs to be revisited, as so much of the issue with social media has been that they are indemnified from user behavior that the platform enables.

    Not to say it's perfect and shouldn't be altered, but the reason Section 230 exists wasn't so much to protect platforms from liability for user content, but rather so that platforms could regulate user content on their platforms in the first place without automatically becoming liable for *all* the content posted.

    The case law that existed prior had basically set up a scenario that would have turned the entire internet into 8chan, because platforms would be unable to remove anything offensive on their sites without getting sued for said content in the first place.

    This is correct. Just to bring it home: the Glorious Edict could not exist or be enforced here if Section 230 vanished, because it would put the PA megacorp into a position of liability for moderated content.

    Probably not, because the Glorious Edict is a ToS and not a moderation policy. That is, you could still post defamatory statements. Moderation in and of itself need not trigger the conditions, because the case in question hinges on the specific content being moderated being the content they were liable for.

    The case goes like this:

    Person X says a false thing about Person Y.
    Person Y contacts Host Z and says "yo, this is defamatory, take it down."
    Host Z does.
    Person X posts the false thing about Person Y again.
    Person Y contacts Host Z and says "yo, this is defamatory, take it down."
    Host Z does.
    Person X posts the false thing about Person Y again.
    Person Y sues Host Z for failing to stop Person X from saying the false thing, even though they know it's false and have been contacted about this in the past.

    Section 230 prevents them from being liable. And absent 230 they would have been, said the judge.

    But we notice that there is a lot of leeway here in the facts of the case. The moderation they would be liable for is the moderation of the content that had already been removed, not other content that had or had not been removed. So the plaintiff succeeding here does not necessarily put all moderation in jeopardy.

    So I don't really buy the idea that killing 230 would kill all moderation, not just because the case in question doesn't really deal with it, but also because the law would likely update itself as judges examined the situation in light of new technology whose structure they have had 20 years to appreciate.

  • spool32 Contrary Library Registered User regular
    Goumindong wrote: »
    spool32 wrote: »
    Foefaller wrote: »
    reVerse wrote: »
    TryCatcher wrote: »
    Phoenix-D wrote: »
    Phoenix-D wrote: »
    If you think social media is big enough that getting banned is an issue, there's a perfectly cromulent remedy that already exists: antitrust.

    Also, if you only speak up when the fascist is being deplatformed, cry me a river. Twitter bans people all the time. Either they are allowed to or they aren't. No special treatment just because he's president.

    Also, come to think of it, Germany has laws that require people to be blocked from Twitter there.

    That's the point that Merkel is making. It's the government that should be dictating terms to social media, not the other way around.

    I think the main problem here is that Twitter et al claim that they are a public platform (i.e. a town square) so they can't be regulated as if they were a publisher, but they also have rules and standards for what they allow on their platform just as a publisher would.

    It would probably be good for everyone to finally clear up that double standard.

    It's not being a "public platform" that protects them, it's that Section 230, being written in the infancy of the Internet Age, created a clear cleavage between user and platform that has never been revisited even as that relationship evolved. And yes, that's something that needs to be revisited, as so much of the issue with social media has been that they are indemnified from user behavior that the platform enables.

    Not to say it's perfect and shouldn't be altered, but the reason Section 230 exists wasn't so much to protect platforms from liability for user content, but rather so that platforms could regulate user content on their platforms in the first place without automatically becoming liable for *all* the content posted.

    The case law that existed prior had basically set up a scenario that would have turned the entire internet into 8chan, because platforms would be unable to remove anything offensive on their sites without getting sued for said content in the first place.

    This is correct. Just to bring it home, the Glorious Edict could not exist or be enforced here if Section 230 vanished, because it would put the PA megacorp in a position of liability for moderated content.

    Probably not, because the Glorious Edict is a ToS, not a moderation policy. That is, you could still post defamatory statements. Moderation in and of itself need not trigger liability, because the case in question hinges on the moderated content being the specific content the host was held liable for.

    Case goes like this

    Person X says false thing about person Y.
    Person Y contacts host Z and says "yo this is defamatory take it down"
    Host Z does
    Person X posts false thing about person Y again
    Person Y contacts host Z and says "yo this is defamatory take it down"
    Host Z does
    Person X posts false thing about person Y again.
    Person Y sues Host Z for failing to stop Person X from saying the false thing, even though they know it's false and have been contacted about it in the past.

    Section 230 prevents them from being liable. And absent 230, said the judge, they would have been.

    But notice that there is a lot of leeway in the facts of the case. The moderation they are liable for is the moderation of the content that had been removed, not other content that had or had not been removed. So the plaintiff succeeding here does not necessarily put all moderation in jeopardy.

    So I don't really buy the idea that killing 230 would kill all moderation, not only because the case in question doesn't really deal with it, but also because the law would likely update itself as judges examined the situation in light of technology whose structure they have now had 20 years to appreciate.
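The notice-and-takedown sequence laid out above can be sketched as a toy simulation. To be clear, this is an illustration of the argument, not of any actual legal test; the names and the "more than two notices" threshold are entirely hypothetical:

```python
# Toy model of the pre-230 defamation scenario described above.
# The premise: a host that moderates specific content acquires knowledge
# of it, so repeated notices about the same false statement can expose
# the host for failing to keep it down. Threshold here is made up.

class Host:
    def __init__(self):
        self.notices = {}  # statement -> number of takedown notices received

    def receive_notice(self, statement):
        """Complainant reports a defamatory statement; host takes it down."""
        self.notices[statement] = self.notices.get(statement, 0) + 1

    def is_liable_for(self, statement):
        """Host is exposed once it has been put on notice and the
        statement keeps reappearing (here: more than two notices)."""
        return self.notices.get(statement, 0) > 2

host = Host()
claim = "false thing about Person Y"
for _ in range(3):  # X posts it, Y reports it, Z takes it down, three times
    host.receive_notice(claim)

print(host.is_liable_for(claim))  # True: host knew, and the thing kept coming back
```

The point the model makes concrete is that liability attaches to the *specific* statement the host repeatedly moderated, not to moderation in general.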

    This doesn't touch things like demonstrating a pattern of moderation that shows intent to tolerate defamatory behavior. I think you're being a little pollyanna-ish about the impact. It also doesn't consider situations where moderators refuse to take down something that might be defamatory, and the business gets dragged into court to argue its moderation was appropriate (with all the legal bills that requires). It turns moderators into arbiters of truth and means failure may expose the business to lawsuits.

    While the law is updating itself (and given the arc of legislation since the '90s, there's no reason to believe it will arrive at a place that's either logical or positive for social interaction on the internet), the disintegration of small forums and 'engagement' sections (i.e. Patreon's comment sections, or any comment section underneath a blog of any kind) will be, I think, swift and widespread.

  • Phoenix-DPhoenix-D Registered User regular
    Also, there is pre-230 precedent, and it's not good: moderate at all = liable.

  • GoumindongGoumindong Registered User regular
    Phoenix-D wrote: »
    Also, there is pre-230 precedent, and it's not good: moderate at all = liable.
    I was discussing the pre-230 precedent, which does not necessarily say that, given the facts of the case.

  • ZibblsnrtZibblsnrt Registered User regular
    Amazon's response to Parler's antitrust(?!) lawsuit (PDF link) over getting booted from AWS has some significant "are you sure you want to play this game?" energy. The first three sentences:
    This case is not about suppressing speech or stifling viewpoints. It is not about a conspiracy to restrain trade. Instead, this case is about Parler’s demonstrated unwillingness and inability to remove from the servers of Amazon Web Services (“AWS”) content that threatens the public safety, such as by inciting and planning the rape, torture, and assassination of named public officials and private citizens.

  • evilmrhenryevilmrhenry Registered User regular
    I feel the technical perspectives on 230 removal are missing the point. While Section 230 has been chosen as the boogeyman, the actual goal is to make social media hospitable to conservative thought. (This is a euphemism.) You should start with a Twitter that is forbidden from banning Trump and work backwards from there.

  • shrykeshryke Member of the Beast Registered User regular
    Polaritie wrote: »
    TryCatcher wrote: »


    Honestly, a good question is what resources countries have against social media manipulation. It's the same reason Merkel and a good chunk of the EU have come out against Trump's ban: it's not about Trump per se, but about the likes of the Zucc having more power than a national government, and the EU has been spending a lot of effort on, at the very least, cutting the power of social media companies over EU citizens.

    "But they are private companies and blah blah". Don't care.

    The "freedom of opinion" argument is a tired old chestnut - yes, you have the right to your opinion, but so does everyone else, and part of their right is to say "I no longer wish to associate with you." I'm also done with the "hate speech is the price of free speech" argument (which is the argument all these leaders are implicitly making by opposing the ban without getting into the details of why Trump was banned.)

    The argument that Zuckerberg has more power than a government is belied by his behavior regarding Elizabeth Warren - if he truly had more power than the government, the idea of her gaining regulatory authority wouldn't fill him with the pants-shitting terror that we've seen.

    Twitter is in a position to exercise an entirely unhealthy amount of soft power against the government but has little it can do if the government decides to take direct action either via legislation or the courts.

    Also Trump's ban is not really an exercise in soft power. It's just an example of Trump's incompetence. He's already president. He could get around a twitter ban if he wanted via other methods, but he's too stupid and lazy to do so.

    Twitter's soft power was in making Trump president in the first place, by allowing him a platform without moderating what he was saying. It's in shaping the political landscape by controlling what information gets to people and what doesn't. It's in propping up voices from one side of an issue and silencing voices from the other.

    The power of social media companies to shape politics is quite large, but it has nothing to do with how it affects people who already have power and fame.

  • RMS OceanicRMS Oceanic Registered User regular
    I feel the technical perspectives on 230 removal are missing the point. While Section 230 has been chosen as the boogeyman, the actual goal is to make social media hospitable to conservative thought. (This is a euphemism.) You should start with a Twitter that is forbidden from banning Trump and work backwards from there.

    I thought it was more people were being mean to Trump and Twitter wasn't punishing them.

  • AngelHedgieAngelHedgie Registered User regular
    This analysis by a lawyer of the AWS response to Parler's TRO request is :chefkiss:, frankly:



    Basically, we hit [stophesalreadydead.gif] territory about halfway through.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • PaladinPaladin Registered User regular
    The role of Section 230, or even the government, is super unclear to me, as the stuff that seems to actually work is big tech companies doing things to smaller companies. Looks like if you want something fixed, sic Amazon, Google, or PayPal on 'em

    Marty: The future, it's where you're going?
    Doc: That's right, twenty five years into the future. I've always dreamed of seeing the future, looking beyond my years, seeing the progress of mankind. I'll also be able to see who wins the next twenty-five World Series.
  • IncenjucarIncenjucar VChatter Seattle, WARegistered User regular
    Paladin wrote: »
    The role of Section 230, or even the government, is super unclear to me, as the stuff that seems to actually work is big tech companies doing things to smaller companies. Looks like if you want something fixed, sic Amazon, Google, or PayPal on 'em

    My understanding is that 230 is about whom individuals can sue over what someone puts on the internet.

  • EncEnc A Fool with Compassion Pronouns: He, Him, HisRegistered User regular
    edited January 2021
    Paladin wrote: »
    The role of Section 230, or even the government, is super unclear to me, as the stuff that seems to actually work is big tech companies doing things to smaller companies. Looks like if you want something fixed, sic Amazon, Google, or PayPal on 'em

    This is pretty simple. If you have a platform, your options before were:
    • You are liable and get sued when someone posts [insert illegal thing here] on your platform if you ever try to remove anything, as you are then considered the custodian of the site. This is a problem if you don't want your brand to be associated with child porn, neo-Nazis, and other criminal evil bullshit... which is to say, every brand.
    • Allow [insert illegal thing here] on your platform and do nothing to remove it, and have no liability, because your platform is an open marketplace of ideas with posters made self-liable via the EULA. This is fine for 4chan and other cesspools of filth but, again, a problem for most brands.

    Section 230 allows for a third option:
    • You are able to remove offensive content to protect your brand/platform's reputation without gaining full liability for everything your users post. This allows open discussion while you prevent your brand from being associated with bullshit.

    Government action here is setting where the lawsuits can happen. Without 230, the entire internet becomes 4chan, or platforms will have to screen every social media post before allowing it to go up.

    Enc on
  • PaladinPaladin Registered User regular
    I feel like the big problem, from an enforcement standpoint, is that illegal activities can occur on the internet but there is a liability gap. If somebody does a bad thing, you can't sue them because they have anonymity that's untraceable. So unless everyone gets tethered to an ID when on the internet, I don't see individual liability being a recourse.

    If this was a business or healthcare, I'd simply designate a maximum level of allowable harm, beyond which we'd have to take action. I don't know how to quantify that in this case, and I don't know if I even want to quantify it for everybody, knowing a bunch of people out there will abuse it. This makes it tough to argue that business level solutions aren't actually the most appropriate way to deal with the issue, as they can react to individual cases efficiently without resorting to things like climbing the appellate courts all the way to the top, or running a massive campaign to make a new federal law every time social circumstances change.

    The point is, this level of allowable harm is a quantity, not a quality. If it's one person spouting whatever harmful stuff, it would be hard to argue that the liability switch should be flipped, as that is basically the same as not having section 230 at all. There would be a minimum group size x minimum social deviancy level calculation somewhere.

    Obviously, this is just a thought experiment. Specific rules to specific situations are only good if the institution making the rules has good intentions and is invested in the success of the system - I don't think the current government qualifies. The government will continue not to qualify unless government officials themselves have liability for the success of their projects. We are far from that checkpoint.

  • EncEnc A Fool with Compassion Pronouns: He, Him, HisRegistered User regular
    Paladin wrote: »
    I feel like the big problem, from an enforcement standpoint, is that illegal activities can occur on the internet but there is a liability gap. If somebody does a bad thing, you can't sue them because they have anonymity that's untraceable. So unless everyone gets tethered to an ID when on the internet, I don't see individual liability being a recourse.

    If this was a business or healthcare, I'd simply designate a maximum level of allowable harm, beyond which we'd have to take action. I don't know how to quantify that in this case, and I don't know if I even want to quantify it for everybody, knowing a bunch of people out there will abuse it. This makes it tough to argue that business level solutions aren't actually the most appropriate way to deal with the issue, as they can react to individual cases efficiently without resorting to things like climbing the appellate courts all the way to the top, or running a massive campaign to make a new federal law every time social circumstances change.

    The point is, this level of allowable harm is a quantity, not a quality. If it's one person spouting whatever harmful stuff, it would be hard to argue that the liability switch should be flipped, as that is basically the same as not having section 230 at all. There would be a minimum group size x minimum social deviancy level calculation somewhere.

    Obviously, this is just a thought experiment. Specific rules to specific situations are only good if the institution making the rules has good intentions and is invested in the success of the system - I don't think the current government qualifies. The government will continue not to qualify unless government officials themselves have liability for the success of their projects. We are far from that checkpoint.

    To the first bold point, this is pretty rare. As evidenced by the Capitol coup attempt, most people are real dumb re: their identity on the internet.

    The rest of the post reads as nonsense to me. Individuals can be, and are, censured on the internet all the time, and it isn't a particular burden to do so. We have existing laws to deal with crimes documented on the internet. I'm not sure what you are concern trolling here, but it doesn't read coherently to me, at least.

  • PaladinPaladin Registered User regular
    Enc wrote: »
    Paladin wrote: »
    I feel like the big problem, from an enforcement standpoint, is that illegal activities can occur on the internet but there is a liability gap. If somebody does a bad thing, you can't sue them because they have anonymity that's untraceable. So unless everyone gets tethered to an ID when on the internet, I don't see individual liability being a recourse.

    If this was a business or healthcare, I'd simply designate a maximum level of allowable harm, beyond which we'd have to take action. I don't know how to quantify that in this case, and I don't know if I even want to quantify it for everybody, knowing a bunch of people out there will abuse it. This makes it tough to argue that business level solutions aren't actually the most appropriate way to deal with the issue, as they can react to individual cases efficiently without resorting to things like climbing the appellate courts all the way to the top, or running a massive campaign to make a new federal law every time social circumstances change.

    The point is, this level of allowable harm is a quantity, not a quality. If it's one person spouting whatever harmful stuff, it would be hard to argue that the liability switch should be flipped, as that is basically the same as not having section 230 at all. There would be a minimum group size x minimum social deviancy level calculation somewhere.

    Obviously, this is just a thought experiment. Specific rules to specific situations are only good if the institution making the rules has good intentions and is invested in the success of the system - I don't think the current government qualifies. The government will continue not to qualify unless government officials themselves have liability for the success of their projects. We are far from that checkpoint.

    To the first bold point, this is pretty rare. As evidenced by the Capitol coup attempt, most people are real dumb re: their identity on the internet.

    The rest of the post reads as nonsense to me. Individuals can be, and are, censured on the internet all the time, and it isn't a particular burden to do so. We have existing laws to deal with crimes documented on the internet. I'm not sure what you are concern trolling here, but it doesn't read coherently to me, at least.

    I don't think I'm concern trolling.

    If the threshold of activation of legal involvement is when people actually try to take internet stuff into the real world, then yeah, anonymity doesn't really apply. But that's also the system we've currently got. It works when you try to do things off the internet.

    If we were able to deal with every crime on the internet and assign liability appropriately, then we wouldn't need to put social media sites in a special regulation category. Someone sends you a death threat? Send the law after them. If the workload is possible and there is no liability gap, then it seems like pouring more resources into resolving cases regardless of whether social media was involved is the way to go.

  • AngelHedgieAngelHedgie Registered User regular
    Twitter is working on a decentralized, blockchain-based Twitter replacement:
    "We are trying to do our part by funding an initiative around an open decentralized standard for social media,” Dorsey said in his tweetstorm. “Our goal is to be a client of that standard for the public conversation layer of the internet. We call it #bluesky.”

    He continued, “Twitter is funding a small independent team of up to five open source architects, engineers, and designers to develop an open and decentralized standard for social media. The goal is for Twitter to ultimately be a client of this standard.”

    Dorsey likened the proposed new decentralized standard to Bitcoin, the digital currency which is also decentralized. Dorsey called it “a foundational internet technology that is not controlled or influenced by any single individual or entity. This is what the internet wants to be, and over time, more of it will be.”

    He acknowledged that such a project “will take time to build. We are in the process of interviewing and hiring folks, looking at both starting a standard from scratch or contributing to something that already exists. No matter the ultimate direction, we will do this work completely through public transparency.”

    Dorsey really has no clue what he is doing. He really needs to just resign already.

  • PaladinPaladin Registered User regular
    Isn't this basically Pied Piper?

  • Phoenix-DPhoenix-D Registered User regular
    We already have that, Jack. Except no Mastodon instance would federate with you :P
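For context: the open standard Mastodon instances already federate over is ActivityPub, and discovering an account starts with a WebFinger lookup (a GET to `/.well-known/webfinger?resource=acct:user@domain`). A minimal sketch of picking the ActivityPub actor URL out of a WebFinger response follows; the handle and domain below are hypothetical, and the document is a hand-written sample rather than a live server response:

```python
import json

# Sample WebFinger response for the made-up handle acct:alice@example.social.
# Real servers return a JSON Resource Descriptor of this shape.
webfinger_doc = json.loads("""
{
  "subject": "acct:alice@example.social",
  "links": [
    {"rel": "http://webfinger.net/rel/profile-page",
     "type": "text/html",
     "href": "https://example.social/@alice"},
    {"rel": "self",
     "type": "application/activity+json",
     "href": "https://example.social/users/alice"}
  ]
}
""")

def actor_url(doc):
    """Find the ActivityPub actor document among the WebFinger links:
    the link with rel="self" and the ActivityStreams JSON media type."""
    for link in doc.get("links", []):
        if link.get("rel") == "self" and link.get("type") == "application/activity+json":
            return link["href"]
    return None

print(actor_url(webfinger_doc))  # https://example.social/users/alice
```

Which is the joke in a nutshell: the "open decentralized standard" Dorsey describes hiring five people to invent already has a published spec and a working network.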

  • evilmrhenryevilmrhenry Registered User regular
    Wow. A whole up to five developers.

  • CalicaCalica Registered User regular
    Wow. A whole up to five developers.

    This gives me nightmarish visions of an open source social media standard designed and developed by five white men :bigfrown:

This discussion has been closed.