
[Social Media]: The Intersection Of Money, Policy, And Hate


Posts

  • jothki Registered User regular
    The best comparison that I can think of for Discord would be a company that rents out private conference rooms. I'd expect roughly the same level of guaranteed confidentiality there.

  • Phoenix-D Registered User regular
    In other s
    DarkPrimus wrote: »
    It's already started:
    Thread
    Just as we feared, TwitterSafety's new "Private Media" policy is already being used by extremists to suppress valid, public-interest research by journalists who expose violent crimes.

    This policy is hours-old and is already being abused.

    mcc linking to a thread from Chad Loder explaining how organized Twitter mobs have utilized Twitter's report/moderation functions to silence journalists trying to report on alt-right sympathizers like Andy Ngo. The thread includes examples of numerous journalists who reported on individuals with alt-right connections and then had that reporting mass-reported.

    Incredibly predictable from Twitter. They never implement anything without fucking it up somehow.

  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited December 2021
    jothki wrote: »
    The best comparison that I can think of for Discord would be a company that rents out private conference rooms. I'd expect roughly the same level of guaranteed confidentiality there.

    And there, if a nazi group was using them, they might not know about it. But if it is brought to their attention, they could be expected to deny access to that group from then on. Which is fine.

    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • Commander Zoom Registered User regular
    Phoenix-D wrote: »
    In other s
    DarkPrimus wrote: »
    It's already started:
    Thread
    Just as we feared, TwitterSafety's new "Private Media" policy is already being used by extremists to suppress valid, public-interest research by journalists who expose violent crimes.

    This policy is hours-old and is already being abused.

    mcc linking to a thread from Chad Loder explaining how organized Twitter mobs have utilized Twitter's report/moderation functions to silence journalists trying to report on alt-right sympathizers like Andy Ngo. The thread includes examples of numerous journalists who reported on individuals with alt-right connections and then had that reporting mass-reported.

    Incredibly predictable from Twitter. They never implement anything without fucking it up somehow.

    You say that like this was not the intended result.

  • Raiden333 Registered User regular
    So the main point of contention re: the discord thing seems to be whether or not Discord should feel the need to have human beings actively tapping into completely random discords to eavesdrop on them for a little while, just in case something bad is going on there. I just don't see how this is an effective use of time, resources, or people, on either the part of the Discord company themselves or those who wish to find and burn out hate speech wherever it rears its ugly head. Like imagine a system like the following:

    1. Discord has a Reports Division where, anytime a server is reported for something big like illegal shit or hate speech, someone who is well trained in recognizing hate, even if it's dogwhistling, can investigate and shut that shit down.
    2. Discord has an algorithm that pings every server every so often with a giant list of Super Naughty Words and Phrases. Slurs, the 14 words, etc, anything that's indicative of some of the nasty hateful shit we all want to stomp out. If a server is flagging the algorithm like crazy, it gets flagged for one of the staff of team #1 to investigate the server, even if the server's never been reported by anyone.
    3. If, after investigating a #1 incident, it's found to be a hate-server that needs to be shut down, figure out how they skirted getting auto-flagged by #2. Did they invent a new dogwhistle phrase or code of some kind? Whatever it is, have it added to the list the algorithm in #2 checks for.
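The step-2 keyword pass could be sketched, very roughly, like so. Everything here is invented for illustration (the phrase list, the threshold, the function names); it is not real Discord tooling, just one plausible shape for the idea:

```python
# Rough illustration of step 2: score a batch of a server's messages against
# a flag-phrase list and queue the server for human review past a threshold.
# FLAG_PHRASES, flag_score, and needs_review are made-up names for the sketch.
import re

FLAG_PHRASES = ["phrase one", "phrase two"]  # stand-ins for slurs, dogwhistles, etc.

def flag_score(messages, phrases=FLAG_PHRASES):
    """Count flagged-phrase hits across a batch of messages."""
    pattern = re.compile("|".join(re.escape(p) for p in phrases), re.IGNORECASE)
    return sum(len(pattern.findall(m)) for m in messages)

def needs_review(messages, threshold=5):
    """True if the server should go to the human review queue (team #1),
    even if no user has ever reported it."""
    return flag_score(messages) >= threshold
```

The real work lives in step 3's feedback loop: every new dogwhistle a reviewer uncovers gets appended to the phrase list, so the automated pass keeps up.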

    Now imagine they have a 2nd team, with 10 times, hell, 20 times as many people, that are being paid some kind of hourly rate for this method that keeps coming up of "join servers at random, poke around in their channels, maybe lurk a bit and see what's what, juuuuust in case there's something shady". For the sake of argument, we're going to assume every single person on both teams is a good-faith actor who only has the best of intentions.

    Which team would you bet on to catch more nazis? Hell, do you think the 2nd team there would catch ANY nazi servers the first team missed? Do you genuinely think there's an underground epidemic of radicalization servers that are SO DISCIPLINED that they always do their radicalization in unbreakable code phrases and they're always so careful about who they recruit to radicalize that they never get reported by someone who sees what's going on and goes yikes? Have you SEEN these people? They can't go 10 minutes without throwing racial slurs around.

    And I'd be willing to bet that anyone who is vaguely (like I am) or vehemently (like some people in this thread) opposed to the idea of what the second team would be doing, thinks my 3 point idea of what the first team would be doing is 100% acceptable for combatting hate-speech without going crazy about it.

    There was a steam sig here. It's gone now.
  • Ticaldfjam Snoqualmie, WA Registered User regular
    Phoenix-D wrote: »
    In other s
    DarkPrimus wrote: »
    It's already started:
    Thread
    Just as we feared, TwitterSafety's new "Private Media" policy is already being used by extremists to suppress valid, public-interest research by journalists who expose violent crimes.

    This policy is hours-old and is already being abused.

    mcc linking to a thread from Chad Loder explaining how organized Twitter mobs have utilized Twitter's report/moderation functions to silence journalists trying to report on alt-right sympathizers like Andy Ngo. The thread includes examples of numerous journalists who reported on individuals with alt-right connections and then had that reporting mass-reported.

    Incredibly predictable from Twitter. They never implement anything without fucking it up somehow.

    You say that like this was not the intended result.

    Gonna love it when the Right turns on West Coast tech companies that didn't bow down hard enough to Lord and Savior "Trump".

    It's coming. Dorsey, unlike dumbass Zuck, sees the waves.

  • MrMonroe passed out on the floor now Registered User regular
    In fact I'll be even more clear.

    Private doesn't mean sacrosanct. A participant in those private conversations reporting the nazi to discord who then step in and take it down? Fine. Identical to a citizen reporting a crime performed in a private residence to the police, who then step in to investigate.

    Popping into public discords to monitor them? Fine. They're public. I don't believe in free speech meaning you are immune to consequences.

    Actively monitoring private discords? Not fine.

    Actively monitoring private dms? Not fine.

    Taking action on a reported private DM by a participant? Fine.

    Ok, but to be clear, the current legal regime absolutely does not protect your hypothetically "private" communications from the people handling those communications, and those services absolutely will not show up in court to defend your right to this arrangement you're proposing. They're showing up right now on the side of "individual communities should do what they want" because they want the business from those communities, not because they believe you have privacy in your interpersonal communications.

    The people arguing that Discord should exert some control over whether its services end up getting used to create Actual Nazi clubhouses are not arguing that they should get some additional level of legal authority that they do not currently have, they are arguing that Discord should use the legal authority they already clearly have for good purposes.

  • MorganV Registered User regular
    edited December 2021
    Phoenix-D wrote: »
    In other s
    DarkPrimus wrote: »
    It's already started:
    Thread
    Just as we feared, TwitterSafety's new "Private Media" policy is already being used by extremists to suppress valid, public-interest research by journalists who expose violent crimes.

    This policy is hours-old and is already being abused.

    mcc linking to a thread from Chad Loder explaining how organized Twitter mobs have utilized Twitter's report/moderation functions to silence journalists trying to report on alt-right sympathizers like Andy Ngo. The thread includes examples of numerous journalists who reported on individuals with alt-right connections and then had that reporting mass-reported.

    Incredibly predictable from Twitter. They never implement anything without fucking it up somehow.

    You say that like this was not the intended result.

    Yup. One of two things results.

    1) Right wing shitheads weaponize this, and completely hash things up.
    2) Same as 1, but then Twitter abandons the policy, and says "we tried, it was worse than not doing anything, back to before".

    Actually fixing the problem? Not on the table.

  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited December 2021
    MrMonroe wrote: »
    In fact I'll be even more clear.

    Private doesn't mean sacrosanct. A participant in those private conversations reporting the nazi to discord who then step in and take it down? Fine. Identical to a citizen reporting a crime performed in a private residence to the police, who then step in to investigate.

    Popping into public discords to monitor them? Fine. They're public. I don't believe in free speech meaning you are immune to consequences.

    Actively monitoring private discords? Not fine.

    Actively monitoring private dms? Not fine.

    Taking action on a reported private DM by a participant? Fine.

    Ok, but to be clear, the current legal regime absolutely does not protect your hypothetically "private" communications from the people handling those communications, and those services absolutely will not show up in court to defend your right to this arrangement you're proposing. They're showing up right now on the side of "individual communities should do what they want" because they want the business from those communities, not because they believe you have privacy in your interpersonal communications.

    The people arguing that Discord should exert some control over whether its services end up getting used to create Actual Nazi clubhouses are not arguing that they should get some additional level of legal authority that they do not currently have, they are arguing that Discord should use the legal authority they already clearly have for good purposes.

    That's not particularly relevant. The issue is the mandate to eavesdrop officially, not whether it's really private right now. That's a separate issue. It's not really a gotcha to say "well there's nothing stopping them from doing something bad, so let's make it official that they do something bad regularly."

    Instead the response to that is "hey, good point, on top of not making it official to do the bad thing regularly, let's also make it so they can't do it without a good reason in the first place".

    You do realise that the government can absolutely wiretap your house or send forces to kick down your door and search through it for criminal activity, right? They have that power. There are just legal protections in place, developed over years, to prevent that from happening without a good reason.

    This is analogous to the wiretapping conversation from many pages back, where wiretapping wasn't regulated, so it was perfectly legal to spy on people's phone calls. At that point the situation was the same as Discord's is right now: it wasn't illegal, and there was nothing stopping them. The response to that was, interestingly, not to say "well it's allowed, so let's use it to spy on everyone and try to catch criminals".

    Instead they regulated it and stopped it from being used without a good reason. Recognising, quite rightly, that just because you are using a service to privately talk to someone, doesn't mean someone else should be able to freely listen in for any reason.

  • MrMonroe passed out on the floor now Registered User regular
    MrMonroe wrote: »
    In fact I'll be even more clear.

    Private doesn't mean sacrosanct. A participant in those private conversations reporting the nazi to discord who then step in and take it down? Fine. Identical to a citizen reporting a crime performed in a private residence to the police, who then step in to investigate.

    Popping into public discords to monitor them? Fine. They're public. I don't believe in free speech meaning you are immune to consequences.

    Actively monitoring private discords? Not fine.

    Actively monitoring private dms? Not fine.

    Taking action on a reported private DM by a participant? Fine.

    Ok, but to be clear, the current legal regime absolutely does not protect your hypothetically "private" communications from the people handling those communications, and those services absolutely will not show up in court to defend your right to this arrangement you're proposing. They're showing up right now on the side of "individual communities should do what they want" because they want the business from those communities, not because they believe you have privacy in your interpersonal communications.

    The people arguing that Discord should exert some control over whether its services end up getting used to create Actual Nazi clubhouses are not arguing that they should get some additional level of legal authority that they do not currently have, they are arguing that Discord should use the legal authority they already clearly have for good purposes.

    That's not particularly relevant. The issue is the mandate to eavesdrop officially, not whether it's really private right now. That's a separate issue. It's not really a gotcha to say "well there's nothing stopping them from doing something bad, so let's make it official that they do something bad regularly."

    Instead the response to that is "hey, good point, on top of not making it official to do the bad thing regularly, let's also make it so they can't do it without a good reason in the first place".

    Ok but I feel like you should recognize that yours is the far more radical position then, yeah? You're talking about a massive government intervention to prevent private companies from... some indeterminate class of monitoring depending on how "private" the users expect the communications to be... with the data they generate from handling our interpersonal communications?

  • Commander Zoom Registered User regular
    edited December 2021
    For me, a big part of the issue with Discord et al - per the other sub-thread about Twitter, above - is that I believe, in the near future, any such measures are much more likely to be used not to find and silence nazis, but to help nazis to find and silence Others.

    The anti-wiretapping laws also referenced above, for example, were put in place by very different courts than we have now. Should the fascists regain power, I do not expect those laws to remain in force, or to be applied in the service of justice if they do.

    We should not, IMO, be doing anything to expand the surveillance state when the ultimate beneficiaries may very well be those who have already shown they will do anything within their power to secure and maintain that power in perpetuity.

  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited December 2021
    MrMonroe wrote: »
    MrMonroe wrote: »
    In fact I'll be even more clear.

    Private doesn't mean sacrosanct. A participant in those private conversations reporting the nazi to discord who then step in and take it down? Fine. Identical to a citizen reporting a crime performed in a private residence to the police, who then step in to investigate.

    Popping into public discords to monitor them? Fine. They're public. I don't believe in free speech meaning you are immune to consequences.

    Actively monitoring private discords? Not fine.

    Actively monitoring private dms? Not fine.

    Taking action on a reported private DM by a participant? Fine.

    Ok, but to be clear, the current legal regime absolutely does not protect your hypothetically "private" communications from the people handling those communications, and those services absolutely will not show up in court to defend your right to this arrangement you're proposing. They're showing up right now on the side of "individual communities should do what they want" because they want the business from those communities, not because they believe you have privacy in your interpersonal communications.

    The people arguing that Discord should exert some control over whether its services end up getting used to create Actual Nazi clubhouses are not arguing that they should get some additional level of legal authority that they do not currently have, they are arguing that Discord should use the legal authority they already clearly have for good purposes.

    That's not particularly relevant. The issue is the mandate to eavesdrop officially, not whether it's really private right now. That's a separate issue. It's not really a gotcha to say "well there's nothing stopping them from doing something bad, so let's make it official that they do something bad regularly."

    Instead the response to that is "hey, good point, on top of not making it official to do the bad thing regularly, let's also make it so they can't do it without a good reason in the first place".

    Ok but I feel like you should recognize that yours is the far more radical position then, yeah? You're talking about a massive government intervention to prevent private companies from... some indeterminate class of monitoring depending on how "private" the users expect the communications to be... with the data they generate from handling our interpersonal communications?

    Not really, since there's historical precedent for this, re wiretapping. If you missed that conversation, it was about data harvesting, and how governments haven't caught up to this invasion of privacy yet and begun to fully legislate it. Historically, wiretapping was legal, and there was nothing stopping governments from spying on your phone calls. There was a large push to make this illegal, but this happened decades after the invention of the technology. There were arguments at the time that it should remain legal in order to fight crime, etc. All of those arguments are extremely similar to the conversation we are having right now. Remember that phone companies are also private corporations, so that angle is also decades old.

    From my point of view, supporting this is a very extreme position, one I can only assume is considered ok because they've decided "any cost" is an acceptable price to pay to try and fix the problem. Or they just don't really know their history, since "private conversations are private" is a very traditional, historically well-supported position, and attempting to overturn it just because it is not yet legislated for this particular technology is fairly strange. That war has already been fought, and the decision was made to say no, it is not ok.

    Basically, the radical one is actually you. :)

  • Bogart Streetwise Hercules Registered User, Moderator mod
    You can probably disagree, even strongly disagree with someone, without comparing them to the SS. I feel certain you can at least try.

  • Mill Registered User regular
    Here's the thing: Discord is a business. People need to learn that businesses aren't your fucking friends or family. They exist to extract profit. Sooner or later, if Discord goes with their current approach of "teeheehee, our hands-off approach protects us!", they'll get a judge who says, "nope, now eat shit." At which point you're going to get a kneejerk reaction and Discord will be way up in the business of your private discords, if they don't just decide to yank the private feature altogether, because when that happens they are going to lose a shit ton of money. Hell, I'll bet that the first case to get their shit kicked in like that is going to be something dealing with child porn. There will also be a huge PR hit as well.

    Your best bet is to figure out a compromise. Random audits that mask server and user names, along with software that parses stuff for priority, are going to be the best bet for maintaining as much privacy as possible. A random audit does a few things. They don't need to monitor stuff 24/7; if it's random, that denies bad actors much of a chance to clean house and hide shit. It also means not much of a worry about the list getting public and assholes trying to brigade sites they don't like. Masking the server name and usernames ensures the random mook who has to look at stuff can't really do anything with it if they happen to be a shitty Nazi who manages to get a job there; especially if they can only recommend action that requires a committee they aren't on to sign off on it. A good AI can further reduce how much private information they need to read anyways. Yes, some asshole groups will start to try code words, and some of those are going to get popularized enough that an AI will keep an eye out.
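The masked random-audit idea could look something like this sketch. The salted hashing and the sampling helper are illustrative stand-ins under assumed names, not anything Discord actually ships:

```python
# Illustrative sketch of a masked random audit: sample servers uniformly at
# random, and replace server/user names with keyed hashes before a human
# reviewer sees anything. All names here are hypothetical.
import hashlib
import hmac
import random

def mask(name, salt):
    """Keyed hash of an identifying name, so a reviewer (even a bad-faith
    one) can't tie the transcript back to a real server or user."""
    return hmac.new(salt, name.encode(), hashlib.sha256).hexdigest()[:10]

def sample_for_audit(server_ids, k, seed=None):
    """Pick k servers uniformly at random; unpredictability denies bad
    actors a window to clean house before the audit lands."""
    rng = random.Random(seed)
    return rng.sample(server_ids, k)
```

Keeping the salt out of reviewers' hands is what makes the masking one-way from their side; only the committee that signs off on action would ever re-identify a server.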

    Again, we know that 230 doesn't protect them from their sites being used for illegal activity. Honestly, websites are probably one of the few things tech has brought about where there really aren't good analogs to compare them to. Phone doesn't work because it's often two-way, conference calls can be a real pain, it doesn't leave a transcript unless you go out of the way to make it happen, and it is actually not something some new rando can waltz into. A physical gathering place for like-minded people to meet also doesn't work. Geography and population are going to put hard limits on how many people show up. You'd have to set up some sort of transcription process to keep track of things, which means it's not going to let some rando easily join into ongoing conversations. Also, if shady shit is going down there, you'd probably see some like-minded individuals opt out, even if they agree 100% with the ideas and actions, because they figure if the authorities get wind of what's going on, everyone going to that location is getting a visit from the cops and possibly getting arrested. Both are also far less accessible than a free website.

    Part of the reason why we're dealing with some of the messes that social media has created is that websites are cheap. They are not bound by geography or population. Some random-ass chucklefuck in buttfuck nowhere, who would have probably been the obnoxious crank always running for school board but mostly ignored because he was the only one in town, can now find hundreds of similar-minded chucklefucks and organize, while believing that they are the majority, because it's pretty clear that most humans have a piss-poor grasp of numbers beyond a certain point; especially if they are ignorant of actual population figures and believe the government data is fake anyways. It's easy for people to jump into the conversations because, unless someone opts for a setup that purges stuff on a regular basis, it's there for people to read. This is where we see how these places can break from a physical location and actually become quite dangerous; especially when they believe that the authorities can never find them.

    We've also seen what "well, just leave it be, it's the price of freedom!" gets us. That's how Qanon and GOP fuckery has poisoned a ton of minds. That shit has been allowed to fester; I'm probably going to end up estranged from one of my parents pretty soon because of that shit. We've also seen that deplatforming does work. No, it might not be perfect, but it does do a number on these fuckers' ability to operate. I'll also address the argument of "but the Nazis can use this against us." It's a bad-faith argument because we've already seen what happens when the likes of the Nazis are in control. I mean, Russia, China and a number of other authoritarian regimes have access to the internet. Ask any good dissident from one of those regimes what you should do and they'll flat out tell you that you shouldn't use any services from a company that has been openly accepted by those regimes. Any company that gets the blessing of those regimes is not going to protect you and is actively allowing those authorities access to information they can use to find dissidents. Hell, if a company doesn't have a system those fuckers can exploit to find dissidents, and they don't have one they can readily implement, they'll just make one. Actually, the argument is just a color swap of the old shitty "making the perfect the enemy of the good." Floors me that it comes up so much on this board, because anyone with half a brain would realize that anything a human comes up with is going to have loopholes to exploit, and that a really shitty actor can find a way to twist it into something harmful.

  • MechMantis Registered User regular
    Okay I think I see where we're sorta talking past each other.


    To me, Discord is FAR more like a communications platform than a traditional "social media site" where shit's just publicly out and available and served to all and sundry.

    If this was Twitter or Facebook, fine, yes, absolutely subject general public communications to anti-Nazi screening because that gets actively spread by the platform for engagement via algorithmic bullshit.

    But due to the invite-only and generally private nature of the communications on Discord, this falls more under SMS/phone than a general public site to me.

    Historically there have been Big Fucking Problems with people just cracking open SMS communications to see what's inside; News of the World learned this the hard way.

    Further you can bet your ass that group texts have been used to organize the exact sort of heinous Nazi-loving bullshit we want to stop, and I will absolutely say that no, NO ONE, phone carrier, private company, or government, NO ONE, should be systemically examining SMS communications on the basis that they COULD be objectionable or criminal.

    Since I personally feel that Discord falls closer to SMS-style communications and the like than anything else, and because I object to carriers, companies, governments etc. snooping through SMS communications, I similarly object to carriers, companies, governments etc. snooping through Discord messages.

    However, I also get where, if you're treating Discord as more of a traditional social media site, you hit the conclusion of "Check it all, it's publicly available"; I personally disagree with the classification but I get the thought process leading to that conclusion.

    Now, that being said, you could DEFINITELY try to go after the image/file hosting aspects of Discord, but as I understand it they DO actually try to police that pretty hard regardless, and I have no issues with that being subjected to higher scrutiny, since they ARE actually hosting that content for use elsewhere; the files just end up being URLs.

  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited December 2021
    I definitely feel like I'm being talked past. To the point where I'm presented with arguments* that have nothing to do with what I said, and am at a loss to respond to because...well...who are they talking to?

    *beautifully put together ones, that I can even respect no less, they're just sadly not relevant.

  • DarkPrimus Registered User regular
    BREAKING:
    TwitterSafety has locked the well-respected Atlanta Antifascists (afainatl) out of their 5-year-old account under their new "private media" policy.

    Twitter is forcing removal of a 2018 tweet about the public actions of a white supremacist college student organizer.

    This is just one example of a huge wave that's occurring right now, where people (including journalists) who reported on people's ties to alt-right organizations are being reported for years-old tweets that did not violate any ToS when they were posted, but do under this new "private media" policy...

    No way to hold individuals accountable for their actions now, if they don't want to be held accountable for them.

  • Paladin Registered User regular
    Looks like Mr. Dorsey got out while the getting was good

    Marty: The future, it's where you're going?
    Doc: That's right, twenty-five years into the future. I've always dreamed of seeing the future, looking beyond my years, seeing the progress of mankind. I'll also be able to see who wins the next twenty-five World Series.
  • Aridhol Daddliest Catch Registered User regular
    DarkPrimus wrote: »
    BREAKING:
    TwitterSafety has locked the well-respected Atlanta Antifascists (afainatl) out of their 5-year-old account under their new "private media" policy.

    Twitter is forcing removal of a 2018 tweet about the public actions of a white supremacist college student organizer.

    This is just one example of a huge wave that's occurring right now, where people (including journalists) who reported on people's ties to alt-right organizations are being reported for years-old tweets that did not violate any ToS when they were posted, but do under this new "private media" policy...

    No way to hold individuals accountable for their actions now, if they don't want to be held accountable for them.

    Well, that was faster than I expected.

    I for one am super shocked.

  • Butters A glass of some milks Registered User regular
    Paladin wrote: »
    Looks like Mr. Dorsey got out while the getting was good

    I don't have an authoritative source, but I've read that he signed off on this right before leaving.

    PSN: idontworkhere582 | CFN: idontworkhere | Steam: lordbutters | Amazon Wishlist
  • Paladin Registered User regular
    Either way, he's no longer the lightning rod for the company. If they want to reverse it, they can anytime

    Marty: The future, it's where you're going?
    Doc: That's right, twenty five years into the future. I've always dreamed on seeing the future, looking beyond my years, seeing the progress of mankind. I'll also be able to see who wins the next twenty-five world series.
  • Options
    DarkPrimusDarkPrimus Registered User regular
    Aridhol wrote: »
    DarkPrimus wrote: »
    BREAKING:
    TwitterSafety has locked the well-respected Atlanta Antifascists (afainatl) out of their 5-year-old account under their new "private media" policy.

    Twitter is forcing removal of a 2018 tweet about the public actions of a white supremacist college student organizer.

    This is just one example of a huge wave that's occurring right now, where people (including journalists) who reported on people's ties to alt-right organizations are being reported for years-old tweets that did not violate any ToS when they were posted, but under this new "private media" policy...

    No way to hold individuals accountable for their actions now, if they don't want to be held accountable for them.

    Well, that was faster than I expected.

    I for one am super shocked.

    So, if the tweet I linked earlier disappears, this is why:
    Nazis just got chadloder suspended again for posting public photos documenting public personality Brandon Straka engaging in newsworthy behavior, namely financing and participating in the January 6th attack on the US Capitol, for which he has been convicted in federal court.

  • Options
    CptHamiltonCptHamilton Registered User regular
    Wow, screenshots of tweets count as "private media"? That's something alright.

    PSN,Steam,Live | CptHamiltonian
  • Options
    ForarForar #432 Toronto, Ontario, CanadaRegistered User regular
    Well I guess we just doubled the reason why people need to quote the content of their tweets for anything remotely important.

    It simply being remotely important may be the very reason it returns to the aether under this new bullshit.

    First they came for the Muslims, and we said NOT TODAY, MOTHERFUCKER!
  • Options
    ElldrenElldren Is a woman dammit ceterum censeoRegistered User regular
    So can we burn them all to the ground now?

    Guys?

    fuck gendered marketing
  • Options
    Phoenix-DPhoenix-D Registered User regular
    Wow, screenshots of tweets count as "private media"? That's something alright.

    Twitter moderation is very stupid.

    Since my last tweet, @chadloder
    has gone through multiple cycles of getting locked out, appealing, and being brought back online, only to have trolls file fraudulent complaints **against the same posts that made it through appeals.**

  • Options
    PolaritiePolaritie Sleepy Registered User regular
    That just tells me the process has effectively no human eyes on it whatsoever. I'd bet it's literally just:
    if(reports>X)
    ban();

    Steam: Polaritie
    3DS: 0473-8507-2652
    Switch: SW-5185-4991-5118
    PSN: AbEntropy
  • Options
    DoodmannDoodmann Registered User regular
    The fact that they don't even have a white hat flag/filter or whatever you'd call it for things that have already gone through appeals is pretty stunning.

    Whippy wrote: »
    nope nope nope nope abort abort talk about anime
    I like to ART
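    [Editor's note: a minimal sketch of the moderation logic the two posts above describe. Nothing here reflects Twitter's actual systems; every name and threshold is hypothetical. It only illustrates why a bare report-count rule is trivially brigade-able, and how the appeal-exemption ("white hat") flag Doodmann proposes breaks the lock/appeal/re-report loop.]

    ```python
    # Hypothetical auto-moderation sketch. REPORT_THRESHOLD and all class/
    # function names are invented for illustration.

    REPORT_THRESHOLD = 10  # arbitrary example value


    class Post:
        def __init__(self, post_id):
            self.post_id = post_id
            self.reports = 0
            self.cleared_on_appeal = False  # the proposed "white hat" flag
            self.locked = False


    def report(post):
        # Every report, fraudulent or not, counts the same toward the threshold.
        post.reports += 1


    def run_moderation(post):
        # Naive rule: lock purely on report count, no human review.
        # A post that already survived an appeal is exempt, so re-reporting
        # the same content cannot re-lock it.
        if post.cleared_on_appeal:
            return
        if post.reports >= REPORT_THRESHOLD:
            post.locked = True


    def appeal(post):
        # A successful review unlocks the post and marks it immune to further
        # automated locks (workable here because the content can't be edited
        # afterward, matching Kupi's point below about tweets).
        post.locked = False
        post.reports = 0
        post.cleared_on_appeal = True
    ```

    Without the `cleared_on_appeal` check, the loop Phoenix-D describes upthread runs forever: trolls re-report, the threshold trips again, the same post goes back through appeal. With it, one successful appeal ends the cycle.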
  • Options
    MorganVMorganV Registered User regular
    Phoenix-D wrote: »
    Wow, screenshots of tweets count as "private media"? That's something alright.

    Twitter moderation is very stupid.
    Since my last tweet, @chadloder
    has gone through multiple cycles of getting locked out, appealing, and being brought back online, only to have trolls file fraudulent complaints **against the same posts that made it through appeals.**

    While I agree it is very very stupid, I do hope a consequence of this is that dipshits like Damien Jarrett and Brandon Straka get fully Streisand'd as a result.

    It won't be enough, because there'll be people who use it to cancel criticism without the furore and publicity. But fuck all these people.

  • Options
    Lord_AsmodeusLord_Asmodeus goeticSobriquet: Here is your magical cryptic riddle-tumour: I AM A TIME MACHINERegistered User regular
    The important question, I suppose, is: can this be abused to fuck over corpos and rightwingers enough to actually get it fixed?

    Capital is only the fruit of labor, and could never have existed if Labor had not first existed. Labor is superior to capital, and deserves much the higher consideration. - Lincoln
  • Options
    Commander ZoomCommander Zoom Registered User regular
    The important question, I suppose, is: can this be abused to fuck over corpos and rightwingers enough to actually get it fixed?

    Sounds like you're thinking it will ever be applied in a fair and unbiased way.
    (It will not.)

  • Options
    Lord_AsmodeusLord_Asmodeus goeticSobriquet: Here is your magical cryptic riddle-tumour: I AM A TIME MACHINERegistered User regular
    If it's an automated system it's easy to abuse

    Capital is only the fruit of labor, and could never have existed if Labor had not first existed. Labor is superior to capital, and deserves much the higher consideration. - Lincoln
  • Options
    KupiKupi Registered User regular
    Doodmann wrote: »
    The fact that they don't even have a white hat flag/filter or whatever you'd call it for things that have already gone through appeals is pretty stunning.

    I was going to say "probably to ensure that you can't edit a post that's been reinstated to be inflammatory now that it's invulnerable", but you can't even edit Tweets, apparently.

    My favorite musical instrument is the air-raid siren.
  • Options
    Phoenix-DPhoenix-D Registered User regular
    Kupi wrote: »
    Doodmann wrote: »
    The fact that they don't even have a white hat flag/filter or whatever you'd call it for things that have already gone through appeals is pretty stunning.

    I was going to say "probably to ensure that you can't edit a post that's been reinstated to be inflammatory now that it's invulnerable", but you can't even edit Tweets, apparently.

    For the best

  • Options
    shrykeshryke Member of the Beast Registered User regular
    Polaritie wrote: »
    That just tells me the process has effectively no human eyes on it whatsoever. I'd bet it's literally just:
    if(reports>X)
    ban();

    Yeah, it sounds like this is a shitshow right now because the right is very very good at brigading like this. They will quickly coordinate to mass spam complaints if given the chance and it seems like Twitter's system is very very stupid.

  • Options
    MayabirdMayabird Pecking at the keyboardRegistered User regular
    Facebook and Google don't just amplify clickbait misinformation; they actively fund it with ad dollars. I had noticed a proliferation of shitty "news" sites recently that all posted the same "article", and this story is about how that's absolutely been rewarded. Most of it appears to be just spammers trying to make a quick buck, but there's obvious room for political brokers to start doing the same thing to amplify their messages while getting paid for it - and they're probably already doing it.

  • Options
    PhyphorPhyphor Building Planet Busters Tasting FruitRegistered User regular
    Yes, if you put ads on a website they will be paid out whether the content of that site is clickbait or not

  • Options
    shrykeshryke Member of the Beast Registered User regular
    edited December 2021
    Mayabird wrote: »
    Facebook and Google don't just amplify clickbait misinformation; they actively fund it with ad dollars. I had noticed a proliferation of shitty "news" sites recently that all posted the same "article", and this story is about how that's absolutely been rewarded. Most of it appears to be just spammers trying to make a quick buck, but there's obvious room for political brokers to start doing the same thing to amplify their messages while getting paid for it - and they're probably already doing it.

    Is there some part of that where they actively fund it? Cause from what I could tell, the article just seemed to be saying "clickbait makes money". People game these systems to make money, and the companies either can't stop it and/or don't care to try too hard to stop it. Which is a problem, but it's different than, say, the thing with AT&T and OAN. I didn't see anything in there where they were directly funding these operations by explicit choice, but maybe I missed it.

    shryke on
  • Options
    TomantaTomanta Registered User regular
    shryke wrote: »
    Mayabird wrote: »
    Facebook and Google don't just amplify clickbait misinformation; they actively fund it with ad dollars. I had noticed a proliferation of shitty "news" sites recently that all posted the same "article", and this story is about how that's absolutely been rewarded. Most of it appears to be just spammers trying to make a quick buck, but there's obvious room for political brokers to start doing the same thing to amplify their messages while getting paid for it - and they're probably already doing it.

    Is there some part of that where they actively fund it? Cause from what I could tell, the article just seemed to be saying "clickbait makes money". People game these systems to make money, and the companies either can't stop it and/or don't care to try too hard to stop it. Which is a problem, but it's different than, say, the thing with AT&T and OAN. I didn't see anything in there where they were directly funding these operations by explicit choice, but maybe I missed it.

    Considering the number of times Google has pushed the same clickbait articles from different sites (for the hundredth time, no, I don't care what Paltrow said about Hawkeye), I'm comfortable saying that Google actively funds it. At the very least, they have enough machine learning/AI expertise to stop all the SEO gaming if they wanted to.

  • Options
    ElldrenElldren Is a woman dammit ceterum censeoRegistered User regular
    shryke wrote: »
    Mayabird wrote: »
    Facebook and Google don't just amplify clickbait misinformation; they actively fund it with ad dollars. I had noticed a proliferation of shitty "news" sites recently that all posted the same "article", and this story is about how that's absolutely been rewarded. Most of it appears to be just spammers trying to make a quick buck, but there's obvious room for political brokers to start doing the same thing to amplify their messages while getting paid for it - and they're probably already doing it.

    Is there some part of that where they actively fund it? Cause from what I could tell, the article just seemed to be saying "clickbait makes money". People game these systems to make money, and the companies either can't stop it and/or don't care to try too hard to stop it. Which is a problem, but it's different than, say, the thing with AT&T and OAN. I didn't see anything in there where they were directly funding these operations by explicit choice, but maybe I missed it.

    The explicit choice is in the systems they set up that reward clickbait. They could change their ad revenue and content recommendation systems to not endlessly push clickbait, but the fact is that it makes them lots of fucking money, and their ad systems pay those clickbait farms pretty well.

    They might not explicitly intend to destabilize the global political system but intent doesn’t really matter when the outcomes are this dire.

    fuck gendered marketing
This discussion has been closed.