
Twitter Continues To Have A [Twitter] Problem


  • AngelHedgie Registered User regular
    Daedalus wrote: »
    Daedalus wrote: »
    Mr. Fusion wrote: »
    Phoenix-D wrote: »
    What happens if that case goes the other way?

    Everyone shrugs and goes "duh."

    Good to know.

    Any other responses? I don't really understand the implications of favoring objective intent vs subjective intent. The justices most of us like all lined up to oppose Clarence Thomas. Why?

    Because they, like the rest of us, live in a society that lauds as a victory for free speech the right of neo-nazis to march in force through a community that was not only significantly Jewish, but home to a large population of Holocaust survivors.

    We, as a society, tend to not appreciate the power of speech at the same time we put it on a pedestal. Which is why we routinely respond to people being genuinely terrorized with "it's only words".

    If Neo-Nazis cannot march, then neither can BLM or protests against the Trump Administration. BLM terrorizes non-blacks and families of law enforcers; many recognize them as the black KKK. Trump protests terrorize those who voted for and support the president, and intimidate them from speaking their mind in public.

    Is that what you want? Maybe it is, the way things are going who knows.

    Somehow, I think we can mark a difference between black protesters demanding that our society treat their lives as valuable, and adherents to one of the most murderous ideologies ever.

    Edit: I thought this was a good response to the argument that I saw elsewhere:
    snip

    There's no "we". The people who would be marking such a difference are not a "we", they are a "they", and they are not trustworthy to do it. You're gonna have the next seven or so years of this shit to remind you of that, it'll probably sink in by the end.

    At least in my opinion, this is less an argument for inaction lest the wrong people get in power (which doesn't work for a number of reasons, the foremost being that binding your hands doesn't bind theirs), and more an argument for making sure those people aren't given power in the first place.

    So, this: "making sure those people aren't given power in the first place", didn't work, and in fact is impossible in a democracy. However, having deeply entrenched mores about fundamental rights that the people in power aren't allowed to violate, while likewise imperfect, seems to work better than "hope that the wrong people never win an election". Crucially, for these mores to stick you need to not uproot them when it's convenient for your side.

    I know you're very much an "ends-justify-the-means" kind of guy, which is why I've been avoiding the "natural rights" argument in favor of the more pragmatic "the people you want to give this power to are actively hostile to you" argument.

    First, the last few years (and especially the last few months) have illustrated exactly how enduring those mores are when authoritarians find them an obstacle - namely, not very.

    Second, those mores you talk about have always had holes in them big enough for the marginalized to fall through. How many times have you heard criticism of minority groups taking offense at the soft bigotry of our society and the argument that they should "lighten up"? That's a way these mores silence them.

    The reality is free speech absolutism doesn't work, because the ability of the marginalized to feel safe in speaking out is necessarily in conflict with the privileged using their speech to assert dominance. If you want the marginalized to feel that they have a voice in society, this necessarily means you must stop the privileged from using their voice to beat them out of society. Conversely, if you are unwilling to restrain the privileged in using their voice, you need to accept that the marginalized will leave once abuse makes participation no longer safe.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Squidget0 Registered User regular
    edited October 2017
    I'm genuinely curious. How is all of this moderation supposed to be implemented on a platform like Twitter? What would it look like?

    Stronger automated harassment handling? We've seen with YouTube's content protection systems how that goes. The anti-abuse automation becomes a vector for abuse. Show me an automated system that bans harassers, and I'll show you a twitter bot that can get any account banned through a set of carefully curated harassment reports. Automated systems are inherently fragile, and terrible at separating the kind of nuance that makes true threats distinct from rap lyrics or political metaphor.
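The failure mode described here can be sketched concretely. A toy illustration (invented numbers and hypothetical function names, not any real system): a ban rule keyed on raw report counts is trivially brigaded, while weighting each report by the reporter's track record makes a flood of throwaway-account reports nearly worthless.

```python
def naive_autoban(report_count: int, threshold: int = 50) -> bool:
    """The fragile rule: ban once enough reports arrive, no questions asked."""
    return report_count >= threshold

def weighted_autoban(report_weights: list[float], threshold: float = 50.0) -> bool:
    """Weight each report by the reporter's credibility (e.g. the share of
    their past reports that moderators upheld), so a brigade of fresh
    throwaway accounts with no history contributes almost nothing."""
    return sum(report_weights) >= threshold

# A coordinated brigade: 200 reports from fresh accounts at ~5% credibility.
brigade = [0.05] * 200
```

Under the naive rule the brigade's 200 reports trigger a ban; under the weighted rule they sum to about 10 credibility points and fall well short of the threshold. This doesn't solve the nuance problem (true threats vs. rap lyrics), but it shows the report-abuse vector is a design choice, not an inevitability.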

    Human moderators? Even assuming you could find someone to pay thousands of real humans to moderate political shitposts, the people moderating are still going to be...human. Most of them probably won't agree with the opinions of the Wokeosphere on what speech deserves to be curtailed. What are the qualifications to be a moderator? College education? Being the right kind of people? Why would it possibly be in Twitter's interest to pay an army of moderators when you're going to keep using their service regardless?

    Government intervention? What makes you think you'd be the one writing and enforcing the speech laws, when your side is out of power? Isn't there a fairly big deal on the SJ left about how laws tend to be used to oppress the marginalized over the privileged? Why would hypothetical speech laws be used any differently than drug possession laws have been?

    The whole thing seems like a pipe dream. A distant fantasy that we can somehow enforce the same ideological conformity and agenda-setting in public spaces that we've been able to enforce in leftist enclaves. There is no meaningful path to get there, or even a clear picture of what "there" might look like.

  • AngelHedgie Registered User regular
    And to highlight that Twitter not handling abuse is a global phenomenon, BuzzFeed took a look at Twitter's harassment issues in India:
    Within minutes, more than 31,000 notifications crashed Rajendran’s phone. Trolls had orchestrated the harassment campaign using a hashtag, #PublicityBeepDhanya (replace “beep” with an expletive of choice), which became one of the top five trends in India for hours.

    The fact that a hashtag crafted specifically to abuse someone trended for hours became national news in India. Vijay, whose fans started the campaign, condemned the incident. The Network of Women in Media, India (NWMI), an organization that aims to promote gender equality within the Indian media, urged Twitter India to be “more sensitive to online abuse, specifically of women.”

    And from Twitter? Dead silence. In fact, the social network did nothing about the hashtag until Rajendran picked up the phone and contacted a personal connection at the company, a member of Twitter India’s public policy team who offered to “put a word in,” Rajendran told BuzzFeed News. An hour later, Twitter India removed the hashtag from its list of trending topics.

    This, by the way, is why the "firehose into a red Solo cup" argument doesn't hold water. Monitoring the top hashtags and killing any abusive ones is the very least the company's abuse team could be doing, and they're fucking even that up.

  • Aridhol Daddliest Catch Registered User regular
    Nyysjan wrote: »
    Aridhol wrote: »
    Quid wrote: »
    Aridhol wrote: »
    Quid wrote: »
    Aridhol wrote: »
    Aridhol wrote: »
    Nyysjan wrote: »
    Aridhol wrote: »
    Nyysjan wrote: »
    Phoenix-D wrote: »
    Quid wrote: »
    I cannot fathom in what way allowing the government to take down and fine people/platforms for advocating the genocide of others would be a terrible power for Trump to have.

    The alt-right loves to cry about immigration / interracial marriage / non-whites existing being "white genocide", so if you completely fuck up the law's writing, maybe that.
    Though if we're at a point where the cops, the courts, the legislature and the people would be willing to go along with that shit, well, I don't think the "no advocating genocide" law is the problem.

    The idea that hate speech laws would let fascists attack social justice advocates presumes that
    A: Fascists give a fuck about precedent.
    B: Fascists are not attacking them already.

    So the solution is to make it legal and make it easier to do? Create a nightmare see-saw of speech laws?
    No, the solution is to make sane, well-crafted hate speech laws, and assume that the majority of the nation is not willing to start a race war.
    Because if the majority of the nation is willing to start a race war, well, the battle's already lost.

    Can't wait to see what sane, well crafted, hate speech laws Roy Moore creates for everyone.

    I mean... that argument applies to all laws.

    What is the legislative power with which you would trust Roy Moore? Is the solution to the danger he poses to people to just scrub out the concept of legislating?

    Sorry to channel Spool here but it's a Right and shouldn't be subject to the prevailing political winds of the day.

    I don’t personally consider being opposed to advocating the death of ethnic groups a political fad.

    And, again, we limit speech already.

    I'm opposed to those things as well. No one is defending the shit ideas these people have.

    I know we limit speech already. I believe it's a reasonable position, however, that there is a line that should not be crossed. Treating people who have reservations about future curtailing of speech like they're directly supporting racist shitbags is fine, I guess, but it's not the convincing argument you imagine it to be.

    Then argue as to why that’s the case with specific ideas instead of vague statements about Roy Moore existing. Bad legislation was passed by people like him, yes. That is no reason to not try for other legislation when we are able.

    I have suggested a couple of legislative ideas in the thread but I'll reiterate.

    I would like to see:
    • Legislation that forces companies to respond to user reports within a specific timeframe and penalties for failing to do so. It should be financially painful to run a cesspool like Twitter.
    • Laws against "Directive" speech that calls/asks for violence or harmful action against a specific group or person.
    • Stronger penalties for direct threats. You shouldn't just get a police visit asking "did you mean it?" when you tell someone you're going to rape them. This should get you an arrest record and prosecution.

    Things I wouldn't like to see:
    • Legislation that makes it a crime to deride, make fun of, or say you hate a person or group. E.g. "I think Gingers are dumb and hope they go extinct" or "Homosexuality should be illegal". Both of those statements are reprehensible but should not be illegal. They could, however, be reported by users on "NewTwitter" and get you banned.
    • Laws restricting criticism (harsh, vulgar, etc...) of the government, government officials or the country


    I'll look forward to everyone else's ideas :)
    What is your take on priests calling abortion doctors murderers and claiming they kill babies by putting them in blenders?
    Because that sort of talk pretty directly leads to abortion doctors getting harassed, and even killed.
    Most people here are not talking about someone saying they hate a person or a group, but about organized harassment, implied or even explicit talk about "solutions", and the spreading of obvious falsehoods that make people want to take violent action (anyone remember the name of the dude who shot up a black church because he thought there was a race war going on that people were ignoring?). That shit is not ok.

    It's easy to argue against hate speech laws if you think hate speech is just someone saying a not very nice thing.
    But actual examples of hate speech are a lot harder to defend.

    I would say that it is horrible and if it was presented to me on any platform (TV, twitter, facebook) I would report it.
    I don't however think it should be illegal.

    "This might make someone do something" is not where I would draw the line.

  • Nyysjan Finland Registered User regular
    Aridhol wrote: »
    snip
    I would say that it is horrible and if it was presented to me on any platform (TV, twitter, facebook) I would report it.
    I don't however think it should be illegal.

    "This mightwill make someone do something" is not where I would draw the line.
    When people are told by someone they accept as an authority that babies are being murdered, eventually someone will act to stop it.

  • Aridhol Daddliest Catch Registered User regular
    edited October 2017
    Removed - Mod Decree

    My free speech is being curtailed!

    j/k :)

  • Elki get busy Moderator, ClubPA mod
    We don’t need to have a general free speech debate here. Focus on twitter/social media policy.

  • AngelHedgie Registered User regular
    Squidget0 wrote: »
    I'm genuinely curious. How is all of this moderation supposed to be implemented on a platform like Twitter? What would it look like?

    Well, here are two very basic steps that Twitter is nonetheless currently fucking up:

    * Monitoring for trending hashtags that are abusive/hateful. As the story I posted above noted, in India there was a hashtag intended as directed abuse towards one person, which became a major domestic story there. Twitter did nothing until the victim reached out to a personal contact at the company. This is unacceptable. Twitter should be monitoring trending hashtags, and dealing with any that are abusive or directing hate.

    * De-verifying bigots. It is ridiculous that Richard Spencer and David Duke have verified accounts. Twitter should be going through their verified accounts, and pulling the check from bigots on the platform.
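The first of those steps is mechanically cheap. A minimal sketch (hypothetical function names, with a crude term blocklist standing in for whatever real classifier and human review an abuse team would actually use):

```python
def is_abusive(hashtag: str, blocklist: set[str]) -> bool:
    """Flag a hashtag if it contains any known abusive term.

    A substring check against a blocklist is a deliberately crude stand-in;
    the point is only that the trending list is small enough to screen.
    """
    tag = hashtag.lower()
    return any(term in tag for term in blocklist)

def sweep_trending(trending: list[str], blocklist: set[str]) -> list[str]:
    """Return the trending hashtags that should be pulled for review."""
    return [tag for tag in trending if is_abusive(tag, blocklist)]
```

The trending list at any moment is a few dozen entries per region, so even a periodic sweep like this, feeding flagged tags to a human, would have caught #PublicityBeepDhanya in minutes rather than hours.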

  • ArbitraryDescriptor changed Registered User regular
    edited October 2017
    Squidget0 wrote: »
    Stronger automated harassment handling? We've seen with YouTube's content protection systems how that goes. The anti-abuse automation becomes a vector for abuse. Show me an automated system that bans harassers, and I'll show you a twitter bot that can get any account banned through a set of carefully curated harassment reports. Automated systems are inherently fragile, and terrible at separating the kind of nuance that makes true threats distinct from rap lyrics or political metaphor.

    Bots seem like a separate issue, but false report abuse seems like a solvable issue. We do it here:

    Tier 1: Normal
    Tier 2: "Probation" — take their report button away for tweets not directed at them, and temp ban them.
    Tier 3: Take their report button away even for tweets directed at them, and make them unfollowable.

    New accounts start in probation for x time.
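That tier scheme can be sketched in a few lines (hypothetical names throughout; this is the forum-style demotion ladder described above, not anything Twitter actually runs):

```python
from dataclasses import dataclass
from enum import IntEnum

class Tier(IntEnum):
    NORMAL = 1      # full reporting privileges
    PROBATION = 2   # reports only count for tweets aimed at you; temp ban
    RESTRICTED = 3  # no report button at all, and the account is unfollowable

@dataclass
class Account:
    name: str
    tier: Tier = Tier.PROBATION  # new accounts start in probation for x time

def can_report(reporter: Account, tweet_directed_at_reporter: bool) -> bool:
    """Whether this account's reports should even enter the review queue."""
    if reporter.tier is Tier.NORMAL:
        return True
    if reporter.tier is Tier.PROBATION:
        return tweet_directed_at_reporter
    return False  # RESTRICTED: the button is gone entirely

def punish_false_reports(acct: Account) -> None:
    """Demote an account caught filing bogus reports, one tier at a time."""
    if acct.tier < Tier.RESTRICTED:
        acct.tier = Tier(acct.tier + 1)
```

The design choice is the same one this forum uses: abuse of the reporting tool costs you the tool, which makes mass false-report brigades self-extinguishing.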

    Or whatever. That YouTube doesn't is neither here nor there.

    Moderating perfectly all at once is too high a bar. But, I suspect, good moderation leads to less need of it through deterrence, while the current "no moderation" approach encourages abuse.

    Start with actually taking death threats seriously, go from there.

  • Zibblsnrt Registered User regular
    Squidget0 wrote: »
    Human moderators? Even assuming you could find someone to pay thousands of real humans to moderate political shitposts, the people moderating are still going to be...human. Most of them probably won't agree with the opinions of the Wokeosphere on what speech deserves to be curtailed. What are the qualifications to be a moderator? College education? Being the right kind of people? Why would it possibly be in Twitter's interest to pay an army of moderators when you're going to keep using their service regardless?

    That's actually how some of the larger sites do it, with exactly the problems you'd expect.

    Aside from algorithmically banning people who get a certain density of reports automatically - that's definitely a thing; a friend of mine's a constant target of it - Facebook's reports tend to get looked over first by huge cube-farms in the Philippines who have quotas that give them a window of only a few seconds for each individual report. Add their own views on what is and isn't hate speech/harassment/etc., and add to that Facebook's standards, which are comically specific and obviously intended to protect some of the more common types of hate speech... yeah.

    Meanwhile, Twitter hasn't even gotten that little done yet.

  • Spaten Optimator Smooth Operator Registered User regular
    ArbitraryDescriptor wrote: »
    Start with actually taking death threats seriously, go from there.

    That's the real moderation problem--they don't want to do anything. The difficulties of doing enforcement only matter if Twitter bothers trying in the first place.

  • Apothe0sis Have you ever questioned the nature of your reality? Registered User regular
    Why would bigots need to be de-verified?

    Presumably the point is that anyone of sufficient public standing can be identified from imitators and charlatans. Using it as some sort of carrot seems to do violence to the term 'verified'.

  • Phoenix-D Registered User regular
    Apothe0sis wrote: »
    Why would bigots need to be de-verified?

    Presumably the point is that anyone of sufficient public standing can be identified from imitators and charlatans. Using it as some sort of carrot seems to do violence to the term 'verified'.

    The actual effect it has is a social standing indicator. A mark of approval from Twitter. It's silly given the initial intent but that's how it went.

  • Harry Dresden Registered User regular
    edited October 2017
    Apothe0sis wrote: »
    Why would bigots need to be de-verified?

    Presumably the point is that anyone of sufficient public standing can be identified from imitators and charlatans. Using it as some sort of carrot seems to do violence to the term 'verified'.

    How about instead he was banned immediately once it was known he was a bigot? And not just an ordinary bigot, either.

  • reVerse Attack and Dethrone God Registered User regular
    Instead of de-verifying anyone, verified accounts should be subject to higher standards and supervision from actual human moderators.

  • AngelHedgie Registered User regular
    Phoenix-D wrote: »
    Apothe0sis wrote: »
    Why would bigots need to be de-verified?

    Presumably the point is that anyone of sufficient public standing can be identified from imitators and charlatans. Using it as some sort of carrot seems to do violence to the term 'verified'.

    The actual effect it has is a social standing indicator. A mark of approval from Twitter. It's silly given the initial intent but that's how it went.

    This is, much like the rest of the stuff we've discussed, Twitter's own fault - they actually pulled Milo's verified status after an incident, cementing the idea of the checkmark as endorsement.

    But besides that, why should someone like Richard Spencer be given protection over his name, when he openly espouses harming others for theirs? This again comes back to tolerance as peace treaty - if Spencer wants to advocate harming others, why should the rest of us protect him from harm?

  • Spaten Optimator Smooth Operator Registered User regular
    edited October 2017
    reVerse wrote: »
    Instead of de-verifying anyone, verified accounts should be subject to higher standards and supervision from actual human moderators.

    "This is your only warning, Mr. President. Go back to retweeting compliments from Lou Dobbs."

  • Harry Dresden Registered User regular
    AngelHedgie wrote: »
    snip

    But besides that, why should someone like Richard Spencer be given protection over his name, when he openly espouses harming others for theirs? This again comes back to tolerance as peace treaty - if Spencer wants to advocate harming others, why should the rest of us protect him from harm?

    This is especially galling post-Charlottesville.

  • shryke Member of the Beast Registered User regular
    edited October 2017
    Squidget0 wrote: »
    I'm genuinely curious. How is all of this moderation supposed to be implemented on a platform like Twitter? What would it look like?

    snip

    Why should we give a fuck if they can implement it or not? If Twitter can't make their business model work without being a haven for Nazis, that's their problem. Twitter doesn't have some inalienable right to exist.

    Of course, they aren't even doing the basic steps anyway. There are plenty of big easy targets out there on Twitter they could sweep up in a day of light work.

  • Apothe0sis Have you ever questioned the nature of your reality? Registered User regular
    AngelHedgie wrote: »
    snip

    But besides that, why should someone like Richard Spencer be given protection over his name, when he openly espouses harming others for theirs? This again comes back to tolerance as peace treaty - if Spencer wants to advocate harming others, why should the rest of us protect him from harm?

    Having accurate information about the source of a tweet is a benefit to everyone.

    If part of the problem is that moderation is inconsistent then the same criticism applies here: the logic of verification is also inconsistent. The solution is thus a coherent and consistent approach to verification. If people continue to be unable to differentiate this from an endorsement then that is a people problem.

    Clearly the only way to achieve justice in this world is to reactivate Yiannopoulos' account, restore the blue tick, and presumably immediately ban him again for whatever transgressions occurred previously.

  • milski Poyo! Registered User regular
    Apothe0sis wrote: »
    snip

    Having accurate information about the source of a tweet is a benefit to everyone.

    If part of the problem is that moderation is inconsistent then the same criticism applies here: the logic of verification is also inconsistent. The solution is thus a coherent and consistent approach to verification. If people continue to be unable to differentiate this from an endorsement then that is a people problem.

    Clearly the only way to achieve justice in this world is to reactivate Yiannopoulos' account, restore the blue tick, and presumably immediately ban him again for whatever transgressions occurred previously.

    It isn't a people problem when verification is a form of endorsement. Twitter doesn't verify to confirm you are who you say you are; they do so to confirm you are the real account of somebody of interest. Even if Twitter doesn't explicitly agree with Nazis, verifying an actual Nazi confirms that Twitter believes his speech is a matter of public interest.

    I ate an engineer
  • Elki get busy Moderator, ClubPA mod
    Apothe0sis wrote: »
    Phoenix-D wrote: »
    Apothe0sis wrote: »
    Why would bigots need to be de-verified?

    Presumably the point is that anyone of sufficient public standing can be identified from imitators and charlatans. Using it as some sort of carrot seems to do violence to the term 'verified'.

    The actual effect it has is a social standing indicator. A mark of approval from Twitter. It's silly given the initial intent but that's how it went.

    This is, much like the rest of the stuff we've discussed, Twitter's own fault - they actually pulled Milo's verified status after an incident, cementing the idea of the checkmark as endorsement.

    But besides that, why should someone like Richard Spencer be given protection over his name, when he openly espouses harming others for theirs? This again comes back to tolerance as peace treaty - if Spencer wants to advocate harming others, why should the rest of us protect him from harm?

    Having accurate information about the source of a tweet is a benefit to everyone.

    If part of the problem is that moderation is inconsistent then the same criticism applies here: the logic of verification is also inconsistent. The solution is thus a coherent and consistent approach to verification. If people continue to be unable to differentiate this from an endorsement then that is a people problem.

    Clearly the only way to achieve justice in this world is to reactivate Yiannopoulos' account, restore the blue tick, and presumably immediately ban him again for whatever transgressions occurred previously.

    The concept of individual personal responsibility has no meaning in a social network of 300 million. The question is: what is the system designed to do, and what is the system doing? The answers should be one and the same. If the system isn't doing what it's supposed to be doing, they need to reevaluate how to design a system that serves the intended purpose or, more fundamentally, whether they have the wrong goals altogether.

  • ArbitraryDescriptor changed Registered User regular
    Apothe0sis wrote: »
    Why would bigots need to be de-verified?

    Presumably the point is that anyone of sufficient public standing can be identified from imitators and charlatans. Using it as some sort of carrot seems to do violence to the term 'verified'.

    How about instead he was banned immediately once it was known he was a bigot? And not just an ordinary bigot, either.

    Phase 2 of my pipe-dream is to get all the social networks to agree to cross-police. So in that impossible scenario there would be long term value in keeping the verified accounts around until a ban on one network is a ban on all of them; then you would drop the hammer.

  • AngelHedgie Registered User regular
    Elki wrote: »
    Apothe0sis wrote: »
    Phoenix-D wrote: »
    Apothe0sis wrote: »
    Why would bigots need to be de-verified?

    Presumably the point is that anyone of sufficient public standing can be identified from imitators and charlatans. Using it as some sort of carrot seems to do violence to the term 'verified'.

    The actual effect it has is a social standing indicator. A mark of approval from Twitter. It's silly given the initial intent but that's how it went.

    This is, much like the rest of the stuff we've discussed, Twitter's own fault - they actually pulled Milo's verified status after an incident, cementing the idea of the checkmark as endorsement.

    But besides that, why should someone like Richard Spencer be given protection over his name, when he openly espouses harming others for theirs? This again comes back to tolerance as peace treaty - if Spencer wants to advocate harming others, why should the rest of us protect him from harm?

    Having accurate information about the source of a tweet is a benefit to everyone.

    If part of the problem is that moderation is inconsistent then the same criticism applies here: the logic of verification is also inconsistent. The solution is thus a coherent and consistent approach to verification. If people continue to be unable to differentiate this from an endorsement then that is a people problem.

    Clearly the only way to achieve justice in this world is to reactivate Yiannopoulos' account, restore the blue tick, and presumably immediately ban him again for whatever transgressions occurred previously.

    The concept of individual personal responsibility has no meaning in a social network of 300 million. The question is: what is the system designed to do, and what is the system doing? The answers should be one and the same. If the system isn't doing what it's supposed to be doing, they need to reevaluate how to design a system that serves the intended purpose or, more fundamentally, whether they have the wrong goals altogether.

    The idea that verification was an implicit endorsement didn't just come out of thin air; it arose from the way it was implemented - that it was limited only to people of note, that it was framed more as a tool to protect them as opposed to protecting the userbase as a whole, etc.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Heffling No Pic Ever Registered User regular
    edited October 2017
    Xaquin wrote: »
    Aridhol wrote: »
    I think that's a good point.
    Why aren't groups who espouse or advocate for genocide or ethnic cleansing classified as terrorist organizations?

    Can you put an ISIS flag on the back of your coal roller and not get arrested? I doubt it.

    because the gop would lose some support if various militias, the klan, and neo nazis were labeled terrorists and treated as such

    It's a terrible time to live in when white privilege allows one to be a domestic terrorist without being punished as such. See the White Nationalist Rally in Virginia, August 12th, for reference.

    Social media like Twitter is the incubator that allows for such terrible events to grow and hatch.

    Heffling on
  • AngelHedgie Registered User regular
    And in another example of "Twitter's abuse team needs to do their basic job", they were repeatedly warned about a Russian troll account masquerading as a GOP group, and did nothing:
    Twitter took 11 months to close a Russian troll account that claimed to speak for the Tennessee Republican Party even after that state's real GOP notified the social media company that the account was a fake.

    The account, @TEN_GOP, was enormously popular, amassing at least 136,000 followers between its creation in November 2015 and when Twitter shut it down in August, according to a snapshot of the account captured by the Internet Archive just before the account was "permanently suspended."

    ...

    The @TEN_GOP account offered a lesson in how inflammatory tweets can be used to gain followers and influence. In contrast, the actual Tennessee GOP’s Twitter account, @tngop, has only 13,400 followers, despite being the Twitter voice of the state party since 2007.

    The actual Tennessee Republican Party tried unsuccessfully for months to get Twitter to shut @TEN_GOP down.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Heffling No Pic Ever Registered User regular
    Twitter wants to grow its user base no matter the cost. In their mind, the account with 136K followers had more "value" than one with a tenth of that following.

    It's the "Win at all costs" mentality that one expects from corporations, with predictable effects.

  • Harry Dresden Registered User regular
    Heffling wrote: »
    Twitter wants to grow its user base no matter the cost. In their mind, the account with 136K followers had more "value" than one with a tenth of that following.

    It's the "Win at all costs" mentality that one expects from corporations, with predictable effects.

    That's what they think they're doing, when in reality they're doing the exact opposite with accounts like that.

  • Ziggymon Registered User regular
    edited October 2017
    https://engadget.com/2017/10/19/twitter-safety-calendar/

    So it looks like Twitter will have some new safety features added starting November 3rd
    Avatars and headers with hateful imagery and symbols will no longer be allowed and tweets that contain them will be placed behind a filter. Twitter says it will release examples of what it considers "hateful imagery" once the policy is finalized so there can be no doubts what kind of symbols aren't welcome anymore. In addition, Twitter will begin blocking people's ability to sign up with hateful names on November 22nd.

    The platform's Safety Calendar also outlines when the rules it announced in the past will go live, including new measures to protect victims of non-consensual nudity and unwanted sexual advances. Further, Twitter will update its witness reporting procedure to take user relationships into account, so it can act faster if it's more likely that the reporter truly has witnessed rule violations. We'll find out how the microblogging platform plans to enforce its new rules soon, as well: it will reveal the factors it considers when reviewing user reports on November 14th.

    Edit here is the actual list of dates:

    https://blog.twitter.com/official/en_us/topics/company/2017/safetycalendar.html

    Ziggymon on
  • Nyysjan Finland Registered User regular
    all pictures with a swear word written in them to be removed while every confederate flag and nazi symbol other than the swastika to remain in 5. 4. 3. 2...

  • Aridhol Daddliest Catch Registered User regular
    Nyysjan wrote: »
    all pictures with a swear word written in them to be removed while every confederate flag and nazi symbol other than the swastika to remain in 5. 4. 3. 2...

    Come on, it's just a statue. It's my heritage! I never enslaved nobody!

  • DiannaoChong Registered User regular
    Nyysjan wrote: »
    all pictures with a swear word written in them to be removed while every confederate flag and nazi symbol other than the swastika to remain in 5. 4. 3. 2...

    All the user icons of certain people change to the number 88.

  • MorganV Registered User regular
    Nyysjan wrote: »
    all pictures with a swear word written in them to be removed while every confederate flag and nazi symbol other than the swastika to remain in 5. 4. 3. 2...
    Yep. While it's a good idea, until we see these policies actually being implemented and enforced, rather than just announced, by a company that's flagrantly ignored breaches of its TOS in the past, I'm going to remain skeptical.

    And just to be clear, I don't hold them to a 100% success rate of getting it right. Just a reasonable effort (which, as that spraypainting German YouTuber showed, is NOT what they've been making) will be sufficient. Because as someone else here said (sorry, can't remember who), getting rid of the most egregious will go a long way toward removing the likeminded from the public square.

  • dispatch.o Registered User regular
    I feel like if they ever even made an effort to enforce a policy they could easily make a dent in the attitude of the service.

    Even if you only review and catch 1% initially, over a long enough timeline you reduce the overall tweetshitters. If they'd had enforcement early on and made even a token effort, this discussion would be about how to catch people who skirt the edges of acceptable behavior and whether they deserve suspension. Instead everyone knows they don't do anything at all and it's one big garbage fire on the internet.

  • Harry Dresden Registered User regular
    Ziggymon wrote: »
    https://engadget.com/2017/10/19/twitter-safety-calendar/

    So it looks like Twitter will have some new safety features added starting November 3rd
    Avatars and headers with hateful imagery and symbols will no longer be allowed and tweets that contain them will be placed behind a filter. Twitter says it will release examples of what it considers "hateful imagery" once the policy is finalized so there can be no doubts what kind of symbols aren't welcome anymore. In addition, Twitter will begin blocking people's ability to sign up with hateful names on November 22nd.

    The platform's Safety Calendar also outlines when the rules it announced in the past will go live, including new measures to protect victims of non-consensual nudity and unwanted sexual advances. Further, Twitter will update its witness reporting procedure to take user relationships into account, so it can act faster if it's more likely that the reporter truly has witnessed rule violations. We'll find out how the microblogging platform plans to enforce its new rules soon, as well: it will reveal the factors it considers when reviewing user reports on November 14th.

    Edit here is the actual list of dates:

    https://blog.twitter.com/official/en_us/topics/company/2017/safetycalendar.html

    Too little, too late. They're still holding back too much.

  • Couscous Registered User regular
    Nyysjan wrote: »
    all pictures with a swear word written in them to be removed while every confederate flag and nazi symbol other than the swastika to remain in 5. 4. 3. 2...

    Swastikas to be replaced by German Imperial flags and kekistan flags.

  • Harry Dresden Registered User regular
    Couscous wrote: »
    Nyysjan wrote: »
    all pictures with a swear word written in them to be removed while every confederate flag and nazi symbol other than the swastika to remain in 5. 4. 3. 2...

    Swastikas to be replaced by German Imperial flags and kekistan flags.

    And Confederate flags.

  • Henroid Mexican kicked from Immigration Thread Centrism is Racism :3 Registered User regular
    Further, Twitter will update its witness reporting procedure to take user relationships into account, so it can act faster if it's more likely that the reporter truly has witnessed rule violations.
    In other words, the algorithm will throw out reports based on something stupid. This is vaguely worded and needs specificity ASAP. Because it can go one of two ways. It either means:
    - you can only report tweets having to do with people you follow
    - you CAN'T report tweets having to do with people you follow (bias)

    "Truly witnessed rule violations" though. That's... wow.

  • ArbitraryDescriptor changed Registered User regular
    Henroid wrote: »
    Further, Twitter will update its witness reporting procedure to take user relationships into account, so it can act faster if it's more likely that the reporter truly has witnessed rule violations.
    In other words, the algorithm will throw out reports based on something stupid. This is vaguely worded and needs specificity ASAP. Because it can go one of two ways. It either means:
    - you can only report tweets having to do with people you follow
    - you CAN'T report tweets having to do with people you follow (bias)

    "Truly witnessed rule violations" though. That's... wow.

    That phrasing confuses me, but I'll allow they're just being "bad at this" for the time being.

    Perhaps they meant that the report gets prioritized if it:

    - Is a DM to you
    - Mentions you
    - Is followed by you
    - Other

    Which makes a kind of sense; but it also means two people tweeting each other about killing Jews will be doing so in relative safety. Which isn't exactly a step backwards, but...

  • Harry Dresden Registered User regular
    Henroid wrote: »
    Further, Twitter will update its witness reporting procedure to take user relationships into account, so it can act faster if it's more likely that the reporter truly has witnessed rule violations.
    In other words, the algorithm will throw out reports based on something stupid. This is vaguely worded and needs specificity ASAP. Because it can go one of two ways. It either means:
    - you can only report tweets having to do with people you follow
    - you CAN'T report tweets having to do with people you follow (bias)

    "Truly witnessed rule violations" though. That's... wow.

    As long as they allow people like Richard Spencer to keep an account, their words are meaningless.

This discussion has been closed.