
Adpocalypse


Posts

  • Calica Registered User regular
    There's also the wrinkle that people value things more when they have to work for them. (This is true of all animals, actually; they enjoy foraging for food more than they enjoy food handed to them.) So if you make ISIS recruitment videos harder to find, the people who still manage to find them will be more inclined to listen.

  • SniperGuy SniperGuyGaming Registered User regular
    Calica wrote: »
    There's also the wrinkle that people value things more when they have to work for them. (This is true of all animals, actually; they enjoy foraging for food more than they enjoy food handed to them.) So if you make ISIS recruitment videos harder to find, the people who still manage to find them will be more inclined to listen.

    Aren't those people going to work to find them regardless of platform? We should still reduce the availability of those hateful inciting messages if at all possible. We shouldn't make it easy for terrorists to chat with each other.

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    Phyphor wrote: »
    I thought we were only concerned with Youtube (and Vimeo, sure), as they're what 99% of people use when they go looking for videos on the internet. Having fewer locations where extremists can post their videos, where they can only be found by people trying *really hard* to find them, is the whole point. Like, I'm fairly certain everyone here understands there's never going to be a 100% effective single tool to curb this shit, so the fact that it's being brought up as a counterargument feels like a red herring.

    They "post" them in the sense that I can post a youtube link here and you can watch it (without even leaving here too). You don't have to find the videos through youtube's search feature, they can be posted more or less directly to their audience (and any extra views would just be a bonus)

    Right, so then one has to go to a site that allows those sorts of videos to be posted, of which there can't be many. And which police can monitor. I don't know what it is you're arguing, as all of the ways you mention that people can use to bypass an algorithm or hide/code their message still lead to fewer people being exposed to that message. That's the whole point of what I'm saying.

    Dude, I can set up a web forum in minutes. These sites are legion, and the URLs are constantly changing to evade filtering like what redx described, as well as to evade the various military and intelligence agencies they're engaged with. I'm no expert on Islamic terrorism by any stretch, but I follow a researcher who is, and there must have been thousands of sites and URLs over the years as governments try to play, with the entire Internet, the same game of whack-a-mole that YouTube is playing. And the darknet too!

    Also, URL shorteners kinda fuck up any Referer-based blacklisting. Can't block those.
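    (For anyone curious what that kind of filtering even looks like: here's a minimal sketch of a naive Referer-domain blacklist in Python. The domains are made up for illustration, and the point is just that traffic arriving via a shortener redirect, or with the Referer stripped, never matches the list.)

    from urllib.parse import urlparse

    # Hypothetical blacklist of referring domains we don't want driving traffic.
    BLOCKED_REFERRER_DOMAINS = {"extremist-forum.example", "known-bad.example"}

    def is_blocked(referer_header):
        """Naive Referer-based check: block requests whose referring page is on a blacklisted domain."""
        if not referer_header:
            # No Referer at all (stripped by privacy settings or some redirects) -> nothing to match.
            return False
        domain = urlparse(referer_header).hostname or ""
        return any(domain == d or domain.endswith("." + d) for d in BLOCKED_REFERRER_DOMAINS)

    # A click routed through a URL shortener typically shows the shortener's
    # domain (or no Referer at all), so the real origin never hits the list.
    print(is_blocked("https://extremist-forum.example/thread/123"))  # True
    print(is_blocked("https://sho.rt/abc123"))                       # False
    print(is_blocked(None))                                          # False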

    But I also 100% agree with SniperGuy. We still gotta fight this fight, but expecting them to ever be able to win it is unreasonable.

    The problem is, and always will be: technology is a double-edged sword. It can be used for good or evil, and security can't actually exist outside an authoritarian dystopia.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited January 2018
    BTW, there's a seminal paper in psychology that specifically talks about the need to have humans who monitor automated systems, and how the humans assigned to such roles have difficulty maintaining their attention for long periods of time.

    It's called Ironies of Automation and I strongly recommend reading it if you have any interest at all in machine learning or the social effects of automation.

    PDF link: https://www.ise.ncsu.edu/wp-content/uploads/2017/02/Bainbridge_1983_Automatica.pdf

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    Just to give a real-world example, Facebook uses a combination of AI and user reports to flag obviously objectionable content.

    They still need literally thousands of people to review those reports:

    https://qz.com/1101455/facebook-fb-is-hiring-more-people-to-moderate-content-than-twitter-twtr-has-at-its-entire-company/
    Facebook has spent this year ratcheting up the number of content moderators it employs across the world after being sent on a hiring spree following a series of accusations that its algorithms alone couldn’t handle the task.

    When the company brings on these new workers, it will have roughly 8,750 people focused on moderation, which is more than double Twitter’s entire headcount.
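    Very roughly, a pipeline like the one described might look something like this. Everything here (thresholds, weights, field names) is invented for illustration; it's not how Facebook actually does it, just a sketch of "classifier plus user reports feeding a human review queue":

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class ReviewItem:
        sort_key: float                      # negative priority, so the highest priority pops first
        post_id: str = field(compare=False)

    def priority(model_score, user_reports):
        # Blend the classifier's confidence with user reports; the weights are made up.
        return model_score + 0.1 * min(user_reports, 10)

    class ModerationQueue:
        """The classifier and reports only rank content; humans make the final call."""
        def __init__(self, auto_remove_threshold=0.98):
            self.auto_remove_threshold = auto_remove_threshold
            self._heap = []

        def ingest(self, post_id, model_score, user_reports):
            if model_score >= self.auto_remove_threshold:
                return "auto-removed"        # only the most clear-cut cases skip humans
            heapq.heappush(self._heap, ReviewItem(-priority(model_score, user_reports), post_id))
            return "queued for human review"

        def next_for_review(self):
            return heapq.heappop(self._heap).post_id if self._heap else None

    q = ModerationQueue()
    q.ingest("post-1", model_score=0.99, user_reports=0)   # auto-removed
    q.ingest("post-2", model_score=0.60, user_reports=5)   # reviewers see this first
    q.ingest("post-3", model_score=0.55, user_reports=0)
    print(q.next_for_review())  # post-2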

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    edited January 2018
    Another problem with moderation by machine learning is that as more and more obfuscation techniques are used, more and more legitimate content gets caught in the crossfire. Especially with something with as many variables as video, getting it to work in the first place is a gigantic task; making sure it doesn't cannibalize your userbase with false positives will probably be a bigger one.

    As the space develops I'm sure they'll start integrating data from multiple points in the chain: network traffic (sources, destinations, etc.), video content, platform details like the age of the accounts, the number of views, and so on. Credit card fraud algorithms work a lot like this, but they had both 50 years to work on it and, more importantly, 50 years to accumulate historical datasets of fraudulent vs. non-fraudulent behavior. But once you know how they detect fraud (a gas purchase followed by a pair of shoes is a huge red flag), it's easy to introduce chaos into the system to make yourself invisible. The same will happen with this, and I think there's probably a theoretical upper limit on what can be accomplished without making your service unusable (short of human-level AI) that somebody better at math than me has probably already figured out.
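    To make the "multiple points in the chain" idea concrete, here's a toy scoring function over signals like the ones mentioned. The features, weights, and threshold are all invented for illustration, not anything a real platform uses, and the output routes to human review rather than auto-removal, precisely because of the false-positive problem above:

    from dataclasses import dataclass

    @dataclass
    class UploadSignals:
        account_age_days: float      # platform metadata
        uploads_last_24h: int        # burstiness of the account
        content_model_score: float   # 0..1 output of a (hypothetical) video classifier
        shares_from_known_bad: int   # e.g. links in from previously flagged communities

    def risk_score(s):
        # Toy linear blend; a real system would learn these weights from labeled
        # history, the way card networks learned what fraud looks like.
        score = 0.6 * s.content_model_score
        score += 0.15 * (1.0 if s.account_age_days < 2 else 0.0)
        score += 0.10 * min(s.uploads_last_24h, 10) / 10
        score += 0.15 * min(s.shares_from_known_bad, 5) / 5
        return min(score, 1.0)

    suspicious = UploadSignals(account_age_days=0.5, uploads_last_24h=8,
                               content_model_score=0.7, shares_from_known_bad=3)
    print(round(risk_score(suspicious), 2))  # 0.74 -> route to a human reviewer, don't auto-ban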

  • NSDFRand Florida Registered User regular
    RE radicalization and Islamism specifically: While the most visceral examples of radicalization by video are the videos of combat and executions with nasheeds playing in the background, and they do make up a significant proportion of the propaganda, part of what draws people to extremism (in the case of Islam specifically) is also a sort of fundamentalist utopianism. The idea of being part of the new Caliphate, with no externally or self-imposed borders, where the Ummah can live as they are meant to under righteous Islamic rule, is just as strong a draw. So when you talk about censoring recruitment propaganda, it's not going to be just the violent content, but the "keystone", so to speak, of their entire narrative of a utopian Caliphate (see Charlie Winter's write-up about IS and the Virtual Caliphate), which is likely to start catching every video showing anything positive about Islam, Islamic society, etc.

    And I only really see that as simultaneously feeding Islamist narratives while also alienating any Muslim content creators and viewers.

    That's also not going into the use of this propaganda, and the accounts associated with it, for collections purposes and network analysis.

  • shryke Member of the Beast Registered User regular
    edited January 2018
    Phyphor wrote: »
    shryke wrote: »
    Shivahn wrote: »
    I think, though, that forcing people to obfuscate it is a pretty big win? If it's hard to find ISIS recruitment videos, there's going to be a lot fewer people watching them. You also have the benefit of being able to generally use language for features. I don't think the fact that it's imperfect means it shouldn't be tried, just that it's not perfect. Making things harder for adversaries is still a win.

    Yup. This is the other bullshit argument people make. "Well, you can't stop everything! Or they'll just hide it!"

    Who cares? Making them have to work to hide or sneak these things around is good. That's a win. The harder it is for the censors to find them, the harder it is for everyone else too. If they have to go so abstract that it's no longer the thing it was that you were banning, you've won.

    An abstract isis recruiting video is still an isis recruiting video? It still conveys the same idea. MS paint goatse is still recognizable as goatse if you're looking for it. If you're not then you won't know but we're not concerned about the people who aren't looking for it

    Like "making them harder for the censors to find" means changing the content and that's it, so basically everybody who sees them now will still see them

    Is it? That's an assumption and not a good one. The harder your videos are to censor, the harder they will be to find and the harder time they will have conveying their message because those factors are the very things that get them censored. If it still conveys the idea, it still gets censored.

    The problem for the censors is identifying the video in the first place which is orthogonal to the content being able to get its message across. They don't have to pass a human screening them because that's impractical. There's no algorithm for detecting "bad" content and even if there was it would be different for each type of "bad" content. Any sort of machine learning can be gotten around by changing the audio track, changing the video, changing the length, adding pre-roll and post-roll content to fool the classifier, etc. Hell just put the entire video through a filter or do what people uploading tv shows do and have the video play in a box in the middle of a screen, that works.

    You make and upload a video and post it somewhere on the internet where people will see it. As the censor you have to identify it based on content alone, picking the needle that may or may not exist out of the conveyor belt of haystacks, and if you mess up people still just go "fucking google why don't they fix this how hard can it be." Your suggestion of sampling doesn't even work here as you'll only catch x% of attempts where x would realistically be around 1. They can make a new account for every video if they want. They could even upload inoffensive videos for a while before posting the isis video if you try to scrutinize new accounts more closely and this can all even be automated

    But human screening isn't impractical. It's just not perfect. There's a difference. I already explained ways you can do human screening to reduce the number of offensive videos that hit youtube. Now if people wanna switch their offensive videos to some other platform, sure. But that's not youtube's problem.

    I don't know why you are trying to confine this to just ISIS recruitment just using machine learning when those are far from the only problems or solutions presented in the thread.
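    For what it's worth, the sampling point in the quoted post is easy to put rough numbers on (both figures below are made-up assumptions, purely to show the shape of the problem):

    # Assume reviewers can randomly sample 1% of new uploads, and an uploader
    # posts 50 copies of the same video from fresh throwaway accounts.
    p = 0.01
    n = 50
    expected_caught_at_upload = p * n      # expected copies caught before publication
    prob_all_missed = (1 - p) ** n         # chance every single copy slips through
    print(expected_caught_at_upload)       # 0.5 of the 50 copies, on average
    print(round(prob_all_missed, 2))       # ~0.61 -> better than even odds nothing is caught up front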

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    NSDFRand wrote: »
    RE Radicalization and Islamism specifically: While the most visceral example of radicalization by video are the videos of combat and executions with nasheeds playing in the background, and they do make up a significant proportion of the propaganda, part of what draws people to extremism (in the case of Islam specifically) is also a sort of Fundamentalism Utopianism. The idea of being part of the new Caliphate, with no externally or self imposed borders, where the Ummah can live as they are meant to under righteous Islamic rule, is just as strong a draw. So when you talk about censoring recruitment propaganda, it's not going to just be the violent content, but the "keystone", so to speak, of their entire narrative of a Utopian Caliphate (Charlie Winter's write up about IS and the Virtual Caliphate) which is likely to start catching every video showing anything positive about Islam, Islamic society etc.

    And I only really see that as simultaneously feeding Islamist narratives while also alienating any Muslim content creators and viewers.

    That's also not going into the use of this propaganda, and the accounts associated with it, for collections purposes and network analysis.

    This too. It's the polite Nazi problem on social media with a different set of far right radicals.

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    shryke wrote: »
    Phyphor wrote: »
    shryke wrote: »
    Shivahn wrote: »
    I think, though, that forcing people to obfuscate it is a pretty big win? If it's hard to find ISIS recruitment videos, there's going to be a lot fewer people watching them. You also have the benefit of being able to generally use language for features. I don't think the fact that it's imperfect means it shouldn't be tried, just that it's not perfect. Making things harder for adversaries is still a win.

    Yup. This is the other bullshit argument people make. "Well, you can't stop everything! Or they'll just hide it!"

    Who cares? Making them have to work to hide or sneak these things around is good. That's a win. The harder it is for the censors to find them, the harder it is for everyone else too. If they have to go so abstract that it's no longer the thing it was that you were banning, you've won.

    An abstract isis recruiting video is still an isis recruiting video? It still conveys the same idea. MS paint goatse is still recognizable as goatse if you're looking for it. If you're not then you won't know but we're not concerned about the people who aren't looking for it

    Like "making them harder for the censors to find" means changing the content and that's it, so basically everybody who sees them now will still see them

    Is it? That's an assumption and not a good one. The harder your videos are to censor, the harder they will be to find and the harder time they will have conveying their message because those factors are the very things that get them censored. If it still conveys the idea, it still gets censored.

    The problem for the censors is identifying the video in the first place which is orthogonal to the content being able to get its message across. They don't have to pass a human screening them because that's impractical. There's no algorithm for detecting "bad" content and even if there was it would be different for each type of "bad" content. Any sort of machine learning can be gotten around by changing the audio track, changing the video, changing the length, adding pre-roll and post-roll content to fool the classifier, etc. Hell just put the entire video through a filter or do what people uploading tv shows do and have the video play in a box in the middle of a screen, that works.

    You make and upload a video and post it somewhere on the internet where people will see it. As the censor you have to identify it based on content alone, picking the needle that may or may not exist out of the conveyor belt of haystacks, and if you mess up people still just go "fucking google why don't they fix this how hard can it be." Your suggestion of sampling doesn't even work here as you'll only catch x% of attempts where x would realistically be around 1. They can make a new account for every video if they want. They could even upload inoffensive videos for a while before posting the isis video if you try to scrutinize new accounts more closely and this can all even be automated

    But human screening isn't impractical. It's just not perfect. There's a difference. I already explained ways you can do human screening to reduce the number of offensive videos that hit youtube. Now if people wanna switch their offensive videos to some other platform, sure. But that's not youtube's problem.

    I don't know why you are trying to confine this to just ISIS recruitment just using machine learning when those are far from the only problems or solutions presented in the thread.

    It's literally impossible at the scale required. Either we get rad social media and file-sharing sites freely accessible by everyone, or they charge (probably quite a bit more than you think) money to pay for hordes of moderators. Or we accept that it can be misused and impress upon tech companies (via the courts if necessary) that they need to do what they can with machine learning and smaller moderation teams. Twitter and Facebook certainly aren't trying all that hard* because accounts is accounts, and ad views are ad views, and in the case of the Russian-funded extremism they're paying customers. So moderating that content and those people is against the company's best interests.** This may change if bot-detection companies take off and remove that profit motive. That's what Dan Kaminsky's working on right now; he could be working on anything he wants to, dude has fuckoff money, and last time he fixed DNS, one of the most important network technologies ever. I hadn't thought much about why he's doing what he's doing before now, but after reasoning it out above, I suspect he figured out how important this was a couple years ahead of the rest of us.

    And it'll make him a lot of money.



    But really cool world changing technology is always going to have drawbacks depending on who's wielding it.

    *Using the shortsighted, ruthlessly capitalist short term growth at all costs definition here.

    **I'm not sure this is the case with malicious content on YouTube, though. Clearly it's bad for their brand, and it's not like it's good for their business to stop paying the small content creators who frequently go viral and do the work of building their brand for them. It's possible they weighed the machine learning tech available to automate moderation, the cost of developing new solutions, and the cost of human moderation teams to monitor the machines against the revenue lost when advertisers, irrationally upset that their brand was being advertised over an ISIS video, pulled their ad buys, and decided it just wasn't worth it. It wouldn't be the first time a bunch of assholes screwed up something good for everyone.

  • shryke Member of the Beast Registered User regular
    shryke wrote: »
    Phyphor wrote: »
    shryke wrote: »
    Shivahn wrote: »
    I think, though, that forcing people to obfuscate it is a pretty big win? If it's hard to find ISIS recruitment videos, there's going to be a lot fewer people watching them. You also have the benefit of being able to generally use language for features. I don't think the fact that it's imperfect means it shouldn't be tried, just that it's not perfect. Making things harder for adversaries is still a win.

    Yup. This is the other bullshit argument people make. "Well, you can't stop everything! Or they'll just hide it!"

    Who cares? Making them have to work to hide or sneak these things around is good. That's a win. The harder it is for the censors to find them, the harder it is for everyone else too. If they have to go so abstract that it's no longer the thing it was that you were banning, you've won.

    An abstract isis recruiting video is still an isis recruiting video? It still conveys the same idea. MS paint goatse is still recognizable as goatse if you're looking for it. If you're not then you won't know but we're not concerned about the people who aren't looking for it

    Like "making them harder for the censors to find" means changing the content and that's it, so basically everybody who sees them now will still see them

    Is it? That's an assumption and not a good one. The harder your videos are to censor, the harder they will be to find and the harder time they will have conveying their message because those factors are the very things that get them censored. If it still conveys the idea, it still gets censored.

    The problem for the censors is identifying the video in the first place which is orthogonal to the content being able to get its message across. They don't have to pass a human screening them because that's impractical. There's no algorithm for detecting "bad" content and even if there was it would be different for each type of "bad" content. Any sort of machine learning can be gotten around by changing the audio track, changing the video, changing the length, adding pre-roll and post-roll content to fool the classifier, etc. Hell just put the entire video through a filter or do what people uploading tv shows do and have the video play in a box in the middle of a screen, that works.

    You make and upload a video and post it somewhere on the internet where people will see it. As the censor you have to identify it based on content alone, picking the needle that may or may not exist out of the conveyor belt of haystacks, and if you mess up people still just go "fucking google why don't they fix this how hard can it be." Your suggestion of sampling doesn't even work here as you'll only catch x% of attempts where x would realistically be around 1. They can make a new account for every video if they want. They could even upload inoffensive videos for a while before posting the isis video if you try to scrutinize new accounts more closely and this can all even be automated

    But human screening isn't impractical. It's just not perfect. There's a difference. I already explained ways you can do human screening to reduce the number of offensive videos that hit youtube. Now if people wanna switch their offensive videos to some other platform, sure. But that's not youtube's problem.

    I don't know why you are trying to confine this to just ISIS recruitment just using machine learning when those are far from the only problems or solutions presented in the thread.

    It's literally impossible at the scale required. We get rad social media and file sharing sites freely accessible by everyone or they charge (probably quite a bit more than you think) money to pay for hordes of moderators. Or we accept that it can be misused, impress upon tech companies (via the courts if necessary) that they need to do what they can with machine learning and smaller moderation teams. Twitter and Facebook certainly aren't trying all that hard* because accounts is accounts, and ad views are ad views, and in the case of the Russian funded extremism they're paying customers. So moderating that content and those people are against the company's best interests.** This may change if bot detection companies take off and remove that profit motive. That's what Dan Kaminsky's working on right now, he could be working on anything he wants to, dude has fuckoff money and last time he fixed DNS, one of the most important network technologies ever. I hadn't thought much about why he's doing what he's doing before now but after reasoning it out above, I suspect he figured out how important this was a couple years ahead of the rest of us.

    And it'll make him a lot of money.



    But really cool world changing technology is always going to have drawbacks depending on who's wielding it.

    *Using the shortsighted, ruthlessly capitalist short term growth at all costs definition here.

    **I'm not sure this is the case with malicious content on YouTube though. Clearly, it's bad for their brand, and it's not like it's good for their business to stop paying small content creators that frequently go viral and do the work of building their brand for them. It's possible they weighed the machine learning tech available to automate moderation, the cost of developing new solutions, and the cost of human moderation teams to monitor the machines vs. lost revenue from irrational advertisers being upset their brand is being advertised over an ISIS video pulling ad buys and it just wasn't worth it. It wouldn't be the first time a bunch of assholes screwed up something good for everyone.

    No, it's not. No matter how many times you keep saying it, it remains a problem you can make progress on. They just don't want to invest the money in doing so because it has no specific monetary return that looks good to shareholders or the pocketbooks.

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    Also, I kinda got lost in tangents there, but: if ISIS recruitment videos etc. will just keep relocating until they're hosted on fetish porn streaming sites, and that doesn't really matter because those sites aren't their distribution network anyway, just their hosting provider, why should YouTube spend an extraordinary amount of time and money trying to preemptively detect and combat them? If anything it seems more like a problem for governments and intelligence agencies.

    The best solution is probably to fund the tech companies' automation and moderation teams with government money though, because there's no way in hell I want any government to have that kind of access to the way people use social media.

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    shryke wrote: »
    shryke wrote: »
    Phyphor wrote: »
    shryke wrote: »
    Shivahn wrote: »
    I think, though, that forcing people to obfuscate it is a pretty big win? If it's hard to find ISIS recruitment videos, there's going to be a lot fewer people watching them. You also have the benefit of being able to generally use language for features. I don't think the fact that it's imperfect means it shouldn't be tried, just that it's not perfect. Making things harder for adversaries is still a win.

    Yup. This is the other bullshit argument people make. "Well, you can't stop everything! Or they'll just hide it!"

    Who cares? Making them have to work to hide or sneak these things around is good. That's a win. The harder it is for the censors to find them, the harder it is for everyone else too. If they have to go so abstract that it's no longer the thing it was that you were banning, you've won.

    An abstract isis recruiting video is still an isis recruiting video? It still conveys the same idea. MS paint goatse is still recognizable as goatse if you're looking for it. If you're not then you won't know but we're not concerned about the people who aren't looking for it

    Like "making them harder for the censors to find" means changing the content and that's it, so basically everybody who sees them now will still see them

    Is it? That's an assumption and not a good one. The harder your videos are to censor, the harder they will be to find and the harder time they will have conveying their message because those factors are the very things that get them censored. If it still conveys the idea, it still gets censored.

    The problem for the censors is identifying the video in the first place which is orthogonal to the content being able to get its message across. They don't have to pass a human screening them because that's impractical. There's no algorithm for detecting "bad" content and even if there was it would be different for each type of "bad" content. Any sort of machine learning can be gotten around by changing the audio track, changing the video, changing the length, adding pre-roll and post-roll content to fool the classifier, etc. Hell just put the entire video through a filter or do what people uploading tv shows do and have the video play in a box in the middle of a screen, that works.

    You make and upload a video and post it somewhere on the internet where people will see it. As the censor you have to identify it based on content alone, picking the needle that may or may not exist out of the conveyor belt of haystacks, and if you mess up people still just go "fucking google why don't they fix this how hard can it be." Your suggestion of sampling doesn't even work here as you'll only catch x% of attempts where x would realistically be around 1. They can make a new account for every video if they want. They could even upload inoffensive videos for a while before posting the isis video if you try to scrutinize new accounts more closely and this can all even be automated

    But human screening isn't impractical. It's just not perfect. There's a difference. I already explained ways you can do human screening to reduce the number of offensive videos that hit youtube. Now if people wanna switch their offensive videos to some other platform, sure. But that's not youtube's problem.

    I don't know why you are trying to confine this to just ISIS recruitment just using machine learning when those are far from the only problems or solutions presented in the thread.

    It's literally impossible at the scale required. We get rad social media and file sharing sites freely accessible by everyone or they charge (probably quite a bit more than you think) money to pay for hordes of moderators. Or we accept that it can be misused, impress upon tech companies (via the courts if necessary) that they need to do what they can with machine learning and smaller moderation teams. Twitter and Facebook certainly aren't trying all that hard* because accounts is accounts, and ad views are ad views, and in the case of the Russian funded extremism they're paying customers. So moderating that content and those people are against the company's best interests.** This may change if bot detection companies take off and remove that profit motive. That's what Dan Kaminsky's working on right now, he could be working on anything he wants to, dude has fuckoff money and last time he fixed DNS, one of the most important network technologies ever. I hadn't thought much about why he's doing what he's doing before now but after reasoning it out above, I suspect he figured out how important this was a couple years ahead of the rest of us.

    And it'll make him a lot of money.



    But really cool world changing technology is always going to have drawbacks depending on who's wielding it.

    *Using the shortsighted, ruthlessly capitalist short term growth at all costs definition here.

    **I'm not sure this is the case with malicious content on YouTube though. Clearly, it's bad for their brand, and it's not like it's good for their business to stop paying small content creators that frequently go viral and do the work of building their brand for them. It's possible they weighed the machine learning tech available to automate moderation, the cost of developing new solutions, and the cost of human moderation teams to monitor the machines vs. lost revenue from irrational advertisers being upset their brand is being advertised over an ISIS video pulling ad buys and it just wasn't worth it. It wouldn't be the first time a bunch of assholes screwed up something good for everyone.

    No, it's not. No matter how many times you keep saying it, it remains a problem you can make progress on. They just don't want to invest the money in doing so because it has no specific monetary return that looks good to shareholders or the pocketbooks.

    Did you not read the entire part of my argument where I pointed out that that is factually incorrect in the case of YouTube? They are making less money and choosing to stymie their growth and relevance (all their stars started out as small content providers) by pulling affiliate programs for the small fry. I really do think they ran the numbers and it just wasn't worth it.

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    Like for real, I would take a 1% income tax hike to fund moderation teams and tech to combat extremism online in a heartbeat. I don't see any other way the content doesn't just shuffle off someplace else.

  • spool32 Contrary Library Registered User regular
    Riot's effort to have the community ban toxic players failed pretty dramatically. You can't rely on community moderation, and even worse, some parts of the net will try and screw it up on purpose.

  • Bogart Streetwise Hercules Registered User, Moderator mod
    spool32 wrote: »
    You can't rely on community moderation.

    Hurtful.

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    spool32 wrote: »
    Riot's effort to have the community ban toxic players failed pretty dramatically. You can't rely on community moderation, and even worse, some parts of the net will try and screw it up on purpose.

    See: Watercolor Goatse.

    Riot leaned into that hard too. They hired entire teams of psychologists to try and figure out how to get gamers to be less shitty on the internet. At a certain point you have to just accept that a subset of humans are terrible and there's no way to engineer around that completely. You can add controls, but there's always going to be people that figure out how to break out of them.

  • Calica Registered User regular
    Bogart wrote: »
    spool32 wrote: »
    You can't rely on community moderation.

    Hurtful.

    I know you were joking, but the mods being part of the PA forum community doesn't make the forums community-moderated. The mods here basically have autocratic rule, and it's a huge factor in making this place as good as it is.

  • AngelHedgie Registered User regular
    spool32 wrote: »
    Riot's effort to have the community ban toxic players failed pretty dramatically. You can't rely on community moderation, and even worse, some parts of the net will try and screw it up on purpose.

    See: Watercolor Goatse.

    Riot leaned into that hard too. They hired entire teams of psychologists to try and figure out how to get gamers to be less shitty on the internet. At a certain point you have to just accept that a subset of humans are terrible and there's no way to engineer around that completely. You can add controls, but there's always going to be people that figure out how to break out of them.

    Which is why you have to always have the ultimate backup - the right to exclude someone problematic. Someone keeps trying to evade the rules? They don't get to be part of the community anymore.

  • spool32 Contrary Library Registered User regular
    spool32 wrote: »
    Riot's effort to have the community ban toxic players failed pretty dramatically. You can't rely on community moderation, and even worse, some parts of the net will try and screw it up on purpose.

    See: Watercolor Goatse.

    Riot leaned into that hard too. They hired entire teams of psychologists to try and figure out how to get gamers to be less shitty on the internet. At a certain point you have to just accept that a subset of humans are terrible and there's no way to engineer around that completely. You can add controls, but there's always going to be people that figure out how to break out of them.

    Their new system is better. They let people be nice to each other via gifting honor, and if you're nice enough as judged by randoms in solo queue, you eventually stop queueing with low-honor players.

    Toxic players hang out with each other, not with the rest of us.
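    Mechanically that's just partitioning the matchmaking pool by an honor score. A toy version (nothing like Riot's actual system, which isn't public) might look like:

    from collections import defaultdict

    def honor_band(honor):
        # Bucket players so low-honor accounts mostly queue with each other.
        return "low" if honor < 2 else "normal"

    def build_queues(players):
        """players: dict of name -> honor score; returns per-band matchmaking queues."""
        queues = defaultdict(list)
        for name, honor in players.items():
            queues[honor_band(honor)].append(name)
        return queues

    pool = {"alice": 4, "bob": 3, "mallory": 0, "trudy": 1}
    print(dict(build_queues(pool)))
    # {'normal': ['alice', 'bob'], 'low': ['mallory', 'trudy']}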

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    spool32 wrote: »
    Riot's effort to have the community ban toxic players failed pretty dramatically. You can't rely on community moderation, and even worse, some parts of the net will try and screw it up on purpose.

    See: Watercolor Goatse.

    Riot leaned into that hard too. They hired entire teams of psychologists to try and figure out how to get gamers to be less shitty on the internet. At a certain point you have to just accept that a subset of humans are terrible and there's no way to engineer around that completely. You can add controls, but there's always going to be people that figure out how to break out of them.

    Which is why you have to always have the ultimate backup - the right to exclude someone problematic. Someone keeps trying to evade the rules? They don't get to be part of the community anymore.

    That's what we used to do to the problematic CTF people, until there started to be enough children around, and the community at large had a dialogue about how uncomfortable women were at conferences, and we decided we needed to either stop having the projector (which is really fun when they're actually doing the CTF challenges instead of creating their own neural-network filter-evasion challenge) or do better with automated detection.

    We were able to cope initially (and so far using automation) because it's only a group of 20-30 people in a room, all on the same local network. How do you do that online, though? Sure, you can kick problematic people when you're alerted to them by the community, which seems to be the tack YouTube is attempting to take. But (right now) it is physically and technologically impossible to fully review every new video uploaded to their service. There's no way to blackhole problematic people from the internet unless you roll out a private-key-crypto-based universal ID that you have to use to log on to the internet (or service by service, I guess, which goes back to charging a fee for the service by way of the token). This is possible using existing technology, but extremely undesirable to me for a goddamn flotilla of reasons.
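    (The mechanics of that kind of ID are well understood; here's a minimal sketch of challenge-response login with a keypair, using the Python cryptography library, just to show it's existing tech and not an endorsement of building it:)

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The "universal ID" is just a keypair; the public key gets registered once.
    user_key = Ed25519PrivateKey.generate()
    registered_public_key = user_key.public_key()

    # Logging in: the service sends a random challenge, the user signs it.
    challenge = os.urandom(32)
    signature = user_key.sign(challenge)

    # The service verifies the signature against the registered public key.
    try:
        registered_public_key.verify(signature, challenge)
        print("login accepted; every action is now tied to this one identity")
    except InvalidSignature:
        print("login rejected")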
    spool32 wrote: »
    spool32 wrote: »
    Riot's effort to have the community ban toxic players failed pretty dramatically. You can't rely on community moderation, and even worse, some parts of the net will try and screw it up on purpose.

    See: Watercolor Goatse.

    Riot leaned into that hard too. They hired entire teams of psychologists to try and figure out how to get gamers to be less shitty on the internet. At a certain point you have to just accept that a subset of humans are terrible and there's no way to engineer around that completely. You can add controls, but there's always going to be people that figure out how to break out of them.

    Their new system is better. They let people be nice to each other via gifting honor, and if you're nice enough as judged by randoms in solo queue, you eventually stop queueing with low-honor players.

    Toxic players hang out with each other, not with the rest of us.

    This is amazing. Alone together in a hell of their own creation.

    Although really that's not any different from Gab.ai. They probably prefer it that way.

  • Dark Raven X Laugh hard, run fast, be kind Registered User regular
    Wasn't there some big kerfuffle a year or more ago where youtube suddenly demonetized channels if they decided they were risky ad propositions? I guess there was blowback to that, they put ads on whatever and oops, new problem?

  • Fuzzy Cumulonimbus Cloud Registered User regular
    Wasn't there some big kerfuffle a year or more ago where youtube suddenly demonetized channels if they decided they were risky ad propositions? I guess there was blowback to that, they put ads on whatever and oops, new problem?
    That’s when Adpocalypse was originally coined. I’ve been reading up on this and I can’t understand any of it. Polygon had a well written article but it was all PewDiePie this and other person that. I don’t follow YouTube personalities at all so it was confusing.

  • Ziggymon Registered User regular
    Wasn't there some big kerfuffle a year or more ago where youtube suddenly demonetized channels if they decided they were risky ad propositions? I guess there was blowback to that, they put ads on whatever and oops, new problem?
    That’s when Adpocalypse was originally coined. I’ve been reading up on this and I can’t understand any of it. Polygon had a well written article but it was all PewDiePie this and other person that. I don’t follow YouTube personalities at all so it was confusing.

    While I'm not fully up to date on everything, the basics, I believe, are that journalists reported that some extremist or 'questionable' content on YouTube was receiving monetisation and getting ad revenue. Big companies, unhappy with the negative image of their ads appearing on such channels, demanded the content be removed or they would boycott. Companies did boycott after Google's response. Google then started trying to court these companies back by demonetising anything whose content could be considered questionable enough to offend.

    What has been interesting is that some of the channels that have reported having content flagged for demonetisation are ones that had been reviewing certain companies' products or praising rival products. Other big players such as Disney haven't been happy with Google trying to create a paid-for streaming service out of YouTube, and many believe these tactics are an attempt to strong-arm back a bigger percentage of the ad revenue. Other people believe that when Google briefly demonetised and restricted a lot of LGBTQ+ content, it was a result of some companies not wanting advertisements associated with that content, and not the algorithm error Google reported.
