
Adpocalypse

Aramora Registered User new member
edited January 2018 in Debate and/or Discourse
I read the news piece accompanying today's comic, clicked the links, and was seized by the need to ask a question, so here goes:

If I understand this properly, the adpocalypse is a whole bunch of large companies boycotting Youtube (not purchasing ad space from them) because it was "revealed" (I put this in quotes because it was never really hidden) that many incongruous ads were being run during extremist videos. E.g. ISIS recruitment videos would have 30-second BBC ad breaks. Did I read this properly?

That's not my question, though; that just makes sure my question isn't completely crazy. My question is this: isn't it a good thing (or at least neutral) for extremist videos to be broken up by mundane ads--especially for sections of the British government, or organizations like the BBC?

The disadvantage is clear--it seems to normalize the extremist video. From the perspective of the large companies, there is also the fear that it fosters the image that the large company in some way endorses the content of the youtube video. And perhaps most problematically, it allows the extremist to profit from his video.

Although it may normalize the video itself, by placing it in a mundane context, it seems to me that the ads also serve two significant purposes. First, no matter the content of the paid ad, and no matter the skill in crafting the extremist video, the paid ad will always disrupt the narrative of the video in which it is nested. I would think such a reality break would lessen the immersion and emotional impact (even if just trivially) of the propaganda.

Second, and perhaps more importantly, it seems to me that the more incongruous, the more jarring the nested ad, the better. There is no better vehicle (to my understanding) for weaning someone from the mammary of extremism than regular interaction with--rather than withdrawal from--society at large and inclusion of moderate views in discourse with them. At least, I would rather radicalized soon-to-be-warriors be distracted by the BBC than probably any other possible product out there.

Aside from the way the ads detract from the efficacy of the extremist video, it also seems like their presence does not (or, perhaps more accurately, should not) reflect on the advertiser. Although they are tailored to my browsing habits, I certainly have never linked google's automatically inserted ads to the companies in the ads themselves. Do any of the readers here? Strangely, I would have linked the two if it were on tv, but I've always assumed Google to be much more... automated. Effectively content-neutral in its ad placement. Plus, the only people who are going to see these ads are the extremists watching the video.

I'll admit, I'm not certain how to address the fact that it allows the extremist to monetize his video. Except possibly to note that if only a few people are watching, then the revenue is peanuts, and if it's oodles of people watching--such that the ad revenue is significant--then trying to disrupt the insularity by any means possible seems wiser.

Am I crazy in this opinion?

P.S. Obviously nothing written here will affect the advertisement decisions of all the little horsemen of the adpocalypse; I just wanted to float my observations with others.


Posts

  • AngelHedgie Registered User regular
    Making the video "mundane" is normalization. Which is part of the problem. It's unlikely that an ad break is going to disrupt the message either.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Yall Registered User regular
    "I was totally into that ISIS video and was thinking about moving to Syria, but then that McDonald's ad popped up and I was like man, can I even get a Big Mac in Aleppo?"

    Somehow I don't see that being a common occurrence.

  • Edith Upwards Registered User regular
    Yeah, lots of ISIS people are alt-right MAGA people and see literally no contradiction between the two other than being angry that Trump hasn't pulled out of Syria and betrayed the Kurds yet.

    Not going to link ISIS blogs here.

  • Kaputa Registered User regular
    Edith Upwards wrote: »
    Yeah, lots of ISIS people are alt-right MAGA people and see literally no contradiction between the two other than being angry that Trump hasn't pulled out of Syria and betrayed the Kurds yet.

    Not going to link ISIS blogs here.
    ...what

  • AngelHedgie Registered User regular
    Kaputa wrote: »
    Edith Upwards wrote: »
    Yeah, lots of ISIS people are alt-right MAGA people and see literally no contradiction between the two other than being angry that Trump hasn't pulled out of Syria and betrayed the Kurds yet.

    Not going to link ISIS blogs here.
    ...what

    It gets...weird on the fringes.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • HappylilElf Registered User regular
    edited January 2018
    Quite frankly I'm a lot less concerned about ads auto running during ISIS recruitment videos on Youtube than I am about Youtube fucking hosting ISIS recruitment videos in the first place

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    edited January 2018
    Quite frankly I'm a lot less concerned about ads auto running during ISIS recruitment videos on Youtube than I am about Youtube fucking hosting ISIS recruitment videos in the first place

    It's whack-a-mole, 400 hours of content is uploaded every minute, there's no way to review it all

  • Shivahn Unaware of her barrel shifter privilege Western coastal temptress Registered User, Moderator mod
    Phyphor wrote: »
    Quite frankly I'm a lot less concerned about ads auto running during ISIS recruitment videos on Youtube than I am about Youtube fucking hosting ISIS recruitment videos in the first place

    It's whack-a-mole, 400 hours of content is uploaded every minute, there's no way to review it all

    Sure there is, just hire 24,000 people to watch youtube all day every day, easy peasy. Bonus: jobs program!

    Or seven people and put them in the hyperbolic time chamber from DragonBall Z.

    (more realistically, there has to be a way to filter that down substantially, through machine learning, if nothing else. There must be things in common between them, and you can generally exclude the many videos where there is little to no talking)
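
    A minimal sketch of that pre-filter, for the curious. It assumes the webrtcvad package for voice activity detection; classify_transcript() is a hypothetical stand-in for whatever content classifier would sit behind it, not a real YouTube API.

    import webrtcvad

    def speech_fraction(pcm_frames, sample_rate=16000):
        """Fraction of 30 ms frames (16-bit mono PCM) that contain speech."""
        vad = webrtcvad.Vad(2)  # aggressiveness from 0 (lenient) to 3 (strict)
        voiced = sum(1 for frame in pcm_frames if vad.is_speech(frame, sample_rate))
        return voiced / max(len(pcm_frames), 1)

    def worth_classifying(pcm_frames, transcript, classify_transcript):
        # Exclude the many videos with little to no talking up front.
        if speech_fraction(pcm_frames) < 0.05:
            return False
        # Everything else goes to the (hypothetical) ML classifier.
        return classify_transcript(transcript) > 0.5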

  • Sadgasm Deluded doodler A cold place Registered User regular
    Shivahn wrote: »
    Phyphor wrote: »
    Quite frankly I'm a lot less concerned about ads auto running during ISIS recruitment videos on Youtube than I am about Youtube fucking hosting ISIS recruitment videos in the first place

    It's whack-a-mole, 400 hours of content is uploaded every minute, there's no way to review it all

    Sure there is, just hire 24,000 people to watch youtube all day every day, easy peasy. Bonus: jobs program!

    Or seven people and put them in the hyperbolic time chamber from DragonBall Z.

    (more realistically, there has to be a way to filter that down substantially, through machine learning, if nothing else. There must be things in common between them, and you can generally exclude the many videos where there is little to no talking)

    Sure, but that would require money and effort, and YouTube's business strategy is more aimed towards half-assed patchwork fixes that just make everything worse.

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    edited January 2018
    Shivahn wrote: »
    Phyphor wrote: »
    Quite frankly I'm a lot less concerned about ads auto running during ISIS recruitment videos on Youtube than I am about Youtube fucking hosting ISIS recruitment videos in the first place

    It's whack-a-mole, 400 hours of content is uploaded every minute, there's no way to review it all

    Sure there is, just hire 24,000 people to watch youtube all day every day, easy peasy. Bonus: jobs program!

    Or seven people and put them in the hyperbolic time chamber from DragonBall Z.

    (more realistically, there has to be a way to filter that down substantially, through machine learning, if nothing else. There must be things in common between them, and you can generally exclude the many videos where there is little to no talking)

    Well, more like 100,000 once you factor in effective hours worked, and probably 200,000 in a few years; upload volume is up 6x in the last 5 years

    As for machine learning, that's tough. It's not like the people making the videos are idiots; they can switch things up, and what defines a recruitment video can be fairly nebulous

  • Shivahn Unaware of her barrel shifter privilege Western coastal temptress Registered User, Moderator mod
    Phyphor wrote: »
    Shivahn wrote: »
    Phyphor wrote: »
    Quite frankly I'm a lot less concerned about ads auto running during ISIS recruitment videos on Youtube than I am about Youtube fucking hosting ISIS recruitment videos in the first place

    It's whack-a-mole, 400 hours of content is uploaded every minute, there's no way to review it all

    Sure there is, just hire 24,000 people to watch youtube all day every day, easy peasy. Bonus: jobs program!

    Or seven people and put them in the hyperbolic time chamber from DragonBall Z.

    (more realistically, there has to be a way to filter that down substantially, through machine learning, if nothing else. There must be things in common between them, and you can generally exclude the many videos where there is little to no talking)

    Well, more like 100,000 once you factor in effective hours worked, and probably 200,000 in a few years; upload volume is up 6x in the last 5 years

    As for machine learning, that's tough. It's not like the people making the videos are idiots; they can switch things up, and what defines a recruitment video can be fairly nebulous

    Right, which is why you'd have to let it evolve. Look for flagged videos daily and feed them to it, have individuals check everything where the algorithm is like 70-80% sure.

    It's not perfect, but it's also not "hire a hundred thousand people to look at videos all day, every day" or "is ISIS really that bad, why waste money and time removing their videos?" Neither of the latter two seem like a good long term plan, despite them being easier.
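
    A rough sketch of that evolve-by-review loop. The model object, the review queue, and take_down() are illustrative stand-ins, not anyone's real system; the point is the routing by confidence and the feedback of human labels.

    def take_down(video):
        print("removed:", video)  # stand-in for an enforcement action

    def route_upload(video, model, review_queue, labeled_examples):
        p = model.predict_proba(video)  # estimated P(extremist content)
        if p >= 0.95:
            take_down(video)  # confident enough to act automatically
        elif p >= 0.70:
            # the "70-80% sure" band goes to a human reviewer
            label = review_queue.human_review(video)
            labeled_examples.append((video, label))
        # daily: retrain on labeled_examples plus newly flagged videos,
        # so the model evolves as uploaders change tactics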

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    Shivahn wrote: »
    Phyphor wrote: »
    Shivahn wrote: »
    Phyphor wrote: »
    Quite frankly I'm a lot less concerned about ads auto running during ISIS recruitment videos on Youtube than I am about Youtube fucking hosting ISIS recruitment videos in the first place

    It's whack-a-mole, 400 hours of content is uploaded every minute, there's no way to review it all

    Sure there is, just hire 24,000 people to watch youtube all day every day, easy peasy. Bonus: jobs program!

    Or seven people and put them in the hyperbolic time chamber from DragonBall Z.

    (more realistically, there has to be a way to filter that down substantially, through machine learning, if nothing else. There must be things in common between them, and you can generally exclude the many videos where there is little to no talking)

    Well, more like 100,000 once you factor in effective hours worked, and probably 200,000 in a few years; upload volume is up 6x in the last 5 years

    As for machine learning, that's tough. It's not like the people making the videos are idiots; they can switch things up, and what defines a recruitment video can be fairly nebulous

    Right, which is why you'd have to let it evolve. Look for flagged videos daily and feed them to it, have individuals check everything where the algorithm is like 70-80% sure.

    It's not perfect, but it's also not "hire a hundred thousand people to look at videos all day, every day" or "is ISIS really that bad, why waste money and time removing their videos?" Neither of the latter two seem like a good long term plan, despite them being easier.

    You have a lot more faith in machine learning than I do

  • Heffling No Pic Ever Registered User regular
    Youtube's problem: Major advertisers are pulling support because their advertisements are being used on videos that do not line up with the advertisers' core values, and in many cases violate those core values. E.g. an advert for Frozen 2 shows, then a white supremacist/Nazi rant video plays.

    Root Cause: IMO there are two problems. Firstly, that Youtube is willing to support toxic content providers simply because they draw in more views. Views are an easy metric to look at (see Twitter), but aren't a good metric for quality. This has created a situation where Youtube is propping up toxic content to inflate views instead of creating an advertiser friendly environment.

    Secondly, Youtube has no controlled program for identifying video content. E.g. no keywords are assigned, so there is no way to tie advertiser content to video content.

    Youtube's response: Rather than instituting a better report system and adding QC positions to monitor the tools, Youtube instead pulls monetary support for all small content creators. This does nothing to address the issue, but what it does do is increase the profitability (or at least minimize losses) for Youtube. There are a large number of small content creators, and in total this represents millions if not tens of millions in savings for Youtube. It also reduces Youtube's overhead because they have to maintain a smaller payment system, can reduce staff, etc. It also disincentivizes small content creators, which will ultimately reduce the number of breakout stars.

    It's a very corporate move, as they are gaining short term profit at the cost of long term growth.

  • Shivahn Unaware of her barrel shifter privilege Western coastal temptress Registered User, Moderator mod
    Phyphor wrote: »
    Shivahn wrote: »
    Phyphor wrote: »
    Shivahn wrote: »
    Phyphor wrote: »
    Quite frankly I'm a lot less concerned about ads auto running during ISIS recruitment videos on Youtube than I am about Youtube fucking hosting ISIS recruitment videos in the first place

    It's whack-a-mole, 400 hours of content is uploaded every minute, there's no way to review it all

    Sure there is, just hire 24,000 people to watch youtube all day every day, easy peasy. Bonus: jobs program!

    Or seven people and put them in the hyperbolic time chamber from DragonBall Z.

    (more realistically, there has to be a way to filter that down substantially, through machine learning, if nothing else. There must be things in common between them, and you can generally exclude the many videos where there is little to no talking)

    Well, more like 100,000 once you factor in effective hours worked, and probably 200,000 in a few years; upload volume is up 6x in the last 5 years

    As for machine learning, that's tough. It's not like the people making the videos are idiots; they can switch things up, and what defines a recruitment video can be fairly nebulous

    Right, which is why you'd have to let it evolve. Look for flagged videos daily and feed them to it, have individuals check everything where the algorithm is like 70-80% sure.

    It's not perfect, but it's also not "hire a hundred thousand people to look at videos all day, every day" or "is ISIS really that bad, why waste money and time removing their videos?" Neither of the latter two seem like a good long term plan, despite them being easier.

    You have a lot more faith in machine learning than I do

    It's more that I have zero faith in people, in most cases :P

  • shryke Member of the Beast Registered User regular
    edited January 2018
    You don't actually have to review everything that gets uploaded to Youtube. You could do a lot just setting up a review department and passing them a semi-random sample of new videos. You won't catch everything but you could probably do a lot to clean shit up. Prioritize accounts based on certain metrics, set up machine learning systems to run in parallel to evaluate the effectiveness, evolve the program over time as you figure out where problem videos are most likely to come from, etc, etc.

    None of this is hard to come up with or hard to implement. They just don't wanna spend the fucking money.
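
    One way the semi-random sampling with account prioritization could look, sketched in Python. The risk signals and weights are invented for illustration; note random.choices samples with replacement, which is fine for a sketch.

    import random

    def risk_weight(account):
        weight = 1.0
        if account.age_days < 30:
            weight *= 5   # new accounts get extra scrutiny
        if account.prior_strikes > 0:
            weight *= 10  # history of removed videos
        if account.uploads_today > 50:
            weight *= 3   # bot-like upload volume
        return weight

    def sample_for_review(todays_uploads, budget):
        """Pick `budget` videos for the review department, biased toward risky accounts."""
        weights = [risk_weight(video.account) for video in todays_uploads]
        return random.choices(todays_uploads, weights=weights, k=budget)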

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    edited January 2018
    Shivahn wrote: »
    Phyphor wrote: »
    Quite frankly I'm a lot less concerned about ads auto running during ISIS recruitment videos on Youtube than I am about Youtube fucking hosting ISIS recruitment videos in the first place

    It's whack-a-mole, 400 hours of content is uploaded every minute, there's no way to review it all

    Sure there is, just hire 24,000 people to watch youtube all day every day, easy peasy. Bonus: jobs program!

    Or seven people and put them in the hyperbolic time chamber from DragonBall Z.

    (more realistically, there has to be a way to filter that down substantially, through machine learning, if nothing else. There must be things in common between them, and you can generally exclude the many videos where there is little to no talking)

    Machine learning is actually pretty dumb and easy to fool as soon as you know it's there. Hacker con I help run has a long running tradition of running a projector that displays a random stream of images pulled from the HTTP traffic of the CTF network. Of course, there's always that one asshole who decides it's his mission in life to get the filthiest images possible on that projector, and while they generally don't stay up that long before somebody blacklists them, we try to run a family friendly event so the other year there was an ultimatum laid down: help pay for a machine to run a neural net for porn (and other offensive content) or no more projector.

    And they did it, the people who play in the CTF every year funded like $10k or something for this beast of a machine that has Yahoo's porn filtering dataset and is learning as it goes.

    So then the game became "Who can get Goatse on the projector." :rotate:

    At first they'd just change sizes, color palettes, and contrast to fool it but it picked up on that pretty quick. Then they started creating abstract representations of Goatse which was actually pretty hilarious because they weren't really offensive unless you were in on the joke. No more scarred children, upset attendees, etc. There were MS Paint versions, ASCII renditions, and even some abstract watercolor variants.

    Anyway, my point is that they repeatedly found ways to beat the neural net and that's with a still image. There's a ridiculously larger number of variables with video, and almost anything I can think of to filter on is going to be present in legitimate content, even if it's just news media about ISIS. This is actually a problem that community moderation can help with, but to really get everything you would have to have human eyes on every second of video uploaded to YouTube, at which point I doubt they're solvent anymore.

    EDIT: Also I should add, when I say that the neural net picked up on the initial bypasses, that's a process that requires human intervention. The guy running the box has to tell it "No, that's still Goatse." by adding the things that successfully got around it to the dataset.
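
    In code, that human-in-the-loop step is just hard-example mining: every image that beats the filter gets a human label and goes back into the training set. All names below are hypothetical.

    def handle_bypass(image, ask_human, train_images, train_labels, model):
        # The net scored this image as safe, yet it reached the projector.
        label = ask_human(image)  # "No, that's still Goatse."
        train_images.append(image)
        train_labels.append(label)
        # Refit so the next size/palette/contrast trick is covered too.
        model.fit(train_images, train_labels)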

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    shryke wrote: »
    You don't actually have to review everything that gets uploaded to Youtube. You could do a lot just setting up a review department and passing them a semi-random sample of new videos. You won't catch everything but you could probably do a lot to clean shit up. Prioritize accounts based on certain metrics, set up machine learning systems to run in parallel to evaluate the effectiveness, evolve the program over time as you figure out where problem videos are most likely to come from, etc, etc.

    None of this is hard to come up with or hard to implement. They just don't wanna spend the fucking money.

    The vast vast majority of content is not objectionable though. Even those kids' videos that everybody was talking about a while back, most of those are just weird but not actually harmful in any way. You'd spend your day watching random weird youtube videos and maybe hit one actually bad one - and keep in mind that many people will just cry "censorship" unless you are incredibly selective about removing content

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    edited January 2018
    Heffling wrote: »
    Youtube's problem: Major advertisers are pulling support because their advertisements are being used on videos that do not line up with the advertisers' core values, and in many cases violate those core values. E.g. an advert for Frozen 2 shows, then a white supremacist/Nazi rant video plays.

    Root Cause: IMO there are two problems. Firstly, that Youtube is willing to support toxic content providers simply because they draw in more views. Views are an easy metric to look at (see Twitter), but aren't a good metric for quality. This has created a situation where Youtube is propping up toxic content to inflate views instead of creating an advertiser friendly environment.

    Secondly, Youtube has no controlled program for identifying video content. E.g. no keywords are assigned, so there is no way to tie advertiser content to video content.

    Youtube's response: Rather than instituting a better report system and adding QC positions to monitor the tools, Youtube instead pulls monetary support for all small content creators. This does nothing to address the issue, but what it does do is increase the profitability (or at least minimize losses) for Youtube. There are a large number of small content creators, and in total this represents millions if not tens of millions in savings for Youtube. It also reduces Youtube's overhead because they have to maintain a smaller payment system, can reduce staff, etc. It also disincentivizes small content creators, which will ultimately reduce the number of breakout stars.

    It's a very corporate move, as they are gaining short term profit at the cost of long term growth.

    The payment system is automated and I doubt customer support positions will be affected all that much. Technically youtube is losing money here. The advertiser pays them to show an ad, then the money is split between youtube and the channel owner; strictly speaking, they take in less money this way as they show fewer ads

  • Shivahn Unaware of her barrel shifter privilege Western coastal temptress Registered User, Moderator mod
    Giggles_Funsworth wrote: »
    Shivahn wrote: »
    Phyphor wrote: »
    Quite frankly I'm a lot less concerned about ads auto running during ISIS recruitment videos on Youtube than I am about Youtube fucking hosting ISIS recruitment videos in the first place

    It's whack-a-mole, 400 hours of content is uploaded every minute, there's no way to review it all

    Sure there is, just hire 24,000 people to watch youtube all day every day, easy peasy. Bonus: jobs program!

    Or seven people and put them in the hyperbolic time chamber from DragonBall Z.

    (more realistically, there has to be a way to filter that down substantially, through machine learning, if nothing else. There must be things in common between them, and you can generally exclude the many videos where there is little to no talking)

    Machine learning is actually pretty dumb and easy to fool as soon as you know it's there. Hacker con I help run has a long running tradition of running a projector that displays a random stream of images pulled from the HTTP traffic of the CTF network. Of course, there's always that one asshole who decides it's his mission in life to get the filthiest images possible on that projector, and while they generally don't stay up that long before somebody blacklists them, we try to run a family friendly event so the other year there was an ultimatum laid down: help pay for a machine to run a neural net for porn (and other offensive content) or no more projector.

    And they did it, the people who play in the CTF every year funded like $10k or something for this beast of a machine that has Yahoo's porn filtering dataset and is learning as it goes.

    So then the game became "Who can get Goatse on the projector." :rotate:

    At first they'd just change sizes, color palettes, and contrast to fool it but it picked up on that pretty quick. Then they started creating abstract representations of Goatse which was actually pretty hilarious because they weren't really offensive unless you were in on the joke. No more scarred children, upset attendees, etc. There were MS Paint versions, ASCII renditions, and even some abstract watercolor variants.

    Anyway, my point is that they repeatedly found ways to beat the neural net and that's with a still image. There's a ridiculously larger number of variables with video, and almost anything I can think of to filter on is going to be present in legitimate content, even if it's just news media about ISIS. This is actually a problem that community moderation can help with, but to really get everything you would have to have human eyes on every second of video uploaded to YouTube, at which point I doubt they're solvent anymore.

    EDIT: Also I should add, when I say that the neural net picked up on the initial bypasses, that's a process that requires human intervention. The guy running the box has to tell it "No, that's still Goatse." by adding the things that successfully got around it to the dataset.

    I think, though, that forcing people to obfuscate it is a pretty big win? If it's hard to find ISIS recruitment videos, there's going to be a lot fewer people watching them. You also have the benefit of being able to generally use language for features. I don't think the fact that it's imperfect means it shouldn't be tried, just that it's not perfect. Making things harder for adversaries is still a win.

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    I thought most people see them by being posted on forums and such, not by actually being titled ISIS RECRUITING VIDEO WATCH WESTERN INFIDELS. And what happens once they figure out the audio is what is being used, and just embed a transcript of the audio in the video and replace the audio with some music or something?

    Anyway, of course this exists: https://randomyoutube.net/watch
    You too can experience the life of a random sample youtube watcher. Good luck finding anything objectionable

  • Kristmas Kthulhu Currently Kultist Kthulhu Registered User regular
    Also, if the people who wanted Goatse on the projector were forced to make it non-offensive to get around the neural net, that is a success. Abstract depictions of gross things that don't register as gross to the average person are vastly preferable to the original gross thing.

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    Shivahn wrote: »
    Giggles_Funsworth wrote: »
    Phyphor wrote: »
    Quite frankly I'm a lot less concerned about ads auto running during ISIS recruitment videos on Youtube than I am about Youtube fucking hosting ISIS recruitment videos in the first place

    It's whack-a-mole, 400 hours of content is uploaded every minute, there's no way to review it all

    Sure there is, just hire 24,000 people to watch youtube all day every day, easy peasy. Bonus: jobs program!

    Or seven people and put them in the hyperbolic time chamber from DragonBall Z.

    (more realistically, there has to be a way to filter that down substantially, through machine learning, if nothing else. There must be things in common between them, and you can generally exclude the many videos where there is little to no talking)

    Machine learning is actually pretty dumb and easy to fool as soon as you know it's there. Hacker con I help run has a long running tradition of running a projector that displays a random stream of images pulled from the HTTP traffic of the CTF network. Of course, there's always that one asshole who decides it's his mission in life to get the filthiest images possible on that projector, and while they generally don't stay up that long before somebody blacklists them, we try to run a family friendly event so the other year there was an ultimatum laid down: help pay for a machine to run a neural net for porn (and other offensive content) or no more projector.

    And they did it, the people who play in the CTF every year funded like $10k or something for this beast of a machine that has Yahoo's porn filtering dataset and is learning as it goes.

    So then the game became "Who can get Goatse on the projector." :rotate:

    At first they'd just change sizes, color palettes, and contrast to fool it but it picked up on that pretty quick. Then they started creating abstract representations of Goatse which was actually pretty hilarious because they weren't really offensive unless you were in on the joke. No more scarred children, upset attendees, etc. There were MS Paint versions, ASCII renditions, and even some abstract watercolor variants.

    Anyway, my point is that they repeatedly found ways to beat the neural net and that's with a still image. There's a ridiculously larger number of variables with video, and almost anything I can think of to filter on is going to be present in legitimate content, even if it's just news media about ISIS. This is actually a problem that community moderation can help with, but to really get everything you would have to have human eyes on every second of video uploaded to YouTube, at which point I doubt they're solvent anymore.

    EDIT: Also I should add, when I say that the neural net picked up on the initial bypasses, that's a process that requires human intervention. The guy running the box has to tell it "No, that's still Goatse." by adding the things that successfully got around it to the dataset.

    I think, though, that forcing people to obfuscate it is a pretty big win? If it's hard to find ISIS recruitment videos, there's going to be a lot fewer people watching them. You also have the benefit of being able to generally use language for features. I don't think the fact that it's imperfect means it shouldn't be tried, just that it's not perfect. Making things harder for adversaries is still a win.

    Sure. I was kind of assuming they have some of these systems in place already because search engines were the first place these technologies really shone, but I don't know how YouTube's automated filtering works. It definitely exists, at least for DMCA violations.

    I guess I'm just not sure how much of a problem ISIS recruitment videos actually are? This kinda seems like a little bit of a media panic at first glance.

  • Shivahn Unaware of her barrel shifter privilege Western coastal temptress Registered User, Moderator mod
    Phyphor wrote: »
    I thought most people see them by being posted on forums and such, not by actually being titled ISIS RECRUITING VIDEO WATCH WESTERN INFIDELS. And what happens once they figure out the audio is what is being used, and just embed a transcript of the audio in the video and replace the audio with some music or something?

    Anyway, of course this exists: https://randomyoutube.net/watch
    You too can experience the life of a random sample youtube watcher. Good luck finding anything objectionable

    Written stuff is still language; that's the first thing I'd try to filter for. The source thing is a bigger issue.
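
    What filtering for written stuff might look like in practice, sketched under the assumption that frames can be OCRed with pytesseract and Pillow; classify_text() is a stand-in for whatever transcript classifier is assumed upstream.

    import pytesseract
    from PIL import Image

    def burned_in_text(frame_paths):
        """OCR a sample of frames and join whatever text appears on screen."""
        return " ".join(pytesseract.image_to_string(Image.open(p)) for p in frame_paths)

    def flags_transcript_dodgers(frame_paths, classify_text, threshold=0.8):
        # Catches the embed-a-transcript-and-swap-the-audio trick quoted above.
        text = burned_in_text(frame_paths)
        return bool(text.strip()) and classify_text(text) > threshold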

  • shryke Member of the Beast Registered User regular
    edited January 2018
    Phyphor wrote: »
    shryke wrote: »
    You don't actually have to review everything that gets uploaded to Youtube. You could do a lot just setting up a review department and passing them a semi-random sample of new videos. You won't catch everything but you could probably do a lot to clean shit up. Prioritize accounts based on certain metrics, set up machine learning systems to run in parallel to evaluate the effectiveness, evolve the program over time as you figure out where problem videos are most likely to come from, etc, etc.

    None of this is hard to come up with or hard to implement. They just don't wanna spend the fucking money.

    The vast vast majority of content is not objectionable though. Even those kids' videos that everybody was talking about a while back, most of those are just weird but not actually harmful in any way. You'd spend your day watching random weird youtube videos and maybe hit one actually bad one - and keep in mind that many people will just cry "censorship" unless you are incredibly selective about removing content

    Yes, and?

    Like, is this supposed to be an argument against anything? Who cares if you don't actually see anything that needs banning most of the time. That's how this kind of thing always works. It's not an argument against anything.

    Shivahn wrote: »
    I think, though, that forcing people to obfuscate it is a pretty big win? If it's hard to find ISIS recruitment videos, there's going to be a lot fewer people watching them. You also have the benefit of being able to generally use language for features. I don't think the fact that it's imperfect means it shouldn't be tried, just that it's not perfect. Making things harder for adversaries is still a win.

    Yup. This is the other bullshit argument people make. "Well, you can't stop everything! Or they'll just hide it!"

    Who cares? Making them have to work to hide or sneak these things around is good. That's a win. The harder it is for the censors to find them, the harder it is for everyone else too. If they have to go so abstract that it's no longer the thing it was that you were banning, you've won.

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    Kristmas Kthulhu wrote: »
    Also, if the people who wanted Goatse on the projector were forced to make it non-offensive to get around the neural net, that is a success. Abstract depictions of gross things that don't register as gross to the average person are vastly preferable to the original gross thing.

    Yes. That was the point of the story. It was funny as hell.

    But there's a lot more ways to obfuscate propaganda videos and still retain the core messaging than there are for a still image.

  • mrondeau Montréal, Canada Registered User regular
    Kristmas Kthulhu wrote: »
    Also, if the people who wanted Goatse on the projector were forced to make it non-offensive to get around the neural net, that is a success. Abstract depictions of gross things that don't register as gross to the average person are vastly preferable to the original gross thing.

    I can, 100%, get Goatse on that projector by modifying pixels in a way that's invisible to humans.

    This applies to any image you want to detect using a neural network.

    Machine learning is as awesome as it is limited.
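
    What mrondeau describes is known in the literature as an adversarial example. Here is a sketch of the classic fast gradient sign method (Goodfellow et al.), assuming a PyTorch image classifier; model and label are placeholders, not anything from the thread.

    import torch
    import torch.nn.functional as F

    def fgsm(model, image, label, eps=2 / 255):
        """Perturb `image` so it tends to fool `model`; eps this small is imperceptible."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel slightly in the direction that increases the loss.
        adversarial = image + eps * image.grad.sign()
        return adversarial.clamp(0, 1).detach()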

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    shryke wrote: »
    Phyphor wrote: »
    shryke wrote: »
    You don't actually have to review everything that gets uploaded to Youtube. You could do a lot just setting up a review department and passing them a semi-random sample of new videos. You won't catch everything but you could probably do a lot to clean shit up. Prioritize accounts based on certain metrics, set up machine learning systems to run in parallel to evaluate the effectiveness, evolve the program over time as you figure out where problem videos are most likely to come from, etc, etc.

    None of this is hard to come up with or hard to implement. They just don't wanna spend the fucking money.

    The vast vast majority of content is not objectionable though. Even those kids' videos that everybody was talking about a while back, most of those are just weird but not actually harmful in any way. You'd spend your day watching random weird youtube videos and maybe hit one actually bad one - and keep in mind that many people will just cry "censorship" unless you are incredibly selective about removing content

    Yes, and?

    Like, is this supposed to be an argument against anything? Who cares if you don't actually see anything that needs banning most of the time. That's how this kind of thing always works. It's not an argument against anything.

    Shivahn wrote: »
    I think, though, that forcing people to obfuscate it is a pretty big win? If it's hard to find ISIS recruitment videos, there's going to be a lot fewer people watching them. You also have the benefit of being able to generally use language for features. I don't think the fact that it's imperfect means it shouldn't be tried, just that it's not perfect. Making things harder for adversaries is still a win.

    Yup. This is the other bullshit argument people make. "Well, you can't stop everything! Or they'll just hide it!"

    Who cares? Making them have to work to hide or sneak these things around is good. That's a win. The harder it is for the censors to find them, the harder it is for everyone else too. If they have to go so abstract that it's no longer the thing it was that you were banning, you've won.

    But like Phyphor said, their efforts are largely direct marketing campaigns run by dedicated recruiters.

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    edited January 2018
    shryke wrote: »
    Phyphor wrote: »
    shryke wrote: »
    You don't actually have to review everything that gets uploaded to Youtube. You could do a lot just setting up a review department and passing them a semi-random sample of new videos. You won't catch everything but you could probably do a lot to clean shit up. Prioritize accounts based on certain metrics, set up machine learning systems to run in parallel to evaluate the effectiveness, evolve the program over time as you figure out where problem videos are most likely to come from, etc, etc.

    None of this is hard to come up with or hard to implement. They just don't wanna spend the fucking money.

    The vast vast majority of content is not objectionable though. Even those kids' videos that everybody was talking about a while back, most of those are just weird but not actually harmful in any way. You'd spend your day watching random weird youtube videos and maybe hit one actually bad one - and keep in mind that many people will just cry "censorship" unless you are incredibly selective about removing content

    Yes, and?

    Like, is this supposed to be an argument against anything? Who cares if you don't actually see anything that needs banning most of the time. That's how this kind of thing always works. It's not an argument against anything.
    It's utterly ineffective. People are really bad at concentrating hard constantly for long periods of time with no end. You could legitimately go for days without seeing anything banworthy, so would you be paying attention when it comes along? And then there's the question of what's even bad enough to ban in the first place. Things that your advertisers don't like? Actually illegal stuff? Drugs? Not to mention that you would need to speak so many languages; transcription and machine translation are still really poor
    Shivahn wrote: »
    I think, though, that forcing people to obfuscate it is a pretty big win? If it's hard to find ISIS recruitment videos, there's going to be a lot fewer people watching them. You also have the benefit of being able to generally use language for features. I don't think the fact that it's imperfect means it shouldn't be tried, just that it's not perfect. Making things harder for adversaries is still a win.

    Yup. This is the other bullshit argument people make. "Well, you can't stop everything! Or they'll just hide it!"

    Who cares? Making them have to work to hide or sneak these things around is good. That's a win. The harder it is for the censors to find them, the harder it is for everyone else too. If they have to go so abstract that it's no longer the thing it was that you were banning, you've won.

    An abstract isis recruiting video is still an isis recruiting video? It still conveys the same idea. MS Paint goatse is still recognizable as goatse if you're looking for it. If you're not, then you won't know, but we're not concerned about the people who aren't looking for it

    Like "making them harder for the censors to find" means changing the content and that's it, so basically everybody who sees them now will still see them

  • Ziggymon Registered User regular
    I think that the reasons for large companies boycotting youtube actually have quite little to do with extremism; it's a power move to gain more control over user-generated content in general.

  • shryke Member of the Beast Registered User regular
    Phyphor wrote: »
    shryke wrote: »
    Phyphor wrote: »
    shryke wrote: »
    You don't actually have to review everything that gets uploaded to Youtube. You could do a lot just setting up a review department and passing them a semi-random sample of new videos. You won't catch everything but you could probably do a lot to clean shit up. Prioritize accounts based on certain metrics, set up machine learning systems to run in parallel to evaluate the effectiveness, evolve the program over time as you figure out where problem videos are most likely to come from, etc, etc.

    None of this is hard to come up with or hard to implement. They just don't wanna spend the fucking money.

    The vast vast majority of content is not objectionable though. Even those kids' videos that everybody was talking about a while back, most of those are just weird but not actually harmful in any way. You'd spend your day watching random weird youtube videos and maybe hit one actually bad one - and keep in mind that many people will just cry "censorship" unless you are incredibly selective about removing content

    Yes, and?

    Like, is this supposed to be an argument against anything? Who cares if you don't actually see anything that needs banning most of the time. That's how this kind of thing always works. It's not an argument against anything.
    It's utterly ineffective. People are really bad at concentrating hard constantly for long periods of time with no end. You could legitimately go for days without seeing anything banworthy, so would you be paying attention when it comes along? And then there's the question of what's even bad enough to ban in the first place. Things that your advertisers don't like? Actually illegal stuff? Drugs? Not to mention that you would need to speak so many languages; transcription and machine translation are still really poor

    Nah, this is bullshit. Tons of jobs work like this. It's basically how any job anything like this works. You spend 99% of your time doing nothing. Fighting off inattention becomes one of the big things you have to work on when managing the work and how you set up shifts and all that stuff. But these are problems that have been solved or at least mitigated.

    This is not a thing that has ever stopped a job from existing. This is not unique. This is not different. This is not unusual. This is normal and the work still gets done.


    Shivahn wrote: »
    I think, though, that forcing people to obfuscate it is a pretty big win? If it's hard to find ISIS recruitment videos, there's going to be a lot fewer people watching them. You also have the benefit of being able to generally use language for features. I don't think the fact that it's imperfect means it shouldn't be tried, just that it's not perfect. Making things harder for adversaries is still a win.

    Yup. This is the other bullshit argument people make. "Well, you can't stop everything! Or they'll just hide it!"

    Who cares? Making them have to work to hide or sneak these things around is good. That's a win. The harder it is for the censors to find them, the harder it is for everyone else too. If they have to go so abstract that it's no longer the thing it was that you were banning, you've won.

    An abstract isis recruiting video is still an isis recruiting video? It still conveys the same idea. MS Paint goatse is still recognizable as goatse if you're looking for it. If you're not, then you won't know, but we're not concerned about the people who aren't looking for it

    Like "making them harder for the censors to find" means changing the content and that's it, so basically everybody who sees them now will still see them

    Is it? That's an assumption and not a good one. The harder your videos are to censor, the harder they will be to find and the harder time they will have conveying their message because those factors are the very things that get them censored. If it still conveys the idea, it still gets censored.

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    shryke wrote: »
    Phyphor wrote: »
    shryke wrote: »
    Phyphor wrote: »
    shryke wrote: »
    You don't actually have to review everything that gets uploaded to Youtube. You could do a lot just setting up a review department and passing them a semi-random sample of new videos. You won't catch everything but you could probably do a lot to clean shit up. Prioritize accounts based on certain metrics, set up machine learning systems to run in parallel to evaluate the effectiveness, evolve the program over time as you figure out where problem videos are most likely to come from, etc, etc.

    None of this is hard to come up with or hard to implement. They just don't wanna spend the fucking money.

    The vast vast majority of content is not objectionable though. Even those kids' videos that everybody was talking about a while back, most of those are just weird but not actually harmful in any way. You'd spend your day watching random weird youtube videos and maybe hit one actually bad one - and keep in mind that many people will just cry "censorship" unless you are incredibly selective about removing content

    Yes, and?

    Like, is this supposed to be an argument against anything? Who cares if you don't actually see anything that needs banning most of the time. That's how this kind of thing always works. It's not an argument against anything.
    It's utterly ineffective. People are really bad at concentrating hard constantly for long periods of time with no end. You could legitimately go for days without seeing anything banworthy, so would you be paying attention when it comes along? And then there's the question of what's even bad enough to ban in the first place. Things that your advertisers don't like? Actually illegal stuff? Drugs? Not to mention that you would need to speak so many languages; transcription and machine translation are still really poor

    Nah, this is bullshit. Tons of jobs work like this. It's basically how any job anything like this works. You spend 99% of your time doing nothing. Fighting off inattention becomes one of the big things you have to work on when managing the work and how you set up shifts and all that stuff. But these are problems that have been solved or at least mitigated.

    This is not a thing that has ever stopped a job from existing. This is not unique. This is not different. This is not unusual. This is normal and the work still gets done.


    Shivahn wrote: »
    I think, though, that forcing people to obfuscate it is a pretty big win? If it's hard to find ISIS recruitment videos, there's going to be a lot fewer people watching them. You also have the benefit of being able to generally use language for features. I don't think the fact that it's imperfect means it shouldn't be tried, just that it's not perfect. Making things harder for adversaries is still a win.

    Yup. This is the other bullshit argument people make. "Well, you can't stop everything! Or they'll just hide it!"

    Who cares? Making them have to work to hide or sneak these things around is good. That's a win. The harder it is for the censors to find them, the harder it is for everyone else too. If they have to go so abstract that it's no longer the thing it was that you were banning, you've won.

    An abstract isis recruiting video is still an isis recruiting video? It still conveys the same idea. MS Paint goatse is still recognizable as goatse if you're looking for it. If you're not, then you won't know, but we're not concerned about the people who aren't looking for it

    Like "making them harder for the censors to find" means changing the content and that's it, so basically everybody who sees them now will still see them

    Is it? That's an assumption and not a good one. The harder your videos are to censor, the harder they will be to find and the harder time they will have conveying their message because those factors are the very things that get them censored. If it still conveys the idea, it still gets censored.

    No, because they aren't on YouTube so that people can find them there; it's just free, embeddable content that can be posted where they actually do their recruitment.

    This is what I was getting at when I said I didn't know how big of a deal ISIS recruitment videos on YouTube were.

    And if YouTube somehow figures out advanced AI all of a sudden and it becomes impossible to post ISIS recruitment videos, there are a lot of other sites that host video content that don't have access to the braintrust, finances, and technology of Google.

    I think y'all have fallen for marketing speak. Machine learning's great and it will only get better, but it's still in its infancy.

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    shryke wrote: »
    Shivahn wrote: »
    I think, though, that forcing people to obfuscate it is a pretty big win? If it's hard to find ISIS recruitment videos, there's going to be a lot fewer people watching them. You also have the benefit of being able to generally use language for features. I don't think the fact that it's imperfect means it shouldn't be tried, just that it's not perfect. Making things harder for adversaries is still a win.

    Yup. This is the other bullshit argument people make. "Well, you can't stop everything! Or they'll just hide it!"

    Who cares? Making them have to work to hide or sneak these things around is good. That's a win. The harder it is for the censors to find them, the harder it is for everyone else too. If they have to go so abstract that it's no longer the thing it was that you were banning, you've won.

    An abstract isis recruiting video is still an isis recruiting video? It still conveys the same idea. MS Paint goatse is still recognizable as goatse if you're looking for it. If you're not, then you won't know, but we're not concerned about the people who aren't looking for it

    Like "making them harder for the censors to find" means changing the content and that's it, so basically everybody who sees them now will still see them

    Is it? That's an assumption and not a good one. The harder your videos are to censor, the harder they will be to find and the harder time they will have conveying their message because those factors are the very things that get them censored. If it still conveys the idea, it still gets censored.

    The problem for the censors is identifying the video in the first place which is orthogonal to the content being able to get its message across. They don't have to pass a human screening them because that's impractical. There's no algorithm for detecting "bad" content and even if there was it would be different for each type of "bad" content. Any sort of machine learning can be gotten around by changing the audio track, changing the video, changing the length, adding pre-roll and post-roll content to fool the classifier, etc. Hell just put the entire video through a filter or do what people uploading tv shows do and have the video play in a box in the middle of a screen, that works.

    You make and upload a video and post it somewhere on the internet where people will see it. As the censor you have to identify it based on content alone, picking the needle that may or may not exist out of the conveyor belt of haystacks, and if you mess up people still just go "fucking google why don't they fix this how hard can it be." Your suggestion of sampling doesn't even work here as you'll only catch x% of attempts, where x would realistically be around 1. They can make a new account for every video if they want. They could even upload inoffensive videos for a while before posting the isis video if you try to scrutinize new accounts more closely, and this can all even be automated.
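
    The arithmetic behind that sampling objection, for anyone who wants to check it: at a 1% review rate each individual upload almost always slips through, and a catch removes only that copy, not the uploader.

    review_rate = 0.01  # probability any single upload is sampled for review
    for uploads in (1, 10, 100):
        caught = 1 - (1 - review_rate) ** uploads
        print(f"{uploads:4d} uploads -> {caught:.1%} chance any is ever reviewed")
    # 1 upload -> 1.0%, 10 -> 9.6%, 100 -> 63.4%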

  • Kristmas Kthulhu Currently Kultist Kthulhu Registered User regular
    I thought we were only concerned with Youtube (and Vimeo, sure), as they're what 99% of people use when they go looking for videos on the internet. Having fewer locations where extremists can post their videos, so that they can only be found by people trying *really hard* to find them, is the whole point. Like, I'm fairly certain everyone here understands there's never going to be a 100% effective single tool to curb this shit, so the fact that it's being brought up as a counter argument feels like a red herring.

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    I thought we were only concerned with Youtube (and Vimeo, sure), as they're what 99% of people use when they go looking for videos on the internet. Having fewer locations where extremists can post their videos, so that they can only be found by people trying *really hard* to find them, is the whole point. Like, I'm fairly certain everyone here understands there's never going to be a 100% effective single tool to curb this shit, so the fact that it's being brought up as a counter argument feels like a red herring.

    They "post" them in the sense that I can post a youtube link here and you can watch it (without even leaving here too). You don't have to find the videos through youtube's search feature, they can be posted more or less directly to their audience (and any extra views would just be a bonus)

  • redx I(x)=2(x)+1 whole numbers Registered User regular
    Phyphor wrote: »
    I thought most people see them by being posted on forums and such, not by actually being titled ISIS RECRUITING VIDEO WATCH WESTERN INFIDELS. And what happens once they figure out the audio is what is being used, and just embed a transcript of the audio in the video and replace the audio with some music or something?

    Well, you look at the referer header on requests associated with ISIS garbage, and then block other stuff with the same known bad referer, or at least raise it to where it gets looked at by a person.

    Probably also report that Facebook group/site to the authorities, and probably to whatever reputation services are out there, and let Facebook or whatever service hosts it know about the problem.

    It's whack-a-mole still, but you could find lots of content to shut down. There's also the thing where YouTube is owned by Google, who are pretty good at searching the web for stuff. Maybe someone could spend 10% of their time mapping those networks and the content they link to.
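
    A sketch of that referer-tracing idea: when a video is confirmed bad, remember the Referer values that sent traffic to it, then escalate (not auto-block) anything else those same pages link to. Every name here is made up for illustration.

    known_bad_referers = set()

    def on_confirmed_bad(video_id, request_log):
        # Harvest where the bad video's traffic came from.
        for request in request_log.requests_for(video_id):
            if request.referer:
                known_bad_referers.add(request.referer)

    def check_request(request, review_queue):
        # Same known-bad source? Raise it so it gets looked at by a person.
        if request.referer in known_bad_referers:
            review_queue.add(request.video_id, reason=f"traffic from {request.referer}")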

    They moistly come out at night, moistly.
  • SniperGuy SniperGuyGaming Registered User regular
    Phyphor wrote: »
    shryke wrote: »
    You don't actually have to review everything that gets uploaded to Youtube. You could do a lot just setting up a review department and passing them a semi-random sample of new videos. You won't catch everything but you could probably do a lot to clean shit up. Prioritize accounts based on certain metrics, set up machine learning systems to run in parallel to evaluate the effectiveness, evolve the program over time as you figure out where problem videos are most likely to come from, etc, etc.

    None of this is hard to come up with or hard to implement. They just don't wanna spend the fucking money.

    The vast, vast majority of content is not objectionable though. Even those kids' videos that everybody was talking about a while back, most of those are just weird but not actually harmful in any way. You'd spend your day watching random weird youtube videos and maybe hit one actually bad one - and keep in mind that many people will just cry "censorship" unless you are incredibly selective about removing content.

    Actually most of those seem pretty harmful, like the weirdly edited ones of a violent dentist visit, or a bunch of disembodied heads floating around. They are taking shows designed to educate kids and doing all sorts of weird shit to them, and parents aren't noticing. There's absolutely some very weird, very fucked up stuff getting auto-generated by bots and shit trying to follow trends to make youtube money. It's deceptive and creepy, and there's a lot of it out there, but we probably won't see most of it because of how youtube serves stuff up.

    I think they at least need to try filtering this stuff out and hiring more people to get more human eyes on things. Youtube does not have a good track record in this department. Sure, it's difficult, but if they aren't even attempting it, they aren't going to find good ways to fix it either.
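
    shryke's proposal quoted above maps onto a fairly standard triage pattern: score each upload by account-level metrics, guarantee human review of the riskiest slice, and semi-randomly sample the rest. A minimal sketch; the field names and weights are invented stand-ins for whatever metrics such a system would actually tune or learn over time.

        import random

        def risk_score(account):
            # Toy heuristic only: prior strikes and account newness raise
            # the score. A real system would learn these weights over time.
            return account["strikes"] * 10 + max(0, 30 - account["age_days"])

        def pick_for_review(uploads, top_k=100, sample_rate=0.01, seed=0):
            """Send the top_k riskiest uploads to the review department,
            plus a semi-random sample of the rest so even low-scoring
            accounts still face some chance of a human look."""
            rng = random.Random(seed)
            ranked = sorted(uploads, key=lambda u: risk_score(u["account"]),
                            reverse=True)
            head, tail = ranked[:top_k], ranked[top_k:]
            return head + [u for u in tail if rng.random() < sample_rate]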

  • Options
    HamHamJHamHamJ Registered User regular
    It seems like YouTube wants to be a platform just for official corporate content and not user created content.

    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Options
    PhyphorPhyphor Building Planet Busters Tasting Fruit Registered User regular
    SniperGuy wrote: »
    Phyphor wrote: »
    shryke wrote: »
    You don't actually have to review everything that gets uploaded to Youtube. You could do a lot just setting up a review department and passing them a semi-random sample of new videos. You won't catch everything but you could probably do a lot to clean shit up. Prioritize accounts based on certain metrics, set up machine learning systems to run in parallel to evaluate the effectiveness, evolve the program over time as you figure out where problem videos are most likely to come from, etc, etc.

    None of this is hard to come up with or hard to implement. They just don't wanna spend the fucking money.

    The vast, vast majority of content is not objectionable though. Even those kids' videos that everybody was talking about a while back, most of those are just weird but not actually harmful in any way. You'd spend your day watching random weird youtube videos and maybe hit one actually bad one - and keep in mind that many people will just cry "censorship" unless you are incredibly selective about removing content.

    Actually most of those seem pretty harmful, like the weirdly edited ones of a violent dentist visit, or a bunch of disembodied heads floating around. They are taking shows designed to educate kids and doing all sorts of weird shit to them, and parents aren't noticing. There's absolutely some very weird, very fucked up stuff getting auto-generated by bots and shit trying to follow trends to make youtube money. It's deceptive and creepy, and there's a lot of it out there, but we probably won't see most of it because of how youtube serves stuff up.

    I think they at least need to try filtering this stuff out and hiring more people to get more human eyes on things. Youtube does not have a good track record in this department. Sure, it's difficult, but if they aren't even attempting it, they aren't going to find good ways to fix it either.

    Well, I'm of the opinion that trying to ensure videos are safe for children to watch unsupervised is a truly Sisyphean task, one that is doomed to fail outside of a highly restricted, curated environment. That actually can't be done except by just having people watch the entire thing, and people don't scale.

  • Options
    SniperGuySniperGuy SniperGuyGaming Registered User regular
    Phyphor wrote: »
    SniperGuy wrote: »
    Phyphor wrote: »
    shryke wrote: »
    You don't actually have to review everything that gets uploaded to Youtube. You could do a lot just setting up a review department and passing them a semi-random sample of new videos. You won't catch everything but you could probably do a lot to clean shit up. Prioritize accounts based on certain metrics, set up machine learning systems to run in parallel to evaluate the effectiveness, evolve the program over time as you figure out where problem videos are most likely to come from, etc, etc.

    None of this is hard to come up with or hard to implement. They just don't wanna spend the fucking money.

    The vast, vast majority of content is not objectionable though. Even those kids' videos that everybody was talking about a while back, most of those are just weird but not actually harmful in any way. You'd spend your day watching random weird youtube videos and maybe hit one actually bad one - and keep in mind that many people will just cry "censorship" unless you are incredibly selective about removing content.

    Actually most of those seem pretty harmful, like the weirdly edited ones of a violent dentist visit, or a bunch of disembodied heads floating around. They are taking shows designed to educate kids and doing all sorts of weird shit to them, and parents aren't noticing. There's absolutely some very weird, very fucked up stuff getting auto-generated by bots and shit trying to follow trends to make youtube money. It's deceptive and creepy, and there's a lot of it out there, but we probably won't see most of it because of how youtube serves stuff up.

    I think they at least need to try filtering this stuff out and hiring more people to get more human eyes on things. Youtube does not have a good track record in this department. Sure, it's difficult, but if they aren't even attempting it, they aren't going to find good ways to fix it either.

    Well, I'm of the opinion that trying to ensure videos are safe for children to watch unsupervised is a truly Sisyphean task, one that is doomed to fail outside of a highly restricted, curated environment. That actually can't be done except by just having people watch the entire thing, and people don't scale.

    But when the horrifying video of Peppa Pig going to a dentist, with added disturbing sound effects, is identified by a popular news article, why doesn't YouTube look into that? Sure, that's a very difficult task, but when there are clear things they should probably take a look at, they seem to be ignoring them instead. I checked, it's still up! That seems like something they could absolutely tackle and fix, but just...aren't?

    This isn't a fight you can ever win, but it is a fight you should be having.
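
    Catching re-uploads of an already-identified video like that is one of the more tractable pieces: exact file hashes break as soon as someone re-encodes, but perceptual hashes of sampled frames mostly survive it. A toy sketch of one classic technique (difference hashing, via Pillow); the distance threshold is invented, and none of this claims to describe YouTube's actual matching systems.

        from PIL import Image

        def dhash(frame, size=8):
            """Difference hash of one frame: shrink to (size+1) x size
            grayscale, then record whether each pixel is brighter than
            its right-hand neighbour. Small edits barely change it."""
            img = frame.convert("L").resize((size + 1, size), Image.LANCZOS)
            px = list(img.getdata())
            bits = 0
            for row in range(size):
                for col in range(size):
                    i = row * (size + 1) + col
                    bits = (bits << 1) | (px[i] > px[i + 1])
            return bits

        def near_duplicate(hash_a, hash_b, max_distance=5):
            # Small Hamming distance between hashes => likely the same frame.
            return bin(hash_a ^ hash_b).count("1") <= max_distance

        # Hash a few sampled frames of each new upload; anything matching a
        # frame of the known-bad video gets queued for a human to confirm.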

  • Options
    Kristmas KthulhuKristmas Kthulhu Currently Kultist Kthulhu Registered User regular
    Phyphor wrote: »
    Kristmas Kthulhu wrote: »
    I thought we were only concerned with Youtube (and Vimeo, sure), as they're what 99% of people use when they go looking for videos on the internet. Having fewer locations where extremists can post their videos, so that those videos can only be found by people trying *really hard*, is the whole point. Like, I'm fairly certain everyone here understands there's never going to be a 100% effective single tool to curb this shit, so the fact that it's being brought up as a counterargument feels like a red herring.

    They "post" them in the sense that I can post a youtube link here and you can watch it (without even leaving this page). You don't have to find the videos through youtube's search feature; they can be posted more or less directly to their audience (and any extra views would just be a bonus).

    Right, so then one has to go to a site that allows those sorts of videos to be posted, of which there can't be many, and which police can monitor. I don't know what it is you're arguing, as all of the ways you mention that people can use to bypass an algorithm or hide/code their message still lead to fewer people being exposed to that message. That's the whole point of what I'm saying.
