
Facebook And Research Consent [Tech Ethics {And The Lack Thereof}]


Posts

  • Squidget0Squidget0 Registered User regular
    Arch wrote: »
    Squidget0 wrote: »
    Arch wrote: »
    Squidge, you've missed my point by such a distance I'm not sure how to respond.

    It is worth noting that League of Legends directly told players they were instituting measures to control player behavior.

    You will also note that I have praised their work many times, because I knew what was going on (that is, I consented to the research).

    The whole issue of consent is what people are upset about, because if EULAs count as informed consent for manipulation experiments now, then we have undone a fair amount of ethical scientific precedent.

    I think the problem with this line of argument is that you haven't defined "manipulation experiment" very well. Is something an experiment because you say it's an experiment? Because you wrote stuff down? What separates a "manipulation experiment" from market research? Is it just the paper that makes it an experiment, or is there something substantive you can point to there?

    For example, let's say that I am bored in meetings all day. While in these meetings, I start to change my posture day to day, to amuse myself. One day I might slouch, and the next I might sit up straight in a powerful position. Nothing going on there other than boredom.

    Eventually I start watching the people around me, and notice that there are subtle reactions to the way that I sit. Maybe the people sitting near me when I sit up straight also perk up a bit and seem more active themselves. Maybe the people sitting near me when I slouch fall asleep more often. If I stare out the window, other people start staring out the window too. Nothing really unusual here so far, these kinds of minor manipulations are common in anyone's daily life.

    Perhaps I become interested in this effect and decide to measure it better. I start writing down my observations, measuring how the people around me react. I keep a log of each position and the reactions it gets, and since I'm in a lot of meetings I start to track the data over a long period to see which positions get the most noticeable reactions.

    At what point in that chain did my boredom become a "manipulation experiment"? And more importantly, how responsible am I if one of my co-workers later commits suicide? Could it have been my careless posturing that pushed them over the edge?

    What the fuck even is this? This makes zero sense, for the reasons @zagdrob‌ already mentioned, but seriously dude. Do you not know how science is conducted? Or what actually constitutes a scientific experiment, and why it is different than "market research"?

    Actually, I take that last criticism back- I'm kind of curious as to what kind of ethical standards market research is held to, and if/why they may have a different (or no) ethical governing board.

    But, Squidge, you are right about something. I shouldn't call it a "manipulation experiment"- that term is kind of meaningless, or rather it doesn't have a set ethical definition for the IRB.

    Please, though, read this link- you need to understand the terms at play here before you can even begin your argument.

    I'll quote the relevant bits, so we are all working on the same definitions.
    What is the definition of Research?

    Research is defined in the Code of Federal Regulations as a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge.

    Alright, this experiment easily falls under the category of "research".

    Here is where everyone is super mad-
    What is the definition of Intervention?

    Intervention includes both physical procedures by which data are gathered and manipulations of the subject or subject’s environment that are performed for research purposes.

    Oh hey, looks like the experiment literally falls under this description. And why everyone is mad, once again, is that they performed intervention research, on human subjects, without having proper IRB approval, and substituted in Facebook's EULA as their "consent" form.

    I'm not going to spend more time playing word games using silly made up situations here. This experiment was clearly research, on human subjects, and it was clearly an intervention study as defined by the ethical board enacted to oversee such things. They didn't follow correct scientific ethics in performing their research, and thus the current controversy.

    I agree that a literal reading of the definitions you listed would suggest that this kind of experimentation is unethical. I just think it's also pretty silly, and that the framework clearly wasn't written with these kinds of interactions in mind. The comparison to the Milgram experiments is silly and hyperbolic. I have yet to see any actual harm demonstrated that would not have occurred in the standard comings-and-goings of Facebook algorithms. The arguments for why it's wrong seem to come down to the terminology, rather than the action. "They published it as a research paper!" If they'd published the same words on their blog the way some tech companies do all the time, that would be different? The text and the actions are what matter; the titles ("Academic vs. Market Researcher") are ultimately just semantics.

    I do think there is an interesting ethical discussion here on who actually owns the data on your Facebook page. The legal answer is that Facebook owns it - they pay for the servers, and they have supreme power over who has access to the page and what gets displayed on it at any given time. You seem to have ethical issues with Facebook running experiments on the data that comes through their pages, yet it seems to me that they have substantially more standing than their users in deciding what happens to their algorithms and their data. So who should be required to consent in a study of a Facebook page? The user? Facebook? Both? Should everyone who might be featured on the page also have to consent, even in situations (like the friends search algorithm) where literally anyone could appear? I would say that Facebook probably has a much better legal claim than any user does, but ethically? Who knows.

    But yeah, overall this just seems like a non-issue to me. The idea that we're happy to be studied by advertisers but get angry when we're studied by scientists probably says more about us than it does about Facebook or the researchers involved.

  • OptyOpty Registered User regular
    The difference between advertisers and scientists is that we expect advertisers to be manipulative towards us, whereas scientists are supposed to be helpful. Any time a scientist does something like this, they erode some of the trust people have in scientists, and that hurts all of science. As already mentioned, Tuskegee being such a shit show means getting black folks to participate in studies is extremely hard.

  • The EnderThe Ender Registered User regular
    edited July 2014
    Quid wrote: »
    The Ender wrote: »
    Even a cynical reading of the Minerva Initiative guidelines doesn't come close to 'weapon'. Maybe general research that could be used down the lines to improve psychological operations, but not a weapon in and of itself.

    Story is here.
    They explicitly state that the intent is to develop a 'contagion' that can be used to foment coups in foreign states. That qualifies as a weapon in my opinion.

    Which part says the bolded?

    Story is here.


    EDIT: Of course, that article kind of goes off the deep end & draws extremely spurious conclusions from disparate pieces of data, but the AP report and linked material provided by Greenwald corroborate the underpinning fact: the U.S. DoD is using this research to figure out how to best weaponize social media (as are several other countries).

    With Love and Courage
  • BertezBertezBertezBertez Registered User regular
    Phoenix-D wrote: »
    Knight_ wrote: »
    some day EULAs will actually go to court and hopefully scotus has changed hands by then. because they are some hot bullshit.

    I especially love the "We can change this at any time, and don't actually have to tell you."

    The first part is bad enough, especially since they're changing the terms of the agreement AFTER I've paid them.

    Unfortunately our only recourse is to pray they don't alter them further.

    ...but I would say that

  • japanjapan Registered User regular
    Squidget0 wrote: »
    Arch wrote: »
    Squidget0 wrote: »
    Arch wrote: »
    Squidge, you've missed my point by such a distance I'm not sure how to respond.

    It is worth noting that League of Legends directly told players they were instituting measures to control player behavior.

    You will also note that I have praised their work many times, because I knew what was going on (that is, I consented to the research).

    The whole issue of consent is what people are upset about, because if EULAs count as informed consent for manipulation experiments now, then we have undone a fair amount of ethical scientific precedent.

    I think the problem with this line of argument is that you haven't defined "manipulation experiment" very well. Is something an experiment because you say it's an experiment? Because you wrote stuff down? What separates a "manipulation experiment" from market research? Is it just the paper that makes it an experiment, or is there something substantive you can point to there?

    For example, let's say that I am bored in meetings all day. While in these meetings, I start to change my posture day to day, to amuse myself. One day I might slouch, and the next I might sit up straight in a powerful position. Nothing going on there other than boredom.

    Eventually I start watching the people around me, and notice that there are subtle reactions to the way that I sit. Maybe the people sitting near me when I sit up straight also perk up a bit and seem more active themselves. Maybe the people sitting near me when I slouch fall asleep more often. If I stare out the window, other people start staring out the window too. Nothing really unusual here so far, these kinds of minor manipulations are common in anyone's daily life.

    Perhaps I become interested in this effect and decide to measure it better. I start writing down my observations, measuring how the people around me react. I keep a log of each position and the reactions it gets, and since I'm in a lot of meetings I start to track the data over a long period to see which positions get the most noticeable reactions.

    At what point in that chain did my boredom become a "manipulation experiment"? And more importantly, how responsible am I if one of my co-workers later commits suicide? Could it have been my careless posturing that pushed them over the edge?

    What the fuck even is this? This makes zero sense, for the reasons @zagdrob‌ already mentioned, but seriously dude. Do you not know how science is conducted? Or what actually constitutes a scientific experiment, and why it is different than "market research"?

    Actually, I take that last criticism back- I'm kind of curious as to what kind of ethical standards market research is held to, and if/why they may have a different (or no) ethical governing board.

    But, Squidge, you are right about something. I shouldn't call it a "manipulation experiment"- that term is kind of meaningless, or rather it doesn't have a set ethical definition for the IRB.

    Please, though, read this link- you need to understand the terms at play here before you can even begin your argument.

    I'll quote the relevant bits, so we are all working on the same definitions.
    What is the definition of Research?

    Research is defined in the Code of Federal Regulations as a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge.

    Alright, this experiment easily falls under the category of "research".

    Here is where everyone is super mad-
    What is the definition of Intervention?

    Intervention includes both physical procedures by which data are gathered and manipulations of the subject or subject’s environment that are performed for research purposes.

    Oh hey, looks like the experiment literally falls under this description. And why everyone is mad, once again, is that they performed intervention research, on human subjects, without having proper IRB approval, and substituted in Facebook's EULA as their "consent" form.

    I'm not going to spend more time playing word games using silly made up situations here. This experiment was clearly research, on human subjects, and it was clearly an intervention study as defined by the ethical board enacted to oversee such things. They didn't follow correct scientific ethics in performing their research, and thus the current controversy.

    I agree that a literal reading of the definitions you listed would suggest that this kind of experimentation is unethical. I just think it's also pretty silly, and that the framework clearly wasn't written with these kinds of interactions in mind. The comparison to the Milgram experiments is silly and hyperbolic. I have yet to see any actual harm demonstrated that would not have occurred in the standard comings-and-goings of Facebook algorithms. The arguments for why it's wrong seem to come down to the terminology, rather than the action. "They published it as a research paper!" If they'd published the same words on their blog the way some tech companies do all the time, that would be different? The text and the actions are what matter; the titles ("Academic vs. Market Researcher") are ultimately just semantics.

    I do think there is an interesting ethical discussion here on who actually owns the data on your Facebook page. The legal answer is that Facebook owns it - they pay for the servers, and they have supreme power over who has access to the page and what gets displayed on it at any given time. You seem to have ethical issues with Facebook running experiments on the data that comes through their pages, yet it seems to me that they have substantially more standing than their users in deciding what happens to their algorithms and their data. So who should be required to consent in a study of a Facebook page? The user? Facebook? Both? Should everyone who might be featured on the page also have to consent, even in situations (like the friends search algorithm) where literally anyone could appear? I would say that Facebook probably has a much better legal claim than any user does, but ethically? Who knows.

    But yeah, overall this just seems like a non-issue to me. The idea that we're happy to be studied by advertisers but get angry when we're studied by scientists probably says more about us than it does about Facebook or the researchers involved.

    Demonstrating harm after the fact and reprimanding researchers has historically proven too weak a safeguard when it comes to experimentation on human beings. You are correct that non-academic research has not tended to be subject to the same scrutiny as academic research. Maybe it should, maybe it shouldn't, but what the academic community should emphatically not be permitting or supporting is researchers carrying out whatever studies they like and then submitting the ones where everything goes fine as academic research. That's the danger here, even in the absence of demonstrated harm. (I'm not convinced that there has been no harm, because the study is sufficiently poorly controlled that nobody seems to have checked - when you carry out research where your subjects are unwitting participants and you perform no follow-up, you pretty much guarantee that none will be brought to your attention.)

    As to who owns the data, that depends on jurisdiction. Certainly under UK law there are obligations and restrictions under the law as to what data you can collect, and what you can do as a data controller. I'm curious what the response would be to a Data Subject Access Request specifically querying whether the requestor was a subject in this experiment.

  • japanjapan Registered User regular
    Looks like since Facebook is an Irish-registered company, the relevant data protection regulations would be those in force in Ireland, which are broadly equivalent to the UK laws, being implementations of the same EU directive.

    Can't find an example of anyone actually making such a request with cursory googling. If they restricted the experiment to US citizens then there probably isn't anyone with standing to make such a request in the apparent absence of data access rights.

  • durandal4532durandal4532 Registered User regular
    I really want to stress that fucking with a half million customers produced exactly zero effective research. The paper is a pamphlet that uses a metaphor to make some real poorly controlled bullshit sound newsworthy. I mean not that really interesting work would actually justify not getting informed consent, but it would at least motivate it somewhat.

    Christ, my experiment in which participants read a list of words and were tested on which words they could remember had a fucking 2 page consent form with numbers to call if you were feeling frightened.

    We're all in this together
  • ArchArch Neat-o, mosquito! Registered User regular
    durandal4532 wrote: »
    I really want to stress that fucking with a half million customers produced exactly zero effective research. The paper is a pamphlet that uses a metaphor to make some real poorly controlled bullshit sound newsworthy. I mean not that really interesting work would actually justify not getting informed consent, but it would at least motivate it somewhat.

    Christ, my experiment in which participants read a list of words and were tested on which words they could remember had a fucking 2 page consent form with numbers to call if you were feeling frightened.

    To be fair, the words on the paper were "murder", "arson", and "steam summer sale".

  • PantsBPantsB Fake Thomas Jefferson Registered User regular
    edited July 2014
    Arch wrote: »
    I really want to stress that fucking with a half million customers produced exactly zero effective research. The paper is a pamphlet that uses a metaphor to make some real poorly controlled bullshit sound newsworthy. I mean not that really interesting work would actually justify not getting informed consent, but it would at least motivate it somewhat.

    Christ, my experiment in which participants read a list of words and were tested on which words they could remember had a fucking 2 page consent form with numbers to call if you were feeling frightened.

    To be fair, the words on the paper were "murder", "arson", and "steam summer sale".

    And the letters were cut out of magazines and glued under a picture of a loved one holding a current newspaper

    QEDMF xbl: PantsB G+
  • FeralFeral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉRegistered User regular
    edited July 2014
    Squidget0 wrote: »
    I have yet to see any actual harm demonstrated that would not have occurred in the standard comings-and-goings of Facebook algorithms. The arguments for why it's wrong seem to come down to the terminology, rather than the action. "They published it as a research paper!" If they'd published the same words on their blog the way some tech companies do all the time, that would be different? The text and the actions are what matter, the titles ("Academic vs. Market Researcher") are ultimately just semantics.

    This is yet another form of "if it's okay for advertisers, it's okay for scientists," which has been addressed multiple times in this thread. But just to reiterate some points, and to add a few more:
    • It's not really okay for marketers. Maybe you think it's okay for marketers. I don't think it's okay, regardless of the profession of whoever's doing it. Just because we don't have explicit ethical guidelines for marketers forbidding it doesn't mean we shouldn't.
    • Maybe it is okay for marketers to do some things that it's not okay for scientists to do. It is not prima facie obvious that marketers and scientists should be held to the same ethical standards.
    • I'm not aware of any marketing company deliberately manipulating the public with the explicit purpose of making people feel negative emotions in this way. An argument can be made that advertisers are trying to make you feel bad about your life so you'll buy their products to feel better - but, again, I'm not okay with that, and neither are a lot of other people. Regardless, I don't see that as analogous, because...
    • ...even in cases where advertisers do manipulate the emotional states of their viewers, they're not using information published by their viewers' friends and family to do it. I suspect that most people are far more skeptical of an ad for makeup or cars than they are of statuses posted by their college friends and in-laws. Regardless of Facebook's effect on me, my information might have been used to manipulate my friends, and I'm not okay with that, either.
    • There has been plenty of irritation over Facebook's "Top Stories" algorithms for their news feed. A lot of users wish that Facebook defaulted to chronological order, instead of ordering posts by an opaque algorithm. Most folks just accepted it as an irritation; some folks have quit Facebook over it. The usual presumption was that this was being done in order to increase focus on ads; now we've learned that this was also being done to make some people feel worse about their lives. It wasn't okay to begin with; it's even less acceptable now.

    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • AngelHedgieAngelHedgie Registered User regular
    edited July 2014
    Something that might explain how this issue snuck up on everyone - when asked why Facebook's CEO is so liked by his employees (Zuck has an approval rating that would make your average dictator envious), one responded with a disturbingly gushing paean to the man:
    Because he is just that awesome.

    There are several reasons why we "approve" of him:
    The story: He built this billion user and billion dollar company from his dorm room, overcame one obstacle after another, and assembled a company with the most talented employees in the world.
    The principles: He is dead-focused on "making the world more open and connected." The guy doesn't waver; all the investments in R&D and acquisitions have been along these lines.
    The heart: He was the biggest donor of 2013, and is generally a minimalist. He is clearly committed to Internet.org, even though that's not necessarily where the short term revenue increases are. We really feel he wants to change the world for the better.
    The guts: What other CEO has the... guts... to purchase a chat company for $19B??? It's a very smart purchase for various reasons, but still, $19B! Even other Silicon Valley CEOs acknowledge Zuck's fearlessness: http://read.bi/1n24ctW
    The wisdom: When we hear him speak, he gives us brain wrinkles. He has this uncanny ability to make all the right strategic moves, and when he explains the reasons for making those moves, it simply makes sense. Sure, mistakes have been made, and hindsight is 20/20, but at decision time, it was for all the right reasons.
    The trust: He doesn't make all the decisions, in fact far from it. We feel entrusted and empowered to drive our features the way we feel is best for the people that use Facebook. This is drastically different from many top-down corporations. We're happy with the balance between management-mandated and grass-roots-inspired decision making.
    The character: He wears T-Shirts and jeans, talks with humility, and he just seems generally very approachable. We like that.
    The business: Facebook is a rock solid business that is rapidly increasing in revenue as we speak. It makes more than 70% more in revenue than it was making just one year ago.
    The free food and perks: Yes, this makes us like him and the company too. He has the ability to put an end to it at any time, but he keeps it coming :-). If somebody gives me free cookies, I like them, this part is not rocket science.
    And, no, having a lower approval rating is not a good thing. People don't "approve" because they agree with everything, rather they know that they have a say, and that their opinion matters. It's a good thing to like your boss.

    Edit: Added the full monstrosity.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • AngelHedgieAngelHedgie Registered User regular
    And Sandberg "leans in" to apologize for the study:
    “This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated,” Sandberg, Facebook’s chief operating officer, said while in New Delhi. “And for that communication we apologize. We never meant to upset you.”

  • hippofanthippofant ティンク Registered User regular
    AngelHedgie wrote: »
    And Sandberg "leans in" to apologize for the study:
    “This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated,” Sandberg, Facebook’s chief operating officer, said while in New Delhi. “And for that communication we apologize. We never meant to upset you.”

    I feel like you put the quotation marks around the wrong word.

  • MvrckMvrck Dwarven MountainhomeRegistered User regular
    AngelHedgie wrote: »
    And Sandberg "leans in" to apologize for the study:
    “This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated,” Sandberg, Facebook’s chief operating officer, said while in New Delhi. “And for that communication we apologize. We never meant to upset you.”

    You quite literally did.

  • Loren MichaelLoren Michael Registered User regular
    [image: research_ethics.png]

  • DedwrekkaDedwrekka Metal Hell adjacentRegistered User regular
    Rather than use the fact that advertisers willingly intervene in people's emotions as an example of why it's okay to do this as part of a scientific experiment, we should perhaps use this as a jumping-off point for questioning at what point advertising should be regulated in the same manner as scientific experimentation.

    At what point is it wrong for advertisers to intervene in the lives of people without oversight or regulation?

  • shrykeshryke Member of the Beast Registered User regular
    Yeah, good point. I'm sure no one has already pointed that out on every single fucking page of this thread.

    That would explain why the same argument keeps getting trotted out I'm sure...

  • Squidget0Squidget0 Registered User regular
    edited July 2014
    Feral wrote: »
    [*] It's not really okay for marketers. Maybe you think it's okay for marketers. I don't think it's okay, regardless of the profession of whoever's doing it. Just because we don't have explicit ethical guidelines for marketers forbidding it doesn't mean we shouldn't.

    [*] Maybe it is okay for marketers to do some things that it's not okay for scientists to do. It is not prima facie obvious that marketers and scientists should be held to the same ethical standards.

    So let's have that conversation. What restrictions do you think marketers should be under in their use of Facebook data? Why do you think that those restrictions should be different for scientists?
    Feral wrote: »
    [*] I'm not aware of any marketing company deliberately manipulating the public with the explicit purpose of making people feel negative emotions in this way. An argument can be made that advertisers are trying to make you feel bad about your life so you'll buy their products to feel better - but, again, I'm not okay with that, and neither are a lot of other people. Regardless, I don't see that as analogous, because...

    [*] ...even in cases where advertisers do manipulate the emotional states of their viewers, they're not using information published by their viewers' friends and family to do it. I suspect that most people are far more skeptical of an ad for makeup or cars than they are of statuses posted by their college friends and in-laws. Regardless of Facebook's effect on me, my information might have been used to manipulate my friends, and I'm not okay with that, either.

    Er, no, they do all of this, openly. Selling your information to advertisers is literally Facebook's entire business model, and has been since the beginning. They are using information published by your friends and family to manipulate you, that's the very definition of targeted advertising on a site like Facebook.

    Maybe you're not okay with that! Fair enough! A lot of people are creeped out by this kind of thing. Just recognize that the alternative is probably for Facebook to use a subscription service model of some kind. Right now you are the product, not the customer, and someone has to pay the bills.

  • DedwrekkaDedwrekka Metal Hell adjacentRegistered User regular
    shryke wrote: »
    Yeah, good point. I'm sure no one has already pointed that out on every single fucking page of this thread.

    That would explain why the same argument keeps getting trotted out I'm sure...

    The two sides of the argument so far have generally been:

    *Advertisers get to do it, so it isn't a big deal

    versus

    *Advertisers getting to do it doesn't negate the ethical requirements for a scientific study

    There has been precious little discussion on reversing it and discussing why we allow advertisers to get away with possibly unethical practices without the bare minimum of regulation or oversight that we hold the scientific community to.

  • hippofanthippofant ティンク Registered User regular
    Dedwrekka wrote: »
    There has been precious little discussion on reversing it and discussing why we allow advertisers to get away with possibly unethical practices without the bare minimum of regulation or oversight that we hold the scientific community to.

    Uh, cuz scientists have moral principles and advertisers are sleazy capitalist scumbags?

    There is really minimal external regulation/oversight of science ethics. It's mostly self-policing. All these journals and universities and scientific funding agencies are primarily run by scientists themselves. That is not to say that government legislation and policy haven't been important to rein in the outliers, but these ethical science guidelines generally have very strong support and compliance in the scientific community (see this thread).

    You'll note that some of the most critical opinions about this issue have been from scientists themselves. Have you heard any advertisers speak out about it? Other than Facebook issuing what was... not just a non-apology, but something more akin to an anti-apology.

  • DedwrekkaDedwrekka Metal Hell adjacentRegistered User regular
    hippofant wrote: »
    Dedwrekka wrote: »
    There has been precious little discussion on reversing it and discussing why we allow advertisers to get away with possibly unethical practices without the bare minimum of regulation or oversight that we hold the scientific community to.

    Uh, cuz scientists have moral principles and advertisers are sleazy capitalist scumbags?

    There is really minimal external regulation/oversight of science ethics. It's mostly self-policing. All these journals and universities and scientific funding agencies are primarily run by scientists themselves. That is not to say that government legislation and policy hasn't been important to rein in the outliers, but these ethical science guidelines generally have very strong support and compliance in the scientific community (see this thread).

    You'll note that some of the most critical opinions about this issue have been from scientists themselves. Have you heard any advertisers speak out about it? Other than Facebook issuing what was... not just a non-apology, but something more akin to an anti-apology.

    Yeah, that's not what I'm saying. I'm not saying "where's the ad self-regulatory agency when this happens". I'm saying "holy shit, why is there no regulation for this stuff (internal or external), and why are we so placidly accepting it?"

  • The Ender Registered User regular
    Yeah, that's not what I'm saying. I'm not saying "where's the ad self-regulatory agency when this happens". I'm saying "holy shit, why is there no regulation for this stuff (internal or external), and why are we so placidly accepting it?"

    Cuz freedumb of speech.

    Everyone knows words can't hurt you, so obviously they shouldn't be regulated. :P

    With Love and Courage
  • hippofant ティンク Registered User regular
    edited July 2014
    Dedwrekka wrote: »
    hippofant wrote: »
    Dedwrekka wrote: »
    There has been precious little discussion on reversing it and discussing why we allow advertisers to get away with possible unethical practices without the bare minimum of regulation or oversight that we hold the scientific community to.

    Uh, cuz scientists have moral principles and advertisers are sleazy capitalist scumbags?

    There is really minimal external regulation/oversight of science ethics. It's mostly self-policing. All these journals and universities and scientific funding agencies are primarily run by scientists themselves. That is not to say that government legislation and policy hasn't been important to rein in the outliers, but these ethical science guidelines generally have very strong support and compliance in the scientific community (see this thread).

    You'll note that some of the most critical opinions about this issue have been from scientists themselves. Have you heard any advertisers speak out about it? Other than Facebook issuing what was... not just a non-apology, but something more akin to an anti-apology.

    Yeah, that's not what I'm saying. I'm not saying "where's the ad self-regulatory agency when this happens". I'm saying "holy shit, why is there no regulation for this stuff (internal or external), and why are we so placidly accepting it?"

    This seems completely normal to me. Like Upton Sinclair's The Jungle. Or the railroad barons of the late 19th century. Or the Clean Air and Clean Water Acts. Superfund. Labour regulations circa the Industrial Revolution. Modern-day financial industry, particularly high-frequency trading. Climate change. Government regulation is usually decades behind. It usually requires a horrid state of affairs to prompt government action and rally enough support to overcome conservative apathy. Self-regulatory schemes are attractive primarily because they're far easier to implement, politically speaking.

    Christ, here in Canada, it took a spate of teenagers committing suicide for passage of some (completely bullshit) bullying legislation. And even then, I really think it was that it was cyberbullying in particular that prompted any sort of action; bullying in schools has been happening for decades and kids have been killing themselves for just as long, but it was the added fear factor of "OMG COMPUTERS!" that sparked enough parental panic to drive it through.

    Honestly, if Facebook had done something slightly subtler, like... changing the number of times money was referenced in your news feeds, there wouldn't even be this amount of outrage. It's only because they went so mustache-twirling with explicitly trying to make people sad that we got this much, and even that's not nearly enough to drive any sort of action on the political level.

    So, no external. And re. internal, see the part about sleazy capitalist scumbags.*

    * That's not entirely it, of course. I was just remarking the other day to some other graduate students, "Don't give me your data! I'm a computer scientist! We're kind of sociopathic by nature! If we weren't, we wouldn't be so attracted to computers! We think of you all as 1s and 0s! You can't trust us with your data we'll totally fuck with your lives just to see if we can!"

    hippofant on
  • Squidget0 Registered User regular
    Well, when government regulation of an industry has happened, it generally comes from a demonstration of wide-scale harm. That isn't really present in this case, beyond the very mild day-to-day harms of potentially seeing more sad facebook messages. We also tend to be more lenient with regulation on optional products (like entertainment) than we do on necessary products (like food), since the assumption is that an adult can make the rational choice about whether to use an optional product or not.

    There are probably certain argument vectors one could use to show that the advertising industry causes actionable harm (like body image issues), but most of them aren't really tied directly into the issues in this thread, nor do they have particularly clear regulatory solutions. "Makes people sad sometimes" isn't actionable in itself, especially when people always have the choice to walk away.

    In general, if advertising is that big a problem for you, the best thing to do is to start demanding more subscription services, and preparing yourself to pay $10 a month for all of your favorite websites. There is no scenario where the government steps in and waves a magic wand to give you all the benefits of an ad-supported internet without any of the drawbacks. The bills have to get paid one way or another.

  • japan Registered User regular
    Calling what Facebook has done in this instance "advertising" and writing it off as "well, advertisers will necessarily act unethically and there's nothing to be done about it" seems like a huge copout.

    It hasn't even yet been demonstrated that facebook manipulating the posts of people's friends to affect their emotional state qualifies as "advertising", since it doesn't intrinsically seem to be promoting a product or service. If it was, then it would probably contravene (UK, at least) advertising codes:
    1.1 Marketing communications should be legal, decent, honest and truthful
    1.3 Marketing communications must be prepared with a sense of responsibility to consumers and to society.
    1.9 Marketers should deal fairly with consumers.
    ...
    2.1 Marketing communications must be obviously identifiable as such.
    2.3 Marketing communications must not falsely claim or imply that the marketer is acting as a consumer or for purposes outside its trade, business, craft or profession; marketing communications must make clear their commercial intent, if that is not obvious from the context.
    ...
    4.2 Marketing communications must not cause fear or distress without justifiable reason; if it can be justified, the fear or distress should not be excessive. Marketers must not use a shocking claim or image merely to attract attention.

    2.3 and 4.2 are particularly relevant, since the stated intent of the exercise was to inflict distress on at least half the people involved, and this was done by facebook manipulating content ostensibly originating from those people's Facebook friends.

  • Bubby Registered User regular
    People still would not stop using Facebook even if they pulled shit 10 times worse than this. They have the civilized world by the balls and have demonstrably changed the social paradigm. It's tragic, but it's true.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    Squidget0 wrote: »
    Feral wrote: »
    [*] I'm not aware of any marketing company deliberately manipulating the public with the explicit purpose of making people feel negative emotions in this way. An argument can be made that advertisers are trying to make you feel bad about your life so you'll buy their products to feel better - but, again, I'm not okay with that, and neither are a lot of other people. Regardless, I don't see that as analogous, because...

    [*] ...even in cases where advertisers do manipulate the emotional states of their viewers, they're not using information published by their viewers' friends and family to do it. I suspect that most people are far more skeptical of an ad for makeup or cars than they are of statuses posted by their college friends and in-laws. Regardless of Facebook's effect on me, my information might have been used to manipulate my friends, and I'm not okay with that, either.

    Er, no, they do all of this, openly. Selling your information to advertisers is literally Facebook's entire business model, and has been since the beginning. They are using information published by your friends and family to manipulate you, that's the very definition of targeted advertising on a site like Facebook.

    Nope.

    Showing me ads based on my interests and demographic is categorically different from selectively showing other people my status updates in order to portray my life in a different light.

    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • CaptainNemo Registered User regular
    As someone who is intimately familiar with the effects of clinical depression, I am both concerned and a little insulted by any statement that this experiment couldn't be harmful. I understand that depression can be a constant battle against some pretty terrible urges and seeing that kind of concentrated, negative news about my closest friends and family would definitely have an impact. That's why I would never opt in to an experiment like this and it's honestly kind of terrifying to think that I could have been part of this kind of thing without knowing.

    Same exact thoughts here. I've been clinically depressed for about five years, and the thought that someone could be deliberately trying to make me feel worse just for some megacorp's giggles is fucking infuriating.

    PSN:CaptainNemo1138
    Shitty Tumblr:lighthouse1138.tumblr.com
  • AngelHedgie Registered User regular
    The Atlantic has an interview with the author of the study. It's both interesting and scary how he and some of the other tech-associated researchers have just conceded control to Big Tech.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • AngelHedgie Registered User regular
    So, not only was Facebook's research unethical and creepy, but may have violated Maryland law, too.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • hippofant ティンク Registered User regular
    I still can't believe the sheer amount of "But we're different!" emanating from these self-righteous geese.

  • Veevee Wisconsin Registered User regular
    edited September 2014
    So last night I check facebook and the first ~20 messages, sorted by "Most recent" mind you, are all "This is so powerful" or "This made me cry" type bullshit.

    Then this thread gets resurrected.

    ...I'm infected, aren't I?

    Veevee on
  • Yogo Registered User regular
    Veevee wrote: »
    So last night I check facebook and the first ~20 messages, sorted by "Most recent" mind you, are all "This is so powerful" or "This made me cry" type bullshit.

    Then this thread gets resurrected.

    ...I'm infected, aren't I?

    No, I think that's just advertisers pushing click-bait articles in order to boost ad revenue.

    In other news related to Facebook, there was a Danish article about newspapers (mostly online) being "forced" to write certain types of stories in order to get enough revenue from Facebook advertising.

    An example would be writing about the annual budget, to be set in November, which affects us all. That type of article wouldn't prosper on social media because the Facebook algorithm wouldn't deem it interesting (based on people liking and commenting). Thus the newspapers spend an increasing amount of time writing click-bait stories, because those flourish on social media and are deemed interesting by the algorithm.
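    The dynamic described above can be sketched in a few lines. To be clear, this is a hypothetical engagement score with invented weights and fields, not Facebook's actual ranking algorithm; it only illustrates why a feed ranked purely on reactions favors click-bait over civically important stories:

    ```python
    # Hypothetical sketch of engagement-based feed ranking.
    # The weights and post fields are invented for illustration;
    # this is NOT Facebook's actual algorithm.

    def engagement_score(post, like_weight=1.0, comment_weight=2.0, share_weight=3.0):
        """Score a post purely by audience reactions, ignoring substance."""
        return (like_weight * post["likes"]
                + comment_weight * post["comments"]
                + share_weight * post["shares"])

    def rank_feed(posts):
        """Order posts by engagement, highest first."""
        return sorted(posts, key=engagement_score, reverse=True)

    posts = [
        {"title": "Annual budget analysis", "likes": 40, "comments": 5, "shares": 2},
        {"title": "You won't believe what happened next", "likes": 900, "comments": 300, "shares": 150},
    ]

    for post in rank_feed(posts):
        print(post["title"])  # click-bait story prints first, budget story last
    ```

    Under any scheme like this, a newsroom that needs Facebook referrals is pushed toward whatever maximizes reactions, which is the pressure the Danish article describes.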

    As Bubby wrote earlier:
    Bubby wrote: »
    They have the civilized world by the balls and have demonstrably changed the social paradigm. It's tragic, but it's true.
