
Facebook And Research Consent [Tech Ethics {And The Lack Thereof}]

Last week, it hit the news that Facebook had conducted a study on over 100,000 users without their consent.

This controversy highlights the divide between biomedical research ethics (which has a long and storied history that brought us to our current standards of informed consent) and the, well, nonexistent ethics of corporations conducting social research on end users.

Here's the SBM article, which I really enjoyed:

http://www.sciencebasedmedicine.org/did-facebook-and-pnas-violate-human-research-protections-in-an-unethical-experiment/


I have to say that I am profoundly disturbed, not by Facebook (these antics are par for the course for them), but by the academic institutions and the individual researchers who were involved in carrying out this research. Research on humans that is interventional rather than observational requires informed consent.

1) Informed consent is not a EULA. Treating a click-through license as consent would never fly in a hospital or research institution.

2) The experiment deliberately attempted to alter the emotional states of users by withholding content from their feeds. That makes it an interventional experiment.

3) An IRB was most likely never notified, and the editor of the article certainly did not perform due diligence in determining whether this was ethical.

A quick reminder for everyone: I am discussing the ethics and the potential harm that could have arisen from this experiment. In that context, I think it is deplorable that this research was performed by actual human beings on other people without their consent, and then published. If even one of those unwilling participants was harmed by this intervention, it is unforgivable. The idea that new social technologies get a pass on informed consent is insane.

I am also noticing a big divide between tech users and biomedical researchers. Honestly, biomedical research learned some hard lessons about informed consent over the years; research hurt a lot of people because they did not understand what was being done to them. Tech users need to stop and think about the incredibly broad reach a platform like Facebook has, and whether it is ethical to deliberately manipulate its end users.

Discuss!


Posts

  • Fuzzy Cumulonimbus Cloud Registered User regular
    Also, for those of you who are arguing that this was a business thingy and not research, well, they published it in an academic journal so it's not exactly just "business modifications".

  • Mvrck Dwarven Mountainhome Registered User regular
    I said it in the other thread, but I'll repeat it here:

    They need to get fucking hammered, and anyone involved in the conception and implementation of this experiment without getting it cleared through an IRB should be charged.

  • DevoutlyApathetic Registered User regular
    Fuzzy Cumulonimbus Cloud wrote: »
    Also, for those of you who are arguing that this was a business thingy and not research, well, they published it in an academic journal so it's not exactly just "business modifications".

    Yea, this was the first I heard it was published academically. That pushes it way off the charts for me. Before, when I thought it was internal to Facebook, it was a different thing to me, mostly because it's clear businesses need to do some internal testing and validation of their entertainment products' impact on users, all of which are "experiments" in the broadest sense, and I'm unsure where to draw the line. Using these platforms for academic research like this is just fucking crazy.

    I also think what I was interested in talking about is really tangential to what this thread is for. Unless you really think otherwise I'm planning on dropping it.

    Nod. Get treat. PSN: Quippish
  • Arch Neat-o, mosquito! Registered User regular
    Here's the fucked up thing- I've been complaining for months (you can find old posts of mine on these forums about it) about how Facebook has only been showing me dumb bullshit from people I don't care about.

    Like, I have said for a while that my feed has been fucked up- it is only showing me posts from people I don't talk to, or haven't talked to in years, instead of stuff from my close family and friends I interact with regularly.

    Now, when I read this article, I thought back- most of the shit I was seeing was terrible stuff.

    Random people from my past I hadn't had the time to filter out posting about their pets dying, their parents dying, losing their jobs, getting divorced.

    I've stopped opening facebook because it was too damn depressing.

    And then this fucking paper drops, and it just makes me wonder, 'yknow? I mean, 100,000 users out of the total Facebook population- odds are slim that my feed was one of the "lucky ones".

    But damnit, humans are pretty good at pattern recognition, and I'm pattern recognizing.

    I'm okay, really, ethically, if they were doing this correlationally- "Do people post more negative statuses after viewing other negative statuses", and getting that data by scanning feeds of users and identifying which posts they viewed/commented on and tracking later whether they posted negative keywords.

    But actively filtering, and feeding people negative statuses?

    Fuck that noise. Bring the IRB hammer down fucking hard.

    I wonder if you could somehow identify if you were "selected", and if so sue the goddam pants off of Facebook.

    This was unethical research- at the very least they should have given people the opportunity to "opt out" of the study.

  • AngelHedgie Registered User regular
    There is nothing intrinsically wrong with using Facebook for academic research. The issue is that it's pretty clear that they were playing fast and loose with things like consent and making sure that any risk to the test subjects was minimized.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Surfpossum A nonentity trying to preserve the anonymity he so richly deserves. Registered User regular
    That's a hell of a thing.

    I find it odd that people would argue that it would be fine if it wasn't for research. I guess technically it might be "fine" as in "legal" but it feels like maybe it shouldn't be?

  • Mvrck Dwarven Mountainhome Registered User regular
    Surfpossum wrote: »
    That's a hell of a thing.

    I find it odd that people would argue that it would be fine if it wasn't for research. I guess technically it might be "fine" as in "legal" but it feels like maybe it shouldn't be?

    It wouldn't be "fine" per se, but again, there is an important difference between "We found more people bought this service if a kitten was posted on the page, perhaps we should post more kittens to generate more revenue" and "We are going to see if intentionally exposing unsuspecting users to only negative content does anything to their state of mind."

  • Fuzzy Cumulonimbus Cloud Registered User regular
    The experiment borders on the sociopathic when you get right down to the core framework. Particularly their approach and the way they wrote about what they did.

  • AngelHedgie Registered User regular
    And, as usual when it comes to stories like this, Ars sorely misses the point.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • HamHamJ Registered User regular
    AngelHedgie wrote: »
    And, as usual when it comes to stories like this, Ars sorely misses the point.

    That is a perfectly reasonable and objective article and lays out the facts pretty well.

    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • shryke Member of the Beast Registered User regular
    HamHamJ wrote: »
    And, as usual when it comes to stories like this, Ars sorely misses the point.

    That is a perfectly reasonable and objective article and lays out the facts pretty well.

    How is it objective?

    It's literally got a subjective term in the title.

  • AngelHedgie Registered User regular
    This bit explains the culture clash, I think:
    Academic researchers are brought up in an academic culture with certain practices and values. Early on they learn about the ugliness of unchecked human experimentation. They are socialized into caring deeply for the well-being of their research participants. They learn that a “scientific experiment” must involve an IRB review and informed consent. So when the Facebook study was published by academic researchers in an academic journal (the PNAS) and named an “experiment”, for academic researchers, the study falls in the “scientific experiment” bucket, and is therefore to be evaluated by the ethical standards they learned in academia.

    Not so for everyday Internet users and Internet company employees without an academic research background. To them, the bucket of situations the Facebook study falls into is “online social networks”, specifically “targeted advertising” and/or “interface A/B testing”. These practices come with their own expectations and norms in their respective communities of practice and the public at large, which are different from those of the “scientific experiment” frame in academic communities. Presumably, because they are so young, they also come with much less clearly defined and institutionalized norms. Tweaking the algorithm of what your news feed shows is an accepted standard operating procedure in targeted advertising and A/B testing.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • HamHamJ Registered User regular
    shryke wrote: »
    HamHamJ wrote: »
    And, as usual when it comes to stories like this, Ars sorely misses the point.

    That is a perfectly reasonable and objective article and lays out the facts pretty well.

    How is it objective?

    It's literally got a subjective term in the title.

    It clearly states the facts and presents clearly the position of the various actors involved?

    Like, have you actually read anything past the headline?

    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • shryke Member of the Beast Registered User regular
    HamHamJ wrote: »
    shryke wrote: »
    HamHamJ wrote: »
    And, as usual when it comes to stories like this, Ars sorely misses the point.

    That is a perfectly reasonable and objective article and lays out the facts pretty well.

    How is it objective?

    It's literally got a subjective term in the title.

    It clearly states the facts and presents clearly the position of the various actors involved?

    Like, have you actually read anything past the headline?

    Yes. I also know what objective means and they are clearly taking a position on the issue. Specifically, they are trying to say "It's not that bad".

  • HamHamJ Registered User regular
    shryke wrote: »
    HamHamJ wrote: »
    shryke wrote: »
    HamHamJ wrote: »
    And, as usual when it comes to stories like this, Ars sorely misses the point.

    That is a perfectly reasonable and objective article and lays out the facts pretty well.

    How is it objective?

    It's literally got a subjective term in the title.

    It clearly states the facts and presents clearly the position of the various actors involved?

    Like, have you actually read anything past the headline?

    Yes. I also know what objective means and they are clearly taking a position on the issue. Specifically, they are trying to say "It's not that bad".

    Um, no, the position the article is taking is "what we should really be worried about is what Facebook plans to do now that it knows it can manipulate people like this".

    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • This content has been removed.

  • DivideByZero Social Justice Blackguard Registered User regular
    Oh, goddammit. One of the co-authors on this thing was a former professor of mine. I am most disappoint. :|

    First they came for the Muslims, and we said NOT TODAY, MOTHERFUCKERS
  • Archangle Registered User regular
    Yeah, there's a huge can of worms that could potentially be opened if this went to any kind of legal challenge. And as @AngelHedgie said, most large-scale communication involves A/B testing to see which message is most effective.

    For a tobacco health awareness campaign, could you claim damages because you were exposed to a message that was deliberately "more negative" than another test group? Could League of Legends be sued for testing changes to their chat system that resulted in your matches being exposed to more antisocial behaviour? Can ANY website that provides recommendations be exposed to liability on the basis of manipulating those recommendations in your stream? Is an old-fashioned print newspaper behaving unethically when it decides which news stories go on the front page for the early/late editions?

    Does the act of publishing the results for academic scrutiny suddenly make it explicitly unethical, and if so does all the communications manipulation data that is gathered in the terabytes daily by businesses all over the world become off-limits for any kind of publication?

  • Hachface Not the Minister Farrakhan you're thinking of Dammit, Shepard! Registered User regular
    So Facebook decided to make 100,000 users feel really shitty, just to see if they could.

    We do what we must, because we can.

  • Arch Neat-o, mosquito! Registered User regular
    ...the problem I think here is that it's not clear Facebook really did anything expressly wrong. This wasn't user targeted, so it's not bullying behavior. They didn't alter the content of posts, so it's not misrepresentation. And their product expressly revolves around making selections of the types of posts to show on the newsfeed by default anyway...so at some level the user has already surrendered that decision making power away, and it can and does change without notice all the time anyway.

    Other than being unsettling, it's hard to see that they've done anything which I would want to try and make explicitly illegal, because we'd tie ourselves in knots trying to define it.

    It's rather clear, actually- the experiment was predicted to have a direct effect on human subjects (inasmuch as it was "seeing if negative posts affect users' own postings") and thus should have gone through IRB approval for human intervention testing, which it seems they did not do.

    They try to hide this behind "big data" obfuscation- "we were just monitoring and filtering or promoting keyword-containing posts, then collecting data on subsequent posts!" - when in reality that is a smokescreen. The subsequent posts are the proxy they are using to judge users' mental state after the experimental manipulation. Acting like this is just some strange data-mining exercise (which is what they are trying to do) is a very loose use of the term.

    Under the current IRB standards, this experiment should have had at the minimum a notification to the subjects that they were part of a study that could affect them negatively.

    Yes it could bias your data, but an n to the tune of hundreds of thousands should correct that noise.

  • Arch Neat-o, mosquito! Registered User regular
    The paper is unimaginably creepy, by the way
    Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.
    In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts

    And that's just the abstract

    Protip- if you're trying to explore something called "contagion", even if it is just a fancy buzzword, maybe you should make sure you have rigorous IRB approval before testing your hypothesis with humans

  • Arch Neat-o, mosquito! Registered User regular
    The study was perfectly legal; it was just highly unethical, and it failed to meet quite a few (easily met) criteria for the ethical treatment of human subjects.

    And let's not pretend that acceptance of Facebook's EULA constitutes informed consent for an experiment whose predicted outcome includes emotional changes in either direction (positive or negative).

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited July 2014
    http://www.theguardian.com/commentisfree/2014/jun/30/facebook-evil-emotional-study-charlie-brooker

    <3 Charlie Brooker
    In other words, the fine folk at Facebook are so hopelessly disconnected from ground-level emotional reality they have to employ a team of scientists to run clandestine experiments on hundreds of thousands of their "customers" to discover that human beings get upset when other human beings they care about are unhappy.

    But wait! It doesn't end there. They also coolly note that their fun test provides "experimental evidence for massive-scale contagion via social networks". At least we can draw comfort from the fact that this terrifying power to sway the emotional state of millions is in the right hands: an anonymous cabal of secret experimenters who don't know what "empathy" is.

    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    I'm really surprised that this passed an IRB.

    Lack of informed consent, plus not-terribly-novel results (sad news makes people sad!)

    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • hippofant ティンク Registered User regular
    ...the problem I think here is that it's not clear Facebook really did anything expressly wrong. This wasn't user targeted, so it's not bullying behavior. They didn't alter the content of posts, so it's not misrepresentation. And their product expressly revolves around making selections of the types of posts to show on the newsfeed by default anyway...so at some level the user has already surrendered that decision making power away, and it can and does change without notice all the time anyway.

    Other than being unsettling, it's hard to see that they've done anything which I would want to try and make explicitly illegal, because we'd tie ourselves in knots trying to define it.

    Brought this up in the other thread, but I still have 2 outstanding questions:

    1) If someone in the study committed suicide, would Facebook be liable right now?
    2) Were there, say, 13-year olds included in the study?

  • Mvrck Dwarven Mountainhome Registered User regular
    Feral wrote: »
    I'm really surprised that this passed an IRB.

    Lack of informed consent, plus not-terribly-novel results (sad news makes people sad!)

    Unless I missed it somewhere, there has been no evidence presented that it actually cleared an IRB. They didn't publish that info in their paper, and so far it is just one statement saying they used a "local" IRB. If you are IRB approved, you fucking include that in your published paper. That is the standard.

  • HamHamJ Registered User regular
    Mvrck wrote: »
    Feral wrote: »
    I'm really surprised that this passed an IRB.

    Lack of informed consent, plus not-terribly-novel results (sad news makes people sad!)

    Unless I missed it somewhere, there has been no evidence presented that it actually cleared an IRB. They didn't publish that info in their paper, and so far it is just one statement saying they used a "local" IRB. If you are IRB approved, you fucking include that in your published paper. That is the standard.

    Yeah I would like some clarification on that part.

    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • AManFromEarth Let's get to twerk! The King in the Swamp Registered User regular
    hippofant wrote: »
    ...the problem I think here is that it's not clear Facebook really did anything expressly wrong. This wasn't user targeted, so it's not bullying behavior. They didn't alter the content of posts, so it's not misrepresentation. And their product expressly revolves around making selections of the types of posts to show on the newsfeed by default anyway...so at some level the user has already surrendered that decision making power away, and it can and does change without notice all the time anyway.

    Other than being unsettling, it's hard to see that they've done anything which I would want to try and make explicitly illegal, because we'd tie ourselves in knots trying to define it.

    Brought this up in the other thread, but I still have 2 outstanding questions:

    1) If someone in the study committed suicide, would Facebook be liable right now?
    2) Were there, say, 13-year olds included in the study?

    You would have a fuck of a time proving that you had standing, but if you could, absolutely they'd be liable.

  • hippofant ティンク Registered User regular
    hippofant wrote: »
    ...the problem I think here is that it's not clear Facebook really did anything expressly wrong. This wasn't user targeted, so it's not bullying behavior. They didn't alter the content of posts, so it's not misrepresentation. And their product expressly revolves around making selections of the types of posts to show on the newsfeed by default anyway...so at some level the user has already surrendered that decision making power away, and it can and does change without notice all the time anyway.

    Other than being unsettling, it's hard to see that they've done anything which I would want to try and make explicitly illegal, because we'd tie ourselves in knots trying to define it.

    Brought this up in the other thread, but I still have 2 outstanding questions:

    1) If someone in the study committed suicide, would Facebook be liable right now?
    2) Were there, say, 13-year olds included in the study?

    You would have a fuck of a time proving that you had standing, but if you could, absolutely they'd be liable.

    I was thinking a situation where, somehow, it could be demonstrated that a kid was in the "negative feelings" group, and (s)he committed suicide during the experiment or shortly after, could there be criminal liability? Or could the parents sue Facebook for a bajillion dollars?

    I mean, so many people are saying, or at least were saying in the other thread, "Oh blah blah minimal effect, Facebook's well within its rights, it's just like all this other stuff that's done," but it's expressly these sorts of disastrous situations that compel us to get consent and run our studies past ethical review boards.

  • AManFromEarth Let's get to twerk! The King in the Swamp Registered User regular
    I mean you'd never be able to prove it unless you somehow got hold of a big list Facebook kept or something.

    Even without a suicide any spike in depression among such a list would open up Facebook to huge liability.

    It's mind boggling that someone thought this was a good idea.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    HamHamJ wrote: »
    Mvrck wrote: »
    Feral wrote: »
    I'm really surprised that this passed an IRB.

    Lack of informed consent, plus not-terribly-novel results (sad news makes people sad!)

    Unless I missed it somewhere, there has been no evidence presented that it actually cleared an IRB. They didn't publish that info in their paper, and so far it is just one statement saying they used a "local" IRB. If you are IRB approved, you fucking include that in your published paper. That is the standard.

    Yeah I would like some clarification on that part.

    an early report I read said that they cleared it with a Cornell IRB, but it looks like that might not actually be the case?

    http://www.theatlantic.com/technology/archive/2014/06/even-the-editor-of-facebooks-mood-study-thought-it-was-creepy/373649/

    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • HamHamJ Registered User regular
    hippofant wrote: »
    hippofant wrote: »
    ...the problem I think here is that it's not clear Facebook really did anything expressly wrong. This wasn't user targeted, so it's not bullying behavior. They didn't alter the content of posts, so it's not misrepresentation. And their product expressly revolves around making selections of the types of posts to show on the newsfeed by default anyway...so at some level the user has already surrendered that decision making power away, and it can and does change without notice all the time anyway.

    Other than being unsettling, it's hard to see that they've done anything which I would want to try and make explicitly illegal, because we'd tie ourselves in knots trying to define it.

    Brought this up in the other thread, but I still have 2 outstanding questions:

    1) If someone in the study committed suicide, would Facebook be liable right now?
    2) Were there, say, 13-year olds included in the study?

    You would have a fuck of a time proving that you had standing, but if you could, absolutely they'd be liable.

    I was thinking a situation where, somehow, it could be demonstrated that a kid was in the "negative feelings" group, and (s)he committed suicide during the experiment or shortly after, could there be criminal liability? Or could the parents sue Facebook for a bajillion dollars?

    I mean, so many people are saying, or at least were saying in the other thread, "Oh blah blah minimal effect, Facebook's well within its rights, it's just like all this other stuff that's done," but it's expressly these sorts of disastrous situations that compel us to get consent and run our studies past ethical review boards.

    This is a pretty fantastical hypothetical you have constructed here.

    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • This content has been removed.

  • hippofant ティンク Registered User regular
    edited July 2014
    HamHamJ wrote: »
    hippofant wrote: »
    hippofant wrote: »
    ...the problem I think here is that it's not clear Facebook really did anything expressly wrong. This wasn't user targeted, so it's not bullying behavior. They didn't alter the content of posts, so it's not misrepresentation. And their product expressly revolves around making selections of the types of posts to show on the newsfeed by default anyway...so at some level the user has already surrendered that decision making power away, and it can and does change without notice all the time anyway.

    Other than being unsettling, it's hard to see that they've done anything which I would want to try and make explicitly illegal, because we'd tie ourselves in knots trying to define it.

    Brought this up in the other thread, but I still have 2 outstanding questions:

    1) If someone in the study committed suicide, would Facebook be liable right now?
    2) Were there, say, 13-year olds included in the study?

    You would have a fuck of a time proving that you had standing, but if you could, absolutely they'd be liable.

    I was thinking a situation where, somehow, it could be demonstrated that a kid was in the "negative feelings" group, and (s)he committed suicide during the experiment or shortly after, could there be criminal liability? Or could the parents sue Facebook for a bajillion dollars?

    I mean, so many people are saying, or at least were saying in the other thread, "Oh blah blah minimal effect, Facebook's well within its rights, it's just like all this other stuff that's done," but it's expressly these sorts of disastrous situations that compel us to get consent and run our studies past ethical review boards.

    This is a pretty fantastical hypothetical you have constructed here.

    Suicide rate in the US is 12 per 100 000 (http://www.cdc.gov/mmwr/preview/mmwrhtml/mm6128a8.htm). That is to say, the child part is fantastical, but the expected number of suicides amongst the group (without counting any intrinsic sampling characteristics) would be greater than 36 or so.
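    Back-of-the-envelope, using that CDC rate and the N = 689,003 the paper itself reports (a rough annualized figure, not a claim about the actual study window):

    ```python
    # Rough expected-value check using numbers cited in this thread:
    # the CDC's ~12 suicides per 100,000 per year and the paper's N = 689,003.
    annual_rate = 12 / 100_000
    n_subjects = 689_003

    expected_per_year = annual_rate * n_subjects
    print(f"Expected suicides per year in a sample that size: ~{expected_per_year:.0f}")
    # ~83/year across the full sample; the experiment itself ran for a much
    # shorter window, but the point stands: at this scale, rare bad outcomes
    # stop being hypothetical edge cases.
    ```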

    Again, one of those things about science: when you're dealing with large sample populations, you'd be surprised at the sort of unlikely phenomena that start popping up. E.g. we never thought that publicizing one person's genome anonymously was a breach of privacy, but people have since demonstrated that they can perform reconstruction attacks on anonymized genome databases and piece together genome-patient relationships.

    It also asks the same questions I'm asking above: suppose Facebook did nothing and their algorithm was just biased toward choosing slightly negative posts in general (not unexpected: see newspaper/TV shock headlines). Would they still be liable, seeing as the changes were not targeted? Is the news liable when someone decides moral decay is at an all-time high and goes on a shooting spree?

    I don't think there's a hard answer here, not in the legal code anyways. Negligence is based on reasonability. There might be something in case law though.... I'd also add that news agencies are granted special rights in accordance with their societal service: e.g. in Canada, news organizations are shielded from some forms of fraud/libel if they report falsehoods, and can use images of people without their consent, though both are also subject to reasonability.

    Edit: Also, Facebook could have tripped laws in various countries, depending on where their users were located. Not sure how the legal ramifications might work out there, but I'd imagine the EU would probably take an unhappier view of the situation.

  • Squidget0 Registered User regular
    Maybe I'm misreading, but this kind of thing is pretty common in any kind of data-driven analysis in big software. A lot of major websites (Google and Amazon, to name a couple) also change their algorithms to adapt to your actions, and they run all kinds of tests giving different algorithms to different people to see how they'll react. Everything from cellphone games to shopping websites does similar stuff. How is this any different from that?

    Is it just a terminology thing, calling it an "experiment" vs market research or whatever? Or is this just about people being creeped out by big data in general?

    The legal threats and suicide comments just seem bizarre to me. If someone is committing suicide over their Facebook feed, the problem clearly is not with their Facebook feed. So what's the actual harm here that doesn't occur when any other website messes around with its algorithms in order to learn stuff?

  • Opty Registered User regular
    edited July 2014
    Squidget0 wrote: »
    Maybe I'm misreading, but this kind of thing is pretty common in any kind of data-driven analysis in big software. A lot of major websites (Google and Amazon to name a couple ) also change their algorithm to adapt to your actions, and they run all kinds of tests giving different algorithms to different people to see how they'll react. Everything from cellphone games to shopping websites does similar stuff. How is this any different from that?

    Is it just a terminology thing, calling it an "experiment" vs market research or whatever? Or is this just about people being creeped out by big data in general?

    The legal threats and suicide comments just seems bizarre to me. If someone is committing suicide over their Facebook feed, the problem clearly is not with their Facebook feed. So what's the actual harm here, that doesn't occur when any other website messes around with their algorithms in order to learn stuff?

    The difference between something like YouTube making their website unusable for you because they wanted to try something out and Facebook sneakily altering the contents of your feed is that one is blatantly obvious and the other isn't. You know you're in a special UI testing group when you complain that Netflix's new design is bullshit and everyone else goes "What are you talking about? They haven't changed for months." It doesn't matter in a cell phone game if they try different button sizes or positions when you finish a level to try and figure out the best way to get you to click through to the store, partially for the same reason (if you complain, others can confirm one way or the other) and additionally because button placement doesn't evoke negative emotional responses.

    You don't know you're in a special experiment group on Facebook designed to affect your emotions by hiding every positive feed post, because you don't know it's going on and there's no way to prove anything is happening. If you complain, people will just tell you to find more positive friends or ditch the downer friends, but no one (until now) would go "hm, you're probably in a test group in an experiment to find out how you handle being surrounded by negativity and depression in your Facebook feed. Wait a month or so and you should start seeing positive stuff again." The reason people are bringing up suicide and whatnot is the directed negative pressure this experiment would have put on someone's life. The whole outcry boils down to the lack of discoverability of Facebook's bullshit "experiment."

  • Arch Neat-o, mosquito! Registered User regular
    edited July 2014
    I would just like to also point something out, having spent some time with the paper.

    The authors claim that this is what they are testing
    One such test is reported in this study: A test of whether posts with emotional content are more engaging.

    Which is absolute bullshit. Like, the very next paragraph contradicts this (i.e., their experimental design)
    The experiment manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed. This tested whether exposure to emotions led people to change their own posting behaviors, in particular whether exposure to emotional content led people to post content that was consistent with the exposure—thereby testing whether exposure to verbal affective expressions leads to similar verbal expressions, a form of emotional contagion.

    First of all, unless you are using a very strange definition of "engaging," I don't see how an experiment testing whether exposure to emotional content leads people to post similarly emotional content counts as testing "engagement." I'm already mad, and that's before getting to the fact that what they actually tested was whether exposure to emotional content made you echo similar emotional content, not whether it was "more engaging".

    So, okay, what did they actually do?
    Two parallel experiments were conducted for positive and negative emotion: One in which exposure to friends’ positive emotional content in their News Feed was reduced, and one in which exposure to negative emotional content in their News Feed was reduced.

    Well, alright, that doesn't sound too bad... except your a priori prediction was that users would echo the emotion they were exposed to most frequently. That is exactly where an IRB should have taken a look at this: your predicted outcome can affect your subjects.

    What really fucking grinds my gears is this, though, and it's how I'm going to respond to you, Squidge:
    Posts were determined to be positive or negative if they contained at least one positive or negative word, as defined by Linguistic Inquiry and Word Count software (LIWC2007) (9) word counting system, which correlates with self-reported and physiological measures of well-being, and has been used in prior research on emotional expression (7, 8, 10). LIWC was adapted to run on the Hadoop Map/Reduce system (11) and in the News Feed filtering system, such that no text was seen by the researchers. As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.
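    Stripped of the Hadoop dressing, the manipulation they describe boils down to something like this (a rough sketch; the word lists and omission probability here are made up, not the real LIWC2007 dictionaries or Facebook's actual code):

    ```python
    import random

    # Toy stand-ins for the LIWC positive/negative dictionaries (assumed, not the real lists).
    POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
    NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

    def classify(post):
        """A post counts as positive/negative if it contains at least one such word."""
        words = set(post.lower().split())
        if words & POSITIVE_WORDS:
            return "positive"
        if words & NEGATIVE_WORDS:
            return "negative"
        return "neutral"

    def build_feed(candidate_posts, reduced_emotion, omit_probability=0.5):
        """Withhold some posts matching the targeted emotion from this user's News Feed."""
        feed = []
        for post in candidate_posts:
            if classify(post) == reduced_emotion and random.random() < omit_probability:
                continue  # the user simply never sees this post
            feed.append(post)
        return feed

    # A user in the "reduced positive content" arm just sees fewer positive posts:
    print(build_feed(["I love my new job", "Ugh, awful day", "Lunch time"], reduced_emotion="positive"))
    ```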

    This is what I was saying earlier- they attempt to smokescreen this behind "it was just data! We didn't directly interact with the subjects, just tinkered with the data! The subjects already agreed to let us do this!"

    Bullshit. We agreed to let Facebook data mine to track purchasing habits, sure

    I mean, I read through this when I signed up for Facebook, and I glance through it every now and again
    As described in "How we use the information we receive" we also put together data from the information we already have about you, your friends, and others, so we can offer and suggest a variety of services and features. For example, we may make friend suggestions, pick stories for your News Feed, or suggest people to tag in photos. We may put together your current city with GPS and other location information we have about you to, for example, tell you and your friends about people or events nearby, or offer deals to you in which you might be interested. We may also put together data about you to serve you ads or other content that might be more relevant to you.

    Nowhere in there does it say "we will allow researchers to manipulate your newsfeed to see how you respond to emotional-coded posts".

    It doesn't fucking matter if you strip the names and make it completely anonymous; the point is that the experimental design itself was unethical without more rigorous review, and it required more consent from the subjects than was ever obtained.

    That is really the sticking point- the researchers (and Facebook) are claiming that their research was consistent with Facebook's Data Use Policy, which it really fucking isn't. And more to the point, it doesn't matter, because when dealing with manipulative experiments of any kind involving humans, there are strict guidelines on what informed consent is, and agreeing to use a social media service is not informed consent by any stretch of the term.

    I want it to be made clear, though, that this wasn't an illegal experiment- I'm not banging the drum for tighter regulation or to sue Facebook (although it would be funny!). What I am saying is that this was a highly unethical experiment, and perhaps we really need to review how IRBs and companies employing Big Data obtain informed consent from their users.

    For a hypothetical example, this wouldn't have been unethical had the researchers instituted a first-pass screen of candidates for the study by asking them upon login if (spitballing) they would like to be a part of a social experiment on Facebook.

    In fact, I am going to say that had that been true, everyone would have received this study quite differently.
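
    That kind of first-pass opt-in screen is trivial to sketch (hypothetical helper names, obviously nothing from Facebook's actual code):

    ```python
    def enroll_subjects(users, ask):
        """Enroll only users who affirmatively opt in when prompted at login (hypothetical flow)."""
        question = "Would you like to take part in a social research study on Facebook?"
        return [user for user in users if ask(user, question)]

    # Example: only users who answer "yes" end up in the study pool.
    print(enroll_subjects(["alice", "bob"], ask=lambda user, q: user == "alice"))
    ```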

  • Loren Michael Registered User regular
    I don't really see much wrong with this, and I think my feelings are based largely on the fact that public perceptions are altered en masse daily by news, infotainment, advertising, church services, the word on the street, and the various ways the internet creeps in.

    This seems like one input out of many, and they wanted to study the impact that it had.

    Like, other groups make editorial decisions all the time to alter and curate content, but it seems like it gets overlooked as an unethical thing because... it's not being quantified?

  • The Ender Registered User regular
    edited July 2014
    ...Um. I don't understand the harm done?

    Like, the ethical standards are in place to prevent repeats of things like the Syphilis experiments, correct? Is there any credible reason to think that someone fed negative news on FB would get hurt?

    With Love and Courage
  • hippofant ティンク Registered User regular
    edited July 2014
    The Ender wrote: »
    ...Um. I don't understand the harm done?

    Like, the ethical standards are in place to prevent repeats of things like the Syphilis experiments, correct? Is there any credible reason to think that someone fed negative news on FB would get hurt?

    That is backwards. They conduct the experiment because they don't know the answer to that question. If the answer to the question were truly no, then the experiment itself would be unnecessary.

    Also, market research does not have the express intent to see if they can make you sad / worsen your situation. You're being offered Coke and Pepsi, not Coke and arsenic.

    So would you guys all be okay if Nickelodeon started implanting subliminal messaging in their television shows to encourage children to eat more McDonald's? Hey, public perception is influenced by the media all the time anyways, so this is totally innocuous.

    hippofant on