Last week, it hit the news that Facebook had conducted a study on nearly 700,000 users without their consent.
This controversy highlights the divide between biomedical research ethics (which has a long and storied history behind our current standards of informed consent) and the, well, nonexistent ethics of corporations conducting social research on their end users.
Here's the SBM article, which I really enjoyed.
http://www.sciencebasedmedicine.org/did-facebook-and-pnas-violate-human-research-protections-in-an-unethical-experiment/
I have to say that I am profoundly disturbed not so much by Facebook, since these antics are par for the course for them, but by the academic institutions and the individual researchers who were involved in carrying out this research. Research on humans that is interventional rather than observational requires informed consent.
1) Informed consent is not a EULA. That would never fly in a hospital or research institution.
2) The experiment deliberately attempted to alter the emotional states of users by withholding information. That makes it an interventional experiment.
3) The IRB was most likely not notified, and the editor of the article certainly did not perform due diligence in determining whether this was an ethical thing to do.
A quick reminder for everyone: I am discussing the ethics and the potential harm that could have arisen from this experiment. In that context, I think it is deplorable that this research was published and that it was performed by actual human beings on other people without their consent. If even one of those unwilling participants was harmed by this intervention, then it is unforgivable. The idea that new social technologies get a pass on informed consent is insane.
I am also noticing a big divide between tech users and biomedical researchers. Honestly, we learned some pretty big lessons about informed consent over the years (research hurt a lot of people because they did not understand what was going on), and I think tech users need to stop and think about the incredibly broad reach a platform like Facebook has, and whether it is ethical to deliberately manipulate its end users.
Discuss!
They need to get fucking hammered, and anyone involved in the conception and implementation of this experiment without getting it cleared through an IRB should be charged.
Yeah, this was the first I'd heard that it was published academically. That pushes it way off the charts for me. When I thought it was internal to Facebook, it was a different thing to me, mostly because businesses clearly need to do some internal testing and validation of their entertainment products' impact on users, all of which are "experiments" in the broadest sense, and I'm unsure where to draw the line. Using these platforms for academic research like this is just fucking crazy.
I also think what I was interested in talking about is really tangential to what this thread is for. Unless you really think otherwise I'm planning on dropping it.
Like, I have said for a while that my feed has been fucked up: it only shows me posts from people I don't talk to, or haven't talked to in years, instead of stuff from the close family and friends I interact with regularly.
Now, when I read this article, I thought back: most of what I was seeing was terrible shit.
Random people from my past I hadn't had the time to filter out posting about their pets dying, their parents dying, losing their jobs, getting divorced.
I stopped opening Facebook because it was too damn depressing.
And then this fucking paper drops, and it just makes me wonder, y'know? I mean, a few hundred thousand users out of the total Facebook population: odds are slim that my feed was one of the "lucky ones".
But damnit, humans are pretty good at pattern recognition, and I'm pattern recognizing.
I'm okay, really, ethically, if they had done this correlationally: "Do people post more negative statuses after viewing other negative statuses?", getting that data by scanning users' feeds, identifying which posts they viewed or commented on, and tracking whether they later posted negative keywords.
But actively filtering, and feeding people negative statuses?
Fuck that noise. Bring the IRB hammer down fucking hard.
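Just to make the distinction I'm drawing concrete, here's a rough sketch in Python. The word list, feed format, and "hide probability" are all invented for the example; this has nothing to do with Facebook's actual code.

```python
# Toy illustration of observational vs. interventional designs.
# Word list, feed format, and hide probability are all made up for this example.
import random

NEGATIVE_WORDS = {"sad", "awful", "depressed", "divorce", "died"}

def is_negative(post: str) -> bool:
    """Crude keyword check standing in for a real sentiment classifier."""
    return any(word in post.lower() for word in NEGATIVE_WORDS)

def observational_counts(feed_seen: list[str], later_posts: list[str]) -> tuple[int, int]:
    """Observational: only read what the user already saw and later wrote."""
    return (sum(is_negative(p) for p in feed_seen),
            sum(is_negative(p) for p in later_posts))

def interventional_feed(feed: list[str], hide_probability: float) -> list[str]:
    """Interventional: deliberately drop non-negative posts before the user sees them."""
    return [p for p in feed if is_negative(p) or random.random() > hide_probability]

if __name__ == "__main__":
    feed = ["my dog died today", "great vacation!", "so depressed lately", "got a new job!"]
    print(interventional_feed(feed, hide_probability=0.9))  # mostly the sad stuff survives
```

The first function only reads data that already exists; the second changes what a person is shown before they ever see it, and that's exactly the line an IRB cares about.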
I wonder if you could somehow identify if you were "selected", and if so sue the goddam pants off of Facebook.
This was unethical research; at the very least they should have given people the opportunity to "opt out" of the study.
I find it odd that people would argue that it would be fine if it weren't for research. I guess technically it might be "fine" as in "legal", but it feels like maybe it shouldn't be?
It wouldn't be "fine" per se, but again, there is an important difference between "We found more people bought this service if a kitten was posted on the page, perhaps we should post more kittens to generate more revenue" and "We are going to see if intentionally exposing unsuspecting users to only negative content does anything to their state of mind."
That is a perfectly reasonable and objective article, and it lays out the facts pretty well.
How is it objective?
It's literally got a subjective term in the title.
It clearly states the facts and presents the positions of the various actors involved?
Like, have you actually read anything past the headline?
Yes. I also know what objective means and they are clearly taking a position on the issue. Specifically, they are trying to say "It's not that bad".
Um, no, the position the article is taking is "what we should really be worried about is what Facebook plans to do now that it knows it can manipulate people like this".
For a tobacco health awareness campaign, could you claim damages because you were exposed to a message that was deliberately "more negative" than another test group? Could League of Legends be sued for testing changes to their chat system that resulted in your matches being exposed to more antisocial behaviour? Can ANY website that provides recommendations be exposed to liability on the basis of manipulating those recommendations in your stream? Is an old-fashioned print newspaper behaving unethically when it decides which news stories go on the front page for the early/late editions?
Does the act of publishing the results for academic scrutiny suddenly make it explicitly unethical, and if so does all the communications manipulation data that is gathered in the terabytes daily by businesses all over the world become off-limits for any kind of publication?
We do what we must, because we can.
It's rather clear, actually: the experiment was predicted to have a direct effect on human subjects (inasmuch as it was "seeing if negative posts affect users' own postings") and thus should have gone through IRB approval for human interventional research, which it seems they did not do.
They try to hide this behind "big data" obfuscation: "we were just monitoring and filtering or promoting keyword-containing posts, then collecting data on subsequent posts!" In reality that is a smokescreen. The subsequent posts are the proxy they are using to judge users' mental state after their experimental manipulation. Acting like this is just some strange data-mining exercise (which they are trying to do) stretches the term well past its meaning.
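To spell out what I mean by "the subsequent posts are the proxy", a toy sketch (invented word list and scoring, not the paper's actual analysis pipeline):

```python
# The outcome being measured isn't "just data about posts"; it's an estimate of the
# user's emotional state, inferred from what they wrote after the manipulation.
# Word list and scoring are invented for illustration.
NEGATIVE_WORDS = {"sad", "awful", "depressed", "lonely", "cry"}

def negativity_score(later_posts: list[str]) -> float:
    """Fraction of a user's subsequent posts containing a negative keyword."""
    if not later_posts:
        return 0.0
    flagged = sum(any(w in p.lower() for w in NEGATIVE_WORDS) for p in later_posts)
    return flagged / len(later_posts)

# Schematically:
#   1. manipulate what the user sees (the intervention)
#   2. compute negativity_score(their later posts) as a stand-in for mood
#   3. compare scores between the manipulated and control groups
```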
Under current IRB standards, this experiment should have, at a minimum, notified the subjects that they were part of a study that could affect them negatively.
Yes, that could bias your data, but an n in the hundreds of thousands should wash out that kind of noise.
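(The noise part of that is just the usual 1/sqrt(n) shrinkage of the standard error; toy numbers below, nothing specific to this study.)

```python
# Standard error of a sample mean shrinks like 1/sqrt(n); illustrative numbers only.
import math

def standard_error(std_dev: float, n: int) -> float:
    return std_dev / math.sqrt(n)

for n in (1_000, 100_000, 300_000):
    print(n, round(standard_error(1.0, n), 4))
# 1000 0.0316, 100000 0.0032, 300000 0.0018
```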
And that's just the abstract
Protip: if you're trying to explore something called "contagion", even if it's just a fancy buzzword, maybe you should make sure you have rigorous IRB approval before testing your hypothesis on humans.
And let's not pretend that acceptance of Facebook's EULA constitutes informed consent for an experiment whose predicted outcome includes emotional changes in either direction (positive or negative).
the "no true scotch man" fallacy.
Lack of informed consent, plus not-terribly-novel results (sad news makes people sad!).
the "no true scotch man" fallacy.
Brought this up in the other thread, but I still have 2 outstanding questions:
1) If someone in the study committed suicide, would Facebook be liable right now?
2) Were there, say, 13-year-olds included in the study?
Unless I missed it somewhere, there has been no evidence presented that it actually cleared an IRB. They didn't publish that info in their paper, and so far it is just one statement saying they used a "local" IRB. If you are IRB approved, you fucking include that in your published paper. That is the standard.
Yeah I would like some clarification on that part.
You would have a fuck of a time proving that you had standing, but if you could, absolutely they'd be liable.
I was thinking a situation where, somehow, it could be demonstrated that a kid was in the "negative feelings" group, and (s)he committed suicide during the experiment or shortly after, could there be criminal liability? Or could the parents sue Facebook for a bajillion dollars?
I mean, so many people are saying, or at least were saying in the other thread, "Oh blah blah minimal effect, Facebook's well within its rights, it's just like all this other stuff that's done," but it's expressly these sorts of disastrous situations that compel us to get consent and run our studies past ethical review boards.
Even without a suicide any spike in depression among such a list would open up Facebook to huge liability.
It's mind-boggling that someone thought this was a good idea.
An early report I read said that they cleared it with a Cornell IRB, but it looks like that might not actually be the case?
http://www.theatlantic.com/technology/archive/2014/06/even-the-editor-of-facebooks-mood-study-thought-it-was-creepy/373649/
the "no true scotch man" fallacy.
This is a pretty fantastical hypothetical you have constructed here.
The suicide rate in the US is about 12 per 100,000 per year (http://www.cdc.gov/mmwr/preview/mmwrhtml/mm6128a8.htm). That is to say, the child part is fantastical, but the expected number of suicides among the group (without counting any intrinsic sampling characteristics) would be greater than 36 or so.
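Back-of-the-envelope version of that arithmetic, assuming a hypothetical group of around 300,000 users (my guess at the order of magnitude, not a figure from the paper):

```python
# Expected annual suicides in a group of this size, using the CDC rate quoted above.
# Group size is a hypothetical round number; scale down for shorter observation windows.
annual_rate = 12 / 100_000   # suicides per person per year
group_size = 300_000         # hypothetical arm size, order-of-magnitude guess

print(group_size * annual_rate)  # 36.0 expected per year in a group that size
```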
Again, one of those things about science: when you're dealing with large sample populations, you'd be surprised at the sort of unlikely phenomena that begin popping up. E.g., we never thought that publicizing one person's genome anonymously was a breach of privacy, but people have since demonstrated that they can perform reconstruction attacks on anonymous genome databases and piece together genome-patient relationships.
I don't think there's a hard answer here, not in the legal code anyways. Negligence is based on reasonability. There might be something in case law though.... I'd also add that news agencies are granted special rights in accordance with their societal service: e.g. in Canada, news organizations are shielded from some forms of fraud/libel if they report falsehoods, and can use images of people without their consent, though both are also subject to reasonability.
Edit: Also, Facebook could have tripped laws in various countries, depending on where their users were located. Not sure how the legal ramifications might work out there, but I'd imagine the EU would probably take an unhappier view of the situation.
Is it just a terminology thing, calling it an "experiment" vs market research or whatever? Or is this just about people being creeped out by big data in general?
The legal threats and suicide comments just seem bizarre to me. If someone is committing suicide over their Facebook feed, the problem clearly is not with their Facebook feed. So what's the actual harm here that doesn't occur when any other website messes around with its algorithms in order to learn stuff?
The difference between something like YouTube making their website unusable for you because they wanted to try something out and Facebook sneakily altering the contents of your feed is that one is blatantly obvious and the other isn't. You know you're in a special UI testing group when you complain that Netflix's new design is bullshit and everyone else goes "What are you talking about? It hasn't changed for months." It doesn't matter in a cell phone game if they try different button sizes or positions when you finish a level, to figure out the best way to get you to click through to the store, partially for the same reason (if you complain, others can confirm one way or the other) and also because button placement doesn't evoke negative emotional responses.
You don't know you're in a special experiment group on Facebook designed to affect your emotions by hiding every positive feed post, because you don't know it's going on and there's no way to prove anything is happening. If you complain, people will just tell you to find more positive friends or ditch the downer friends, but no one (until now) would go "hm, you're probably in a test group in an experiment to find out how you handle being surrounded by negativity and depression in your Facebook feed. Wait a month or so and you should start seeing positive stuff again." The reason people are bringing up suicide and whatnot is because of the directed negative force this experiment would have exerted on someone's life. The whole outcry boils down to the lack of discoverability of Facebook's bullshit "experiment."
The authors claim that this is what they are testing
Which is absolute bullshit. Like, the very next paragraph contradicts this (i.e., their experimental design)
First of all, unless you are using a very strange definition of "engaging," I don't see how setting up an experiment to see whether exposure to emotional content led people to post similarly emotional content counts as testing "engagement." I'm already mad, and that's even beyond the fact that what they actually tested was whether exposure to emotional content made you echo similar emotional content, not whether it was "more engaging."
So, okay, what did they actually do?
Well, alright, that doesn't sound too bad... except that your a priori prediction was that users would echo the emotion they were exposed to most frequently. That immediately lands in territory where the IRB should have taken a look at this: your predicted outcome can affect your subjects.
What really fucking grinds my gears is this, though, and it's how I'm going to respond to you, Squidge.
This is what I was saying earlier- they attempt to smokescreen this behind "it was just data! We didn't directly interact with the subjects, just tinkered with the data! The subjects already agreed to let us do this!"
Bullshit. We agreed to let Facebook data mine to track purchasing habits, sure
I mean, I read through this when I signed up for Facebook, and glance through it now and again.
Nowhere in there does it say "we will allow researchers to manipulate your newsfeed to see how you respond to emotional-coded posts".
It doesn't fucking matter if you strip the names and make it completely anonymous; the whole point is that the experimental design itself was unethical without more rigorous review and required more consent from the subjects than was obtained.
That is really the sticking point: the researchers (and Facebook) are claiming that their research was consistent with Facebook's Data Use Policy, which it really, fucking isn't. And even more so, it doesn't matter, because when dealing with manipulative experiments of any kind involving humans, there are strict guidelines on what informed consent is, and agreeing to use a social media service is not informed consent by any stretch of the term.
I want it to be made clear, though, that this wasn't an illegal experiment. I'm not banging the drum for tighter regulation or for suing Facebook (although it would be funny!), but what I am saying is that this was a highly unethical experiment, and perhaps we really need to review how IRBs and companies employing Big Data obtain informed consent from their users.
For a hypothetical example, this wouldn't have been unethical had the researchers instituted a first-pass screen of candidates for the study by asking them upon login if (spitballing) they would like to be a part of a social experiment on Facebook.
In fact, I am going to say that had that been true, everyone would have received this study quite differently.
This seems like one input out of many, and they wanted to study the impact that it had.
Like, other groups make editorial decisions all the time to alter and curate content, but it seems like it gets overlooked as an unethical thing because... it's not being quantified?
Like, the ethical standards are in place to prevent repeats of things like the Syphilis experiments, correct? Is there any credible reason to think that someone fed negative news on FB would get hurt?
That is backwards. They conduct the experiment because they don't know the answer to that question. If the answer to the question were truly no, then the experiment itself would be unnecessary.
Also, market research does not have the express intent to see if they can make you sad / worsen your situation. You're being offered Coke and Pepsi, not Coke and arsenic.
So would you guys all be okay if Nickelodeon started implanting subliminal messaging in their television shows to encourage children to eat more McDonald's? Hey, public perception is influenced by the media all the time anyways, so this is totally innocuous.