Facebook And Research Consent [Tech Ethics {And The Lack Thereof}]
I agree that a literal reading of the definitions you listed would suggest that this kind of experimentation is ethical. I just think it's also pretty silly, and that the framework clearly wasn't written with these kinds of interactions in mind. The comparison to the Milgram experiments is silly and hyperbolic. I have yet to see any actual harm demonstrated that would not have occurred in the standard comings-and-goings of Facebook algorithms. The arguments for why it's wrong seem to come down to the terminology rather than the action. "They published it as a research paper!" If they'd published the same words on their blog, the way some tech companies do all the time, would that be different? The text and the actions are what matter; the titles ("Academic vs. Market Researcher") are ultimately just semantics.
I do think there is an interesting ethical discussion here on who actually owns the data on your Facebook page. The legal answer is that Facebook owns it - they pay for the servers, and they have supreme power over who has access to the page and what gets displayed on it at any given time. You seem to have ethical issues with Facebook running experiments on the data that comes through their pages, yet it seems to me that they have substantially more standing than their users in deciding what happens to their algorithms and their data. So who should be required to consent in a study of a Facebook page? The user? Facebook? Both? Should everyone who might be featured on the page also have to consent, even in situations (like the friends search algorithm) where literally anyone could appear? I would say that Facebook probably has a much better legal claim than any user does, but ethically? Who knows.
But yeah, overall this just seems like a non-issue to me. The idea that we're happy to be studied by advertisers but get angry when we're studied by scientists probably says more about us than it does about Facebook or the researchers involved.
Story is here.
EDIT: Of course, that article kind of goes off the deep end and draws some extremely spurious conclusions from disparate pieces of data, but the AP report and linked material provided by Greenwald corroborate the underlying fact: the U.S. DoD is using this research to figure out how best to weaponize social media (as are several other countries).
Unfortunately our only recourse is to pray they don't alter them further.
Demonstrating harm after the fact and subjecting researchers to reprimand has historically proven too weak a safeguard when it comes to experimentation on human beings. You are correct that non-academic research has not tended to be subject to the same scrutiny as academic research. Maybe it should, maybe it shouldn't, but what the academic community should emphatically not be permitting or supporting is researchers carrying out whatever studies they like and then submitting the ones where everything goes fine as academic research. That's the danger here, even in the absence of demonstrated harm (I'm not convinced that there has been no harm, because the study is sufficiently poorly controlled that nobody seems to have checked - when you carry out research where your subjects are unwitting participants and you perform no follow-up, you pretty much guarantee that none will be brought to your attention).
As to who owns the data, that depends on jurisdiction. Certainly under UK law there are obligations and restrictions on what data you can collect and what you can do as a data controller. I'm curious what the response would be to a Data Subject Access Request specifically querying whether the requestor was a subject in this experiment.
Can't find an example of anyone actually making such a request with cursory googling. If they restricted the experiment to US citizens then there probably isn't anyone with standing to make such a request in the apparent absence of data access rights.
Christ, my experiment in which participants read a list of words and were tested on which words they could remember had a fucking two-page consent form with numbers to call if you were feeling frightened.
To be fair, the words on the paper were "murder", "arson", and "steam summer sale".
And the letters were cut out of magazines and glued under a picture of a loved one holding a current newspaper
This is yet another form of "if it's okay for advertisers, it's okay for scientists," which has been addressed multiple times in this thread. But just to reiterate some points, and to add a few more:
the "no true scotch man" fallacy.
Edit: Added the full monstrosity.
I feel like you put the quotation marks around the wrong word.
You quite literally did.
At what point is it wrong for advertisers to intervene in the lives of people without oversight or regulation?
That would explain why the same argument keeps getting trotted out, I'm sure...
So let's have that conversation. What restrictions do you think marketers should be under in their use of Facebook data? Why do you think that those restrictions should be different for scientists?
Er, no, they do all of this, openly. Selling your information to advertisers is literally Facebook's entire business model, and has been since the beginning. They are using information published by your friends and family to manipulate you; that's the very definition of targeted advertising on a site like Facebook.
Maybe you're not okay with that! Fair enough! A lot of people are creeped out by this kind of thing. Just recognize that the alternative is probably for Facebook to use a subscription service model of some kind. Right now you are the product, not the customer, and someone has to pay the bills.
The two sides of the argument so far have generally been:
* Advertisers get to do it, so it isn't a big deal
versus
* Advertisers getting to do it doesn't negate the ethical requirements for a scientific study
There has been precious little discussion on reversing it and asking why we allow advertisers to get away with possibly unethical practices without the bare minimum of regulation or oversight that we hold the scientific community to.
Uh, cuz scientists have moral principles and advertisers are sleazy capitalist scumbags?
There is really minimal external regulation/oversight of science ethics. It's mostly self-policing. All these journals and universities and scientific funding agencies are primarily run by scientists themselves. That is not to say that government legislation and policy hasn't been important to rein in the outliers, but these ethical science guidelines generally have very strong support and compliance in the scientific community (see this thread).
You'll note that some of the most critical opinions about this issue have been from scientists themselves. Have you heard any advertisers speak out about it? Other than Facebook issuing what was... not just a non-apology, but something more akin to an anti-apology.
Yeah, that's not what I'm saying. I'm not saying "where's the ad self-regulatory agency when this happens". I'm saying "holy shit, why is there no regulation for this stuff (internal or external), and why are we so placidly accepting it?"
Cuz freedumb of speech.
Everyone knows words can't hurt you, so obviously they shouldn't be regulated. :P
This seems completely normal to me. Like Upton Sinclair's The Jungle. Or the railroad barons of the late 19th century. Or the Clean Air and Clean Water Acts. Superfund. Labour regulations circa the Industrial Revolution. Modern-day financial industry, particularly high-frequency trading. Climate change. Government regulation is usually decades behind. It usually requires a horrid state of affairs to prompt government action and rally enough support to overcome conservative apathy. Self-regulatory schemes are attractive primarily because they're far easier to implement, politically speaking.
Christ, here in Canada, it took a spate of teenagers committing suicide for passage of some (completely bullshit) bullying legislation. And even then, I really think it was the fact that it was cyberbullying in particular that prompted any sort of action; bullying in schools has been happening for decades and kids have been killing themselves for just as long, but it was the added fear factor of "OMG COMPUTERS!" that sparked enough parental panic to drive it through.
Honestly, if Facebook had done something slightly subtler, like changing the number of times money was referenced in your news feeds, there wouldn't even be this amount of outrage. It's only because they went so mustache-twirling with explicitly trying to make people sad that we got this much, and even that's not nearly enough to drive any sort of action on the political level.
So, no external. And re. internal, see the part about sleazy capitalist scumbags.*
* That's not entirely it, of course. I was just remarking the other day to some other graduate students, "Don't give me your data! I'm a computer scientist! We're kind of sociopathic by nature! If we weren't, we wouldn't be so attracted to computers! We think of you all as 1s and 0s! You can't trust us with your data; we'll totally fuck with your lives just to see if we can!"
There are probably certain argument vectors one could use to show that the advertising industry causes actionable harm (like body image issues), but most of them aren't really tied directly into the issues in this thread, nor do they have particularly clear regulatory solutions. "Makes people sad sometimes" isn't actionable in itself, especially when people always have the choice to walk away.
In general, if advertising is that big a problem for you, the best thing to do is to start demanding more subscription services, and preparing yourself to pay $10 a month for all of your favorite websites. There is no scenario where the government steps in and waves a magic wand to give you all the benefits of an ad-supported internet without any of the drawbacks. The bills have to get paid one way or another.
It hasn't even been demonstrated that Facebook manipulating the posts of people's friends to affect their emotional state qualifies as "advertising", since it doesn't intrinsically seem to be promoting a product or service. If it did, then it would probably contravene (UK, at least) advertising codes:
2.3 and 4.2 are particularly relevant, since the stated intent of the exercise was to inflict distress on at least half the people involved, and this was done by Facebook manipulating content ostensibly originating from those people's Facebook friends.
Nope.
Showing me ads based on my interests and demographic is categorically different from selectively showing other people my status updates in order to portray my life in a different light.
the "no true scotch man" fallacy.
Same exact thoughts here. I've been clinically depressed for about five years, and the thought that someone could be deliberately trying to make me feel worse just for some megacorp's giggles is fucking infuriating.
Then this thread gets resurrected.
...I'm infected, aren't I?
No, I think that's just advertisers pushing click-bait articles in order to boost ad revenue.
In other news related to Facebook, there was a Danish article about newspapers (mostly online) being "forced" to write certain types of stories in order to get enough revenue from Facebook advertising.
An example would be writing about the annual budget, set in November, which affects us all. That type of article wouldn't prosper on social media because the Facebook algorithm wouldn't deem it interesting (based on people liking and commenting). Thus the newspapers spend an increasing amount of time writing click-bait stories, because those flourish on social media and are deemed interesting by the algorithm.
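To make the incentive concrete, here's a minimal sketch of the kind of engagement-weighted ranking being described. The weights, numbers, and story titles are all invented for illustration; this is not Facebook's actual formula.

```python
# Toy illustration only: an invented engagement score, not Facebook's real ranking.
# All weights and interaction counts are made up for the example.

def engagement_score(likes, comments, shares):
    """Score a story purely by interactions, ignoring civic relevance."""
    return 1.0 * likes + 2.0 * comments + 3.0 * shares

stories = {
    "Annual budget explained": engagement_score(likes=40, comments=5, shares=2),
    "You won't believe what this cat did": engagement_score(likes=900, comments=150, shares=300),
}

# The click-bait story wins the ranking, so it gets the reach (and the ad revenue).
for title, score in sorted(stories.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:8.1f}  {title}")
```

If the only signal is likes, comments, and shares, the budget story loses every time, which is exactly the pressure the Danish article describes.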
As Bubby wrote earlier: