
"Because we can": ethics in scientific experiments


Posts

  • TychoCelchuuu PIGEON San Diego, CA Registered User regular
    poshniallo wrote: »
    Is pain a subjective thing? It is nerve impulses, neurotransmitters and the like. You can do fMRIs to show pain responses.

    I mean, I understand it is a brain event, but so are strokes and nobody considers them subjective. I think we have to think carefully about brain events and how we apply the subjective/objective distinction.

    There are a lot of flat claims in this thread - pain cannot be measured, pain always serves a useful purpose etc - that are just not true. People with disabilities and chronic conditions suffer pain that is not only useless, but often the main problem. Computer-assisted scanning can detect pain.

    I also think that we shouldn't be so simplistic about pain. For example, killing babies makes more suffering and reduces utility more than killing rats. Because parents care about their babies more, and would probably be willing to kill to protect them.
    You can do an fMRI to show a pain response but you can't know that the person feeling that pain response feels the same pain that you would feel if you had the same reading in the fMRI, because pain is more than just a brain pattern: it's the feeling that the brain pattern causes.

    If you disagree and think that pain is just what the fMRI shows, then fine, that's what pain is. In that case we should care about dog pain (I presume dogs can be put in fMRI machines and that we can see their pain?) just as much as we do about human pain, generally, and so on for the rest of the animals that can feel pain.

    You're right about the killing babies thing. This is why I am trying to stay away from death. Let's just focus on experiments like the one where we cause brain damage to something to study how the brain deals with brain damage. This is causing a lot of pain to the subject. Should these be allowed? I'm arguing that unless we think it would make sense to allow these experiments on humans, we shouldn't think it makes sense to allow these experiments on non-human animals.

  • Paladin Registered User regular
    Paladin wrote: »
    Paladin wrote: »
    So it seems like our disagreement is just about how much we get out of science experiments compared to how much displeasure it takes for us to do them.

    Pretty much. You appear to think that there is no such thing as an experiment that has benefits which outweigh the moral values involved in hurting or killing a test animal. I think there are a shit ton of them. Do you have any idea how many studies are done every year using animal models wherein the expected outcome is death? They're used to determine safe concentrations and workplace guidelines for chemicals, food additives, etc. I think every one of them is more important than the rats that die in the process.
    I'm not denying that experiments can outweigh the costs of using animals. The line I've said god knows how many times in this thread is that we should only do experiments on non-human animals if we would also be willing to use non-consenting human beings in these experiments.

    It harms society less to do these experiments on animals.
    I thought we agreed that "society" doesn't matter, just the sum total of pleasure/pain.

    Actually we never did. If we accepted that, it would mean awful things about attempted rape and things like that. We're not going for Brave New World here.
    Whoops, sorry, confused you with CptHamilton.

    Okay, then I want to claim that "society" doesn't matter any more than "white people society" matters.

    Our society is also one where individual freedom matters. If we do not let people perform actions that may hurt themselves, then what we should do is introduce a chemical into the atmosphere that makes the next generation of humans infertile. The few that make it through will then be a manageable population who can be herded into an arcology where they will be cared for with all the stimulation we can automate. All who become diseased or depressed or senile will be put down and replaced with new people, but otherwise the population will not be allowed to expand.

    Reducing the size of the population and its prospects for resource consumption ensures that for every human existing, we can optimize pleasure instead of negating it with the pain of competition and work. With such a small population, we can extend the use of this system far past the shelf life of humanity until the total amount of pleasure experienced over generations becomes greater than a globe full of free humans could ever hope to achieve without bankrupting itself.

    Experimentation and freedom are a tempered form of evolution, which was always a force to maximize pain and minimize pleasure. These tenets as well as others combine with the objective to maximize pleasure and minimize pain to form the institution of society, so it is not acceptable to most to give up the whole to maintain a fragment of our objectives as a species.

    Marty: The future, it's where you're going?
    Doc: That's right, twenty-five years into the future. I've always dreamed of seeing the future, looking beyond my years, seeing the progress of mankind. I'll also be able to see who wins the next twenty-five World Series.
  • Paladin Registered User regular
    TychoCelchuuu wrote: »
    Let's just focus on experiments like the one where we cause brain damage to something to study how the brain deals with brain damage. This is causing a lot of pain to the subject. Should these be allowed? I'm arguing that unless we think it would make sense to allow these experiments on humans, we shouldn't think it makes sense to allow these experiments on non-human animals.

    as long as it's thorough, brain damage can be quite painless to the victim. We abolished frontotemporal lobotomies not because they caused pain, but because they greatly diminished a more complicated measure we call quality of life.

  • poshniallo Registered User regular
    Paladin wrote: »

    as long as it's thorough, brain damage can be quite painless to the victim. We abolished frontotemporal lobotomies not because they caused pain, but because they greatly diminished a more complicated measure we call quality of life.

    My point was that we wouldn't call a lobotomy subjective, and perhaps we shouldn't call pain subjective either.

    I figure I could take a bear.
  • Paladin Registered User regular
    poshniallo wrote: »

    My point was that we wouldn't call a lobotomy subjective, and perhaps we shouldn't call pain subjective either.

    pain is subjective because fMRI technology is not actually as magical as it looks. For something as general as pain it does not really say anything meaningful at all.

    This is why pain is measured on subjective scales like the numerical rating scale, where a physician will ask you "Rate your pain from 0 to 10, where 0 is no pain and 10 is the worst you've ever felt," because the one thing we can reliably follow is relative pain in one person over the course of a disease, which is usually good enough for diagnostic and triage purposes.

  • poshniallo Registered User regular
    Paladin wrote: »

    pain is subjective because fMRI technology is not actually as magical as it looks. For something as general as pain it does not really say anything meaningful at all.

    This is why pain is measured on subjective scales like the numerical rating scale, where a physician will ask you "Rate your pain from 0 to 10, where 0 is no pain and 10 is the worst you've ever felt," because the one thing we can reliably follow is relative pain in one person over the course of a disease, which is usually good enough for diagnostic and triage purposes.

    The magical comment is patronising. There are studies measuring pain using fMRIs and computer algorithms.

    And even if we don't have the technology to measure pain well, the fact that we can measure it in any way surely shows that it is an objective phenomenon?

  • TychoCelchuuu PIGEON San Diego, CA Registered User regular
    edited August 2012
    Paladin wrote: »

    Our society is also one were individual freedom matters. If we do not let people perform actions that may hurt themselves, then what we should do is introduce a chemical into the atmosphere that makes the next generation of humans infertile. The few that make it through will then be a manageable population who can be herded into an arcology where they will be cared for with all the stimulation we can automate. All who become diseased or depressed or senile will be put down and replaced with new people, but otherwise the population will not be allowed to expand.

    Reducing the size of the population and its prospects for resource consumption ensures that for every human existing, we can optimize pleasure instead of negating it with the pain of competition and work. With such a small population, we can extend the use of this system far past the shelf life of humanity until the total amount of pleasure experienced over generations becomes greater than a globe full of free humans could ever hope to achieve without bankrupting itself.

    Experimentation and freedom are a tempered form of evolution, which was always a force to maximize pain and minimize pleasure. These tenets as well as others combine with the objective to maximize pleasure and minimize pain to form the institution of society, so it is not acceptable to most to give up the whole to maintain a fragment of our objectives as a species.
    I don't understand any of this. Is the first paragraph supposed to be an implication of my view? I don't care about people hurting themselves. I care about people hurting others. Is the second paragraph a continuation of the first? I don't get how it's supposed to happen or why it needs to use non-human animals in science experiments but not use humans in science experiments. The third paragraph is the most confusing. Evolution isn't designed to maximize pain and minimize pleasure, it's just a process by which the genes that increase reproductive fitness get passed on and therefore the process by which mutations that increase reproductive fitness become more prevalent. I don't think that "society" is composed of evolution + the drive to maximize pleasure and minimize pain and even if it were my point is that society is largely irrelevant to the morality of harming other living creatures.

    The most confusing part by far is the very last word of the very last sentence, plus the times when you say "human." Why the focus just on human beings? I don't understand why this makes more sense than just focusing on white people.
    Paladin wrote: »

    as long as it's thorough, brain damage can be quite painless to the victim. We abolished frontotemporal lobotomies not because they caused pain, but because they greatly diminished a more complicated measure we call quality of life.
    Okay then forget brain damage. Take cancer. Or anything. Whatever. I AM TALKING ABOUT PAIN. Obviously if you end up with the lucky kind of painless brain damage then you're part of a separate example.
    Paladin wrote: »

    pain is subjective because fMRI technology is not actually as magical as it looks. For something as general as pain it does not really say anything meaningful at all.

    This is why pain is measured on subjective scales like the numerical rating scale, where a physician will ask you "Rate your pain from 0 to 10, where 0 is no pain and 10 is the worst you've ever felt," because the one thing we can reliably follow is relative pain in one person over the course of a disease, which is usually good enough for diagnostic and triage purposes.
    And even if it isn't good enough for diagnostic and triage purposes, it's pretty much all we have, because like I've pointed out, there isn't a magic pain sensing machine and I doubt there ever will be.

    edit: @poshniallo I have pointed this out once before but you are confusing fMRI scans of brains of people in pain with the actual pain itself. There is a component of pain over and above what the fMRI shows, which is the feeling that you get when you are in pain. There is no way for the fMRI to show that the feeling of one person in one fMRI scan is the same as the feeling of another person in another fMRI scan even if the fMRI scans are identical, because they might feel those two pains differently. And the feeling of pain is subjective.

  • Paladin Registered User regular
    The first paragraph is basically Brave New World, Logan's Run, The Time Machine, the last story in I, Robot, and Harrison Bergeron, though I suppose I could have done a better job of it. These works of literature span several eras and share a common theme of the deconstruction of the hedonist utopia.

    What you are advocating is not utilitarianism, it is hedonism. Hedonism was actually a genuine philosophical precursor to utilitarianism, but it is old and has since been supplanted by more complex theories.


    It makes sense to separate human society from animals because all human society has basic rules enforced around the globe that exempt animals. Humans are not allowed to kill other humans unless warranted by specific circumstances outlined in the law, and animals are allowed to kill each other with no moderation. That is one of the examples that form the foundation of basic society. There are smaller societies within the largest scope with even more rules, but all successful societies do not stray from the basics. Everyone who fits in this basic envelope of society is granted human rights, and everyone who does not, is not. The reason why "white society" isn't appropriate is because "black society" also treated its own members as deserving of human rights, even if white society did not. Therefore, even when segregated and ranked, both societies still fit in the larger blanket of society that conferred the most basic of human rights to each and none outside.

  • Quid I don't... what... hnnng Registered User regular
    edited August 2012
    Paladin wrote: »
    It is very okay to torture animals for fun; it's called hunting.

    Kill =/= torture. And even though I don't like hunting for sport, even I can accept that the point of it isn't to prolong an animal's pain.

    Edit: And believe me, I'm all for ending any hunting that isn't necessary for a healthy ecosystem.

  • Paladin Registered User regular
    poshniallo wrote: »
    Paladin wrote: »
    poshniallo wrote: »
    Paladin wrote: »
    poshniallo wrote: »
    Is pain a subjective thing? It is nerve impulses, neurotransmitters and the like. You can do fMRIs to show pain responses.

    I mean, I understand it is a brain event, but so are strokes and nobody considers them subjective. I think we have to think carefully about brain events and how we apply the subjective/objective distinction.

    There are a lot of flat claims in this thread - pain cannot be measured, pain always serves a useful purpose etc - that are just not true. People with disabilities and chronic conditions suffer pain that is not only useless, but often the main problem. Computer-assisted scanning can detect pain.

    I also think that we shouldn't be so simplistic about pain. For example, killing babies makes more suffering and reduces utility more than killing rats. Because parents care about their babies more, and would probably be willing to kill to protect them.
    You can do an fMRI to show a pain response but you can't know that the person feeling that pain response feels the same pain that you would feel if you had the same reading in the fMRI, because pain is more than just a brain pattern: it's the feeling that the brain pattern causes.

    If you disagree and think that pain is just what the fMRI shows, then fine, that's what pain is. In that case we should care about dog pain (I presume dogs can be put in fMRI machines and that we can see their pain?) just as much as we do about human pain, generally, and so on for the rest of the animals that can feel pain.

    You're right about the killing babies thing. This is why I am trying to stay away from death. Let's just focus on experiments like the one where we cause brain damage to something to study how the brain deals with brain damage. This is causing a lot of pain to the subject. Should these be allowed? I'm arguing that unless we think it would make sense to allow these experiments on humans, we shouldn't think it makes sense to allow these experiments on non-human animals.

    As long as it's thorough, brain damage can be quite painless to the victim. We abolished prefrontal lobotomies not because they caused pain, but because they greatly diminished a more complicated measure we call quality of life.

    My point was that we wouldn't call a lobotomy subjective, and perhaps we shouldn't call pain subjective either.

    Pain is subjective because fMRI technology is not actually as magical as it looks. For something as general as pain, it does not really say anything meaningful at all.

    This is why pain is measured on subjective scales like the numerical rating scale, where a physician will ask you "Rate your pain from 0 to 10, where 0 is no pain and 10 is the worst you've ever felt," because the one thing we can reliably follow is relative pain in one person over the course of a disease, which is usually good enough for diagnostic and triage purposes.

    The magical comment is patronising. There are studies measuring pain using fMRIs and computer algorithms.

    And even if we don't have the technology to measure pain well, the fact that we can measure it in any way surely shows that it is an objective phenomenon?

    It is eventually objective, but the magic I was referring to is just a reflection of the great disappointment fMRI has been in science so far. It is extremely useful for certain things but otherwise it tends to get very tea-leaf. Rating pain objectively would be very useful but we settle for subjective measures because that's the only thing science will allow. We should call pain subjective as an admission of our failure to objectively characterize it, not because that's a fact.

    Marty: The future, it's where you're going?
    Doc: That's right, twenty five years into the future. I've always dreamed of seeing the future, looking beyond my years, seeing the progress of mankind. I'll also be able to see who wins the next twenty-five world series.
  • Paladin Registered User regular
    Quid wrote: »
    Paladin wrote: »
    It is very okay to torture animals for fun; it's called hunting.

    Kill =/= torture. And even though I don't like hunting for sport, even I can accept that the point of it isn't to prolong an animal's pain.

    Edit: And believe me, I'm all for ending any hunting that isn't necessary for a healthy ecosystem.

    there's a small difference between causing pain to an animal because you enjoy causing pain and causing pain to an animal because your favorite activity happens to cause pain to animals

    the point was kind of lost with the excessive dog kicking

  • Quid I don't... what... hnnng Registered User regular
    Paladin wrote: »
    Quid wrote: »
    Paladin wrote: »
    It is very okay to torture animals for fun; it's called hunting.

    Kill =/= torture. And even though I don't like hunting for sport, even I can accept that the point of it isn't to prolong an animal's pain.

    Edit: And believe me, I'm all for ending any hunting that isn't necessary for a healthy ecosystem.

    there's a small difference between causing pain to an animal because you enjoy causing pain and causing pain to an animal because your favorite activity happens to cause pain to animals

    the point was kind of lost with the excessive dog kicking

    No? Kicking a dog for no reason over and over is not at all the equivalent of killing an animal with minimal pain, especially if it's necessary to kill the animal to curb the population. You might as well say humanely raising and killing a cow is the equivalent of bear baiting.

  • TychoCelchuuu ___________PIGEON _________San Diego, CA Registered User regular
    edited August 2012
    Paladin wrote: »
    The first paragraph is basically Brave New World, Logan's Run, The Time Machine, the last story in I, Robot, and Harrison Bergeron, though I suppose I could have done a better job of it. These works of literature span several eras and share a common theme of the deconstruction of the hedonist utopia.

    What you are advocating is not utilitarianism, it is hedonism. Hedonism was actually a genuine philosophical precursor to utilitarianism and not just

    tumblr_m8u7lkRQlz1qj25hxo1_400.jpg

    But it was also old and since supplanted by more complex theories.
    This is incorrect. Classic utilitarianism as defined by Bentham and Mill is hedonism plus consequentialism. Hedonism is the claim that pleasure is the only good and consequentialism is the claim that we should maximize the good. Utilitarianism therefore is the philosophy that we should maximize pleasure. There are other versions of hedonism that do not also accept consequentialism, including the versions of hedonism supported by ancient philosophers, but utilitarianism itself includes branches that rely on hedonism. Other kinds of utilitarianism identify the good not with pleasure (and are thus not hedonistic) but with something like preference satisfaction (this is Singer's position I believe), but even these work with my claims, because the preferences of many individuals, including basically every non-human animal and almost every human, include pleasure and a minimization of pain, such that a consequentialism that holds preference satisfaction to be the good can still support the claims that I have made about animal experimentation.

    But none of that is really relevant because I still don't understand what your post was meant to say. In any case, moving on:
    Paladin wrote: »
    It makes sense to separate human society from animals because all human society has basic rules enforced around the globe that exempt animals. Humans are not allowed to kill other humans unless warranted by specific circumstances outlined in the law, and animals are allowed to kill each other with no moderation. That is one of the examples that forms the foundation of basic society. There are smaller societies within the largest scope with even more rules, but all successful societies do not stray from the basics. Everyone who fits in this basic envelope of society is granted rights as humans, and everyone who does not, is not. The reason why "white society" isn't appropriate is because "black society" also treated its own members as deserving of human rights, even if white society did not. Therefore, even when segregated and ranked, both societies still fit in the larger blanket of society that conferred the most basic of human rights to each and none outside.
    It's okay to exclude non-human animals from society because society has rules that exclude non-human animals? That sounds circular. You then attempt to justify this by noting that there are two problems with limits on society, but these problems don't apply to the exclusion of non-human animals.

    The first problem is that societies with a smaller scope and more rules are not successful. To be successful, a society cannot stray from the basics, and the basics exclude non-human animals but not any specific humans.

    This sounds wrong. Most successful societies partially exclude outsiders, including human outsiders. The United States does not provide as much aid to foreigners as it does to its own citizens. Nor does any other country on earth.

    Let's say it's right, though. Why should the morality of pain infliction give a flying fuck about the success of societies? When we decide whether it's okay for me to give a dog a swift kick to the ribs, and this will have no influence one way or another on society, isn't it still wrong for me to kick the dog? Isn't it sometimes still wrong even if it helps society very slightly?

    Let's say that argument fails somehow, though, and what really matters for determining whether pain infliction is okay is how society fares. Fine, I still want to understand why we exclude non-human animals from society. Your first reason was that to be successful a society has to exclude animals from the "don't hurt" rule. That sounds wrong. Lots of successful societies have animal cruelty laws. I think we could easily have stricter animal cruelty laws and be just as successful, just like we used to have more lax animal cruelty laws and were still successful.

    So much for the first reason to exclude non-human animals, which is still fuzzy to me even after all that.

    The second reason to exclude non-human animals but not people with other skin colors is because if we exclude, for instance, black people, then they make their own society that treats all black people equally, so even though we've formed a smaller, white-only society, they've formed a black-only society and then it all fits in to a wonderful human rights quilt.

    First of all, that's wrong. Back when we excluded black people in America, they didn't form their own wonderful loving society. They were forced to work as slaves on farms.

    Second of all, that's not an argument for why I can't form a white person society. In fact, you say it's okay, because then black people just form their own society too! So you're saying that, when it comes to inflicting pain on living creatures, it is okay to care more about whether I inflict pain on white people, because then black people can care more about whether they inflict pain on black people, and it all works out. This sounds like poppycock to me. That doesn't sound like a solution at all. It's just everyone being racist.

    TychoCelchuuu on
  • Paladin Registered User regular
    I don't understand where you're going with this as I was simply saying that the fundamental rule of society is that its members must obey a minimum basic social contract necessary for civilization to function. Animals are exempt from following these simple laws - don't kill another member of society being one of them - and as a consequence they don't have human rights. You cannot generalize that to the white society subset, because black societies were always capable of obeying the social contract.

    The idea that has continually eluded discussion for some reason is that animals cannot obey the social contract. That is the reason why the dividing line is made between humans and animals and is why these analogies to slavery are weird. Babies eventually turn into people able to obey the social contract, while certain disabled people and criminals don't, and their freedom is restricted accordingly - they do not have the same rights as realized citizens, even under the law. However, we have institutions in place to fit them in the framework of our society as much as possible, so they can't be called exempt.

  • TychoCelchuuu ___________PIGEON _________San Diego, CA Registered User regular
    edited August 2012
    Yes! Everything you say is right! And completely irrelevant to whether it is okay to cause pain to individuals that are unable (or unwilling) to be party to the social contract!

    TychoCelchuuu on
  • Paladin Registered User regular
    Quid wrote: »
    Paladin wrote: »
    Quid wrote: »
    Paladin wrote: »
    It is very okay to torture animals for fun; it's called hunting.

    Kill =/= torture. And even though I don't like hunting for sport, even I can accept that the point of it isn't to prolong an animal's pain.

    Edit: And believe me, I'm all for ending any hunting that isn't necessary for a healthy ecosystem.

    there's a small difference between causing pain to an animal because you enjoy causing pain and causing pain to an animal because your favorite activity happens to cause pain to animals

    the point was kind of lost with the excessive dog kicking

    No? Kicking a dog for no reason over and over is not at all the equivalent of killing an animal with minimal pain, especially if it's necessary to kill the animal to curb the population. You might as well say humanely raising and killing a cow is the equivalent of bear baiting.

    I didn't put forth the analogy of kicking a dog for no reason over and over, which was continually stacked on with confounding qualifiers over the course of the discussion. I'm just talking about killing dogs vs. killing deer with minimal attention paid to reducing suffering prior to death, and was arguing that the taboo of causing pain to dogs but not deer is cultural, not moral.

  • Paladin Registered User regular
    Yes! Everything you say is right! And completely irrelevant to whether it is okay to cause pain to individuals that are unable (or unwilling) to be party to the social contract!

    It is okay to cause pain to individuals that are unable or unwilling to be party to the social contract. Self defense is protected from being second degree murder or assault. The consumption of the meat of cephalic animals is acceptable. Negative reinforcement is a valid parenting mechanism. Lawbreakers are prevented from engaging in illegal activities pleasing to them. Pain as a part of animal studies is approved on a case by case basis.

  • Dolraith Registered User
    You estimate? You figure that a dog feels as much pain as you do when both of you get poked with a fork.
    I'll go ahead and blow your mind: not only is it impossible for us to know if the snake is experiencing the same amount of pain as we do when it's poked with a fork... we can't even tell if other human beings feel the same amount of pain! It's impossible! I can ask you to rate your pain on a scale of 1 to 10, but your 1 to 10 might be different from my 1 to 10. I could break your toe and break my toe and then ask you to rate all subsequent pains with respect to the broken toe, but we don't know that your broken toe feels like my broken toe! It's impossible to get inside someone else's head because experiences are specific to the mind experiencing them.

    Ummm.....pick one? We can't have an oversight institution that relies on the empathy of a single human - it'd be much too subjective. If you can't rate others' pain, then your system is untenable. If you can rate pain, I want to know how you adjust it to different physiques. How much pain does a dog hit on the nose feel? You don't have an organ with that nerve concentration. Same for pulling a cat's whiskers out. If you have a systematic way of gauging the pain of others - I'm listening.


    Also, either we're trying to minimize pain across time, or at a particular moment. If we're going across time, then any marginal advance from animal experiments is justified, as it will affect (in a very small way) billions of lives across the millennia. If we're minimizing pain at a certain moment of time, we should all get hopped up on endorphins, dopamine, opiates, etc, and die out in a glorious moment of global pleasure.

  • TychoCelchuuu ___________PIGEON _________San Diego, CA Registered User regular
    edited August 2012
    Paladin wrote: »
    Yes! Everything you say is right! And completely irrelevant to whether it is okay to cause pain to individuals that are unable (or unwilling) to be party to the social contract!

    It is okay to cause pain to individuals that are unable or unwilling to be party to the social contract. Self defense is protected from being second degree murder or assault. The consumption of the meat of cephalic animals is acceptable. Negative reinforcement is a valid parenting mechanism. Lawbreakers are prevented from engaging in illegal activities pleasing to them. Pain as a part of animal studies is approved on a case by case basis.
    Okay. And however you determine how much pain it's okay to cause to a baby or someone else not party to the social contract should be the same way you determine how much pain it's okay to cause to a non-human animal, for consistency's sake.

    edit: and given that babies are cephalic animals, you're going to either have to say baby eating is okay or dump the cephalic animal flesh eating part.

    TychoCelchuuu on
  • Paladin Registered User regular
    Paladin wrote: »
    Yes! Everything you say is right! And completely irrelevant to whether it is okay to cause pain to individuals that are unable (or unwilling) to be party to the social contract!

    It is okay to cause pain to individuals that are unable or unwilling to be party to the social contract. Self defense is protected from being second degree murder or assault. The consumption of the meat of cephalic animals is acceptable. Negative reinforcement is a valid parenting mechanism. Lawbreakers are prevented from engaging in illegal activities pleasing to them. Pain as a part of animal studies is approved on a case by case basis.
    Okay. And however you determine how much pain it's okay to cause to a baby or someone else not party to the social contract should be the same way you determine how much pain it's okay to cause to a non-human animal, for consistency's sake.

    A baby becomes an individual that obeys the social contract and is therefore not exempt. Causing physical pain damages the baby which compromises its ability to develop, but we do administer mental agony in the form of restricted freedom of behavior and rescinded ownership of possessions to again allow it to adjust to the social contract. If it cannot adjust after a certain period of time, it is further incarcerated against its will.

  • TychoCelchuuu ___________PIGEON _________San Diego, CA Registered User regular
    Dolraith wrote: »
    You estimate? You figure that a dog feels as much pain as you do when both of you get poked with a fork.
    I'll go ahead and blow your mind: not only is it impossible for us to know if the snake is experiencing the same amount of pain as we do when it's poked with a fork... we can't even tell if other human beings feel the same amount of pain! It's impossible! I can ask you to rate your pain on a scale of 1 to 10, but your 1 to 10 might be different from my 1 to 10. I could break your toe and break my toe and then ask you to rate all subsequent pains with respect to the broken toe, but we don't know that your broken toe feels like my broken toe! It's impossible to get inside someone else's head because experiences are specific to the mind experiencing them.

    Ummm.....pick one? We can't have an oversight institution that relies on the empathy of a single human - it'd be much too subjective. If you can't rate others' pain, then your system is untenable. If you can rate pain, I want to know how you adjust it to different physiques. How much pain does a dog hit on the nose feel? You don't have an organ with that nerve concentration. Same for pulling a cat's whiskers out. If you have a systematic way of gauging the pain of others - I'm listening.


    Also, either we're trying to minimize pain across time, or at a particular moment. If we're going across time, then any marginal advance from animal experiments is justified, as it will affect (in a very small way) billions of lives across the millennia. If we're minimizing pain at a certain moment of time, we should all get hopped up on endorphins, dopamine, opiates, etc, and die out in a glorious moment of global pleasure.
    Honestly, you're asking very good questions for another thread. You're getting to the heart of very important questions in ethics and the philosophy of mind. This is the thread about science experiments done to animals. If you're saying things like "how much does it really hurt a dog when we bop it on the nose? It's a mystery!" or "we need everything to be 100% systematic before we can ever make a decision!" then you're going to run into issues not just for solving science experiment questions but for solving questions like "is murder wrong" or "can I stab my son in the eye with a fork" or "can ethics ever exist in a world where we do not have direct access to the qualia of other beings" or "if I murder 1000 people right now and just get lucky enough to have killed the next Hitler, aren't I a hero" and stuff like that. These are all stupendous questions but they're at the wrong level for this thread, because we can't challenge all of philosophy whenever we ask a question like "is it okay to breed a bunch of rats to have cancer and then watch what happens." I'm not saying your questions are irrelevant to the issue at hand. It's just that your questions challenge a lot of positions in a lot of areas for a lot of reasons, and I think any of the many halfway sensible resolutions to your questions are just going to bring us back to where this thread began.

  • Dolraith Registered User
    I am saying that if you're trying to develop a system that tells you whether a specific experiment on an animal is OK, then the system needs to be consistent. Otherwise, whether an experiment is OK or not will end up depending on whether Mr. Bureaucrat had his coffee today. And that makes for terrible science. I'm not asking you to produce a finished system, I'm saying that unless you have an idea of how to implement your system in the real world, it has no place in applied ethics.

  • TychoCelchuuu ___________PIGEON _________San Diego, CA Registered User regular
    Apply the same rules to non-human animals in science studies as are applied to babies.

  • Paladin Registered User regular
    edit: and given that babies are cephalic animals, you're going to either have to say baby eating is okay or dump the cephalic animal flesh eating part.

    I think I see what the problem is.

    When I make logical statements, I do so with the broadest possible interpretation. For instance, people should exercise because exercise is good for the body. There are examples of people who shouldn't exercise even though exercise is good for the body because of another qualifier - maybe they're too busy, or their active lifestyle is enough exercise already.

    If the argument were a sculpture, I'd be starting with the entire block and carving out unnecessary sections where appropriate. You're making an argument like legos, starting with specific examples and snapping on others to eventually describe the applicable population.

    So when I say "cephalic animals are okay to eat," it is understood that exceptions exist and this must be further qualified (which I do - cephalic animals that are expected to obey the social contract are not eaten), but what I'm simply saying is that we eat animals with brains - brains seemingly capable of feeling pain. This is not the only qualifier of what we eat, but it is one of the general ones.

  • Quid I don't... what... hnnng Registered User regular
    Paladin wrote: »
    I didn't put forth the analogy of kicking a dog for no reason over and over, which was continually stacked on with confounding qualifiers over the course of the discussion. I'm just talking about killing dogs vs. killing deer with minimal attention paid to reducing suffering prior to death, and was arguing that the taboo of causing pain to dogs but not deer is cultural, not moral.

    Then don't run with it? And you're also actually very wrong. We kill dogs as a method of population control all the time.

  • Paladin Registered User regular
    Quid wrote: »
    Paladin wrote: »
    I didn't put forth the analogy of kicking a dog for no reason over and over, which was continually stacked on with confounding qualifiers over the course of the discussion. I'm just talking about killing dogs vs. killing deer with minimal attention paid to reducing suffering prior to death, and was arguing that the taboo of causing pain to dogs but not deer is cultural, not moral.

    Then don't run with it? And you're also actually very wrong. We kill dogs as a method of population control all the time.

    It was made clear that TychoCelchuuu would no longer wish to continue the argument if I maintained that harming dogs could in some way be perceived as not immoral, so I endeavored to explain the argument without dismissing it. Apparently I did so badly.

  • Quid I don't... what... hnnng Registered User regular
    That doesn't address that we still do, in fact, kill dogs.

  • TychoCelchuuu ___________PIGEON _________San Diego, CA Registered User regular
    edited August 2012
    I thought I was pretty clear that I was talking about hurting dogs and that killing is an entirely separate matter. I've said this... so many times now.

    And with that, I'm out for now. I'm traveling starting tomorrow and I don't have a smart phone so unless I end up near a computer and decide to check the thread I'm unlikely to be part of the conversation anymore for at least a week. I was thinking about doing a summary of what I've been arguing for but that can basically be found in the first few posts @MrMister made where he referenced Peter Singer's argument, because with a few exceptions I think I've been trying to push his argument, basically, and because he actually sat down and wrote it all out logically rather than presenting it in the scattershot manner I've done here, that's probably the best source. Alternatively it might help to go back and read through the thread from start to finish (I know I will at some point) because I've repeated myself quite a bit on certain points.

    Bye for now!

    TychoCelchuuu on
  • Paladin Registered User regular
    edited August 2012
    Quid wrote: »
    That doesn't address that we still do, in fact, kill dogs.

    Most of the relevant discussion is below:
    Paladin wrote: »
    ElJeffe wrote: »
    You know, I think when we get to the point where we are making comparisons between horses and black people as if they are actually the same thing, the conversation has gone to a really weird place.

    Like, the whole argument strikes me as a sort of ethical shorthand for people too intellectually lazy to bother thinking about why we grant any entity moral value to begin with.
    I think it makes more sense to argue that an idea is wrong rather than just to throw personal attacks at the people who hold the idea, but to honestly call someone like Peter Singer "intellectually lazy" sounds pretty strange to me. Peter Singer is one of the most celebrated philosophers alive today and he has made important contributions to a number of debates in applied ethics. The idea that he has not thought about "why we grant any entity moral value to begin with" is very strange, even more so because he's hardly the only one to have said this sort of stuff. Jeremy Bentham, one of the fathers of utilitarianism, made basically the same argument hundreds of years ago, and if Bentham was intellectually lazy then I'm a pigeon. As for myself I can say that I've certainly thought about that quite a bit too, and if I thought any of it was relevant to whether we can use non-human animals to experiment on, then I would be talking about it. I don't think any of it's really relevant, though, because Jeremy Bentham and Peter Singer have a simple argument that I am appropriating: when it comes to causing pain, it is unacceptable to draw an arbitrary line between creatures it is okay to cause pain to and creatures that it is not okay to cause pain to, and to distinguish between humans and non-human animals is to draw exactly such an arbitrary line and it is also to make the same mistake that a racist does when they draw an arbitrary line between members of their own race and those of other races.

    edit: Paladin, there is no difference between kicking a dog and shooting a deer (unless the deer dies a painless death, which complicates matters, but let's just say it doesn't). You're right that I don't have the majority view, but neither do you, because you say kicking a dog is OK, which almost nobody believes. I actually have an argument for why I'm right: I say that drawing a line between dogs and deer is arbitrary, so you have to either give up your dog beliefs or your deer beliefs, and your dog beliefs are stronger. Do you have an argument for why we should instead give up our belief that kicking a dog is wrong in order to salvage our belief that hurting a deer is OK? Because you have a long way to go to do that - I doubt most hunters would accept that kicking a dog would be okay, and in fact I know hunters who would probably beat you up if you kicked their dog.

    You can go to East Asia and find killing a dog to be perfectly acceptable. It is usually then eaten, but not always.

    There are a few objective reasons why it's not acceptable here, none of which clash with our taste for killing deer.

    One is that dogs are usually somebody's property. You get in trouble for damaging somebody's property, and it makes no sense to damage your own property.

    Second, large animals are a good model for human/human violence as well as human/human studies. This may be a reason why humans prone to animal violence later become serial killers. For selfish reasons alone, it is not a good idea to torture domesticated animals.

    Third, dogs in particular reciprocate violence, so for reasons of personal safety and the safety of others, it is not a good idea to familiarize your animal with violence.

    However, if you are going to eat your dog or your snake or your fish, then all you have to fear is the judgmental eye of your neighbor, which is not a good barometer of morality. Fish are a very good example because they are vertebrates that can feel pain, yet we suffocate them and decapitate them while they're thrashing in order to make sashimi. Listen to the kitchen in the back the next time you go to a restaurant with a live aquarium.
    If you really think that kicking a dog is wrong not because you shouldn't hurt dogs but because the dog is someone's property and because it makes you a jerk towards humans and because it might bite people, then you're basically off the train. We could have this argument elsewhere but for now I'd like to just abandon attempting to convince you, because I feel like any normal person with empathy and no vested interest in protecting some philosophical theory which supports dog-kicking is going to say that kicking a dog is wrong because it hurts the dog.

    The basic supposition was that since animals can feel pain, you shouldn't hurt animals. However, we do cause pain to animals acceptably, so I produced objective caveats that modified the equation. Your example of killing dogs is actually more evidence for my argument.


    Note: this is all supposing that getting shot with a hunting rifle is painful for the deer, and killing a dog is painful for the dog. I came up with the deer argument, and TychoCelchuuu responded with the dog argument.

    Paladin on
    Marty: The future, it's where you're going?
    Doc: That's right, twenty-five years into the future. I've always dreamed of seeing the future, looking beyond my years, seeing the progress of mankind. I'll also be able to see who wins the next twenty-five world series.
  • DolraithDolraith Registered User
    I think what's going on is that the argument is the following:
    1. Causing pain in the animals is wrong.
    2. Causing pain for the greater good is not wrong.
    3. Since we cannot actually estimate how much pain animals feel, we need to go to the worst-case scenario (human analogue).
    Of course, his argument implies that cannibalism is as OK as hunting, provided the social stigma is stripped from it. He does not actually say whether or not it is OK, just that it is equally OK.

  • QuidQuid I don't... what... hnnng Registered User regular
    Paladin wrote: »
    Note: this is all supposing that getting shot with a hunting rifle is painful for the deer, and killing a dog is painful for the dog. I came up with the deer argument, and TychoCelchuuu responded with the dog argument.

    I'm perfectly fine assuming neither is as painful as slowly starving to death. Cause that's pretty much the alternative.

  • PLAPLA The process.Registered User regular
    A "moral" judgment or requirement is about what is right or wrong, good and bad, all things considered. Other judgments (like "that painting is pretty" or "this burrito tastes bad") are about other things.

    I can't intuit a difference, and have generally been unconvinced by explanations of what it is.

  • Craw!Craw! Registered User
    edited August 2012
    Craw! wrote: »
    Craw! wrote: »
    Non-human animal experimentation is like this except instead of babies we use rats. Can we justify using rats instead of babies? I argue that we cannot. There is no good line to be drawn that is not arbitrary and based on blind unthinking prejudice.

    Where do you draw the line, TychoCelchuuu? Is every organism that has at least one neuron a no-go? Are isolated neurons a no-go?
    The line is ability to feel pain. I don't think anyone thinks that neurons feel anything.

    I could say that saying neurons don't feel anything would be like saying our brains don't feel anything but I'll assume that you meant single neurons. Well, again, where do you draw the line? What counts as feeling pain and how do you know that that is what is going on?
    Tough question! It's not really necessary to answer this as long as you agree that some things, like human beings and dogs, feel pain, because then at least you will agree that for something like a dog, we ought not to use it in an experiment where we wouldn't use a baby. If you don't think that human beings or dogs feel pain, then feel free to stab yourself and a dog with a fork until you've gotten the picture.

    I was asking you where you draw the line, and yes, it is necessary to answer, because otherwise you'll just end up with some OTHER "arbitrary line" that you happen to like (and that runs the risk of shifting arbitrarily, too). I never said anything about dogs. You've been switching back and forth a bit, but you've sometimes said that all non-human animals' pain should be seen as just as important as humans' pain. Do all non-human animals feel pain, though? What about these fellas? http://en.wikipedia.org/wiki/Placozoa

    Edit: Actually here, can you point out on this phylogenetic tree what organisms should count as not feeling pain and so being okay to experiment on for our own gain? http://138.26.40.199/Metazoantree.jpg

    Craw! on
  • MrMisterMrMister A pup must first get in the water to be successful as a seal!Registered User regular
    MrMister wrote: »
    MrMister wrote: »
    Maximizing pleasure and minimizing pain in humans. Utilitarian morality, like any moral system that doesn't pre-suppose some universal, inherent moral code, is ultimately either selfish or about humans. We can maximize utility based on its impact on us, individually and personally, or based on its impact on humanity as a whole, largely dependent upon whether we think that the species' ability to prosper is a driving goal. In either case, avoiding unnecessary harm to humans is an easily defensible utilitarian moral position.

    Two comments.

    1) Your use of 'utilitarianism' here is pretty non-standard. Utilitarianism is usually taken to be the view that the ultimate good is happiness, which is tightly identified with maximal pleasure and minimal pain, and that actions should be taken in accord with how much either they, or the principles of choice from which they flow, serve to maximize happiness. Of the classical Utilitarians, Bentham saw no difference between humans and animals on this score:
    The French have already discovered that the blackness of skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may come one day to be recognized, that the number of legs, the villosity of the skin, or the termination of the os sacrum, are reasons equally insufficient for abandoning a sensitive being to the same fate... the question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?... The time will come when humanity will extend its mantle over everything which breathes.

    John Stuart Mill, his protege, thought that there was a difference between humans and animals on this score, but that it had to do not with what they deserved but rather with the sorts of pleasures and pains to which they were susceptible. Pigs cannot, for instance, get any pleasure out of the sublime beauty of the works of ancient philosophy. But he took this to be irrelevant to the fact that what pleasures and pains they are capable of are morally important in just the same way as those same pleasures and pains are when they occur in humans. Furthermore, all classical Utilitarians, and everyone following in the tradition, have put forth the requirements of the Utilitarian calculus as fully impartial: in Bentham's words it requires "each to count for one, and none for more than one." So Utilitarianism, in standard use, is actually an example of a moral theory which is neither selfish nor about humans.

    In standard philosophical use, but as it applies to experimental and industrial ethics? Humanity has never, on average, given much of a shit about anything that isn't human. Bentham may have believed that there will come a day when we consider our treatment of rats to be equivalent to the 19th century treatment of blacks and Asians, but to what benefit? Why should all things that breathe be included? For that matter, why should things that don't breathe be excluded? There are some plants with stimulus response systems more complex than some animals; shouldn't we include them? Without significantly more information than modern science possesses, any distinction we choose to make is going to be horribly arbitrary. And even with more knowledge than we have now, any distinction that doesn't include every system in the universe--organic or otherwise, living or otherwise--which has states that may be divided into some binary set of more and less prosperous is, ultimately, arbitrary.

    I have a preference for humanity. Bentham has a preference for things that breathe. There are folks out there who think picking fruit off of plants is immoral because it harms the plant; they apparently have a preference for organic systems. Is a star a less complex system than a rat? A rat deprived of food will starve, and this deprivation is immoral because feeding the rat would put it in a more pleasurable state. Is it then not our moral imperative to ensure the sun's supply of hydrogen? What makes one hideously complex physical system more or less important to moral consideration?
    MrMister wrote: »
    2) The 'why' question you ask here--why include animals?--can just as well be asked about the fundamental principles of any moral system. The Kantians think they have a non-question-begging answer, but I (and many others) think that it is not any good. Instead, it seems that there may be some primitives in this area of discourse which are not given much substantive further explanation and defense. And this is so in all sorts of places--try giving a substantive definition of truth, for instance; even the barest logic must appeal to some notions that are not themselves further argued or explained. So it's not obvious that the why-question cannot be met with "that's simply how it is."

    The why question appears, to me at least, to be the only relevant one here. We can extend moral consideration to any degree, to the extent that essentially any action any human takes could be considered immoral on the basis of its impact on some species of animal, some species of plant, or some equivalently complex non-living system. That extension makes the framework fundamentally useless. It can no longer tell us anything useful about morality because every action aside from submitting to entropy is, in some fashion, immoral.

    Humanity, on average, wants to survive and to thrive. Human pleasure, as a sum over all humans, increases as the prosperity of our species increases. There is abundant historical evidence that subjugating and mistreating any fraction of the human population of a region is objectively worse, over a long enough time scale, for the entire population of the region than to do otherwise. There is no equivalent evidence with regard to animals. Certainly there are limited classes of animals whose survival and prosperity have an influence on that of humanity, either because they provide some resource on which we are dependent or because we've integrated them into our society in such a way that their displeasure affects those humans around them in the same way that human displeasure would. But, by and large, being mean to other living things has no significant impact on humanity.

    No species aside from humanity, and those few animals we've altered through the domestication process to the extent that they now believe humans to be part of their own species' social order, gives a shit about any species aside from its own. Any attempt we make to ensure the pleasure of a non-human species is already rising well above the biological norm. To the extent that there can be a reason for "why should we include non-human, non-sapient creatures in our moral framework", the only obvious one is "because it makes us feel good to do so". In a human-pleasure-maximization calculus, being nice to animals is good. Being nice to plants is good too, for that matter, as is being nice to many non-living systems. And that's the only calculus that I see in evidence over the course of human history.

    We can devise any arbitrary philosophical system of morality we'd like to, but when it comes down to making decisions about what sort of medical and biological research to perform, what kinds of food to farm, and what impact on the environment to allow, pragmatic concerns about the future of the human species have to win out. Until the point where humanity, on average, cares about something other than its own survival more than it cares about continuing to exist, our decisions must be, at most, a selection from among those options which most benefit humanity while least impacting everything else in the universe.

    A moral framework which judges the pleasure of rats equivalent to the pleasure of humans sounds nice but, so long as our species' biological imperative is to survive, it has no business determining whether or not we perform potentially human-life-saving research. Treating the research animals as nicely as possible within the constraints of the research is both scientifically valuable and, via humans' empathetic response, pleasant, but I see no logical reason that we should, as a rule, consider the pleasure of non-humans equally important to our own. Their pleasure, unlike ours, is largely irrelevant to our survival, and survival is the closest thing to a uniform imperative that our species has.

    I don't agree with most of what SKFM has had to say, but I do agree with him that our actions, and indeed our survival, are not inherently important to the universe in any way. Our actions have no inherent import to the world around us. But our survival is inherently important to us. Being moral carries no such intrinsic weight, except within such moral frameworks as support our long-term prosperity as a species. I can't see any logical reason, therefore, to select a moral framework on which to base our ethical considerations in which the prosperity of the human species isn't foremost.

    Nothing would necessarily even change were we to one day locate or create another sapient species. I imagine that, in most cases, it would be in our own long-term best interest to treat them decently. Perhaps we would even find that the mistreatment of other sapient beings has the same effect on human societies that the mistreatment of minorities has; I don't know.

    Two remarks.

    1) You say that it's arbitrary to care about pain, and the functional systems capable of feeling it, without caring about the well-being of other functional systems--like stars, rivers, and lakes and so on. But this is not so arbitrary as you make it out to be. For instance, who's to say that it's better for a star, for its own sake, to keep on burning, rather than to go out? I cannot see why you would pick one rather than the other. Similarly, although we sometimes use intentional language to describe things like rivers--it reaches for the sea--I take such language to be strictly analogical. Strictly speaking, rivers do not reach for anything, and there is no answer to the question of whether a river would rather be dammed or not. Creatures which feel pain, however, do have very obvious, and very asymmetrical answers to these sorts of questions. A rat would most definitely rather not be stabbed; it is better for the rat not to be. A capacity for pleasure and pain, it turns out, is a pre-requisite for having interests at all; since rivers lack it, there is no answer to whether they have an interest in being dammed or an interest in being allowed to flow free. So when one tries to take account of everything which is even capable of having interests, the result is that one considers those creatures which can feel pleasure and pain. But that surely is not arbitrary--after all, you're trying to take account of everything with interests!

    Selecting any state as preferred is going to be arbitrary. Is it better for a star to burn or to die? There's no answer, since 'better' has no quantitative meaning. Is it better for a rat to live or die? Is it better to feel pleasure or pain? We, and all self-motivating life-forms, strive for pleasure over pain in whatever fashion we're capable of, but that only makes it better from our own point of view. From a certain point of view our striving could be seen as counter to some conceptual 'good' in the same way that different viewpoints might label a star's consumption of hydrogen good or bad.

    My point with the non-living systems was that 'having interests' is fairly arbitrary. It's easy for humans to ascribe anthropic qualities to any sufficiently complicated system. I bitch about the things my computer 'decides to do' or 'wants to do' on a daily basis. It doesn't actually want or decide anything in a strictly living-being sense, but lacking a good understanding of what it does do, it certainly appears to be making decisions and operating on the basis of desires. We consider that animals have intentions and interests because animals are, to one degree or another, all fairly similar to ourselves. We have those things and we assume that anything that acts in a manner basically similar to ours probably does too. At least, their actions are consistent with such an assumption and we have no reason to assume otherwise.

    But, provided a sufficient understanding, is an insect really interested in anything to a degree greater than Google's YouTube object recognition neural net is 'interested' in cats? What makes one very complex system's behavior more important than another? Why are the stimulus responses of one category of things deemed 'acting in their interests' while another's are deemed 'reacting to the environment'? Is there any way to draw a line between the two that isn't either arbitrary or anthropocentric?
    MrMister wrote: »
    2) You're a little incautious with claims about how we must make decisions. White supremacists do not care about certain subsets of the species and make their decisions accordingly. I, by contrast, do not care about the species per se at all, preferring a more universalistic approach, and also make my decisions accordingly. Am I or the white supremacists part of the relevant 'we?' If yes, then we will surely object that you are not correctly describing what we must do, because we are in fact not doing it.

    I am. Largely because I don't have the time to review and edit drafts of my posts to ensure that everything is couched sufficiently.

    You can design pretty much any arbitrary moral framework and show that it's valid based on some axiom or another, many of which will be self-evidently less valid than others. There's nothing wrong with a utilitarian moral framework where the suffering of all things that can suffer (even limiting this to sufficiently human-like systems that we would consider their condition 'suffering') is on equal footing. The question, and the crux of my argument here, is how do you select one when it comes down to legislating things like medical experimentation?

    A moral framework that equates humans with animals and plants and rivers or whatever isn't going to be terribly useful, from a human-centric, practical point of view. A framework that favors white people above everything else in the universe is going to be, at least initially, pretty awesome for white people and shitty for everything else. So how do we decide where to put the line? Looking back through human history we can see examples, over and over again, of governments that chose to put the line on the side of white people, or Israelites, or Egyptians, or Romans, or Christians, or males, or North Koreans named Kim Jong, or whatever. And every single time we see that putting the line somewhere inside of the human population doesn't end well. Civil unrest, economic distress, war with your neighbors who disagree with where you drew the line. If, then, we put the line such that it is inclusive of at least all of humanity, we can avoid all of that shit. At least when it comes to medical experimentation.

    So what if we put the line further out? If we include, say, chimps and whales. Sure, no problem. It's not a terribly large impact on humanity and what impact there is we can fairly quickly mitigate. But what happens in the converse? If we keep the line such that chimps and whales are on the outside? Does this lead to a situation which is untenable for humanity? I don't see any historical evidence indicating that it does, so, from a practical standpoint, I don't see any reason to select a moral framework on which to base our experimental ethical standards which favors non-humans to any degree beyond that which would create a detrimental impact on humanity as a by-product.

    And, to TychoCelchuuu: it doesn't matter if you give a shit about humans. There is nothing special about humans that makes us worth giving a shit about. But there is no practical foundation on which to construct a system of legislated experimental ethics that does not favor humans. They're our laws. We have to abide by them and they are there, ostensibly, to help our society flourish. A government that actively de-emphasizes the welfare of its human populace in favor of a silent, politically and socially inactive non-human populace isn't going to last long. We aren't talking about what's 'most good' in some abstract fashion, we're talking about what kinds of scientific and medical experiments should be allowed to occur with government sanction. How can that not give a shit about humans?

    You run together two issues in your discussion of non-humans' interests: first, how mental properties are distributed in the natural world, and second, the moral relevance of mental properties. The first question concerns what sorts of things have minds (sufficiently advanced computers? fetuses? beehives?) and how we recognize them as such; the second concerns what to make of that fact, morally speaking. So, for instance, when you ask "What makes one very complex system's behavior more important than another?" you could be asking either of these two questions--"what makes it the case that this complicated system has a mind and this one doesn't?" or "what makes it the case that this system's mind makes it morally more significant than that mindless system?"

    If you intended to suggest that the first question, about which sorts of things have minds, is either in principle unanswerable (there's no way to separate out the minds from the non-minds) or must be universally answered in the positive (everything is part of a system, so everything must have a mind), then your proposal is really quite radical--at least as radical as TychoCelchuuu's suggestion, being pounced upon here as cray-cray, of complete animal liberation. The notion that we cannot tell what has a mind at all makes a lot of trouble for our understanding of and interaction with other people, let alone our pets or hypothetical Martians, and the notion that everything whatsoever has a mind is just hard to believe. But once we accept that, actually, we do have some grasp on which sorts of things have minds and which don't, and that the categories are more or less as we thought (higher animals yes, lower animals maybe, rivers and trees no), then we are in a fine position to say why this complex system's behavior is more important than that one's--this one has a mind.

    On the other hand, if you accept that we have at least a rough grasp of which sort of things have minds, but are asking why we should take that to matter, then I have comparatively less to say. It strikes me as simply obvious that it matters, and I'm not sure if there's any more fundamental principle I could appeal to in defense of it. That the suffering of a mind matters in a way that the destruction of an inert block of matter does not is as close to a primitive truth in ethics as I can imagine.

    Finally, your suggestion that we must include all humans in our moral calculus because to do otherwise leads to social distress has a number of problems. First, it takes a relatively sanguine view of human social organization and history. Slavery was a thriving social institution for thousands of years; it was not slavery, for instance, that brought down the Empires of antiquity. So it does not seem that the social distress you mention is always so threatening. But worse, even if it is conceded that disenfranchising people causes social distress, it's not clear why that always must be a decisive consideration. Many people are willing to endorse societies as right even while acknowledging that their internal structure causes social distress--for instance, extreme libertarians allow that inequalities might cause distress but nonetheless maintain that the just role of the state is not to cure it; or, on the other end, hardcore communists will allow that forced collectivization causes social upheaval but are willing to pay that price in pursuit of a more egalitarian society. Even if these people aren't right on the specifics, they aren't confused on the concepts: the concept of social harmony is distinct from the concept of justice.

  • poshnialloposhniallo Registered User regular
    I am not so sure societies with slavery were 'thriving'. They certainly persisted for quite some time, so they were fairly stable. That doesn't mean they were more stable than societies without slavery - just that they were able to stay stable in a world where everyone had slavery. And certainly stability is not our only metric of 'lack of distress'. I think that viewing societies with strong inequality - slave-owning, apartheid, misogynist etc - as in some way harmonious is merely to ignore the oppressed. It is entirely possible to suffer in silence, and when your people are not allowed access to the materials and methods of recording your history, that silence can be misread by other societies.

    I figure I could take a bear.
  • MrMisterMrMister A pup must first get in the water to be successful as a seal!Registered User regular
    MrMister wrote: »
    @spacekungfuman

    On my phone in an airport, so I must be brief. When you ask what value a moral claim can have, it strikes me as the wrong question. I think some moral claims are simply true, and the practical use to which they can be put is irrelevant to that. A parallel: I have no idea whether the fact that Fermat's Last Theorem is true is useful for anything at all, but it is still a fact either way. Same with the historical fact that I drank sparkling water this morning. Useful to no one, but still a fact as much as any other.

    The question of what moral claims could mean in the absence of motivational impact is, I think, better than what value they could have. There is some conceptual connection between morality and action insofar as it tells us how we ought to behave. But the extent to which it must necessarily motivate us to follow it all on its own is very controversial. Some people do indeed accept the view that it is always necessarily motivating. But even they tend to only hold that to be true for the person who makes the judgment (i.e., I judge I ought to phi, so then I am motivated to phi). But this is compatible with making moral judgments you know no one else will follow.

    Thanks for the reply! I think this all makes sense, but what seems to be missing is an accounting of the mechanism that makes people follow your moral judgement. It seems to me that people will only follow moral requirements to the extent that they accept them (which I think will be based on them being requirements people agree with for their own reasons) or because they are imposed on them by the majority. And so this brings me back to my original question of what additional content something being a "moral" judgement or requirement brings that is not already included in the concept of a judgement or requirement.

    I'm not sure exactly what you mean by the last question, but it sounds interesting. Elaborate?

  • EWomEWom Registered User regular
    Paladin wrote: »
    Quid wrote: »
    Paladin wrote: »
    It is very okay to torture animals for fun; it's called hunting.

    Kill =/= torture. And even though I don't like hunting for sport, even I can accept that the point of it isn't to prolong an animal's pain.

    Edit: And believe me, I'm all for ending any hunting that isn't necessary for a healthy ecosystem.

    there's a small difference between causing pain to an animal because you enjoy causing pain and causing pain to an animal because your favorite activity happens to cause pain to animals

    the point was kind of lost with the excessive dog kicking

    A good hunter kills an animal with less pain than any cow in any slaughterhouse in the country would ever feel. A bad hunter puts the animal on par with a slaughterhouse. Only terrible hunters "torture" any animals, and these people typically aren't hunters; they are poachers.

    Equating hunting with torture shows that you are absolutely ignorant on the subject of hunting.

    Whether they find a life there or not, I think Jupiter should be called an enemy planet.
  • CptHamiltonCptHamilton Registered User regular
    edited August 2012
    MrMister wrote: »
    You run together two issues in your discussion of non-humans' interests: first, how mental properties are distributed in the natural world, and second, the moral relevance of mental properties. The first question concerns what sorts of things have minds (sufficiently advanced computers? fetuses? beehives?) and how we recognize them as such; the second concerns what to make of that fact, morally speaking. So, for instance, when you ask "What makes one very complex system's behavior more important than another?" you could be asking either of these two questions--"what makes it the case that this complicated system has a mind and this one doesn't?" or "what makes it the case that this system's mind makes it morally more significant than that mindless system?"

    If you intended to suggest that the first question, about which sorts of things have minds, is either in principle unanswerable (there's no way to separate out the minds from the non-minds) or must be universally answered in the positive (everything is part of a system, so everything must have a mind), then your proposal is really quite radical--at least as radical as TychoCelchuuu's suggestion, being pounced upon here as cray-cray, of complete animal liberation. The notion that we cannot tell what has a mind at all makes a lot of trouble for our understanding of and interaction with other people, let alone our pets or hypothetical Martians, and the notion that everything whatsoever has a mind is just hard to believe. But once we accept that, actually, we do have some grasp on which sorts of things have minds and which don't, and that the categories are more or less as we thought (higher animals yes, lower animals maybe, rivers and trees no), then we are in a fine position to say why this complex system's behavior is more important than that one's--this one has a mind.

    I actually meant the first, more radical option. I realize it's not mainstream, and may well even be cray-cray, but from a scientific standpoint I feel that the more we learn about the processes underlying what we call a 'mind', the less sense it makes to say that this thing is a mind and that thing is not. Or, possibly, the less sense it makes to state that such a thing as a mind exists at all.

    Clearly minds do exist, but to harken back to the Podly-era epistemological discussions on this board, I think they may exist in the same fashion as, say, the color red or Sherlock Holmes. We can, to a certain degree of fidelity, define what a mind is, but is a mind a thing that exists in the physical universe? Or is it a pattern of properties and phenomena which we have chosen to label 'mind' in much the same way that we choose to label certain assemblages of sticks and reeds 'chair'?

    If we can create a neural network on a computer that follows very specific rules in a very well-defined fashion and is, really, no more or less complex than an enterprise statistics software package, and this network is capable of behaviors that we would traditionally associate with intent and desire, is it now a mind? It has none of the ineffable qualities we tend to associate philosophically with the mind, whether you believe that the mind is strictly ineffable as a matter of course or simply think that the mind arises as more than the sum of the parts of the brain. Is the network not strictly the sum of its parts?

    And yet, in the case of Google's recent research, there is something within the network which is the idea 'cat' but is not something we can point to as a line of code or a specific set of data points. Have we then created a mind? Or have we created a machine which behaves like a philosophical zombie, acting indistinguishably from a thinking (in a very limited fashion) being despite not being one?

    Whichever way you choose to believe -- mind or zombie -- this presents what appears to me a fundamental roadblock in the concept of mind-distinction. If the computer program has a mind then how can we limit minds to only the animal kingdom? If the computer program is a zombie then how can we say with any degree of certainty that a rat, for example, has a mind?

    I can say that I have a mind and, barring some hideous solipsistic pit wherein I worry that perhaps I am, myself, a zombie which behaves as though it believed it had a mind, I can be confident that I'm correct. I don't think it's reaching to then go out on a limb and say that other humans also have minds. There's a solipsistic/zombie argument to contend with, but neither is very useful or interesting and humans are sufficiently similar physical systems that if I am to accept that the mind is a product of the meat, there is no reason that this other person's meat shouldn't produce one as well.

    How similar, then, must another complex system be for me to extend the umbrella of assumed-mind-possession? A chimp? Other primates? A rat? Google's cat-bot? I certainly believe that there is an answer to this question. The mind is a thing, even if it is a Sherlock Holmes sort of existence, and is a thing shared by many systems, even if all those systems are human bodies. I don't doubt that there will come a day in the future when we can authoritatively say, "The human brain is capable of producing a mind. This [insert system noun] is not; it is only capable of stimulus processing."

    That day isn't today. And while I agree that the moral action when assembling a moral framework is to make the broader assumption, I don't think that basing a system of experimental ethics on that framework is a good idea.
    MrMister wrote: »
    On the other hand, if you accept that we have at least a rough grasp of which sort of things have minds, but are asking why we should take that to matter, then I have comparatively less to say. It strikes me as simply obvious that it matters, and I'm not sure if there's any more fundamental principle I could appeal to in defense of it. That the suffering of a mind matters in a way that the destruction of an inert block of matter does not is as close to a primitive truth in ethics as I can imagine.

    Finally, your suggestion that we must include all humans in our moral calculus because to do otherwise leads to social distress has a number of problems. First, it takes a relatively sanguine view of human social organization and history. Slavery was a thriving social institution for thousands of years; it was not slavery, for instance, that brought down the Empires of antiquity. So it does not seem that the social distress you mention is always so threatening. But worse, even if it is conceded that disenfranchising people causes social distress, it's not clear why that always must be a decisive consideration. Many people are willing to endorse societies as right even while acknowledging that their internal structure causes social distress--for instance, extreme libertarians allow that inequalities might cause distress but nonetheless maintain that the just role of the state is not to cure it; or, on the other end, hardcore communists will allow that forced collectivization causes social upheaval but are willing to pay that price in pursuit of a more egalitarian society. Even if these people aren't right on the specifics, they aren't confused on the concepts: the concept of social harmony is distinct from the concept of justice.

    I don't think it's terribly sanguine. Human history has never had an example of a large-scale society that actually lacked social distress. The Romans were pretty prosperous on the backs of slaves (though I imagine their long-term success in that realm had something to do with the relatively positive way that they treated said slaves, compared to some societies), but they were hardly what I'd call stable or harmonious. Their economic and social prosperity was largely dependent upon going to war with their neighbors, whom they dehumanized as barbarians, to continuously expand their empire and access to slave labor and raw materials. It was an inherently untenable social state that eventually declined and dissolved into new social structures for a shit ton of reasons.

    Slavery isn't the only dehumanizing factor. War, caste-subjugation, the impoverishment of one class to fuel another class' prosperity, the exploitation of minority classes for the scientific or economic benefit of the culture... Every human society in history has demonstrated behaviors that dehumanize and segregate some subset of humanity from another. It's always, to one degree or another, us against them. I don't have an example of a society that doesn't do this to point to as a beacon of social stability because it's never happened.

    But! It's relatively easy to look over the course of history and see which societies were the most stable, the most sustainably prosperous, and look at their position on human equality. The Roman and Chinese empires were particularly successful, and, for their times, had fairly progressive social policies. Yes, slavery existed, because pre-industrial societies basically can't grow beyond the level of city-states without a reliance on unpaid labor, but slaves were treated fairly well by comparison to other societies of the time. They went to war with their neighbors (or themselves) all the time, but both empires had policies of integration. Once you'd been conquered, though, you (or at least your children) could become citizens just like anyone else.

    Contrast them with highly xenophobic cultures with high rates of slave abuse, the regular enslavement of their own citizens, etc. Rome didn't conquer half of the ancient world by being giant dicks.

    If we're talking about a system by which to legislate medical experimentation then, in the best interest of our society, which is, after all, the body doing the legislating in the first place, we have to be generally inclusive of all of humanity. We could dehumanize all the colored people, but we don't have to look very far to see what a stupid idea that is.

    What I've said several times now and nobody has addressed (I don't particularly expect you to, MrMister, since I don't think you're on the Free All The Rats boat) is that while we have evidence (however tenuous you might argue it to be) that excluding humans based on race, religion, region of origin, eye color, or whatever is a long-term bad idea, we have no such equivalent evidence that treating non-humans as not part of our society is a problem. And so long as we are talking about what actions a society should take in choosing what sort of scientific and medical experiments to perform, I don't see any compelling argument for why we should pick an option that has even a potential for detrimental impact on said society if choosing the other way does not.

    It may not be the most moral choice, but it is the most pragmatic choice. Determining lethal doses of household detergents by feeding them to homeless people in decreasing amounts until they stop dying is unacceptable because it will detrimentally impact the human community surrounding those homeless people. Doing it on rats is, from a practical standpoint, all but required if we want to actually have those chemicals in production. There's no other way to find out what a lethal dose is, and while mass rat-icide isn't going to make one whit of difference in anyone's life, a single human dying when they inhale fumes from a too-concentrated floor de-greaser will have a wide ripple of detrimental impact on the society that decided it was better to save the rats.

    CptHamilton on
  • PaladinPaladin Registered User regular
    EWom wrote: »
    Paladin wrote: »
    Quid wrote: »
    Paladin wrote: »
    It is very okay to torture animals for fun; it's called hunting.

    Kill =/= torture. And even though I don't like hunting for sport, even I can accept that the point of it isn't to prolong an animal's pain.

    Edit: And believe me, I'm all for ending any hunting that isn't necessary for a healthy ecosystem.

    there's a small difference between causing pain to an animal because you enjoy causing pain and causing pain to an animal because your favorite activity happens to cause pain to animals

    the point was kind of lost with the excessive dog kicking

    A good hunter kills an animal with less pain than any cow in the country in any slaughterhouse would ever feel. A bad hunter puts the animal on par with a slaughterhouse. Only terrible hunters "torture" any animals, and these people typically aren't hunters, they are poachers.

    Equating hunting with torture shows that you are absolutely ignorant on the subject of hunting.

    I am using torture very loosely as defined by applied harm without anesthetic. I do not have opinions about hunting and was using it as a hopefully less inflammatory example of the animal pain topic than dog kicking, which I thought was a tad hyperbolic as a comparison to scientific animal studies.

  • psyck0psyck0 Registered User regular
    Ask any pain specialist in the world whether pain is subjective and they will say yes. I am really surprised there is any debate about this.

    By the way, here is the most widely accepted definition of pain from the international association for the study of pain:
    an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage

    The last bit, "described in terms of such damage", acknowledges that people can feel extremely severe and crippling pain while having very minimal or no tissue damage. Proof positive that it is subjective.
