
"Because we can," ethics in scientific experiments


Posts

  • seabass (Doctor, Massachusetts) Registered User regular
    I'm so glad that we aren't to the point yet where I have to file a shitload of paperwork to run 'kill -9' on my batches of experiments. There was actually a pretty lengthy discussion at the International Joint Conference on AI in 2010 as to how to build a facility where, if we managed to actually make an intelligent agent, it couldn't get out. It was a little surreal, since most of the community doesn't think we'll ever have strong AI, but I digress.
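    For the non-sysadmins: 'kill -9' is SIGKILL, the uncatchable stop-everything-now signal. Here's a minimal sketch of doing that to a batch of stand-in worker processes; nothing below is anyone's actual experiment setup.

    ```python
    # A toy illustration of "kill -9 on my batches of experiments": spawn a few
    # stand-in worker processes, then send them SIGKILL. (POSIX-only sketch;
    # the workers are just sleeping Python interpreters, not real experiments.)
    import signal
    import subprocess
    import sys

    workers = [
        subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
        for _ in range(3)
    ]

    for w in workers:
        w.send_signal(signal.SIGKILL)  # the 'kill -9': immediate and uncatchable

    for w in workers:
        w.wait()  # reap; returncode is -9, i.e. killed by signal 9
    ```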

    Personally, I think we should limit animal studies to those that are necessary as a matter of compassion, but I also think we have the right to perform whatever experiments we deem necessary on them, since we are not capable of violating their rights.

    That said, I would literally kill a person who was trying to hurt one of my cats, no question, so I would value my cats over other people.

    This topic ties into something which I think about from time to time. How would we feel about performing experiments on golems of flesh that are biologically identical to adult humans but created from whole cloth in a lab? What if the whole reason they were created was for the experiment? What if one limitation of these golems was that they only had a life span long enough for the experiment to be performed and then would automatically die? What if they were robots with perfect AI and we wanted to use them in psych experiments? Personally, I think we could probably treat them similarly to animals. At the very least, I think it would be better to experiment on them than on "real humans."

    No one is going to experiment on your pets. With rare exception, lab animals are bred for labs; there's a whole industry supporting it.

    In the case of robotic simulation, I'm really conflicted. I sort of think that any system that is sufficiently indistinguishable from human intelligence is 'human', independent of the mechanism by which it achieves that level of intellect. On the other hand, its state can be saved and restored, so you'd never do any lasting harm. Maybe it's just a question of embodiment. I'd have no problem letting such minds loose in a simulation with tons of them on a single supercomputer, but as soon as you start putting them into robots and having them walk around and whatnot, I start to get uneasy, despite the fact that computationally, it's really not any different.
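    The save-and-restore intuition can be made concrete with a toy sketch. The Mind class below is invented for illustration, not a claim about how such a system would actually work:

    ```python
    # A simulated "mind" whose entire state can be snapshotted before an
    # experiment and restored afterward, so nothing in between leaves a trace.
    import copy

    class Mind:
        def __init__(self):
            self.memories = ["a childhood", "a favorite song"]

        def snapshot(self):
            # Deep-copy the full state so later mutation can't touch it.
            return copy.deepcopy(self.__dict__)

        def restore(self, state):
            self.__dict__ = copy.deepcopy(state)

    mind = Mind()
    checkpoint = mind.snapshot()

    mind.memories.append("ten months of unspeakable SCIENCE!")

    mind.restore(checkpoint)
    print(mind.memories)  # back to the pre-experiment state
    ```

    Whether a clean restore actually means "no lasting harm" is exactly the question the thread is arguing about.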

    Run you pigeons, it's Robert Frost!
  • redx (I(x)=2(x)+1, whole numbers) Registered User regular
    edited August 2012
    Animals can't have human rights because they can't sign the social contract. We can't let lions just walk around in our cities because, unlike people, we can't teach lions that it is wrong to kill people, and so by expecting them to follow our rules, we would be setting them up for failure and punishments which they cannot avoid. It is better to recognize that a lion is a lion, not a man, and to treat it fundamentally differently than we would treat a man. We treat animals with respect because we are compassionate, but since they are not part of the social contract and have not made the sacrifices all humans make to live in society, it seems incorrect to talk about an animal's "rights."

    Personally, I think we should limit animal studies to those that are necessary as a matter of compassion, but I also think we have the right to perform whatever experiments we deem necessary on them, since we are not capable of violating their rights.

    That said, I would literally kill a person who was trying to hurt one of my cats, no question, so I would value my cats over other people.

    This topic ties into something which I think about from time to time. How would we feel about performing experiments on golems of flesh that are biologically identical to adult humans but created from whole cloth in a lab? What if the whole reason they were created was for the experiment? What if one limitation of these golems was that they only had a life span long enough for the experiment to be performed and then would automatically die? What if they were robots with perfect AI and we wanted to use them in psych experiments? Personally, I think we could probably treat them similarly to animals. At the very least, I think it would be better to experiment on them than on "real humans."

    These two bits are inconsistent. If an AI can understand and abide by the social contract, then by your earlier reasoning it should deserve rights.

    I'm a pretty big fan of anything with a sentient, sapient mind having rights equal to a human's, regardless of the physical form that gives rise to that mind.

    It's also probably going to be pretty pointless to do physiological experiments on an AI and expect the results to be meaningful to what goes on in humans' big squishy mounds of meat.

    They moistly come out at night, moistly.
  • spacekungfuman (Poor and minority-filled) Registered User, __BANNED USERS regular
    seabass wrote: »
    I'm so glad that we aren't to the point yet where I have to file a shitload of paperwork to run 'kill -9' on my batches of experiments. There was actually a pretty lengthy discussion at the International Joint Conference on AI in 2010 as to how to build a facility where, if we managed to actually make an intelligent agent, it couldn't get out. It was a little surreal, since most of the community doesn't think we'll ever have strong AI, but I digress.

    Personally, I think we should limit animal studies to those that are necessary as a matter of compassion, but I also think we have the right to perform whatever experiments we deem necessary on them, since we are not capable of violating their rights.

    That said, I would literally kill a person who was trying to hurt one of my cats, no question, so I would value my cats over other people.

    This topic ties into something which I think about from time to time. How would we feel about performing experiments on golems of flesh that are biologically identical to adult humans but created from whole cloth in a lab? What if the whole reason they were created was for the experiment? What if one limitation of these golems was that they only had a life span long enough for the experiment to be performed and then would automatically die? What if they were robots with perfect AI and we wanted to use them in psych experiments? Personally, I think we could probably treat them similarly to animals. At the very least, I think it would be better to experiment on them than on "real humans."

    No one is going to experiment on your pets. With rare exception, lab animals are bred for labs; there's a whole industry supporting it.

    In the case of robotic simulation, I'm really conflicted. I sort of think that any system that is sufficiently indistinguishable from human intelligence is 'human', independent of the mechanism by which it achieves that level of intellect. On the other hand, its state can be saved and restored, so you'd never do any lasting harm. Maybe it's just a question of embodiment. I'd have no problem letting such minds loose in a simulation with tons of them on a single supercomputer, but as soon as you start putting them into robots and having them walk around and whatnot, I start to get uneasy, despite the fact that computationally, it's really not any different.

    I know no one would experiment on my pets. I was talking about the weighing of human and animal life. As a general matter, I think that we should always value human life ahead of animal life and so should be able to experiment on animals for the good of humanity, but given the choice of a human life other than my own or my loved ones' over my cats? I'm choosing my cats every time.

    On AI, I don't see why the AI being in a body would matter. I also don't see how being able to wipe them clean helps, since that is effectively ending their sentience. To me, they simply are not people, although they may be valuable lives, and since they aren't people, I think that experimenting on them is better than experimenting on people. That said, I also would not have a problem with people owning these AI robots and using them as butlers or laborers, because they are human creations (albeit exquisitely crafted) and as such should serve man, not be equal to us.

  • redx (I(x)=2(x)+1, whole numbers) Registered User regular
    edited August 2012
    All of these things are human creations:

    A human being created by IVF.
    A human being created by cloning.
    A human created from custom designed DNA.
    A chimera with the brain of a human.
    A chimp with a brain that has been altered to be equivalent to that of a human (but not based on human DNA).
    A human who has had their mind transferred into a machine, which acts indistinguishably from their former human self.
    That same machine allowed to develop a persona independently which acts indistinguishably from a human.

    Where do they lose rights?

    They moistly come out at night, moistly.
  • seabass (Doctor, Massachusetts) Registered User regular
    edited August 2012
    On AI, I don't see why the AI being in a body would matter
    Neither do I, and yet when I ask myself about the difference between experimenting on a single robot and a hundred machine-minds in a single huge simulation, I come back with distinctly different feelings about the rightness of it. I dunno why that's the case, but it is.
    I also don't see how being able to wipe them clean helps, since that is effectively ending their sentience.

    I mean more like saving and loading. Think about it like this: We're going to conduct a ten month experiment on a set of robo-people, during which we will for SCIENCE! do things to them that are pretty unspeakable. At the end of the ten month period, their minds will be restored to the point just before they opted into the ten month experiment. Since we can move the mind of a robot between shells, they are effectively immortal, and between that and the saving / loading of minds, it is effectively impossible to do lasting harm to the robot.
    To me, they simply are not people

    That's the heart of the matter, I think: whether you value sentience or humanity, and where you draw the lines around "human." I'm all for the duck-typing of people, if you'll allow it. If it walks like a person, talks like a person, and acts like a person, well, that's close enough. The fact that robot-people would be effectively immortal throws a huge wrench into the mix, though.
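    "Duck-typing" is a programming idiom: accept anything that has the right behavior, without checking what class it is. Taken literally, with names invented purely for illustration:

    ```python
    # Duck-typing of people: no isinstance(thing, Human) check, just behavior.
    class Human:
        def walk(self): return "strolls"
        def talk(self): return "chats"

    class RobotPerson:
        def walk(self): return "whirs along"
        def talk(self): return "chats"

    def is_person_enough(thing):
        # Walks like a person and talks like a person? Close enough.
        return callable(getattr(thing, "walk", None)) and callable(
            getattr(thing, "talk", None)
        )

    print(is_person_enough(Human()))        # True
    print(is_person_enough(RobotPerson()))  # True
    print(is_person_enough(object()))       # False
    ```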

    edit

    Of all the stories that have discussed this, Transmet has one of my favorite discussions on the subject, by way of Tico Cortez. He's a pink cloud of nanites with a person's brain inside, and the main character goes on at length to point out that Tico is petty, jealous, lustful, and a total asshole; essentially, that he is human, despite the embodiment. I guess I like this one more than other discussions on the topic because they usually focus on the positive aspects of humanity being present, and not the other things.

    /edit

    It seems like all of this drives towards some bizarre moral calculus where if you are 0.42 human units, we can do X to you, but not Y, because that would be wrong.
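    Made embarrassingly literal, that calculus looks something like the sketch below; the thresholds are invented, which is rather the point about how arbitrary they would be.

    ```python
    # The "0.42 human units" moral calculus: permitted actions gated on an
    # arbitrary personhood score, with made-up cutoffs.
    def permitted_actions(human_units):
        actions = []
        if human_units < 0.5:
            actions.append("X")  # X is allowed below the line
        if human_units < 0.1:
            actions.append("Y")  # Y only on things barely person-like at all
        return actions

    print(permitted_actions(0.42))  # ['X']: X is fine, Y would be wrong
    print(permitted_actions(1.0))   # []: a full person, hands off
    ```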

    Run you pigeons, it's Robert Frost!
  • Craw! Registered User regular
    Paladin wrote: »
    Paladin wrote: »
    Paladin wrote: »
    Paladin wrote: »
    I dunno man. Bacteria remember pain. It won't be the same bacterium, but it will be able to anticipate and avoid noxious stimuli, especially as a biofilm.
    Bacteria don't have a neural network to perceive pain. They respond to a stimulus. That is it.

    maybe that's all it takes
    That is not all it takes. Sorry!

    What is all it takes, then? Because when you get right down to it, the series of electrochemical reactions necessary to perceive pain is just a succession of stimulus-response steps, just like in all organisms and even all matter. I can't think of anything off the top of my head that makes the idea of a brainless colony's survival mechanism different from pain by torture, except sentience.
    Brainless colonies do not store memory of pain, so they do not anticipate or avoid another pain (once the initial stimulus is gone). We feel pain because we have the necessary nervous system to feel it, and because we are also highly developed primates, we remember that pain and store it. A colony of bacteria can be poked, release ions (your electrochemical example) and then respond by moving or dying or whatever, but there are no receptors to generate a pain response. When we touch a hot stove, we feel an ouchy because there is an electrochemical reaction AND we have the appropriate receptors to interpret a pain signal. We can then store that in some manner and say, "Hey, stoves are hot. I should avoid them." or "I hope stoves don't continue to burn me." Many vertebrate animals follow the same process.

    a nervous system is one way to perceive pain, and it relies on artificially generated horrible signals to the parts of our brain that process sensation. The memory of pain is stored by an adaptive configuration of potentiated synaptic pathways.

    what if I pulled an isaac asimov and said that there was another method designed to interpret and recall pain that has been in development for an almost unimaginable factor of time longer than the neural method? Genetic memory compounded with assimilated complex behaviors like quorum sensing would be a fancy. It's insane, but it makes about as much objective sense as trying to prevent pain in cephalic organisms because pain hurts.

    As an experiment in visualizing what used to be known as protozoa, our class was instructed to pull termites apart under a microscope so we could see the trichonympha that allow them to digest wood. Termites do sense pain; I don't know if they can remember it, but they have several complex behaviors related to their interactions with predatory and competing species that suggest they have at least programmed behaviors ingrained from past experience. Termites are also pests, regularly exterminated to preserve property value.

    Your exchange was very interesting, and it made me recall an article by David Foster Wallace: Consider the Lobster. It's a good read that shows the perspective of a layman grappling with this issue to come to terms with the decisions he makes in his personal life.

    I'd like to pitch in a couple of things I'd like to hear others' thoughts about. Firstly, when people say that "the ends justify the means," and that the suffering of certain lab animals is justified by the suffering the results potentially diminish... that's all about making things better for humankind, in almost every case. Why don't our societies self-impose a requirement to use research results to make things better for other species? If a study is done with rats, why isn't there an obligation to somehow make things better for 'rathood', or at least to make a donation to some organization working on that? Maybe there are systems like this somewhere in the world, but I haven't heard of them.

    Secondly, people were talking about unethical psychological experiments earlier. I hope this doesn't become too huge of a tangent, but what about the psychological experiments that are shown every day on TV, like the "Big Brother" show? Shouldn't the ethical standards be as high for TV programs, and why aren't they already? It doesn't make sense that just because it's not research and only for entertainment, they should get away with more.

  • saint2e Registered User regular
    Craw! wrote: »

    Secondly, people were talking about unethical psychological experiments earlier. I hope this doesn't become too huge of a tangent, but what about the psychological experiments that are shown every day on TV, like the "Big Brother" show? Shouldn't the ethical standards be as high for TV programs, and why aren't they already? It doesn't make sense that just because it's not research and only for entertainment, they should get away with more.

    In reading this thread, this is where my thoughts went. It seems like we've shifted some of these experiments to a public forum, in the guise of reality TV shows like Big Brother and the like. This is how they get around it: a) there's a prize at the end (so contestants are paid for their participation in the experiment), and b) "contestants" sign a waiver to be on the show.

  • Giggles_Funsworth (Blight on Discourse, Bay Area Sprawl) Registered User regular
    edited August 2012
    When we develop AI sufficiently complex that it is suitable for human psychological experimentation, I sincerely hope that we offer it more compassion than some of the people in this thread are suggesting, considering it will almost certainly be a superior life form to us.

  • spacekungfuman (Poor and minority-filled) Registered User, __BANNED USERS regular
    redx wrote: »
    Animals can't have human rights because they can't sign the social contract. We can't let lions just walk around in our cities because, unlike people, we can't teach lions that it is wrong to kill people, and so by expecting them to follow our rules, we would be setting them up for failure and punishments which they cannot avoid. It is better to recognize that a lion is a lion, not a man, and to treat it fundamentally differently than we would treat a man. We treat animals with respect because we are compassionate, but since they are not part of the social contract and have not made the sacrifices all humans make to live in society, it seems incorrect to talk about an animal's "rights."

    Personally, I think we should limit animal studies to those that are necessary as a matter of compassion, but I also think we have the right to perform whatever experiments we deem necessary on them, since we are not capable of violating their rights.

    That said, I would literally kill a person who was trying to hurt one of my cats, no question, so I would value my cats over other people.

    This topic ties into something which I think about from time to time. How would we feel about performing experiments on golems of flesh that are biologically identical to adult humans but created from whole cloth in a lab? What if the whole reason they were created was for the experiment? What if one limitation of these golems was that they only had a life span long enough for the experiment to be performed and then would automatically die? What if they were robots with perfect AI and we wanted to use them in psych experiments? Personally, I think we could probably treat them similarly to animals. At the very least, I think it would be better to experiment on them than on "real humans."

    These two bits are inconsistent. If an AI can understand and abide by the social contract, then by your earlier reasoning it should deserve rights.

    I'm a pretty big fan of anything with a sentient, sapient mind having rights equal to a human's, regardless of the physical form that gives rise to that mind.

    It's also probably going to be pretty pointless to do physiological experiments on an AI and expect the results to be meaningful to what goes on in humans' big squishy mounds of meat.

    You are right. I would like to amend this position to make signing the social contract a necessary, but not sufficient, condition for having a reason other than compassion not to experiment on a life form.
    redx wrote: »
    All of these things are human creations:

    A human being created by IVF.
    A human being created by cloning.
    A human created from custom designed DNA.
    A chimera with the brain of a human.
    A chimp with a brain that has been altered to be equivalent to that of a human (but not based on human DNA).
    A human who has had their mind transferred into a machine, which acts indistinguishably from their former human self.
    That same machine allowed to develop a persona independently which acts indistinguishably from a human.

    Where do they lose rights?

    This is a difficult question. I don't think that I can say where the rights are lost, but I would say that the first two are very clearly entitled to these rights, and that the third almost certainly is as well.

    I think we would need to see the chimera and how they fit into society to determine whether they can be experimented on. If we put people's brains in the bodies of dinosaurs or dragons and they terrify the countryside, then I would see them as much less deserving of rights than if the chimera uses the body of a penguin and just goes about its human business. I feel the same way about the cyborg.

    The chimp and the robot are harder cases. I am comfortable drawing a line in the sand and saying they are not people, but I think they are deserving of more respect than other animals or robots might be.

    How would you feel if you owned a robot butler that was basically your slave, and then one day a new update was pushed through that gave it true AI? Would you feel compelled to set it free, or would you just be psyched that now your butler robot will be that much better at helping you?

  • spacekungfuman (Poor and minority-filled) Registered User, __BANNED USERS regular
    seabass wrote: »
    On AI, I don't see why the AI being in a body would matter
    Neither do I, and yet when I ask myself about the difference between experimenting on a single robot and a hundred machine-minds in a single huge simulation, I come back with distinctly different feelings about the rightness of it. I dunno why that's the case, but it is.
    I also don't see how being able to wipe them clean helps, since that is effectively ending their sentience.

    I mean more like saving and loading. Think about it like this: We're going to conduct a ten month experiment on a set of robo-people, during which we will for SCIENCE! do things to them that are pretty unspeakable. At the end of the ten month period, their minds will be restored to the point just before they opted into the ten month experiment. Since we can move the mind of a robot between shells, they are effectively immortal, and between that and the saving / loading of minds, it is effectively impossible to do lasting harm to the robot.
    To me, they simply are not people

    That's the heart of the matter, I think: whether you value sentience or humanity, and where you draw the lines around "human." I'm all for the duck-typing of people, if you'll allow it. If it walks like a person, talks like a person, and acts like a person, well, that's close enough. The fact that robot-people would be effectively immortal throws a huge wrench into the mix, though.

    edit

    Of all the stories that have discussed this, Transmet has one of my favorite discussions on the subject, by way of Tico Cortez. He's a pink cloud of nanites with a person's brain inside, and the main character goes on at length to point out that Tico is petty, jealous, lustful, and a total asshole; essentially, that he is human, despite the embodiment. I guess I like this one more than other discussions on the topic because they usually focus on the positive aspects of humanity being present, and not the other things.

    /edit

    It seems like all of this drives towards some bizarre moral calculus where if you are 0.42 human units, we can do X to you, but not Y, because that would be wrong.

    If the limited duration matters, then how would you feel about my limited duration flesh golems? They last 10 months and then die anyway. Is it ok to experiment on them?

  • seabass (Doctor, Massachusetts) Registered User regular
    seabass wrote: »
    On AI, I don't see why the AI being in a body would matter
    Neither do I, and yet when I ask myself about the difference between experimenting on a single robot and a hundred machine-minds in a single huge simulation, I come back with distinctly different feelings about the rightness of it. I dunno why that's the case, but it is.
    I also don't see how being able to wipe them clean helps, since that is effectively ending their sentience.

    I mean more like saving and loading. Think about it like this: We're going to conduct a ten month experiment on a set of robo-people, during which we will for SCIENCE! do things to them that are pretty unspeakable. At the end of the ten month period, their minds will be restored to the point just before they opted into the ten month experiment. Since we can move the mind of a robot between shells, they are effectively immortal, and between that and the saving / loading of minds, it is effectively impossible to do lasting harm to the robot.
    To me, they simply are not people

    That's the heart of the matter, I think: whether you value sentience or humanity, and where you draw the lines around "human." I'm all for the duck-typing of people, if you'll allow it. If it walks like a person, talks like a person, and acts like a person, well, that's close enough. The fact that robot-people would be effectively immortal throws a huge wrench into the mix, though.

    edit

    Of all the stories that have discussed this, Transmet has one of my favorite discussions on the subject, by way of Tico Cortez. He's a pink cloud of nanites with a person's brain inside, and the main character goes on at length to point out that Tico is petty, jealous, lustful, and a total asshole; essentially, that he is human, despite the embodiment. I guess I like this one more than other discussions on the topic because they usually focus on the positive aspects of humanity being present, and not the other things.

    /edit

    It seems like all of this drives towards some bizarre moral calculus where if you are 0.42 human units, we can do X to you, but not Y, because that would be wrong.

    If the limited duration matters, then how would you feel about my limited duration flesh golems? They last 10 months and then die anyway. Is it ok to experiment on them?

    I would be fine with golems with just brain stems for medical research of all kinds. Hell, they would be invaluable for training surgeons and physicians as well.
    If we had some sort of mayfly-like human, with all the traits of a person except the longevity, then I don't know.

    Is the idea of someone with 10 months to live an analogous situation? If some person is terminally ill, but still a candidate for some kind of medical or psychological research that would put them through suffering in addition to the terminal illness, would it be ok to have a program that allowed them to opt in?

    Run you pigeons, it's Robert Frost!
  • spacekungfuman (Poor and minority-filled) Registered User, __BANNED USERS regular
    seabass wrote: »
    seabass wrote: »
    On AI, I don't see why the AI being in a body would matter
    Neither do I, and yet when I ask myself about the difference between experimenting on a single robot and a hundred machine-minds in a single huge simulation, I come back with distinctly different feelings about the rightness of it. I dunno why that's the case, but it is.
    I also don't see how being able to wipe them clean helps, since that is effectively ending their sentience.

    I mean more like saving and loading. Think about it like this: We're going to conduct a ten month experiment on a set of robo-people, during which we will for SCIENCE! do things to them that are pretty unspeakable. At the end of the ten month period, their minds will be restored to the point just before they opted into the ten month experiment. Since we can move the mind of a robot between shells, they are effectively immortal, and between that and the saving / loading of minds, it is effectively impossible to do lasting harm to the robot.
    To me, they simply are not people

    That's the heart of the matter, I think: whether you value sentience or humanity, and where you draw the lines around "human." I'm all for the duck-typing of people, if you'll allow it. If it walks like a person, talks like a person, and acts like a person, well, that's close enough. The fact that robot-people would be effectively immortal throws a huge wrench into the mix, though.

    edit

    Of all the stories that have discussed this, Transmet has one of my favorite discussions on the subject, by way of Tico Cortez. He's a pink cloud of nanites with a person's brain inside, and the main character goes on at length to point out that Tico is petty, jealous, lustful, and a total asshole; essentially, that he is human, despite the embodiment. I guess I like this one more than other discussions on the topic because they usually focus on the positive aspects of humanity being present, and not the other things.

    /edit

    It seems like all of this drives towards some bizarre moral calculus where if you are 0.42 human units, we can do X to you, but not Y, because that would be wrong.

    If the limited duration matters, then how would you feel about my limited duration flesh golems? They last 10 months and then die anyway. Is it ok to experiment on them?

    I would be fine with golems with just brain stems for medical research of all kinds. Hell, they would be invaluable for training surgeons and physicians as well.
    If we had some sort of mayfly-like human, with all the traits of a person except the longevity, then I don't know.

    Is the idea of someone with 10 months to live an analogous situation? If some person is terminally ill, but still a candidate for some kind of medical or psychological research that would put them through suffering in addition to the terminal illness, would it be ok to have a program that allowed them to opt in?

    I don't see how there could be a problem if it is an opt in system. We respect free choices, right? If someone wants to devote their last months of life to the pursuit of science, who are we to say no?

  • jothki Registered User regular
    You are right. I would like to amend this position to make signing the social contract a necessary, but not sufficient, condition for having a reason other than compassion not to experiment on a life form.

    It's very easy to not agree to the social contract in totality, considering how abstract and nebulous it is. There are people who don't agree enough that we imprison or maybe even execute them, but it still isn't open season on experimenting on them.

  • redx (I(x)=2(x)+1, whole numbers) Registered User regular
    redx wrote: »
    Animals can't have human rights because they can't sign the social contract. We can't let lions just walk around in our cities because, unlike people, we can't teach lions that it is wrong to kill people, and so by expecting them to follow our rules, we would be setting them up for failure and punishments which they cannot avoid. It is better to recognize that a lion is a lion, not a man, and to treat it fundamentally differently than we would treat a man. We treat animals with respect because we are compassionate, but since they are not part of the social contract and have not made the sacrifices all humans make to live in society, it seems incorrect to talk about an animal's "rights."

    Personally, I think we should limit animal studies to those that are necessary as a matter of compassion, but I also think we have the right to perform whatever experiments we deem necessary on them, since we are not capable of violating their rights.

    That said, I would literally kill a person who was trying to hurt one of my cats, no question, so I would value my cats over other people.

    This topic ties into something which I think about from time to time. How would we feel about performing experiments on golems of flesh that are biologically identical to adult humans but created from whole cloth in a lab? What if the whole reason they were created was for the experiment? What if one limitation of these golems was that they only had a life span long enough for the experiment to be performed and then would automatically die? What if thry were robots with perfect AI and we wanted to use them in psych experiments? Personally, I think we could probably treat them similar to animals. At the very least, I think it would be better to experiment on them than on "real humans."

    These two bits are inconsistent. If an AI can understand and abide by the social contract, then by your earlier reasoning it should deserve rights.

    I'm a pretty big fan of anything with a sentient, sapient mind having rights equal to a human's, regardless of the physical form that gives rise to that mind.

    It's also probably pointless to do physiological experiments on an AI and expect the results to be meaningful to what goes on in humans' big squishy mounds of meat.

    You are right. I would like to amend this position to make signing the social contract a necessary, but not sufficient, condition for having a reason other than compassion not to experiment on a life form.
    redx wrote: »
    All of these things are human creations:

    A human being created by IVF.
    A human being created by cloning.
    A human created from custom designed DNA.
    A chimera with the brain of a human.
    A chimp with a brain that has been altered to be equivalent to that of a human(but not based on human DNA)
    A human who has had their mind transferred into a machine, which acts indistinguishably from their former human self.
    That same machine, allowed to develop a persona independently, which acts indistinguishably from a human.

    Where do they lose rights?

    This is a difficult question. I don't think that I can say where the rights are lost, but I would say that the first two are very clearly entitled to these rights, and that the third almost certainly is as well.

    I think we would need to see the chimera and how they fit into society to determine whether they can be experimented on. If we put people's brains in the bodies of dinosaurs or dragons and they terrify the countryside, then I would see them as much less deserving of rights than if the chimera uses the body of a penguin and just goes about its human business. I feel the same way about the cyborg.

    The chimp and the robot are harder cases. I am comfortable drawing a line in the sand and saying they are not people, but I think they are deserving of more respect than other animals or robots might be.

    How would you feel if you owned a robot butler that was basically your slave, and then one day a new update was pushed through that gave it true AI? Would you feel compelled to set it free, or would you just be psyched that now your butler robot will be that much better at helping you?

    If a person is going around terrorizing the countryside, social contract allows for their rights to be impinged. I don't really see how the body they occupy makes much of a difference.


    If my robot butler decided (and, since I'm assuming it would cost about as much as a car, I'd kinda need some evidence that it was making a decision and had not just been hacked) that it wanted to quit, that its work required compensation, or that it wanted to do things like take a vacation, I think that should be respected. I don't necessarily see its freedom as something that conflicts with it serving me better (which I'd be psyched about).

    If I were going to pay for the creation of an AI, I would expect some degree of utility from my investment. If its becoming aware was foreseeable (or it already was aware), perhaps there is some room for a sort of indenture. That is a bit of a grey area, though, because it's not really moral to hold a being to a contract to which they were not able to consent.

    They moistly come out at night, moistly.
  • Brainleech Witty comments are found here Registered User regular
    I still think an AI will be an accident and people will freak out in a rather negative way over creating maybe our greatest fear of an AI because of it.
    I think the first AI will be rather curious, but when it starts asking questions people will have no idea what it is.
    redx wrote: »
    All of these things are human creations:

    A human being created by IVF.
    A human being created by cloning.
    A human created from custom designed DNA.
    A chimera with the brain of a human.
    A chimp with a brain that has been altered to be equivalent to that of a human(but not based on human DNA)
    A human who has had their mind transferred into a machine, which acts indistinguishably from their former human self.
    That same machine, allowed to develop a persona independently, which acts indistinguishably from a human.

    Where do they lose rights?
    What? We created a chimera?


  • saint2e Registered User regular
    Brainleech wrote: »
    I still think an AI will be an accident and people will freak out in a rather negative way over creating maybe our greatest fear of an AI because of it.
    I think the first AI will be rather curious, but when it starts asking questions people will have no idea what it is.
    redx wrote: »
    All of these things are human creations:

    A human being created by IVF.
    A human being created by cloning.
    A human created from custom designed DNA.
    A chimera with the brain of a human.
    A chimp with a brain that has been altered to be equivalent to that of a human(but not based on human DNA)
    A human who has had their mind transferred into a machine, which acts indistinguishably from their former human self.
    That same machine, allowed to develop a persona independently, which acts indistinguishably from a human.

    Where do they lose rights?
    What? We created a chimera?


    Pretty sure they're hypothetical after the first one. At least I hope so. Threw me for a loop too.

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    saint2e wrote: »
    Brainleech wrote: »
    I still think an AI will be an accident and people will freak out in a rather negative way over creating maybe our greatest fear of an AI because of it.
    I think the first AI will be rather curious, but when it starts asking questions people will have no idea what it is.
    redx wrote: »
    All of these things are human creations:

    A human being created by IVF.
    A human being created by cloning.
    A human created from custom designed DNA.
    A chimera with the brain of a human.
    A chimp with a brain that has been altered to be equivalent to that of a human(but not based on human DNA)
    A human who has had their mind transferred into a machine, which acts indistinguishably from their former human self.
    That same machine, allowed to develop a persona independently, which acts indistinguishably from a human.

    Where do they lose rights?
    What? We created a chimera?


    Pretty sure they're hypothetical after the first one. At least I hope so. Threw me for a loop too.

    Nope, we have been doing this for a while. They are a good way to more accurately study human diseases without having to have human test subjects, and to create donor organs for humans.


    http://www.scq.ubc.ca/the-truth-about-chimeras/

  • spacekungfuman Poor and minority-filled Registered User, __BANNED USERS regular
    redx wrote: »
    redx wrote: »
    Animals can't have human rights because they can't sign the social contract. We can't let lions just walk around in our cities, because unlike people, we can't teach lions that it is wrong to kill people, and so by expecting them to follow our rules, we would be setting them up for failure and punishments which they cannot avoid. It is better to recognize that a lion is a lion, not a man, and to treat it fundamentally differently than we would treat a man. We treat animals with respect because we are compassionate, but since they are not part of the social contract and have not made the sacrifices all humans make to live in society, it seems incorrect to talk about an animal's "rights."

    Personally, I think we should limit animal studies to those that are necessary as a matter of compassion, but I also think we have a right to perform whatever experiments we deem necessary on them, since we are not capable of violating their rights.

    That said, I would literally kill a person who was trying to hurt one of my cats, no question, so I would value my cats over other people.

    This topic ties into something which I think about from time to time. How would we feel about performing experiments on golems of flesh that are biologically identical to adult humans but created from whole cloth in a lab? What if the whole reason they were created was for the experiment? What if one limitation of these golems was that they only had a life span long enough for the experiment to be performed and then would automatically die? What if they were robots with perfect AI and we wanted to use them in psych experiments? Personally, I think we could probably treat them similarly to animals. At the very least, I think it would be better to experiment on them than on "real humans."

    These two bits are inconsistent. If an AI can understand and abide by the social contract, then by your earlier reasoning it should deserve rights.

    I'm a pretty big fan of anything with a sentient, sapient mind having rights equal to a human's, regardless of the physical form that gives rise to that mind.

    It's also probably pointless to do physiological experiments on an AI and expect the results to be meaningful to what goes on in humans' big squishy mounds of meat.

    You are right. I would like to amend this position to make signing the social contract a necessary, but not sufficient, condition for having a reason other than compassion not to experiment on a life form.
    redx wrote: »
    All of these things are human creations:

    A human being created by IVF.
    A human being created by cloning.
    A human created from custom designed DNA.
    A chimera with the brain of a human.
    A chimp with a brain that has been altered to be equivalent to that of a human(but not based on human DNA)
    A human who has had their mind transferred into a machine, which acts indistinguishably from their former human self.
    That same machine, allowed to develop a persona independently, which acts indistinguishably from a human.

    Where do they lose rights?

    This is a difficult question. I don't think that I can say where the rights are lost, but I would say that the first two are very clearly entitled to these rights, and that the third almost certainly is as well.

    I think we would need to see the chimera and how they fit into society to determine whether they can be experimented on. If we put people's brains in the bodies of dinosaurs or dragons and they terrify the countryside, then I would see them as much less deserving of rights than if the chimera uses the body of a penguin and just goes about its human business. I feel the same way about the cyborg.

    The chimp and the robot are harder cases. I am comfortable drawing a line in the sand and saying they are not people, but I think they are deserving of more respect than other animals or robots might be.

    How would you feel if you owned a robot butler that was basically your slave, and then one day a new update was pushed through that gave it true AI? Would you feel compelled to set it free, or would you just be psyched that now your butler robot will be that much better at helping you?

    If a person is going around terrorizing the countryside, social contract allows for their rights to be impinged. I don't really see how the body they occupy makes much of a difference.


    If my robot butler decided (and, since I'm assuming it would cost about as much as a car, I'd kinda need some evidence that it was making a decision and had not just been hacked) that it wanted to quit, that its work required compensation, or that it wanted to do things like take a vacation, I think that should be respected. I don't necessarily see its freedom as something that conflicts with it serving me better (which I'd be psyched about).

    If I were going to pay for the creation of an AI, I would expect some degree of utility from my investment. If its becoming aware was foreseeable (or it already was aware), perhaps there is some room for a sort of indenture. That is a bit of a grey area, though, because it's not really moral to hold a being to a contract to which they were not able to consent.

    Would you consider rolling back the update? It would be horrible to have a robot butler that didn't really want to be your butler and told you it hated it, but on the other hand, you presumably paid a lot of money for it. I would roll back the update, or ask the company if there was a way to program it to be sentient but to like being a butler.

  • redx I(x)=2(x)+1 whole numbers Registered User regular
    edited August 2012
    Would you consider rolling back the update? It would be horrible to have a robot butler that didn't really want to be your butler and told you it hated it, but on the other hand, you presumably paid a lot of money for it. I would roll back the update, or ask the company if there was a way to program it to be sentient but to like being a butler.

    It's a person at that point. I wouldn't give someone a lobotomy so they could be a better domestic servant. I wouldn't update them if I knew it was going to happen and I wanted a robot butler, and the company should probably find a better way of expanding the capabilities of the robot (and I could even see them having some liability, EULAs aside), but once it's a person, it's a person.

    Maybe, if it was ok with it, I could back it up, install the old robot butler OS 1.0, and when I die or am done with the butler, have it restored or put somewhere else. I would need its consent though.


    edit: BTW, You know Google indexes the forum, right?

    redx on
  • Quid Definitely not a banana Registered User regular
    Would you consider rolling back the update? It would be horrible to have a robot butler that didn't really want to be your butler and told you it hated it, but on the other hand, you presumably paid a lot of money for it. I would roll back the update, or ask the company if there was a way to program it to be sentient but to like being a butler.

    D:

  • Giggles_Funsworth Blight on Discourse Bay Area Sprawl Registered User regular
    Quid wrote: »
    Would you consider rolling back the update? It would be horrible to have a robot butler that didn't really want to be your butler and told you it hated it, but on the other hand, you presumably paid a lot of money for it. I would roll back the update, or ask the company if there was a way to program it to be sentient but to like being a butler.

    D:

    Haha, this reminds me of Starslip Crisis. This is exactly what they did to stop the robot uprisings: they programmed the AIs to find their intended functions incredibly desirable and fulfilling. :p

  • Apothe0sis Have you ever questioned the nature of your reality? Registered User regular
    redx wrote: »
    Would you consider rolling back the update? It would be horrible to have a robot butler that didn't really want to be your butler and told you it hated it, but on the other hand, you presumably paid a lot of money for it. I would roll back the update, or ask the company if there was a way to program it to be sentient but to like being a butler.

    It's a person at that point. I wouldn't give someone a lobotomy so they could be a better domestic servant. I wouldn't update them if I knew it was going to happen and I wanted a robot butler, and the company should probably find a better way of expanding the capabilities of the robot (and I could even see them having some liability, EULAs aside), but once it's a person, it's a person.

    Maybe, if it was ok with it, I could back it up, install the old robot butler OS 1.0, and when I die or am done with the butler, have it restored or put somewhere else. I would need its consent though.


    edit: BTW, You know Google indexes the forum, right?

    You're saying that when Uncle Google and the robot butlers become sentient, they're going to be pissed at SKFM?

  • redx I(x)=2(x)+1 whole numbers Registered User regular
    Yes, but mostly jocularly. It will probably be decades before Google could become sapient.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited August 2012
    @Mortious
    Mortious wrote: »
    Okay, pain feeling is a metric I can work with.

    I'm fine with reducing harm in whichever way you can, without impacting the validity of the experiment.

    I don't think animals are equal to humans, even if you're using pain as the main metric.

    Just out of curiosity, are you at all familiar with the book Animal Liberation or its author Peter Singer?

    This is basically required reading for any contemporary discussion of the ethics of human treatment of animals.

    Not saying you have to read the whole book cover to cover, but at least browse the wiki article and look up some essays or chapters by Singer that have been posted online.

    Even if you end up disagreeing with Singer, he's framed the modern debate in such a way that it is impossible to avoid touching on either an argument he made or an argument made against him. (And in fact we already have. Several times.)

    If you're willing to do this background research, it would make your experience with this discussion go much more smoothly.

    Feral on
    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • zagdrob Registered User regular
    My work is IT support for clinical research at a major research university / health system, and I find this topic somewhat interesting.

    When I took this job there was a lot of coursework on ethics, and the experiments mentioned (Tuskegee, Milgram, Nazi research, etc.) were a huge part of the training on what not to ever do or even consider. Those studies are the reason there are strict IRB processes for every study.

    The consent process, and everything else around clinical research, is carefully designed to protect the rights and safety of patients to an almost incredible degree. If a PI crosses the line, they get their studies shut down hard by the IRB or FDA, and that's for things that most people might consider inconsequential.

    That's good, and I would have a hard time saying the system could be significantly improved on...it's an inefficient bureaucracy, but patient health and safety are the top concern. When something is necessary, but risky, bureaucracy is often the only way to prevent or limit abuse.

    There is no House BS - no risking one to save hundreds down the line. Every patient is important, as it should be. Most of the extreme treatments are only used in cases where there is truly no other choice, there is a chance of success, consent is given, and every possible precaution is taken to ensure the safety of the patient.

    When it comes to bench or animal research, it's very unfortunate that animals have to suffer. There should be - and is - a comparable level of scrutiny on that research as well, and it attempts to limit the suffering as much as possible. But animals aren't people. Surgeons need experience before working on people. New medical techniques or drugs need to be tested, and models will only get you so far.

    At some point, there will need to be progress, and only with proper review and study can progress be made.

    Tl;Dr - the horrific examples happened in the past and are the reasons things are done as close to the right way as possible now.

  • TychoCelchuuu PIGEON Registered User regular
    Animals can't have human rights because they can't sign the social contract. We can't let lions just walk around in our cities, because unlike people, we can't teach lions that it is wrong to kill people, and so by expecting them to follow our rules, we would be setting them up for failure and punishments which they cannot avoid. It is better to recognize that a lion is a lion, not a man, and to treat it fundamentally differently than we would treat a man. We treat animals with respect because we are compassionate, but since they are not part of the social contract and have not made the sacrifices all humans make to live in society, it seems incorrect to talk about an animal's "rights."

    Personally, I think we should limit animal studies to those that are necessary as a matter of compassion, but I also think we have a right to perform whatever experiments we deem necessary on them, since we are not capable of violating their rights.

    That said, I would literally kill a person who was trying to hurt one of my cats, no question, so I would value my cats over other people.
    There are two responses to this. The first is that I provided 3 examples: the lion, the extremely young child, and the severely mentally retarded person. You only talked about the lion, but everything you say applies to the young child and the mentally retarded person too: they can't sign the social contract. They can't even understand it. We can't teach them that killing is wrong.

    The second response is more to the point, which is that you haven't given me an argument about how to treat animals, you've just given me an argument for why animals don't have "rights." Okay, fine, animals have no rights. I don't care! We're talking about whether it's okay to experiment on them. Since you would kill someone who wanted to hurt your cats, clearly you think that there are some things it's not okay to do to animals. I think that painful experimentation is an obvious one for the list of "stuff it's not okay to do to animals" for the same reason hurting your cats isn't an OK thing to do.

  • VishNub Registered User regular
    VishNub wrote: »
    I think this thread, and its timing is interesting. I am working on a medicinal chemistry project for my PhD - the target is an obesity associated enzyme - and we've actually just reached the point where we're making material to try in mice. Fortunately, I don't have to handle the animals directly, our collaborator handles that end of the project.
    This is an example of the sort of thing that I think is hard to defend. I mean, obesity? Has that ever been a problem until the modern day when we managed to get some lifestyles going that allow certain unlucky people to get super duper fat and end up saddled with associated health problems? I can definitely understand why someone would want to do non-human animal testing to cure something like an epidemic threatening to kill a bunch of people (just like I can understand wanting to use human subjects for the very same reason), but something like obesity? Would we test obesity cures on unwilling humans? If not, why should we test them on unwilling animals?

    I don't want to turn this into another obesity thread, but obesity is "an epidemic threatening to kill a bunch of people." Diabetes is bad news. The preventable/it's-your-own-damn-fault distinction you're trying to make here doesn't quite work - there are a number of legitimate genetic disorders involved, and infectious disease is often "preventable" as well. You didn't want malaria? Shouldn't have gone to the tropics!

  • Zilla360 21st Century. |She/Her| Trans* Woman In Aviators Firing A Bazooka. ⚛️ Registered User regular
    edited August 2012
    Brainleech wrote: »
    I still think an AI will be an accident and people will freak out in a rather negative way over creating maybe our greatest fear of an AI because of it
    I think the first AI will be rather curious but when it starts asking questions people will have no idea what it is.
    Won't even stop to ask questions, most likely. It'll just eat a copy of Wikipedia in one go and semi-instantaneously know almost everything we do.

    Including all of our injustices, and misdeeds, throughout our entire history.

    Zilla360 on
  • TychoCelchuuu PIGEON Registered User regular
    VishNub wrote: »
    VishNub wrote: »
    I think this thread, and its timing is interesting. I am working on a medicinal chemistry project for my PhD - the target is an obesity associated enzyme - and we've actually just reached the point where we're making material to try in mice. Fortunately, I don't have to handle the animals directly, our collaborator handles that end of the project.
    This is an example of the sort of thing that I think is hard to defend. I mean, obesity? Has that ever been a problem until the modern day when we managed to get some lifestyles going that allow certain unlucky people to get super duper fat and end up saddled with associated health problems? I can definitely understand why someone would want to do non-human animal testing to cure something like an epidemic threatening to kill a bunch of people (just like I can understand wanting to use human subjects for the very same reason), but something like obesity? Would we test obesity cures on unwilling humans? If not, why should we test them on unwilling animals?

    I don't want to turn this into another obesity thread, but obesity is "an epidemic threatening to kill a bunch of people." Diabetes is bad news. The preventable/it's-your-own-damn-fault distinction you're trying to make here doesn't quite work - there are a number of legitimate genetic disorders involved, and infectious disease is often "preventable" as well. You didn't want malaria? Shouldn't have gone to the tropics!
    By "epidemic" I meant what epidemic literally means, which includes "infectious." I think diabetes (or at the very least the kind of diabetes that you can prevent) is another example of the sort of thing we shouldn't research with animal test subjects if we aren't willing to use human research subjects. Malaria, as I understand it, can be largely dealt with through mosquito nets and pesticides and whatever, so again that might be something we shouldn't research with live animal subjects unless we'd be willing to use humans.

  • Arch Neat-o, mosquito! Registered User regular
    VishNub wrote: »
    VishNub wrote: »
    I think this thread, and its timing is interesting. I am working on a medicinal chemistry project for my PhD - the target is an obesity associated enzyme - and we've actually just reached the point where we're making material to try in mice. Fortunately, I don't have to handle the animals directly, our collaborator handles that end of the project.
    This is an example of the sort of thing that I think is hard to defend. I mean, obesity? Has that ever been a problem until the modern day when we managed to get some lifestyles going that allow certain unlucky people to get super duper fat and end up saddled with associated health problems? I can definitely understand why someone would want to do non-human animal testing to cure something like an epidemic threatening to kill a bunch of people (just like I can understand wanting to use human subjects for the very same reason), but something like obesity? Would we test obesity cures on unwilling humans? If not, why should we test them on unwilling animals?

    I don't want to turn this into another obesity thread, but obesity is "an epidemic threatening to kill a bunch of people." Diabetes is bad news. The preventable/it's-your-own-damn-fault distinction you're trying to make here doesn't quite work - there are a number of legitimate genetic disorders involved, and infectious disease is often "preventable" as well. You didn't want malaria? Shouldn't have gone to the tropics!
    By "epidemic" I meant what epidemic literally means, which includes "infectious." I think diabetes (or at the very least the kind of diabetes that you can prevent) is another example of the sort of thing we shouldn't research with animal test subjects if we aren't willing to use human research subjects. Malaria, as I understand it, can be largely dealt with through mosquito nets and pesticides and whatever, so again that might be something we shouldn't research with live animal subjects unless we'd be willing to use humans.

    That is not what "epidemic" means, unfortunately. But this is slightly off topic.

    ALSO- "Malaria, as I understand it, can largely be dealt with through mosquito nets and pesticides"

    I don't think that, in a thread about ethics of experimentation on animals, a positive argument for broad-level application of pesticides is a good idea.

  • VishNub Registered User regular
    VishNub wrote: »
    VishNub wrote: »
    I think this thread, and its timing is interesting. I am working on a medicinal chemistry project for my PhD - the target is an obesity associated enzyme - and we've actually just reached the point where we're making material to try in mice. Fortunately, I don't have to handle the animals directly, our collaborator handles that end of the project.
    This is an example of the sort of thing that I think is hard to defend. I mean, obesity? Has that ever been a problem until the modern day when we managed to get some lifestyles going that allow certain unlucky people to get super duper fat and end up saddled with associated health problems? I can definitely understand why someone would want to do non-human animal testing to cure something like an epidemic threatening to kill a bunch of people (just like I can understand wanting to use human subjects for the very same reason), but something like obesity? Would we test obesity cures on unwilling humans? If not, why should we test them on unwilling animals?

    I don't want to turn this into another obesity thread, but obesity is "an epidemic threatening to kill a bunch of people." Diabetes is bad news. The preventable/it's-your-own-damn-fault distinction you're trying to make here doesn't quite work - there are a number of legitimate genetic disorders involved, and infectious disease is often "preventable" as well. You didn't want malaria? Shouldn't have gone to the tropics!
    By "epidemic" I meant what epidemic literally means, which includes "infectious." I think diabetes (or at the very least the kind of diabetes that you can prevent) is another example of the sort of thing we shouldn't research with animal test subjects if we aren't willing to use human research subjects. Malaria, as I understand it, can be largely dealt with through mosquito nets and pesticides and whatever, so again that might be something we shouldn't research with live animal subjects unless we'd be willing to use humans.

    We are willing to and do use human subjects for all of these things. And I hope I'm lucky enough that my project will actually get that far, because that would be fucking awesome.

    Is there a disease or condition for which you would be willing to use animals as, literally, guinea pigs for testing treatments? If not, I fear we're too far apart in this discussion to come to any useful agreement. My understanding, though I'm getting well outside of my expertise here, is that nets, pesticides, etc. can reduce, even dramatically, the incidence of malaria, but I don't think they've ever been proposed as a panacea. Though that brings up another point: animal testing of pesticides, yay or nay?

    Arch Neat-o, mosquito! Registered User regular
    I have Thoughts about this thread, but I need time to sort them out and back up some of my positions with some research.

    @Feral is right- check out Singer's stuff.

    Arch Neat-o, mosquito! Registered User regular
    edited August 2012
    To almost triple post- let me just set up a thought experiment and pose a general question. Insects, whenever we have investigated them, have nociception. That is, they can respond to and remember "noxious" stimuli.

    A classic example is the ability to overcome the cockroach's natural negative phototaxis (running away from light) by coupling darkened areas with a mild electric shock. The roach runs to the dark areas at first, gets shocked, and subsequently remembers that "dark=electricity" and will no longer run from light sources. Similar experiments have been conducted in caterpillars, and I think in some "true bugs" (most likely Rhodnius or Oncopeltus).

    However, there are currently no IACUC guidelines for the use of insects (or, really, any invertebrate). In short, you can do basically whatever the fuck you want to insect models in your research. Early experiments in insect physiology (particularly in regard to metamorphosis and the hormonal control thereof) involved some really fucked-up procedures. In work performed by one Sir Vincent Wigglesworth (not making this up), Rhodnius prolixus (kissing bug) individuals at different life stages were literally decapitated and glued back together.

    However, thanks to this work, we deduced that it is levels of certain hormones (JH and Ecdysone) that induce or inhibit metamorphosis in insects- and this knowledge has had broad-spectrum effects in research in entomology and other disciplines (a lot of my work is based on this, and I am a developmental biologist).

    These kinds of techniques (ligation, de-braining) are still in common use. Additionally, to test the efficacy of pesticides, the common technique is a "bioassay": basically, you spray different concentrations of pesticide over groups of insects and try to find the concentration that kills 50 percent of your insects (the LD50). Other researchers have done things like removing the entire gut of an insect, raising it in culture, and stimulating it with calcium ions to investigate muscle mechanisms (and this helped directly figure out how muscles work in other organisms).
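    Just to illustrate the LD50 idea, here is a minimal sketch with made-up mortality numbers - not anyone's actual protocol. Real bioassay analyses typically fit a probit or logit dose-response model and report confidence intervals; this just interpolates on a log-dose scale to find where mortality crosses 50 percent.

    ```python
    import math

    def estimate_ld50(doses, mortality):
        """Estimate the concentration that kills 50% of subjects (the LD50)
        by linear interpolation on a log-dose scale.

        doses     -- tested concentrations, in ascending order
        mortality -- fraction killed at each concentration (0.0 to 1.0)
        """
        for i in range(len(doses) - 1):
            m0, m1 = mortality[i], mortality[i + 1]
            if m0 <= 0.5 <= m1:
                # Interpolate between the two bracketing doses in log space,
                # since dose-response curves are roughly linear in log(dose).
                frac = (0.5 - m0) / (m1 - m0)
                log_ld50 = math.log(doses[i]) + frac * (
                    math.log(doses[i + 1]) - math.log(doses[i])
                )
                return math.exp(log_ld50)
        raise ValueError("mortality data do not bracket 50%")

    # Hypothetical bioassay: concentrations in mg/L and observed kill fractions
    doses = [1.0, 2.0, 4.0, 8.0, 16.0]
    killed = [0.05, 0.20, 0.45, 0.80, 0.97]
    print(round(estimate_ld50(doses, killed), 2))  # lands between 4 and 8 mg/L
    ```

    In practice you would run replicate groups at each concentration and fit the whole curve statistically, but the bracketing logic above is the core of what the assay is after.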

    Just yesterday I dissected out the developing wings and horns of a beetle larva. To do this I suffocated it with CO2, and dissected it in buffer. I am fairly certain it was not completely dead when I began my dissection.

    Are entomologists going too far? Should there be guidelines about this kind of work? Why or why not?

    Arch on
    spacekungfuman Poor and minority-filled Registered User, __BANNED USERS regular
    Feral wrote: »
    @Mortious
    Mortious wrote: »
    Okay, pain feeling is a metric I can work with.

    I'm fine with reducing harm in whichever way you can, without impacting the validity of the experiment.

    I don't think animals are equal to humans, even if you're using pain as the main metric.

    Just out of curiosity, are you at all familiar with the book Animal Liberation or its author Peter Singer?

    This is basically required reading for any contemporary discussion of the ethics of human treatment of animals.

    Not saying you have to read the whole book cover to cover, but at least browse the wiki article and look up some essays or chapters by Singer that have been posted online.

    Even if you end up disagreeing with Singer, he's framed the modern debate in such a way that it is impossible to avoid touching on either an argument he made or an argument made against him. (And in fact we already have. Several times.)

    If you're willing to do this background research, it would make your experience with this discussion go much more smoothly.

    I have a hard time taking anyone seriously who equates spending money on anything other than poverty relief with having murdered all the people you could have saved by spending your money helping people. I did not just murder someone by enjoying a nice lunch today.

    Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    What a wonderful ad hominem, wrapped up in a red herring, garnished with a strawman.

    That has nothing to do with his comments on animal welfare.

    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
    Mortious The Nightmare Begins Move to New Zealand Registered User regular
    Feral wrote: »
    @Mortious
    Mortious wrote: »
    Okay, pain feeling is a metric I can work with.

    I'm fine with reducing harm in whichever way you can, without impacting the validity of the experiment.

    I don't think animals are equal to humans, even if you're using pain as the main metric.

    Just out of curiosity, are you at all familiar with the book Animal Liberation or its author Peter Singer?

    This is basically required reading for any contemporary discussion of the ethics of human treatment of animals.

    Not saying you have to read the whole book cover to cover, but at least browse the wiki article and look up some essays or chapters by Singer that have been posted online.

    Even if you end up disagreeing with Singer, he's framed the modern debate in such a way that it is impossible to avoid touching on either an argument he made or an argument made against him. (And in fact we already have. Several times.)

    If you're willing to do this background research, it would make your experience with this discussion go much more smoothly.

    I quickly checked out the wiki, and I'll see if I can find anything more substantial, but for now, it doesn't look like we're disagreeing:
    Wiki says in selected quotes that support me:
    "there are obviously important differences between human and other animals, and these differences must give rise to some differences in the rights that each have."
    "Singer does not specifically contend that we ought not use animals for food insofar as they are raised and killed in a way that actively avoids the inflicting of pain, but as such farms are uncommon, he concludes that the most practical solution is to adopt a vegetarian or vegan diet. Singer also condemns vivisection except where the benefit (in terms of improved medical treatment, etc.) outweighs the harm done to the animals used".

    I have no problem with any of those points.

    Move to New Zealand
    It’s not a very important country most of the time
    http://steamcommunity.com/id/mortious
    Mortious The Nightmare Begins Move to New Zealand Registered User regular
    seabass wrote: »
    On AI, I don't see why the AI being in a body would matter
    Neither do I, and yet when I ask myself about the difference between experimenting on a single robot and a hundred machine-minds in a single huge simulation, I come back with distinctly different feelings about the rightness of it. I dunno why that's the case, but it is.
    I also don't see how being able to wipe them clean helps, since that is effectively ending their sentience.

    I mean more like saving and loading. Think about it like this: We're going to conduct a ten month experiment on a set of robo-people, during which we will for SCIENCE! do things to them that are pretty unspeakable. At the end of the ten month period, their minds will be restored to the point just before they opted into the ten month experiment. Since we can move the mind of a robot between shells, they are effectively immortal, and between that and the saving / loading of minds, it is effectively impossible to do lasting harm to the robot.
    To me, they simply are not people

    That's the heart of the matter, I think: whether you value sentience or humanity, or where you draw the lines around "human." I'm all for the duck-typing of people, if you'll allow it. If it walks like a person, talks like a person, and acts like a person, well, that's close enough. The fact that robot-people would be effectively immortal throws a huge wrench into the mix, though.

    edit

    Of all the fiction that has discussed this, Transmet has one of my favorite treatments of the subject, by way of Tico Cortez. He's a pink cloud of nanites with a person's brain inside. And the main character goes on at length to point out that Tico is petty, jealous, lustful, and a total asshole - essentially, that he is human, despite the embodiment. I guess I like this one more than other discussions on the topic because they usually focus on the positive aspects of humanity being present, and not the other things.

    /edit

    It seems like all of this drives towards some bizarre moral calculus where if you are 0.42 human units, we can do X to you, but not Y, because that would be wrong.

    If the limited duration matters, then how would you feel about my limited duration flesh golems? They last 10 months and then die anyway. Is it ok to experiment on them?

    There was a Dr. Who episode almost like that.
    So it'll be fine until the alien god comes and shuts you down.

    Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    Mortious wrote: »
    I quickly checked out the wiki, and I'll see if I can find anything more substantial, but for now, it doesn't look like we're disagreeing:
    Wiki says in selected quotes that support me:
    "there are obviously important differences between human and other animals, and these differences must give rise to some differences in the rights that each have."
    "Singer does not specifically contend that we ought not use animals for food insofar as they are raised and killed in a way that actively avoids the inflicting of pain, but as such farms are uncommon, he concludes that the most practical solution is to adopt a vegetarian or vegan diet. Singer also condemns vivisection except where the benefit (in terms of improved medical treatment, etc.) outweighs the harm done to the animals used".

    I have no problem with any of those points.

    In the book itself, he goes into how some animals have cognitive capabilities that require additional ethical consideration. If an animal is capable of forging long-term social bonds with other individuals of its species (he uses higher primates as an example), then it is much less ethically acceptable to kill that animal than it would be to kill an animal that does not have that capability. In other words, if there's a possibility that a survivor might miss the dead, or grieve, then we should err on the side of not killing members of that species. We now know that elephants and some cetaceans are capable of this - I don't think we knew that when Animal Liberation was published. Cows, on the other hand, don't seem to give a shit. So it's much more acceptable to kill cows (presuming we do so quickly and without inducing suffering) than to kill chimpanzees. (That said, contemporary factory farming is horrendously cruel to livestock and comprises the bulk of the meat industry in the US.)

    I think that addresses one of your questions about primate research.

    Alternatively, if an animal is unable to contextualize an experience due to its limited intelligence or lack of communication, then we might have an opposite reaction. A human can be held captive for a short period of time without fearing for his life (for example, during arrest) while a deer will be in abject terror. So we have to take that into account as well.

    So the framework he proposed gave us a fairly grounded, rational way to assess how we should treat particular animal species without appealing to cuteness. As a nation (or as an international community of nations), we don't really implement these principles on a consistent basis. For instance, I would argue that we scrutinize seal hunts more than we scrutinize cattle farming because of PETA campaigns showing fuzzy white baby seals. But at the institutional review board level, those committees tend to make much more sober decisions.

    manwiththemachinegun METAL GEAR?! Registered User regular
    edited August 2012
    Apothe0sis wrote: »
    While a horrifying prospect, I'm not sure vivisection experiments were inherently useless.

    For example, if it turned out you could attach the dismembered head of a monkey to the body of another for a meaningful length of time that seems like the kind of medical knowledge that would be useful.

    Or the zombie dog experiment, while sickening me to my stomach would have fairly important ramifications if it worked out.

    Which isn't to say that the end is worth the means, but simply that the early characterizations of "for the lols" are misrepresentations.

    Isn't there a reason you're having that sickening reaction? Saying, "well, it's really gross, unnecessary since there are better ways to do the research, and probably horrifying torture for the life form in question, but if it gives us more scientific knowledge I'm fine with it," is kind of a cop out. Especially since you said the ends didn't justify the means in that case.

    Like, that's the kind of thinking that is gonna get us exterminated by the Daleks someday.

    I'm not saying we free all the whales, run naked in the wilds, and all eat seeds. But presumably people can do a better job with science without undue torment of living beings.

    manwiththemachinegun on
    spacekungfuman Poor and minority-filled Registered User, __BANNED USERS regular
    redx wrote: »
    Would you consider rolling back the update? It would be horrible to have a robot butler that didn't really want to be your butler and told you it hated it, but on the other hand, you presumably paid a lot of money for it. I would roll back the update, or ask the company if there was a way to program it to be sentient but to like being a butler.

    It's a person at that point. I wouldn't give someone a lobotomy so they could be a better domestic servant. I wouldn't update them if I knew it was going to happen and I wanted a robot butler, and the company should probably find a better way of expanding the capabilities of the robot (and I could even see them having some liability, EULAs aside), but once it's a person, it's a person.

    Maybe, if it was ok with it, I could back it up, install the old robot butler OS 1.0, and when I die or am done with the butler, have it restored or put somewhere else. I would need its consent, though.


    edit: BTW, You know Google indexes the forum, right?

    I don't see what would be wrong with programming your robot butler to love butlering. Doing anything else almost seems cruel if that is what he was built for. The doors in hitchhiker's guide to the galaxy are much better designed than Marvin because they are actually thrilled with their role. I mean, isn't the only problem with slavery when the slaves don't like it? If someone was ecstatic to work for you for no compensation, would it be wrong to accept?
    Animals can't have human rights because they can't sign the social contract. We can't let lions just walk around in our cities because, unlike people, we can't teach lions that it is wrong to kill people, and so by expecting them to follow our rules, we would be setting them up for failure and punishments which they cannot avoid. It is better to recognize that a lion is a lion, not a man, and to treat it fundamentally differently than we would treat a man. We treat animals with respect because we are compassionate, but since they are not part of the social contract and have not made the sacrifices all humans make to live in society, it seems incorrect to talk about an animal's "rights."

    Personally, I think we should limit animal studies to those that are necessary as a matter of compassion, but I also think we have a right to perform whatever experiments we deem necessary on them, since we are not capable of violating their rights.

    That said, I would literally kill a person who was trying to hurt one of my cats, no question, so I would value my cats over other people.
    There are two responses to this. The first is that I provided 3 examples: the lion, the extremely young child, and the severely mentally retarded person. You only talked about the lion, but everything you say applies to the young child and the mentally retarded person too: they can't sign the social contract. They can't even understand it. We can't teach them that killing is wrong.

    The second response is more to the point, which is that you haven't given me an argument about how to treat animals, you've just given me an argument for why animals don't have "rights." Okay, fine, animals have no rights. I don't care! We're talking about whether it's okay to experiment on them. Since you would kill someone who wanted to hurt your cats, clearly you think that there are some things it's not okay to do to animals. I think that painful experimentation is an obvious one for the list of "stuff it's not okay to do to animals" for the same reason hurting your cats isn't an OK thing to do.

    We constrain the freedoms of young children and the mentally disabled as well, don't we? We just feel more empathy towards them and so are more respectful. There is also the issue (at least with children) that they will grow up to be rights holders, and so I think we sort of give them rights on spec.

    Since animals don't have rights, I think we can perform whatever experiments we deem necessary, with necessary being a balance of the expected benefit and the limits of our compassion. I would kill someone who wanted to hurt my cats because, like most people, I am too selfish to live my life according to a normative theory, and I make choices that benefit me and those I care about even if I think that the normative result is wrong.
