
Which of these two options would you choose?


Posts

  • For the Future ClubPA regular
    edited December 2006
    The OP gave a rather putrid example. Nobody cares about a billion people dying; it's a statistic, and a mind can't comprehend quantities that large. The best we can do is feel bad about individual people dying, and then multiply that until we feel it amounts to a billion whatever.

    A better example would be: there are 5 people tied to a train track and a train is coming. You could pull a switch and divert the train onto another track, on which only 1 person is sleeping. This person will die and the other 5 will live. Which do you choose?

    And now: you are standing on a bridge with 1 other stranger, and there are 5 people tied to a road. A car is coming. You could let the car continue its course and kill all five people, or you could push the stranger off the bridge onto the car's windshield, killing the 1 stranger but stopping the car and thus saving the 5 people. Which do you choose?

    For the Future on
  • ViolentChemistry __BANNED USERS regular
    edited December 2006
    For the Future wrote:
    The OP gave a rather putrid example. Nobody cares about a billion people dying; it's a statistic, and a mind can't comprehend quantities that large. The best we can do is feel bad about individual people dying, and then multiply that until we feel it amounts to a billion whatever.

    A better example would be: there are 5 people tied to a train track and a train is coming. You could pull a switch and divert the train onto another track, on which only 1 person is tied. This person will die and the other 5 will live. Which do you choose?

    And now: you are standing on a bridge with 1 other stranger, and there are 5 people tied to a road. A car is coming. You could let the car continue its course and kill all five people, or you could push the stranger off the bridge onto the car's windshield, killing the 1 stranger but stopping the car and thus saving the 5 people. Which do you choose?
    For both I choose option three: carry batarangs.

    Edit: I feel my moral framework is discriminated against by these hypotheticals, damnit.

    ViolentChemistry on
  • For the Future ClubPA regular
    edited December 2006
    You did not have time to prepare.

    For the Future on
  • Mahnmut Registered User regular
    edited December 2006
    poshniallo wrote:
    Thanks Mahnmut for beating me - that massively obvious objection never occurred to me before!!!
    Mahnmut was referring to my post which contains the same objection when he wrote "beat'd".

    Right. ;)

    Mahnmut on
    Steam/LoL: Jericho89
  • Loren Michael Registered User regular
    edited December 2006
    correct me if i'm wrong, but isn't intuition just a mental shorthand for a preexisting er, argument (for lack of a better term)? can't you weigh the reasons behind the feeling of intuition? if someone doesn't have a compelling reason behind their moral judgments, can't we call them on it?
    That is not how I understand intuition, and I don't believe I am alone either. As far as I know, intuition refers to a sort of "gut feeling" unsupported by any reasoning. There might be subconscious forces or whatever at play, but even those do not constitute reasons in the way that a Kantian appeal to the categorical imperative does.

    ah, you're right, i had the term slightly confused with some other ideas in my head.

    that said, the fact that good ideas are intuitively cached does not mean that bad ideas are somehow respectable. our ethical intuitions can be blinded and altered by a large number of factors, and i'm reasonably certain that a fair amount of our ethical intuitions are learned.

    intuition in morality strikes me as the same intuitive leap that gets past solipsism in other fields, like biology, history, and cosmology. people who would support grandmother punching or honor killings are not dissenting ethicists any more than holocaust deniers are dissenting historians. they have signed off on the discussion.

    Loren Michael on
  • Yar Registered User regular
    edited December 2006
    For the Future wrote:
    A better example would be: there are 5 people tied to a train track and a train is coming. You could pull a switch and divert the train onto another track, on which only 1 person is sleeping. This person will die and the other 5 will live. Which do you choose?
    Yeah, I presented a similar one about disarming bombs. Unfortunately, this completely removes what I believe to be a critical aspect of the OP's scenario: the fact that another moral subject - another human being's moral decision - is what most directly kills the greater number. If it's just a matter of "choose either 4 or 5 to die" then it's simple. But I'm assuming there is a reason another person is presented who does the killing of the 5, instead of it just being your choice to push the 4 or the 5 button.
    For the Future wrote:
    And now: you are standing on a bridge with 1 other stranger, and there are 5 people tied to a road. A car is coming. You could let the car continue its course and kill all five people, or you could push the stranger off the bridge onto the car's windshield, killing the 1 stranger but stopping the car and thus saving the 5 people. Which do you choose?
    Well, one obvious choice is to throw yourself on the car to save them all.

    And since it got BotP'd: what if one of the Jews exterminated in the Holocaust would otherwise, had the Holocaust never happened, have grown to become a dictator twice as evil and devastating as Hitler?

    Yar on
  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited December 2006
    Bliss 101 wrote:
    It's not clear to me that you can apply a price tag to a life at all, absent unnamed factors.

    If we had more money to spend on hurricane-proofing Louisiana, then more people would survive hurricanes there. Same for poverty relief anywhere around the globe, and so on. There are plenty of ways you can spend money to save lives, so why is the idea of lives and money being exchangeable on some level absurd?
    Yar wrote:
    Well, one obvious choice is to throw yourself on the car to save them all.

    The usual formulation stipulates that the man is exceptionally fat. Your (presumably normal) mass wouldn't be enough to stop the car.
    poshniallo wrote:
    I'm trying to be truthful - both to speak the truth and to see it. Not to be a persuasive philosopher.

    Well, I guess you're just a beautiful, unique snowflake.

    MrMister on
  • electricitylikesme Registered User regular
    edited December 2006
    poshniallo wrote:
    @electricitylikesme - I wish you wouldn't keep putting words in my mouth. Every post I make you say that I actually mean something else. I don't just think this hypothetical is 'unrealistic' - I think it is physically impossible, meaningless and pointless. That it has no connection to any real moral conundrum I've ever faced. I'm aware it is supposed to illustrate the idea of inaction vs action or something similar, but I don't think it does. Please stop telling me I'm evasive or somesuch. If I wanted to evade the question, I wouldn't stay in the discussion.
    You are being evasive, and you're interpreting my responses as anger because it suits you to. They're not. You're being evasive because you won't answer the question, because you feel it's too simple. Well, by the same token, if it's that simple, then someone with a defined moral system of any sort should have no trouble giving an answer. Except that's hardly been the case here, so perhaps this moral hypothetical has some worth in analyzing morality, as unrealistic as it is?

    Which is the point I've been making and you keep ignoring. Of course it's unrealistic, it's supposed to be unrealistic. Because it is a model system in which we can more easily analyze our ideas.

    And don't start with the accusations of putting words in your mouth, because that's hardly what I've done - I've been describing your responses, and you've been telling me repeatedly, despite my having addressed the point, that the system is unrealistic. You are arguing so as not to have to answer the question, which makes sense, because it's a crappy choice to have to make, but this is why it's a hypothetical - it's just interesting to see people react so violently to it.

    electricitylikesme on
  • Bliss 101 Registered User regular
    edited December 2006
    MrMister wrote:
    Bliss 101 wrote:
    It's not clear to me that you can apply a price tag to a life at all, absent unnamed factors.

    If we had more money to spend on hurricane-proofing Louisiana, then more people would survive hurricanes there. Same for poverty relief anywhere around the globe, and so on. There are plenty of ways you can spend money to save lives, so why is the idea of lives and money being exchangeable on some level absurd?

    I didn't say it's absurd. I just question the idea of it ever being a clear-cut transaction, [edit] or that it would apply both ways. And I'd contend that your example only serves to illustrate my point. Spending money to save lives and killing people to save money are not, in my opinion, morally equivalent transactions.

    Bliss 101 on
  • electricitylikesme Registered User regular
    edited December 2006
    Bliss 101 wrote:
    MrMister wrote:
    Bliss 101 wrote:
    It's not clear to me that you can apply a price tag to a life at all, absent unnamed factors.

    If we had more money to spend on hurricane-proofing Louisiana, then more people would survive hurricanes there. Same for poverty relief anywhere around the globe, and so on. There are plenty of ways you can spend money to save lives, so why is the idea of lives and money being exchangeable on some level absurd?

    I didn't say it's absurd. I just question the idea of it ever being a clear-cut transaction. And I'd contend that your example only serves to illustrate my point. Spending money to save lives and killing people to save money are not, in my opinion, morally equivalent transactions.
    While there's a degree of variance due to the actual actions involved in the transaction, they seem pretty equivalent to me. Pulling the patient off life-support and all.

    electricitylikesme on
  • ViolentChemistry __BANNED USERS regular
    edited December 2006
    For the Future wrote:
    You did not have time to prepare.
    Failing to make time to prepare was my immoral act, then. Not whatever I do with the situation; the minute I didn't make time to prepare, I killed however many people died in the hypothetical.

    ViolentChemistry on
  • Bliss 101 Registered User regular
    edited December 2006
    Bliss 101 wrote:
    MrMister wrote:
    Bliss 101 wrote:
    It's not clear to me that you can apply a price tag to a life at all, absent unnamed factors.

    If we had more money to spend on hurricane-proofing Louisiana, then more people would survive hurricanes there. Same for poverty relief anywhere around the globe, and so on. There are plenty of ways you can spend money to save lives, so why is the idea of lives and money being exchangeable on some level absurd?

    I didn't say it's absurd. I just question the idea of it ever being a clear-cut transaction. And I'd contend that your example only serves to illustrate my point. Spending money to save lives and killing people to save money are not, in my opinion, morally equivalent transactions.
    electricitylikesme wrote:
    While there's a degree of variance due to the actual actions involved in the transaction, they seem pretty equivalent to me. Pulling the patient off life-support and all.

    I think I must be missing the point you're trying to make. If you tie the moral value of an action to its monetary cost (even if you allow for a "degree of variance"), you assign an inherent moral value to money itself, as if morality could be reduced to a zero-sum game. That is, in my opinion, an absurd proposition.

    Bliss 101 on
  • poshniallo Registered User regular
    edited December 2006
    @Electricitylikesme: You're still applying the term 'evasive' - a negative term. This is not an interrogation where I have to answer the question. As far as I can see, this is a debate on the validity of this question and many others like it. I and many others have explained why we feel these hypotheticals are pointless. But I don't think you are inclined to listen to me, or Violent Chemistry, or others.

    Mahnmut - sorry for being snippy - I thought you were trashtalking me :lol:

    Gridsystem - you used the term 'truth value', which I wasn't familiar with, so I looked it up. Apparently, there are formal logical systems in which propositions can be said to be neither true nor false.

    http://en.wikipedia.org/wiki/Multi-valued_logic

    I would say that the question, when applied to a purely internal, subjective thing like morality, is flawed. Or I could say that the propositions 'He thinks X is good, I think X is bad' are true.

    But I'm not sure. I am not a logician.

    I'll try to be clear. I think that people derive their moral frameworks from cultural, educational, environmental and genetic causes. Supposedly logical frameworks, such as Utilitarianism, are then laid over the top in an attempt to justify the moral system. However, these supposedly objective philosophies are breakable, ethnocentric and used in a rhetorical fashion. And the feature of moral dilemmas that makes them difficult is their real-world detail, and our own personal morality. Thus unrealistic moral hypotheticals tell us NOTHING about morality, or about a person, but merely serve to promote the idea that ethics is based on logical criteria.

    poshniallo on
    I figure I could take a bear.
  • syndalis Getting Classy On the Wall Registered User, Loves Apple Products regular
    edited December 2006
    In response to the OP, I would not push the button. Even if that meant the other guy was going to push his and kill more people than I would have.

    When my judgment comes (personal or divine, take your pick), I will not have to say that my acts of murder were justifiable at the time, that morality was inconvenient, or that it was other men who forced me to commit my atrocities.

    syndalis on
    SW-4158-3990-6116
    Let's play Mario Kart or something...
  • electricitylikesme Registered User regular
    edited December 2006
    poshniallo wrote:
    @Electricitylikesme: You're still applying the term 'evasive' - a negative term. This is not an interrogation where I have to answer the question. As far as I can see, this is a debate on the validity of this question and many others like it. I and many others have explained why we feel these hypotheticals are pointless. But I don't think you are inclined to listen to me, or Violent Chemistry, or others.
    You don't have to answer anything, but you haven't made a coherent case for why the question is not valid as a model system for analyzing one's own morality or a particular type of moral system. All you've argued is that it's not in any way possible it could be real. Which is obvious, but you haven't explained why answering a simpler, black-and-white question, as you put it, is not a valid way to begin dealing with more sophisticated questions of morality with more variables. Instead, as I've said, you've evaded by simply yelling "not real! not real!" and then playing the "I don't have to answer that" card. Well no, you don't, but this is a debate forum; usually we deal with each other's arguments.

    electricitylikesme on
  • poshniallo Registered User regular
    edited December 2006
    Again, I'm not yelling. Nor 'playing a card'. And I definitely think your tone is more aggressive than mine.

    OK. The fact that you are able to perfectly predict the other person's actions. This is unrealistic, and such uncertainty forms a large part of real-world moral choices. For example, I would like to push my incredibly aggressive colleague towards a post that doesn't involve contact with people, but I don't know how he would react to that, as he isn't very stable. Perhaps he needs people around him to function. Perhaps interacting with people all day stresses him. I don't know. So I don't know what to do. Perhaps I should leave well enough alone. But I'm involved, so maybe I have a responsibility. If only I had some kind of prescient ability, it would be easier to work out. BUT I DON'T.

    The fact that 5 billion people would die means that I would basically decimate the world. More people would die in the ensuing chaos. Does killing 4 billion change much? You're destroying the world as you know it. Is it possible to know whether the suffering that would ensue would be different? Do the numbers actually matter? If I take an example from real life - if I have to save 10m yen, and I can choose between firing 2 expensive employees or 3 cheap ones, how do I choose? Do I choose the least number of people? But don't I need to know how they'd react? Don't I have conflicting duties as a boss, as a colleague, perhaps as a friend? How do I reconcile my own needs, with theirs, with my company's needs? How can I even claim to know what their needs are?

    You can't affect the other person? Why not? Did God put you there? Perhaps there is something to be gained from resisting the games of a malicious God. But how can we know? And in real life is there a God who controls our actions? I don't believe so.

    I could go on and on and on (and have done, really). But my point is that the things which ACTUALLY, IN MY LIFE, make things hard, do not exist in this hypothetical at all. Things like not knowing what other people will do, what other people feel, whether I have the right to control others, whether I have the ability to control others. Oh, and all the time, every day, not being sure what's going to happen next.

    This hypothetical is not simple. If it were simple it wouldn't annoy people.

    Here is a simple hypothetical:

    There's a guy in your school/company who annoys you a bit. Sometimes he mimics your voice in a funny way. Do you torture him to death and kill his family? And then cut off your own toes and throw them at the corpses, just to show him what's what? And then cut off one of your hands to add that to the pile of gore, and die of blood loss while trying to work out how to cut off the other?

    I'm guessing your answer would be.... no? Now that's simple!

    There is also the point that ALL hypotheticals are flawed - just read Violent Chemistry's posts, and Yar's, which work towards a reductio ad absurdum of Utilitarianism and moral systems generally.

    poshniallo on
    I figure I could take a bear.
  • electricitylikesme Registered User regular
    edited December 2006
    You're doing it again. This is going in circles. I say "it's not meant to be perfectly realistic, it's a model system" i.e. idealized. You say:
    poshniallo wrote:
    OK. The fact that you are able to perfectly predict the other person's actions. This is unrealistic, and such uncertainty forms a large part of real-world moral choices. For example, I would like to push my incredibly aggressive colleague towards a post that doesn't involve contact with people, but I don't know how he would react to that, as he isn't very stable. Perhaps he needs people around him to function. Perhaps interacting with people all day stresses him. I don't know. So I don't know what to do. Perhaps I should leave well enough alone. But I'm involved, so maybe I have a responsibility. If only I had some kind of prescient ability, it would be easier to work out. BUT I DON'T.
    Idealized system. I have already stated - repeatedly - that it is not supposed to represent the complexities of a real problem; it is meant to represent an extremely simplified moral decision as a basis from which we can analyze a more complicated one - specifically, does a moral system allow us to kill others to save a greater number? The question itself is, of course, badly phrased for this, but that doesn't mean similar hypotheticals are invalid tools.
    poshniallo wrote:
    The fact that 5 billion people would die means that I would basically decimate the world. More people would die in the ensuing chaos. Does killing 4 billion change much? You're destroying the world as you know it. Is it possible to know whether the suffering that would ensue would be different? Do the numbers actually matter? If I take an example from real life - if I have to save 10m yen, and I can choose between firing 2 expensive employees or 3 cheap ones, how do I choose? Do I choose the least number of people? But don't I need to know how they'd react? Don't I have conflicting duties as a boss, as a colleague, perhaps as a friend? How do I reconcile my own needs, with theirs, with my company's needs? How can I even claim to know what their needs are?

    You can't affect the other person? Why not? Did God put you there? Perhaps there is something to be gained from resisting the games of a malicious God. But how can we know? And in real life is there a God who controls our actions? I don't believe so.
    This is all yet more arguing that the system is "totally unrealistic so I don't have to answer it". Which I have already acknowledged it is, but this doesn't mean it is invalid as a tool to analyze concepts.
    poshniallo wrote:
    I could go on and on and on (and have done, really). But my point is that the things which ACTUALLY, IN MY LIFE, make things hard, do not exist in this hypothetical at all. Things like not knowing what other people will do, what other people feel, whether I have the right to control others, whether I have the ability to control others. Oh, and all the time, every day, not being sure what's going to happen next.

    This hypothetical is not simple. If it were simple it wouldn't annoy people.
    The hypothetical has simplified elements. That doesn't make the choice represented in it less simple. It annoys you because you feel like you've lost control of the situation. Which is kind of the point of these things, though: morality is never really tested when you have complete control over a situation; it's tested when you have to decide between two options that, any way you cut it, are pretty terrible.
    poshniallo wrote:
    Here is a simple hypothetical:

    There's a guy in your school/company who annoys you a bit. Sometimes he mimics your voice in a funny way. Do you torture him to death and kill his family? And then cut off your own toes and throw them at the corpses, just to show him what's what? And then cut off one of your hands to add that to the pile of gore, and die of blood loss while trying to work out how to cut off the other?

    I'm guessing your answer would be.... no? Now that's simple!
    No, that falls into the realm of the absurd, which is different from a simplified, idealized model. We also call it a strawman, and it does nothing to prove your point.
    poshniallo wrote:
    There is also the point that ALL hypotheticals are flawed - just read Violent Chemistry's posts, and Yar's, which work towards a reductio ad absurdum of Utilitarianism and moral systems generally.
    You assume I haven't been. I agree, Mr^2. Just because the measurement of utility is difficult doesn't mean the ideas behind it are invalid. And VC has been strawmanning the hell out of it by deliberately setting up hypotheticals where he adds just enough detail to make it seem absurd. Hence the point of not specifying where these billions of people who will die are coming from, or other contexts - because it's not necessary; we're dealing with the value of lives vs. assisting in ending those lives.

    electricitylikesme on
  • poshniallo Registered User regular
    edited December 2006
    If you feel I'm 'doing it again' then you and I clearly have different ideas about a lot of stuff and will have to agree to differ. You seem to think the tricky important questions of morality are the ones on which almost everyone agrees, while the ones which people disagree on, and are stressed by in their lives, are.... details.

    Also, strawmanning (which you feel VC and I are doing) is where we say 'You think A? Well, that's like Z! So you are wrong!' That's different from a reductio ad absurdum. Similar, but different. A reductio is 'If A is true, then absurd consequences result; hence A is untrue.'

    The comedy simple dilemma wasn't supposed to be strawmanish. I meant that it is simple. And easy to answer. It was intended to be a rhetorical example of an actual simple question. The silliness was added just because.

    You say that the OP is a model system. OK, fine. A model is a simple version of a real thing which allows us to imagine ways to manipulate the original, right?

    What real world situation does this mirror?

    Just because a situation is explained in a few words does not mean it is simple.

    poshniallo on
    I figure I could take a bear.
  • electricitylikesme Registered User regular
    edited December 2006
    poshniallo wrote:
    If you feel I'm 'doing it again' then you and I clearly have different ideas about a lot of stuff and will have to agree to differ. You seem to think the tricky important questions of morality are the ones on which almost everyone agrees, while the ones which people disagree on, and are stressed by in their lives, are.... details.
    No, I think that if you can't answer the obvious case (which this is), then how are you supposed to answer anything approaching a real-world case?
    poshniallo wrote:
    Also, strawmanning (which you feel VC and I are doing) is where we say 'You think A? Well, that's like Z! So you are wrong!' That's different from a reductio ad absurdum. Similar, but different. A reductio is 'If A is true, then absurd consequences result; hence A is untrue.'
    Which is exactly what's being done, more heinously by VC earlier. Although both terms probably hold true.
    poshniallo wrote:
    The comedy simple dilemma wasn't supposed to be strawmanish. I meant that it is simple. And easy to answer. It was intended to be a rhetorical example of an actual simple question. The silliness was added just because.
    And you set it up purely to say that was what I was advocating. I think I made it pretty clear throughout that post that that wasn't the case, and it contributed nothing.
    poshniallo wrote:
    You say that the OP is a model system. OK, fine. A model is a simple version of a real thing which allows us to imagine ways to manipulate the original, right?

    What real world situation does this mirror?
    Can I let some die to better the majority? Is it moral to kill in order to save lives later on? This is pertinent to questions of collateral damage in wartime campaigns, to allocation of funds for health care, to our aid programs to the third world.
    poshniallo wrote:
    Just because a situation is explained in a few words does not mean it is simple.
    I already stated this. The question is simple - the answer is perhaps not, which makes the question important to answer before we go adding more and more variables.

    electricitylikesme on
  • Mahnmut Registered User regular
    edited December 2006
    poshniallo,
    electricitylikesme wrote:
    specifically, does a moral system allow us to kill others to save a greater number?

    Is this a question "on which almost everyone agrees?"

    Mahnmut on
    Steam/LoL: Jericho89
  • ViolentChemistry __BANNED USERS regular
    edited December 2006
    poshniallo wrote:
    There is also the point that ALL hypotheticals are flawed - just read Violent Chemistry's posts, and Yar's, which work towards a reductio ad absurdum of Utilitarianism and moral systems generally.
    electricitylikesme wrote:
    You assume I haven't been. I agree, Mr^2. Just because the measurement of utility is difficult doesn't mean the ideas behind it are invalid. And VC has been strawmanning the hell out of it by deliberately setting up hypotheticals where he adds just enough detail to make it seem absurd. Hence the point of not specifying where these billions of people who will die are coming from, or other contexts - because it's not necessary; we're dealing with the value of lives vs. assisting in ending those lives.
    Your inability to accept that as a utilitarian position does not make it logically inconsistent with utilitarianism, nor does it make it a straw-man.

    ViolentChemistry on
  • electricitylikesme Registered User regular
    edited December 2006
    poshniallo wrote:
    There is also the point that ALL hypotheticals are flawed - just read Violent Chemistry's posts, and Yar's, which work towards a reductio ad absurdum of Utilitarianism and moral systems generally.
    electricitylikesme wrote:
    You assume I haven't been. I agree, Mr^2. Just because the measurement of utility is difficult doesn't mean the ideas behind it are invalid. And VC has been strawmanning the hell out of it by deliberately setting up hypotheticals where he adds just enough detail to make it seem absurd. Hence the point of not specifying where these billions of people who will die are coming from, or other contexts - because it's not necessary; we're dealing with the value of lives vs. assisting in ending those lives.
    ViolentChemistry wrote:
    Your inability to accept that as a utilitarian position does not make it logically inconsistent with utilitarianism, nor does it make it a straw-man.
    It's a utilitarian position if you want to selectively ignore a myriad of other factors - this is what MrMister has been arguing. The issue is that gauging utility is difficult, not that the idea is necessarily wrong.

    While I certainly don't think I advocate it as an absolute position, you'd have to be pretty blind to think it doesn't feature strongly in most people's morality when dealing with the large scale. It is an underpinning of how you make hard choices in a democracy.

    EDIT 3 (original text follows): The problem with the position you constructed is that it only works if we neatly assume that grieving due to the deaths of loved ones, failure of infrastructure, loss of intellectual knowledge (and thus technological development time) and many other factors are not being used as some measure of overall utility. So yes, it can be a utilitarian position, but is it necessarily logical? It highlights the problem of the whole thing being dependent on how you define utility, but it would be silly to declare this to be a flaw of the idea of utilitarianism rather than merely an absence which prevents it being a complete theory of morality. That would be absurd, since it obviously has some value: it is, essentially, the underpinning of democracy - do the most good for the greatest number of people (simplistically stated).
    Your example is akin to saying "if we totally ignore the massive disaster for civilization that killing 5 billion people suddenly would be, then it's totally the right choice to prevent some unspecified future starvation". If we throw out enough factors we can turn anything into a utilitarian position, and that is a strawman. EDIT 2: As, ironically, is this, somewhat.

    EDIT: And to go further, he also pointed out that a gradual reduction in the population from 5 billion to 1 billion is, in fact, a morally justifiable position depending on context, so your argument that it's not wrong under utilitarianism works with all factors included, but certainly not if you just suddenly wiped them all out somehow. The point is, in a practical system we tend to mix empathic concerns on the small scale with utility concerns on the large, which is why I'm deeply skeptical of your attempts to call the whole thing bogus, since there's no reason the former isn't a part of the latter as well.
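    To make concrete how much the verdict depends on what gets counted as utility, here is a minimal sketch; the utility function, its weights and the scenario numbers are invented purely for illustration and come from nowhere in this thread.

    ```python
    # Toy illustration (all numbers and weights are invented): whether the sudden
    # loss of 5 billion people "beats" a slower loss of 6 billion under a
    # utilitarian comparison depends entirely on which factors the utility
    # function is allowed to count.

    def utility(deaths, grief_per_death=0.0, infrastructure_loss=0.0, knowledge_loss=0.0):
        """Return the (negative) utility of a scenario; larger is better."""
        return -(deaths * (1 + grief_per_death) + infrastructure_loss + knowledge_loss)

    # Counting only the body count, killing 5 billion now looks "better":
    print(utility(5e9) > utility(6e9))  # True

    # Counting grief, infrastructure collapse and lost knowledge as well,
    # the same choice can come out worse than the slower alternative:
    print(utility(5e9, grief_per_death=0.5, infrastructure_loss=4e9, knowledge_loss=2e9)
          > utility(6e9, grief_per_death=0.1))  # False
    ```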

    electricitylikesme on
  • Mabuse Registered User regular
    edited December 2006
    Hypothetical or not, I'd make sure the button by which 5 billion people would be killed was pushed. Get rid of lots of stupidity. Collateral damage happens.

    Mabuse on
  • Yar Registered User regular
    edited December 2006
    You're both right. Hypotheticals can be quite shrewd in their ability to extract core concepts that apply in the real world. However, at the same time, proper analysis of a hypothetical may lead us to realize that the very construct of it alters reality so much (and perhaps in such an underhanded manner) that the concepts you extract are horribly fraudulent.

    I like hypotheticals that try to pin you to rationalization (4 dead is better than 5 dead) or principle (killing is wrong). I don't like the underhanded manner in which this hypothetical presents you with a moral subject - the guy who's going to push a button if you don't - and then grants you omniscience in such a way as to render him a moral object. This screws with the ethical dilemma so fundamentally that it makes it difficult to answer. Hence we have JCM not wanting blood on his hands, and others seeing the blood there anyway, because of this strange omniscience we've granted.

    Yar on
  • electricitylikesme Registered User regular
    edited December 2006
    Yar wrote:
    You're both right. Hypotheticals can be quite shrewd in their ability to extract core concepts that apply in the real world. However, at the same time, proper analysis of a hypothetical may lead us to realize that the very construct of it alters reality so much (and perhaps in such an underhanded manner) that the concepts you extract are horribly fraudulent.

    I like hypotheticals that try to pin you to rationalization (4 dead is better than 5 dead) or principle (killing is wrong). I don't like the underhanded manner in which this hypothetical presents you with a moral subject - the guy who's going to push a button if you don't - and then grants you omniscience in such a way as to render him a moral object. This screws with the ethical dilemma so fundamentally that it makes it difficult to answer. Hence we have JCM not wanting blood on his hands, and others seeing the blood there anyway, because of this strange omniscience we've granted.
    I agree with this analysis. A big problem with this hypothetical is the addition of "the guy". It adds a dimension that the OP appears not to have really wanted to add, since it appeals to our reality: we know we can reason with (or kill) people to stop them doing things.

    A hypothetical I think would be better for this type of question is the following: there is a nuclear missile in a bunker below a city. In 10 seconds, the missile will launch at another city and explode, killing 20 million people. But you are in the control room and can stop the missile launching - though not the bomb exploding - in which case it will explode underground and kill only 5 million people in the city it's in. In both cases, you will survive, since the control room is far from the bunker. Do you stop the missile launching? You do not know which city it is in, nor which city it will hit. The missile cannot be shot down once launched (US missile defense doesn't work yet, guys).

    Now, to my mind, the obvious choice is of course to stop the missile launching and only kill 5 million people, and from there we can complicate the scenario by saying that there's a chance the missile can be intercepted in flight etc.

    IMO this is a more useful hypothetical.

    electricitylikesme on
  • Yar Registered User regular
    edited December 2006
    Well, sure. Or just boil it down. Either 4 random people or 5 random people are going to die and it's all up to you which happens. There is no option except to choose one. That's just stupid math.

    I think, though, one of the concepts these hypotheticals are trying to get at is whether or not you are your brother's keeper - whether or not you can be held responsible for the ethical decision of another rational, sane adult human being, simply because you had the option to deny him the ability to decide in the first place and allowed him to make his decision anyway.

    Or, more specifically, whether or not it is worth committing a sin yourself, not simply in order to prevent a bigger tragedy, but to remove another's ability to decide to commit a bigger sin.

    Yar on
  • electricitylikesme Registered User regular
    edited December 2006
    Yar wrote:
    Well, sure. Or just boil it down. Either 4 random people or 5 random people are going to die and it's all up to you which happens. There is no option except to choose one. That's just stupid math.

    I think, though, one of the concepts these hypotheticals are trying to get at is whether or not you are your brother's keeper - whether or not you can be held responsible for the ethical decision of another rational, sane adult human being, simply because you had the option to deny him the ability to decide in the first place and allowed him to make his decision anyway.

    Or, more specifically, whether or not it is worth committing a sin yourself, not simply in order to prevent a bigger tragedy, but to remove another's ability to decide to commit a bigger sin.
    Well, my proposal was aimed at dealing with the realism aspect brought up earlier, but also because it's basically my take on "press the button / don't press the button", in which I really do think the decision is obvious.

    electricitylikesme on
  • ViolentChemistry __BANNED USERS regular
    edited December 2006
    electricitylikesme wrote:
    Well, my proposal was aimed at dealing with the realism aspect brought up earlier, but also because it's basically my take on "press the button / don't press the button", in which I really do think the decision is obvious.
    "That's not realistic!" isn't actually a coherent defense when you look at the conditions the hypothetical set up in the first place. Realism is exactly as relavent to this problem as it is to the question "who would you fuck; Huntress or Faye Valentine?". And I think the decision is obvious too, kill as many as possible to put off humanity's eventual extinction for as long as possible.

    ViolentChemistry on
  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited December 2006
    Bliss 101 wrote:
    Spending money to save lives and killing people to save money are not, in my opinion, morally equivalent transactions.

    But given that we can spend money to save lives, if we are given enough money in exchange for a life then we can turn around and save more people with it. So then it's a matter of weighing the one person's sacrifice against the many's gain - which some ethical systems aren't down with, but which consequentialists generally are (barring complicating factors).
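    As a minimal sketch of that weighing - assuming a purely hypothetical average cost to save one life; the figure and the helper names below are made up for illustration:

    ```python
    # A minimal sketch of the trade-off described above, under an assumed
    # (purely hypothetical) average cost to save one life through, e.g.,
    # disaster-proofing or poverty relief. Not a real estimate.

    COST_TO_SAVE_ONE_LIFE = 5_000.0  # hypothetical stand-in figure

    def lives_saved(payment: float) -> float:
        """Lives the payment could save if spent on life-saving measures."""
        return payment / COST_TO_SAVE_ONE_LIFE

    def consequentialist_accepts(payment_for_one_life: float) -> bool:
        """Accept sacrificing one life only if the payment would save more than one."""
        return lives_saved(payment_for_one_life) > 1

    print(consequentialist_accepts(1_000.0))   # False: 0.2 lives saved
    print(consequentialist_accepts(50_000.0))  # True: 10 lives saved
    ```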

    MrMister on
  • Saddam_I'm_addas Registered User regular
    edited December 2006
    Is starting a war so an appreciable portion of your unemployed population can be turned into fodder for your war machine and most likely die before becoming a long term burden on the welfare and health care system an economic benefit?

    Saddam_I'm_addas on
  • electricitylikesme Registered User regular
    edited December 2006
    Saddam_I'm_addas wrote:
    Is starting a war so an appreciable portion of your unemployed population can be turned into fodder for your war machine and most likely die before becoming a long term burden on the welfare and health care system an economic benefit?
    Is the social upheaval, net loss in happiness and certainty and creation of future long term enemies and unrest a net social benefit?

    Utility doesn't have to be economically based.

    electricitylikesme on
  • Saddam_I'm_addas Registered User regular
    edited December 2006
    Is starting a war so an appreciable portion of your unemployed population can be turned into fodder for your war machine and most likely die before becoming a long term burden on the welfare and health care system an economic benefit?
    Is the social upheaval, net loss in happiness and certainty and creation of future long term enemies and unrest a net social benefit?

    Utility doesn't have to be economically based.

    Well the long term prospects are negative for the population as a whole, but if the short term economic benefits of cheap disposable slave labor favor the few enough to provide an maximized profit ratio (war profiteering) doesn't it happen anyways?

    Saddam_I'm_addas on
  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited December 2006
    You're not particularly coherent. You should work on that.

    Whether it does happen has little bearing on whether it should. What are you even asking?

    Edit: apologies.

    MrMister on
  • Saddam_I'm_addas Registered User regular
    edited December 2006
    English is not my home language. Forgive.

    Saddam_I'm_addas on
  • Saddam_I'm_addas Registered User regular
    edited December 2006
    This thread is asking about justifications for killing.

    In the arabic wars, America has propped up oil based economies to serve its own interests, similar to the banana republics in the Southern Americas.

    War has a tendency to benefit the few.

    The majority pay the price in being the ones who actually have to fight the war for the benefit of the few.

    Does the net economic benefit to the few outweigh the "net loss in happiness and certainty and creation of future long term enemies and unrest" of the majority?

    Saddam_I'm_addas on
  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited December 2006
    Saddam_I'm_addas wrote:
    Does the net economic benefit to the few outweigh the "net loss in happiness and certainty and creation of future long term enemies and unrest" of the majority?

    No.

    MrMister on
  • Cavil Registered User regular
    edited December 2006
    The biggest problem with Iraq, besides the loss of human life, eroding international relations and all that good stuff, is the debt. China is our biggest creditor, and rather than trying to be more competitive in the global economy, we're initiating the sort of protectionist policies that we would deplore abroad, all the while trying to secure key oil reserves - not for ourselves, but to stymie China's economic powerhouse. None of this seems particularly effective in the long haul.

    Cavil on
    Virtue finds and chooses the mean.
  • Saddam_I'm_addas Registered User regular
    edited December 2006
    At what point does a revolution of the majority to overthrow the corruption of the few become the only alternative?

    Saddam_I'm_addas on
  • Cavil Registered User regular
    edited December 2006
    In Generation Y America? You're kidding right?

    Cavil on
    Virtue finds and chooses the mean.
  • Incenjucar VChatter Seattle, WA Registered User regular
    edited December 2006
    Cavil wrote:
    In Generation Y America? You're kidding right?

    I don't think he realizes that the vast majority of America is purely materialistic and doesn't really give a shit about the world beyond what new toys it can bring them.

    Which proves that he lives in a cave. On the moon.

    Incenjucar on
This discussion has been closed.