[Morality] Subjectivity vs Objectivity

Posts

  • Grid System Registered User regular
    edited June 2011
    Moridin wrote: »
    I don't really understand what's being optimized for. Like, given some moral dilemma, I agree with the assertion, "There's probably a way to resolve the situation such that all parties involved benefit the most in accordance with their own values and desires." This is the more utilitarian approach.

    I just don't see how you get to, "There's an objectively correct way to resolve the situation, independent of the values and desires of the actors involved."

    Why bother trying to get to the second statement? How is your first formulation any less objective than the second?

    How is my first statement objective? It's akin to saying "George desires X, Sally desires Y, there probably exists some arrangement such that George gets some of X, Sally some of Y, and both are mutually content with the outcome."
    It's objective because you are making claims about the world external to you, and the truth or falsity of those claims is not contingent on your own mental states.
    The second statement is akin to "George and Sally should do J always."
    Right, but that's also only a claim about the world external to you. I still don't really see the difference.
    It's entirely possible that George's desire X and Sally's desire Y are mutually contradictory. And then you have some sort of irreconcilable value difference, to which I suppose the only solution is single combat :P.
    If it is in fact the case that George and Sally have mutually exclusive desires, then the statement about there probably being a mutually satisfactory outcome is just wrong. If that's a problem for realists, I'm not sure why.

  • Grey Paladin Registered User regular
    edited June 2011
    hanskey wrote: »
    But... that is the point! Since there is no objective scale, no action and/or code can be judged to be better or worse than another, only to better fulfil subjective criteria, which has no non-subjective value.

    Please try again, as this post made no sense at all.

    Without a defined, objective, scale you cannot attach objective value to actions.
    Moral relativism states there is no objective scale.

    Under utilitarianism, the utility of each code is not universal; the utility of each code depends on the goals of the entity that seeks to make use of it. In other words, the value of each subjective code shifts based on the subjective goals of those that seek to make use of it. Thus, utilitarianism does not acknowledge the existence of universal value either.

    "All men dream, but not equally. Those who dream by night in the dusty recesses of their minds wake in the day to find that it was vanity; but the dreamers of the day are dangerous men, for they may act their dream with open eyes to make it possible." - T.E. Lawrence
  • hanskey Registered User regular
    edited June 2011
    lazegamer wrote: »
    hanskey wrote: »
    Do you not see the problem with what you just said?

    As soon as you take the position that "no moral code is somehow more true than another" then you have no basis for preferences that can be argued from.

    No, as I noted earlier, many people have overlapping moral values, and arguments can (and do) suss themselves out by appealing to common moral grounds. Nothing precludes having substantive discussions on how to apply those values, either.

    Actually, you are still wrong; either you don't actually believe in Moral Relativism (but think you do because you don't know what it is), or you want to change the definition so that your belief is no longer Moral Relativism.

    Moral Relativism has nothing to say about "common moral grounds", nor about preferring commonalities to differences. Under Moral Relativism your basis for judgement is undermined by the fact that your morals are no more important than any others. Common ground is no more of a justification than "because I said so", except that it reads "because we said so".

    If you are interested in Ethical approaches that do justify rights and provide a basis to judge an act despite the prevailing culture, then Utilitarianism or Deontology and their derivatives are excellent. They both also acknowledge the fact that moral codes differ between groups of people.

    You're confusing an observation about human interaction with an argument for moral relativism. The quote above does not suggest that moral relativism has anything at all to say about common moral grounds, merely that in the absence of a recognized objective morality, social frameworks can still exist.

    Luckily, many smart people have been working on making sure there is not an "absence of a recognized objective morality". That's what drove the development of Utilitarianism and Deontology and their descendants.

    For example, "Killing is always wrong except in self defense" is a standard that can be supported by Utilitarianism and Deontology, but not Moral Relativism.

    Ethics purports to tell us how we should act, Moral Relativism just says "do what your culture says".

    Ethics judges culture, Moral Relativism does not.

  • hanskey Registered User regular
    edited June 2011
    hanskey wrote: »
    But... that is the point! Since there is no objective scale, no action and/or code can be judged to be better or worse than another, only to better fulfil subjective criteria, which has no non-subjective value.

    Please try again, as this post made no sense at all.

    Without a defined, objective, scale you cannot attach objective value to actions.
    Moral relativism states there is no objective scale.

    Under utilitarianism, the utility of each code is not universal; the utility of each code depends on the goals of the entity that seeks to make use of it. In other words, the value of each subjective code shifts based on the subjective goals of those that seek to make use of it. Thus, utilitarianism does not acknowledge the existence of universal value either.

    That is an incorrect understanding of Utilitarianism, not in line with any philosophical work that I'm aware of on the subject.

  • Grey Paladin Registered User regular
    edited June 2011
    I see. If so, how can my original post be regarded as a less thought-out version of utilitarianism, if utilitarianism apparently opposes moral relativism, which said code fully supports while still holding to its ethical values for logical reasons?

  • LoserForHireX Philosopher King The Academy Registered User regular
    edited June 2011
    jothki wrote: »
    The problem there is more that moral objectivists have difficulty comprehending moral relativism.

    Okay, then break it down for me.

    I want you to tell me what Moral Relativism is.

    "The only way to get rid of a temptation is to give into it." - Oscar Wilde
    "We believe in the people and their 'wisdom' as if there was some special secret entrance to knowledge that barred to anyone who had ever learned anything." - Friedrich Nietzsche
  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited June 2011
    hanskey wrote: »
    But... that is the point! Since there is no objective scale, no action and/or code can be judged to be better or worse than another, only to better fulfil subjective criteria, which has no non-subjective value.

    Please try again, as this post made no sense at all.

    Without a defined, objective, scale you cannot attach objective value to actions.
    Moral relativism states there is no objective scale.

    Under utilitarianism, the utility of each code is not universal; the utility of each code depends on the goals of the entity that seeks to make use of it. In other words, the value of each subjective code shifts based on the subjective goals of those that seek to make use of it. Thus, utilitarianism does not acknowledge the existence of universal value either.

    I think you're confusing the contextuality/conditionality of a moral rule with ethical subjectivism.

    See my first post in the thread. One person might find the word "cunt" to be deeply offensive; another might find it funny; a third person might not understand what those phonemes mean at all. Yet all three, given a sufficient understanding of the circumstances, might agree that calling my grandmother a cunt is morally wrong.

    Likewise, "non-suffering is better than suffering" may be a moral fact, even if what causes suffering is different for different people (leading to a complex utilitarian calculus).

    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • hanskey Registered User regular
    edited June 2011
    I see. If so, how can my original post be regarded as a less thought-out version of utilitarianism, if utilitarianism apparently opposes moral relativism, which said code fully supports while still holding to its ethical values for logical reasons?

    I hate to say this and I know you don't want to hear it, but: "come again?"

  • Grey Paladin Registered User regular
    edited June 2011
    @Feral: I think my issue is that this view universalizes a code that only comes naturally to humanity.
    What if a tiger were to try to eat your grandmother? Would it be morally wrong for it? What if it was a sentient, alien, tiger? Why would the universe inherently favor one intelligent species' moral code over another?

  • hanskey Registered User regular
    edited June 2011
    You can't even assess the Tiger at all, not even the alien one, from a position of Moral Relativism.

    Using a Deontological or Utilitarian approach you could analyze a specific hypothetical, such as the sentient alien tiger killing your grandma.

    Then you'd want to know if the sentiger can recognize a human as sentient and having Free Will, and clarify a bunch of base conditions.

    Mostly, though, this is a waste, since ethical systems are meant to be employed on real problems.

  • Grey Paladin Registered User regular
    edited June 2011
    hanskey wrote: »
    Moral relativism may not be tolerant, but it has no ground to stand on when it tries to judge because it takes the position that no moral code is preferable to another.

    As soon as you say you prefer one moral code over another, then you must justify that, and in justifying such a belief you cease to be morally relativistic. In fact, as soon as you depart the rubric of "it's all the same", then you are working in an Ethics that is not Moral Relativism, but something else.

    You can fully believe that morals are relative while holding that for every given set X of purposes there is an optimal code Y to follow in order to achieve them and remain in coherence with their basis.

    If one person values his happiness above all other things, while another values the color brown and making things brown, then they cannot agree on a code of ethics that fits them both. Luckily, humanity as a whole has a set of pretty similar purposes. This does not mean the best code for us to follow is the best code for said brown-loving alien, but as outlined in my first post in this thread there is a code that best fits the purpose most of humanity holds to.
    What you described is a less well thought out version of Utilitarianism, which is in opposition to Moral Relativism.

    Again, Utilitarianism and Deontology do not deny the plain fact that moral codes differ over time, space, and groups of people. The difference between them and Moral Relativism is that neither Utilitarianism nor Deontology says that all moral codes are equally valid, because they plainly are not.

    First, you claimed the code I outlined is a version of utilitarianism.
    Then, you stated that moral relativism is opposed to utilitarianism.
    Given that the code I speak of supports moral relativism, how can you say that it relies on utilitarianism and is not, in fact, derived from moral relativism?

    EDIT2: Could you please not delete the original content of your posts, and instead add the changes to them? It is somewhat confusing to follow.
    Moral relativism treats every entity identically; it does not need a special case for a sentient alien tiger. On the other hand, any implied objective scale in a universe with multiple intelligent species that hold opposed codes of ethics is bound to support one over the other.

  • lazegamer The magnanimous cyberspace Registered User regular
    edited June 2011
    hanskey wrote: »
    For example, "Killing is always wrong except in self defense" is a standard that can be supported by Utilitarianism and Deontology, but not Moral Relativism.

    Ethics purports to tell us how we should act, Moral Relativism just says "do what your culture says".

    Ethics judges culture, Moral Relativism does not.

    My knife is terrible for eating soup. But that isn't a failing of my knife; that isn't its purpose. Moral relativism doesn't attempt to answer the question of what should be done. Nor does it conflict with morality, only with the notion that morality is objective.

    It is entirely possible to both believe that killing outside of self-defense is wrong and also acknowledge that this value is not objectively true.

    I would download a car.
  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited June 2011
    @Feral: I think my issue is that this view universalizes a code that only comes naturally to humanity.
    What if a tiger were to try to eat your grandmother? Would it be morally wrong for it? What if it was a sentient, alien, tiger? Why would the universe inherently favor one intelligent species' moral code over another?

    Well, let's say that the fundamental universal moral value is "non-suffering is better than suffering." This would apply to all beings capable of suffering.

    You might argue that this fundamental value is subjective. I posed this question to EM above: why is non-suffering better?

    I personally lean towards the idea that suffering being bad is a self-evident first-order value. It simply is bad. Justification is neither possible nor necessary.

    I can illustrate the problem with "alien tiger" counterarguments with a thought experiment: imagine a creature that desires suffering. This is not the same as a masochist, who gets pleasure from suffering to a degree that overwhelms the suffering. This is not the same as an addict who seeks out a drug that will eventually cause suffering due to poor prediction of how much suffering the drug will cause.

    No, try to imagine a creature that desires suffering for its own sake. Would denying it suffering cause greater suffering? Then why wouldn't it simply sit in isolation without seeking worldly suffering? Would causing it suffering sate its hunger and therefore reduce its suffering? Such a creature would be intrinsically paradoxical. It wouldn't "suffer" in any meaningful sense of the term.

    For any given bizarre hypothetical alien lifeform, we can conclude that it either suffers or it does not. Even if the suffering differs in nature, expression, or subjective experience from our own, we can still apply the moral value "suffering is bad" in a way that is homologous to its exotic form of suffering.

    From there, the rest of the discussion is orthogonal to objective vs. subjective morality. We can discuss which is the best system to apply to such a creature: utilitarian, deontological, virtue ethics, etc. These discussions are not dependent upon a decision for the subjective vs. objective question.

    (As an aside: there is a similar thought experiment that the 'alien tiger' idea reminds me of. It's called the 'utility monster' problem, and I'm sure you can Google it. But again, the utility monster problem is indifferent, in a sense, to subjective vs. objective morality; it is concerned with utilitarianism vs. other moral systems.)

  • hanskey Registered User regular
    edited June 2011
    @Grey Paladin -

    Let me be clear, then, since I confused you. Allow me to first restate the standard definition of Moral Relativism as I, and all the actual philosophers I know who went to school to study this, understand it:

    1. Morals are defined by cultures.
    2. No culture's morals are more correct than another culture's.

    Utilitarianism accepts 1 and denies 2, as do most other systems of ethics, with the exception of Moral Relativism.

    However, the way you stated your position, you nearly reject 2 but fail to in the end, and the way you worded it sounded similar to Utilitarianism, though that is where the similarity ends.

  • hanskey Registered User regular
    edited June 2011
    lazegamer wrote: »
    ...Moral relativism doesn't attempt to answer the question of what should be done....

    Then it is not a system of ethics at all and is irrelevant to any discussion of morality.

  • jothki Registered User regular
    edited June 2011
    jothki wrote: »
    The problem there is more that moral objectivists have difficulty comprehending moral relativism.

    Okay, then break it down for me.

    I want you to tell me what Moral Relativism is.

    The idea behind relativism isn't that there is no truth, but that there are multiple truths, and that the judgement of one sort of truth doesn't affect the nature of others. Someone who believes that murder is wrong may consider someone who believes that murder is right to be evil, and be correct. The person who supports murder may consider someone who opposes it to be evil, and also be correct. This is not necessarily a contradiction, though it is also probably necessarily a contradiction to the people who believe that they are objectively correct. This is itself both a contradiction and not a contradiction, which is a contradiction and not a contradiction, and so on. It's possible to be bothered by this infinite regress, but it's also possible to not be bothered. A relativist is not bothered by it.

    One consequence of this is that moral relativism concludes that moral objectivism is correct, even though moral objectivism concludes that moral relativism is wrong. By debating, I'm just trying to get moral objectivists to better understand why they're right.

    If anyone disagrees with my evaluation of relativism, don't worry, you're correct too. :P

  • lazegamer The magnanimous cyberspace Registered User regular
    edited June 2011
    hanskey wrote: »
    lazegamer wrote: »
    ...Moral relativism doesn't attempt to answer the question of what should be done....

    Then it is not a system of ethics at all and is irrelevant to any discussion of morality.

    Depending upon what you mean by system, I think we agree. I'm not sure what all of the forceful italics and bold is for, as I haven't defended it as such. I feel that you're trying to suggest that because it doesn't prescribe what you should do, it is somehow inferior to positions that claim that there is a correct path?

    edit: Poor discipline on my part, skimmed the sentence. I agree that it isn't a system, but that doesn't make it irrelevant to a discussion of morality. It is a study of morality, but does not propose a single justifiable application of morality.

  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited June 2011
    Moridin wrote: »
    This hasn't really ever been satisfactorily explained to me. I mean, I'm personally disgusted by murder, and would never do it short of some life or death situation, but why is murder objectively wrong?

    Is my personal disgust powerful enough to be a reason to believe it to be objectively true? That seems to be pretty shoddy logic.

    When I add two and two I get four, but why is two and two objectively four? Is my personal calculation enough to be a reason to believe it objectively four?

    Well, yes, if you think that your personal calculation is tracking some arithmetic matter of fact, then your personal calculation is a good reason to believe that two and two sum to four. Similarly, if you think that your disgust with murder is tracking some moral matter of fact then your disgust with murder is a good reason to believe that murder is wrong.

    Of course, if you start at the outset by assuming that there are no such moral matters of fact for you to track, then this will seem puzzling. But that is the very issue at hand: are there moral matters of fact? It would not be fair, so to speak, to first assume that the answer is no, and then use that assumption to show that the answer is no.
    Moridin wrote: »
    I don't really understand what's being optimized for. Like, given some moral dilemma, I agree with the assertion, "There's probably a way to resolve the situation such that all parties involved benefit the most in accordance with their own values and desires." This is the more utilitarian approach.

    Ought we to resolve situations in this way? If so, is that itself a moral fact?

  • hanskey Registered User regular
    edited June 2011
    lazegamer wrote: »
    hanskey wrote: »
    lazegamer wrote: »
    ...Moral relativism doesn't attempt to answer the question of what should be done....

    Then it is not a system of ethics at all and is irrelevant to any discussion of morality.

    Depending upon what you mean by system, I think we agree. I'm not sure what all of the forceful italics and bold is for, as I haven't defended it as such. I feel that you're trying to suggest that because it doesn't prescribe what you should do, it is somehow inferior to positions that claim that there is a correct path?

    Well, answering "the question of what should be done" is exactly the function of ethical systems.

    If Moral Relativism does not do this then it simply is not a system of ethics.

    Another way to put it is that ethical systems define what is and isn't morally acceptable, and since Moral Relativism does not purport to do this, it is irrelevant to any discussion of morals, except to say that it is a bankrupt idea about morality with exactly zero usefulness.

    Edit:
    Don't read too much into the emphasis.

    Edit x2:
    I feel that you're trying to suggest that because it doesn't prescribe what you should do, it is somehow inferior to positions that claim that there is a correct path?
    You feel correctly.

  • Grey Paladin Registered User regular
    edited June 2011
    @Feral: I concede that suffering, by its very definition, is bad, given that any creature which is capable of having at least two states, where one is preferable to the other, could dub the lesser one suffering, and any creature incapable of changing its state would have no reason to act at all.

    I have just read about the utility monster, however, and this has brought a thought to my mind: were a creature to exist that suffers as long as it does not inflict suffering on others, how would an objective morality system treat it?

    @Hanskey: Thank you for the clarification. I think that while the method used is largely the same, what I proposed differs from utilitarianism by not treating mankind's natural tendencies as the center of the universe. I am not sure I can debate this with you, however, since you believe that ethical systems are meant to be employed on practical problems while I think discussing the hypothetical has value.

    @Jothki: It is best to think of each case individually.
    You have a drill and a spoon.
    Alice wishes to break down a wall.
    Bob wishes to eat his soup.
    When each asks 'which tool should I use?' the answer is different because the goals are different. There is no contradiction in Alice using a drill and Bob using a spoon, because their goals and what they value are different.

  • Paladin Registered User regular
    edited June 2011
    I always thought that ethics was a system of rules and morality was the machine used to crank out those rules. Even if Moral Relativism doesn't directly say "hey, this stuff is bad, don't do it," isn't it used to make ethical systems that do this sort of thing?

    And since ethical systems are worthy to be talked about, shouldn't the method used to create them also matter in a discussion about morals?

    Marty: The future, it's where you're going?
    Doc: That's right, twenty five years into the future. I've always dreamed on seeing the future, looking beyond my years, seeing the progress of mankind. I'll also be able to see who wins the next twenty-five world series.
  • hanskey Registered User regular
    edited June 2011
    Paladin wrote: »
    I always thought that ethics was a system of rules and morality was the machine used to crank out those rules. Even if Moral Relativism doesn't directly say "hey, this stuff is bad, don't do it," isn't it used to make ethical systems that do this sort of thing?

    And since ethical systems are worthy to be talked about, shouldn't the method used to create them also matter in a discussion about morals?

    You are very confused, sir.

    Morals are correct ways to behave and ethics is the study of morals, or moral behavior.

    Moral Relativism is not the underpinning of Deontology, Utilitarianism, or their spin-offs. In fact, while Moral Relativism, Deontology, and Utilitarianism agree that moral codes differ between groups of people, they disagree in most other ways.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited June 2011
    Moridin wrote: »
    This hasn't really ever been satisfactorily explained to me. I mean, I'm personally disgusted by murder, and would never do it short of some life or death situation, but why is murder objectively wrong?

    Is my personal disgust powerful enough to be a reason to believe it to be objectively true? That seems to be pretty shoddy logic.

    I want to head off a semantic issue at the pass. The moral rule "murder is wrong" is a poor example. All murder is killing, but not all killing is murder, and murder is that subset of killing that is not morally justified. Consequently we can't really discuss "is murder morally wrong?" in a coherent way, because murder is "morally wrong killing" by definition.

    But if we shift our example a little bit to say, for example, "Can killing ever be objectively wrong?" we can return to having a meaningful discussion.

  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited June 2011
    Feral wrote: »
    Isn't that Moore's paradox?

    Isn't that mostly a semantic problem regarding the definition of the word "believe?"

    Yeah, that is indeed Moore's paradox--"It's raining but I don't believe it's raining" is never okay to say, even though nothing prevents it from being true. I don't know if it's just a semantic problem arising from the definition of belief, though, as you can arguably generate the same paradox with intention, e.g. "I won't go to the store but I intend to go to the store." Often we intend to do things without doing them, so there's nothing impossible about that sentence being true, but it's nonetheless not okay.

    In both cases it seems like what generates the problem is an inbuilt standard: belief aims at truth, intention aims at actually doing the thing. So although there's nothing paradoxical about believing something false or intending to do something you won't, there is something paradoxical about believing something you think is false or intending something you think you won't do. The number of other attitudes you can generate this same problem for depends on how many attitudes you think have such an inbuilt standard.

    The general point here was just that "I personally think murder is wrong but murder is objectively right" is weird because it's Moore-paradoxical, not because there's no such thing as objective right and wrong which is capable of disagreeing with individual judgment.

  • zeeny Registered User regular
    edited June 2011
    Man, a lot of people in here seem to be talking philosophy without using philosophy's language.
    That way lies madness, but I shall keep reading!

  • hanskey Registered User regular
    edited June 2011
    @Hanskey: Thank you for the clarification. I think that while the method used is largely the same, what I proposed differs from utilitarianism by not treating mankind's natural tendencies as the center of the universe. I am not sure I can debate this with you, however, since you believe that ethical systems are meant to be employed on practical problems while I think discussing the hypothetical has value.
    It's cool. I've been conditioned by my philosopher friends and by the several ethics classes that computer scientists must take to graduate, which really stuck in my mind for some reason. Probably because I used to believe in Moral Relativism, but I'm now convinced that it is fairly useless, for all the reasons I've been offering today. My classmates and I raised all of the same questions (and more) as have been raised here in the thread, and I've been countering each in the same manner that my teacher did.

    As a friend, I would suggest you learn a bit more about the many ethics beyond Moral Relativism, because I think they have a lot more to offer, by virtue of being fairly consistent systems of thought developed by serious and scholarly people. I particularly think you'd enjoy Utilitarianism, based on your own posts. Like I said, none of them deny the plain fact that moral codes differ between peoples.

    Side Note: While it's true that I think that Ethics other than Moral Relativism are better for the real world, I actually think they are better for hypothetical scenarios as well.

  • Paladin Registered User regular
    edited June 2011
    What I'm interested in is this: if you have a practical moral dilemma, which parts of whatever ethical algorithm do you guys think will be most useful for reaching the best solution?

    I recently tried to advise a friend who had a secret from his wife on whether to tell her or not. I don't know about moral objectivism, but I tried my best to put myself in his position while trying to get the best possible outcome regardless of whether he would come out on top or not.

    I found it hard to establish goals because he has a moral system different from mine. Specifically, my morals are looser than his, so I was very afraid of giving him advice that would lead to a result that I would be fine with but he would find unacceptable.

    This is sounding a lot like H/A, but the difference is that I'm just interested in what you think is a good practical framework to approach common ethical problems, and this particular one is already resolved as far as I'm concerned. I can't keep up with the metaphilosophy of moral paradigms, but like everyone else I'd still like to see the practical side of it. If that's not really what this thread is about then that's fine.

  • hanskey Registered User regular
    edited June 2011
    Hmmmm. The best thing I can think of is using several ethical tests and comparing the results to see if they match.

    He is lying to her about something, in part, to protect her from the harm of the truth, but also he is covering his own ass to avoid his just deserts, which is definitely unethical.

    You took the Utilitarian approach first, but as you noticed, it can be very difficult to analyze all possible effects for harm and good, let alone to weigh them.

    The other approach, Deontology, does not take consequences into account when assessing the morality of a given action. In this case, perhaps his duty to tell the truth is conflicting with his duty to not hurt her. In this scenario, you must weigh which duty is more fundamental and choose it regardless of consequences. However, you need to know much more about the specifics of the case to effectively use either.

    A third approach is the rights approach. She has the right not to be harmed by him and the right not to be lied to by him, so we have a conflict again. Again you must determine which is the more fundamental right since they are in conflict.


    My personal take, at this moment, is that if you did a thing that you must lie to your wife about, then you have already committed harm against her by violating your marital duties toward her and lying simply compounds the harm.


    However, you haven't really provided enough detail for me to be completely sure that is what happened here. I know in my case, the one time I lied to my wife about cheating on her was long before we got married, and the lie tore me up inside; slowly but surely I became terrible toward her, until I confessed a year or two after the incident. I have never cheated again, or even seriously entertained such a thought subsequently, and I consider myself lucky to still be with her, but the truth went a long way toward fixing the harm I caused her. Basically: you made your bed and now you must lie in it, and there is life after cheating. If you are genuine about reforming behavior, then often these things can be worked out without divorce, unless there is a pattern of behavior, in which case your friend is MINO anyway and a divorce shouldn't be that bad.

    Also, for clarity's sake, try to see the situation genuinely from her view, not your friend's view or your own, and do this with him: would you prefer the person that you are married to and sexually committed to to cheat and lie about it, or to cheat and tell the truth about it?

    Edit: Another thing that should be considered in a Utilitarian approach is whether he can continue to treat her the same while withholding serious secrets over a long time period. If not, then that is additional harm stemming from the lie that must be factored in. Did other factors such as fuck frequency and fuck count get taken into account? Did he cheat once or is this a pattern of behavior? Are there extenuating circumstances that you have not yet related - like drug use, or the marriage being a long distance relationship?

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited June 2011
    Paladin wrote: »
    I found it hard to establish goals because he has a moral system different from mine. Specifically, my morals are looser than his, so I was very afraid of giving him advice that would lead to a result that I would be fine with but he would find unacceptable.

    If you understand his moral system, can't you use logic to extrapolate courses of action from that?

    I give relationship advice on occasion to people whose values are markedly different from mine. I try to understand and assume their most important values and work things out from there, much like a negotiation, but without the adversarial implications.

  • Yar Registered User regular
    edited June 2011
    Part of the dilemma here is akin to what we went over in the truth thread. The notion of absolute objectivity, of any sort, is an inconsistent notion. Seeking the purely objective is like seeking the largest integer.

    However, this doesn't mean the answer is moral relativism. Presumably, if we could agree upon some universal axioms of morality, from there we can reason a system of moral truth that is not necessarily any more culturally relative than is mathematics or science.

    And I agree that "suffering more is bad, suffering less is good" and also perhaps "more joy and happiness is good, less joy and happiness is bad" can serve quite well as universal moral axioms. The fact that I can't necessarily "prove" those statements is a mostly meaningless distraction. You also can't "prove" the most basic axiomatic assumptions that allow us to use math or science, either. Whether you're aware of it or not, any consistent system of reason and truth is necessarily an incomplete system. But I can argue and believe wholeheartedly that no one can make a rational, consistent, convincing, or useful argument against the idea that suffering equates to bad and joy equates to good, and thus it suffices that morality is universally, objectively, a matter of making the best argument regarding how we might minimize suffering and maximize joy.

    If a culture says that 12-yr-old girls need to undergo a rape and then a painful mutilation of genitals in order to protect their honor and chastity, I think I can make a good argument that this is a lot of suffering, and that it is not offset by some greater joy or reduction in suffering. So relativism loses; it's objectively immoral.

    And the great thing about all this is that it doesn't immediately mean that system X is the answer. You can still make a case for or against religion, principled morality, personal responsibility, Kantian ethics, etc. But it does allow arguments and consensus to be made regarding morality, without leaving it at "it's all relative."

  • hanskey Registered User regular
    edited June 2011
    Yar wrote: »
    Part of the dilemma here is akin to what we went over in the truth thread. The notion of absolute objectivity, of any sort, is an inconsistent notion. Seeking the purely objective is like seeking the largest integer.

    However, this doesn't mean the answer is moral relativism. Presumably, if we could agree upon some universal axioms of morality, from there we can reason a system of moral truth that is not necessarily any more culturally relative than is mathematics or science.

    And I agree that "suffering more is bad, suffering less is good" and also perhaps "more joy and happiness is good, less joy and happiness is bad" can serve quite well as universal moral axioms. The fact that I can't necessarily "prove" those statements is a mostly meaningless distraction. You also can't "prove" the most basic axiomatic assumptions that allow us to use math or science, either. Whether you're aware of it or not, any consistent system of reason and truth is necessarily an incomplete system. But I can argue and believe wholeheartedly that no one can make a rational, consistent, convincing, or useful argument against the idea that suffering equates to bad and joy equates to good, and thus it suffices that morality is universally, objectively, a matter of making the best argument regarding how we might minimize suffering and maximize joy.

    If a culture says that 12-yr-old girls need to undergo a rape and then a painful mutilation of genitals in order to protect their honor and chastity, I think I can make a good argument that this is a lot of suffering, and that it is not offset by some greater joy or reduction in suffering. So relativism loses; it's objectively immoral.

    And the great thing about all this is that it doesn't immediately mean that system X is the answer. You can still make a case for or against religion, principled morality, personal responsibility, Kantian ethics, etc. But it does allow arguments and consensus to be made regarding morality, without leaving it at "it's all relative."

    This is pretty close to exactly the reasonable analysis that allows philosophers to nearly universally reject Moral Relativism and do decades of work on actual ethical systems.

  • Paladin Registered User regular
    edited June 2011
    I don't want to put too many details in as it would probably get off-topic. This tack was fruitful enough for me anyway.

    So, with the exception of deontology, which from what I can gather is a principle-based sort of thing, all of these strategies involve theory-of-mind bits, so I was right to try to erase myself and my moral system from the equation as much as possible, meaning no "here's what I would do if I was in your situation" exposition.

    So, to cover all my bases, I should quiz him on what he thinks are the fundamental rights, duties, and values pertinent to the topic to better understand his moral system, and then tell him how they apply to his situation. Next, I should ask him about his goals and do a risk benefit analysis, both practical and according to the moral system he just explained to me. Then I should spend the rest of the time doing method acting for all the people involved, and encouraging him to do the same. If I do at least one of those I guess I'm actually helping instead of running my mouth.

    Well all right, I got what I wanted. This was productive.

    He also said that if he didn't receive repercussions for his behavior, he was afraid he would be more inclined to make the same mistake in the future. Back on topic with that, I guess it's the reasoning behind why suffering has its place in a moral society, and therefore should never be eradicated, which means that there may be a point where more joy and happiness is bad, because there is a threshold between what is stably possible and what is ideal yet unattainable. This is out of my league, though, so whatever.

  • Grey Paladin Registered User regular
    edited June 2011
    Well, you have me convinced. I finally realized something that I previously failed to: an objective scale existing does not equal a Highlander situation; multiple codes can reside at different levels of the scale, or even at the same one if they are equally valid, as opposed to my mental picture of The One True Way this argument supposedly supported. This, along with Feral's argument denying the anthropocentrism of moral realism, making it universal, converted me.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited June 2011
    Paladin wrote: »
    I don't want to put too many details in as it would probably get off-topic. This tack was fruitful enough for me anyway.

    So with the exception of deontology, which by what I can gather is a principle sort of thing, all of these strategies involve theory of mind bits, so I was right to try to erase myself and my moral system from the equation as possible, meaning no "here's what I would do if I was in your situation" exposition.

    So, to cover all my bases, I should quiz him on what he thinks are the fundamental rights, duties, and values pertinent to the topic to better understand his moral system, and then tell him how they apply to his situation. Next, I should ask him about his goals and do a risk benefit analysis, both practical and according to the moral system he just explained to me. Then I should spend the rest of the time doing method acting for all the people involved, and encouraging him to do the same. If I do at least one of those I guess I'm actually helping instead of running my mouth.

    Well, imagine you're helping a friend buy a used car.

    I like sporty sedans. That's my thing. My friend might want a pickup truck. I hate pickup trucks.

    But I would try to understand why my friend wants a pickup truck. I want to know his reasoning, what purpose it serves. Then I would try to help him achieve that purpose while optimally balancing his other goals.

    For instance maybe he wants one so he can tow a boat. Okay, I might look at how much the boat weighs, determine how much truck he needs to tow it, and find a large enough truck without horrible gas mileage. Then we can look at his budget, figure out if a new truck from a dealer or a certified used truck from a dealer or an old truck from a classified ad is the best.

    I can do all this just by understanding and empathizing with his goals.

  • jothki Registered User regular
    edited June 2011
    Yar wrote: »
    But I can argue and believe wholeheartedly that no one can make a rational, consistent, convincing, or useful argument against the idea that suffering equates to bad and joy equates to good, and thus it suffices that morality is universally, objectively, a matter of making the best argument regarding how we might minimize suffering and maximize joy.

    Since when does something being rational, consistent, convincing, or useful mean that it is guaranteed to be correct?

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited June 2011
    Well, you have me convinced. I finally realized something that I previously failed to: an objective scale existing does not equal a Highlander situation; multiple codes can reside at different levels of the scale, or even at the same one if they are equally valid, as opposed to my mental picture of The One True Way this argument supposedly supported. This, along with Feral's argument denying the anthropocentrism of moral realism, making it universal, converted me.

    achievement.jpg

  • Paladin Registered User regular
    edited June 2011
    So generally, if helping another person make a moral decision, I should think of myself as a tool. If he needs a screwdriver and all I've got is a knife, then break off the tip. If I don't have anything even remotely resembling a screwdriver, I still know where the nearest hardware store is, so I'm a map.

  • Grey Paladin Registered User regular
    edited June 2011
    Feral wrote: »
    achievement.jpg
    Arguing for the sake of arguing is counter-productive, no matter what the internet might tell you :P The purpose of debating is to try to reach for the truth so that all can learn from it.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited June 2011
    Arguing for the sake of arguing is counter-productive, no matter what the internet might tell you :P The purpose of debating is to try to reach for the truth so that all can learn from it.

    Or to learn from it myself. I don't think I would have developed any interest in philosophy at all if it weren't for this forum.

  • Taber Registered User regular
    edited June 2011
    "Suffering is bad" is a great axiom, but you can't argue that it is objective truth. Someone could believe that suffering leads to personal enlightenment, and that personal enlightenment is the ultimate goal even if it never leads to less suffering, and design a morality system around that. Is there an objective way to say this other person is wrong? It seems like what we are optimizing for is subjective, even if there are objective ways to measure how successfully we are optimizing for it.
