
All my atheist morals


Posts

  • The Cat Registered User, ClubPA regular
    edited February 2007
    MrMister wrote:
    my opinion on a lot of controversial moral issues

    The point was that they're all to some degree controversial and in need of clear-headed examination. I strongly disagree with your opinions on almost all of them.


    Why? What's your take on them?


    And I'll just go ahead and ignore that casual poke at my writing. If you paid attention it would probably appear as more coherent.

    No, MrMister was dead right on both points. You're derailing, and ellipses-dependence makes you read like you're stoned.

    The Cat on
  • Ketherial Registered User regular
    edited February 2007
    Incenjucar wrote:
    Generally speaking, intent is the way to judge the morality of a person. Their values are how you judge the potential of their intent. The benefit or harm of their actions, separated from the intent, are unrelated to the morality.

    Someone who intends to be helpful to others based on what they consider valuable, but whose values are ultimately harmful, is still moral.

    They're just also very very dangerous.

    i cant agree. under this kind of thinking, what kind of person would be considered immoral? someone who actively acknowledges the immorality of an act and nevertheless goes ahead with it? when has this ever happened?

    maybe this just relates to how i think people never go against their own morals. they just have very detailed, nuanced moral codes that allow for piracy of music but not stealing of cds, etc., etc. i understand that i may not be in the majority with this kind of thinking though.

    Ketherial on
  • Sumaleth Registered User regular
    edited February 2007
    I wish I could contribute to this discussion at a higher level. Any level at all really. I should have... chosen my uni subjects more carefully I guess. Anyone want an unused set of engineering learnings?

    I go back and forth on who I agree with, which doesn't happen often in these things. It's either the sign of a good discussion, or that I'm not totally getting it.

    I assumed early on, and my post reflected this, that we were looking for the reason that atheists have morals. The OP read to me like a "how can atheists have morals?", and atheists clearly do, so I was interested in unearthing that universal source.

    The thread has become a sort of search for a sort of hands-off morals decision system. Something that will allow us to type in a problem and out pops the answer. Which is bibley in a way.

    I'm still figuring it out. If only there were weight distributions involved, I could solve for the points of high force.

    Sumaleth on
  • itylus Registered User regular
    edited February 2007
    Yar wrote:
    Itylus' semantics do not constitute any sort of disagreement with me.

    I'll happily acknowledge that I'm talking about semantics; different concepts naturally have different semantics to go with them.

    But you really think that I'm not disagreeing with you?

    Let me put three propositions forward:

    1. All actions, events, objects or the like which can be judged as having moral significance, i.e., a moral meaning or value, can ultimately be evaluated on a single scale of value, which is evident. That is, all values are commensurable, and their commensurability is evident and calculable.

    2. Actions, events, &c which have a moral meaning or value, could in theory be evaluated on a single scale of value, but cannot in practice because that scale is not evident. While all value is theoretically commensurable, that commensurability is not calculable in a fashion that is clearly logically superior to other methods, so the practical problem of achieving commensurability between differing types of value remains unsolved.

    3. Actions, events, &c, which have a moral meaning or value, may be of different types which are fundamentally incommensurable. The value of beauty is of a fundamentally different type to the value of mercy, and one cannot be expressed as a quantity or variation of the other. Efforts to do so produce only category error in the guise of false clarity.


    I'm saying that these are three different propositions, and that my inclination is to believe that either no.2 or no.3 is more likely than no.1, while you believe that no.1 is true. I think that's pretty clearly a case of me not agreeing with you. :)

    Incidentally, I note somewhere above you've argued that joy is logically inseparable from good, because you can't have a joy which is bad. While I'm not sure this is really a significant point of disagreement compared with what's above, I thought I should point out that this is like saying that because you can't have an orange which is not fruit, fruit=oranges. The question which remains open is, is it possible to have a concept of goodness which is not always joyous? Clearly you would disagree with such a concept, but I don't think that's the same as saying - as you have - that there can be no such concept.

    itylus on
  • ViolentChemistry __BANNED USERS regular
    edited February 2007
    Sumaleth wrote:
    I wish I could contribute to this discussion at a higher level. Any level at all really. I should have... chosen my uni subjects more carefully I guess. Anyone want an unused set of engineering learnings?

    I go back and forth on who I agree with, which doesn't happen often in these things. It's either the sign of a good discussion, or that I'm not totally getting it.

    I assumed early on, and my post reflected this, that we were looking for the reason that atheists have morals. The OP read to me like a "how can atheists have morals?", and atheists clearly do, so I was interested in unearthing that universal source.

    The thread has become a sort of search for a sort of hands-off morals decision system. Something that will allow us to type in a problem and out pops the answer. Which is bibley in a way.

    I'm still figuring it out. If only there were weight distributions involved, I could solve for the points of high force.
    What it really comes down to is that all morality comes from people convincing each other that there is a certain manner in which they should behave. Atheists' morals and religious people's morals will almost always come from a very large number of separate sources throughout their development. What becomes a problem is that a lot of people seem to think that people need a "better" reason than anything not punishment-based. There's this way-more-prevalent-than-it-ought-to-be idea that "but, but if nothing bad happens to bad people, why would anyone be good?" And there are plenty of perfectly sound answers, but people just won't accept something like "because I don't like hurting people for no reason" or "because I value my honor and pride" as compelling enough reasons to be "real".

    ViolentChemistry on
  • Yar Registered User regular
    edited February 2007
    Long post.
    Ketherial wrote:
    Moral codes represent our agreements regarding how we act towards each other.
    No, those are ethics.
    Your axiom is insufficient.
    Insufficient for what?
    Morals are inherent within us regardless of whether you believe in any gods.
    Well, that's only acceptable to us if we can accomplish what we're trying to accomplish here. "Instinct" won't really cut it. Why are they inherent?
    And...Yar!: *confused* Is the atheist goal simply to become the antithesis of all world religions yet still maintain the most basic functions thereof? Please explain the idea behind separating yourself from religion and/or God...yet maintaining the essential idea of the Golden Rule or Ten Commandments.
    If we can establish certain axioms, we can then derive from them all sorts of moral and ethical systems, including those which perhaps mirror the Ten Commandments, without any need for a story about Mt. Sinai.
    Ketherial wrote:
    i consider the desire to aid others, even if one doesnt have the means to actually do so, to be generosity. is there some reason why a poor man cannot have a generous heart? or is your concept of generosity one which simply does not consider realistic capability?

    a poorer man, who donates a larger percentage of his meager assets is more generous than a richer man who donates a tiny portion of his considerable assets. instead of arguing numbers, can we just agree that the richer man would create more joy? if so, i find your proposal (e.g. joy and sorrow are the only axioms) to be problematic because i believe the poorer man is "more moral" than the rich man. to me, that translates into a moral code that does not only use joy as its sole or primary axiom.
    You switched the scenario on me, mid-argument, and then mixed it with a separate point, and most importantly, you're completely ignoring my responses now, which makes this all an impossible discussion.

    To reiterate: I never made any claims about how to judge people. Starting with the axioms I've provided, there is certainly a capacity to form a system with which to judge people. But what I don't understand is why you've gone ahead and invented an obviously faulty system of judgment, and then keep insisting on attributing it to me, instead of the person who brought it up (you).

    And, like I already said, perhaps the amount of joy created with respect to one's opportunities might be a better system of judgment, just for starters.
    Ketherial wrote:
    bravery on the part of an "evil" man who directly causes suffering by his bravery is bravery nonetheless. same with respect to compassion. i think we will simply have to agree to disagree on this point.
    I think you're agreeing to accept something very few people would agree with. I think you're trying to avoid the discussion. Bravery may not always achieve the best results; it may not always be the best course of action. But it's still bravery. But when you've already established ahead of time that we know that a certain act is going to cause suffering without hope of a better future, then there is no way in hell that it is either a) "bravery," or b) moral.

    Our very concept of bravery is simply the act of forgoing one's fear of suffering in order to risk the rewards of a greater happiness for self and/or others. That is what it is regardless of any cooked-up hypotheticals you use to try and force me into a more complicated explanation of why the simple truth is still the simple truth. The risk doesn't always succeed, but it is a virtue because, although we can't always precognitively know the results, we believe it to ultimately succeed more often than not. However, irredeemable bravery is called brashness, foolhardiness, brazenness, etc. It isn't a virtue or moral at all.
    Ketherial wrote:
    like ive said, regardless of whether you consider it possible or not, regardless of whether we consider the situation in a vacuum or not, i find a generous poor man to be "gooder" than a stingy rich man who in actuality creates more joy. this is not a hard situation to understand.
    It is impossible to understand. How can generosity possibly exist outside of context? It can't; it has no meaning. The very nature of the word connotes interactions among people.
    Ketherial wrote:
    for me, it doesnt matter. maybe he's taking an educated guess. maybe the numbers are just too overwhelming. maybe he's a precog. it doesnt matter. i value the will to struggle against "evil", to in effect "never give up" even if the results are unchanged. i would grant positive moral value to such conviction even if it were on the side of the "evil" party (e.g. a villain who never gives up on trying to thrust the land into darkness).
    Because the focused struggle, the desire to never give up, doing what you will in the face of likely failure, are all virtues which ultimately tend to improve the world we live in and make it a happier, less sorrowful existence. If not, then why?
    Ketherial wrote:
    no, it is none of the things you mentioned. i think it is better for him to fight because "never giving in to evil" (and i use evil simply as a term for the universal negative, as viewed by the actor) is admirable in and of itself, regardless of whether or not such actions influence anyone else or anything else. it is something that again, i find to be axiomatically good (a sibling of integrity if you will).
    Your axioms fail because I have, several times, broken them down into value-neutral, meaningless concepts. Placed in a vacuum outside of their context, they have no merit. Without my axiom of joy and sorrow, your "axioms" of integrity and bravery crumble into dust. The only other possible pedestal you could rest them on is "God said so." With the axiom of joy and sorrow, however, we can logically construct a system in which everything you say is still a valid virtue. We can still construct a rule-based ethical system in which adherence to perseverance, no matter what, still achieves what we believe to be the happiest world to live in, even if that perseverance fails in a circumstance, or even fails over the course of an entire lifespan.

    Remember, it isn't about one person. That's where I keep missing you. You keep wanting this to be about judging someone, or about a single person's capacity to create joy and sorrow. It isn't. At least, I have never argued it as such.
    Ketherial wrote:
    i think this is really the main point of our disagreement. for the exact same reason why you think joy is the only axiom, i think joy cannot be the only axiom. because perception can be manipulated, it can hypothetically be created by any kind of set up (e.g. purposely developing low standards or brain in a vat). hence, experiences are only partially meaningful with respect to morality because no specific requirements can directly result from them. maximizing joy could mean killing or not killing, helping or not helping, doing or not doing. it could mean anything depending on the circumstances at hand and hence is insufficient to deal with morality. generosity as an axiom provides desired direction whereas joy only provides a desired in-brain chemical result.
    It's not the same. You are misunderstanding the relationship between an axiom and a system. I am invalidating your axioms by breaking them down and showing that they are either reliant on my less complex axioms, or else they fall apart. You, on the other hand, are challenging my axioms by adding your own stipulations to them and making them more complicated. You are building them up in directions that they don't logically have to go in order to create strawmen that don't necessarily rest on my axioms just because you think they do. Nevertheless, I have fully sound and complete answers to every one of the "directions" you've tried to take my axioms, which consistently support my axioms while explaining why your judgment of them was flawed. Whereas my challenges to your axioms are just answered with "agree to disagree" or "that's just not how I see it." That is a fundamental difference in how we're attacking each other.

    For example, I've already answered the lowering standards argument. Anyone with the capacity to lower standards in that manner likely would also have the ability to instead help someone along the path to setting and achieving even higher standards, which is a greater joy than lowered standards could ever be.

    As for the brain in the vat - you're either arguing total solipsism or you're making the "happy drugs" argument. Solipsism can defeat any argument any time and I'm assuming that's not where you're going. "Happy drugs" cannot achieve the same order of joy that generosity, or bravery, or teaching your son to ride a bike can.
    Ketherial wrote:
    i am assuming that you are trying to say something that has meaning. if we cannot measure net gains and losses, then what does "maximize joy, where joy is not measurable" mean?
    That's what we're here to do. That's the next step.

    Maximizing joy is the ideal. As with any ideal, it cannot be achieved to absolute perfection by a mortal. One cannot guarantee his actions will maximize joy over any other possible action. But to a lesser extent, he could also never guarantee that he will be absolutely perfectly brave or generous, either. We could always look at the results and then theorize in hindsight what would have been an even more perfectly ideal brave or generous decision.

    I mean, I can challenge you the same way - if bravery is the ideal, how do we measure that? We don't, we just collectively believe and judge an action as brave, just as we can collectively believe and judge which results are more joyous or sorrowful.

    A human can't know the absolute universal net gains and losses to joy and sorrow that will result from each action. That's why we have virtues like bravery and integrity. They don't always net the best result, but we value adherence to them regardless, because we believe that adhering to them as rules will, in the long run, across multiple circumstances and multiple lives, achieve a better world full of more happiness and less sorrow. Compared to, say, just trying to constantly calculate the net effects of actions regardless of the derived rules of virtue.
    Ketherial wrote:
    im not sure how to respond because a lack of interest doesnt invalidate my response. if what we are trying to do is develop a system that will allow us to understand the scope of morality and in turn, make morally meaningful decisions, i dont understand how non-consideration (interest?) of individual circumstances can be a legitimate position.
    Sorry, let me be more clear: this isn't just about how happy or sad Ketherial or a brave knight or a hobo is. It's about happiness and sadness across lives and time. You keep trying to break it down into a microcosm where people are happy, with the implied point being that there is a larger unhappiness as a result, and you somehow think that invalidates what I'm saying.
    Ketherial wrote:
    this is a good point and definitely a misunderstanding on my part. apologies. but i have also been arguing that competing axioms do exist which can outweigh or invalidate your joy axiom. please disregard any misunderstandings and let's continue our discussion regarding what i still consider to be an unresolved point. if i devolve into misunderstanding again, please let me know. the line is not as clear to me as it may be to those who have studied philosophy beyond college courses.
    You're no fun. Tell me I'm wrong and that I'm a stubborn prick or something, that's what I feed on.

    Srsly, though, it's not like I invented this concept of joy and sorrow. It is the basic axiom behind a lot of significant work in non-religious ethics and morality, going back decades and centuries.
    Ketherial wrote:
    net joy and sorrow, amount, etc., are quantitative terms. please clarify what you mean because i think this discrepancy in what you have posted is critical to the discussion.
    I think the problem is that I'm fluctuating between talking about what is an unattainable ideal, and what we as moral mortal creatures can achieve, without really explaining everything.

    Maximizing joy and minimizing sorrow is the ideal. Our ability to know the joys and sorrows we can cause is not perfect, though. We are able to some extent to rank and rate and compare and predict, but we have no numerical measurement or formula that is guaranteed or provable.

    I'd argue that one of the key characteristics of an appropriate moral system is that it acknowledge this imperfect state and accurately reflect our ability (and inability) to predict the outcomes of our decisions. We value virtues like bravery and patience because we know that a man with patience as his means, even if he himself can't always see the ends, is likely to make more happiness.

    If you have a very pessimistic view of man's ability to make decisions that will lead to more happiness for all, then perhaps an appropriate system will involve the Guy in the Sky who makes lightning and makes the crops grow. Work hard and don't murder or steal or eat pork, because otherwise he'll throw lightning at you and wither your crops.

    If you have a very optimistic view of our ability, then perhaps you support some form of Act Utilitarianism or Machiavellianism, whereby any sort of despicable genocide etc. can be considered moral so long as the actor believes that in the end he will achieve a greater happiness.

    Somewhere in between we get to the hard and interesting stuff, and we find reasoned principles and rules, and a reasoned boundary on how firm those rules must be in each circumstance. Hopefully we find some logical formula (like Kant's Universal Law Formulation, for example) that reasonable people can agree on as a proper process for determining those principles and boundaries. And, ultimately, we have to continually reassess man's progress and determine if our abilities with regards to understanding the conscious experience we create are actually improving, such that maybe it's ok now to let people believe the earth is round and we evolved from apes and it won't result in a spiral into total amorality.
    Anyway, judging the hobo has little or nothing to do with what the millionaire did. Let's make it a moral decision: Ketherial is presented only one of two options: he can accept 1% of a millionaire's wealth for his favorite charity, or 1% of a hobo's. Which should he take?
    Ketherial wrote:
    i thought morals were about who or what is "Gooder" not about who or what i can gain more real world value from. just because i can make more money from stealing doesnt make my choice more moral.
    I think you misread. You are accepting this money from a voluntary donor and it all goes to a charity of your choice. But you have to pick one donor. Which would be gooder of you to pick? The point is that I have taken the scenario that troubled you and repositioned the question to keep all the same actors and relevant circumstances, but remove the troublesomeness. Judging someone, if that's what you want to do, is about looking at the moral decisions they've made. Giving away $1 million was never a decision in front of the hobo, so the rich man's millions have absolutely nothing to do with the hobo's moral judgment. But put an actual moral decision in front of someone that pits the $1 vs. the $million and it becomes clearer.
    MrMister wrote:
    Doing your best simply is maximizing joy. They do not diverge.
    Yeah, they do. That's why Utilitarianism sucks.
    Ketherial wrote:
    not really that hard. you guys are being silly.
    No, I think you are strangely unwilling to even consider that morality isn't inherently about judging people. Sure, we can devise judgment with respect to morality, but nowhere is it required that a process for judging someone must be as simple as possible. The only thing required to be as simple as possible is the axiom.

    Like we've all said numerous times, if you insist on making it about judging a person's moral worth (as opposed to judging a moral decision or moral system), then that ought to be about their moral decisions, those things they can consciously control, and not their moral outcomes, because those rely on what decisions were presented to them and numerous other factors outside their control, like probability and chance and the weather and who-knows-what.
    Ketherial wrote:
    now we are getting somewhere! from the above, another rule might be something like:

    good intentions may be > good results.

    now why is that? what axiom creates that rule? joy = good certainly does not. in fact, joy = good is in conflict with such rule because joy can only ever be a result; either there is joy created or there is no joy created. if joy = good were the only axiom, joy (good result) must be > good intentions (that cause no joy).
    Wrong, it absolutely does flow from that axiom. You still are unable to get your mind away from the idea that we are talking about the entire universe, not about judging a single individual.

    Consider a group of people R and a group S, in separate universes. They are basically blank slates, with little knowledge of anything, except they have the ability to understand what I'm about to tell them, and will do their best to obey me unwaveringly.

    I tell everyone in R, "Be patient, generous, loving, brave, sympathetic, caring, just, and industrious. Even when it seems you must sacrifice happiness to do so."

    I tell everyone in S, "Try to figure out which actions will make you and everyone else the happiest and always do those."

    I come back in X amount of time and look at them. Which group do you think will be happier? Which sadder? I think R will be a fine society of joyous supermen. I think S will be either a) a tortured realm of Sodom, with unbridled hedonistic tendencies wreaking sorrow and strife, or b) somewhere on the path to figuring out all those things I told R to be.

    In the end, good intentions can be viewed as morally superior to good results if and only if those intentions, as a principle, still normally tend to produce even better results. In the end, it is still entirely about joy and sorrow.

    Yar on
  • Marblehead Johnson Registered User regular
    edited February 2007
    Yar! wrote:
    You switched the scenario on me, mid-argument, and then mixed it with a separate point, and most importantly, you're completely ignoring my responses now, which makes this all an impossible discussion.
    Yeah. Don'tcha HATE when people do that? Sweet bippity, why couldn't you bring that capacity for spontaneous intelligence to any other discussion I've had with you?

    To me, it seems the big problem here is that people are quibbling over the fundamental building blocks of the definitions of morality / ethics, which to me seems contradictory to the nature of humanity. Working out a method to render quantifiable the amount of generosity and joy produced by different people is probably the silliest thing I've ever heard of, largely because were you to actually succeed, it would change absolutely nothing anywhere, and let's put it in a timeframe... ever.

    Still, it is exceedingly interesting to read, and in a lot of cases, downright hilarious.

    EDIT: To below...

    Mock me not, Duke. Mock me not.

    Marblehead Johnson on
    Magus` wrote: »
    It's human nature to derive meaning from that something that actually lacks it in order to suit your goals.

    Dismayed By Humanity Since 1992.
  • Shinto __BANNED USERS regular
    edited February 2007
    Sweet bippity guys.

    Sweet bippity.

    Shinto on
  • Yar Registered User regular
    edited February 2007
    Yeah. Don'tcha HATE when people do that? Sweet bippity, why couldn't you bring that capacity for spontaneous intelligence to any other discussion I've had with you?
    I didn't do any of those things to you. And I don't think you and I are even having much of a discussion here until now, so what are you talking about?

    Trying to bring up a forumer's performance in another unrelated thread is another one of those tell-tale signs I told you about. As is the general practice of following around a forumer from thread to thread with the intent of trying to redeem yourself from a previous thread. Jumpin' Jehosaphat.
    To me, it seems the big problem here is that people are quibbling over the fundamental building blocks of the definitions of morality / ethics, which to me seems contradictory to the nature of humanity.
    Limed for the truth.
    Working out a method to render quantifiable the amount of generosity and joy produced by different people is probably the silliest thing I've ever heard of, largely because were you to actually succeed, it would change absolutely nothing anywhere, and let's put it in a timeframe.... ever.
    Care to explain? But anyway, my contention is that we likely can't quantify it exactly, but we nevertheless are able to make significant and believable greater-than/less-than statements.

    Yar on
  • Marblehead Johnson Registered User regular
    edited February 2007
    Hmm, two outright lies in one post... allrighty then.

    I hope you'll excuse me for not attributing too much respect to your discussions of morality. And please get over yourself... I did not "follow you around", I merely found it amusing that you're criticizing others for what you yourself were doing to me. It's a little funny.

    Marblehead Johnson on
  • Yar Registered User regular
    edited February 2007
    Hmm, two outright lies in one post... allrighty then.

    I hope you'll excuse me for not attributing too much respect to your discussions of morality. And please get over yourself... I did not "follow you around", I merely found it amusing that you're criticizing others for what you yourself were doing to me. It's a little funny.
    Please do not make posts that do nothing to contribute to the discussion. The PMs are doing just fine.

    I am interested in hearing your reasons for why the discussion of morality here doesn't deserve your respect, or why a quantification of happiness wouldn't change anything in spite of all the discussions we've had relating to that.

    Yar on
  • Elki get busy Moderator, ClubPA mod
    edited February 2007
    Hmm, two outright lies in one post... allrighty then.

    I hope you'll excuse me for not attributing too much respect to your discussions of morality. And please get over yourself... I did not "follow you around", I merely found it amusing that you're criticizing others for what you yourself were doing to me. It's a little funny.
    Well, don't do it here. It's an on-topic thread, and you should send him a PM if you want to talk about that, bread-winning, big-tits, or whatever.

    Elki on
  • Ketherial Registered User regular
    edited February 2007
    yar, you kind of make points in your line by line which i think are answered by my subsequent posts to mrmister and as such, i will not address them. if you think some point remains unaddressed, please feel free to bring it back up.
    Yar wrote:
    Ketherial wrote:
    Moral codes represent our agreements regarding how we act towards each other.
    No, those are ethics.

    you quoted incorrectly. it's easy to tell because there's a capital letter in that quote.
    Ketherial wrote:
    re: generosity
    You switched the scenario on me, mid-argument, and then mixed it with a separate point, and most importantly, you're completely ignoring my responses now, which makes this all an impossible discussion.

    To reiterate: I never made any claims about how to judge people. Starting with the axioms I've provided, there is certainly a capacity to form a system with which to judge people. But what I don't understand is why you've gone ahead and invented an obviously faulty system of judgment, and then keep insisting on attributing it to me, instead of the person who brought it up (you).

    im still not sure how and why anyone continues to state that a moral axiom does not logically conclude in a "judgment of people". can someone please clarify, either mrmister, grid or yar? dictionary dot com tells me that morals mean
    of, pertaining to, or concerned with the principles or rules of right conduct or the distinction between right and wrong.
    how can anything that pertains to "right" and "wrong" conduct ever not be a judgment of a person? unless of course we are saying that conduct can occur without an actor, which is obviously absurd.
    And, like I already said, perhaps the amount of joy created with respect to one's opportunities might be a better system of judgment, just for starters.

    but from what axiom does the opportunity qualifier spawn? if the sole axiom for any moral system is joy = good, then as ive said multiple times, "opportunity" doesnt matter.
    Ketherial wrote:
    like ive said, regardless of whether you consider it possible or not, regardless of whether we consider the situation in a vacuum or not, i find a generous poor man to be "gooder" than a stingy rich man who in actuality creates more joy. this is not a hard situation to understand.
    It is impossible to understand. How can generosity possibly exist outside of context? It can't; it has no meaning. The very nature of the word connotes interactions among people.

    ive never stated that generosity exists outside of context. im stating that it can be separated from action and result.

    also, i think your understanding of the term generosity is incorrect. generosity only indicates "feelings" harbored by individuals with respect to other individuals. the word contains no nuances with respect to actual interaction.
    generous:
    1. readiness or liberality in giving.
    2. freedom from meanness or smallness of mind or character.
    3. a generous act: We thanked him for his many generosities.
    4. largeness or fullness; amplitude.

    as you can see, generosity doesnt refer to actual giving, just the mindset involved. one can be generous without actually giving (due to inability perhaps). maybe im still not clear on what youre trying to say here.
    Ketherial wrote:
    i think this is really the main point of our disagreement. for the exact same reason why you think joy is the only axiom, i think joy cannot be the only axiom. because perception can be manipulated, it can hypothetically be created by any kind of set up (e.g. purposely developing low standards or brain in a vat). hence, experiences are only partially meaningful with respect to morality because no specific requirements can directly result from them. maximizing joy could mean killing or not killing, helping or not helping, doing or not doing. it could mean anything depending on the circumstances at hand and hence are insufficient to deal with morality. generosity as an axiom provides desired direction where as joy only provides desired in-brain chemical result.
    It's not the same. You are misunderstanding the relationship between an axiom and a system.

    no, i am stating that your axiom is insufficient as a basis for a moral system that gives us "right" answers. a reasonable moral system will require more axioms, distinct from just joy = good. just for example, we may want to create rules regarding how "good" relates to intentions or opportunity. but what axioms would such rules be based on? there must be something other than joy that we are interested in because opportunity has no direct relation to joy. there must be some reason why we think it is good to make moral decisions based on intentions, opportunity, upbringing, capability.
    I am invalidating your axioms by breaking them down and showing that they are either reliant on my less complex axioms, or else they fall apart. You, on the other hand, are challenging my axioms by adding your own stipulations to them and making them more complicated.

    no, i am not. i am providing you with situations in which your axiom cannot create the sufficient rules to come to (what we might consider) the "right" conclusion.
    For example, I've already answered the lowering standards argument. Anyone with the capacity to lower standards in that manner likely would also have the ability to instead help someone along the path to setting and achieving even higher standards, which is a greater joy than lowered standards could ever be.

    but not only is that an attempted empirical claim that can never be proven, it's also one i disagree with. i could just as easily claim that chemically induced joy maximizes joy to the highest extent possible, and that as such, the only moral course of action would be to put everyone into chemically induced ecstasy. i dont make such an argument because it is nonsense, just like your claim that higher standards lead to higher levels of joy is nonsense.

    this is another big problem with all of your responses to my hypotheticals. for example, with respect to slavery, you say, well, freeing slaves brought more overall happiness in the long run for everyone. but that's only a valid response if you have the evidence to back up such claim. you dont and you agree that we can never measure happiness on such a grand scale. when i then say, let's measure it on individual scales, you come back with, im not interested in individual experiences.

    in effect, you are assuming your own conclusion:

    i say, there were happy slaves, so if we are only interested in maximizing joy, then freeing those specific happy slaves was immoral.

    you say, but in the long run, freeing slaves was best for everyone, im not interested in just one slave. oh, by the way, we could never measure the net joy or sorrow from freeing all the slaves. so just assume that it's true. hence, freeing all the slaves, even the happy ones was moral because ive concluded that freeing them adds to net joy in the long run (but we cant measure it).

    let's just say, i am not convinced.
    As for the brain in the vat - you're either arguing total solipsism or you're making the "happy drugs" argument. Solipsism can defeat any argument any time and I'm assuming that's not where you're going. "Happy drugs" cannot achieve the same order of joy that generosity, or bravery, or teaching your son to ride a bike can.

    this quote, especially the portion in bold, is a perfect example of the point i bring up above.
    Maximizing joy is the ideal. As with any ideal, it cannot be achieved to absolute perfection by a mortal. One cannot guarantee his actions will maximize joy over any other possible action. But to a lesser extent, he could also never guarantee that he will be absolutely perfectly brave or generous, either. We could always look at the results and then theorize in hindsight what would have been an even more perfectly ideal brave or generous decision.

    i think this portion is fraught with problems. you keep trying to state that the long term result of maximizing joy for everyone is the ideal, but then say that it is impossible to measure such result for everyone in the long run. you dont find this to be problematic? more below.
    I mean, I can challenge you the same way - if bravery is the ideal, how do we measure that? We don't, we just collectively believe and judge an action as brave, just as we can collectively believe and judge which results are more joyous or sorrowful.

    see, the big difference between bravery and joy is that bravery is not a result, and hence can be measured in any specific instance and for any specific individual. can we say that someone who ditches his child when being attacked by a bear is not brave? no. but we can certainly say that someone who tries to protect their child when a bear attacks is brave (and hence good, if we consider bravery an axiomatic good). again, i dont think this is such a controversial point.
    A human can't know the absolute universal net gains and losses to joy and sorrow that will result from each action. That's why we have virtues like bravery and integrity. They don't always net the best result, but we value adherence to them regardless, because we believe that adhering to them as rules will, in the long run, across multiple circumstances and multiple lives, achieve a better world full of more happiness and less sorrow. Compared to, say, just trying to constantly calculate the net effects of actions regardless of the derived rules of virtue.

    but the belief that adhering to the rules will, in the long run, across multiple circumstances and multiple lives, lead to a better world full of happiness is based on absolutely nothing. it's in effect an irrational and bad belief which adds nothing to this discussion, because, by your own terms, we have never and can never measure the effect of one brave act on net joy or sorrow. as youve defined it, net joy or sorrow is wholly beyond our understanding and hence, absolutely meaningless.
    Sorry, let me be more clear: this isn't just about how happy or sad Ketherial or a brave knight or a hobo is. It's about happiness and sadness across lives and time. You keep trying to break it down into a microcosm where people are happy, with the implied point being that there is a larger unhappiness as a result, and you somehow think that invalidates what I'm saying.

    no, im not trying to invalidate your points, im trying to make them meaningful. because at the moment, theyre meaningless. if you say that being moral must necessarily be about maximizing joy across all lives and time, and not about any individual or measurable circumstance or situation, then what you are in effect saying is, we have no idea what moral means because we have no idea and can never have any idea what will maximize joy throughout all lives and time.
    Ketherial wrote:
    net joy and sorrow, amount, etc., are quantitative terms. please clarify what you mean because i think this discrepancy in what you have posted is critical to the discussion.
    I think the problem is that I'm fluctuating between talking about what is an unattainable ideal, and what we as moral mortal creatures can achieve, without really explaining everything.

    Maximizing joy and minimizing sorrow is the ideal. Our ability to know the joys and sorrows we can cause is not perfect, though. We are able to some extent to rank and rate and compare and predict, but we have no numerical measurement or formula that is guaranteed or provable.

    no, you've made it impossible to rank and rate and compare because the scale youve set up is necessarily infinite. how does my tiny act of giving a hobo a sandwich rank in light of the entire universe for infinity? either we do what ive been doing, which is segmenting out specific, individual and distinct decisions, periods or populations and then basing our moral decisions on them, possibly discussing them in a vacuum, or we just throw up our hands in the air because time is infinite.
    I'd argue that one of the key characteristics of an appropriate moral system is that it acknowledge this imperfect state and accurately reflect our ability (and inability) to predict the outcomes of our decisions. We value virtues like bravery and patience because we know that a man with patience as his means, even if he himself can't always see the ends, is likely to make more happiness.

    why the moral rules with respect to acknowledgement of our imperfect state? how do they arise from your axiom?
    No, I think you are strangely unwilling to even consider that morality isn't inherently about judging people. Sure, we can devise judgment with respect to morality, but nowhere is it required that a process for judging someone must be as simple as possible. The only thing required to be as simple as possible is the axiom.

    im not really about judging people per se. im about judging actions, people, things, anything, on a moral scale. either the moral system we devise has meaning or it doesnt. for it to have meaning, there must be a right and a wrong. if there is a wrong then people who choose the wrong are not as "good" as the people who choose the right. i dont understand how this can be something that is somehow distinct or separable from morality.
    Ketherial wrote:
    now we are getting somewhere! from the above, another rule might be something like:

    good intentions may be > good results.

    now why is that? what axiom creates that rule? joy = good certainly does not. in fact, joy = good is in conflict with such rule because joy can only ever be a result; either there is joy created or there is no joy created. if joy = good were the only axiom, joy (good result) must be > good intentions (that cause no joy).
    Wrong, it absolutely does flow from that axiom. You still are unable to get your mind away from the idea that we are talking about the entire universe, not about judging a single individual.
    *snip*

    please provide a logical proof showing the conclusion "intention to do good is good" which derives from the axiom joy = good. then we can talk. because the part i snipped was meaningless without some kind of real logical display. please feel free to assume logical axioms such as a = a, 2 > 1, etc.

    edit: clarity

    Ketherial on
  • Options
    KetherialKetherial Registered User regular
    edited February 2007
    yar, ive responded to your long post, but it was mostly explanatory and probably confusing. let's just go right to the heart of the issue.

    your claim is that joy = good is the sole (and sufficient) axiom for any and all moral systems (assuming the axioms of logic, of course). is that correct?

    now, i think this is not true because if there are no competing axioms, the rules that we can derive from your axiom strike me as "wrong".

    logical axioms: a = a; 2 > 1
    yar's axiom: joy = good; sorrow = bad

    assumptions: x and y lead similar lives with identical amounts of joy (let's say 10) and sorrow (let's say 10).

    x is taken into a room, where he views y. he is given the choice to kill y or to kill himself. y knows nothing about this. if he chooses to kill y, he will be instantly brainwiped so that he never suffers from what he chose (also, it's a stranger. given how much duress is involved, im not sure he would suffer anyway. or if you really, really want to, we can assume a sadistic killer who gets joy from killing which would make the equation even more lopsided in my favor.).

    a) x chooses to kill y. x continues life with 10 joy and 10 sorrow.
    b) x chooses not to kill y and kills himself instead. y continues life with 10 joy and 10 sorrow.

    both of these situations result in the same amount of joy / sorrow on the ultimate scale. but i would not consider them equally moral decisions. however, if joy = good is the only axiom, then the logical conclusion is that they are morally equal because they net identical amounts of joy / sorrow.
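    the bookkeeping can be sketched in a few lines of python (toy numbers only - "joy units" are an invention for the sake of the tally, not a claim that joy is actually measurable this way):

    ```python
    # toy bookkeeping for the hypothetical: "joy units" are invented numbers,
    # just to make the tally explicit.

    def net_joy(survivors):
        # each survivor lives out a life with the stated joy and sorrow
        return sum(joy - sorrow for joy, sorrow in survivors)

    # a) x kills y: x survives with 10 joy / 10 sorrow
    case_a = net_joy([(10, 10)])

    # b) x kills himself: y survives with 10 joy / 10 sorrow
    case_b = net_joy([(10, 10)])

    print(case_a == case_b)  # -> True: the two outcomes net out identically
    ```

    which is the whole point: the ledger cannot tell the two choices apart, yet we want to call one of them worse.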

    please justify or perhaps clarify if i havent understood or addressed your basic point.

    Ketherial on
  • Options
    jothkijothki Registered User regular
    edited February 2007
    Ketherial wrote:
    yar, ive responded to your long post, but it was mostly explanatory and probably confusing. let's just go right to the heart of the issue.

    your claim is that joy = good is the sole (and sufficient) axiom for any and all moral systems (assuming the axioms of logic, of course). is that correct?

    now, i think this is not true because if there are no competing axioms, the rules that we can derive from your axiom strike me as "wrong".

    logical axioms: a = a; 2 > 1
    yar's axiom: joy = good; sorrow = bad

    assumptions: x and y lead similar lives with identical amounts of joy (let's say 10) and sorrow (let's say 10).

    x is taken into a room, where he views y. he is given the choice to kill y or to kill himself. y knows nothing about this. if he chooses to kill y, he will be instantly brainwiped so that he never suffers from what he chose (also, it's a stranger. given how much duress is involved, im not sure he would suffer anyway. or if you really, really want to, we can assume a sadistic killer who gets joy from killing which would make the equation even more lopsided in my favor.).

    a) x chooses to kill y. x continues life with 10 joy and 10 sorrow.
    b) x chooses not to kill y and kills himself instead. y continues life with 10 joy and 10 sorrow.

    both of these situations result in the same amount of joy / sorrow on the ultimate scale. but i would not consider them equally moral decisions. however, if joy = good is the only axiom, then the logical conclusion is that they are morally equal because they net identical amounts of joy / sorrow.

    please justify or perhaps clarify if i havent understood or addressed your basic point.

    People don't inherently have more moral worth just because they aren't you. If sacrificing yourself to save another would have the same or worse consequences than letting them die, then I see nothing wrong with choosing your own life. There's only an issue when you choose to assign more moral worth to yourself than to others.

    jothki on
  • Options
    YarYar Registered User regular
    edited February 2007
    Keth - massive inlines like this do get unruly, and when I'm going through a post it's hard to remember which points you've already addressed elsewhere. Sorry.
    Ketherial wrote:
    you quoted incorrectly.
    My bad. Forward the message to the guy who said it for me plz? :)
    Ketherial wrote:
    im still not sure how and why anyone continues to state that a moral axiom does not logically conclude in a "judgment of people".
    We can likely form a system for judging people based on these axioms, if you desire to. There is no reason why it has to be the absurd one that you are proposing. I think that regardless of what axioms we're using, though, and regardless of this particular discussion or whatever, the concept of "judging," by itself, implies some notion of what was compared to what could have been. If a hobo giving away $1million does not reasonably fall under what could have been, then maybe it isn't relevant to the concept of "judgment."

    But anyway, I still don't see where or why we're all about judging people. Why? You haven't shown why you believe that an axiom of morality must necessarily be about judging a person. The axiom doesn't even define judgment. If you want to develop a system for judging people, that's a different line of reasoning altogether, and it starts with, "what do you mean, 'judge,' and for what purposes do you intend these judgments to be performed?"

    The axioms don't require that any judgment of people ever be performed, so what are your reasons for wanting judgment? I'm not outright denying the value of judgment; it's just that without first stating what "judgment" is and what its goals are, I can't logically lead us towards how to go about it under the axioms. All I can do is watch you construct your own absurd judgment systems based on the axiom, ones that don't seem to be of much perceivable value, and then claim that this somehow is a valid argument against the axiom. In short, the axiom says nothing about an individual being good or bad, and therefore while I think we could devise a method for applying it to a person, I completely disagree with how you've chosen to apply it to individuals' goodness, or that your method somehow follows from my axiom.
    Ketherial wrote:
    can someone please clarify, either mrmister, grid or yar? dictionary dot com tells me that morals mean
    of, pertaining to, or concerned with the principles or rules of right conduct or the distinction between right and wrong.
    how can anything that pertains to "right" and "wrong" conduct ever not be a judgment of a person? unless of course we are saying that conduct can occur without an actor, which is obviously absurd.
    Well, other than the obvious fact that 'judging' and 'people' weren't mentioned in that definition? I think what you are getting at is that it is only natural for us to want to form a relevant judgment system. Fine. But morality at its simplest exists even if there is never a judgment of humans or any system for judging humans.
    Ketherial wrote:
    but from what axiom does the opportunity qualifer spawn? if the sole axiom for any moral system is joy = good, then as ive said multiple times, "opportunity" doesnt matter.
    It probably spawns from the same logic wherever "judgment" spawned from. The burden is on you. To what ends are the judgments? If they are to be used in society to reward or promote or condemn or such, then I don't see them being too useful if they completely ignore circumstance. At the same time, there may be at least some value in your proposed system as well, in which we at least recognize the guy who was able to create the most total happiness, assuming we were even able to know such a thing. Again, what's the goal of your desire to judge? That will affect my answer.
    Ketherial wrote:
    ive never stated that generosity exists outside of context. im stating that it can be separated from action and result.

    also, i think your understanding of the term generosity is incorrect. generosity only indicates "feelings" harbored by individuals with respect to other individuals. the word contains no nuances with respect to actual interaction.
    Those feelings only have merit as feelings alone inasmuch as there is no current opportunity to act on them. The feelings are still impossible to even conceive of other than with respect to interactions among people aimed at creating joy. Sure, I'm happy to think of the generous heart with no one to be generous to, but "generous heart" still only has meaning with regard to the happiness he would want to make for others if he could.
    Ketherial wrote:
    as you can see, generosity doesnt refer to actual giving, just the mindset involved. one can be generous without actually giving (due to inability perhaps). maybe im still not clear on what youre trying to say here.
    Only the second definition was not completely in disagreement with you. And even then, the "mindset" is still an inconceivable existence beyond what it means with regards to what the person has done or will do when presented with opportunity.

    Look, this is getting ridiculous. I thought we were past it. There is absolutely, positively, without argument, no fucking way in hell that the concept of generosity can possibly have any meaning whatsoever beyond interactions among individuals, and specifically, people creating happiness for others as opposed to hoarding it. Even if you construct a complex hypothetical of a generous person who never gets the chance to actually be generous, that still doesn't change the fact that every single time you use the word "generous" in the hypothetical, you cannot possibly be referring to any concept other than a quality concerned with doing what one believes will create happiness in others. It has no meaning beyond that.
    Ketherial wrote:
    no, i am stating that your axiom is insufficient as a basis for a moral system that gives us "right" answers. a reasonable moral system will require more axioms, distinct from just joy = good.
    Well, that's what we're working on here. It may be insufficient as a system, but it is sufficient as an axiom of morality that the system will ultimately relate back to. You are correct that more thought and deduction and pontification and systemization is required before we can create a system for, say, judging someone, or for making moral decisions. I'm not disputing that. I am disputing that any other axiom is required as a basis for these systems. Or there might be, but nothing you've presented qualifies. I can and will continue to show that any concept you claim is a moral axiom will in fact break down into that which ultimately seeks to increase joy and decrease sorrow. It's unfortunate, because a lot of the concepts you're discussing are in fact very critical to progress on moral discussion; it's just that it will help later on if we properly acknowledge basic axioms and their relation to higher-order concepts based on them.
    Ketherial wrote:
    just for example, we may want to create rules regarding how "good" relates to intentions or opportunity. but what axioms would such rules be based on?
    Because disregarding intentions and opportunity is a poor mechanism for promoting virtue in everyone? Because all the unfortunate people in the world doing the best they can in their individual unfortunate circumstances will likely add up in aggregate to more good than any single fortunately circumstanced person ever could create, and therefore judging the individual on the net of their individual contributions, instead of recognizing circumstance, will miss the bigger picture? That's just a guess there; I'm still unclear on the intent and goal of the proposed judgment.
    Ketherial wrote:
    must be something other than joy that we are interested in because opportunity has no direct relation to joy.
    It does, I just discussed it above. You are still viciously locked into the concept of judging an individual being, or individual anything, as more important than happiness across time and population. THAT is contradictory to the axiom.
    Ketherial wrote:
    i am providing you with situations in which your axiom cannot create the sufficient rules to come to (what we might consider) the "right" conclusion.
    By adding in your own assumptions that conflict with the axiom to begin with. GIGO.
    Ketherial wrote:
    this is another big problem with all of your responses to my hypotheticals. for example, with respect to slavery, you say, well, freeing slaves gave them more happiness. but that's only a valid response if you have the evidence to back up such claim.
    I have as much evidence as we as mortals can reasonably expect to have on such a thing. You are simply outside of what is considered rational, sane thought processes if your position is that African Americans are or would be happier as a whole if they'd been kept slaves. There is no numerical measurement, but we can look at writings, history, current events, quality of life, and so forth, and figure out together as a society that yeah, they are definitely happier now.
    Ketherial wrote:
    you dont and you agree that we can never measure happiness on such a grand scale. when i then say, let's measure it on individual scales, you come back with, im not interested in individual experiences.
    I'm saying there is no quantitative measurement period, individual or otherwise, unless you are being purely hypothetical. And in the purely hypothetical, you've yet to deduce why we should focus on an individual.
    Ketherial wrote:
    you say, but in the long run, freeing slaves was best for everyone, im not interested in just one slave. oh, by the way, we could never measure the net joy or sorrow from freeing all the slaves. so just assume that it's true. hence, freeing all the slaves, even the happy ones was moral because ive concluded that freeing them adds to net joy in the long run (but we cant measure it).

    let's just say, i am not convinced.
    Read that again and think about what you're saying. Realistically. You are not convinced that slavery wasn't really a happier existence for them overall?
    Ketherial wrote:
    i think this portion is fraught with problems. you keep trying to state that the long term result of maximizing joy for everyone is the ideal, but then say that it is impossible to measure such result for everyone in the long run. you dont find this to be problematic?
    Absolutely not. At least, no more problematic than any discussion in which we know at some point we must transition from the theoretical ideal to the real-world achievable application.
    Ketherial wrote:
    see, the big difference between bravery and joy is that bravery is not a result, and hence can be measured in any specific instance and for any specific individual. can we say that someone who ditches his child when being attacked by a bear is not brave? no. but we can certainly say that someone who tries to protect their child when a bear attacks is brave (and hence good, if we consider bravery an axiomatic good).
    I don't understand the distinction. We can say someone is happy because they are smiling, and sad because they are crying, too. And we can't quantitatively measure how brave someone is, either. You've made no distinction. Here are some distinctions, though: happy/sad is less troublesome, because even though we can't exactly measure happiness, there's nevertheless not much conceivable disagreement on what happiness and sadness are, whereas people can and will disagree on exactly what bravery is, because their experiences may lead them to believe different qualitative amounts of happiness or sadness resulting from different kinds of activities that might be judged "brave." And, another distinction that I've provided 12 ways from Sunday: bravery is solely understood and valued in terms of overall resulting happiness and sadness. Not vice versa.
    Ketherial wrote:
    we have no idea what moral means because we have no idea and can never have any idea what will maximize joy throughout all lives and time.
    No, it just means that we have to do our best. I think you know very well that we have very good ideas on it. They just aren't perfectly measurable or predictable. They are qualitative and not quantitative. We can imagine what it would feel like for ourselves to experience that which we perceive others are experiencing, such as a family member being murdered. And we can compare that to what it would feel like if someone farted near us and then ran away, and we can have a qualitative analysis of the relative happiness and sorrow. We can also look at that in terms of one person or a thousand, and so forth.

    We also acknowledge many common situations in which the immediate circumstances won't reveal the larger result. As the discussion transitions from "what is the hypothetical axiom of moral ideal" to "how do we go about achieving it as non-omnipotent mortals?" then I become more and more intolerant of the absurdity you try to push, such as that we have no practical idea whatsoever what leads to happiness or sadness just because we don't have the theoretical perfect quantitative measurement/prediction. Like I said, it is precisely because of our inability to always measure and predict, or our inability to see beyond just our selves, that we adopt principles like bravery and justice and generosity that we value in and of themselves even when they don't seem to always work in every situation. But that doesn't invalidate the axiom and the ideal that they all stem from.

    Like I said, a practical system will need to, as accurately as possible, reflect exactly what we believe our ability is in that respect.
    Ketherial wrote:
    why the moral rules with respect to acknowledgement of our imperfect state? how do they arise from your axiom?

    ...please provide a logical proof showing the conclusion "intention to do good is good" which derives from the axiom joy = good. then we can talk. because the part i snipped was meaningless without some kind of real logical display. please feel free to assume logical axioms such as a = a, 2 > 1, etc.
    1) joy = good, sorrow = bad.
    2) Morality at its most basic: maximization of joy; minimization of sorrow (really, I could just leave out #1 and start here).
    3) Humans are able to make decisions that can lead to sorrow and happiness in themselves and others (meh, I guess this is another axiom? fairly self-evident even outside of morality, though).
    4) The ability of humans to precisely measure or predict joys or sorrows that have or will result from their decisions in #3 is imperfect and qualitative, and perhaps even dynamic.
    [4a) For biological/evolutionary reasons, humans also have imperfectly rational minds and a natural tendency towards self-interest and emotion even when those qualities, knowingly or unknowingly, conflict with #2.]
    (again, perhaps 4/4a constitutes another axiom? Not really, because it is arguably demonstrable, and even if it were shown to be untrue, that would not change the above axioms, it would only change the course of the below derivatives).
    5) Despite #4, humans can and do experience, express, recognize, rank, qualify, and compare various types and degrees of happiness and sadness. Humans have the ability of empathy which allows them to assess happiness and sadness in a meaningful and relevant manner consistent with #1 and #2. This ability extends theoretically to any level and into the infinite, relative to our accuracy in #4.
    (same as #3, maybe this is an axiom of morality, or maybe it's just self-evident outside of morality)
    6) a = a and 2 > 1 (you know, for the kids)
    7) Because of #4, decisions which are based solely on a human's determination of the circumstances at one level and what actions that human believes will result in the most happiness and least sorrow at that level will instead, in fact, lead to what, through #5, turns out to be a more sorrowful result on another level.
    8) Results described in #7 are perhaps altogether unavoidable in many cases due to #4.
    9) However, despite #8, and due to #5, certain rules and modes of behavior when adhered to even in apparent contradiction to #2 or #4a under immediate circumstances, nevertheless tend to achieve a greater total happiness as determined by #5 when viewed over a larger scale. The apparent "contradiction" with #2 was therefore not a contradiction and was merely an error due to #4.
    10) Because of #9, decisions that may seem to violate #2 under a microcosm of circumstances will nevertheless result in a greater support of #2 when viewed on a larger scale (this is where we get the idea of "virtue" and is sort of redundant to #9).
    11) Our imperfect ability in #4 often prevents us from predicting whether the result of an individual decision (as determined by #5) will be a #7 + #8 (net bad under #5), or rather a #9 + #10 (net good under #5).
    12) Again, though, through #5 and #9, certain principles exist which, over time and over populations, when adhered to in the decision-making process, are more likely to produce a #9/#10 than a #7/#8.
    13) Therefore, because of #4, # 11, and #12, a person who adheres to a virtuous principle in making decisions is in fact doing the moral thing and doing the most he can do to support #2, regardless of whether or not an individual decision he made can ever be determined (either through #5 or through some theoretical perfect quantification) to have been a 7/8 instead of a 9/10.

    The reason you value patience or bravery or even certain forms of "good intent" is because although people can't always predict or calculate which actions will elicit the most joy, and even though being brave or patient or having good intentions may utterly fail in individual circumstances, you nevertheless believe, or perhaps you even know, through experience, reason, history, and so forth, that adherence to those principles in relevant circumstances across multiple people and events will result in a better society with more happiness (even just under whatever qualitative imperfect mechanism you or society has at your disposal to measure such a thing) than will the proposed alternative of just trying to calculate and predict results, which we know is often doomed to fail.

    Even if someone fails their entire life to ever do anything but cause sorrow, so long as he adhered to the best principles that we believe generally to be significantly more likely than not to avoid sorrow and generate happiness, then he was just the anomaly fucked by circumstances beyond his reasonable control. But viewed outside of just the results of him as an individual, viewed from the whole, his behavior was still that which best supports joy = good; sorrow = bad.

    The virtues and principles we're talking about instantly vanish into nonsense as soon as you try to remove this relationship and claim that they are just good for no real reason. But once we accept this relationship, we're in a better position to move forward and discuss what these principles are and just how strictly one must adhere to them relative to how successful we can be in doing better by discarding them.

    Yar on
  • Options
    YarYar Registered User regular
    edited February 2007
    Holy cow, exceeded post length! I've never done that before. Anyway, if you want to ignore the above and just start over here...:
    Ketherial wrote:
    both of these situations result in the same amount of joy / sorrow on the ultimate scale. but i would not consider them equally moral decisions. however, if joy = good is the only axiom, then the logical conclusion is that they are morally equal because they net identical amounts of joy / sorrow.

    please justify or perhaps clarify if i havent understood or addressed your basic point.
    Simple. You don't consider them equally moral decisions because of the ramifications of the principles they represent to the overall happiness and sadness of a larger universe.

    Yar on
  • Options
    KetherialKetherial Registered User regular
    edited February 2007
    i'm unfortunately somewhat pressed for time today, but will address a few quick points that just came to me as i was reading your response.

    with respect to judgment, i still dont understand how you can state joy = good, without considering that a judgment. the idea of good is necessarily a judgment. as such, i have no idea what your point is regarding "why" i feel the need to make judgments. it's not necessarily that i feel the need to make a judgment; it is that the concept of good is by definition a judgment. if we want to say something is good or bad, then we must accept that a judgment is being made. i still have no idea why we are caught up on this point. i will re-read your post later, but i fail to see how good / bad can be separated from judgment.

    also, with respect to my sacrifice hypothetical, your response is insufficient. even if we were to have the killer kill the stranger instead of himself in all lives and across time, even if the principle were to apply to all identical situations throughout a larger universe, the net joy that would result would be higher than or equal to the opposite possibility.

    in other words, if xinfinite is chooser and yinfinite is victim, xinfinite choosing to kill yinfinite will always either net equal or more joy in the same situation. so either you can side with jothki (who i think understood the point better) and state that sacrificing yourself for another is a morally equal position to killing that same person (assuming identical joy and sorrow) or you can say that there is something good about sacrifice that isnt actually connected to joy or sorrow.

    one more thing: making empirical claims in proofs makes me think you and i are disconnecting on a fundamental level. it's fine to assign values or make assumptions (e.g. x receives 100 joy for doing y), but what you've done in steps 7 - end is make empirical claims which you haven't proved. i will quote some of your false steps in an edit.
    Yar wrote:
    7) Because of #4, decisions which are based solely on a human's determination of the circumstances at one level and what actions that human believes will result in the most happiness and least sorrow at that level will instead, in fact, lead to what, through #5, turns out to be a more sorrowful result on another level. unproven empirical claim
    8 ) Results described in #7 are perhaps altogether unavoidable in many cases due to #4. unproven claim, possibly empirical
    9) However, despite #8, and due to #5, certain rules and modes of behavior when adhered to even in apparent contradiction to #2 or #4a under immediate circumstances, nevertheless tend to achieve a greater total happiness as determined by #5 when viewed over a larger scale. unproven empirical claim The apparent "contradiction" with #2 was therefore not a contradiction and was merely an error due to #4.
    10) Because of #9, decisions that may seem to violate #2 under a microcosm of circumstances will nevertheless result in a greater support of #2 when viewed on a larger scale (this is where we get the idea of "virtue" and is sort of redundant to #9). unproven empirical claim
    11) Our imperfect ability in #4 often prevents us from predicting whether the result of an individual decision (as determined by #5) will be a #7 + #8 (net bad under #5), or rather a #9 + #10 (net good under #5). unproven claim, possibly empirical
    12) Again, though, through #5 and #9, certain principles exist which, over time and over populations, when adhered to in the decision-making process, are more likely to produce a #9/#10 than a #7/#8. unproven empirical claim
    13) Therefore, because of #4, # 11, and #12, a person who adheres to a virtuous principle in making decisions is in fact doing the moral thing and doing the most he can do to support #2, regardless of whether or not an individual decision he made can ever be determined (either through #5 or through some theoretical perfect quantification) to have been a 7/8 instead of a 9/10. false conclusion

    Ketherial on
  • Options
    poshnialloposhniallo Registered User regular
    edited February 2007
    I don't understand the problems with the ramifications of the joy=good axiom. I mean, fine if you think it's an unfounded axiom, but the idea is to think deeply about the possible ramifications of an action, and not just take a narrow view. That's obvious, surely?

    So if you're analysing a murder from this kind of perspective (act utilitarian?) then you've got to think about which person will create more joy in others in the future, which of course wouldn't be the murderer.

    Or am I missing something?

    poshniallo on
    I figure I could take a bear.
  • Options
    KetherialKetherial Registered User regular
    edited February 2007
    poshniallo wrote:
    I don't understand the problems with the ramifications of the joy=good axiom. I mean, fine if you think it's an unfounded axiom, but the idea is to think deeply about the possible ramifications of an action, and not just take a narrow view. That's obvious, surely?

    So if you're analysing a murder from this kind of perspective (act utilitarian?) then you've got to think about which person will create more joy in others in the future, which of course wouldn't be the murderer.

    Or am I missing something?

    youre missing the fact that sacrificing yourself for a child would be considered a better and more moral act than not sacrificing yourself for the same child, regardless of whether or not the child would cause more net joy in the long run and across lives or not.

    Ketherial on
  • Options
    poshnialloposhniallo Registered User regular
    edited February 2007
    But couldn't you say that a society where people don't sacrifice themselves for children might die out, and therefore reduce the amount of joy being caused?

    I know I haven't gone into the details of that argument, because that would take quite a while, but that seems a fairly clear supporting argument.

    And how can you know how the kid might turn out? You're not prescient. All you can do is try to maximise joy.

    poshniallo on
    I figure I could take a bear.
  • Options
    MrMisterMrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in".Registered User regular
    edited February 2007
    Ketherial wrote:
    youre missing the fact that sacrificing yourself for a child would be considered a better and more moral act than not sacrificing yourself for the same child, regardless of whether or not the child would cause more net joy in the long run and across lives or not.

    Allow me to challenge your assertion that it is better, or more moral, to give up some amount of happiness in order to cause a smaller amount of happiness in someone else.

    Suppose Dick and Jane are both people. Dick has some amount of happiness, perhaps in the form of a pair of concert tickets to a band he likes. He can sacrifice it in order to cause some amount of happiness in Jane, perhaps by hawking them and buying her a nice coat. Jane's happiness upon receipt of the coat will not, however, be as great as his would have been by going to the concert. Now, if your moral system obligates Dick to make that trade, then you run into an immediate problem: namely, Jane is also obligated to trade back, again at a loss. Jane is morally bound to sell her coat and buy Dick a couple books, which he likes less than she liked her coat, and still less than he liked his concert tickets.

    Under this conception of morality, when people are on their best behavior they squander their time and resources on gifts for each other, and no one ever actually enjoys anything. That is not, in fact, a better or more moral use of their time, and is evidence that you're confused.
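    The regress can be sketched numerically. This is only an editor's toy model with invented joy values (the 11 and the loss of 1 per trade are not from the thread), showing how a chain of obligated loss-making gifts erodes total joy:

    ```python
    # Toy model of the objection: if morality obligates every transfer that
    # converts the giver's joy into a smaller joy for the receiver, then
    # repeated "best behavior" strictly shrinks the joy in circulation.
    # All numbers are invented for illustration.

    def obligated_trades(initial_joy, loss_per_trade, trades):
        """Track the joy held after each obligated, loss-making gift."""
        joy = initial_joy
        history = [joy]
        for _ in range(trades):
            joy -= loss_per_trade  # the receiver values the gift less than the giver did
            history.append(joy)
        return history

    # Dick's tickets are worth 11 joy; each re-gifting loses 1 joy.
    print(obligated_trades(11, 1, 5))  # [11, 10, 9, 8, 7, 6]
    ```

    Each pass through the loop is one obligated trade; the sequence only ever goes down, which is the point of the objection.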

    MrMister on
  • Options
    GlyphGlyph Registered User regular
    edited February 2007
    Now let's say Jane decides to buy Dick a chain to go with his prized pocket watch, given to him by his father. To raise the funds, she has her hair cut off and sold to make a wig. Meanwhile, Dick decides to sell his watch to buy Jane a beautiful set of combs for her lovely, knee-length hair...

    Glyph on
  • Options
    YarYar Registered User regular
    edited February 2007
    Keth: You are still leaping straight from joy = good right into Act Utilitarianism. I thought you acknowledged pages ago that this is not what I am advocating. There are other concepts in between. The most important one is that this isn't just about you, or another single individual.
    Ketherial wrote:
    with respect to judgment, i still dont understand how you can state joy = good, without considering that a judgment. the idea of good is necessarily a judgment. as such, i have no idea what your point is regarding "why" i feel the need to make judgments. it's not necessarily that i feel the need to make a judgment; it is that the concept of good is by definition a judgment. if we want to say something is good or bad, then we must accept that a judgment is being made. i still have no idea why we are caught up on this point. i will re-read your post later, but i fail to see how good / bad can be separated from judgment.
    You seem to be altering your stance at this point. Sure, you can generalize the word "judge" until it encompasses just about anything. You were specifically trying to make this about which person was more moral. And as I've said every fucking time we go through this: yes, we can judge people if you want. If you think that judgment follows naturally from morality, fine. But there is no reason why we have to use your shitty strawman judgment system. There's no reason why we should be forced to isolate our judgment to one person or one decision in such a way as to produce an absurd judgment. Or, at least, you have not shown why we should be forced to, other than to keep claiming that my axioms require it. That's an unfounded empirical claim.
    Ketherial wrote:
    also, with respect to my sacrifice hypothetical, your response is insufficient. even if we were to have the killer kill the stranger instead of himself in all lives and across time, even if the principle were to apply to all identical situations throughout a larger universe, the net joy that would result would be higher than or equal to the opposite possibility.
    I missed it until now. It is an absurd hypothetical, involving mind-wipe powers and such. More importantly, you purposefully constructed it in such a way as to be utterly meaningless, because that is the only way it achieves your goal. Anyway, the larger universe I'm talking about that values sacrifice in other circumstances is the one and only reason you want to find this person moral. How else otherwise? By what measure is one life worth more than the other if you purposefully constructed it so that there cannot be any difference? How do you judge y if he allows x to kill himself? It is meaningless. You value sacrifice because you know it is an immensely powerful mechanism for achieving greater good. Sacrifice, be it sacrificing the party tonight for no hangover tomorrow, or sacrificing your life for the life of the village, or whatever, is one of the most powerful and complicated tools for increasing overall joy. So we value it immensely, even if it doesn't always succeed.
    Ketherial wrote:
    and state that sacrificing yourself for another is a morally equal position to killing that same person (assuming identical joy and sorrow) or you can say that there is something good about sacrifice that isnt actually connected to joy or sorrow.
    1) It assumes omnipotence. I disagree with that premise. 2) See above about sacrifice. That is the only reason you value it.
    Ketherial wrote:
    but what you've done in steps 7 - end are make empirical claims which you haven't proved. i will quote some of your false steps in an edit.
    No, I haven't made any empirical claims. 1 and 2 are axioms, 3 and 4 are arguably empirical claims, but I acknowledged that and I can argue their merits if needed, and they don't change the axiom. The rest, 7 - end, all follows precisely and logically. Show me specifically where you think it doesn't (your claims of "this is empirical" don't cut it). #8 was a gimme, it isn't required in the argument.
    Ketherial wrote:
    youre missing the fact that sacrificing yourself for a child would be considered a better and more moral act than not sacrificing yourself for the same child, regardless of whether or not the child would cause more net joy in the long run and across lives or not.
    You're missing the fact that your "whether or not" there at the end is actually a question you've already answered for yourself and therefore your point is moot. The presumption most hold is that a young child, on average, has a much greater potential for happiness ahead of him than does an adult. That's why we sacrifice. That's exactly why I'd die for my son. There are joys in life I've already shat on that he hasn't even conceived of yet. Maybe he'll enjoy life and maybe he won't, but the general belief most people hold is that a child has more joy ahead of him than an adult.

    Yar on
  • Options
    AcidSerraAcidSerra Registered User regular
    edited February 2007
    I would actually say that the reason you are having such a hard time defending your single axiom, is because it has an inherent flaw which has been touched on but not thought over in greater detail.

    The flaw is actually #1, Joy = Good, Sorrow = bad. If this were true then as has been said the best thing to do would be to get everyone high on opiates and let the world devolve into a happy-go-lucky drug-yourself-to-death marathon. This would bring about perfect joy for a time then result in the entire race's destruction and therefore cause no sorrow to ever again be an option. If the race did not self-destruct they would continue on in a drug haze never questioning right wrong good or evil, because they would all be in a perpetual state of joy. From the responses I have seen to this, this is decided to be a negative result despite being the best course overall.

    So having established that Joy = Good and Sorrow = bad is not a self evident truth, nor a universally accepted rule, might I introduce you to what I call the 'spinning axiom'.

    The spinning axiom is anchored on a single concept. All things are inherently equal. From this you may have infinite amounts of seemingly contradictory truths. Joy = good, sorrow = good, freedom = good, order = good, individuality = good, society = good, and alternately, joy = bad, sorrow = bad, freedom = bad, order = bad, etc... you get the picture.

    What happens in a spinning axiom is that one member of the pair is on top, the bad position, the other is on bottom, the good position. As something approaches the top, it becomes heavier, as it approaches the bottom it becomes lighter. Therefore if you strive for joy, the more you get of it, the heavier it becomes, and now sorrow becomes good. It could be for appreciation of what you have, or to wake you up from a selfish state, since joy is easy to lose yourself in. It can also be to grow and progress so that you may attain a better joy the next go around.

    Basically, this creates a system whereby nothing is set in stone, all definitions of good or bad are realistically defined by where you are here and now. But is also in some way concretely measurable. If your order is to the point of stifling freedom, that is bad, if your freedom is to the point of destroying order, this is bad. If your joy is to the point where you're a mindless vegetable, this is bad, if your sorrow is to the point where you refuse to be anything BUT a vegetable, this is also bad. In a very sad situation, joy is a good thing, in a very joyful situation, sadness is a good thing.

    Your personal job is to attempt to balance these in your normal life and decide for yourself which you prefer and seek after. Your societal job is to attempt to balance these and decide for yourselves which you seek after. =)

    This is some pretty heavy stuff and I may not have explained it the best, but if you want any clarification, I'll try to help.

    AcidSerra on
  • Options
    NavocNavoc Registered User regular
    edited February 2007
    AcidSerra wrote:
    I would actually say that the reason you are having such a hard time definding your single axiom, is because it has an inherent flaw which has been touched on but not thought over in greater detail.

    The flaw is actually #1, Joy = Good, Sorrow = bad. If this were true then as has been said the best thing to do would be to get everyone high on opiates and let the world devolve into a happy-go-lucky drug-yourself-to-death marathon. This would bring about perfect joy for a time then result in the entire race's destruction and therefore cause no sorrow to ever again be an option. If the race did not self-destruct they would continue on in a drug haze never questioning right wrong good or evil, because they would all be in a perpetual state of joy. From the responses I have seen to this, this is decided to be a negative result despite being the best course overall.

    So having established that Joy = Good and Sorrow = bad is not a self evident truth, nor a universally accepted rule, might I introduce you to what I call the 'spinning axiom'.

    *snip*

    I admit I haven't fully read the thread yet, so I apologize if I'm asking something already covered, but I fail to see how your scenario disproves the fundamental axiom of Joy=Good and Sorrow=Bad.

    Your description does not seem to maximize joy at all, in fact. While entering a drug-induced stupor is an experience characterized by joy, it is not something most consider a desirable state to be in permanently. This seems likely to be due to the experiences such a state denies. While it is a very good method of increasing joy, it necessarily denies access to other sources of joy; that makes it, overall, an undesirable situation. Being able to fully contemplate issues, such as joy and sorrow, is an essential factor of human enjoyment.

    Maybe I'm misunderstanding, but it seems to me that while the scenario increases joy in a "physical" or immediate sense, it denies a significant amount of joy that demands full consciousness and the capability of thought. This denied joy is obviously seen as of greater value, and thus one would reject the situation in an effort to maximize joy, in accordance with the original axiom.

    I feel I probably missed something in your description though, or misunderstood your point, so any clarification would be appreciated. I enjoy thinking about the subject, despite how poorly I may understand it.

    Navoc on
  • Options
    IncenjucarIncenjucar VChatter Seattle, WARegistered User regular
    edited February 2007
    In summary: An assertion that joy!=euphoria.

    Pleasure vs. happiness, and so forth.

    Incenjucar on
  • Options
    KetherialKetherial Registered User regular
    edited February 2007
    MrMister wrote:
    Allow me to challenge your assertion that it is better, or more moral, to give up some amount of happiness in order to cause a smaller amount of happiness in someone else.

    Suppose Dick and Jane are both people. Dick has some amount of happiness, perhaps in the form of a pair of concert tickets to a band he likes. He can sacrifice it in order to cause some amount of happiness in Jane, perhaps by hawking them and buying her a nice coat. Jane's happiness upon receipt of the coat will not, however, be as great as his would have been by going to the concert. Now, if your moral system obligates Dick to make that trade, then you run into an immediate problem: namely, Jane is also obligated to trade back, again at a loss. Jane is morally bound to sell her scarf and buy Dick a couple books, which he likes less than she liked her scarf, and still less than he liked his concert tickets.

    Under this conception of morality, when people are on their best behavior they squander their time and resources on gifts for each other, and no one ever actually enjoys anything. That is not, in fact, a better or more moral use of their time, and is evidence to the fact that you're confused.

    you say im confused and yet your whole analysis seems to have no connection with reality.

    dick sells concert tickets to get jane a coat that she really wants (jane +10 joy).

    dick goes to concert by himself (dick +11 joy).

    which dick do we think is a better boyfriend? which one do you want to be your boyfriend? if creating joy were the only issue, then we would be obligated to say "going to concert dick" is the better boyfriend. but we dont because, like ive been saying this entire time, joy is not the only consideration for good.

    the situation is more accurately reflected as such:

    dick sells concert tickets to get jane a coat that she really wants (jane +10 joy, dick +2 sacrifice; sacrifice / joy = good; aggregate good = 12).

    dick goes to concert by himself (dick +11 joy, dick +0 sacrifice; aggregate good = 11).

    and that's why we choose "sacrificing dick" to be our boyfriends and not "go to concert dick" because we think sacrifice for others is good in and of itself.
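    the two accountings above can be sketched side by side. this is just an editor's illustration using the numbers from the post (+10, +11, +2); the "sacrifice counts" flag is the only thing being varied:

    ```python
    # Sketch of the two accountings: under joy-only, concert-dick wins (11 > 10);
    # once sacrifice is counted as a good in itself, coat-dick wins (12 > 11).
    # The weights are the ones given in the post; the model itself is illustrative.

    def aggregate_good(joy, sacrifice, sacrifice_counts):
        """Sum up 'good', optionally treating sacrifice as good in itself."""
        return joy + (sacrifice if sacrifice_counts else 0)

    coat_dick = dict(joy=10, sacrifice=2)     # jane +10 joy, dick +2 sacrifice
    concert_dick = dict(joy=11, sacrifice=0)  # dick +11 joy, no sacrifice

    for axiom in (False, True):
        coat = aggregate_good(**coat_dick, sacrifice_counts=axiom)
        concert = aggregate_good(**concert_dick, sacrifice_counts=axiom)
        print(f"sacrifice counts as good = {axiom}: coat={coat}, concert={concert}")
    ```

    with the flag off, the joy-only axiom is forced to prefer concert-dick; flipping it on is what reverses the ranking, which is the disagreement in one line.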

    Ketherial on
  • Options
    IncenjucarIncenjucar VChatter Seattle, WARegistered User regular
    edited February 2007
    Dude.

    Altruistic acts create joy for both parties.

    Incenjucar on
  • Options
    KetherialKetherial Registered User regular
    edited February 2007
    Yar wrote:
    Keth: You are still leaping straight from joy = good right into Act Utilitarianism. I thought you acknowledged pages ago that this is not what I am advocating. There are other concepts in between. The most important one is that this isn't just about you, or another single individual.

    i think im okay with everything you are saying above but im not sure what it is specifically that i am doing that is leaping straight from joy = good to act utilitarianism. for example, when i say something like "assuming joy = good is the only axiom upon which we base our moral systems leads to results such as *example*", have i done what you are complaining about? if so, explain to me why this is problematic. to me, claiming that joy = good is the sole axiom leads to certain logical consequences, many of which i object to.
    Ketherial wrote:
    with respect to judgment, i still dont understand how you can state joy = good, without considering that a judgment. the idea of good is necessarily a judgment. as such, i have no idea what your point is regarding "why" i feel the need to make judgments. it's not necessarily that i feel the need to make a judgment; it is that the concept of good is by definition a judgment. if we want to say something is good or bad, then we must accept that a judgment is being made. i still have no idea why we are caught up on this point. i will re-read your post later, but i fail to see how good / bad can be separated from judgment.
    You seem to be altering your stance at this point.

    no, im not. ive had this stance since the beginning.
    Sure, you can generalize the word "judge" until it encompasses just about anything. You were specifically trying to make this about which person was more moral. And as I've said every fucking time we go through this: yes, we can judge people if you want. If you think that judgment follows naturally from morality, fine.

    okay, we agree.
    But there is no reason why we have to use your shitty strawman judgment system. There's no reason why we should be forced to isolate our judgment to one person or one decision in such a way as to produce an absurd judgment. Or, at least, you have not shown why we should be forced to, other than to keep claiming that my axioms require it. That's an unfounded empirical claim.

    but the judgment system which you call a shitty strawman is the logical consequence of your claim that joy = good is the sole axiom (e.g. whether or not more joy is created is the only factor from which we can make moral judgments).
    Ketherial wrote:
    also, with respect to my sacrifice hypothetical, your response is insufficient. even if we were to have the killer kill the stranger instead of himself in all lives and across time, even if the principle were to apply to all identical situations throughout a larger universe, the net joy that would result would be higher than or equal to the opposite possibility.
    I missed it until now. It is an absurd hypothetical, involving mind-wipe powers and such. More importantly, you purposefully constructed it in such a way as to be utterly meaningless, beacuse that it the only way it achieves your goal. Anyway, the larger universe I'm talking about that values sacrifice in other circumstances is the one and only reason you want to find this person moral. How else otherwise? By what measure is one life worth more than the other if you purposefully constructed it so that there cannot be any difference? How do you judge y if he allows x to kill himself? It is meaningless.

    no, it's not meaningless, it's a purposely constructed equation, based on your axiom, that creates a logical result that we are inclined to disagree with. it is, in that sense, extremely meaningful. the point is, if your axioms cannot be disagreed with, then there would be no possible way to create any equations where a disagreeable result occurs. even without the mind wiping or the buttons (all devices used to distill the main issue), we can casually consider the situation where a soldier who smothers a grenade with his body to save his buddy is considered more moral than one who ditches his pal. and when that situation is placed in front of us for consideration, i would argue that we dont give a damn about how much net joy or sorrow results. we see a guy throwing his own life away to save another and we see a guy running and we say "motherfucker". even if we could definitively prove that the guy who ran away would net more joy into the world by living than by sacrificing himself for the other guy, i still think we would view him less positively than one who was willing to make the sacrifice.
    You value sacrifice because you know it is an immensely powerful mechanism for achieving greater good. Sacrifice, be it sacrificing the party tonight for no hangover tomorrow, or sacrificing your life for the life of the village, or whatever, is one of the most powerful and complicated tools for increasing overall joy. So we value it immensely, even if it doesn't always succeed.

    but i think the problem with your claim (and the logical consequences of such claim: necessarily linking sacrifice to joy) are that we could post facto look at a situation, measure the resulting joy, and then post facto decide whether or not any such decision was moral (e.g. he sacrificed, but didnt create more joy, hence such sacrifice was not moral). of course this is not always possible in real life, but it is certainly possible in many situations.

    for example, i could, hoping for the best, provide some advice to a friend. the friend, following the advice, ends up in a bad situation where much sorrow is created. does that mean my advice to him was immoral? of course not. but if we go back and consider what wouldve happened had i not given such advice, we could easily imagine that the friend would not have had to undergo such sorrow. how can i consider such a decision to be moral, even though it clearly caused more sorrow in the long run? and yet, we do consider it moral because like i said, joy is not the only thing we consider.
    Ketherial wrote:
    and state that sacrificing yourself for another is a morally equal position to killing that same person (assuming identical joy and sorrow) or you can say that there is something good about sacrifice that isnt actually connected to joy or sorrow.
    1) It assumes omnipotence. I disagree with that premise. 2) See above about sacrifice. That is the only reason you value it.

    im unsure where we are trying to go with this. more at the end.
    Ketherial wrote:
    but what you've done in steps 7 - end are make empirical claims which you haven't proved. i will quote some of your false steps in an edit.
    No, I haven't made any empirical claims. 1 and 2 are axioms, 3 and 4 are arguably empirical claims, but I acknowledged that and I can argue their merits if needed, and they don't change the axiom. The rest, 7 - end, all follows precisely and logically. Show me specifically where you think it doesn't (your claims of "this is empirical" don't cut it). #8 was a gimme, it isn't required in the argument.

    when you state that one action will lead to more joy than another action, that is an empirical claim (that also happens to be one that cannot be proven). all such claims must be eliminated from a logical proof.
    Ketherial wrote:
    youre missing the fact that sacrificing yourself for a child would be considered a better and more moral act than not sacrificing yourself for the same child, regardless of whether or not the child would cause more net joy in the long run and across lives or not.
    You're missing the fact that your "whether or not" there at the end is actually a question you've already answered for yourself and therefore your point is moot. The presumption most hold is that a young child, on average, has a much greater potential for happiness ahead of him than does an adult. That's why we sacrifice. That's exactly why I'd die for my son. There are joys in life I've already shat on that he hasn't even conceived of yet. Maybe he'll enjoy life and maybe he won't, but the general belief most people hold is that a child has more joy ahead of him than an adult.

    i think you are jumping around in circles between possibilities and reality, principles and actual result which is why i think you havent been able to make a real point. let's go with some easy questions:

    slavery:
    moral or immoral?
    immoral for everyone or only for those where no joy results?
    why?
    if your answer is based on how much joy results, how do you know how much joy resulted?
    how is it measured?
    when is it measured (e.g. at the time of liberation? 50 years later? 100 years later? at the end of time?) and why do you measure it at such time?
    who was it measured for (the slaves and owners? the world? the universe? alternate universes?) and why?

    i think we need to set a clear scope on what we are trying to discuss. you are moving around too much between this and that for me to be able to really respond.

    just for your reference, i've provided my answers below:

    slavery:
    moral or immoral? immoral.
    immoral for everyone or only for those where no joy results? immoral for everyone, including the owners who kept their slaves happy.
    why? because freedom is an axiomatic good that must be granted to everyone, regardless of how much joy / sorrow such freedom might cause.
    if your answer is based on how much joy results, how do you know how much joy resulted? it's not. measuring freedom is not easy either, but fortunately it is easy to compare (being a slave is less free than not being a slave).
    how is it measured? whether or not an individual can act according to their own will.
    when is it measured (e.g. at the time of liberation? 50 years later? 100 years later? at the end of time?) and why do you measure it at such time? measured at the end of each affected individual's life. secondary and tertiary effects (such as how outraged someone might be at hearing that slaves existed) may be considered but hold infinitely less weight than those actually affected by the events (to avoid the cry baby problem).
    who was it measured for (the slaves and owners? the world? the universe? alternate universes?) and why? measured for all actually affected (but not measured for those who are "affected" by "principles" because that's too nebulous).

    Ketherial on
  • Options
    KetherialKetherial Registered User regular
    edited February 2007
    Incenjucar wrote:
    Dude.

    Altruistic acts create joy for both parties.

    dude, in situations where sacrifice creates more joy for the sacrificing party than had he not sacrificed, the act would not be considered a sacrifice.

    Ketherial on
  • Options
    MrMisterMrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in".Registered User regular
    edited February 2007
    Ketherial wrote:
    dick sells concert tickets to get jane a coat that she really wants (jane +10 joy).

    dick goes to concert by himself (dick +11 joy).

    which dick do we think is a better boyfriend? which one do you want to be your boyfriend?

    Making Dick and Jane into a couple and then asking which is a better date confounds the issue: after all, if I were selfish I would probably want a partner who bought me nice things instead of nicer things for himself. That doesn't mean it's better, in general, that we buy others nice things instead of nicer things for ourselves.
    the situation is more accurately reflected as such:

    dick sells concert tickets to get jane a coat that she really wants (jane +10 joy, dick +2 sacrifice; sacrifice / joy = good; aggregate good = 12).

    dick goes to concert by himself (dick +11 joy, dick +0 sacrifice; aggregate good = 11).

    The point of my analysis is as follows: you say that Dick buying Jane the coat is better than him keeping his tickets, because trading in some happiness to realize the abstract value of sacrifice is a good deal. However, you fail to see that this is an iterable step. Under that very definition of better, it is also better that Jane sell the coat and buy something for Dick, even if he will enjoy it less than she would enjoy the coat.

    By deciding that the first transaction is good, Dick's present to Jane, you also decide that the second transaction is good, Jane's present back to Dick. Furthermore, by the rules of the hypothetical we can see that the net result of these two transactions is that instead of concert tickets Dick has something that will make him less happy. Supposedly, this is balanced out by the fact that both Dick and Jane got the chance to sacrifice, which is a good unto itself. But this is not intuitive at all: rather than ponging increasingly bad presents back and forth Dick and Jane would be better served by actually going to a concert or keeping a coat.
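    The iteration can be sketched with toy numbers (a rough sketch only: the joy and sacrifice values below are invented for illustration, extrapolating from Ketherial's own +10/+11/+2 figures):

    ```python
    # Toy model of the iterated-gift argument: each swap trades the
    # giver's higher enjoyment for the receiver's lower enjoyment,
    # plus a fixed "sacrifice bonus" under the proposed accounting.
    def aggregate_good(realized_joy, sacrifice_value):
        # proposed rule: good = joy actually produced + value of the sacrifice itself
        return realized_joy + sacrifice_value

    # Round 0: Dick keeps the tickets (joy 11, no sacrifice).
    keep = aggregate_good(11, 0)    # 11
    # Round 1: Dick buys Jane the coat (her joy 10, sacrifice worth 2).
    give1 = aggregate_good(10, 2)   # 12
    # Round 2: Jane sells the coat to buy Dick something he enjoys
    # less (say joy 9), again earning the sacrifice bonus.
    give2 = aggregate_good(9, 2)    # 11
    # Round 3: and so on -- each round the realized joy drops again.
    give3 = aggregate_good(8, 2)    # 10

    print(keep, give1, give2, give3)
    ```

    Under this accounting round 1 beats keeping the tickets, but by round 3 the pair is worse off than if no one had sacrificed at all, which is exactly what makes the step iterable into absurdity.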

    Furthermore, your notation is fairly illegible.

    MrMister on
  • Options
    VicVic Registered User regular
    edited February 2007
    Everyone wrote:
    WALLS OF TEXT

    Alright, I read some of it. Not bloody all though.

    I too used to simplify the moral "good" to be a pure pursuit of maximum Happiness and minimum Sorrow, but in the end I found this somewhat lacking due to some of the counterarguments that are commonly mentioned in connection to this claim. One is that if maximum enjoyment is the goal, then why not just plug everyone matrix-style into a big orgasm computer, or just drug them into a permanent state of euphoria? While these are just "examples", there is no denying their relevance.

    Happiness is not enough in itself, and while it is an admirable goal in a normal society such as our own, and certainly a good cornerstone of any moral system, more is needed.

    In her lifetime, a baby learns, evolves, and grows as a human being through both happiness and suffering. This growth is key, and it is what makes us truly human. To truly grow into a decent person, one needs more than selfless devotion and love, or having one's every need taken care of. One needs stimulation, challenges, and a wealth of experiences and information to learn from and enjoy.

    The pursuit of happiness is an admirable goal to be sure, but we must strive for something greater, and the only way to reach this is to develop human society into something where people, through new experiences and open-minded, constructive interactions with each other, can further themselves. The only other option is stagnation and decadence.

    So, uh, I guess what I am saying is that while life has a value in itself, what we must strive for is the furthering of mankind itself, because we are the only beings on this planet (or in the universe, for all we know) that can possibly reach a purpose higher than the barely self-justifying clinging to existence that all life is originally programmed for.

    Yeah.

    I probably wrote this too fast, I might have to re-write or elaborate on it. Another point of interest is that human psychology evens out positive and negative influences to a rather high degree, meaning that the "bean-counting" of happiness that everyone seems to think is the practical point of the greater happiness philosophy is too simplistic. Simply put, happiness is not a currency and does not work as one.

    Vic on
  • Options
    YarYar Registered User regular
    edited February 2007
    AcidSerra wrote:
    I would actually say that the reason you are having such a hard time defending your single axiom is that it has an inherent flaw which has been touched on but not thought over in greater detail.
    No, it has been touched on, before and since your post. As you are already indicating by the way you wrote it, we would not be happier in such a universe. We'd likely all be dead. Or at least unable to experience love and triumph and all those other joys that, at least on some level, are even more pleasurable to experience than opiates.
    Ketherial wrote:
    dick sells concert tickets to get jane a coat that she really wants (jane +10 joy, dick +2 sacrifice; sacrifice / joy = good; aggregate good = 12).

    dick goes to concert by himself (dick +11 joy, dick +0 sacrifice; aggregate good = 11).
    This is what I'm talking about when I say you jump right into Act Utilitarianism. This is the fundamental misunderstanding I'm trying to get past. You want to isolate my axiom so that it applies to only the subset of circumstances in which you can show a net loss. IT APPLIES TO EVERYTHING!! You cannot choose a set of circumstances and then purposefully cut off the larger meaning. You cannot just limit this to the concert and the coat and erase what such sacrifice represents in the big picture. You yourself only value that sacrifice because of what similar sacrifices generally represent to the overall happiness of us all as a species, so it is unfair for you to try and cut all of that out from your equation when you try to challenge the axiom.

    What is the natural implication of your point? Isn't this your implication: "imagine a world where no one ever sacrificed like that... imagine what a sorrowful place that would be..."

    The sacrifice creates a stronger bond of love for both of them, just for example, which is a joy greater than any concert. I'm sorry if I sound like I'm arrogantly reading your mind, but yes, that is the only possible reason you value the sacrifice. Despite the strawman in which you calculate a certain subset of circumstances to be a net negative, your implication is nevertheless that we are all overall happier when people do things like you describe. It is not an axiom. I can and always will break it down to where sacrifice is meaningless and increased joy is the only measure. Sacrifice is a principle based on the axiom, as are all virtues.
    Ketherial wrote:
    i think im okay with everything you are saying above but im not sure what it is specifically that i am doing that is leaping straight from joy = good to act utilitarianism. for example, when i say something like "assuming joy = good is the only axiom upon which we base our moral systems leads to results such as *example*", have i done what you are complaining about?
    Yes, because you never created a moral system. All you did was try to apply the axiom directly and in a narrow manner to an individual circumstance, and refuse to admit that a different view of such circumstances that doesn't directly apply the axiom in such a way will nevertheless support the axiom even more effectively.
    Ketherial wrote:
    if so, explain to me why this is problematic. to me, claiming that joy = good is the sole axiom leads to certain logical consequences, many of which i object to.
    Because all of the "consequences," if you would continue to follow them logically, would eventually end with, "and wouldn't that be even more sorrowful?" which just retroactively proves that this doesn't follow from the axiom to begin with.
    Ketherial wrote:
    but the judgment system which you call a shitty strawman is the logical consequence of your claim that joy = good is the sole axiom (e.g. whether or not more joy is created is the only factor from which we can make moral judgments).
    Except, as I and others have pointed out, it absolutely does not follow. More joy is the axiom.

    IF our ability to create the most joy turns out actually to result from following principles which only succeed in creating the most joy sometimes, or which only succeed in doing so in aggregate as opposed to individually, and, sometimes, without our ability to accurately predict every result, these principles will nevertheless fail to produce the greatest joy in a microcosm of circumstances, THEN it only follows that a proper judgment system would take this into account and judge a person based on following said principles, at least to some extent, since that would be what, as far as the person knew and was able to affect the outcome, was most likely to result in the greater joy, or what most contributed to the greater joy of the universe overall, regardless of any failure in the net outcome of the specific microcosm.

    Note the IF and the THEN. There are no empirical claims, just an IF/THEN. I believe you support the IF, and I'm trying to show how the THEN is directly supportive of my axiom. The IF maybe isn't a solid truth. Hypothetically, we can say it is or isn't a given. In reality, we can say it is a general belief we've reasoned and experienced. Either way, the THEN is as strong as the IF.

    The crux of your complaint is this: Person A goes around trying to calculate direct net joys and sorrows of actions while completely disregarding any notion of higher principles of virtue. Person B follows principles like integrity, sacrifice, generosity, patience, and so forth, and does so perhaps even blindly. Who's likely to create more joy for himself and others? You keep claiming that Person A is what follows from my axioms, and yet Person B is the one who will most likely create more joy and less sorrow, and you don't realize why that is a direct contradiction.
    Ketherial wrote:
    no, it's not meaningless, it's a purposely constructed equation, based on your axiom, that creates a logical result that we are inclined to disagree with.
    The logical extension of why you are inclined to disagree with it follows more directly from the axiom than does the equation that you think follows from the axiom. Why are you inclined to disagree with the result?
    Ketherial wrote:
    i would argue that we dont give a damn about how much net joy or sorrow results. we see a guy throwing his own life away to save another and we see a guy running and we say "motherfucker".
    Yeah, using a better judgment system than the crappy one you created that no one here ever agreed followed from the axiom.
    Ketherial wrote:
    even if we could definitively prove that the guy who ran away would net more joy into the world by living than by sacrificing himself for the other guy, i still think we would view him less positively than one who was willing to make the sacrifice.
    Having an ability to 'prove' such a thing would violently alter the entire notion of "morality" in a practical sense and this whole discussion would change - however, the axioms themselves would still be the same.
    Ketherial wrote:
    slavery:
    moral or immoral? immoral.
    immoral for everyone or only for those where no joy results? immoral for everyone, including the owners who kept their slaves happy.
    why? because freedom is an axiomatic good that must be granted to everyone, regardless of how much joy / sorrow such freedom might cause.
    Because granting freedom on principle regardless of some misconstrued prediction that slavery might actually be better for certain individuals is what we recognize as generating more happiness for all in the long run. Whether it's antebellum slave-owners arguing that the slaves are better off this way, Hitler striving for a master race (omg godwinz), or the robots in AI enslaving humans for their own good, we recognize in real life and in hypotheticals that trying to construct a calculated justification of joy and sorrow that ignores higher principles of virtue runs a very strong risk of actually creating the worst kinds of sorrow.
    Vic wrote:
    <some crap>
    No. Me being really happy does not make sorrow better and better. That is absurd. Your thesis/antithesis/synthesis approach is valid for certain concepts, but its moral value still relies on an axiom of happiness and sadness.

    Yar on
  • Options
    AcidSerraAcidSerra Registered User regular
    edited February 2007
    So, if euphoria != joy, then Joy must come from free thought, and free thought can only be found by not being in a state of constant euphoria, and therefore you must be in a state that includes pain. But I'm sure you would argue that pain != sorrow because sacrificing is a reward in and of itself...

    So if I'm reading this right... Joy = a greater state of being happy that is wholly undefined and infinitely subjective. Sorrow = a state of being that is bad in that it is not Joy, and therefore a lesser state of being.

    And the Joy of all = greater than the joy of the one and the one should be happy to sacrifice their joy for the greater joy since sacrifice is a reward in and of itself.

    So... the way I read this, your fundamental argument can actually be broken down to: Good == Good && Good != Bad && Good > Bad. Meaning that so long as you define Joy, a superfluous term that you define as a higher state of happiness and love, to equal Good, you will always be right in any argument, because it is an axiom. However, it breaks down in a case where Sorrow == Good, i.e. sadism and masochism, unless you redefine that in these circumstances Sorrow == Joy, and then it becomes true again, despite the fact that unless you are a masochist you don't know whether Sorrow == Joy or not. And since you can't prove that sadism produces Joy for both parties, you write them off as a minority and therefore inconsequential, despite the fact that your axiom cannot be true so long as they exist. So long as they exist, Good remains variable and open for interpretation, and Joy, despite being what the majority seek, does not always equal Good, and therefore cannot be an axiom.

    So in short terms.

    So long as Good == Good && Good != Bad && Good > Bad && Joy == Good
    you are right.
    IF Good == Good && Good != Bad && Good > Bad && Joy != Good
    you are wrong.

    Basically, I find it rather presumptuous of you to say that Joy is always equal to Good simply because you personally seek it. An axiom is either inherently true or a standard which can be universally agreed to. I have proven that so long as I disagree with your axiom it cannot, by its very nature, BE an axiom.

    And my argument for circular morality is based on my theory that Good = Bad rather than Good > Bad, and this is why I say that your axiom is FUNDAMENTALLY flawed: the axiom you are plugging Joy into would appear to have a flaw from my perspective, and thus cannot be an axiom either.

    [edit] I am renaming my spinning axiom to circular morality, since it would be hypocritical of me to keep the name Axiom, knowing full well it cannot be defined as such.

    AcidSerra on
  • Options
    MrMisterMrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in".Registered User regular
    edited February 2007
    AcidSerra wrote:
    Incoherence

    Most Consequentialists define preference satisfaction to be the moral good these days, rather than joy--so, what is good is that everyone gets what they want. That deals both with your example of Masochism, and of the earlier mentions of drug-induced euphoria.

    MrMister on
  • Options
    AcidSerraAcidSerra Registered User regular
    edited February 2007
    MrMister wrote:
    AcidSerra wrote:
    Incoherence

    Most Consequentialists define preference satisfaction to be the moral good these days, rather than joy--so, what is good is that everyone gets what they want. That deals both with your example of Masochism, and of the earlier mentions of drug-induced euphoria.

    You still don't define what "Joy" is. Simply something that it is not.

    http://dictionary.reference.com/browse/joy

    Here is a definition of Joy which roughly matches what I was going by. If you have a definition of Joy beyond "Joy is whatever makes me right," I'd love to hear it.

    [edit] On rereading it, I don't think you were being combative.. so this question is more for Yar.

    AcidSerra on
  • Options
    YarYar Registered User regular
    edited February 2007
    We've been through it. Joy is three letters, makes the typing easier. Positive conscious experience is probably a better description of it. And there are high and lower orders of it which we may or may not all agree on.

    However, a masochist necessarily enjoys something emotional or sexual that is an even greater positive conscious experience for himself than the physical pain. Masochists enjoy what they do, as it were. We've been through that, too.

    Really, the masochist argument is rather petty and sophomoric, and at this point almost beneath being entertained.

    The reason it's an axiom is because we do all absolutely agree that there are experiences that are positive on our consciousness and those that are negative. The specifics of circumstance which bring about the experience may vary from consciousness to consciousness, but that does not affect the axiom.

    As for getting what one wants, I disagree. Simply getting what one wants often leads to an ever-increasing feedback loop of sorrow. A greater joy is a greater joy even if we never realized we preferred it. But the difference is slim.

    Yar on
  • Options
    KetherialKetherial Registered User regular
    edited February 2007
    Yar wrote:
    The crux of your complaint is this: Person A goes around trying to calculate direct net joys and sorrows of actions while completely disregarding any notion of higher principles of virtue. Person B follows principles like integrity, sacrifice, generosity, patience, and so forth, and does so perhaps even blindly. Who's likely to create more joy for himself and others? You keep claiming that Person A is what follows from my axioms, and yet person B is the who will most likely create more joy and less sorrow, and you don't realize why that is a direct contradiction.

    no, i've stated time and again, that i am absolutely uninterested in how much or little joy is created. as ive stated before (against claims of impossibility), i would choose various other "virtues" over positive conscious experience in any number of situations, as a principle and regardless of joy related consequences (assuming it were possible to know such consequences).
    Ketherial wrote:
    even if we could definitively prove that the guy who ran away would net more joy into the world by living than by sacrificing himself for the other guy, i still think we would view him less positively than one who was willing to make the sacrifice.
    Having an ability to 'prove' such a thing would violently alter the entire notion of "morality" in a practical sense and this whole discussion would change - however, the axioms themselves would still be the same.
    Because granting freedom on principle regardless of some misconstrued prediction that slavery might actually be better for certain individuals is what we recognize as generating more happiness for all in the long run. Whether it's antebellum slave-owners arguing that the slaves are better off this way, Hitler striving for a master race (omg godwinz), or the robots in AI enslaving humans for their own good, we recognize in real life and in hypotheticals that trying to construct a calculated justification of joy and sorrow that ignores higher principles of virtue runs a very strong risk of actually creating the worst kinds of sorrow.

    obviously, ive been thinking about this quite a bit and i think these are really the points where we fundamentally differ.

    this may be a new point, though ive obviously touched on it. im not really trying to change the subject on you, but i think ive finally been able to crystallize my thoughts regarding what it is about your position that i find to be unworkable.

    by basing a moral system on what we agree is simply an experience, you are necessarily subjecting morality to human perception. however, although you subject morality to perception, which is necessarily limited, you are unwilling to restrict such perception to any specific subject (person or even current society as a whole). as such, you are creating a strange paradoxical starting point: the joyfulness of all humanity across all time and lives.

    the reason i find this to be objectionable is because you are attempting to make a normative claim based on an unproven, unprovable empirical claim: x is moral because it will maximize joy through all lives and time and hence we should do x.

    yet, even if i were to accept your empirical claim, i find it problematic that the claim only works as a whole, but often fails when applied to specific circumstances. because what may be best for the whole, what may cause the most joy for the entirety of human lives across time, may not be best or cause the most joy for any specific individual at any specific moment. it may even end a particular individual's life (e.g. sacrifice).

    if sacrifice (or any variable) is valuable only as a vehicle which causes more joy, then shouldnt we necessarily not value sacrifice when it fails to cause more joy? your answer to this seems to be: we recognize sacrifice as valuable because it is a powerful tool for causing joy, even though there are situations in which it fails to do so. hence, we value sacrifice on principle because we believe that in the long run across lives and time, sacrifice does empirically create more joy.

    but my question then is, if we were able to do the math, if we were able to calculate the empirical answer, "was more joy caused?" would we nevertheless value the principle of sacrifice? if we calculated all of the joys and sorrows that resulted from sacrifice and came out with a net negative, would that mean then, that all sacrifices were actually immoral?

    i dont think im leaping to act utilitarianism or creating a strawman or whatever here. im really trying to understand the fundamental principle behind your thinking. does what youre saying really boil down to: at the end of time, how much chemical x was created (regardless of whether or not we can measure it)?

    Ketherial on
  • Options
    KetherialKetherial Registered User regular
    edited February 2007
    MrMister wrote:
    Ketherial wrote:
    dick sells concert tickets to get jane a coat that she really wants (jane +10 joy).

    dick goes to concert by himself (dick +11 joy).

    which dick do we think is a better boyfriend? which one do you want to be your boyfriend?

    Making Dick and Jane into a couple and then asking which is a better date confounds the issue: after all, if I were selfish I would probably want a partner who bought me nice things instead of nicer things for himself. That doesn't mean it's better, in general, that we buy others nice things instead of nicer things for ourselves.
    the situation is more accurately reflected as such:

    dick sells concert tickets to get jane a coat that she really wants (jane +10 joy, dick +2 sacrifice; sacrifice / joy = good; aggregate good = 12).

    dick goes to concert by himself (dick +11 joy, dick +0 sacrifice; aggregate good = 11).

    The point of my analysis is as follows: you say that Dick buying Jane the coat is better than him keeping his tickets, because trading in some happiness to realize the abstract value of sacrifice is a good deal. However, you fail to see that this is an iterable step. Under that very definition of better, it is also better that Jane sell the coat and buy something for Dick, even if he will enjoy it less than she would enjoy the coat.

    By deciding that the first transaction is good, Dick's present to Jane, you also decide that the second transaction is good, Jane's present back to Dick. Furthermore, by the rules of the hypothetical we can see that the net result of these two transactions is that instead of concert tickets Dick has something that will make him less happy. Supposedly, this is balanced out by the fact that both Dick and Jane got the chance to sacrifice, which is a good unto itself. But this is not intuitive at all: rather than ponging increasingly bad presents back and forth Dick and Jane would be better served by actually going to a concert or keeping a coat.

    Furthermore, your notation is fairly illegible.

    i dont think this is a hard point. iteration doesnt mean that all the values are equal in every iteration. the value of sacrificing concert tickets for the coat might net good, but the value of sacrificing your hair for a watch chain might not net the same good.

    dick sacrificing concert tickets might equate to an aggregate "good" because the good that results from both jane's joy from receiving a coat and from dick's sacrifice outweigh the sorrow that results from not going to the concert.

    however, jane sacrificing her hair might net an aggregate "bad", because the good that results from jane's sacrifice and dick's joy at receiving a watch chain might not outweigh the bad that results from the sorrow of not having hair.

    im not proposing that sacrifice be considered the end all be all measurement of good. im saying it's one of the competing ones, along with joy, generosity, etc. sacrifice may always be good, but it may not be good enough.

    in other words, i still dont think your rebuttal is sufficient.
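    the two cases above can be written out as a toy calculation (every weight here is invented for illustration; this is only a sketch of the "competing values" idea, not a real metric):

    ```python
    # toy sketch of the "competing values" accounting: good is a sum of
    # several valued quantities, not joy alone. all numbers are invented.
    def net_good(joy_gained, sorrow_caused, sacrifice_value):
        # sacrifice always counts for something, but it competes with
        # the joy gained and the sorrow caused; it may not be *enough*.
        return joy_gained - sorrow_caused + sacrifice_value

    # tickets -> coat: jane's joy 10, dick's missed concert costs 1,
    # the sacrifice itself is worth 2 => net positive
    coat = net_good(10, 1, 2)     # 11

    # hair -> watch chain: dick's joy 4, jane's lost hair costs 8,
    # the sacrifice is still worth 2 => net negative
    chain = net_good(4, 8, 2)     # -2

    print(coat, chain)
    ```

    under the same rule, one sacrifice nets good and the other nets bad, which is the point: sacrifice may always count for something, but it may not always count enough.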

    Ketherial on