
[Morality] Subjectivity vs Objectivity


Posts

  • jothki Registered User regular
    edited June 2011
    hanskey wrote: »
    General rules are not thrown out over exceptions that may be accounted for. Look at the wiki on fallacies to figure out which one you're employing here.

    If it looks like a bear, swims like a turtle, and hoots like an owl, those exceptions are not sufficient cause to throw out the general rule that it probably is a duck.

    jothki on
  • hanskey Registered User regular
    edited June 2011
    jothki wrote: »
    hanskey wrote: »
    General rules are not thrown out over exceptions that may be accounted for. Look at the wiki on fallacies to figure out which one you're employing here.

    If it looks like a bear, swims like a turtle, and hoots like an owl, those exceptions are not sufficient cause to throw out the general rule that it probably is a duck.

    You know what, I'm not sure what you are saying here. Sorry, but can you clarify, please?

    hanskey on
  • redx I(x)=2(x)+1 whole numbers Registered User regular
    edited June 2011
    hanskey wrote: »
    General rules are not thrown out over exceptions that may be accounted for. Look at the wiki on fallacies to figure out which one you're employing here.

    what is this general rule which you are claiming exists?

    Do me a favor. When you describe it, don't use terms that require the act to be immoral in the society in which it occurs, otherwise you are merely providing evidence that lawfulness is a requirement of objective morality(which is fucking ridiculous).

    redx on
    They moistly come out at night, moistly.
  • hanskey Registered User regular
    edited June 2011
    hanskey wrote: »
    hanskey wrote: »
    2. Despite the fact that people do murder that does not make it right. Ethics purports to tell us the "right" or "best" way to act and behave, and additionally tells us that there are objective measures for determining this. One objective procedure is the test of Universality that Kant developed. This involves making a specific act under question a general rule that everyone should follow and examining the consequences. If such a general rule makes no sense from that perspective then it must be rejected. Murdering people cannot be universalized as a general rule: no rational being would accept it as the "right" or "best" way to behave and if accepted it would inevitably lead to our collective extinction, which should be obviously not a rational or reasonable goal. However, a reasonable and logical exception is added for self-defense.

    That doesn't explain why the capacity for adoption as a general rule is compelling, though. And societies with much higher levels of 'justifiable exceptions' to the rule against killing have gotten along just fine.
    I was addressing the specific question so naturally my answer does not address other issues. However, they can be addressed pretty easily, but I'm tired and don't really want to keep up this nonsense for two people that should just get a lot more familiar with ethical systems and a lot less attached to magical thinking and irrational positions.

    Oh dear. Will you be taking your ball, too?
    Not sure what you are saying, but I recognize your unnecessary and sarcastic hostility - if you don't like your position to be characterized as irrational then you might want to ensure that it actually isn't.

    I suggested other reading because everything anyone brought up today has already been said on this topic, by virtue of the fact that Kantian Ethics is about as old as the U.S. I don't have all the arguments or the time and energy to look them up. I'm not trying to be a dick but I really did spend the whole of yesterday in this thread and so far the whole of today, covering basic shit that you get in your first day or two of any college level ethics class. In addition, several of you certainly are exhibiting signs of willful irrationality to avoid losing an argument, and that is tiresome as well.

    Edit:
    If your goal is to win the argument and you wish to do so by abandoning reason, then that is your prerogative. However, that does preclude the possibility of us coming to any resolution.

    If your actual interest is in understanding the arguments for objective morality and plumbing their depths then you would be better served to explore elsewhere for the time being and a good place to start is reading what you can about actual objective ethical systems.

    hanskey on
  • Tiger Burning Dig if you will, the picture Registered User, SolidSaints Tube regular
    edited June 2011
    hanskey wrote: »
    I'm not trying to be a dick but I really did spend the whole of yesterday in this thread and so far the whole of today, covering basic shit that you get in your first day or two of any college level ethics class.

    Yes, but you're doing it poorly. The arguments you're struggling with have just as hallowed a pedigree as Kantian ethics (as if that mattered), and you will never convince anyone of anything so long as your argument is, "You're just being irrational. If you were more rational you would agree with me."

    Tiger Burning on
    Ain't no particular sign I'm more compatible with
  • hanskey Registered User regular
    edited June 2011
    redx wrote: »
    hanskey wrote: »
    General rules are not thrown out over exceptions that may be accounted for. Look at the wiki on fallacies to figure out which one you're employing here.

    what is this general rule which you are claiming exists?

    Do me a favor. When you describe it, don't use terms that require the act to be immoral in the society in which it occurs, otherwise you are merely providing evidence that lawfulness is a requirement of objective morality(which is fucking ridiculous).
    From the Deontology wiki:
    The non-aggression principle, also known as the non-aggression axiom and zero aggression principle, is an ethical stance which states that any initiation of force is illicit and contrary to natural law. It is the basic moral axiom of deontological libertarianism, most famously upheld by Robert Nozick and Murray Rothbard.

    Rothbard described the axiom as such:
    “No one may threaten or commit violence ('aggress') against another man's person or property. Violence may be employed only against the man who commits such violence; that is, only defensively against the aggressive violence of another. In short, no violence may be employed against a non-aggressor.”
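
    One way to write the quoted axiom down schematically (the notation here is only illustrative, not Rothbard's own): with V(x, y) for "x uses violence against y" and A(y) for "y has initiated aggression",

        \text{Perm}\big(V(x,y)\big) \;\rightarrow\; A(y) \qquad\text{equivalently}\qquad \lnot A(y) \;\rightarrow\; \lnot\,\text{Perm}\big(V(x,y)\big)

    that is, violence against y is permissible only if y is an aggressor; the contrapositive is the "no violence may be employed against a non-aggressor" phrasing.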

    hanskey on
  • hanskey Registered User regular
    edited June 2011
    ... you will never convince anyone of anything so long as your argument is, "You're just being irrational. If you were more rational you would agree with me."
    That's not my argument in defense of objective morality.

    Rather, that is a straw-man version of my statement that we have little common ground from which to communicate and as a side point that if we can't agree on a basis to converse, then we cannot resolve this argument between us.

    hanskey on
  • Quid Definitely not a banana Registered User regular
    edited June 2011
    Why is it natural law that violence may only be applied to those who commit acts of violence?

    Quid on
  • CptHamilton Registered User regular
    edited June 2011
    Quid wrote: »
    Why is it natural law that violence may only be applied to those who commit acts of violence?

    If you're approaching the question from a utilitarian perspective it doesn't make any sense at all. An act of violence can be expected to have such and such an impact on the victim which may outweigh the moral debt, as it were, of practicing violence upon the attacker to stop the disfavorable outcome. But then the same thing could be said of any action that has a sufficiently negative outcome for the victim. If they're going to suffer enough, they're justified in using violence to stop the suffering if violence is the method least likely to cause undue suffering all around.

    But the quote you're responding to came from a Deontologist. Deontology holds that consequences are unimportant and only rules or values are relevant. If the rule is "you must not make violence" then it's necessary to carve out exceptions for situations where adhering to that rule is ridiculous, like when someone is trying to kill you. I believe that's why hanskey keeps saying that you can just "account for" exceptions to his "axioms". Deontological axioms are rules made up by whoever is making the rules, and since the only logic used in deontology is with regard to the rules which apply to a prospective action, rather than the outcome of the action, it's necessary to expand the ruleset every time you realize that your rules don't cover every situation.

    I think a more interesting question is: is it possible to construct a set of moral truths (defined howsoever you like) such that any two people applying them logically will be guaranteed to arrive at the same solution to a given ethical dilemma? I think it's obvious that we all employ some variant of utilitarianism in our day-to-day decisions, but fairly rarely are the outcomes based on any actual calculation of utility. Even utilitarian philosophers recognize that it's impossible for people to carry out accurate utility decisions since you can't possibly have complete information. So is true utilitarianism, based purely on logical derivations from 'objective' or 'commonly agreed' or whatever first principles, even a thing that could exist?
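
    A minimal sketch of the utility calculation in question, in Python, with invented action names, probabilities, and payoffs (nothing here comes from any real framework); the point is only that the computation needs a probability and a utility for every possible outcome, which is exactly the complete information nobody actually has:

        # Toy expected-utility comparison between two actions.
        # Every number below is made up purely for illustration.
        actions = {
            "keep the promise": [(0.7, 5.0), (0.3, -2.0)],   # (probability, utility) pairs
            "break the promise": [(0.5, 8.0), (0.5, -9.0)],
        }

        def expected_utility(outcomes):
            # Probability-weighted sum over every possible outcome.
            return sum(p * u for p, u in outcomes)

        for name, outcomes in actions.items():
            print(f"{name}: EU = {expected_utility(outcomes):+.2f}")
        print("chosen:", max(actions, key=lambda a: expected_utility(actions[a])))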

    CptHamilton on
    PSN,Steam,Live | CptHamiltonian
  • Moridin Registered User regular
    edited June 2011
    Hanskey, many of the people in this thread know quite a bit about objective ethical systems. There have been several very long threads on these forums talking about this very topic. You aren't going to get anywhere by telling people to "go read about them", nor will you get anywhere quoting axioms.

    Anticipate the argument. If you post an axiom, defend it. Someone that doesn't think the is-ought gap can be easily bridged is not going to be convinced by any of your arguments so far.

    Moridin on
  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited June 2011
    True whether people believe it or not. Something that is mind independent.

    These are two different statements.

    At 9:30 am this morning, I was irritated by a coworker. That is an objective fact for any functional definition of "objective fact." I was irritated by that coworker regardless of whether you, or any other third-party, believed it or not. Yet it is not mind-independent. It is clearly a statement about my mental state, so it is inaccurate to call it "mind-independent."

    We can say that the fact is independent of the mind of the observer. It remains true that I was irritated this morning regardless of who observes me (or if anybody else observes me at all), just like any basic physical facts about matter remain true. A fact that remains true independently of the mind of the observer is a functional definition for an objective fact.

    That said, knowledge of the external is impossible without an observer - we cannot know anything without observing it - which means that all knowledge is subjective. We can run in a few different directions with this understanding. One is to take the idealist approach and say that all knowledge is subjective, therefore there is no objectively real world; we can only interact by consensus. What you earlier called "a standard that we can use to test, confirm, or correct when disagreements arise" is, from the idealist perspective, just a standard for consensus building. We cannot prove that it tests the external world; merely that we were able to align our subjective experiences. We're all trapped in the Matrix by a Cartesian evil demon. Logic and evidence are merely parliamentary rules for a human Congress.

    Alternatively, there are realist perspectives, which hold that there is an external world; even if our knowledge is inescapably subjective, it is a somewhat inaccurate but also somewhat accurate understanding of that external world. We can endeavor to make our understanding more accurate by challenging our assumptions and adopting null hypotheses.

    Ultimately, I do not see out of that trap without resorting to utility. (Perhaps there is a better argument, one of which I am unaware.) The former perspective makes logic and evidence arbitrary. I don't need to agree; I can reject any logical or evidential rule I want. If sanity is just consensus reality, then there is no fundamental reason to choose it; any attempts to convince me otherwise are based on an infinite regress of arbitrary assumptions. Even fundamental logical axioms like identity and the law of noncontradiction are arbitrary given a sufficiently imaginative thinker. The latter perspective, however, makes knowledge possible. Even if we are trapped in the Matrix, it's a Matrix we can understand and develop. Science: it just works.

    I don't see any reason why morality is any different. If you ask me "how do we know the law of identity is true?" I say, "How can you possibly disagree?" Likewise, if you try to ask me, "how do we know that suffering is bad and hedonia is good?" I say, "How can you possibly disagree?" Imagining a lifeform for which suffering is good and hedonia is bad is a bit like imagining a universe in which the law of identity is false.

    From these basic axioms, we can apply exactly the same logical and empirical process. Does this system maximize hedonia and minimize suffering? Is this system internally consistent? Is this system consistent with the evidence that we have regarding things in the external world that are relevant to hedonia and suffering?

    You are still free to reject these axioms, to claim them to be metaphysically subjective, but I'm going to treat that endeavor a little bit like somebody going, "But we could be trapped in a Cartesian Matrix!" I cannot disprove it, but that doesn't make it any less silly.

    Feral on
    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • CptHamilton Registered User regular
    edited June 2011
    Feral wrote: »
    I don't see any reason why morality is any different. If you ask me "how do we know is the law of identity true?" I say, "How can you possibly disagree?" Likewise, if you try to ask me, "how do we know that suffering is bad and hedonia is good?" I say, "How can you possibly disagree?" Imagining a lifeform for which suffering is good and hedonia is bad is a bit like imagining a universe in which the law of identity is false.

    From these basic axioms, we can apply exactly the same logical and empirical process. Does this system maximize hedonia and minimize suffering? Is this system internally consistent? Is this system consistent with the evidence that we have regarding things in the external world that are relevant to hedonia and suffering?

    You didn't say anything I disagree with, but is the suffering/hedonia scale sufficient ground on which to build a useful moral framework? The trivial example is hedonism, which has obvious and well-covered problems, so is it possible to go from "being happy is better than not being happy" to a collection of statements that are observer-independent (in the 'anyone can be the observer' sense) and which are sufficient to address a realistic ethical dilemma?

    It's easy to take a first principle and say "Suffering is bad and, through a proof left as an exercise for the reader, it is therefore more moral to stop medicating the elderly" or some such. Is the proof actually possible?

    CptHamilton on
    PSN,Steam,Live | CptHamiltonian
  • Moridin Registered User regular
    edited June 2011
    I just can't put the existence of an objective reality and the existence of moral truths on the same epistemic footing.

    For the latter, you need to appeal to fuzzy things like goodness, badness, pleasure, and pain, none of which are necessary to accurately describe the world or the persistence of an objective reality, all of which are floating, rabidly disagreed about mental states over which humanity, miraculously, on average, happens to agree about in some key places.

    Hell, what if everyone was a sociopath? That doesn't strike me as something as pathological as "what if we're all in the matrix maaaan" and it completely alters the supposedly "objective" moral landscape.

    That bothers me.

    Moridin on
  • CptHamilton Registered User regular
    edited June 2011
    Moridin wrote: »
    I just can't put the existence of an objective reality and the existence of moral truths on the same epistemic footing.

    For the latter, you need to appeal to fuzzy things like goodness, badness, pleasure, and pain, none of which are necessary to accurately describe the world or the persistence of an objective reality, all of which are floating, rabidly disagreed about mental states over which humanity, miraculously, on average, happens to agree about in some key places.

    Hell, what if everyone was a sociopath? That doesn't strike me as something as pathological as "what if we're all in the matrix maaaan" and it completely alters the supposedly "objective" moral landscape.

    That bothers me.

    Well, Feral's moral axiom is pretty universal. If you break it down to "every thing capable of having a preference between two states would prefer to be in the state it prefers" with bad/suffering equated with "the unpreferred state" and good/hedonia equated with "the preferred state" then there's not really a basis for argument. I'm not sure that it's specific enough to be useful for anything, but it does seem to be fairly unimpeachable.
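
    Spelled out formally (this notation is mine, purely for illustration): give each being i a preference relation over states and define good and bad off of it,

        s \succ_i s' \;\equiv\; \text{being } i \text{ prefers state } s \text{ to state } s', \qquad \text{good for } i := \text{the } \succ_i\text{-preferred state}, \quad \text{bad for } i := \text{the dispreferred one}

    so the axiom has the shape s \succ_i s' \implies s \succ_i s', which is why it is so hard to argue with.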

    CptHamilton on
    PSN,Steam,Live | CptHamiltonian
  • hanskey Registered User regular
    edited June 2011
    Moridin wrote: »
    Hanskey, many of the people in this thread know quite a bit about objective ethical systems. There have been several very long threads on these forums talking about this very topic. You aren't going to get anywhere by telling people to "go read about them", nor will you get anywhere quoting axioms.

    Anticipate the argument. If you post an axiom, defend it. Someone that doesn't think the is-ought gap can be easily bridged is not going to be convinced by any of your arguments so far.

    Thanks for the heads up, I think?

    I think I'm a big boy, so I'll admit I got a bit trollish when people kept moving the goal posts or using straw-man attacks while rejecting any attempt to say that empirical evidence + rationality = fairly good and universal rules for acceptable human behavior. I also got a bit tired of people attacking from positions that they themselves would never accept as general rules.

    As far as being convincing, that cuts both ways, because I haven't seen a single person effectively argue for Moral Relativism either.

    hanskey on
  • hanskey Registered User regular
    edited June 2011
    Moridin wrote: »
    I just can't put the existence of an objective reality and the existence of moral truths on the same epistemic footing.

    For the latter, you need to appeal to fuzzy things like goodness, badness, pleasure, and pain, none of which are necessary to accurately describe the world or the persistence of an objective reality, all of which are floating, rabidly disagreed about mental states over which humanity, miraculously, on average, happens to agree about in some key places.

    Hell, what if everyone was a sociopath? That doesn't strike me as something as pathological as "what if we're all in the matrix maaaan" and it completely alters the supposedly "objective" moral landscape.

    That bothers me.

    Well, Feral's moral axiom is pretty universal. If you break it down to "every thing capable of having a preference between two states would prefer to be in the state it prefers" with bad/suffering equated with "the un-prefered state" and good/hedonia equated with "the prefered state" then there's not really a basis for argument. I'm not sure that it's specific enough to be useful for anything, but it does seem to be fairly unimpeachable.
    It might be specific enough to be extended by logic into a moral framework.

    hanskey on
  • Yar Registered User regular
    edited June 2011
    jothki wrote: »
    Since when does something being rational, consistent, convincing, or useful mean that it is guaranteed to be correct?
    It's as if you purposefully ignored the entire basis of what I said.

    This notion of "guaranteed correct" is a meaningless fallacy, and isn't anything I suppose to apply here. There are only ever basic assumptions and best arguments. Being more rational, consistent, convincing, and useful don't make something "guaranteed correct," but they do make it more rational, consistent, convincing, and useful. You keep chasing that dragon of absolute objectivity - I think Godel disproved it a long time ago; I'll take reason and utility and pursuit of happiness as far as my mind can handle.
    Taber wrote: »
    Suffering is bad is a great axiom, but you can't argue it is objective truth. Someone could believe that suffering leads to personal enlightenment, and personal enlightenment is the ultimate goal even if it never leads to less suffering, and design a morality system around that. Is there an objective way to say this other person is wrong? It seems like what we are optimizing for is subjective even if there are objective ways to measure how successfully we are optimizing for.
    Well, I can argue that this person is wrong, that this is a less usable or describable idea you are supposing. What I likely cannot do is offer some sort of complete proof establishing that no argument will ever be formed that might change my mind.

    Anyway, what is enlightenment? Are people happy or satisfied or blissful when they achieve it? If it isn't any less suffering, what is it?
    Taber wrote: »
    You are stepping around my point. Imagine a third person who believes that life is an abomination, and the most moral action is to let it all burn. Who's to say he's wrong?
    The better question is: what convincing argument does he have that he's right? What use or benefit can we experience or communicate based on his belief? Do people tend to intuitively agree with his base assumptions?

    Like other lines of discourse in this discussion, you seem to want to resolve truth to completion, or else deny that there is any truth. The reality is that we always live in between. There is no completion, and when we seek truth, we're always just looking for more evidence, better rationale. If I want, I could claim that cows fly, they just also have psychic powers that make us think they don't. And they have special invisible cow-poop-washers so we don't notice when huge cow turds fall out of the sky. Can you "objectively" prove me wrong? Anything you say, I can defeat by telling you that the cows' psychic powers have fooled you. Ultimately, though, it's pointless for me to suppose such a thing, with such unconvincing arguments for it, regardless of whether I can still trap us both in a "you can't prove it's not true" loop.
    Taber wrote: »
    Abrahamic religions' morality explicitly isn't for reducing suffering, it is to please God. A side effect of pleasing God is reducing suffering, but that isn't why you are supposed to follow it.
    There are a number of ways to explain this one. Maybe most people don't have the time or interest to resolve morality down to its most rational logical descriptions, maybe too often they get lost in this line of reasoning and end up at Hedonism, and its eventual failure and suffering, and so the best way for them and others to be happy and avoid suffering is just to go with "don't think about it, Man in the Sky Said So, that's why."
    Because no one has any particular reason to claim that their knowledge of solar positioning is based on objective measurements and therefore superior. I'm fine with a conceptual ethical framework founded on selected axioms which are acknowledged to be subjective. I don't think that such a framework could be composed as even the simplest ethical decision would require a prohibitively complex calculus. I just find it ridiculous to say that such and such is an objective moral standard for the same reason I'd be confounded by someone saying "obviously my measurents are perfectly accurate... Just look at them!"
    The trick here is that not only is the system of morality incomplete (as are all systems and you seem to be ok with this), but so too are moral arguments typically incomplete. This is where morality would differ from mathematics, for instance. Assuming the axioms that allow for the incomplete system of math and logic, you can still create a complete proof within that system that 2 + 2 = 4. Science, however, for instance, tends not to have such complete proofs. There are only arguments, theories, formulas, etc., that get better and better and more refined, or perhaps counter-argued and replaced. However, this does not preclude us from reasonably showing one scientist right and another wrong, one experimental finding better than another, one theory superior to another, and so on.
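
    For concreteness, the complete in-system proof really is short. In Peano arithmetic, writing 1 = S(0), 2 = S(1), 3 = S(2), 4 = S(3), and using only the two addition axioms a + 0 = a and a + S(b) = S(a + b):

        2 + 2 = 2 + S(1) = S(2 + 1) = S(2 + S(0)) = S(S(2 + 0)) = S(S(2)) = S(3) = 4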

    Interestingly, the continued consistent progress and function and refinement of the statements within the system itself is an argument for the validity of the system. The fact that science proceeds as it does, allowing us to make arguments, theories, disprove them, etc., is itself an argument that we've got a decent scientific method going.

    All the same goes for morality. Moral arguments aren't likely to be complete proofs. But we can use reason, history, and science to continually refine our arguments, to make ever more convincing statements and theories and laws that aim to reduce suffering, and we are not precluded from making a solid case for moral superiority. A system like "burn it all" isn't likely to evolve and progress and create arguments and counter-arguments and an ever-increasing set of refined, inter-related statements. It seems that it would literally burn itself out, quite quickly, rendering itself meaningless. It wouldn't take long for someone to be like, "nuke the whole planet!" and then suddenly there's nothing left to burn, so "burn it all" doesn't make sense anymore. Again, this doesn't "prove" it's wrong, it's just a convincing argument.

    Now, if the above description is deemed to be "moral relativism" since it acknowledges that moral arguments are always relatively better or worse, but never objective fact, then fine, I'm a relativist. But as others have pointed out, all truth is relative in this fashion, and I don't think this is typically what is meant by moral relativism. Moral relativism as I've understood it has been a more agnostic stance on morality that typically denies much value in morality beyond studying different cultural norms, and conversely objectivity or realism has always assumed a base acceptance of uncertainty and fallibility in all things, but still asserts that morality can be reasoned universally.

    Anyway, this thread has way too much snark going on. Don't say things like "go read XYZ..." as if that's an argument.

    Yar on
  • CptHamilton Registered User regular
    edited June 2011
    hanskey wrote: »
    Moridin wrote: »
    I just can't put the existence of an objective reality and the existence of moral truths on the same epistemic footing.

    For the latter, you need to appeal to fuzzy things like goodness, badness, pleasure, and pain, none of which are necessary to accurately describe the world or the persistence of an objective reality, all of which are floating, rabidly disagreed about mental states over which humanity, miraculously, on average, happens to agree about in some key places.

    Hell, what if everyone was a sociopath? That doesn't strike me as something as pathological as "what if we're all in the matrix maaaan" and it completely alters the supposedly "objective" moral landscape.

    That bothers me.

    Well, Feral's moral axiom is pretty universal. If you break it down to "every thing capable of having a preference between two states would prefer to be in the state it prefers" with bad/suffering equated with "the un-prefered state" and good/hedonia equated with "the prefered state" then there's not really a basis for argument. I'm not sure that it's specific enough to be useful for anything, but it does seem to be fairly unimpeachable.
    It might be specific enough to be extended by logic into a moral framework.

    How?

    That beings prefer one state over another says nothing about the combined states of multiple beings. Say that happiness is a scale along the real line with indefinite maximum and minimum. If we take as a given that all beings prefer to have a more positive happiness value then we can say that actions which increase happiness are preferable over actions which decrease happiness. But is it necessarily true that a state of greater total happiness summed over all entities is inherently preferable? If a group of entities can increase happiness by decreasing the happiness of a single entity, is it necessarily moral that they do so?

    Can we move from "increased happiness is better" to talking about specific actions? Perfect knowledge is impossible, so to what extent is it possible to include confidence estimates on the degree of happiness increase/decrease consequent to an action in our logical framework? Are there real actions that we can say are as universally positive as the abstract "more happiness is better"?
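
    A toy version of the aggregation question, with invented happiness numbers, just to make the summing explicit; the code can compute whichever aggregate you like, but nothing in it says which aggregate is the morally relevant one:

        # Invented happiness values on a real-number scale, as in the paragraph above.
        before = {"A": 2.0, "B": 2.0, "C": 2.0, "D": 2.0}

        # An action that raises three beings' happiness by harming the fourth.
        after = {"A": 4.0, "B": 4.0, "C": 4.0, "D": -3.0}

        print("total happiness:", sum(before.values()), "->", sum(after.values()))  # 8.0 -> 9.0
        print("worst-off being:", min(before.values()), "->", min(after.values()))  # 2.0 -> -3.0

        # A sum-over-all-entities rule endorses the action (9.0 > 8.0); a maximin
        # rule rejects it (-3.0 < 2.0). The preference axiom alone does not decide
        # between the two aggregates.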

    CptHamilton on
    PSN,Steam,Live | CptHamiltonian
  • Moridin Registered User regular
    edited June 2011
    hanskey wrote: »
    Moridin wrote: »
    I just can't put the existence of an objective reality and the existence of moral truths on the same epistemic footing.

    For the latter, you need to appeal to fuzzy things like goodness, badness, pleasure, and pain, none of which are necessary to accurately describe the world or the persistence of an objective reality, all of which are floating, rabidly disagreed about mental states over which humanity, miraculously, on average, happens to agree about in some key places.

    Hell, what if everyone was a sociopath? That doesn't strike me as something as pathological as "what if we're all in the matrix maaaan" and it completely alters the supposedly "objective" moral landscape.

    That bothers me.

    Well, Feral's moral axiom is pretty universal. If you break it down to "every thing capable of having a preference between two states would prefer to be in the state it prefers" with bad/suffering equated with "the un-prefered state" and good/hedonia equated with "the prefered state" then there's not really a basis for argument. I'm not sure that it's specific enough to be useful for anything, but it does seem to be fairly unimpeachable.
    It might be specific enough to be extended by logic into a moral framework.

    How?

    That beings prefer one state over another says nothing about the combined states of multiple beings. Say that happiness is a scale along the real line with indefinite maximum and minimum. If we take as a given that all beings prefer to have a more positive happiness value then we can say that actions which increase happiness are preferable over actions which decrease happiness. But is it necessarily true that a state of greater total happiness summed over all entities is inherently preferable? If a group of entities can increase happiness by decreasing the happiness of a single entity, is it necessarily moral that they do so?

    Can we move from "increased happiness is better" to talking about specific actions? Perfect knowledge is impossible, so to what extent is it possible to include confidence estimates on the degree of happiness increase/decrease consequent to an action in our logical framework? Are there real actions that we can say are as universally positive as the abstract "more happiness is better"?

    It's not unimpeachable. It's downright tautological. "People prefer the things they prefer."

    Moridin on
  • CptHamilton Registered User regular
    edited June 2011
    Moridin wrote: »
    hanskey wrote: »
    Moridin wrote: »
    I just can't put the existence of an objective reality and the existence of moral truths on the same epistemic footing.

    For the latter, you need to appeal to fuzzy things like goodness, badness, pleasure, and pain, none of which are necessary to accurately describe the world or the persistence of an objective reality, all of which are floating, rabidly disagreed about mental states over which humanity, miraculously, on average, happens to agree about in some key places.

    Hell, what if everyone was a sociopath? That doesn't strike me as something as pathological as "what if we're all in the matrix maaaan" and it completely alters the supposedly "objective" moral landscape.

    That bothers me.

    Well, Feral's moral axiom is pretty universal. If you break it down to "every thing capable of having a preference between two states would prefer to be in the state it prefers" with bad/suffering equated with "the un-prefered state" and good/hedonia equated with "the prefered state" then there's not really a basis for argument. I'm not sure that it's specific enough to be useful for anything, but it does seem to be fairly unimpeachable.
    It might be specific enough to be extended by logic into a moral framework.

    How?

    That beings prefer one state over another says nothing about the combined states of multiple beings. Say that happiness is a scale along the real line with indefinite maximum and minimum. If we take as a given that all beings prefer to have a more positive happiness value then we can say that actions which increase happiness are preferable over actions which decrease happiness. But is it necessarily true that a state of greater total happiness summed over all entities is inherently preferable? If a group of entities can increase happiness by decreasing the happiness of a single entity, is it necessarily moral that they do so?

    Can we move from "increased happiness is better" to talking about specific actions? Perfect knowledge is impossible, so to what extent is it possible to include confidence estimates on the degree of happiness increase/decrease consequent to an action in our logical framework? Are there real actions that we can say are as universally positive as the abstract "more happiness is better"?

    It's not unimpeachable. It's downright tautological. "People prefer the things they prefer."

    Well, yeah. Tautologies are usually the foundations of logical frameworks. Identity, the law of the excluded middle, etc.

    Edit: It falls apart if you assume that there could exist sentient (or at least sentient-enough to be 'actors') beings which were incapable of preference. Or for whom all states were equally preferred. Provided that there is a range of possible states and some of them are 'better' for the being in question in a manner that's at least consistent for that particular being, it holds up.
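
    For reference, the two laws named here, written out:

        \forall x\;(x = x) \quad\text{(identity)} \qquad\qquad P \lor \lnot P \quad\text{(excluded middle)}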

    CptHamilton on
    PSN,Steam,Live | CptHamiltonian
  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited June 2011
    You didn't say anything I disagree with, but is the suffering/hedonia scale sufficient ground on which to build a useful moral framework? The trivial example is hedonism, which has obvious and well-covered problems, so is it possible to go from "being happy is better than not being happy" to a collection of statements that are observer-independent (in the 'anyone can be the observer' sense) and which are sufficient to address a realistic ethical dilemma?

    First off, I'm strongly biased in favor of act-utilitarianism using a hedonic calculus, but I see that as a separate question. The question, "are these values, as identified, axiomatic values?" is different from "are there axiomatic values at all?"

    Identifying a problem with those values specifically does not itself imply that morality is subjective; you're still pointing to something external to you and me as observers to argue that the values I specified don't serve as axiomatic values. This requires appeal to another set of values that is more important. This very endeavor implies that we are discussing moral facts.

    Feral on
    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • hanskey Registered User regular
    edited June 2011
    Yar wrote: »
    Anyway, this thread has way too much snark going on.
    My bad! I apologize. I didn't intend my suggestions as arguments, but I recognize they seemed that way. I was just snarking on my way off the field of combat and I don't have a defense for that.

    I was frustrated and not thinking straight, and although this does not excuse my rudeness, I hope you all can forgive me.

    hanskey on
  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited June 2011
    Moridin wrote: »
    I just can't put the existence of an objective reality and the existence of moral truths on the same epistemic footing.

    For the latter, you need to appeal to fuzzy things like goodness, badness, pleasure, and pain, none of which are necessary to accurately describe the world or the persistence of an objective reality, all of which are floating, rabidly disagreed about mental states over which humanity, miraculously, on average, happens to agree about in some key places.

    Hell, what if everyone was a sociopath? That doesn't strike me as something as pathological as "what if we're all in the matrix maaaan" and it completely alters the supposedly "objective" moral landscape.

    That bothers me.

    Well, Feral's moral axiom is pretty universal. If you break it down to "every thing capable of having a preference between two states would prefer to be in the state it prefers" with bad/suffering equated with "the un-prefered state" and good/hedonia equated with "the prefered state" then there's not really a basis for argument. I'm not sure that it's specific enough to be useful for anything, but it does seem to be fairly unimpeachable.

    Yeah, I wrote up a long post defending it in which I recognized that the quantification of good/bad experiences is problematic and admitted that comparing the experiences of two different conscious beings (particularly cross-species) is very problematic, and suggested that non-utilitarian ethical systems offer ways around that problem. But, as I said above, I think that would have been tangential - we can argue that the values I identified are not axiomatic, but how do you do that without appealing to some higher value?

    BTW, regarding your other post, I would argue that a conscious being with no preferences is not a moral actor in any meaningful sense. "A conscious being with no preferences" might (arguably) describe a not-too-terribly-far-flung AI. Would you say that there is something morally wrong with turning off and deleting an AI that doesn't even care if it's alive?

    Feral on
    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • Yar Registered User regular
    edited June 2011
    Happiness/suffering are even more basic in the proposed system than are consciousness, or the existence of the self, or of discrete individuals. A group of multiple beings is even more certainly a greater capacity for joy and suffering than it is a group of multiple beings. That's sort of what's required when you put joy and suffering right down at the base of the whole dogma.

    So, if you're referencing "combined state of multiple beings" then you've necessarily already implied an assumption that there is a larger scale of potential joy and suffering. There couldn't be multiple beings if that wasn't a multiplication of potential joy and suffering.

    But talking about specific actions and moral judgments and such, well yeah, that's what we do all the time in this world. And I'm afraid that the more practical a statement is for making decisions, the less universal and complete it will tend to be. The Golden Rule perhaps? The Formula of Universal Law? The Eightfold Path? Those are pretty good ones. We'll quickly veer off of the Subj-Obj thing and end up debating all morality.

    EDIT: I will say, though, that I firmly believe that humans are incapable of an act-level calculus, and not only that, but there is reasoning and evidence to support the notion that attempts at act-level calculus morality will tend to fail and create more suffering. As Doctor Manhattan said, there is no end [which I took as a very convincing criticism of trying to use "ends justify means" as a basis for moral decision making. All are means; there is no end].

    Yar on
  • hanskey Registered User regular
    edited June 2011
    Moridin wrote: »
    hanskey wrote: »
    Moridin wrote: »
    I just can't put the existence of an objective reality and the existence of moral truths on the same epistemic footing.

    For the latter, you need to appeal to fuzzy things like goodness, badness, pleasure, and pain, none of which are necessary to accurately describe the world or the persistence of an objective reality, all of which are floating, rabidly disagreed about mental states over which humanity, miraculously, on average, happens to agree about in some key places.

    Hell, what if everyone was a sociopath? That doesn't strike me as something as pathological as "what if we're all in the matrix maaaan" and it completely alters the supposedly "objective" moral landscape.

    That bothers me.

    Well, Feral's moral axiom is pretty universal. If you break it down to "every thing capable of having a preference between two states would prefer to be in the state it prefers" with bad/suffering equated with "the un-prefered state" and good/hedonia equated with "the prefered state" then there's not really a basis for argument. I'm not sure that it's specific enough to be useful for anything, but it does seem to be fairly unimpeachable.
    It might be specific enough to be extended by logic into a moral framework.

    How?

    That beings prefer one state over another says nothing about the combined states of multiple beings. Say that happiness is a scale along the real line with indefinite maximum and minimum. If we take as a given that all beings prefer to have a more positive happiness value then we can say that actions which increase happiness are preferable over actions which decrease happiness. But is it necessarily true that a state of greater total happiness summed over all entities is inherently preferable? If a group of entities can increase happiness by decreasing the happiness of a single entity, is it necessarily moral that they do so?

    Can we move from "increased happiness is better" to talking about specific actions? Perfect knowledge is impossible, so to what extent is it possible to include confidence estimates on the degree of happiness increase/decrease consequent to an action in our logical framework? Are there real actions that we can say are as universally positive as the abstract "more happiness is better"?

    It's not unimpeachable. It's downright tautological. "People prefer the things they prefer."

    Well, yeah. Tautologies are usually the foundations of logical frameworks. Identity, the law of the excluded middle, etc.

    Edit: It falls apart if you assume that there could exist sentient (or at least sentient-enough to be 'actors') beings which were incapable of preference. Or for whom all states were equally prefered. Provided that there are a range of possible states and some of them are 'better' for the being in question in a manner that's at least consistent to that particular being, it holds up.

    How would a sentient being capable of being an actor ever be free of preference?

    Isn't preference the fundamental basis of survival by processes such as breathing, eating and reproducing?

    When preferences are unconscious, are they still preferences?

    hanskey on
  • hanskey Registered User regular
    edited June 2011
    Feral wrote: »
    Moridin wrote: »
    I just can't put the existence of an objective reality and the existence of moral truths on the same epistemic footing.

    For the latter, you need to appeal to fuzzy things like goodness, badness, pleasure, and pain, none of which are necessary to accurately describe the world or the persistence of an objective reality, all of which are floating, rabidly disagreed about mental states over which humanity, miraculously, on average, happens to agree about in some key places.

    Hell, what if everyone was a sociopath? That doesn't strike me as something as pathological as "what if we're all in the matrix maaaan" and it completely alters the supposedly "objective" moral landscape.

    That bothers me.

    Well, Feral's moral axiom is pretty universal. If you break it down to "every thing capable of having a preference between two states would prefer to be in the state it prefers" with bad/suffering equated with "the un-prefered state" and good/hedonia equated with "the prefered state" then there's not really a basis for argument. I'm not sure that it's specific enough to be useful for anything, but it does seem to be fairly unimpeachable.

    Yeah, I wrote up a long post defending it in which I recognized that the quantification of good/bad experiences is problematic and admitted that comparing the experiences of two different conscious beings (particularly cross-species) is very problematic, and suggested that non-utilitarian ethical systems offer ways around that problem. But, as I said above, I think that would have been tangential - we can argue that the values I identified are not axiomatic, but how do you do that without appealing to some higher value?

    BTW, regarding your other post, I would argue that a conscious being with no preferences is not a moral actor in any meaningful sense. "A conscious being with no preferences" might (arguably) describe a not-too-terribly-far-flung AI. Would you say that there is something morally wrong with turning off and deleting an AI that doesn't even care if it's alive?

    As someone who has created an AI agent in the past, I can say that preference is used in learning AI agents as the mechanism that drives learning. Even with heuristically modeled problem-solving AIs, preference still rears its ugly head, because you program them to prefer to win the game, or whatever. Then again, these are not preferences related to the AI's own existence, so maybe they are irrelevant?
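
    A minimal sketch of the kind of agent being described, with "preference" as nothing more than the signal that drives learning; here it is a tiny two-armed bandit whose only preference is a higher payoff (the game and every number are invented):

        import random

        # A two-armed "game": each arm pays off with some hidden probability.
        PAYOFF = {"left": 0.3, "right": 0.7}

        def play(arm):
            return 1.0 if random.random() < PAYOFF[arm] else 0.0

        # The agent's running estimate of each arm's value. "Preferring to win"
        # is implemented as nothing more than chasing the higher estimate.
        value = {"left": 0.0, "right": 0.0}
        counts = {"left": 0, "right": 0}

        for _ in range(1000):
            if random.random() < 0.1:              # explore occasionally
                arm = random.choice(list(value))
            else:                                  # otherwise follow the learned preference
                arm = max(value, key=value.get)
            reward = play(arm)
            counts[arm] += 1
            value[arm] += (reward - value[arm]) / counts[arm]  # incremental average

        print(value)  # estimates drift toward 0.3 and 0.7; the agent ends up "preferring" right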

    hanskey on
  • jothki Registered User regular
    edited June 2011
    Feral wrote: »
    Ultimately, I do not see out of that trap without resorting to utility. (Perhaps there is a better argument, one of which I am unaware.) The former perspective makes logic and evidence arbitrary. I don't need to agree; I can reject any logical or evidential rule I want. If sanity is just consensus reality, then there is no fundamental reason to choose it; any attempts to convince me otherwise are based on an infinite regress of arbitrary assumptions. Even fundamental logical axioms like identity and the law of noncontradiction are arbitrary given a sufficiently imaginative thinker.The latter perspective, however, makes knowledge possible. Even if we are trapped in the Matrix, it's a Matrix we can understand and develop. Science: it just works.

    Doesn't resorting to utility push you right back into the same trap you were trying to escape, since you're just as incapable of fully comprehending utility as you are of comprehending reality?

    jothki on
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited June 2011
    I have a question for those in the know.

    Is there a logical proof that subjective morality and objective morality are dichotomous extremes? That is, that there cannot be a mix of the two, but any given morality must be one or the other? I know this is probably going to look like a stupid question, but that's okay, I'm not afraid to ask a stupid question.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • jothki Registered User regular
    edited June 2011
    Well, some forms of subjective morality are willing to accept the existence of infinite forms of objective morality. As far as I'm aware, all forms of objective morality deny all forms of subjective morality, with the exception of objective moralities based on the belief that there is only one subject, and thus subjective and objective are synonyms.

    jothki on
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited June 2011
    But do they have logical proofs or are they making disguised personal assertions?
    I'm looking for the basics here; I would assume something that many people are so certain about would have a nice logical proof behind it. I was just wondering what it was.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • Quid Definitely not a banana Registered User regular
    edited June 2011
    Quid wrote: »
    Why is it natural law that violence may only be applied to those who commit acts of violence?

    If you're approaching the question from a utilitarian perspective it doesn't make any sense at all. An act of violence can be expected to have such and such an impact on the victim which may outweigh the moral debt, as it were, of practicing violence upon the attacker to stop the disfavorable outcome. But then the same thing could be said of any action that has a sufficiently negative outcome for the victim. If they're going to suffer enough, they're justified in using violence to stop the suffering if violence is the method least likely to cause undue suffering all around.

    I wasn't even considering it from that point of view. Merely the fact that there's a variety of social interactions where violence is perfectly acceptable, expected, and preferred. Granted, these situations probably occur less often and on a smaller scale than, say, war, muggings, etc., but given that we're supposed to assume violence is inherently bad, there seems to be a disconnect.

    Quid on
  • LoserForHireX Philosopher King The Academy Registered User regular
    edited June 2011
    But do they have logical proofs or are they making disguised personal assertations?
    I'm looking for the basics here, I would assume something that many people are so certain about would have a nice logical proof behind it. I was just wondering what it was.

    No one has a proof. Mostly because no one does proofs for really any position they hold.

    Objective and Subjective are opposites. They can't both be true; that would be a contradiction. Now, jothki is all for contradictions, but most people aren't.

    Largely because once you have a contradiction, logically, anything follows.

    So if we accept some moral system that is both objective and subjective, the moon is made of cheese and the sky is green.
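
    That is the principle of explosion: from a contradiction, any Q whatsoever follows, for example by disjunction introduction and disjunctive syllogism,

        P \land \lnot P \;\vdash\; P, \qquad P \;\vdash\; P \lor Q, \qquad \{\,P \lor Q,\; \lnot P\,\} \;\vdash\; Q

    with Q instantiated as "the moon is made of cheese."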

    LoserForHireX on
    "The only way to get rid of a temptation is to give into it." - Oscar Wilde
    "We believe in the people and their 'wisdom' as if there was some special secret entrance to knowledge that barred to anyone who had ever learned anything." - Friedrich Nietzsche
  • Taber Registered User regular
    edited June 2011
    Yar wrote: »
    jothki wrote: »
    Since when does something being rational, consistent, convincing, or useful mean that it is guaranteed to be correct?
    It's as if you purposefully ignored the entire basis of what I said.

    This notion of "guaranteed correct" is a meaningless fallacy, and isn't anything I suppose to apply here. There are only ever basic assumptions and best arguments. Being more rational, consistent, convincing, and useful don't make something "guaranteed correct," but they do make it more rational, consistent, convincing, and useful. You keep chasing that dragon of absolute objectivity - I think Godel disproved it a long time ago; I'll take reason and utility and pursuit of happiness as far as my mind can handle.
    Taber wrote: »
    Suffering is bad is a great axiom, but you can't argue it is objective truth. Someone could believe that suffering leads to personal enlightenment, and personal enlightenment is the ultimate goal even if it never leads to less suffering, and design a morality system around that. Is there an objective way to say this other person is wrong? It seems like what we are optimizing for is subjective even if there are objective ways to measure how successfully we are optimizing for.
    Well, I can argue that this person is wrong, that this is a less usable or describable idea you are supposing. What I likely cannot do is offer some sort of complete proof establishing that no argument will ever be formed that might change my mind.

    Anyway, what is enlightenment? Are people happy or satisfied or blissful when they achieve it? If it isn't any less suffering, what is it?
    Taber wrote: »
    You are stepping around my point. Imagine a third person who believes that life is an abomination, and the most moral action is to let it all burn. Who's to say he's wrong?
    The better question is: what convincing argument does he have that he's right? What use or benefit can we experience or communicate based on his belief? Do people tend to intuitively argee with his base assumptions?

    Like other lines of discourse in this dicussion, you seem to want to resolve truth to completion, or else deny that there is any truth. The reality is that we always live in between. There is no completion, and when we seek truth, we're always just looking for more evidence, better rationale. If I want, I could claim that cows fly, they just also have psychic powers that make us think they don't. And they have special invisible cow-poop-washers so we don't notice when huge cow turds fall out of the sky. Can you "objectively" prove me wrong? Anything you say, I can defeat by telling you that the cows' psychic powers have fooled you. Ultimately, though, it's pointless for me to suppose such a thing, with such unconvincing arguments for it, regardless of whether I can still trap us both in a "you can't prove it's not true" loop.
    Taber wrote: »
    Abrahamic religions' morality explicitly isn't for reducing suffering, it is to please God. A side effect of pleasing God is reducing suffering, but that isn't why you are supposed to follow it.
    There are a number of ways to explain this one. Maybe most people don't have the time or interest to resolve morality down to it's most rational logical descriptions, maybe too often they get lost in this line of reasoning and end up at Hedonism, and its eventual failure and suffering, and so the best way for them and others to be happy and avoid suffering is just to go with "don't think about it, Man in the Sky Said So, that's why."
    Because no one has any particular reason to claim that their knowledge of solar positioning is based on objective measurements and therefore superior. I'm fine with a conceptual ethical framework founded on selected axioms which are acknowledged to be subjective. I don't think that such a framework could be composed as even the simplest ethical decision would require a prohibitively complex calculus. I just find it ridiculous to say that such and such is an objective moral standard for the same reason I'd be confounded by someone saying "obviously my measurents are perfectly accurate... Just look at them!"
    The trick here is that not only is the system of morality incomplete (as are all systems, and you seem to be ok with this), but moral arguments are also typically incomplete. This is where morality would differ from mathematics, for instance. Assuming the axioms that allow for the incomplete system of math and logic, you can still create a complete proof within that system that 2 + 2 = 4. Science, by contrast, tends not to have such complete proofs. There are only arguments, theories, formulas, etc., that get better and better and more refined, or perhaps counter-argued and replaced. However, this does not preclude us from reasonably showing one scientist right and another wrong, one experimental finding better than another, one theory superior to another, and so on.
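
    To spell out that "2 + 2 = 4" example - a minimal sketch, assuming the usual Peano-style definitions of the numerals and the successor function S, plus the two addition axioms a + 0 = a and a + S(b) = S(a + b) - the complete proof is just a short chain of rewrites:

    $$
    \begin{aligned}
    2 + 2 &= 2 + S(1) && \text{definition: } 2 = S(1)\\
          &= S(2 + 1) && \text{axiom: } a + S(b) = S(a + b)\\
          &= S(2 + S(0)) && \text{definition: } 1 = S(0)\\
          &= S(S(2 + 0)) && \text{axiom: } a + S(b) = S(a + b)\\
          &= S(S(2)) && \text{axiom: } a + 0 = a\\
          &= S(3) = 4 && \text{definitions: } S(2) = 3,\ S(3) = 4
    \end{aligned}
    $$

    Every step cites an axiom or a definition and the chain terminates, which is the sense in which the proof is complete within its system - exactly the kind of closure that scientific and moral arguments rarely get.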

    Interestingly, the continued consistent progress and function and refinement of the statements within the system itself is an argument for the validity of the system. The fact that science proceeds as it does, allowing us to make arguments, theories, disprove them, etc., is itself an argument that we've got a decent scientific method going.

    All the same goes for morality. Moral arguments aren't likely to be complete proofs. But we can use reason, history, and science to continually refine our arguments, to make ever more convincing statements and theories and laws that aim to reduce suffering, and we are not precluded from making a solid case for moral superiority. A system like "burn it all" isn't likely to evolve and progress and create arguments and counter-arguments and an ever-increasing set of refined, inter-related statements. It seems that it would literally burn itself out, quite quickly, rendering itself meaningless. It wouldn't take long for someone to be like, "nuke the whole planet!" and then suddenly there's nothing left to burn, so "burn it all" doesn't make sense anymore. Again, this doesn't "prove" it's wrong, it's just a convincing argument.

    Now, if the above description is deemed to be "moral relativism" since it acknowledges that moral arguments are always relatively better or worse, but never objective fact, then fine, I'm a relativist. But as others have pointed out, all truth is relative in this fashion, and I don't think this is typically what is meant by moral relativism. Moral relativism as I've understood it has been a more agnostic stance on morality that typically denies much value in morality beyond studying different cultural norms; conversely, objectivism or realism has always assumed a base acceptance of uncertainty and fallibility in all things, but still asserts that morality can be reasoned about universally.

    Anyway, this thread has way too much snark going on. Don't say things like "go read XYZ..." as if that's an argument.


    Alright, I'll admit defeat. The only thing I can come up with against the objectivity of the axiom "a good action is one that causes the least suffering" (where suffering is defined incredibly subjectively) is to posit a different objective morality that is only held by a small number of people, which is a terrible argument for subjective morality.

    Aaaand I missed Feral, who pointed out the same thing.

    I certainly have other complaints about the formulation, but they involve how impossible suffering is to calculate (how many people need to hate me before killing me becomes moral?), and that doesn't attack the objectivity at all.

    Taber on
  • voodoosporkvoodoospork Registered User regular
    edited June 2011
    Yar wrote: »
    Happiness/suffering are even more basic in the proposed system than are consciousness, or the existence of the self, or of discrete individuals. We can be more certain that a group of multiple beings is a greater capacity for joy and suffering than we can be that it is a group of multiple beings at all. That's sort of what's required when you put joy and suffering right down at the base of the whole dogma.

    So, if you're referencing "combined state of multiple beings" then you've necessarily already implied an assumption that there is a larger scale of potential joy and suffering. There couldn't be multiple beings if that wasn't a multiplication of potential joy and suffering.

    But talking about specific actions and moral judgments and such, well yeah, that's what we do all the time in this world. And I'm afraid that the more practical a statement is for making decisions, the less universal and complete it will tend to be. The Golden Rule perhaps? The Formula of Universal Law? The Eightfold Path? Those are pretty good ones. We'll quickly veer off of the Subj-Obj thing and end up debating all morality.

    EDIT: I will say, though, that I firmly believe that humans are incapable of an act-level calculus, and not only that, but there is reasoning and evidence to support the notion that attempts at act-level calculus morality will tend to fail and create more suffering. As Doctor Manhattan said, there is no end [which I took as a very convincing criticism of trying to use "ends justify means" as a basis for moral decision making. All are means; there is no end].

    This is (the pro version of) the gist of my complaint as well. Not only is the use of "objective" in this sense creative in the extreme, but my experience is that all of our endeavor and expectation is strictly probabilistic when it comes to dealing with real people in the real world. Life is a lot more like Blood Bowl than chess from the human perspective, despite the physical reality of the situation.

    There are just too many fuzzy inputs in the equation; the end result is that a universal morality strikes me as being universally useless. Any answer to these questions that doesn't start with "it depends" leaves me wondering at which game the speaker is playing.

    voodoospork on
  • LoserForHireXLoserForHireX Philosopher King The AcademyRegistered User regular
    edited June 2011
    Quid wrote: »
    Quid wrote: »
    Why is it natural law that violence may only be applied to those who commit acts of violence?

    If you're approaching the question from a utilitarian perspective it doesn't make any sense at all. An act of violence can be expected to have such and such an impact on the victim, which may outweigh the moral debt, as it were, of practicing violence upon the attacker to stop the unfavorable outcome. But then the same thing could be said of any action that has a sufficiently negative outcome for the victim. If they're going to suffer enough, they're justified in using violence to stop the suffering if violence is the method least likely to cause undue suffering all around.

    I wasn't even considering it from that point of view. Merely the fact that there's a variety of social interactions where violence is perfectly acceptable, expected, and preferred. Granted, these situations probably occur less often and on a smaller scale than, say, war, muggings, etc., but given we're supposed to assume violence is inherently bad, there seems to be a disconnect.

    There is only really one ethical position in the mainstream literature that views any particular action as "inherently wrong."

    If one is a utilitarian, they need not accept that any action or set of actions is always wrong. Now, that's a very non-nuanced way of looking at it, and it's far more complex than that. However, most utilitarians seem to be rule utilitarians, so they adopt rules that apply to most cases, but not all. So a utilitarian might say "violence is usually wrong." However, they could make exceptions based on higher order rules, or for particular sets of circumstances.

    LoserForHireX on
    "The only way to get rid of a temptation is to give into it." - Oscar Wilde
    "We believe in the people and their 'wisdom' as if there was some special secret entrance to knowledge that barred to anyone who had ever learned anything." - Friedrich Nietzsche
  • Grid SystemGrid System Registered User regular
    edited June 2011
    But do they have logical proofs or are they making disguised personal assertions?
    I'm looking for the basics here, I would assume something that many people are so certain about would have a nice logical proof behind it. I was just wondering what it was.

    The difficulty is that there are a lot of different flavours of objective and subjective morality, so it's difficult to speak to any contradictions that may exist between given flavours. There are lots of specific instances of contradiction, though, such as cognitivism vs. non-cognitivism, where the debates are over whether moral claims relate to properties of things in the world, and whether moral statements can be true or false in the first place.

    Grid System on
  • CptHamiltonCptHamilton Registered User regular
    edited June 2011
    Feral wrote: »
    Moridin wrote: »
    I just can't put the existence of an objective reality and the existence of moral truths on the same epistemic footing.

    For the latter, you need to appeal to fuzzy things like goodness, badness, pleasure, and pain, none of which are necessary to accurately describe the world or the persistence of an objective reality, all of which are floating, rabidly disagreed-about mental states that humanity, miraculously, on average, happens to agree about in some key places.

    Hell, what if everyone was a sociopath? That doesn't strike me as something as pathological as "what if we're all in the matrix maaaan" and it completely alters the supposedly "objective" moral landscape.

    That bothers me.

    Well, Feral's moral axiom is pretty universal. If you break it down to "every thing capable of having a preference between two states would prefer to be in the state it prefers," with bad/suffering equated with "the un-preferred state" and good/hedonia equated with "the preferred state," then there's not really a basis for argument. I'm not sure that it's specific enough to be useful for anything, but it does seem to be fairly unimpeachable.

    Yeah, I wrote up a long post defending it in which I recognized that the quantification of good/bad experiences is problematic and admitted that comparing the experiences of two different conscious beings (particularly cross-species) is very problematic, and suggested that non-utilitarian ethical systems offer ways around that problem. But, as I said above, I think that would have been tangential - we can argue that the values I identified are not axiomatic, but how do you do that without appealing to some higher value?

    Also in regard to your other reply:
    For the purposes of discussing ethics and morality I'm not sure there's any point. Nobody here is suggesting that there are objective moral truths in the style of "Thou shalt not covet thy neighbor's wife" or "Wear thee the holy mitre," so none of it is downright ridiculous. "The majority agrees that it's true" is a good enough definition for 'objective' as it relates to morality; I just take issue with claims like "it's self-evidently true that killing is wrong." Honestly I'm not sure how you can position objectivist and relativist philosophers in opposite camps, unless the relativists happen to be moral nihilists or something. As far as I can tell moral relativism is more commonly descriptive than prescriptive, and there's nothing preventing one from utilizing an objectivist framework within some relative cultural context or employing known-culturally-derived elements of a moral system in an objectivist consideration. So I quit arguing about it to move on to questions that actually seemed interesting to me.
    Feral wrote: »
    BTW, regarding your other post, I would argue that a conscious being with no preferences is not a moral actor in any meaningful sense. "A conscious being with no preferences" might (arguably) describe a not-too-terribly-far-flung AI. Would you say that there is something morally wrong with turning off and deleting an AI that doesn't even care if it's alive?

    From a suffering/happiness standpoint? No, I don't think it would be immoral. If the being (AI or whatever) doesn't care then it would appear to be value neutral.

    CptHamilton on
    PSN,Steam,Live | CptHamiltonian
  • Grey PaladinGrey Paladin Registered User regular
    edited June 2011
    Consider that morality cannot exist in a void. In a world without any living beings capable of making choices, the term is meaningless.

    Suppose there is a universe where only two kinds of living beings exist: birds and dogs.
    Birds prefer seeds to other food, and cannot eat meat.
    Dogs prefer meat to other food, and cannot eat seeds.
    Given such a state, it is impossible to objectively state which of the two sources of food is 'superior' to the other.
    Dogs go extinct, and disappear from this world.
    Considering that the entire existence of ethics is now anchored on the birds, the preferences of the birds are the only ones that matter. Were the birds to go extinct, morality would lose all meaning and cease to exist.
    So, in a world where only birds exist, seeds can be said to be universally better than meat as far as food goes. This is the result of a process of elimination.

    For the same reason, suffering can be said to be wrong in our world in the sense that no one wishes to suffer. It is not so much that inflicting suffering is inherently bad, but that each being considers it inherently bad for itself to suffer. Thus, if you inflict suffering and universalize that infliction, you open yourself to others inflicting suffering on you under identical conditions, restricting and harming you. Ethics exist as a guide to a successful society.

    Grey Paladin on
    "All men dream, but not equally. Those who dream by night in the dusty recesses of their minds wake in the day to find that it was vanity; but the dreamers of the day are dangerous men, for they may act their dream with open eyes to make it possible." - T.E. Lawrence
  • Loren MichaelLoren Michael Registered User regular
    edited June 2011
    Feral wrote: »
    I don't see any reason why morality is any different. If you ask me "how do we know the law of identity is true?" I say, "How can you possibly disagree?" Likewise, if you try to ask me, "how do we know that suffering is bad and hedonia is good?" I say, "How can you possibly disagree?" Imagining a lifeform for which suffering is good and hedonia is bad is a bit like imagining a universe in which the law of identity is false.

    From these basic axioms, we can apply exactly the same logical and empirical process. Does this system maximize hedonia and minimize suffering? Is this system internally consistent? Is this system consistent with the evidence that we have regarding things in the external world that are relevant to hedonia and suffering?

    I certainly adhere to a rough utilitarian framework; that's the moral mental state that my personal history has led me to. Others might be more into filial piety though, or the integrity of their family name. Others might not care about people outside their immediate family, or about people who have or lack some arbitrary characteristic, like skin color.

    I can easily imagine a lifeform that is more concerned with honor than suffering or pleasure.

    But really, the same question applies to that lifeform as to one that follows something more utilitarian: Why do its preferences map onto oughts? Why are our preferences relevant to what should be done? I don't dispute that we tend to have some innate set of relatively common values, and it's difficult for me to imagine someone who doesn't share my set of priorities. But why should we abide by our innate tendencies, and why should we simply settle on fundamentally nebulous and conflicting ones like suffering and pleasure, rather than, say, making offspring that survive to spawn succeeding generations?

    Loren Michael on
  • MorninglordMorninglord I'm tired of being Batman, so today I'll be Owl.Registered User regular
    edited June 2011
    But do they have logical proofs or are they making disguised personal assertions?
    I'm looking for the basics here, I would assume something that many people are so certain about would have a nice logical proof behind it. I was just wondering what it was.

    No one has a proof. Mostly because no one does proofs for really any position they hold.

    Objective and Subjective are opposites. They can't both be true, that would be a contradiction. Now, Jothiki is all for contradictions, but most people aren't.

    Largely because once you have a contradiction, logically, anything follows.

    So if we accept some moral system that is both objective and subjective, the moon is made of cheese and the sky is green.
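
    (For reference, the step being invoked here - "from a contradiction, anything follows," ex falso quodlibet - can be written out in a few lines, assuming ordinary classical propositional logic with disjunction introduction and disjunctive syllogism:)

    $$
    \begin{aligned}
    1.\ & P && \text{assumption (one half of the contradiction)}\\
    2.\ & \lnot P && \text{assumption (the other half)}\\
    3.\ & P \lor Q && \text{from 1, disjunction introduction, with } Q \text{ arbitrary}\\
    4.\ & Q && \text{from 2 and 3, disjunctive syllogism}
    \end{aligned}
    $$

    Since Q is arbitrary, "the moon is made of cheese" follows as readily as anything else; paraconsistent logics block the last step, which is roughly the move a system of "consistent contradictions" would have to make.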

    I don't find this argument particularly self-evident. I could easily see a logical system built out of consistent contradictions. Consistency is what you want in a system, to my mind. It doesn't matter if something is contradictory as long as it is consistently so, and the contradiction does not actually affect anything other than the contradictory statements themselves.

    It doesn't "logically follow" because that is a generalisation from a specific to a universal.
    It might be very difficult to think in that way, but so what? It doesn't have to be easy.

    I generally avoid moral discussions because I can't find any underlying structure behind most of the arguments. Whether someone chooses one or the other mostly appears to be based on personal axioms in the end. It's really confusing to find someone talking about objectivity when they base it, fundamentally, on their personal beliefs. This is a contradiction, but obviously it's a consistent one that is allowed. So one contradiction is okay, but another is not? I can't get my head around it. So I don't know where to start.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.