Moral Relativism

  • ElJeffe Moderator, ClubPA mod
    edited November 2008
    MikeMan wrote: »
    ElJeffe wrote: »

    Doesn't this just imply that the moral reasoning happens beforehand? You, based on your upbringing or whatnot, and according to whatever thought you've given to the matter up until now, react to situation X by doing Y. Your reaction is instinctual. You rationalize it after the fact by saying, "I did Y because of blah blah yadda." That doesn't mean that you behaved completely at random and wanted to justify it after the fact; it means that you've ingrained your personal moral system to the point where you act on it automatically based on prior musings. The rational judgment still occurred, it just occurred before the act in question.
    At which point did a rational judgment occur, though?

    At some point along the prior continuum.

    Think of it like military training. You're taught to respond to certain situations with particular actions. The rationalization happens during the training: "Okay, if an enemy ever does this, you need to respond like this, because of X, Y and Z. Now do it, grunt!" And the action becomes internalized. When you're in the field and "this" happens, you respond instinctively. You don't stop and think, "Okay, I should do this," you just do it. Afterward, if pressed, you could give an explanation for it. Before, if asked, you could explain why you would act as such. In the moment, though, you just act. Because you've been taught to do so.

    Now apply that to moral dilemmas.

    I mean, I think it's pretty clear that people do at least sometimes apply reason to their moral actions. Because sometimes people do actually stop and think, "Should I do this?" Sometimes they think about these things a whole lot prior to acting.

    ElJeffe on
    I submitted an entry to Lego Ideas, and if 10,000 people support me, it'll be turned into an actual Lego set! If you'd like to see and support my submission, follow this link.
  • MrMonroe passed out on the floor now Registered User regular
    edited November 2008
    Sometimes. More often as a move to grant one group or another more power in a society, or to make decisions easier (cheat-sheet), or to further the expansion of human civilization.

    And what dictates easier decision making and the expansion of human civilization? Most of humanity has been following a pretty similar direction (with notable setbacks) in terms of becoming more "liberal" in the classical sense, that is, placing a higher emphasis on individual, equal rights and as much freedom of choice as possible. Is there a reason for that? I think so. It's because, of everything that's been tried so far, this works the best. We, as a society, are reasoning our way towards a firmer understanding of how the physical world constrains our behavior if we want to advance as a collective entity.
    We have morals as a way of dealing with each other, not the universe. We have tools and technology as ways of dealing with the universe.

    You're not seriously implying that humans aren't part of the universe, right? When you say "everything," that includes us.
    Ambition is what inspires someone to advance to a higher level of achievement, in this case advancement towards expanding human civilization. We can make better tools and technology if we can expand human civilization. But a moral code isn't enforceable once you're dealing with a large enough populace: morals on things irrelevant to societal function (cheat-sheet morals) are going to cause conflict that pushes us backwards (dark ages), so we build laws and a justice system that should touch only on the things that are necessary to a functional society, which is why lots of "immoral" things (lying, adultery, sodomy, consumption of coffee) are not illegal.

    I think you're missing my point here... I'm not suggesting that there is a perfect moral code that should be enforced everywhere. I think moral knowledge isn't ever complete as a Platonic Ideal, but rather infinite. All we can do is approach infinity. Having a slew of different people working from different cultures is probably the best way to accomplish that.

    Also, I think we've come far enough in our society to accept that sodomy isn't immoral.

    MrMonroe on
  • darthmix Registered User regular
    edited November 2008
    ElJeffe wrote: »
    MikeMan wrote: »
    ElJeffe wrote: »

    Doesn't this just imply that the moral reasoning happens beforehand? You, based on your upbringing or whatnot, and according to whatever thought you've given to the matter up until now, react to situation X by doing Y. Your reaction is instinctual. You rationalize it after the fact by saying, "I did Y because of blah blah yadda." That doesn't mean that you behaved completely at random and wanted to justify it after the fact; it means that you've ingrained your personal moral system to the point where you act on it automatically based on prior musings. The rational judgment still occurred, it just occurred before the act in question.
    At which point did a rational judgment occur, though?

    At some point along the prior continuum.

    Think of it like military training. You're taught to respond to certain situations with particular actions. The rationalization happens during the training: "Okay, if an enemy ever does this, you need to respond like this, because of X, Y and Z. Now do it, grunt!" And the action becomes internalized. When you're in the field and "this" happens, you respond instinctively. You don't stop and think, "Okay, I should do this," you just do it. Afterward, if pressed, you could give an explanation for it. Before, if asked, you could explain why you would act as such. In the moment, though, you just act. Because you've been taught to do so.
    But in military training, you can presumably learn the correct action, and perform it correctly later, without ever learning why it's correct. Your training course was developed through rational judgment, but that was the judgment of someone else. Similarly, social intuitionism seems to suggest that we can issue moral judgments without being fully aware of the reasons those moral systems are in place - that you can be an effective moral agent without being a fully rational one.

    darthmix on
  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited November 2008
    ElJeffe wrote: »
    I mean, I think it's pretty clear that people do at least sometimes apply reason to their moral actions. Because sometimes people do actually stop and think, "Should I do this?" Sometimes they think about these things a whole lot prior to acting.

    That.

    MrMister on
  • ElJeffe Moderator, ClubPA mod
    edited November 2008
    darthmix wrote: »
    ElJeffe wrote: »
    MikeMan wrote: »
    ElJeffe wrote: »

    Doesn't this just imply that the moral reasoning happens beforehand? You, based on your upbringing or whatnot, and according to whatever thought you've given to the matter up until now, react to situation X by doing Y. Your reaction is instinctual. You rationalize it after the fact by saying, "I did Y because of blah blah yadda." That doesn't mean that you behaved completely at random and wanted to justify it after the fact; it means that you've ingrained your personal moral system to the point where you act on it automatically based on prior musings. The rational judgment still occurred, it just occurred before the act in question.
    At which point did a rational judgment occur, though?

    At some point along the prior continuum.

    Think of it like military training. You're taught to respond to certain situations with particular actions. The rationalization happens during the training: "Okay, if an enemy ever does this, you need to respond like this, because of X, Y and Z. Now do it, grunt!" And the action becomes internalized. When you're in the field and "this" happens, you respond instinctively. You don't stop and think, "Okay, I should do this," you just do it. Afterward, if pressed, you could give an explanation for it. Before, if asked, you could explain why you would act as such. In the moment, though, you just act. Because you've been taught to do so.
    But in military training, you can presumably learn the correct action, and perform it correctly later, without ever learning why it's correct. Your training course was developed through rational judgment, but that was the judgment of someone else. Similarly, social intuitionism seems to suggest that we can issue moral judgments without being fully aware of the reasons those moral systems are in place - that you can be an effective moral agent without being a fully rational one.

    At first I was thinking, "Well, the analogy isn't perfect, but..." and then I thought about it some more, and now I think it was an even better analogy than I first planned. Go me.

    That said, I'll have to dig more into the evidence in support of social intuitionism.

    ElJeffe on
  • darthmix Registered User regular
    edited November 2008
    Having been familiar with it for all of 12 hours now, I'd say it gels very strongly with what I've long suspected: that morality as practiced in the wild is largely an organic process, with the role of formal reason greatly diminished compared to what we usually see in philosophy.

    darthmix on
  • ViolentChemistry __BANNED USERS regular
    edited November 2008
    MrMonroe wrote: »
    Sometimes. More often as a move to grant one group or another more power in a society, or to make decisions easier (cheat-sheet), or to further the expansion of human civilization.

    And what dictates easier decision making and the expansion of human civilization? Most of humanity has been following a pretty similar direction (with notable setbacks) in terms of becoming more "liberal" in the classical sense, that is, placing a higher emphasis on individual, equal rights and as much freedom of choice as possible. Is there a reason for that? I think so. It's because, of everything that's been tried so far, this works the best. We, as a society, are reasoning our way towards a firmer understanding of how the physical world constrains our behavior if we want to advance as a collective entity.

    You're confusing morality with physics now, but other than that yes.
    MrMonroe wrote: »
    We have morals as a way of dealing with each other, not the universe. We have tools and technology as ways of dealing with the universe.

    You're not seriously implying that humans aren't part of the universe, right? When you say "everything," that includes us.

    No, I'm not.
    MrMonroe wrote: »
    Ambition is what inspires someone to advance to a higher level of achievement, in this case advancement towards expanding human civilization. We can make better tools and technology if we can expand human civilization. But a moral code isn't enforceable once you're dealing with a large enough populace: morals on things irrelevant to societal function (cheat-sheet morals) are going to cause conflict that pushes us backwards (dark ages), so we build laws and a justice system that should touch only on the things that are necessary to a functional society, which is why lots of "immoral" things (lying, adultery, sodomy, consumption of coffee) are not illegal.

    I think you're missing my point here... I'm not suggesting that there is a perfect moral code that should be enforced everywhere. I think moral knowledge isn't ever complete as a Platonic Ideal, but rather infinite. All we can do is approach infinity. Having a slew of different people working from different cultures is probably the best way to accomplish that.

    Also, I think we've come far enough in our society to accept that sodomy isn't immoral.

    I think so too, yet we haven't actually done it. Since that belief is useful, it remains. That other stuff is special but doesn't really say anything except that you do think there's such a thing as moral truth (though not a perfect moral code; I'm not sure how you distinguish the two), which is silly. There's no truth, infinite or otherwise; it's a set of rules we make up that aren't really checked against anything except whether or not they drive the species to extinction.

    ViolentChemistry on
  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited November 2008
    darthmix wrote: »
    Having been familiar with it for all of 12 hours now, I'd say it gels very strongly with what I've long suspected: that morality as practiced in the wild is largely an organic process, with the role of formal reason greatly diminished compared to what we usually see in philosophy.

    I don't think that ethicists have to commit themselves to the claim that their theories accurately describe how most people actually come to make moral judgments. Instead, their theories describe how it is rational to make moral judgments, and the decisions of ordinary people are rational insofar as they conform to those theories. If a person is interested in making rational moral decisions, they will therefore reflectively bring their actions more into line with the given moral theory.

    MrMister on
  • darthmix Registered User regular
    edited November 2008
    MrMister wrote: »
    darthmix wrote: »
    Having been familiar with it for all of 12 hours now, I'd say it gels very strongly with what I've long suspected: that morality as practiced in the wild is largely an organic process, with the role of formal reason greatly diminished compared to what we usually see in philosophy.

    I don't think that ethicists have to commit themselves to the claim that their theories accurately describe how most people actually come to make moral judgments. Instead, their theories describe how it is rational to make moral judgments, and the decisions of ordinary people are rational insofar as they conform to those theories. If a person is interested in making rational moral decisions, they will therefore reflectively bring their actions more into line with the given moral theory.
    I guess I'm skeptical that anyone but the tiniest fraction of academics will ever apply that kind of rational moral decision making with any regularity in their day-to-day lives. If it's true that there's a complex and powerful organic process in place through which the larger culture arrives at moral ideas, fueled in various parts by evolution and social adaptation, then I don't see rationalist models ever overtaking it or contributing to the moral advancement of the culture in any real way; our predilection for the more naturalistic, intuitive processes of morality is too strong. I can see a culture trying to wrap itself in the label of this or that rationalist theory, to legitimize its moral ideas, while in fact it continues to arrive at its ideas through these buried intuitive systems.

    darthmix on
  • ElJeffe Moderator, ClubPA mod
    edited November 2008
    I don't think most people have occasion to apply rational moral decision making in their lives except very rarely. I mean, I didn't really make any explicit moral decisions at all today, unless you want to consider things like "Do I pull over here on my way to work and randomly throttle that pedestrian?" to fit the bill. Most people's standard day is pretty mundane. You wake up, go to work, do your job, come home, play with the wife and kids, go to bed. Genuine moral quandaries just don't pop up very frequently for most people, and when they do they're probably more along the lines of "I just fucked up at work - do I try to blame someone else?" for which most people already have pre-written responses. Either they're lying fuckers or they're not.

    ElJeffe on
  • Incenjucar VChatter Seattle, WA Registered User regular
    edited November 2008
    darthmix wrote: »
    I guess I'm skeptical that anyone but the tiniest fraction of academics will ever apply that kind of rational moral decision making with any regularity in their day-to-day lives. If it's true that there's a complex and powerful organic process in place through which the larger culture arrives at moral ideas, fueled in various parts by evolution and social adaptation, then I don't see rationalist models ever overtaking it or contributing to the moral advancement of the culture in any real way; our predilection for the more naturalistic, intuitive processes of morality is too strong. I can see a culture trying to wrap itself in the label of this or that rationalist theory, to legitimize its moral ideas, while in fact it continues to arrive at its ideas through these buried intuitive systems.

    This is why humanism. Most people don't have the time or interest to seek out the most rational behaviors on a regular basis, so turning to a moral code with the occasional bit of nuance and thought here and there has a use. It's just that whoever came up with the code needs to have actually thought things out, which is where things like ethical egoism come in.

    Read the Wiki on Nihilism, for instance. It notes that Nietzsche's idea of nihilism wasn't that it was an end in itself, but a clean slate on which to build a new behavioral system divorced from all the useless artifacts picked up from prior generations.

    --

    ElJeffe: I have a debate with myself every single day at work whether or not to tell the receptionist to stop singing because it drives me nuts. :P

    Incenjucar on
  • darthmix Registered User regular
    edited November 2008
    ElJeffe wrote: »
    I don't think most people have occasion to apply rational moral decision making in their lives except very rarely. I mean, I didn't really make any explicit moral decisions at all today, unless you want to consider things like "Do I pull over here on my way to work and randomly throttle that pedestrian?" to fit the bill. Most people's standard day is pretty mundane. You wake up, go to work, do your job, come home, play with the wife and kids, go to bed. Genuine moral quandaries just don't pop up very frequently for most people, and when they do they're probably more along the lines of "I just fucked up at work - do I try to blame someone else?" for which most people already have pre-written responses. Either they're lying fuckers or they're not.

    But let us still pause to notice that the mundane, non-quandary activities of most people's standard day are still heavily informed by social moral structures that mediate things like a good work ethic or how to raise your family or maintain a happy marriage. And we might also notice that on any day in which you don't commit murder you're still responding, by omission in this case, to a moral conditioning that your culture has adopted to protect you and your neighbor. You remain a moral agent through your actions, or inactions, regardless of whether you consider those actions in a philosophical context. This leads me to believe that the bulk of our moral existence is conducted organically, intuitively. There are isolated events in which you're maybe exercising strict rational analysis, but even there it seems likely that those decisions are colored by these less-rational (but still purposeful, if not perfectly so) social-intuitive moral influences.

    It just seems like the rationalist approach is always going to be confined to the ivory tower, repeating over and over a wishful-thinking model of morality that's never going to have any real force in the world. The real weight of our moral consciousness is always going to be behind these more organic, cultural processes that produce simple rules we can internalize easily, that feel like second nature.

    darthmix on
  • MrMonroe passed out on the floor now Registered User regular
    edited November 2008
    ViolentChemistry - You're ascribing to me opinions I do not hold, which is why this conversation keeps going. I'm pretty sure now you basically agree with me, with the main difference that you seem to think moral codes are shaped mostly through evolution rather than the reasoning of human beings. I am not arguing that there is moral truth in the universe the way you are describing it. There's no perfect moral truth that exists in a vacuum without rational, moral actors.

    What I am describing is this: because we live in the universe, our actions are constrained by its rules. Thus, as we reason our way towards better societies, we're grasping at a set of rules that works the best considering our physical situation and our social situation (essentially the same thing, really). The best approximation of a "perfect" (unattainable) moral code might be very different in a society in which everyone is about equally wealthy than in one in which a very few people hold the vast majority of the wealth. Any of a trillion other factors might influence exactly what that unattainable set of knowledge contains. "Morality" is the set of codes we come up with that help us all do better and better and, you know, not just all die. The "perfect morality," the infinite set of rules that would describe every possible situation, including situations rendered impossible by the existence of other situations therein described, doesn't exist in any rational set. It's an imaginary concept like limit(x->infinity). You can describe it rationally, and doing so helps us to understand how we might proceed when examining our own current moral codes, but that doesn't make it "real" in any sense.
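    As a side note, that limit(x->infinity) analogy can be made concrete with a textbook example (my illustration, not part of the original argument): a function can approach a value it never actually attains, the way successive moral codes might approach the "perfect" code without any code ever reaching it.

    ```latex
    % arctan approaches pi/2 as x grows, but never attains it for finite x:
    \lim_{x \to \infty} \arctan(x) = \frac{\pi}{2},
    \qquad \text{yet}\quad \arctan(x) < \frac{\pi}{2} \ \text{for every finite } x.
    ```

    The limit is perfectly well-defined and describable, which is what makes it useful as a direction to aim at, even though no finite input ever reaches it.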

    Essentially, my point is that while there's no universal truth, saying that the moral codes we create are essentially equal because they are all equally constructs is ridiculous, and saying there's no way to rationally examine the universe and produce a theory of morality which is more effective or less self-contradictory is just wrong. Some constructs are better than others, because they more closely conform to and take note of the physical and social limitations of the environment, and thus produce better results for their adherents. Extinction need not be the only test of viability, and societies need not continue on in their old codes until they die.



    Y'all can probably tell I really hated Plato's Republic. (even if it was fun to think about)

    MrMonroe on
  • ViolentChemistry __BANNED USERS regular
    edited November 2008
    The distinction is that I don't think that moral rules are the rules of the universe; I think they're completely human fabrications. All morals are equal in their truthiness or accuracy; it's not even that they're all 0 so much as that they have no inherent value. In order to compare the value of any two sets of morals we have to establish a third set, because otherwise we have no way of deciding what is good or bad.

    ViolentChemistry on
  • MrMonroe passed out on the floor now Registered User regular
    edited November 2008
    Yes, they are human fabrications.

    No, they are not all equal because some, in similar situations, cause a society to function demonstrably better than others.

    You said yourself they could be tested against whether they cause the society to become extinct. Even by that merest of concessions to reality, isn't that an objective test of one moral code being better than another? If you're defining a moral code as "a list of rules that allows society to function," then a set of rules that destroys the society is demonstrably worse than a set of rules which does not, yes?

    MrMonroe on
  • ViolentChemistry __BANNED USERS regular
    edited November 2008
    MrMonroe wrote: »
    Yes, they are human fabrications.

    No, they are not all equal because some, in similar situations, cause a society to function demonstrably better than others.

    How are you defining better without establishing a third moral standard to measure betterness by?
    MrMonroe wrote: »
    You said yourself they could be tested against whether they cause the society to become extinct. Even by that merest of concessions to reality, isn't that an objective test of one moral code being better than another? If you're defining a moral code as "a list of rules that allows society to function," then a set of rules that destroys the society is demonstrably worse than a set of rules which does not, yes?

    It's admittedly a subjective test. I could easily construct a moral code under which human extinction is the ultimate good. I went by its practical function rather than its intended goals because the details of its intended goals vary between codes.

    ViolentChemistry on
  • Scalfin __BANNED USERS regular
    edited November 2008
    MrMonroe wrote: »
    Yes, they are human fabrications.

    No, they are not all equal because some, in similar situations, cause a society to function demonstrably better than others.

    How are you defining better without establishing a third moral standard to measure betterness by?
    MrMonroe wrote: »
    You said yourself they could be tested against whether they cause the society to become extinct. Even by that merest of concessions to reality, isn't that an objective test of one moral code being better than another? If you're defining a moral code as "a list of rules that allows society to function," then a set of rules that destroys the society is demonstrably worse than a set of rules which does not, yes?

    It's admittedly a subjective test. I could easily construct a moral code under which human extinction is the ultimate good. I went by its practical function rather than its intended goals because the details of its intended goals vary between codes.

    I believe that there is an objective right and wrong, but that it would be the epitome of hubris to say that my morals are anything better than a best guess, and that the only one who knows what the true moral code is happens to be Schrodinger's cat.

    Scalfin on
    The rest of you, I fucking hate you for the fact that I now have a blue dot on this god awful thread.
  • ViolentChemistry __BANNED USERS regular
    edited November 2008
    Scalfin wrote: »
    MrMonroe wrote: »
    Yes, they are human fabrications.

    No, they are not all equal because some, in similar situations, cause a society to function demonstrably better than others.

    How are you defining better without establishing a third moral standard to measure betterness by?
    MrMonroe wrote: »
    You said yourself they could be tested against whether they cause the society to become extinct. Even by that merest of concessions to reality, isn't that an objective test of one moral code being better than another? If you're defining a moral code as "a list of rules that allows society to function," then a set of rules that destroys the society is demonstrably worse than a set of rules which does not, yes?

    It's admittedly a subjective test. I could easily construct a moral code under which human extinction is the ultimate good. I went by its practical function rather than its intended goals because the details of its intended goals vary between codes.

    I believe that there is an objective right and wrong, but that it would be the epitome of hubris to say that my morals are anything better than a best guess, and that the only one who knows what the true moral code is happens to be Schrodinger's cat.

    I don't see how there could be an objective right and wrong, at least in terms of morality. I mean yeah, 2+2=3 is objectively wrong, but that's a different kind of wrong.

    ViolentChemistry on
  • darthmix Registered User regular
    edited November 2008
    MrMonroe wrote: »
    You said yourself they could be tested against whether they cause the society to become extinct. Even by that merest of concessions to reality, isn't that an objective test of one moral code being better than another? If you're defining a moral code as "a list of rules that allows society to function," then a set of rules that destroys the society is demonstrably worse than a set of rules which does not, yes?

    It's admittedly a subjective test. I could easily construct a moral code under which human extinction is the ultimate good. I went by its practical function rather than its intended goals because the details of its intended goals vary between codes.

    I don't think human society is capable of adopting or taking seriously a moral system whose ultimate goal is its own extinction. My bet is that, since morality is a social adaptation, people instinctively understand that moral law is intended to serve some social good; to adopt a system whose ultimate good is destruction would be incompatible with that. It would be analogous to an animal choosing to die.

    I'm going to quote Jonathan Haidt here, from that essay I linked to earlier: "On our account, moral facts exist, but not as objective facts which would be true for any rational creature anywhere in the universe. Moral facts are facts only with respect to a community of human beings that have created them, a community of creatures that share a 'particular fabric and constitution,' as Hume said. We believe that moral truths are what David Wiggins calls 'anthropocentric truths,' for they are true only with respect to the kinds of creatures that human beings happen to be."

    darthmix on
  • ViolentChemistry __BANNED USERS regular
    edited November 2008
    darthmix wrote: »
    MrMonroe wrote: »
    You said yourself they could be tested against whether they cause the society to become extinct. Even by that merest of concessions to reality, isn't that an objective test of one moral code being better than another? If you're defining a moral code as "a list of rules that allows society to function," then a set of rules that destroys the society is demonstrably worse than a set of rules which does not, yes?

    It's admittedly a subjective test. I could easily construct a moral code under which human extinction is the ultimate good. I went by its practical function rather than its intended goals because the details of its intended goals vary between codes.

    I don't think human society is capable of adopting or taking seriously a moral system whose ultimate goal is its own extinction. My bet is that, since morality is a social adaptation, people instinctively understand that moral law is intended to serve some social good; to adopt a system whose ultimate good is destruction would be incompatible with that. It would be analogous to an animal choosing to die.

    I'm going to quote Jonathan Haidt here, from that essay I linked to earlier: "On our account, moral facts exist, but not as objective facts which would be true for any rational creature anywhere in the universe. Moral facts are facts only with respect to a community of human beings that have created them, a community of creatures that share a 'particular fabric and constitution,' as Hume said. We believe that moral truths are what David Wiggins calls 'anthropocentric truths,' for they are true only with respect to the kinds of creatures that human beings happen to be."

    Ra's Al Ghul doesn't care what you think about his morals. :P

    ViolentChemistry on
  • Options
    ElJeffeElJeffe Moderator, ClubPA mod
    edited November 2008
    MrMonroe wrote: »
    Yes, they are human fabrications.

    No, they are not all equal because some, in similar situations, cause a society to function demonstrably better than others.

    How are you defining better without establishing a third moral standard to measure betterness by?

    You establish some objective criteria by which to gauge the success of a moral system. It's not like you test competing scientific hypotheses by crafting a third hypothesis to compare them against.

    ElJeffe on
    I submitted an entry to Lego Ideas, and if 10,000 people support me, it'll be turned into an actual Lego set! If you'd like to see and support my submission, follow this link.
  • Options
    ViolentChemistryViolentChemistry __BANNED USERS regular
    edited November 2008
    ElJeffe wrote: »
    MrMonroe wrote: »
    Yes, they are human fabrications.

    No, they are not all equal because some, in similar situations, cause a society to function demonstrably better than others.

    How are you defining better without establishing a third moral standard to measure betterness by?

    You establish some objective criteria by which to gauge the success of a moral system. It's not like you test competing scientific hypotheses by crafting a third hypothesis to compare them against.

    How does that not constitute establishing a new moral code? And you're right, it's not like science at all. That's why the mathematics analogy fails.

    ViolentChemistry on
  • Options
    ElJeffeElJeffe Moderator, ClubPA mod
    edited November 2008
    ElJeffe wrote: »
    MrMonroe wrote: »
    Yes, they are human fabrications.

    No, they are not all equal because some, in similar situations, cause a society to function demonstrably better than others.

    How are you defining better without establishing a third moral standard to measure betterness by?

    You establish some objective criteria by which to gauge the success of a moral system. It's not like you test competing scientific hypotheses by crafting a third hypothesis to compare them against.

    How does that not constitute establishing a new moral code? And you're right, it's not like science at all. That's why the mathematics analogy fails.

    An axiom might be, for example, that a proper moral code maximizes happiness across the society. Or that it best propagates the genes of that society. Or whatever. Then the moral code can be tested based upon how well it meets those goals.

    "Maximize happiness!" is not a moral system.

    ElJeffe on
    I submitted an entry to Lego Ideas, and if 10,000 people support me, it'll be turned into an actual Lego set! If you'd like to see and support my submission, follow this link.
  • Options
    ViolentChemistryViolentChemistry __BANNED USERS regular
    edited November 2008
    ElJeffe wrote: »
    ElJeffe wrote: »
    MrMonroe wrote: »
    Yes, they are human fabrications.

    No, they are not all equal because some, in similar situations, cause a society to function demonstrably better than others.

    How are you defining better without establishing a third moral standard to measure betterness by?

    You establish some objective criteria by which to gauge the success of a moral system. It's not like you test competing scientific hypotheses by crafting a third hypothesis to compare them against.

    How does that not constitute establishing a new moral code? And you're right, it's not like science at all. That's why the mathematics analogy fails.

    An axiom might be, for example, that a proper moral code maximizes happiness across the society. Or that it best propagates the genes of that society. Or whatever. Then the moral code can be tested based upon how well it meets those goals.

    "Maximize happiness!" is not a moral system.

    Well, actually, it is. Utilitarianism. Anyway, "maximizing happiness is definitive of moral good" is a new moral system. Which is how you have to say it to determine which of the two is better.

    ViolentChemistry on
  • Options
    MorninglordMorninglord I'm tired of being Batman, so today I'll be Owl.Registered User regular
    edited November 2008
    My point about Social Intuitist theory is not that reasoning can't occur prior to the automatic judgement.
    It's that it allows you to indicate when people haven't done this but have picked up their judgements from another source through social learning without understanding. To me, if someone's argument is deconstructed to the point where they must outline their reasoning but they cannot give it, it shows they don't have an understanding of why that judgement occurred. So in that instance, their opinion must be suspect, and they should be asked to go away and think about it before playing with the big boys.

    It's useful because it gives more structure to an argument.

    I wouldn't be saying that morals don't exist or something, as that is an opinion and I don't want to do that. Sorry if I made myself unclear.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • Options
    MrMonroeMrMonroe passed out on the floor nowRegistered User regular
    edited November 2008
    Now hold up, ViolentChem.

    How are you defining morality? A system of laws which exists for no reason? A system of codes that has no utility whatsoever? If you're going to define morality as basically useless, why would you bother to judge it against whether it causes extinction or not?

    Here's the deal: I posit that morality is a system of codes we use to make our society function better. Thus a moral code that works better towards that end is better than one that doesn't. Even if a society is absolutely terrible and deserves to be destroyed by that metric, there's an ideal moral code that indicates that the individual should work to destroy that society.

    The physical laws of the universe and our social situation (which can be expressed purely as a physical situation) make certain codes work better than others at that explicit goal. Are you defining morals as existing purely in a vacuum? Why? What is the purpose of a moral code if it has no bearing on reality?

    I'm not asking you to make a definitive judgment on the question of "for what are moral codes best?" I'm asking you to accept that, no matter the answer to that question, there are better moral codes than others, and the result depends on how the moral code accepts the contours of the physical and social universe.

    Here is the important part:

    What you seem to be attempting is to define morality as wholly subjective; any use of morality is defined by that morality itself and is therefore useless. But this is a false assumption; any use of morality can be defined objectively by whether it succeeds in perpetuating what it claims to be good. If a moral code indicates that all humans should die, that code can be examined on the merits of whether it actually results in the death of all humans. If a moral code purports to advance human society, that code can be objectively examined on the basis of whether it actually makes people more well-off.

    MrMonroe on
  • Options
    YarYar Registered User regular
    edited December 2008
    MrMonroe wrote: »
    What you seem to be attempting is to define morality as wholly subjective; any use of morality is defined by that morality itself and is therefore useless. But this is a false assumption; any use of morality can be defined objectively by whether it succeeds in perpetuating what it claims to be good. If a moral code indicates that all humans should die, that code can be examined on the merits of whether it actually results in the death of all humans. If a moral code purports to advance human society, that code can be objectively examined on the basis of whether it actually makes people more well-off.
    When you said a "better" society, or to "advance" society, would that not be measured by the total happiness and sorrow of those within it? How else could it be measured if not that?

    By virtue of the fact that one person might ever be said to be "happier" than another, objectively so, it is therefore objectively, universally, and absolutely true that adherence to some set of ethical rules can and will result in greater happiness than will some other set. Individual moral decisions or evaluations certainly can, and should, take into account circumstances, including culture, but that does not mean that the entire set of them cannot still adhere to a universal standard of morality.

    Yar on
  • Options
    darthmixdarthmix Registered User regular
    edited December 2008
    Yar wrote: »
    MrMonroe wrote: »
    What you seem to be attempting is to define morality as wholly subjective; any use of morality is defined by that morality itself and is therefore useless. But this is a false assumption; any use of morality can be defined objectively by whether it succeeds in perpetuating what it claims to be good. If a moral code indicates that all humans should die, that code can be examined on the merits of whether it actually results in the death of all humans. If a moral code purports to advance human society, that code can be objectively examined on the basis of whether it actually makes people more well-off.
    When you said a "better" society, or to "advance" society, would that not be measured by the total happiness and sorrow of those within it? How else could it be measured if not that?

    By virtue of the fact that one person might ever be said to be "happier" than another, objectively so, it is therefore objectively, universally, and absolutely true that adherence to some set of ethical rules can and will result in greater happiness than will some other set. Individual moral decisions or evaluations certainly can, and should, take into account circumstances, including culture, but that does not mean that the entire set of them cannot still adhere to a universal standard of morality.

    I guess one problem with this is that happiness is not really a known quantity, so that very different moral systems, with drastically different goals, could describe their adherents as equally happy according to their different ideas of what comprises happiness. Is a constant state of indescribable ecstasy better than a quiet, contemplative fulfillment? Or maybe I decide that all existence really is suffering, and people are happier not existing at all. At some point happiness becomes just another way of saying that the moral system is meeting whatever arbitrary goal it sets for itself, instead of actually providing a definition of what that goal is.

    But my real problem with this way of thinking is that it appears to me to mischaracterize the process by which society arrives at its moral ideas. I suspect that a culture knows, at some basic level, whether it's working or not; it understands instinctively what goals its morality and social organization are intended to serve, in much the same way that an animal knows that it should go on living. But just as the animal isn't at pains to describe his life as the pursuit of a single abstract concept, human culture isn't obliged to do that either, and indeed it doesn't do that. The animal's life, after all, is not really an abstraction - it's a real thing, a tactile and complicated experience, and so too is our collective experience of living in society and relating to one another. People across cultures identify values that seem to operate independently of happiness - they value fairness, they value purity, in various aspects of their life. They don't reduce those values to a single value, and when we try to do so after the fact - when we say "all those values are really about pursuing happiness" - we've simply widened the definition of happiness to include those things. It's not that the statement is wrong; it's just that it distorts what morality really is, because it forgets the way morality actually works in culture. I don't see morality operating as a single-minded pursuit of any particular emotion or identifiable quality, and I don't think imagining that it is can really teach us anything about it.

    darthmix on
  • Options
    MorninglordMorninglord I'm tired of being Batman, so today I'll be Owl.Registered User regular
    edited December 2008
    darthmix wrote: »
    Yar wrote: »
    MrMonroe wrote: »
    What you seem to be attempting is to define morality as wholly subjective; any use of morality is defined by that morality itself and is therefore useless. But this is a false assumption; any use of morality can be defined objectively by whether it succeeds in perpetuating what it claims to be good. If a moral code indicates that all humans should die, that code can be examined on the merits of whether it actually results in the death of all humans. If a moral code purports to advance human society, that code can be objectively examined on the basis of whether it actually makes people more well-off.
    When you said a "better" society, or to "advance" society, would that not be measured by the total happiness and sorrow of those within it? How else could it be measured if not that?

    By virtue of the fact that one person might ever be said to be "happier" than another, objectively so, it is therefore objectively, universally, and absolutely true that adherence to some set of ethical rules can and will result in greater happiness than will some other set. Individual moral decisions or evaluations certainly can, and should, take into account circumstances, including culture, but that does not mean that the entire set of them cannot still adhere to a universal standard of morality.

    I guess one problem with this is that happiness is not really a known quantity, so that very different moral systems, with drastically different goals, could describe their adherents as equally happy according to their different ideas of what comprises happiness. Is a constant state of indescribable ecstasy better than a quiet, contemplative fulfillment? Or maybe I decide that all existence really is suffering, and people are happier not existing at all. At some point happiness becomes just another way of saying that the moral system is meeting whatever arbitrary goal it sets for itself, instead of actually providing a definition of what that goal is.

    But my real problem with this way of thinking is that it appears to me to mischaracterize the process by which society arrives at its moral ideas. I suspect that a culture knows, at some basic level, whether it's working or not; it understands instinctively what goals its morality and social organization are intended to serve, in much the same way that an animal knows that it should go on living. But just as the animal isn't at pains to describe his life as the pursuit of a single abstract concept, human culture isn't obliged to do that either, and indeed it doesn't do that. The animal's life, after all, is not really an abstraction - it's a real thing, a tactile and complicated experience, and so too is our collective experience of living in society and relating to one another. People across cultures identify values that seem to operate independently of happiness - they value fairness, they value purity, in various aspects of their life. They don't reduce those values to a single value, and when we try to do so after the fact - when we say "all those values are really about pursuing happiness" - we've simply widened the definition of happiness to include those things. It's not that the statement is wrong; it's just that it distorts what morality really is, because it forgets the way morality actually works in culture. I don't see morality operating as a single-minded pursuit of any particular emotion or identifiable quality, and I don't think imagining that it is can really teach us anything about it.

    You kind of have the right idea.

    But just because a bunch of individual humans working on individual and small scale shared principles all happen to have similar goals, this doesn't give those morals a heavier weight. That's technically the greatest appeal to the audience that could ever have existed. I reject this the same way I would any other logical fallacy.

    Culture, by itself, is a process, not an entity. It does not think. It is the individual human constituents that "think". Any kind of theory which forgets this will never understand a moral system.

    Although I may have misread you.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • Options
    YarYar Registered User regular
    edited December 2008
    darthmix wrote: »
    I guess one problem with this is that happiness is not really a known quantity, so that very different moral systems, with drastically different goals, could describe their adherents as equally happy according to their different ideas of what comprises happiness. Is a constant state of indescribable ecstasy better than a quiet, contemplative fulfillment? Or maybe I decide that all existence really is suffering, and people are happier not existing at all. At some point happiness becomes just another way of saying that the moral system is meeting whatever arbitrary goal it sets for itself, instead of actually providing a definition of what that goal is.
    It is true that at some point it is an axiom, an accepted truth that can't be proven. We can't measure happiness objectively or numerically, and we might argue about different kinds of happiness. But nevertheless all morality comes down to increasing happiness, and I've never heard an argument for relativism, or against absolutism, that was premised on people having different concepts of happiness. They generally take the form of "if they think female circumcision is cool, who are we to judge?" in which case I say bullshit, it causes and sustains much more harm and suffering than any joy it causes, and I'm sure we can measure that somehow if need be.
    darthmix wrote: »
    But my real problem with this way of thinking is that it appears to me to mischaracterize the process by which society arrives at its moral ideas. I suspect that a culture knows, at some basic level, whether it's working or not; it understands instinctively what goals its morality and social organization are intended to serve, in much the same way that an animal knows that it should go on living. But just as the animal isn't at pains to describe his life as the pursuit of a single abstract concept, human culture isn't obliged to do that either, and indeed it doesn't do that. The animal's life, after all, is not really an abstraction - it's a real thing, a tactile and complicated experience, and so too is our collective experience of living in society and relating to one another. People across cultures identify values that seem to operate independently of happiness - they value fairness, they value purity, in various aspects of their life. They don't reduce those values to a single value, and when we try to do so after the fact - when we say "all those values are really about pursuing happiness" - we've simply widened the definition of happiness to include those things. It's not that the statement is wrong; it's just that it distorts what morality really is, because it forgets the way morality actually works in culture. I don't see morality operating as a single-minded pursuit of any particular emotion or identifiable quality, and I don't think imagining that it is can really teach us anything about it.
    Yes, just as we instinctively learn to count and do math long before, and perhaps without ever, learning about the logical axioms in set theory that make such operations possible. But they are there. They are inevitably what a reasoned analysis will always reduce to.

    The common fallacy you are subscribing to is that you're assuming I'm positing hedonism. It is not really very hard to prove that any concept of fairness or purity is only held, right or wrong, in the interest of promoting overall happiness and preventing overall suffering, even if it doesn't work to those ends in each case.

    Yar on
  • Options
    darthmixdarthmix Registered User regular
    edited December 2008
    Yar wrote: »
    It is true that at some point it is an axiom, an accepted truth that can't be proven. We can't measure happiness objectively or numerically, and we might argue about different kinds of happiness. But nevertheless all morality comes down to increasing happiness, and I've never heard an argument for relativism, or against absolutism, that was premised on people having different concepts of happiness. They generally take the form of "if they think female circumcision is cool, who are we to judge?" in which case I say bullshit, it causes and sustains much more harm and suffering than any joy it causes, and I'm sure we can measure that somehow if need be.

    I think those kinds of relativist arguments are mostly strawmen. Even the rare species of philosopher who self-identifies as a moral relativist doesn't make the extreme claim that we can never judge the practices of another culture; the premise that there's no absolute standard of morality does not lead inevitably to that position. I was just pointing out that happiness is not really a useful metric for a universal standard of morality, because in order to make the statement "all morality comes down to increasing happiness" applicable in every instance you really have to define happiness as "that which all morality comes down to increasing." In the absence of an objective means of measuring happiness, or quantifying its different forms, the argument degrades into a rhetorical tautology. You can say it, and it isn't false, but it also isn't especially enlightening.

    Whether or not we can objectively measure happiness - and I'm not at all convinced we can do that - I don't think we need to. It doesn't appear to me that we've ever required a universal standard of anything before we went about judging others or ourselves. It's not through rational analysis, a consideration of a universal metric of happiness, that I'm able to see that female circumcision is abhorrent. Instead, I have deeply-ingrained moral reflexes - drawn from my own experience and the collective experience of the society that raised me - that sensitize me to things like violence and body mutilation and the oppression of women. It's those complicated impulses - not the pursuit of an abstract virtue - that form the basis of our moral structures. Saying "all morality is about increasing happiness" is a way of describing that long and nuanced process, and it's not an especially good or useful one, so I'm very skeptical of it as a universal truth. In practice it turns out to be not much more than an aura of infallibility that people wrap around whatever it is they already believe.

    Finally, if morality is really an expression of our social instinct - and I think it is - then is it really an accurate description to say that it's oriented toward maximizing happiness? Presumably it's at least as old as culture; I can imagine it being older, older even than language, with early humans picking up certain normative patterns that governed how they treated other members of their tribe. Was that all because they wanted to be happy? Or did it serve the same end that their tools served, and their lungs served, and their opposable thumbs served? They couldn't tell you what that end was, and I'm not even sure I can. I don't think morality tells you why; I think it tells you how. The why is probably larger, and more formless, and older than happiness. A happy tribe (but not too happy!) might be better positioned to survive, thrive, or whatever, but in that sense happiness is a tool rather than a goal.
    Yes just as we instinctively learn to count and do math long before, and perhaps without ever, learning about the logical axioms in set theory that make such operations possible. But they are there. They are inevitably what a reasoned analysis will always reduce to.
    What I've long argued is that to understand morality strictly through reasoned analysis is to misunderstand it. It is not the product of reasoned analysis, but of experience and practice. The moral system that you arrive at strictly through analysis is an inorganic product that cannot be embraced by a human culture, since humans understand instinctively where morality actually comes from. You can inform experience through reasoning, enlarge it, and in that way push on and change moral responses, hopefully for the better; but in the end those responses won't really reflect the conclusions of reasoned analysis, won't be intellectually consistent, won't be any number of other things we require reasoned analysis to be. The conclusions you form through reasoned analysis can be useful, and they can be true, but they will not be morality. Morality is something else.
    The common fallacy you are subscribing to is that you're assuming I'm positing hedonism. It is not really very hard to prove that any concept of fairness or purity is only held, right or wrong, in the interest of promoting overall happiness and preventing overall suffering, even if it doesn't work to those ends in each case.
    Hedonism has a lot of different meanings, and the term is loaded, I guess because it's associated with egoism. There are schools of altruistic/non-egoist hedonism that look very similar to what you're espousing here. But it's not really on that basis that I'm questioning you. I question whether morality was ever formed - consciously or unconsciously - purely with the pursuit of happiness, or any single definable quantity, in mind. We can invent systems that are formed with that in mind, but they will always be fundamentally unlike the thing that human society recognizes as morality.

    darthmix on
  • Options
    YarYar Registered User regular
    edited December 2008
    darthmix wrote: »
    It's those complicated impulses - not the pursuit of an abstract virtue - that form the basis of our moral structures.
    That depends on what you mean by the basis. Is the basis of mathematics the first three axioms of set theory, or is it humans learning to use our fingers or seashells to keep track of things? That's the distinction you're drawing here. And yes, morality is defined to be only that which seeks to increase happiness and decrease sorrow, and "Good" and even "Truth" ultimately are only that which increase satisfaction and joy and minimize disappointment and sorrow. But the ontological argument is probably not where we want to go right now.
    darthmix wrote: »
    Saying "all morality is about increasing happiness" is a way of describing that long and nuanced process, and it's not an especially good or useful one, so I'm very skeptical of it as a universal truth.
    It's an essential one if morality is to have much meaning at all beyond whimsy.
    darthmix wrote: »
    In practice it turns out to be not much more than an aura of infallibility that people wrap around whatever it is they already believe.
    I see no evidence of this.
    darthmix wrote: »
    Was that all because they wanted to be happy? Or did it serve the same end that their tools served, and their lungs served, and their opposable thumbs served?
    Such are one and the same, and obviously so to me. It hurts not to breathe, not to eat, not to keep warm.
    darthmix wrote: »
    They couldn't tell you what that end was, and I'm not even sure I can. I don't think morality tells you why; I think it tells you how. The why is probably larger, and more formless, and older than happiness. A happy tribe (but not too happy!) might be better positioned survive, thrive, or whatever, but in that sense happiness is a tool rather than a goal.
    Happiness is not a tool for anything. It is the one thing, the only thing, that is an end unto itself, and which is objectively and undeniably a good thing, and not a bad thing, in absence of any given context.

    But again, the ability to "tell" me isn't what I'm arguing. I agree that the "how" of morality is something that originates organically, perhaps even genetically, and not through logic and reason. But at some point we ask ourselves the "why." At some point we realize that there is reason and rationality to it, that things which we have always thought to be true (or moral) have themselves logical bases that we never thought about at all. And those bases, once drawn out, help us to refine, help us to justify and show which beliefs are perhaps not as true as we thought and which ones are.
    darthmix wrote: »
    The moral system that you arrive at strictly through analysis is an inorganic product that cannot be embraced by a human culture, since humans understand instinctively where morality actually comes from. You can inform experience through reasoning, enlarge it, and in that way push on and change moral responses, hopefully for the better; but in the end those responses won't really reflect the conclusions of reasoned analysis, won't be intellectually consistent, won't be any number of other things we require reasoned analysis to be. The conclusions you form through reasoned analysis can be useful, and they can be true, but they will not be morality. Morality is something else.
    I agree with this. We aren't omniscient. Our ability to rationally "solve" a moral dilemma is limited by many things, most particularly including our capacity to reason. We've filled in huge gaps, with things like war and religion and utilitarianism and intuition and emotion, and only through the progress of humankind are we able to work backwards through it all. What I disagree with is that this means there must be some other true source of morality. In practice, moral decisions arise from all sorts of instinct and social intuition and custom and culture. In theory, they all seek the most reasonable solution to the plight of human suffering.
    darthmix wrote: »
    I question whether morality was ever formed - consciously or unconsciously - purely with the pursuit of happiness, or any single definable quantity, in mind. We can invent systems that are formed with that in mind, but they will always be fundamenally unlike the thing that human society recognizes as morality.
    I guess what I'm saying to this point is that yes, intuitively, even if we have never thought it consciously, the root of every moral thought anyone ever had was, on some deep subconscious or even biological level, a statement about what path they felt would make existence a happier place for all those capable of happiness. And nothing else.

    Yar on
  • Options
    ViolentChemistryViolentChemistry __BANNED USERS regular
    edited December 2008
    MrMonroe wrote: »
    Now ho'ed up, ViolentChem.

    How are you defining morality? A system of laws which exists for no reason? A system of codes that has no utility whatsoever? If you're going to define morality as basically useless, why would you bother to judge it against whether it causes extinction or not?

    Here's the deal: I posit that morality is a system of codes we use to make our society function better. Thus a moral coda that works better towards that end is better than one that doesn't. Even if a society is absolutely terrible and deserves to be destroyed by that metric, there's an ideal moral code that indicates that the individual should work to destroy that society.

    The physical laws of the universe and our social situation (which can be expressed purely as a physical situation) make certain codas work better than others at that explicit goal. Are you defining morals as existing purely in a vacuum? Why? What is the purpose of a moral code if it has no bearing on reality?

    I'm not asking you to make a definitive judgment on the question of "for what are moral codes best?" I'm asking you to accept that, no matter the answer to that question, some moral codes are better than others, and the result depends on how well the moral code fits the contours of the physical and social universe.

    Here is the important part:

    What you seem to be attempting is to define morality as wholly subjective; any use of morality is defined by that morality itself and is therefore useless. But this is a false assumption; any use of morality can be defined objectively by whether it succeeds in bringing about what it claims to be good. If a moral code indicates that all humans should die, that code can be examined on the merits of whether it actually results in the death of all humans. If a moral code purports to advance human society, that code can be objectively examined on the basis of whether it actually makes people more well-off.

    A moral code doesn't have to conform to the existing contours of the social universe, though; it can go ahead and dynamite inconvenient geographical features of those contours. Because they are invariably based on a set of axiomatic assumptions, the validity of every moral code is natively subjective. If we make a bad law in physics we can test it against the physical world and come up with empirical reasons why it doesn't work, as the goal of all physical laws is the same, and it is a descriptive goal rather than a prescriptive one. In other words, any comparison to physics fails because physical laws are necessarily logically disprovable, whereas moral codes are natively impossible to disprove.

    ViolentChemistry on
  • Options
    MrMisterMrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in".Registered User regular
    edited December 2008
    darthmix wrote: »
    MrMister wrote: »
    I don't think that ethicists have to commit themselves to the claim that their theories accurately describe how most people actually come to make moral judgments. Instead, their theories accurately describe how it is rational to make moral judgments, and the decisions of ordinary people are rational insofar as they conform to those theories. If a person is interested in making rational moral decisions, they will therefore reflectively bring their actions more into line with the given moral theory.
    I guess I'm skeptical that anyone but the tiniest fraction of academics will ever apply that kind of rational moral decision making with any regularity in their day-to-day lives. If it's true that there's a complex and powerful organic process in place through which the larger culture arrives at moral ideas, fueled in various parts by evolution and social adaptation, then I don't see rationalist models ever overtaking it or contributing to the moral advancement of the culture in any real way; our predilection for the more naturalistic, intuitive processes of morality is too strong. I can see a culture trying to wrap itself in the label of this or that rationalist theory, to legitimize its moral ideas, while in fact it continues to arrive at its ideas through these buried intuitive systems.

    First off, I think you are simply wrong to claim that ethics, as practiced as a discipline, has no substantial impact on our actual practice. For instance, consider the work of Jeremy Bentham, the founder of Utilitarianism:
    The philosophy of Utilitarianism influenced many of the social reforms in Great Britain during the early half of the nineteenth century. The name most frequently associated with Utilitarianism is that of Jeremy Bentham. Bentham's philosophical principles extended into the realm of government. These principles have been associated with several reform acts entered into English law such as the Factory Act of 1833, the Poor Law Amendment Act of 1834, the Prison Act of 1835, the Municipal Corporations Act of 1835, the Committee on Education in 1839, the Lunacy Act of 1845, and the Public Health Act of 1845.

    Similarly, the writings of Locke influenced the writers of the American Constitution, and Peter Singer's work has greatly strengthened the animal rights movement. You will find few places with more vegetarians than a philosophy department--it turns out that exposure to ethical theories does change people's behavior in predictable ways.

    You could claim that the rational arguments of Bentham, Locke, and Singer didn't actually change people's moral code. Instead, people's moral code shifted as part of some underlying cultural change that happened at the same time. You could even say that those philosophers' writings were an expression of that cultural shift, rather than a cause of it. However, I don't see any compelling reason to accept that view. You might as well say that my typing of this message was not caused by the movement of my fingers, but instead by a cultural system. Sure, in some sense that might be true--it depends on how you define a cultural system and how you define causality, as well as on some almost impossible-to-test empirical assumptions about how such cultural systems function. But regardless, even if it turns out to be true that a cultural system caused the movement of my fingers, that's a completely worthless fact to me as an individual. It has absolutely no potential to inform my actions or change my evaluation of the world in a meaningful way.

    Furthermore, I think you are correct in your claim that few philosophers who endorse relativism also endorse the 'anything-goes' attitude that we cannot ever pass judgment on members of other cultures for their practices. However, the fact that they do not endorse the 'anything-goes' attitude does not mean that their theories don't entail it. Of course it depends on the specific formulation the relativist takes, but in general I suspect that their theories do entail an 'anything-goes' attitude, regardless of whether they like to tell themselves otherwise.

    MrMister on
  • Options
    YarYar Registered User regular
    edited December 2008
    Either anything goes, or there is moral truth. There is no middle there. There may be some compromises that entail both, though, such as culture sometimes being a circumstance that rationally affects how a moral decision is made.

    Yar on
  • Options
    IncenjucarIncenjucar VChatter Seattle, WARegistered User regular
    edited December 2008
    Right and wrong isn't built into the fabric of the universe. That doesn't mean we can't create a social contract to make up for that lack. It just means we can't appeal to some outside force about it.

    Incenjucar on
  • Options
    MorninglordMorninglord I'm tired of being Batman, so today I'll be Owl.Registered User regular
    edited December 2008
    Yar wrote: »
    Either anything goes, or there is moral truth. There is no middle there. There may be some compromises that entail both, though, such as culture sometimes being a circumstance that rationally affects how a moral decision is made.

    No fucking way mate.

    No fucking way do you get to describe it as an on/off logic switch.

    Get your head out of your ass.

    Anything goes, there is moral truth, or people are interactively working out the reasoning that works best for both their own self-interest and that of the people around them.

    I'll let you guess which one psych has all the evidence for.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • Options
    IncenjucarIncenjucar VChatter Seattle, WARegistered User regular
    edited December 2008
    Eh.

    Existence or non-existence is in fact something that can be yes/no.

    Incenjucar on
  • Options
    MorninglordMorninglord I'm tired of being Batman, so today I'll be Owl.Registered User regular
    edited December 2008
    Incenjucar wrote: »
    Eh.

    Existence or non-existence is in fact something that can be yes/no.

    So what.

    You can come up to me and convince me, logically, that morals don't exist. But if you try to rob me, I'm still going to fucking beat your head in.

    Let's talk about morals in terms of the real world. I don't like morality reasoning topics that go off into pointless fairyland. I guess you can write that off as a significant bias on my part, if you like.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • Options
    IncenjucarIncenjucar VChatter Seattle, WARegistered User regular
    edited December 2008
    Values exist and are even something one could measure, though they vary over time, environment, and other details, while people tend to assume morals are some kind of absolute. Values are a much more useful thing to work with.

    Incenjucar on