
[Morality] Subjectivity vs Objectivity

    Yar Registered User regular
    edited June 2011
    I suppose for some understandings of honor, you might have a point. What is honorable kind of depends on the culture dealing with the honor though. What principle necessitates that suffering and joy need to be the foundation of honor?
    It's intrinsic to a rational concept of morality. It doesn't require a certain person in a certain culture to consciously agree. However, in any culture that calls it "honorable" to do XYZ, once reasonable people come to believe that XYZ does more harm than good over the long run, those people will inevitably state their case and ask, "where's the honor in that?"

    The whole idea of principled morality rests on a reasonable belief that we often aren't capable of act-level calculus of harm and good. However, certain forms of behavior can be held to even when they seem to do no good, and in the long run, holding to those behaviors will do more good than any attempt at act-level moral decisions. For many people, that means you simply don't concern yourself with act-level reasoning: tell the truth, uphold your honor, work hard, mind your courtesies, and so on, and you're likely to do far more good for the world than if you try to calculate a Machiavellian means to an end. The road to hell is paved with good intentions, and there is no end; all are means.
    You are more or less saying that there are objective facts, but that knowing them is relative to the observer. I don't think that's a contentious idea, but it doesn't, as far as I can tell, change the definitions in any meaningful way.
    Because you are splitting some pretty thin hairs when you try to distinguish between "relative to the observer" and "subjective."

    Morninglord (I'm tired of being Batman, so today I'll be Owl.) Registered User regular
    edited June 2011
    I found something interesting just now while doing the net surfatron.
    It's related to altruism.
    They got it to work in robots.

    Observe.

    http://www.wired.com/wiredscience/2011/05/robot-altruism/

    Robots in a Swiss laboratory have evolved to help each other, just as predicted by a classic analysis of how self-sacrifice might emerge in the biological world.

    “Over hundreds of generations … we show that Hamilton’s rule always accurately predicts the minimum relatedness necessary for altruism to evolve,” wrote researchers led by evolutionary biologist Laurent Keller of Switzerland’s University of Lausanne in Public Library of Science Biology. The findings were published May 3.

    Hamilton’s rule is named after biologist W.D. Hamilton who in 1964 attempted to explain how ostensibly selfish organisms could evolve to share their time and resources, even sacrificing themselves for the good of others. His rule codified the dynamics — degrees of genetic relatedness between organisms, costs and benefits of sharing — by which altruism made evolutionary sense. According to Hamilton, relatedness was key: Altruism’s cost to an individual would be outweighed by its benefit to a shared set of genes.

    .....

    In the new study, inch-long wheeled robots equipped with infrared sensors were programmed to search for discs representing food, then push those discs into a designated area. At the end of each foraging round, the computerized “genes” of successful individuals were mixed up and copied into a fresh generation of robots, while less-successful robots disappeared from the gene pool.

    Each robot was also given a choice between sharing points awarded for finding food, thus giving other robots’ genes a chance of surviving, or hoarding. In different iterations of the experiment, the researchers altered the costs and benefits of sharing; they found that, again and again, the robots evolved to share at the levels predicted by Hamilton’s equations.

    “A fundamental principle of natural selection also applies to synthetic organisms,” wrote the researchers. “These experiments demonstrate the wide applicability of kin selection theory.”

    I found it utterly fascinating.

    I have no idea how their programming works, but based on the above I can postulate a possible reason. The robots that choose to help others end up supporting both hoarders and other reciprocating robots. The hoarders eventually die off despite that support, because the reciprocating robots are so much stronger combined that they push the hoarders to the bottom of the rankings, where they are removed.
    But it could be something more complicated. I'm just speculating; I don't actually know Hamilton's rule or how they programmed it.
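    Looking it up just now, the rule itself seems simple enough on paper: altruism is supposed to be favored whenever relatedness times benefit outweighs cost (r x b > c). A toy check in Python, with numbers I made up:

    # Hamilton's rule: altruism is favored when r * b > c, where r is genetic
    # relatedness, b the benefit to the recipient, and c the cost to the altruist.
    def altruism_favored(r, b, c):
        return r * b > c

    # Full siblings (r = 0.5): giving up 1 unit so a sibling gains 3 pays off.
    print(altruism_favored(0.5, b=3, c=1))  # True
    # Total strangers (r = 0): the same sacrifice never pays, genetically.
    print(altruism_favored(0.0, b=3, c=1))  # False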

    hanskey Registered User regular
    edited June 2011
    They used learning AI agents to run the robots. The "genes" are actually behavioral rules that are programmed and learned, and instead of biological reproduction they used personality admixture to create a new generation of unique individuals. Past experience apparently shows the robots of each new iteration that cooperation more effectively mitigates the risk of death, and this becomes more apparent with each generation. This would actually explain many of the complex cooperative components of natural ecosystems that make them appear to be perfectly designed, like the movement of pollen by insects.

    An interesting extension of this experiment would be to have several different varieties of robots with different food sources, and to include predator robots. Then, after an ecosystem forms and population sizes are roughly stable, you could remove a "species" and observe the resulting ecosystem collapse. That might let us predict the collapse of real ecosystems and find ways of reducing the harm.
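    Even a crude version could be specified as a little food web, something like this sketch (the species names and structure here are invented for illustration):

    # A toy food-web spec for the proposed extension. Removing one species and
    # re-running the simulation would show how the rest of the web reacts.
    food_web = {
        "grazer":    {"eats": ["disc_food"], "eaten_by": ["predator"]},
        "scavenger": {"eats": ["disc_food", "carrion"], "eaten_by": ["predator"]},
        "predator":  {"eats": ["grazer", "scavenger"], "eaten_by": []},
    }

    def remove_species(web, species):
        # Drop the species and every reference to it, then see who starved.
        web = {name: dict(links) for name, links in web.items() if name != species}
        for links in web.values():
            links["eats"] = [s for s in links["eats"] if s != species]
            links["eaten_by"] = [s for s in links["eaten_by"] if s != species]
        return web

    print(remove_species(food_web, "grazer"))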

    Yar Registered User regular
    edited June 2011
    hanskey wrote: »
    They used learning AI agents to run the robots. The "genes" are actually behavioral rules that are programmed and learned, and instead of biological reproduction they used personality admixture to create a new generation of unique individuals. Past experience apparently shows the robots of each new iteration that cooperation more effectively mitigates the risk of death, and this becomes more apparent with each generation. This would actually explain many of the complex cooperative components of natural ecosystems that make them appear to be perfectly designed, like the movement of pollen by insects.

    An interesting extension of this experiment would be to have several different varieties of robots with different food sources, and to include predator robots. Then, after an ecosystem forms and population sizes are roughly stable, you could remove a "species" and observe the resulting ecosystem collapse. That might let us predict the collapse of real ecosystems and find ways of reducing the harm.
    These kinds of studies have been done for years and years; they're fairly simple from a computer science perspective. You have a grid and little blips that can move around it. Then you throw in whatever basic simulated entities you want: predators that kill blips, food the blips need, and so on. Blips get access to various "behaviors," such as movement patterns or the ability to send a signal to other blips. At the end of a round, the blips that survived "mate" with each other, each contributing half of its behaviors, randomly matched up to create a new blip with a new set of behaviors. There's usually also a "mutation" function that scrambles behaviors from the parents every now and then. The more food the parents got, the more they get to mate. As you run rounds, complex behavior sets emerge.
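    A bare-bones toy version of that loop (every name and number here is my own invention, not from any particular study) might look like this in Python:

    import random

    GENOME_LEN = 16      # each "behavior" is just one bit in a genome
    POP_SIZE = 60
    MUTATION_RATE = 0.02

    def random_blip():
        return [random.randint(0, 1) for _ in range(GENOME_LEN)]

    def food_gathered(blip):
        # Stand-in for the real grid simulation: fitness here is just how many
        # "useful" behaviors a blip carries. A real run would simulate movement,
        # predators, signals, etc., and count how much food each blip found.
        return sum(blip)

    def mate(a, b):
        # Each parent contributes behaviors position by position...
        child = [random.choice(pair) for pair in zip(a, b)]
        # ...and a "mutation" occasionally flips one.
        return [g ^ 1 if random.random() < MUTATION_RATE else g for g in child]

    population = [random_blip() for _ in range(POP_SIZE)]
    for generation in range(50):
        # Rank by food found; the bottom half "dies" and never mates.
        population.sort(key=food_gathered, reverse=True)
        survivors = population[: POP_SIZE // 2]
        population = survivors + [
            mate(random.choice(survivors), random.choice(survivors))
            for _ in range(POP_SIZE - len(survivors))
        ]

    print(max(food_gathered(b) for b in population))  # climbs toward GENOME_LEN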

    For example, one study had the predators, signals, and movement patterns I mentioned above. After several generations, the grid was populated with blips that would send out a signal when a predator neared, and that would run away when they saw a signal from another blip. Using the signal as a warning, and running away from it, were not behaviors programmed into the system; they simply evolved. It sounds like the study mentioned above was able to support a formula (one I've also never heard of) about animals cooperating with each other, likely by tweaking the coefficient that determines how much more the blips get to mate based on their food success (i.e., winner gets a small bonus vs. winner becomes the take-all alpha male).
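    That coefficient amounts to choosing how sharply mating odds scale with food. A toy knob (the naming is mine, not anything from the study):

    import random

    def mating_weight(food, alpha):
        # alpha near 0: winners get only a small edge over everyone else;
        # large alpha: the top forager hogs nearly all of the matings.
        return food ** alpha

    scores = [1, 2, 3, 10]                   # food gathered by four blips
    for alpha in (0.5, 1, 4):
        weights = [mating_weight(f, alpha) for f in scores]
        picks = random.choices(range(4), weights=weights, k=1000)
        print(alpha, picks.count(3) / 1000)  # share of matings won by the top blip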

    Morninglord (I'm tired of being Batman, so today I'll be Owl.) Registered User regular
    edited June 2011
    Eventually, as psychological computer modelling becomes even more advanced, you could probably start modelling the evolution of simple culture.
    That would be incredibly interesting. Do you get a robot Plato and a robot Aristotle if you set the right conditions? Or do you get something really weird and whack that we would never think of?
    I can't wait for this kind of stuff.
    A robot that conceives of an entity like god would be the best thing ever.

    Lachrymite Registered User regular
    edited June 2011
    Can anyone recommend some good jumping off points for reading some modern philosophers of ethics?

    I'm mostly into philosophy of mind, so I'm currently reading Churchland's Neurophilosophy, part of which touches on possible biological/neural mechanisms behind moral systems but, of course, doesn't get much into actual ethics.

    I naturally intuit a position similar to Loren Michael's (moral relativism with utilitarian leanings for personal pragmatic use), but I like to read as much as I can on different systems to acquire new data and possibly revise my position. I'm also reading Sam Harris's The Moral Landscape because some friends kept bitching at me to, but quite honestly, so far I think Harris is pretty terrible and a very muddy thinker.

    Yar Registered User regular
    edited June 2011
    Morninglord wrote: »
    Eventually, as psychological computer modelling becomes even more advanced, you could probably start modelling the evolution of simple culture.
    That would be incredibly interesting. Do you get a robot Plato and a robot Aristotle if you set the right conditions? Or do you get something really weird and whack that we would never think of?
    I can't wait for this kind of stuff.
    A robot that conceives of an entity like god would be the best thing ever.

    [image]
