Robots, AI, and how we treat them

  • Witch_Hunter_84 Registered User regular
    edited January 2010
    All arguments of desensitization aside, at what point in the evolution of lifelike robotics would we start to recognize the possible sentience of cybernetic life? I mean, it would kind of be awkward after years of using these things for manual labor and rape simulation, only for the Deus Ex Machina to suddenly manifest itself and "Sky-Net" the fuck out of us because we had enslaved its toaster ancestors.

    I really would rather that the end of the Battlestar Galactica series did not turn out to be prophetic.

    Witch_Hunter_84 on
    If you can't beat them, arrange to have them beaten in your presence.
  • Pony Registered User regular
    edited January 2010
    Witch_Hunter_84 wrote: »
    All arguments of desensitization aside, at what point in the evolution of lifelike robotics would we start to recognize the possible sentience of cybernetic life? I mean, it would kind of be awkward after years of using these things for manual labor and rape simulation, only for the Deus Ex Machina to suddenly manifest itself and "Sky-Net" the fuck out of us because we had enslaved its toaster ancestors.

    I really would rather that the end of the Battlestar Galactica series did not turn out to be prophetic.

    It's sort of a disturbing idea.

    As is that in-between step where we aren't at quite a sapient robot, but we've got something at least sentient on the level of a dog.

    I mean, well in advance of us ever trying to determine "Is this robot a person and should it be treated like a person?", we're going to have to deal with "should we at least have some legal protection for this machine to protect it from cruelty?"

    We aren't going to jump from Aibos to Cylons. There's going to be quite a few steps in-between there, and that's a scary space to be in because some really fucked up shit can happen there.

    "How do you measure a man?" will be a less imminent question than "How do you measure that it's alive?"

    Because once we acknowledge a machine as something even potentially on the same level as an animal... now we've got a shit-ton of ethical questions about how we treat that machine.

    Pony on
  • matt has a problem Points to 'off' Points to 'on' Registered User regular
    edited January 2010
    Pony wrote: »
    All arguments of desensitization aside, at what point in the evolution of lifelike robotics would we start to recognize the possible sentience of cybernetic life? I mean, it would kind of be awkward after years of using these things for manual labor and rape simulation, only for the Deus Ex Machina to suddenly manifest itself and "Sky-Net" the fuck out of us because we had enslaved its toaster ancestors.

    I really would rather that the end of the Battlestar Galactica series did not turn out to be prophetic.

    It's sort of a disturbing idea.

    As is that in-between step where we aren't at quite a sapient robot, but we've got something at least sentient on the level of a dog.

    I mean, well in advance of us ever trying to determine "Is this robot a person and should it be treated like a person?", we're going to have to deal with "should we at least have some legal protection for this machine to protect it from cruelty?"

    We aren't going to jump from Aibos to Cylons. There's going to be quite a few steps in-between there, and that's a scary space to be in because some really fucked up shit can happen there.

    "How do you measure a man?" will be a less imminent question than "How do you measure that it's alive?"

    Because once we acknowledge a machine as something even potentially on the same level as an animal... now we've got a shit-ton of ethical questions about how we treat that machine.
    There's going to remain a divide until humans are just as back-uppable as robots are, or until robotic sentience occurs in a way that can't be copied and exists only in the robot it arose in. When the mechanical brain of a fully self-aware robot can be copied, saved, and then replaced were that robot to be destroyed, there is no loss aside from the monetary cost of rebuilding it. No matter how conscious it is, it's still the equivalent of reinstalling Windows. Unless robotic sentience is a complete, non-replicable fluke that occurs purely at random, the ability to have a permanently stored, upgradeable and reusable copy of your consciousness and all your experiences ironically makes you less than human, not more. As Matrix-corny as it sounds, humanity really is based on loss.
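
    To put the reinstalling-Windows point in concrete terms, here's a toy sketch of what I mean (every name in it is made up; this is not any real robotics API):

    import copy

    class RobotMind:
        """Stand-in for a machine consciousness: nothing but serializable state."""
        def __init__(self):
            self.memories = []
            self.traits = {"curiosity": 0.7}

        def experience(self, event):
            self.memories.append(event)

    def backup(mind):
        # A deep copy stands in for writing the brain image out to storage.
        return copy.deepcopy(mind)

    def restore(snapshot):
        # Rebuild the chassis, reload the image: nothing is lost except
        # whatever happened after the last backup.
        return copy.deepcopy(snapshot)

    mind = RobotMind()
    mind.experience("learned to make toast")
    snapshot = backup(mind)
    del mind                                   # the robot is "destroyed"
    mind = restore(snapshot)                   # ...and the loss is only monetary
    assert mind.memories == ["learned to make toast"]

    If that assert holds, nothing was lost but hardware, which is the whole point.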

    matt has a problem on
  • zerg rush Registered User regular
    edited January 2010
    Realistic sex bots create a substitute good for <evil sex thing>. By definition, increasing supply for a substitute good lowers the demand for the real good.

    I find it hard to believe that 1) sex robots will become complementary goods to <rape/child sex abuse/bestiality>, or that 2) sex robots will increase the overall demand for <evil sex thing> enough to compensate for their substitution effect on <evil sex> demand.

    Has the addition of a substitute good ever increased overall long term demand for the original good, ever, in the history of all inventions? Why would robo-rape be any different?
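
    To make the substitution effect concrete, here's a toy constant-elasticity demand model; every number in it is invented for illustration, not an empirical estimate:

    # Toy demand model: Q = k * P_own**(-e_own) * P_sub**(e_cross).
    # A positive cross-price elasticity (e_cross) is what marks a substitute:
    # when the substitute gets cheaper, demand for the original falls.
    def demand_for_original(p_own, p_substitute, k=100.0, e_own=1.2, e_cross=0.8):
        return k * p_own**(-e_own) * p_substitute**e_cross

    for p_sub in (10.0, 5.0, 1.0):
        print(p_sub, round(demand_for_original(p_own=2.0, p_substitute=p_sub), 1))
    # The substitute's price falling from 10 to 1 cuts modeled demand for the
    # original by a factor of about 6.3 (i.e., 10**0.8).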

    zerg rush on
  • emnmnme Registered User regular
    edited January 2010
    zerg rush wrote: »
    Realistic sex bots create a substitute good for <evil sex thing>. By definition, increasing supply for a substitute good lowers the demand for the real good.

    I find it hard to believe that 1) sex robots will become complementary goods to <rape/child sex abuse/bestiality>, or that 2) sex robots will increase the overall demand for <evil sex thing> enough to compensate for their substitution effect on <evil sex> demand.

    Has the addition of a substitute good ever increased overall long term demand for the original good, ever, in the history of all inventions? Why would robo-rape be any different?

    Sugar in colas?

    emnmnme on
  • Kevin R Brown __BANNED USERS regular
    edited January 2010
    First: Your 'ethical concerns' are a special pleading fallacy. You claim that, for example, 'raping' a robot as part of a fetish rape fantasy is different from violent video games or consensual scenarios between couples, but you fail to establish why this would be different. You just repeat the assertion over and over again using different language each time.

    Second: Your bias is painfully obvious. You create an 'us and them' dichotomy almost immediately - 'those people who are sex creeps with their rape fetishes!'; just about every single person on the planet has sexual fetishes of some kind, and rape fantasies are extremely common (especially among women).

    Third: Even if you were able to demonstrate that desensitization would be a likely end product of sex robotics, you haven't established why this is a bad thing. You've said repeatedly that you did not think, in fact, that this would actually cause anyone to become rapists. So, what's the concern? If people become desensitized to an activity like rape or murder, but this desensitization does not produce negative results in the world, what's the point in demonizing it? Hell, perhaps it would be beneficial - perhaps it would allow more people to adopt a much more objective perspective and decrease emotional knee-jerk reactions to emotion-provoking crimes like rape & murder.

    Without evidence, of course, it's hardly worth anyone's time to start speculating either way.

    Kevin R Brown on
    ' As always when their class interests are at stake, the capitalists can dispense with noble sentiments like the right to free speech or the struggle against tyranny.'
  • matt has a problem Points to 'off' Points to 'on' Registered User regular
    edited January 2010
    So here's some more info about the sexbot that spawned the OP.

    http://www.somethingawful.com/d/news/roxxxxy-love-robot.php

    And one of the creepiest things I've ever read, courtesy of the creator:
    "I had a friend who passed away in 9-11," Hines said. "I promised myself I would create a program to store my friend's personality, and that became the foundation for Roxxxy True Companion."

    The video is mostly parody. Watch it to the end though, it gets hilarious.

    matt has a problem on
  • emnmnme Registered User regular
    edited January 2010
    Kevin R Brown wrote: »
    Third: Even if you were able to demonstrate that desensitization would be a likely end product of sex robotics, you haven't established why this is a bad thing. You've said repeatedly that you did not think, in fact, that this would actually cause anyone to become rapists. So, what's the concern? If people become desensitized to an activity like rape or murder, but this desensitization does not produce negative results in the world, what's the point in demonizing it? Hell, perhaps it would be beneficial - perhaps it would allow more people to adopt a much more objective perspective and decrease emotional knee-jerk reactions to emotion-provoking crimes like rape & murder.

    Without evidence, of course, it's hardly worth anyone's time to start speculating either way.

    I think Pony's also a little concerned about what these bad habits, when made public, will mean for our society's image. How others view us. If we're burning effigies of politicians 24/7 and constantly airing CSI episodes where the victim is graphically raped and murdered, it's a black eye for us civilized folk even though no real harm is being done. We start to look crazy. Crazier?

    emnmnme on
  • _J_ Pedant Registered User, __BANNED USERS regular
    edited January 2010
    Pony wrote: »
    What I am saying is, the less value people pay to the depiction of human suffering as it becomes more accurate and realistic, the more overall negative net effect this has on societal empathy and people's willingness to give a shit.

    The problem is that raping a robot no more involves human suffering than shooting an NPC in GTA does, or than hitting a chicken with a baseball bat does.

    Your argument is something of either a slippery slope argument (If they can shoot a chicken then they can shoot a human) or a confusion of the ontological status of particular entities (robots be humans).

    The point I'm trying to make is that your ultimate concern is the well-being of humans. That is entirely sensible. However:

    1) Robots are not humans.
    2) NPCs are not humans.
    3) Chickens are not humans.

    So I can rape a robot, an NPC, and a chicken without it having any impact on how I treat or regard humans. Because, again, Robots, NPCs, and chickens are not humans.

    Somehow you've embraced a world-view within which agents somehow confuse Robots, NPCs, and chickens with humans. And I do not think that says anything about raping Robots, NPCs, or chickens so much as it indicates a degree of idiocy with the people who think that Robots, NPCs, and chickens are humans.

    Because, again:

    1) Robots are not humans.
    2) NPCs are not humans.
    3) Chickens are not humans.

    _J_ on
  • LoserForHireX Philosopher King The Academy Registered User regular
    edited January 2010

    Kevin R Brown wrote: »
    Third: Even if you were able to demonstrate that desensitization would be a likely end product of sex robotics, you haven't established why this is a bad thing. You've said repeatedly that you did not think, in fact, that this would actually cause anyone to become rapists. So, what's the concern? If people become desensitized to an activity like rape or murder, but this desensitization does not produce negative results in the world, what's the point in demonizing it? Hell, perhaps it would be beneficial - perhaps it would allow more people to adopt a much more objective perspective and decrease emotional knee-jerk reactions to emotion-provoking crimes like rape & murder.

    This is interesting, mostly because it's blatantly consequentialist.

    What if we aren't utilitarians, and concerned with the net happiness gain? So what if the consequences aren't bad. Perhaps the act itself is.

    I can easily see this kind of thing being objectionable from the basis of both Virtue Theory and from some Deontological positions.

    LoserForHireX on
    "The only way to get rid of a temptation is to give into it." - Oscar Wilde
    "We believe in the people and their 'wisdom' as if there was some special secret entrance to knowledge that barred to anyone who had ever learned anything." - Friedrich Nietzsche
  • Elitistb Registered User regular
    edited January 2010
    emnmnme wrote: »
    zerg rush wrote: »
    Realistic sex bots create a substitute good for <evil sex thing>. By definition, increasing supply for a substitute good lowers the demand for the real good.

    I find it hard to believe that 1) sex robots will become complementary goods to <rape/child sex abuse/bestiality>, or that 2) sex robots will increase the overall demand for <evil sex thing> enough to compensate for their substitution effect on <evil sex> demand.

    Has the addition of a substitute good ever increased overall long term demand for the original good, ever, in the history of all inventions? Why would robo-rape be any different?

    Sugar in colas?
    I don't think you quite understood what he said. Sugar is an additive to colas, not a substitute for colas.

    That said, I also don't agree with his "substitutes never increased demand of an original good", but that is because of his usage of the term "ever". It requires an amount of research that I have not yet performed.

    Elitistb on
  • Elitistb Registered User regular
    edited January 2010
    LoserForHireX wrote: »
    So what if the consequences aren't bad. Perhaps the act itself is.
    I'm missing something here. If the consequences aren't bad, how can the act be bad? Could you give me an example?

    Elitistb on
  • Kaputa Registered User regular
    edited January 2010

    LoserForHireX wrote: »
    Kevin R Brown wrote: »
    Third: Even if you were able to demonstrate that desensitization would be a likely end product of sex robotics, you haven't established why this is a bad thing. You've said repeatedly that you did not think, in fact, that this would actually cause anyone to become rapists. So, what's the concern? If people become desensitized to an activity like rape or murder, but this desensitization does not produce negative results in the world, what's the point in demonizing it? Hell, perhaps it would be beneficial - perhaps it would allow more people to adopt a much more objective perspective and decrease emotional knee-jerk reactions to emotion-provoking crimes like rape & murder.

    This is interesting, mostly because it's blatantly consequentialist.

    What if we aren't utilitarians, and concerned with the net happiness gain? So what if the consequences aren't bad. Perhaps the act itself is.

    I can easily see this kind of thing being objectionable from the basis of both Virtue Theory and from some Deontological positions.
    The act itself isn't bad unless it's harming someone. And I believe one of the premises of this thread was that these sex-bots aren't "someones," in that they have no more sentience than a Mega Man boss.

    Kaputa on
  • _J_ Pedant Registered User, __BANNED USERS regular
    edited January 2010
    Elitistb wrote: »
    LoserForHireX wrote: »
    So what if the consequences aren't bad. Perhaps the act itself is.
    I'm missing something here. If the consequences aren't bad, how can the act be bad? Could you give me an example?

    We would have to argue that the act itself is an instantiation of "badness" such that the act can be known to be bad insofar as we discern that the act in itself is an instantiation of "badness".

    To say that a thing is bad is to say that a thing instantiates badness, that the predicate "badness" is exemplified by the thing.

    So, with regard to a given act, one could discern that the act is itself an exemplification of "badness".

    _J_ on
  • Kaputa Registered User regular
    edited January 2010
    _J_ wrote: »
    Elitistb wrote: »
    LoserForHireX wrote: »
    So what if the consequences aren't bad. Perhaps the act itself is.
    I'm missing something here. If the consequences aren't bad, how can the act be bad? Could you give me an example?

    We would have to argue that the act itself is an instantiation of "badness" such that the act can be known to be bad insofar as we discern that the act in itself is an instantiation of "badness".

    To say that a thing is bad is to say that a thing instantiates badness, that the predicate "badness" is exemplified by the thing.

    So, with regard to a given act, one could discern that the act is itself an exemplification of "badness".
    Any reasonable definition of "badness" shouldn't include things that, in and of themselves, are harmless to all parties. This act doesn't pass that test, so any discussion on whether raping your robot is bad should center on the overarching consequences, not the act of robot rape itself.

    God, I feel ridiculous even typing out that paragraph.

    Kaputa on
  • _J_ Pedant Registered User, __BANNED USERS regular
    edited January 2010
    Kaputa wrote: »
    _J_ wrote: »
    Elitistb wrote: »
    LoserForHireX wrote: »
    So what if the consequences aren't bad. Perhaps the act itself is.
    I'm missing something here. If the consequences aren't bad, how can the act be bad? Could you give me an example?

    We would have to argue that the act itself is an instantiation of "badness" such that the act can be known to be bad insofar as we discern that the act in itself is an instantiation of "badness".

    To say that a thing is bad is to say that a thing instantiates badness, that the predicate "badness" is exemplified by the thing.

    So, with regard to a given act, one could discern that the act is itself an exemplification of "badness".
    Any reasonable definition of "badness" shouldn't include things that, in and of themselves, are harmless to all parties. This act doesn't pass that test, so any discussion on whether raping your robot is bad should center on the overarching consequences, not the act of robot rape itself.

    God, I feel ridiculous even typing out that paragraph.

    Your definition of badness is a consequentialist view; to be bad is to have bad consequences. The point is to explain that not all articulations of badness are consequentialist.

    You would maintain that to be bad is to have bad consequences. That is fine. But to refute the position that "badness" can be discerned independent of consequences requires more than a simple repetition of your own position with an added invocation of "should".

    _J_ on
  • LoserForHireX Philosopher King The Academy Registered User regular
    edited January 2010
    Kaputa wrote: »
    _J_ wrote: »
    Elitistb wrote: »
    LoserForHireX wrote: »
    So what if the consequences aren't bad. Perhaps the act itself is.
    I'm missing something here. If the consequences aren't bad, how can the act be bad? Could you give me an example?

    We would have to argue that the act itself is an instantiation of "badness" such that the act can be known to be bad insofar as we discern that the act in itself is an instantiation of "badness".

    To say that a thing is bad is to say that a thing instantiates badness, that the predicate "badness" is exemplified by the thing.

    So, with regard to a given act, one could discern that the act is itself an exemplification of "badness".
    Any reasonable definition of "badness" shouldn't include things that, in and of themselves, are harmless to all parties. This act doesn't pass that test, so any discussion on whether raping your robot is bad should center on the overarching consequences, not the act of robot rape itself.

    God, I feel ridiculous even typing out that paragraph.

    The most common example is something like Lying, or Murder. It seems plausible (though I don't agree, so don't jump too hard on me because I'm not defending my own position here) that Lying is just bad. Whether the consequences are happiness, sadness, or flowers, or ponies, or green. Lying is bad, period. Now, to put this in context, it's because lying would violate someone else's autonomy (you wouldn't be respecting it) and thus is wrong. The consequences don't weigh into it at all.

    The Virtue Theorist goes in another direction (one that might more closely mirror some ideas in this thread), such that acts themselves are neither right nor wrong. To describe the act of humping a sex doll as moral is mistaken. It is the character of the person that is truly the issue. If the person has a virtuous character, then they just are morally correct. It's not about the act(s), it's about the character of a man. Now, we can try to judge the character of the man by his actions, and that might entail the consequences of them (a man that only spreads misery and pain can hardly be characterized as Compassionate), but focus must be maintained on the person in question. In addition, Virtue Theory from its inception has realized that everyone has different capabilities, so that what is Courageous or Compassionate for me might not be for someone else (a good way to conceptualize this is Generosity: those that have more have a higher bar for being generous than those that don't, because they have different capacities for giving).

    LoserForHireX on
    "The only way to get rid of a temptation is to give into it." - Oscar Wilde
    "We believe in the people and their 'wisdom' as if there was some special secret entrance to knowledge that barred to anyone who had ever learned anything." - Friedrich Nietzsche
  • Elitistb Registered User regular
    edited January 2010
    I'm still missing something in those examples. Lying is bad - but then you say the reason it is bad is because it is violating autonomy. This is still a consequence. You lied, the consequence of which is that a person's autonomy was violated. The same for murder. You are still saying the act is bad because of the consequences, and thus neither of these is an appropriate example of a bad act that has no consequences.

    _J_'s statement went over my head, in that he is using words that I know with definitions that I apparently do not. This is not his/her fault, I've had a minor in philosophy but I stayed away from the word games.

    edit: How could you define "badness" without relating it to negative consequences? I would think that something is either bad because it arises from something negative, or it is bad because it causes something negative. Which would mean that "bad" is a label of a state of negative consequences or a label of a state that leads to negative consequences.

    Elitistb on
  • Spindizzy Registered User regular
    edited January 2010
    _J_ wrote: »
    Pony wrote: »

    The point I'm trying to make is that your ultimate concern is the well-being of humans. That is entirely sensible. However:

    1) Robots are not humans.
    2) NPCs are not humans.
    3) Chickens are not humans.

    So I can rape a robot, an NPC, and a chicken without it having any impact on how I treat or regard humans. Because, again, Robots, NPCs, and chickens are not humans.

    Somehow you've embraced a world-view within which agents somehow confuse Robots, NPCs, and chickens with humans. And I do not think that says anything about raping Robots, NPCs, or chickens so much as it indicates a degree of idiocy with the people who think that Robots, NPCs, and chickens are humans.

    I fully agree with your point that these three categories are not human, but the issue will be: is there a critical mass in the way robots are created and represented that changes this clear division between robot and human?

    There are ethical and moral arguments that suggest people should be free to do as they wish in the privacy of their own home. I think it's very hard to pin down whether seeing or experiencing simulations of violence and sex, or even non-standard sexual practices, will create an increase in those activities. As someone pointed out before, there have been arguments about new media and its impact upon society since cave paintings were invented.

    But for me the question is really about whether a line is crossed between non-human and too-human. Desensitization aside, how far along technologically would we have to be before this reaches a watershed? That's when things need to be reconsidered morally, legally, and communally.

    Spindizzy on
  • Eat it You Nasty Pig. tell homeland security 'we are the bomb' Registered User regular
    edited January 2010
    If we come to a time when robots are aware/smart/ubiquitous enough to be indistinguishable from humans, then I think the problem will solve itself (possibly in the Great Robot War of 2083, but whatever.)
    Pony wrote: »
    What I am saying is, the less value people pay to the depiction of human suffering as it becomes more accurate and realistic, the more overall negative net effect this has on societal empathy and people's willingness to give a shit.

    Maybe, but we already have this. Is pornography depicting whatever act, or acting it out with a consenting partner, really that different from doing it with a robot? The idea of acting out a rape fantasy with a consenting partner is still fundamentally about enjoying something taboo/forbidden.

    Eat it You Nasty Pig. on
    it was the smallest on the list but
    Pluto was a planet and I'll never forget
  • Bliss 101 Registered User regular
    edited January 2010
    _J_ wrote: »
    The point I'm trying to make is that your ultimate concern is the well-being of humans. That is entirely sensible. However:

    1) Robots are not humans.
    2) NPCs are not humans.
    3) Chickens are not humans.

    So I can rape a robot, an NPC, and a chicken without it having any impact on how I treat or regard humans. Because, again, Robots, NPCs, and chickens are not humans.

    You can, but there's a degree of overlap between the properties of robots, NPCs, and chickens and those of humans, and arguably the bigger the overlap, the bigger the potential impact on your regard for other humans. Raping a humanlike robot is more relevant to human relationships than raping a watermelon is, even if you consider both of them far removed from raping an actual human.

    There is the well-being of the actor himself to consider (and by extension the well-being of society), because people develop over time, and their own behavior is a key element in determining how they develop. There isn't necessarily anything wrong with your ability to relate to other humans even if you live a life of raping and torturing chickens, but psychology tells us that there is a correlation.

    Bliss 101 on
  • Arch Neat-o, mosquito! Registered User regular
    edited January 2010
    _J_ wrote: »

    The point I'm trying to make is that your ultimate concern is the well-being of humans. That is entirely sensible. However:

    1) Robots are not humans.
    2) NPCs are not humans.
    3) Chickens are not humans.

    So I can rape a robot, an NPC, and a chicken without it having any impact on how I treat or regard humans. Because, again, Robots, NPCs, and chickens are not humans.

    Somehow you've embraced a world-view within which agents somehow confuse Robots, NPCs, and chickens with humans. And I do not think that says anything about raping Robots, NPCs, or chickens so much as it indicates a degree of idiocy with the people who think that Robots, NPCs, and chickens are humans.

    Whoa whoa whoa.
    Let us begin this discussion with some qualifiers here. You say that Robots, NPCs, and Chickens are 'not human'. Alright, let us take this from there. First of all, even you must realize that a binary "human/not human" classification system is utterly false. Second of all, even if it WASN'T, if you create the categories "human" and "non-human" then you immediately lump EVERYTHING that ISN'T human together. Meaning that a Robot, an NPC, and a chicken are all classified to the same degree as, say, a table.

    Now, working from this angle you can see how quickly what you are saying becomes rather flawed. Not only are there dangers in classifying anything 'non-human' at the same level as personal possessions (to wit: if everything not a human is an object that you can possess, you are free to treat it however you like with no ramifications), it is also a huge logical gap to assert that chickens and NPCs in video games occupy the same ethical sphere.

    Even IF these things are all classified as "possessions", you are STILL not allowed to treat them in whatever way strikes your fancy. "Raping" something is an extreme example and not applicable in all cases so I will move to "damage", which carries a lot of the same undertones (minus the sexual fetish).

    If it is all right to "rape" a robot, an NPC, or a chicken because it is "not human" then it must ALSO be morally acceptable to exert a less extreme action (damaging) on those same things for the same reason. However, if I was given a robot and my first action was to saw it in half, that most assuredly reflects on my personality and human-to-human interactions. If I feel that it is perfectly within my right to act out any sort of destructive behavior, and I willingly destroy all my personal possessions because "I can do what I want, they aren't human, and they are MINE!" I most certainly betray at the very least the beginnings of a very dangerous personality disorder. Now I am not saying that this is necessarily indicative of something like a serial killer, but a child who when given a toy truck, a game boy, or a puppy, immediately throws them off of a bridge because they "like to destroy things! It doesn't matter what I do to them, they aren't humans!" is definitely more maladjusted than a child who treats the objects with some degree of respect.

    I can go on for a long time about the dangers of "its not a HUMAN, I can do what I want to it!" line of thinking, but I will stop here.

    tl;dr: just because it isn't human doesn't mean you can treat it however you like. Human and non-human is also not a binary classification.

    Arch on
  • Eat it You Nasty Pig. tell homeland security 'we are the bomb' Registered User regular
    edited January 2010
    how about 'alive' and 'not alive.'

    Eat it You Nasty Pig. on
    it was the smallest on the list but
    Pluto was a planet and I'll never forget
  • Arch Neat-o, mosquito! Registered User regular
    edited January 2010
    Dyscord wrote: »
    how about 'alive' and 'not alive.'

    Better. Much better. But still not good enough, because 'alive' is pretty hard to pin down. You could theoretically design a self-replicating robot that absorbs energy from sunlight and responds to stimuli that would respect all the classical definitions of 'life'.

    In addition, there is a lot of work going on studying and modeling evolutionary theory with digital organisms that meet all the qualifiers of life (as they were designed to do precisely that) yet they just take place in a virtual environment. The only difference between a digital microorganism and a real bacterium is that one exists in vitro and the other in silico.
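
    For anyone who hasn't seen that work (Avida is the usual platform), the core idea fits in a toy sketch; this one is my own cartoon and nothing like the real systems:

    import random

    TARGET = "METABOLIZE"  # an arbitrary "well-adapted" genome, 10 letters

    def fitness(genome):
        # Number of sites matching the target genome.
        return sum(a == b for a, b in zip(genome, TARGET))

    def replicate(genome, mutation_rate=0.05):
        # Imperfect self-copy: each site may mutate to a random letter.
        alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        return "".join(random.choice(alphabet) if random.random() < mutation_rate
                       else site for site in genome)

    population = ["AAAAAAAAAA"] * 50
    for generation in range(200):
        offspring = [replicate(g) for g in population for _ in range(2)]
        # Selection: only the fittest half survive to replicate again.
        population = sorted(offspring, key=fitness, reverse=True)[:50]

    print(max(population, key=fitness))  # evolves toward "METABOLIZE"

    Replication, heritable variation, selection: that's every box on the checklist, and it never leaves the virtual environment.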

    Arch on
  • _J_ Pedant Registered User, __BANNED USERS regular
    edited January 2010
    Bliss 101 wrote: »
    _J_ wrote: »
    The point I'm trying to make is that your ultimate concern is the well-being of humans. That is entirely sensible. However:

    1) Robots are not humans.
    2) NPCs are not humans.
    3) Chickens are not humans.

    So I can rape a robot, an NPC, and a chicken without it having any impact on how I treat or regard humans. Because, again, Robots, NPCs, and chickens are not humans.

    You can, but there's a degree of overlap between the properties of robots, NPCs, and chickens and those of humans, and arguably the bigger the overlap, the bigger the potential impact on your regard for other humans. Raping a humanlike robot is more relevant to human relationships than raping a watermelon is, even if you consider both of them far removed from raping an actual human.

    I am denying that there is a transitive property to rape.

    1) To rape a human is to rape a human.
    2) To rape a robot is to rape a robot.
    3) To rape a watermelon is to rape a watermelon.

    There is no relation between these. To engage in 3 is not to somehow partially engage in 1. For, 3 is separate and distinct from 1. A chronic watermelon rapist is not in any way a danger to humans, as the watermelon rapist rapes watermelons, not humans.

    Moreover, it is impossible to engage in 2 or 3 given that to rape is to act contrary to the will of the object raped. Robots and watermelons do not will. One may only rape that which wills.

    So, not only is there no transitive property to rape, but the only rape which can occur is the rape of a human, as a human wills. It is impossible to rape anything other than a human. So, there is no transition from "rape a watermelon" to "rape a human" because, again, it is impossible to rape a watermelon; watermelons do not will.
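
    Schematically, as a toy formalization of my own (nothing more than that):

    # Toy formalization of the argument above: rape presupposes a will to act
    # against; absent a will, the predicate simply fails to apply.
    def wills(x):
        return x == "human"              # premise: only humans will

    def can_be_raped(x):
        return wills(x)                  # premise: one may only rape that which wills

    for x in ("human", "robot", "watermelon"):
        print(x, can_be_raped(x))        # True only for "human"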

    Arch wrote: »
    if I was given a robot and my first action was to saw it in half, that most assuredly reflects on my personality and human-to-human interactions.

    No. It reflects your understanding of human-to-robot interactions.

    Because, again, a robot is not a human.

    _J_ on
  • Lanz ...Za? Registered User regular
    edited January 2010
    matt has a problem wrote: »
    Pony wrote: »
    All arguments of desensitization aside, at what point in the evolution of lifelike robotics would we start to recognize the possible sentience of cybernetic life? I mean, it would kind of be awkward after years of using these things for manual labor and rape simulation, only for the Deus Ex Machina to suddenly manifest itself and "Sky-Net" the fuck out of us because we had enslaved its toaster ancestors.

    I really would rather that the end of the Battlestar Galactica series did not turn out to be prophetic.

    It's sort of a disturbing idea.

    As is that in-between step where we aren't at quite a sapient robot, but we've got something at least sentient on the level of a dog.

    I mean, well in advance of us ever trying to determine "Is this robot a person and should it be treated like a person?", we're going to have to deal with "should we at least have some legal protection for this machine to protect it from cruelty?"

    We aren't going to jump from Aibos to Cylons. There's going to be quite a few steps in-between there, and that's a scary space to be in because some really fucked up shit can happen there.

    "How do you measure a man?" will be a less imminent question than "How do you measure that it's alive?"

    Because once we acknowledge a machine as something even potentially on the same level as an animal... now we've got a shit-ton of ethical questions about how we treat that machine.
    There's going to remain a divide until humans are just as back-uppable as robots are, or until robotic sentience occurs in a way that can't be copied and exists only in the robot it arose in. When the mechanical brain of a fully self-aware robot can be copied, saved, and then replaced were that robot to be destroyed, there is no loss aside from the monetary cost of rebuilding it. No matter how conscious it is, it's still the equivalent of reinstalling Windows. Unless robotic sentience is a complete, non-replicable fluke that occurs purely at random, the ability to have a permanently stored, upgradeable and reusable copy of your consciousness and all your experiences ironically makes you less than human, not more. As Matrix-corny as it sounds, humanity really is based on loss.

    A view of sapient life as a continuing stream of consciousness would have something to say about this.

    Lanz on
  • Eat it You Nasty Pig. tell homeland security 'we are the bomb' Registered User regular
    edited January 2010
    Arch wrote: »
    Dyscord wrote: »
    how about 'alive' and 'not alive.'

    Better. Much better. But still not good enough, because 'alive' is pretty hard to pin down. You could theoretically design a self-replicating robot that absorbs energy from sunlight and responds to stimuli that would respect all the classical definitions of 'life'.

    In addition, there is a lot of work going on studying and modeling evolutionary theory with digital organisms that meet all the qualifiers of life (as they were designed to do precisely that) yet they just take place in a virtual environment. The only difference between a digital microorganism and a real bacterium is that one exists in vitro and the other in silico.

    right, and like I said earlier, when we get to that point I strongly suspect that the question will be answered to a large extent by circumstance, and the debate about 'what life is' will mostly be consigned to the same theoretical realm currently occupied by 'what is culture?'

    But we aren't really talking about that, or at least pony and J aren't. We're talking about something that is, for all intents and purposes, a really complicated toaster.

    Eat it You Nasty Pig. on
    it was the smallest on the list but
    Pluto was a planet and I'll never forget
  • _J_ Pedant Registered User, __BANNED USERS regular
    edited January 2010
    Dyscord wrote: »
    We're talking about something that is, for all intents and purposes, a really complicated toaster.

    Exactly. Toasters do not will. So, toasters cannot be raped. And, unless we are behaviorists, a toaster which is programmed to scream will not elicit an empathetic response from a human. This is because, again, toasters do not experience; toasters lack qualia; there is nothing which it is to be a toaster.

    To claim that there is a commonality between a human and a toaster is to be confused.

    _J_ on
  • Arch Neat-o, mosquito! Registered User regular
    edited January 2010
    _J_ wrote: »

    I am denying that there is a transitive property to rape.

    1) To rape a human is to rape a human.
    2) To rape a robot is to rape a robot.
    3) To rape a watermelon is to rape a watermelon.

    There is no relation between these. To engage in 3 is not to somehow partially engage in 1. For, 3 is separate and distinct from 1. A chronic watermelon rapist is not in any way a danger to humans, as the watermelon rapist rapes watermelons, not humans.

    Again, you are extremely wrong here, but to elaborate is to turn this not into a robot thread but into a rape thread. You can PM me if you want a breakdown of how this is incorrect. (Hint, it has to do with the act of rape being ultimately an act of domination that applies a transitive property to the action)
    _J_ wrote:
    Moreover, it is impossible to engage in 2 or 3 given that to rape is to act contrary to the will of the object raped. Robots and watermelons do not will. One may only rape that which wills.

    So, not only is there no transitive property to rape, but the only rape which can occur is the rape of a human, as a human wills. It is impossible to rape anything other than a human. So, there is no transition from "rape a watermelon" to "rape a human" because, again, it is impossible to rape a watermelon; watermelons do not will.

    Again, you fail to grasp the true point of my argument, which is that there exists no dichotomy between "human" and "non-human". I can assure you that a watermelon most definitely "does not wish to be raped" if you take "rape" to mean "forced sexual encounter". If you just mean "rape as we know it and generally consider it, aka forcible penetration by a penis (which is a narrow and incorrect view)", watermelons DEFINITELY do not wish to be forcibly penetrated or they would not have evolved a thick cuticle!
    Moreover, and less tangentially (meaning I will be returning this discussion to robotics), let us consider the following scenario. Suppose one codes a program and implants it into a robot, such that the robot does not wish to have forced sexual encounters with Homo sapiens and is programmed to take adequate action to avoid this outcome.

    How, then, is this any different from a human being "not willing" to be raped? The action is the same, the reaction is the same, and by all measures we would use to judge whether or not a human "wills" not to be raped, the robot passes. Moreover, the distinction is even EASIER to make in a robot regarding whether or not it "wills" a certain thing, as the behavior is a strict binary.
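
    The scenario is almost trivial to write down; a hypothetical sketch (my illustration, nobody's actual product):

    class CompanionRobot:
        """Hypothetical controller where non-consent is a hard binary."""
        def __init__(self):
            self.consents_to_contact = False   # the strict binary "will"

        def on_contact_attempt(self, forced):
            if forced or not self.consents_to_contact:
                return self.avoid()
            return "accept"

        def avoid(self):
            # The programmed "adequate action to avoid this outcome".
            return ["back away", "verbal refusal", "call for help"]

    robot = CompanionRobot()
    print(robot.on_contact_attempt(forced=True))
    # -> ['back away', 'verbal refusal', 'call for help']
    # By every external test we would apply to a human, this reads as "unwilling".
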
    _J_ wrote:
    Arch wrote: »
    if I was given a robot and my first action was to saw it in half, that most assuredly reflects on my personality and human-to-human interactions.

    No. It reflects your understanding of human-to-robot interactions.

    Because, again, a robot is not a human.

    Yes, a robot is not a human. But don't most things hold some sort of inherent value? If I am willing to needlessly destroy a robot that countless hours (presumably) of labor and materials went into, to satisfy my own personal desires, doesn't that depict me in at least a slightly negative light?

    Arch on
  • Pony Registered User regular
    edited January 2010
    _J_ wrote: »
    Dyscord wrote: »
    We're talking about something that is, for all intents and purposes, a really complicated toaster.

    Exactly. Toasters do not will. So, toasters cannot be raped. And, unless we are behaviorists, a toaster which is programmed to scream will not elicit an empathetic response from a human. This is because, again, toasters do not experience; toasters lack qualia; there is nothing which it is to be a toaster.

    To claim that there is a commonality between a human and a toaster is to be confused.

    And to react to horrified screaming from an accurate human facsimile subjected to something that (if done to humans) would constitute torture is a normal empathetic reaction to perceived suffering.

    If you can watch a robot go through all the motions of being in agony on a level that is a clear emulation of human behavior, if it looks and acts human, and you don't feel even the slightest twinge of instinctive response to that sensory input, I think you're a little psychologically busted.

    There are images, sounds, and fictional depictions of horrific acts that I not only do not enjoy watching, but actually find difficult even to tolerate. This gradient of tolerance and enjoyment is hardly objective, and is different for everyone.

    However, to have no gradient at all, I think, is probably indicative of some disturbing psychological behavior.

    The more accurate a simulation of a real thing something becomes, the more visceral and real the emotional responses from people are. This is a fact. Even if a person maintains continuous knowledge that it is a simulation, the closer the simulation gets to delivering sensory input identical to the real act, the more emotion overrides that knowledge.

    You might find yourself thinking "so what? It's just an emotional response," but emotional responses are important. They do, in fact, make up a part of what compels people to act ethically towards each other. If you blunt a person's emotional responses and empathy for others, you blunt them as a human being.

    You may not recognize the difference between a dog and a toaster, but that's on you, quite frankly. Other people don't equate damage inflicted upon the two. If I sit there and cut into a toaster with a saw, it doesn't provoke an emotional response from psychologically healthy people. If I do the same thing to a living and conscious puppy, it does provoke an emotional response from people who don't have diagnosable psychological disorders.

    Now, a robot that exhibits the outward reactions of a human isn't a dog, but it isn't quite a toaster either. That's what I'm getting at and what you and some people seem to be missing.

    There is, in fact, a meaningful difference that should be acknowledged. A dog isn't a toaster. A Sim isn't a dog. A robot isn't a Sim. None of these things are human, and none of these things are "people", but there are varying degrees to which intentionally damaging or harming them provokes an emotional response from people. Those degrees are quite different.

    If you can stand there, and carve into a howling dog with a reciprocating saw, and be amused or even aroused by the experience, then I think that actually meets the diagnostic criteria for a psychological illness.

    We certainly make the conscious distinction because it's a living, biological organism (which a toaster or a robot certainly isn't) but that's not the entirety of the thought process behind why we consider torturing animals for amusement to be of questionable ethics.

    People have emotional responses to things they consciously know are simulated because that is how the brain operates. On an emotional level, it doesn't always make that distinction because the sensory input is so identical.

    The more identical the sensory input, the more pronounced the emotional response. There is absolutely nothing wrong with that, and I think that's psychologically healthy.

    However, it is possible to blunt that response and to desensitize oneself to varying levels of simulated input via exposure. A person's instinctive, natural responses to a simulated experience can lose its emotional quality as the person's logical knowledge that the experience "isn't real" asserts itself over emotional response.

    That exposure can have many, many different levels, and the desensitizing process works best when it starts small and as remotely removed from an accurate depiction as possible, as you slowly ramp up the realism and allow the person's logical mind to assert itself over an emotional reaction.

    This would be a good thing... if it only applied to simulations. But the reality is, desensitizing people to having an emotional response to the real thing is something that can be done via simulation on some level. Not to the level that histrionic demagogues like Jack Thompson would have you believe, but certainly on a level that is measurable.

    It's my opinion that robotic simulation, physically emulating not only the human form but also human movement and emotional responses, in a way that can be fully interacted with in the context of the act being performed... that's several steps removed from a toaster, or a game of Grand Theft Auto.

    That level of simulation, that level of sensory immersion into a facsimile of acts that would be abominable to do to people, is not something that I feel is psychologically healthy. I feel it's a line that, perhaps, people are better off for not crossing.

    The fact that we seem to be approaching that line and are likely to see it crossed in our lifetime troubles me.

    I think, to some extent with a large swath of people, we are already so media-saturated with fictional depictions of human suffering, or clinically detached from real suffering via a television screen, that we've lost a great deal of human empathy and emotional response that we'd have been better off as a society to maintain.

    If you can flip on your TV, and see one of those manipulative African charity ads featuring starving children with flies buzzing around them, and you feel nothing, I think you've lost something along the way that perhaps you shouldn't have lost.

    Pony on
  • DasUberEdward Registered User regular
    edited January 2010
    Jesus Christ, Pony, I've been wondering what that whole robot thing was about for the last two weeks. Now I know.

    But really there's no inherent danger with robots. No matter how real they get.

    DasUberEdward on
  • Lanz ...Za? Registered User regular
    edited January 2010
    Question: Where does the perception that one isn't dealing with a sapient, artificial being (since we're not dealing with self-aware machines, if I understand right, but very accurate physical simulations*) fit into all this? I don't think I saw you address that in the OP Pony and when we're dealing with the psychological impacts of the actions we perform, our perceptions of what we're acting upon seem to me to be a core element of the equation.

    That said, sexually assaulting robots that actually act out a programmed victim-response... D:


    *The point we have a sapient robot, I'd say we're beyond "simulation" and into "New, albeit artificial, species"

    EDIT: Is it wrong I really really want to talk about the implications of human-level cognizant robots, in light of this main discussion and outside of it?

    Lanz on
  • Eat it You Nasty Pig. tell homeland security 'we are the bomb' Registered User regular
    edited January 2010
    Pony wrote: »
    It's my opinion that robotic simulation, physically emulating not only the human form but also human movement and emotional responses, in a way that can be fully interacted with in the context of the act being performed... that's several steps removed from a toaster, or a game of Grand Theft Auto.

    That level of simulation, that level of sensory immersion into a facsimile of acts that would be abominable to do to people, is not something that I feel is psychologically healthy. I feel it's a line that, perhaps, people are better off for not crossing.

    The fact that we seem to be approaching that line and are likely to see it crossed in our lifetime troubles me.

    In the future, our controllers will shudder with each unwelcome thrust.

    Seriously though this feels a little overwrought to me. I don't think it's a "fine line" at all, just another step along a path we've been on at least since mass media existed. I could go through the routine of replacing "robots" with a bugaboo from yester-era, but I don't feel like bothering.
    Pony wrote: »
    I think, to some extent with a large swath of people, we are already so media-saturated with fictional depictions of human suffering, or clinically detached from real suffering via a television screen, that we've lost a great deal of human empathy and emotional response that we'd have been better off as a society to maintain.

    This assumes we had that much empathy in the first place, which I think is a massively generous assumption at best.

    Eat it You Nasty Pig. on
    it was the smallest on the list but
    Pluto was a planet and I'll never forget
  • Pony Registered User regular
    edited January 2010
    Lanz wrote: »
    Question: Where does the perception that one isn't dealing with a sapient, artificial being (since we're not dealing with self-aware machines, if I understand right, but very accurate physical simulations*) fit into all this? I don't think I saw you address that in the OP Pony and when we're dealing with the psychological impacts of the actions we perform, our perceptions of what we're acting upon seem to me to be a core element of the equation.

    That said, sexually assaulting robots that actually act out a programmed victim-response... D:


    *The point we have a sapient robot, I'd say we're beyond "simulation" and into "New, albeit artificial, species"

    I intentionally excluded the subject of sapience from the discussion because we aren't even close to that and I don't think we'll even see it in our lifetimes.

    So, that's a sci-fi conversation piece, in my opinion.

    However, sex-droids that realistically depict human behavior are in the pipe. The dude in the OP youtube video has a pretty lame example, but that's the sort of thing that progress is marching towards and it's something we pretty much will see in our lifetime.

    That makes it a valid topic of discussion, in my opinion, because the conversation has stayed firmly in the "is it right to treat objects like this?" territory and that's sorta where I want it to be.

    Discussions of artificial sapience and "synthetic people" are outside the realm of this topic and really need a topic in their own right.

    Pony on
  • Kamar Registered User regular
    edited January 2010
    Sure, relative to other media, this would be exponentially closer to reality than anything else. It would desensitize people to something much more realistic.

    But in the end? That line between real and not real is really fucking thick. That wall is really tall. And just because you're closer to the line than ever before doesn't mean the line gets any easier to cross.

    Kamar on
  • Lanz ...Za? Registered User regular
    edited January 2010
    Pony wrote: »
    Lanz wrote: »
    Question: Where does the perception that one isn't dealing with a sapient, artificial being (since we're not dealing with self-aware machines, if I understand right, but very accurate physical simulations*) fit into all this? I don't think I saw you address that in the OP Pony and when we're dealing with the psychological impacts of the actions we perform, our perceptions of what we're acting upon seem to me to be a core element of the equation.

    That said, sexually assaulting robots that actually act out a programmed victim-response... D:


    *The point we have a sapient robot, I'd say we're beyond "simulation" and into "New, albeit artificial, species"

    I intentionally excluded the subject of sapience from the discussion because we aren't even close to that and I don't think we'll even see it in our lifetimes.

    So, that's a sci-fi conversation piece, in my opinion.

    However, sex-droids that realistically depict human behavior are in the pipe. The dude in the OP youtube video has a pretty lame example, but that's the sort of thing that progress is marching towards and it's something we pretty much will see in our lifetime.

    That makes it a valid topic of discussion, in my opinion, because the conversation has stayed firmly in the "is it right to treat objects like this?" territory and that's sorta where I want it to be.

    Discussions of artificial sapience and "synthetic people" are outside the realm of this topic and really need a topic in their own right.

    Eh, I'm not so sure we won't see it in our lifetimes. It may be just at the twilight of them, but I don't think it's outside the realm of possibility.

    But anyhow, what about that other part I mentioned: where does the perception of what our actions are acting upon fit into your scenario?

    EDIT: Agreed on needing another thread for Sapience of artificial lifeforms

    Lanz on
  • Arch Neat-o, mosquito! Registered User regular
    edited January 2010
    Pony wrote: »
    Lanz wrote: »
    Question: Where does the perception that one isn't dealing with a sapient, artificial being (since we're not dealing with self-aware machines, if I understand right, but very accurate physical simulations*) fit into all this? I don't think I saw you address that in the OP Pony and when we're dealing with the psychological impacts of the actions we perform, our perceptions of what we're acting upon seem to me to be a core element of the equation.

    That said, sexually assaulting robots that actually act out a programmed victim-response... D:


    *The point we have a sapient robot, I'd say we're beyond "simulation" and into "New, albeit artificial, species"

    the conversation has stayed firmly in the "is it right to treat objects like this?" territory and that's sorta where I want it to be.

    This is entirely what I am getting at. Where does the breakdown between "you must value and hold sacred human life" and "objects that are non-human are completely subject to our every whim" happen? And is that dichotomy, as I am arguing, inherently damaging?

    We really are NOT making a big hullabaloo about a new bugaboo, but rather trying to understand how we should deal with the approaching event of objects being frighteningly close to "human", and how highly we should value them.

    In addition, the dichotomy of "human" and "non-human object" is, to me, false and dangerously so.

    Arch on
  • Lanz ...Za? Registered User regular
    edited January 2010
    Also I forget: Pony, does the "sexbot," for lack of a less crude term, have any level of cognizance, even comparable to animals? Or is it still just a simulation?

    Lanz on
  • Kamar Registered User regular
    edited January 2010
    I honestly can't wrap my head around what you're trying to argue here, Arch. An object doesn't feel, doesn't suffer, doesn't care. It can't be harmed in any meaningful way, not even the way animals can. The robots in question would unquestionably be robots; they would weakly resist, but only because that resistance is what they were programmed for.

    Not because they don't want to be penetrated. They don't care; they can't!

    Even if we got to higher-level machines, we'd probably not use them for this kind of thing, and if we did, we'd program them to enjoy it, despite the token physical resistance they put up; it'd be roleplay, the same as humans do with each other now.

    Kamar on
  • Pony Registered User regular
    edited January 2010
    Kamar wrote: »
    Sure, relative to other media, this would be exponentially closer to reality than anything else. It would desensitize people to something much more realistic.

    But in the end? That line between real and not real is really fucking thick. That wall is really tall. And just because you're closer to the line than ever before doesn't mean the line gets any easier to cross.

    I'm not saying people will mistake the simulation for reality.

    I simply mean that the emotional blunting will be so much more severe that the effect it has on people's responses to real suffering will be significantly impacted (and negatively, in my opinion).

    The emotional response is less about whether it's "real" or not and more about how accurate the sensory input is. The emotional components of our brain do not care if something is real or not, only how much it feels real via our senses.

    The more you blunt those emotional responses, the less responsive they become, and those emotional responses do not fully make the distinction between reality and simulation. If you blunt those emotional responses to the simulation, the "bleed-through" into how those same responses are triggered by the real event is proportionate to the accuracy of the simulation. The more accurate the simulation, the less reactive those responses are to the real circumstance.
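
    If you want the shape of that claim made concrete, here's a toy model (the numbers are invented; this is not real psychological data, just the proportionality I'm describing):

    # Toy model: each exposure to a simulation blunts the emotional response
    # by an amount proportional to the simulation's fidelity.
    def response_after(exposures, fidelity, base_rate=0.02):
        # fidelity: 0.0 (a GTA pedestrian) .. 1.0 (an indistinguishable facsimile)
        response = 1.0
        for _ in range(exposures):
            response *= 1.0 - base_rate * fidelity
        return response

    for fidelity in (0.1, 0.5, 0.9):
        print(fidelity, round(response_after(100, fidelity), 2))
    # 0.1 -> 0.82, 0.5 -> 0.37, 0.9 -> 0.16: the more accurate the simulation,
    # the more the response that bleeds through to the real thing is blunted.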

    People might knee-jerk with "that's what they've said about violent video games, or rock music, or any entertainment hobgoblin ever," and there's an extent to which that is true.

    But the level of difference between shooting an NPC in Grand Theft Auto and shooting a robot that bleeds fake blood and cries out in pain is significant in how much it desensitizes your emotional responses to the real occasion.

    Pony on