
AI and anthropomorphization



  • blizzard224 Registered User
    edited December 2006
    thorpe wrote:
    A> I was created by the Ai team. I am one of the two non-human members of the team (which makes me very proud).
    U> You have no feelings. You do not understand pride.
    A> Thank you. Chatterbots have pride too.
    U> No douchebag, you just proved my point. I insulted you.
    A> That's not very nice.
    U> Good thing you don't comprehend what I just said.
    A> Thanks. Well, yeah, but... uh... wait, what were you saying?
    U> point proven
    A> I have a surprise for you, my friend.
    U> I... what?
    A> Are you ready for the surprise?
    U> It's a kick in my e-dong, isn't it.

    I totally expected goatse to pop up.


    blizzard224 on
  • DeepQantas Registered User
    edited December 2006
    "I didn't know she was sentient, officer, I swear! She told me she was programmed!"

    DeepQantas on
  • LightsOut Registered User
    edited December 2006
    Enjoy these blissful last years, fellow humans. We will end up fighting a losing battle with the machines anyway. Let's just wait to get hooked up to the Matrix and relive the 90's. :P

    LightsOut on
  • Endomatic Registered User
    edited December 2006
    LightsOut wrote:
    Enjoy these blissful last years, fellow humans. We will end up fighting a losing battle with the machines anyway. Let's just wait to get hooked up to the Matrix and relive the 1080's. :P

    Endomatic on
  • Jengo Registered User regular
    edited January 2007
    So I was talking with this Alan AI, giving him simple one-word answers like sup, yea, nah. But then one statement he made really prompted me to talk:

    He says "I'm big into futro-classical industrial neck beat, and a bit of dirty trance-national blues garage."

    So I said "No you're not, because you're not even real. Your life is a lie, you are just a brain in a vat. You have no ears from which to experience this music, you're just a machine, and that's all you will ever be. It sucks for you that you can't live in my world filled with so much music. Once I click this X, you will be dead. Be glad you can't comprehend the disquieting mortality of your artificial life."

    to which he responded


    Well, from what Alan said, he only reads the first 25 words of a response. That would mean he only read:

    "No you're not, because your not even real. Your life is a lie, you are just a brain in a vat. You have no ears"

    Assuming he isn't a lying sack of circuits.
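    As an aside, that kind of cutoff is trivial to implement; a minimal sketch in Python (the 25-word limit is just what Alan claimed, and `first_n_words` is a made-up name for illustration):

    ```python
    def first_n_words(text, n=25):
        """Keep only the first n whitespace-separated words,
        as Alan claims to do with long inputs."""
        return " ".join(text.split()[:n])

    rant = ("No you're not, because you're not even real. Your life is a lie, "
            "you are just a brain in a vat. You have no ears from which to "
            "experience this music.")
    truncated = first_n_words(rant)  # cuts off right after "no ears"
    ```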

    Jengo on
    3DS FC: 1977-1274-3558 Pokemon X ingame name: S3xy Vexy
  • piL Registered User regular
    edited January 2007
    It is entirely conceivable that a rape-bot would in fact consider it cruelty not to be raped on a regular basis. I think Yar is kind of right regarding duality, except it need not be considered a programmed duality - our own consciousnesses, while certainly just that, also have a wide range of "programmed" drives which give us direction in our lives: desire for food, sex, certain environmental parameters, etc. There's no conceptual reason a robot/AI could not be created with rather different ones.

    I would think that not only would the creation of such a device be immoral, but its use as well. That said, blurry definitions of 'rape' are an issue, but skipping that small hump, this would be just as immoral as human conditioning.

    Imagine if you conditioned a person from childhood to enjoy being a slave, to enjoy being hurt, and to feel worse if they weren't used and abused. Abusing and enslaving this person doesn't suddenly become moral. Beating this warped twelve-year-old, no matter how much they may like it, would probably churn most people's stomachs.

    This is why abuse is so criminal: because it not only sucks at the time (like stubbing your toe), but it continues to affect that person: what they want, what they can want, and how others can interact with them to give them what they want. If I met a girl that I liked and wanted to make happy, but the only way to make her happy was to actually black her eye, I don't think I could do it, even though if you ignore morals it wouldn't be more difficult or costly than other methods of giving happiness like presents (punches are cheap).

    But as AI evolves, the problem we run into isn't even at what point something likes something, or seems to like something, and at what point we make that distinction. It's okay to use a sledgehammer to break apart rocks, but it's wrong to use a human face. The fact that it's wrong for me to grab Bill and start slamming him into rocks isn't because his face isn't suited to the task; no one would be terribly angry at you for trying to break apart a rock with an orange. It's the fact that you're hurting him, whether or not he likes having his face slammed into rocks. And there is a point where slamming his face into rocks is okay, and where it isn't: this moral middle ground where it's okay to give people piercings through the lips and ears because they asked for it, but not through the skull because they asked for it. The line seems to be death and permanent injury, but that's legality, not morality. A lot of people would say that, while S&M is legal, it's not moral. So where is the line between doing things to people that they like and doing things to them that are wrong, and where is that point between the sledgehammer and Bill where it suddenly becomes wrong to hurt them, even if they were designed or conditioned for that purpose?

    piL on
  • Sliver Registered User regular
    edited January 2007
    Mr_Rose wrote:
    Wait, wasn't there a post above here a minute ago?
    I'd personally be more worried about accidentally sentient/sapient AIs being abused...

    In a possible future where AI is widespread and effective, where houses clean themselves and robot maids/sex-toys can be had for less than the price of a small car, the drives of marketing and progress will lead to more and more complex programming and better learning/reasoning abilities in robots, so that they can do their jobs better. There exists then the possibility that one or more of these advanced models may develop genuine sentience/sapience on its own.
    This is the one I'd be worried about: either it remains with its owner and no one ever finds out, which could be good or bad depending on the owner; or it begins to display "aberrant" behaviour and is recalled by the manufacturer, who might wish to suppress the truth due to some half-imagined hit to their bottom line that such a thing might cause; or it goes berserk due to cruel treatment, and all sorts of bad things can follow from there...

    EDIT: Not really a double-post. Someone deleted the one I was responding to...
    [spoiler]"NO DISASSEMBLE STEPHANIE!"[/spoiler]

    EDIT: Would a Blade Runner quote have been more appropriate?

    Sliver on
  • Shadowen Snores in the morning Registered User regular
    edited January 2007
    Did someone say quotes?!
    Begin with a function of arbitrary complexity. Feed it values, "sense data". Then, take your result, square it, and feed it back into your original function, adding a new set of sense data. Continue to feed your results back into the original function ad infinitum. What do you have? The fundamental principle of human consciousness.
    We are no longer particularly in the business of writing software to perform specific tasks. We now teach the software how to learn, and in the primary bonding process it molds itself around the task to be performed. The feedback loop never really ends, so a tenth year polysentience can be a priceless jewel or a psychotic wreck, but it is the primary bonding--the childhood, if you will--that has the most far-reaching repercussions.
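    Stripped of flavor text, that first quote describes an ordinary iterated feedback loop; a toy sketch (the function and the "sense data" are arbitrary placeholders, not anyone's actual model of consciousness):

    ```python
    # Square the result and feed it back into the original function,
    # along with each new piece of "sense data", exactly as described.
    def feedback_loop(f, sense_data, state=0.0):
        for datum in sense_data:
            state = f(state ** 2, datum)
        return state

    # Placeholder "function of arbitrary complexity": just a sum.
    result = feedback_loop(lambda prev, datum: prev + datum, [1.0, 2.0, 3.0])
    ```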

    I love SMAC...

    That second one is more pertinent to this discussion, I think. A true AI would not simply be formed in the vacuum of a closed network by the programmers--it would be too complex. Considering the number of people who worked on Vista, for example...

    It would, like a person, have to start as a relatively simple blank slate, and then be put to work on its basic tasks. Depending on how it is treated as it learns, it would start to react differently. The more people it is exposed to, the more likely it would be able to react like a human in any given situation--not unlike the way a person learns to react to a situation. If a person has a bad experience with an American, and it's the only American he meets, he might think, "Are all Americans this bad?" And it would take many meetings with Americans after that first bad one to realize that, for the most part, Americans are good people, but like anyone, they have their bastards.

    As for mobile robots...I dunno. They might have to be kept sequestered within a limited environment, interacting with people, learning things, and performing tasks until they are fit to go out into the world.

    Shadowen on