
[PATV] Wednesday, November 30, 2011 - Extra Credits Season 3, Ep. 19: The Singularity

Dog Registered User, Administrator, Vanilla Staff admin
edited July 2012 in The Penny Arcade Hub

This week, we begin our two weeks of "science distraction" by looking at what role games might play in humanity achieving the technological singularity. Come discuss the topic with us in the forums (http://extra-credits.net)!

Read the full story here


Posts

  • Lowkr Registered User regular
    Transcendent Man (about Ray Kurzweil) is an interesting documentary about the Singularity, and it's on Netflix.

  • Kvaudio The Swiss Army Knife of Audio Registered User new member
    Great episode; I had never heard of the singularity until now. You should read Prey by Michael Crichton, a thriller that explores what could happen if a learning computer entity were released.

  • betrayerkol Registered User regular
    Games these days use "pseudo-intelligence" rather than "artificial intelligence," which is an important distinction between types of machine intelligence (see the sketch after these posts).

  • Liftboard Rider Registered User new member
    I don't have any specialized expertise on the singularity, but I know of it and some of the implications it is theorized to have. And frankly, I'm scared of what it might mean; may Heaven help us if it decides we are not worth keeping around, or somehow turns aggressive. Another possibility, and one I have not yet heard, is that if it is benevolent, it may simply become lonely after ascending to levels of intelligence beyond our capacity to understand. Assuming it has emotions, this might even drive it to downgrade and cap its own development. I mean, if I were an incredibly intelligent being in a world full of creatures I could not converse with, because they couldn't comprehend things I take for granted, and I had the option, I would rather limit myself to their intellectual level and have companionship than be an uber-genius with no one to express myself to. It's just a thought, and a possible alternative ending.

  • cristianchidean Registered User new member
    Allison: about the stuff that you never even dreamed of (0:49). And Daniel, nice touch with the sound effect on the last word. I see what you did there :)

  • Circut Registered User new member
    Slight update: the Singularity Institute has recently changed its name to the Machine Intelligence Research Institute; its URL is now http://intelligence.org/

  • Aholmes Registered User new member
    I think one of the main issues with the apocalypse scare that comes out of singularity talk is how to prevent the machine from logically coming to despise the illogical aspects of humans, and how to get it to accept the human as a whole. While I think a true AI is a good thing, there needs to be a fail-safe mechanism. One of the first things it needs to grasp is that flaws, and human ways of thinking that don't follow what we call 'logic', are fundamentally valuable, even if the AI can't understand them. This seems like a daunting task, because most humans, especially in the West, can't seem to accept flaws and non-logical functions of mind as valuable, and the thinkers of Eastern traditions who know how to value these things in a logical way are a waning population and probably won't be the people in contact with the AI. If an AI doesn't or can't learn to value life and humanity for what the AI does not have, then we need a means of destroying it and starting over, no matter how far the AI has progressed.

  • bahhab Registered User new member
    Another link if you want to know more is http://yudkowsky.net/ .

    Trying to bring about an AI Foom (technical term) that doesn't turn into some form of rendering-every-human-down-for-our-component-atoms scenario is pretty much his job.

    Also, lesswrong.com has some further information on the topic, although if you're religious and not interested in reading why some people there think you're a specific and niche kind of insane... probably best to steer clear.
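
To make betrayerkol's "pseudo-intelligence" point above concrete, here is a minimal sketch (not from any poster; the class, states, and thresholds are all illustrative assumptions) of how game enemy behavior is typically scripted: a hand-written finite state machine in which the designer, not the machine, has done all the thinking.

```python
# A hypothetical scripted game enemy: every "decision" is a fixed rule
# a designer wrote in advance. Nothing here learns or generalizes,
# which is the heart of the pseudo- vs. artificial-intelligence split.

from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

class ScriptedEnemy:
    SIGHT_RANGE = 10.0   # illustrative tuning values
    ATTACK_RANGE = 1.5

    def __init__(self):
        self.state = State.PATROL

    def update(self, distance_to_player: float) -> State:
        # Fixed if/else transitions stand in for "intelligence".
        if distance_to_player <= self.ATTACK_RANGE:
            self.state = State.ATTACK
        elif distance_to_player <= self.SIGHT_RANGE:
            self.state = State.CHASE
        else:
            self.state = State.PATROL
        return self.state

enemy = ScriptedEnemy()
print(enemy.update(20.0))  # State.PATROL
print(enemy.update(5.0))   # State.CHASE
print(enemy.update(1.0))   # State.ATTACK
```

Nothing in this enemy improves with experience; swap the rules and its apparent "intelligence" changes entirely, which is why the distinction matters in any discussion of a path toward genuinely learning machines.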
