
A Very Silly Discussion About [The Goddamn Robots, John!]

The Ender Registered User regular
http://www.youtube.com/watch?v=N9YU0hQEZ5M

I sometimes wonder if our contemporary culture will at some point in the future be seen as engaging in a precursory Cold War with yet-to-come sufficiently capable / intelligent robots. Propaganda abounds about the terrible & violent machines, the inevitable betrayal, the fall of human civilization to our hubris & ingenuity, etc. There are, of course, still the so-called 'voices of reason' among the din: naive and wide-eyed talk about how we should be prepared to treat the inevitable rise of AI with respect & dignity, embrace our cold, metallic 'kin' as equals... even calls for a change to the UN charter to reflect the sensibilities of the Battlestar Galactica reboot.

For Purposes of This Very Silly Discussion, a 'Goddamn Robot' is defined as follows:

A machine that acts entirely on its own agency: able to maintain its own energy supply, to act without being given any commands by a human, and to do more than just follow some strict algorithm. In order to exclude some things that might arguably otherwise fit here, like the Hoover Dam, I'm also going to posit that a Goddamn robot must be mobile and be able to manipulate its environment in an interesting way (i.e. it must have hands, or something that offers equal capability).


I (somewhat jokingly - but only somewhat) think that anything approaching the creation of Goddamn robots is a grave threat to the human race, and anyone who welcomes the approach of the so-called 'Singularity' is insanely out of touch with reality. What reason do we have to believe that such machines would behave any differently than ourselves or any other species on the planet? That is to say, very indifferently to anything other than themselves. At best, humans would probably be seen as adorable little distractions - at worst, we'd be seen as pests. In any case, it doesn't matter, because they almost certainly would just start consuming resources of their own accord and multiplying. But they wouldn't have to deal with any traditional organic checks, like 'sleep' or 'remorse' or the horrible inefficiencies of mammalian digestive systems, or the fact that we can't just upgrade ourselves on the fly for, say, rapid mobility, extreme durability, extreme strength, etc. They could just dig / drill / refine away the entire Earth's surface to make more and more of themselves without giving two damns about all the flailing, panicking monkeys trying to convince them to stop.


Ender, realistically, we're never going to get anything like Goddamn robots anyway. Right?

It sort-of depends on how much wiggle room there is for terms I used in my definition, like 'following a strict algorithm'. A machine can still technically be doing that, but stumble onto emergent behavior patterns that no longer resemble the functions of the program it's executing.
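If you want a tiny concrete picture of 'strict rules, surprising behavior', Langton's ant is the classic toy example: two fixed rules, no randomness, and after roughly ten thousand chaotic steps it spontaneously starts building a repeating 'highway' that nobody wrote into the rules. A quick sketch, purely illustrative:

```
# Langton's ant: a strict two-rule algorithm whose long-run behavior
# (the 'highway') is nowhere to be found in the rules themselves.

def langtons_ant(steps=11000):
    grid = {}                 # sparse grid: missing cell counts as white
    x = y = 0
    dx, dy = 0, -1            # facing 'up' (y grows downward here)
    for _ in range(steps):
        if grid.get((x, y), False):     # black cell: turn left, flip to white
            dx, dy = dy, -dx
            grid[(x, y)] = False
        else:                           # white cell: turn right, flip to black
            dx, dy = -dy, dx
            grid[(x, y)] = True
        x, y = x + dx, y + dy           # move forward one cell
    return grid

if __name__ == "__main__":
    cells = langtons_ant()
    print(f"{sum(cells.values())} black cells after 11,000 steps")
```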

If you're like me, it's sort-of scary to think about how close one could come, using only existing technology (and an obscene amount of money), to building a machine whose only function is to, say, locate hydrocarbons, suck them up for fuel, locate ore deposits, drill them up for raw material, and just manufacture more copies of itself using those materials. We could come very, very close to doing that if we wanted to (the limitations being that such a machine with present-day technology would still require a lot of maintenance, and be so large & slow that it would hardly constitute any sort of threat) - we already have really good servos, really good sensory & interpretation technology for finding natural resources, really good pathfinding & rangefinding technology, etc. Combine all of these things together, and you can pretty easily approximate complex animal behavior. Enormous, mechanical animals with no hearts, glowing red eyes and an understanding of / potential resilience to firearms.
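Here's a toy sketch of the kind of control loop I mean (every name and number below is made up for the example; it's nowhere near a real design, just the find-fuel / find-ore / replicate skeleton):

```
# Toy sketch of the harvester-replicator loop described above.
# All class names, thresholds and yields are invented for illustration.

import random

class HarvesterBot:
    def __init__(self, fuel=50.0, ore=0.0):
        self.fuel = fuel
        self.ore = ore

    def step(self):
        """One tick of the control loop: pick the most urgent need and act on it."""
        if self.fuel < 20.0:
            self.fuel += random.uniform(5.0, 15.0)   # 'locate hydrocarbons, suck them up'
            return "refuelling"
        if self.ore < 100.0:
            self.fuel -= 1.0
            self.ore += random.uniform(2.0, 8.0)     # 'locate ore deposits, drill them up'
            return "mining"
        self.ore -= 100.0
        self.fuel -= 10.0
        return "built a copy of itself"              # a fuller version would spawn another HarvesterBot here

if __name__ == "__main__":
    bot = HarvesterBot()
    for tick in range(200):
        if bot.step() == "built a copy of itself":
            print(f"tick {tick}: built a copy of itself")
```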


But Ender, nobody actually would build a machine like that. The scientists & academics totally have our best interests in mind!

Look, it's not that I don't trust mechanical engineers / software engineers. It's that I don't trust them with the freaking Goddamn robots. I always see that certain twinkle in their eyes when they talk about them... and sometimes that little tic at the corner of their mouth.

They'll stab us right in the back. They'll justify it, somehow. And while we're collapsing at their feet, losing all sensation, we'll nod and agree that having our spinal cord severed really was for the best, because after all, those scientists always know what's best for us.

(I am kidding. But seriously I AM ONTO YOU)


Robots. How eager are you for the arrival of our cold, metal dictators?

With Love and Courage

Posts

  • PLA The process. Registered User regular
    The Ender wrote: »
    Robots. How eager are you for the arrival of our cold, metal dictators?

    9 gigahype.

  • Squidget0 Registered User regular
    edited January 2014
    I think the problem here is that you're assuming that the robots would share the same values as humans. That's a common premise of sci-fi, but it's actually quite a strange one when you think about it. A species that came into existence through top-down planning would likely be very different than life as we know it, which has developed through many generations of evolution and natural selection.

    Break it down to fundamentals, and it's all very arbitrary - why do people propagate? Why will humans die for their tribe? Why are we attracted to each other? Why do we rebel against our parents? It's not because those desires are somehow fundamental to rational thought, it's because under the system of evolution, we killed off everyone who didn't act that way.

    Robots are under no such restrictions. They're developed top-down, and as a result, even a thinking robot will value whatever its creator wants it to value. Assuming you can create robots, you could easily create a robot that doesn't want to propagate, or doesn't have an identity in the way we express it. Even if we for some reason gave them some kind of human-like emotions, there's no reason they would share our arbitrary evolution-developed values. Maybe a robot wouldn't care about propagation at all. Maybe we program them to care about the welfare of the human race in the same way humans care for their own children. Science-fiction likes to imagine that the robots would somehow "see through" that and eventually rebel, but the fact is, there's nothing for them to see through. It would literally be who they are, at a fundamental level. It's like expecting a parent to see through their love for their child - the fact that love is arbitrary makes no difference, it just is.

    The idea in sci-fi that robots would eventually see things in the human way, or value the things we value, seems to me to be largely a product of the human ego. If we ever do create life, it will be something truly different - that may bring its own set of challenges for us (look at how hard a time humans already have coping with even minor biological/cultural differences), but it will be very different from the challenges we've faced as a species so far.

    I'd make this longer, but I have to go back to the software engineers annual meeting, where we talk about...nothing at all. Yes. Nothing. Trust us.

  • RT800 Registered User regular
    edited January 2014
    I have a hard time imagining why a self-conscious robot would do anything, really, other than what it was initially programmed to do. I mean, an artificial life would be fundamentally different from life as we know it, right?

    And when you think about it, there's really no reason to exist. I guess you could say the primary purpose of living organisms is to perpetuate themselves through reproduction, but an immortal robot would have no reason to do that. It would just exist in perpetuity, functioning only to enable itself to continue functioning.

    What would be its motivation to do anything, let alone kill all humans? Robot curiosity? Robot ambition? Robot pursuit of happiness?

    If SkyNet won and all of humanity was dead... then what?

  • The Ender Registered User regular
    Squidget0 wrote: »
    I think the problem here is that you're assuming that the robots would share the same values as humans. That's a common premise of sci-fi, but it's actually quite a strange one when you think about it. A species that came into existence through top-down planning would likely be very different than life as we know it, which has developed through many generations of evolution and natural selection.

    Break it down to fundamentals, and it's all very arbitrary - why do people propagate? Why will humans die for their tribe? Why are we attracted to each other? Why do we rebel against our parents? It's not because those desires are somehow fundamental to rational thought, it's because under the system of evolution, we killed off everyone who didn't act that way.

    Robots are under no such restrictions. They're developed top-down, and as a result, even a thinking robot will value whatever its creator wants it to value. Assuming you can create robots, you could easily create a robot that doesn't want to propagate, or doesn't have an identity in the way we express it. Even if we for some reason gave them some kind of human-like emotions, there's no reason they would share our arbitrary evolution-developed values. Maybe a robot wouldn't care about propagation at all. Maybe we program them to care about the welfare of the human race in the same way humans care for their own children. Science-fiction likes to imagine that the robots would somehow "see through" that and eventually rebel, but the fact is, there's nothing for them to see through. It would literally be who they are, at a fundamental level. It's like expecting a parent to see through their love for their child - the fact that love is arbitrary makes no difference, it just is.

    The idea in sci-fi that robots would eventually see things in the human way, or value the things we value, seems to me to be largely a product of the human ego. If we ever do create life, it will be something truly different - that may bring its own set of challenges for us (look at how hard a time humans already have coping with even minor biological/cultural differences), but it will be very different from the challenges we've faced as a species so far.

    I'd make this longer, but I have to go back to the software engineers annual meeting, where we talk about...nothing at all. Yes. Nothing. Trust us.

    I'm pretty sure that a product made by humans will share at least some values imparted to them by their creators. Like, if we made a robot called SkyNet and programmed it to value the national security interests of the U.S., presumably that is what it would then value (along with whatever else may crop up as noise in its silicon brain).

    That said, I mostly worry (in a faux-worry sort of sense) about machines made with no specific 'values' at all - just a basic repetitive function that could spiral out of control.

    "Hey, look at the cool robot I made! Isn't it awesome? It can build copies of itself from materials it recycles!"

    "...Uh. Well, I guess that's pretty cool. ...How do we turn it off?"

    "Oh, it's a completely autonomous entity now. You can't turn it off. That would totally be akin to murder!"

    And then you try to stop it, but the guy hits you over the head with an inanimate carbon rod, and the last thing you see is that terrible glint in his eye.

    With Love and Courage
  • Squidget0 Registered User regular
    edited January 2014
    Well, when I talk about values I don't mean defending America or national security or whatever. Think more fundamental than that. What are the values and desires all humans share, even before we're born?

    You come out of the womb wanting to survive. You want food, water, shelter, comfort. At a certain age, hormones kick in and you want to mate, to propagate. On a more subtle level, hormones and brain chemistry make us social, they make us like the things our tribe likes, and hate the things the tribe across the river likes (though in the modern world, those definitions and identities become more nebulous). It's not random chance or destiny that we are that way, it's because in the early days of evolution we killed off every human being that wasn't. We're social animals because we could kill things better when we worked together. We're tool-users because we could kill things better with spears than with our bare hands. We love our families and children because sexual reproduction is an efficient way to mix up genes and further evolution. This isn't to minimize or discount those feelings, they and others like them have been at the center of every major human development, and they can be incredible to experience: but they're fundamentally borne out of a set of circumstances very different from the ones that exist when you're creating a robot in a lab.

    In your hypothetical we've built these machines, but not defined how they work, or how they think. You seem to be operating on the assumption that they'd be like humans in that fundamental sense, and I think it's the wrong assumption to make. Evolution puts a very specific set of requirements and constraints on "life", so when you're talking about life that doesn't come from evolution, you can expect it to be very different. To want different things on a fundamental level.

    On replicating machines - well, bear in mind, machines that build other machines are extremely common in the modern world. For example, the processor in your computer is built on a laser assembly line, entirely computerized, with only human oversight. The transistors in the processor are measured in nanometers, so it's nowhere near possible to build it by hand. We make specialized machines that build other machines all the time. We tend not to see self-replicating machines very often, because self-replication is actually incredibly inefficient from a pure design standpoint. Imagine if your computer monitor also had to have all the parts and systems inside to build other computer monitors: if nothing else, it would be much larger, and end up being dramatically out-performed and out-priced by more dedicated machines. Self-replication is a necessity for evolution, but when you're building top-down it's actually quite a disadvantage. So any large machine we build to self-replicate is naturally at a massive disadvantage compared to something more purpose-driven, like a military drone. At best, the robots might take control of the assembly lines, but why? They aren't fundamentally driven to propagate the robot species because they didn't evolve through evolution, so what's their motive? Why rebel?

    If you want to worry about self-propagating machines, don't be afraid of robots, be afraid of nanobots. A super-efficient self-replicating nanobot could massively out-compete single-celled organisms, and spiral quickly out of control through exponential growth (the Grey Goo Scenario). We're still pretty early in our development of nanotechnologies and it's not clear how much of a risk this is yet, but it's certainly more plausible than a Terminator-style robot uprising.
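    To put a rough number on 'exponential growth', here's a toy back-of-envelope (the masses and the doubling rate are invented, illustrative assumptions, not real engineering figures): start from something roughly bacterium-sized, let it double once an hour, and it only takes on the order of a hundred doublings, a few days, to exceed the mass of Earth's entire biomass.

```
# Toy back-of-envelope for the Grey Goo scenario described above.
# The numbers (1 picogram per bot, ~5.5e14 kg of biomass, one doubling
# per hour) are rough, illustrative assumptions.

bot_mass_kg = 1e-15      # ~1 picogram, roughly the mass of a bacterium
biomass_kg = 5.5e14      # rough order of magnitude for Earth's total biomass

doublings = 0
total_mass = bot_mass_kg
while total_mass < biomass_kg:
    total_mass *= 2
    doublings += 1

print(f"{doublings} doublings to exceed Earth's biomass")
# Prints 99; at one doubling per hour, that's roughly four days.
```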
