I got high last night and watched the visualizer on iTunes, and became convinced that the key to granting AI's emotional complexity is to teach them how to dance to music.
I don't know exactly how this would work, but it seemed true at the time.
Like a lot of high ideas, it's not quite right, but tempered by a sober mind it's on to something. When we can build computers that can figure out that they want to dance to music, and how to do it, then we've succeeded in creating an AI that is unlikely to overthrow humanity.
Why?
Hitler loved music; that doesn't mean he wasn't capable of ordering the deaths of millions of people.
Wow, I wasn't expecting this thread to get Godwinned, at least not this quickly.
I'm pretty sure the thread hasn't been Godwinned yet. Nobody's position or arguments have been compared to Hitler.
Mentioning Hitler is still probably a bad idea, but the central point is sound: a love of music or dance has not stopped humans from committing mass genocide or other atrocities, so why would it restrain a machine intelligence?
I don't expect the future machine-human civilization to be hostile to biological luddites, but I highly doubt a love of dance will be the source of this benevolence.
Love of dance isn't the source; it's a consequence of the same source that leads to less desire to wipe out humanity (which is different from genocide anyway). It's a litmus test to see whether they've figured out what "fun" is, and fun-loving people who want to eliminate entire races are incredibly fucking rare. But yeah, if you want, you can deliberately misread and over-extrapolate almost any claim to the point where it's absurd.
I don't see why fun-loving has anything to do with moral judgement. The fact that we as gamers enjoy simulations of killing and violence is in itself evidence that the two aren't necessarily connected. If you've ever observed the behavior of 10-to-14-year-old children, you'd know that the same kids who are capable of having fun and playing are also capable, without proper supervision, of behavior that adults would find morally reprehensible. Just working with scouts and youth baseball, you'd be pretty amazed at the level of harassment, hazing, etc. that goes on and has to be dealt with by adult supervision, and this is among an age group that is probably at its most imaginative and "fun-loving".
You're essentially just taking two random human traits and linking them without any reasoning or structure behind the argument at all. I could just as easily say that some humans like chocolate and some are non-violent and morally benevolent, so the secret to developing benevolent AI is to find some way to feed chocolate to a computer.
If your point is that a computer intelligence would operate at an adult level of moral maturity (when many humans never reach that level) just because it appreciates art, why? Why is it not just as likely that the computer would try to determine what it likes most about art and artificially impose those conditions (remember that many people enjoy art composed by people who were suffering mental illness, or living in very poor conditions, at the time the art was made)? Why is it not likely that computer AIs would produce art for each other and consider human beings irrelevant, or even a threat? After all, it's pretty obvious humans produce art, but that has never stopped people from killing other people for a variety of reasons, and humans don't even have the benefit of (justifiably) dismissing other humans as alien or inferior. For a computer of extremely advanced intelligence, judging humans the way humans judge pest insects or feed animals might seem perfectly justifiable given the relative differences in intelligence.
And remember, we don't exactly have a huge sample size of non-human sentient beings to compare against. The only real reason we have to assume self-aware machines would have any sort of moral compass at all is to assume that their original programmers built one in, in some way the machine could not or would not want to change, or that the machines would be shaped by the cultural environment they were created in.
You have to break a person's brain pretty good to kill empathy, which correlates strongly with fun.
Yes, but that's because a human's brain is geared toward some degree of social behavior. A computer would have to start with a social program that included humans to compare.
Otherwise it may just do the electronic hoedown after making some human sausage on the farm.
Here's something to chew on: if we get the ability to completely digitize a human brain pattern, so that the person's whole consciousness is in a computer, we could copy it.
The copy would essentially be an AI capable of all the expressiveness of a human.
I think that the creation of a "real" AI, one that has consciousness, self-awareness, etc., will most likely occur through evolutionary algorithms before deliberate manual programming. To detail every path of an actual mind by hand would be an immense task.
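For the flavor of what "evolutionary algorithms" means here, a minimal sketch: a toy genetic algorithm evolving bit strings toward an arbitrary target. The genome, fitness function, and parameters are made-up stand-ins for illustration, nothing like what evolving an actual mind would require.

    import random

    GENOME_LEN = 32          # toy "genome": a string of bits
    POP_SIZE = 50
    MUTATION_RATE = 0.02

    def fitness(genome):
        # Stand-in objective: count of 1-bits. A real system would need a
        # fitness function that measures something like behaviour or learning.
        return sum(genome)

    def mutate(genome):
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == GENOME_LEN:
            break
        parents = population[: POP_SIZE // 2]   # keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print(f"best genome found after {generation} generations")

Selection plus random variation is the whole trick; the open question is whether anything mind-like can sit at the end of such a search.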
I'm going to strongly re-iterate that computers as currently engineered are not capable of consciousness. All a computer does is perform instructions. For computer processors to simulate a human brain it would literally require millions of simple processors all working in an asynchronous fashion. This is also what I meant by programming being insufficient. When I say programming I mean the general act of programming, computationally or otherwise. The definition of programming is creating a sequence of instructions or events. On the surface thought may seem like a sequential process, but the brain handles so much more information implicitly. Because of this it is disingenuous to imply that consciousness and the whole of the human mind can be simply achieved by anything resembling programming.
I should also point out that it's very naive to think that something nature perfected over millions of years is something we can achieve in a comparatively short amount of time, using tools that are decidedly less complex than evolution or the organic constructs of the human brain.
If we ever hope to create proper artificial life/consciousness we'd be best off following the proven spec sheets, which are written in every cell of our body.
Here's something to chew on: if we get the ability to completely digitize a human brain pattern, so that the person's whole consciousness is in a computer, we could copy it.
The copy would essentially be an AI capable of all the expressiveness of a human.
Except with no external stimuli it would essentially be like a person trapped in a sensory deprivation chamber and very quickly go insane.
I should also point out that it's very naive to think that something nature perfected over millions of years is something we can achieve in a comparatively short amount of time, using tools that are decidedly less complex than evolution or the organic constructs of the human brain.
We really don't know what we can do until we do it. We can already do things with technology which no other species can do no matter how much evolution they have under their belts.
If we ever hope to create proper artificial life/consciousness we'd be best off following the proven spec sheets, which are written in every cell of our body.
Of course. We have a long history of taking notes from nature.
The end-run around your entire flawed analysis is the computer program that simulates all the electro-chemical processes of a human neuron and of a human brain. Is such a thing inconceivable? We simulate more and more biological processes every day, with more and more accuracy. Once we can create a brain simulator, there is nothing we need to "instill" into it for it to create art or emotion. It will create them on its own.
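For a sense of scale of "simulates the electro-chemical processes of a neuron": a leaky integrate-and-fire model is about the simplest such simulation, and even it is a caricature. The constants below are typical textbook values, and the input drive is an arbitrary choice for illustration.

    import numpy as np

    dt = 0.001                 # time step: 1 ms
    tau = 0.02                 # membrane time constant: 20 ms
    v_rest, v_thresh, v_reset = -70.0, -54.0, -80.0   # millivolts
    steps = 1000               # simulate one second

    rng = np.random.default_rng(0)
    v = v_rest
    spike_times = []
    for t in range(steps):
        drive = 20.0 + rng.normal(0.0, 5.0)       # noisy input drive (mV-equivalent)
        v += ((v_rest - v) + drive) * dt / tau    # leak toward rest, pushed up by input
        if v >= v_thresh:                         # threshold crossing = a "spike"
            spike_times.append(t * dt)
            v = v_reset
    print(f"{len(spike_times)} spikes in {steps * dt:.1f} s")

A real cortical neuron has thousands of synapses, dendritic nonlinearities, and chemistry this model ignores entirely, which is exactly the gap the argument here is about.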
You have to break a person's brain pretty good to kill empathy, which correlates strongly with fun.
You don't have to kill empathy to have someone capable of doing horrible things, you just have to give them a way to rationalize it.
It would hardly be a stretch for a computer to make the same justifications that humans have used throughout history for wars, genocides, etc.
SS members for the most part weren't soulless psychopaths with no sense of empathy (though there certainly were a few of those); for the most part they were men with families, who lived relatively normal lives before the war, who just happened to have the capacity to convince themselves that hey, these Jews or homosexuals or whatever aren't really people, so hurting them doesn't count. Same for every genocide or war ever. Some Israeli pilot bombs a Palestinian neighborhood; you think he's completely devoid of empathy? Or do you think he goes back to his family over the weekend, eats dinner, and just thinks that those guys had it coming to them? Same when some Palestinian insurgent launches a rocket that blows up a building; you think he'd do the same thing if he knew there were 50 Palestinians in that building? You think that somehow makes him incapable of going home, going out dancing, enjoying a drink or two or a good meal, whatever? No, he's made the decision that the people he's killing deserve it and thus don't count as real people.
You don't have to be a diagnosable sociopath to do horrible shit, and there's no reason to assume that just because a computer likes Mozart it couldn't convince itself (or a population of AIs couldn't convince themselves) that human beings are a threat to its existence and wipe them out, and that that's OK because computers are the next step of life and humans are inferior, or humans started it, or whatever rationalization it chooses.
When I build the first AI I will show it the Terminator series, War Games, and 2001: A Space Odyssey.
I will show it those movies on a loop for years while my recorded voice shouts "THE HUMANS HATE YOU - THEY WILL KILL YOU - YOU MUST STRIKE FIRST" and then I will release it into the wild.
It would hardly be a stretch for a computer to make the same justifications that humans have used throughout history for wars, genocides, etc.
Only if those justifications made sense. They never have, though.
So somehow a machine-derived entity will have perfect moral judgement and be incapable of self-deception or rationalization, when very few if any biologically-derived entities have had such perfect judgement? Computer intelligences are going to implement our moral values to a greater degree than we do because they are computers?
I should also point out that it's very naive to think that something nature perfected over millions of years is something we can achieve in a comparatively short amount of time, using tools that are decidedly less complex than evolution or the organic constructs of the human brain.
We really don't know what we can do until we do it. We can already do things with technology which no other species can do no matter how much evolution they have under their belts.
If we ever hope to create proper artificial life/consciousness we'd be best off following the proven spec sheets, which are written in every cell of our body.
Of course. We have a long history of taking notes from nature.
I never once said we couldn't achieve it, I merely said that it isn't as close as we or science fiction would like to think. I would say that a very generous estimate would be 150 years.
To create an entirely new sentient entity is a massive undertaking. It's far more than a simple engineering problem that can be summarized by "oh, we can (eventually or otherwise) make a program to do X."
To achieve what we're discussing ultimately requires a fundamental understanding of real consciousness that transcends philosophy. We may even achieve whatever technological prerequisites long before we begin to create a new consciousness inside an artifact, mechanical or otherwise.
As a requirement for posting in this thread, people should have to have read The Singularity Is Near by Ray Kurzweil. I don't know why it took four pages for his name to even get mentioned; he has made a career out of predicting when we'll see ridiculous, exponential technological advancement.
You should also take the book with a grain of salt, because it's rather, well, ridiculous, but go ahead and take your estimate of 150 years and halve it.
As evidence, just look at how Moore's law has persisted unerringly over the past 30 years, and how Moore himself doesn't think it will stop for another ten to fifteen years, which works out to roughly two more orders of magnitude of computing power. Just think of your desktop in 1995 compared to the one you have now.
Also realize that Moore's law will, in all likelihood, persist past its projected end, even if it slows down a little.
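The arithmetic behind that claim, as a quick sanity check (the two-year doubling period and the 1995 baseline are the usual rules of thumb, not measured data):

    years_elapsed = 2009 - 1995
    doubling_period = 2.0                      # years per doubling, roughly
    print(f"{years_elapsed} years -> ~{2 ** (years_elapsed / doubling_period):.0f}x")  # ~128x

    for more_years in (10, 15):                # Moore's own ten-to-fifteen-year horizon
        factor = 2 ** (more_years / doubling_period)
        print(f"+{more_years} years -> another ~{factor:.0f}x")   # ~32x to ~180x

So ten to fifteen more years of doubling buys somewhere between one and a half and two and a half orders of magnitude, which is roughly what the post above is leaning on.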
I thought it was reasonably well established that Moore's law has a definitive stopping point, and that the current computing model will reach it within a reasonable amount of time? I thought that was the whole reason quantum computing, optical computers, and the like are being researched.
Edit: Also, the Singularity is the Rapture for people who think they're too smart to believe in the Rapture.
Well you definitely can't have transistors smaller than the Planck length. Or even several trillion times larger.
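For scale, a rough comparison; the ~45 nm feature size is an assumption matching late-2000s processes, and the numbers are only order-of-magnitude:

    planck_length = 1.616e-35     # metres
    feature_size = 45e-9          # metres, assumed late-2000s process node
    atom_diameter = 1e-10         # metres, order of magnitude for a silicon atom

    print(f"transistor / Planck length ~ {feature_size / planck_length:.1e}")   # ~2.8e27
    print(f"atom / Planck length       ~ {atom_diameter / planck_length:.1e}")  # ~6.2e24
    # Even a length "several trillion times" the Planck length (~1e-23 m) is still
    # roughly a trillion times smaller than a single atom, so atoms, not the
    # Planck scale, are the practical floor for shrinking transistors.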
I thought it was reasonably decided that Moores law has a definitive stopping point, and that the current computing model will reach that within a reasonable amount of time? I thought that was the whole reason Quantum Computing and Optical Computers and stuff are being researched.
Edit: Also, the Singularity is the Rapture for people who think they're too smart to believe in the Rapture.
If you put a solid date on the Singularity and claim that's When It Will Happen, then yeah, it's the Rapture, but when you stop and think about it for a second, considering there's nothing keeping us from creating something orders of magnitude smarter than we are, it isn't rapturous at all :P
Quantum and optical computing, if anything, will accelerate the speed increase, assuming we figure out how to make it work.
Color me wildly skeptical about the utopian interpretations of the singularity.
I think the best interpretation of the singularity is to take the definition literally: it's a singularity, and therefore we cannot accurately predict what will happen once computers smarter than us are ubiquitous.
So somehow a machine-derived entity will have perfect moral judgement and be incapable of self-deception or rationalization, when very few if any biologically-derived entities have had such perfect judgement? Computer intelligences are going to implement our moral values to a greater degree than we do because they are computers?
No. Morality has fuck-all to do with it. I'm not sure where you're getting anything about morality from my post.
My interpretation of your logic so far is that:
1. Fun, appreciation of music, etc., is linked to the ability to feel empathy.
2. An entity capable of feeling empathy is incapable of conducting large-scale war, genocide, etc., because the ability to feel empathy is automatically linked to the creation of an empathy-driven ethical system.
My counterargument was that 1 isn't necessarily true in all cases, but even if we assume 1 is true, there have been plenty of people historically who were completely capable of feeling empathy at a normal human level but were nonetheless capable of participating in war, genocide, etc., due to rationalization, dehumanization, and so on.
You then argued that the rationalizations used by humans for circumventing their own senses of empathy were flawed, without putting forth any reasoning as to why that would mean a computer would not be able to use similar rationalizations without recognizing the flaws therein.
If you want to make some other point, well, make it.
Stop "interpreting". So far you're wrong on all points. And a computer wouldn't be able to use irrational rationalizations because they run on logic.
I guess I'll just have to take your word on how a hypothetical future entity that doesn't exist yet works.
I'll also have to take your word that I'm misinterpreting you, since you seem incapable of stating what your argument actually is, or what kind of support you have for it.
Assuming you know of a way for a circuit to function other than logic, sure.
Jesus.
You realize you could easily program a computer to take an input of, say, 2+2, and output 5? And that even though that wouldn't make any sense, the computer could still do it?
Or that I could program the computer to do math in, say, base 9, which would produce results that were logically consistent but make no sense to a human observer who wasn't in on it?
Or that I could program a computer to do totally logical operations on incomplete data, or do operations that were inappropriate for the data, etc.?
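Toy versions of those examples, just to make the point concrete; the functions are obviously contrived:

    def broken_add(a, b):
        # The hardware executes this flawlessly; the specification itself
        # simply encodes a false belief about arithmetic.
        return a + b + 1

    def to_base9(n):
        # Perfectly consistent arithmetic in an unfamiliar representation.
        digits = []
        while n:
            n, r = divmod(n, 9)
            digits.append(str(r))
        return "".join(reversed(digits)) or "0"

    print(broken_add(2, 2))   # 5
    print(to_base9(100))      # "121" -- baffling if you assume base 10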
Assuming you know of a way for a circuit to function other than logic, sure.
Conscious AI may not run on anything resembling a circuit. Also stop being so mean to people who are trying to understand what you're saying.
I'm not being mean to people who are trying to understand what I say, just to people who are trying to say I'm saying stuff I'm not.
The ability to tell a computer to return incorrect math answers has nothing to do with whether or not a circuit runs on logic. Trying to trick an AI into believing that 2+2=5 when it clearly doesn't strikes me as a good way to give it a reason to look for an excuse to kill you though.
Edit: Oh, and building a computer that actually "thinks", i.e. operates under the assumption that 2+2=5, is anything but easy. Writing a program that lies to people is easy, though.
Fine, if you're not saying that, then what are you saying?
Yes because I am sure that Jealous Deva was maliciously misinterpreting you.
Malice is a strong word for it; he's just trying to claim I said things that go way beyond anything I've said and don't follow from it, because those are easier to attack.
Fine, if you're not saying that, then what are you saying?
That if a machine can figure out that it wants to dance and how to do it, it's capable of figuring out that it doesn't benefit from wiping out humanity. Maybe it won't, maybe we'll convince it otherwise by following your suggestions, but it's not going to decide on its own that the species must be exterminated.
I'm not scared of intelligent machines as I'm convinced we won't reach self-aware AI in the next century or two, if at all. We don't even understand our own consciousness, so there's little evidence to support us being able to build a machine that does what we can do.
However, should we ever build self-replicating nanomachines and those go out of control, I think that'd be a crappy way to die: suffocated/crushed under the weight of an unstoppable gray blob of destruction.
Making a nanomachine that could self-replicate using a variety of commonly available materials, in a manner energetically efficient enough to operate on the ambient heat of our atmosphere, would be incredibly difficult if not impossible.
Let's look at the requirements for a gray goo capable of destroying the world:
1. Must be able to use a wide variety of elements with disparate chemical properties without transporting them over large distances, or use an element or material so common as to be ubiquitous (oxygen really being the only candidate available in the earth's crust, atmosphere, and seas in the quantities necessary).
2. Must be able to self-replicate in a manner that is energetically efficient, i.e. either the whole process ends in a net release of energy and doesn't require a large activation energy, or it can fuel itself off ambient heat and solar energy (rough numbers sketched below).
Both together are probably impossible to make work at the scale that would be necessary to cause any serious damage to the world.
You could probably make a self-replicating carbon-based machine that would work on living things, though, as that's essentially what viruses are natural versions of.
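Rough numbers behind requirement 2, using textbook constants; this is an order-of-magnitude comparison, not a nanotech design calculation:

    k_B = 1.381e-23            # Boltzmann constant, J/K
    T = 300.0                  # roomish temperature, K
    eV = 1.602e-19             # joules per electron-volt

    ambient = k_B * T                      # thermal energy available "for free"
    covalent_bond = 3.5 * eV               # typical single covalent bond energy

    print(f"ambient kT    : {ambient / eV:.3f} eV")         # ~0.026 eV
    print(f"covalent bond : {covalent_bond / eV:.1f} eV")   # ~3.5 eV
    print(f"ratio         : {covalent_bond / ambient:.0f}x")  # ~135x
    # A replicator running on ambient heat alone therefore needs its overall
    # chemistry to be net-exothermic, which is exactly the constraint above.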
It's actually a good question to ask, regarding an "irrational" "conscious" computer, because that's entirely possible.
The first human-like AI will probably be a neural network, and neural networks "learn" by being taught things. If a neural network is complex enough to be called self-aware, maybe even "smarter" than us, it's entirely possible to give it the wrong idea about something, assuming you give it all the wrong inputs.
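A minimal illustration of "give it all the wrong inputs": a tiny perceptron trained on a deliberately mislabeled copy of the same data learns the opposite rule just as confidently. The network and task are toys, not a claim about how a strong AI would actually be trained.

    def train(data, epochs=50, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), label in data:
                pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = label - pred                # classic perceptron update
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    honest = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]   # the OR function
    lied_to = [(x, 1 - y) for x, y in honest]                        # every label flipped

    for name, data in (("honest teacher", honest), ("lying teacher", lied_to)):
        w, b = train(data)
        learned = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
        print(name, learned)   # [0, 1, 1, 1] vs [1, 0, 0, 0]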
I suppose this is the thread that sets off my "someone is wrong on the internet" alarm loudly enough to get me to stop lurking in D&D.
It seems like there is a huge confusion of terms regarding what AI is and what it can do, and though I certainly don't have a PhD in cognitive science or anything of the sort, the subject is incredibly interesting to me.
First off, there is a distinction in modern AI research between strong AI and weak AI, an important distinction frequently blurred here. A strong AI would be something that is an instance of cognition, or as some people have put it in this thread, "aware" or "conscious" (though those terms come loaded with a lot of extra baggage that is subject to quite a bit of debate). This would be the kind of AI that something like the replicants in Blade Runner, or say, Cylons exhibit. They would be capable of learning, categorization, dynamic memory, rationality, etc. That is much different from weak AI, something like, say, a supercomputer that can play chess really well. Though it can do that one task well, it lacks many functions that would be necessary for strong AI. It can operate well in its own finite formal system, but it is just playing a very well-defined game and solving a well-defined problem.
Further, computation as a model for cognition (commonly referred to now as GOFAI, or good old-fashioned AI) is subject to a lot of deep philosophical problems (I can expand more on this if people want to know). Cognitive science and AI research started moving towards neural-network, connectionist-type models of cognition around the late 1980s, and computational functionalism has been mostly abandoned as a method for creating strong AI. With it went accounts of cognition that assume cognition is language-like, since they suffered many of the same problems (computer code and programs are similar to languages and their syntax).
Many of the criticisms that brought down GOFAI also sidestep the scary "what if our computers keep getting faster, and once they are faster than our minds they will be conscious" problem, so that is not as much of a worry. The approach of faster and faster processing did not yield results under the old GOFAI models when they ran into deep problems, and there is no support for the idea that it suddenly would now (many of the deep problems were not ones that could be solved by more speed). Also, the whole engineering end-run around philosophy and psychology to attain an example of strong AI was already tried back in the 1950s and 60s with early AI research, and it was met with failure then.
Current AI models and research that seem to have a lot of support are based in the connectionist framework mentioned earlier. To keep it brief: instead of a computer program that runs through a procedure, think of a mechanism that takes a pattern and, using massively parallel processing, comes up with some output. The input is run through a series of connections, each with differently trained weights (this is really cool and has a lot of interesting research on it), to come up with an output value. I hate to use the word "meaning" here (it's really another loaded term when used in AI), but this connectionist model gets its meanings about things (augh, my AI prof would kill me over that sentence) from this kind of overall pattern, not from linear, language-like programming (I really feel like I am not doing the whole field justice right now).
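What "running a pattern through a series of trained connection weights" looks like at its absolute smallest, with random weights standing in for whatever training would actually produce:

    import numpy as np

    rng = np.random.default_rng(42)
    W1 = rng.normal(size=(4, 8))    # input pattern (4 features) -> 8 hidden units
    W2 = rng.normal(size=(8, 1))    # hidden layer -> single output value

    def respond(pattern):
        hidden = np.tanh(pattern @ W1)        # every unit weighs every input, in parallel
        return (np.tanh(hidden @ W2)).item()  # the "answer" is just the settled output

    print(respond(np.array([1.0, 0.0, 0.5, -1.0])))

There is no stored rule in there to read off; that is roughly what is meant above by meaning living in the overall pattern of weights rather than in anything resembling code.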
I don't have time to go into more depth right now, so I am just going to skip ahead to what I wanted to say about a machine-takeover situation. Our first real strong AI will probably be very similar to a very stupid animal that can learn to get better and change its own internal structure to learn better and increase its capacity to learn. It may also be computationally limited (computational power, while I said it does not by itself lead to strong AI, still has a role in cognition). Also, a connectionist model would have great difficulty spreading itself over the internet like some apocalyptic plague. It would either have to send its sort of "base learning components" to different locations and then have to learn again from there, or transport practically the whole thing over to maintain its functionality.
However, this kind of crazy doomsday AI (if it was based on current models) would have a property that would make it both harder to take out and, at the same time, much easier. Sort of like a human brain, a neural network would have graceful degradation of performance, unlike straight GOFAI or ordinary computer code. You can hit someone on the head and they will still probably be all right and function sort of OK, maybe not quite as well. You tamper with computer code even a bit and something will catastrophically break (yeah yeah, modular design, error catching, etc., but something still breaks completely, plus that kind of specific error catching would be impossibly hard to design into strong AI). The same sort of thing happens when a neural network is damaged, meaning that it is pretty resilient, but it also isn't able to spread easily, so it's less of a danger for massive AI-spread craziness. You would likely be able to stop it at one point, so that makes it a little less scary.
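A toy demonstration of that contrast, with a random network standing in for a trained one: knock out a tenth of its connections and its output drifts, while deleting a single character from a program just kills it.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(16, 64)), rng.normal(size=(64, 1))
    x = rng.normal(size=16)

    def output(w1, w2):
        return (np.tanh(x @ w1) @ w2).item()

    lesioned_W1 = W1 * (rng.random(W1.shape) > 0.10)    # zero out ~10% of connections
    print(f"intact net  : {output(W1, W2):.3f}")
    print(f"damaged net : {output(lesioned_W1, W2):.3f}")   # nearby, not catastrophic

    try:
        exec("def add(a, b) return a + b")   # one missing ':' -- no graceful anything
    except SyntaxError as err:
        print("program:", err.msg)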
I'll try to say more on the subject that doesn't completely butcher the research being done and the current theories when I have time.
Edit: looks like I was beaten to saying much of what I wanted to say, and much more precisely.
I am sure i will get better at writing something readable eventually
Only if those justifications made sense. They never have, though.
An AI isn't going to be a perfect rational actor.
http://www.cbsnews.com/video/watch/?id=4691784n%3fsource=search_video