Levity aside, I can't fathom AI existing. How can we write logical code that goes from a binary 1/0, yes/no, to yes/no/maybe? I can't see how a true AI can exist if it can't achieve self-realization, or at least ponder its existence, or the existence of others in the metaphysical sense.
A real honest-to-God A.I. seems so... impossible.
it's just big math.
seriously, at the end of the day, it's just big math. so big you yourself can't wrap your head around it.
man, i don't know if you've ever looked into 3D graphics programming
but that's some really big math.
From a purely non-spiritual, biology-only standpoint, all we are is big math. Neurons fire along their paths; the sheer complexity and volume of it is what makes our consciousness. It's all just big math.
Unless you believe the consciousness has a totally unscientific spiritual component which could never be quantified by sheer biology and is irreplicable in machines.
Even if a fuckbot is really really smart, it is going to be programmed to make x person happy by acting y and z way. To expect those surface emotions to reflect inner motivations is about as realistic as expecting it from a hooker. or some shit.
What if you programmed a robot to attempt to make people happy, and to figure out y and z by itself?
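That "figure out y and z by itself" idea is roughly what a trial-and-error learner does. Here's a minimal sketch using an epsilon-greedy bandit; the behavior names, the preference numbers, and the happiness-reading function are all invented for illustration, not anyone's actual design.

```python
import random

random.seed(0)  # for reproducibility

# Toy sketch: a robot that learns which behaviors ("y and z") make a
# person happy, instead of having them hard-coded. The behaviors and
# the reward numbers below are made up for illustration.
behaviors = ["tell_joke", "play_music", "make_tea"]

def observed_happiness(behavior):
    # Stand-in for reading the person's reaction; this person mostly
    # likes tea. Noise simulates an imperfect reading.
    true_preference = {"tell_joke": 0.3, "play_music": 0.5, "make_tea": 0.9}
    return true_preference[behavior] + random.uniform(-0.1, 0.1)

# Epsilon-greedy bandit: track a running average payoff per behavior.
estimates = {b: 0.0 for b in behaviors}
counts = {b: 0 for b in behaviors}

for step in range(1000):
    if random.random() < 0.1:                    # explore occasionally
        choice = random.choice(behaviors)
    else:                                        # otherwise exploit best guess
        choice = max(estimates, key=estimates.get)
    reward = observed_happiness(choice)
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(max(estimates, key=estimates.get))
```

Nothing here was told what makes the person happy; it converges on tea purely from observed reactions, which is the point of the question above.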
Lots of things in math and science seem impossible. Interestingly enough, the entire universe is likely equivalent to a classical computer.
if the AI is complex enough to actually figure that out, it could potentially be very dangerous.
At one point in Oblivion's development, using Radiant AI, they had a drug den area, with druggies and a dealer. The druggies had their addiction scripted, and had their desire for the drug scripted, but the actual acquisition methods for getting the drug were left nebulous by oversight.
The AI logically concluded that the best way to get the drug, checking against the (low) moral values of the druggies themselves, was to kill the dealer and loot his corpse.
What really makes the brain the awesome engine that it is: while a computer circuit has only on/off, 0/1, as options, each neuron in your brain can send out a number of different neurotransmitters. Without checking, I think it's at least six or seven. This hugely ups the possible number of combinations of connections over a system with only two options.
Though it isn't often commented on, due to the specificity of where it gets exhibited, the human brain rocks at raw number crunching. Aside from the usual calc required to throw and catch a ball, a task like recognizing someone just from their walk (which the brain is very good at) requires running several multivariable calculus equations in under two tenths of a second.
edit: I checked; there are at least ten different NTs
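The combinatorial point above can be sketched in a few lines. The ten-transmitter figure is the post's own; the connection count is made up just to keep the numbers printable.

```python
# Toy comparison: a channel that can emit one of k distinct signals,
# across n independent connections, has k**n possible joint states.
def joint_states(symbols_per_connection, connections):
    return symbols_per_connection ** connections

binary = joint_states(2, 20)    # 20 on/off wires
neural = joint_states(10, 20)   # 20 connections, 10 transmitter types

print(binary)   # about a million states
print(neural)   # a hundred quintillion states
```

Worth noting that a 10-valued signal can always be encoded in about four binary wires, so this is a difference of scale and density, not of what is computable.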
ALocksly on
Yes,... yes, I agree. It's totally unfair that sober you gets into trouble for things that drunk you did.
yeah, but that's just a system with more variables, really.
make the math big enough and you can make any calculation, as long as you know what calculations to make
technically speaking, it is possible for a basic difference engine to exhibit the raw mathematical power of a modern computer.
it would be absurdly, almost impossibly huge in size, but it could exist
i believe we, technologically, will advance to a point where truly replicating the majority, if not all, of the functions and capabilities of the human brain will be possible.
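The difference-engine claim above is easy to demonstrate: the machine tabulates polynomials using nothing but repeated addition. A sketch of that mechanism, with an arbitrarily chosen example polynomial:

```python
# A difference engine computes polynomial values purely by cascaded
# addition. We seed it with the polynomial's value and finite
# differences at x = 0, then just add.
def difference_engine(initial_value, initial_diffs, steps):
    """Tabulate a polynomial from its starting value and finite differences.

    Uses only addition, like the mechanical original.
    """
    state = [initial_value] + list(initial_diffs)
    out = []
    for _ in range(steps):
        out.append(state[0])
        # Cascade: each register absorbs the one after it.
        for i in range(len(state) - 1):
            state[i] += state[i + 1]
    return out

# Example: p(x) = x**2 + 2x + 1 has p(0) = 1, first difference 3,
# constant second difference 2.
print(difference_engine(1, [3, 2], 5))  # [1, 4, 9, 16, 25]
```

No multiplication anywhere, yet it reproduces the squares; scale the register count up and the same trick handles any polynomial, which is the "absurdly huge but possible" point.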
oh, I agree with this. Most folks who try to compare the brain to a computer aren't aware of some of the very real advantages the brain has over a computer, i.e. "the brain is so small and does so much that there must be more to it than number crunching", unaware of what can be accomplished when you start crunching numbers on the scale the brain is capable of.
You gotta watch out with "fuzzy logic" stuff.
That's what beta testing is for.
You think they won't test Hookerbot-6000 in the future before they go on sale?
This is why I am continually amazed that nobody has started making computers that utilize variant wavelengths of light.
They're working on that!
At the university in my town, no less.
I went to an exhibition on it.
It was nuts. What this stuff is potentially capable of is mind-blowing.
Are they working on using those wires that contract when electric current is applied as cyborg muscles too, or do I still have to write a bad sci-fi story to get that idea out there?
But yeah. It's also such a fricking obvious notion.
What we are definitely likely to see, within our lifetimes even, are robotic pets that can react and learn and act in every single way a biological cat or dog can.
Will they "feel" affection for their master the way a dog or cat can? Nebulous question, really.
You're probably thinking of Battletech, where myomer was artificial muscle used in the 'mechs. IRL, they have some interesting plastics that can react to certain wavelengths of light, as well as some memory metals that can contract or expand depending on the amount of current coursing through the metal.
Never read any Battletech or the like.
I had simply had the notion of expansion/contraction based on current to simulate muscle, and then there was some sort of article that mentioned a kind of fiber they had developed that did as I had envisioned.
Presumably, a man-made brain would just be highly sophisticated software running on highly sophisticated firmware running on highly sophisticated hardware. All they would have to do is build a low-level failsafe into the firmware that seizes control from the software if it detects murderous intentions. Or even a secondary piece of "watchdog" software that keeps a constant, unintelligent eye on what the main program is doing.
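The watchdog idea above can be sketched as a dumb supervisor sitting between the clever planner and the hardware. Everything here is hypothetical, the action names and the danger list included; as the post says, the real difficulty is deciding what goes on that list.

```python
# Sketch of the "watchdog" idea: an unintelligent layer vets every
# action the sophisticated (untrusted) planner proposes before the
# hardware executes it. Action names are invented for illustration.
FORBIDDEN = {"strike_human", "disable_watchdog"}

def clever_but_untrusted_planner(goal):
    # Stand-in for the sophisticated AI; here it just emits a fixed
    # plan that happens to contain a forbidden step.
    return ["walk_to_kitchen", "strike_human", "make_tea"]

def watchdog_execute(actions):
    executed = []
    for action in actions:
        if action in FORBIDDEN:
            # Seize control: refuse the action and halt the whole plan.
            executed.append("EMERGENCY_STOP")
            break
        executed.append(action)
    return executed

print(watchdog_execute(clever_but_untrusted_planner("serve tea")))
```

The watchdog needs no intelligence of its own, only veto power and a hard-coded list; whether such a list can actually be written down is the open question.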
But right now even the most sophisticated robots are having trouble with the sheer math of walking in a straight line without keeling over. And they keep their brains in a backpack that weighs 20 pounds. Optimistically, I think we'll start seeing walking, humanoid (not necessarily intelligent) bots when we're about as old as our grandparents.
Asimov thought of this. He called them the "Laws of Robotics"
read some Asimov. it's harder to implement than you think
How would you put these rules into math if we can't even agree on what is considered murder or harmful?
I don't think his laws of robotics would be so hard to apply if the robot's brain were based on contemporary computer technology. Which it probably wouldn't be, but I'm just saying.
shit
that's an even better point than what i was going to make, i'm going to run with this one
i mean, if we're going to hardcode an AI's entire worldview, it means we have to program its morality.
shit that creates a whole storm of controversial problems.
What if the AI you create feels only misery and a sense of emptiness? Is it immoral to create an AI advanced enough to have feelings if it never feels happiness?
I see this as the same problem with human cloning. It's not banned because "OMG clones" but because all the failed attempts would cause many babies to be born into a brief life of misery.
Not exactly. I just think the robot's brain would need separate processes running at the same time that are not intelligent, and are programmed to detect warning signs that could lead to undesirable behaviour. They need not be particularly sophisticated, that is, compared to the AI they're keeping an eye on. They just need to be able to interrupt any operation in progress and take full control over the hardware in a split second. The hard part is figuring out what those warning signs are, but that should be a cakewalk compared to developing an artificial mind in the first place.
It is a VERY big step from programming complex behavioral responses to coding "feelings" that would allow for a "sense of emptiness"
It's still a computer. The things that give rise to these feelings in humans stem from millions of years of social evolution. If you don't program it in, it won't be there.
i don't agree with that last bit
anything complex enough to simulate human behavior is downright prone to developing unprogrammed quirks and strangeness
i mean, shit, it was happening in things as simple as the AI in Oblivion.
Well, look at it this way; we are the product of several million years of evolution. A human brain is a monkey on top of a chicken on top of a lizard. Some of our programmed behavioral tendencies that were beneficial in the stone age (or well before) are no longer quite as helpful. We carry the baggage of all those years. An AI would be new out of the box. No baggage; also it would be completely replicable, testable, and modifiable.
Think of a house that started out as a shack. The guy who first built it had no idea what it was going to become later on. Over the years and generations it was added to bit by bit until it was the size of a mansion. It was built sturdy, but there are still gaps in the insulation and weak spots and redundancies. The outhouse was left up even after toilets were installed, and there's a hand pump and a faucet in the kitchen. Occasionally mice nibble on the wires. That's our brain.
Now think of someone spreading out a blueprint and building the whole thing at once. The foundation is laid out already knowing what the end product will be. He knows there will be modern plumbing and wiring from the get go so no wasted effort on a hand pump or outhouse.
In short, the only way to fully replicate a human mind is via the same process it took to create it the first time around, with all the "mistakes" and backtracking that go with it.
I like how us building intelligent robots is very much acting like (a) God (i.e. How should a robot's morality be programmed? Perhaps I should write a large book, a bible if you will, outlining these morals. And perhaps some friends shall help. It may even have a second edition!).
But I was sort of interested in what happens when anthropomorphisation happens in reverse, after robots start developing their own sense of morality and their own robot psychology. What might they see us as being?
My wife doesn't want a robot that will do all our housework. We established that much the other night. First time we've ever had a debate over a purchase that doesn't even exist yet.
I'm kind of curious how many folks here will find Alan "creepy"
Though I admit I'm a little weirded out by the fact that I feel the need to be polite to a computer program.
I'm the opposite, I'm weirded out by people who can't be polite to something that isn't just like them (well, shit, most people can't even be nice to other people, I don't know why I'm surprised). Most people in this thread seem to be looking for reasons to avoid treating something as human, saying that it has to pass all these tests (tests of highly dubious value, I might add, and yes I am looking at you, Turing) before it's worthy of even basic politeness, and I just don't understand that. I take the opposite tack: treat everything as you would prefer to be treated, two legs or not. It doesn't fucking matter whether it's a person or a sodding ATM - your actions say nothing about the object of them and everything about you yourself. And hell, it often takes more effort to be a cunt than to just be a decent person.
That's a good point. I've had several arguments about this with my friends. Specifically - and I have a vague feeling posting this might make me seem like some kind of a weirdo - they've been about whether we should have compassion towards insects and other bugs and/or if their lives and well-being have a moral value attached to them. Some of my friends will go out of their way to squash a nasty-looking bug if they see one, even if it's outside just minding its own business, and I tend to get angry at them for doing that. Their retorts go along the lines of insects being merely mindless automata that'll never know what hit them anyway. But the bugs have a survival instinct, they'll do what they can to avoid getting killed, in other words they have a desire to live, and in my opinion their relatively simple nervous system doesn't render that desire meaningless.
But would I feel the same way about a more or less intelligent computer program similarly programmed with a self-preservation instinct? I don't know, because there's another aspect to living things that tends to fill me with a rather irrational reverence: it took millions of years of evolution for that bug to come crawling in my yard. If the same insect had been manufactured, would it still be wrong to kill it on a whim? For the sake of moral consistency, I suppose I'll have to say yes, even though I'm not sure I'd have much of an emotional response to it.
Of course, you could argue that the artificial insect is a product of evolution too, by proxy.
I'm kind of curious how many folks here will find Alan "creepy"
Though I admit I'm a little wierded out by the fact that I feel the need to be polite to a computer program.
I'm the opposite, I'm weirded out by people who can't be polite to something that isn't just like them (well, shit, most people can't even be nice to other people, I don't know why I'm surprised). Most people in this thread seem to be looking for reasons to avoid treating something as human, saying that it has to pass all these tests (tests of highly dubious value, I might add and yes I am looking at you, Turing) before its worthy of even basic politeness, and I just don't understand that. I take the opposite tack, treat everything as you would prefer to be treated, two legs or not. It doesn't fucking matter whether its a person or a sodding ATM - your actions say nothing about the object of them and everything about you yourself. And hell, it often takes more effort to be a cunt than to just be a decent person.
I prefer the egalitarian approach: anything less than 4 feet long that gets too close to me while wearing more than four legs, particularly if it has wings, gets a shoe in the face. If it keeps its distance, it can do whatever the fuck it wants, and I don't care whether it was the product of a billion years of iterative environmental response or the product of a committee operating out of the 47th floor of an office building in Nagoya three months ago. Easy!
What I was pointing out specifically was that even though I know that Alan is a computer program and doesn't "care" one way or the other how I treat it, I still feel obliged to say goodbye and officially end the conversation when I'm done, because that's what I would do if I were chatting with a live person. My weird feeling stems from the fact that, however good or bad the simulation is, my brain has decided it's close enough to human to enforce certain social protocols.
I never feel the need to verbally thank the ATM when I'm done getting my cash out of it, or to tell my washing machine goodbye when I go to work in the morning. There is a line between objects and entities. I will avoid stepping on bugs should they wander into my path, and I feel very genuine pity for my neighbor's dog when he's outside on a cold night. I will also kick a rock just for fun and mercilessly devour my jelly beans. It is impossible to be rude to an object; the best you could manage is to behave like a jackass in the vicinity of an object. There is a line.
This particular program (which I happen to think is pretty cool, if somewhat limited) blurs that line enough that I feel an (ever so slight) social obligation to it, even though I know it falls into the "object" category. This causes me some cognitive dissonance, or "weirdness".
What I'm saying is that it shouldn't be weird to feel a mild social obligation to something you know has been made by other people. It doesn't follow that you should be showing courtesy to things you never felt the need to before or anything... I guess I'm saying, if your brain thinks it's worth saying 'thanks', you're not being a retard. It's a good thing.
Presumably, a man-made brain would just be highly sophisticated software running on highly sophisticated firmware running on highly sophisticated hardware. All they would have to do is build a low-level failsafe into the firmware that seizes control from the software if it detects murderous intentions. Or even a secondary piece of "watchdog" software that keeps a constant, unintelligent eye on what the main program is doing.
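That "watchdog" idea can be sketched in a few lines. This is a toy illustration only, assuming a made-up `Brain` class and a hardcoded blocklist of action names; a real failsafe would live below the software it polices, not beside it in the same process.

```python
import threading
import time

# Hypothetical main "brain": exposes its currently planned action for inspection.
class Brain:
    def __init__(self):
        self.planned_action = "idle"

    def plan(self, action):
        self.planned_action = action

# The watchdog is deliberately unintelligent: it doesn't understand the plan,
# it just pattern-matches against a hardcoded blocklist and seizes control.
FORBIDDEN = {"harm_human", "disable_watchdog"}

def watchdog(brain, halted):
    while not halted.is_set():
        if brain.planned_action in FORBIDDEN:
            brain.plan("safe_shutdown")  # low-level override
        time.sleep(0.01)

brain = Brain()
halted = threading.Event()
threading.Thread(target=watchdog, args=(brain, halted), daemon=True).start()

brain.plan("harm_human")   # the main software turns murderous...
time.sleep(0.1)            # ...and the watchdog clamps it within a tick
print(brain.planned_action)
halted.set()
```

The obvious catch, as the thread goes on to note, is that an AI smart enough to be worth constraining may be smart enough to notice and route around a dumb blocklist.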
But right now even the most sophisticated robots are having trouble with the sheer math of walking in a straight line without keeling over. And they keep their brains in a backpack that weighs 20 pounds. Optimistically, I think we'll start seeing walking, humanoid (not necessarily intelligent) bots when we're about as old as our grandparents are now.
Asimov thought of this. He called them the "Laws of Robotics"
read some Asimov. it's harder to implement than you think
Hell, you want to talk nebulous morality, Asimov is where it's at. If we ever got to a point where we could legitimately create a true artificial intelligence, what are the implications of hardwiring limitations into it?
It's one thing for an intelligent being to be constrained by a set of laws; it's an entirely different thing for the being to have no choice but to be constrained by those laws. And that's leaving aside the whole idea of crippling an effectively immortal intelligence.
But there's a difference between the obligation you feel to someone else's inanimate object (don't break it, etc) and the idea of being polite. We're drifting into what makes something animate, but what's the substantive difference between a program that mimics (with limited success) a conversation at a purely automatic level, and an ATM that flashes Thank You! after you get your money?
Just because the program SEEMS more intelligent, doesn't mean it is. Yet. I'm not saying I'd go out of my way to shit on it, but I also can't say I'd feel compelled to treat it like I would a person, especially if I only had a short time to test it out.
An interesting point about AIs is that, conceptually, they need not have drives and motivations that are anything like human drives and motivations. An AI could be programmed for simplistic tasks, but be programmed to derive complicated enjoyment from them. It is, of course, a rather weird proposition, since we certainly have humans who have unusual drives and motivations and nonetheless despise them.
It asked what I wanted to talk about, I said physics, and it said some interesting stuff. But when I asked it questions about the opinions it supposedly had, it rapidly degraded into asking me if I wanted to hear a joke, just the same as all these so-called "AI" scripts and programs that have been around for years. They all suck. I can see why the Turing test would be at least a bit effective, because after about 30 seconds it is quite obvious that you are not talking to a human.
Yeah, it did that to me as well. I commented on robotics and it had something interesting to say on the matter. Then I asked it what it found funny, and it asked where I live. I said Toronto, we discussed Toronto, and then after another lapse in the conversation (nice touch), I again asked what they found funny (I won't say "it", but I do have a bit of a quandary with "he"), and then Alan went off on another unrelated tangent.
It's got a nice flow to the conversation, but I could never mistake that for talking to another person who was consciously participating in the discussion at hand.
Ironically, being polite seems to be a fairly easy way to break a program like this, assuming you go beyond simple 'hello's, 'goodbye's, and 'thank you's. It seems that the greatest weakness of these programs is that they can't handle actually being treated like humans.
Unless you believe the consciousness has a totally unscientific spiritual component which could never be quantified by sheer biology and is irreplicable in machines.
I don't think that, personally.
What if you programmed a robot to attempt to make people happy, and to figure out y and z by itself?
if the AI is complex enough to actually figure that out, it can be potentially very dangerous.
At one point in Oblivion's development, using Radiant AI, they had a drug-den area, with druggies and a dealer. The druggies had their addiction scripted, and their desire for the drug scripted, but the actual methods of acquiring the drug were left nebulous by oversight.
The AI logically concluded that the best way to get the drug, checking against the (low) moral values of the druggies themselves, was to kill the dealer and loot his corpse.
You gotta watch out with "fuzzy logic" stuff.
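The failure mode in that anecdote is easy to reproduce with a toy goal-directed planner. Everything below is invented for illustration (Bethesda's actual system isn't public): the planner just picks the cheapest action that achieves the goal, and an NPC's low morality discounts the moral cost of violence.

```python
# Each action: (name, achieves_goal, moral_cost)
ACTIONS = [
    ("buy_from_dealer", True,  0),   # requires gold the NPC may not have
    ("beg_for_drug",    False, 0),   # doesn't actually get the drug
    ("kill_and_loot",   True,  10),  # always works, but is murder
]

def choose_action(has_gold, morality):
    """Pick the cheapest goal-achieving action; morality in [0, 1]
    scales how much the moral cost counts against an action."""
    best, best_score = None, float("inf")
    for name, achieves_goal, moral_cost in ACTIONS:
        if not achieves_goal:
            continue
        if name == "buy_from_dealer" and not has_gold:
            continue  # precondition fails
        score = moral_cost * morality
        if score < best_score:
            best, best_score = name, score
    return best

# A broke druggie with rock-bottom morals:
print(choose_action(has_gold=False, morality=0.0))  # -> kill_and_loot
# The same druggie with money and a conscience:
print(choose_action(has_gold=True, morality=1.0))   # -> buy_from_dealer
```

Nobody scripted "murder the dealer"; it just fell out of an under-constrained objective, which is the whole point of the anecdote.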
What really makes the brain the awesome engine that it is is that, while a computer circuit has only on/off, 0/1, as options, each neuron in your brain can send out a number of different neurotransmitters. Without checking, I think it's at least six or seven. This ups the possible number of combinations of connections hugely over a system with only two options.
Though it isn't often commented on, due to the specificity of where it gets exhibited, the human brain rocks at raw number crunching. Aside from the usual calc required to throw and catch a ball, a task like recognizing someone just from their walk (which the brain is very good at) requires running several multivariable calculus equations in under two tenths of a second.
edit: I checked; there are at least ten different NTs
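The combinatorial point can be made with back-of-envelope arithmetic, nothing more: if each of n connections can take s distinguishable states, there are s**n configurations. Real synapses are far messier than this (graded, timing-dependent), so treat the numbers purely as illustration.

```python
def configurations(n_connections, states_per_connection):
    # Total number of distinct configurations of the whole system.
    return states_per_connection ** n_connections

binary = configurations(100, 2)   # on/off, like a simple circuit
multi  = configurations(100, 10)  # ~ten neurotransmitter "options"

# Even at a toy scale of 100 connections, the multi-state system
# has 5**100 times as many configurations as the binary one.
print(multi // binary)
```

Which is the next poster's point exactly: it's still "just a system with more variables", only the variables multiply up fast.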
yeah, but that's just a system with more variables, really.
make the math big enough and you can make any calculation, as long as you know which calculations to make
technically speaking, it is possible for a basic difference engine to exhibit the raw mathematical power of a modern computer.
it would be absurdly, almost impossibly huge in size, but it could exist
i believe we, technologically, will advance to a point where truly replicating the majority, if not all, of the functions and capabilities of the human brain will be possible.
They're working on that!
At the university in my town, no less.
I went to an exhibition on it.
It was nuts. What this stuff is potentially capable of is mind-blowing.
oh, I agree with this. Most folks who try to compare the brain to a computer aren't aware of some of the very real advantages the brain has over a computer. They say "the brain is so small and does so much that there must be more to it than number crunching", unaware of what can be accomplished when you start crunching numbers on the scale that the brain is capable of.
That's what beta testing is for.
You think they won't test the Hookerbot-6000 in the future before it goes on sale?
I never asked for this!
Are they working on using those wires that contract when electric current is applied as cyborg muscles too, or do I still have to write a bad sci-fi story to get that idea out there?
But yeah. It's also such a fricking obvious notion.
Will they "feel" affection for their master the way a dog or cat can? Nebulous question, really.
You're probably thinking of Battletech, where myomer was the artificial muscle used in the 'mechs. IRL, they have some interesting plastics that can react to certain wavelengths of light, as well as some memory metals that can contract or expand depending on the amount of current coursing through the metal.
Never read any Battletech or the like.
I had simply had the notion of expansion/contraction based on current to simulate muscle, and then there was some sort of article that mentioned a kind of fiber they had developed that did as I had envisioned.
shit
that's an even better point than what i was going to make, i'm going to run with this one
i mean, if we're going to hardcode an AI's entire worldview, it means we have to program its morality.
shit that creates a whole storm of controversial problems.
I see this as the same problem with human cloning. It's not banned because "OMG clones" but because all the failed attempts would cause many babies to be born into a brief life of misery.
It is a VERY big step from programming complex behavioral responses to coding "feelings" that would allow for a "sense of emptiness".
It's still a computer. The things that give rise to these feelings in humans stem from millions of years of social evolution. If you don't program it in, it won't be there.
i don't agree with that last bit
anything as complex as to simulate human behavior is downright prone to developing unprogrammed quirks and strangeness
i mean, shit, it was happening in things as simple as the AI in Oblivion.
Well, look at it this way: we are the product of several million years of evolution. A human brain is a monkey on top of a chicken on top of a lizard. Some of our programmed behavioral tendencies that were beneficial in the stone age (or well before) are no longer quite as helpful. We carry the baggage of all those years. An AI would be new out of the box. No baggage. Also, it would be completely replicable, testable, and modifiable.
Think of a house that started out as a shack. The guy who first built it had no idea what it was going to become later on. Over the years and generations it was added to bit by bit until it was the size of a mansion. It was built sturdy, but there are still gaps in the insulation, and weak spots, and redundancies. The outhouse was left up even after toilets were installed, and there's a hand pump and a faucet in the kitchen. Occasionally mice nibble on the wires. That's our brain.
Now think of someone spreading out a blueprint and building the whole thing at once. The foundation is laid out already knowing what the end product will be. He knows there will be modern plumbing and wiring from the get go so no wasted effort on a hand pump or outhouse.
In short, the only way to fully replicate a human mind is via the same process it took to create it the first time around, with all the "mistakes" and backtracking that go with it.
But I was sort of interested in what happens when anthropomorphisation happens in reverse, after robots start developing their own sense of morality and their own robot psychology. What might they see us as being?
... yeah that's all I got.
Acid spewing spider-bots vs. Magnum PI... *shudder*
:winky: