
AI Government


Posts

  • DanHibiki Registered User regular
    edited March 2010
    Kamar wrote: »
    On the enslavement issue, wouldn't the obvious thing be to make it enjoy its primary function?

    The same way people enjoy procreation, arguably their 'primary function'.

    Orgasms each time it saves a patient, and there's always silicon hell for those AIs that disobey.

    DanHibiki on
  • Nartwak Registered User regular
    edited March 2010
    What does this patient have?
    Lupus?

    Nartwak on
  • durandal4532 Registered User regular
    edited March 2010
    Actually, shit. I can't remember the story's name... shoot.

    But, anyways, there's an excellent SF short story about a guy who creates a functioning AI, then a functioning AI society in a virtual world, and basically tries to use them to get a leg up on the competition in his business. Only, they fucking hate him, so they use the machinery in his lab to create their own pocket universe to live in and leave behind a note telling him he's a silly goose.

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
  • jothki Registered User regular
    edited March 2010
    There's kind of a balance of power issue with AIs in general. If an AI is designed to be incapable of harming humans and yet can be shut down by a human at any time, how is the AI supposed to feel about that? Beyond the moral issues, it's probably a bad idea to have an AI resent the people it's required to serve.

    jothki on
  • HamHamJ Registered User regular
    edited March 2010
    jothki wrote: »
    There's kind of a balance of power issue with AIs in general. If an AI is designed to be incapable of harming humans and yet can be shut down by a human at any time, how is the AI supposed to feel about that? Beyond the moral issues, it's probably a bad idea to have an AI resent the people it's required to serve.

    Why would it even be able to feel resentment?

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Daedalus Registered User regular
    edited March 2010
    Modern Man wrote: »
    Kamar wrote: »
    On the enslavement issue, wouldn't the obvious thing be to make it enjoy its primary function?

    The same way people enjoy procreation, arguably their 'primary function'.
    I wouldn't consider that ethical, personally. It would be like genetically engineering a child for a particular profession.

    I wonder if we'll see a lawsuit in our lifetimes about the legal rights of an AI?

    I think we're likely to see the political shitstorm resulting from working human cloning/human genetic engineering before we run into AI. We're fantastically far away from real AI, and it has nothing to do with how powerful our computer hardware is.

    Daedalus on
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    MrMister wrote: »
    The idea that we know what perfect rationality is--we just need a computer to carry it out for us--is hilarious.

    Yeah there's that too. This really pervades my whole skepticism of the whole thing.

    Before we start making a machine to think for us we really need to be able to do it ourselves... otherwise how do we make it? And if we make it via some kind of connectionist or neural system that we speed-evolve in order to try and make it more advanced than us... how on earth are we going to be able to tell if it is more advanced? Who is going to check it?

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • Edith Upwards Registered User regular
    edited March 2010
    Personally, I think that the first "real world" application of "AI" would be something similar to the Muses in Eclipse Phase.

    Not really intelligence as such, just excellently coded research/teaching assistants that are with a person for their entire life.

    Edith Upwards on
  • DarkWarrior __BANNED USERS regular
    edited March 2010
    Erich you have too much faith in humanity. The first real world application will be a sex bot.

    DarkWarrior on
  • Edith Upwards Registered User regular
    edited March 2010
    DarkWarrior wrote: »
    Erich you have too much faith in humanity. The first real world application will be a sex bot.

    1. That's already been done.

    2. Not what I think of when I think "AI".

    3. It's already been done.

    Edith Upwards on
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    Also, all these people making claims about AIs being perfect need to back them up with evidence.

    It's all very good to say "We know it will be perfect because the thought experiment requires it to be perfect", but what evidence do you have to make this claim? What evidence do you have that one AI will make the same diagnosis as another AI, given the kinds of general and vague claims patients make in the real world? Are there any AIs around to experiment on? I know there is good research in decision engines for medical diagnosis, supposedly... so how far advanced is this evidence? What are its current limitations? What are the problems that must be overcome to get it out there? Why isn't it out there now if it is so self-evidently good?

    If you have the evidence to support your supposed "facts", please, do trot them out. Until you do, don't presume that you're talking from a place of knowledge and everybody else is just ignorant.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • DarkWarrior __BANNED USERS regular
    edited March 2010
    I'm just saying, they'll harness the power of the human mind and then set it to perma-horny/clean the house mode.

    And the population will drop for a generation or two.

    I could easily see an AI taking over pretty much the run of everything one day, though that's long off in the future, when life progresses to a point where there's just too much for any human or group of humans to cope with.

    DarkWarrior on
  • DanHibiki Registered User regular
    edited March 2010
    DarkWarrior wrote: »
    Erich you have too much faith in humanity. The first real world application will be a sex bot.

    Spam filters and data miners for direct advertisers will be the first applications of AI. Just you wait and see.

    DanHibiki on
  • HamHamJ Registered User regular
    edited March 2010
    Morninglord wrote: »
    Also, all these people making claims about AIs being perfect need to back them up with evidence.

    It's all very good to say "We know it will be perfect because the thought experiment requires it to be perfect", but what evidence do you have to make this claim? What evidence do you have that one AI will make the same diagnosis as another AI, given the kinds of general and vague claims patients make in the real world? Are there any AIs around to experiment on? I know there is good research in decision engines for medical diagnosis, supposedly... so how far advanced is this evidence? What are its current limitations? What are the problems that must be overcome to get it out there? Why isn't it out there now if it is so self-evidently good?

    If you have the evidence to support your supposed "facts", please, do trot them out. Until you do, don't presume that you're talking from a place of knowledge and everybody else is just ignorant.

    I'm not sure you are interpreting "perfect" correctly in this context. It's not that one AI will make the same diagnosis as every other AI, or that this will in fact be the correct diagnosis. Rather, it's that the same AI will diagnose the same case the same way every time (holding the available science constant).

    This is simply an engineering problem. You need to do some six sigma shit, and eventually you get to where there is only an error once out of 100 billion times or something like that.

    A human can misremember something, can mean to prescribe one drug but actually write down another, and so forth. An AI can too, but it is far less likely because the base medium is just more precise.
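
    To make the precision point concrete, here's a back-of-the-envelope sketch in Python - with made-up error rates, not numbers from any real system - of how independent redundant checks drive a combined error rate down:

        # Hypothetical illustration: if each independent check fails with
        # probability p, then n independent checks all failing together
        # has probability p**n.
        def combined_error(p_single, n_checks):
            return p_single ** n_checks

        p = 1e-3  # assume a single check errs once per thousand cases (made up)
        for n in range(1, 5):
            print(f"{n} check(s): error rate ~ {combined_error(p, n):.0e}")
        # four 1-in-a-thousand checks -> ~1e-12, about once in a trillion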

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    HamHamJ wrote: »
    Also, all these people making claims about AIs being perfect need to back them up with evidence.

    It's all very good to say "We know it will be perfect because the thought experiment requires it to be perfect", but what evidence do you have to make this claim? What evidence do you have that one AI will make the same diagnosis as another AI, given the kinds of general and vague claims patients make in the real world? Are there any AIs around to experiment on? I know there is good research in decision engines for medical diagnosis, supposedly... so how far advanced is this evidence? What are its current limitations? What are the problems that must be overcome to get it out there? Why isn't it out there now if it is so self-evidently good?

    If you have the evidence to support your supposed "facts", please, do trot them out. Until you do, don't presume that you're talking from a place of knowledge and everybody else is just ignorant.

    I'm not sure you are interpreting "perfect" correctly in this context. It's not that one AI will make the same diagnosis as every other AI, or that this will in fact be the correct diagnosis. Rather, it's that the same AI will diagnose the same case the same way every time (holding the available science constant).

    This is simply an engineering problem. You need to do some six sigma shit, and eventually you get to where there is only an error once out of 100 billion times or something like that.

    A human can misremember something, can mean to prescribe one drug but actually write down another, and so forth. An AI can too, but it is far less likely because the base medium is just more precise.

    Yeah, but the available science behind the symptoms used in the medical model is not constant. Nor is one case's surface appearance necessarily going to be the same as another's, nor is an AI necessarily going to be able to establish a patient's case history simply by asking questions. Doctors don't only go with questions; they also read the patient's body language and demeanor.
    For difficult cases, they have to rely on intuition about the patient. Patients might not be able to describe their symptoms accurately. Doctors have to deal with "I feel blagh".

    You can't just code in the relevant medical manuals, because those medical manuals have appendixes or addendums saying "Oh btw, don't treat this as a bible. Think!"

    This isn't to say I don't think it'd be a great accessory to a doctor, something to have in the doctor's office that the doctor could input information into as a formal second opinion. It's a great tool. It's just a huge stretch and an oversimplification to assume you can replace doctors.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • nescientist Registered User regular
    edited March 2010
    a patient's case history

    That is the worst possible example to choose. It's quite reasonable to be skeptical of bold claims about nonexistent tech, but even with the pittance that we can currently claim to know about a future AI we can pretty confidently say that they will be better able to navigate largely quantitative data (such as is found in case histories) than a human. An AI could hold a hundred thousand case histories in memory at the same time and perform complex operations to compare them in a hundred thousand different ways, and it could do this while holding a conversation with a patient. In fact, the most outlandish part of the picture I've just painted is not the data-collating abilities of my hypothetical AI, but rather the bit about the conversation.
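
    As a toy sketch of what "compare a new case against thousands of stored histories" could look like in Python - the fields, numbers, and case names here are all invented for illustration:

        import math

        def cosine_similarity(a, b):
            # Similarity between two equal-length vectors of numeric findings.
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        # Made-up feature layout: (fever, cough, joint_pain, rash)
        case_histories = {
            "case_001": [1.0, 0.0, 1.0, 1.0],
            "case_002": [0.0, 1.0, 0.0, 0.0],
            "case_003": [1.0, 0.0, 0.8, 0.9],
        }

        new_case = [0.9, 0.1, 1.0, 0.7]
        ranked = sorted(case_histories.items(),
                        key=lambda kv: cosine_similarity(new_case, kv[1]),
                        reverse=True)
        print(ranked[0][0])  # the stored case most similar to the new one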

    nescientist on
  • HamHamJ Registered User regular
    edited March 2010
    Yeah, but the available science behind the symptoms used in the medical model is not constant.

    And thus they would need to download upgrades on a regular basis. My point was only to specify that the issue was precision, not anything else.
    Nor is one case's surface appearance necessarily going to be the same as another's, nor is an AI necessarily going to be able to establish a patient's case history simply by asking questions. Doctors don't only go with questions; they also read the patient's body language and demeanor.

    And we are working on systems that can do that. Or you could have a nurse assisting the AI and telling it what impression he/she is getting of the patient's body language.
    For difficult cases, they have to rely on intuition about the patient.

    In other words, they guess. Which an AI can do just as well.
    You can't just code in the relevant medical manuals, because those medical manuals have appendixes or addendums saying "Oh btw, don't treat this as a bible. Think!"

    And an AI would be very good at thinking.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • electricitylikesme Registered User regular
    edited March 2010
    Yar wrote: »
    I mean, even if a decision system was really bad at diagnosis, it would still probably be an improvement over your average GP.

    I think from time to time me and my brothers have entertained this idea as an improvement to my dad's practice management software. Start data-mining the database for the historical symptoms and diagnoses, and then present the doctor with some relevant statistics on what someone's likely to have and some suggested follow-up questions.

    I mean, Yar is quite right - diagnostics is very much just weighing the probabilities against each other and ordering the appropriate tests. While it's true you couldn't trust a patient to interact directly with such a system (because people, and doctors, are all pretty bad at self-diagnosis), the doctor would essentially be serving as a cultural mediator (hell, it's what they do now) for such a system - translating natural language and reading body language.

    We already do some things like this: the MIMS drug database is just a big database of drug interactions. You enter the prescriptions you want to make and it tells you if there are any interactions.
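
    A minimal sketch of that kind of pairwise interaction check in Python - the drug names and the interaction table below are invented stand-ins, not actual MIMS data:

        from itertools import combinations

        # Invented stand-in for a real interactions table.
        INTERACTIONS = {
            frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
        }

        def check_prescriptions(drugs):
            # Look up every pair of proposed drugs in the interactions table.
            warnings = []
            for pair in combinations(drugs, 2):
                note = INTERACTIONS.get(frozenset(pair))
                if note:
                    warnings.append((tuple(pair), note))
            return warnings

        print(check_prescriptions(["warfarin", "aspirin", "paracetamol"]))
        # -> [(('warfarin', 'aspirin'), 'increased bleeding risk')]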

    Also one other note: there's definitely a cross-purpose discussion happening. Artificial Intelligence as I used it in the OP doesn't always mean "machine sentience" - it can refer to expert systems, neural networks, that sort of thing.

    electricitylikesme on
  • HamHamJ Registered User regular
    edited March 2010
    electricitylikesme wrote: »
    Also one other note: there's definitely a cross-purpose discussion happening. Artificial Intelligence as I used it in the OP doesn't always mean "machine sentience" - it can refer to expert systems, neural networks, that sort of thing.

    All of which is just differences in approach and capabilities, not a difference in kind.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • nescientist Registered User regular
    edited March 2010
    HamHamJ wrote: »
    And an AI would be very good at thinking.

    An AI would be astonishingly good at processing data, but I suspect it would actually be counterproductive to be so focused on anthropomorphizing our machines that we would actually design them to do anything like human "thinking."

    Like, the idea of a medical AI actually talking to a patient in order to establish a medical history is becoming more bizarre to me the more I think about it. That data should be fucking programmed into the chip in the patient's ass, not elicited from an (extremely error-prone, even if we assume the AI is perfect; it's the talking human I'm worried about) verbal statement from grey-matter memory. It's like autoclaving your lancet before bleeding your patient to re-balance his humours; yes, that's some very nice technology you've got there, but you're doing it wrong.

    nescientist on
  • HamHamJ Registered User regular
    edited March 2010
    HamHamJ wrote: »
    And an AI would be very good at thinking.

    An AI would be astonishingly good at processing data, but I suspect it would actually be counterproductive to be so focused on anthropomorphizing our machines that we would actually design them to do anything like human "thinking."

    Like, the idea of a medical AI actually talking to a patient in order to establish a medical history is becoming more bizarre to me the more I think about it. That data should be fucking programmed into the chip in the patient's ass, not elicited from an (extremely error-prone, even if we assume the AI is perfect; it's the talking human I'm worried about) verbal statement from grey-matter memory. It's like autoclaving your lancet before bleeding your patient to re-balance his humours; yes, that's some very nice technology you've got there, but you're doing it wrong.

    Well, we are working on getting all that stuff digitized and networked right now anyway, I believe. But if records didn't exist for whatever reason (internet is down, patient is an illegal immigrant, whatever) the AI could get that information just as well as anyone.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • electricitylikesme Registered User regular
    edited March 2010
    HamHamJ wrote: »
    HamHamJ wrote: »
    And an AI would be very good at thinking.

    An AI would be astonishingly good at processing data, but I suspect it would actually be counterproductive to be so focused on anthropomorphizing our machines that we would actually design them to do anything like human "thinking."

    Like, the idea of a medical AI actually talking to a patient in order to establish a medical history is becoming more bizarre to me the more I think about it. That data should be fucking programmed into the chip in the patient's ass, not elicited from an (extremely error-prone, even if we assume the AI is perfect; it's the talking human I'm worried about) verbal statement from grey-matter memory. It's like autoclaving your lancet before bleeding your patient to re-balance his humours; yes, that's some very nice technology you've got there, but you're doing it wrong.

    Well, we are working on getting all that stuff digitized and networked right now anyway, I believe. But if records didn't exist for whatever reason (internet is down, patient is an illegal immigrant, whatever) the AI could get that information just as well as anyone.

    Yeah but then you really do run into the issue that it's going to be a lot cheaper to pay a nurse to do that, especially since you still are going to need them. It's focusing on the wrong problem.

    electricitylikesme on
  • Sanguinius Registered User regular
    edited March 2010
    I haven't bothered to log in and post for ages, though I've been lurking.

    However when people started bringing up how AIs interact with patients and so forth, I felt a bit obliged to say something as I'm working in this field, like right now.

    Currently, I'm working on an e-health system for the ADF. The primary function of the system is to reduce all of the sorts of errors that you guys have been discussing - doctor error, incorrect drug prescription, diagnosis issues and so forth.

    Now we've developed a fairly sophisticated system already - one where a doctor is essentially inputting the symptoms that the patient has described and augmenting that with their own information, such as whether they feel that the patient is lying and so forth. In some cases, the doctor can also lend more weight to certain symptoms and less to others, such as the description of pain and whatnot.

    It is massively more effective than a doctor giving a normal examination, because the database that it is using is perfect: the medicines and their interactions are perfectly defined, and so forth.

    And this is just a dumb bunch of databases interacting with each other, with a doctor pressing the buttons here and there. Quite easily, you could get a patient to diagnose themselves using the same information - in fact, one of the benefits of the new system is that it will reduce the number of GP visits, because patients can self-diagnose and self-help to a larger degree.
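
    The "doctor lends weight to certain symptoms" idea can be sketched in a few lines of Python - the conditions, symptoms, and every number below are invented for illustration, not taken from the actual system:

        # The clinician assigns a weight to each reported symptom; candidate
        # conditions are ranked by the weighted sum of how strongly each
        # symptom suggests that condition.
        CONDITION_EVIDENCE = {
            "flu":    {"fever": 0.8, "cough": 0.7, "joint_pain": 0.4},
            "dengue": {"fever": 0.9, "rash": 0.8, "joint_pain": 0.9},
            "cold":   {"cough": 0.8, "fever": 0.2},
        }

        def rank_conditions(weighted_symptoms):
            scores = {
                condition: sum(weight * evidence.get(symptom, 0.0)
                               for symptom, weight in weighted_symptoms.items())
                for condition, evidence in CONDITION_EVIDENCE.items()
            }
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        # The doctor trusts the fever report fully but discounts the vague pain report.
        print(rank_conditions({"fever": 1.0, "joint_pain": 0.3, "rash": 0.9}))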

    Now, an AI would be able to perform this role much better than a human doctor. They don't get tired, they don't make mistakes in calling up information. They don't revert to their 'gut' when they are unsure of a diagnosis. They don't need to refer someone to a specialist.

    There are a set number of things that can go wrong with us. Whilst we don't know what all of them are and what causes them, an AI would be much, much better at cutting them down than a human.

    Hell, for an example of how just a simple database and decision tree - with a positive feedback loop programmed into it - works well, just check out some of those sites that play 20 questions with you and nail the most obscure stuff out there.
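
    The core of those 20-questions sites can be sketched in a handful of lines of Python - the candidate table here is a toy, and in the real sites the feedback loop amounts to adding a new row (and a distinguishing question) whenever the engine guesses wrong:

        # Toy 20-questions engine over a table of candidates and yes/no attributes.
        candidates = {
            "dog":    {"alive": True,  "bigger_than_breadbox": True},
            "ant":    {"alive": True,  "bigger_than_breadbox": False},
            "fridge": {"alive": False, "bigger_than_breadbox": True},
        }

        def best_question(pool):
            # Ask the attribute whose yes/no split over the pool is most even.
            attrs = {a for props in pool.values() for a in props}
            return min(attrs, key=lambda a: abs(
                2 * sum(props.get(a, False) for props in pool.values()) - len(pool)))

        def play(answer_fn):
            pool = dict(candidates)
            while len(pool) > 1:
                question = best_question(pool)
                answer = answer_fn(question)
                pool = {name: props for name, props in pool.items()
                        if props.get(question, False) == answer}
            return next(iter(pool), None)

        # Simulate a user thinking of "fridge".
        target = candidates["fridge"]
        print(play(lambda q: target.get(q, False)))  # -> fridge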

    Sanguinius on
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    HamHamJ wrote: »
    For difficult cases, they have to rely on intuition about the patient.

    In other words, they guess. Which an AI can do just as well[citation needed].

    You'd be advised not to equate intuition with guessing. It's significantly more complicated than that. There is good evidence that we are able to intuit things in advance of conscious knowledge. This isn't the same as a random guess.

    For example, people can often tell when they are about to solve a complicated problem, before they solve it and before they really know what that solution is going to be.

    Don't underestimate a human's ability to intuit, and don't dismiss intuition as being the same as a random guess.

    Also, why on earth would you concede a nurse rather than a doctor, other than not wanting to admit a doctor is still useful? How on earth would a nurse be able to double-check a patient without the same level of expertise? She wouldn't know if it was wrong or not!

    @Sanguinius I think your system is peachy keen and is basically what I think should happen. Use them as a tool, a more handy database, capable of making decisions, with the doctor still seeing the patient and codifying the patient's vague complaints into something the AI can handle. Best of both worlds. I'm completely comfortable with that.

    I'm also comfortable with the self-diagnosis thing, but only if non-minor symptoms are referred to a GP (with this system in place as well). The reason is simple: it has to assume the uneducated patient a) is not lying, b) is describing all of their symptoms, and c) is capable of describing their symptoms accurately. But yeah, that'd be good for the kinds of things your pharmacist could easily sort out for you.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • Neli Registered User regular
    edited March 2010
    We're nowhere close to "real" AI yet, are we? The things I've seen indicate that we have a good 20-50+ years before we have anything resembling "real" AI. This is assuming we ever get there. Currently we're still solving issues with our own minds, let alone creating new ones digitally.

    Neli on
    I have stared into Satan's asshole, and it fucking winked at me.
  • HamHamJ Registered User regular
    edited March 2010
    Morninglord wrote: »
    You'd be advised not to equate intuition with guessing. It's significantly more complicated than that. There is good evidence that we are able to intuit things in advance of conscious knowledge. This isn't the same as a random guess.

    In which case what you mean is subconscious data processing. Guess what the AI would be better than you at? Data processing!
    Also, why on earth would you concede a nurse rather than a doctor, other than not wanting to admit a doctor is still useful? How on earth would a nurse be able to double-check a patient without the same level of expertise? She wouldn't know if it was wrong or not!

    The nurse doesn't need to know that. The AI tells her what to ask, and she interprets the patient's answers.

    And this is only because humans are born optimized for social interaction and picking up non-verbal cues in conversation, while programming a computer to do the same thing is annoying and difficult.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Yar Registered User regular
    edited March 2010
    Yeah, again to make clear: some people are using the term "AI" to mean a Turing-test AI. Meaning a programmed/artificial/computerized being whose intelligence is indistinguishable in all manners from a human's. So, for example, some people are saying that an AI would be just as likely to be affected by arrogance or error as a real human... possibly, if you're talking about a Turing-test AI. To be indistinguishable from a human, it would need emotions and human fallibility programmed in, or at least the capability of faking them to a degree indistinguishable to a human.

    The OP appeared to be talking about something that was either almost Turing-test, or perhaps the latter, a super-sentient that could fake a perfect score on a Turing test if it wanted to, but then also abandon the pretenses of emotion and error when the task at hand was not to pass a Turing test.

    But I believe none of that matters, and I don't think it is the more accurate definition of "AI." AI is just any system that can do anything we consider resembling human thought of any kind. It can be rather loosely applied to all kinds of systems that do things we'd otherwise rely on our own minds to do. And I think that's really what we ought to be discussing here, so we can avoid debating how "human" the AI is and instead focus on how well it would do whatever we are supposing that we wanted it to do.

    Back to the doctor discussion: ML, it is self-evident. I respect the call for scientific evidence of their success; that is certainly a necessary part of the process. But it requires no more than a basic rational understanding of diagnosis and of expert/decision systems, to successfully reason that the system could do this far better than a human could. It's as if I claimed that a computer could calculate large prime numbers faster than a human could, and you discounted such a claim until scientific proof was provided. A basic understanding of what we're discussing ought to make the answer obvious, and your challenge seems spurious, despite the nevertheless obvious value of scientific evidence to substantiate anything we might decide.

    As for why they aren't more widely in use? My opinion is because health care providers and particularly doctors are generally territorial, arrogant, defensive jerks who wield an unbalanced and undeserved amount of power over their flock. A large portion of patients are seeking from their doctor little more than the spiritual/imaginary confidence that ancient humans sought from witch doctors. And we have an irrational attachment to this entire process, a mantra that "nothing comes between me and my doctor," that "the doctor decides what's best," the need to have your own personal preferred doctor who, more than anything, meets an emotional need regardless of how well they meet a quality of health need, and so on. We insist on this, and most of us would likely vomit at the idea of letting a computer be our PCP, despite the very clear evidence that the super-private, holy, unquestionable relationship between you and your doctor is desperately in need of regulation, governance, oversight, standardization, and automation.

    Yar on
  • HamHamJ Registered User regular
    edited March 2010
    Frankly I question the validity of the Turing test in general.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • durandal4532 Registered User regular
    edited March 2010
    I have already stated that I think automation and standardization are exactly the least useful things for the medical profession so.


    See, this discussion is why I like having the AI/VI divide.

    To me, "Artificial Intelligence" means a sentient created by artificial means. That's where the "oh, they wouldn't be arrogant/angry/bored whatever" is stupid. I mean, maybe? I don't know, but you can't say "no no, this sentient being we create won't have an Arrogance chip". We don't know enough about sentience to speak about that with even a little suspension of disbelief.

    Realistically, this kind of AI will, if possible, be a long time coming and honestly probably not incredibly useful beyond an insight into our own nature. This is what you research when you're wondering what consciousness is, and trying to make a realistic model of it.


    VI to me means "really good at pretending to be conscious, but definitely not at all conscious". So it can hold a conversation with you, unlike your dog, but your dog is still the conscious one. A VI can have a "programmed personality" or goals forced on it, because it doesn't have a personality or goals. Those are just convenient metaphors for the actual utility. Like when you say your computer "remembers" XYZ. A VI, essentially, is an extension of our own mental capacity.

    When SantaBot tricks someone, it isn't because SantaBot is clever, it's because the designer was clever. When that 20 questions program guesses your object, it's not because it necessarily gathers information in any way similar to how we do, or even that it "gathers information", it's a clever algorithm for solving 20 questions that is written on a computer instead of paper because that's a faster way to show it off.


    I guess no one has to use different words, but I feel like it would end a lot of the semantic debates, which are so boring.


    Edit: Oh no you fucking didn't. The Turing Test is an elegant answer to a difficult problem. It isn't pass/fail, it's conclusive/inconclusive, and it isn't something you're supposed to sic an AIMbot on.

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
  • HamHamJ Registered User regular
    edited March 2010
    durandal4532 wrote: »
    I have already stated that I think automation and standardization are exactly the least useful things for the medical profession so.


    See, this discussion is why I like having the AI/VI divide.

    To me, "Artificial Intelligence" means a sentient created by artificial means. That's where the "oh, they wouldn't be arrogant/angry/bored whatever" is stupid. I mean, maybe? I don't know, but you can't say "no no, this sentient being we create won't have an Arrogance chip". We don't know enough about sentience to speak about that with even a little suspension of disbelief.

    Realistically, this kind of AI will, if possible, be a long time coming and honestly probably not incredibly useful beyond an insight into our own nature. This is what you research when you're wondering what consciousness is, and trying to make a realistic model of it.


    VI to me means "really good at pretending to be conscious, but definitely not at all conscious". So it can hold a conversation with you, unlike your dog, but your dog is still the conscious one. A VI can have a "programmed personality" or goals forced on it, because it doesn't have a personality or goals. Those are just convenient metaphors for the actual utility. Like when you say your computer "remembers" XYZ. A VI, essentially, is an extension of our own mental capacity.

    When SantaBot tricks someone, it isn't because SantaBot is clever, it's because the designer was clever. When that 20 questions program guesses your object, it's not because it necessarily gathers information in any way similar to how we do, or even that it "gathers information", it's a clever algorithm for solving 20 questions that is written on a computer instead of paper because that's a faster way to show it off.


    I guess no one has to use different words, but I feel like it would end a lot of the semantic debates, which are so boring.

    I disagree that there is any real difference between these two things. The perceived difference is an illusion.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • durandal4532 Registered User regular
    edited March 2010
    HamHamJ wrote: »
    I disagree that there is any real difference between these two things. The perceived difference is an illusion.
    You honestly don't think there's a difference between 20 Questions Bot and a person playing 20 Questions? You think consciousness is a solved problem?

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
  • HamHamJ Registered User regular
    edited March 2010
    HamHamJ wrote: »
    I disagree that there is any real difference between these two things. The perceived difference is an illusion.
    You honestly don't think there's a difference between 20 Questions Bot and a person playing 20 Questions?

    Not fundamentally. The latter is just a more advanced version of the former with a bunch of other unrelated features.

    EDIT: Most telling is probably your dog example. My roomba is probably about as intelligent as many insects and microbes. The latter are not any more "conscious".

    The only real advantage the dog has is the ability to deal with novel situations.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • nescientist Registered User regular
    edited March 2010
    I think it's important to understand that there are some tasks that even current computers can wildly outpace grey matter in, but the manner in which brains function is fundamentally unlike the manner in which computers function, and it seems to me that in order to make a computer behave like a brain you would need to dedicate more resources to the problem than the end product justifies. Unless you're a mad scientist trying to make a point about dualism/monism, I guess.

    nescientist on
  • durandal4532 Registered User regular
    edited March 2010
    HamHamJ wrote: »
    HamHamJ wrote: »
    I disagree that there is any real difference between these two things. The perceived difference is an illusion.
    You honestly don't think there's a difference between 20 Questions Bot and a person playing 20 Questions?

    Not fundamentally. The latter is just a more advanced version of the former with a bunch of other unrelated features.

    EDIT: Most telling is probably your dog example. My roomba is probably about as intelligent as many insects and microbes. The latter are not any more "conscious".

    The only real advantage the dog has is the ability to deal with novel situations.

    No, see, that's not at all settled.


    That's a hypothesis weakly supported by current Information Processing models of consciousness, but it is in no way something that can be stated unequivocally because we have no real idea why we are conscious or how our brains work or even if the brain is the most salient thing to study when attempting to understand why we are conscious.


    That said, Steven Pinker's "How the Mind Works" is, like I said, a great encapsulation of the Information Processing point of view, though even he doesn't take it as far as stating that "intelligence" as displayed by computer programs is anywhere on the same line as conscious activity.

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
  • HamHamJ Registered User regular
    edited March 2010
    HamHamJ wrote: »
    HamHamJ wrote: »
    I disagree that there is any real difference between these two things. The perceived difference is an illusion.
    You honestly don't think there's a difference between 20 Questions Bot and a person playing 20 Questions?

    Not fundamentally. The latter is just a more advanced version of the former with a bunch of other unrelated features.

    EDIT: Most telling is probably your dog example. My roomba is probably about as intelligent as many insects and microbes. The latter are not any more "conscious".

    The only real advantage the dog has is the ability to deal with novel situations.

    No, see, that's not at all settled.


    That's a hypothesis weakly supported by current Information Processing models of consciousness, but it is in no way something that can be stated unequivocally because we have no real idea why we are conscious or how our brains work or even if the brain is the most salient thing to study when attempting to understand why we are conscious.


    That said, Steven Pinker's "How the Mind Works" is, like I said, a great encapsulation of the Information Processing point of view, though even he doesn't take it as far as stating that "intelligence" as displayed by computer programs is anywhere on the same line as conscious activity.

    It's a position that is superior to the alternatives because it doesn't involve magic and skyhooks.

    If we're providing references:

    Consciousness Explained by Daniel Dennett.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • durandal4532 Registered User regular
    edited March 2010
    HamHamJ wrote: »
    It's a position that is superior to the alternatives because it doesn't involve magic and skyhooks.

    If we're providing references:

    Consciousness Explained by Daniel Dennett.
    Man, guess what is on the book jacket of the book that supports my opinion. (The Alva Noe one, I find his argument more convincing).

    "A provocative and insightful book that will force experts and students alike to reconsider their grasp of the current orthodoxy. Those of us who disagree with some of its main conclusions will have our work cut out for us" - Daniel C. Dennett, Tufts University whaaaat.


    Seriously though, the alternate opinions predating the Information Processing model are pretty bad. Searle, Penrose and the Behaviorists were pretty dumb. But there are new ideas! And I think they make a lot of sense, and hopefully this will lead to new and awesome tests of those ideas that will show they make more sense as a model.


    Basically my issue with Info Processing is that it makes an issue out of things that shouldn't be an issue. Easiest example: "flipping" the image that comes into your eye. Why would that happen? There's not a little photo-processing center that needs to then print something out and show it to us.

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
  • HamHamJ Registered User regular
    edited March 2010
    HamHamJ wrote: »
    It's a position that is superior to the alternatives because it doesn't involve magic and skyhooks.

    If we're providing references:

    Consciousness Explained by Daniel Dennett.
    Man, guess what is on the book jacket of the book that supports my opinion. (The Alva Noe one, I find his argument more convincing).

    "A provocative and insightful book that will force experts and students alike to reconsider their grasp of the current orthodoxy. Those of us who disagree with some of its main conclusions will have our work cut out for us" - Daniel C. Dennett, Tufts University whaaaat.


    Seriously though, the alternate opinions predating the Information Processing model are pretty bad. Searle, Penrose and the Behaviorists were pretty dumb. But there are new ideas! And I think they make a lot of sense, and hopefully this will lead to new and awesome tests of those ideas that will show they make more sense as a model.


    Basically my issue with Info Processing is that it makes an issue out of things that shouldn't be an issue. Easiest example: "flipping" the image that comes into your eye. Why would that happen? There's not a little photo-processing center that needs to then print something out and show it to us.

    I'm not sure what the issue here is exactly.

    Are you disputing the claim that my roomba is just as conscious as a microbe or an insect?

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Mr_Rose 83 Blue Ridge Protects the Holy Registered User regular
    edited March 2010
    HamHamJ wrote: »
    HamHamJ wrote: »
    It's a position that is superior to the alternatives because it doesn't involve magic and skyhooks.

    If we're providing references:

    Consciousness Explained by Daniel Dennett.
    Man, guess what is on the book jacket of the book that supports my opinion. (The Alva Noe one, I find his argument more convincing).

    "A provocative and insightful book that will force experts and students alike to reconsider their grasp of the current orthodoxy. Those of us who disagree with some of its main conclusions will have our work cut out for us" - Daniel C. Dennett, Tufts University whaaaat.


    Seriously though, the alternate opinions predating the Information Processing model are pretty bad. Searle, Penrose and the Behaviorists were pretty dumb. But there are new ideas! And I think they make a lot of sense, and hopefully this will lead to new and awesome tests of those ideas that will show they make more sense as a model.


    Basically my issue with Info Processing is that it makes an issue out of things that shouldn't be an issue. Easiest example: "flipping" the image that comes into your eye. Why would that happen? There's not a little photo-processing center that needs to then print something out and show it to us.

    I'm not sure what the issue here is exactly.

    Are you disputing the claim that my roomba is just as conscious as a microbe or an insect?

    I don't think so; he appears to be disputing your claim that your pet hypothesis is the most supported one by citing a quote, from the author of the book you claimed explained it all, saying that durandal's pet hypothesis is really good and that those who disagree with it (including said author) will "have [their] work cut out for [them]."

    Mr_Rose on
    ...because dragons are AWESOME! That's why.
    Nintendo Network ID: AzraelRose
    DropBox invite link - get 500MB extra free.
  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited March 2010
    HamHamJ wrote: »
    Frankly I question the validity of the Turing test in general.

    Well, we had to agree on something eventually.

    It doesn't seem terribly controversial to say that we could, hypothetically, one day in the future, make awesome robot doctors. But the principal obstacle seems to be that at the moment (and for the conceivable future) we don't actually understand medical best practice ourselves--that is to say, whatever our best doctors are doing when they do good medicine, we couldn't operationalize it for a machine to carry out.

    You can know how to ride a bike without understanding how you're riding the bike. The classic example is that most people, when asked, will say that when you start to tip over you should turn your bike wheel away from the direction of fall, despite the fact that the opposite is what they actually do in practice (and what happens to be actually true).

    And on the scale of things, robot doctors are much more plausible than robot presidents. We are much closer to understanding best practice in medicine than we are in politics.

    MrMister on
  • durandal4532 Registered User regular
    edited March 2010
    HamHamJ wrote: »
    I'm not sure what the issue here is exactly.

    Are you disputing the claim that my roomba is just as conscious as a microbe or an insect?
    I'd say that if a Roomba is as conscious as an insect, the reason may not be that the Information Processing model is accurate. Though robots are more likely to be conscious, in my opinion, as they're a closer model of human consciousness than an attempt to make a Brain in a Jar.

    Also I'm doing that annoying thing that Mr_Rose pointed out, that was totally part of it.

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.