
AI Government


  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited March 2010
    As a side note, the Cog-Sci/Phil Mind guy in my class is totally into Alva Noe.

    He seems cool.

    MrMister on
  • HamHamJ Registered User regular
    edited March 2010
    HamHamJ wrote: »
    I'm not sure what the issue here is exactly.

    Are you disputing the claim that my roomba is just as conscious as a microbe or an insect?
    I'd say if a Roomba is as conscious as an insect, the reason for it may not be because the Information Processing model is accurate. Though robots are more likely to be something conscious in my opinion, as they're a closer model of the human consciousness than an attempt to make a Brain in a Jar.

    Also I'm doing that annoying thing that Mr_Rose pointed out, that was totally part of it.

    But is it or isn't it?

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • durandal4532 Registered User regular
    edited March 2010
    HamHamJ wrote: »
    HamHamJ wrote: »
    I'm not sure what the issue here is exactly.

    Are you disputing the claim that my roomba is just as conscious as a microbe or an insect?
    I'd say if a Roomba is as conscious as an insect, the reason for it may not be because the Information Processing model is accurate. Though robots are more likely to be something conscious in my opinion, as they're a closer model of the human consciousness than an attempt to make a Brain in a Jar.

    Also I'm doing that annoying thing that Mr_Rose pointed out, that was totally part of it.

    But is it or isn't it?

    I don't know.

    Do you know?

    Is there an experiment we can do to prove it?

    Like um, this is kind of why I'm interested. If I knew the answers I wouldn't have any need to do science.

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
  • HamHamJ Registered User regular
    edited March 2010
    HamHamJ wrote: »
    HamHamJ wrote: »
    I'm not sure what the issue here is exactly.

    Are you disputing the claim that my roomba is just as conscious as a microbe or an insect?
    I'd say if a Roomba is as conscious as an insect, the reason for it may not be because the Information Processing model is accurate. Though robots are more likely to be something conscious in my opinion, as they're a closer model of the human consciousness than an attempt to make a Brain in a Jar.

    Also I'm doing that annoying thing that Mr_Rose pointed out, that was totally part of it.

    But is it or isn't it?

    I don't know.

    Do you know?

    Is there an experiment we can do to prove it?

    Like um, this is kind of why I'm interested. If I knew the answers I wouldn't have any need to do science.

    I know, in that I have a justified belief that it is, because there is no logically coherent explanation of how it could not be.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • durandal4532 Registered User regular
    edited March 2010
    That's weird and seems super boring. I'd rather know if/how/why and get a parsimonious explanation that covers all of them. "If" is pretty settled, I guess? But I figure attempting to disprove it is a good thing to do so that we understand why it's true? Because "no explanation of how it could not be" covers all things ever described by science ever. I dunno though. Still doesn't really remove the snag that the Roomba being as conscious as a microbe does absolutely nothing to validate Dennett/Pinker's theories.

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    HamHamJ wrote: »
    You'd be advised not to equate intuition to guessing. It's significantly more complicated than that. There is good evidence that we are able to intuit about things in advance of conscious knowledge. This isn't the same as a random guess.

    In which case what you mean is (my random theory which I will treat as fact). Guess what the AI would be better than you at? (my random theory!)

    No, I don't mean that, thanks.

    I know you think information processing models are the big cheese and the whole answer, but this doesn't make it true.
    Yar wrote: »
    Back to the doctor discussion: ML, it is self-evident. I respect the call for scientific evidence of their success; that is certainly a necessary part of the process. But it requires no more than a basic rational understanding of diagnosis and of expert/decision systems, to successfully reason that the system could do this far better than a human could. It's as if I claimed that a computer could calculate large prime numbers faster than a human could, and you discounted such a claim until scientific proof was provided. A basic understanding of what we're discussing ought to make the answer obvious, and your challenge seems spurious, despite the nevertheless obvious value of scientific evidence to substantiate anything we might decide.

    As for why they aren't more widely in use? My opinion is because health care providers and particularly doctors are generally territorial, arrogant, defensive jerks who wield an unbalanced and undeserved amount of power over their flock. A large portion of patients are seeking from their doctor little more than the spiritual/imaginary confidence that ancient humans sought from witch doctors. And we have an irrational attachment to this entire process, a mantra that "nothing comes between me and my doctor," that "the doctor decides what's best," the need to have your own personal preferred doctor who, more than anything, meets an emotional need regardless of how well they meet a quality of health need, and so on. We insist on this, and most of us would likely vomit at the idea of letting a computer be our PCP, despite the very clear evidence that the super-private, holy, unquestionable relationship between you and your doctor is desperately in need of regulation, governance, oversight, standardization, and automation.

    This is not my argument. I've already said I believe that the AI would be good at looking up decision trees or mathematics. I've already declared how AIs could be a useful tool in addition to a GP. I've disputed the gray areas of medicine, which aren't clear, aren't easy to code, aren't easy to figure out, where there is no ultimately correct answer and everything is uncertain. The replies I have been given are "AIs will do this better", and that requires citations.

    If you say "AIs will think better than us", that requires citations. If you say "AIs can deal with uncertain data better than us", that requires citations. If you say "AIs will calculate better than us", that does not require a citation. Thinking is not calculation. We are not computers. You show a fundamental ignorance of the structure of the human brain if you think calculation = thinking. We are not logic engines!

    It's nice that you believe we are basically big computers, but you really do need to bring in some citations to support the generalisations you guys are making.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • HamHamJ Registered User regular
    edited March 2010
    HamHamJ wrote: »
    You'd be advised not to equate intuition to guessing. It's significantly more complicated than that. There is good evidence that we are able to intuit about things in advance of conscious knowledge. This isn't the same as a random guess.

    In which case what you mean is (my random theory which I will treat as fact). Guess what the AI would be better than you at? (my random theory!)

    No, I don't mean that, thanks.

    I know you think information processing models are the big cheese and the whole answer, but this doesn't make it true.

    So if they aren't guessing, and they aren't performing logical operations of some kind on available data, what are they doing? Using psychic powers to see the future?

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    HamHamJ wrote: »
    HamHamJ wrote: »
    You'd be advised not to equate intuition to guessing. It's significantly more complicated than that. There is good evidence that we are able to intuit about things in advance of conscious knowledge. This isn't the same as a random guess.

    In which case what you mean is (my random theory which I will treat as fact). Guess what the AI would be better than you at? (my random theory!)

    No, I don't mean that, thanks.

    I know you think information processing models are the big cheese and the whole answer, but this doesn't make it true.

    So if they aren't guessing, and they aren't performing logical operations of some kind on available data, what are they doing? Using psychic powers to see the future?

    Well, that's the thing. You are talking as if it's logical operations and that it's already understood.

    There are various theories behind intuition: pattern-matching theories, "chunking" theories, template theories. All of them involve large-scale parallel processing of a kind no computer can currently do, and which likely doesn't exclusively use logical operations as much as it does rapid association and the unique neurostrata of human memory. Human memory is not an HDD in your head, so no, a computer does not necessarily do all functions of human memory better.
    But as to which one is more correct than the other, I wouldn't know myself. It's not my field and the debate still rages.

    There are a few points to this.
    Firstly, it's likely the result of our brain's architecture, specifically its ability to massively associate. Computers are not designed like human brains, and it's likely that setting up a computer that did work like a human brain would be pretty ineffective: you'd have to teach it information like a human (associatively) and it would probably have the same error rate as a human (due to how associative processing isn't as precise as computation).
    Computers are perfect at memory because their structure is fundamentally different from ours. They can take a long time to get to that binary data, but once they have it they can't mess up.

    The second point is that even if you said "blow it to hell, we could potentially make it like that so I'm still right!", to approximate intuition you would need so much more effort, time, and energy to create the system, assuming it is ever really understood, that you'd be better off just employing a doctor anyway, given that they possess a brain that can already do it. You get the computer to do the decision-tree analysis (which it is excellent at) while the doctor does the large-scale parallel processing (which he is excellent at).

    The third point is that Yar originally started this by saying you could replace doctors with AIs now. So "in the future" isn't really a card to be played. I'm arguing against complete replacement now, so it'd be best to deal with the technology we have now. And there isn't a way to approximate the human brain now. AIs sure as hell can't do what you claim they can now or in the next few years.

    In a hundred years, who knows, but neither of us are able to predict that far ahead with any accuracy. It may be that the whole science underlying these arguments could change just as it has done in the past and likely will do in the future, in which case neither of us will be relevant.

    I certainly don't see any room for you to be declaring "facts" as if it's a foregone conclusion, especially given that the foundations you are starting with don't even remotely resemble solid ground.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • nescientist Registered User regular
    edited March 2010
    Wait which is it, Morninglord? Are you arguing that we can't replace our doctors with AIs right now (I agree, of course) or are you arguing that intuition represents a potential "unsolvable problem" for AI?

    I think it would be absolutely staggeringly retarded to try to design an associative memory to mimic our own and expect the AI plugged into it to exhibit any useful features, much less intuition. But I don't really see this as a problem for our future Machine Overlord, MD. Certainly intuition is a valuable part of human problem-solving strategies and therefore the current practice of medicine, but expert systems that exist already get by just fine without it.

    nescientist on
  • HamHamJ Registered User regular
    edited March 2010
    HamHamJ wrote: »
    HamHamJ wrote: »
    You'd be advised not to equate intuition to guessing. It's significantly more complicated than that. There is good evidence that we are able to intuit about things in advance of conscious knowledge. This isn't the same as a random guess.

    In which case what you mean is (my random theory which I will treat as fact). Guess what the AI would be better than you at? (my random theory!)

    No, I don't mean that, thanks.

    I know you think information processing models are the big cheese and the whole answer, but this doesn't make it true.

    So if they aren't guessing, and they aren't performing logical operations of some kind on available data, what are they doing? Using psychic powers to see the future?

    Well, that's the thing. You are talking as if it's logical operations and that it's already understood.

    There are various theories behind intuition: pattern-matching theories, "chunking" theories, template theories. All of them involve large-scale parallel processing of a kind no computer can currently do, and which likely doesn't exclusively use logical operations as much as it does rapid association and the unique neurostrata of human memory. Human memory is not an HDD in your head, so no, a computer does not necessarily do all functions of human memory better.
    But as to which one is more correct than the other, I wouldn't know myself. It's not my field and the debate still rages.

    Why does it matter? You are already conceding that some sort of conclusion is being drawn from data. The exact method of processing is not important, only the final result. And, again because of Turing completeness, any universal computing machine can simulate the same process to arrive at that conclusion, or even just use a different, more optimal process that, following the same rules, still arrives at that same conclusion.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    Wait which is it, Morninglord? Are you arguing that we can't replace our doctors with AIs right now (I agree, of course) or are you arguing that intuition represents a potential "unsolvable problem" for AI?

    I think it would be absolutely staggeringly retarded to try to design an associative memory to mimic our own and expect the AI plugged into it to exhibit any useful features, much less intuition. But I don't really see this as a problem for our future Machine Overlord, MD. Certainly intuition is a valuable part of human problem-solving strategies and therefore the current practice of medicine, but expert systems that exist already get by just fine without it.

    I don't think it's unsolvable, I think it's impractical to try to solve it when you could just hire a human, given the difficulty and problems with it.

    Why do you think it isn't a problem?
    Which expert systems are these, what are their success rates, how well do they deal with novel cases? I want to see that data for myself before I make the conclusions you have.
    HamHamJ wrote: »
    HamHamJ wrote: »
    HamHamJ wrote: »
    You'd be advised not to equate intuition to guessing. It's significantly more complicated than that. There is good evidence that we are able to intuit about things in advance of conscious knowledge. This isn't the same as a random guess.

    In which case what you mean is (my random theory which I will treat as fact). Guess what the AI would be better than you at? (my random theory!)

    No, I don't mean that, thanks.

    I know you think information processing models are the big cheese and the whole answer, but this doesn't make it true.

    So if they aren't guessing, and they aren't performing logical operations of some kind on available data, what are they doing? Using psychic powers to see the future?

    Well, that's the thing. You are talking as if it's logical operations and that it's already understood.

    There are various theories behind intuition: pattern-matching theories, "chunking" theories, template theories. All of them involve large-scale parallel processing of a kind no computer can currently do, and which likely doesn't exclusively use logical operations as much as it does rapid association and the unique neurostrata of human memory. Human memory is not an HDD in your head, so no, a computer does not necessarily do all functions of human memory better.
    But as to which one is more correct than the other, I wouldn't know myself. It's not my field and the debate still rages.

    Why does it matter? You are already conceding that some sort of conclusion is being drawn from data. The exact method of processing is not important, only the final result. And, again because of Turing completeness, any universal computing machine can simulate the same process to arrive at that conclusion, or even just use a different, more optimal process that, following the same rules, still arrives at that same conclusion.

    What on earth? Association is not computation. No computer can associate the way we do via long term potentiation. Once again you are claiming human brains are the same as computers. Turing completeness applies to computers (Universal Turing machines only apply to machines within their own class). It does not apply to humans. You have to establish that it can apply to humans before you can claim humans are in the same class as computers.

    You are heavily running afoul of the "human is a more complicated computer" fallacy.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • Apothe0sis Have you ever questioned the nature of your reality? Registered User regular
    edited March 2010
    HamHamJ wrote: »
    Honestly, the only problem I have with it is the idea of the AI being a scaled-up human intelligence.

    I'm wary of the conception that anything that has 300,000,000 simultaneous conversations and no sensory organs would be like a normal intellect but faster.

    It could fake it. Since it's all Turing complete, the AI could simulate human intelligence.

    The Turing test isn't like a board-certification exam. If something can fool a human into thinking it is also a human, then it is likely intelligent. If it can't fool someone into thinking it's a human, that doesn't rule out the idea that it's intelligent. If that were the case, people who sucked at holding conversations would be classified as non-human.
    I haven't seen anyone else pick this up, so...

    Turing completeness is unrelated to the Turing test, and HamHamJ's point was likewise unrelated to the Turing test.

    HamHamJ was saying that the human brain, and thus human personalities, are the product of a Turing machine. One of the properties of Turing-complete machines is that they can be programmed to exactly simulate any other Turing machine. It's essentially saying that it can run a bona fide human personality, not perform Turing-test-type shenanigans.
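
    To make "simulate any other Turing machine" concrete, here is a minimal sketch of a machine-as-data simulator in Python. The rule format and the tiny example machine are invented purely for illustration; nothing here comes from the thread itself.

        # Minimal sketch: a plain Python loop that runs any Turing machine given as data.
        # The rule format and the example machine below are invented for illustration only.

        def run_turing_machine(rules, tape, state="A", halt="HALT", max_steps=1000):
            """rules: {(state, symbol): (write, move, next_state)}, move is -1 or +1."""
            cells = dict(enumerate(tape))  # sparse tape, blank cells default to 0
            head = 0
            for _ in range(max_steps):
                if state == halt:
                    break
                symbol = cells.get(head, 0)
                write, move, state = rules[(state, symbol)]
                cells[head] = write
                head += move
            return [cells[i] for i in sorted(cells)], state

        # A tiny machine that walks right, flipping 0s to 1s, and halts at the first 1.
        example_rules = {
            ("A", 0): (1, +1, "A"),
            ("A", 1): (1, +1, "HALT"),
        }

        print(run_turing_machine(example_rules, [0, 0, 0, 1]))  # ([1, 1, 1, 1], 'HALT')

    The simulator itself knows nothing about the machine it runs; the machine is just data handed to it, which is the property being appealed to above.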

    Apothe0sis on
  • HamHamJ Registered User regular
    edited March 2010
    What on earth? Association is not computation.

    How do you figure? It has a data set it operates on, and it produces an output. How is that not computation? It's just linking together different data points in certain ways, weighing things based on some criteria, and so forth.
    You are heavily running afoul of the "human is a more complicated computer" fallacy.

    It's not a fallacy, it's the only reasonable position. There is no non-dualist account of consciousness that creates a fundamental difference between the two.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • nescientist Registered User regular
    edited March 2010
    HamHamJ wrote: »
    What on earth? Association is not computation.

    How do you figure? It has a data set it operates on, and it produces an output. How is that not computation? It's just linking together different data points in certain ways, weighing things based on some criteria, and so forth.
    You are heavily running afoul of the "human is a more complicated computer" fallacy.

    It's not a fallacy, it's the only reasonable position. There is no non-dualist account of consciousness that creates a fundamental difference between the two.

    While this seems to me to be a philosophically defensible position, I don't really see the sense in engaging Morninglord's inference argument at all. The entire point of a VI or AI or whatever you want to call an automated decision system is that it doesn't rely on imprecise, unpredictable methods like inference. The VI does not need to hear "it's like she has a hole in her head" in order to make the House-epiphany that the patient has a prion disease or whatever. Instead, we can reasonably say of a properly designed system that if symptoms consistent with prion disease are input, the system will hypothesize that there may be a prion disease and conduct tests accordingly. There is no engineering reason why such a machine needs anything other than a database of diseases and a database of symptoms and a database of probabilities - perhaps disease/organ/drug interactions, as well as dozens of other things related to medicine that I don't know about, but you get the idea - in order to fulfill its function. Fairly simple "fuzzy logic" routines exist that can zip through decisions such as are relevant in diagnostics, provided that they are fed robust and extensively tested databases.

    I didn't say it was going to be easy, and certainly there's a lot of work in this arena that is going to require diligent effort by human doctors, but eventually I think we'll see doctors gradually come to rely on an evolution of something akin to the PubMed database with an increasingly expert system-like user interface. From there, the jump to machines becoming doctors and doctors becoming replaceable parts is not really so far.
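
    As a rough illustration of the kind of probability-table lookup being described, here is a toy scoring routine. The diseases, symptoms, priors, and probabilities are all made up for the sketch and are not from any real expert system.

        # Toy sketch of a probability-table diagnostic lookup of the kind described above.
        # The disease names, symptoms, priors, and probabilities are all invented.

        DISEASES = {
            # disease: rough prior, plus P(symptom | disease)
            "flu":         {"prior": 0.10, "fever": 0.90, "cough": 0.80, "rash": 0.05},
            "measles":     {"prior": 0.01, "fever": 0.90, "cough": 0.50, "rash": 0.95},
            "common_cold": {"prior": 0.30, "fever": 0.20, "cough": 0.70, "rash": 0.01},
        }

        def rank_diagnoses(observed_symptoms):
            """Score each disease by prior * product of P(symptom | disease), then normalise."""
            scores = {}
            for disease, table in DISEASES.items():
                score = table["prior"]
                for symptom in observed_symptoms:
                    score *= table.get(symptom, 0.01)  # small default for unlisted symptoms
                scores[disease] = score
            total = sum(scores.values()) or 1.0
            return sorted(((d, s / total) for d, s in scores.items()),
                          key=lambda pair: pair[1], reverse=True)

        print(rank_diagnoses(["fever", "rash"]))
        # measles ranks first despite its low prior, because "rash" is far more likely under it

    The point is only that the ranking falls out of the tables; all the hard work is in building and validating the tables, which is the "diligent effort by human doctors" part.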

    nescientist on
  • durandal4532 Registered User regular
    edited March 2010
    Apothe0sis wrote: »
    HamHamJ wrote: »
    Honestly, the only problem I have with it is the idea of the AI being a scaled-up human intelligence.

    I'm wary of the conception that anything that has 300,000,000 simultaneous conversations and no sensory organs would be like a normal intellect but faster.

    It could fake it. Since it's all Turing complete, the AI could simulate human intelligence.

    The Turing test isn't like a board-certification exam. If something can fool a human into thinking it is also a human, then it is likely intelligent. If it can't fool someone into thinking it's a human, that doesn't rule out the idea that it's intelligent. If that were the case, people who sucked at holding conversations would be classified as non-human.
    I haven't seen anyone else pick this up, so...

    Turing completeness is unrelated to the Turing test, and HamHamJ's point was likewise unrelated to the Turing test.

    HamHamJ was saying that the human brain, and thus human personalities, are the product of a Turing machine. One of the properties of Turing-complete machines is that they can be programmed to exactly simulate any other Turing machine. It's essentially saying that it can run a bona fide human personality, not perform Turing-test-type shenanigans.

    Oh cool, right, I didn't notice that and it is much more sensible. Well, I suppose I don't have to say that saying the brain is a Turing machine seems totally out of line this early into the study of the brain/consciousness. There aren't nearly enough indicators. It's picking a horse as it comes out of the womb to win the race in 13 years.

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
  • nescientist Registered User regular
    edited March 2010
    It's picking a horse as it comes out of the womb to win the race in 13 years.

    bahahaha this is pretty much accurate. But none of the other colt-fetuses seem any healthier than that one to me, so I've left my money with the bookie for now.

    But I really don't understand the relevance of "true AI" to either the medical side-discussion or the original proposition about government. A decision engine, properly engineered, could serve admirably in either of these realms without exhibiting what a philosopher would call "consciousness." I suppose that if you want to truly remove the doctors and politicians from the process entirely, it could be argued that a "true AI" becomes necessary, but I think several decades before that becomes a reasonable option (assuming that it ever does) you could have something that drastically changes our idea of what a doctor or politician is, even if it doesn't actually replace them entirely (which it may yet, consciousness or no).

    nescientist on
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    HamHamJ wrote: »
    What on earth? Association is not computation.

    How do you figure? It has a data set it operates on, and it produces an output. How is that not computation? It's just linking together different data points in certain ways, weighing things based on some criteria, and so forth.
    You are heavily running afoul of the "human is a more complicated computer" fallacy.

    It's not a fallacy, it's the only reasonable position. There is no non-dualist account of consciousness that creates a fundamental difference between the two.

    Yeah, sure, because you are aware of all the discussions of consciousness that have ever been argued.

    It doesn't link or weigh in any methodological way. Association is classical conditioning. Stimulus response. When one neuron fires at the same time as a nearby neuron, they form a connection. It's not deliberate calculation, it's happenstance that works on a large scale. X triggers Y. Z happens to be nearby and is thus primed to fire next time X happens. Thus next time X happens, both Y and Z happen. That's association. That's long term potentiation. I'm not an expert on neuroscience, I've only done a 3rd year unit on it, but I haven't heard of any other way one neuron can effect a meaningful change on another neuron that could be used to represent information. So all processing appears to be the result of long term potentiation. Obviously if you expand this process to more than just a simple xyz you would end up with exponential runaway that would be very hard to model, but there is a thriving field out there that tries to use this process to model behavior in animals and humans.

    No computer runs like that. No computer runs like a human brain. We don't have a CPU analogue at all, we don't really have an HDD analogue (memory is decentralised), and certainly not a motherboard analogue (because the architecture of the brain isn't just transmitting information, it's also rapidly modifying itself and associating the information all at the same time).
    The only possible way you can say it's like a computer is if you take the computer analogy and simplify it to the point where it loses all resemblance to a computer.
    Thinking the brain is like a computer reduces your understanding of the brain, because it causes you to think the wrong way about how we process things and assume the wrong things about the brain's architecture.

    Amazing as it may be, there are ways of processing information on machines other than computers.
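
    For what it's worth, the "X triggers Y, Z is nearby and gets primed" story a few paragraphs up can be sketched in a handful of lines. The threshold and learning-rate numbers below are arbitrary, and this is only an illustration of the association idea, not a claim about how practical large-scale emulation would be.

        # Tiny Hebbian sketch of the "X triggers Y, Z is nearby and gets primed" story:
        # units that fire together have their links strengthened. Numbers are arbitrary.

        import itertools

        class HebbianNet:
            def __init__(self, names, threshold=0.5, learning_rate=0.3):
                self.weights = {pair: 0.0 for pair in itertools.permutations(names, 2)}
                self.threshold = threshold
                self.learning_rate = learning_rate

            def step(self, externally_driven):
                """One update: which units fire, given external input plus learned links."""
                fired = set(externally_driven)
                for (src, dst), weight in self.weights.items():
                    if src in externally_driven and weight >= self.threshold:
                        fired.add(dst)  # fires by association, not by any looked-up rule
                # Strengthen links between co-active units (long-term potentiation, roughly).
                for a, b in itertools.permutations(fired, 2):
                    self.weights[(a, b)] += self.learning_rate
                return fired

        net = HebbianNet(["X", "Y", "Z"])
        net.step({"X", "Y", "Z"})  # X, Y and Z happen to fire together a couple of times...
        net.step({"X", "Y", "Z"})
        print(net.step({"X"}))     # ...so now X alone also drags Y and Z along with it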

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • HamHamJ Registered User regular
    edited March 2010
    HamHamJ wrote: »
    What on earth? Association is not computation.

    How do you figure? It has a data set it operates on, and it produces an output. How is that not computation? It's just linking together different data points in certain ways, weighing things based on some criteria, and so forth.
    You are heavily running afoul of the "human is a more complicated computer" fallacy.

    It's not a fallacy, it's the only reasonable position. There is no non-dualist account of consciousness that creates a fundamental difference between the two.

    While this seems to me to be a philosophically defensible position, I don't really see the sense in engaging Morninglord's inference argument at all. The entire point of a VI or AI or whatever you want to call an automated decision system is that it doesn't rely on imprecise, unpredictable methods like inference. The VI does not need to hear "it's like she has a hole in her head" in order to make the House-epiphany that the patient has a prion disease or whatever. Instead, we can reasonably say of a properly designed system that if symptoms consistent with prion disease are input, the system will hypothesize that there may be a prion disease and conduct tests accordingly. There is no engineering reason why such a machine needs anything other than a database of diseases and a database of symptoms and a database of probabilities - perhaps disease/organ/drug interactions, as well as dozens of other things related to medicine that I don't know about, but you get the idea - in order to fulfill its function. Fairly simple "fuzzy logic" routines exist that can zip through decisions such as are relevant in diagnostics, provided that they are fed robust and extensively tested databases.

    I didn't say it was going to be easy, and certainly there's a lot of work in this arena that is going to require diligent effort by human doctors, but eventually I think we'll see doctors gradually come to rely on an evolution of something akin to the PubMed database with an increasingly expert system-like user interface. From there, the jump to machines becoming doctors and doctors becoming replaceable parts is not really so far.

    I agree with this. But I don't know any better, so:
    It doesn't link or weigh in any methodological way. Association is classical conditioning. Stimulus response. When one neuron fires at the same time as a nearby neuron, they form a connection. It's not deliberate calculation, it's happenstance that works on a large scale. X triggers Y. Z happens to be nearby and is thus primed to fire next time X happens. Thus next time X happens, both Y and Z happen. That's association. That's long term potentiation.

    And you think it would be impossible to write a software emulator that does that?

    Or just build something that does it mechanically.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited March 2010
    But I really don't understand the relevance of "true AI" to either the medical side-discussion or the original proposition about government.

    The relevance is that HamHam really likes to go after dualism.
    to be fair, it is kind of silly

    MrMister on
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    HamHamJ wrote: »
    HamHamJ wrote: »
    What on earth? Association is not computation.

    How do you figure? It has a data set it operates on, and it produces an output. How is that not computation? It's just linking together different data points in certain ways, weighing things based on some criteria, and so forth.
    You are heavily running afoul of the "human is a more complicated computer" fallacy.

    It's not a fallacy, it's the only reasonable position. There is no non-dualist account of consciousness that creates a fundamental difference between the two.

    While this seems to me to be a philosophically defensible position, I don't really see the sense in engaging Morninglord's inference argument at all. The entire point of a VI or AI or whatever you want to call an automated decision system is that it doesn't rely on imprecise, unpredictable methods like inference. The VI does not need to hear "it's like she has a hole in her head" in order to make the House-epiphany that the patient has a prion disease or whatever. Instead, we can reasonably say of a properly designed system that if symptoms consistent with prion disease are input, the system will hypothesize that there may be a prion disease and conduct tests accordingly. There is no engineering reason why such a machine needs anything other than a database of diseases and a database of symptoms and a database of probabilities - perhaps disease/organ/drug interactions, as well as dozens of other things related to medicine that I don't know about, but you get the idea - in order to fulfill its function. Fairly simple "fuzzy logic" routines exist that can zip through decisions such as are relevant in diagnostics, provided that they are fed robust and extensively tested databases.

    I didn't say it was going to be easy, and certainly there's a lot of work in this arena that is going to require diligent effort by human doctors, but eventually I think we'll see doctors gradually come to rely on an evolution of something akin to the PubMed database with an increasingly expert system-like user interface. From there, the jump to machines becoming doctors and doctors becoming replaceable parts is not really so far.

    I agree with this. But I don't know any better, so:
    It doesn't link or weigh in any methodological way. Association is classical conditioning. Stimulus response. When one neuron fires at the same time as a nearby neuron, they form a connection. It's not deliberate calculation, it's happenstance that works on a large scale. X triggers Y. Z happens to be nearby and is thus primed to fire next time X happens. Thus next time X happens, both Y and Z happen. That's association. That's long term potentiation.

    And you think it would be impossible to write a software emulator that does that?

    Or just build something that does it mechanically.

    I never said it couldn't be done. I said it wasn't practical and that no computer currently works like that. Emulation on a large scale would take a huge amount of processing power to do far less than a machine built to do it. It'd be cheaper to use a human. Building it mechanically would end up with the same problems of a human brain. Do you only pay attention to the words I direct at you? Pretty sure I answered this question when someone else asked it.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • HamHamJ Registered User regular
    edited March 2010
    I never said it couldn't be done. I said it wasn't practical and that no computer currently works like that. Emulation on a large scale would take a huge amount of processing power to do far less than a machine built to do it. It'd be cheaper to use a human. Building it mechanically would end up with the same problems of a human brain. Do you only pay attention to the words I direct at you? Pretty sure I answered this question when someone else asked it.

    You said:
    It's just a huge stretch and oversimplification to assume you can replace doctors.

    Are you now backtracking that you in fact can replace doctors, but you just might not want to because it's not cost effective?

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    HamHamJ wrote: »
    I never said it couldn't be done. I said it wasn't practical and that no computer currently works like that. Emulation on a large scale would take a huge amount of processing power to do far less than a machine built to do it. It'd be cheaper to use a human. Building it mechanically would end up with the same problems of a human brain. Do you only pay attention to the words I direct at you? Pretty sure I answered this question when someone else asked it.

    You said:
    It's just a huge stretch and oversimplification to assume you can replace doctors.

    Are you now backtracking that you in fact can replace doctors, but you just might not want to because it's not cost effective?

    I initially responded to this.

    From my point of view, you decided to defend this.
    Yar wrote: »
    I'm pretty sure that right now AIs could make better doctors than people do, and could probably solve a lot of problems with health care.

    If you weren't trying to defend this, then what were you trying to say?

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • HamHamJ Registered User regular
    edited March 2010
    HamHamJ wrote: »
    I never said it couldn't be done. I said it wasn't practical and that no computer currently works like that. Emulation on a large scale would take a huge amount of processing power to do far less than a machine built to do it. It'd be cheaper to use a human. Building it mechanically would end up with the same problems of a human brain. Do you only pay attention to the words I direct at you? Pretty sure I answered this question when someone else asked it.

    You said:
    It's just a huge stretch and oversimplification to assume you can replace doctors.

    Are you now backtracking that you in fact can replace doctors, but you just might not want to because it's not cost effective?

    I initially responded to this.

    From my point of view, you decided to defend this.
    Yar wrote: »
    I'm pretty sure that right now AIs could make better doctors than people do, and could probably solve a lot of problems with health care.

    If you weren't trying to defend this, then what were you trying to say?

    The post of yours that I initially quoted did not quote that or even seem to be referencing it.

    It seems we have simply been having a misunderstanding about the context then.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    HamHamJ wrote: »
    HamHamJ wrote: »
    I never said it couldn't be done. I said it wasn't practical and that no computer currently works like that. Emulation on a large scale would take a huge amount of processing power to do far less than a machine built to do it. It'd be cheaper to use a human. Building it mechanically would end up with the same problems of a human brain. Do you only pay attention to the words I direct at you? Pretty sure I answered this question when someone else asked it.

    You said:
    It's just a huge stretch and oversimplification to assume you can replace doctors.

    Are you now backtracking that you in fact can replace doctors, but you just might not want to because it's not cost effective?

    I initially responded to this.

    From my point of view, you decided to defend this.
    Yar wrote: »
    I'm pretty sure that right now AIs could make better doctors than people do, and could probably solve a lot of problems with health care.

    If you weren't trying to defend this, then what were you trying to say?

    The post of yours that I initially quoted did not quote that or even seem to be referencing it.

    It seems we have simply been having a misunderstanding about the context then.

    Yes I've been gunning on that statement ever since I started talking in here. When I read through the replies after it, you seemed to be arguing on that side. When I started talking again, I was addressing all the replies for that side.

    It seems like it was a misunderstanding on both our parts.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • HamHamJ Registered User regular
    edited March 2010
    Although, if Akinator can be better at 20 questions than a human being, I doubt medical diagnosis is some far off goal that would take decades to do.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    Are you trying to start shit a second time? :P

    I already told the guy who mentioned his own work on a diagnosis machine that it sounded like a good idea as he put it. Was a few pages back.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • HamHamJ Registered User regular
    edited March 2010
    That thing seriously fucking scares me. It just got Gary Gygax in 15.

    But seriously, I don't understand why you think diagnosing an illness (as just an end goal, not necessarily having anything to do with how human doctors do it currently, and stripping out the patient interaction part) is all that different from playing 20 questions.
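
    The 20-questions mechanic itself is easy to sketch: keep a table of candidates and always ask whichever question splits the remaining ones most evenly. Everything in the toy table below is invented, and it obviously dodges the free-text-symptom problem raised earlier.

        # Rough sketch of the 20-questions / Akinator mechanic: always ask whichever
        # question splits the remaining candidates most evenly. The table is made up.

        CANDIDATES = {
            "flu":       {"fever": True,  "rash": False, "chronic": False},
            "measles":   {"fever": True,  "rash": True,  "chronic": False},
            "arthritis": {"fever": False, "rash": False, "chronic": True},
            "eczema":    {"fever": False, "rash": True,  "chronic": True},
        }

        def best_question(remaining, asked):
            """Pick the attribute whose yes/no split of the remaining set is most even."""
            questions = {q for attrs in remaining.values() for q in attrs} - asked
            def imbalance(question):
                yes = sum(1 for attrs in remaining.values() if attrs[question])
                return abs(2 * yes - len(remaining))
            return min(questions, key=imbalance) if questions else None

        def play(answer_fn):
            remaining, asked = dict(CANDIDATES), set()
            while len(remaining) > 1:
                question = best_question(remaining, asked)
                if question is None:
                    break
                asked.add(question)
                answer = answer_fn(question)  # in real life: ask the player / the patient
                remaining = {n: a for n, a in remaining.items() if a[question] == answer}
            return list(remaining)

        truth = CANDIDATES["measles"]    # pretend the hidden answer is "measles"
        print(play(lambda q: truth[q]))  # -> ['measles']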

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    Most of my problem comes from the patient interaction part.

    What is pain? What kind of pain? What if all the person can do is point at it? How do you codify pain? There's so much gray area in such a simple thing like that.

    As you can see from Sanguinus, one of the things doctors do in current systems of this kind is translate and input that kind of stuff from a patient.

    It's much easier for a human to work it out because, well, they have the same body so a lot of that kind of stuff is shared between them.

    And I think it'd be important for the doctor to have all the same knowledge as the AI, because this would allow his intuition to come into play to tell him whether there should be a lot of emphasis on the pain or not when telling the AI.

    Not that I think pain is the only gray area.

    I just don't assume that patients are able to describe their symptoms accurately. How it's written in a medical manual and how a patient comes in to tell you about his problem are worlds apart.

    Whereas most 20-questions engines are based on categorical knowledge, not things that could be taken as a (potentially uncertain) continuum. Gary Gygax is a single guy, an individual. You can't have a human with half a Gary Gygax and half a Gary Nappleseed or whatever to confuse the machine. This isn't so with physical sensations, which is all patients have to go on when trying to describe their sickness.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • HamHamJ Registered User regular
    edited March 2010
    Most of my problem comes from the patient interaction part.

    What is pain? What kind of pain? What if all the person can do is point at it? How do you codify pain? There's so much gray area in such a simple thing like that.

    As you can see from Sanguinus, one of the things doctors do in current systems of this kind is translate and input that kind of stuff from a patient.

    It's much easier for a human to work it out because, well, they have the same body so a lot of that kind of stuff is shared between them.

    And I think it'd be important for the doctor to have all the same knowledge as the AI, because this would allow his intuition to come into play to tell him whether there should be a lot of emphasis on the pain or not when telling the AI.

    Not that I think pain is the only gray area.

    I just don't assume that patients are able to describe their symptoms accurately. How it's written in a medical manual and how a patient comes in to tell you about his problem are worlds apart.

    Whereas most 20-questions engines are based on categorical knowledge, not things that could be taken as a (potentially uncertain) continuum. Gary Gygax is a single guy.

    I don't necessarily disagree with any of this. I just don't see why a nurse or other more technical professional couldn't do this. AFAIK, what separates doctors is that they have the extensive knowledge necessary to make a diagnosis. That particular skill set would be handled by the AI, so the person acting as a middle man between the AI and the patient would not need it, but would instead need an entirely different skill set (like a media degree in translating between humans and AI), and thus would not be a doctor in the traditional sense.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    HamHamJ wrote: »
    Most of my problem comes from the patient interaction part.

    What is pain? What kind of pain? What if all the person can do is point at it? How do you codify pain? There's so much gray area in such a simple thing like that.

    As you can see from Sanguinus, one of the things doctors do in current systems of this kind is translate and input that kind of stuff from a patient.

    It's much easier for a human to work it out because, well, they have the same body so a lot of that kind of stuff is shared between them.

    And I think it'd be important for the doctor to have all the same knowledge as the AI, because this would allow his intuition to come into play to tell him whether there should be a lot of emphasis on the pain or not when telling the AI.

    Not that I think pain is the only gray area.

    I just don't assume that patients are able to describe their symptoms accurately. How it's written in a medical manual and how a patient comes in to tell you about his problem are worlds apart.

    Whereas most 20-questions engines are based on categorical knowledge, not things that could be taken as a (potentially uncertain) continuum. Gary Gygax is a single guy.

    I don't necessarily disagree with any of this. I just don't see why a nurse or other more technical professional couldn't do this. AFAIK, what separates doctors is that they have the extensive knowledge necessary to make a diagnosis. That particular skill set would be handled by the AI, so the person acting as a middle man between the AI and the patient would not need it, but would instead need an entirely different skill set (like a media degree in translating between humans and AI), and thus would not be a doctor in the traditional sense.

    The intuition I was talking about earlier requires expert knowledge for accuracy. The more you know, the better you are at judging those kinds of gray areas.

    I don't know if I mentioned that?

    When it comes to healthcare I'm also in favour of a bit of redundancy if it will help more people.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • HamHamJ Registered User regular
    edited March 2010
    HamHamJ wrote: »
    Most of my problem comes from the patient interaction part.

    What is pain? What kind of pain? What if all the person can do is point at it? How do you codify pain? There's so much gray area in such a simple thing like that.

    As you can see from Sanguinus, one of the things doctors do in current systems of this kind is translate and input that kind of stuff from a patient.

    It's much easier for a human to work it out because, well, they have the same body so a lot of that kind of stuff is shared between them.

    And I think it'd be important for the doctor to have all the same knowledge as the AI, because this would allow his intuition to come into play to tell him whether there should be a lot of emphasis on the pain or not when telling the AI.

    Not that I think pain is the only gray area.

    I just don't assume that patients are able to describe their symptoms accurately. How it's written in a medical manual and how a patient comes in to tell you about his problem are worlds apart.

    Whereas most 20-questions engines are based on categorical knowledge, not things that could be taken as a (potentially uncertain) continuum. Gary Gygax is a single guy.

    I don't necessarily disagree with any of this. I just don't see why a nurse or other more technical professional couldn't do this. AFAIK, what separates doctors is that they have the extensive knowledge necessary to make a diagnosis. That particular skill set would be handled by the AI, so the person acting as a middle man between the AI and the patient would not need it, but would instead need an entirely different skill set (like a media degree in translating between humans and AI), and thus would not be a doctor in the traditional sense.

    The intuition I was talking about earlier requires expert knowledge for accuracy. The more you know, the better you are at judging those kinds of gray areas.

    I don't know if I mentioned that?

    I don't see why the interpreter's intuition would be important or even desirable to this system.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    HamHamJ wrote: »
    HamHamJ wrote: »
    Most of my problem comes from the patient interaction part.

    What is pain? What kind of pain? What if all the person can do is point at it? How do you codify pain? There's so much gray area in such a simple thing like that.

    As you can see from Sanguinus, one of the things doctors do in current systems of this kind is translate and input that kind of stuff from a patient.

    It's much easier for a human to work it out because, well, they have the same body so a lot of that kind of stuff is shared between them.

    And I think it'd be important for the doctor to have all the same knowledge as the AI, because this would allow his intuition to come into play to tell him whether there should be a lot of emphasis on the pain or not when telling the AI.

    Not that I think pain is the only gray area.

    I just don't assume that patients are able to describe their symptoms accurately. How it's written in a medical manual and how a patient comes in to tell you about his problem are worlds apart.

    Whereas most 20-questions engines are based on categorical knowledge, not things that could be taken as a (potentially uncertain) continuum. Gary Gygax is a single guy.

    I don't necessarily disagree with any of this. I just don't see why a nurse or other more technical professional couldn't do this. AFAIK, what separates doctors is that they have the extensive knowledge necessary to make a diagnosis. That particular skill set would be handled by the AI, so the person acting as a middle man between the AI and the patient would not need it, but would instead need an entirely different skill set (like a media degree in translating between humans and AI), and thus would not be a doctor in the traditional sense.

    The intuition I was talking about earlier requires expert knowledge for accuracy. The more you know, the better you are at judging those kinds of gray areas.

    I don't know if I mentioned that?

    I don't see why the interpreter's intuition would be important or even desirable to this system.

    The intuition of an expert is a fantastic way of dealing with a gray area and will often result in creative and novel solutions. For example, a patient has all the symptoms of one disease, but there's something slightly too intense about one of his symptoms that triggers the doctor to think it might be another, more serious one. So he would mention that increased intensity to the AI along with his suspicions, and the AI might tell him there needs to be another symptom (in other words, give him the expert details). So he would ask the patient about that symptom and, lo and behold, it turns out the patient didn't think it was relevant or part of the sickness.

    This kind of interaction wouldn't happen with a basic interpreter unless they had the same level of knowledge as a doctor. He would have just put in the original symptoms as described to him. So a basic interpreter has the same problem as the AI: it's trusting the patient to know what's wrong with him.

    Together they would be a much, much more powerful diagnostic device than either alone. I don't think that's a big drawback.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • DanHibiki Registered User regular
    edited March 2010
    HamHamJ wrote: »
    HamHamJ wrote: »
    Most of my problem comes from the patient interaction part.

    What is pain? What kind of pain? What if all the person can do is point at it? How do you codify pain? There's so much gray area in such a simple thing like that.

    As you can see from Sanguinus, one of the things doctors do in current systems of this kind is translate and input that kind of stuff from a patient.

    It's much easier for a human to work it out because, well, they have the same body so a lot of that kind of stuff is shared between them.

    And I think it'd be important for the doctor to have all the same knowledge as the AI, because this would allow his intuition to come into play to tell him whether there should be a lot of emphasis on the pain or not when telling the AI.

    Not that I think pain is the only gray area.

    I just don't assume that patients are able to describe their symptoms accurately. How it's written in a medical manual and how a patient comes in to tell you about his problem are worlds apart.

    Whereas most 20-questions engines are based on categorical knowledge, not things that could be taken as a (potentially uncertain) continuum. Gary Gygax is a single guy.

    I don't necessarily disagree with any of this. I just don't see why a nurse or other more technical professional couldn't do this. AFAIK, what separates doctors is that they have the extensive knowledge necessary to make a diagnosis. That particular skill set would be handled by the AI, so the person acting as a middle man between the AI and the patient would not need it, but would instead need an entirely different skill set (like a media degree in translating between humans and AI), and thus would not be a doctor in the traditional sense.

    The intuition I was talking about earlier requires expert knowledge for accuracy. The more you know, the better you are at judging those kinds of gray areas.

    I don't know if I mentioned that?

    I don't see why the interpretors intuition would be important or even desirable to this system.

    The intuition of an expert is a fantastic way of dealing with a gray area and will often result in creative and novel solutions. For example, a patient that has all the symptoms of one disease but there's something slightly too intense about one of his symptoms that triggers the doctor to think it might be another more serious one. So he would mention that increased intensity to the AI along with his suspicions and the AI might tell him there needs to be another symptom (in other words give him the expert details). So he would ask the patient about that symptom and lo and behold the patient as it turned out didn't think it was relevant or part of the sickness.

    This kind of interaction wouldn't happen with a basic interpreter unless they had the same level of knowledge as a doctor. He would have just put in the original symptoms as described to him. So a basic interpretor has the same problem as the AI: it's trusting the patient to know what's wrong with him.

    Together they would be much, much more powerful diagnostic device than either alone. I don't think that's a big drawback.

    That points out a bigger flaw in the doctor than it does in the AI. An AI wouldn't particularly trust a patient to know what's wrong; it would instead proceed to investigate all possible options that don't immediately fit with the situation and would come to the correct conclusion, whereas a doctor might simply not think it relevant.

    You could however argue that a system would not be as hands-on with the patient and would miss things like patients not taking medication or something to that effect, but that would still reduce doctors to nothing more than service representatives.

    It's funny that surgeons are adopting a checklist system containing everything from advanced techniques to simple things like "wash your hands", which has dramatically increased the number of successful surgeries since it was implemented. Just goes to show you how easy it is for people to forget the basic things even when you follow routine.

    DanHibiki on
  • Options
    MorninglordMorninglord I'm tired of being Batman, so today I'll be Owl.Registered User regular
    edited March 2010
    DanHibiki wrote: »
    HamHamJ wrote: »
    HamHamJ wrote: »
    Most of my problem comes from the patient interaction part.

    What is pain? What kind of pain? What if all the person can do is point at it? How do you codify pain? There's so much gray area in such a simple thing like that.

    As you can see from Sanguinus, one of the things doctors do in current systems of this kind is translate and input that kind of stuff from a patient.

    It's much easier for a human to work it out because, well, they have the same body, so a lot of that kind of stuff is shared between them.

    And I think it'd be important for the doctor to have all the same knowledge as the AI, because this would allow his intuition to come into play to tell him whether there should be a lot of emphasis on the pain or not when telling the AI.

    Not that I think pain is the only gray area.

    I just don't assume that patients are able to describe their symptoms accurately. How it's written in a medical manual and how a patient comes in to tell you about his problem are worlds apart.

    Whereas most 20-questions engines are based on categorical knowledge, not things that could be taken as a (potentially uncertain) continuum. Gary Gygax is a single guy.

    I don't necessarily disagree with any of this. I just don't see why a nurse or other more technical professional couldn't do this. AFAIK, what separates doctors is that they have the extensive knowledge necessary to make a diagnosis. That particular skill set would be handled by the AI, so the person acting as a middle man between the AI and the patient would not need it, but would instead need an entirely different skill set (like a media degree on translating between humans and AI), and thus would not be a doctor in the traditional sense.

    The intuition I was talking about earlier requires expert knowledge for accuracy. The more you know, the better you are at judging those kinds of gray areas.

    I don't know if I mentioned that?

    I don't see why the interpreter's intuition would be important or even desirable to this system.

    The intuition of an expert is a fantastic way of dealing with a gray area and will often result in creative and novel solutions. For example, a patient has all the symptoms of one disease, but there's something slightly too intense about one of his symptoms that triggers the doctor to think it might be another, more serious one. So he would mention that increased intensity to the AI along with his suspicions, and the AI might tell him there needs to be another symptom (in other words, give him the expert details). So he would ask the patient about that symptom and, lo and behold, it would turn out the patient didn't think it was relevant or part of the sickness.

    This kind of interaction wouldn't happen with a basic interpreter unless they had the same level of knowledge as a doctor. He would have just put in the original symptoms as described to him. So a basic interpreter has the same problem as the AI: it's trusting the patient to know what's wrong with him.

    Together they would be a much, much more powerful diagnostic device than either alone. I don't think that's a big drawback.

    That points out a bigger flaw in the doctor than it does in the AI. An AI wouldn't particularly trust a patient to know what's wrong; it would instead proceed to investigate all possible options that don't immediately fit with the situation and would come to the correct conclusion, whereas a doctor might simply not think it relevant.

    You could however argue that a system would not be as hands-on with the patient and would miss things like patients not taking medication or something to that effect, but that would still reduce doctors to nothing more than service representatives.

    It's funny that surgeons are adopting a checklist system containing everything from advanced techniques to simple things like "wash your hands", which has dramatically increased the number of successful surgeries since it was implemented. Just goes to show you how easy it is for people to forget the basic things even when you follow routine.

    I have no idea how you came to this conclusion. I gave an example of intuition translating something uncodifiable that indicated something that wasn't an immediate fit, and you respond by assuming everything is codifiable and that the doctor wouldn't see what I just outlined he did. You also accused the doctor of trusting the patient, when I've been saying doctors don't necessarily trust their patients but an AI has to believe what it has been told.

    It's like you read everything I just said backwards.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • Options
    nescientistnescientist Registered User regular
    edited March 2010
    "an AI has to believe what it has been told"
    Maybe, if you assume its only input is verbal and from the patient, then yeah. So maybe there's an argument for having only humans dispense pain meds, given the subjective nature of pain and the propensity for lying about it... but then again we're already trying to design systems that detect nonvocal "tells" from video of people lying. Doubt we've gotten far yet, but I haven't really looked into it.

    nescientist on
  • Options
    MorninglordMorninglord I'm tired of being Batman, so today I'll be Owl.Registered User regular
    edited March 2010
    It's not just about a patient lying.
    I mentioned 3 good reasons for it before.
    I'll go dig them up again.

    edit: here it is
    @ Sanguinius I think your system is peachy keen and is basically what I think should happen. Use them as a tool, a handier database, capable of making decisions, with the doctor still seeing the patient and codifying the patient's vague complaints into something the AI can handle. Best of both worlds. I'm completely comfortable with that.

    I'm also comfortable with the self-diagnosis thing, but only if non-minor symptoms are referred to a GP (with this system in place as well). The reason is simple: it has to assume the uneducated patient is (a) not lying, (b) describing all of their symptoms, and (c) capable of describing their symptoms accurately. But yeah, that'd be good for the kinds of things your pharmacist could easily sort out for you.

    I didn't mean for those three objections to apply only to self-diagnosis either.

    It's possible for a patient to have done some cursory research of his own, led himself into thinking he has a particular thing, and thus imagined he has the symptoms of it.

    You can't tell if someone is lying if they really believe what they are saying.

    These are all situations that intuition would help with, that would be very hard to encode. Not impossible, no. But so much work, it would be cheaper and more reliable to hire a doctor.

    I also don't even wanna start with an AI trying to deal with the possibility of having to refer a patient to a psychologist. Good luck coding that shit.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • Options
    nescientistnescientist Registered User regular
    edited March 2010
    You keep restating the importance of intuition, and I keep wondering why you think that because intuition is important to a doctor, intuition will be important to a decision engine attempting to solve the same problems. You accuse HamHamJ of describing people simplistically by calling them computers, but your basic assumption here seems to be that machines need to operate like people in order to do their jobs, which is rank nonsense. If Safeway decided tomorrow that all of their checkers and bag boys and what have you were fired and replaced with robots, the robots would not stand patiently at the counter, their gleaming metal hands whisking items across a bar-code scanner as their feet operate the conveyor belt, carefully minding the plastic dividers that separate one patron's food from another. Instead, they would glow with the words "Unknown Item In Bagging Area" and require constant manager intervention.

    Would you advocate precautionary measures to prevent the AI from creating a hostile workplace by sexually harassing nurses? No? Then you probably shouldn't be worrying about its intuitive capabilities, because they are of equal irrelevance.

    There is indeed great power in our associative memory and its chaotic, naturally emergent structure, but while we might be able to make a great intuitive leap based on divergent information that we might not even have conscious access to, a computer could have total and unrestricted access to all of its stored information at all times. A decision could be influenced by billions of factors drawn from thousands of sources. Intuition isn't just irrelevant: it exemplifies the weakness of humans relative to machines at managing an overwhelming constellation of variously relevant input data, which seems to be a real problem in medicine.
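    Just to show what "influenced by billions of factors" looks like mechanically, here is a minimal sketch, assuming a naive-Bayes-style engine that simply accumulates log-likelihood ratios over every finding it is given, however many there are. Every number below is made up.

```python
import math

def posterior(prior, findings):
    """Accumulate evidence as log-odds. Each finding is a pair:
    (P(finding | disease), P(finding | no disease)). Assumes the
    findings are conditionally independent (the naive-Bayes shortcut)."""
    log_odds = math.log(prior / (1 - prior))
    for p_if_disease, p_if_healthy in findings:
        log_odds += math.log(p_if_disease / p_if_healthy)
    return 1 / (1 + math.exp(-log_odds))

# Three findings here, but the loop doesn't care if it's three thousand,
# drawn from labs, history, imaging, whatever else is on file.
findings = [(0.9, 0.3), (0.7, 0.6), (0.4, 0.1)]
print(round(posterior(0.01, findings), 3))  # ~0.12 with these made-up numbers
```

    No intuition anywhere in that loop; it just never forgets an input, which is exactly the thing humans are bad at.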

    nescientist on
  • Options
    DanHibikiDanHibiki Registered User regular
    edited March 2010
    patients lying and patients with psychosomatic symptoms are statistically predictable.

    DanHibiki on
  • Options
    CouscousCouscous Registered User regular
    edited March 2010
    DanHibiki wrote: »
    patients lying and patients with psychosomatic symptoms are statistically predictable.

    I would also assume there are ways of telling based on how the patient acts and other data. Intuition is usually just a way of ignoring how you actually went about solving the problem through various factors. Intuition is based on facts and past behavior, so I don't see why a computer would need intuition when it could just do the longer process of going through the problem without taking any shortcuts.
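    A rough sketch of what treating that statistically might look like, assuming the engine models the patient's report as noisy evidence rather than ground truth. All the reliability figures below are invented for illustration.

```python
def p_symptom_given_report(p_symptom, p_report_if_present, p_report_if_absent):
    """Bayes' rule: how likely the symptom is actually there,
    given only that the patient says it is."""
    p_report = (p_report_if_present * p_symptom
                + p_report_if_absent * (1 - p_symptom))
    return p_report_if_present * p_symptom / p_report

# e.g. a symptom with a 10% base rate, reported 95% of the time when it's
# really there, but also reported 20% of the time when it isn't
# (lying, psychosomatic, or just misdescribed).
print(round(p_symptom_given_report(0.10, 0.95, 0.20), 2))  # ~0.35
```

    The catch, of course, is that the base rate and reliability figures describe a population, not the particular patient in the chair.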

    Couscous on
  • Options
    Phoenix-DPhoenix-D Registered User regular
    edited March 2010
    DanHibiki wrote: »
    patients lying and patients with psychosomatic symptoms are statistically predictable.

    Statistics doesn't tell you about the single patient in front of you. You can say that X% of patients with these complaints are lying, but stats won't tell you which set your patient is in. (unless the % is something silly like 99.99999%)

    Phoenix-D on