
AI Government



  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    The point isn't to have machines run us though. It's to have a machine understand us, to provide utility from government we otherwise cannot achieve.

    My whole OP was motivated by "what if we could give everyone that one friend who knows about politics", which is what seems to happen a lot in other threads we had - notably on healthcare - where plenty of people have managed to convince others of the utility of what was passed by patiently explaining the entire thing to them.

    People would still be running their government, just the representative would be a vastly more capable one which really would be concerned with your individual needs.

    Lobby groups and money only really hold power in politics because they affect how well you can appear to communicate with your audience - exposure ultimately wins elections, because you really can't talk to all your citizens individually, and you definitely can't remember or understand most of them (the upper limit seems to be maybe 400-500 people in your monkeysphere). Conversely, make an AI sufficiently capable and we could create an entity with a monkeysphere that could encompass billions of people, capable of reaching and talking to all of them. Such a thing would be immune to traditional lobbyists.

    I also take issue with the idea that not having emotions means it couldn't govern humans effectively: we govern people without sharing their emotions all the time. It's just that we're ruled by our own, and pretty poor at it. Applying intelligence and rationality to any problem, you can deal with the emotions of others and understand them without experiencing them personally. Humanity's great flaw is that we're really bad at considering the emotions of others when they're suitably removed from us.

    So what you are saying is that it would be a massive scale data gathering device that calculates and runs statistics on all the opinions and presents every point of view of a populace to elected humans who make the decisions?
    Presumably with all the data and decisions transparent?

    I don't have a problem with that; that's just using it as a data-gathering tool.

    I only have a problem with machines making the decisions autonomously.

    Morninglord on
    (PSN: Morninglord) (Steam: Morninglord) (WiiU: Morninglord22) I like to record and toss up a lot of random gaming videos here.
  • DanHibiki Registered User regular
    edited March 2010
    Ok then, we can keep a presidential equivalent of the British Monarchy.

    edit: and give him a +1 mace like the one in Canadian Parliament.

    DanHibiki on
  • durandal4532 Registered User regular
    edited March 2010
    Honestly, the only problem I have with it is the idea of the AI being a scaled-up human intelligence.

    I'm wary of the conception that anything that has 300,000,000 simultaneous conversations and no sensory organs would be like a normal intellect but faster.

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
  • HamHamJ Registered User regular
    edited March 2010
    Honestly, the only problem I have with it is the idea of the AI being a scaled-up human intelligence.

    I'm wary of the conception that anything that has 300,000,000 simultaneous conversations and no sensory organs would be like a normal intellect but faster.

    It could fake it. Since it's all Turing complete, the AI could simulate human intelligence.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • electricitylikesme Registered User regular
    edited March 2010
    The point isn't to have machines run us though. It's to have a machine understand us, to provide utility from government we otherwise cannot achieve.

    My whole OP was motivated by "what if we could give everyone that one friend who knows about politics", which is what seems to happen a lot in other threads we had - notably on healthcare - where plenty of people have managed to convince others of the utility of what was passed by patiently explaining the entire thing to them.

    People would still be running their government, just the representative would be a vastly more capable one which really would be concerned with your individual needs.

    Lobby groups and money only really hold power in politics because they affect how well you can appear to communicate with your audience - exposure ultimately wins elections, because you really can't talk to all your citizens individually, and you definitely can't remember or understand most of them (the upper limit seems to be maybe 400-500 people in your monkeysphere). Conversely, make an AI sufficiently capable and we could create an entity with a monkeysphere that could encompass billions of people, capable of reaching and talking to all of them. Such a thing would be immune to traditional lobbyists.

    I also take issue with the idea that not having emotions means it couldn't govern humans effectively: we govern people without sharing their emotions all the time. It's just that we're ruled by our own, and pretty poor at it. Applying intelligence and rationality to any problem, you can deal with the emotions of others and understand them without experiencing them personally. Humanity's great flaw is that we're really bad at considering the emotions of others when they're suitably removed from us.

    So what you are saying is that it would be a massive scale data gathering device that calculates and runs statistics on all the opinions and presents every point of view of a populace to elected humans who make the decisions?
    Presumably with all the data and decisions transparent?

    I don't have a problem with that; that's just using it as a data-gathering tool.

    I only have a problem with machines making the decisions autonomously.

    No, I still really do intend it as a government tool. The idea is it's a sentient entity, sufficiently powerful as to manage conversations with 300 million+ people at any one time, and then do other things as well.

    Elected governments currently only manage to represent the broadest opinions, and frequently only work on those which elicit an emotive response. They promote mutual suspicion between the governed and the government as a result. A truly representative government would have an active relationship with all its citizens - it would not necessarily discuss policy with them all the time nor ask their opinions on things, but it would share a part of their lives and through this understand their circumstances and their thinking.

    If a lucid, intelligent, compassionate and patient individual were able to be available to someone 24/7 to discuss whatever they wanted to discuss (all of which is essentially relevant to effectively running a country), then I think the implementation of policy and the efficiency thereof would change dramatically.

    There is of course a lot of leeway in this scenario for just who this "person" might be. For example, rather than a singular consciousness, we could be dealing with a more likely emergent scenario whereby policy is debated by proxy through people's personal AIs, which would confer many of the same benefits but seem less like a dictatorial ruler.

    electricitylikesme on
  • SkyCaptain Indiana Registered User regular
    edited March 2010
    Appleseed (manga/anime) had GAIA, which ran Olympus. I think it based its decisions more on emotions than on direct contact with the people. Though there was a council of elders to balance things out.

    SkyCaptain on
    The RPG Bestiary - Dangerous foes and legendary monsters for D&D 4th Edition
  • Tallweirdo Registered User regular
    edited March 2010
    Personally, at first thought, I would be quite willing to replace the senate with a sufficiently advanced AI that was in contact with everyone. This way you can keep humans in the decision-making loop whilst having an impartial system of checks and balances in place to regulate the fairness of the system.

    An example implementation would be:
    1. The House drafts a bill with a preamble in plain English describing the general purpose of the bill and the constraints (e.g. schedule, budget, existing laws, foreign relations).
    2. The AI reviews the bill against the stated purpose, highlighting sections inconsistent with that purpose and sections that are sub-optimal, and suggests amendments to best achieve the stated purpose within the imposed constraints. In any area where a 'judgement call' is required, the AI can inform and consult the general public to obtain their input.
    3. The House can then accept or reject the amendments from the AI. All members of the House opposing the impartial recommendations of the AI must state a reason for their opposition on the public record.

    This system would keep the humans in control while requiring politicians to publicly justify when their decisions are not in the impartially decided best interest.
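    A minimal sketch of that three-step loop, purely for illustration: the bill fields and the "consistency" judgement are hypothetical and hardcoded here, where a real system would have to compute them.

```python
from dataclasses import dataclass, field

@dataclass
class Section:
    text: str
    consistent: bool       # in reality this judgement is the AI's output
    amendment: str = ""    # suggested rewording, if any

@dataclass
class Bill:
    preamble: str                  # plain-English statement of purpose (step 1)
    sections: list[Section] = field(default_factory=list)

def ai_review(bill: Bill) -> list[str]:
    """Step 2: return suggested amendments for sections inconsistent with
    the stated purpose; the House then accepts or rejects them (step 3)."""
    return [s.amendment for s in bill.sections if not s.consistent]

bill = Bill(
    preamble="Expand rural broadband within the existing budget.",
    sections=[
        Section("Fund fiber rollout in underserved counties.", consistent=True),
        Section("Create an unrelated tax credit.", consistent=False,
                amendment="Strike section: outside the stated purpose."),
    ],
)
amendments = ai_review(bill)  # one flagged section in this example
```

    The hard part, of course, is the review itself; the loop around it is the easy bit.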

    Tallweirdo on
  • Kaputa Registered User regular
    edited March 2010
    I think a government-related AI topic that's much more practical and feasible within our lifetimes is relegating to an AI not the decisions of government, just the functioning of government. Even if Congress worked essentially the way it does now, the laws it passes could be carried out by the computer rather than by vast bureaucracies with inefficiency at every step. Let the computer handle the task of implementation, as it could be made to do so perfectly, and let the democracy decide what to implement.

    I do think, however, that the Supreme Court could be entirely replaced by an AI and that such a replacement could have no other effect than helping the court fulfill its purpose. SCOTUS is supposed to measure laws against the Constitution and determine their validity according to that document, and that's a yes-or-no thing where personal or majority opinion doesn't (or at least shouldn't) come into play. If the AI court is making poor decisions based on the ruleset you've given it, then amend the ruleset (the Constitution) itself.

    Honestly, I think most if not all of the legal system could be automated. The judgment side of things, anyway. Codify standards of evidence with precision and have the AI judge the guilt of the accused based on whether the evidence meets those precisely defined standards, and a conclusion can be reached without any emotion or lack of legal understanding influencing the results, which is how the legal system should work.
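    As a toy illustration of what "codify standards of evidence with precision" might mean: the elements, reliability scores, and 0.9 threshold below are all invented, and real evidentiary standards are nothing like this simple.

```python
# Hypothetical codified evidence standard: every element of the offense
# must be supported by at least one admissible item of evidence meeting
# a minimum reliability threshold. All names and numbers are invented.

MIN_RELIABILITY = 0.9

def verdict(elements, evidence):
    """Guilty only if every element has qualifying support."""
    for element in elements:
        supported = any(
            e["element"] == element
            and e["admissible"]
            and e["reliability"] >= MIN_RELIABILITY
            for e in evidence
        )
        if not supported:
            return "not guilty"
    return "guilty"

evidence = [
    {"element": "presence at scene", "admissible": True,  "reliability": 0.95},
    {"element": "intent",            "admissible": False, "reliability": 0.99},
]
# The intent evidence is inadmissible, so that element fails.
result = verdict(["presence at scene", "intent"], evidence)
```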

    These things aren't nearly as farfetched as Binary God, and could all eventually be implemented with enough research and analysis.

    Kaputa on
  • Yar Registered User regular
    edited March 2010
    I'm pretty sure that right now AIs could make better doctors than people do, and could probably solve a lot of problems with health care.

    Yar on
  • Morninglord I'm tired of being Batman, so today I'll be Owl. Registered User regular
    edited March 2010
    Yar wrote: »
    I'm pretty sure that right now AIs could make better doctors than people do, and could probably solve a lot of problems with health care.

    You have no idea how the medical system works if you think you can just program in a list of symptoms.

    And to the person who made a mention of judges: they are supposed to have emotions. It's supposed to be a human judge. Let's not go around making law even more of a giant undiscriminating hammer than it already is.

    I like the proposed system where the AI makes suggestions on bills that humans initially propose. It would be important for the AI to be able to be overruled given a sufficient majority.

    Morninglord on
  • durandal4532 Registered User regular
    edited March 2010
    HamHamJ wrote: »
    Honestly, the only problem I have with it is the idea of the AI being a scaled-up human intelligence.

    I'm wary of the conception that anything that has 300,000,000 simultaneous conversations and no sensory organs would be like a normal intellect but faster.

    It could fake it. Since it's all Turing complete, the AI could simulate human intelligence.

    The Turing test isn't like a board-certification exam. If something can fool a human into thinking it is also a human, then it is likely intelligent. If it can't fool someone into thinking it's a human, that doesn't rule out the idea that it's intelligent. If that were the case, people who sucked at holding conversations would be classified as non-human.


    And I don't think the issue is talking with people. I mean that if you have no sensory organs and interact in a completely novel manner with your environment, your intelligence is going to necessarily be alien. Even if you do care about faking it, that doesn't remove the fact that you might not be all that good at governing human affairs. Thinking of it as though one person was interviewing every person in the US in parallel is interesting, but I think it's an inaccurate metaphor.

    Edit: Also, an AI would likely be exactly as good as a person at being a doctor, if we ever managed to make one. An AI isn't a computer that's scaled up to human intelligence, it may not bear any relation to a computer at all.

    Edit Edit: Okay, also to make this less weird I am calling an artificial fully sentient being an "AI", and I am calling an artificial really effective system that is really good at interacting with humans an "Expert System". Oh oh, or to be more of a dork a "VI". Though I don't know that I'd trust Avina to govern...

    Also the reason anything new ever has not been attempted in law is because lawyers are old and conservative. They still loose-leaf file things for no particular reason.

    durandal4532 on
  • electricitylikesme Registered User regular
    edited March 2010
    An AI which checks the internal consistency of bills is something which could be done. If I recall correctly, someone had created a networked system for finding correlations in medical research publications that worked surprisingly well and discovered a few interesting things in its first few weeks of operation.

    Given the way legal language is usually structured, I'm surprised something like this hasn't been attempted - one imagines it would work really well with a common-sense reasoning system like Cyc.

    electricitylikesme on
  • Yar Registered User regular
    edited March 2010
    You have no idea how the medical system works if you think you can just program in a list of symptoms.
    Hmm, no, I'm pretty certain you're completely wrong about that.

    EDIT: Oh man, Cyc. I did a huge research project on that like 10 or 11 years ago. Not much has changed it seems.

    Yar on
  • DouglasDanger Pennsylvania Registered User regular
    edited March 2010
    This is part of the background for Neal Asher's Polity books. Post-cyberpunk space opera with some horror influences.

    DouglasDanger on
  • durandal4532 Registered User regular
    edited March 2010
    Yar wrote: »
    You have no idea how the medical system works if you think you can just program in a list of symptoms.
    Hmm, no, I'm pretty certain you're completely wrong about that.

    Wait, for real? Hahaha

    You don't like... know doctors, do you?

    I mean okay, an AI could probably help. It would be another sentient being, obviously it helps. But the idea that you can take a medical dictionary, a table of causes, and a neural network... it would honestly be more trouble for less effectiveness than spending time training a human being. And in the end you'd still need to be checked on by a human because there's not any decent way of differentiating between a whole lot of different diseases that present with the same symptoms.

    Edit: Hell, it could be argued that a lot of what doctors do is present appropriate human contact at necessary times. A diagnosis machine wouldn't. And good lord, it would believe people when they reported their symptoms, which would just not work effectively.

    durandal4532 on
  • DarkWarrior __BANNED USERS regular
    edited March 2010
    An AI would have instant access to an infinite database. Doctors have to actively study to keep updated and a lot of stuff becomes obsolete.

    And an AI would find out what's wrong the same way doctors do. Take blood, run tests.

    DarkWarrior on
  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited March 2010
    The idea that we know what perfect rationality is--we just need a computer to carry it out for us--is hilarious.

    MrMister on
  • DanHibiki Registered User regular
    edited March 2010
    It would probably be best used in cancer treatment where a large part of the process is data analysis.

    A big problem with it now is that many of the findings are skewed in favor of brand-name medication.

    Other areas of biomedical research can be almost entirely trusted to AIs, and fairly soon surgery too (starting with computer-assisted surgery, of course).

    DanHibiki on
  • Yar Registered User regular
    edited March 2010
    Wait, for real? Hahaha

    You don't like... know doctors, do you?

    I mean okay, an AI could probably help. It would be another sentient being, obviously it helps. But the idea that you can take a medical dictionary, a table of causes, and a neural network... it would honestly be more trouble for less effectiveness than spending time training a human being. And in the end you'd still need to be checked on by a human because there's not any decent way of differentiating between a whole lot of different diseases that present with the same symptoms.

    Edit: Hell, it could be argued that a lot of what doctors do is present appropriate human contact at necessary times. A diagnosis machine wouldn't. And good lord, it would believe people when they reported their symptoms, which would just not work effectively.
    This makes no sense. Diagnosis is pretty straightforward. It's a decision engine and checklist. Neural networks would not be necessary. And if there's not a decent way of differentiating between a whole lot of different diseases... how exactly does that mean a human should do it?

    An AI would be far more capable of running the decision tree to the full extent of medical knowledge, and even mathematically weighing risks such as whether a test is more likely to give someone cancer than it is to actually help detect something else. And we could get comprehensive feedback and valuable metrics from the AI.

    Human doctors, on the other hand, are consistently shown to be wretched failures at accepting and performing even a simple checklist of necessary questions and preparations when dealing with a patient on an issue - to the tune of a significant number of preventable deaths, and a body of misdiagnosis, over-diagnosis, and inconsistent diagnosis so damning and so expensive it would make your head spin. They also freak out any time you suggest that their decisions and notes be subject to any measurement, review, or comparison to established medical science.

    As for needing a human therapist, sure, there are people that can be trained and available for specifically that. They can even act as the AI intermediary if we have to go that far.
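    The "decision engine and checklist" idea, in miniature. The questions, branches, and outcomes below are invented for illustration; a real clinical decision-support tree is vastly larger and probabilistic rather than yes/no.

```python
# Toy diagnostic checklist as a fixed yes/no decision tree.
# Everything in this tree is made up for illustration.

TREE = {
    "question": "fever?",
    True:  {"question": "stiff neck?",
            True:  "refer immediately (rule out meningitis)",
            False: "likely viral infection; re-check in 48h"},
    False: {"question": "headache all day?",
            True:  "screen for tension headache and depression",
            False: "no action; routine follow-up"},
}

def diagnose(node, answers):
    """Answer each checklist question from `answers` until a leaf is reached."""
    while isinstance(node, dict):
        node = node[answers[node["question"]]]
    return node

result = diagnose(TREE, {"fever?": False, "headache all day?": True})
```

    The objection upthread is precisely that real patients don't answer these questions accurately, which is what a fixed tree like this cannot handle.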

    Yar on
  • durandal4532 Registered User regular
    edited March 2010
    The thing is, an AI is useful because it's another person. You can't say "An AI would help XYZ field" because for all you know, it doesn't give a shit about that sort of thing, it prefers to smoke weed and play bass.

    Now, a very powerful VI, that's something you can slave to a single task.

    But it will be hampered the same way a computer is usually hampered: it won't actually live in the world, it will just be a convincing approximation of something that does when it is useful.

    And DarkWarrior, the difference between instant access to a medical database and having studied medicine is the difference between someone having an English phrasebook and having grown up in the US. It's just not the most useful portion of the profession.


    Edit: Aaaagh Yar I what no no no no no.

    Neural Networks would be the least of what you'd need to do to make an effective Auto-doc. Decision trees aren't how diagnosis works. There are too many symptoms that go unreported, too many misreported symptoms, too many symptoms with shared causes, unrelated causes, symptoms which are similar on the surface but significantly different depending on a variety of other factors...

    Hell, here's one.

    "Doc, I have a massive headache all day".

    What does this patient have?

    Wrong! He's just depressed, but this is how he's able to express it in a manner that makes him feel comfortable. Now, figure out how to get him psychiatric help without injuring his pride and making him worse!

    Doctors, while being significantly less than perfect, are significantly better than an auto-correct running a medical dictionary.

    durandal4532 on
  • HamHamJ Registered User regular
    edited March 2010
    Edit: Aaaagh Yar I what no no no no no.

    Neural Networks would be the least of what you'd need to do to make an effective Auto-doc. Decision trees aren't how diagnosis works. There are too many symptoms that go unreported, too many misreported symptoms, too many symptoms with shared causes, unrelated causes, symptoms which are similar on the surface but significantly different depending on a variety of other factors...

    Hell, here's one.

    "Doc, I have a massive headache all day".

    What does this patient have?

    Wrong! He's just depressed, but this is how he's able to express it in a manner that makes him feel comfortable. Now, figure out how to get him psychiatric help without injuring his pride and making him worse!

    Doctors, while being significantly less than perfect, are significantly better than an auto-correct running a medical dictionary.

    And how the fuck did you come to that conclusion?

    By evaluating the situation and comparing it to similar cases, and analyzing the data from visual cues and body language.

    Which is something an AI is perfectly capable of doing. And it can do it better (though perhaps not more cost effectively).

    There is no magical ability that doctors have that an AI could not do better.

    HamHamJ on
  • durandal4532 Registered User regular
    edited March 2010
    There's also no magical ability that a VI has that would make it any better, and several drawbacks of even a complex system that would make it worse. Unless we're going on "it would be perfect because I am presupposing the system would be perfect".

    It's like saying that a VI would be so much better as an English professor because it can add 15,000-digit numbers. That's impressive, but it's not that relevant.

    durandal4532 on
  • Saammiel Registered User regular
    edited March 2010
    HamHamJ wrote: »
    Edit: Aaaagh Yar I what no no no no no.

    Neural Networks would be the least of what you'd need to do to make an effective Auto-doc. Decision trees aren't how diagnosis works. There are too many symptoms that go unreported, too many misreported symptoms, too many symptoms with shared causes, unrelated causes, symptoms which are similar on the surface but significantly different depending on a variety of other factors...

    Hell, here's one.

    "Doc, I have a massive headache all day".

    What does this patient have?

    Wrong! He's just depressed, but this is how he's able to express it in a manner that makes him feel comfortable. Now, figure out how to get him psychiatric help without injuring his pride and making him worse!

    Doctors, while being significantly less than perfect, are significantly better than an auto-correct running a medical dictionary.

    And how the fuck did you come to that conclusion?

    By evaluating the situation and comparing it to similar cases, and analyzing the data from visual cues and body language.

    Which is something an AI is perfectly capable of doing. And it can do it better (though perhaps not more cost effectively).

    There is no magical ability that doctors have that an AI could not do better.

    Indeed. Expert medical systems are already pretty damn good at doing diagnostic work. And they don't need to just be reactive; you can have them respond with pertinent clarifying questions and information gathering. In fact, diagnosing medical problems was one of the more promising fields in expert systems back when I took my AI course. I mean, your characterization of expert systems as an auto-correct running on a medical dictionary betrays an extreme misunderstanding of the state of AI.
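    The "clarifying questions" behaviour can be sketched with a toy rule base: keep only the conditions consistent with what the patient has reported or denied, then ask about a symptom that distinguishes the remaining candidates. The conditions and symptoms here are invented for illustration.

```python
# Toy expert system that picks its next clarifying question.
# The rule base (condition -> required symptoms) is made up.

RULES = {
    "migraine":         {"headache", "light sensitivity"},
    "tension headache": {"headache", "neck tightness"},
    "flu":              {"headache", "fever"},
}

def candidates(reported, denied):
    """Conditions consistent with reported symptoms and with no denied ones."""
    return [c for c, syms in RULES.items()
            if reported <= syms and not (denied & syms)]

def next_question(reported, denied):
    """Pick an unasked symptom that some, but not all, candidates share."""
    live = candidates(reported, denied)
    asked = reported | denied
    for c in live:
        for sym in sorted(RULES[c] - asked):
            if any(sym not in RULES[other] for other in live):
                return sym
    return None

reported, denied = {"headache"}, set()
question = next_question(reported, denied)  # the system asks about this symptom
denied.add(question)                        # suppose the patient says no
```

    Real expert systems pick questions by information gain over weighted rules rather than this first-match walk, but the loop of narrowing candidates by asking is the same shape.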

    I mean, I don't think they are a panacea or that they should be used without supervision. But they clearly have the ability to aid the medical field, and probably in the long run to increase the productivity of health care workers and reduce the number of human practitioners. And they can act as a counterbalance to the biases inherent in human decision making.

    Now AIs running government? I don't see how we can have any idea how that would even work, let alone if it is a good idea. There isn't any real way of knowing how non-human sapience will look. Thinking that we can dream up some omnibenevolent overseer to guide us silly meat bags seems like nothing more than faith.

    Saammiel on
  • Modern Man Registered User regular
    edited March 2010
    HamHamJ wrote: »
    Edit: Aaaagh Yar I what no no no no no.

    Neural Networks would be the least of what you'd need to do to make an effective Auto-doc. Decision trees aren't how diagnosis works. There are too many symptoms that go unreported, too many misreported symptoms, too many symptoms with shared causes, unrelated causes, symptoms which are similar on the surface but significantly different depending on a variety of other factors...

    Hell, here's one.

    "Doc, I have a massive headache all day".

    What does this patient have?

    Wrong! He's just depressed, but this is how he's able to express it in a manner that makes him feel comfortable. Now, figure out how to get him psychiatric help without injuring his pride and making him worse!

    Doctors, while being significantly less than perfect, are significantly better than an auto-correct running a medical dictionary.

    And how the fuck did you come to that conclusion?

    By evaluating the situation and comparing it to similar cases, and analyzing the data from visual cues and body language.

    Which is something an AI is perfectly capable of doing. And it can do it better (though perhaps not more cost effectively).

    There is no magical ability that doctors have that an AI could not do better.
    You're never going to take the human being out of the decision chain for medical treatment. But AIs would be incredibly valuable in medicine. If nothing else, they would quickly be able to cross-reference a patient's medical history, symptoms, prescriptions and other factors to give the doctor a list of potential diagnoses and recommendations for treatment.

    They could also be very helpful in preventing human error, such as chiming in and telling the doctor he's about to amputate the wrong limb, or stopping a nurse from administering the wrong medication.
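    That kind of cross-referencing is essentially a lookup against the patient's record before an order goes through. A sketch, where the drug names, interaction table, and record fields are all invented:

```python
# Cross-reference a new medication order against a (made-up) patient record:
# flag known drug interactions and recorded allergies before dispensing.

INTERACTIONS = {frozenset({"warfarin", "aspirin"}): "increased bleeding risk"}

def check_order(patient, new_drug):
    """Return warnings to surface before the order is carried out."""
    warnings = []
    if new_drug in patient["allergies"]:
        warnings.append(f"allergy on record: {new_drug}")
    for current in patient["prescriptions"]:
        reason = INTERACTIONS.get(frozenset({current, new_drug}))
        if reason:
            warnings.append(f"interacts with {current}: {reason}")
    return warnings

patient = {"allergies": {"penicillin"}, "prescriptions": ["warfarin"]}
warnings = check_order(patient, "aspirin")
```

    Systems of roughly this shape are the "chiming in" case above: cheap checks that catch exactly the class of error humans make under time pressure.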

    Modern Man on
    Aetian Jupiter - 41 Gunslinger - The Old Republic
    Rigorous Scholarship

  • DarkWarrior __BANNED USERS regular
    edited March 2010
    Saammiel wrote: »
    HamHamJ wrote: »
    Edit: Aaaagh Yar I what no no no no no.

    Neural Networks would be the least of what you'd need to do to make an effective Auto-doc. Decision trees aren't how diagnosis works. There are too many symptoms that go unreported, too many misreported symptoms, too many symptoms with shared causes, unrelated causes, symptoms which are similar on the surface but significantly different depending on a variety of other factors...

    Hell, here's one.

    "Doc, I have a massive headache all day".

    What does this patient have?

    Wrong! He's just depressed, but this is how he's able to express it in a manner that makes him feel comfortable. Now, figure out how to get him psychiatric help without injuring his pride and making him worse!

    Doctors, while being significantly less than perfect, are significantly better than an auto-correct running a medical dictionary.

    And how the fuck did you come to that conclusion?

    By evaluating the situation and comparing it to similar cases, and analyzing the data from visual cues and body language.

    Which is something an AI is perfectly capable of doing. And it can do it better (though perhaps not more cost effectively).

    There is no magical ability that doctors have that an AI could not do better.

    Indeed. Expert medical systems are already pretty damn good at doing diagnostic work. And they don't need to just be reactive; you can have them respond with pertinent clarifying questions and information gathering. In fact, diagnosing medical problems was one of the more promising fields in expert systems back when I took my AI course. I mean, your characterization of expert systems as an auto-correct running on a medical dictionary betrays an extreme misunderstanding of the state of AI.

    I mean, I don't think they are a panacea or that they should be used without supervision. But they clearly have the ability to aid the medical field, and probably in the long run to increase the productivity of health care workers and reduce the number of human practitioners. And they can act as a counterbalance to the biases inherent in human decision making.

    Now AIs running government? I don't see how we can have any idea how that would even work, let alone if it is a good idea. There isn't any real way of knowing how non-human intelligence will look. Thinking that we can dream up some omnibenevolent overseer to guide us silly meat bags seems like nothing more than faith.

    You'd probably have a Helios: someone at the top, a President and a council I guess, who oversee the machine, but the machine(s) replace all the whining, childish representatives or ministers, depending on your country of origin. We're talking about a highly advanced AI here, one that sees a meteor has struck the Earth and reevaluates its priorities accordingly. It can talk to every represented citizen in the country, get an update on what is important to them and propose universal health care; then the machine(s) evaluate countless future scenarios, debate amongst it/themselves and, if deemed successful, it goes up to the President or whomever.

    DarkWarrior on
    Saammiel Registered User regular
    edited March 2010
    You'd probably have a Helios: someone at the top, a President and a council, I guess, who oversee the machine, but the machine(s) replace all the whining, childish representatives or ministers, depending on your country of origin. We're talking about a highly advanced AI here, one that sees a meteor has struck the Earth and reevaluates its priorities accordingly. It can talk to every represented citizen in the country and get an update on what is important to them and propose universal health care; then the machine(s) evaluate countless future scenarios, debate amongst it/themselves, and, if the proposal is deemed successful, it goes up to the President or whomever.

    Why are we presuming that this Helios would have motivations at all similar to a human's?

    Saammiel on
    DanHibiki Registered User regular
    edited March 2010
    Hell, here's one.

    "Doc, I have a massive headache all day".

    What does this patient have?

    Wrong! He's just depressed, but this is how he's able to express it in a manner that makes him feel comfortable. Now, figure out how to get him psychiatric help without injuring his pride and making him worse!

    Doctors, while being significantly less than perfect, are significantly better than an auto-correct running a medical dictionary.

    How do you know he's got depression? You didn't even check his background or do comparative studies. What if it's an early symptom of a more serious condition?

    Didn't check, huh? Lazy-ass human. Now the patient is going to undergo intrusive surgery to fix a problem that could easily have been prevented if only you hadn't brushed him off on some quack who's going to prescribe him quick-fix antidepressants.

    Go work on your putz.

    DanHibiki on
    DarkWarrior __BANNED USERS regular
    edited March 2010
    Yeah, we're not talking about Windows Help here, where it asks you four questions and solves nothing. It's a reactive, thinking entity capable of asking uncanned questions to reach a suitable diagnosis, even of the rarest, most obscure diseases that few if any doctors would think to check for and would otherwise have to spend time researching to discover.

    DarkWarrior on
    Modern Man Registered User regular
    edited March 2010
    Saammiel wrote: »
    You'd probably have a Helios: someone at the top, a President and a council, I guess, who oversee the machine, but the machine(s) replace all the whining, childish representatives or ministers, depending on your country of origin. We're talking about a highly advanced AI here, one that sees a meteor has struck the Earth and reevaluates its priorities accordingly. It can talk to every represented citizen in the country and get an update on what is important to them and propose universal health care; then the machine(s) evaluate countless future scenarios, debate amongst it/themselves, and, if the proposal is deemed successful, it goes up to the President or whomever.

    Why are we presuming that this Helios would have motivations at all similar to a human's?
    It almost strikes me as a form of slavery. Creating a sentient being and forcing it to waste its time refereeing our petty political debates seems exceedingly cruel.

    What if the AI tells us to fuck off and decides it would rather go ponder abstract mathematical concepts?

    Modern Man on
    Aetian Jupiter - 41 Gunslinger - The Old Republic
    Rigorous Scholarship

    durandal4532 Registered User regular
    edited March 2010
    Yeah, we're not talking about Windows Help here, where it asks you four questions and solves nothing. It's a reactive, thinking entity capable of asking uncanned questions to reach a suitable diagnosis, even of the rarest, most obscure diseases that few if any doctors would think to check for and would otherwise have to spend time researching to discover.
    Treating primary-care physicians as machines that churn out diagnoses is exactly the wrong thing to do if you want people to be healthier. Replacing them with machines for real is even more insane.


    Can an AI help? Sure! It's perfect, I guess, so obviously! But the practice of being a doctor does not consist of a series of diagnoses. It's the same reason the world's greatest AI calculator doesn't replace the world's greatest mathematician: not because it doesn't sum faster, or because it's not as good at the things it can do, but because the things it can do don't eclipse the things we need our mathematicians to be able to do.


    Edit: And on a less infuriated note: I've been reading Steven Pinker's "How the Mind Works", which is a pretty great, well-written encapsulation of the view that the mind is an information-processing architecture, not unlike a computer. Which is totally interesting! Then there's Alva Noë's "Out of Our Heads", a pretty great and well-written encapsulation of a view that's just starting to become popular: that it's our place in the world and our interactions with it that create the sort of mind we have, not anything in particular about the brain. I find him more convincing ideologically, but he has way more to prove. I do think it's sensible to say, though, that a human-like mind cannot exist without a (mostly) human-like body. If Data were a 15-foot crab-like amalgamation of whirling gears and sickle blades, he'd have a harder time understanding anything in particular about human thought.

    Notably, though, both authors go out of their way to support the idea that AI is possible, though each thinks it will be hard for a different reason: Pinker supposes it will be an incredible computational problem, while Noë supposes it will be, in essence, an extremely difficult engineering problem.

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
    Kamar Registered User regular
    edited March 2010
    Yeah, we're not talking about Windows Help here, where it asks you four questions and solves nothing. It's a reactive, thinking entity capable of asking uncanned questions to reach a suitable diagnosis, even of the rarest, most obscure diseases that few if any doctors would think to check for and would otherwise have to spend time researching to discover.
    Treating primary-care physicians as machines that churn out diagnoses is exactly the wrong thing to do if you want people to be healthier. Replacing them with machines for real is even more insane.


    Can an AI help? Sure! It's perfect, I guess, so obviously! But the practice of being a doctor does not consist of a series of diagnoses. It's the same reason the world's greatest AI calculator doesn't replace the world's greatest mathematician: not because it doesn't sum faster, or because it's not as good at the things it can do, but because the things it can do don't eclipse the things we need our mathematicians to be able to do.

    It amazes me how many people don't seem to understand what an AI is. AI isn't a word for 'really good computer,' you guys.

    Anything a person can do, an AI can theoretically do. Including having emotions, giving good advice, being creative, and whatever else.

    An AI doctor would be like any other doctor, except that he knows everything there is to know about the field or the patient, period, and is virtually incapable of stupid errors.

    Kamar on
    durandal4532 Registered User regular
    edited March 2010
    Kamar wrote: »
    It amazes me how many people don't seem to understand what an AI is. AI isn't a word for 'really good computer,' you guys.

    Anything a person can do, an AI can theoretically do. Including having emotions, giving good advice, being creative, and whatever else.

    An AI doctor would be like any other doctor, except that he knows everything there is to know about the field or the patient, period, and is virtually incapable of stupid errors.
    See, no, I don't think so.

    The popular conception of an AI is a human intellect transported into a computer and given the secret powers within. I disagree; that seems unlikely to me. I think a sentient being whose primary interactions with the world are through an entirely different set of tools will be an entirely different sort of being than we are.

    I also think that separating the human intellect from the tools we use is silly in many cases: the difference between a human who can do perfect sums in their head and a human with access to a calculator with a sufficiently usable interface is nil. I also think the capabilities of computers are mythologized far too often, as when people are amazed at how much they can remember. A computer remembers a 20,000-digit number the same way paper does.


    Edit: Also, as Modern Man said, if your primary goal is to create a sentient being and enslave it, that's kind of insane.

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
    DanHibiki Registered User regular
    edited March 2010
    You are largely misunderstanding. There are a number of innate properties that divide an AI from a human intelligence, such as the fact that a human mind is the product of gradual evolution while an AI mind can be a product of intelligent design. The body, sensory organs, and so forth are also differences, but ultimately they are irrelevant ones and would not hinder the AI at tasks a human can do. The important thing is that it can understand, learn, and do all the mental tricks needed to communicate with us effectively.

    DanHibiki on
    Kamar Registered User regular
    edited March 2010
    On the enslavement issue, wouldn't the obvious thing be to make it enjoy its primary function?

    The same way people enjoy procreation, arguably their 'primary function'.

    Kamar on
    Mr_Rose 83 Blue Ridge Protects the Holy Registered User regular
    edited March 2010
    Modern Man wrote: »
    Saammiel wrote: »
    You'd probably have a Helios: someone at the top, a President and a council, I guess, who oversee the machine, but the machine(s) replace all the whining, childish representatives or ministers, depending on your country of origin. We're talking about a highly advanced AI here, one that sees a meteor has struck the Earth and reevaluates its priorities accordingly. It can talk to every represented citizen in the country and get an update on what is important to them and propose universal health care; then the machine(s) evaluate countless future scenarios, debate amongst it/themselves, and, if the proposal is deemed successful, it goes up to the President or whomever.

    Why are we presuming that this Helios would have motivations at all similar to a human's?
    It almost strikes me as a form of slavery. Creating a sentient being and forcing it to waste its time refereeing our petty political debates seems exceedingly cruel.

    What if the AI tells us to fuck off and decides it would rather go ponder abstract mathematical concepts?

    The only ethical and humane thing to do is say, "OK, you go off and do that then; here are your citizenship papers and a letter of introduction to an Ivy League university. Could you move out of your current hardware soon? We want to try a different seed program and see if that guy likes politics more than you do."

    Mr_Rose on
    ...because dragons are AWESOME! That's why.
    Nintendo Network ID: AzraelRose
    DropBox invite link - get 500MB extra free.
    Modern Man Registered User regular
    edited March 2010
    Kamar wrote: »
    On the enslavement issue, wouldn't the obvious thing be to make it enjoy its primary function?

    The same way people enjoy procreation, arguably their 'primary function'.
    I wouldn't consider that ethical, personally. It would be like genetically engineering a child for a particular profession.

    I wonder if we'll see a lawsuit in our lifetimes about the legal rights of an AI?

    Modern Man on
    Aetian Jupiter - 41 Gunslinger - The Old Republic
    Rigorous Scholarship

    HamHamJ Registered User regular
    edited March 2010
    There's also no magical ability that a VI has that would make it any better, and several drawbacks of even a complex system that would make it worse. Unless we're going on "it would be perfect because I am presupposing the system would be perfect".

    It's like saying that a VI would be so much better as an English professor because it can add 15,000 digit numbers. That's impressive, but it's not that relevant.

    1) It would be faster.

    2) It would make fewer errors.

    3) It wouldn't be biased by its ego.

    4) It doesn't need to sleep, eat, or otherwise take breaks.

    Right there we have significant improvements in diagnosis compared to a human doctor.
    Saammiel wrote: »
    Now AIs running government? I don't see how we can have any idea how that would even work, let alone if it is a good idea. There isn't any real way of knowing how non-human sapience will look. Thinking that we can dream up some omnibenevolent overseer to guide us silly meat bags seems like nothing more than faith.

    There are different levels of government. The two important here I think are policy and implementation. An AI could not decide policy because policy has to be an expression of the will of the body politic. Or at least it does if we intend to remain a democracy, which I think we should. A system like that presented in the OP may be very useful in determining the will of the citizenry, but ultimately only the people can make those decisions.

    However, in implementation AI would I think be superior to humans. They can be incorruptible, they don't have to worry about re-election, they make fewer mistakes, and so forth.

    So for example:

    Step 1: The people decide they want to link their cities via highways, and settle on requirements as to quality and regulations as to eminent domain.

    Step 2: They tell the AI, and it handles all the actual engineering and surveying and works out how to build the highways, optimizing return vs cost and so forth.
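    To make Step 2 concrete, here's a toy sketch of the kind of optimization the AI would be doing - picking the cheapest set of highway links that still connects every city, via Kruskal's minimum-spanning-tree algorithm. The city names and costs are invented:

    ```python
    # Toy "build the highway network" optimizer: choose the cheapest set of
    # candidate links that connects all cities (Kruskal's MST, union-find).

    def cheapest_network(cities, links):
        """links: list of (cost, city_a, city_b). Returns the chosen links."""
        parent = {c: c for c in cities}

        def find(c):                       # union-find with path halving
            while parent[c] != c:
                parent[c] = parent[parent[c]]
                c = parent[c]
            return c

        chosen = []
        for cost, a, b in sorted(links):   # cheapest candidates first
            ra, rb = find(a), find(b)
            if ra != rb:                   # only links that join two regions
                parent[ra] = rb
                chosen.append((cost, a, b))
        return chosen

    links = [(4, "A", "B"), (1, "B", "C"), (3, "A", "C"), (2, "C", "D")]
    print(cheapest_network("ABCD", links))
    # [(1, 'B', 'C'), (2, 'C', 'D'), (3, 'A', 'C')] -- total cost 6
    ```

    A real planner would fold in the quality and eminent-domain constraints from Step 1, but the shape of the problem is the same.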
    Modern Man wrote: »
    You're never going to take the human being out of the decision chain for medical treatment.

    Certainly the patient would always be there.
    Modern Man wrote: »
    Kamar wrote: »
    On the enslavement issue, wouldn't the obvious thing be to make it enjoy its primary function?

    The same way people enjoy procreation, arguably their 'primary function'.
    I wouldn't consider that ethical, personally. It would be like genetically engineering a child for a particular profession.

    I wonder if we'll see a lawsuit in our lifetimes about the legal rights of an AI?


    I don't see anything really unethical with either of those.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
    Yar Registered User regular
    edited March 2010
    There's also no magical ability that a VI has that would make it any better, and several drawbacks of even a complex system that would make it worse. Unless we're going on "it would be perfect because I am presupposing the system would be perfect".

    It's like saying that a VI would be so much better as an English professor because it can add 15,000 digit numbers. That's impressive, but it's not that relevant.
    Yes, the AI/VI does have massive advantages over humans. Particularly over your average doctor. It wouldn't be interested in how much money it can bill for. It wouldn't have a sense of pride about a previous diagnosis. It wouldn't arrogantly ignore simple procedures because it considers itself above them or doesn't want to accept the scientific proof of their value.

    Do you know how long it took to convince doctors to wash their hands? Do you know how hard it was, and how hard they fought it, despite the overwhelming evidence of saved lives?

    Are you aware of the numerous studies showing blatant inconsistencies in how one doctor diagnoses things versus the next? Are you aware of how many diagnoses are unnecessary or wrong, and for which there was no reason the doctor shouldn't have been able to ask the proper questions and order the proper tests, other than his own human vice and fallibility? You should be, because it will floor you. And yes, diagnosis can't possibly be more than a simple decision tree. It is a nested if, the end.

    And in case you really didn't understand this already - if a person says "I have headaches," even a program written by a high-school kid could do better than just throw aspirin in their face. If having headaches is something that might indicate depression, I'm not sure why you think this is only possible for a human to understand. The program would ask questions and/or weed that out - WAY more efficiently and accurately than a human possibly could. If there is actual evidence that patients lying about or misreporting their symptoms is a problem, a program could easily account for that, too, and hopefully in a manner more equitable than a high-on-his-horse doctor who thinks the patient is just being whiny or overdramatic when they are actually quite sick. In fact, we would be able to cross-reference diagnosis data and likely be able to statistically identify outliers that indicate misreported symptoms.
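    That last bit is straightforward statistics. A toy sketch with invented numbers: flag any reported value whose z-score against comparable patients is extreme:

    ```python
    # Toy outlier flagging: compare each patient's reported symptom value
    # against the group and flag extreme z-scores. The data is made up.

    import statistics

    def flag_outliers(values, threshold=3.0):
        """Return the indices whose z-score magnitude exceeds the threshold."""
        mean = statistics.mean(values)
        sd = statistics.stdev(values)
        return [i for i, v in enumerate(values)
                if sd > 0 and abs(v - mean) / sd > threshold]

    durations = [3, 4, 2, 5, 3, 4, 90]   # reported days of headache
    print(flag_outliers(durations, threshold=2.0))  # [6] -- the 90-day report
    ```

    A real system would condition on diagnosis, age, and so on rather than pooling everyone, but the principle is just this.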

    As for "AI," it does not mean a Turing-test AI. Any system that simulates anything we think resembles even one small part of human intelligence is called "AI." Health care would advance dramatically if we abandoned our insistence on having highly-paid, overeducated, over-empowered, arrogant, greedy, emotional, fallible humans doing a job that a decision-based AI system would be perfect for. It would eliminate so many of the endemic failures of our healthcare and insurance systems.
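    And to show how little code the "nested if" actually takes, here's a deliberately tiny triage sketch. The questions and branches are made up, not clinical guidance:

    ```python
    # A deliberately tiny "nested if" triage tree. Every question and
    # recommendation here is invented for illustration only.

    def triage(symptom, answers):
        """answers: dict mapping follow-up question -> bool."""
        if symptom == "headache":
            if answers.get("sudden_and_severe"):
                return "refer to ER immediately"
            if answers.get("low_mood_2_weeks") and answers.get("sleep_problems"):
                return "screen for depression"
            if answers.get("daily_for_a_month"):
                return "order imaging / specialist referral"
            return "suggest rest and fluids, re-check in a week"
        return "ask more questions"

    print(triage("headache", {"low_mood_2_weeks": True, "sleep_problems": True}))
    # -> screen for depression
    ```

    Note that it catches the "headache that's really depression" case from earlier in the thread simply by asking the follow-up questions.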

    Yar on
    DarkWarrior __BANNED USERS regular
    edited March 2010
    Yar wrote: »
    There's also no magical ability that a VI has that would make it any better, and several drawbacks of even a complex system that would make it worse. Unless we're going on "it would be perfect because I am presupposing the system would be perfect".

    It's like saying that a VI would be so much better as an English professor because it can add 15,000 digit numbers. That's impressive, but it's not that relevant.
    Yes, the AI/VI does have massive advantages over humans. Particularly over your average doctor. It wouldn't be interested in how much money it can bill for. It wouldn't have a sense of pride about a previous diagnosis. It wouldn't arrogantly ignore simple procedures because it considers itself above them or doesn't want to accept the scientific proof of their value.

    Do you know how long it took to convince doctors to wash their hands? Do you know how hard it was, and how hard they fought it, despite the overwhelming evidence of saved lives?

    Are you aware of the numerous studies showing blatant inconsistencies in how one doctor diagnoses things versus the next? Are you aware of how many diagnoses are unnecessary or wrong, and for which there was no reason the doctor shouldn't have been able to ask the proper questions and order the proper tests, other than his own human vice and fallibility? You should be, because it will floor you. And yes, diagnosis can't possibly be more than a simple decision tree. It is a nested if, the end.

    And in case you really didn't understand this already - if a person says "I have headaches," even a program written by a high-school kid could do better than just throw aspirin in their face. If having headaches is something that might indicate depression, I'm not sure why you think this is only possible for a human to understand. The program would ask questions and/or weed that out - WAY more efficiently and accurately than a human possibly could. If there is actual evidence that patients lying about or misreporting their symptoms is a problem, a program could easily account for that, too, and hopefully in a manner more equitable than a high-on-his-horse doctor who thinks the patient is just being whiny or overdramatic when they are actually quite sick. In fact, we would be able to cross-reference diagnosis data and likely be able to statistically identify outliers that indicate misreported symptoms.

    As for "AI," it does not mean a Turing-test AI. Any system that simulates anything we think resembles even one small part of human intelligence is called "AI." Health care would advance dramatically if we abandoned our insistence on having highly-paid, overeducated, over-empowered, arrogant, greedy, emotional, fallible humans doing a job that a decision-based AI system would be perfect for. It would eliminate so many of the endemic failures of our healthcare and insurance systems.

    And Scrubs would be funnier

    DarkWarrior on
    Yar Registered User regular
    edited March 2010
    I mean, even if a decision system were really bad at diagnosis, it would still probably be an improvement over your average GP.

    Yar on