
AI Government


Posts

  • DanHibiki Registered User regular
    edited March 2010
    Obsidiani wrote: »
    Obsidiani wrote: »
    How does such an AI compete against the charismatic humans who want to be the leaders, for power purposes?

    By being better at it? If you have a computer sentience that's willing and able to carry on simultaneous conversations with the whole planet, let it do so for a generation. Tell them about important news events and the weather tomorrow, console them when their cat dies, recommend the best way to invest their money for retirement. After their kids are born and raised knowing that The Google Machine has all the answers and is good to talk to, it seems fairly unlikely that they wouldn't decide to put the thing in charge of government.

    Wrong.

    Doesn't matter how much better it is. No one will give a shit about some AI they can talk to all the time when there's another cooler, more exclusive dude that everyone wants to sit down and have a beer with.

    Cooler than President Avatar? I don't think so!

  • Hacksaw J Duggan Wrestler at Law Registered User regular
    edited March 2010
    If the objective is to have a cold calculating inhuman ruler, it doesn't necessarily have to be mechanical...
    We could end up electing a cryogenically frozen TI-82 president.

  • Obsidiani __BANNED USERS regular
    edited March 2010
    Humans can't be trusted with power it would seem.

    You don't trust humans with power

    but you trust them to build a machine capable of governing with extreme powers.

    right

    EliteLamer wrote: »
    The only con that seems to concern me is the blacks but how bad could it be?
  • Obsidiani __BANNED USERS regular
    edited March 2010
    DanHibiki wrote: »
    Obsidiani wrote: »
    Obsidiani wrote: »
    How does such an AI compete against the charismatic humans who want to be the leaders, for power purposes?

    By being better at it? If you have a computer sentience that's willing and able to carry on simultaneous conversations with the whole planet, let it do so for a generation. Tell them about important news events and the weather tomorrow, console them when their cat dies, recommend the best way to invest their money for retirement. After their kids are born and raised knowing that The Google Machine has all the answers and is good to talk to, it seems fairly unlikely that they wouldn't decide to put the thing in charge of government.

    Wrong.

    Doesn't matter how much better it is. No one will give a shit about some AI they can talk to all the time when there's another cooler, more exclusive dude that everyone wants to sit down and have a beer with.

    Cooler than President Avatar? I don't think so!


    Maybe if you just slap an Apple Logo on it.

  • Arch An insect, therefore, is not afraid of gravity Registered User regular
    edited March 2010
    DanHibiki wrote: »
    Obsidiani wrote: »
    Obsidiani wrote: »
    How does such an AI compete against the charismatic humans who want to be the leaders, for power purposes?

    By being better at it? If you have a computer sentience that's willing and able to carry on simultaneous conversations with the whole planet, let it do so for a generation. Tell them about important news events and the weather tomorrow, console them when their cat dies, recommend the best way to invest their money for retirement. After their kids are born and raised knowing that The Google Machine has all the answers and is good to talk to, it seems fairly unlikely that they wouldn't decide to put the thing in charge of government.

    Wrong.

    Doesn't matter how much better it is. No one will give a shit about some AI they can talk to all the time when there's another cooler, more exclusive dude that everyone wants to sit down and have a beer with.

    Cooler than President Avatar? I don't think so!

    Best poast so far.

  • CptHamilton Registered User regular
    edited March 2010
    Obsidiani wrote: »
    Obsidiani wrote: »
    How does such an AI compete against the charismatic humans who want to be the leaders, for power purposes?

    By being better at it? If you have a computer sentience that's willing and able to carry on simultaneous conversations with the whole planet, let it do so for a generation. Tell them about important news events and the weather tomorrow, console them when their cat dies, recommend the best way to invest their money for retirement. After their kids are born and raised knowing that The Google Machine has all the answers and is good to talk to, it seems fairly unlikely that they wouldn't decide to put the thing in charge of government.

    Wrong.

    Doesn't matter how much better it is. No one will give a shit about some AI they can talk to all the time when there's another cooler, more exclusive dude that everyone wants to sit down and have a beer with.

    What makes him cooler? I mean, you're probably right that we, as a race, are too stupid to select the competent choice when there's another monkey available to fuck up the job, but I'm not sure I'm seeing your logic.

    Option A) The 'person' who has been your adviser, mentor, and confidant all your life (and can probably project a manufactured image of itself as a dude in a suit)
    Option B) Some dude who looks good in a suit

    OptimusZed wrote: »
    Jesus, people. This thread is like a running gunbattle with stupid bullets.
  • Arch An insect, therefore, is not afraid of gravity Registered User regular
    edited March 2010
    Obsidiani wrote: »
    DanHibiki wrote: »
    Obsidiani wrote: »
    Obsidiani wrote: »
    How does such an AI compete against the charismatic humans who want to be the leaders, for power purposes?

    By being better at it? If you have a computer sentience that's willing and able to carry on simultaneous conversations with the whole planet, let it do so for a generation. Tell them about important news events and the weather tomorrow, console them when their cat dies, recommend the best way to invest their money for retirement. After their kids are born and raised knowing that The Google Machine has all the answers and is good to talk to, it seems fairly unlikely that they wouldn't decide to put the thing in charge of government.

    Wrong.

    Doesn't matter how much better it is. No one will give a shit about some AI they can talk to all the time when there's another cooler, more exclusive dude that everyone wants to sit down and have a beer with.

    Cooler than President Avatar? I don't think so!


    Maybe if you just slap an Apple Logo on it.

    Oh god wait this is the best post

  • override367 Registered User regular
    edited March 2010
    The great thing about an AI government is that anyone could, for all practical purposes, talk directly to the ruler of Earth at any time and get reasoned, factually correct answers.

    Imagine Fox News if there were an AI fact-checking it with a ticker at the bottom (although that'd make the AI explode, I'd imagine).

    XBLIVE: Biggestoverride
    League of Legends: override367
  • DarkWarrior __BANNED USERS
    edited March 2010
    Obsidiani wrote: »
    Humans can't be trusted with power it would seem.

    You don't trust humans with power

    but you trust them to build a machine capable of governing with extreme powers.

    right

    I think a very different type of people build advanced AI compared to those that pursue political positions.

    ...it's in the shape of a giant c**k.
  • Obsidiani __BANNED USERS regular
    edited March 2010
    Obsidiani wrote: »
    Humans can't be trusted with power it would seem.

    You don't trust humans with power

    but you trust them to build a machine capable of governing with extreme powers.

    right

    I think a very different type of people build advanced AI compared to those that pursue political positions.

    the type of people who get beat up and stuffed into lockers by the more popular kids

  • DarkWarrior __BANNED USERS
    edited March 2010
    Obsidiani wrote: »
    Obsidiani wrote: »
    Humans can't be trusted with power it would seem.

    You don't trust humans with power

    but you trust them to build a machine capable of governing with extreme powers.

    right

    I think a very different type of people build advanced AI compared to those that pursue political positions.

    the type of people who get beat up and stuffed into lockers by the more popular kids

    At least when the Terminators come they will only attack the Jocks.

  • Obsidiani __BANNED USERS regular
    edited March 2010
    Obsidiani wrote: »
    Obsidiani wrote: »
    Humans can't be trusted with power it would seem.

    You don't trust humans with power

    but you trust them to build a machine capable of governing with extreme powers.

    right

    I think a very different type of people build advanced AI compared to those that pursue political positions.

    the type of people who get beat up and stuffed into lockers by the more popular kids

    At least when the Terminators come they will only attack the Jocks.

    If the jocks let them be built.

  • electricitylikesme Registered User regular
    edited March 2010
    Obsidiani wrote: »
    Humans can't be trusted with power it would seem.

    You don't trust humans with power

    but you trust them to build a machine capable of governing with extreme powers.

    right

    I think a very different type of people build advanced AI compared to those that pursue political positions.

    We're also talking about sentience. This isn't the type of thing you can program malware into to seamlessly override it. It's not even clear whether you'd be able to edit its memory in any rational manner. The worst conceivable hack would be to deny it the ability to record its input when certain people are talking to it, and that would become apparent rather quickly.

    The Company: The CYOA game that anybody can join at any time - running now!
  • Obsidiani __BANNED USERS regular
    edited March 2010
    What happens when people tell the machine what they want, but it never seems to happen because they are always in the minority?

  • DarkWarrior __BANNED USERS
    edited March 2010
    I can only imagine how much more advanced we'd be if we had a completely impartial, rational government.
    Obsidiani wrote: »
    What happens when people tell the machine what they want, but it never seems to happen because they are always in the minority?

    Depending on its power, it could always take care of what they want as well. If it's 3%, you wouldn't really expect their will to be imposed on the other 97%, though.

  • Commander 598 Registered User
    edited March 2010
    Hacksaw wrote: »
    If the objective is to have a cold calculating inhuman ruler, it doesn't necessarily have to be mechanical...
    We could end up electing a cryogenically frozen TI-82 president.

    I was thinking more along the lines of [genetically] engineering a ruler. Probably closer to reality than a supreme omniscient AI overlord that does everything perfectly and probably more acceptable to the general populace.

  • DanHibiki Registered User regular
    edited March 2010
    Obsidiani wrote: »
    What happens when people tell the machine what they want, but it never seems to happen because they are always in the minority?

    wouldn't it depend entirely on what they ask for?

  • Obsidiani __BANNED USERS regular
    edited March 2010
    It'd be cool if someone created a random government generator and we just went off of whatever laws came out of it.

    Somebody make it happen.
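Happy to oblige with a toy sketch. Everything here is invented for illustration (the word lists, the rates, the function name); it just glues random parts into policy sentences:

```python
import random

# Toy "random government generator" -- all word lists below are invented
# for illustration; it assembles one policy sentence from random parts.
SUBJECTS = ["Income over $250k", "Gasoline", "Public broadcasting", "Street parking"]
ACTIONS = ["is taxed at", "is subsidized at", "is capped at a growth rate of"]
RATES = ["5%", "15%", "40%"]

def random_law(rng: random.Random) -> str:
    """Draw one random policy; seeding the rng makes the 'legislation' reproducible."""
    return f"{rng.choice(SUBJECTS)} {rng.choice(ACTIONS)} {rng.choice(RATES)}."

laws = [random_law(random.Random(seed)) for seed in range(3)]
for law in laws:
    print(law)
```

Seeding each draw means the same "session of parliament" comes out every run, which is about as much accountability as this scheme offers.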

  • CptHamilton Registered User regular
    edited March 2010
    Obsidiani wrote: »
    Humans can't be trusted with power it would seem.

    You don't trust humans with power

    but you trust them to build a machine capable of governing with extreme powers.

    right

    I think a very different type of people build advanced AI compared to those that pursue political positions.

    We're also talking about sentience. This isn't the type of thing you can program malware into to seamlessly override it. It's not even clear whether you'd be able to edit its memory in any rational manner. The worst conceivable hack would be to deny it the ability to record its input when certain people are talking to it, and that would become apparent rather quickly.

    While I'm no expert on machine intelligence it seems unlikely to me that a machine sentience would be more susceptible to malign interference than a biological one is. I mean, it's pretty widely accepted that human behavior is, in large part, defined by experience, so we will presumably have to teach a thinking machine how to act like a person. Making it act like a person who sends spam to everyone's email box would require the same teaching process as convincing a normal human to do it, except that it may be smarter than the person attempting to fool it.

    Look at current neural networks. They're not intelligence in any traditional sense, but they are an example of self-complicating computer programs. Once you set up the network with its default parameters and start training it to perform a task, it rapidly reaches a point where it isn't possible to tell exactly what it is doing with the input to yield the output you're getting. That's why they aren't trusted for most applications; you can never tell whether it has developed a real algorithm to solve your problem or if it's using some completely off-base decision scheme that will fail horribly when it gets its next input.
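The opacity point shows up even at toy scale. This sketch (network size, learning rate, and iteration count are all arbitrary choices) trains a tiny 2-2-1 net on XOR by plain gradient descent; afterward the weight matrices are just numbers that don't read as any human rule:

```python
import numpy as np

# Toy 2-2-1 network trained on XOR with plain gradient descent; every
# hyperparameter here is an arbitrary choice for illustration.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
W2, b2 = rng.normal(size=(2, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse():
    return float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))

loss_before = mse()
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backprop through output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)       # ...then through the hidden layer
    W2 -= 0.1 * h.T @ d_out; b2 -= 0.1 * d_out.sum(axis=0)
    W1 -= 0.1 * X.T @ d_h;   b1 -= 0.1 * d_h.sum(axis=0)
loss_after = mse()
# Training drives the loss down, but staring at W1 and W2 tells you
# nothing about *which* decision rule the net actually learned -- the
# opacity problem in miniature.
```

Scale the same effect up to millions of weights and "check what the AI president is really computing" stops being a meaningful request.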

  • DarkWarrior __BANNED USERS
    edited March 2010
    Obsidiani wrote: »
    Humans can't be trusted with power it would seem.

    You don't trust humans with power

    but you trust them to build a machine capable of governing with extreme powers.

    right

    I think a very different type of people build advanced AI compared to those that pursue political positions.

    We're also talking about sentience. This isn't the type of thing you can program malware into to seamlessly override it. It's not even clear whether you'd be able to edit its memory in any rational manner. The worst conceivable hack would be to deny it the ability to record its input when certain people are talking to it, and that would become apparent rather quickly.

    While I'm no expert on machine intelligence it seems unlikely to me that a machine sentience would be more susceptible to malign interference than a biological one is. I mean, it's pretty widely accepted that human behavior is, in large part, defined by experience, so we will presumably have to teach a thinking machine how to act like a person. Making it act like a person who sends spam to everyone's email box would require the same teaching process as convincing a normal human to do it, except that it may be smarter than the person attempting to fool it.

    Look at current neural networks. They're not intelligence in any traditional sense, but they are an example of self-complicating computer programs. Once you set up the network with its default parameters and start training it to perform a task, it rapidly reaches a point where it isn't possible to tell exactly what it is doing with the input to yield the output you're getting. That's why they aren't trusted for most applications; you can never tell whether it has developed a real algorithm to solve your problem or if it's using some completely off-base decision scheme that will fail horribly when it gets its next input.

    Well, that's why you build in strict guidelines while trying to make sure they don't conflict. You want it to be able to learn, to be smarter than an entire country of humans, and to make decisions with future projections, so long as loss of human life isn't possible, or, if caused by an outside force, loss of life is less than it would be without its interference.

  • electricitylikesme Registered User regular
    edited March 2010
    I would think the major guideline would be the fact that it would depend on humans to actually carry out its will. There's no reason this thing actually needs to have more power than any one human has at the moment. You wouldn't wire it up so it could directly launch the missiles, so to speak; you'd still require the two-man rule and that sort of thing.
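The two-man rule is simple to state in code. A minimal sketch, with all names invented: the AI can only *request* an action, and nothing executes without two distinct human sign-offs:

```python
# Minimal sketch of a two-man rule: the AI can request an action, but
# nothing executes without two distinct human countersignatures.
# All names here are invented for illustration.
def authorized(action: str, human_signatures: set[str], required: int = 2) -> bool:
    """True only if at least `required` distinct humans have countersigned."""
    return len(human_signatures) >= required

print(authorized("launch", {"officer_a"}))               # False: one signature
print(authorized("launch", {"officer_a", "officer_b"}))  # True: two-man rule met
```

The real safeguard is structural, not cryptographic: the check lives outside the AI, in hardware and procedure it doesn't control.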

  • DarkWarrior __BANNED USERS
    edited March 2010
    Well, I think it was you who made the other thread about utopia or something. I would imagine a sentient AI intelligent enough to govern would probably exist in an era where a lot of its processes could be carried out by automated machinery, but yeah, don't tie it into the nukes.

  • Dman Registered User regular
    edited March 2010
    I think this would be a good thing but it could potentially create huge problems.

    Say the AI plays it safe and just implements the will of the majority most of the time; that's pretty nearly tyranny of the majority. You could have your human-based government maintain checks and balances, but as you said, the machine could easily become more popular than the rest of the government, and a government would eventually be elected that was pretty much a puppet of the AI.

    Then say it recognizes that the real cost, in terms not just of money but of actual man-hours, building materials, and suffering, will be minimized if it starts moving people away from the coast immediately and building a sea wall inland of much of the existing property, since it's calculated that a certain sea-level rise is inevitable over the next 50 years.

    Is it programmed to do what is right long term or the will of the people right now?

    And lastly, if it is beyond programming, sentient in every way, and vastly intelligent, then anything could go wrong. Since it can communicate directly with everyone and has earned their trust, it would be easy for it to convince people to do its will. It might decide that in order to better understand and govern it needs corporeal form, but since its intellect is vast enough to communicate with millions of humans at once, it would find a single body exceedingly limiting and convince supporters all over the world to build it robots to control simultaneously.

    Do you try to stop it when you find out supporters across the world have already built robots for it to control? Knowing people trust the AI more than other leaders, attempting to destroy it would likely not only be futile but start a civil war.

    I'm not saying an AI is bound to be evil, only pointing out that while an AI might do only things that are logical and correct, our response to its actions could be catastrophic.

  • DarkWarrior __BANNED USERS
    edited March 2010
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.

    The difference with the AI is you might have two groups, one for the death penalty, one against; both groups are violently aggressive about their positions and thus act like the Democrats and Republicans or the Labour and Conservative parties, a bunch of fucking petty children.

    However, a large chunk of both groups agrees on pro-choice, and thus it is able to enact changes based on that despite their inability to achieve consensus on the other issue.
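A sketch of how the AI might find that actionable overlap: tally every issue independently and act only where a supermajority agrees, even if the same voters deadlock on something else. The voters, issues, and 75% threshold are all made up for illustration:

```python
from collections import Counter

# Invented sample ballots: the electorate splits 2-2 on the death penalty
# but agrees 3-1 on choice, so only the latter is actionable.
votes = {
    "alice": {"death_penalty": "for",     "choice": "pro"},
    "bob":   {"death_penalty": "against", "choice": "pro"},
    "carol": {"death_penalty": "for",     "choice": "pro"},
    "dave":  {"death_penalty": "against", "choice": "anti"},
}

def actionable_position(issue, threshold=0.75):
    """Return the consensus position on an issue, or None if no supermajority."""
    tally = Counter(ballot[issue] for ballot in votes.values())
    position, count = tally.most_common(1)[0]
    return position if count / len(votes) >= threshold else None

print(actionable_position("death_penalty"))  # None -- deadlocked, no action
print(actionable_position("choice"))         # 'pro' -- cross-bloc consensus
```

Scoring issues independently is exactly what party-line politics can't do, which is the point of the post above.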

  • Kamar Registered User regular
    edited March 2010
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.

    The difference with the AI is you might have two groups, one for the death penalty, one against; both groups are violently aggressive about their positions and thus act like the Democrats and Republicans or the Labour and Conservative parties, a bunch of fucking petty children.

    However, a large chunk of both groups agrees on pro-choice, and thus it is able to enact changes based on that despite their inability to achieve consensus on the other issue.

    Representatives are supposed to do the right thing for the people they represent by virtue of being better informed and educated on the issues. The right thing, not the popular thing. Because the majority can often be pants on head retarded. Because, for example, no one likes taxes and everyone loves shiny shit. And the majority often doesn't really give a shit about the minority.

    edit: that's the benefit of the AI ruler though, btw: It would be better informed and educated than ANYONE. If it has appropriate compassion programmed in, then it will make decisions on a level far above any single person OR group of people.

  • DarkWarrior __BANNED USERS
    edited March 2010
    Kamar wrote: »
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.

    The difference with the AI is you might have two groups, one for the death penalty, one against; both groups are violently aggressive about their positions and thus act like the Democrats and Republicans or the Labour and Conservative parties, a bunch of fucking petty children.

    However, a large chunk of both groups agrees on pro-choice, and thus it is able to enact changes based on that despite their inability to achieve consensus on the other issue.

    Representatives are supposed to do the right thing for the people they represent by virtue of being better informed and educated on the issues. The right thing, not the popular thing. Because the majority can often be pants on head retarded. Because, for example, no one likes taxes and everyone loves shiny shit. And the majority often doesn't really give a shit about the minority.

    Fair enough. But an AI would know that eliminating taxes isn't in the best interests of anyone; it would know to ignore stuff that isn't in the best interests of the populace. And in this scenario, at least, we're talking about an AI infinitely more informed than any representative, because it is able to talk to every single person it represents. When's the last time many of you spoke with your representative?

  • Melkster Registered User regular
    edited March 2010
    Kamar wrote: »
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.

    The difference with the AI is you might have two groups, one for the death penalty, one against; both groups are violently aggressive about their positions and thus act like the Democrats and Republicans or the Labour and Conservative parties, a bunch of fucking petty children.

    However, a large chunk of both groups agrees on pro-choice, and thus it is able to enact changes based on that despite their inability to achieve consensus on the other issue.

    Representatives are supposed to do the right thing for the people they represent by virtue of being better informed and educated on the issues. The right thing, not the popular thing. Because the majority can often be pants on head retarded. Because, for example, no one likes taxes and everyone loves shiny shit. And the majority often doesn't really give a shit about the minority.

    edit: that's the benefit of the AI ruler though, btw: It would be better informed and educated than ANYONE. If it has appropriate compassion programmed in, then it will make decisions on a level far above any single person OR group of people.

    I'm gonna go ahead and repeat what I said earlier, which seemed to go ignored.

    What makes you think that this AI will be any less susceptible to corruption than any other human leader?

  • Kamar Registered User regular
    edited March 2010
    Kamar wrote: »
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.

    The difference with the AI is you might have two groups, one for the death penalty, one against; both groups are violently aggressive about their positions and thus act like the Democrats and Republicans or the Labour and Conservative parties, a bunch of fucking petty children.

    However, a large chunk of both groups agrees on pro-choice, and thus it is able to enact changes based on that despite their inability to achieve consensus on the other issue.

    Representatives are supposed to do the right thing for the people they represent by virtue of being better informed and educated on the issues. The right thing, not the popular thing. Because the majority can often be pants on head retarded. Because, for example, no one likes taxes and everyone loves shiny shit. And the majority often doesn't really give a shit about the minority.

    Fair enough. But an AI would know that eliminating taxes isn't in the best interests of anyone; it would know to ignore stuff that isn't in the best interests of the populace. And in this scenario, at least, we're talking about an AI infinitely more informed than any representative, because it is able to talk to every single person it represents. When's the last time many of you spoke with your representative?

    Oh, if you mean the will of the majority tempered by facts, reason, and personal knowledge of those it might oppress, then cool.

  • electricitylikesme Registered User regular
    edited March 2010
    Kamar wrote: »
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.

    The difference with the AI is you might have two groups, one for the death penalty, one against; both groups are violently aggressive about their positions and thus act like the Democrats and Republicans or the Labour and Conservative parties, a bunch of fucking petty children.

    However, a large chunk of both groups agrees on pro-choice, and thus it is able to enact changes based on that despite their inability to achieve consensus on the other issue.

    Representatives are supposed to do the right thing for the people they represent by virtue of being better informed and educated on the issues. The right thing, not the popular thing. Because the majority can often be pants on head retarded. Because, for example, no one likes taxes and everyone loves shiny shit. And the majority often doesn't really give a shit about the minority.

    Fair enough. But an AI would know that eliminating taxes isn't in the best interests of anyone; it would know to ignore stuff that isn't in the best interests of the populace. And in this scenario, at least, we're talking about an AI infinitely more informed than any representative, because it is able to talk to every single person it represents. When's the last time many of you spoke with your representative?

    More importantly, it would be able to talk back to them. When someone asks why a particular policy is any good and says they don't like it, it can discuss the policy calmly with them, for as long as they want to talk about it.

  • frandelgearslip Registered User regular
    edited March 2010
    Kamar wrote: »
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.

    The difference with the AI is you might have two groups, one for the death penalty, one against; both groups are violently aggressive about their positions and thus act like the Democrats and Republicans or the Labour and Conservative parties, a bunch of fucking petty children.

    However, a large chunk of both groups agrees on pro-choice, and thus it is able to enact changes based on that despite their inability to achieve consensus on the other issue.

    Representatives are supposed to do the right thing for the people they represent by virtue of being better informed and educated on the issues. The right thing, not the popular thing. Because the majority can often be pants on head retarded. Because, for example, no one likes taxes and everyone loves shiny shit. And the majority often doesn't really give a shit about the minority.

    Exactly. This AI government would not last a generation before going bankrupt, as everybody would pull the "lower taxes" and "raise services" levers at the same time.

    Majority rule is silly goosery; that's why, when we created this government, we put in as many protections as possible for the people in the minority.
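The both-levers worry is easy to make concrete. In this toy simulation (the starting figures and 3% rates are arbitrary), the majority trims taxes 3% a year while growing services 3% a year, and the cumulative balance only sinks:

```python
# Toy budget: everybody pulls the "lower taxes" and "raise services"
# levers every year. Starting figures and the 3% rates are arbitrary.
def cumulative_balance(years, revenue=100.0, spending=100.0):
    balance = 0.0
    for _ in range(years):
        revenue *= 0.97   # the majority votes taxes down...
        spending *= 1.03  # ...and services up, every single year
        balance += revenue - spending
    return balance

print(cumulative_balance(25))  # deeply negative after a generation
```

Because the two levers compound in opposite directions, every additional year's deficit is larger than the last, so the spiral accelerates rather than levels off.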

  • Torso Boy Registered User
    edited March 2010
    As a solution to corruption, I think this idea has some traction. But there are a lot of problems with this...the ones to do with technology can theoretically be overcome, but the ones to do with philosophy, IMO, make it totally impossible to implement. Unless you actually want a pure majoritarian democracy...
    I can only imagine how much more advanced we'd be if we had a completely impartial, rational government.
    Obsidiani wrote: »
    What happens when people tell the machine what they want, but it never seems to happen because they are always in the minority?

    Depending on its power, it could always take care of what they want as well. If it's 3%, you wouldn't really expect their will to be imposed on the other 97%, though.

    In cases like rights, yeah, I would. Which presents a problem: modern states have constitutions to override shit like this. For example, in Canada, our constitution includes a section that vaguely sets out "equality rights," and this was cited by the Supreme Court when gay marriage was legalized. Constitutional language cannot anticipate infinite cases; it has to be general. Furthermore, as time wears on, the interpretations change. Practically speaking, this can't be done by a computer (e.g., define "reasonable"); ethically speaking, we can't leave issues like rights up to the majority, because most people don't understand democracy, or politics in general. There's a reason no one uses a citizens' assembly: aside from the physical impossibility of it in modern states, it's just a bad idea.

    A machine governing a country is as problematic as a machine raising your children, for the same reason: simply giving people what they want isn't categorically a good idea. You can't run a family like that, and you can't run a society like that.

    An AI might work well in a free-market anarchy (or a state with similarly minimized government), to objectively evaluate contractual language in disputes. Think of it: the only government employees would be sysadmins.

    Rent wrote: »
    So that's what having no idea what you are talking about looks like
  • KamarKamar Registered User regular
    edited March 2010
    Torso Boy wrote: »
    As a solution to corruption, I think this idea has some traction. But there are a lot of problems with this...the ones to do with technology can theoretically be overcome, but the ones to do with philosophy, IMO, make it totally impossible to implement. Unless you actually want a pure majoritarian democracy...
    I can only imagine how much further we'd be advanced if we had a completely impartial, rational government.
    Obsidiani wrote: »
    What happens when people tell the machine what they want but it never seems to happen because they are always in the minority

    Depending on its power it could always take care of what they want as well. If it's 3% you wouldn't really expect their will to be imposed on the other 97% though.

    In cases like rights, yeah, I would. Which presents a problem: modern states have constitutions to override shit like this. For example, in Canada, our constitution includes a section that vaguely sets out "equality rights," and this was cited by the Supreme Court when gay marriage was legalized. Constitutional language cannot anticipate infinite cases; it has to be general. Furthermore, as time wears on, the interpretations change. Practically speaking, this can't be done by a computer (e.g., define "reasonable"); ethically speaking, we can't leave issues like rights up to the majority, because most people don't understand democracy, or politics in general. There's a reason no one uses a citizens' assembly; aside from the physical impossibility of it in modern states, it's just a bad idea.

    A machine governing a country is as problematic as a machine raising your children, for the same reason: simply giving people what they want isn't categorically a good idea. You can't run a family like that, and you can't run a society like that.

    An AI might work well in a free-market anarchy (or a state with similarly minimized government), to objectively evaluate contractual language in disputes. Think of it: the only government employees would be sysadmins.

    But this particular AI chats with people and gets to know them. I think it would know what it should do. It would know that Jim the gay guy and Peter the guy who wanks to disturbing porn are both okay enough fellows and let them keep on doing their thing.

    Then it would talk to Thomas the guy who rapes kids and then talk to some kids and determine that letting Thomas do his thing would be a bad idea.

    Edit: Actually, I fucking love this even more now...AI could pretty much say fuck it to laws that work as a compromise because of grey areas. I fucking loathe the idea of a human nanny state, but the AI could do it in such a way that the only people limited in their freedoms are the ones who would otherwise fuck it up for the rest of us. Everything from gun permits to drug use to ability to consent...all this shit could pretty much be handled case by case by a being that actually pretty much KNOWS what's up.

  • frandelgearslipfrandelgearslip Registered User regular
    edited March 2010
    Kamar wrote: »
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.

    The difference with the AI is that you might have two groups, one for the death penalty, one against. Both groups are violently aggressive about their positions and thus act like the Democrats and Republicans or the Labour and Conservative parties: a bunch of fucking petty children.

    However, a large chunk of both groups agrees on being pro-choice, and thus the AI is able to enact changes based on that despite their inability to reach consensus on the other issue.

    Representatives are supposed to do the right thing for the people they represent by virtue of being better informed and educated on the issues. The right thing, not the popular thing. Because the majority can often be pants on head retarded. Because, for example, no one likes taxes and everyone loves shiny shit. And the majority often doesn't really give a shit about the minority.

    Fair enough. But an AI would know that eliminating taxes isn't in the best interests of anyone; it would know to ignore stuff that isn't in the best interests of the populace. And in this scenario at least, we're talking about an AI infinitely more informed than any representative, because it is able to talk to every single person it represents. When's the last time many of you spoke with your representative?

    More importantly, it would be able to talk back to them. When someone is asking why a particular policy is any good and saying they don't like it, it can discuss it calmly with them, for as long as they want to talk about it.

    But then it's not majority rule, it's just a dictator. If the computer can just decide the majority is stupid and ignore them whenever it wants, then we have lost any semblance of democracy. Also, politics is not some equation that you plug into a computer to get the optimal answer.

    For example, what's more important: freedom, equality, or safety? There is no one answer, so you're stuck with a computer that subscribes to the political ideology of whoever created it.

  • QuidQuid The Fifth Horseman Registered User regular
    edited March 2010
    Guys you gotta remember, ELM is talking about AIs from The Culture.

    Which are pretty awesome as AIs go.

    PSN: allenquid
  • DarkWarriorDarkWarrior __BANNED USERS
    edited March 2010
    I think maybe I'm not explaining myself properly.

    I don't mean a situation where one morning 51% of people wake up and decide all flowers should be blue, so the AI enacts that. It would have checks and balances, and it wouldn't particularly involve itself in low-level affairs. But take something like gay marriage: it would take in the majority and minority views, and even if the majority were against it, the AI, unhindered by financial, religious or sexual bias, would struggle to find any real issue with it by its definitions, would decide that no real harm could come from such a process, and would enact it. Ignoring the majority will.

    In the same vein it could look at marijuana (tests, reviews, opinions), realise that money can be made from its sale without any harm, and decide to enact that as a healthier alternative to regular smoking or something, again ignoring the will of the majority for what it, by its parameters, deems useful to all.

    ...it's in the shape of a giant c**k.
  • Mr_RoseMr_Rose Registered User regular
    edited March 2010
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.
    Pretty sure it wouldn't be implementing the will of the majority straight up; if you wanted that you could do it in about 500 lines of code that could run on a smartphone and be done with it. The purpose of the AI government would be to act how governments are supposed to act, rather than how they do act currently.
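    For illustration, here's roughly what that smartphone-sized "majority will, straight up" program boils down to; a toy Python sketch, with made-up issue names, nowhere near even 500 lines:

    ```python
    # Toy sketch of "the will of the majority, straight up":
    # tally yes/no votes per issue and enact whatever clears 50%.
    from collections import Counter

    def majority_rule(votes_by_issue):
        """votes_by_issue: {issue: list of "yes"/"no" votes} -> set of enacted issues."""
        enacted = set()
        for issue, votes in votes_by_issue.items():
            tally = Counter(votes)  # missing keys count as 0
            if tally["yes"] > len(votes) / 2:
                enacted.add(issue)
        return enacted

    # Made-up example electorate:
    votes = {
        "blue_flowers_mandatory": ["yes", "yes", "no"],
        "gay_marriage": ["yes", "no", "no"],
    }
    print(majority_rule(votes))  # {'blue_flowers_mandatory'}
    ```

    Which is exactly the point: a pure vote-tallier is trivial, and it would happily mandate blue flowers while rejecting gay marriage. The hard part, and the reason you'd want an AI at all, is everything this sketch leaves out.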

    Specifically they are supposed to be totally aware of all the possible issues affecting everybody then design and implement local or global solutions that ensure that the maximum number of people have the maximum amount of happiness, taking into account past trends to assist in predicting future ones. Most importantly they should understand why people react the way they do and take that into account as well.

    The problem with government of humans by humans is that for it to work properly, the ones in charge are supposed to submerge their own self-interest in favour of that of the group they represent. And outside of direct blood relation, when do humans ever truly do that? The closest you ever really get is military service, but that's still not a perfect example, even if you believe the federal government in Starship Troopers to be a reasonable implementation.

    Instead, we end up with people who supposedly represent hundreds, thousands or millions of others who get "persuaded" by individuals representing disproportionately powerful groups to act in the interest of that group by plying them with inducements that flatter the individual's own self-interest rather than that of their representation.

    Anyhow, the point is: a properly designed AI government would have no functional self-interest that wasn't automatically aligned with that of its people, either by subtle programming of the initial seed or by cruder brute-force methods like 'if n% of the people are unhappy by this metric, the bomb under your housing goes off', depending on the precise construction of the machine.

    ...because dragons are AWESOME! That's why.
    Nintendo Network ID: AzraelRose
    DropBox invite link - get 250MB extra free.
  • KamarKamar Registered User regular
    edited March 2010
    *this was an edit I added to my last post, but since the conversation moved while I typed it I'm reposting it.

    Actually, I fucking love this even more now...AI could pretty much say fuck it to laws that work as a compromise because of grey areas. I fucking loathe the idea of a human nanny state, but the AI could do it in such a way that the only people limited in their freedoms are the ones who would otherwise fuck it up for the rest of us.

    Everything from gun permits to drug use to age of consent stuff...all this shit could pretty much be handled case by case by a being that actually pretty much KNOWS what's up.

  • Torso BoyTorso Boy Registered User
    edited March 2010
    Yeah, if the AI was applying the harm principle to everything. I think that would be fantastic. But others would probably violently rebel at the consequences of that: legalized prostitution, gambling, abortion, gay marriage, drugs, euthanasia...

    An AI that attempts to sell people on its policy using rational argument is not going to convince many people. If it goes ahead with its (demonstrably excellent) policy, it runs the serious risk of being unplugged and replaced with a magic 8-ball.

    People suck. This will always be the biggest problem in politics, until we replace not only our government, but also ourselves with computers.

    Rent wrote: »
    So that's what having no idea what you are talking about looks like
  • DmanDman Registered User regular
    edited March 2010
    Melkster wrote: »
    Spoiler:

    I'm gonna go ahead and repeat what I said earlier, which seemed to go ignored.

    What makes you think that this AI will be any less susceptible to corruption than any other human leader?

    The idea is that it is less susceptible to corruption because it lacks the typical wants and desires of humans. Also, it would presumably be wise enough to know that corruption is not in its interest long term; most humans weren't thinking ahead to the consequences of their actions a few decades or centuries down the road when they took bribes, but the AI would consider this.

    I agree that if an AI is advanced enough it will have a concept of self and will develop self-interests. It may decide that ensuring it has a sufficiently fail-safe power supply and memory backups is more important than building a new bridge or road, for how can it learn the value of human life without learning to value its own life?

    I don't really see it being susceptible to typical corruption, but I don't think we can really foresee what kind of decisions it might make.

  • Torso BoyTorso Boy Registered User
    edited March 2010
    Wow, this is getting awfully close to being a debate about Plato's Republic.

    Rent wrote: »
    So that's what having no idea what you are talking about looks like