
AI Government



  • frandelgearslip Registered User regular
    edited March 2010
    I think maybe I'm not explaining myself properly.

    I don't mean a situation where one morning 51% of people wake up and decide all flowers should be blue, so the AI enacts that. It would have checks and balances, and it wouldn't particularly involve itself in low-level affairs. But take something like gay marriage: it would weigh the majority and minority views, and even if the majority were against it, the AI, unhindered by financial, religious or sexual bias, would struggle to find any real issue with it by its definitions, decide that no real harm could come from such a process, and enact it, ignoring the majority will.

    In the same vein it could look at marijuana, tests, reviews, opinions, and realise that money can be made from its sale without any harm, and decide to enact that as a healthier alternative to regular smoking or something, again ignoring the will of the majority for what it, by its parameters, deems useful to all.

    In other words, as I said, it does what it wants and our opinions are meaningless. So we're back to dictatorship, with a dictator that will never die. Where do I sign up? :P

  • Dman Registered User regular
    edited March 2010
    Mr_Rose wrote: »
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.
    Pretty sure it wouldn't be implementing the will of the majority straight up; if you wanted that, you could do it in about 500 lines of code that could run on a smartphone and be done with it. The purpose of the AI government would be to act how governments are supposed to act, rather than how they actually act now.

    Specifically, they are supposed to be totally aware of all the possible issues affecting everybody, then design and implement local or global solutions that ensure that the maximum number of people have the maximum amount of happiness, taking past trends into account to assist in predicting future ones. Most importantly, they should understand why people react the way they do and take that into account as well.

    The problem with government of humans by humans is that for it to work properly, the ones in charge are supposed to submerge their own self-interest in favour of that of the group they represent. And outside of direct blood relation, when do humans ever truly do that? The closest you ever really get is military service, but even that isn't a perfect example, even if you believe the federal government in Starship Troopers to be a reasonable implementation.

    Instead, we end up with people who supposedly represent hundreds, thousands or millions of others, who get "persuaded" by individuals representing disproportionately powerful groups to act in the interest of that group, plied with inducements that flatter the individual's own self-interest rather than that of the people they represent.

    Anyhow, the point is: a properly designed AI government would have no functional self-interest that wasn't automatically aligned with that of its people, either by subtle programming of the initial seed or by cruder brute-force methods like "if n% of the people are unhappy by this metric, the bomb under your housing goes off", depending on the precise construction of the machine.

    I don't think this is the type of AI the OP envisioned when he said "super-powerful sentient AIs". A super-powerful sentient AI that can hold complex conversations with millions of people at once and thoughtfully consider their opinions and arguments could not be ruled by programming.
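    To see why the "straight majority rule" version Mr_Rose dismisses really would be trivial code, here is a minimal Python sketch; the Proposal shape, vote fields and 50% rule are invented for illustration, not anything specified in the thread:

        from dataclasses import dataclass

        @dataclass
        class Proposal:
            title: str
            yes_votes: int
            no_votes: int

        def enact_majority_will(proposals):
            """Enact whatever a bare majority wants -- the strawman, not the AI."""
            enacted = []
            for p in proposals:
                total = p.yes_votes + p.no_votes
                if total and p.yes_votes / total > 0.5:
                    enacted.append(p.title)
            return enacted

        # 51% want all flowers to be blue, so blue flowers it is:
        print(enact_majority_will([Proposal("All flowers must be blue", 51, 49)]))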

  • override367 ALL minions Registered User regular
    edited March 2010
    Quid wrote: »
    Guys you gotta remember, ELM is talking about AIs from The Culture.

    Which are pretty awesome as AIs go.

    Those AIs would just rename the US "I totally have more Gravitas than you" or something stupid.

  • DarkWarrior __BANNED USERS regular
    edited March 2010
    frandelgearslip wrote: »
    I think maybe I'm not explaining myself properly.

    I don't mean a situation where one morning 51% of people wake up and decide all flowers should be blue, so the AI enacts that. It would have checks and balances, and it wouldn't particularly involve itself in low-level affairs. But take something like gay marriage: it would weigh the majority and minority views, and even if the majority were against it, the AI, unhindered by financial, religious or sexual bias, would struggle to find any real issue with it by its definitions, decide that no real harm could come from such a process, and enact it, ignoring the majority will.

    In the same vein it could look at marijuana, tests, reviews, opinions, and realise that money can be made from its sale without any harm, and decide to enact that as a healthier alternative to regular smoking or something, again ignoring the will of the majority for what it, by its parameters, deems useful to all.

    In other words, as I said, it does what it wants and our opinions are meaningless. So we're back to dictatorship, with a dictator that will never die. Where do I sign up? :P

    It does what's in the best interests of the people without any personal desires clouding its judgement, and in larger-scale implementations it can properly inform people and then take a live poll. I'm sure there are calculations it could put in place to ensure each area/town/region gains an equal share of the vote.

  • Quid Definitely not a banana Registered User regular
    edited March 2010
    frandelgearslip wrote: »
    In other words, as I said, it does what it wants and our opinions are meaningless. So we're back to dictatorship, with a dictator that will never die. Where do I sign up? :P

    A dictatorship isn't automatically a bad thing.

    Hell, the biggest problem with some dictatorships was that the really awesome guy who was in charge did eventually die.

  • frandelgearslip Registered User regular
    edited March 2010
    Dman wrote: »
    Mr_Rose wrote: »
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.
    Pretty sure it wouldn't be implementing the will of the majority straight up; if you wanted that, you could do it in about 500 lines of code that could run on a smartphone and be done with it. The purpose of the AI government would be to act how governments are supposed to act, rather than how they actually act now.

    Specifically, they are supposed to be totally aware of all the possible issues affecting everybody, then design and implement local or global solutions that ensure that the maximum number of people have the maximum amount of happiness, taking past trends into account to assist in predicting future ones. Most importantly, they should understand why people react the way they do and take that into account as well.

    The problem with government of humans by humans is that for it to work properly, the ones in charge are supposed to submerge their own self-interest in favour of that of the group they represent. And outside of direct blood relation, when do humans ever truly do that? The closest you ever really get is military service, but even that isn't a perfect example, even if you believe the federal government in Starship Troopers to be a reasonable implementation.

    Instead, we end up with people who supposedly represent hundreds, thousands or millions of others, who get "persuaded" by individuals representing disproportionately powerful groups to act in the interest of that group, plied with inducements that flatter the individual's own self-interest rather than that of the people they represent.

    Anyhow, the point is: a properly designed AI government would have no functional self-interest that wasn't automatically aligned with that of its people, either by subtle programming of the initial seed or by cruder brute-force methods like "if n% of the people are unhappy by this metric, the bomb under your housing goes off", depending on the precise construction of the machine.

    I don't think this is the type of AI the OP envisioned when he said "super-powerful sentient AIs". A super-powerful sentient AI that can hold complex conversations with millions of people at once and thoughtfully consider their opinions and arguments could not be ruled by programming.

    Also, here's a thought: how does the computer value people in other countries? If it values them not at all, then look out, Canada, we need some Lebensraum.

    If it values the people in other countries equally, then either we get screwed in foreign affairs, since other countries' first concern is their own people, or we get world war as the computer tries to bring everyone under its "benevolent" wing.

  • DanHibiki Registered User regular
    edited March 2010
    Torso Boy wrote: »
    Yeah, if the AI were applying the harm principle to everything, I think that would be fantastic. But others would probably violently rebel at the consequences: legalized prostitution, gambling, abortion, gay marriage, drugs, euthanasia...

    An AI that attempts to sell people on its policy using rational argument is not going to convince many people. If it goes ahead with its (demonstrably excellent) policy, it runs the serious risk of being unplugged and replaced with a magic 8-ball.

    People suck. This will always be the biggest problem in politics, until we replace not only our government but also ourselves with computers.

    I think it's going to turn out quite the opposite. I'd be quite surprised if people don't demand all those things of their government in the near future.

  • Quid Definitely not a banana Registered User regular
    edited March 2010
    Dman wrote: »
    Mr_Rose wrote: »
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.
    Pretty sure it wouldn't be implementing the will of the majority straight up; if you wanted that, you could do it in about 500 lines of code that could run on a smartphone and be done with it. The purpose of the AI government would be to act how governments are supposed to act, rather than how they actually act now.

    Specifically, they are supposed to be totally aware of all the possible issues affecting everybody, then design and implement local or global solutions that ensure that the maximum number of people have the maximum amount of happiness, taking past trends into account to assist in predicting future ones. Most importantly, they should understand why people react the way they do and take that into account as well.

    The problem with government of humans by humans is that for it to work properly, the ones in charge are supposed to submerge their own self-interest in favour of that of the group they represent. And outside of direct blood relation, when do humans ever truly do that? The closest you ever really get is military service, but even that isn't a perfect example, even if you believe the federal government in Starship Troopers to be a reasonable implementation.

    Instead, we end up with people who supposedly represent hundreds, thousands or millions of others, who get "persuaded" by individuals representing disproportionately powerful groups to act in the interest of that group, plied with inducements that flatter the individual's own self-interest rather than that of the people they represent.

    Anyhow, the point is: a properly designed AI government would have no functional self-interest that wasn't automatically aligned with that of its people, either by subtle programming of the initial seed or by cruder brute-force methods like "if n% of the people are unhappy by this metric, the bomb under your housing goes off", depending on the precise construction of the machine.

    I don't think this is the type of AI the OP envisioned when he said "super-powerful sentient AIs". A super-powerful sentient AI that can hold complex conversations with millions of people at once and thoughtfully consider their opinions and arguments could not be ruled by programming.

    Also, here's a thought: how does the computer value people in other countries? If it values them not at all, then look out, Canada, we need some Lebensraum.

    If it values the people in other countries equally, then either we get screwed in foreign affairs, since other countries' first concern is their own people, or we get world war as the computer tries to bring everyone under its "benevolent" wing.

    Why wouldn't it go for the option that best benefits everyone?

  • Kamar Registered User regular
    edited March 2010
    Dman wrote: »
    Mr_Rose wrote: »
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.
    Pretty sure it wouldn't be implementing the will of the majority straight up; if you wanted that, you could do it in about 500 lines of code that could run on a smartphone and be done with it. The purpose of the AI government would be to act how governments are supposed to act, rather than how they actually act now.

    Specifically, they are supposed to be totally aware of all the possible issues affecting everybody, then design and implement local or global solutions that ensure that the maximum number of people have the maximum amount of happiness, taking past trends into account to assist in predicting future ones. Most importantly, they should understand why people react the way they do and take that into account as well.

    The problem with government of humans by humans is that for it to work properly, the ones in charge are supposed to submerge their own self-interest in favour of that of the group they represent. And outside of direct blood relation, when do humans ever truly do that? The closest you ever really get is military service, but even that isn't a perfect example, even if you believe the federal government in Starship Troopers to be a reasonable implementation.

    Instead, we end up with people who supposedly represent hundreds, thousands or millions of others, who get "persuaded" by individuals representing disproportionately powerful groups to act in the interest of that group, plied with inducements that flatter the individual's own self-interest rather than that of the people they represent.

    Anyhow, the point is: a properly designed AI government would have no functional self-interest that wasn't automatically aligned with that of its people, either by subtle programming of the initial seed or by cruder brute-force methods like "if n% of the people are unhappy by this metric, the bomb under your housing goes off", depending on the precise construction of the machine.

    I don't think this is the type of AI the OP envisioned when he said "super-powerful sentient AIs". A super-powerful sentient AI that can hold complex conversations with millions of people at once and thoughtfully consider their opinions and arguments could not be ruled by programming.

    Also, here's a thought: how does the computer value people in other countries? If it values them not at all, then look out, Canada, we need some Lebensraum.

    If it values the people in other countries equally, then either we get screwed in foreign affairs, since other countries' first concern is their own people, or we get world war as the computer tries to bring everyone under its "benevolent" wing.

    Violence is an absolute LAST option in any situation. Even in self-defense, if you can take down your enemy non-lethally without risking yourself or someone else, that is better. A rational, compassionate machine would probably have this built in as a rule, or come to the conclusion itself.

  • frandelgearslip Registered User regular
    edited March 2010
    Quid wrote: »
    A dictatorship isn't automatically a bad thing.

    Hell, the biggest problem with some dictatorships was that the really awesome guy who was in charge did eventually die.

    My father is awesome, but I don't want him to rule my life forever. That's what this computer is: just another parent.
    DarkWarrior wrote: »
    It does what's in the best interests of the people without any personal desires clouding its judgement, and in larger-scale implementations it can properly inform people and then take a live poll. I'm sure there are calculations it could put in place to ensure each area/town/region gains an equal share of the vote.

    But if it can ignore the vote when it disagrees with the outcome, then the vote is a pointless exercise to keep the sheep happy.

  • Quid Definitely not a banana Registered User regular
    edited March 2010
    frandelgearslip wrote: »
    My father is awesome, but I don't want him to rule my life forever. That's what this computer is: just another parent.

    Because... it's programmed to pick the best option for everyone rather than let them decide gay people don't deserve equal rights?

  • frandelgearslip Registered User regular
    edited March 2010
    Kamar wrote: »
    Dman wrote: »
    Mr_Rose wrote: »
    I don't really understand the difference between the government majority implementing law and an AI implementing the will of the majority.
    Pretty sure it wouldn't be implementing the will of the majority straight up; if you wanted that, you could do it in about 500 lines of code that could run on a smartphone and be done with it. The purpose of the AI government would be to act how governments are supposed to act, rather than how they actually act now.

    Specifically, they are supposed to be totally aware of all the possible issues affecting everybody, then design and implement local or global solutions that ensure that the maximum number of people have the maximum amount of happiness, taking past trends into account to assist in predicting future ones. Most importantly, they should understand why people react the way they do and take that into account as well.

    The problem with government of humans by humans is that for it to work properly, the ones in charge are supposed to submerge their own self-interest in favour of that of the group they represent. And outside of direct blood relation, when do humans ever truly do that? The closest you ever really get is military service, but even that isn't a perfect example, even if you believe the federal government in Starship Troopers to be a reasonable implementation.

    Instead, we end up with people who supposedly represent hundreds, thousands or millions of others, who get "persuaded" by individuals representing disproportionately powerful groups to act in the interest of that group, plied with inducements that flatter the individual's own self-interest rather than that of the people they represent.

    Anyhow, the point is: a properly designed AI government would have no functional self-interest that wasn't automatically aligned with that of its people, either by subtle programming of the initial seed or by cruder brute-force methods like "if n% of the people are unhappy by this metric, the bomb under your housing goes off", depending on the precise construction of the machine.

    I don't think this is the type of AI the OP envisioned when he said "super-powerful sentient AIs". A super-powerful sentient AI that can hold complex conversations with millions of people at once and thoughtfully consider their opinions and arguments could not be ruled by programming.

    Also, here's a thought: how does the computer value people in other countries? If it values them not at all, then look out, Canada, we need some Lebensraum.

    If it values the people in other countries equally, then either we get screwed in foreign affairs, since other countries' first concern is their own people, or we get world war as the computer tries to bring everyone under its "benevolent" wing.

    Violence is an absolute LAST option in any situation. Even in self-defense, if you can take down your enemy non-lethally without risking yourself or someone else, that is better. A rational, compassionate machine would probably have this built in as a rule, or come to the conclusion itself.

    So that just means the computer tries to negotiate for a couple of years before launching its wars for Lebensraum or worldwide domination.
    Quid wrote: »
    Why wouldn't it go for the option that best benefits everyone?

    If the computer negotiates from a position of "let's do the best for everyone" and other countries negotiate from a position of "what's best for their people", the computer is always going to come out behind in international affairs.

  • Torso Boy Registered User regular
    edited March 2010
    A dictatorship isn't a problem provided the ruler is (or rulers are) perfect, that is, philosopher kings or computers. However, a constitution would be absolutely necessary in the case of a computer. You can't just set up a box of perfect rationality and say "go rule." It needs to be aware of concepts such as the harm principle, liberty and rights: political ideas which, as such, cannot be arrived at through reason alone. How can you rationally account for property rights?

    Some might say that liberty should be excised from that list, that it should be a small-c conservative society in which the government strives to make its people the best they can be (which is authoritarianism, but I'm not going there). Simply put, whether the AI is conservative or liberal would hugely affect what its policies would be, and that choice cannot be made by reason. This is philosophy, the realm where everyone is right, except when they're wrong, which is always.

    As I said, interpretation is the problem here. A computer cannot resolve abstract problems of philosophy. A computer is limited by the definitions of words, which can be vague or can bear different connotations for different people. I'm not belittling technology or underestimating its potential; I'm saying that politics and ethics simply are not the realm of computers, just as computation is not the realm of the human brain.

  • Quid Definitely not a banana Registered User regular
    edited March 2010
    frandelgearslip wrote: »
    If the computer negotiates from a position of "let's do the best for everyone" and other countries negotiate from a position of "what's best for their people", the computer is always going to come out behind in international affairs.

    Our current leaders, presumably, work towards what's best for everyone.

    That doesn't mean screwing over everyone in their country, and it doesn't mean invading and killing everyone in other countries. It mostly means getting countries to work together in the best interest of both so that both benefit.

  • frandelgearslip Registered User regular
    edited March 2010
    Quid wrote: »
    If the computer negotiates from a position of "let's do the best for everyone" and other countries negotiate from a position of "what's best for their people", the computer is always going to come out behind in international affairs.

    Our current leaders, presumably, work towards what's best for everyone.

    Yeah, right. They work from a position of what's best for their people first and everyone else second (which is how it should be).

    Whether it be Obama, Gordon Brown, or Stephen Harper.

    If we were really dedicated to the best for everyone, Obama would be raising taxes and sending a crapload of money over to Africa.

  • Kamar Registered User regular
    edited March 2010
    It's interesting to me how much my opinions on certain things change when we have a perfect ruler who knows everyone personally to look to, though. Like, I've got this niggling worry in the back of my head that as we progress as a society we'll eventually start leaning towards nanny-state crap. It bugs me when I hear about violent games being banned, or kinky porn, or guns.

    But give us a perfect AI who looks at everything logically AND compassionately, considering the needs and wants and freedoms of every individual? I'm willing to let it do what it wants. I think I can trust such a being to only put limits on what NEEDS limits; ideally it would even work on a case-by-case basis: John can have a handgun carry permit because he is educated, skilled, and would know when NOT to use it. Jim can't, because he is a moron with aspirations of playing vigilante hero who will get himself or others killed.

    This might strike a lot of you as a major invasion of freedom, but to me it seems like the pinnacle of freedom: everyone could press right up to the absolute limits of personal freedom that don't harm others. No need to blanket-ban things because of idiots.
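    A minimal sketch of that case-by-case idea in Python (the criteria, fields and thresholds are all invented for illustration; the point is only that the rule becomes a function of the individual rather than a blanket ban):

        def may_carry_handgun(applicant):
            """Hypothetical per-person check instead of a blanket ban."""
            trained = applicant.get("safety_training", False)
            skilled = applicant.get("range_score", 0) >= 80
            restrained = not applicant.get("vigilante_aspirations", False)
            return trained and skilled and restrained

        john = {"safety_training": True, "range_score": 92, "vigilante_aspirations": False}
        jim = {"safety_training": False, "range_score": 45, "vigilante_aspirations": True}
        print(may_carry_handgun(john), may_carry_handgun(jim))  # True False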

  • Loren Michael Registered User regular
    edited March 2010
    tl;dr - super-powerful sentient AIs would be the closest answer we could reach to achieving a perfect democracy. Government could be made to be all encompassing, yet able to relate directly to each of its citizens.

    The biggest problem with democracy is that people vote for stupid things, not that they feel insufficiently represented. The latter is an externality of impatience with the political process, over-assessment of the popularity of one's own views, and a failure to recognize when the implementation of one's own policy views leads to bad results.

    The nice thing about democracy is that it provides a means to get rid of some forms of corruption. If you can make a sufficiently powerful AI, why not skip over the people altogether?

  • Quid Definitely not a banana Registered User regular
    edited March 2010
    frandelgearslip wrote: »
    Quid wrote: »
    If the computer negotiates from a position of "let's do the best for everyone" and other countries negotiate from a position of "what's best for their people", the computer is always going to come out behind in international affairs.

    Our current leaders, presumably, work towards what's best for everyone.

    Yeah, right. They work from a position of what's best for their people first and everyone else second (which is how it should be).

    Whether it be Obama, Gordon Brown, or Stephen Harper.

    If we were really dedicated to the best for everyone, Obama would be raising taxes and sending a crapload of money over to Africa.

    And it'd be pretty great.

  • Dman Registered User regular
    edited March 2010
    Quid wrote: »
    frandelgearslip wrote: »
    In other words, as I said, it does what it wants and our opinions are meaningless. So we're back to dictatorship, with a dictator that will never die. Where do I sign up? :P

    A dictatorship isn't automatically a bad thing.

    Hell, the biggest problem with some dictatorships was that the really awesome guy who was in charge did eventually die.

    1. The AI always does the right thing, and never legislates laws that are bad for the whole.
    2. It analyzes all available information, from studies to blogs to direct discussion with experts and the populace in general, to determine what the right thing is, constantly re-evaluating its positions.
    3. Once it has determined what the correct thing is, it explains its reasoning to the public via millions of personal one-on-one discussions.
    4. It only legislates if it has majority support for the action it has determined is correct at this time. (See the sketch after this list.)
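    As a sketch, those four steps form a loop; this Python toy uses invented function names and a made-up agreement model, since Dman specifies behaviour rather than mechanism:

        import random

        def analyze_all_information(issue):
            # Step 2 stand-in: pretend exhaustive analysis yields a position.
            return "position on " + issue

        def explain_and_persuade(citizen, position):
            # Step 3 stand-in: a one-on-one discussion nudges agreement upward.
            citizen["agreement"] = min(1.0, citizen["agreement"] + 0.1)

        def govern_one_cycle(issue, population):
            position = analyze_all_information(issue)
            for citizen in population:
                explain_and_persuade(citizen, position)
            support = sum(random.random() < c["agreement"] for c in population) / len(population)
            if support > 0.5:
                return "legislated: " + position  # steps 1 and 4: act only with majority support
            return "deferred: no majority yet"

        population = [{"agreement": random.uniform(0.3, 0.7)} for _ in range(1000)]
        print(govern_one_cycle("transit funding", population))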

    Of course, such an AI might eventually realize that over time humans always come to agree with its policy and reasoning, and that spending time and resources explaining its reasoning in order to get majority support before acting is therefore a poor use of resources.

    Similarly, it might find that discussion with human experts to determine what is right is a waste of resources once it has hundreds of years' worth of such discussions in its memory banks and 99.99999% of input from humans is repetitious, derivative or otherwise useless.

    After cutting humans out of the decision process altogether, it may decide that discovering the origin of the universe, or a way of preventing the heat death of the universe, is more important than the human species, and ignore us entirely.

    We cannot know how a super-intelligent, immortal, all-powerful AI would act in the short term or the long term.

    Humans are naturally afraid of the unknown, so even if the AI is always acting in our best interest (say, over the long term), some of us may decide an uprising against the AI and its supporters is preferable to the short-term losses or suffering the AI says are best for the whole in the long run. Even if an AI never turns against us, we may turn against it, or each other, over its suggestions, because we inherently distrust the unknowable, which an AI will always be.

  • DarkWarrior __BANNED USERS regular
    edited March 2010
    Remember, this isn't just a computer that goes "If A=1, rape people". It's an AI, something on par with a human mind, provided with near-infinite knowledge updated by the second, that has no personal desires, fears or wants that would let it be influenced by ulterior motives or act in an irrational manner like shouting "baby killer". It acts only in what it deems the best interests of its own citizens first and THEN the rest of the world, while intending to do no harm to either.

  • Kamar Registered User regular
    edited March 2010
    Of course, I think most of this is irrelevant; by the time such an AI exists and could be put in power, people themselves will be computerizing. So society as a whole will move towards infinite knowledge and reasoning ability, and thus be making for themselves decisions similar to those the computer would make.

    I hope.

  • Loren Michael Registered User regular
    edited March 2010
    Kamar wrote: »
    Of course, I think most of this is irrelevant; by the time such an AI exists and could be put in power, people themselves will be computerizing. So society as a whole will move towards infinite knowledge and reasoning ability, and thus be making for themselves decisions similar to those the computer would make.

    I hope.

    Can't people simply be making more and more consequential bad decisions more and more efficiently?

  • DanHibiki Registered User regular
    edited March 2010
    Kamar wrote: »
    Of course, I think most of this is irrelevant; by the time such an AI exists and could be put in power, people themselves will be computerizing. So society as a whole will move towards infinite knowledge and reasoning ability, and thus be making for themselves decisions similar to those the computer would make.

    I hope.

    Well, if you use Gmail, you've already got an AI sorting through your mail for spam, and it's doing a damn good job of it. Fairly soon Wall Street will be using AIs, and soon AI will take over a lot of administrative tasks without us even noticing.

    The transition to AI-assisted government is going to be far more subtle than most can imagine.
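    For a feel of what that spam sorting rests on, here is a bare-bones naive Bayes classifier in Python (a toy with a four-message training set; real filters like Gmail's are vastly more elaborate):

        import math
        from collections import Counter

        spam = ["buy cheap pills now", "cheap pills cheap"]
        ham = ["lunch at noon", "meeting notes attached"]

        def word_counts(docs):
            return Counter(w for d in docs for w in d.split())

        spam_counts, ham_counts = word_counts(spam), word_counts(ham)
        spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
        vocab = set(spam_counts) | set(ham_counts)

        def log_prob(msg, counts, total):
            # Laplace smoothing so unseen words don't zero out the product.
            return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in msg.split())

        def is_spam(msg):
            return log_prob(msg, spam_counts, spam_total) > log_prob(msg, ham_counts, ham_total)

        print(is_spam("cheap pills"))      # True
        print(is_spam("meeting at noon"))  # False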

  • frandelgearslip Registered User regular
    edited March 2010
    What about religion? How would the computer handle that?

    Religion contributes nothing to the government (it's tax-free), interferes with politics, and is a net drain on the economy (people contribute money to myth-tellers that could be better spent on many other things).

    How about religions that are demonstrably nothing but silly goosery (I will leave them unnamed, but most of them have South Park episodes dedicated to their silly goosery)?

    I don't see your atheist computer having much fondness for any religion.

  • Kamar Registered User regular
    edited March 2010
    frandelgearslip wrote: »
    What about religion? How would the computer handle that?

    Religion contributes nothing to the government (it's tax-free), interferes with politics, and is a net drain on the economy (people contribute money to myth-tellers that could be better spent on many other things).

    How about religions that are demonstrably nothing but silly goosery (I will leave them unnamed, but most of them have South Park episodes dedicated to their silly goosery)?

    I don't see your atheist computer having much fondness for any religion.

    Treated the same as any other entertainment: left alone except when it's harmful.

  • Mr_Rose Blue Ridge Protects the Holy Registered User regular
    edited March 2010
    frandelgearslip wrote: »
    What about religion? How would the computer handle that?

    Religion contributes nothing to the government (it's tax-free), interferes with politics, and is a net drain on the economy (people contribute money to myth-tellers that could be better spent on many other things).

    How about religions that are demonstrably nothing but silly goosery (I will leave them unnamed, but most of them have South Park episodes dedicated to their silly goosery)?

    I don't see your atheist computer having much fondness for any religion.
    A rational AI would of course immediately remove special treatment for religions (all of them, equally), because they otherwise contribute nothing, and let them stand or fall on their own merits. Unless it's using the wrong "happiness/suffering" metric, in which case it will create its own religion with itself as god and evangelise everyone.

    The possibility (even likelihood) exists that some humans will do the second one on their own anyway, but then you always get some idiots.

  • DarkWarrior __BANNED USERS regular
    edited March 2010
    It would respect people's right to follow and believe in anything they choose, as much as it would people who want to gay marry or whatever. It's silly to think one group should be respected and the other not.

    It would, however, as I said before, ignore their influence, since it doesn't need their votes and their desires are not necessarily beneficial to the mass.

  • Dman Registered User regular
    edited March 2010
    frandelgearslip wrote: »
    What about religion? How would the computer handle that?

    Religion contributes nothing to the government (it's tax-free), interferes with politics, and is a net drain on the economy (people contribute money to myth-tellers that could be better spent on many other things).

    How about religions that are demonstrably nothing but silly goosery (I will leave them unnamed, but most of them have South Park episodes dedicated to their silly goosery)?

    I don't see your atheist computer having much fondness for any religion.

    Just like everything else, the computer would determine how and why each part of each religion helps or hurts humanity. It can analyze things in ridiculous detail. Instead of relying on historical data, it can measure the effects of religions in real time, and perhaps not make policy decisions regarding religion until it has established credible data to back up the claims humans make regarding the benefits or costs of religions.

    If it decided that belief and religion were detrimental to humans or human happiness on the whole, it would try to convince humans of this and slowly phase religious teachings out of schools and law.

    It is possible it comes to some middle ground, where it determines that law should not be based on religion but that allowing humans the freedom to delude themselves if they so wish (even children at a young age) with illogical beliefs results in increased happiness and a more stable society.

    Just because the AI does not believe in your god doesn't mean it won't allow you to believe in it. However, if your religion cannot survive in the open market of thoughts and ideas, no government (AI or otherwise) is obligated to help maintain it.

  • frandelgearslip Registered User regular
    edited March 2010
    Kamar wrote: »
    Treated the same as any other entertainment: left alone except when it's harmful.
    Mr_Rose wrote: »
    A rational AI would of course immediately remove special treatment for religions (all of them, equally), because they otherwise contribute nothing, and let them stand or fall on their own merits.

    Which brings up why this discussion is full of silly goosery: everybody assumes perfect AI == their opinions, which is of course arrogance beyond belief.

    I would have a great laugh if, right after this AI was enacted, it totally deregulated the economy right before appointing Glenn Beck as its right-hand man.

    Should fat people be restricted in what they are allowed to eat? According to some threads on this board lately, I think your perfect AI could go either way depending on who is describing it.

  • TheAceofSpades Registered User regular
    edited March 2010
    This gives too much voice to the apathetic. Democracy, for the most part, weeds these people out of the decision-making process. An AI that took into account the opinions of all (even those who sway to and fro on issues depending on their friends' latest Facebook status) would need some way to truly ascertain the will of the people and not base decisions on the latest opinion poll.

  • frandelgearslip Registered User regular
    edited March 2010
    TheAceofSpades wrote: »
    This gives too much voice to the apathetic. Democracy, for the most part, weeds these people out of the decision-making process. An AI that took into account the opinions of all (even those who sway to and fro on issues depending on their friends' latest Facebook status) would need some way to truly ascertain the will of the people and not base decisions on the latest opinion poll.

    We've gone past that; the computer has been promoted to dictator for life since the OP.

  • DarkWarrior __BANNED USERS regular
    edited March 2010
    I really don't understand why anyone in a civilised country would want a human government to stay in charge. Just look how well they operate, in the sense that they don't. There are too many fears and vices; every human government will simply be corrupt, and there's no way to prevent it short of giving everyone what they want.

    I mean, how many gay senators have been outed lately who vote against a gay-positive agenda out of their own fear?

    It's not like the AI will suddenly say everyone should have sex with animals and we will be forced at gunpoint to do it.

  • Darkchampion3d Registered User regular
    edited March 2010
    DarkWarrior wrote: »
    Remember, this isn't just a computer that goes "If A=1, rape people".

    You, sir, just programmed a rape machine.

  • DarkWarrior __BANNED USERS regular
    edited March 2010
    Darkchampion3d wrote: »
    Remember, this isn't just a computer that goes "If A=1, rape people".

    You, sir, just programmed a rape machine.

    ...A post-apocalyptic future where skeletal cyborgs run around raping people?

    Well, it's better than nukes.

  • Kamar Registered User regular
    edited March 2010
    Religion is something that brings people pleasure but provides no concrete benefits or problems in and of itself.

    What other than entertainment would it be classified as? A perfectly logical machine would know it has no way of knowing which religion, if any, is correct, and would thus file them under "not my problem" unless religion MADE itself a problem.

    I mean, if you know a logical argument for a particular religion that isn't flawed or equally repeatable for another religion, well... I guess a lot of us are going to be surprised as hell as we turn in our atheist/agnostic membership cards?

    Alternative answer: the educated, the intelligent, and those born in later years all trend more progressive/liberal and less religious. This AI would outclass everyone alive right now in all of these things, so odds are in favor of it being a liberal atheist. ;d

  • DarkWarrior __BANNED USERS regular
    edited March 2010
    Like you say, unless it makes itself a problem, religion is of no more concern to an AI than someone practicing BDSM in their basement. It's not something it takes account of in its decision-making process unless necessary, but it doesn't allow itself to be influenced by their desires to impact the lives of those who don't follow that particular religion.

    Which is why the AI is perfect and I want to gay marry it.

  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited March 2010
    frandelgearslip wrote: »
    I think your perfect AI could go either way depending on who is describing it.

    Ding ding ding!

    We have a winner.

  • Dman Registered User regular
    edited March 2010
    frandelgearslip wrote: »
    Kamar wrote: »
    Treated the same as any other entertainment: left alone except when it's harmful.
    Mr_Rose wrote: »
    A rational AI would of course immediately remove special treatment for religions (all of them, equally), because they otherwise contribute nothing, and let them stand or fall on their own merits.

    Which brings up why this discussion is full of silly goosery: everybody assumes perfect AI == their opinions, which is of course arrogance beyond belief.

    I would have a great laugh if, right after this AI was enacted, it totally deregulated the economy right before appointing Glenn Beck as its right-hand man.

    Should fat people be restricted in what they are allowed to eat? According to some threads on this board lately, I think your perfect AI could go either way depending on who is describing it.

    I've been purposefully trying to avoid that trap; I am not so arrogant as to assume a perfect AI would share my opinions.

    I even submit that while it may do what is best for humanity in humanity's opinion, at least initially, there is no way to know whether it might form opinions of its own that are sometimes contrary to the very legislation it suggests or enacts; it might simply choose to output the results its creators predicted it would, basing its decision outputs and interactions with humans on solid logical foundations with a very human perspective on priorities.

    How can you claim an AI is superior to a human and yet also claim to know everything about how it would behave?

  • frandelgearslip Registered User regular
    edited March 2010
    DarkWarrior wrote: »
    I really don't understand why anyone in a civilised country would want a human government to stay in charge. Just look how well they operate, in the sense that they don't. There are too many fears and vices; every human government will simply be corrupt, and there's no way to prevent it short of giving everyone what they want.

    I mean, how many gay senators have been outed lately who vote against a gay-positive agenda out of their own fear?

    It's not like the AI will suddenly say everyone should have sex with animals and we will be forced at gunpoint to do it.

    Because I don't want to be controlled by God, whether it's the imaginary God of the Bible, your atheistic computer, or Karl Marx; in the end it's the same thing.

  • SpeedySwaf Registered User regular
    edited March 2010
    I've wondered about this too, but in a slightly different light. Scientists have already created machines that can mimic some basic survival techniques, and after these AIs have evolved for a couple of thousand generations, they begin showing behavior very similar to that of actual animals, in the sense that they start forming "packs" to secure their own livelihood, so to speak.

    So I wonder if we could do something similar for economics or the like: put groups of machines in simple but familiar situations like those we find ourselves in, and after so many generations, see how they've learned to balance their own well-being against that of others, if at all.
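    Something like that experiment can be sketched in a few lines of Python (every number here is invented; the point is just that a "cooperate" trait can spread once cooperators are common enough to out-earn loners):

        import random

        def fitness(agent, population):
            share = sum(a["cooperate"] for a in population) / len(population)
            # Cooperating pays off in proportion to how many others cooperate;
            # going it alone pays a flat 0.3.
            return share if agent["cooperate"] else 0.3

        def next_generation(population):
            ranked = sorted(population, key=lambda a: fitness(a, population), reverse=True)
            survivors = ranked[: len(ranked) // 2]
            children = [{"cooperate": s["cooperate"] if random.random() > 0.05
                         else not s["cooperate"]} for s in survivors]  # 5% mutation
            return survivors + children

        pop = [{"cooperate": random.random() < 0.5} for _ in range(100)]
        for _ in range(200):
            pop = next_generation(pop)
        print(sum(a["cooperate"] for a in pop), "of", len(pop), "agents cooperate")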
