
Don't Fight the Future - An argument for Terminators


  • Astaereth In the belly of the beast Registered User regular
    There's a huge difference between treating the mentally ill to put them in line with conventional notions of sanity/lucidity and treating political enemies, even killers, to put them in line with the ideologies they oppose. (Although the former can also be problematic.)

    But the chip thing deserves its own thread.
    [Tycho?] wrote: »
    Astaereth wrote: »
    Should we refuse to give our soldiers better body armor, because that would reduce casualties and make war cheaper?

    Should we refuse to flood the field with chemical weapons, because that would reduce casualties and make war cheaper? Should we refuse to confine enemy populations to camps, because that would reduce casualties and make war cheaper? Should we refuse to use nuclear weapons on population centers, since that would reduce casualties and make war cheaper?

    Not only are those all arguments that were used, they were all successful arguments. It was only in retrospect that people realized that the implementation had some unforeseen consequences and negative repercussions.

    This is completely wrong.

    We absolutely don't use chemical weapons because they are not cheaper, not quicker, and do not reduce casualties. The entire reason we have the perverse concept of "laws of armed conflict" is that the things they prohibit are all about avoiding the types of atrocities that do not lead to a speedy resolution of conflict.

    Rules of war are made between nations on the assumption that there is no such thing as a war of annihilation, and on the observation that certain classes of weaponry absolutely lead to prolonged, high cost attritional conflicts.

    The problem with chemical weapons is they enrage people and they're of limited effectiveness against prepared militaries - but of unlimited effectiveness against the civilian populations that tend to be near them. And after you wipe out a few cities, the remaining standing militaries tend not to think they should just surrender and end the conflict.

    None of these assumptions apply to AI weapons systems. By definition they don't, since AI weapons systems are entirely about killing fewer people.

    You're the second person to bring up the rationale for banning chemical weapons, with one reason in particular being that they're ineffective against armies and deadly against civilians.

    And yet chemical weapons were banned before they were used on a large scale: the Hague Conventions of 1899 and 1907 prohibited them and a variety of other practices. So it wasn't their effectiveness that got them banned, nor their effect on civilians, unless there's something I don't know about. Since both of you mentioned it, I assume I'm missing something.

    There are two different arguments here in the chemical weapons analogy that you're conflating. Either the point of the analogy is that chemical weapons make war easier, and therefore politically cheaper, for those who use them; or the point of the analogy is that, like chemical weapons, robots are so horrific that we should agree to ban them solely for their cruelty. Which is it?

    Because to my understanding, the reason we ban things like chemical weapons is that they make war worse for everyone. First side A uses chems on side B, which is horrible for side B; then side B has to make chemical weapons to keep up, and so war quickly becomes that hellish for both sides. (This is the same "do unto others" rationale behind POW treatment conventions.)

    This is not the case with robots, though, because eventually both sides are using robots on the other side's robots and no actual human beings are being harmed anymore.

  • [Tycho?] As elusive as doubt Registered User regular
    Astaereth wrote: »
    There's a huge difference between treating the mentally ill to put them in line with conventional notions of sanity/lucidity and treating political enemies, even killers, to put them in line with the ideologies they oppose. (Although the former can also be problematic.)

    But the chip thing deserves its own thread.
    [Tycho?] wrote: »
    Astaereth wrote: »
    Should we refuse to give our soldiers better body armor, because that would reduce casualties and make war cheaper?

    Should we refuse to flood the field with chemical weapons, because that would reduce casualties and make war cheaper? Should we refuse to confine enemy populations to camps, because that would reduce casualties and make war cheaper? Should we refuse to use nuclear weapons on population centers, since that would reduce casualties and make war cheaper?

    Not only are those all arguments that were used, they were all successful arguments. It was only in retrospect that people realized that the implementation had some unforeseen consequences and negative repercussions.

    This is completely wrong.

    We absolutely don't use chemical weapons because they are not cheaper, not quicker, and do not reduce casualties. The entire reason we have the perverse concept of "laws of armed conflict" is that the things they prohibit are all about avoiding the types of atrocities that do not lead to a speedy resolution of conflict.

    Rules of war are made between nations on the assumption that there is no such thing as a war of annihilation, and on the observation that certain classes of weaponry absolutely lead to prolonged, high cost attritional conflicts.

    The problem with chemical weapons is they enrage people and they're of limited effectiveness against prepared militaries - but of unlimited effectiveness against the civilian populations that tend to be near them. And after you wipe out a few cities, the remaining standing militaries tend not to think they should just surrender and end the conflict.

    None of these assumptions apply to AI weapons systems. By definition they don't, since AI weapons systems are entirely about killing fewer people.

    You're the second person to bring up the rationale for banning chemical weapons, with one reason in particular being that they're ineffective against armies and deadly against civilians.

    And yet chemical weapons were banned before they were used on a large scale: the Hague Conventions of 1899 and 1907 prohibited them and a variety of other practices. So it wasn't their effectiveness that got them banned, nor their effect on civilians, unless there's something I don't know about. Since both of you mentioned it, I assume I'm missing something.

    There are two different arguments here in the chemical weapons analogy that you're conflating. Either the point of the analogy is that chemical weapons make war easier, and therefore politically cheaper, for those who use them; or the point of the analogy is that, like chemical weapons, robots are so horrific that we should agree to ban them solely for their cruelty. Which is it?

    Because to my understanding, the reason we ban things like chemical weapons is that they make war worse for everyone. First side A uses chems on side B, which is horrible for side B; then side B has to make chemical weapons to keep up, and so war quickly becomes that hellish for both sides. (This is the same "do unto others" rationale behind POW treatment conventions.)

    This is not the case with robots, though, because eventually both sides are using robots on the other side's robots and no actual human beings are being harmed anymore.

    I didn't intend to make an analogy with chemical weapons; I just wanted to show a scenario where fewer overall deaths wasn't morally better, and to provide an example of a technology that was banned pre-emptively. It was overly inflammatory, though, and just distracts from the main topic.

  • Goumindong Registered User regular
    Astaereth wrote: »
    Should we refuse to give our soldiers better body armor, because that would reduce casualties and make war cheaper?

    Should we refuse to flood the field with chemical weapons, because that would reduce casualties and make war cheaper? Should we refuse to confine enemy populations to camps, because that would reduce casualties and make war cheaper? Should we refuse to use nuclear weapons on population centers, since that would reduce casualties and make war cheaper?

    Not only are those all arguments that were used, they were all successful arguments. It was only in retrospect that people realized that the implementation had some unforeseen consequences and negative repercussions.

    Well no. Chemical weapons are banned not because they're horrific but because they don't work. They're indiscriminate, so you can't target specific areas well; they do more damage to civilian populations because military members are trained to put on gas masks but civilians aren't trained or don't have them. They do damage to the landscape, etc. etc. etc.

    It's not because "oh no, it's so impersonal"; it's because we generally ban weapons whose only purpose is to kill and terrorize civilian populations, because no one wants their civilian populations killed and terrorized. The sole exception being nukes, which are banned for selfish reasons (that is, you can't really risk invading people with nukes, and keeping that hegemony of power is important to our interests).

  • rockrnger Registered User regular
    Goumindong wrote: »
    Astaereth wrote: »
    Should we refuse to give our soldiers better body armor, because that would reduce casualties and make war cheaper?

    Should we refuse to flood the field with chemical weapons, because that would reduce casualties and make war cheaper? Should we refuse to confine enemy populations to camps, because that would reduce casualties and make war cheaper? Should we refuse to use nuclear weapons on population centers, since that would reduce casualties and make war cheaper?

    Not only are those all arguments that were used, they were all successful arguments. It was only in retrospect that people realized that the implementation had some unforeseen consequences and negative repercussions.

    Well no. Chemical weapons are banned not because they're horrific but because they don't work. They're indiscriminate, so you can't target specific areas well; they do more damage to civilian populations because military members are trained to put on gas masks but civilians aren't trained or don't have them. They do damage to the landscape, etc. etc. etc.

    It's not because "oh no, it's so impersonal"; it's because we generally ban weapons whose only purpose is to kill and terrorize civilian populations, because no one wants their civilian populations killed and terrorized. The sole exception being nukes, which are banned for selfish reasons (that is, you can't really risk invading people with nukes, and keeping that hegemony of power is important to our interests).

    Which means it's always going to be hard to distinguish between morality and pragmatism.

    Like the US and landmines. We have a reason to use them, so maybe they are OK. Would Germany (or whatever) think the same way were the situations reversed? I honestly don't know.

  • Moridin889 Registered User regular
    I think you guys are missing a big point about the automation of war and its current impossibility: sophistication. How do they identify bad guys, and how easy is it to trick their programming? If you can attach household objects to a gun so it doesn't recognize the gun, that's an issue. If it's sophisticated enough to see through most/all potential human trickery, then how is it not a true AI with sentience itself, and why would we create a new set of sentient beings and use them for war? That's just passing the cost along.
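    To make the worry concrete, here is a minimal Python sketch of the problem, in which classify() and its lookup table are invented stand-ins for a real vision model; the household-objects trick simply shows up as a confidence drop. Any sane design would have to gate autonomy on that confidence and defer everything ambiguous to a human.

```python
# Hypothetical sketch: an autonomous weapon inherits every failure mode of
# its recognizer, so low-confidence detections must fall back to a human.
# classify() and its lookup table are invented for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float  # 0.0 to 1.0

def classify(observation: str) -> Detection:
    # Stand-in for a real model: taping junk to a rifle drops the match.
    lookup = {
        "rifle": Detection("weapon", 0.97),
        "rifle_with_junk_taped_on": Detection("weapon", 0.38),
        "umbrella": Detection("harmless", 0.91),
    }
    return lookup.get(observation, Detection("unknown", 0.0))

AUTONOMY_THRESHOLD = 0.90  # made-up cutoff

def decide(observation: str) -> str:
    det = classify(observation)
    if det.label == "weapon" and det.confidence >= AUTONOMY_THRESHOLD:
        return "engage"
    return "defer_to_human"  # anything trickable lands here

print(decide("rifle"))                     # engage
print(decide("rifle_with_junk_taped_on"))  # defer_to_human
```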

  • Quid Definitely not a banana Registered User regular
    Moridin889 wrote: »
    I think you guys are missing a big point about the automation of war and its current impossibility: sophistication. How do they identify bad guys, and how easy is it to trick their programming? If you can attach household objects to a gun so it doesn't recognize the gun, that's an issue. If it's sophisticated enough to see through most/all potential human trickery, then how is it not a true AI with sentience itself, and why would we create a new set of sentient beings and use them for war? That's just passing the cost along.

    Er, being able to discern targets better than a person doesn't automatically mean true sentience.

    I mean, I definitely agree creating a sentient being to fight a war against its will would be awful, but that assumption really doesn't follow.

  • HamHamJ Registered User regular
    Moridin889 wrote: »
    I think you guys are missing a big point about the automation of war and its current impossibility: sophistication. How do they identify bad guys, and how easy is it to trick their programming? If you can attach household objects to a gun so it doesn't recognize the gun, that's an issue. If it's sophisticated enough to see through most/all potential human trickery, then how is it not a true AI with sentience itself, and why would we create a new set of sentient beings and use them for war? That's just passing the cost along.

    That isn't necessary. A more reasonable near-future scenario is that there is still a human looking through a monitor and selecting targets. All the robot needs to know is "Shoot heat blobs designated as hostile until they stop moving".
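    As a toy illustration of that division of labor (all names here are hypothetical), the machine's entire "decision" reduces to a membership check against a human-maintained list:

```python
# Minimal sketch: the human designates, the machine only services targets
# already on the designated list and still moving. Track and the set of
# designations are invented for illustration.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    is_moving: bool

designated_hostile: set[int] = {7, 12}  # IDs a human operator marked hostile

def weapons_free(track: Track) -> bool:
    # No identification logic on the machine side at all.
    return track.track_id in designated_hostile and track.is_moving

print(weapons_free(Track(7, True)))   # True: designated and still moving
print(weapons_free(Track(7, False)))  # False: stopped moving
print(weapons_free(Track(3, True)))   # False: never designated
```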

  • AManFromEarth Let's get to twerk! The King in the Swamp Registered User regular
    That is an absolutely worthless criterion for targeting.

  • Taranis Registered User regular
    Replacing most line troops is probably one of the very last roles for automated weapons. There's just so much ingenuity and abstract reasoning in that job that AI won't be able to emulate it anytime soon. More likely they'd take over other supporting (yet offensive) roles first, before they'd replace infantry.

  • [Tycho?] As elusive as doubt Registered User regular
    Goumindong wrote: »
    Astaereth wrote: »
    Should we refuse to give our soldiers better body armor, because that would reduce casualties and make war cheaper?

    Should we refuse to flood the field with chemical weapons, because that would reduce casualties and make war cheaper? Should we refuse to confine enemy populations to camps, because that would reduce casualties and make war cheaper? Should we refuse to use nuclear weapons on population centers, since that would reduce casualties and make war cheaper?

    Not only are those all arguments that were used, they were all successful arguments. It was only in retrospect that people realized that the implementation had some unforeseen consequences and negative repercussions.

    Well no. Chemical weapons are banned not because they're horrific but because they don't work. They're indiscriminate, so you can't target specific areas well; they do more damage to civilian populations because military members are trained to put on gas masks but civilians aren't trained or don't have them. They do damage to the landscape, etc. etc. etc.

    It's not because "oh no, it's so impersonal"; it's because we generally ban weapons whose only purpose is to kill and terrorize civilian populations, because no one wants their civilian populations killed and terrorized. The sole exception being nukes, which are banned for selfish reasons (that is, you can't really risk invading people with nukes, and keeping that hegemony of power is important to our interests).

    I asked above, but can you source this? As I mentioned, chemical weapons were banned before they were used on any sort of scale. Which means their effectiveness was unknown, and no civilians had been the victims of them. Though I'm probably missing something.

  • programjunkie Registered User regular
    Moridin889 wrote: »
    I think you guys are missing a big point about the automation of war and its current impossibility: sophistication. How do they identify bad guys, and how easy is it to trick their programming? If you can attach household objects to a gun so it doesn't recognize the gun, that's an issue. If it's sophisticated enough to see through most/all potential human trickery, then how is it not a true AI with sentience itself, and why would we create a new set of sentient beings and use them for war? That's just passing the cost along.

    In the context I mentioned a couple pages ago, where it replaces air support for human ground troops, with rules of engagement such that it only ever attacks targets who have attacked a friendly, the system is very resilient to mistakes, and its mistakes are comparatively easy to detect. You'd need to be a bit clever with the edge cases and some definitions, but the legal, moral, and practical concerns are all very low compared to many other AI implementations, and even many other military decisions in general.

    Plus, I believe it was Taranis who noted that route clearance is a fantastic non-offensive use for such systems. While they might be more susceptible to ambush than veteran, well-trained troops (but maybe not), the purpose of route clearance, crudely stated, is to deliberately hit every IED / mine between point A and point B so the next group doesn't have to.

    Honestly, every incentive would conspire for AI to work hand in hand with humans at first, if not forever, with the combination helping each cover for the fallibility of the other; many of the problems that may arise are fixed by appropriate coordination. This actually fits well with existing combined-arms doctrine, as modern militaries already overlap multiple layers of differing capabilities to cover weaknesses.
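    A minimal sketch of that "return fire only" rule of engagement, with an invented attack log standing in for real battlefield reporting:

```python
# Hypothetical sketch of a purely reactive ROE: the system may engage a
# target only if that target has already attacked a friendly unit.
from typing import NamedTuple

class Attack(NamedTuple):
    attacker: int  # track ID of the shooter
    victim: str    # friendly unit attacked

attack_log: list[Attack] = []

def record_attack(attacker: int, victim: str) -> None:
    attack_log.append(Attack(attacker, victim))

def may_engage(target: int) -> bool:
    # No first strikes, ever: engagement requires a prior attack on record.
    return any(a.attacker == target for a in attack_log)

record_attack(attacker=42, victim="1st squad")
print(may_engage(42))  # True: it fired on a friendly first
print(may_engage(99))  # False: it has not attacked anyone
```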

  • redx I(x)=2(x)+1 whole numbers Registered User regular
    HamHamJ wrote: »
    Moridin889 wrote: »
    I think you guys are missing a big point about the automation of war and its current impossibility: sophistication. How do they identify bad guys, and how easy is it to trick their programming? If you can attach household objects to a gun so it doesn't recognize the gun, that's an issue. If it's sophisticated enough to see through most/all potential human trickery, then how is it not a true AI with sentience itself, and why would we create a new set of sentient beings and use them for war? That's just passing the cost along.

    That isn't necessary. A more reasonable near-future scenario is that there is still a human looking through a monitor and selecting targets. All the robot needs to know is "Shoot heat blobs designated as hostile until they stop moving".

    That is a pretty terrible metric, and would result in huge numbers of civilians being killed.

    Which is actually pretty OK from the point of view of someone who just wants to kill and maim all the living things in a village.

    And if we make them, the Russians and Chinese will make them, and sell them to people who give zero fucks.

  • PLA The process. Registered User regular
    Human soldiers are not sent out to target "bad guys". They are not caped vigilantes.

  • Apothe0sis Have you ever questioned the nature of your reality? Registered User regular
    HamHamJ wrote: »
    Moridin889 wrote: »
    I think you guys are missing a big point about the automation of war and its current impossibility: sophistication. How do they identify bad guys, and how easy is it to trick their programming? If you can attach household objects to a gun so it doesn't recognize the gun, that's an issue. If it's sophisticated enough to see through most/all potential human trickery, then how is it not a true AI with sentience itself, and why would we create a new set of sentient beings and use them for war? That's just passing the cost along.

    That isn't necessary. A more reasonable near-future scenario is that there is still a human looking through a monitor and selecting targets. All the robot needs to know is "Shoot heat blobs dogs designated as hostile until they stop moving".

    Ftfy

  • shryke Member of the Beast Registered User regular
    Apothe0sis wrote: »
    HamHamJ wrote: »
    Moridin889 wrote: »
    I think you guys are missing a big point about the automation of war and its current impossibility: sophistication. How do they identify bad guys, and how easy is it to trick their programming? If you can attach household objects to a gun so it doesn't recognize the gun, that's an issue. If it's sophisticated enough to see through most/all potential human trickery, then how is it not a true AI with sentience itself, and why would we create a new set of sentient beings and use them for war? That's just passing the cost along.

    That isn't necessary. A more reasonable near-future scenario is that there is still a human looking through a monitor and selecting targets. All the robot needs to know is "Shoot heat blobs dogs designated as hostile until they stop moving".

    Ftfy

    [image]
    Now, this technology is new to me but I'm pretty sure that's Homer Simpson in the oven rotating slowly. His body temperature has risen to over 400 degrees. He's literally stewing in his own juices.

  • FANTOMAS Flan Argentavis Registered User regular
    I can't understand the people claiming that automated weapons are good, as long as the most war-driven country in the world controls the monopoly on them. A country that is turning into a police state... with automated weapons that will not question the motives of those in command.

    Also, the basic flaw in all these premises is that they talk from the position of "AI weapons are on my side, which is the right side, and they will be used to stop the enemy, who is evil and wrong. And decision-making parties will never be corrupt."

    It's akin to the "keep the schools safe, put more guns in schools" mentality. And dealing with domestic social issues with military-grade weapons... I mean, really? Is anyone advocating for better weapons in the hands of the leaders we have today? Or are the people arguing FOR automated weapons thinking about a dream future scenario where leaders actually care about people?

    It's not some convoluted sci-fi-induced scenario; it's the natural progression of things: make soldiers that don't question orders, and they will be ordered to do the things that soldiers wouldn't want to do, or things that require no witnesses. In the next armed confrontation, the area will be too dangerous for humans to operate in, so no press; the only witnesses will be the ones in command, and the war crimes will just roll in.

  • Australopitenico Registered User regular
    The bad thing about robots is that they buff your human enemies. Especially if you are in their country.

    Making people shoot other people effectively is surprisingly hard to do. If you invade with robots, every teenager ever is going to get a gun and jump at the chance to shoot inanimate objects with incredible glee and no remorse.

    Also, I want this to happen so I can hear someone say, seriously: "bots on the ground".

  • HamHamJ Registered User regular
    FANTOMAS wrote: »
    I can't understand the people claiming that automated weapons are good, as long as the most war-driven country in the world controls the monopoly on them. A country that is turning into a police state... with automated weapons that will not question the motives of those in command.

    Also, the basic flaw in all these premises is that they talk from the position of "AI weapons are on my side, which is the right side, and they will be used to stop the enemy, who is evil and wrong. And decision-making parties will never be corrupt."

    It's akin to the "keep the schools safe, put more guns in schools" mentality. And dealing with domestic social issues with military-grade weapons... I mean, really? Is anyone advocating for better weapons in the hands of the leaders we have today? Or are the people arguing FOR automated weapons thinking about a dream future scenario where leaders actually care about people?

    It's not some convoluted sci-fi-induced scenario; it's the natural progression of things: make soldiers that don't question orders, and they will be ordered to do the things that soldiers wouldn't want to do, or things that require no witnesses. In the next armed confrontation, the area will be too dangerous for humans to operate in, so no press; the only witnesses will be the ones in command, and the war crimes will just roll in.

    On the other hand, they also won't rape, pillage, and murder. And I bet incidents of the former are still far more common than soldiers nobly disobeying illegal orders.

  • FANTOMAS Flan Argentavis Registered User regular
    HamHamJ wrote: »
    FANTOMAS wrote: »
    I can't understand the people claiming that automated weapons are good, as long as the most war-driven country in the world controls the monopoly on them. A country that is turning into a police state... with automated weapons that will not question the motives of those in command.

    Also, the basic flaw in all these premises is that they talk from the position of "AI weapons are on my side, which is the right side, and they will be used to stop the enemy, who is evil and wrong. And decision-making parties will never be corrupt."

    It's akin to the "keep the schools safe, put more guns in schools" mentality. And dealing with domestic social issues with military-grade weapons... I mean, really? Is anyone advocating for better weapons in the hands of the leaders we have today? Or are the people arguing FOR automated weapons thinking about a dream future scenario where leaders actually care about people?

    It's not some convoluted sci-fi-induced scenario; it's the natural progression of things: make soldiers that don't question orders, and they will be ordered to do the things that soldiers wouldn't want to do, or things that require no witnesses. In the next armed confrontation, the area will be too dangerous for humans to operate in, so no press; the only witnesses will be the ones in command, and the war crimes will just roll in.

    On the other hand, they also won't rape, pillage, and murder. And I bet incidents of the former are still far more common than soldiers nobly disobeying illegal orders.

    Absolutely, an AI wouldn't be able to CHOOSE to commit war crimes, nor would it be able to choose not to. While 500 years ago raping and pillaging were part of the payment for being in an army, when those things happen in more recent conflicts it is because soldiers are given carte blanche from above. The current tendencies go against the "rape & pillage" model and toward a more humane era, where disobedience of orders against human rights is becoming, or will soon become, the norm. It would be the worst time to reduce military forces to non-thinking machines, now that we are giving soldiers the chance to avoid the crimes they were forced to commit in the past.

    EDIT: I argue because I don't like the direction it's all headed, but I know that as soon as technology catches up, those weapons will be made, and they will be used.

  • DisruptedCapitalist I swear! Registered User regular
    You know what would make a great near-future sci-fi novel? An autonomous weapon patrolling the Korean DMZ starts getting bad data from its GPS and wanders off into North Korea. Of course the DPRK would interpret this as an act of war, and the rest of the novel would be about the high-stakes diplomacy to defuse the situation, eventually resulting in a peaceful dissolution of the DPRK. (Hey, I said it was fiction, right?)
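    The failsafe that plot turns on is easy to state: never trust a single position source. A toy sketch, with made-up coordinates and tolerances, of a patrol loop that halts the moment GPS and dead reckoning disagree or the fused position leaves the patrol box:

```python
# Hypothetical geofence failsafe: cross-check GPS against dead reckoning
# and refuse to keep driving on disputed data. All numbers are invented.
def inside_patrol_box(pos: tuple[float, float],
                      box=((38.28, 127.00), (38.32, 127.10))) -> bool:
    (lat_min, lon_min), (lat_max, lon_max) = box
    lat, lon = pos
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def next_action(gps: tuple[float, float],
                dead_reckoning: tuple[float, float],
                tolerance_deg: float = 0.01) -> str:
    drift = max(abs(gps[0] - dead_reckoning[0]),
                abs(gps[1] - dead_reckoning[1]))
    if drift > tolerance_deg or not inside_patrol_box(dead_reckoning):
        return "halt_and_request_recovery"
    return "continue_patrol"

# Bad GPS claims we're on station; dead reckoning says we've wandered north.
print(next_action(gps=(38.30, 127.05), dead_reckoning=(38.45, 127.05)))
# -> halt_and_request_recovery
```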

    "Simple, real stupidity beats artificial intelligence every time." -Mustrum Ridcully in Terry Pratchett's Hogfather p. 142 (HarperPrism 1996)
  • Cantelope Registered User regular
    I bet autonomous weapons will be a lot better at genocide. Also at dealing with the press.

    "Reporters want to ask questions? Oh, we have an entirely robot army, except for a few hackers in a secret facility in a desert."

    If I'm a king or dictator and I've got minority groups making my life complicated, I could sure benefit from some robot armies.

    "Sanctions? I've probably already got them. I'll just get a loan from Goldman Sachs to hire one of the megacorps' robot armies. Surely they will acknowledge the significant increase in the value of my kingdom once it's completely homogeneous." I think a lot of countries wouldn't mind a decade or so of sanctions if they could kill all those pesky religious and ethnic minorities. That's assuming they can't negotiate their way out of any sanctions. Countries that have sufficient natural resources tend to do pretty well in this regard.

  • HamHamJ Registered User regular
    FANTOMAS wrote: »
    HamHamJ wrote: »
    FANTOMAS wrote: »
    I can't understand the people claiming that automated weapons are good, as long as the most war-driven country in the world controls the monopoly on them. A country that is turning into a police state... with automated weapons that will not question the motives of those in command.

    Also, the basic flaw in all these premises is that they talk from the position of "AI weapons are on my side, which is the right side, and they will be used to stop the enemy, who is evil and wrong. And decision-making parties will never be corrupt."

    It's akin to the "keep the schools safe, put more guns in schools" mentality. And dealing with domestic social issues with military-grade weapons... I mean, really? Is anyone advocating for better weapons in the hands of the leaders we have today? Or are the people arguing FOR automated weapons thinking about a dream future scenario where leaders actually care about people?

    It's not some convoluted sci-fi-induced scenario; it's the natural progression of things: make soldiers that don't question orders, and they will be ordered to do the things that soldiers wouldn't want to do, or things that require no witnesses. In the next armed confrontation, the area will be too dangerous for humans to operate in, so no press; the only witnesses will be the ones in command, and the war crimes will just roll in.

    On the other hand, they also won't rape, pillage, and murder. And I bet incidents of the former are still far more common than soldiers nobly disobeying illegal orders.

    Absolutely, an AI wouldn't be able to CHOOSE to commit war crimes, nor would it be able to choose not to. While 500 years ago raping and pillaging were part of the payment for being in an army, when those things happen in more recent conflicts it is because soldiers are given carte blanche from above. The current tendencies go against the "rape & pillage" model and toward a more humane era, where disobedience of orders against human rights is becoming, or will soon become, the norm. It would be the worst time to reduce military forces to non-thinking machines, now that we are giving soldiers the chance to avoid the crimes they were forced to commit in the past.

    EDIT: I argue because I don't like the direction it's all headed, but I know that as soon as technology catches up, those weapons will be made, and they will be used.

    This idea seems to have no basis in reality to me. History, even current history, shows that getting soldiers to commit atrocities is not hard. Even amongst supposedly civilized forces like those of the US. At least with robot forces there are fewer people in the mix to originate bad ideas.

  • programjunkie Registered User regular
    FANTOMAS wrote: »
    HamHamJ wrote: »
    FANTOMAS wrote: »
    I can't understand the people claiming that automated weapons are good, as long as the most war-driven country in the world controls the monopoly on them. A country that is turning into a police state... with automated weapons that will not question the motives of those in command.

    Also, the basic flaw in all these premises is that they talk from the position of "AI weapons are on my side, which is the right side, and they will be used to stop the enemy, who is evil and wrong. And decision-making parties will never be corrupt."

    It's akin to the "keep the schools safe, put more guns in schools" mentality. And dealing with domestic social issues with military-grade weapons... I mean, really? Is anyone advocating for better weapons in the hands of the leaders we have today? Or are the people arguing FOR automated weapons thinking about a dream future scenario where leaders actually care about people?

    It's not some convoluted sci-fi-induced scenario; it's the natural progression of things: make soldiers that don't question orders, and they will be ordered to do the things that soldiers wouldn't want to do, or things that require no witnesses. In the next armed confrontation, the area will be too dangerous for humans to operate in, so no press; the only witnesses will be the ones in command, and the war crimes will just roll in.

    On the other hand, they also won't rape, pillage, and murder. And I bet incidents of the former are still far more common than soldiers nobly disobeying illegal orders.

    Absolutely, an AI wouldn't be able to CHOOSE to commit war crimes, nor would it be able to choose not to. While 500 years ago raping and pillaging were part of the payment for being in an army, when those things happen in more recent conflicts it is because soldiers are given carte blanche from above. The current tendencies go against the "rape & pillage" model and toward a more humane era, where disobedience of orders against human rights is becoming, or will soon become, the norm. It would be the worst time to reduce military forces to non-thinking machines, now that we are giving soldiers the chance to avoid the crimes they were forced to commit in the past.

    EDIT: I argue because I don't like the direction it's all headed, but I know that as soon as technology catches up, those weapons will be made, and they will be used.

    I disagree with this analysis. Hamas deliberately targets civilians; Israel, when unguarded, considers civilian suffering to be just deserts. NATO tries its best but doesn't always succeed, hence issues like Abu Ghraib. The insurgents in Iraq and Afghanistan use torture and terrorism and deliberately target civilians. ISIS is a wall-to-wall horror show: it conducts mass executions of men and young boys, sells captured women as sex slaves, and charges a premium for prepubescent girls, which says everything you need to know about both their moral character and that of the society surrounding them.

    Overall, in a very broad-strokes consideration, there has been an improvement on the international scale. However, I would contend it isn't due to a bottom-up disobedience of illegal orders, nor would I say it is especially consistent.

    Though I still contend AI will allow us to raise the bar. No inherent right to self-defense, no miscommunication, no bad apples, etc., etc.

    In the case of a totally unaccountable evil empire, I think AI could make the issue worse. In the real world, I think it will skew slightly better than the current state of affairs, particularly when implemented in a narrow way.

  • tbloxham Registered User regular
    FANTOMAS wrote: »
    HamHamJ wrote: »
    FANTOMAS wrote: »
    I can't understand the people claiming that automated weapons are good, as long as the most war-driven country in the world controls the monopoly on them. A country that is turning into a police state... with automated weapons that will not question the motives of those in command.

    Also, the basic flaw in all these premises is that they talk from the position of "AI weapons are on my side, which is the right side, and they will be used to stop the enemy, who is evil and wrong. And decision-making parties will never be corrupt."

    It's akin to the "keep the schools safe, put more guns in schools" mentality. And dealing with domestic social issues with military-grade weapons... I mean, really? Is anyone advocating for better weapons in the hands of the leaders we have today? Or are the people arguing FOR automated weapons thinking about a dream future scenario where leaders actually care about people?

    It's not some convoluted sci-fi-induced scenario; it's the natural progression of things: make soldiers that don't question orders, and they will be ordered to do the things that soldiers wouldn't want to do, or things that require no witnesses. In the next armed confrontation, the area will be too dangerous for humans to operate in, so no press; the only witnesses will be the ones in command, and the war crimes will just roll in.

    On the other hand, they also won't rape, pillage, and murder. And I bet incidents of the former are still far more common than soldiers nobly disobeying illegal orders.

    Absolutely, an AI wouldn't be able to CHOOSE to commit war crimes, nor would it be able to choose not to. While 500 years ago raping and pillaging were part of the payment for being in an army, when those things happen in more recent conflicts it is because soldiers are given carte blanche from above. The current tendencies go against the "rape & pillage" model and toward a more humane era, where disobedience of orders against human rights is becoming, or will soon become, the norm. It would be the worst time to reduce military forces to non-thinking machines, now that we are giving soldiers the chance to avoid the crimes they were forced to commit in the past.

    EDIT: I argue because I don't like the direction it's all headed, but I know that as soon as technology catches up, those weapons will be made, and they will be used.

    I disagree with this analysis. Hamas deliberately targets civilians; Israel, when unguarded, considers civilian suffering to be just deserts. NATO tries its best but doesn't always succeed, hence issues like Abu Ghraib. The insurgents in Iraq and Afghanistan use torture and terrorism and deliberately target civilians. ISIS is a wall-to-wall horror show: it conducts mass executions of men and young boys, sells captured women as sex slaves, and charges a premium for prepubescent girls, which says everything you need to know about both their moral character and that of the society surrounding them.

    Overall, in a very broad-strokes consideration, there has been an improvement on the international scale. However, I would contend it isn't due to a bottom-up disobedience of illegal orders, nor would I say it is especially consistent.

    Though I still contend AI will allow us to raise the bar. No inherent right to self-defense, no miscommunication, no bad apples, etc., etc.

    In the case of a totally unaccountable evil empire, I think AI could make the issue worse. In the real world, I think it will skew slightly better than the current state of affairs, particularly when implemented in a narrow way.

    Yeah, the idea that war crimes are diminishing because people are becoming more moral is just not true. It is organizations which are becoming slightly more moral. Most people will do pretty much whatever you tell them is the best thing to do. That's why this argument is pretty moot: humans will already slaughter millions based on the failed guidance of a madman, so why does it matter if that madman tells AIs to do it or his soldiers? It's incredibly rare for the soldiers who actually did the killing to rise up against the generals who ordered them to do it, since they view themselves as "invested" in the arguments they accepted and don't want to admit they were wrong.

    "That is cool" - Abraham Lincoln
  • Quid Definitely not a banana Registered User regular
    FANTOMAS wrote: »
    I can't understand the people claiming that automated weapons are good, as long as the most war-driven country in the world controls the monopoly on them. A country that is turning into a police state... with automated weapons that will not question the motives of those in command.

    Also, the basic flaw in all these premises is that they talk from the position of "AI weapons are on my side, which is the right side, and they will be used to stop the enemy, who is evil and wrong. And decision-making parties will never be corrupt."

    It's akin to the "keep the schools safe, put more guns in schools" mentality. And dealing with domestic social issues with military-grade weapons... I mean, really? Is anyone advocating for better weapons in the hands of the leaders we have today? Or are the people arguing FOR automated weapons thinking about a dream future scenario where leaders actually care about people?

    It's not some convoluted sci-fi-induced scenario; it's the natural progression of things: make soldiers that don't question orders, and they will be ordered to do the things that soldiers wouldn't want to do, or things that require no witnesses. In the next armed confrontation, the area will be too dangerous for humans to operate in, so no press; the only witnesses will be the ones in command, and the war crimes will just roll in.

    No weapon is "good." But a weapon can absolutely be better than current weapons. The most war-driven country already has a near monopoly on force, so I honestly don't have a problem with that force at least becoming less dangerous for non-combatants.

  • The Ender Registered User regular
    On the other hand, they also won't rape, pillage, and murder. And I bet incidents of the former are still far more common than soldiers nobly disobeying illegal orders.

    Well, let's look at the autonomous systems already employed: landmines and cluster bombs. Do these systems have huge civilian costs? (These, by the way, were lauded for the same reason pro-autonomous-force advocates today give high praise to the notion: you can just plop them down, mark them off, and golly gee whiz, nobody will ever need to be posted as a sentry. The enemy will toss aside their weapons and throw up their arms in surrender, because nobody will want to march into the mine siege, which never tires and never requires a re-supply.)


    A practical autonomous combat system is going to resemble a vehicle more than a person. Even if we had the technology to build a reliable & robust bipedal machine, it wouldn't be cheap enough for mass production. We'd be fielding autonomous tanks and rovers, and these machines would require fuel & maintenance (and probably a lot of coolant & lubricant). The idea of an entirely robotic army marching / crawling alone into the front is absurd, because no matter how smart the machines are, they still need someone to top them up, cool them off, and buff out the dents. Your autonomous machines are probably also going to have the same terrain limitations as a modern tank or IFV.

    So the realistic outlook is that you'll probably just take armored vehicle crews out of the combat zone - or, even more realistically, make said crews merely telepresent as they control the vehicle / 'robot' from a remote location while it's supported by conventional infantry on the ground. And then, what, you also propose that these vehicles will be programmed to be more hesitant to engage? How long does that last? Probably until the first instance where friendly casualties are taken and blamed on the robot not firing when soldiers felt it should.

    I'm just not seeing how such systems are somehow more merciful unless we escape into the realm of magical & unfeasible technology. Probably, much like drones, they'd have a minimal impact (if any at all) on civilian deaths at the front, and their nature as increasingly disposable war materiel would make them ideal candidates for activities like ambush & assassination.

  • Goumindong Registered User regular
    [Tycho?] wrote: »
    Goumindong wrote: »
    Astaereth wrote: »
    Should we refuse to give our soldiers better body armor, because that would reduce casualties and make war cheaper?

    Should we refuse to flood the field with chemical weapons, because that would reduce casualties and make war cheaper? Should we refuse to confine enemy populations to camps, because that would reduce casualties and make war cheaper? Should we refuse to use nuclear weapons on population centers, since that would reduce casualties and make war cheaper?

    Not only are those all arguments that were used, they were all successful arguments. It was only in retrospect that people realized that the implementation had some unforeseen consequences and negative repercussions.

    Well no. Chemical weapons are banned not because they're horrific but because they don't work. They're indiscriminate, so you can't target specific areas well; they do more damage to civilian populations because military members are trained to put on gas masks but civilians aren't trained or don't have them. They do damage to the landscape, etc. etc. etc.

    It's not because "oh no, it's so impersonal"; it's because we generally ban weapons whose only purpose is to kill and terrorize civilian populations, because no one wants their civilian populations killed and terrorized. The sole exception being nukes, which are banned for selfish reasons (that is, you can't really risk invading people with nukes, and keeping that hegemony of power is important to our interests).

    I asked above, but can you source this? As I mentioned, chemical weapons were banned before they were used on any sort of scale. Which means their effectiveness was unknown, and no civilians had been the victims of them. Though I'm probably missing something.

    The Geneva Conventions (though they also cite the manner of death). The primary reasons were indiscriminateness and the unknown future of such weapons, despite the low casualties they had caused.
    The bad thing about robots is that they buff your human enemies. Especially if you are in their country.

    Making people shoot other people effectively is surprisingly hard to do. If you invade with robots, every teenager ever is going to get a gun and jump at the chance to shoot inanimate objects with incredible glee and no remorse.

    Also, I want this to happen so I can hear someone say, seriously: "bots on the ground".

    It is not true that people have a hard time shooting other people. People have a hard time shooting, period. And the research which suggests a killing aversion is neither well done nor robust. The vast majority of misses against troops are high, sure. But so are the vast majority of misses on a shooting range.

    The idea that robots would be a lot better at genocide and such is just foolish. Humans are amazing at it. They will do it without even being told. Twiddle your thumbs and your army will do it for you, without you even having to worry about the paperwork or the paper trail.

  • rockrnger Registered User regular
    The Ender wrote: »
    On the other hand, they also won't rape, pillage, and murder. And I bet incidents of the former are still far more common than soldiers nobly disobeying illegal orders.

    Well, let's look at the autonomous systems already employed: landmines and cluster bombs. Do these systems have huge civilian costs? (These, by the way, were lauded for the same reason pro-autonomous-force advocates today give high praise to the notion: you can just plop them down, mark them off, and golly gee whiz, nobody will ever need to be posted as a sentry. The enemy will toss aside their weapons and throw up their arms in surrender, because nobody will want to march into the mine siege, which never tires and never requires a re-supply.)


    A practical autonomous combat system is going to resemble a vehicle more than a person. Even if we had the technology to build a reliable & robust bipedal machine, it wouldn't be cheap enough for mass production. We'd be fielding autonomous tanks and rovers, and these machines would require fuel & maintenance (and probably a lot of coolant & lubricant). The idea of an entirely robotic army marching / crawling alone into the front is absurd, because no matter how smart the machines are, they still need someone to top them up, cool them off, and buff out the dents. Your autonomous machines are probably also going to have the same terrain limitations as a modern tank or IFV.

    So the realistic outlook is that you'll probably just take armored vehicle crews out of the combat zone - or, even more realistically, make said crews merely telepresent as they control the vehicle / 'robot' from a remote location while it's supported by conventional infantry on the ground. And then, what, you also propose that these vehicles will be programmed to be more hesitant to engage? How long does that last? Probably until the first instance where friendly casualties are taken and blamed on the robot not firing when soldiers felt it should.

    I'm just not seeing how such systems are somehow more merciful unless we escape into the realm of magical & unfeasible technology. Probably, much like drones, they'd have a minimal impact (if any at all) on civilian deaths at the front, and their nature as increasingly disposable war materiel would make them ideal candidates for activities like ambush & assassination.

    So, supposing that all they do is make AFV crews safer, is that a vote for using them or against?

  • Taranis Registered User regular
    The Ender wrote: »
    On the other hand, they also won't rape, pillage, and murder. And I bet incidents of the former are still far more common than soldiers nobly disobeying illegal orders.

    Well, let's look at the autonomous systems already employed: landmines and cluster bombs. Do these systems have huge civilian costs? (These, by the way, were lauded for the same reason pro-autonomous-force advocates today give high praise to the notion: you can just plop them down, mark them off, and golly gee whiz, nobody will ever need to be posted as a sentry. The enemy will toss aside their weapons and throw up their arms in surrender, because nobody will want to march into the mine siege, which never tires and never requires a re-supply.)


    A practical autonomous combat system is going to resemble a vehicle more than a person. Even if we had the technology to build a reliable & robust bipedal machine, it wouldn't be cheap enough for mass production. We'd be fielding autonomous tanks and rovers, and these machines would require fuel & maintenance (and probably a lot of coolant & lubricant). The idea of an entirely robotic army marching / crawling alone into the front is absurd, because no matter how smart the machines are, they still need someone to top them up, cool them off, and buff out the dents. Your autonomous machines are probably also going to have the same terrain limitations as a modern tank or IFV.

    So the realistic outlook is that you'll probably just take armored vehicle crews out of the combat zone - or, even more realistically, make said crews merely telepresent as they control the vehicle / 'robot' from a remote location while it's supported by conventional infantry on the ground. And then, what, you also propose that these vehicles will be programmed to be more hesitant to engage? How long does that last? Probably until the first instance where friendly casualties are taken and blamed on the robot not firing when soldiers felt it should.

    I'm just not seeing how such systems are somehow more merciful unless we escape into the realm of magical & unfeasible technology. Probably, much like drones, they'd have a minimal impact (if any at all) on civilian deaths at the front, and their nature as increasingly disposable war materiel would make them ideal candidates for activities like ambush & assassination.

    When we talk about automation, we're talking about AI-controlled offensive weaponry. That's what the letter was referring to. Mines and cluster bombs certainly don't qualify.

    There are plenty of offensive roles that an AI drone could take over: number one man in a fireteam's stack, route clearance for IEDs, locating an HVI during raids, replacing a weapons squad for support by fire, replacing a mortar team, etc. Many of these roles (such as number one man, HVI finding, and route clearance) could use less-lethal weapons, whereas a human in that position would need a firearm. That seems more merciful to me.

  • The Ender Registered User regular
    So, supposing that all they do is make AFV crews safer, is that a vote for using them or against?

    Honestly, I'm apathetic. I suspect that drone armored vehicles will be a thing, and in some respects it will be good (someone at a base sees the vehicle they are telepresent in take fire, makes a thoughtful cost-benefit calculation, and chooses to pull their vehicle out, whereas someone inside the same vehicle might instead have made an adrenal fight-or-flight calculation where lives are lost), and in others it will be bad (the CIA or another clandestine operations org uses camouflaged attack vehicles to ambush political targets they don't like, finds that it enjoys the utility of the machinery for this purpose, and as a result the number of clandestine assassinations dramatically increases).

    I just think it's dumb to posit that we'll ever have entirely robot armies storming the beaches or whatever, and that somehow this fantasy scenario will result in fewer war atrocities.

  • Apothe0sis Have you ever questioned the nature of your reality? Registered User regular
    Apothe0sis wrote: »
    HamHamJ wrote: »
    Moridin889 wrote: »
    I think you guys are missing a big point about the automation of war and its current impossibility: sophistication. How do they identify bad guys, and how easy is it to trick their programming? If you can attach household objects to a gun so it doesn't recognize the gun, that's an issue. If it's sophisticated enough to see through most/all potential human trickery, then how is it not a true AI with sentience itself, and why would we create a new set of sentient beings and use them for war? That's just passing the cost along.

    That isn't necessary. A more reasonable near-future scenario is that there is still a human looking through a monitor and selecting targets. All the robot needs to know is "Shoot heat blobs dogs designated as hostile until they stop moving".

    Ftfy
    For reference, this is a Google Deep Dream joke.

  • JusticeforPluto Registered User regular
    [Tycho?] wrote: »
    Astaereth wrote: »
    Should we refuse to give our soldiers better body armor, because that would reduce casualties and make war cheaper?

    Should we refuse to flood the field with chemical weapons, because that would reduce casualties and make war cheaper? Should we refuse to confine enemy populations to camps, because that would reduce casualties and make war cheaper? Should we refuse to use nuclear weapons on population centers, since that would reduce casualties and make war cheaper?

    Not only are those all arguments that were used, they were all successful arguments. It was only in retrospect that people realized that the implementation had some unforeseen consequences and negative repercussions.

    This is completely wrong.

    We absolutely don't use chemical weapons because they are not cheaper, not quicker, and do not reduce casualties. The entire reason we have the perverse concept of "laws of armed conflict" is that the things they prohibit are all about avoiding the types of atrocities that do not lead to a speedy resolution of conflict.

    Rules of war are made between nations on the assumption that there is no such thing as a war of annihilation, and on the observation that certain classes of weaponry absolutely lead to prolonged, high cost attritional conflicts.

    The problem with chemical weapons is they enrage people and they're of limited effectiveness against prepared militaries - but of unlimited effectiveness against the civilian populations that tend to be near them. And after you wipe out a few cities, the remaining standing militaries tend not to think they should just surrender and end the conflict.

    None of these assumptions apply to AI weapons systems. By definition they don't, since AI weapons systems are entirely about killing fewer people.

    You're the second person to bring up the rationale for banning chemical weapons, with one reason in particular being that they're ineffective against armies and deadly against civilians.

    And yet chemical weapons were banned before they were used on a large scale - the Hague Conventions of 1899 and 1907 prohibited them and a variety of other practices. So it wasn't their effectiveness that got them banned, nor their effect on civilians, unless there's something I don't know about. Since both of you mentioned it, I assume I'm missing something.

    But chemical weapons were used in WW1. And in the Iran-Iraq War. And in Syria today.

    Which, to me, is the real issue here. We can talk about banning autonomous/AI weapons all we want, but how do we enforce said ban?

  • Options
    The EnderThe Ender Registered User regular
    JusticeforPluto wrote: »

    But chemical weapons were used in WW1. And in the Iran-Iraq War. And in Syria today.

    Which, to me, is the real issue here. We can talk about banning autonomous/AI weapons all we want, but how do we enforce said ban?

    The cost requirements for a telepresent or autonomous army are basically a de facto 'ban' at this point. A (bad) airborne drone might be feasible for some warlords, but something like a robot tank? Out of the question. It'll always make sense for backwater gangsters to leverage the poor, ignorant & young for use in their armies rather than spending money trying to get robots.

    With Love and Courage
  • Options
    JusticeforPlutoJusticeforPluto Registered User regular
    The Ender wrote: »

    The cost requirements for a telepresent or autonomous army are basically a de facto 'ban' at this point. A (bad) airborne drone might be feasible for some warlords, but something like a robot tank? Out of the question. It'll always make sense for backwater gangsters to leverage the poor, ignorant & young for use in their armies rather than spending money trying to get robots.

    I'm thinking of the future, which kinda seems like the point of this exercise. Sorry if I wasn't clear.

    To me, the real problem would be:

    1) Nations that hold too much political power. If China or the US develop such weapons, what will the rest of the world do? Those nations are too powerful to ignore, and they have veto power in the UN.
    2) Rogue nations. If they're already pariahs, they may as well be powerful pariahs.
    3) A total breakdown of international relations. If we get to a point where a world war seems certain (like the climate before WW1), then all these treaties go out the window.

  • Options
    The EnderThe Ender Registered User regular
    JusticeforPluto wrote: »
    I'm thinking of the future, which kinda seems like the point of this exercise. Sorry if I wasn't clear.

    To me, the real problem would be:

    1) Nations that hold too much political power. If China or the US develop such weapons, what will the rest of the world do? Those nations are too powerful to ignore, and they have veto power in the UN.
    2) Rogue nations. If they're already pariahs, they may as well be powerful pariahs.
    3) A total breakdown of international relations. If we get to a point where a world war seems certain (like the climate before WW1), then all these treaties go out the window.

    For problem 1):

    If a G[X] nation wants robots, they will build them. Maybe someone at the U.N. will throw a fit, maybe they won't; in practice most rich countries do what they like regardless of U.N. resolutions or vetoes.

    For problem 2):

    ...A rogue nation with sufficient capital & industry to build a sophisticated mechanized army is unlikely. Even a nation like Iraq, which had a very formidable tank force in terms of raw scale, had to get the vehicles second-hand and didn't have either well-trained crews or the support equipment (communications, visual enhancement, etc.) to make their vehicles effective. Something like a telepresent army requires a logistical infrastructure & technical proficiency among its citizens that undermines the idea of a 'rogue state'.

    For problem 3):

    WWI can be seen as a dissolution of the unsustainable old-world aristocratic powers, with WWII being a continuation of that dissolution. Those conflicts happened at a time when there had been no war at that kind of scale, little to no negative press coverage of war, etc. The old network of dukes & counts & barons and related fuck muppetry is gone, wiped out and/or rendered obsolete by the enormous conflict it kicked off.

    I'm dubious that that house of cards will be stood back up anytime in our lifetimes; ergo, it can't really fall over again.

    With Love and Courage
  • Options
    FANTOMASFANTOMAS Flan ArgentavisRegistered User regular
    I guess the dividing factor is whether you trust that robots will be used for "good", with people arguing that whatever their side is, is "the real good side".

    Also, once AI is used in combat, it's just a countdown until it is used to aid domestic peacekeeping forces, as we have already seen happen with military-type weapons. Then pick your favorite dystopian science fiction tale and start checking boxes for similarities.

    I could make a list of horrible stuff different governments have done, stepping all over or ignoring the people's choices, to illustrate why we shouldn't give the military autonomous weapon systems. Military dictatorships are not SO old; I was born in one, and the only way we could take our country back was thanks to different parts of our military coordinating with civilian efforts in order to restore democracy. If they had had "soldiers" that don't question orders, who knows, maybe I would still live in a military regime, or, most likely, I wouldn't exist, since my parents would have been murdered, like the 30,000+ citizens who were murdered by our military forces, on our own ground.

    So I really can't see how giving ANY military force that kind of power can end well, because in my experience, you have to keep the military on a really short leash, or the roles are reversed.

    Yes, with a quick verbal "boom." You take a man's peko, you deny him his dab, all that is left is to rise up and tear down the walls of Jericho with a ".....not!" -TexiKen
  • Options
    QuidQuid Definitely not a banana Registered User regular
    We already give military forces that kind of power. There are all sorts of autonomy and power given to the military on the assumption that they won't be used for a coup. You're not really providing an explanation as to why having robots would be what suddenly drives a general mad with power when we already have high-level officials in charge of nuclear weapons.

  • Options
    The EnderThe Ender Registered User regular
    edited July 2015
    FANTOMAS wrote: »
    I guess the dividing factor is whether you trust that robots will be used for "good", with people arguing that whatever their side is, is "the real good side".

    Also, once AI is used in combat, it's just a countdown until it is used to aid domestic peacekeeping forces, as we have already seen happen with military-type weapons. Then pick your favorite dystopian science fiction tale and start checking boxes for similarities.

    'Good' is not something that usually applies across a broad spectrum, imho. If my Canadian government built an army of robot tanks and then used that army to invade Yemen, I would consider this to be Not a Good Thing. If, during that Not Good invasion, a soldier piloting a robot tank decided to allow their vehicle to be destroyed / disabled by insurgents because opening fire would've endangered a lot of nearby non-combatants and the soldier himself wasn't in danger, I would consider this a Good Thing within the surrounding context of the Not Good invasion. If this subsequently became some kind of formalized doctrine / rule of engagement (with meaningful consequences if violated), I would consider this an Even Better Thing within the surrounding context of an invasion that remains Not Good.
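
    A minimal sketch of what formalizing that rule of engagement might look like, assuming hypothetical inputs from the vehicle's situational-awareness layer (nothing below reflects any real doctrine, and all names are illustrative):

    ```python
    # Hypothetical sketch of the rule of engagement above: an uncrewed
    # vehicle may not return fire when non-combatants are in the danger
    # area and no human crew is at risk; losing the vehicle is the
    # acceptable outcome.

    def may_return_fire(crew_on_board: bool,
                        noncombatants_in_danger_area: int) -> bool:
        if noncombatants_in_danger_area == 0:
            return True   # no bystanders at risk: ordinary self-defence
        if crew_on_board:
            return True   # human lives on board: left to the crew's judgment
        return False      # telepresent + bystanders: accept losing the hull

    # A telepresent tank taking fire near a crowd must hold fire:
    print(may_return_fire(crew_on_board=False, noncombatants_in_danger_area=12))  # False
    # The same situation with a crewed vehicle is not automatically barred:
    print(may_return_fire(crew_on_board=True, noncombatants_in_danger_area=12))   # True
    ```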


    Also, my favorite science fiction dystopian tale is the Warhammer 40K mythos. I'm going down my checklist for Serbia, but I don't see the related entries for 'Hive Fleet Leviathan' or 'Warboss Ghazghkull Mag Uruk Thraka'...

    The Ender on
    With Love and Courage
  • Options
    tbloxhamtbloxham Registered User regular
    The Ender wrote: »

    'Good' is not something that usually applies across a broad spectrum, imho. If my Canadian government built an army of robot tanks and then used that army to invade Yemen, I would consider this to be Not a Good Thing. If, during that Not Good invasion, a soldier piloting a robot tank decided to allow their vehicle to be destroyed / disabled by insurgents because opening fire would've endangered a lot of nearby non-combatants and the soldier himself wasn't in danger, I would consider this a Good Thing within the surrounding context of the Not Good invasion. If this subsequently became some kind of formalized doctrine / rule of engagement (with meaningful consequences if violated), I would consider this an Even Better Thing within the surrounding context of an invasion that remains Not Good.


    Also, my favorite science fiction dystopian tale is the Warhammer 40K mythos. I'm going down my checklist for Serbia, but I don't see the related entries for 'Hive Fleet Leviathan' or 'Warboss Ghazghkull Mag Uruk Thraka'...

    For the purposes of Warhammer 40K, clearly we need to steer clear of AI systems because of how vulnerable they will be to possession by the Chaos Gods! I certainly don't want my robot mall cop mutated into a gaping maw into the Warp, through which an endless army of eldritch horrors will spill. We will need to use only properly controlled and managed machine spirits, which are well appeased by the sacred oils of a tech priest.

    "That is cool" - Abraham Lincoln
  • Options
    Psychotic OnePsychotic One The Lord of No Pants Parts UnknownRegistered User regular
    tbloxham wrote: »

    For the purposes of Warhammer 40K, clearly we need to steer clear of AI systems because of how vulnerable they will be to possession by the Chaos Gods! I certainly don't want my robot mall cop mutated into a gaping maw into the Warp, through which an endless army of eldritch horrors will spill. We will need to use only properly controlled and managed machine spirits, which are well appeased by the sacred oils of a tech priest.

    Damned Cog Boys. Just to turn on my vox, they gave me a 50-page manual of "rites" that must be performed before I hit the power button.
