
Don't Fight the Future - An argument for Terminators

programjunkie Registered User regular
Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”.

The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking along with 1,000 AI and robotics researchers.

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.

Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.

“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” said the authors.

Toby Walsh, professor of AI at the University of New South Wales said: “We need to make a decision today that will shape our future and determine whether we follow a path of good. We support the call by a number of different humanitarian organisations for a UN ban on offensive autonomous weapons, similar to the recent ban on blinding lasers.”

Musk and Hawking have warned that AI is “our biggest existential threat” and that the development of full AI could “spell the end of the human race”. But others, including Wozniak, have recently changed their minds on AI, with the Apple co-founder saying that robots would be good for humans, making them like the “family pet and taken care of all the time”.

At a UN conference in Geneva in April discussing the future of weaponry, including so-called “killer robots”, the UK opposed a ban on the development of autonomous weapons, despite calls from various pressure groups, including the Campaign to Stop Killer Robots.

So, this is a thing, but I think they are misguided, for many reasons:

1. Autonomous robots can help warfighters.

Fighting wars is damaging. It takes both physical and psychological tolls. Not only do you have casualties on the battlefield, you also have suicide rates roughly 50% higher among recent veterans, lifelong crippling injuries, around $100 billion in expenditures even as the veteran population declines, and so on. And worst of all, fighting a war remotely, as the human controller of a drone, doesn't seem to help much: nearly half of drone pilots exhibit high stress, and many have symptoms severe enough to be clinical in nature.

Moreover, the idea of proliferation might not be so bad. It's one thing for a well-informed volunteer with a variety of choices to fight a war; it is another to sacrifice draftees, child soldiers, or those with no other economic options. Robots are the ethical choice for work that is both inherently dangerous and innately damaging. Just as no one would be so callous as to say we shouldn't use a robot to disarm a bomb at a distance because we want the bomb technician to feel the consequences, we shouldn't want to send the young or the poor to horrible fates in some game of chicken over ratcheting up the stakes. There will always be a place for a handful of professional-level human warriors, but robots can help mitigate the need for the rank and file.

These are costs inherent to wars fought by people, and they are costs that can be avoided by using autonomous machines.

2. Autonomous robots can help people on the other side.

Robots can be treated as expendable, and they bring a dispassion you'll never see in humans. In 2007, Blackwater guards returned fire on insurgents trying to kill them, in self-defense. Or maybe not. Whether through lies after the fact or the fog of war, 17 civilians were dead, and the evidence that there had been a firefight at all turned out to be scant. This has been repeated in both recent American wars and in wars throughout history, because it isn't always clear who the enemies are.

Robots allow a solution similar to some American rules of engagement: if you aren't certain, disengage, even at peril. All humans have a right to self-defense and a natural instinct to exercise it, but a robot can simply be programmed not to return fire at enemies shooting from an orphanage. When in doubt, you can order it to relocate, or even to let itself be destroyed, in a way that would be unconscionable to ask of an 18-year-old kid who is your fellow countryman.
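
To make that concrete, here is a minimal sketch of the kind of conservative engagement rule I mean. Every name and threshold in it is made up for illustration; it's the shape of the logic, not anything like a real targeting system:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    hostile_confidence: float  # 0.0-1.0, however the sensor/fusion stack scores it
    near_protected_site: bool  # e.g. hospital, school, orphanage

def decide(contact: Contact, taking_fire: bool) -> str:
    """Conservative rule of engagement: when in doubt, withdraw rather than shoot back."""
    if contact.near_protected_site:
        return "withdraw"  # never trade fire into a protected site
    if contact.hostile_confidence < 0.95:
        return "withdraw"  # uncertain -> disengage, even at peril
    if taking_fire:
        return "engage"
    return "hold"

# Taking fire from an orphanage: the robot just leaves.
print(decide(Contact(hostile_confidence=0.99, near_protected_site=True), taking_fire=True))
```

You can't hold a human soldier to that standard, because a human has a survival instinct; a machine doesn't.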

3. The bar is low.

The Holocaust. The Rape of Nanking. The Armenian Genocide. Over a century of WMD usage. Revenge killings, etc. An estimated 50 million innocent civilians were killed in WW2. Even the Mongols, lacking any industrialized technology, killed a substantial portion of the entire world's population (2.5% of all humans in the world, by some estimates). Whether with sticks and stones, or using the power of the atom, humanity is capable of untold and horrible devastation.

Autonomous robots, properly programmed, could potentially lack the capacity for the blackest depths of evil and bigotry that help fuel engines of death and destruction. "Killer robots" could kill many innocent people, and they'd still pale in comparison to what humanity itself is capable of. We don't need to make a system that is perfect; we need to make a system better than we are at our very worst, and that's a very, very low bar.

4. It's going to happen anyways.

Much like any number of other innovations, the writing is on the wall. By being the first to create it, the first to set norms and regulations, and by being able to use these weapons as much as a shield as a sword, we can help shape the future. We might delay the inevitable, but a permanent prohibition on autonomous weapons is unlikely. Arguably, we've always dabbled in this by using animals in warfare, which are independent living creatures, but which lack the capability to be purpose-built.

If it isn't us, it will be someone, so it should be us.

--

Some sources:
http://www.theguardian.com/technology/2015/jul/27/musk-wozniak-hawking-ban-ai-autonomous-weapons
http://www.latimes.com/nation/la-na-veteran-suicide-20150115-story.html
http://www.salon.com/2015/03/06/a_chilling_new_post_traumatic_stress_disorder_why_drone_pilots_are_quitting_in_record_numbers_partner/
http://www.va.gov/vetdata/docs/quickfacts/Gdx-trends.pdf
http://www.nytimes.com/2015/04/14/us/ex-blackwater-guards-sentenced-to-prison-in-2007-killings-of-iraqi-civilians.html?_r=0
https://en.wikipedia.org/wiki/World_War_II_casualties#Total_deaths
http://bedejournal.blogspot.com/2011/12/how-bad-were-mongols.html


Posts

  • Polaritie Sleepy Registered User regular
    Pandora's box begs to be opened, but that doesn't mean we shouldn't try to delay it.

    Steam: Polaritie
    3DS: 0473-8507-2652
    Switch: SW-5185-4991-5118
    PSN: AbEntropy
  • programjunkie Registered User regular
    Bonus point:

    5. Military projects have non-military uses.

    Attacking the problem of making robust AI from a variety of sectors, using a variety of funding sources, is the surest way to do it. A substantial portion of human innovation has been driven by defense applications, most relevantly for this discussion: the internet.

    An autonomous fighting machine that is good at its job requires a huge number of subsystems that are relevant to any number of other uses. Heck, being able to navigate buildings, debris, a variety of terrain (if non-flying) is a huge problem in AI development.

    The very same principles, even the very same code can be taken and reused in other applications. By allowing for both defense and civilian budgets to be applied to the same project, we can accelerate it.

  • Phillishere Registered User regular
    Polaritie wrote: »
    Pandora's box begs to be opened, but that doesn't mean we shouldn't try to delay it.

    In the long run, the extinction of the human race is inevitable. Why delay the inevitable?

  • Echo ski-bap ba-dap Moderator, Administrator admin
    A bit related: here's a TED talk by a Google guy explaining how their self-driving cars see the road and the environment, and some of the troubles they face.

    https://www.youtube.com/watch?v=tiwVMrTLUWg

  • Richy Registered User regular
    programjunkie wrote: »
    Bonus point:

    5. Military projects have non-military uses.

    Attacking the problem of making robust AI from a variety of sectors, using a variety of funding sources is the most robust way to do it. A substantial portion of human innovation has been driven by defense applications, most appropriately for this discussion: the internet.

    An autonomous fighting machine that is good at its job requires a huge number of subsystems that are relevant to any number of other uses. Heck, being able to navigate buildings, debris, a variety of terrain (if non-flying) is a huge problem in AI development.

    The very same principles, even the very same code can be taken and reused in other applications. By allowing for both defense and civilian budgets to be applied to the same project, we can accelerate it.

    And on the flip side: non-military projects have military uses. All the developments needed for an offensive military weapon AI can be made in the civilian sector. Things like navigation in dynamic irregular terrain, person recognition, predicting a person's intent, dodging obstacles, environment exploration, etc., could all be useful for, say, an automated taxi. Then strap a gun to it and add an extra subroutine and you have an autonomous tank.
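
    To make the dual-use point concrete, here's a toy sketch. Every name in it is hypothetical and it has nothing to do with any real system; the point is just that the civilian stack is where all the hard engineering lives, and the "extra subroutine" is a few lines bolted on top of it:

```python
class CivilianNavStack:
    """The expensive part: perception, mapping, route planning."""

    def perceive(self, sensor_frame):
        # Stand-in for a real detection/classification pipeline.
        return [{"kind": "pedestrian", "position": (12.0, 3.5)}]

    def plan_route(self, start, goal, obstacles):
        # Stand-in for a real motion planner; here it's just a straight line.
        return [start, goal]

class RoboTaxi(CivilianNavStack):
    def step(self, sensor_frame):
        detections = self.perceive(sensor_frame)
        return self.plan_route((0, 0), (100, 0), obstacles=detections)

class HypotheticalArmedVariant(CivilianNavStack):
    """The 'extra subroutine': trivially small next to the stack it reuses."""

    def step(self, sensor_frame):
        detections = self.perceive(sensor_frame)
        flagged = [d for d in detections if d["kind"] == "on_target_list"]
        return {
            "route": self.plan_route((0, 0), (100, 0), obstacles=detections),
            "engage": flagged,  # the only genuinely new logic
        }
```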

    In other words, a moratorium on developing these weapons will have no effect.

    For the same reason, we don't have a moratorium on developing space-based weapons. After all, a space-based weapon is the satellites we already have with missiles that can work in a vacuum (which we can strap together easily enough). We have a moratorium on building and launching these weapons, and that's what has effectively prevented the militarization of space so far.

  • HamHamJ Registered User regular
    I agree that robot armies would probably be safer for everyone.

    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Astaereth In the belly of the beast Registered User regular
    Robots are the next step in warfare; but it's important that we proceed so we can get to the step after that. Once both sides are using AI machines, the logical extension is to eschew physical combat entirely in favor of an abstracted digital contest, a literal war game whose outcome by agreement determines the result of intractable disputes between nations. That is when war truly becomes bloodless.

    (I'm only a little joking here.)

  • Richy Registered User regular
    HamHamJ wrote: »
    I agree that robot armies would probably be safer for everyone.

    Well no. They'd be safer for the people who have the robots. For the people without robots, war will become much, much bloodier.

    After all, for Nation 1, war now has only a financial cost, no human cost. You literally just throw more money at war to create more robots and ship them over until you win. Charge of the Light Brigade all day long until the enemy folds, and the only people who will balk at the massive losses the strategy entails are the accountants. Nation 2, meanwhile, has no robots, and has to send wave after wave of their own men to fight an endlessly oncoming robot army.

  • zagdrob Registered User regular
    I've seen too many people who left part of themselves (literally and figuratively) in Iraq, Afghanistan, Vietnam, Korea, etc. Let machines do the fighting.

    God knows it won't make things any worse, and I agree that I'd rather have a machine making decisions about using force than some scared 18 year old.

  • Harry Dresden Registered User regular
    edited July 2015
    zagdrob wrote: »
    I've seen too many people who left part of themselves (literally and figuratively) in Iraq, Afghanistan, Vietnam, Korea, etc. Let machines do the fighting.

    God knows it won't make things any worse, and I agree that I'd rather have a machine making decisions about using force than some scared 18 year old.

    Sure it can. Humans can't be hacked by third parties and whatever bullshit robots do if they go off the reservation. They're also harder to kill than humans are.

    edit: The danger increases if an AI controls multiple robots ala Iron Legion. If a general in an army goes rogue they generally don't take the army with them, this might be a possibility with hostile A.I's.

    Harry Dresden on
  • Hahnsoo1 Make Ready. We Hunt. Registered User, Moderator, Administrator admin
    Astaereth wrote: »
    Robots are the next step in warfare; but it's important that we proceed so we can get to the step after that. Once both sides are using AI machines, the logical extension is to eschew physical combat entirely in favor of an abstracted digital contest, a literal war game whose outcome by agreement determines the result of intractable disputes between nations. That is when war truly becomes bloodless.

    (I'm only a little joking here.)
    You are missing an important intermediate step:
    [image]
    Crash and burn, dude

  • Madican No face Registered User regular
    Richy wrote: »
    HamHamJ wrote: »
    I agree that robot armies would probably be safer for everyone.

    Well no. They'd be safer for the people who have the robots. For the people without robots, war will become much, much bloodier.

    After all, for Nation 1, war now has only a financial cost, no human cost. You literally just throw more money at war to create more robots and ship them over until you win. Charge of the Light Brigade all day long until the enemy folds, and the only people who will balk at the massive losses the strategy entails are the accountants. Nation 2, meanwhile, has no robots, and has to send wave after wave of their own men to fight an endlessly oncoming robot army.

    Ah yes, the Zapp Brannigan method of warfare.

    On a more serious note, I do not like the assumption that roboticized warfare is inevitable. I think that just having a fancy toy increases the desire to use the fancy toy, as we see with police and their military surplus. When it's just a matter of money to decide whether or not to wage war, since personnel largely won't be at risk, then it's much easier to decide to send those toys out to "play" than it is to send out thousands of living beings.

  • Julius Captain of Serenity on my ship Registered User regular
    Richy wrote: »
    Bonus point:

    5. Military projects have non-military uses.

    Attacking the problem of making robust AI from a variety of sectors, using a variety of funding sources is the most robust way to do it. A substantial portion of human innovation has been driven by defense applications, most appropriately for this discussion: the internet.

    An autonomous fighting machine that is good at its job requires a huge number of subsystems that are relevant to any number of other uses. Heck, being able to navigate buildings, debris, a variety of terrain (if non-flying) is a huge problem in AI development.

    The very same principles, even the very same code can be taken and reused in other applications. By allowing for both defense and civilian budgets to be applied to the same project, we can accelerate it.

    And on the flip side: non-military projects have military uses. All the developments needed for an offensive military weapon AI can be made in the civilian sector. Things like navigation in dynamic irregular terrain, person recognition, intent prediction of person, dodging obstacles, environment exploration, etc., could all be useful say for an automated taxi. Then strap a gun to it and add an extra subroutine and you have an autonomous tank.

    In other words, a moratorium on developing these weapons will have no effect.

    For the same reason, we don't have a moratorium on developing space-based weapons. After all, a space-based weapon is the satellites we already have with missiles that can work in a vacuum (which we can strap together easily enough). We have a moratorium on building and launching these weapons, and that's what has effectively prevented the militarization of space so far.

    Which is why the call to ban the use of AI for war makes sense. If the military industry will develop these weapons despite them not being allowed in war, then why allow them?

  • Richy Registered User regular
    zagdrob wrote: »
    I've seen too many people who left part of themselves (literally and figuratively) in Iraq, Afghanistan, Vietnam, Korea, etc. Let machines do the fighting.

    God knows it won't make things any worse, and I agree that I'd rather have a machine making decisions about using force than some scared 18 year old.

    Sure it can. Humans can't be hacked by third parties and whatever bullshit robots do if they go off the reservation. They're also harder to kill than humans are.

    edit: The danger increases if an AI controls multiple robots ala Iron Legion. If a general in an army goes rogue they generally don't take the army with them, this might be a possibility with hostile A.I's.

    I've got 6000 years of recorded nationalistic brainwashing I'd like to introduce you to.

  • zagdrob Registered User regular
    zagdrob wrote: »
    I've seen too many people who left part of themselves (literally and figuratively) in Iraq, Afghanistan, Vietnam, Korea, etc. Let machines do the fighting.

    God knows it won't make things any worse, and I agree that I'd rather have a machine making decisions about using force than some scared 18 year old.

    Sure it can. Humans can't be hacked by third parties and whatever bullshit robots do if they go off the reservation. They're also harder to kill than humans are.

    Milgram begs to differ. Not to mention that people can do all kinds of awful shit all on their own.

    A machine can do what it's programmed to do, and while there is the possibility of error or emergent behavior, they aren't necessarily going to be more error prone than humans.

    The big question is how they are used. If we end up with something akin to a Culture 'knife missile' that is intelligent and surgical enough to only kill valid targets based on their rules of engagement / kill list, the big question is who is setting those rules of engagement or the kill list.

    Drones have lowered the threshold for use of force, but they have also lowered the collateral damage and risk of error (in theory). It comes back to their use and the policies / doctrine around them. Lowering the threshold for use of force might be a net negative, but there are a lot of possible gains. Imagine being able to put a robotic soldier / guard in every village (or girl's school), that fires up and drives off groups like ISIS.

    One of the big limitations in our current messes is simply having forces in tribal / rural areas that are reliable and competent. Being able to POLICE removes the need for a lot of military options.

  • Harry Dresden Registered User regular
    edited July 2015
    zagdrob wrote: »
    zagdrob wrote: »
    I've seen too many people who left part of themselves (literally and figuratively) in Iraq, Afghanistan, Vietnam, Korea, etc. Let machines do the fighting.

    God knows it won't make things any worse, and I agree that I'd rather have a machine making decisions about using force than some scared 18 year old.

    Sure it can. Humans can't be hacked by third parties and whatever bullshit robots do if they go off the reservation. They're also harder to kill than humans are.

    Milgram begs to differ. Not to mention that people can do all kinds of awful shit all on their own.

    Not like this, they don't. And despite this being in effect, the US military as a whole does protect us, and there's less danger involved if a single soldier goes homicidal.
    A machine can do what it's programmed to do, and while there is the possibility of error or emergent behavior, they aren't necessarily going to be more error prone than humans.

    The big question is how they are used. If we end up with something akin to a Culture 'knife missile' that is intelligent and surgical enough to only kill valid targets based on their rules of engagement / kill list, the big question is who is setting those rules of engagement or the kill list.

    The US hasn't got a good history with that. You think that'll be any different with controlling robots?
    Drones have lowered the threshold for use of force, but they have also lowered the collateral damage and risk of error (in theory). It comes back to their use and the policies / doctrine around them. Lowering the threshold for use of force might be a net negative, but there are a lot of possible gains. Imagine being able to put a robotic soldier / guard in every village (or girl's school), that fires up and drives off groups like ISIS.

    This evades the hacking subject, which is a real danger AIs and robots have that humans don't.
    One of the big limitations in our current messes is simply having forces in tribal / rural areas that are reliable and competent. Being able to POLICE removes the need for a lot of military options.

    Not being human doesn't automatically mean they'll be reliable or competent. Especially if they've got a weakness for being hacked. IIRC the US military has already lost a few drones to that.

    Harry Dresden on
  • tbloxham Registered User regular
    zagdrob wrote: »
    zagdrob wrote: »
    I've seen too many people who left part of themselves (literally and figuratively) in Iraq, Afghanistan, Vietnam, Korea, etc. Let machines do the fighting.

    God knows it won't make things any worse, and I agree that I'd rather have a machine making decisions about using force than some scared 18 year old.

    Sure it can. Humans can't be hacked by third parties and whatever bullshit robots do if they go off the reservation. They're also harder to kill than humans are.

    Milgram begs to differ. Not to mention that people can do all kinds of awful shit all on their own.

    Not like this, they don't. And despite this being in effect the US military does protect us as a whole, and there's less danger involved if a soldier goes homicidal.
    A machine can do what it's programmed to do, and while there is the possibility of error or emergent behavior, they aren't necessarily going to be more error prone than humans.

    The big question is how they are used. If we end up with something akin to a Culture 'knife missile' that is intelligent and surgical enough to only kill valid targets based on their rules of engagement / kill list, the big question is who is setting those rules of engagement or the kill list.

    The US hasn't got a good history with that. You think that'll be any different with controlling robots?
    Drones have lowered the threshold for use of force, but they have also lowered the collateral damage and risk of error (in theory). It comes back to their use and the policies / doctrine around them. Lowering the threshold for use of force might be a net negative, but there are a lot of possible gains. Imagine being able to put a robotic soldier / guard in every village (or girl's school), that fires up and drives off groups like ISIS.

    This evades the hacking subject, which is a real danger AIs and robots that have humans don't.
    One of the big limitations in our current messes is simply having forces in tribal / rural areas that are reliable and competent. Being able to POLICE removes the need for a lot of military options.

    Not being human doesn't automatically mean they'll be reliable or competent. Especially if they're got a weakness for being hacked. IIRC the US military has already lost a few drones to that.

    Humans can be corrupted, lazy, and bribed.
    Robots can be hacked or poorly programmed.

    Both have their risks.

    "That is cool" - Abraham Lincoln
  • Harry Dresden Registered User regular
    tbloxham wrote: »
    zagdrob wrote: »
    zagdrob wrote: »
    I've seen too many people who left part of themselves (literally and figuratively) in Iraq, Afghanistan, Vietnam, Korea, etc. Let machines do the fighting.

    God knows it won't make things any worse, and I agree that I'd rather have a machine making decisions about using force than some scared 18 year old.

    Sure it can. Humans can't be hacked by third parties and whatever bullshit robots do if they go off the reservation. They're also harder to kill than humans are.

    Milgram begs to differ. Not to mention that people can do all kinds of awful shit all on their own.

    Not like this, they don't. And despite this being in effect the US military does protect us as a whole, and there's less danger involved if a soldier goes homicidal.
    A machine can do what it's programmed to do, and while there is the possibility of error or emergent behavior, they aren't necessarily going to be more error prone than humans.

    The big question is how they are used. If we end up with something akin to a Culture 'knife missile' that is intelligent and surgical enough to only kill valid targets based on their rules of engagement / kill list, the big question is who is setting those rules of engagement or the kill list.

    The US hasn't got a good history with that. You think that'll be any different with controlling robots?
    Drones have lowered the threshold for use of force, but they have also lowered the collateral damage and risk of error (in theory). It comes back to their use and the policies / doctrine around them. Lowering the threshold for use of force might be a net negative, but there are a lot of possible gains. Imagine being able to put a robotic soldier / guard in every village (or girl's school), that fires up and drives off groups like ISIS.

    This evades the hacking subject, which is a real danger AIs and robots that have humans don't.
    One of the big limitations in our current messes is simply having forces in tribal / rural areas that are reliable and competent. Being able to POLICE removes the need for a lot of military options.

    Not being human doesn't automatically mean they'll be reliable or competent. Especially if they're got a weakness for being hacked. IIRC the US military has already lost a few drones to that.

    Humans can be corrupted,lazy, and bribed.
    Robots can be hacked or poorly programmed.

    Both have their risks.

    True, but one is less lethal when they turn on us.

  • Richy Registered User regular
    Except you can also fix robots with proper code reviews and security measures. You can't fix humans.

  • tbloxham Registered User regular
    tbloxham wrote: »
    zagdrob wrote: »
    zagdrob wrote: »
    I've seen too many people who left part of themselves (literally and figuratively) in Iraq, Afghanistan, Vietnam, Korea, etc. Let machines do the fighting.

    God knows it won't make things any worse, and I agree that I'd rather have a machine making decisions about using force than some scared 18 year old.

    Sure it can. Humans can't be hacked by third parties and whatever bullshit robots do if they go off the reservation. They're also harder to kill than humans are.

    Milgram begs to differ. Not to mention that people can do all kinds of awful shit all on their own.

    Not like this, they don't. And despite this being in effect the US military does protect us as a whole, and there's less danger involved if a soldier goes homicidal.
    A machine can do what it's programmed to do, and while there is the possibility of error or emergent behavior, they aren't necessarily going to be more error prone than humans.

    The big question is how they are used. If we end up with something akin to a Culture 'knife missile' that is intelligent and surgical enough to only kill valid targets based on their rules of engagement / kill list, the big question is who is setting those rules of engagement or the kill list.

    The US hasn't got a good history with that. You think that'll be any different with controlling robots?
    Drones have lowered the threshold for use of force, but they have also lowered the collateral damage and risk of error (in theory). It comes back to their use and the policies / doctrine around them. Lowering the threshold for use of force might be a net negative, but there are a lot of possible gains. Imagine being able to put a robotic soldier / guard in every village (or girl's school), that fires up and drives off groups like ISIS.

    This evades the hacking subject, which is a real danger AIs and robots that have humans don't.
    One of the big limitations in our current messes is simply having forces in tribal / rural areas that are reliable and competent. Being able to POLICE removes the need for a lot of military options.

    Not being human doesn't automatically mean they'll be reliable or competent. Especially if they're got a weakness for being hacked. IIRC the US military has already lost a few drones to that.

    Humans can be corrupted,lazy, and bribed.
    Robots can be hacked or poorly programmed.

    Both have their risks.

    True, but one is less lethal when they turn on us.

    At the 'mass' scale humans can be susceptible to brainwashing, nationalism and crowd mentalities. Humans can also be individually placed in charge of massively destructive weapon systems (nuclear submarine captain)

    Roboguard vs Normal Guard are equally lethal if they turn on those they are protecting.

    "That is cool" - Abraham Lincoln
  • Harry Dresden Registered User regular
    Richy wrote: »
    Except you can also fix robots with proper code reviews and security measures. You can't fix humans.

    Except by then the damage has already been done: a homicidal robot/drone with missiles is going to cause much more physical harm than a rogue soldier. If that happens on American soil, or to American soldiers in a theater, it's going to be a scandal for employing AIs in a warfare capacity. God forbid it happens with an AI slaved to multiple robots/drones.

  • programjunkie Registered User regular
    edited July 2015
    zagdrob wrote: »
    I've seen too many people who left part of themselves (literally and figuratively) in Iraq, Afghanistan, Vietnam, Korea, etc. Let machines do the fighting.

    God knows it won't make things any worse, and I agree that I'd rather have a machine making decisions about using force than some scared 18 year old.

    Sure it can. Humans can't be hacked by third parties and whatever bullshit robots do if they go off the reservation. They're also harder to kill than humans are.

    As others note, they absolutely can. Child soldiers are a great and terrible example of this, where brutalization, drugs, and forced participation in atrocities are exactly the tools used to do it.

    More broadly, "Do this or go to jail / we kill you / we kill your family," is consistently successful across all of recorded history. To use the Mongols again, they would use people's fear of being killed to induce a surrender, then turn around and use those very same people as high attrition cannon fodder for the next fight to minimize their own losses. Also, contemporary accounts suggest that people were so afraid of them, an unarmed Mongol could tell a group of adults "Stay here while I get my sword," and then they would actually remain there, rather than arming themselves, setting an ambush, or fleeing, and dutifully allow themselves to be executed by said Mongol. That also tracks fairly well with contemporary videos (not that I suggest you watch them) of people being executed in wartime.

    Quite simply, no small amount of warfare has revolved around using what are, in retrospect, almost insultingly simple and obviously dishonest techniques to "hack" humans, whether to get them to lie down and die willingly, or to get them to commit atrocities as obscene as cutting pregnant women apart with machetes.

    The difference is, robots can only be hacked by the tiniest percentage of super skilled experts, whereas humans can be hacked by any illiterate thug with a weapon.

    Edit: It's worth noting that psychological operations is a modern day specialty in contemporary armies, and while there are good guy ways to implement it (giving away soccer balls), both traditional (Mongols again), and modern forces (ISIS) use brutality not just because of intrinsic cruelty and evil, but as a deliberate long term psychological weapon.

    Also, I'm of the opinion it would be possible to generate weakly suicidogenic propaganda aimed at a specific ideology. We'd be talking single digit percentages even when robustly applied, but I think it could be done.
    edit: The danger increases if an AI controls multiple robots ala Iron Legion. If a general in an army goes rogue they generally don't take the army with them, this might be a possibility with hostile A.I's.

    What is a military coup if not that? In countries with less stable systems of government, loyalty of officers is a top concern.

    As an example, https://en.wikipedia.org/wiki/Free_Officers_Movement_(Egypt)

    In the case of humans, it doesn't even need to be top generals. Nasser was only a lieutenant colonel and he ended up leading a nation.

    This is a case for having distinct smaller units without an absolute singular controller which can be disrupted or co-opted, but this is an existing problem with difficult solutions, which sometimes veer towards the exceedingly unethical (mass political purges).

    programjunkie on
  • Incenjucar VChatter Seattle, WA Registered User regular
    The differences in speed and scale between hacking a machine and hacking a person are fairly large.

  • programjunkie Registered User regular
    Incenjucar wrote: »
    The differences in speed and scale between hacking a machine and hacking a person are fairly large.

    It depends. Sophisticated hacks tend to take months or years to develop, but little time per individual application. Hacking people requires zero additional research (unless improving on existing methods), but can be slow, depending on the methods and desired outcomes.

    The obvious response is to have a robust defense, but any robot-based army is going to naturally want that for selfish reasons, so the requirements for the best moral outcome and the best practical outcome are linked.

  • Astaereth In the belly of the beast Registered User regular
    The argument that anything which makes war less costly and bloody is bad because it will lead to more war is a poor one. War is only bad because it's costly and bloody. This is why drones are an improvement over bombings, and robots will be an improvement over infantry.

  • Quid Definitely not a banana Registered User regular
    zagdrob wrote: »
    I've seen too many people who left part of themselves (literally and figuratively) in Iraq, Afghanistan, Vietnam, Korea, etc. Let machines do the fighting.

    God knows it won't make things any worse, and I agree that I'd rather have a machine making decisions about using force than some scared 18 year old.

    Sure it can. Humans can't be hacked by third parties and whatever bullshit robots do if they go off the reservation. They're also harder to kill than humans are.

    edit: The danger increases if an AI controls multiple robots ala Iron Legion. If a general in an army goes rogue they generally don't take the army with them, this might be a possibility with hostile A.I's.

    Hacking other humans to attack or move towards the wrong target is the entire idea behind military deception.

  • Trace GNU Terry Pratchett; GNU Gus; GNU Carrie Fisher; GNU Adam We Registered User regular
    I find any effort to limit AI research in any field to be a bad thing.

    I've got unlimited amounts of respect for most of those people but I disagree with them most strongly about this.

  • Harry Dresden Registered User regular
    tbloxham wrote: »
    tbloxham wrote: »
    zagdrob wrote: »
    zagdrob wrote: »
    I've seen too many people who left part of themselves (literally and figuratively) in Iraq, Afghanistan, Vietnam, Korea, etc. Let machines do the fighting.

    God knows it won't make things any worse, and I agree that I'd rather have a machine making decisions about using force than some scared 18 year old.

    Sure it can. Humans can't be hacked by third parties and whatever bullshit robots do if they go off the reservation. They're also harder to kill than humans are.

    Milgram begs to differ. Not to mention that people can do all kinds of awful shit all on their own.

    Not like this, they don't. And despite this being in effect the US military does protect us as a whole, and there's less danger involved if a soldier goes homicidal.
    A machine can do what it's programmed to do, and while there is the possibility of error or emergent behavior, they aren't necessarily going to be more error prone than humans.

    The big question is how they are used. If we end up with something akin to a Culture 'knife missile' that is intelligent and surgical enough to only kill valid targets based on their rules of engagement / kill list, the big question is who is setting those rules of engagement or the kill list.

    The US hasn't got a good history with that. You think that'll be any different with controlling robots?
    Drones have lowered the threshold for use of force, but they have also lowered the collateral damage and risk of error (in theory). It comes back to their use and the policies / doctrine around them. Lowering the threshold for use of force might be a net negative, but there are a lot of possible gains. Imagine being able to put a robotic soldier / guard in every village (or girl's school), that fires up and drives off groups like ISIS.

    This evades the hacking subject, which is a real danger AIs and robots that have humans don't.
    One of the big limitations in our current messes is simply having forces in tribal / rural areas that are reliable and competent. Being able to POLICE removes the need for a lot of military options.

    Not being human doesn't automatically mean they'll be reliable or competent. Especially if they're got a weakness for being hacked. IIRC the US military has already lost a few drones to that.

    Humans can be corrupted,lazy, and bribed.
    Robots can be hacked or poorly programmed.

    Both have their risks.

    True, but one is less lethal when they turn on us.

    At the 'mass' scale humans can be susceptible to brainwashing, nationalism and crowd mentalities. Humans can also be individually placed in charge of massively destructive weapon systems (nuclear submarine captain)

    True. But nationalism is an instrument for the state to control people, in contrast to robots turning on their masters. Crowd mentalities are chaotic, not the ice-cold analytical murder robots would commit.
    Roboguard vs Normal Guard are equally lethal if they turn on those they are protecting.

    One is harder to destroy than the other when they go rogue. And normal human troops don't carry missiles with them.
    As others note, they absolutely can. Child soldiers are a great and terrible example of such, where brutalization, drugs, and forcing people to participate in atrocities.

    That's in third-world countries, preying on the weakest members of society; they're not conscripting Navy SEALs or the Air Force with those tactics.
    More broadly, "Do this or go to jail / we kill you / we kill your family," is consistently successful across all of recorded history. To use the Mongols again, they would use people's fear of being killed to induce a surrender, then turn around and use those very same people as high attrition cannon fodder for the next fight to minimize their own losses. Also, contemporary accounts suggest that people were so afraid of them, an unarmed Mongol could tell a group of adults "Stay here while I get my sword," and then they would actually remain there, rather than arming themselves, setting an ambush, or fleeing, and dutifully allow themselves to be executed by said Mongol. That also tracks fairly well with contemporary videos (not that I suggest you watch them) of people being executed in wartime.

    Quite simply, no small amount of warfare has revolved around using what, in retrospect, is almost insultingly simple and obviously dishonest techniques to "hack" humans, whether to get them to lie down and die willingly, or to get them to commit atrocities as obscene as cutting pregnant women apart with machetes.

    Which produces reluctant soldiers, forcing the vulnerable to do horrible deeds - they're not the top-of-the-line soldiers or generals in the armed forces.
    The difference is, robots can only be hacked by the tinniest percentage of super skilled experts, whereas humans can be hacked by any illiterate thug with a weapon.

    Agreed. But we're not just talking about humans here, this is about people in the military, who aren't as exposed to that hacking on the front lines.
    Edit: It's worth noting that psychological operations is a modern day specialty in contemporary armies, and while there are good guy ways to implement it (giving away soccer balls), both traditional (Mongols again), and modern forces (ISIS) use brutality not just because of intrinsic cruelty and evil, but as a deliberate long term psychological weapon.

    Also, I'm of the opinion it would be possible to generate weakly suicidogenic propaganda aimed at a specific ideology. We'd be talking single digit percentages even when robustly applied, but I think it could be done.
    edit: The danger increases if an AI controls multiple robots ala Iron Legion. If a general in an army goes rogue they generally don't take the army with them, this might be a possibility with hostile A.I's.

    What is a military coup if not that? In countries with less stable systems of government, loyalty of officers is a top concern.

    As an example, https://en.wikipedia.org/wiki/Free_Officers_Movement_(Egypt)

    In the case of humans, it doesn't even need to be top generals. Nasser was only a lieutenant colonel and he ended up leading a nation.

    Can't military coups lead to a divided army and civil war? With AIs it isn't that difficult, and they'd have easier control over their resources. With AIs the possibility is higher that it will be the top "generals" who go rogue, and they'd have less danger of their subordinates betraying them, short of being hacked themselves. If the top military AIs slaved to the military's top armaments and robots are all corrupted, the damage they'd create and the lives they'd put in danger (civilians and the government forces sent to destroy them alike) will be harder to contain than with ordinary military forces. Especially with drones, and if we get competent killer robots under their command. The Beltway Sniper was bad enough; it'd be ten times as devastating if instead he'd been an AI/robot/drone.
    This is a case of having distinct smaller units without an absolute singular controller which can be disrupted or co-opted, but this is an existing problem with difficult solution, which sometimes vere towards the exceedingly unethical (mass political purges).

    ?

  • Taranis Registered User regular
    edited July 2015
    You seem to have very specific ideas of how AI soldiers would work. There's no reason to believe that AI soldiers would be inherently more difficult to kill, better at killing, or even more heavily armed. They could be as good as human soldiers in pretty much every way, except they're not human and that's inherently desirable.

    The required tactical competency of AI soldiers right now is pretty low. ISIS, Al Qaeda, and the Taliban aren't great fighters themselves, so simply having more, expendable soldiers would be all you'd need.

    On top of all that, there's no way we wouldn't have a human component at some level which would act as a safeguard. Initial implementations would certainly have small numbers of AI soldiers alongside human ones. As they'd develop we'd work the kinks out and the initial risks would be minimal. We'd never phase out humans overnight.
    And normal human troops don't carry missiles with them.

    This is false.

    Taranis on
  • Astaereth In the belly of the beast Registered User regular
    In a warzone, AIs will accompany human units to accomplish specific tasks that are too dangerous or difficult, just like the bomb squad robots we have today. Outside of a warzone, AIs will be used the way we use drones now, to enter a country where we have no troops and accomplish specific tasks (killing or capturing terrorists, for instance) without risking our troops and probably with even fewer unintended civilian casualties.

  • tbloxham Registered User regular
    edited July 2015
    Taranis wrote: »
    You seem to have very specific ideas of how AI soldiers would work. There's no reason to believe that AI soldiers would be inherently more difficult to kill, better at killing, or even more heavily armed. They could be as good as human soldiers in pretty much every way, except they're not human and that's inherently desirable.

    The required tactical competency of AI soldiers now is pretty low. ISIS, Al Qaeda, and the Taliban aren't great fighters themselves, so the requirement for AI troops is pretty low. Having more, expendable soldiers would be all you'd need.

    On top of all that, there's no way we wouldn't have a human component at some level which would act as a safeguard. Initial implementations would certainly have small numbers of AI soldiers alongside human ones. As they'd develop we'd work the kinks out and the initial risks would be minimal. We'd never phase out humans overnight.
    And normal human troops don't carry missiles with them.

    This is false.

    Yeah, I'm not comparing some mall security guard to ED209 and saying 'each is equally destructive'. If I'm building a mall security bot, then I give him the same (or less) armaments than I would allow a human security guard to carry.

    Saying that robots inherently have more destructive power is false. Robots are desirable due to their predictability and accountability, not their potential for enormous destructive force. If I'm building 'mall bot', then he might have a pistol equivalent, but he would be programmed to only use said pistol for defending others. He has no need to defend himself. He would likely defend himself passively by being resilient to attacks with bats and simple weapons, featuring a sturdy plexiglass shell or something. If someone goes after him with an AK47, I don't need him to survive, I'll be sending in support just like I would with a human guard.

    If I'm building a robot to defend my food supplies in hostile territory then yet again I'm giving it the same armaments as a human in the same situation would have.

    And heck, even if robots become very common in warfare, to the point where it changes the guns I need to give my defense bots then it isn't going to make them more lethal to people. A high speed armor piercing bullet is not particularly more dangerous than a simple slug to a person.

    edit - To be clear, I'm agreeing with you in reference to my earlier point, but I just noticed you cropped the quote tree.

    tbloxham on
    "That is cool" - Abraham Lincoln
  • [Tycho?] As elusive as doubt Registered User regular
    I agree with the signatories to that letter and disagree with the OP.

    I'll make a few points.

    1) They call for a ban of offensive autonomous systems. Offensive is key, so a robot to defuse a bomb or shoot down a missile would not be included.

    2) I think there's no basis for the notion that autonomous weapons will result in fewer deaths. As technology has advanced war has invariably become more deadly. Occasionally people make predictions that new weapons will bring about peace. When the gatling gun was invented some thought it would end war as they knew it, it being so deadly and so impossible to attack those armed with such weapons. We saw how that turned out. I challenge anyone to name a technological development that actually made war less deadly. The only contender I can think of is nuclear weapons, and it has yet to be seen how far MAD will take us.

    3) Robots make war politically cheap. With no need to draft or maintain fighting forces, and no worry of war-weariness from human casualties or disabled vets, war becomes something that can be entered into with less political risk at home. This is a major selling point for drones right now.

    4) Even if everyone gets autonomous robots, war won't become robots killing robots and sparing innocents. War is politics by different means. Its purpose is to effect change in people and states. A country that has its army defeated in the field may surrender, or it may not, depending on what the leadership wagers its chances are. In the 20th century it has often taken widescale destruction of cities to bring about the end of wars. Adding robots to this mix doesn't change much, since it's still humans, in the end, that you need to put the screws to.

    5)
    Imagine being able to put a robotic soldier / guard in every village (or girl's school), that fires up and drives off [s]groups like ISIS[/s] anyone we don't like.

    Robot occupiers will of course be targeted by those they occupy. And the notion of robots that can keep a society under heel is extremely dangerous; this is literally Terminators on guard duty.

  • tbloxham Registered User regular
    Also, to be completely frank, in the future you absolutely will be able to 'hack people'.

    Imagine I'm a security company in the future. I'm protecting a mall somewhere, where there have been violent robberies before. I might provide my security team with a list of known shoplifters, and an image file which shows up when they see one. I might also give them the ability to quickly share updates about what the situation is, say, outside the Gap, and to share audio files and images with each other. Sounds good, right? Everyone in the mall will be on the same page?

    But then, as a hacker I breach that system and add false targets to the list of known shoplifters, making a few random guys from inside the mall suddenly appear to be known violent thieves. I then hack the image sharing system, and tell the other guards that these people have been spotted in the mall and are heavily armed. Share sounds of gunfire, hack the speakers to panic the shoppers to create confusion. I assure you that you will be able to get the guards to open fire on the random dudes, and if I've picked those random dudes to be people who actually do have guns, then they will fire back and the entire thing will be a bloodbath.

    "That is cool" - Abraham Lincoln
  • redx I(x)=2(x)+1 whole numbers Registered User regular
    I guess I have a huge fear about what happens when these fall into the hands of warlords and ISIS and whatnot.

    The comparison in the OP to the AK-47 is a pretty terrifying thing. If they become cheap enough to play a role in ethnic cleansing and dictatorships and asymmetrical warfare, ehh... I imagine all those same arguments for us using them will apply.

    They moistly come out at night, moistly.
  • tbloxham Registered User regular
    edited July 2015
    [Tycho?] wrote: »
    I agree with the signatories to that letter and disagree with the OP.

    I'll make a few points.

    1) They call for a ban of offensive autonomous systems. Offensive is key, so a robot to defuse a bomb or shoot down a missile would not be included.

    2) I think there's no basis for the notion that autonomous weapons will result in fewer deaths. As technology has advanced war has invariably become more deadly. Occasionally people make predictions that new weapons will bring about peace. When the gatling gun was invented some thought it would end war as they knew it, it being so deadly and so impossible to attack those armed with such weapons. We saw how that turned out. I challenge anyone to name a technological development that actually made war less deadly. The only contender I can think of is nuclear weapons, and it has yet to be seen how far MAD will take us.

    3) Robots make war politically cheap. With no need to draft or maintain fighting forces, and no worry of war-weariness from human casualties or disabled vets, war becomes something that can be entered into with less political risk at home. This is a major selling point to drones right now.

    3) Even if everyone gets autonomous robots, war won't become robots killing robots and sparing innocents. War is politics by different means. Its purpose is to affect change in people and states. A country that has its army defeated in the field may surrender, or it may not, depending on what the leadership wages its chances are. In the 20th century it has often taken widescale destruction of cities to bring about the end to wars. Adding robots to this mix doesn't change change much, since its still humans, in the end, that you need to put the screws to.

    4)
    Imagine being able to put a robotic soldier / guard in every village (or girl's school), that fires up and drives off [s]groups like ISIS[/s] anyone we don't like.

    Robot occupiers will of course be targeted by those they occupy. And the notion of robots that can keep a society under heel is extremely dangerous, this is literally terminators on guard duty.

    I would disagree that war has become more deadly. The wars with the highest levels of casualties have been the most recent ones, but in terms of fraction of people killed, people did a fine job with Roman Legionary level equipment.

    If AI can turn on us, wants to turn on us, and can defeat us with its AI cleverness, then it's going to do it whether it's in ED209 or Mr Ed. Barring specific aspects of it shows a complete misunderstanding of the risks. If AIs can be built, they WILL be built.

    tbloxham on
    "That is cool" - Abraham Lincoln
  • programjunkie Registered User regular
    Taranis wrote: »
    You seem to have very specific ideas of how AI soldiers would work. There's no reason to believe that AI soldiers would be inherently more difficult to kill, better at killing, or even more heavily armed. They could be as good as human soldiers in pretty much every way, except they're not human and that's inherently desirable.

    The required tactical competency of AI soldiers now is pretty low. ISIS, Al Qaeda, and the Taliban aren't great fighters themselves, so the requirement for AI troops is pretty low. Having more, expendable soldiers would be all you'd need.

    On top of all that, there's no way we wouldn't have a human component at some level which would act as a safeguard. Initial implementations would certainly have small numbers of AI soldiers alongside human ones. As they'd develop we'd work the kinks out and the initial risks would be minimal. We'd never phase out humans overnight.
    And normal human troops don't carry missiles with them.

    This is false.

    Yeah. It's worth noting that one of the primary drivers of the American quest for competency in warfare is a desire to avoid casualties: not only are casualties politically damaging, but many soldiers consider being effective a fundamental moral duty to their fellow soldiers. You could prove this with citations, but I'll use anecdotes if you'll forgive me.

    Back when I was a young soldier in training, a JSOC guy impressed upon me: "You know getting a 240 on a physical fitness test (the minimum is 180 and the max is 300, for reference) sucks, right?" Many soldiers are of the opinion that you're not only forbidden from merely meeting the standard, you're not even allowed to exceed it by just a bit, because when the guy to your left gets shot, you need to make sure he gets out alive.

    That personal and moral calculus disappears with robots. You simply weigh the change in combat effectiveness against the change in costs and find the best trade-off. $5 million in losses and $5 million in upgrades are indistinguishable. Robots don't need to be especially good; if I had to guess, I'd expect them to end up better than average soldiers but worse than elite ones.
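    To make that arithmetic concrete, here's a minimal sketch in Python; every number in it (unit cost, losses per objective, upgrade cost) is invented purely for illustration:

    ```python
    # Hypothetical procurement math: once no humans are at risk, combat losses and
    # upgrade programs are just competing line items on the same budget sheet.

    def cost_per_objective(unit_cost, units_lost_per_objective, upgrade_cost=0):
        """Total dollars (losses plus upgrades) spent to achieve one objective."""
        return unit_cost * units_lost_per_objective + upgrade_cost

    # Baseline robot: cheaper overall, but more of them get destroyed per objective.
    baseline = cost_per_objective(unit_cost=1_000_000, units_lost_per_objective=5)

    # Upgraded robot: survives better, but the upgrade program itself costs money.
    upgraded = cost_per_objective(unit_cost=1_000_000, units_lost_per_objective=3,
                                  upgrade_cost=2_000_000)

    print(baseline, upgraded)  # 5000000 5000000 -- the losses and the upgrades wash out
    ```

    The point of the made-up numbers is just that the two columns land on the same figure, which is exactly the "$5 million in losses vs. $5 million in upgrades" situation.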

    I think the novelty of killer robots being hacked is causing people to overestimate the threat. For one, the best people at countering hacked robots are almost certainly also the people employing them, so while the danger is non-zero, between advanced electronic warfare capabilities, hacking them back, or simply shooting them with guns, it should be minimal. For two, humans are notoriously prone to dangerous and/or evil mistakes on the battlefield. Even if the robot can kill twice as well, if it fails only half as often, that's a wash.

    In the entire history of computerized warfare, there is very little evidence of a major threat of hacking being used to turn weapons on their owners. Stuxnet and one incident with a drone have occurred, but both involved only transitory control and took an incredible amount of prior development to pull off. I have no doubt countermeasures will appear, but I would expect most of them to be temporary disabling attacks rather than "tell Ultron to destroy humanity and then he does."

  • MortiousMortious The Nightmare Begins Move to New Zealand Registered User regular
    I fail to see how the end game scenario of this is robots fighting robots.

    Are we going to set out a nice field for them? Build our strategic resource and production facilities far away from population centers and fully automate them? Expect the losing side to just surrender, once it runs out of robots?

    This is not even taking into account that, of the nations currently at war, very few would have the capability to field robots.

    Move to New Zealand
    It’s not a very important country most of the time
    http://steamcommunity.com/id/mortious
  • RichyRichy Registered User regular
    tbloxham wrote: »
    Also, to be completely frank, in the future you absolutely will be able to 'hack people'.

    Imagine I'm a security company in the future. I'm protecting a mall somewhere that has had violent robberies before. I might provide my security team with a list of known shoplifters, plus an image that pops up whenever they spot one. I might also give them the ability to quickly share updates about the situation, say, outside the Gap, and to share audio files and images with each other. Sounds good, right? Everyone in the mall will be on the same page?

    But then, as a hacker, I breach that system and add false targets to the list of known shoplifters, making a few random guys inside the mall suddenly appear to be known violent thieves. I then hack the image-sharing system and tell the guards that these people have been spotted in the mall and are heavily armed. I share sounds of gunfire and hack the speakers to panic the shoppers and create confusion. I assure you that you will be able to get the guards to open fire on the random dudes, and if I've picked random dudes who actually do have guns, they will fire back and the entire thing will be a bloodbath.

    We can already hack people. That's what marketing people and political spin experts do all the freaking time.
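    For what it's worth, the narrow technical reason the mall scenario above works is that the guards' shared watchlist is treated as trusted data. Here's a minimal Python sketch of the obvious mitigation, assuming a hypothetical pre-shared key and entry format (all of the names here are invented): entries that don't authenticate simply get dropped.

    ```python
    import hashlib
    import hmac
    import json

    # Hypothetical: the security company signs every watchlist entry with a key
    # already provisioned on the guards' devices, so an attacker who can write to
    # the shared feed but doesn't hold the key can't inject believable entries.
    SHARED_KEY = b"pre-provisioned-secret"  # placeholder; real key management is its own problem

    def sign_entry(entry: dict) -> str:
        payload = json.dumps(entry, sort_keys=True).encode()
        return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

    def verify_entry(entry: dict, tag: str) -> bool:
        return hmac.compare_digest(sign_entry(entry), tag)

    legit = {"name": "known shoplifter #42", "armed": False}
    legit_tag = sign_entry(legit)

    forged = {"name": "random shopper", "armed": True}  # injected by the attacker
    forged_tag = "0" * 64                               # attacker can't compute a valid tag

    print(verify_entry(legit, legit_tag))    # True  -> display to guards
    print(verify_entry(forged, forged_tag))  # False -> drop the entry
    ```

    Of course that only closes this one injection vector; it does nothing if the key or the guards' devices themselves are compromised, which is closer to the broader point being made.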

  • QuidQuid Definitely not a banana Registered User regular
    edited July 2015
    Mortious wrote: »
    I fail to see how the end game scenario of this is robots fighting robots.

    Are we going to set out a nice field for them? Build our strategic resource and production facilities far away from population centers and fully automate them? Expect the losing side to just surrender, once it runs out of robots?

    This is not even taking into account that, of the nations currently at war, very few would have the capability to field robots.

    It's feasible, much in the same way that nations eventually surrender to the overpowering might of other nations. It's hard to convince people they should throw themselves into the maw of the killbots if there's no preset kill number.

    Mind that being able to defeat the enemy in open battle isn't the only way to win a fight, but it is a really effective one a lot of the time.

    Quid on
  • [Tycho?][Tycho?] As elusive as doubt Registered User regular
    tbloxham wrote: »
    [Tycho?] wrote: »
    I agree with the signatories to that letter and disagree with the OP.

    I'll make a few points.

    1) They call for a ban on offensive autonomous systems. Offensive is key, so a robot to defuse a bomb or shoot down a missile would not be included.

    2) I think there's no basis for the notion that autonomous weapons will result in fewer deaths. As technology has advanced, war has invariably become more deadly. Occasionally people predict that new weapons will bring about peace. When the Gatling gun was invented, some thought it would end war as they knew it, since it seemed too deadly to ever attack troops armed with it. We saw how that turned out. I challenge anyone to name a technological development that actually made war less deadly. The only contender I can think of is nuclear weapons, and it has yet to be seen how far MAD will take us.

    3) Robots make war politically cheap. With no need to draft or maintain fighting forces, and no worry of war-weariness from human casualties or disabled vets, war becomes something that can be entered into with less political risk at home. This is a major selling point for drones right now.

    4) Even if everyone gets autonomous robots, war won't become robots killing robots and sparing innocents. War is politics by other means. Its purpose is to effect change in people and states. A country that has its army defeated in the field may surrender, or it may not, depending on what the leadership wagers its chances are. In the 20th century it has often taken wide-scale destruction of cities to bring wars to an end. Adding robots to this mix doesn't change much, since it's still humans, in the end, that you need to put the screws to.

    5)
    Imagine being able to put a robotic soldier / guard in every village (or girls' school) that fires up and drives off groups like ISIS or anyone we don't like.

    Robot occupiers will of course be targeted by those they occupy. And the notion of robots that can keep a society under heel is extremely dangerous; this is literally Terminators on guard duty.

    I would disagree that war has become more deadly. The wars with the highest absolute casualties have been the most recent ones, but in terms of the fraction of the world's population killed, people did a fine job with Roman Legionary-level equipment.

    If AI can turn on us, wants to turn on us, and can defeat us with its AI cleverness, then it's going to do it whether it's in an ED-209 or Mr. Ed. Barring specific applications of it shows a complete misunderstanding of the risks. If AIs can be built, they WILL be built.

    I'm looking but not finding anything handy. Can you source the fraction of people killed thing?

    I found this neat link on the wiki:
    https://en.wikipedia.org/wiki/List_of_wars_and_anthropogenic_disasters_by_death_toll

    The Chinese sure do it big.
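    Quick back-of-the-envelope on the "fraction of people killed" angle, using very round figures in the ballpark of that list plus rough world-population estimates; both the death tolls and the historical populations are heavily disputed, so treat this as illustrative only:

    ```python
    # Extremely rough, illustrative numbers only: death-toll estimates and
    # pre-modern world-population figures both vary enormously by source.
    conflicts = {
        # name: (approx. deaths, approx. world population at the time)
        "An Lushan Rebellion (8th c.)": (20e6, 230e6),
        "Mongol conquests (13th c.)":   (40e6, 400e6),
        "World War II (20th c.)":       (75e6, 2.3e9),
    }

    for name, (deaths, world_pop) in conflicts.items():
        print(f"{name}: roughly {deaths / world_pop:.0%} of the world's population")
    ```

    On those (very shaky) figures, the pre-gunpowder conflicts come out several times deadlier than WWII as a share of the people alive at the time.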
