
[Ethics] Did I just lose my favorite show? (Mind Field Trolley Experiment)


  • Dedwrekka Metal Hell adjacent Registered User regular
    Meeqe wrote: »
    Dedwrekka wrote: »
    Aioua wrote: »
    also god dammit this isn't even what the trolley problem is about!

    there isn't a right answer, that's the point
    knowing how any individual person would perform doesn't get you any useful information

    Which I would agree with, except there are a lot of tech people trying to block automation because they don't want automated cars that won't give the "right" answer to the trolley problem (despite the fact that, from an automation standpoint, if you ever reach a trolley problem you've already had way more important problems before that point).

    So the fact that it's being considered for testing and design purposes and/or regulation does mean that it has some value in testing the problem as a matter of how humans and machines react to it.

    But we know how people react to trauma, violence, and hard ethical choices. The military has oodles and oodles of research on this stuff, with solid hardcore research going back to WW2. I agree that knowing those things is important, but both psych and sociology have studied this issue to death over the last few decades. My cite is any undergraduate psych or socio textbook: these kinds of moral issues and how humans react to them are usually intro material, routinely covered in your first year in a program, with the ethics of running such experiments taught alongside the experiments themselves.

    What new knowledge did putting people through this test get us? Was there a falsifiable idea being tested here that hasn't been tested before? Because that's the only reason I can ethically think of for putting people through this type of situation. And even that assumes it could be done under proper testing protocols, which, from the info I have been able to get, Mind Field didn't follow: the subjects were not adequately informed as to the harm they might be subject to.

    I'm not saying what Mind Field did was necessary or ethical; I was responding to the idea that testing the thought experiment to gauge reactions "tells us nothing".

  • aioua Ora Occidens Ora Optima Registered User regular
    It tells us nothing about which action is most moral.

  • Dedwrekka Metal Hell adjacent Registered User regular
    Aioua wrote: »
    It tells us nothing about which action is most moral.

    Which doesn't mean it tells us nothing or that the information is not useful. I already gave a case where information on what actual humans choose is useful information. It's also important to test thought experiments like this because there may actually be a moral trend among people being tested that skews towards one decision or the other, and that's useful information as well.
    However, it's important that it be tested ethically and under good scientific standards.

  • discrider Registered User regular
    edited December 2017
    Dedwrekka wrote: »
    Aioua wrote: »
    It tells us nothing about which action is most moral.

    Which doesn't mean it tells us nothing or that the information is not useful. I already gave a case where information on what actual humans choose is useful information. It's also important to test thought experiments like this because there may actually be a moral trend among people being tested that skews towards one decision or the other, and that's useful information as well.
    However, it's important that it be tested ethically and under good scientific standards.

    Well, it also tells us nothing new or useful.

    It's a straight question of how humans react under stress, of which there is no doubt enough research already.

    How people choose to react in the situation gives us no insight into how people (or machines) should react to the situation. (Obviously the correct answer is to attempt to derail the trolley by switching as the trolley goes over the junction, then watch a sideways trolley roll over and cream both sets of people instead, but that's not an answer people on the spot are going to take.)

    And any result that is generated has no application in real life as you admit:
    Dedwrekka wrote:
    despite the fact that, from an automation standpoint, if you ever reach a trolley problem you've already had way more important problems before that point
    An AI would or should not have a trolley barreling down the line at an unsafe speed in the first place, such that stopping was not possible.

    The only reason that this experiment should be allowed, in my opinion, is purely as a piece of introspection by the test participants.
    The Milgram authority experiment, which had participants believe they were shocking actors just because they were told to, enabled some participants to then critically analyse other situations where they were told to do things, leading some to become conscientious objectors to the draft in later life.

    However, this has little to do with science, and the cost of damaging the test subjects would have to be weighed seriously against any potential for growth in them.
    And there is little way that the researcher could determine which outcome would apply to which participant.

  • Aridhol Daddliest Catch Registered User regular
    I watched the episode and was anxious for each person. This isn't a thing we should be putting people through.
    If this situation legitimately comes up then whatever that person does in that moment is all that matters and I don't see how this "experiment" would help us do anything to change that.

  • discrider Registered User regular
    Like, I could see perhaps running this as a training exercise for emergency responders.
    That is, people who would conceivably run into a dire lose-lose situation, and need to learn how to cope with that.

  • Dedwrekka Metal Hell adjacent Registered User regular
    discrider wrote: »
    Dedwrekka wrote:
    despite the fact that, from an automation standpoint, if you ever reach a trolley problem you've already had way more important problems before that point
    An AI would or should not have a trolley barreling down the line at an unsafe speed in the first place, such that stopping was not possible.

    The Trolley problem isn't about speed, it's about the brake failing and you have two choices of where the train goes, with both options being situations from which the people on the track can't avoid the trolley (high walls on either side seems to be a popular one). From an automation standpoint, any vehicle should already be aware of faults in the brake and emergency brake systems, and should place huge limits on the operation of the vehicle, if not outright prevent it from moving, if either of them is compromised.
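
    (Purely to make that fault-gating idea concrete: a minimal Python sketch. The brake_ok / e_brake_ok hooks and the speed caps are invented placeholders, not any real vendor's safety logic.)

        # Hypothetical sketch: refuse or degrade operation when brake self-tests fail.
        # brake_ok() and e_brake_ok() are placeholder hooks invented for illustration.
        def brake_ok() -> bool:
            return True      # stand-in for a continuous service-brake self-test

        def e_brake_ok() -> bool:
            return True      # stand-in for an emergency-brake self-test

        def allowed_top_speed_mph() -> float:
            if not brake_ok() and not e_brake_ok():
                return 0.0   # both systems compromised: do not move at all
            if not brake_ok() or not e_brake_ok():
                return 5.0   # one system compromised: creep to a safe stop only
            return 65.0      # nominal operation

        print("speed cap:", allowed_top_speed_mph(), "mph")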

  • discrider Registered User regular
    Dedwrekka wrote: »
    discrider wrote: »
    Dedwrekka wrote:
    despite the fact that, from an automation standpoint, if you ever reach a trolley problem you've already had way more important problems before that point
    An AI would or should not have a trolley barreling down the line at an unsafe speed in the first place, such that stopping was not possible.

    The Trolley problem isn't about speed, it's about the brake failing and you have two choices of where the train goes, with both options being situations from which the people on the track can't avoid the trolley (high walls on either side seems to be a popular one). From an automation standpoint, any vehicle should already be aware of faults in the brake and emergency brake systems, and should place huge limits on the operation of the vehicle, if not outright prevent it from moving, if either of them is compromised.

    Yep, and that's the thing.
    Any situation where a trolley problem occurs is already a situation where the AI's safety mechanisms have failed.
    Given that, there's no real reason why the AI needs to make 'the correct decision' in the trolley scenario; it is already off the rails, and so the programmer/trainer may not be able to anticipate what the trolley problem will even look like.

  • Forar #432 Toronto, Ontario, Canada Registered User regular
    discrider wrote: »
    Dedwrekka wrote: »
    discrider wrote: »
    Dedwrekka wrote:
    despite the fact that, from an automation standpoint, if you ever reach a trolley problem you've already had way more important problems before that point
    An AI would or should not have a trolley barreling down the line at an unsafe speed in the first place, such that stopping was not possible.

    The Trolley problem isn't about speed, it's about the brake failing and you have two choices of where the train goes, with both options being situations from which the people on the track can't avoid the trolley (high walls on either side seems to be a popular one). From an automation standpoint, any vehicle should already be aware of faults in the brake and emergency brake systems, and should place huge limits on the operation of the vehicle, if not outright prevent it from moving, if either of them is compromised.

    Yep, and that's the thing.
    Any situation where a trolley problem occurs is already a situation where the AI's safety mechanisms have failed.
    Given that, there's no real reason why the AI needs to make 'the correct decision' in the trolley scenario; it is already off the rails, and so the programmer/trainer may not be able to anticipate what the trolley problem will even look like.

    You've lost me.

    If we have systems that make decisions, and begin to advance towards actual AI, even if we don't guide it, it may find itself in a situation where it needs to make a choice, and it will problem-solve to the best of its ability. The fact that X, Y, and Z safety parameters have failed is irrelevant at that point, aside from the substantial investigations and yelling that will occur after the fact about how those layers were bypassed or broken.

    This isn't a situation where the only winning move is to choose not to play [/wargames], but it WILL happen, and in those millions or billions or whateverillions of computing cycles, a choice must be made, even if that choice is to do nothing at all and let things happen as they currently shall.

    The topic came up a lot during discussions about driverless vehicles, and it was interesting, but we definitely didn't spend dozens of pages just nodding and saying 'nope, it'll never happen thanks to X, Y, and Z'.

    Maybe I'm misreading you, but even if we don't program the AI in such systems to recognize exactly and strictly defined scenarios, if we allow them to control potentially dangerous situations, they may find 'no win scenarios' all of their own. A simple goal to 'do the least harm possible' may not preclude fatalities, depending on the circumstances.

    Obviously it's not ideal to have anyone or anything making life and death choices, but as noted, that ideal has been cast aside and someone or something is making that decision anyway (or choosing not to), and then living with the outcome.

  • discrider Registered User regular
    So, if and when we get to general AI, then the same problems with testing this with humans occur when testing this with the AI.
    Namely, damage to their psyche may not be worth finding out that they would freeze in this situation.

    For non-general AI, I do not think it's worth the effort expanding their problem space past the point where they should've shut down entirely.
    If a self-driving car gets to the point where the brakes have failed, the car is not immobilised, the car is driving along a road regardless, the car is out of control, and there's now two options, we now need another system to identify, quickly, which option causes the least harm, on top of the general systems that should exist to prevent this possibility entirely.
    The problem has gone from 'Stay on the road; Don't hit stuff' to 'Stop; Hit the least valuable thing' and the car may not be originally built to discern that (driving based on optical flow alone rather than object recognition for instance).

    And perhaps a self-driving car may be able to make the trolley decision based on its driving knowledge.
    But it could equally be 'Put myself in front of a semi and kill my occupant, or have it run uncontrolled into a bus of schoolkids.'
    The trolley problem does not necessarily represent a problem the self-driving car is good at, or an expected scenario.

    In this case is it worth including anything to handle perhaps a more 'common' trolley problem, rather than refocusing attention on how to make the car itself a safer vehicle?

    Ideally, every car would be a general AI with the ability to adapt and draw sound conclusions to any scenario thrown at it.
    But we're a far way off from that.
    I think we'll be lucky to have cars that just drive well in all road conditions.

  • SiliconStew Registered User regular
    When it comes to people, the trolley problem always gets hung up on moral culpability. An autonomous vehicle has no need or concern for morals, and as such should choose the utilitarian response of killing the least number of people every time.
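
    (A minimal Python sketch of that utilitarian rule, with invented option data, just to make the decision procedure concrete; whether such a rule should ever be deployed is exactly what's being argued here.)

        # Hypothetical sketch: given unavoidable options, pick the one expected
        # to harm the fewest people. The options and casualty estimates are made up.
        def choose_option(options: dict) -> str:
            return min(options, key=options.get)

        options = {
            "stay in lane": 5,   # estimated people harmed
            "swerve left": 1,
        }
        print(choose_option(options))   # -> "swerve left"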

  • Dedwrekka Metal Hell adjacent Registered User regular
    When it comes to people, the trolley problem always gets hung up on moral culpability. An autonomous vehicle has no need or concern for morals, and as such should choose the utilitarian response of killing the least number of people every time.

    The company who programmed the vehicle can still be culpable if they designed the vehicle to swerve into someone who otherwise wouldn't have been involved in the accident.

  • Paladin Registered User regular
    Dedwrekka wrote: »
    When it comes to people, the trolley problem always gets hung up on moral culpability. An autonomous vehicle has no need or concern for morals, and as such should choose the utilitarian response of killing the least number of people every time.

    The company who programmed the vehicle can still be culpable if they designed the vehicle to swerve into someone who otherwise wouldn't have been involved in the accident.

    Companies are also not morally culpable; they just pay out

  • spool32 Contrary Library Registered User, Transition Team regular
    discrider wrote: »
    So, if and when we get to general AI, then the same problems with testing this with humans occur when testing this with the AI.
    Namely, damage to their psyche may not be worth finding out that they would freeze in this situation.

    For non-general AI, I do not think it's worth the effort expanding their problem space past the point where they should've shut down entirely.
    If a self-driving car gets to the point where the brakes have failed, the car is not immobilised, the car is driving along a road regardless, the car is out of control, and there's now two options, we now need another system to identify, quickly, which option causes the least harm, on top of the general systems that should exist to prevent this possibility entirely.
    The problem has gone from 'Stay on the road; Don't hit stuff' to 'Stop; Hit the least valuable thing' and the car may not be originally built to discern that (driving based on optical flow alone rather than object recognition for instance).

    And perhaps a self-driving car may be able to make the trolley decision based on its driving knowledge.
    But it could equally be 'Put myself in front of a semi and kill my occupant, or have it run uncontrolled into a bus of schoolkids.'


    oh man. So many questions... do you design this feature in?

    If you do... do you tell people it's there?

    Do you give them the option to turn it off??

  • hippofant ティンク Registered User regular
    Dedwrekka wrote: »
    When it comes to people, the trolley problem always gets hung up on moral culpability. An autonomous vehicle has no need or concern for morals, and as such should choose the utilitarian response of killing the least number of people every time.

    The company who programmed the vehicle can still be culpable if they designed the vehicle to swerve into someone who otherwise wouldn't have been involved in the accident.

    It does make it easier for people to avoid the problem though. That is to say, it doesn't solve the ethical problem, but it removes the psychological one. Which... isn't a "good" thing, but would certainly be consistent with many of the things that human beings do regularly in life.

  • Dedwrekka Metal Hell adjacent Registered User regular
    hippofant wrote: »
    Dedwrekka wrote: »
    When it comes to people, the trolley problem always gets hung up on moral culpability. An autonomous vehicle has no need or concern for morals, and as such should choose the utilitarian response of killing the least number of people every time.

    The company who programmed the vehicle can still be culpable if they designed the vehicle to swerve into someone who otherwise wouldn't have been involved in the accident.

    It does make it easier for people to avoid the problem though. That is to say, it doesn't solve the ethical problem, but it removes the psychological one. Which... isn't a "good" thing, but would certainly be consistent with many of the things that human beings do regularly in life.

    Which, great, it solves a possible psychological issue with the trolley problem being applied to automated vehicles.

    But also the trolley problem shouldn't be applied to automated vehicles. Because to get to a trolley problem with automated vehicles you have to have cut through around 5 physical systems and multiple software barriers as well as failed the continuous in-operation software checks to detect those faults. By that point the important problems to solve are those software and hardware problems, and not "how do we also add on a morality semi-AI that will itself create new faults within the system and new problems with normal operation".

    Which is why it's pretty useless in designing automated vehicles. It's too far down the line and too much damage has to be done to the entire system for it to happen. By that point assuming that a dedicated "Trolley Problem System" would also work, or have any control over the operation of the vehicle, is folly.

    However if people are going to demand that it be added to automated vehicles, then it also makes the trolley problem something that needs to be tested in real world situations, because people who program the system need to be able to have real world data to compare it to.

    But that testing also needs to be done ethically.

  • Hevach Registered User regular
    Paladin wrote: »
    Dedwrekka wrote: »
    When it comes to people, the trolley problem always gets hung up on moral culpability. An autonomous vehicle has no need or concern for morals, and as such should choose the utilitarian response of killing the least number of people every time.

    The company who programmed the vehicle can still be culpable if they designed the vehicle to swerve into someone who otherwise wouldn't have been involved in the accident.

    Companies are also not morally culpable; they just pay out

    Most likely the per victim settlement is going to be very similar either way, be it the one person who wouldn't have been involved or the several who would be involved, but in an accident that shouldn't have happened. Many companies that have a likely risk of deaths even have a standard settlement offer that is generous enough to avoid a lot of litigation.

  • discrider Registered User regular
    When it comes to people, the trolley problem always gets hung up on moral culpability. An autonomous vehicle has no need or concern for morals, and as such should choose the utilitarian response of killing the least number of people every time.

    Disagree.
    Morals are just general patterns of behaviour that apply to many different scenarios.
    So I believe a general AI needs morals in order to cope with unexpected situations.

    Also, placing a general AI in a trolley trainer is a good way to get an autonomous car that only drives at 10mph to prevent getting in a trolley problem.

    So although we may want the AI to be utilitarian about it, I do not think that's something you may want to train it to do.
    Better to teach it 'least harm' as a moral, and then hope it doesn't freeze when presented with a trolley problem.

  • Mr Fuzzbutt Registered User regular
    edited December 2017
    discrider wrote: »
    When it comes to people, the trolley problem always gets hung up on moral culpability. An autonomous vehicle has no need or concern for morals, and as such should choose the utilitarian response of killing the least number of people every time.

    Disagree.
    Morals are just general patterns of behaviour that apply to many different scenarios.
    So I believe a general AI needs morals in order to cope with unexpected situations.

    Also, placing a general AI in a trolley trainer is a good way to get an autonomous car that only drives at 10mph to prevent getting in a trolley problem.

    So although we may want the AI to be utilitarian about it, I do not think that's something you may want to train it to do.
    Better to teach it 'least harm' as a moral, and then hope it doesn't freeze when presented with a trolley problem.

    Gotta be careful teaching AIs, though. Given that everyone dies eventually, and also suffers some amount of harm during their lives, you could get a fleet of murder cars trying to KILL ALL HUMANS NOW, getting people's death over and done with so they don't have to endure the suffering in the rest of their lives.

  • discrider Registered User regular
    discrider wrote: »
    When it comes to people, the trolley problem always gets hung up on moral culpability. An autonomous vehicle has no need or concern for morals, and as such should choose the utilitarian response of killing the least number of people every time.

    Disagree.
    Morals are just general patterns of behaviour that apply to many different scenarios.
    So I believe a general AI needs morals in order to cope with unexpected situations.

    Also, placing a general AI in a trolley trainer is a good way to get an autonomous car that only drives at 10mph to prevent getting in a trolley problem.

    So although we may want the AI to be utilitarian about it, I do not think that's something you may want to train it to do.
    Better to teach it 'least harm' as a moral, and then hope it doesn't freeze when presented with a trolley problem.

    Gotta be careful teaching AIs, though. Given that everyone dies eventually, and also suffers some amount of harm during their lives, you could get a fleet of murder cars trying to KILL ALL HUMANS NOW, getting people's death over and done with so they don't have to endure the suffering in the rest of their lives.

    Well, that's why "Don't impose your own beliefs on others" is also a good one.
    Especially if you want your car to drive you to the pub.

    But yeah, that's the point of morals, to be competing guidelines, rather than a set of strict rules.
    Which is also probably why you don't want general AI for cars, unless you want to deal with vehicular manslaughter in courts.

  • redx I(x)=2(x)+1 whole numbers Registered User regular
    When it comes to people, the trolley problem always gets hung up on moral culpability. An autonomous vehicle has no need or concern for morals, and as such should choose the utilitarian response of killing the least number of people every time.

    ehh... i think they will be designed to minimize legal culpability. often, i think, moral imperatives like "do the lawful thing, even if it does more damage" will be easier to defend in court.

    You aren't that likely to get sued for letting the trolley continue to roll over the 5 people.

    You aren't that likely to get sued if your AI is following at a safe distance, applies its brakes and stays in its lane, but still kills someone in an emergency. at least not for punitive damages your insurance won't cover.

  • Docshifty Registered User regular
    redx wrote: »
    When it comes to people, the trolley problem always gets hung up on moral culpability. An autonomous vehicle has no need or concern for morals, and as such should choose the utilitarian response of killing the least number of people every time.

    ehh... i think they will be designed to minimize legal culpability. often, i think, moral imperatives like "do the lawful thing, even if it does more damage" will be easier to defend in court.

    You aren't that likely to get sued for letting the trolley continue to roll over the 5 people.

    You aren't that likely to get sued if your AI is following at a safe distance, applies its brakes and stays in its lane, but still kills someone in an emergency. at least not for punitive damages your insurance won't cover.

    I can't imagine you would be liable at all in a self driving car.

    You are effectively a passenger to a chauffeur. They are the ones that face damages (them in this case being the company that programs the ai).

  • redx I(x)=2(x)+1 whole numbers Registered User regular
    edited December 2017
    Docshifty wrote: »
    redx wrote: »
    When it comes to people, the trolley problem always gets hung up on moral culpability. An autonomous vehicle has no need or concern for morals, and as such should choose the utilitarian response of killing the least number of people every time.

    ehh... i think they will be designed to minimize legal culpability. often, i think, moral imperatives like "do the lawful thing, even if it does more damage" will be easier to defend in court.

    You aren't that likely to get sued for letting the trolley continue to roll over the 5 people.

    You aren't that likely to get sued if your AI is following at a safe distance, applies its brakes and stays in its lane, but still kills someone in an emergency. at least not for punitive damages your insurance won't cover.

    I can't imagine you would be liable at all in a self driving car.

    You are effectively a passenger to a chauffeur. They are the ones that face damages (them in this case being the company that programs the ai).

    "you" is the AI manufacturer in this instance: the big huge company that is driven by fiduciary duty, not morality.

    car-driving ai will be made by companies attempting to make money in an environment of laws.

    why would morality be expected to have anything to do with their behavior?

  • Docshifty Registered User regular
    redx wrote: »
    Docshifty wrote: »
    redx wrote: »
    When it comes to people, the trolley problem always gets hung up on moral culpability. An autonomous vehicle has no need or concern for morals, and as such should choose the utilitarian response of killing the least number of people every time.

    ehh... i think they will be designed to minimize legal culpability. often, i think, moral imperatives like "do the lawful thing, even if it does more damage" will be easier to defend in court.

    You aren't that likely to get sued for letting the trolley continue to roll over the 5 people.

    You aren't that likely to get sued if your AI is following at a safe distance, applies its brakes and stays in its lane, but still kills someone in an emergency. at least not for punitive damages your insurance won't cover.

    I can't imagine you would be liable at all in a self driving car.

    You are effectively a passenger to a chauffeur. They are the ones that face damages (them in this case being the company that programs the ai).

    "you" is the AI manufacturer in this instance: the big huge company that is driven by fiduciary duty, not morality.

    car-driving ai will be made by companies attempting to make money in an environment of laws.

    why would morality be expected to have anything to do with their behavior?

    Not morality, liability. You aren't in charge of the vehicle. You aren't in control; you are a passenger.

    Ideally there would be massive law reform and regulations already in place by the time self-driving cars are the norm, but I could see the company being the one liable if the vehicle's programming took an action that resulted in loss of life. That would depend on how the laws look.

  • nrmair Registered User new member
    I had a look through that AMA and found this comment - https://reddit.com/r/vsauce/comments/7iogcp/im_michael_stevens_ask_me_about_mind_field_or/dr3dczq/

    The guy points out that Corey was also in Nathan for You.

    It's clearly completely faked for the purpose of entertainment. Nothing about the study was even remotely scientific in the first place but this only goes to confirm that.

    I hope knowing this puts your minds at rest, but the discussion in this thread has been really interesting to read regardless.

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    discrider wrote: »
    So, if and when we get to general AI, then the same problems with testing this with humans occur when testing this with the AI.
    Namely, damage to their psyche may not be worth finding out that they would freeze in this situation.

    For non-general AI, I do not think it's worth the effort expanding their problem space past the point where they should've shut down entirely.
    If a self-driving car gets to the point where the brakes have failed, the car is not immobilised, the car is driving along a road regardless, the car is out of control, and there's now two options, we now need another system to identify, quickly, which option causes the least harm, on top of the general systems that should exist to prevent this possibility entirely.
    The problem has gone from 'Stay on the road; Don't hit stuff' to 'Stop; Hit the least valuable thing' and the car may not be originally built to discern that (driving based on optical flow alone rather than object recognition for instance).

    And perhaps a self-driving car may be able to make the trolley decision based on its driving knowledge.
    But it could equally be 'Put myself in front of a semi and kill my occupant, or have it run uncontrolled into a bus of schoolkids.'
    The trolley problem does not necessarily represent a problem the self-driving car is good at, or an expected scenario.

    In this case is it worth including anything to handle perhaps a more 'common' trolley problem, rather than refocusing attention on how to make the car itself a safer vehicle?

    Ideally, every car would be a general AI with the ability to adapt and draw sound conclusions to any scenario thrown at it.
    But we're a far way off from that.
    I think we'll be lucky to have cars that just drive well in all road conditions.

    Well, in the case of AI you could always just do testing on a temporary perfect copy which gets destroyed after

    But I don't think this matters. You don't need a general AI to do self-driving cars, or for them to solve the trolley problem. If there is no gap to maneuver into to avoid the problem, then any maneuver is just going to cause more damage. Roads aren't a rail line with people tied up on them; there's flexibility. Total instantaneous brake failure is also something that doesn't happen very often. Self-driving cars avoid this by simply not getting into a situation where they can't stop in time with whatever braking power they have (which they will always know).
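
    (To make the "can always stop in time" point concrete, a small Python sketch using the standard stopping-distance relation d = v² / (2a); the deceleration and latency figures are assumed illustrative values, not measurements from any real vehicle.)

        # Sketch: a car that knows its current braking power can keep a following
        # distance of at least v^2 / (2a) plus a latency margin. Numbers are illustrative.
        def stopping_distance_m(speed_mps, decel_mps2):
            return speed_mps ** 2 / (2.0 * decel_mps2)

        def min_following_distance_m(speed_mps, decel_mps2, latency_s=0.2):
            return speed_mps * latency_s + stopping_distance_m(speed_mps, decel_mps2)

        # ~100 km/h (27.8 m/s) with healthy brakes (~7 m/s^2) vs. degraded brakes (~3 m/s^2)
        for decel in (7.0, 3.0):
            print(decel, "m/s^2 ->", round(min_following_distance_m(27.8, decel), 1), "m")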

  • discrider Registered User regular
    Depends on if you can copy the AI.
    I'm not sure whether a general AI would not store its state in temporary, powered memory.
    In such a case, it may not be possible to bring the entire state across all at once; it may change between the start of the copy and the end.
    This may render the copied AI non-functional as the temporary state may be as much a necessity as the physical 'brain' structure.

    But yes.
    You could do that.
    Even if you don't kill the copy.

  • RT800 Registered User regular
    Maybe we'll end up with vehicles like Rick's - operating under the supreme imperative to "keep their occupant safe".

  • discrider Registered User regular
    Well that's another flaw in the general AI send-a-copy-through-the-trolley-sim.
    Rick's car is a jerk.
    What happens if your general AI is a jerk and only gives you the 'correct' answer in the sim?
    And then proceeds to hit as many pedestrians as it can during an 'accident'?

    But again, you would have to hope the morals impressed upon it during its training phase stick.
    Like any person.

  • HamHamJ Registered User regular
    A self driving car will be programmed to hit the brakes and swerve only if it can safely do so. Swerving to hit something else is probably just going to make things worse. Braking drops the energies involved in the collision thus making it as safe as possible.
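
    (A quick worked illustration of the "braking drops the energies" point: kinetic energy goes with the square of speed, so even partial braking sheds a disproportionate share of the collision energy. The mass and speeds below are assumed round numbers.)

        # E = 1/2 * m * v^2, so shedding speed pays off quadratically.
        # Assumed: a ~1500 kg car slowing from 50 km/h to 30 km/h before impact.
        def kinetic_energy_j(mass_kg, speed_kmh):
            v = speed_kmh / 3.6           # km/h -> m/s
            return 0.5 * mass_kg * v ** 2

        before = kinetic_energy_j(1500, 50)   # ~145 kJ
        after = kinetic_energy_j(1500, 30)    # ~52 kJ
        print(f"energy shed by braking: {100 * (1 - after / before):.0f}%")   # ~64%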

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    discrider wrote: »
    Depends on if you can copy the AI.
    I'm not sure whether a general AI would not store its state in temporary, powered memory.
    In such a case, it may not be possible to bring the entire state across all at once; it may change between the start of the copy and the end.
    This may render the copied AI non-functional as the temporary state may be as much a necessity as the physical 'brain' structure.

    But yes.
    You could do that.
    Even if you don't kill the copy.

    Any Turing-machine-based AI is necessarily copyable; you just need extra resources available. You mark all memory as copy-on-write at some moment, preserve the execution state (stopping the AI for a few milliseconds to do so atomically), and then copy at your leisure.
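
    (A minimal sketch of that snapshot idea using the operating system's own copy-on-write mechanism, os.fork() on POSIX; the state dict here is just a stand-in for real AI state, and a real system would involve far more than process memory.)

        # Sketch: fork() gives the child a copy-on-write snapshot of the parent's memory,
        # so the copy can be probed without disturbing the original. POSIX-only.
        import os

        state = {"odometer_km": 12345, "mood": "cautious"}   # stand-in for AI state

        pid = os.fork()
        if pid == 0:
            # Child: run the hypothetical trolley simulation against the snapshot.
            state["mood"] = "panicked"            # mutates only the child's copy
            print("copy sees:", state)
            os._exit(0)
        else:
            os.waitpid(pid, 0)
            print("original still sees:", state)  # unchanged by the child's run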

  • SiliconStew Registered User regular
    HamHamJ wrote: »
    A self driving car will be programmed to hit the brakes and swerve only if it can safely do so. Swerving to hit something else is probably just going to make things worse. Braking drops the energies involved in the collision thus making it as safe as possible.

    Obviously, but the trolley problem is specifically about a brake failure scenario.

  • Polaritie Sleepy Registered User regular
    HamHamJ wrote: »
    A self driving car will be programmed to hit the brakes and swerve only if it can safely do so. Swerving to hit something else is probably just going to make things worse. Braking drops the energies involved in the collision thus making it as safe as possible.

    Obviously, but the trolley problem is specifically about a brake failure scenario.

    Well, not really. It's just assumed the only choice you have is the switch. Usually it doesn't go into details of how the situation happened.

  • Jealous Deva Registered User regular
    discrider wrote: »
    Well that's another flaw in the general AI send-a-copy-through-the-trolley-sim.
    Rick's car is a jerk.
    What happens if your general AI is a jerk and only gives you the 'correct' answer in the sim?
    And then proceeds to hit as many pedestrians as it can during an 'accident'?

    But again, you would have to hope the morals impressed upon it during its training phase stick.
    Like any person.

    We should probably not design a general ai for something like driving a car. Both for the reason that it’s vastly overkill for the task and risks adding too much individual judgement and potential for failure to what ultimately should be a very organized and rote system, and for the reason that avoiding making advanced general intelligences do simple and routine tasks like driving cars was the whole point of developing a self driving car in the first place.

  • HamHamJ Registered User regular
    HamHamJ wrote: »
    A self driving car will be programmed to hit the brakes and swerve only if it can safely do so. Swerving to hit something else is probably just going to make things worse. Braking drops the energies involved in the collision thus making it as safe as possible.

    Obviously, but the trolley problem is specifically about a brake failure scenario.

    I doubt anyone will ever bother writing some kind of special handling for this situation. If a brake failure has happened in a way the system could not detect, and only became obvious once it is about to hit someone, it's just gonna hit someone. And then engine brake to stop and then turn itself off.

  • Julius Captain of Serenity on my ship Registered User regular
    HamHamJ wrote: »
    HamHamJ wrote: »
    A self driving car will be programmed to hit the brakes and swerve only if it can safely do so. Swerving to hit something else is probably just going to make things worse. Braking drops the energies involved in the collision thus making it as safe as possible.

    Obviously, but the trolley problem is specifically about a brake failure scenario.

    I doubt anyone will ever bother writing some kind of special handling for this situation. If a brake failure has happened in a way the system could not detect, and only became obvious once it is about to hit someone, it's just gonna hit someone. And then engine brake to stop and then turn itself off.

    yeah the scenario here is that the system would be aware of how many people are on each track/road (and all further consequences of switching), able to control the switch or its wheels and unable to brake in any way or avoid either track/road. and unaware of system failure until it's too late.

    that just seems like a system designed specifically to create this scenario.

  • shryke Member of the Beast Registered User regular
    The saddest thing about this whole affair is that, for all the hubbub and lack of thought and morals, they didn't even cover any of the interesting trolley problems.

    Just the standard one? BORING!

  • Ninjeff Registered User regular
    Enc wrote: »
    Ninjeff wrote: »
    It... didn't seem to bother me that much.
    I mean, it is such a popular thought experiment I'm actually surprised no one caught it in the moment.

    So, I'm a random person unrelated to this study. If I were put on here I would end up having to choose who would be killed by a vehicle.

    My sister-in-law was killed by a truck driver who had to choose between hitting her as she fell off a sidewalk or crashing into traffic in a split second, delayed momentarily, and then did both due to the delay.

    The similarities here are enough that, just talking about this, I am reliving really terrible parts of my past. Which is fine, in this case, because it is theoretical. Were I in this situation, I would go through a living hell, as would my wife and her family as they helped me get through it, causing a lot of misery for... what? A quick buck through shock factor on TV?

    Even if you are familiar with the exercise, you may not recall it in time to matter. Or, if you do, you might still second-guess yourself, because what if it was real and just happened to coincidentally look like the test? Does that make you hesitate in the real situation also? Are people dead because of you?

    It doesn't matter if it was common. It doesn't matter that nobody told him no. This is a really shitty thing to do to an unassuming bystander, especially one interested in the vehicles. Now those people will link the potential deaths of others to their passion every time they see a high-speed train. What if they are unable to look at them the same now? What if they never want to work with trains again, after decades of interest?

    This is super shitty. It needs to be called out on it.

    You'll notice that I did not say that I didn't understand why OTHER people were disturbed by it; I was simply stating that it didn't bother ME that much.
    Maybe it was a little shitty, but I understand the desire to know how people would really react to this. In my primitive musician brain I couldn't think of a different way to go about it any "safer" than they did, so it didn't bother me.

    People handle tragedy in different ways. I do not re-live tragic parts of my life when I see that tragedy played out in media (and it is, a lot), but AGAIN, that is me. So this kind of thing did not bother me, per se.

  • Enc A Fool with Compassion Pronouns: He, Him, His Registered User regular
    It is pretty difficult to take a post like "it doesn't bother me that much" in this context and not see it as marginalizing those it did bother, even though that isn't your intention.

    I'm glad you aren't bothered by your past. That sounds nice. I suspect that is more irregular than the other way.

  • discrider Registered User regular
    Phyphor wrote: »
    discrider wrote: »
    Depends on if you can copy the AI.
    I'm not sure whether a general AI would not store its state in temporary, powered memory.
    In such a case, it may not be possible to bring the entire state across all at once; it may change between the start of the copy and the end.
    This may render the copied AI non-functional as the temporary state may be as much a necessity as the physical 'brain' structure.

    But yes.
    You could do that.
    Even if you don't kill the copy.

    Any Turing-machine-based AI is necessarily copyable; you just need extra resources available. You mark all memory as copy-on-write at some moment, preserve the execution state (stopping the AI for a few milliseconds to do so atomically), and then copy at your leisure.

    Well...
    I was originally thinking that there'd be some physical trade-off in having an architecture where you can physically access all the memory rather than just the surface of the memory stick.
    And that an AI may not have such an arrangement for speed purposes.

    But yes, this works, and I can't see any reason why you wouldn't have a standard physically accessible architecture, aside perhaps from it being hard to load into the fast brain.
