
[Ethics] Did I just lose my favorite show? (Mind Field Trolley Experiment)


Posts

  • Ninjeff Registered User regular
    edited December 2017
    Enc wrote: »
    It is pretty difficult to take a post like "it doesn't bother me that much" in this context and not see it as marginalizing those it did bother, even though that isn't your intention.

    I'm glad you aren't bothered by your past. That sounds nice. I suspect that is more irregular than the other way.

    I disagree.

    The OP was asking if it bothered other people. I was not bothered. It does not marginalize you that I have a different opinion.

  • Phyphor (Building Planet Busters, Tasting Fruit) Registered User regular
    discrider wrote: »
    Phyphor wrote: »
    discrider wrote: »
    Depends on if you can copy the AI.
    I'm not sure a general AI wouldn't store its state in temporary, powered memory.
    In that case, it may not be possible to bring the entire state across all at once; it may change between the start of the copy and the end.
    This may render the copied AI non-functional, as the temporary state may be as much a necessity as the physical 'brain' structure.

    But yes.
    You could do that.
    Even if you don't kill the copy.

    Any Turing-machine-based AI is necessarily copyable; you just need to have extra resources available. You mark all memory as copy-on-write at some moment, preserve the execution state (stopping the AI for a few milliseconds so the snapshot is atomic), and copy at your leisure.

    Well...
    I was originally thinking that there'd be some physical trade-off to having an architecture where you can physically access all the memory rather than just the surface of the memory stick.
    And that an AI may not have such an arrangement for speed purposes.

    But yes, this works, and I can't see any reason why you wouldn't have a standard physically accessible architecture, aside perhaps from loading it into the fast brain being hard.

    Ah, yeah, for performance you might want to do it differently. My thinking was more along the lines that any AI could be designed such that it is copyable, not that every design always is.
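
    A minimal sketch of that copy-on-write snapshot idea, assuming a POSIX process stands in for the AI (fork() marks the parent's pages copy-on-write, so the child sees memory frozen at that instant while the parent keeps running):

    import os

    # Toy "AI state": mutable memory that keeps changing while the AI runs.
    state = {"step": 0, "weights": [0.1, 0.2, 0.3]}

    pid = os.fork()  # child gets a copy-on-write snapshot of all memory at this instant
    if pid == 0:
        # Child: its view of `state` is frozen at the moment of fork,
        # no matter what the parent does afterwards.
        print("copy sees step", state["step"])
        os._exit(0)
    else:
        state["step"] += 1   # parent mutates immediately; the snapshot is unaffected
        os.waitpid(pid, 0)

    Roughly the same trick underlies process checkpointing and live VM migration; the only special requirement is that execution can be paused at one well-defined moment.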

  • Enc (A Fool with Compassion; Pronouns: He, Him, His) Registered User regular
    Ninjeff wrote: »
    I disagree.

    The OP was asking if it bothered other people. I was not bothered. It does not marginalize you that I have a different opinion.

    I'm glad I have you here to tell me how I feel.

  • Meeqe (Lord of the pants most fancy, Someplace amazing) Registered User regular
    Phyphor wrote: »
    Ah, yeah, for performance you might want to do it differently. My thinking was more along the lines that any AI could be designed such that it is copyable, not that every design always is.

    In modern computer memory every part of the data structure is available to read. It isn't like a stack of paper where you have to remove the top bits to get to the lower ones; we stopped using those sorts of physical structures decades ago. Think of memory cells more like drawers: they each open individually. Programs may access them in sequential order, but that is a separate thing from how the bits are physically stored.

    themoreyouknow.gif
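
    A quick illustration of the drawers point, as a hypothetical sketch: any cell of a flat buffer can be read directly, in any order, and the whole state can be read out in one pass.

    # "Drawers": a flat buffer where any cell is directly addressable.
    memory = bytearray(range(16))

    print(memory[12])         # open drawer 12 directly; no need to pass through 0-11
    print(memory[3])          # access order is the program's choice, not the hardware's
    snapshot = bytes(memory)  # and the full state can be copied out wholesale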

  • Julius (Captain of Serenity, on my ship) Registered User regular
    Ninjeff wrote: »
    I disagree.

    The OP was asking if it bothered other people. I was not bothered. It does not marginalize you that I have a different opinion.

    well, OP asked what the ethical response is and what our thoughts are. just purely from the context of ethics, it's reasonable to draw the implication that one shouldn't be bothered by this.

    and, like, you saying that being bothered by it is also fine, but that you understand wanting to know and thus doing it, reads like you think doing this wasn't really wrong?

    perfectly fine standpoint as such. just saying that is the conclusion one is led to.

  • discrider Registered User regular
    Meeqe wrote: »
    In modern computer memory every part of the data structure is available to read. It isn't like a stack of paper where you have to remove the top bits to get to the lower ones; we stopped using those sorts of physical structures decades ago. Think of memory cells more like drawers: they each open individually. Programs may access them in sequential order, but that is a separate thing from how the bits are physically stored.

    themoreyouknow.gif

    I don't know whether this applies to the specific tech that has been designed to run neural nets.
    In those, I thought each 'neuron' held its own memory storage, which means it may be difficult to physically access the register attached to a neuron deep within the structure.

  • Meeqe (Lord of the pants most fancy, Someplace amazing) Registered User regular
    I still believe, even in those setups, that you can access each one individually. Mainly because we're faking each neuron with a pretty intense setup and have to emulate each one individually. I can't think of a reason to build one where you couldn't read the individual state, given that we started with one and scaled up.

  • Phyphor (Building Planet Busters, Tasting Fruit) Registered User regular
    Meeqe wrote: »
    I still believe, even in those setups, that you can access each one individually. Mainly because we're faking each neuron with a pretty intense setup and have to emulate each one individually. I can't think of a reason to build one where you couldn't read the individual state, given that we started with one and scaled up.

    Well, there are two approaches: a million tiny machines that are largely independent, or one machine as fast as we can possibly build it. Most modern machines are the latter (though GPUs have branched and now have ~1-2000 cores, but still unified memory). You could have each neuron-equivalent hold its own local memory, and that would be maximally performant, as there would be no need for a memory controller, no delays, etc.

    If you actually built a NN that was composed of independent components, it's conceivable that you couldn't stop it atomically to copy the state.
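
    A hedged sketch of why the atomic stop matters, with threads standing in for independent 'neurons': unless every worker can be quiesced at one moment, a copy taken mid-run mixes states from different instants.

    import threading, time

    states = [0] * 4                 # each "neuron" owns one slot of local state
    running = threading.Event()
    running.set()

    def neuron(i):
        while True:
            running.wait()           # workers pause here whenever the event is cleared
            states[i] += 1           # local state keeps evolving while running
            time.sleep(0.001)

    for i in range(4):
        threading.Thread(target=neuron, args=(i,), daemon=True).start()

    time.sleep(0.05)
    running.clear()                  # ask every neuron to stop at its next checkpoint
    time.sleep(0.01)                 # crude quiesce; truly independent hardware may offer no such hook
    snapshot = list(states)          # only now is the copy (approximately) self-consistent
    running.set()
    print(snapshot)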

  • Meeqe (Lord of the pants most fancy, Someplace amazing) Registered User regular
    Phyphor wrote: »
    Well, there are two approaches: a million tiny machines that are largely independent, or one machine as fast as we can possibly build it. Most modern machines are the latter (though GPUs have branched and now have ~1-2000 cores, but still unified memory). You could have each neuron-equivalent hold its own local memory, and that would be maximally performant, as there would be no need for a memory controller, no delays, etc.

    If you actually built a NN that was composed of independent components, it's conceivable that you couldn't stop it atomically to copy the state.

    Oh for sure! I'm just skeptical that someone with the funding to get one of these things made did so in such a way that you couldn't break down its memory state to analyze; being able to do so is pretty much necessary for debugging/improvements. Can't fix what's wrong unless you can poke at the details. Even with local memory storage I'm pretty sure you can export it all; we started with one neuron, saw what it did, then basically multicored the neurons. Given that at the start we could see what the individual NN components did, it would be a baffling choice to eliminate your ability to look inside each NN neuron.

  • jothki Registered User regular
    I have a bit of a pet peeve about how people talk about Turing machine equivalents as if they're physically possible.

  • Polaritie (Sleepy) Registered User regular
    jothki wrote: »
    I have a bit of a pet peeve about how people talk about Turing machine equivalents as if they're physically possible.

    Eh, I mean, technically, no, you can't make a proper infinite Turing machine. But any computation that halts in finite time only ever touches a finite stretch of tape, so a big enough finite machine can run it; my intuition is that the set of problems that can be solved on a Turing machine but not on a finite one (in a finite amount of time) is very close to empty, or just empty. So... not sure that it matters, either.

  • Ninjeff Registered User regular
    Enc wrote: »
    I'm glad I have you here to tell me how I feel.

    I didn't tell you how to feel. If you feel marginalized, that is by choice. The act of someone having a different opinion than yours does not marginalize your own opinion. I don't like some people's take on the new Star Wars movie, but that doesn't marginalize my opinion of it, as my opinion remains as relevant to me (and to any discussion regarding opinions) as it was before other people had a different opinion.

    On the topic at hand, this test all seemed relatively minor to me. They tested the subjects to verify (or at least try to) that they were mentally sturdy enough to take the experiment. Assuming that test was done properly, I think it's totally viable to do this sort of thing in the manner they did it. If they'd let the simulation run to the point of looking like the subjects had actually killed people, it might push that boundary for me, but they ended it well before that was the case.
    I can see the worry is what it might "do" to some people's mental states, but it appears to me they tried to prevent those people from being part of the test.
    I, personally, wouldn't mind being part of that test.

  • Zek Registered User regular
    What gets me about this experiment is that there really isn't a clear benefit, other than satisfying idle curiosity. Why is it valuable to our society for there to be a record of what people do in this scenario? Does it actually change anything at all? This video in particular was made simply to entertain people. I didn't watch the whole thing; does he actually attempt to reach any sort of valuable conclusion from it?

  • durandal4532 Registered User regular
    edited December 2017
    discrider wrote: »
    I don't know whether this applies to the specific tech that has been designed to run neural nets.
    In those, I thought each 'neuron' held its own memory storage, which means it may be difficult to physically access the register attached to a neuron deep within the structure.

    Neural Networks aren't really tied to any physical structure.

    Each "neuron" is basically whatever you feel comfortable calling a "neuron". That can mean anything from like you make little pods and connect them with wires and each pod has a separate capacity to do shit, all the way to the number stored in variable A only changes if the number stored in variable B also changes by at least .04.

    It's one of the reasons I super hate the name "Neural Network"! Neuron is too charged a word for something that's a very non-specific thing.
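
    To make that point concrete, a minimal sketch (all numbers illustrative): a 'neuron' can be nothing more than a weighted sum pushed through a threshold, and two layers of them already count as a 'neural network'.

    # A "neuron" as plain arithmetic: weighted sum plus a change threshold, no biology required.
    def neuron(inputs, weights, bias, threshold=0.04):
        activation = sum(i * w for i, w in zip(inputs, weights)) + bias
        return activation if abs(activation) >= threshold else 0.0

    hidden = [neuron([0.5, -0.2], [0.8, 0.3], 0.1),
              neuron([0.5, -0.2], [-0.4, 0.9], 0.0)]
    output = neuron(hidden, [1.0, -1.0], 0.0)
    print(output)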
    Ninjeff wrote: »
    On the topic at hand, this test all seemed relatively minor to me. They tested the subjects to verify (or at least try to) that they were mentally sturdy enough to take the experiment. Assuming that test was done properly, I think it's totally viable to do this sort of thing in the manner they did it. If they'd let the simulation run to the point of looking like the subjects had actually killed people, it might push that boundary for me, but they ended it well before that was the case.
    I can see the worry is what it might "do" to some people's mental states, but it appears to me they tried to prevent those people from being part of the test.
    I, personally, wouldn't mind being part of that test.

    I think it's extremely likely that they did not test people properly, based on their inability to understand both why this was not a particularly useful investigation and why anyone would imagine it was possible for this to be a damaging "experiment".

    IRBs exist because the social and psychological sciences spent a long time disastrously harming people in the sloppy pursuit of experimental questions. Even when we didn't think we were! Even when we thought we'd figured out what possible harms existed and weeded them out.

    If you can't figure out how to properly inform participants and still have your experiment work, you don't do it. You definitely don't turn it into a stunt for a TV show.

  • daveNYC (Why universe hate Waspinator?) Registered User regular
    Zek wrote: »
    What gets me about this experiment is that there really isn't a clear benefit, other than satisfying idle curiosity. Why is it valuable to our society for there to be a record of what people do in this scenario? Does it actually change anything at all? This video in particular was made simply to entertain people. I didn't watch the whole thing; does he actually attempt to reach any sort of valuable conclusion from it?

    It's like Ow My Balls! except with psychological trauma.

  • Heffling (No Pic Ever) Registered User regular
    discrider wrote: »
    Well that's another flaw in the general AI send-a-copy-through-the-trolley-sim.
    Rick's car is a jerk.
    What happens if your general AI is a jerk and only gives you the 'correct' answer in the sim?
    And then proceeds to hit as many pedestrians as it can during an 'accident'?

    But again, you would have to hope the morals impressed upon it during its training phase stick.
    Like any person.

    We should probably not design a general AI for something like driving a car, both because it's vastly overkill for the task and risks adding too much individual judgement and potential for failure to what should ultimately be a very organized and rote system, and because avoiding making advanced general intelligences do simple and routine tasks like driving was the whole point of developing a self-driving car in the first place.

    We will make an AI for driving a car just because we're lazy and prefer someone else to have responsibility when possible.

  • Heffling (No Pic Ever) Registered User regular
    HamHamJ wrote: »
    A self driving car will be programmed to hit the brakes and swerve only if it can safely do so. Swerving to hit something else is probably just going to make things worse. Braking drops the energies involved in the collision thus making it as safe as possible.

    Obviously, but the trolley problem is specifically about a brake failure scenario.

    I think it's worth noting that an AI will have access to a whole suite of options that might not occur to a regular driver during an emergency scenario.

    For example, an AI could instantly downshift to first gear, driving the RPMs to crazy levels in order to slow the car. Or it could shift into reverse, willingly destroying the transmission and possibly the drive train and engine in order to rapidly slow the car, converting much of that forward momentum into the energy used up in destroying the transmission.

  • Ninjeff Registered User regular
    Heffling wrote: »
    Obviously, but the trolley problem is specifically about a brake failure scenario.

    I think it's worth noting that an AI will have access to a whole suite of options that might not occur to a regular driver during an emergency scenario.

    For example, an AI could instantly downshift to first gear, driving the RPMs to crazy levels in order to slow the car. Or it could shift into reverse, willingly destroying the transmission and possibly the drive train and engine in order to rapidly slow the car, converting much of that forward momentum into the energy used up in destroying the transmission.

    Unfortunately, I doubt shifting to first will be very effective in electric vehicles, buuuuut I could be wrong on that.

    As far as shifting to reverse: assuming the brakes have already been applied to their max, there is nothing more you can do, since the limit is on the tires at this point. Reverse will actually cause a LOSS of traction by spinning the wheels and preventing them from gripping (like how a car fishtails during a burnout, but in reverse).
    Point is, once emergency stopping has been initiated (and brake application is at 100% capacity), you are mostly limited to the quality of the tire to stop the vehicle effectively.

    I suppose an option might be some type of external anchor, but that probably has other, worse implications...
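
    As a back-of-the-envelope illustration of that tire limit (numbers are assumptions): the best any controller, human or AI, can do once braking force saturates the tires is decelerate at mu times g, so the minimum stopping distance is v^2 / (2 * mu * g).

    # Tire-limited stopping distance: no control strategy beats this floor.
    g = 9.81          # m/s^2
    mu = 0.9          # assumed friction coefficient, good tires on dry asphalt
    v = 55 * 0.447    # 55 mph in m/s (~24.6 m/s)

    d = v ** 2 / (2 * mu * g)
    print(f"minimum stopping distance: {d:.1f} m")  # ~34 m, however smart the driver is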

  • Phyphor (Building Planet Busters, Tasting Fruit) Registered User regular
    If we have full brakes available, the driving AI has failed if we ever get into a situation where we hit something other than a pedestrian literally jumping in front of us. The AI should be able to determine very precise stopping distances; if it gets to the point where it can't always safely stop, it should slow down.

  • Jealous Deva Registered User regular
    Ninjeff wrote: »
    Unfortunately, I doubt shifting to first will be very effective in electric vehicles, buuuuut I could be wrong on that.

    As far as shifting to reverse: assuming the brakes have already been applied to their max, there is nothing more you can do, since the limit is on the tires at this point. Reverse will actually cause a LOSS of traction by spinning the wheels and preventing them from gripping (like how a car fishtails during a burnout, but in reverse).
    Point is, once emergency stopping has been initiated (and brake application is at 100% capacity), you are mostly limited to the quality of the tire to stop the vehicle effectively.

    I suppose an option might be some type of external anchor, but that probably has other, worse implications...

    I think he was mainly referring to those as options for when traction is still available but the brakes specifically are fried. And AI systems are already VERY good at traction management (the car you drive at this moment likely has an AI-run traction and stability control system).

    Humans suck at emergency thinking unless trained for it. People have died in a lot of situations that had tight time windows but easy solutions that virtually anyone could have thought of, given sufficient time and calm (stuck throttles, for example, that could have been defeated by shifting into neutral and standing on the brake). AI would actually be great for these situations.

  • Ninjeff Registered User regular
    Jealous Deva wrote: »
    I think he was mainly referring to those as options for when traction is still available but the brakes specifically are fried. And AI systems are already VERY good at traction management (the car you drive at this moment likely has an AI-run traction and stability control system).

    Humans suck at emergency thinking unless trained for it. People have died in a lot of situations that had tight time windows but easy solutions that virtually anyone could have thought of, given sufficient time and calm (stuck throttles, for example, that could have been defeated by shifting into neutral and standing on the brake). AI would actually be great for these situations.

    AI or not, you are still 100% limited by the quality of the tire, which is my point. Putting the car in reverse will push the tires past 100% traction quicker, which would then cause a loss of control. You'd actually want to keep the transmission intact, as it creates parasitic drag on the drive train, helping to slow the car.
    I don't, however, know if that applies the same in an electric vehicle, which isn't prone to the same engine mechanics as an ICE vehicle. I don't think the gears would create the same drag on an electric drive train.

    If, in the rare case, the brakes get fried, your (and by proxy the AI's) best option is to attempt to steer around the obstacle. Which, unfortunately, is another set of problems still limited by tire quality. Even if the AI can manage to turn the wheel quickly enough, lack of grip in the tire (from the sudden change in direction) will just induce push (understeer) in the car and continue the slide forward. So even if your brakes are fried, you're still limited by tire quality.
    My point in all that is that this is a whole NEW problem for the AI that hasn't been addressed yet.

    In addition to the moral and philosophical quandaries facing the AI, it also has to have the ability to learn what equipment it's working with to get the best out of the vehicle.
    The unfortunate answer is that it is probably NOT possible to dial in a solution to every problem. If a deer jumps out at you on a highway while you're doing 55, you're going to hit that thing, AI or no.
    Which, I suppose, raises the question: when is it OK to STOP worrying about vanishingly small possibilities?
    I don't know the answer to that. I enjoy the conversation tho

  • discrider Registered User regular
    durandal4532 wrote: »
    Neural Networks aren't really tied to any physical structure.

    Each "neuron" is basically whatever you feel comfortable calling a "neuron". That can mean anything from like you make little pods and connect them with wires and each pod has a separate capacity to do shit, all the way to the number stored in variable A only changes if the number stored in variable B also changes by at least .04.

    It's one of the reasons I super hate the name "Neural Network"! Neuron is too charged a word for something that's a very non-specific thing.

    Yes.
    But can you get the full state out of Intel's Loihi chip?

    You may not need to build the tech to physically mirror the algorithm, but it appears companies are.
    And if you do, I don't think it is easy, or maybe even possible, to copy the AI's state between two systems.

    Speed is being increased at the expense of accessibility.

  • Jealous Deva Registered User regular
    Ninjeff wrote: »
    AI or not, you are still 100% limited by the quality of the tire, which is my point. Putting the car in reverse will push the tires past 100% traction quicker, which would then cause a loss of control. You'd actually want to keep the transmission intact, as it creates parasitic drag on the drive train, helping to slow the car.
    I don't, however, know if that applies the same in an electric vehicle, which isn't prone to the same engine mechanics as an ICE vehicle. I don't think the gears would create the same drag on an electric drive train.

    If, in the rare case, the brakes get fried, your (and by proxy the AI's) best option is to attempt to steer around the obstacle. Which, unfortunately, is another set of problems still limited by tire quality. Even if the AI can manage to turn the wheel quickly enough, lack of grip in the tire (from the sudden change in direction) will just induce push (understeer) in the car and continue the slide forward. So even if your brakes are fried, you're still limited by tire quality.
    My point in all that is that this is a whole NEW problem for the AI that hasn't been addressed yet.

    In addition to the moral and philosophical quandaries facing the AI, it also has to have the ability to learn what equipment it's working with to get the best out of the vehicle.
    The unfortunate answer is that it is probably NOT possible to dial in a solution to every problem. If a deer jumps out at you on a highway while you're doing 55, you're going to hit that thing, AI or no.
    Which, I suppose, raises the question: when is it OK to STOP worrying about vanishingly small possibilities?
    I don't know the answer to that. I enjoy the conversation tho

    Pretty sure in an electric motor situation, at least in theory, you can just run the motor backwards to create as much drag as the tires can handle, especially if you are running an ungeared/transmissionless system.

  • Dedwrekka (Metal Hell adjacent) Registered User regular
    I'm actually fairly certain that in fully electric vehicles, especially those with regenerative braking, each wheel has both an independent motor and an independent brake. Brake failure becomes either a distributed problem (all brakes failed independently) or a central software failure (the software cannot connect to the wheel motors).
    Either way, a passenger-controlled, un-networked, physical e-brake seems like a must.

  • dlinfiniti Registered User regular
    Maybe a robot conductor driving robot passengers

  • l_g Registered User regular
    Jealous Deva wrote: »
    Pretty sure in an electric motor situation, at least in theory, you can just run the motor backwards to create as much drag as the tires can handle, especially if you are running an ungeared/transmissionless system.

    I certainly agree with the point that there are options available to an AI that are not available to human beings in certain time windows, especially not to humans who lack skill or experience in those situations. I also agree that there are mechanical/physical limitations that no AI can do anything about, no matter how smart it is. I don't think these are mutually exclusive propositions.

  • Aioua (Ora Occidens Ora Optima) Registered User regular
    Dedwrekka wrote: »
    I'm actually fairly certain that in fully electric vehicles, especially those with regenerative braking, each wheel has both an independent motor and an independent brake. Brake failure becomes either a distributed problem (all brakes failed independently) or a central software failure (the software cannot connect to the wheel motors).
    Either way, a passenger-controlled, un-networked, physical e-brake seems like a must.

    mmm that's just some of the fancier Teslas with the four motors, I think
    actually the best I can even find is a two-motor (front and rear) version of the Model S

    most electrics have a single motor and a fixed gear, and they all have physical brakes in addition to the regenerative brakes

  • Jealous Deva Registered User regular
    edited December 2017
    They have physical brakes because regenerative brakes require a power input to fully stop a vehicle: there is a crossover point where they stop making power back, because the slower the vehicle is going, the less power gets fed back into the system, and past that point the battery has to feed energy into the system to slow down further. You use physical brakes for the low-speed tasks that would otherwise require energy input (like, say, sitting on a hill, or driving in a parking lot). They also make a good last-ditch safety feature, which is why even things like trains, which rely almost exclusively on electrical braking to stop, still have backup friction brakes.

    Edit: Actually, researching it, it's a bit more complicated than the above, but that's probably getting way off topic.
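
    The first-order point, though, is just that recoverable braking power scales with speed (P = F * v), so regeneration fades away exactly where holding force is still needed. A rough sketch with illustrative numbers:

    # Regenerative braking power at constant braking force: P = F * v.
    F = 4000                        # N, assumed braking force
    for v in [30, 15, 5, 1, 0.1]:   # m/s
        print(f"v = {v:5.1f} m/s -> recoverable power ~ {F * v / 1000:5.1f} kW")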
