
[Autonomous Transportation] When the cars have all the jobs, the poor will walk the earth


Posts

  • tbloxham Registered User regular
    hippofant wrote: »
    tbloxham wrote: »
    mrondeau wrote: »
    Looks like Uber's released some findings on the Arizona crash that killed a woman crossing the street. Basically, software settings were set too lax and it classified the obstruction in the road as a false positive and ignored it.
    The car’s sensors detected the pedestrian, who was crossing the street with a bicycle, but Uber’s software decided it didn’t need to react right away. That’s a result of how the software was tuned. Like other autonomous vehicle systems, Uber’s software has the ability to ignore “false positives,” or objects in its path that wouldn’t actually be a problem for the vehicle, such as a plastic bag floating over a road. In this case, Uber executives believe the company’s system was tuned so that it reacted less to such objects. But the tuning went too far, and the car didn’t react fast enough, one of these people said.

    As someone who works in machine learning, and with colleagues specifically working on safe ML: WHAT!!!?

    Your operating point should be at a 0.00000% false negative rate when false negatives kill people. If the corresponding false positive rate is too high, or if you are not reasonably certain of the ROC curve, YOU DON'T LET THE CAR ON THE FUCKING STREET!

    This is fucking engineering ethics 101.

    Yeah, then your car can't drive and neither can your normal driver. When you are driving along the road yourself as a person, you have a massive false-positive reject rate. You can definitely be completely wrong about where that level should sit, but the tolerable level is not zero risk. It is 'better than a human driver'. And human drivers are AWFUL at driving.

    Cars break all your standard assumptions about tolerable operating levels, because any 'industry standard' fault tree is going to say...

    "Well, no one gets to have a car. And no one gets to drive a bus. We're just about OK with trains, but they have to go pretty darn slow."

    And that's not an autonomous car. A fault tree inspection will 100% say, "Humans cannot operate cars. It is impossible."

    Mistakes have clearly been made here, but humans are not 'safe machines'. The sooner we can replace them as vehicle operators, the better.

    No. You run at much stricter than necessary cut-offs, and then analyze your results from that in a theoretical realm, making double and triple sure that your cut-off is stricter than necessary. Then you loosen it just a little bit for more practical trials.

    This is basic research ethics 101. When you test drugs for safety in humans, you do not aim for your best-guess dosage in your first trial. When we test surgical diagnosis tools, we don't use a p<0.05 cut-off, because that means we're performing unnecessary surgeries on ~5% of people, if we've gotten everything right, and possibly WAY more than that if we're off, which we probably are. Those are ways to get a whole lot of people killed.

    This is not an 'are AIs safer than human drivers' issue, as much as y'all might enjoy rehashing that particular argument. This is a, "STOP BEING SILLY GOOSE AI DEVELOPERS," issue. There are ethical ways to do this sort of research, established protocols found in many fields, and this is ridiculous. If your test car STOPS IN THE ROAD 500% more often than it should, that is not a reason to drop the false positive cut-off in testing! (Unless you're getting near final-phase testing, which I'm almost certain Uber was not.) You run the shittily driving car over and over again until you gather enough data to be almost entirely certain of what your ROC looks like, and then you set your cut-offs appropriately.*


    * Note, you probably also shouldn't use ROCs to do that, because ROCs have a ton of problems in the boundary regions that you'd probably want to be working at. They're good for viewing the overall trade-off between sensitivity and specificity, but use more appropriate statistics when you're only interested in the 0.95 - 1.00 range.

    What if you know with 100% confidence that the human surgeon you are up against makes the wrong decision and performs an unnecessary surgery 25% of the time? Still setting that p value at less than 0.05? Demanding a 5-fold improvement over humans before you will even start the testing? What if it turns out that with perfect information you are still wrong 10% of the time? No robo-surgeon ever, even though the one you've built has 50% fewer false positives than the best humans?

    Humans are AWFUL at driving. Roads are awful for driving ON. Should we just give up and leave it to the humans?
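
    (For concreteness, here is a minimal sketch of the kind of cut-off selection being argued about above. Everything in it is hypothetical: the score convention, the margin, and the tolerances are illustrative and not anyone's actual pipeline. The point is that a permissive cut-off has to be justified by logged data, and that zero observed misses still only bounds the true false-negative rate as well as the sample size allows.)

```python
import numpy as np

def choose_ignore_threshold(fp_scores_real_hazards, fp_scores_benign,
                            confidence=0.999, max_tolerable_fnr=1e-4):
    """Pick an 'ignore anything scoring above this' cut-off from test logs.

    fp_scores_real_hazards -- the model's "this is ignorable" scores for
                              detections that were genuinely dangerous
    fp_scores_benign       -- the same scores for bags, steam, shadows, etc.
    """
    real = np.asarray(fp_scores_real_hazards, dtype=float)
    benign = np.asarray(fp_scores_benign, dtype=float)

    # Never ignore anything that scores like a hazard we have already seen;
    # the 0.05 margin is arbitrary, purely for illustration.
    threshold = min(real.max() + 0.05, 1.0)

    # Zero observed misses does not mean the true false-negative rate is zero.
    # With n labelled hazards and no misses, a one-sided Clopper-Pearson bound
    # says the true FNR could still be as high as 1 - (1 - c)^(1/n) (~3/n at 95%).
    n = len(real)
    fnr_upper_bound = 1.0 - (1.0 - confidence) ** (1.0 / n)

    # The cost of this operating point: how often the car reacts to harmless stuff.
    false_positive_rate = float(np.mean(benign <= threshold))

    if fnr_upper_bound > max_tolerable_fnr:
        raise RuntimeError(
            f"Only {n} labelled hazards: the data cannot rule out a false-negative "
            f"rate above {fnr_upper_bound:.2e}. Keep testing before loosening anything."
        )
    return threshold, false_positive_rate
```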

    "That is cool" - Abraham Lincoln
  • mrondeau Montréal, Canada Registered User regular
    tbloxham wrote: »
    What if you know with 100% confidence that the human surgeon you are up against makes the wrong decision and performs an unnecessary surgery 25% of the time? Still setting that p value at less than 0.05? Demanding a 5-fold improvement over humans before you will even start the testing? What if it turns out that with perfect information you are still wrong 10% of the time? No robo-surgeon ever, even though the one you've built has 50% fewer false positives than the best humans?

    Humans are AWFUL at driving. Roads are awful for driving ON. Should we just give up and leave it to the humans?

    You vastly overestimate the false negative rate for humans.

  • Phoenix-D Registered User regular
    tbloxham wrote: »
    What if you know with 100% confidence that the human surgeon you are up against makes the wrong decision and performs an unnecessary surgery 25% of the time? Still setting that p value at less than 0.05? Demanding a 5-fold improvement over humans before you will even start the testing? What if it turns out that with perfect information you are still wrong 10% of the time? No robo-surgeon ever, even though the one you've built has 50% fewer false positives than the best humans?

    Humans are AWFUL at driving. Roads are awful for driving ON. Should we just give up and leave it to the humans?

    You really, really aren't getting it. And again, your method is how we get shitty, rushed systems. This isn't a problem unique to AI research; you could just as easily apply the same flawed logic to any new technique. You do not start from the assumption that your system is going to be magically better than the alternative, because you don't know that yet. You run tests.

    For a real-world example, look at Google. They are almost ready to start actual passenger service, and as far as I remember 100% of accidents involving their system have been the fault of the other cars. That's not to say they won't ever hit someone - they will - but it won't be because they were overconfident and had a system that wasn't ready on the roads.

  • tbloxham Registered User regular
    edited May 2018
    Phoenix-D wrote: »
    You really, really aren't getting it. And again, your method is how we get shitty, rushed systems. This isn't a problem unique to AI research; you could just as easily apply the same flawed logic to any new technique. You do not start from the assumption that your system is going to be magically better than the alternative, because you don't know that yet. You run tests.

    For a real-world example, look at Google. They are almost ready to start actual passenger service, and as far as I remember 100% of accidents involving their system have been the fault of the other cars. That's not to say they won't ever hit someone - they will - but it won't be because they were overconfident and had a system that wasn't ready on the roads.

    No. First you inspect the current state of play to see what you need to be superior to. The current state of play is that humans do the driving, killing 35,000 people a year, injuring far more, and generally doing a disastrous and terrible job that no standard safety evaluation would ever allow.

    So you need to be better than that. It does not matter how safe YOU are. It matters that you are safer than your best available competition.

    edit - And, to be clear, mistakes were made by the designers here (although we only know that because other companies say they would be better, and it's a lot easier to say how great you are than to actually be great). But driving a car is not like being a surgeon. Current standards are awful. Humans fail continually.

    "That is cool" - Abraham Lincoln
  • Aridhol Daddliest Catch Registered User regular
    Iterating at an extremely slow rate while tens of thousands of people die a year isn't some winning argument here...

    Yes I would rather 10 people die from these mistakes if it saves the 30,000 others by getting onto the road "faster".

    That's an insane argument to make.
    We don't need to make it perfect, we need to make it better than humans.

  • Mortious The Nightmare Begins Move to New Zealand Registered User regular
    Aridhol wrote: »
    Iterating at an extremely slow rate while tens of thousands of people die a year isn't some winning argument here...

    Yes I would rather 10 people die from these mistakes if it saves the 30,000 others by getting onto the road "faster".

    That's an insane argument to make.
    We don't need to make it perfect, we need to make it better than humans.

    Is it even a lot better than human drivers at this point? What is Uber's miles driven per accident/death?

    Move to New Zealand
    It’s not a very important country most of the time
    http://steamcommunity.com/id/mortious
  • Phoenix-D Registered User regular
    tbloxham wrote: »
    No. First you inspect the current state of play to see what you need to be superior to. The current state of play is that humans do the driving, killing 35,000 people a year, injuring far more, and generally doing a disastrous and terrible job that no standard safety evaluation would ever allow.

    So you need to be better than that. It does not matter how safe YOU are. It matters that you are safer than your best available competition.

    edit - And, to be clear, mistakes were made by the designers here (although we only know that because other companies say they would be better, and it's a lot easier to say how great you are than to actually be great). But driving a car is not like being a surgeon. Current standards are awful. Humans fail continually.

    Re: the bolded part: the testing discussed is how you find that out without, you know, killing people.

    And we know, even in a vacuum, that mistakes were made here, because the system missed a really obvious, really large target in the middle of the road, and the explanation of why was laughable. The average driver would not have had that accident. But beyond that, we know other companies are better at this than Uber because they've proven it.

  • Mortious The Nightmare Begins Move to New Zealand Registered User regular
    Didn't Uber also have a ridiculously low number of miles per driver intervention compared to other autonomous vehicles?

    Move to New Zealand
    It’s not a very important country most of the time
    http://steamcommunity.com/id/mortious
  • hippofant ティンク Registered User regular
    edited May 2018
    tbloxham wrote: »
    What if you know with 100% confidence that the human surgeon you are up against makes the wrong decision and performs an unnecessary surgery 25% of the time? Still setting that p value at less than 0.05? Demanding a 5-fold improvement over humans before you will even start the testing? What if it turns out that with perfect information you are still wrong 10% of the time? No robo-surgeon ever, even though the one you've built has 50% fewer false positives than the best humans?

    Humans are AWFUL at driving. Roads are awful for driving ON. Should we just give up and leave it to the humans?

    If you knew with 100% confidence that the human surgeon is making the wrong decision 25% of the time, YOU REPORT THEM TO THE SURGEON'S COLLEGE. Why are you working with them? Why are you choosing a terrible surgeon as your baseline comparison? How can you know they're wrong 25% of the time unless you've already built and tested a system that's better??? Like, if you already have a way to know that a doctor's wrong 25% of the time, then that means you already have something that's better than the doctor, and so there wouldn't be any development or testing required!

    That has absolutely NOTHING to do with how you design and test your system. Nothing you said makes any sense. It's a total non-sequitur. Your hypothetical scenario is so absurd.... My machine learning algorithm can beat a monkey flinging shit at a wall, therefore I should be allowed to test it on live humans, because demanding a 5-fold improvement over a monkey flinging shit at a wall is ridiculous?! And no, that doesn't mean we just leave it to monkeys throwing shit at walls! There are actual ethical ways to conduct research, as we continually have over history, so stop making this a bullshit false dichotomy argument.

    (Note: we're not talking about deployment here. A 76% system is better than a 75% system, but that's not remotely relevant here, at all.)

  • milski Poyo! Registered User regular
    edited May 2018
    Aridhol wrote: »
    Iterating at an extremely slow rate while tens of thousands of people die a year isn't some winning argument here...

    Yes I would rather 10 people die from these mistakes if it saves the 30,000 others by getting onto the road "faster".

    That's an insane argument to make.
    We don't need to make it perfect, we need to make it better than humans.

    I have zero confidence Uber will ever be there and in the meantime you're suggesting giving a corporation a license to recklessly kill for the sake of profit

    I ate an engineer
  • hippofant ティンク Registered User regular
    edited May 2018
    Mortious wrote: »
    Didn't they steal a bunch of tech from Google?

    Why are they so far behind.

    Uber is not nearly the sort of tech company that Google or Apple is. They've really got, like, one tech/engineering innovation, and even then, it's honestly a pretty mediocre one, probably dwarfed in importance by the business model and systems they use. Uber engineers aren't building AIs to play Go, just for the hell of it, for example. That's not the sorta engineering culture they have. They can steal the tech, but they don't have the talent. They're only now starting to reach out to academics in computer vision, when Google's had a presence here for... at least a decade now? Here at my university, Google's lured... four or five of our senior AI full professors over the past few years. Uber's lured one assistant professor.

  • hippofant ティンク Registered User regular
    edited May 2018
    Aridhol wrote: »
    Iterating at an extremely slow rate while tens of thousands of people die a year isn't some winning argument here...

    Yes I would rather 10 people die from these mistakes if it saves the 30,000 others by getting onto the road "faster".

    That's an insane argument to make.
    We don't need to make it perfect, we need to make it better than humans.

    You should know that we've actually done this before, right? The rules governing ethical research aren't just arbitrary bureaucracy. They were developed over many years, expressly because there were once many people who thought like you did, and so they kept killing 10 people at a time trying to save 30 000, and it turns out that the math on it doesn't actually work out, and instead you just end up killing hundreds of thousands. For all sorts of reasons, because people overestimate the potential benefit of their research, because they underestimate the amount of harm their experiments will cause, because duplicated work mitigates the potential benefits of parallel efforts, because other technologies can entirely supplant the use-case for your research before its effects are fully realized, because the easier you make this sort of research the more people turn to it unnecessarily, etc..

    Like, even in this scenario, your thinking doesn't make any sense. Either Google's going to win the race to these AV technologies or Uber's going to win. (Or GM or Tesla or BMW or Nissan or Ford or Delphi or Mercedes Benz or Bosch...) If Google wins the race to save 30 000 people, Uber's just killed 10 people for no reason. Or rather, the reason they killed 10 people is because they suck at research, fell behind, and want to catch up (so they can make money), which... sure, that's not a moral hazard at all there: be bad at research, and then you're ethically allowed to kill people to make progress!

    May I kill 10 people in the name of my AV research? What if I say my research will save 60 000 people; may I kill 20 people then? No, you say, because obviously my research has a less than 0.03% probability of being successful. But what's Uber's probability? Do you know? And more to the point, do you trust Uber to know? The purely statistical errors here are profound, and yet there are still more: would you be okay with Uber deciding to run all their tests in your neighbourhood, knowing that they're going to kill 10 people in the name of saving 30 000? What do you do the day Uber kills their 11th person? What if Uber kills 10 people but maims 100? Are any of you thinking these ethical research problems through?


    Edit: Not to mention, again, that this is a gross misrepresentation of the situation. Uber could have run their damn vehicles at a stricter false-positive threshold. The cars would have still run on the street. They'd still gather testing data that would progress them. When people design airplanes, the first prototype doesn't even fly, because you can get plenty of juice out of ground tests. There is no reason to be running live tests with loose thresholds, again, unless you're nearing final stage testing, at which point you probably shouldn't be fricking eyeballing your thresholds! They could have run this car at a tighter threshold; the car would have stopped; the woman would have lived; and then they could go back to the labs, see that this woman scored at 0.84 or whatever and then said, hey, our false positive threshold should probably be higher than 0.84! There was (probably) no benefit at all to running with 0.83 instead, and yet you're all making it out as though 0.83 would have just ended Uber's ambitions entirely.
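
    (As a hedged illustration of that last paragraph: the offline replay being described can be sketched in a few lines. The score convention, field names, and numbers below are made up; the only point is that a looser candidate threshold gets evaluated against logged, labelled detections before it ever controls a vehicle.)

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LoggedDetection:
    fp_score: float   # model's confidence that the detection is ignorable (a "false positive")
    was_real: bool    # ground truth added during offline labelling

def replay_threshold(log: List[LoggedDetection], ignore_above: float) -> Tuple[int, int]:
    """Replay logged detections against a candidate 'ignore' threshold.

    The car keeps running with its strict production setting; this replay
    shows how many real obstacles a looser setting *would* have ignored,
    before that setting ever touches a public road.
    """
    ignored = [d for d in log if d.fp_score > ignore_above]
    missed_real = sum(d.was_real for d in ignored)
    spurious_stops_avoided = sum(not d.was_real for d in ignored)
    return missed_real, spurious_stops_avoided

# Hypothetical numbers mirroring the 0.84 example above: with the candidate
# threshold at 0.83 the 0.84 event is ignored -- a miss caught in replay,
# not on the street. Raising the threshold above 0.84 keeps it.
log = [LoggedDetection(0.84, True), LoggedDetection(0.97, False), LoggedDetection(0.12, True)]
print(replay_threshold(log, ignore_above=0.83))  # (1, 1)
print(replay_threshold(log, ignore_above=0.90))  # (0, 1)
```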

  • mrondeau Montréal, Canada Registered User regular
    hippofant wrote: »
    Uber is not nearly the sort of tech company that Google or Apple is. They've really got, like, one tech/engineering innovation, and even then, it's honestly a pretty mediocre one, probably dwarfed in importance by the business model and systems they use. Uber engineers aren't building AIs to play Go, just for the hell of it, for example. That's not the sorta engineering culture they have. They can steal the tech, but they don't have the talent. They're only now starting to reach out to academics in computer vision, when Google's had a presence here for... at least a decade now? Here at my university, Google's lured... four or five of our senior AI full professors over the past few years. Uber's lured one assistant professor.

    I don't think I have even seen a single research paper from Uber.
    Not much from Apple either, but that's more because they don't usually publish their research.
    Google, Microsoft and Facebook, that's something else.

  • hippofant ティンク Registered User regular
    edited May 2018
    mrondeau wrote: »
    I don't think I have even seen a single research paper from Uber.
    Not much from Apple either, but that's more because they don't usually publish their research.
    Google, Microsoft and Facebook, that's something else.

    I honestly don't even know if Apple is working on autonomous vehicles at all. Scuttlebutt is that there is an Apple Car in development, or it was (?), but I have no idea if it is/was autonomous or not.

  • mrondeau Montréal, Canada Registered User regular
    hippofant wrote: »
    I honestly don't even know if Apple is working on autonomous vehicles at all. Scuttlebutt is that there is an Apple Car in development, or it was (?), but I have no idea if it is/was autonomous or not.
    Well, since they don't publish...
    I know they do ASR, NLP and some computer vision, 'cause I saw the job postings, but I never saw anything specifically about autonomous vehicles. Then again, it's not my area, so I did not look for them.

  • AngelHedgie Registered User regular
    At least Autopilot made it easy for the cops:
    A Tesla sedan running in its autopilot mode crashed into a parked police car in Laguna Beach, California on Tuesday, per the Associated Press, resulting in “minor injuries” to the driver. The officer in charge of the cruiser at the time of the crash was not inside the vehicle and thus avoided being injured.

    According to USA Today, the man who was driving the Tesla said he had engaged the autopilot mode prior to the impact—the latest in a series of incidents involving the feature to some degree:
    [Police Sgt. Jim Cota] said the luxury electric car crashed in almost the same place as another Tesla about a year ago. The driver, he said, also pointed to the Autopilot system as being engaged.

    The crash comes as Tesla has been facing scrutiny involving Autopilot. Since March, a driver died when his Model X SUV crossed the center divider in Mountain View, Calif., while in Autopilot mode. And another driver in Salt Lake City was injured in a crash when her car hit a parked fire truck. In both cases, Tesla says it can tell from its logs that drivers were either distracted or ignored the car’s warnings to take control.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • mRahmani Detroit Registered User regular
    Tesla Autopilot is simply not ready for road use. It's a system that advertises by name to drive for you, but in the fine print requires you to pay full attention at all times. Humans aren't wired for that, and the cars aren't ready to go fully autonomous. The feature needs to be pulled until it's ready to be a true autonomous system.

  • kaid Registered User regular
    mRahmani wrote: »
    Tesla Autopilot is simply not ready for road use. It's a system that advertises by name to drive for you, but in the fine print requires you to pay full attention at all times. Humans aren't wired for that, and the cars aren't ready to go fully autonomous. The feature needs to be pulled until it's ready to be a true autonomous system.

    I would agree that the system is not really ready for full autonomy, and people are not wired to pay full attention to something that does not constantly require attention. The active cruise control stuff is fine for maintaining following distance, and lane assist is fine for keeping you from drifting out of your lane, but it is not ready to drive itself yet.

  • Sunrize Registered User regular
    edited May 2018
    Edit: Misread post the first time, sorry!

  • Aim Registered User regular
    Did Tesla ever change the guidance to only use it on highways?

  • Forar #432 Toronto, Ontario, Canada Registered User regular
    edited May 2018
    Wolfman wrote: »
    Feral wrote: »
    We already have Smart cars. Americans are prejudiced against them. They sell remarkably poorly in the US and only saw any measure of success when gas prices were high. They sold 10,000 units in the US when gas prices peaked in 2012.

    They look like they have just enough cargo space for a single bag of groceries. 2 bags if you don't have a passenger. God forbid if you ever need it to hold anything more.

    Pardon the post from way back, but I finally decided to dig back into this thread after a couple months away, and this caught my eye.

    Toronto has a few car sharing services, and one is Car2Go, which uses smart cars. A few years back a woman I dated had a membership, and while it wasn't a daily thing, we used them fairly regularly for larger grocery trips (as in, more than we wanted to lug home by hand or on mass transit), among other things. We managed to cram the 'week or two+' worth of shopping for a household of two at Costco or a standard grocery store into the back of one.

    Yes, they are small. No, you're not going to fit an assembled full size coffee table in one. But you can indeed get a fairly substantial amount of stuff into the back of one. It's not like it taps out on the second bag of chips.

    Yes, I'm sure this would become a challenge again if we were feeding a family of 4 with two growing teenagers and needed 40 pounds of ground beef per week or something, but while it wasn't the perfect answer to every situation (not saying that you're indicating this, Wolfman, more to get ahead of the 'but but but what about _______' tangents that seem to crop up a lot in this thread), it wasn't without reasonable use cases. Sure, once in a while I'd hold a bag in my lap. If we needed a larger vehicle, we got one from another service (I think C2G also has some larger cars as well).

    Though that is somewhat moot if what I'm hearing about C2G leaving the Toronto market for one reason or another is true.

    Oh and I'm 6'3". No, you're not riding high in an armored tank of a vehicle, but I never felt particularly unsafe in one either, nor pinched for space. Again, I wasn't stretched out like on a 9 foot couch, but it's not like the cartoons portray where my knees were up past my eyebrows to fit in there. Frankly, I was pretty impressed with them, the cost was reasonable, and I'm kind of sad to see them go.

    First they came for the Muslims, and we said NOT TODAY, MOTHERFUCKER!
  • kime Queen of Blades Registered User regular
    mRahmani wrote: »
    Tesla Autopilot is simply not ready for road use. It's a system that advertises by name to drive for you, but in the fine print requires you to pay full attention at all times. Humans aren't wired for that, and the cars aren't ready to go fully autonomous. The feature needs to be pulled until it's ready to be a true autonomous system.

    The first time this came up (when someone hit a white truck, I think?) I said mostly the same thing, but it was pointed out to me that at that time, Autopilot was still actually safer than human drivers, even if it felt like it was irresponsible to keep Autopilot active or with that name (for the reasons you've listed). And honestly I think that's a convincing point, or was at the time.

    Any idea if that is still true? Is the number of miles driven via Autopilot public?

    Battle.net ID: kime#1822
    3DS Friend Code: 3110-5393-4113
    Steam profile
  • evilmrhenry Registered User regular
    edited May 2018
    Does Tesla's (or Uber's) system even have the concept of grey areas? I'm seeing discussion about whether an "unknown object in road" should result in a complete stop or not, but I'd expect an AV system to also have the option of "it's probably nothing, but this looks weird, so I'm going to slow down". Coming to a screeching halt because of a plastic bag is obviously a failure, but dropping speed by 10 mph or so isn't that bad, even if it's nothing.
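
    (A toy sketch of that graded middle ground, with completely made-up confidence bands and speed factors; a real system would do this with planned deceleration profiles rather than a lookup like this.)

```python
def target_speed(current_speed_mps: float, hazard_confidence: float) -> float:
    """Graded response instead of a binary ignore-or-brake decision.

    hazard_confidence is the perception stack's belief (0..1) that the object
    ahead is a real obstacle. The bands and factors are illustrative only.
    """
    if hazard_confidence >= 0.8:          # confident it's real: brake to a stop
        return 0.0
    if hazard_confidence >= 0.3:          # ambiguous: shed speed while classifying
        return current_speed_mps * 0.6
    if hazard_confidence >= 0.1:          # probably nothing, but ease off anyway
        return max(current_speed_mps - 4.5, 0.0)   # roughly a 10 mph reduction
    return current_speed_mps              # treat as clear road

# e.g. at 27 m/s (~60 mph) with a 0.2-confidence blob ahead: slow to ~22.5 m/s
print(target_speed(27.0, 0.2))
```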

  • khain Registered User regular
    edited May 2018
    kime wrote: »
    The first time this came up (when someone hit a white truck, I think?) I said mostly the same thing, but it was pointed out to me that at that time, Autopilot was still actually safer than human drivers, even if it felt like it was irresponsible to keep Autopilot active or with that name (for the reasons you've listed). And honestly I think that's a convincing point, or was at the time.

    Any idea if that is still true? Is the number of miles driven via Autopilot public?

    NHTSA just said that Tesla has been misconstruing the key statistic used to defend Autopilot, that it's 40% safer than a human driver. Not only are they using airbag deployments as a proxy for crashes, but they released an automatic emergency braking feature a few months before Autopilot. An IIHS study found that automatic braking reduces rear-end crashes by 40%. Releasing a key safety feature immediately before Autopilot and then claiming that the entire reduction in collisions is due to Autopilot and not the other feature is pretty sleazy.
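
    (A toy illustration of that confound, with made-up rates: if AEB alone cuts crashes 40% and Autopilot adds nothing, a naive before/after comparison still "shows" a 40% improvement for Autopilot.)

```python
# All rates here are hypothetical, purely to show the attribution problem.
baseline = 10.0                      # crashes per million miles, pre-AEB fleet
with_aeb = baseline * (1 - 0.40)     # AEB alone: 6.0
with_aeb_and_autopilot = with_aeb    # assume Autopilot adds nothing on top

naive = 1 - with_aeb_and_autopilot / baseline          # pre-AEB vs. AEB+Autopilot
isolated = 1 - with_aeb_and_autopilot / with_aeb       # AEB-only vs. AEB+Autopilot

print(f"naive before/after: {naive:.0%} 'due to Autopilot'")   # 40%
print(f"properly isolated:  {isolated:.0%}")                   # 0%
```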

  • hippofant ティンク Registered User regular
  • Lord_Asmodeus goeticSobriquet: Here is your magical cryptic riddle-tumour: I AM A TIME MACHINE Registered User regular
    Mortious wrote: »
    Didn't Uber also have a ridiculously low number of miles per driver intervention compared to other autonomous vehicles?

    From a couple of pages ago: it was once every 13 miles for Uber vs. once every 5,000 miles for other fully autonomous systems.

    Capital is only the fruit of labor, and could never have existed if Labor had not first existed. Labor is superior to capital, and deserves much the higher consideration. - Lincoln
  • AngelHedgie Registered User regular
    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • tbloxham Registered User regular
    edited June 2018
    Edit - Eh, I can't be bothered. The video speaks for itself if you watch it. A clear and fair analysis of the system's capabilities, advantages, and what it still requires from the user.

    "That is cool" - Abraham Lincoln
  • electricitylikesme Registered User regular
    tbloxham wrote: »

    You mean, "British safety firm demonstrates why Autopilot behaves exactly as the user manual describes and is still safer than driving yourself in the situation they demonstrated"

    I can assure you 100% that if there is a vehicle stopped in the road in front of you, and the vehicle ahead of you simply swerves around it without reducing speed or indicating and you haven't been lucky enough to get a good shadow indication of the stopped vehicle then the BEST human driver in the world is crashing into that vehicle, or swerving into traffic in lanes to the side. I would have crashed there. You would have crashed there. EVERYONE would have crashed there. When that driver in front moved aside at high speed at the last minute without deceleration he may as well have got out and shot the following driver in the gut.

    Stopped vehicle (or large thing) in high speed free flowing traffic on a freeway, with a large vehicle in front of you, which hasn't yet transitioned to slow flow upstream of it is literally the most dangerous thing that you can run into on the road. Other than a stopped thing which might start moving perpendicular to you.

    Then on the cornering example, the Tesla is actually the example they give of a vehicle which does well even on tight turns, and it's a BMW which they show crashing into traffic because of poor cornering.

    The video itself is excellent. A clearly detailed example of why Autopilot and its counterparts are good, and safer than not having them, but also of why you absolutely must drive as the autopilot continuously tells you to: with hands on the wheel, monitoring road conditions, just like you would with every other system in the world which is described as an autopilot (planes, trains, etc.).

    The major car accident I was in (a 4-car pile-up) was caused by this exact scenario. I was coming down a large sloping curve with cut-out walls, so I had no clear view of the traffic conditions ahead. Up ahead, traffic was at a total standstill, but other than the one car many car lengths in front of me, the road was clear. The car ahead of me braked aggressively, I braked aggressively, the woman behind me just made it, and the car behind her did not and whacked three of us into the car in front of me.

    That said, it does feel like we should be able to do better here - detecting whether the object in front is actually making progress relative to the road.

  • Options
    tbloxham Registered User regular
    electricitylikesme wrote: »
    tbloxham wrote: »

    You mean, "British safety firm demonstrates why Autopilot behaves exactly as the user manual describes and is still safer than driving yourself in the situation they demonstrated"

    I can assure you 100% that if there is a vehicle stopped in the road in front of you, and the vehicle ahead of you simply swerves around it without reducing speed or indicating and you haven't been lucky enough to get a good shadow indication of the stopped vehicle then the BEST human driver in the world is crashing into that vehicle, or swerving into traffic in lanes to the side. I would have crashed there. You would have crashed there. EVERYONE would have crashed there. When that driver in front moved aside at high speed at the last minute without deceleration he may as well have got out and shot the following driver in the gut.

    Stopped vehicle (or large thing) in high speed free flowing traffic on a freeway, with a large vehicle in front of you, which hasn't yet transitioned to slow flow upstream of it is literally the most dangerous thing that you can run into on the road. Other than a stopped thing which might start moving perpendicular to you.

    Then on the cornering example, the Tesla is actually the example they give of a vehicle which does well even on tight turns, and its a BMW which they show crashing into traffic because of poor cornering.

    The video itself is excellent. A clearly detailed example of why autopilot and its counterparts is good, and safer than not having it but why you absolutely must drive as the autopilot continuously tells you to do. With hands on the wheel, monitoring road conditions. Just like you would with every other system in the world which is described as an autopilot (planes, trains etc)

    The major car accident I was in (a 4-car pile-up) was caused by this exact scenario. I was coming down a large sloping curve with cut-out walls, so I had no clear view of the traffic conditions ahead. Up ahead, traffic was at a total standstill, but other than the one car many car lengths ahead of me, the road was clear. The car ahead of me braked aggressively, I braked aggressively, the woman behind me just made it, and the car behind her did not and whacked three of us into the car in front of me.

    That said, it does feel like we should be able to do better here - detecting whether the object in front is actually making progress relative to the road.

    In the example here, it's even worse than what you described. If the vehicle in front of the Autopilot car had done an emergency stop, then I believe the Autopilot car would also have stopped. Here, the lead vehicle dodged around a stationary vehicle late enough that the Autopilot car was too close to avoid it with an emergency stop. It could ONLY have avoided the accident by... (rough numbers in the sketch after this list)

    1) Seeing the stationary vehicle's shadow -> humans and robots are both awful at this
    2) Swerving - almost always a bad idea. Always brake, never swerve, whether you are a robot or not. Notice they put a car in the lane to the left? That's because they wanted the vehicle to crash if it tried an emergency swerve too.
    3) Following the lead vehicle at a greater distance -> a good idea, but with humans on the road too, if you leave a genuinely safe gap, human drivers will just fill it in, leaving you at risk again.
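    To put rough numbers on why braking alone can't save you once the lead car unmasks the obstacle, here is a back-of-the-envelope sketch; the speeds, reaction times, and decelerations are assumptions, not figures from the video:

```python
# Back-of-the-envelope stopping distances -- all numbers are assumptions,
# not measurements from the test shown in the video.

def stopping_distance(speed_ms, reaction_s, decel_ms2):
    """Distance covered during the reaction time plus braking to a halt."""
    return speed_ms * reaction_s + speed_ms**2 / (2 * decel_ms2)

speed = 31.0                      # ~70 mph in m/s
gap_s = 2.0                       # a "safe" two-second following gap behind the lead car
gap_m = speed * gap_s             # ~62 m of road revealed when the lead car swerves

human = stopping_distance(speed, reaction_s=1.2, decel_ms2=7.5)  # alert human, dry road
robot = stopping_distance(speed, reaction_s=0.3, decel_ms2=8.0)  # generous automated braking

print(f"Gap available when the lead car swerves: {gap_m:.0f} m")
print(f"Human needs ~{human:.0f} m to stop, automated braking needs ~{robot:.0f} m")
# Both exceed the ~60 m gap, so neither avoids the stationary car by braking alone.
```

    Even with generous assumptions for the automated system, the required stopping distance is longer than a two-second gap, which is why option 3 (more distance, lower speed) is the only robust fix.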

    "That is cool" - Abraham Lincoln
  • Options
    Aridhol Daddliest Catch Registered User regular
    Can any of the manufacturers or systems "see" and track multiple vehicles ahead and to the side?
    I.e., why can't the system see that there is a large, stopped, solid object in front of the car driving ahead of it?

  • Options
    Doodmann Registered User regular
    Eventually all the cars should be pinging each other and relaying information, so following cars would know that one had crashed or that there was an object in the road well ahead of time.
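    A minimal sketch of what such a hazard broadcast might carry; the message name and fields here are hypothetical, since real V2X standards (e.g. SAE J2735's Basic Safety Message) define their own formats:

```python
# Hypothetical vehicle-to-vehicle hazard broadcast -- field names are invented for
# illustration; real V2X standards (e.g. SAE J2735) specify their own message sets.
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class HazardAlert:
    sender_id: str      # anonymised vehicle identifier
    lat: float          # hazard position
    lon: float
    lane: int           # lane index, if known
    hazard_type: str    # "stopped_vehicle", "debris", "crash", ...
    speed_ms: float     # hazard's speed (0.0 for a stationary obstacle)
    timestamp: float    # when the hazard was observed

alert = HazardAlert("veh-4821", 52.0406, -0.7594, lane=2,
                    hazard_type="stopped_vehicle", speed_ms=0.0, timestamp=time.time())

payload = json.dumps(asdict(alert))  # what would be broadcast to nearby cars / roadside units
print(payload)
```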

    Whippy wrote: »
    nope nope nope nope abort abort talk about anime
    I like to ART
  • Options
    mRahmani Detroit Registered User regular
    tbloxham wrote: »
    You mean, "British safety firm demonstrates why Autopilot behaves exactly as the user manual describes and is still safer than driving yourself in the situation they demonstrated"

    I can assure you 100% that if there is a vehicle stopped in the road in front of you, and the vehicle ahead of you simply swerves around it without reducing speed or indicating and you haven't been lucky enough to get a good shadow indication of the stopped vehicle then the BEST human driver in the world is crashing into that vehicle, or swerving into traffic in lanes to the side. I would have crashed there. You would have crashed there. EVERYONE would have crashed there. When that driver in front moved aside at high speed at the last minute without deceleration he may as well have got out and shot the following driver in the gut.

    Stopped vehicle (or large thing) in high speed free flowing traffic on a freeway, with a large vehicle in front of you, which hasn't yet transitioned to slow flow upstream of it is literally the most dangerous thing that you can run into on the road. Other than a stopped thing which might start moving perpendicular to you.

    Uh-uh, nope. There is something very crucial you guys are missing when throwing up your hands and saying "a human would have crashed too!"

    Spoiler for large image: [image: Highway%20Braking.jpg - photo of highway traffic from a driver-safety website]

    I borrowed this from a driver safety website, of all places. What do you notice about the green station wagon?

    It has a window you can see through. There's a white vehicle moving with the flow of traffic a few hundred yards ahead of it.

    If your method of driving is to stare at the bumper of the car in front of you and react to whatever it's doing, you are a horrible driver. You know what they pound into you over and over and over again if you go to any sort of performance driver training? Look ahead. No, look ahead. No, further than that. As far as you can possibly see. Your peripheral vision will take care of what's directly in front of the car, you should be looking as far down the road as you can. If you're coming up to a 90 degree turn, your head should be pointed at the side window, not the windshield. Look ahead.

    A competent human driver would not have crashed. They would have noticed the stopped car from a long ways back.

    And if you're behind a van or truck and can't see through it? Then you're following much too closely. A police officer at this accident would write the driver a ticket for failure to maintain a clear stopping distance.

    Quote from the driver ed website I cribbed the photo from:
    Highway traffic can decelerate from 65mph to 15mph in a very short time period. To avoid being caught off-guard by these quick speed changes, teach your teen to look for the brake lights of cars in the distance. By looking far down the highway (up to a quarter mile), they can get a heads up on speed changes.

    As soon as your teen sees brakes lights coming on up ahead, they should immediately cover the brake. This will give them a little extra cushion in case they need to apply brake pressure.
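    For a rough sense of what that advice buys, here is a quick calculation with assumed reaction times (not figures from the driver-ed site):

```python
# Rough numbers behind the "look far ahead and cover the brake" advice -- the reaction
# times are assumptions, not data from the driver-ed website quoted above.

MPH_TO_MS = 0.44704
speed = 65 * MPH_TO_MS                    # ~29 m/s

lookahead_m = 400.0                       # "up to a quarter mile" is roughly 400 m
warning_s = lookahead_m / speed
print(f"Brake lights a quarter mile ahead give ~{warning_s:.0f} s of warning")

# Covering the brake mainly shortens reaction time (the foot is already on the pedal).
normal_reaction, covered_reaction = 1.5, 0.7   # assumed seconds
saved_m = speed * (normal_reaction - covered_reaction)
print(f"Covering the brake saves ~{saved_m:.0f} m of reaction distance at 65 mph")
```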

  • Options
    tbloxham Registered User regular
    Aridhol wrote: »
    Can any of the manufacturers or systems "see" and track multiple vehicles ahead and to the side?
    I.e., why can't the system see that there is a large, stopped, solid object in front of the car driving ahead of it?

    Because the stopped vehicle is completely obscured in angular space by the intervening car; it's literally completely hidden. As a human you might be able to intuit that the shadows ahead of the vehicle in front of you look a bit weird, if you were driving at just the right time and the sun was in the right place, but both you and the autopilot are pretty much reliant on the vehicle ahead of you NOT doing exactly what that vehicle did. You both have to 'outsource' some safety decisions to the driver you are following, unless you want to follow at a full stopping distance: the speed-to-zero braking distance plus your reaction distance.

    Humans are probably a bit better in the case where you are following a small car and there is, say, a massive stalled tractor in the road ahead. In that case you'd see it over the roof and likely begin braking even if the car in front of you didn't.

    Some vehicles can and do track vehicles to the sides, but I believe 'do not swerve out of the lane' is a pretty uniform safety rule, since a near-certain in-lane collision that you can brake hard for in advance is much safer than a 50% chance of missing the obstacle entirely but getting ping-ponged across the road into other vehicles.
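    As a toy illustration of that brake-versus-swerve trade-off (made-up probabilities and a crude kinetic-energy harm proxy; real planners use far richer cost models):

```python
# Toy expected-harm comparison behind the "brake in lane, don't swerve" default.
# All numbers are invented for illustration.

def impact_speed_after_braking(speed_ms, decel_ms2, distance_m):
    """Speed remaining when the obstacle is reached after braking over distance_m."""
    v_squared = speed_ms**2 - 2 * decel_ms2 * distance_m
    return max(v_squared, 0.0) ** 0.5

speed = 31.0                                   # ~70 mph
gap = 50.0                                     # metres of clear road when the obstacle appears
brake_impact = impact_speed_after_braking(speed, 8.0, gap)

harm_brake = 1.0 * brake_impact**2             # certain impact, heavily mitigated by braking
harm_swerve = 0.5 * speed**2                   # 50% chance of an unmitigated multi-car collision

print(f"Brake in lane: hit at ~{brake_impact:.0f} m/s, harm proxy {harm_brake:.0f}")
print(f"Swerve: harm proxy {harm_swerve:.0f}")  # roughly 3x worse under these assumptions
```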

    "That is cool" - Abraham Lincoln
  • Options
    tbloxham Registered User regular
    edited June 2018
    mRahmani wrote: »
    tbloxham wrote: »
    You mean, "British safety firm demonstrates why Autopilot behaves exactly as the user manual describes and is still safer than driving yourself in the situation they demonstrated"

    I can assure you 100% that if there is a vehicle stopped in the road in front of you, and the vehicle ahead of you simply swerves around it without reducing speed or indicating and you haven't been lucky enough to get a good shadow indication of the stopped vehicle then the BEST human driver in the world is crashing into that vehicle, or swerving into traffic in lanes to the side. I would have crashed there. You would have crashed there. EVERYONE would have crashed there. When that driver in front moved aside at high speed at the last minute without deceleration he may as well have got out and shot the following driver in the gut.

    Stopped vehicle (or large thing) in high speed free flowing traffic on a freeway, with a large vehicle in front of you, which hasn't yet transitioned to slow flow upstream of it is literally the most dangerous thing that you can run into on the road. Other than a stopped thing which might start moving perpendicular to you.

    Uh-uh, nope. There is something very crucial you guys are missing when throwing up your hands and saying "a human would have crashed too!"

    Spoiler for large image: [image: Highway%20Braking.jpg - photo of highway traffic from a driver-safety website]

    I borrowed this from a driver safety website, of all places. What do you notice about the green station wagon?

    It has a window you can see through. There's a white vehicle moving with the flow of traffic a few hundred yards ahead of it.

    If your method of driving is to stare at the bumper of the car in front of you and react to whatever it's doing, you are a horrible driver. You know what they pound into you over and over and over again if you go to any sort of performance driver training? Look ahead. No, look ahead. No, further than that. As far as you can possibly see. Your peripheral vision will take care of what's directly in front of the car, you should be looking as far down the road as you can. If you're coming up to a 90 degree turn, your head should be pointed at the side window, not the windshield. Look ahead.

    A competent human driver would not have crashed. They would have noticed the stopped car from a long ways back.

    And if you're behind a van or truck and can't see through it? Then you're following much too closely. A police officer at this accident would write the driver a ticket for failure to maintain a clear stopping distance.

    Quote from the driver ed website I cribbed the photo from:
    Highway traffic can decelerate from 65mph to 15mph in a very short time period. To avoid being caught off-guard by these quick speed changes, teach your teen to look for the brake lights of cars in the distance. By looking far down the highway (up to a quarter mile), they can get a heads up on speed changes.

    As soon as your teen sees brakes lights coming on up ahead, they should immediately cover the brake. This will give them a little extra cushion in case they need to apply brake pressure.

    In your image, look at the vehicle to the left of your example, and the vehicle to the right. We have a truck with a high, tinted rear window, and an SUV with a rear-mounted spare tire. Both have high ride heights and present a completely obscured view to any sedan behind them. If you are following those vehicles, your ONLY hope is (as you mention) the brake lights (of which we have none, because we just came upon a stationary obstacle at high speed and the preceding vehicle did no braking) and the shadows (unreliable at BEST). The image you have there is also hopelessly optimistic. You are viewing a white or silver vehicle ahead with perfect contrast against a nice dark Subaru with a HUGE rear window that is flat to you to eliminate glare. The sun is nice and high, but there is a good level of cloud cover (note the shadows being short but diffuse), giving you almost no glare anywhere yet still a very high level of ambient light. You are also going downhill, so the white vehicle is highlighted AGAIN against the ideally colored freeway. Note how up ahead there is a crest of a hill? And a bridge? Both of those will transiently remove any hope you have of seeing things using the techniques you describe.

    While there are techniques you as a human can use to reduce your dependence on the driver in front of you not swerving to dodge a stationary object, or on quick reflexes and good brakes, they are ineffective at best, which is why 'high-speed incoming vehicles + stationary object' is incredibly dangerous. The right advice to minimize this risk is to increase your following distance and drive slower. Everything else is just the illusion of safety built on unreliable information sources. All the clever techniques they teach in driving safety schools are happy little fluff to pad out the course long enough for you to remember "DRIVE SLOWER AND KEEP A LONGER DISTANCE TO THE CAR IN FRONT".

    And none of these would have been any use in the example shown, other than the possibility of seeing through the window, which relies on a very specific relationship between the car you are in, the thing that is stopped (what if it's not a car, but a massive truck tire lying on its side?), and the light conditions.

    edit - Note that this is NOT a 'vehicle ahead emergency stops' situation. This is about a million times as severe as that. This is 'vehicle ahead swerves without warning to avoid a heavy stationary obstacle that you didn't see in advance because it was obscured'. A human driver would not have seen the car from ages back. A human driver would (as they do all the time; I literally drove past someone who had done it this morning, and another three days ago) crash straight into the stationary vehicle.

    tbloxham on
    "That is cool" - Abraham Lincoln
  • Options
    SiliconStew Registered User regular
    mRahmani wrote: »
    tbloxham wrote: »
    You mean, "British safety firm demonstrates why Autopilot behaves exactly as the user manual describes and is still safer than driving yourself in the situation they demonstrated"

    I can assure you 100% that if there is a vehicle stopped in the road in front of you, and the vehicle ahead of you simply swerves around it without reducing speed or indicating and you haven't been lucky enough to get a good shadow indication of the stopped vehicle then the BEST human driver in the world is crashing into that vehicle, or swerving into traffic in lanes to the side. I would have crashed there. You would have crashed there. EVERYONE would have crashed there. When that driver in front moved aside at high speed at the last minute without deceleration he may as well have got out and shot the following driver in the gut.

    Stopped vehicle (or large thing) in high speed free flowing traffic on a freeway, with a large vehicle in front of you, which hasn't yet transitioned to slow flow upstream of it is literally the most dangerous thing that you can run into on the road. Other than a stopped thing which might start moving perpendicular to you.

    Uh-uh, nope. There is something very crucial you guys are missing when throwing up your hands and saying "a human would have crashed too!"

    Spoiler for large image: [image: Highway%20Braking.jpg - photo of highway traffic from a driver-safety website]

    I borrowed this from a driver safety website, of all places. What do you notice about the green station wagon?

    It has a window you can see through. There's a white vehicle moving with the flow of traffic a few hundred yards ahead of it.

    If your method of driving is to stare at the bumper of the car in front of you and react to whatever it's doing, you are a horrible driver. You know what they pound into you over and over and over again if you go to any sort of performance driver training? Look ahead. No, look ahead. No, further than that. As far as you can possibly see. Your peripheral vision will take care of what's directly in front of the car, you should be looking as far down the road as you can. If you're coming up to a 90 degree turn, your head should be pointed at the side window, not the windshield. Look ahead.

    A competent human driver would not have crashed. They would have noticed the stopped car from a long ways back.

    And if you're behind a van or truck and can't see through it? Then you're following much too closely. A police officer at this accident would write the driver a ticket for failure to maintain a clear stopping distance.

    Quote from the driver ed website I cribbed the photo from:
    Highway traffic can decelerate from 65mph to 15mph in a very short time period. To avoid being caught off-guard by these quick speed changes, teach your teen to look for the brake lights of cars in the distance. By looking far down the highway (up to a quarter mile), they can get a heads up on speed changes.

    As soon as your teen sees brakes lights coming on up ahead, they should immediately cover the brake. This will give them a little extra cushion in case they need to apply brake pressure.

    Most drivers are horrible. It's quite a leap to assume that more than a vanishingly small percentage of the population has had any driving instruction beyond driving around the block and, if they're lucky, parallel parking.

    "Competent" drivers wouldn't have swerved out of the way at the last second.

    "If you can't see through a vehicle, you're following too closely" is a logical impossibility; you need to rephrase that.

    There were no brake lights to look at as an early warning because the cars in front didn't brake.

    Just remember that half the people you meet are below average intelligence.
  • Options
    mRahmani Detroit Registered User regular
    The image is pretty typical, not "hopelessly optimistic." Roads are not infinitely flat with no side vision and populated only by semi trucks and SUVs.

    Even behind a truck, curves will allow you to see around them. Hills and bridges will let you see above them. Polarized sunglasses let you cut through glare. And if you're beset on all sides by vehicles that you can't see around, then you need to back the fuck off.

    Here's a screencap from the BBC video. They don't give you much of a lead up to the collision, but even with the tiny snippet they use in the video, you can spot the stopped vehicle before the Tesla hits it.

    [image: aihFAqN.png - screencap from the BBC video, showing the stopped vehicle visible ahead of the Tesla]

    A competent human driver would not have hit that.

  • Options
    Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    Random tangent: most of my driving mistakes are caused by me looking too far ahead.

    Sometimes I register a green stoplight that's three blocks ahead while missing the red stoplight one block ahead.

    Or I'll be watching traffic 300 yards ahead and not notice that the guy in front of me is slowing down.

    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • Options
    tbloxham Registered User regular
    Feral wrote: »
    Random tangent: most of my driving mistakes are caused by me looking too far ahead.

    Sometimes I register a green stoplight that's three blocks ahead while missing the red stoplight one block ahead.

    Or I'll be watching traffic 300 yards ahead and not notice that the guy in front of me is slowing down.

    Yup, humans are awful at driving, and most of the 'clever techniques' mRahmani talks about actually make you worse at driving by taking your attention off the most dangerous things on the road: the vehicle right in front of you and the vehicles that might be about to merge in front of you and cut you off. The way to drive more safely as a human is to slow down and back off. Everything else is so unreliable as to be worthless. Humans aren't good at multitasking or tracking multiple objects; we just believe we are. You don't even really HAVE peripheral vision. You have peripheral MEMORY and peripheral guessing. Your brain just imagines what might be there and checks back in once in a while.

    If you are sitting in a stopped vehicle on a freeway and a vehicle approaching you at high speed swerves out of the way without decelerating, I would bet $100 that the immediately following driver is more likely to be drunk, asleep, texting, or just literally not looking at the road than to be observing some kind of detailed and effective defensive driving procedure. The only way to be a 'safe' driver is to drive slower with a larger following distance.

    Hell, the crash scenario only exists because the first driver is a bad driver! The approaching driver should have slowed down sharply and carefully moved to the next lane when it was safe. So the whole argument about human drivers being so awesome and safe only exists in relation to an example of human drivers driving awfully.

    "That is cool" - Abraham Lincoln
  • Options
    tinwhiskers Registered User regular
    Feral wrote: »
    Random tangent: most of my driving mistakes are caused by me looking too far ahead.

    Sometimes I register a green stoplight that's three blocks ahead while missing the red stoplight one block ahead.

    Or I'll be watching traffic 300 yards ahead and not notice that the guy in front of me is slowing down.

    You are consuming a diet too high in spice.
