
Autonomous Vehicles: The Robot Apocalypse and You

ElJeffe Moderator, ClubPA mod
Hey, autonomous vehicles! How about that!

Me, I think they're pretty cool. Data from a bunch of sources generally shows them to be significantly safer than human-driven cars in terms of accidents per mile driven, and they only stand to improve in the future.
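To make "safer per miles driven" concrete, here's a toy sketch of how such comparisons are normalized by exposure. All the numbers below are invented placeholders, not sourced statistics, and the function name is mine:

```python
def crash_rate_per_million_miles(crashes: int, miles: float) -> float:
    """Normalize crash counts by miles driven so fleets of very different sizes compare fairly."""
    return crashes / (miles / 1_000_000)

# Placeholder figures for illustration only.
human_rate = crash_rate_per_million_miles(crashes=4_800, miles=1_000_000_000)
av_rate = crash_rate_per_million_miles(crashes=120, miles=50_000_000)

print(f"human: {human_rate:.1f} crashes per million miles")  # 4.8
print(f"AV:    {av_rate:.1f} crashes per million miles")     # 2.4
```

The point of the normalization is that raw crash counts mean nothing on their own: a fleet with one-twentieth the crashes but one-twentieth the miles is no safer at all.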

I feel that there are a couple of aspects of autonomous vehicles that don't really help them in the public eye. First, there is an inherent bias against letting technology have the reins: 1,000 people getting mowed down by careless drivers is background noise, while 1 person getting hit by an autonomous vehicle is big news. Second, we've found that the class of accidents involving AVs is just different from conventional accidents. An AV will not have issues with the driver being drunk, or falling asleep, or being inattentive. An AV is constantly looking in all directions at all times. An AV has nigh-instantaneous reaction times. But a human driver isn't usually going to mistake a trailer truck for a piece of fence. And the former failures don't draw attention, while the latter is pretty conspicuous.

So anyway, talk about our wheeled robot overlords! The technology, the real world results, the ethics, whatever.

I submitted an entry to Lego Ideas, and if 10,000 people support me, it'll be turned into an actual Lego set! If you'd like to see and support my submission, follow this link.

Posts

  • Options
zepherin Russian warship, go fuck yourself Registered User regular
I welcome our autonomous overlords. Also, I would like to at least be able to do highway autopilot, where I tell my car to drive down 95 to Georgia and alert me if it's below 1/3 of a tank of gas. I'll do the first 45 minutes and the last 30 minutes.

  • Options
kaid Registered User regular
I think once the tech gets even mostly there, insurance companies are going to push for it hard. So many of the day-to-day inattention accidents basically get eliminated with this. It won't completely remove accidents, but it should lower them very significantly. I think the initial transition, where you have mostly human drivers with a few autonomous ones, is probably going to be the most dangerous time for them. The biggest issue is that the mistakes autonomous cars make look really stupid to humans, which could cause issues by catching people off guard and making them over- or under-react.

  • Options
zepherin Russian warship, go fuck yourself Registered User regular
    kaid wrote: »
I think once the tech gets even mostly there, insurance companies are going to push for it hard. So many of the day-to-day inattention accidents basically get eliminated with this. It won't completely remove accidents, but it should lower them very significantly. I think the initial transition, where you have mostly human drivers with a few autonomous ones, is probably going to be the most dangerous time for them. The biggest issue is that the mistakes autonomous cars make look really stupid to humans, which could cause issues by catching people off guard and making them over- or under-react.
Absolutely. And there is likely going to be one horrific accident, like a semi plowing into a bus stop during a snowstorm. But every DUI accident could be prevented by autonomous driving, and that is still a considerable number.

Whose insurance is going to pay is also going to be an interesting issue. Are we going to just naturally move to a no-fault system, or assign liability to non-AI drivers? At some point the manufacturers are going to get sued, and their liability insurers will fight to try to keep them from being responsible. Honestly, Congress should create some laws, but it's likely to be a patchwork of state laws.

  • Options
TryCatcher Registered User regular
Dunno. I think that the idea is amazing, but the rubber hits the road on implementation.

Let's get things out of the way: autonomous vehicles do not work outside controlled environments adapted to them, like the Olympic village in Tokyo. So the implementation is going to go slowly, using them on public transport in order to adapt cities to them piece by piece. Once enough momentum is gathered, we can talk about private use.

Of course, that actually depends on investment in public transport, so good luck with that.

  • Options
shryke Member of the Beast Registered User regular
    The looming issue of liability for autonomous vehicles is a thing too imo.

  • Options
discrider Registered User regular
I feel like the required beaconing of pedestrians and cyclists to assist autonomous vehicles that was mentioned in the congress thread is a realisation of this webcomic: http://smbc-comics.com/comic/autonomous

  • Options
zepherin Russian warship, go fuck yourself Registered User regular
    edited November 2021
    shryke wrote: »
    The looming issue of liability for autonomous vehicles is a thing too imo.
This, while concerning, is just going to come down to a bunch of insurance companies fighting each other. It'll be expensive and scary, but ultimately which corporation gets the bill is only going to affect prices a little, because it'll be spread across all of their customers.

Actually, now that I think about it, it's going to be a low-key battle where AI programming firms and manufacturers try to pin liability on the drivers, and insurance companies try to shift liability to the manufacturers. They'll tap their lobbying firms and campaign donations to get laws drafted friendly to them. It's going to be interesting from a dystopian cyberpunk perspective.

  • Options
Man in the Mists Registered User regular
    As someone who never got his driver's license and mainly gets around by bus, this is something I want to see become a thing.

  • Options
HamHamJ Registered User regular
The idea of beacons on pedestrians and bicycles seems just obviously impractical. Maybe for some specific high-risk situations with maximum impact, like commercial bikes doing deliveries or bike rentals? But every random person on the street is never going to have some radio beacon.

    I think we will see increased automation to support the driver in personal vehicles, and autonomous vehicles only for some specific commercial applications, until those two things meet in the middle. I think making a car that can operate in 90% of conditions and is statistically safer than a human driver on average is achievable. Mainly because humans are actually mostly really bad at driving.

    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Options
RT800 Registered User regular
    I don't know what a beacon looks like, but if you could put it in a smartphone then that's like 80% of pedestrians right there.

  • Options
Phyphor Building Planet Busters Tasting Fruit Registered User regular
    TryCatcher wrote: »
Dunno. I think that the idea is amazing, but the rubber hits the road on implementation.

Let's get things out of the way: autonomous vehicles do not work outside controlled environments adapted to them, like the Olympic village in Tokyo. So the implementation is going to go slowly, using them on public transport in order to adapt cities to them piece by piece. Once enough momentum is gathered, we can talk about private use.

Of course, that actually depends on investment in public transport, so good luck with that.

    [citation needed] because they are currently working on city streets

  • Options
ElJeffe Moderator, ClubPA mod
    Yeah, they've been on the road for years now and have logged tens of millions of driver hours. I've watched a bunch of videos on how they respond to various conditions and emergency situations and it's pretty rad.

  • Options
electricitylikesme Registered User regular
    Phyphor wrote: »
    TryCatcher wrote: »
Dunno. I think that the idea is amazing, but the rubber hits the road on implementation.

Let's get things out of the way: autonomous vehicles do not work outside controlled environments adapted to them, like the Olympic village in Tokyo. So the implementation is going to go slowly, using them on public transport in order to adapt cities to them piece by piece. Once enough momentum is gathered, we can talk about private use.

Of course, that actually depends on investment in public transport, so good luck with that.

    [citation needed] because they are currently working on city streets

    There's a test drive out there of a Tesla with the new Autopilot that almost turns into a crowd of pedestrians. I think the systems are very accomplished, but the problem is the failure modes are not gradual: the distance between "I know what I'm doing" and "total catastrophe" is still too small.

  • Options
Phyphor Building Planet Busters Tasting Fruit Registered User regular
    Phyphor wrote: »
    TryCatcher wrote: »
Dunno. I think that the idea is amazing, but the rubber hits the road on implementation.

Let's get things out of the way: autonomous vehicles do not work outside controlled environments adapted to them, like the Olympic village in Tokyo. So the implementation is going to go slowly, using them on public transport in order to adapt cities to them piece by piece. Once enough momentum is gathered, we can talk about private use.

Of course, that actually depends on investment in public transport, so good luck with that.

    [citation needed] because they are currently working on city streets

    There's a test drive out there of a Tesla with the new Autopilot that almost turns into a crowd of pedestrians. I think the systems are very accomplished, but the problem is the failure modes are not gradual: the distance between "I know what I'm doing" and "total catastrophe" is still too small.

I'm not talking about Tesla; they are probably the least competent out of all of them.

  • Options
Label Registered User regular
    I oppose mandatory AI driving on a couple of grounds.

    First is Civil Liberties. I do NOT want my car's position to be automatically recorded. Especially after the last federal administration. Voting rights advocates already have enough problems.

Second is Public Emergencies. AI driving relies on more background infrastructure to be functional: whatever development company is responsible staying on point, the update process being intact, etc.

Thirdly, economic disadvantage. I live in a rural area. Any AI driving developed for my area is likely to be hamstrung by the low monetary value available here. There are plenty of places like this in America, from areas even more rural than this to impoverished neighborhoods in large cities. AI driving done poorly may not be the potentially lifesaving tool we think of it as.



    Note, at the beginning I said mandatory AI driving. I think if left unchecked and unregulated, companies will push to make human driving either too difficult to pursue, or outright illegal. They will be pursuing their profits in the name of "public safety." It could be AI car manufacturing companies or insurance companies, it doesn't matter. Both will have financial incentives to get human drivers off the road, and lobby for legislation mandating that.

    Meaning if/when AI driving becomes functional enough to operate, without significant pushback it will become mandatory.


    P.S. The onset of AI driving provides a great opportunity to increase the driving license safety training and standards, and this should be pursued as well.

  • Options
electricitylikesme Registered User regular
    Phyphor wrote: »
    Phyphor wrote: »
    TryCatcher wrote: »
Dunno. I think that the idea is amazing, but the rubber hits the road on implementation.

Let's get things out of the way: autonomous vehicles do not work outside controlled environments adapted to them, like the Olympic village in Tokyo. So the implementation is going to go slowly, using them on public transport in order to adapt cities to them piece by piece. Once enough momentum is gathered, we can talk about private use.

Of course, that actually depends on investment in public transport, so good luck with that.

    [citation needed] because they are currently working on city streets

    There's a test drive out there of a Tesla with the new Autopilot that almost turns into a crowd of pedestrians. I think the systems are very accomplished, but the problem is the failure modes are not gradual: the distance between "I know what I'm doing" and "total catastrophe" is still too small.

I'm not talking about Tesla; they are probably the least competent out of all of them.

    Tesla are probably running a system in the widest possible set of real world circumstances. The others aren't because they know it'll work out basically the same as Tesla, but I've seen nothing to particularly convince me the underlying tech is different - everyone is cribbing off the same basic set of papers.

  • Options
shryke Member of the Beast Registered User regular
    RT800 wrote: »
    I don't know what a beacon looks like, but if you could put it in a smartphone then that's like 80% of pedestrians right there.

    "It will only mow down 20% of pedestrians" is not a big selling point.

  • Options
Phyphor Building Planet Busters Tasting Fruit Registered User regular
    Phyphor wrote: »
    Phyphor wrote: »
    TryCatcher wrote: »
Dunno. I think that the idea is amazing, but the rubber hits the road on implementation.

Let's get things out of the way: autonomous vehicles do not work outside controlled environments adapted to them, like the Olympic village in Tokyo. So the implementation is going to go slowly, using them on public transport in order to adapt cities to them piece by piece. Once enough momentum is gathered, we can talk about private use.

Of course, that actually depends on investment in public transport, so good luck with that.

    [citation needed] because they are currently working on city streets

    There's a test drive out there of a Tesla with the new Autopilot that almost turns into a crowd of pedestrians. I think the systems are very accomplished, but the problem is the failure modes are not gradual: the distance between "I know what I'm doing" and "total catastrophe" is still too small.

I'm not talking about Tesla; they are probably the least competent out of all of them.

    Tesla are probably running a system in the widest possible set of real world circumstances. The others aren't because they know it'll work out basically the same as Tesla, but I've seen nothing to particularly convince me the underlying tech is different - everyone is cribbing off the same basic set of papers.

Sure it's different. For one, Tesla is doing it with the smallest sensor package; their results are basically always going to be inferior because they don't have lidar, etc. Second, companies like Waymo likely have the people who wrote the papers and almost certainly have more interesting stuff in their implementations.

They've been running a no-safety-driver ride service in Phoenix for a while now, and while that is pretty much an optimal scenario, with little weather and wide roads, it is a functional driverless car, and they've recently started up in SF proper.

  • Options
DarkPrimus Registered User regular
    edited November 2021
    ElJeffe wrote: »
    Yeah, they've been on the road for years now and have logged tens of millions of driver hours. I've watched a bunch of videos on how they respond to various conditions and emergency situations and it's pretty rad.

    Oh, you mean like how Tesla owners can reset their safety scores in order to get free access to the autonomous driving beta-test, or how Tesla's own safety guidelines instruct drivers to not take their hands off the wheel for more than a minute?
Even taking Tesla's policies at relatively face value—and not including the highly publicized ways Teslas have been easily tricked for years into driving on their own for extended periods, bugs which Tesla could fix with a software update—Tesla has always tried to have it both ways. It promotes these driver assist features as if they basically drive the car themselves—the names are "Autopilot" and "Full Self-Driving," after all—and you can pay $10,000 for the privilege of using them, a premium price for what's being sold as a premium experience. But in the fine legal print, the company says these features are no more reliable than any other Level 2 driver assist system available from virtually every other manufacturer, and the driver must still pay close attention at all times.

While the Biden administration is now requiring companies to report accidents that occur while "autopilots" are active, the penalty for not doing so is just a fine, and as we all know, a fine is just a price that companies will factor into their budgets if they feel that eating the fine is more profitable than obeying the law.

    We cannot rely on the industry to self-report how safe these things are.

  • Options
zepherin Russian warship, go fuck yourself Registered User regular
    shryke wrote: »
    RT800 wrote: »
    I don't know what a beacon looks like, but if you could put it in a smartphone then that's like 80% of pedestrians right there.

    "It will only mow down 20% of pedestrians" is not a big selling point.

They do stop for obstructions... usually. Because you don't want your car to hit a dog, a deer, a recap; hell, I go around empty boxes. And AI tries not to hit those things either.

  • Options
The Wolfman Registered User regular
    Don't these things still completely fall to pieces in any mild weather like rain or snow?

    "The sausage of Green Earth explodes with flavor like the cannon of culinary delight."
  • Options
Reynolds Gone Fishin' Registered User regular
    Don't these things still completely fall to pieces in any mild weather like rain or snow?

    So do most drivers.

  • Options
tinwhiskers Registered User regular
    edited November 2021
    Don't these things still completely fall to pieces in any mild weather like rain or snow?

My Model 3 is fine in both of those, until the point that the snow covers the lines on the road. Which... yeah, short of some crazy beacon system built into the roadway, it's never going to be able to guess at where the lanes were.

e: and likewise rain that the wipers can't keep up with, which again.

  • Options
    HamHamJHamHamJ Registered User regular
One thing is that the weaknesses of purely camera-based systems can be overcome with other sensors. You can have lidar or whatever to detect that there is a solid object in front of you and override to hit the brakes, without needing to rely on an image recognition algorithm to recognize what it is.

  • Options
ElJeffe Moderator, ClubPA mod
    DarkPrimus wrote: »
    ElJeffe wrote: »
    Yeah, they've been on the road for years now and have logged tens of millions of driver hours. I've watched a bunch of videos on how they respond to various conditions and emergency situations and it's pretty rad.

    Oh, you mean like how Tesla owners can reset their safety scores in order to get free access to the autonomous driving beta-test, or how Tesla's own safety guidelines instruct drivers to not take their hands off the wheel for more than a minute?

    Well no, that would be a rather silly thing to find rad.

    I was more commenting on the impressive technical performance of AI drivers versus human drivers, and how much better it is than the median driver.

    It's admittedly an uphill climb convincing people that driverless cars can be not just as good as, but significantly better than human drivers. Partly because everyone thinks they're an above average driver, so they tend to go "Well, that car just swerved into a pedestrian, obviously I would never swerve into a pedestrian," ignoring the thousands of drivers per year who swerve into pedestrians.

    Tesla isn't the entirety of AVs.

  • Options
electricitylikesme Registered User regular
    HamHamJ wrote: »
One thing is that the weaknesses of purely camera-based systems can be overcome with other sensors. You can have lidar or whatever to detect that there is a solid object in front of you and override to hit the brakes, without needing to rely on an image recognition algorithm to recognize what it is.

    The issue is that while "just don't hit anything" sounds good in theory, in practice if your self-driving car is liable to slam on the brakes when non-obstructions are in front of it (i.e. returns that are too small to reasonably be obstructions, or transients) then you've probably created a huge hazard anyway, or alternatively a car which just can't bring itself to drive forwards.

    The problem is that LIDAR doesn't give a clean "obstruction yes/no" output - neither does RADAR, and having both technically worsens it. LIDAR in rain for example will produce noisy returns, and also treat any random object as "solid" - so you wind up having to train to reject LIDAR returns. RADAR is worse - non-metallic objects don't return at all, and RADAR returns can get all sorts of weird.

The case of the Uber vehicle which killed a pedestrian was almost certainly the result of something like this: the frame of the bike being pushed produced a scatter of tiny LIDAR returns which were rejected, which in turn caused any object in that volume to be lowered in priority for expectation. You can blame this on bad training or programming, but it definitely speaks to the problems of "sensor fusion" - once a sensor is into an "unreliable" range, how much has your model suffered in overall accuracy?
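The sensor-fusion trap described above can be sketched in a few lines. This is a toy model, not anyone's actual AV stack: each sensor reports a confidence that an obstacle occupies a volume plus a reliability weight, and the weight is slashed when the sensor is in a regime it handles badly (LIDAR in rain, RADAR on non-metallic objects). All numbers and names are invented for illustration:

```python
def fused_confidence(readings):
    """Weighted average of per-sensor obstacle confidences.
    readings: list of (confidence, reliability_weight) pairs."""
    total_weight = sum(w for _, w in readings)
    if total_weight == 0:
        return 0.0
    return sum(c * w for c, w in readings) / total_weight

BRAKE_THRESHOLD = 0.5

# (confidence, weight) for lidar / radar / camera on the same real obstacle.
clear = [(0.9, 1.0), (0.2, 0.6), (0.6, 0.8)]
# In rain the LIDAR return is flagged unreliable, so its weight drops from
# 1.0 to 0.1 even though it is the sensor that actually sees the obstacle.
rain = [(0.9, 0.1), (0.2, 0.6), (0.6, 0.8)]

print(fused_confidence(clear) >= BRAKE_THRESHOLD)  # True: brake
print(fused_confidence(rain) >= BRAKE_THRESHOLD)   # False: obstacle missed
```

The same object, genuinely present, slips below the braking threshold purely because the fusion stage stopped trusting the one sensor that detected it: that is the "how much has your model suffered" question in miniature.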

  • Options
DarkPrimus Registered User regular
    ElJeffe wrote: »
    DarkPrimus wrote: »
    ElJeffe wrote: »
    Yeah, they've been on the road for years now and have logged tens of millions of driver hours. I've watched a bunch of videos on how they respond to various conditions and emergency situations and it's pretty rad.

    Oh, you mean like how Tesla owners can reset their safety scores in order to get free access to the autonomous driving beta-test, or how Tesla's own safety guidelines instruct drivers to not take their hands off the wheel for more than a minute?

    Well no, that would be a rather silly thing to find rad.

    I was more commenting on the impressive technical performance of AI drivers versus human drivers, and how much better it is than the median driver.

    It's admittedly an uphill climb convincing people that driverless cars can be not just as good as, but significantly better than human drivers. Partly because everyone thinks they're an above average driver, so they tend to go "Well, that car just swerved into a pedestrian, obviously I would never swerve into a pedestrian," ignoring the thousands of drivers per year who swerve into pedestrians.

    Tesla isn't the entirety of AVs.

I doubt that is a metric that's actually been met, and even were it so, AI drivers should not merely be better than the average human driver. The unpredictable nature of road conditions is such that we should be holding these companies accountable for every accident, the same as we would hold any human driver accountable.

  • Options
tinwhiskers Registered User regular
    DarkPrimus wrote: »
    ElJeffe wrote: »
    DarkPrimus wrote: »
    ElJeffe wrote: »
    Yeah, they've been on the road for years now and have logged tens of millions of driver hours. I've watched a bunch of videos on how they respond to various conditions and emergency situations and it's pretty rad.

    Oh, you mean like how Tesla owners can reset their safety scores in order to get free access to the autonomous driving beta-test, or how Tesla's own safety guidelines instruct drivers to not take their hands off the wheel for more than a minute?

    Well no, that would be a rather silly thing to find rad.

    I was more commenting on the impressive technical performance of AI drivers versus human drivers, and how much better it is than the median driver.

    It's admittedly an uphill climb convincing people that driverless cars can be not just as good as, but significantly better than human drivers. Partly because everyone thinks they're an above average driver, so they tend to go "Well, that car just swerved into a pedestrian, obviously I would never swerve into a pedestrian," ignoring the thousands of drivers per year who swerve into pedestrians.

    Tesla isn't the entirety of AVs.

I doubt that is a metric that's actually been met, and even were it so, AI drivers should not merely be better than the average human driver. The unpredictable nature of road conditions is such that we should be holding these companies accountable for every accident, the same as we would hold any human driver accountable.

Well yeah, if you reject all the data and pick nonsense arbitrary criteria, nothing will ever be good enough...

  • Options
BSoB Registered User regular
If you get an AV that merely drives as well as the average person who isn't tired, drunk, texting, angry, or otherwise distracted, that saves millions of lives and dollars and hours of pain.
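A rough back-of-envelope for that claim: if AVs did nothing more than eliminate impairment-related crashes, how many deaths would that avoid per year? The figures below are illustrative placeholders I chose, not sourced data:

```python
# Order-of-magnitude placeholders, not sourced statistics.
annual_us_road_deaths = 40_000   # roughly the right scale for US road fatalities
impairment_share = 0.40          # assumed fraction from drunk/drowsy/distracted driving

deaths_avoided = annual_us_road_deaths * impairment_share
print(f"~{deaths_avoided:,.0f} deaths avoided per year under these assumptions")
```

Even with conservative inputs, "as good as a sober, alert average driver" compounds into a very large number over a decade, which is the core of the argument.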

  • Options
Quid Definitely not a banana Registered User regular
    DarkPrimus wrote: »
    ElJeffe wrote: »
    DarkPrimus wrote: »
    ElJeffe wrote: »
    Yeah, they've been on the road for years now and have logged tens of millions of driver hours. I've watched a bunch of videos on how they respond to various conditions and emergency situations and it's pretty rad.

    Oh, you mean like how Tesla owners can reset their safety scores in order to get free access to the autonomous driving beta-test, or how Tesla's own safety guidelines instruct drivers to not take their hands off the wheel for more than a minute?

    Well no, that would be a rather silly thing to find rad.

    I was more commenting on the impressive technical performance of AI drivers versus human drivers, and how much better it is than the median driver.

    It's admittedly an uphill climb convincing people that driverless cars can be not just as good as, but significantly better than human drivers. Partly because everyone thinks they're an above average driver, so they tend to go "Well, that car just swerved into a pedestrian, obviously I would never swerve into a pedestrian," ignoring the thousands of drivers per year who swerve into pedestrians.

    Tesla isn't the entirety of AVs.

I doubt that is a metric that's actually been met, and even were it so, AI drivers should not merely be better than the average human driver. The unpredictable nature of road conditions is such that we should be holding these companies accountable for every accident, the same as we would hold any human driver accountable.

    If and when AI driven cars are proven to be "merely" better drivers than humans I see no issue with letting them replace people. I would even contend we have a moral responsibility to do so where feasible.

    By all means hold companies liable for accidents caused by their programming. I don't think anyone here disagrees with doing so.

  • Options
DarkPrimus Registered User regular
    edited November 2021
    DarkPrimus wrote: »
    ElJeffe wrote: »
    DarkPrimus wrote: »
    ElJeffe wrote: »
    Yeah, they've been on the road for years now and have logged tens of millions of driver hours. I've watched a bunch of videos on how they respond to various conditions and emergency situations and it's pretty rad.

    Oh, you mean like how Tesla owners can reset their safety scores in order to get free access to the autonomous driving beta-test, or how Tesla's own safety guidelines instruct drivers to not take their hands off the wheel for more than a minute?

    Well no, that would be a rather silly thing to find rad.

    I was more commenting on the impressive technical performance of AI drivers versus human drivers, and how much better it is than the median driver.

    It's admittedly an uphill climb convincing people that driverless cars can be not just as good as, but significantly better than human drivers. Partly because everyone thinks they're an above average driver, so they tend to go "Well, that car just swerved into a pedestrian, obviously I would never swerve into a pedestrian," ignoring the thousands of drivers per year who swerve into pedestrians.

    Tesla isn't the entirety of AVs.

I doubt that is a metric that's actually been met, and even were it so, AI drivers should not merely be better than the average human driver. The unpredictable nature of road conditions is such that we should be holding these companies accountable for every accident, the same as we would hold any human driver accountable.

Well yeah, if you reject all the data and pick nonsense arbitrary criteria, nothing will ever be good enough...

    If we put "self-driving" cars on the road that are merely as good as the average driver, who is at fault when one of those cars hits someone?

    If it is the person behind the wheel, then the developers of this "self-driving" technology have no motivation at all to make their systems reliable or safe, because they aren't accountable for accidents that occur.

    FFS this whole GDST is spun out of legislation being passed that is placing the onus on pedestrians to have to wear "safety beacons" to alert "self-driving" vehicles to their presence, because heaven forbid those "self-driving" cars be the ones responsible for avoiding hitting a pedestrian.

  • Options
ElJeffe Moderator, ClubPA mod
    edited November 2021
    DarkPrimus wrote: »
    ElJeffe wrote: »
    DarkPrimus wrote: »
    ElJeffe wrote: »
    Yeah, they've been on the road for years now and have logged tens of millions of driver hours. I've watched a bunch of videos on how they respond to various conditions and emergency situations and it's pretty rad.

    Oh, you mean like how Tesla owners can reset their safety scores in order to get free access to the autonomous driving beta-test, or how Tesla's own safety guidelines instruct drivers to not take their hands off the wheel for more than a minute?

    Well no, that would be a rather silly thing to find rad.

    I was more commenting on the impressive technical performance of AI drivers versus human drivers, and how much better it is than the median driver.

    It's admittedly an uphill climb convincing people that driverless cars can be not just as good as, but significantly better than human drivers. Partly because everyone thinks they're an above average driver, so they tend to go "Well, that car just swerved into a pedestrian, obviously I would never swerve into a pedestrian," ignoring the thousands of drivers per year who swerve into pedestrians.

    Tesla isn't the entirety of AVs.

    I doubt that is a metric that's actually been met, and even were it so, AI drivers should not merely be better than the average human driver. The unpredictable nature of road conditions is such that we should hold these companies accountable for every accident, the same as we would hold any human driver accountable.

    I think this raises some interesting cost-benefit questions, and some interesting ethical concerns. How much better than the average driver does an AI have to be in order to be worth implementing? Like BSoB says, if the AI is as good as the average sober, alert driver, that's already a huge step up when implemented on a wide scale.
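    For a sense of scale, here's a back-of-the-envelope sketch of why even a modest per-mile improvement matters fleet-wide. Every number below is an invented placeholder for illustration, not a real statistic:

    ```python
    # All rates and mileage figures are made up, purely to show
    # how a small per-mile safety edge compounds at fleet scale.
    human_crash_rate = 4.5e-6   # assumed crashes per vehicle-mile
    av_crash_rate = 3.0e-6      # an AV merely ~33% safer per mile (assumed)
    fleet_miles = 3.0e12        # assumed annual vehicle-miles, fleet-wide

    crashes_avoided = (human_crash_rate - av_crash_rate) * fleet_miles
    print(f"{crashes_avoided:,.0f} fewer crashes per year")  # → 4,500,000
    ```

    The point isn't the specific figures; it's that a marginal improvement multiplied across trillions of miles is anything but marginal.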

    Liability is also an interesting question. If a Tesla-navigated car gets into an accident because the AI fucks up, I'd put that on par with a human driver making a bad (but not negligent) decision that causes an accident. Looking in his mirror at the wrong time, slow reflexes, that kind of thing. So that sort of accident would be handled by whatever our insurance regime is, just as human accidents are now.

    And if it turns out the error was something known and concealed, that could be treated as a human driver being negligent - driving drunk, texting, what have you.

    The nice thing about errors in AI is that once the error is identified and fixed, it ceases to be a problem, because now every car in the network implements the same fix. Versus with human drivers, where a driver falling asleep at the wheel doesn't prevent the next driver falling asleep at the wheel.

    Edit: also, if you have any evidence that AI isn't very reliable - e.g., an actual study showing that it's worse than human drivers - I'd be curious to see it.

    ElJeffe on
    Styrofoam Sammich WANT. normal (not weird) Registered User regular
    Is there any kind of reliable third party study in crash rates for autonomous vehicles vs human controlled?

    zepherin Russian warship, go fuck yourself Registered User regular
    Actually, if you are doing a beacon system, the highway would be a good place to do it. There is fencing that runs along both sides of the highway. If you ran beacons along those fences, it wouldn't be difficult to compute a position from the distances to them.

    Calica Registered User regular
    I wonder what happens when someone tosses a beacon off an overpass onto a busy freeway.

    Xaquin Right behind you! Registered User regular
    I trust the makers of automated vehicles as much as I trust any other major company

    which is absolutely zero

    Polaritie Sleepy Registered User regular
    Tesla specifically has serious issues with how they're pushing this stuff: the names that suggest it is far more capable than it is ("autopilot"), the attempts at information control over it, and what feels like recklessly aggressive (or at least insufficiently cautious) testing on actual roads.

    I agree that self-driving cars have a lot of potential as a technology, but I'm worried about how companies are trying to get there. This whole beacon nonsense is bullshit, for instance, for several reasons (shifting blame, imposing costs on everyone else, privacy implications...).

    Steam: Polaritie
    3DS: 0473-8507-2652
    Switch: SW-5185-4991-5118
    PSN: AbEntropy
    DarkPrimus Registered User regular
    edited November 2021
    ElJeffe wrote: »
    DarkPrimus wrote: »
    ElJeffe wrote: »
    DarkPrimus wrote: »
    ElJeffe wrote: »
    Yeah, they've been on the road for years now and have logged tens of millions of driver hours. I've watched a bunch of videos on how they respond to various conditions and emergency situations and it's pretty rad.

    Oh, you mean like how Tesla owners can reset their safety scores in order to get free access to the autonomous driving beta-test, or how Tesla's own safety guidelines instruct drivers to not take their hands off the wheel for more than a minute?

    Well no, that would be a rather silly thing to find rad.

    I was more commenting on the impressive technical performance of AI drivers versus human drivers, and how much better it is than the median driver.

    It's admittedly an uphill climb convincing people that driverless cars can be not just as good as, but significantly better than human drivers. Partly because everyone thinks they're an above average driver, so they tend to go "Well, that car just swerved into a pedestrian, obviously I would never swerve into a pedestrian," ignoring the thousands of drivers per year who swerve into pedestrians.

    Tesla isn't the entirety of AVs.

    I doubt that is a metric that's actually been met, and even were it so, AI drivers should not merely be better than the average human driver. The unpredictable nature of road conditions is such that we should hold these companies accountable for every accident, the same as we would hold any human driver accountable.

    I think this raises some interesting cost-benefit questions, and some interesting ethical concerns. How much better than the average driver does an AI have to be in order to be worth implementing? Like BSoB says, if the AI is as good as the average sober, alert driver, that's already a huge step up when implemented on a wide scale.

    Liability is also an interesting question. If a Tesla-navigated car gets into an accident because the AI fucks up, I'd put that on par with a human driver making a bad (but not negligent) decision that causes an accident. Looking in his mirror at the wrong time, slow reflexes, that kind of thing. So that sort of accident would be handled by whatever our insurance regime is, just as human accidents are now.

    And if it turns out the error was something known and concealed, that could be treated as a human driver being negligent - driving drunk, texting, what have you.

    The nice thing about errors in AI is that once the error is identified and fixed, it ceases to be a problem, because now every car in the network implements the same fix. Versus with human drivers, where a driver falling asleep at the wheel doesn't prevent the next driver falling asleep at the wheel.

    Edit: also, if you have any evidence that AI isn't very reliable - e.g., an actual study showing that it's worse than human drivers - I'd be curious to see it.

    Let's not presume that fixing a problem is as easy as identifying one. Identifying a problem and fixing it are two very different things, as anyone with any experience in software development can tell you.

    And once a problem is identified... what is to be done with that software? Do we simply allow it to continue to function in its flawed state? Do we disable it nationwide until a patch is developed? What happens if someone refuses to install the update?

    As to your edit: The burden of proof is on those making the claim that self-driving cars are already as safe as a human driver. Where are the sources that aren't directly from the corporations themselves?

    DarkPrimus on
    Heffling No Pic Ever Registered User regular
    I think autonomous vehicles will also be held to higher standards because the vehicle manufacturers are taking on a much greater liability with self-driving programs. I think Tesla advises their drivers not to take their hands off the steering wheel not because there are studies showing that accident rates are reduced by doing so, but because if they can demonstrate that the person wasn't paying attention via steering wheel sensors, they can place all liability on the driver.

    Quid Definitely not a banana Registered User regular
    Calica wrote: »
    I wonder what happens when someone tosses a beacon off an overpass onto a busy freeway.

    Probably roughly the same thing as tossing a brick off an overpass onto a busy freeway.
