[Uber]: Disrupting Livery Service (And Ethics)



  • Darkewolfe Registered User regular
    It's definitely a matter of frequency, but I've had more bad Uber rides than I have taxi rides. Taxis are always just meh, neither good nor bad. I have rarely ever called for one, though; I always flag them off the street or walk to a stand.

    Meanwhile with Uber I have had multiple times where I had to cancel the driver because they were not heading my way, deliberately or not. I've also had way more drivers via Uber who seemed to have literally just arrived in the US that very day and never driven a car before.

    What is this I don't even.
  • DarkPrimus Registered User regular
    I haven't been keeping up with this thread but I'm pretty sure this is brand-new, uh, news, and is absolutely horrifying. Uber's self-driving cars? They haven't been programmed to consider people on the streets except at crosswalks.


    (Bloomberg is a business news website.)

    More details via Wired.
    ...despite the fact that the car detected Herzberg with more than enough time to stop, it was traveling at 43.5 mph when it struck her and threw her 75 feet. When the car first detected her presence, 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.

    It never guessed Herzberg was on foot for a simple, galling reason: Uber didn’t tell its car to look for pedestrians outside of crosswalks. “The system design did not include a consideration for jaywalking pedestrians,” the NTSB’s Vehicle Automation Report reads. Every time it tried a new guess, it restarted the process of predicting where the mysterious object—Herzberg—was headed. It wasn’t until 1.2 seconds before the impact that the system recognized that the SUV was going to hit Herzberg, that it couldn’t steer around her, and that it needed to slam on the brakes.

    That triggered what Uber called “action suppression,” in which the system held off braking for one second while it verified “the nature of the detected hazard”—a second during which the safety operator, Uber’s most important and last line of defense, could have taken control of the car and hit the brakes herself. But Vasquez wasn’t looking at the road during that second. So with 0.2 seconds left before impact, the car sounded an audio alarm, and Vasquez took the steering wheel, disengaging the autonomous system. Nearly a full second after striking Herzberg, Vasquez hit the brakes.
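    The failure mode Wired describes, where every reclassification throws away the object's motion history, can be sketched in a few lines. This is a hypothetical toy model, not Uber's actual code:

```python
def predict_path(history):
    """Linear extrapolation from the last two tracked positions."""
    if len(history) < 2:
        return None  # not enough history to predict a trajectory
    (t0, x0), (t1, x1) = history[-2], history[-1]
    v = (x1 - x0) / (t1 - t0)
    return lambda t: x1 + v * (t - t1)

# Simulated sensor frames: (time, position, classifier's current guess).
# The labels flip-flop the way the NTSB report describes.
frames = [(0.0, 0, "vehicle"), (0.5, 2, "other"), (1.0, 4, "vehicle"),
          (1.5, 6, "other"), (2.0, 8, "bicycle"), (2.5, 10, "other")]

history, last_label, predictions = [], None, 0
for t, x, label in frames:
    if label != last_label:          # new guess: the track restarts from scratch
        history, last_label = [], label
    history.append((t, x))
    if predict_path(history) is not None:
        predictions += 1

print(predictions)  # prints: 0
```

    With a label change on every frame, the track never accumulates the two points needed to extrapolate, so no trajectory prediction is ever made, even though the object was detected the whole time.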

  • kime Queen of Blades Registered User regular
    Pretty sure we knew their software was bad and rushed already (they needed something like 10x as many human interventions as other self-driving cars), but these are new specifics, I think.

    Fun! :/

    Battle.net ID: kime#1822
    3DS Friend Code: 3110-5393-4113
    Steam profile
  • Kane Red Robe Master of Magic Arcanus, Registered User regular
    Jesus. I hope the NTSB fines their asses off. That should never have been on the road, backup human or no.

  • schuss Registered User regular
    kime wrote: »
    Pretty sure we knew their software was bad and rushed already (they needed something like 10x as many human interventions as other self-driving cars), but these are new specifics, I think.

    Fun! :/

    Computer vision at speed is hard and has to be approached in an open manner. Building assumptions into any system is a mistake as the real world is messy and unreliable.

  • Aim Registered User regular
    It's surprising to me that, regardless of what it classified the object as (other than something effectively immaterial, like a plastic bag floating around), it didn't figure out it was on a collision course with *something* and make some change well ahead of time to avoid it.

  • Blarghy Registered User regular
    Just a guess, but given that the pedestrian was in forward motion, if the Uber car classified the object as a forward-moving vehicle or bicycle, it may have been calculating to pass behind it into the vacated space, since bikes and vehicles moving forward typically need more time and space to suddenly reverse direction than a pedestrian does. The assumptions the car would make about the potential future positions of a moving object would be different for cars, bikes, or people.

  • kime Queen of Blades Registered User regular
    schuss wrote: »
    kime wrote: »
    Pretty sure we knew their software was bad and rushed already (they needed something like 10x as many human interventions as other self-driving cars), but these are new specifics, I think.

    Fun! :/

    Computer vision at speed is hard and has to be approached in an open manner. Building assumptions into any system is a mistake as the real world is messy and unreliable.

    Yeah like.... what'd they do? "if (crosswalk) { checkPedestrian; checkEverythingElse; } else { checkEverythingElse; }"

    I don't do computer vision, but it seems weird to have hard-baked that logic in.
    Aim wrote: »
    It's surprising to me that, regardless of what it classified the object as (other than something effectively immaterial, like a plastic bag floating around), it didn't figure out it was on a collision course with *something* and make some change well ahead of time to avoid it.

    Based on the article excerpt, their logic says "classify the object first, then predict where it's going, then see if I have to stop." It couldn't do the first...
    Every time it tried a new guess, it restarted the process of predicting where the mysterious object—Herzberg—was headed

  • spool32 Contrary Library Registered User regular
    Kamiro wrote: »
    Calica wrote: »
    Kamiro wrote: »
    Calica wrote: »
    Yeah, "hailing a cab" is a thing that only happens in movies for most people.

    Huh?

    Do you mean like just yelling “Taxi!” and a cab showing up?

    Or just hailing a cab from the street in general. Cause it’s pretty common in most cities

    The latter.

    This is baffling to me.

    Every city I've been to, except Pittsburgh, you could hail a cab from the street pretty easily.

    Try it while being black, see how it goes.

  • schuss Registered User regular
    kime wrote: »
    schuss wrote: »
    kime wrote: »
    Pretty sure we knew their software was bad and rushed already (they needed something like 10x as many human interventions as other self-driving cars), but these are new specifics, I think.

    Fun! :/

    Computer vision at speed is hard and has to be approached in an open manner. Building assumptions into any system is a mistake as the real world is messy and unreliable.

    Yeah like.... what'd they do? "if (crosswalk) { checkPedestrian; checkEverythingElse; } else { checkEverythingElse; }"

    I don't do computer vision, but it seems weird to have hard-baked that logic in.
    Aim wrote: »
    It's surprising to me that, regardless of what it classified the object as (other than something effectively immaterial, like a plastic bag floating around), it didn't figure out it was on a collision course with *something* and make some change well ahead of time to avoid it.

    Based on the article excerpt, their logic says "classify the object first, then predict where it's going, then see if I have to stop." It couldn't do the first...
    Every time it tried a new guess, it restarted the process of predicting where the mysterious object—Herzberg—was headed

    Yep, not ready for primetime at all. You'd first want to spin up a classifier that worked fast enough for the speeds involved and had some sense of masking/occlusion for previously identified objects, to reduce processing load. Only once you had that nailed would you want to add a rules engine on top. Sounds like they were trying to do it all at once, which is incredibly irresponsible if they didn't inform the supervising drivers that it was basically in a "pre-alpha" state.
    Also, in that case you'd want to err on the side of braking more rather than less, but they specifically didn't do that, for "smoothness of ride". So, just bad design and product decisions everywhere.
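    Erring on the side of braking could be as simple as an asymmetric (hysteresis) threshold: easy to start braking, harder to release. A toy sketch with made-up confidence numbers, not anything from the report:

```python
BRAKE_ON = 0.3   # start braking at fairly low collision confidence...
BRAKE_OFF = 0.1  # ...and only release once confidence has clearly dropped

def should_brake(confidence, currently_braking):
    """Hysteresis threshold: biased toward braking over ride smoothness."""
    if currently_braking:
        return confidence > BRAKE_OFF
    return confidence > BRAKE_ON

braking, states = False, []
for c in [0.05, 0.2, 0.35, 0.2, 0.15, 0.05]:  # simulated collision confidence
    braking = should_brake(c, braking)
    states.append(braking)

print(states)  # prints: [False, False, True, True, True, False]
```

    Once the car commits to braking, a momentary dip in confidence (the flip-flopping classifications from the report) doesn't make it release the brakes.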

  • evilmrhenry Registered User regular
    Here's the actual report
    According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
    The inward-facing video shows the vehicle operator glancing down toward the center of the vehicle several times before the crash. In a postcrash interview with NTSB investigators, the vehicle operator stated that she had been monitoring the self-driving system interface.
    According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review

    I think this cuts to the core of the issue. Uber expects the driver to be able to take control of the vehicle at a moment's notice, but also expects them to be monitoring the self-driving system in a way that requires them to look away from the road. The car is not good enough to manage without a driver, and the driver might as well be answering their email while driving for the amount of attention they're able to pay to their surroundings.

    My perspective is that these cars required two people at that stage of testing to safely operate on public roads: an alert driver that isn't constantly being distracted, and someone monitoring the self-driving system.

  • Blarghy Registered User regular
    Yeah, I guess the computer would likely slam on the brakes more often than a human would, with rear end collisions from anyone following being more likely. It really does sound like a two person setup is needed.

  • LostNinja Registered User regular
    Smrtnik wrote: »
    Paladin wrote: »
    They'll drive you to an ATM

    A tow truck driver did this to me in the middle of a blizzard, with my car on his flatbed, after he pulled me out of a ditch.

    We talk about taxi regulation, but the lack of it surrounding tows is ridiculous. They can literally steal your vehicle* and refuse to give it back unless you pay them cash!

    *for an offense imagined or otherwise.

  • Goumindong Registered User regular
    The first paragraph is reasonable right until the last part
    According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator

    It seems like... that would be priority number 1. The system detects a potential collision and does not alert the operator.

  • Calica Registered User regular
    Goumindong wrote: »
    The first paragraph is reasonable right until the last part
    According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator

    It seems like... that would be priority number 1. The system detects a potential collision and does not alert the operator.

    It's almost like their software was designed with no consideration for human beings whatsoever!

    Software development in general desperately needs some kind of ethics board.

  • khain Registered User regular
    Blarghy wrote: »
    Yeah, I guess the computer would likely slam on the brakes more often than a human would, with rear end collisions from anyone following being more likely. It really does sound like a two person setup is needed.

    If this is true, then the system has absolutely no business being on the road.

  • fedaykin666 Registered User regular
    In my area of the world, taxi drivers are pretty likely to try to scam English speaking tourists. The main reason for foreigners to Uber is to avoid price shenanigans, dodgy meters and detours.

    It would be cool if Uber competition forced taxi companies to let you pay in a similar way.

  • evilmrhenry Registered User regular
    Goumindong wrote: »
    The first paragraph is reasonable right until the last part
    According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator

    It seems like... that would be priority number 1. The system detects a potential collision and does not alert the operator.

    By the time you need to use emergency braking procedures, it's too late to sound an alarm, wait for the driver to figure out what needs to be done, and do it. The car realized that emergency braking was needed 1.3 seconds before the collision. (It might even make things worse, by having the driver glancing at the console to see what that alert is about instead of watching the road.)
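    Some back-of-the-envelope numbers make the point. Assuming a hard-braking deceleration of about 7 m/s² and a typical 1.5 s human perception-reaction time (assumed figures, not from the report):

```python
speed = 43.5 * 0.44704          # mph to m/s, about 19.4 m/s
decel = 7.0                     # assumed hard-braking deceleration, m/s^2
reaction = 1.5                  # assumed human perception-reaction time, s

braking_dist = speed ** 2 / (2 * decel)        # v^2 / 2a once braking starts
total_dist = speed * reaction + braking_dist   # react first, then brake
dist_available = speed * 1.3                   # distance covered in 1.3 s

print(round(dist_available, 1), round(total_dist, 1))  # prints: 25.3 56.2
```

    At 43.5 mph the car covers about 25 m in the 1.3 seconds it had, while a human who has to notice an alarm, react, and then brake needs roughly 56 m. The alarm was always going to be too late.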

  • Goumindong Registered User regular
    Goumindong wrote: »
    The first paragraph is reasonable right until the last part
    According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator

    It seems like... that would be priority number 1. The system detects a potential collision and does not alert the operator.

    By the time you need to use emergency braking procedures, it's too late to sound an alarm, wait for the driver to figure out what needs to be done, and do it. The car realized that emergency braking was needed 1.3 seconds before the collision. (It might even make things worse, by having the driver glancing at the console to see what that alert is about instead of watching the road.)

    If the driver is watching the road 1.3 seconds before a collision, then they do not need the console to tell them, because they're already watching the road and can see whatever the warning would be warning them of.

    If the driver is not, then informing them of the collision is the only reasonable course of action. 1.3 seconds isn't a lot of time, but it's possibly enough time to look up and course-correct. Whereas zero seconds is not.

    Plus, HUDs existed in cars long before they were testing this tech.

  • tsmvengy Registered User regular
    Here's the actual report
    According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
    The inward-facing video shows the vehicle operator glancing down toward the center of the vehicle several times before the crash. In a postcrash interview with NTSB investigators, the vehicle operator stated that she had been monitoring the self-driving system interface.
    According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review

    I think this cuts to the core of the issue. Uber expects the driver to be able to take control of the vehicle at a moment's notice, but also expects them to be monitoring the self-driving system in a way that requires them to look away from the road. The car is not good enough to manage without a driver, and the driver might as well be answering their email while driving for the amount of attention they're able to pay to their surroundings.

    My perspective is that these cars required two people at that stage of testing to safely operate on public roads: an alert driver that isn't constantly being distracted, and someone monitoring the self-driving system.

    Just FYI, this is the preliminary report from over a year ago. It is not the new information from the most recent NTSB meeting that the articles are citing.

    The new information, as others have said, shows that not only were they relying too heavily on a single operator as a safety backup, but their entire software system was not even set up to account for the fact that a pedestrian might be in the road not in a crosswalk.

  • evilmrhenry Registered User regular
    tsmvengy wrote: »
    Here's the actual report
    According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
    The inward-facing video shows the vehicle operator glancing down toward the center of the vehicle several times before the crash. In a postcrash interview with NTSB investigators, the vehicle operator stated that she had been monitoring the self-driving system interface.
    According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review

    I think this cuts to the core of the issue. Uber expects the driver to be able to take control of the vehicle at a moment's notice, but also expects them to be monitoring the self-driving system in a way that requires them to look away from the road. The car is not good enough to manage without a driver, and the driver might as well be answering their email while driving for the amount of attention they're able to pay to their surroundings.

    My perspective is that these cars required two people at that stage of testing to safely operate on public roads: an alert driver that isn't constantly being distracted, and someone monitoring the self-driving system.

    Just FYI, this is the preliminary report from over a year ago. It is not the new information from the most recent NTSB meeting that the articles are citing.

    The new information, as others have said, shows that not only were they relying too heavily on a single operator as a safety backup, but their entire software system was not even set up to account for the fact that a pedestrian might be in the road not in a crosswalk.

    Ugh. Put dates in your documents, people.

    OK, let's try this one. The news reports are obviously referring to other documents, but those don't seem to be available anywhere. In particular, there's a document discussing the "General poor safety culture at Uber ATG" which I would like to read.

  • Thro pgroome@penny-arcade.com Registered User regular
    Here's the actual report
    According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
    The inward-facing video shows the vehicle operator glancing down toward the center of the vehicle several times before the crash. In a postcrash interview with NTSB investigators, the vehicle operator stated that she had been monitoring the self-driving system interface.
    According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review

    I think this cuts to the core of the issue. Uber expects the driver to be able to take control of the vehicle at a moment's notice, but also expects them to be monitoring the self-driving system in a way that requires them to look away from the road. The car is not good enough to manage without a driver, and the driver might as well be answering their email while driving for the amount of attention they're able to pay to their surroundings.

    My perspective is that these cars required two people at that stage of testing to safely operate on public roads: an alert driver that isn't constantly being distracted, and someone monitoring the self-driving system.

    So I'm not sure if there's a video or transcript I can link (it would be hard to find right now; I'm at work), but the NOVA episode on this had a different take. There's video of the driver looking at her cellphone, and possibly evidence she was watching "America's Got Talent" (or something similar, I forget which show) at the time of the accident; so she was paying attention to neither the console cues nor the road. She spent 5 of the 6 seconds before the accident not looking at the road. Not sure where NOVA got that info, as it's not in the linked report.

    The emergency brakes were disabled to ensure a 'smooth ride' or something; no sudden jarring stops. Possibly they thought the system + driver would be good enough.

    It would have been a win for the assisted emergency braking if it had been on; it detected a pedestrian jaywalking with a bike perpendicular across the road (an unusual scenario for the AI), in poor ambient lighting, wearing dark clothes, with no reflective surfaces facing the car. An attentive driver might still have hit her.

  • kime Queen of Blades Registered User regular
    There were reports (rumors?) back when this happened that the operator was watching TV on her phone. I have the same opinion now as I did then: if your system relies on one person paying attention to something monotonous and boring for hours on end, and reacting in the split second when something does happen (and maybe nothing ever does!), then your system is flawed.

  • Paladin Registered User regular
    Driving already kind of works that way

    Marty: The future, it's where you're going?
    Doc: That's right, twenty five years into the future. I've always dreamed on seeing the future, looking beyond my years, seeing the progress of mankind. I'll also be able to see who wins the next twenty-five world series.
  • kime Queen of Blades Registered User regular
    Paladin wrote: »
    Driving already kind of works that way

    And it's horrible for it, yeah. Now imagine making people even more distanced from any action.

    I am soooo ready for self-driving cars. If Uber blows this for me I will be quite put out :P

  • Couscous Registered User regular
    Who among us has not made a mistake like torturing and murdering a dissident as Saudi Arabia has?

    https://www.cnbc.com/2019/11/11/uber-chief-called-the-murder-of-jamal-khashoggi-serious-mistake.html
    Uber CEO Dara Khosrowshahi expressed regret for describing the murder of journalist Jamal Khashoggi as a “mistake.”
    Referring to the government of Saudi Arabia, the Uber chief told the show, “I think that government said that they made a mistake.”

    “It’s a serious mistake,” Khosrowshahi said on the show, adding, “We’ve made mistakes too, right? With self-driving, and we stopped driving and we’re recovering from that mistake. I think that people make mistakes, it doesn’t mean that they can never be forgiven. I think they have taken it seriously.”
    That is the worst response he could have given.

  • RMS Oceanic Registered User regular
    Open mouth, insert foot

  • AngelHedgie Registered User regular
    The founder of Uber has fully divested from the company:
    Travis Kalanick is leaving the board of Uber, the company he cofounded a decade ago and ran until his 2017 ouster. A spokesperson told CNBC that Kalanick has sold all of his remaining Uber stock, estimated to be worth around $2.5 billion.

    Three years ago, Kalanick was CEO of Uber and the undisputed master of the ride-hailing company. The company's board of directors was organized to give Kalanick outsized influence. But a series of scandals in early 2017 fatally weakened Kalanick's power. An investor revolt led by the venture capital firm Benchmark led to his ouster in June 2017.

    In the months after his departure, Kalanick maneuvered to maintain power behind the scenes—perhaps with an eye to eventually reclaiming the CEO title. But his efforts were rebuffed by Uber's other shareholders, and Kalanick ultimately moved on to other projects. His departure from the board is the final step in his disengagement from the company.

    He's still got his fingers in the gig economy pie, though, having founded the ghost kitchen startup CloudKitchens.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Goumindong Registered User regular
    Ironically, CloudKitchens isn't actually a bad idea

  • Void Slayer Very Suspicious Registered User regular
    It may be that he knows Uber's long-term strategy is risky if people start asking too many questions, and now he doesn't have a strong in to know when to jump ship before a fall in stock value.

    I've personally noticed Lyft seems to have raised their rates on normal rides and reduced the cost of shared rides. I wonder if they're trying to get people more used to the idea.

    He's a shy overambitious dog-catcher on the wrong side of the law. She's an orphaned psychic mercenary with the power to bend men's minds. They fight crime!
  • AngelHedgie Registered User regular
    edited December 2019
    Goumindong wrote: »
    Ironically, CloudKitchens isn't actually a bad idea

    There's an entire industry of providing "dark" or "ghost" kitchens strictly for delivery with no attached dine-in spaces. One of the more...interestingly named companies in the space is Zuul Kitchens.

  • Polaritie Sleepy Registered User regular
    edited December 2019
    Goumindong wrote: »
    Ironically, CloudKitchens isn't actually a bad idea

    There's an entire industry of providing "dark" or "ghost" kitchens strictly for delivery with no attached dine-in spaces. One of the more...interestingly named companies in the space is Zuul Kitchens.

    What, catering? (I know most catering comes from places that DO have dining spaces, but still).

    Edit: Also catering shouldn't be a gig economy thing, ever. For obvious reasons.

    Steam: Polaritie
    3DS: 0473-8507-2652
    Switch: SW-5185-4991-5118
    PSN: AbEntropy
  • SanderJK Crocodylus Pontifex Sinterklasicus, Madrid, 3000 AD, Registered User regular
    Nah, the current delivery boom has people opening kitchens in garage boxes, at least over here.

    Steam: SanderJK Origin: SanderJK
  • DisruptedCapitalist I swear! Registered User regular
    Getting out while the getting is good.

    "Simple, real stupidity beats artificial intelligence every time." -Mustrum Ridcully in Terry Pratchett's Hogfather p. 142 (HarperPrism 1996)
  • Ketar Come on upstairs we're having a party, Registered User regular
    SanderJK wrote: »
    Nah, the current delivery boom has people opening kitchens in garage boxes, at least over here.

    This. Thanks to the huge increases in restaurant delivery that have come with the rise of services like GrubHub, DoorDash, UberEats and so on, some places have been realizing that they can do more delivery business than their existing kitchens can handle. So to chase that business you're getting expansions, and sometimes even whole new restaurants, in spaces that are strictly kitchen only with no customer seating or areas to order food on site at all - ghost kitchens, or dark kitchens. Space that is completely dedicated to cooking, aside from a small area for the gig economy delivery drivers to pick up the orders. And some companies have recognized this growing market and started opening up really big kitchens where smaller businesses can rent space and get in on the delivery boom in a shared space.

  • Goumindong Registered User regular
    And if the space is rentable on short notice, businesses can scale so that they only use the kitchens during their high-volume times. If their peak times don't all coincide, this reduces the total kitchen space needed across multiple businesses, which is the angle where both sides can profit from the arrangement.
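    That's just "sum of peaks" versus "peak of sums". A toy illustration with made-up demand numbers:

```python
# Hourly kitchen-capacity demand (made-up numbers) for two businesses:
# a breakfast place peaking early, a dinner place peaking late.
breakfast = [8, 9, 6, 2, 1, 1]
dinner    = [1, 1, 2, 5, 9, 8]

separate = max(breakfast) + max(dinner)                 # two dedicated kitchens
shared = max(b + d for b, d in zip(breakfast, dinner))  # one shared kitchen

print(separate, shared)  # prints: 18 10
```

    Two dedicated kitchens have to be sized for their individual peaks (9 + 9 = 18 units of capacity), while one shared kitchen only needs 10, because the peaks don't coincide.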

  • Paladin Registered User regular
    Hmm the end of dine-in, you say

  • discrider Registered User regular
    edited December 2019
    Cool.
    Sounds like a food health and safety nightmare (no ability to find or regulate the kitchens), and the enablers of ghost/dark kitchens (CloudKitchens, UberEats) should be shut down.

  • Ketar Come on upstairs we're having a party, Registered User regular
    discrider wrote: »
    Cool.
    Sounds like a food health and safety nightmare, and the enablers of ghost/dark kitchens (CloudKitchens, UberEats) should be shut down.

    Huh?

    Shared kitchens aren't a new thing, and those that operate them as well as those that use them are required to adhere to all food safety regulations and the normal inspection process.

  • Goumindong Registered User regular
    They're "dark/ghost" kitchens because they don't have a storefront. They have to pass food-safety inspections the same as caterers do, who also don't tend to have storefronts on their kitchens.
