It's definitely a matter of frequency, but I've had more bad Uber rides than taxi rides. Taxis are always just meh, neither good nor bad. I've rarely called for one, though; I always flag them down on the street or walk to a stand.
Meanwhile with Uber I have had multiple times where I had to cancel the driver because they were not heading my way, deliberately or not. I've also had way more drivers via Uber who seemed to have literally just arrived in the US that very day and never driven a car before.
I haven't been keeping up with this thread but I'm pretty sure this is brand-new, uh, news, and is absolutely horrifying. Uber's self-driving cars? They haven't been programmed to consider people on the streets except at crosswalks.
...despite the fact that the car detected Herzberg with more than enough time to stop, it was traveling at 43.5 mph when it struck her and threw her 75 feet. When the car first detected her presence, 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.
It never guessed Herzberg was on foot for a simple, galling reason: Uber didn’t tell its car to look for pedestrians outside of crosswalks. “The system design did not include a consideration for jaywalking pedestrians,” the NTSB’s Vehicle Automation Report reads. Every time it tried a new guess, it restarted the process of predicting where the mysterious object—Herzberg—was headed. It wasn’t until 1.2 seconds before the impact that the system recognized that the SUV was going to hit Herzberg, that it couldn’t steer around her, and that it needed to slam on the brakes.
That triggered what Uber called “action suppression,” in which the system held off braking for one second while it verified “the nature of the detected hazard”—a second during which the safety operator, Uber’s most important and last line of defense, could have taken control of the car and hit the brakes herself. But Vasquez wasn’t looking at the road during that second. So with 0.2 seconds left before impact, the car sounded an audio alarm, and Vasquez took the steering wheel, disengaging the autonomous system. Nearly a full second after striking Herzberg, Vasquez hit the brakes.
Pretty sure we knew their software was bad and rushed already (they needed like 10x the number of human interventions as other self-driving cars or something), but these are new specifics, I think.
Fun!
Computer vision at speed is hard and has to be approached in an open manner. Building assumptions into any system is a mistake as the real world is messy and unreliable.
It's surprising to me that, regardless of what it classified the object as (unless it was something effectively immaterial, like a plastic bag floating around), it didn't figure out it was on a collision course with *something* and make some change way ahead of time to avoid it.
Just a guess, but given that the pedestrian was in forward motion, if the uber car calculated that the object was a forward moving vehicle or bicycle, it may have been calculating to pass behind it into the vacated space, as bikes and vehicles moving forward typically need more time and space to suddenly reverse direction than a pedestrian. The assumptions the car would make about potential future places of a moving object would be different for cars, bikes, or people.
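To make that concrete, here's a toy sketch of class-dependent trajectory prediction (entirely my own guesswork; the class names, speeds, and projection logic are invented for illustration, not taken from Uber's system):

```python
# Illustrative only: how an assumed object class changes where a planner
# expects the object to be in the future. Speeds are invented figures.

def predicted_positions(x, y, dir_x, dir_y, obj_class, horizon_s=3.0, step_s=0.5):
    """Project an object forward in time under class-specific speed assumptions."""
    assumed_speed = {
        "vehicle": 13.0,     # m/s, keeps moving forward at road speed
        "bicycle": 5.0,      # m/s, slower but still clears a lane quickly
        "pedestrian": 1.4,   # m/s, typical walking pace
    }[obj_class]
    positions = []
    steps = int(horizon_s / step_s)
    for i in range(1, steps + 1):
        t = i * step_s
        positions.append((x + dir_x * assumed_speed * t,
                          y + dir_y * assumed_speed * t))
    return positions

# A "bicycle" crossing the lane is projected to have vacated it well
# before a "pedestrian" starting from the same spot would have:
bike_path = predicted_positions(0.0, 0.0, 1.0, 0.0, "bicycle")
ped_path = predicted_positions(0.0, 0.0, 1.0, 0.0, "pedestrian")
```

Under assumptions like these, a planner that believes the object is a bike or a car may conclude the lane will be clear by the time it arrives, where a pedestrian label would have said the opposite.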
Based on the article excerpt, their logic says "classify the object first, then predict where it's going, then see if I have to stop." It couldn't do the first...
Every time it tried a new guess, it restarted the process of predicting where the mysterious object—Herzberg—was headed
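In toy form, that restart-on-reclassify problem could look something like this (my reconstruction of the failure mode described above, not actual Uber code; the class, thresholds, and structure are all assumptions):

```python
# Sketch: a tracker that discards its motion history whenever the
# classifier changes its label never accumulates enough observations
# under any one label to extrapolate a path.

class FlappingTracker:
    def __init__(self):
        self.label = None
        self.history = []  # positions observed under the current label

    def observe(self, label, position):
        if label != self.label:
            self.label = label
            self.history = []  # reclassification restarts prediction
        self.history.append(position)

    def can_predict(self, min_points=3):
        # Assume we need a few points under one label to extrapolate.
        return len(self.history) >= min_points

tracker = FlappingTracker()
# The label sequence from the report: vehicle, other, vehicle, other,
# bicycle, other, bicycle -- a new guess nearly every cycle.
for t, label in enumerate(["vehicle", "other", "vehicle", "other",
                           "bicycle", "other", "bicycle"]):
    tracker.observe(label, (t * 0.5, 0.0))

print(tracker.can_predict())  # False: seven observations, none retained
```

Seven observations of the same object, and the tracker still can't say where it's headed, because every flip-flop wiped the slate.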
Yep, not ready for primetime at all. You'd first want to spin up a classifier that worked fast enough for the speeds and had some sense of masking/occlusion for previously identified objects to reduce process load. Only once you had that nailed would you want to add a rules engine on top. Sounds like they were trying to do it all at once, which is incredibly irresponsible if they didn't inform the supervising drivers that it was basically "pre alpha" state.
Also, in that case you'd want to err on the side of braking more than less, but they specifically didn't do that for "smoothness of ride". So just bad design and product decisions everywhere
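An "err on the side of braking" policy can be stated in a few lines (a hypothetical sketch, not what Uber shipped; the labels and thresholds here are invented):

```python
# Sketch: when close to a possible collision, brake unless the system is
# *confident* the object is harmless. Uncertainty triggers braking
# rather than suppressing it.

def should_brake(ttc_s, label, confidence, ttc_threshold_s=2.0, min_conf=0.9):
    if ttc_s > ttc_threshold_s:
        return False  # collision not imminent, keep driving
    harmless = (label == "debris" and confidence >= min_conf)
    return not harmless

should_brake(1.2, "other", 0.3)        # True: unsure what it is, so brake
should_brake(1.2, "debris", 0.95)      # False: confidently just a bag
should_brake(5.0, "pedestrian", 0.99)  # False: nothing imminent yet
```

The point is just that an unclassifiable "other" falls through to braking instead of to a one-second hold.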
According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
The inward-facing video shows the vehicle operator glancing down toward the center of the vehicle several times before the crash. In a postcrash interview with NTSB investigators, the vehicle operator stated that she had been monitoring the self-driving system interface.
According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review
I think this cuts to the core of the issue. Uber expects the driver to be able to take control of the vehicle at a moment's notice, but also expects them to be monitoring the self-driving system in a way that requires them to look away from the road. The car is not good enough to manage without a driver, and the driver might as well be answering their email while driving for the amount of attention they're able to pay to their surroundings.
My perspective is that these cars required two people at that stage of testing to safely operate on public roads: an alert driver that isn't constantly being distracted, and someone monitoring the self-driving system.
Yeah, I guess the computer would likely slam on the brakes more often than a human would, with rear end collisions from anyone following being more likely. It really does sound like a two person setup is needed.
A tow truck driver did this to me in the middle of a blizzard, with my car on his flatbed, after he pulled me out of a ditch.
We talk about taxi regulation, but the lack of it surrounding tows is ridiculous. They can literally steal your vehicle* and refuse to give it back unless you pay them cash!
*For an offense imagined or otherwise.
The first paragraph is reasonable right until the last part
According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator
It seems like... that would be priority number 1. The system detects a potential collision and does not alert the operator.
It's almost like their software was designed with no consideration for human beings whatsoever!
Software development in general desperately needs some kind of ethics board.
If this is true, then the system has absolutely no business being on the road.
In my area of the world, taxi drivers are pretty likely to try to scam English speaking tourists. The main reason for foreigners to Uber is to avoid price shenanigans, dodgy meters and detours.
It would be cool if Uber competition forced taxi companies to let you pay in a similar way.
By the time you need to use emergency braking procedures, it's too late to sound an alarm, wait for the driver to figure out what needs to be done, and do it. The car realized that emergency braking was needed 1.3 seconds before the collision. (It might even make things worse, by having the driver glancing at the console to see what that alert is about instead of watching the road.)
If the driver is watching the road 1.3 seconds before a collision, then they don't need the console to tell them, because they're already watching the road and can see what the warning would be warning them about.
If the driver is not, then informing them of a collision is the only reasonable course of action. 1.3 seconds isn't a lot of time, but it's possibly enough time to look up and course correct. Whereas zero seconds is not.
Plus, HUDs existed for cars long before they were testing their tech.
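For scale, here are the back-of-the-envelope numbers (my own arithmetic; the ~7 m/s² hard-braking deceleration is an assumed figure for dry pavement, not from the NTSB report):

```python
# Rough distances for the timeline in the article excerpt.
speed_ms = 43.5 * 0.44704            # 43.5 mph is about 19.4 m/s
decel = 7.0                          # m/s^2, assumed hard braking

available = speed_ms * 1.2           # distance left at the 1.2 s mark, ~23 m
suppressed = speed_ms * 1.0          # distance covered during the 1 s hold, ~19 m
stopping = speed_ms ** 2 / (2 * decel)  # v^2 / 2a, ~27 m to fully stop

print(round(available, 1), round(suppressed, 1), round(stopping, 1))
```

So even immediate braking at the 1.2-second mark likely couldn't have stopped the car in time under these assumptions, but the one-second "action suppression" window burned most of the distance in which braking could at least have shed speed.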
Just FYI, this is the preliminary report from over a year ago. It is not the new information from the most recent NTSB meeting that the articles are citing.
The new information, as others have said, shows that not only were they relying too heavily on a single operator as a safety backup, but their entire software system was not even set up to account for the fact that a pedestrian might be in the road not in a crosswalk.
Ugh. Put dates in your documents, people.
OK, let's try this one. The news reports are obviously referring to other documents, but those don't seem to be available anywhere. In particular, there's a document discussing the "General poor safety culture at Uber ATG" which I would like to read.
So I'm not sure if there's a video or transcript I can link (and it would be hard to find right now, I'm at work), but the NOVA episode on this had a different take. There's video of the driver looking at her cellphone, and possibly evidence she was watching "America's Got Talent" (or something similar, I forget which show) at the time of the accident; so she was paying attention to neither the console cues nor the road. She spent 5 of the 6 seconds before the accident not looking at the road. Not sure where NOVA got that info, as it's not in the linked report.
The emergency brakes were disabled to ensure a 'smooth ride' or something; no sudden jarring stops. Possibly they thought the system + driver would be good enough.
It would have been a win for the assisted emergency braking if it had been on; it detected a pedestrian jaywalking with a bike perpendicular to the road (an unusual premise for the AI), with poor ambient lighting, dark clothes, and no reflective surfaces facing the car. An attentive driver might have still hit her.
There were reports (rumors?) back when this happened that the operator was watching TV on her phone. I have the same opinion now as I did then: if your system relies on one person paying attention to something monotonous and boring for hours on end, and reacting in a split second when something may happen (or maybe nothing happens!), then your system is flawed.
Marty: The future, it's where you're going?
Doc: That's right, twenty-five years into the future. I've always dreamed of seeing the future, looking beyond my years, seeing the progress of mankind. I'll also be able to see who wins the next twenty-five World Series.
Uber CEO Dara Khosrowshahi expressed regret for describing the murder of journalist Jamal Khashoggi as a “mistake.”
Referring to the government of Saudi Arabia, the Uber chief told the show, “I think that government said that they made a mistake.”
“It’s a serious mistake,” Khosrowshahi said on the show, adding, “We’ve made mistakes too, right? With self-driving, and we stopped driving and we’re recovering from that mistake. I think that people make mistakes, it doesn’t mean that they can never be forgiven. I think they have taken it seriously.”
Travis Kalanick is leaving the board of Uber, the company he cofounded a decade ago and ran until his 2017 ouster. A spokesperson told CNBC that Kalanick has sold all of his remaining Uber stock, estimated to be worth around $2.5 billion.
Three years ago, Kalanick was CEO of Uber and the undisputed master of the ride-hailing company. The company's board of directors was organized to give Kalanick outsized influence. But a series of scandals in early 2017 fatally weakened Kalanick's power. An investor revolt led by the venture capital firm Benchmark led to his ouster in June 2017.
In the months after his departure, Kalanick maneuvered to maintain power behind the scenes—perhaps with an eye to eventually reclaiming the CEO title. But his efforts were rebuffed by Uber's other shareholders, and Kalanick ultimately moved on to other projects. His departure from the board is the final step in his disengagement from the company.
He's still got his fingers in the gig economy pie, though, having founded the ghost kitchen startup CloudKitchens.
It may be that he knows Uber's long-term strategy is risky if people start asking too many questions, and now he doesn't have a strong in to know when to jump ship before a fall in stock value.
I've personally noticed Lyft seems to have raised their rates on normal rides and reduced the cost of shared rides. I wonder if they're trying to get people more used to the idea.
He's a shy overambitious dog-catcher on the wrong side of the law. She's an orphaned psychic mercenary with the power to bend men's minds. They fight crime!
Ironically, CloudKitchens isn't actually a bad idea.
There's an entire industry of providing "dark" or "ghost" kitchens strictly for delivery with no attached dine-in spaces. One of the more...interestingly named companies in the space is Zuul Kitchens.
What, catering? (I know most catering comes from places that DO have dining spaces, but still).
Edit: Also catering shouldn't be a gig economy thing, ever. For obvious reasons.
Nah, the current delivery boom has people opening kitchens in garage boxes, at least over here.
This. Thanks to the huge increases in restaurant delivery that have come with the rise of services like GrubHub, DoorDash, UberEats and so on, some places have been realizing that they can do more delivery business than their existing kitchens can handle. So to chase that business you're getting expansions, and sometimes even whole new restaurants, in spaces that are strictly kitchen only with no customer seating or areas to order food on site at all - ghost kitchens, or dark kitchens. Space that is completely dedicated to cooking, aside from a small area for the gig economy delivery drivers to pick up the orders. And some companies have recognized this growing market and started opening up really big kitchens where smaller businesses can rent space and get in on the delivery boom in a shared space.
And if the space is rentable on short notice, then businesses can scale so that they only use the kitchens during high-volume times. This reduces the total kitchen space needed across multiple businesses if their peak times don't all coincide, which is the angle where both parties can profit from the arrangement.
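As a toy illustration of that capacity-sharing point (invented numbers):

```python
# Two delivery businesses with offset peaks: one lunch-heavy, one
# dinner-heavy. "Demand" is kitchen stations needed per hour.
lunch_biz  = [0, 1, 4, 5, 4, 1, 0, 0, 0]
dinner_biz = [0, 0, 0, 1, 1, 3, 5, 4, 1]

# Two dedicated kitchens must each be sized for their own peak:
dedicated = max(lunch_biz) + max(dinner_biz)                # 10 stations
# One shared kitchen only needs to cover the combined peak:
shared = max(a + b for a, b in zip(lunch_biz, dinner_biz))  # 6 stations
```

Because the peaks don't line up, the shared space needs well under the sum of the two dedicated ones, and that gap is the margin the kitchen operator and its tenants can split.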
Cool.
Sounds like a food health and safety nightmare (no ability to find or regulate the kitchens), and the enablers of ghost/dark kitchens (CloudKitchens, UberEats) should be shut down.
Huh?
Shared kitchens aren't a new thing, and those that operate them as well as those that use them are required to adhere to all food safety regulations and the normal inspection process.
They're "dark/ghost" because they don't have a storefront. They have to pass food safety inspections the same as a caterer does... who also don't tend to have storefronts on their kitchens.
(Bloomberg is a business news website.)
More details via Wired.
Yeah like.... what'd they do? "if (crosswalk) { checkPedestrian; checkEverythingElse; } else { checkEverythingElse; }"
I don't do computer vision, but it seems weird to have hard-baked that logic in.
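A guess at the kind of hard-baked gating being joked about here (purely illustrative — the report only says jaywalking pedestrians weren't considered, not how that was expressed in code):

```python
# Sketch of why map-context gating is fragile: if the candidate label
# set itself depends on "am I near a crosswalk", then an object
# mid-block can never be labeled a pedestrian at all.

def candidate_labels(near_crosswalk: bool):
    labels = ["vehicle", "bicycle", "other"]
    if near_crosswalk:
        labels.append("pedestrian")
    return labels

print("pedestrian" in candidate_labels(False))  # False: mid-block object
print("pedestrian" in candidate_labels(True))   # True: at a crosswalk
```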
Try it while being black, see how it goes.
And it's horrible for it, yeah. Now imagine making people even more distanced from any action.
I am soooo ready for self-driving cars. If Uber blows this for me I will be quite put out :P
https://www.cnbc.com/2019/11/11/uber-chief-called-the-murder-of-jamal-khashoggi-serious-mistake.html That is the worst response he could have given.