The way it waffled back and forth between "I know what this is and I think it's moving out of the way in time" and "nope, that's clearly something in the middle of the road and I WANT TO RAM IT" explains 100% why the emergency brakes were activating chaotically, and any software developer worth anything could have debugged that.
"Just disable the braking safety system" was the concept used instead, because they didn't want to invest the relatively small amount of time to figure it out.
Vaguely reminds me of the Therac-25 incidents. They're not 1:1, but there are a lot of parallels to draw between lazy programmers, cheap executives, and using tech to do things it shouldn't do.
Driving a car is a nearly braindead simple decision tree; that car had nearly 6 fucking seconds to stop. You can stop a fucking tractor trailer in that amount of time. Most humans can background-task driving when things go well, that's how simple it is. And it had 5 seconds to go "wait, maybe I should slow down?" instead of waffling back and forth every few milliseconds on what it should do and then finally deciding "fuck it, ramming speed". I know AI is the new hotness, but there's absolutely no reason for it in 99% of the applications I've seen.
The thing about the programming that's insane to me is that, the way I'm reading that report, every time it reclassified the object it forgot everything it had already learned about it?
Yeah, that's your AI/machine-learning decision tree at work.
Machine learning very often ends up "cheating" or exploiting bugs in the rules rather than actually producing the most efficient way of doing a thing. You could very reasonably hand-build a system that outperforms the machine-learned one pretty quickly.
The ironic thing is you can tell this system was designed in a major metropolitan area by people who have never dealt with anything larger than a squirrel darting out in front of them on the road. How "pedestrians and animals only cross at crosswalks" made it past QC is beyond me.
i don't think that applies to any east coast major metro i've ever been to.
crosswalks are just a suggestion around here. out west they're sort of respected but it's weird and i don't like it.
Yeah, just... how did this design make it out? Have they never even seen a deer?
After one or two of those back-and-forths where it couldn't figure out what to do (even though it should be reclassifying if possible) it should've pulled over, thrown a panic, and let the driver take over. But it kept going "maybe it's a bike, maybe a tree, who fucking knows, RAMMING SPEED BABY"
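What people are describing here is basically a debounce plus a bail-out counter. A toy sketch of the idea, with all the names made up (this is obviously not Uber's actual code, just the shape of the guard):

```python
# Toy sketch of the "stop waffling" guard: debounce the classifier output
# and bail out to the driver after too many flip-flops. All labels and
# thresholds here are hypothetical, purely for illustration.

def plan_action(classifications, flip_limit=3):
    """classifications: one label per sensor frame, oldest first."""
    flips = 0
    prev = None
    for label in classifications:
        if prev is not None and label != prev:
            flips += 1
            if flips >= flip_limit:
                return "PANIC"  # can't make up its mind: slow down, alert driver
        if label in ("pedestrian", "cyclist", "unknown_obstacle"):
            return "BRAKE"      # when in doubt, err on the side of stopping
        prev = label
    return "CONTINUE"
```

The point is just that both failure branches are safe: a threatening label brakes immediately, and sustained confusion hands over control instead of barreling on.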
How is that not one of the first things baked into the program?
it sorta was but the uber devs unplugged it because they couldn't figure out how to get the tech they stole from google to work
Machine learning very often ends up "cheating" or exploiting bugs in the rules rather than actually producing the most efficient way of doing a thing. You could very reasonably hand-build a system that outperforms the machine-learned one pretty quickly.
There's a list somewhere of hilarious ways "AIs" have exploited stuff to reach their "goals".
One was playing a game, and when it was losing it exploited a bug that crashed the game.
My favorite was when they set the goal function for Tetris as "don't lose" and it just paused the game.
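That Tetris story is classic specification gaming. A totally made-up toy version of it: score each policy by frames survived in a fake falling-block game that happens to have a pause button, and pausing forever comes out as the optimum:

```python
import random

# Toy specification-gaming demo: the objective is "don't lose" (frames
# survived), and the action set includes "pause". The degenerate best
# policy is to pause forever. Purely illustrative, not from any real paper.

ACTIONS = ["left", "right", "drop", "pause"]

def evaluate(action, max_frames=1000, seed=0):
    """Frames survived when the policy repeats one action forever."""
    rng = random.Random(seed)
    for frame in range(max_frames):
        if action == "pause":
            continue             # nothing happens while paused: can't lose
        if rng.random() < 0.3:   # every real move risks topping out
            return frame
    return max_frames

best = max(ACTIONS, key=evaluate)  # "pause" maximizes the objective
```

Nothing in the reward says "play the game", so the optimizer doesn't.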
Yeah, I had a little machine learning project to do something like matching movie reviews to recommended movie genres, and one of the early errors I hit was that I'd set it up so it HAD to assign every account a recommendation. So it ignored every other aspect of the program and set everything to Action, the first genre alphabetically.
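For what it's worth, that "everything becomes Action" failure can drop straight out of argmax tie-breaking: if a broken or untrained scorer gives every genre the same score, the first label wins every time. A hypothetical sketch (not the actual project code):

```python
# Why everything came out "Action": with no real signal, every genre ties,
# and argmax-style tie-breaking returns the lowest index -- the first label
# alphabetically if the labels are sorted. Hypothetical sketch.

GENRES = sorted(["Drama", "Action", "Horror", "Comedy"])  # "Action" is first

def recommend(review, scores=None):
    # A broken/untrained scorer gives every genre the identical score.
    if scores is None:
        scores = [0.0] * len(GENRES)
    best_index = max(range(len(GENRES)), key=lambda i: scores[i])
    return GENRES[best_index]   # ties resolve to the first (lowest) index
```

Forcing a recommendation, with no "abstain" option, is what turns a harmless tie into the same wrong answer every single time.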
so, I recently bought a 2070 Super, and am going to get a new SSD today. In the next couple of weeks, I want to swap out the mobo and cpu as well. Currently have an old MSI Z87 and an i5 4670k.
I always get confused about mobo stuff, what should I be looking at? What's the hot new shit that isn't prohibitively expensive?
Hard to know as I haven't read it in detail, but computer vision + motion tracking generally works like this (and I'll totally defer to people who aren't total hacks at it like me):
1. Simple - classify object and reprocess every n milliseconds.
2. Complex - classify, then mask and track object, reclassifying periodically if prediction strength is below a certain threshold.
You typically run a prebuilt model as there's no input saying "hey, that was actually a person, not a sign" definitively. You could mask/store/update and retrain, but that could cause unacceptable model drift if not done carefully.
This is why you have people like Nvidia building incredibly complex simulation environments using GANs, as there are a lot of layers of things happening interleaved that would be dangerous to experiment with in real life.
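A rough sketch of option 2 from that list, with `detect` and `track_step` as stand-ins for a real detector and tracker (hypothetical names, not any actual library's API):

```python
# Sketch of approach 2: classify once, then track cheaply, and only rerun
# the expensive classifier when tracker confidence drops below a threshold.
# detect/track_step are hypothetical stand-ins for a real CNN + tracker.

RECLASSIFY_BELOW = 0.6  # made-up confidence threshold

def process_stream(frames, detect, track_step):
    label, confidence, state = None, 0.0, None
    history = []
    for frame in frames:
        if label is None or confidence < RECLASSIFY_BELOW:
            # expensive path: full detection + classification
            label, confidence, state = detect(frame)
        else:
            # cheap path: update the existing track, keep the label
            confidence, state = track_step(frame, state)
        history.append((label, confidence))
    return history  # the label (and its history) persists across frames
```

The key difference from reclassifying every n milliseconds is that last line: the object's identity and history carry across frames instead of being thrown away.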
Pretty much. I haven't looked into what they're actually doing (and the code is probably proprietary anyway) but online model fitting and tracking are hugely processor intensive. Doing it in real time with a large viewfield at high speed is by no means impossible but it's also not quick or easy (understatement!), and it sounds like Uber were going straight for the fast-simple-and-wrong solution.
My assessment from a distance is that that system wasn't anywhere near ready to be within a kilometre of a real human, let alone on a non-controlled test site, and Uber were cutting corners and moving to on-road tests too soon because they're panicking about their business model and the sooner they can eliminate those pesky drivers, the better.
Jesus that Therac-25 thing is terrifying
It's difficult to argue against going with a Ryzen at this point, unless you're against getting some DDR4 as well.
Wow my Samsung tv just updated itself and installed a fucking ad service onto the home screen.
I managed to disable it, but I feel fucking outraged right now.
Yeah, once I saw how much outbound data my TV was using I blocked it from my wi-fi. They upload lots of usage info and whatnot. Turns out there's no profit margin on TVs anymore, so they capture usage data and sell it to actually make money on them. Or they run ads on them.
She laughed. I laughed. The TV laughed. I shot the TV. It was a good time.
As someone who recently got a new cheap TV what’s the best way to block that outbound data (crucially without losing the ability to watch anything and upsetting my partner, as the streaming services we use are also built into the TV.)
I run pihole on my home network and it blocks some stuff on my roku tv. Like on the roku channel list there's a banner on the right side that has ads, and pihole blocks that. Don't know if it stops analytics data going outbound.
You should be able to allow it to reach out to the streaming services while blocking the addresses related to the data collection. A good router should allow this.
That said, I have Google Fi and it was $299 for me when I ordered through my app - so that's interesting. Also, I picked up the 3a and have been enjoying it so far. It being a half an inch taller than the original is a bit annoying, but whatcha gonna do.
Thanks for all the confirmation and recommendations, thread!
I feel like there's probably an actual market for dumb TVs for tech-averse olds and IT people who know that shit is bad and have seen how awful its implementation can be.
At this point it wouldn't shock me to hear smart TVs want to put cameras in them to capture you watching so they can better advertise.
Apple Card's credit rating system might be sexist?
Impossible; we didn't write "be sexist!" into the algorithm!
If they're directly using gender they're in for a shitstorm, as financial services regs are pretty ironclad on that. More likely some weird proxy for gender was included, and the real answer is poor model oversight from an ethical standpoint. Ethics/bias review on models is still pretty early-stage.
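A tiny synthetic demo of the proxy problem: the approval rule below never sees gender, but it thresholds a made-up feature whose distribution differs between groups, so approval rates still split. All the data here is fabricated for illustration:

```python
import random

# Synthetic "proxy for gender" demo: the approval rule is gender-blind,
# but a correlated feature reproduces the disparity anyway. Fabricated data.

rng = random.Random(42)

people = []
for _ in range(10_000):
    gender = rng.choice(["f", "m"])
    # Hypothetical correlated feature (say, a spending-mix score), shifted
    # by 1.0 between the two groups.
    proxy = rng.gauss(0.0 if gender == "f" else 1.0, 1.0)
    people.append((gender, proxy))

def approve(proxy_score):       # "gender-blind": only sees the proxy
    return proxy_score > 0.5

def approval_rate(g):
    group = [p for p in people if p[0] == g]
    return sum(approve(p[1]) for p in group) / len(group)
```

Despite the blind rule, the two groups end up with a gap of dozens of percentage points, which is exactly the kind of thing a bias review is supposed to catch.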
I just remember the launch landing amongst tech-tubers as more of a thud than a bang.
Jesus that Therac-25 thing is terrifying
It's a cautionary tale in CS programs for a reason.
@DrZiplock
Pixel 3A will be $299.99 at Amazon, Best Buy, and B&H Photo
Pixel 3A XL will be $379 at Amazon, Best Buy, and B&H Photo
https://www.theverge.com/good-deals/2019/11/8/20955411/black-friday-google-deals-cyber-monday-pixel-4-3a-xl-nest-hub-wifi-pixelbook
don't trust this, because I'm just spitballing, but
the data analytics could be UDP going out, while the request for content would be TCP in/out and UDP in. Block all outbound UDP traffic from the TV?
I haven't done a PC build in almost a decade, so hopefully nothing explodes too badly.
I have it on good authority that Veldrin always screws with confidence.
Oh nice, thanks for sharing the info!
https://linustechtips.com/main/topic/777448-pizzeria-billboard-in-oslo-analyzes-people/
Humans had a good run.
Pretty short compared to dinosaurs tbh
"We made these huge buildings"
"Bitch I am a building"