Yep, it's a numbers game: after your 100th interview you no longer give a shit when you get a "sorry, but no" email.
Just keep pushing until it works.
Donovan Puppyfucker (A dagger in the dark is worth a thousand swords in the morning) Registered User regular
Jesus fuck, I think I'm going to warranty this bastard modem/router.
Sometimes it just wouldn't talk to my desktop at all: it'd have a connection and my phone would be pulling down stuff over wifi, but the computer couldn't see it at all. Restarting the computer a few times would typically fix that.
I've checked and double-checked all the settings tens of times, there's nothing awry there.
This morning I can't even log into the unit's own UI through 192.168.1.1. Zero response even after multiple power cycles.
Wifi's still humming along! Wired connection is turbo fucked though.
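When a router UI won't load, it can help to rule the browser out entirely and check whether the web UI port answers a TCP connection at all. A minimal sketch, assuming the usual 192.168.1.1 address and port 80 (both just common defaults, not anything specific to this unit):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Does the router's web UI answer at all, browser aside?
# print(port_open("192.168.1.1", 80))
```

If this returns False while wifi clients still work, the wired path (or that port's switch silicon) is the suspect, which matches the warranty-return diagnosis.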
Donovan Puppyfucker (A dagger in the dark is worth a thousand swords in the morning) Registered User regular
Yeah, it's the primary wired port.
The other three ports (LAN 1, 2, & 3) all give faultless service.
Warranty return it is!
Also, while that is a pretty graph, the experience of one person is not a very good dataset to base any broad analysis on.
Mostly just huntin' monsters.
XBL:Phenyhelm - 3DS:Phenyhelm
TL DR (Not at all confident in his reflexive opinions of things) Registered User regular
For those just tuning in, I emailed in my resignation letter yesterday.
This morning my supervisor, who I've butted heads with a fair amount over the past year, just came in and said he appreciated me as a coworker and that I could use him as a reference! He sounded like he might actually get choked up!
I... did not expect that reaction!
On the job front, my second interview was Friday and they asked me to turn in the paper application for the background check on Monday. If I get the job, I'll be thrilled. If I don't, I get to take the summer off and I'll probably mostly visit friends and fuck around and also get my Security+ during that time, which should be an easy dunk and will help address that minor gap in the resume.
TL DR (Not at all confident in his reflexive opinions of things) Registered User regular
A client wants an alternative domain for their customers to leave reviews, like MyContosoExperience.com, and they asked me to forward it to the Google review page for the associated location. The only thing is that it brings up existing reviews like "1 star - I just want to know if the mice are still an issue?"
I'm guessing that because the 'Write a review' button is a JavaScript object on the review page, there's no way to skip directly to that without running a proper webapp at least, right?
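For what it's worth, Google exposes a direct write-review deep link keyed by the location's Place ID, so the vanity domain can likely just 301-redirect to it with no webapp at all. A sketch building that URL (treat the URL format as an assumption that could change on Google's end; the place ID below is a placeholder):

```python
from urllib.parse import quote

def write_review_url(place_id: str) -> str:
    """Deep link that opens the 'Write a review' dialog for a location directly."""
    return "https://search.google.com/local/writereview?placeid=" + quote(place_id)

# MyContosoExperience.com could then 301-redirect here at the registrar/host level.
print(write_review_url("ChIJExamplePlaceId"))  # placeholder, not a real Place ID
```

The Place ID for a location can be looked up via Google's Place ID finder; the redirect itself is then just a one-line config at whoever hosts the domain.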
Also, that graph reminded me to once again say the thing I always say when job searches come up.
If you don't have field experience, computing is a field where you can generate it on your own. Set up a way to lab whatever you want to work on and do things with it.
The initiative and attempt will get through some of the filters and you'll have a better shot.
TL DR (Not at all confident in his reflexive opinions of things) Registered User regular
Man said, of the times we've butted heads in the last year, "You're a great worker, when you have the motivation. We weren't able to provide you with that motivation. Seriously, use me as a reference and I'll have nothing but good things to say, because I want that for you."
Jeez
Shadowfire (Vermont, in the middle of nowhere) Registered User regular
Usually it goes like
-someone calls in saying they can't open a file
-recognize encryption, possibly see ransom letter text document
-check for the file's owner or last modifier for clues as to source of infection
-confirm via the presence of encryption on user's local desktop (or TS session, etc)
-disable their account, change their password, disconnect their PC
-shut down any affected server volumes and restore from backup
I've never dealt with something of a higher tier of complexity like one might find bringing down the City of Baltimore, though - not sure what would bring down the systems mentioned, tbh. A lot of the malware truisms have changed over the years - things seem less likely to spread and infect other systems, likely because AV heuristics are much better. 'Nuke it from orbit' has become 'audit share permissions and fix things with a scalpel, with minimal disruption'.
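The first couple of steps in that runbook are scriptable. A rough sketch of the "find the ransom notes and the encrypted files" pass over a share, assuming you're hunting by filename (the note names and extensions here are illustrative placeholders, not a real IOC list):

```python
import os

# Illustrative only - real incidents need the actual note/extension names observed.
RANSOM_NOTES = {"readme.txt", "decrypt_instructions.txt", "how_to_recover.html"}
SUSPECT_EXTS = {".locked", ".encrypted", ".crypt"}

def triage_share(root):
    """Walk a share and split out ransom notes vs. likely-encrypted files."""
    notes, encrypted = [], []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if name.lower() in RANSOM_NOTES:
                notes.append(path)
            elif os.path.splitext(name)[1].lower() in SUSPECT_EXTS:
                encrypted.append(path)
    return notes, encrypted
```

From the encrypted-file list, checking owners/last modifiers (via share auditing or `ntfsinfo`-style tooling on the platform in question) points at the patient-zero account to disable.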
That_Guy (I don't wanna be that guy) Registered User regular
I doubt this would have gotten into the national spotlight if they had proper backups. A few years back one of our municipal clients got ransomware. Because we had good backups that were properly configured, we had them restored inside of 12 hours. As far as the public was concerned, they were temporarily down for emergency maintenance. The fact that they are still struggling with it several days later indicates a failure by their IT people to plan for disaster recovery. This shit happens all the time and it's trivially easy to plan for. A few hundred bucks a month to the right company and you will have nothing to worry about.
TL DR (Not at all confident in his reflexive opinions of things) Registered User regular
"We need to test disaster recovery of the EMS systems"
-Those systems can't be down
"We can't know how long it would take to recover from a disaster, or even whether full recovery is possible, unless we do a proper test, ideally on a regular interval"
-We need 100% uptime. Let me get back to you on this.
I feel attacked
not a doctor, not a lawyer, examples I use may not be fully researched so don't take out of context plz, don't @ me
RandomHajile (Not actually a Snatcher, The New Kremlin) Registered User regular
Sounds like the perfect time to ask for a 100% duplicate DR environment!
the problem in IT is there is so much backlog I do not have time to do disaster recovery procedures and tests like ever
This is literally my life right now. I've been told I need to drop everything to write up backup restore testing documents/guidelines by the end of the week, and schedule a partial recovery test by end of May.
I have about 25 other things I need to be doing but that's the rest of my week
This morning some sites are not working on our network unless you add the s to http. Including our own website. Presumably something in the SonicWall is blocking it? And of course this happens the morning after a board meeting where they ask if we really need so much technology, while I'm managing our internet via a 2011 Mac mini.
*throws all the macs in the garbage*
Thegreatcow (Lord of All Bacons, Washington State - It's Wet up here innit?) Registered User regular
the problem in IT is there is so much backlog I do not have time to do disaster recovery procedures and tests like ever
Oh goddamn this. At my previous job, we kept trying to arrange a generator test to confirm that things were working for our call center (our SLAs basically demand less than 15 minute downtime for our agents woo!) and it kept getting pushed back over and over again.
Then we finally managed to arrange a test! How you say?
Well when the Department of Water & Power say they're going to be shutting power off for the whole block due to restringing electricity poles, sounds like as good a time as any!
Good News: Generator works and kicked in when we lost power.
Bad News: The Back-UPS units we had deployed to the various agents' workstations to ensure that their computers don't shut off during the power transfer between line and generator pretty much all catastrophically failed after 1 minute of battery use, and all of the workstations turned off anyway. I guess that will happen when they've been in use for over 6 years without swapping out the battery!
Ultra Bad News: After presenting the potential bill for replacing the UPS batteries, management decided they'd chance the potential downtime and decided NOT to replace the UPS batteries on account of "well the generator came on immediately, they should be able to reboot just fine without issue".
Conveniently ignoring that several of the older workstations decided to also brick themselves due to the surge part of the UPS also failing during the power transfer and frying their power supplies or mobos.
A client demanded a quote for a UPS that would run their entire server rack for 8+ hours. They had expanded into an adjacent building, and the power there went out fairly regularly. The price was, as you imagine, substantial. I suggested that we have a chat about their specific needs, since none of the PCs have UPSs anyway and hey why not just run an electric line from a circuit in the good building literally one room over, but they just repeated their demand to rush the project through so lel
Thegreatcow (Lord of All Bacons, Washington State - It's Wet up here innit?) Registered User regular
Jeebus. 8 hours? Our rackmounts guaranteed I think maybe 1-2? If that? Granted, we figured we'd be on generator power, so I guess we didn't beef it up as much, but still. Those must have been some absolutely massive batteries.
Management/leadership has to prioritize the DR tests. That's why I like working in regulated environments, or at least for companies that have robust cybersecurity insurance policies that require proof of (among other things) annual DR tests.
every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.
My company's largest building has a 300-gallon diesel backup generator feeding all circuits, and there's one circuit that's run through a large UPS and feeds all of the network closets and the main server room.
We run annual generator and UPS tests, though Seattle power is flaky enough that we have a few major power outages per year.
We've never had a computer brick after the generator kicks in. I'm not sure if there's something our electricians did, that yours didn't do, to prevent that.
However, we did attempt to put in small personal APC UPSes at a number of desks. It takes about 5-10 seconds for the generator to kick in, but some folks felt their jobs were important enough that they shouldn't suffer any interruption during a power outage. Despite being brand new and appropriately sized for their loads, the APC UPSes had roughly a 50% failure rate. The UPSes not only failed to maintain power during the brief outage, but they also didn't come back online once the generator was on. And because most of those users refused to unplug the UPSes (or didn't understand that they needed to), what should have been a 10 second wait for their computers to come back on became a 20 minute wait while IT scrambled to unplug them ourselves.
I was super-salty about the small APC UPS thing, too. I got angry at my boss.
Had we made a sober risk assessment during a strategy meeting that we needed small APC UPSes at certain desks, that would have been fine.
No, it was that we'd already gone through three winter seasons since that building generator was installed - years to see how it works - and we were coming up on a big storm less than a week away. One executive raised a stink about how his team needed 100% continuity during the storm, and my boss, instead of saying 'sorry, maybe next time,' asked members of my team to go to a local office supply store, purchase small UPSes on company credit cards, and install them on short notice.
OH NO AN EXECUTIVE WANTS SOMETHING THAT MAKES IT AN EMERGENCY
TELL PEOPLE "NO" ONCE IN A WHILE? THAT'S CRAZY TALK
I don't get this obsession with 100% uptime. Rarely do businesses need that.
My system is offline and various parts of it get a reboot between the hours of 11:30pm and 1:00am. My docs work late nights sometimes, but having all our data at their fingertips 24/7/365 is just not feasible on our budget.
Our VPN server gets rebooted every day at 4am, because if we don't, every couple of weeks or so it just stops letting clients connect until a reboot, and we've never been able to figure out why. Rather do a 4am reboot every night than a 2pm reboot every couple of weeks.
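The usual tool for that kind of nightly reboot is a cron entry, but if you script it yourself, the "sleep until 4am" arithmetic is the part worth getting right. A minimal sketch (the shutdown command path is an assumption about the VPN box's OS):

```python
import datetime

def seconds_until(hour, minute=0, now=None):
    """Seconds from `now` until the next wall-clock hour:minute."""
    now = now or datetime.datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:          # already past today's slot, aim for tomorrow
        target += datetime.timedelta(days=1)
    return (target - now).total_seconds()

# Hypothetical reboot loop (assumes a Linux box; cron is the saner choice):
#   while True:
#       time.sleep(seconds_until(4))
#       subprocess.run(["/sbin/shutdown", "-r", "now"])
```

The `target <= now` rollover is the classic off-by-one: without it, a 5am run would try to sleep a negative number of seconds.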
This reminds me of a scenario we dealt with during Technical Expert training a couple years ago (the name of the training is wrong, but that's basically what it was).
Big ass ships use a giant gearbox to transmit power from their engines to the propeller. These gearboxes require oil flow during a power outage while the power system "spins down." There was a ship that was built with the wrong solenoid valve in the oil line (the valve would fail shut instead of fail open). The decision was made to install a UPS for the valve's power so that the gearbox had oil to spin down during a power failure. The ship went to sea and had a power failure. The UPS shit itself, and the gearbox seized. They had to basically dismantle the entire ass end of the ship to replace the gearbox.
That_Guy (I don't wanna be that guy) Registered User regular
We've got a pretty slick backup power system for our office/datacenter. The first line of defence is a room-sized battery backup system. It has enough reserve to power the entire datacenter (if it were totally full) for something like a half hour. All the network equipment has its own dedicated rack mount UPS good for more than an hour. All of our workstations have UPSs, but those aren't as important since we all have laptops too. Our datacenter uses 3 phase power so we needed 3 phase backup power too. For that we have a nearly 2 story tall 3 phase diesel generator that can run for something like 72 hours under full load on a single tank. That's just for the datacenter though. We have 4 natural gas generators that are supplied by the mains and will run indefinitely. We also have an emergency petrol generator that is enough to run the network equipment even if all the other generators fail.
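Back-of-the-envelope for sizing those battery systems: runtime is just usable battery energy over the load, derated for inverter losses. A sketch with made-up numbers (the 90% inverter efficiency and the capacities below are illustrative assumptions, not anyone's real specs):

```python
def runtime_hours(battery_wh, load_w, inverter_eff=0.9):
    """Rough UPS runtime estimate: usable stored energy divided by draw."""
    return battery_wh * inverter_eff / load_w

# e.g. a 10 kWh battery room carrying an 18 kW datacenter load:
print(round(runtime_hours(10_000, 18_000), 2))  # about half an hour
```

This is also why the 8-hour server-rack UPS quote upthread came out so substantial: holding even a modest rack for 8 hours means tens of kWh of batteries.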
A little town in southeast Iowa got hit with a ransomware attack and got caught with their pants down when attackers hit the library and city hall computer networks, holding all the files for ransom. They refused, and ended up going techless while they tried to fix and clean out everything. It also meant a return to the old-school way of doing things: you couldn't pay anything over the web because there was no active computer at city hall, and no active internet at the library, either. You actually had to go and pay or do everything in person.
It took them about six months to completely purge the stuff from the system, and now they have backups to the backups, and so on:
Some of those city officials ought to become speakers on the importance of backing up important stuff and making sure you can just purge and restore in case something like this happens.
I can has cheezburger, yes?
That_Guy (I don't wanna be that guy) Registered User regular
I've said it before and I'll say it again. If it's not backed up in at least 2 places, it's not backed up.
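One way to make "backed up in at least 2 places" checkable rather than aspirational: hash the original and every copy, and only call it backed up if everything matches. A minimal sketch:

```python
import hashlib
from pathlib import Path

def sha256(path):
    """Stream a file through SHA-256 so large backups don't blow out memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copies(original, copies):
    """A file counts as backed up only if every copy exists and hashes identically."""
    want = sha256(original)
    return all(Path(p).exists() and sha256(p) == want for p in copies)
```

Hash verification catches silent corruption and truncated transfers, but it still isn't a substitute for an actual restore test: matching hashes prove the bytes survived, not that the application can come back up from them.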
Check the target computer user's email!
Found the problem.
You don't get top of the line DR and security with entry admin wages and shoestring budgets.
Because we test it by having actual emergencies routinely.
Come to think of it, that's kinda true in my case.
"Well, we should test the backup for this site."
"There's a storm tonight."
"No need, then!"
Imagining your avatar throwing up half-chewed dog bone on the conference table
that only took all morning
my kingdom for an AD server
He'd lick it up after.
Power move.
Reminds me of what happened where some of my relatives live:
https://www.kwqc.com/content/news/City-of-Muscatine-responds-to-cyber-attack-498364541.html
https://www.kwqc.com/content/news/Muscatine-recovering-after-October-ransomware-attack-507376451.html
If you can't recover the backup, it's not backed up.