- Did you try rebooting the computer?
- Yes.
- LIAR.
I've been in IT 15+ years, and nobody who says they rebooted their computer or cleared their cookies/cache when asked by IT has ever actually done that.
The few people who have done that before calling support will always tell you up front the troubleshooting steps they've taken.
"I don't have the time to do that!"
"That's your solution for everything!"
motherfucker I'm going to reboot it regardless of what you say so you're losing the time anyways, reboot it.
That_Guy
edited December 2021
When they refuse to reboot their computer to solve a simple problem you just open a remote command line and force the issue.
"Oh well let me take a look here. Click, click, click. Oh no, it looks like your computer started rebooting on its own. However could this have happened?"
Unironically: do you have anything open that you need to save? Ok, this is going to reboot your computer, you should see it shutting down now.
RandomHajile
“Ah, must be Windows Update running a bit behind!”
Speaking of rebooting computers, I'm poking through our patch management system for the first time. Looks like roughly 20% of our fleet is sitting at reboot pending.....
Oh how I miss the days of managing a small environment.
Even though our patching software prompts users to reboot, we also implemented a scheduled-task GPO that looks at all the spots in the registry where Windows flags a pending reboot. It runs every night at 1am (or at first login after a missed run); if a reboot is needed, it warns the user that the machine is going to reboot soon, and if they do nothing it reboots automatically after 30 minutes. It has drastically cut down on that issue.
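For reference, a rough Python sketch of that check. The registry paths are the well-known Windows pending-reboot flags; the lookup callables are placeholders for `winreg.OpenKey` / `winreg.QueryValueEx` against HKLM, injected so the decision logic itself can be tested off-box:

```python
# Well-known registry locations where Windows flags a pending reboot.
PENDING_KEYS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\RebootPending",
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired",
]
PENDING_VALUES = [
    # (subkey, value name): a reboot is pending if the value exists and is non-empty.
    (r"SYSTEM\CurrentControlSet\Control\Session Manager", "PendingFileRenameOperations"),
]

def reboot_pending(key_exists, value_of):
    """Decide whether a reboot is pending.

    key_exists: callable(subkey_path) -> bool
    value_of:   callable(subkey_path, value_name) -> value or None
    On a real box you'd back these with winreg calls against HKLM.
    """
    if any(key_exists(key) for key in PENDING_KEYS):
        return True
    for key, name in PENDING_VALUES:
        if value_of(key, name):  # non-empty PendingFileRenameOperations counts
            return True
    return False
```

The actual nightly task would run something equivalent to this, then pop the "rebooting in 30 minutes" prompt if it returns true.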
Back when I was on the service team, I used to let managers and important people skip the line and get immediate support.
Shit like this is why IT is perpetually understaffed.
If you insulate decision-makers from the ramifications of IT understaffing, then you never get adequate staffing.
I let the managers THINK they are making decisions. The real key is to come up with solutions before they can. It's a tricky game to play but basically you give them several options, all but 1 are obviously terrible and wrong. Even if you have to fudge some numbers to make the other options less attractive. Bottom line, managers should be approving your solutions not coming up with their own.
That isn't really what I'm talking about though.
I largely agree with you, I just think we're talking about different things. What you're talking about is good advice for things like network designs, or helpdesk procedures, or purchasing. It's a strong philosophy. It's similar to the idea of giving children forced choices. Offer 2-3 options that you find amenable and let the individual choose between those specific options. And you can nudge one of the options to be more attractive than the other.
I'm talking about higher-level, department-staffing stuff, like whether IT can add 6 new employees this year, or zero new employees, or gets a headcount cut. I can give my recommendations, but that's a much bigger decision (and subject to a lot more resistance and inertia).
If the CEO, COO, CIO, and President of HR all get a special VIP line to the helpdesk, then they won't feel the pain if helpdesk needs more people.
Does that mean they should wait in the standard queue every time? No, because it has to be balanced with reputation management. Every once in a while, somebody should rush in like a superhero and impress the hell out of the C-level execs and VIPs. Show them that our individuals are talented. They can have a little bit of VIP priority, as a treat.
For the most part, make them eat the dog food. If helpdesk SLAs are too slow for the CEO, they're too slow for the lowest tier of staff.
If you're not at a company using actual data like queue times, ticket surges, and overall TTR and satisfaction metrics, then... well, really we're just working at different scales. But a VIP system makes perfect sense for a large organization that is using IT as a tool to keep the business running and things like "very expensive people sitting in queue" is obviously a dumb idea.
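The "a little bit of VIP priority, as a treat" compromise above can be sketched as a single queue where VIP tickets get a bounded head start rather than an express lane. This is purely illustrative; the 15-minute boost is an arbitrary knob, not anyone's actual SLA:

```python
import heapq

VIP_BOOST = 15 * 60  # seconds; a VIP ticket jumps at most 15 minutes ahead

class TicketQueue:
    """One queue ordered by effective submit time. VIPs get a capped
    head start, so a long-waiting regular ticket still beats a fresh
    VIP ticket and nobody starves."""

    def __init__(self):
        self._heap = []
        self._n = 0  # monotonic tie-breaker for equal effective times

    def submit(self, ticket_id, submit_time, vip=False):
        effective = submit_time - (VIP_BOOST if vip else 0)
        heapq.heappush(self._heap, (effective, self._n, ticket_id))
        self._n += 1

    def next_ticket(self):
        return heapq.heappop(self._heap)[2]
```

A VIP ticket filed 10 minutes after a regular one jumps ahead; one filed 30 minutes later waits its turn.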
Before the forums transition to holiday mode, let me just say that I hope everyone has a quiet, ticket free, server explosion free, cloud outage free, and otherwise uneventful holiday over the next few days.
I do work at a company (and have previously worked at others) that use(s) queue times, ticket load, and satisfaction metrics. Sometimes upper management pays attention to that data, but in my (admittedly anecdotal) experience, they usually don't.
I enjoy that it only affected Exchange 2016 and 2019. So this old POS Exchange 2010 server, with a few users who haven't been migrated to O365 yet (don't ask, I don't know), is still a-ok. :rotate:
Sure, no problem. I, the person who has been working here for 3 weeks, should have no trouble whipping up a disaster recovery plan by end of Thursday. And it will also be extremely accurate, since I'm very familiar with the systems, how it all works, and what the business priorities are.
It's going to be a great document.
Item #1 on the plan: Disaster Recovery meetings 1/month to revise the plan
It's of course going to be a shitshow, but it's a great way to learn, and every disaster recovery plan is probably a shitshow beyond the very specific immediate circumstances.
Long term it's probably near worthless (though better than what they already have), but it's also a great test and a great way to learn.
Honestly, the start of a disaster recovery plan is just, "ok what would we need to recover." And i agree, a new person would be just as good at ferreting everything out and would benefit from it at the same time.
Yep, it's a good way to introduce someone to the systems and people while getting a fresh perspective. On a short fuse though? Ehhhhhhh....
the best part is that we're planning on moving most of our workloads into Azure by the end of the year so most of it is going to be irrelevant in 12 months or less anyway.
That_Guy
Just hang up a sign saying "Cloud's fucked, going home" then leave for the day. The best part of having everything in the cloud is that outages aren't your problem anymore.
oh my sweet summer child
one time, when an internet outage caused by a fiber cut down the road caused our cloud stuff to be inaccessible, the CTO wanted the IT department to activate a bunch of hotspots ASAP and hand them out along with laptops
so, here's some sick hax on how you can defeat two-factor authentication (assuming you've already harvested the user's password)
log in as the user to a website that uses 2FA
let it push a 2FA request to the user's phone
do it again
do it again
do it until you get locked out
pivot to a different user
push to the phone
do it again
eventually some user is going to just tap "allow" on the 2fa popup on their phone without thinking about it, which will allow the attacker in
anyway I just got my company's most recent penetration test results
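Spelled out as a toy simulation of that push-bombing loop. Everything here is hypothetical: `push_spray`, the canned response lists, and the lockout threshold are illustrative, not how any real identity provider models it:

```python
def push_spray(users, lockout_after):
    """Simulate MFA push-bombing: hammer each account with push prompts
    until the user approves one or the account locks out, then pivot.

    users: insertion-ordered dict of username -> list of responses
           ("deny"/"approve"), in the order the user would give them.
    Returns (username, attempt_number) for the first approval, or None
    if every account locks out or runs dry.
    """
    for user, responses in users.items():
        for attempt, response in enumerate(responses[:lockout_after], start=1):
            if response == "approve":
                return (user, attempt)  # attacker is in
        # lockout reached (or user never approved): pivot to next account
    return None
```

The takeaway matches the post: the attacker doesn't need everyone to slip, just one tired thumb before the lockout counter hits.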
This is why I recommended to our director of security that we not use the one-touch authenticator app for our general employee group's MFA. It's too easy for them to get trained into just pressing Yes any time it pops up on their phone. At least with a code to type in, they'd be forced to interact with the application making the request. That recommendation was ignored.
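For the curious, the "code you type in" style is just HOTP/TOTP (RFC 4226/6238), which the Python standard library can compute directly. A minimal sketch of the RFC 4226 algorithm, using the RFC's own test secret:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP. TOTP (the usual authenticator-app code) is just
    hotp(secret, int(time.time()) // 30)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test secret produces 755224 at counter 0.
print(hotp(b"12345678901234567890", 0))  # → 755224
```

The phishing-resistance argument in the post is that the user has to read the prompt and type the code, rather than reflexively tapping Approve.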
Considering that if that happens, Azure AD is down and basically the entirety of the business world is down with it, I'd say "go to an early lunch".
There are a lot of situations where your cloud resources in Azure might be inaccessible without it being "Azure, everywhere, is down."
Fiber cut outside our HQ as the example I used above
I mean, things like a fiber cut just mean you need redundant access and the will to pay for it. Otherwise, large data transfer loads just wait while you send people home to WFH on other internet connections. I don't think anything about that is materially different from how you'd have handled it in the on-prem era: while many on-prem resources would still be available, plenty of consoles, tooling, etc. would be in a failure state due to lack of connectivity. If anything, Azure makes a fiber cut less impactful, since it wouldn't take your external-facing stuff offline when the hosting is in Azure rather than your own data center.
(I'm not in the support group for this but I have to manage contractors in the office)
We've had a general network outage for 3 days now. Only some people can connect via hotspots. I can only use webmail at home despite having a valid/working VPN connection.
Apparently one of the data centers is "out" but this is bordering on ridiculous.
Has anyone heard if the Pax River area (I think MD, or maybe VA) got hit hard by the snow on Tues?
I have to be in the office but I can't do a goddamn thing and I hate it. I had to hand write paperwork for safely locking out a crane yesterday afternoon.
Why doesn't the gov have network redundancy?
Related: the most recent hurricane season caused a distance support site to be down for a week because its data center/server was in New Orleans. One of the users replied all to the outage notification:
"Has there been any thought to moving the server above sea level? I'm asking for a friend."
https://techcommunity.microsoft.com/t5/exchange-team-blog/email-stuck-in-exchange-on-premises-transport-queues/ba-p/3049447
Their solution is a bit duct-tapey, so I hope we get a proper solution soon.
It's okay, wunderbar wrote a great document on how to recover from a disaster so we're covered.
Take a nap.
And uploaded it to MS OneDrive, the only place where it'll be available.
Hang up the "I told you so" poster that I made for our environment architect, close the door, and turn off the telephone.
I mean, I could hang up a sign and close the door, but since I work from home the only living being that would piss off is the dog.
This is why we can't have nice things.
And at least in the fiber-cut case, especially now, companies can tell people to go work from home.
I mean if you have Azure and you don't have redundant Internet access then I don't know what the fuck you're doing.
"When End to End Circuit Diversity Isn't Really End to End: How to Make Customers Angry with a Single Backhoe"