You want a reliable printer? Get one with a maintenance contract.
Not only is the manufacturer incentivised to make it last, since they're on the hook when shit breaks, but when shit does break, they're the ones who have to fix it. As in, not you!
My two primary clients both use managed printer services and it's just the best. They fix anything that breaks, they aren't allowed to touch anything except their own hardware, and I never have to do more than deploy a new driver once in a while.
TL DR
Admins who set up network printers without a print server can burn. One client has a printer with jobs stuck in it from somewhere, but since it's installed manually on every workstation, they can't figure out who the jobs belong to. Now we have to send someone on site to go to each system and restart the print spooler.
Why not restart the spooler service on all the domain PCs remotely?
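For what it's worth, that remote restart is scriptable. A minimal sketch of the idea, driving the built-in `sc.exe` remote service controls from Python; the hostnames are placeholders, and it assumes admin rights and that remote service control is reachable on each box:

```python
import subprocess

def sc_command(host: str, verb: str) -> list[str]:
    """Build the sc.exe command line for one host and one action."""
    return ["sc", f"\\\\{host}", verb, "Spooler"]

def restart_spooler(hosts: list[str]) -> dict[str, bool]:
    """Stop then start the Print Spooler on each host; report success per host."""
    results: dict[str, bool] = {}
    for host in hosts:
        ok = True
        for verb in ("stop", "start"):
            proc = subprocess.run(sc_command(host, verb), capture_output=True)
            ok = ok and proc.returncode == 0
        results[host] = ok
    return results
```

In practice you'd feed it the machine list from AD and deal with offline hosts, but it beats walking desk to desk.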
Holy shit... did you hear about the Adult Friend Finder hack? Apparently AFF didn't want to pay someone (they're guessing someone sending them web traffic) around $250k. That someone has a hacker friend who decided to provide a little payback: he/she hacked AFF (40+ million records) and ransomed it for $100k. When AFF didn't pay, the hacker posted all of the information (usernames, zip codes, genders, preferences of apparently every kind possible, marital status, IPs, etc.) on a Tor darknet site.
The big issue is that now all 40 million of these users are social engineering targets and could easily get duped into clicking phishing links.
I'm actually more shocked at 40 million users on something like AFF, but I guess that's just me. Anywho, be prepared for people doing more stupid shit.
EDIT: Apparently it was 40 million, not 4 million. As if I needed any further reason to lose faith in humanity. This is mind boggling.
While I agree that being insensitive is an issue, so is being oversensitive.
lwt1973
I find it amusing when people say all servers should never be rebooted.
"He's sulking in his tent like Achilles! It's the Iliad?...from Homer?! READ A BOOK!!" -Handy
I find it amusing when people say all servers should never be rebooted.
Reminds me of the question a user asked when he was pissy about the systems going down, of which he had ample notice: "How about we just don't do updates anymore?"
...he was dead serious and didn't appreciate my laughter
Admins who set up network printers without a print server can burn. One client has a printer with jobs stuck in it from somewhere, but since it's installed manually on every workstation, they can't figure out who the jobs belong to. Now we have to send someone on site to go to each system and restart the print spooler.
Why not restart the spooler service on all the domain PCs remotely?
Is the job actually getting stuck in the printer's memory? Or is Windows refusing to stop resending it? If they didn't set it up with the print server's queue, then nothing should be getting stuck behind it once the local spooler gives up on it.
Of course, I say this having once dealt with a situation where our EHR software had a problem sending a certain document type to a particular Kyocera printer driver, which would actually cause the printer to fail to print anything until power-cycled. A few PCs on one unit had the printer locally installed as a TCP/IP printer instead of off the server, so they still had the old driver and were occasionally killing the printer until we managed to track down where it was coming from.
I find it amusing when people say all servers should never be rebooted.
Well not all servers. Just the ones I have to deal with.
There was one server that we had which pretty much never had to reboot. It was fantastic. It's one of the many things I miss about having a Novell server.
TL DR
As someone who manages Windows servers almost exclusively, I'm a big fan of nightly reboots. Workstations, too.
I find it amusing when people say all servers should never be rebooted.
Well not all servers.
There was one server that we had which pretty much never had to reboot. It was fantastic. It's one of the many things I miss about having a Novell server.
Uptimes from some of my servers:
Gateway/firewall/dhcp server: 269d 8h 18m
web server 1: 180d 17h 58m
web server 2: 252d 6h 1m
voip: 351d 12h 20m
storage server: 25d 4h 56m (had to reboot this one recently to swap hard drives)
All Linux servers of some variant.
Apothe0sis
Maybe we should just stop using servers, they just cause you trouble in the end.
Apothe0sis
In other news, Backup Exec is making me want to hurt myself and others.
The old Backup Exec server is no longer reliable and is 50% dead.
Currently paralyzed trying to decide whether to expend a lot of effort importing the existing stuff into the new server, given I know the data is incomplete and partly broken...
If my IT director asks me one more time why we need to subnet our network, with a 99% full IP range and all of our servers, workstations, printers, network gear, phones, and users' mobile phones on it, I am going to scream.
I've never understood the e-peen of "I never need to reboot my servers"
Most of our stuff gets rebooted at least once every quarter, whether for updates or just to clean things up. It's amazing how much better things run with regular reboots. And it isn't like the old days, when everything was on physical hardware and you could be scared the hardware wouldn't come back after a reboot. With everything virtualized, in a worst-case scenario you take a snapshot before the reboot, and if something goes sideways you revert afterward.
Also, if you have something that requires 100% uptime and it runs on only one server without any kind of failover/DR/second site, necessitating never rebooting because you can't go below 100%, you're doing it wrong.
In other news, Backup Exec is making me want to hurt myself and others.
The old Backup Exec server is no longer reliable and is 50% dead.
Currently paralyzed trying to decide whether to expend a lot of effort importing the existing stuff into the new server, given I know the data is incomplete and partly broken...
Backup Exec used to give me nightmares. At my last job we had a 2007-era version of it running on physical hardware because it was backing up to a 2005-era tape drive. (This was still the case when I left there a year ago, in 2014.) We'd basically do a small rain dance every day hoping the backup would run, and if it didn't, or something broke, we were terrified we wouldn't be able to get it running again.
We had just started to move to a different solution for some backup stuff, but that was still the "off-site" option as of a year ago, and at that time there was nothing in the plans for a better off-site backup.
RandomHajile
Man, you guys, I've had a hell of a month. I'm just now digging my way out of the documentation and such.
- On 4/24, I lost enough hard drives in our SAN to take down the entire array. Basically, I had a fundamental misunderstanding of how the RAID LUNs were set up in our EQL. Luckily, we didn't lose any data (though I did have backups just in case). It took over a week to carefully migrate VMs over to a couple temporary NFS servers and back after all drives were replaced. That'll teach me to trust the other admin to replace hard drives expeditiously when he said he would.
- Then, on the first day I'd had off in around 10 days (for the day my wife was taking a licensure exam out of town), one of our old out-of-warranty domain servers went down, taking our main office DHCP/DNS/printing/outside trust with it. We were already partway through the DHCP migration (exported, hadn't imported yet). DNS was easy to fix, as it was already replicated, so it was just a matter of pointing machines with static DNS over to the new servers, half of which I had already done. Printers were easy too, as I already had most of the drivers installed and set up. But the worst part was that we have a dumb domain trust with a parent company, and I had been hounding them to migrate their trust pointers over to the new servers since I put them in back in October. They completed in 1 hour what I had been trying to get them to do for 6 months. Oh, and the best part? I got a call for this at 7AM, when my wife--after studying late into the night--wasn't planning on getting up until 9AM. Luckily, she passed her exam, but I got to work 2 hours from the hotel and 2 hours from a mall that morning, and then still had to go in when we got back for another 8 hours.
- Hey, you remember those temporary NFS servers I mentioned a couple paragraphs back? Before I had a chance to migrate the VMs back over to the SAN, one of them (new out of the box, I might add) had a memory failure...while I was out of the office again, this time for my wife's graduation ceremony. Luckily, that one was easily fixed, as it seems the RAM stick was jostled around in shipping.
- Okay, so of course, I'm on a cruise with very limited internet access, from 5/8 - 5/15. Everything's calmed down, right, so I can at least take a vacation, right? Well, on the evening of 5/12 (luckily, like 10 minutes after the backups finished), someone in HR got a CryptoLocker virus, and of course it encrypted every file that she had access to. Something like 500k files out of the 3M files we have total. As I was out in the middle of the ocean, all I could offer was an occasional suggestion. The backup system is one of my systems, so I tried to assist so that nobody else would screw it up, and thankfully my alternate admin did a good job. I still ended up being the one to clean up the encrypted copies of files when I got back. I figured out how to use the File Server Resource Manager to screen files and shut down file sharing if someone tries to copy certain filenames to the network.
- Also, did I mention our cruise went out of Baltimore?
- Then, on 5/14 (again, still on vacation), we lost another one of the old domain servers at a remote location. Luckily, after the other incident, we had pretty much migrated everything over except a couple machines still pointing at the server for DNS and a couple legacy printers.
The big takeaway is that I'm just one person, and too much rests on my shoulders out of our 6-person IT department. I keep getting pulled away from core work for initiatives, meetings, and training. Management pays lip service to our complaints, but we never add anyone new to the group. I really need someone hired to be my protégé, because everyone else in the department is busy or has an entirely different skillset.
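The file-screening idea mentioned above (blocking writes of known ransom-note filenames, which FSRM can do natively) boils down to pattern matching. A hedged sketch of the matching logic; the pattern list is illustrative, common CryptoLocker-era artifact names, not an authoritative signature set:

```python
import fnmatch
import os

# Hypothetical pattern list of common CryptoLocker-era artifact names.
# Treat these as illustrative starting points for your environment.
RANSOM_PATTERNS = [
    "DECRYPT_INSTRUCTION*",  # CryptoWall-style ransom notes
    "HELP_DECRYPT*",
    "*.encrypted",           # generic encrypted-file extension
]

def is_ransom_artifact(filename: str) -> bool:
    """True if a bare filename matches any known ransom-artifact pattern."""
    name = filename.upper()
    return any(fnmatch.fnmatch(name, pat.upper()) for pat in RANSOM_PATTERNS)

def flag_paths(paths: list[str]) -> list[str]:
    """Return the paths whose basenames look like ransomware artifacts."""
    return [p for p in paths if is_ransom_artifact(os.path.basename(p))]
```

The FSRM version does the same match on write and can fire a script, e.g. one that kills the share, which is what stops the encryption from spreading.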
Any idea how to clean up an old Exchange SSL cert that is no longer in use? A client went from on-site Exchange to Office 365; their old SSL cert just expired, and now autodiscover reports the cert is out of date, since it apparently checks the domain name for authentication before looking to 365.
Because if you're going to attempt to squeeze that big black monster into your slot you will need to be able to take at least 12 inches or else you're going to have a bad time...
Fuck fuck fucker fucking fuck fucks.... I am so fucking sick and tired of dealing with backups: the drives, the shitty software, the stupid fucking logs that make my eyes bleed just to determine if compression is indeed working, the god damn license and support bullshit, the fucking different everything on every god damn level of fucktatude...
Mainly because I am the backup duder*, I think I deserve better pay. No one else will want to do this shit, so I think there's value in that.
*as in the only person who handles the backup system, and when anyone else does, they don't do it right at all.
1) Server at a remote site wouldn't come up after a power hit
2) Drove two hours out there after over-the-phone troubleshooting with on-site staff got nothing working
3) Got on site - couldn't get the server to boot, not even a video signal. Tried new cables, reseated the RAM, etc.
4) Took it and drove two hours back to my office
5) Plugged the thing into the test bench and got ready to troubleshoot more
6) Server fucking boots up normally on the first go
Apothe0sis
Any idea how to clean up an old Exchange SSL cert that is no longer in use? A client went from on-site Exchange to Office 365; their old SSL cert just expired, and now autodiscover reports the cert is out of date, since it apparently checks the domain name for authentication before looking to 365.
I don't understand - if they're on 365, where is the old cert coming from? Can you be more specific?
Also, can't you just change their autodiscover record? Typically it's just a CNAME for their Exchange cluster, so if that now points to 365, aren't you shiny and chrome?
Otherwise you can use the Exchange PowerShell dealie and Get-ExchangeCertificate and related cmdlets.
So I've run into a really weird error. We have a screensaver machine policy in SYSVOL. There are two files in this directory that I cannot take ownership of, even with the root admin account. I've tried forcing inheritance, taking ownership, even setting the ACL manually via PowerShell, and nothing. How do I force the ownership down? It says the current owner cannot be displayed, which leads me to believe it's not an AD account. Any ideas anyone might have would be great. The domain functional level is 2008 R2, as is the domain controller.
Any idea how to clean up an old Exchange SSL cert that is no longer in use? A client went from on-site Exchange to Office 365; their old SSL cert just expired, and now autodiscover reports the cert is out of date, since it apparently checks the domain name for authentication before looking to 365.
I don't understand - if they're on 365, where is the old cert coming from? Can you be more specific?
Also, can't you just change their autodiscover record? Typically it's just a CNAME for their Exchange cluster, so if that now points to 365, aren't you shiny and chrome?
Otherwise you can use the Exchange PowerShell dealie and Get-ExchangeCertificate and related cmdlets.
I haven't been able to track down exactly where the old cert is coming from. It's not on the internal server, but it is somehow part of autodiscover. As for Exchange, it hasn't been installed for over a year, so checking PowerShell won't get me anywhere there. I may have to look at the O365 side to see what certs are listed there.
Think I got this cert sorted: their 365 setup was never completed, so they were missing records in their external DNS, and the autodiscover record was wrong.
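For anyone hitting the same thing, the external records an Office 365 cutover needs look roughly like this (zone-file style sketch; contoso.com is a placeholder domain, the CNAME target is Microsoft's standard autodiscover endpoint, and the MX host follows the tenant-specific *.mail.protection.outlook.com pattern shown in the 365 admin portal):

```
; Office 365 external DNS records (sketch)
autodiscover.contoso.com.  3600  IN  CNAME  autodiscover.outlook.com.
contoso.com.               3600  IN  MX     0 contoso-com.mail.protection.outlook.com.
```

If the old on-site autodiscover record is still around, clients will keep hitting the dead server's cert first, which matches the symptom above.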
1) Server at a remote site wouldn't come up after a power hit
2) Drove two hours out there after over-the-phone troubleshooting with on-site staff got nothing working
3) Got on site - couldn't get the server to boot, not even a video signal. Tried new cables, reseated the RAM, etc.
4) Took it and drove two hours back to my office
5) Plugged the thing into the test bench and got ready to troubleshoot more
6) Server fucking boots up normally on the first go
At my previous job we had some Dell desktops that if they lost power while turned on, would not turn on again unless unplugged for a few seconds and plugged back in.
At my previous job we had some Dell desktops that if they lost power while turned on, would not turn on again unless unplugged for a few seconds and plugged back in.
I did that, but I guess 30 seconds or so wasn't enough.
Oh well, I didn't want to rebuild that dumb server anyway. All I had to do was set up DHCP on the router instead. I'll take it out next week, conveniently around lunch time.
As for the unplugged thing, I've seen that too - seemed to me at the time it was related to the power bricks. We had some weird all-in-one things that used them instead of just a plug.
RandomHajile
Man, you guys, I hate print servers.
(Though in this case, it's my own damn fault for trying to set up DNS aliases.)
At my previous job we had some Dell desktops that if they lost power while turned on, would not turn on again unless unplugged for a few seconds and plugged back in.
I did that, but I guess 30 seconds or so wasn't enough.
Oh well, I didn't want to rebuild that dumb server anyway. All I had to do was set up DHCP on the router instead. I'll take it out next week conveniently around lunch time.
As for the unplugged thing, I've seen that too - seemed to me at the time it was related to the power bricks. We had some weird all-in-one things that used them instead of just a plug.
Their ultra small machines get to be ultra small by not having a power supply in the case, instead being a brick outside. That had advantages and disadvantages.
Backing up your iPod is not my responsibility, so please stop asking me to help you with it on your work computer. *sigh*
Where I work, I would just direct them to the portion of the Computer Facility Usage Policy that they signed when they were hired, where it very clearly points out that you are not allowed to connect non-approved devices to a company PC.
Backing up your iPod is not my responsibility, so please stop asking me to help you with it on your work computer. *sigh*
Where I work, I would just direct them to the portion of the Computer Facility Usage Policy that they signed when they were hired, where it very clearly points out that you are not allowed to connect non-approved devices to a company PC.
We don't have that policy, because it is "inconvenient" to the end users. I must have forgotten to mention that I basically work at an adult day care.
I'm pretty sure most of us have our home routers locked down, but think about the normal person. Pretty freaking smart and scary at the same time.
Internal DNS entry pointing to the 365 address? Failing that, Exchange Shell https://technet.microsoft.com/en-us/library/bb201695(v=exchg.141).aspx
Sadly, the only source where I quickly could find the message was on BuzzFeed
However, thanks to the video on the link, the internet should be able to use this photo for a new meme sensation:
Like... a... boss...
Only when you need to patch them for stuff they forgot to publish as bugs that bring down major production instances.