RandomHajile - Not actually a Snatcher - The New Kremlin - Registered User, regular
It's interesting that you can sort of tell what is happening if you look at the binary for the ASCII. The fifth bit is stuck at 0, but only on every other letter, I think? Probably not Ngrmad, in any case.
Sorry for the wide images, I am feeling too lazy to re-crop.
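For the curious, here's a minimal sketch of what that kind of stuck bit does to ASCII. Taking "fifth bit" to mean the bit worth 0x10, and the every-other-letter pattern literally, are both guesses from the post above, not the actual puzzle:

```python
# Assumption: "fifth bit" = the bit worth 0x10 (counting from bit 1 at the
# bottom), stuck at 0 on every other character. Both are guesses.
def stick_fifth_bit(text: str) -> str:
    out = []
    for i, ch in enumerate(text):
        if i % 2 == 0:  # the stuck bit only hits every other character
            out.append(chr(ord(ch) & ~0x10))  # force the 0x10 bit to 0
        else:
            out.append(ch)
    return "".join(out)

# Characters that had the 0x10 bit set drop by 16, e.g. 'S' -> 'C', 'R' -> 'B':
print(stick_fifth_bit("SYSTEM FAILURE"))  # -> "CYCTEM FAILUBE"
```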
For those of you who saw their tiny shelf mounted firewall and thought "ummmm", they just released a rack mounted version. https://www.ubnt.com/unifi-switching-routing/unifi-security-gateway-pro-4/
It is pretty, I want it.
Athenor - Battle Hardened Optimist - The Skies of Hiigara - Registered User, regular
I was kind of surprised that we didn't go with Unifi/Ubiquiti stuff with our recent wireless bid.
Apparently they don't scale to our usage levels well, which is a shame. I mean.. yeah, anyone's gonna have trouble covering dorms or the quad when everyone could potentially have 4-5 devices connected to the wireless at once, and a student population of 20k. But I'd think they would have solutions for that.
They do, at least when I looked a few years ago. Or, at least, they looked pretty.
You guys have dual power supplies on your routers/switches? Is that a thing?
I once brought down a pair of very important racks because I bumped a power strip wrong and the fibre channel switch that connected the SAN to the VMs was on it, with no redundancy.
The fact that the power supply was built in and non-redundant always bugged me for such an expensive piece of IBM equipment. I guess in retrospect we could've used a Y-splitter or something (if it was supported), but.. ugh.
Wouldn't go without. Our core router currently has only a single power supply, and that's the only thing we have now that doesn't. When we had to change out UPSes in prep for a building move, we could have done it with zero downtime if it wasn't for that core router. We've also had a power supply go on a switch that would have taken about 45 people down had we not had redundancy. Or that one time we talk about less, where someone accidentally bumped a power cable on a device and no one noticed for like 3 weeks because of the redundancy.
Dual power supplies on everything is too nice to give up, and should be a requirement on anything that has a required uptime.
I don't think I've ever been in a position where I couldn't go "yeah you will lose internet for 30 minutes".
I don't think I'd want to be in a position where I couldn't say that either.
While not perfect, making your power supply as redundant as possible is one of the cheapest ways to shoot towards 99% uptime. If your server supports two power supplies, plug them into different UPSes (ideally on different circuits, but that gets more expensive). Then link both PSUs together so the system knows both are there and can help report on issues.
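A sketch of the "report on issues" half of that advice. This assumes a server whose BMC is reachable through the stock ipmitool CLI; "Power Supply" is a standard IPMI sensor type, but the exact sensor names and status strings vary by vendor, so treat it as a starting point rather than the one true check:

```python
import subprocess

def psu_sensors() -> list[str]:
    # "sdr type" filters the sensor data repository down to PSU sensors.
    result = subprocess.run(
        ["ipmitool", "sdr", "type", "Power Supply"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip().splitlines()

if __name__ == "__main__":
    for line in psu_sensors():
        print(line)
        # Healthy supplies usually read "Presence detected"; anything like
        # "Failure detected" or "AC lost" means a PSU or its UPS feed is gone.
        if "failure" in line.lower() or "ac lost" in line.lower():
            print("  ^-- dead supply or dead feed, go look")
```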
well yeah, servers
not a router or switch, fuck that noise, I ain't no datacenter
Athenor - Battle Hardened Optimist - The Skies of Hiigara - Registered User, regular
Here's what I told my uncle when he asked me for recommendations and I said "UPS":
What good is it if your server stays up but your workstations go down?
Now what good is it if your server stays up and your workstations stay up, but your switches and router go down? Especially if you are a "cloud" based company?
Even a $35 APC UPS goes a long way. Get your entire pipeline on a battery and you'll save your butt from losing data or having hardware die prematurely.
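In the same spirit, here's a sketch of keeping an eye on one of those cheap APC units. It assumes apcupsd is installed and talking to the UPS; STATUS, BCHARGE, and TIMELEFT are the usual apcaccess field names, but verify against your own output before trusting an alert to this:

```python
import subprocess

def ups_stats() -> dict[str, str]:
    out = subprocess.run(
        ["apcaccess", "status"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = {}
    for line in out.splitlines():
        # apcaccess prints "KEY : value" pairs, one per line.
        key, _, value = line.partition(":")
        stats[key.strip()] = value.strip()
    return stats

if __name__ == "__main__":
    s = ups_stats()
    print(f"UPS: {s.get('STATUS', '?')}, battery {s.get('BCHARGE', '?')}, "
          f"about {s.get('TIMELEFT', '?')} left")
    if s.get("STATUS") != "ONLINE":
        print("On battery (or worse): start shutting the pipeline down cleanly.")
```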
When you have racks on racks on racks, having everything redundant is nice.
Or even if you have overly complex user structures.
Sure, if all you need to do is send out an email to the 20 people in the office that the network will be down for 30 mins after hours, no problem.
When it's like, ok, I need to replace the power pole on this rack... There are 20 machines and three of them are vm hosts with another 20 vms each. Let's dig through and find the machine owners... Well, half of them have old DLs listed as contact... And this one vm host is a legacy pos we inherited and nobody will admit to using the vms, but every time one goes down something breaks and we get yelled at...
You end up spending half a day crafting an outage notice, and then someone complains and pushes it up the chain and the vp of dicksucking makes you push it back to the general maintenance window in two months that you know is gonna get postponed, yet again...
That's when you wish you had redundant power supplies.
Easier to ask forgiveness than permission.
I cannot express how much I hate this term. It's why my IT shop is currently hated by most of the university.. and it should not be used as protection for being sloppy or, worse yet, costing money or time.
We do when we can. Ninja decommed so many machines.
But there are plenty of cases where that'd get you shitcanned.
Like I said, I wouldn't want to be part of a business where "hey your internet might be down for 30 minutes" means people lose their fucking minds.
Welcome to any company larger than like, 50 people.
Internet is down, the company's down.
Athenor - Battle Hardened Optimist - The Skies of Hiigara - Registered User, regular
Those racks I brought down and that SAN/VM environment.. it was for a regional clinic. 13 shops used a server/client (terminal) setup to remote into those VMs. Luckily, LUCKILY, the outage happened at the very end of the day because I wasn't about to touch that environment during normal working hours. But I still caused end of day processing to be delayed quite significantly.
Another job, I was called on-site at 4 AM to help with a downed server. We managed to get it up by 7:30 AM. After the fact, the manager of the place told us that they could have lost out on tens to hundreds of thousands of dollars an hour due to not being able to interact with the stock market.
Then there's the time I worked from 6PM till 8:30 AM to recover the networking/internet of a firewall gone haywire after a full normal day at work.
I always tell my customers, there are 3 vital things that MUST remain functional at all costs: Email, printing, payroll. I'd throw internet in there as well, but the previous 3 pretty much require the internet to be up, so... Those are the lifeblood of the modern office, and my job as an admin is to make sure they are available. Sadly, the people in charge of the purse don't always agree with me, but they aren't held responsible either.
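For what it's worth, a dumb reachability check for those three is a ten-minute script. The hostnames and ports below are made-up placeholders, and a TCP connect is nowhere near a full health check, but it catches the plain "it's just down" case:

```python
import socket

# (host, port) pairs: mail, a network printer's JetDirect port, the payroll app.
# All placeholder names -- swap in your own.
SERVICES = {
    "email":    ("mail.example.com", 25),
    "printing": ("printer1.example.com", 9100),
    "payroll":  ("payroll.example.com", 443),
}

def is_up(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, unreachable, and timeouts
        return False

for name, (host, port) in SERVICES.items():
    state = "up" if is_up(host, port) else "DOWN"
    print(f"{name:8s} {host}:{port}  {state}")
```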
Not really. You could take a hospital's internet down without it affecting like... anything.
LAN? Sure.
Internet? Lol.
Every doctor is on your ass because they're waiting for faxes.
(The faxes come through one of those fax over IP solutions because you're not gonna keep supporting an analog fax, like fuck that shit)
"It'll be down for 30 minutes."
I guess I've got brass balls.
Athenor - Battle Hardened Optimist - The Skies of Hiigara - Registered User, regular
Eh, to each their own. The LAN is still the core switch. Theoretically your router & modem solution should be nearby, or in a properly racked demarcation point. Again.. not hard to throw a battery on that. And if you are going to go that far, shop around and find equipment that can do redundant power supplies... mostly focusing on the core switch again.
I kinda miss playing with that Layer 3 10Gbps switch at the old job...
I've rebooted a server in the middle of the day during a 'lunch' period because I didn't want to stay until 7 to do it. Whoops, looks like it rebooted itself. It was scheduled to do it during lunch but I was waiting until you were all out. It must've just done it anyways.
http://www.netgear.com/business/products/switches/modules-accessories/rps5412.aspx
Like I said, first hit. But yeah. I've seen that kind of thing before, where you can daisy chain equipment together.
Ah, so this is a specially designed piece of equipment. These switches and firewalls don't actually come with two PSUs inside them.
Edit: Heh. The Amazon page for that RPS calls it a laptop accessory - charger & adapter. That, right there, is overkill.
Oh man would 'oops' not fly in my shop. Not with how much change and release management is going on. No one trusts us.
Yeah I give no fucks, they don't know how it works so I can abuse it like that.
Not like they can stop it once it happens. Then I say "I will research why it happened" and then they forget in a few hours because life is too short to be mad about shit like that.
And attitudes like this, ladies and gentlemen, are why everyone in the world hates their IT department.
Trust me when I say they'll hate you for literally any reason.
Oh you rebooted it at 7:00 pm during a scheduled outage? But I had a fucking important thing I needed to do!
Oh you came in at 2:00 am? I'm out of the country, I'm bored!
People hate you because your department is a void that sucks up budgets, and because a lot of people are condescending in general.
You're positing they hate IT because, while I'm being 100% nice, they subtly know that I am a dick and am rebooting it on purpose?
Please. People are fucking morons.
With cloud based apps becoming commonplace, not having internet is getting critical.
The calculation is this:
How many people are on your network?
What's the average pay/hour (feel free to totally SWAG this one)?
How long will it be down?
So - 20 employees making an average of 40 bucks an hour with a 1 hour outage (because it never comes up like it's supposed to) is 20x40x1 = $800
50 employees?
50x40x1 = $2000
Lost productivity adds up quick. Especially when the way you justify equipment or redundancy is "last year we lost x to downtime, this year we'll lose near 0 if we do this".
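That back-of-the-envelope math, as a function you can wave at the people holding the purse:

```python
# The downtime math above: people x pay x duration.
def downtime_cost(headcount: int, avg_hourly_pay: float, hours_down: float) -> float:
    """Crude lost-productivity estimate for an outage."""
    return headcount * avg_hourly_pay * hours_down

print(downtime_cost(20, 40, 1))  # 800  -- the 20-person office above
print(downtime_cost(50, 40, 1))  # 2000 -- the 50-person office
```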
Rebooting the router/switches maybe takes 5 minutes.
Even if it takes 15 minutes, go take a break.
Like I said, I think we are grossly exaggerating just how bad that is, even in a large company.
Eh, we've had major network switches die on us and it impacts a lot, as systems within a large company typically talk to each other on a continuing basis and any disruption causes a whole pile of issues and rework.