Anybody have some experience of failover clustering on Server 2012? I'm trying to configure some RA proxy servers as a failover pair, and I'm not really sure if I'm doing it right.
I'm not sure whether my issues are down to my configuration being wrong, or to the fact that I'm using ancient software which definitely hasn't been tested on Server 2012. Probably the latter.
I have extensive failover experience on 2008; it's probably not vastly different.
Can you clarify "RA proxy", please? What are your servers actually doing?
Two servers which provide a remote access gateway to everything else in the network via a single public IP address. I'll use a static NAT on the external firewall to point to these servers' shared IP, and set them up as a failover pair simply for resilience.
So far I've set up a failover cluster with 2 nodes and a file share witness on my domain controller, and a generic service role for the RA software (NetOp). I'm only able to connect to this via the shared IP for the service, not via the cluster IP, or via any of the servers' individual IPs. Is this right? I expected to be able to use either the currently active node's IP, the cluster IP, or the specific service IP to connect, but maybe I'm just not understanding it right.
I have a feeling you're using a failover cluster for something you should be using a load-balanced cluster to do.
You could well be right, but I'm kind of using this as a way to learn how to set up failover clusters, since I'm going to have to set up some mirrored MSSQL servers in the near future, and SQL mirroring is being deprecated in favour of SQL AlwaysOn, which requires that the servers involved be in a failover cluster.
Yay, finished documentation for the latest workstation/laptop image so we can do a proper review and benchmark on common hardware we are reselling these days. Boo... we have none of that hardware in our IT office, as all of our stuff is 2-3 years old and usually a hand-me-down from someone who's "more important".
On the topic of anti-virus stuff, I had to deploy Kaspersky to 15 clients this summer, ranging from 10 to 150 computers each. It wasn't as bad as it could have been, but it sure could have been MUCH MUCH better. The main reason for the deployment was that Kaspersky's latest key files were unusable by older versions of the software, so you were forced to update to KES10 even if you were perfectly happy with KES8. Recently we had to shut off their System Watcher module as it would prevent our VPN software from creating a tunnel, but only on four computers in the company, which of course meant two project managers and the CEO...
You guys bitch and moan about BackupExec, I'm helping clients still running NTBackup, wbadmin, and xcopy scripts because they won't pay for real backup software...
There's better free backup software though...
Try this for old servers.
Okay, you are definitely using the wrong type of clustering. This needs to be a load balanced cluster and not a failover cluster.
The best 'rules of thumb' I can think of for which type of cluster to use:
If your cluster is going to directly touch dynamic data (example, SQL servers), it should probably be a failover cluster.
If your cluster is a gateway, portal, web application, or other application which only contains static data (example, IIS websites), or access to a service or static application (your remote access application, for example) it should probably be a load-balanced cluster.
SQL, for example, should run on a failover cluster. You can't have two servers running instances of the same database at the same time; SQL shits its pants and corrupts data. Web servers run on load balanced clusters. They're just providing a viewer to whatever data is being served up behind them. The web application/website itself never changes; you're just serving up multiple concurrent copies of it to incoming clients.
For your remote access, you're just providing a portal, an entry point. There's no data involved. A load balanced cluster is what you want. A load balanced cluster here will also give you the resilience you're looking for. Your NAT will hand off incoming requests to the cluster IP. One of the load balanced nodes will service that request, and they will subsequently either take turns if configured for round-robin, or the requests will route to the least busy node if configured that way. If one node dies, the other node simply takes all requests. If you want to take a node offline, you essentially pause its participation in the cluster until all the requests it's actively servicing drain out of the pool, and you're free to take it down/reboot/etc.
I understand you're using this as an opportunity to 'learn' about failover clusters, but the problem is you're using the wrong tool for the job. You're going to learn bad habits because you're failover clustering something that should be load balanced. In a failover cluster there are storage concerns that are much more nuanced and touchy than in a load balanced cluster. Additionally, if you do manage to succeed in setting it up in a serviceable fashion, you're going to be relying on a cluster that's simply the wrong tool for the job as your production system.
If you get too reliant on failover clusters for the wrong purposes, you'll end up using them for everything, and for the wrong thing. If you only have a hammer, all your problems look like nails. Learn to use a wrench when the job requires it.
If you want to learn about failover clusters, set up a lab and do it right. But for a remote access server pool, you need this to be load balanced.
Seriously, there's no reason not to use some sort of proper backup. Fuck, you can get free delta-block backups out of Areca.
Amusing update about my contract: They keep extending me. I was supposed to have been out of work just over a week ago, they're now asking me to stay through the end of the year to fix their SUS and AD mess. Shoulda made me an FTE, chumps.
I love the government. I need to fill a report through their website once a quarter through a csv that they give to us. All the information, except one box, is given to me from their website that I get the csv from. So I'm just filling out the csv using their website data, add one box, and then upload it to the same government website.
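In case anyone wants to automate away their own version of this nonsense, the quarterly ritual is basically a ten-line csv job. A sketch with made-up column names and a made-up added value, since I obviously don't know the real form:

```python
import csv

# Stand-in for the csv their website exports (the real columns are unknown).
with open("their_export.csv", "w", newline="") as f:
    f.write("id,amount\n1,100\n2,250\n")

# Read their export, add the one box they can't fill in themselves,
# and write it back out for upload. "extra_box" is a placeholder name.
with open("their_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for row in rows:
    row["extra_box"] = "42"  # the one value they actually need from us

with open("upload_me.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```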
Thanks for the advice. I've already set up a couple of NLB clusters for web servers, so I am a bit more familiar with them already.
I never knew the difference, so thanks for this post.
A load balancing cluster is like waiting in line at a bank. There's some sort of algorithm to decide who's "next": when it's your turn, you go to Teller 1 or 2 or 3, depending on who frees up first.
A failover cluster is like having a complete copy tucked away: ready to go, but not in use. Say you had 2 cars. One day you get into an accident and car 1 needs to go to the shop. While you fix car 1, you use car 2. You can still get around, but you're down a car.
Fear my l33t Visio skills. I couldn't find an image I liked on GIS quickly, so I made one. It's time for learning!
This is, in a rudimentary fashion, how clustering "works". You have a client that connects remotely via <whatever> to your load balanced cluster, the 3 servers at the top. Each is running a Remote Access application, or an IIS website, or SOMEthing. The data therein never changes. The IIS info is the same on each server, or the RA app is the same on each server, but the data is static and a separate copy is stored on each node. This is why you can load balance: because the data never changes on any node. Sure, people may interact with a website and "make changes", but they're not changing the site itself. They're changing the data on the back end, not on the exact RA/IIS server they contacted. Those servers are simply running an application that allows them access deeper into the guts of your environment.
In a load balanced cluster, when a request comes in to the balancer, be it hardware or software, any member of the pool can service the request. You can take one offline and the remainder will pick up the slack. This also lets you incrementally upgrade your environment. You can take one node offline, install a new copy of the RA software, or a new version of the website, and then put it back into the pool. If you do it gracefully, the node being taken offline will finish handling all the clients it's currently serving, but simply not pick up new requests as they come into the pool. Because of this, upgrades and maintenance to the load balanced environment can be done completely transparently to the end users. As long as your remaining nodes have the resources to shoulder the load, they won't even see a slowdown.
NLB (network load balanced) clusters can handle requests by four methods:
A simple round robin (each request simply goes to the next server in line, always)
Weighted round robin (servers take turns, but you kinda set a ratio; if one server is twice as powerful, it could be weighted to receive 2 connections for every connection sent to the lesser server)
Least connection (the server with the fewest active connections receives the next connection)
Load-based (the cluster uses a load balancing algorithm to distribute incoming requests to the server with the lightest "load", usually determined by available CPU or some other resource)
NLB gives you scalability and high availability.
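Those four distribution methods boil down to very little logic. A toy sketch in Python, with made-up server names and load numbers, just to show the selection rules:

```python
import itertools

servers = ["node1", "node2", "node3"]

# Simple round robin: every request goes to the next server in line.
rr = itertools.cycle(servers)

def round_robin():
    return next(rr)

# Weighted round robin: node1 is "twice as powerful", so it takes
# two connections for every one sent to the others.
weights = {"node1": 2, "node2": 1, "node3": 1}
wrr = itertools.cycle([s for s in servers for _ in range(weights[s])])

def weighted_round_robin():
    return next(wrr)

# Least connection: whichever server has the fewest active connections.
active = {"node1": 12, "node2": 7, "node3": 9}

def least_connection():
    return min(active, key=active.get)

# Load-based: whichever server reports the lightest load (CPU here).
cpu_load = {"node1": 0.80, "node2": 0.35, "node3": 0.55}

def load_based():
    return min(cpu_load, key=cpu_load.get)
```

A real balancer also tracks connection state and health, but the core "who gets the next request" decision is about this simple.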
Underneath your load balanced cluster, the actual request to handle the data falls to your failover cluster. Instead of a pool of servers that can pick up a request sent to the cluster's IP, only one node at a time is allowed to service an incoming request. This is generally because the server is directly touching fluid, dynamically changing data. SQL databases are the best simple example of this. If two non-clustered (or even load balance clustered) servers actively touch the database and make changes, the servers are not each aware of the changes being made by the other server. They could both update the same table at the same time to different values. This can corrupt the database and cause all kinds of other ugly shit.
The active node has an active connection to the datastore. The passive node does not touch the data in any way. Since all the data is kept in shared storage, the servers can hand off access and control of the data fairly seamlessly. The tlogs and the database are all right there, so when you roll over to the other node, it just picks up all the data in-state. It has all the tlogs it needs to pick right up where the other server left off. The actual SQL application files are stored on the local storage of the nodes, because they are simply the engine that manipulates the data. The data itself must be in a shared location, and must be manipulated by only one server at a time. In the realm of Windows failover clustering, the passive node actually takes its disk offline (just like you can with any disk in Disk Management).
The nodes have a heartbeat connection (usually a separate VLAN or even a physical connection when possible) where they basically ask each other non-goddamn-stop "you still up? yeah. How about you? yeah. cool." There's also a thing called a "quorum" member of the cluster, which is either a piece of shared storage, or another server, that serves as a witness in the event of failover clusters getting in a slap-fight over who should be in charge. If the nodes, for whatever reason, stopped talking to each other, they both check with their quorum witness. If the witness says the active node is still online, the passive node will sit tight. If the witness says the active node is unreachable, the passive node promotes itself and takes over the cluster. Quorums aren't necessary in an NLB cluster, because each node is autonomous. It's just there to do its assigned task with the application on its local storage, and doesn't give a shit what the other nodes are up to.
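The passive node's side of that quorum dance boils down to a few lines. A toy sketch, not how the actual cluster service is implemented:

```python
def passive_node_decision(heartbeat_ok: bool, witness_says_active_up: bool) -> str:
    """Decide what the passive node should do on each health-check cycle.

    heartbeat_ok: did the active node answer the heartbeat?
    witness_says_active_up: can the quorum witness still see the active node?
    """
    if heartbeat_ok:
        # Active node is fine; keep waiting.
        return "stay passive"
    if witness_says_active_up:
        # The heartbeat link is broken but the active node is alive.
        # Promoting now would give two active nodes (split-brain), so sit tight.
        return "stay passive"
    # Both the heartbeat and the witness agree the active node is gone:
    # promote and take over the cluster.
    return "promote to active"
```

The whole point of the witness is that second branch: a dead heartbeat link alone is never enough reason to promote.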
Now, the "passive" server could have an active connection to a second SQL instance. If you imagine a second SAN storage at the bottom of my diagram, node 1 could have an active connection to one piece of shared storage and interact with the data stored there, while node 2 has an active connection to a separate piece of shared storage and interacts with the data there. Both nodes can actively be running SQL at the same time, just not connected to the same instance.
Failover gives you an improved ability to handle and reduce downtime.
There's still a moment of "outage" when you fail over between nodes. The disk has to be brought online. It's not seamless, but it's a fucking lot better than having your only SQL server reboot or, god forbid, just die. The other node(s) will pick up automatically with no intervention, but SQL will be "down" for a few seconds, usually around 15, depending on the hardware involved. A lot of applications are tolerant of this, and can take the interrupt without shitting their pants. Depending on what the application is and how tolerant of timeouts it is, a failover *can* be "seamless" (outside of a 15-20 second hiccup) to end users. This lets you do nice things like patch your passive node, roll to it, and then patch the previously active node. If you have a fancy SAN that lets you span storage over multiple sites, you can also have failover nodes at physically separated locations in case of a physical disaster at one site.
There's a couple of other ways to handle the setup, transaction log or hot standby state synchronization, but they're a fuckawful mess compared to standard shared storage, and I'm just not going there today. :P
Sooo.......
..... I like clustering. :P
EDIT: I just know I'm gonna edit this post like 30 times because I'll reread it and just can't leave well enough alone.
Fuck. I should have just posted this.
/sigh.
They're great analogies because they work really well with the concept of data.
The tellers at the bank are just putting your money in the vault. They don't give a shit if you're depositing money with two tellers at once, they're only there to convey your money to the vault. Load balanced clusters are just there to convey your requests to where the money is.
With cars, you can't drive two cars at once because oh my god why would you drive two cars at once?
I'm planning on doing it the non-shared-storage way, where each server has its own copy of the database, and it's mirrored from the active one to the passive one, and then they switch roles on a failover.
I've been advised by people at work that this is something to look into anyway.
Currently the live system I'm supposed to be upgrading is running Windows Server 2003 and SQL Server 2005, just using standard SQL mirroring, and they share an IP address by using some ancient third-party application.
As of today, the gadgets on my office PC aren't functioning anymore. They're all gone from my desktop, and aren't selectable from the Control Panel or screen personalization.
That is not a big deal, but...
Because if you're going to attempt to squeeze that big black monster into your slot you will need to be able to take at least 12 inches or else you're going to have a bad time...
Also have to say, I really want to hit people when they try printing something and it doesn't come out, so they decide "maybe I didn't print it right, I'll just send it another 5 times".
Especially if it's out of paper.
Had one lady try to print a document 8 times.
It was 20 pages.
She decided after the 8th time she'd call me instead of investigating the printer itself, because getting up to walk 15 feet is difficult.
Meanwhile I have to hike across the entire office to their area.
Good news is I've run into this issue about 14 dozen times in the decade or so I've worked here, so it's "did you check if the toner was good? How about if there was paper? Oh you did? Well it's telling me there's no paper, maybe you should check again. Oh, how about that, there was no paper. Yes, the magical paper fairy kidnapped that ream you just put in there."
Totally stealing this from you. Hope you don't mind
Is it pretty standard for Symantec (or other big software companies) to immediately request remote access as soon as you get on the phone to one of their support guys?
Like literally the first thing I got asked for was an IP address, and to make sure that remote desktop was enabled.
Fuck right off. Even if I had the authority to create massive gaping security holes in a client network on a whim, I'm not about to do it for a ticket that can probably be solved with about 5 minutes of somebody who actually knows how this software works talking to me.
Agreed... but being on the other end of it, if the person so much as stumbles over any of the primary basic questions, I skip the entire rest of it and just get a remote session going, because I'm not going to frustrate myself spending 20 minutes explaining the reasoning behind everything I'm asking.
Bottom line we're all in IT at the end of the day and we all hate people.
Guys? Hay guys?
This is not remotely standard unless they're your outsourcer.
A direct, Remote Desktop connection? Bulllllllshiiiiit.
Totally. If a tech guy told me to open a massive hole like that in my firewall, I'd laugh, congratulate him on a good joke, and ask him what the WebEx code was.
That's what GoToAssist and Webex Support are for.
Extremely this. Were you actually dealing with Symantec here?
I totally realize that not everyone has a sensitivity issue with their ears like I do (anything foreign, including someone's breath, that touches my ears and I shut down into gross-out mode... it really freaks me out), but is it really necessary for users to breathe heavily into their phone while I'm on a support call with them? I can't hang up because it's my job to help them, but I'm doing my damnedest not to throw the phone across the room, scream "WHYYYYY?!?!?!?!", and then re-enact the scene from Star Trek II where that earwig-like creature goes into that guy's ear... come to think of it, that's probably where this whole thing started.
Regardless, I can't take them breathing like that directly into the phone.
While I agree that being insensitive is an issue, so is being oversensitive.
Yep.
I mean, the point was moot since there was no public IP address for that server, and the only way of getting to it was via a remote access gateway that I was sure as hell not giving out the information for, even on the off chance that this guy had the right software installed.
The problem ended up being that the backup exec database had been corrupted when it instantly shut itself down due to the C drive on the system hitting capacity. I had cleared out space and restarted the backup exec services, and only certain jobs were failing, with seemingly no connection between them.
In other news:
Company wants the functionality of SQL AlwaysOn Availability Groups, but is too cheap to shell out for Enterprise licenses. I think I can hack a solution together with powershell, SQL jobs, and failover clusters, but I'm not fucking happy about it...
All we need is for a pair of mirrored sql servers in an active/standby configuration that have a shared virtual IP address. SURELY there is a way to do this properly without an enterprise license. SURELY.
Anyone have a web filtering option for 75+ users that won't cost me $4000/year like OpenDNS wants to charge for their previously free DNS filtering? I'm looking at a freeware option called NxFilter, but so far it seems a bit of a pain to use with user logins. I've found one called DNS Redirector as well; I can probably convince management that the $300 price tag is worth it.
Because if you're going to attempt to squeeze that big black monster into your slot you will need to be able to take at least 12 inches or else you're going to have a bad time...
Yes, hello. Except the breather is one of our L1's.
As goofy as Symantec is, I've never had them ask me for something like this. I mean.. webex (or similar) is just how you do. Who the fuck asks for this, seriously? Chetan Sevade, we demand answers!
What functionality from AlwaysOn Availability Groups do they want? Base SQL Mirroring is Active/Passive... You aren't going to be able to do secondary replicas and stuff like that without Enterprise.
Automatic failover can be achieved by using Mirroring and having the application connect using a Failover Connection string. Why do you need a single virtual IP?
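For reference, a mirroring failover connection string (ADO.NET style) looks roughly like this — the server and database names here are just placeholders:

```
Server=SQL01;Failover Partner=SQL02;Database=AppDB;Integrated Security=True;
```

The client driver tries the principal (SQL01) first, and if it can't be reached, it automatically retries against the failover partner (SQL02) — no shared virtual IP involved.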
Failover connection strings are going to require us to update the ODBC drivers and DSNs on a few of our other systems, because they're using oooooooooold versions. Plus the versions of our internal software that they're running will need to be upgraded to a version that's compatible with these new ODBC drivers.
Plus there are PHP sites using these databases as well, and they probably aren't using some sensible method of connecting that'll be easy to change. It's pretty likely that it's just a direct IP address connection somewhere in there, because the code is just that bad.
But yeah, this is the method we're gonna try and test tomorrow.
Every time I've spoken to a Chetan Sevade they've used Webex. I am certain their support team is a Webex Support house.
Two servers which provide remote access gateway to everything else in the network via a single public IP address. I'll use a static NAT on the external firewall to point to these server's shared IP, and set them up as a failover pair simply for resilience.
So far I've set up a failover cluster with 2 nodes and a witness file on my domain controller, and a generic service role for the RA software (NetOp). I'm only able to connect to this via the shared IP for the service, not via the cluster IP, or via any of the server's individual IP's. Is this right? I expected that I should be able to use either the currently active node's IP, or the cluster IP, or the specific service IP to connect, but maybe I'm just not understanding it right.
You could well be right, but I'm kind of using this as way to learn about how to set up failover clusters. Since I'm going to have to set up some mirrored MSSQL servers in the near future, and SQL mirroring is being deprecated in favour of SQL AlwaysOn, which requires that the severs involved be in a failover cluster.
On the topic of Anti-virus stuff, I had to deploy Kaspersky to 15 clients this summer ranging from 10-150 computers. It wasn't as bad as it could have been, but it sure could have been MUCH MUCH better. The main reason for the deployment was also that Kaspersky's latest key files were unusable by older versions of the software, so you were forced to update to KES10 even if you were perfectly happy with KES8. Recently we had to shut off their System Watcher module as it would prevent our VPN software from creating a tunnel, but only on four computers in the company, which of course meant two project managers and the CEO...
You guys bitch and moan about BackupExec, I'm helping clients still running NTBackup, wbadmin, and xcopy scripts because they won't pay for real backup software...
Try this for old servers.
Okay, you are definitely using the wrong type of clustering. This needs to be a load balanced cluster and not a failover cluster.
The best 'rules of thumb' I can think of for which type of cluster to use:
If your cluster is going to directly touch dynamic data (example, SQL servers), it should probably be a failover cluster
If your cluster is a gateway, portal, web application, or other application which only contains static data (example, IIS websites), or provides access to a service or static application (your remote access application, for example), it should probably be a load-balanced cluster.
SQL, for example, should run on a failover cluster. You can't have two servers running instances of the same database at the same time; SQL shits its pants and corrupts data. Web servers run on load balanced clusters. They're just providing a viewer to whatever data is being served up behind them. The web application/website itself never changes, you're just serving up multiple concurrent copies of it to incoming clients.
For your remote access, you're just providing a portal, an entry point. There's no data involved. A load balanced cluster is what you want. A load balanced cluster here will also give you the resilience you're looking for. Your NAT will hand off incoming requests to the cluster IP. One of the load balanced nodes will service that request, and they will subsequently either take turns if configured for round-robin, or the requests will route to the least busy node if configured that way. If one node dies, the other node simply takes all requests. If you want to take a node offline, you essentially pause its participation in the cluster until all the requests it's actively servicing drain out of the pool, and you're free to take it down/reboot/etc.
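The two dispatch policies just described can be sketched in a few lines of Python — a toy illustration of the idea, not anything Windows NLB actually exposes, and the node names are made up:

```python
from itertools import cycle

nodes = ["node1", "node2"]

# Round-robin: the nodes simply take turns servicing incoming requests.
rr = cycle(nodes)

def dispatch_round_robin():
    return next(rr)

# Least-busy: route each request to whichever node is currently
# servicing the fewest requests.
active = {"node1": 0, "node2": 0}

def dispatch_least_busy():
    node = min(active, key=active.get)
    active[node] += 1
    return node
```

If one node dies, you'd just drop it from the pool and the remaining node catches everything — which is exactly the resilience behaviour described above.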
I understand you're using this as an opportunity to 'learn' about failover clusters, but the problem is you're using the wrong tool for the job. You're going to learn bad habits because you're failover clustering something that should be load balanced. In a failover cluster there are storage concerns that are much more nuanced and touchy than in a load balanced cluster. Additionally, if you do manage to set it up in a serviceable fashion, you're going to be relying on a cluster that's simply the wrong tool for the job as your production system.
If you get too reliant on failover clusters for the wrong purposes, you'll end up using them for everything, and for the wrong thing. If you only have a hammer, all your problems look like nails. Learn to use a wrench when the job requires it.
If you want to learn about failover clusters, set up a lab and do it right. But for a remote access server pool, you need this to be load balanced.
Seriously, there's no reason to not use some sort of proper backup. Fuck, you can get free delta-block backups out of Areca.
Thanks for the advice. I've already set up a couple of NLB clusters for web servers, so I am a bit more familiar with them already.
I never knew the difference, so thanks for this post.
A failover is like having something tucked away that's a complete copy. Ready to go, but not in use. Say you have 2 cars. One day you get into an accident and car 1 needs to go to the shop. While you fix car 1, you use car 2. You can still get around, but you're down a car.
This is, in a rudimentary fashion, how clustering "works". You have a client that connects remotely via <whatever> to your load balanced cluster, the 3 servers at the top. Each is running a Remote Access application, or an IIS website or SOMEthing. The data therein never changes. The IIS info is the same on each server, or the RA app is the same on each server, but the data is static and a separate copy is stored on each node. This is why you can load balance, because the data never changes on any client. Sure, people may interact with a website and "make changes", but they're not changing the site itself. They're changing the data on the back end, not on the exact RA/IIS server they contacted. Those servers are simply running an application that allows them access deeper into the guts of your environment.
In a load balanced cluster, when a request comes in to the balancer, be it hardware or software, any member of the pool can service the request. You can take one offline and the remainder will pick up the slack. This also lets you incrementally upgrade your environment. You can take one node offline, insert a new copy of the RA software, or a new version of the website, and then put it back into the pool. If you do it gracefully, the node being taken offline will handle all the clients it's currently serving, but simply not bother picking up new requests as they come into the pool. Because of this, upgrades and maintenance to the load balanced environment can be done completely transparently to the end users. As long as your remaining nodes have the resources to shoulder the load, they won't even see a slowdown.
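That graceful-drain behaviour boils down to a tiny state machine. Here's a simplified Python sketch of it — the class and method names are my own invention, not anything from the Windows clustering APIs:

```python
class Node:
    """A pool member that can be drained before being taken offline."""

    def __init__(self, name):
        self.name = name
        self.in_flight = set()   # requests this node is currently serving
        self.draining = False    # paused: finish existing work, take no new work

    def accept(self, request_id):
        # A draining node refuses new requests but keeps serving what it has.
        if self.draining:
            return False
        self.in_flight.add(request_id)
        return True

    def finish(self, request_id):
        self.in_flight.discard(request_id)

    def safe_to_take_offline(self):
        # Only once every in-flight request has drained out of the pool.
        return self.draining and not self.in_flight
```

Pause the node (`draining = True`), wait for `safe_to_take_offline()` to come back true, then reboot/patch/upgrade it and put it back in the pool.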
NLB (network load balanced) clusters can handle requests by four different methods.
NLB gives you scalability and high availability.
Underneath your load balanced cluster, the actual request to handle the data falls to your failover cluster. Instead of a pool of servers that can pick up a request sent to the cluster's IP, only one node at a time is allowed to service an incoming request. This is generally because the server is directly touching fluid, dynamically changing data. SQL databases are the best simple example of this. If two NON clustered (or load balance clustered even) servers actively touch the database and make changes, the servers are not each aware of the changes being made by the other server. They could both update the same table at the same time to different values. This can corrupt the database and all kinds of ugly shit.
The Active node has an active connection to the datastore. The passive node does not touch the data in any way. Since all the data is kept in shared storage, the servers can hand off access and control of the data fairly seamlessly. The tlogs and the database are all right there, so when you roll over to the other node, it just picks up all the data in-state. It has all the tlogs it needs to pick right up where the other server left off. The actual SQL application files are stored on the local storage of the nodes, because they are simply the engine that manipulates the data. The data itself must be in a shared location, and must be manipulated by only one server at a time. In the realm of Windows failover clustering, the passive node actually takes its disk offline (just like you can with any disk in the storage management portion of Server Management).
The nodes have a heartbeat connection (usually a separate VLAN or even physical connection when possible) where they basically ask each other non-goddamn-stop "you still up? yeah. How about you? yeah. cool." There's also a thing called a "quorum" member of the cluster, which is either a piece of shared storage, or another server, that serves as a witness in the event of failover clusters getting in a slap-fight over who should be in charge. If the nodes, for whatever reason, stopped talking to each other, they both check with their quorum witness. If the witness says the Active node is still online, the passive node will sit tight. If the witness says the Active node is unreachable, the passive node promotes itself and takes over the cluster. Quorums aren't necessary in an NLB cluster, cause each node is autonomous. It's just there to do its assigned task with the application on its local storage, and doesn't give a shit what the other nodes are up to.
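The witness tie-breaking amounts to a decision like this — a deliberately simplified Python sketch of the logic, not actual cluster service code:

```python
def passive_should_promote(heartbeat_ok, witness_says_active_up):
    """Should the passive node take over the cluster?

    heartbeat_ok: can the passive node still reach the active node directly?
    witness_says_active_up: can the quorum witness still see the active node?
    """
    if heartbeat_ok:
        return False  # active node is fine; sit tight
    if witness_says_active_up:
        return False  # only our heartbeat link broke, not the active node
    return True       # both views agree the active node is gone: promote
```

The middle case is the whole point of the witness: without it, a broken heartbeat link would make both nodes think they should be in charge (the classic split-brain).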
Now, the "passive" server could have an active connection to a second SQL instance. If you imagine a second SAN storage at the bottom of my diagram, Node 1 could have an active connection to one piece of shared storage and interact with the data stored there, while node 2 has an active connection to a separate piece of shared storage and interacts with the data there. Both nodes can actively be running SQL at the same time, just not connected to the same instance.
Failover gives you improved ability to handle and reduce downtimes.
There's still a moment of "outage" when you failover between nodes. The disk has to be brought online. It's not seamless, but it's a fucking lot better than having your only SQL server reboot or, god forbid, just die. The other node(s) will pick up automatically with no intervention, but SQL will be "down" for a few seconds, usually around 15 depending on the hardware involved. A lot of applications are tolerant of this, and can take the interrupt without shitting their pants. Depending on what the application is and how tolerant of timeouts it is, a failover *can* be "seamless" (outside of a 15-20 second hiccup) to end users. This lets you do nice things like patch your passive node, roll to it, and then patch the previously active node. If you have a fancy SAN that lets you span storage over multiple sites, you can also have failover nodes at physically separated locations in the case of physical disaster at one site.
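Since a failover shows up to clients as a short connection outage, tolerant applications usually just retry. A minimal sketch in Python — the attempt count and delay are illustrative numbers sized to ride out a ~15-second hiccup, not anything SQL-specific:

```python
import time

def connect_with_retry(connect, attempts=5, delay=5):
    """Retry a connection callable so a brief failover looks like a blip."""
    last_exc = None
    for _ in range(attempts):
        try:
            return connect()
        except ConnectionError as exc:
            last_exc = exc      # remember the failure and wait out the failover
            time.sleep(delay)
    raise last_exc              # still down after all attempts: give up loudly
```

With 5 attempts at 5-second spacing, a client rides through a typical node roll without the application ever seeing a hard failure.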
There's a couple other ways to handle the setup, Transaction Log or Hot Standby state synchronizations, but they're a fuckawful mess, compared to the standard shared storage, and I'm just not going there today. :P
Sooo.......
..... I like clustering. :P
EDIT: I just know I'm gonna edit this post like 30 times cause I'll reread it and just cant leave well enough alone.
Fuck. I should have just posted this.
/sigh.
They're great analogies because they work really well with the concept of data.
The tellers at the bank are just putting your money in the vault. They don't give a shit if you're depositing money with two tellers at once, they're only there to convey your money to the vault. Load balanced clusters are just there to convey your requests to where the money is.
With cars, you can't drive two cars at once because oh my god why would you drive two cars at once?
http://technet.microsoft.com/en-us/library/hh213151.aspx
At least I think that's what this does.
I've been advised by people at work that this is something to look into anyway.
Currently the live system I'm supposed to be upgrading is running Windows 03 and SQL 05, just using standard SQL mirroring, and they share an IP address by using some ancient third-party application.
Especially if it's out of paper.
Had one lady try to print a document 8 times.
It was 20 pages.
She decided after the 8th time she'd call me instead of investigating the printer itself, because getting up to walk 15 feet is difficult.
Meanwhile I have to hike across the entire office to their area.
Good news is I've run into this issue about 14 dozen times in the nearly a decade I've worked here, so it's "did you check if the toner was good? How about if there was paper? Oh you did? Well it's telling me there's no paper, maybe you should check again. Oh, how about that, there was no paper. Yes, the magical paper fairy kidnapped that ream that you just put in there."
Agreed... but being on the other end of it, if the person so much as stumbles with any of the primary basic questions I skip the entire rest of it and just get a remote session going because I'm not going to frustrate myself spending 20 minutes explaining the reasoning behind everything I'm asking.
Bottom line we're all in IT at the end of the day and we all hate people.
"Yes please thank you, what is the password?"
Windows 7 was a good compromise.
Honestly, yes.