Hey, yeah, ok... So here is the deal.
I'm now employed (yay!). In addition to other tasks, I'm kinda responsible for making things like computers and my job actually work. One such computer is our file server. One rather key function of said server is actually having disk space available.
Due to a general inability of people to remove stuff, that last bit is looking like it's going to start being a tad bit of a problem in a couple of months. As I haven't exactly gone to school for computers, and at no point really had the need or resources to set up an SMB-class RAID at home, I'm ever so slightly out of my depth. While there are a few folks here who can toss about various IT buzzwords, their experience (and intestinal fortitude when it comes to fucking about with a computer) is rather less than mine.
Of course my first thought was "fuck it, it's other people's money. Let's just buy more drives." After a little bit of initial poking around, I determined that we are running a level 5 RAID on an IBM ServeRAID 7k. So it isn't just buying a drive and shoving it in; it's buying 4 SCSI (apparently) drives, throwing them in there, and a bit more configuration. Not that we don't piss away that kind of money here or anything, but it might not be required right now.
A little more dicking about shows that the total RAID size is 210 gigs, with 70 of that being reserved for parity shit, leaving 140 for a single logical drive. The logical drive is partitioned with 10 gigs for Windows Server 2003 and a couple of 40 gig partitions for data.
That leaves about 50 gigs of seemingly unpartitioned space. It shows up in Disk Manager as such.
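Quick sanity check on my arithmetic, for whoever wants to follow along. The drive count and per-drive size here are guesses I reverse-engineered from the totals, not something I pulled out of the ServeRAID config:

# Back-of-the-envelope RAID 5 math. Drive count/size are assumptions
# chosen to match the 210/70/140 gig figures above.
drives = 3
drive_size_gb = 70

raw_gb = drives * drive_size_gb       # 210 raw
parity_gb = drive_size_gb             # RAID 5 burns one drive's worth on parity
usable_gb = raw_gb - parity_gb        # 140 for the logical drive

partitions_gb = [10, 40, 40]          # OS partition plus the two data partitions
unallocated_gb = usable_gb - sum(partitions_gb)

print(raw_gb, usable_gb, unallocated_gb)   # 210 140 50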
For the life of me, I can't see any reason why this space is not being used. It's not as if the box is dual-booting and the partition is some odd *nix partition, which I've never seen show up as unpartitioned anyway. I don't believe it's some sort of silly backup partition.
Of course the person who originally set the system up is long gone, so just asking why the hell he set it up this way is not an option.
The system is backed up daily, but of course that would not apply to the partition in question, and I'd rather not fuck over some vital part of the RAID.
Anyway, I know we've got some smartish folks up in here, and input would be nice. I'm almost totally sure this is just empty space that we could actually be using, but I've never dicked with a configuration quite like this before.
They moistly come out at night, moistly.
An inventory of the components, or a screenshot of the RAID management software and/or Disk Manager, might be helpful (remember to hide identifiable information in the screenshot).
Also - RAIDs are nice, but I hope you back up the important stuff. If you lose 2 or more disks, you're toast. A RAID 5 can rebuild from one lost disk, but not two. Just a thought.
RAID 5 does tend to be the optimal balance between redundancy, performance, and drive space efficiency.
And I highly recommend a hot spare for your array. It's a hard drive cartridge equal to or greater in size than the existing drives, and all it does is sit in the array waiting for one of the other disks to die.
When that happens, the RAID 5 begins rebuilding the broken disk onto the hot spare, improving recovery time and reducing the chance of catastrophic failure from a second disk dying while your first failed disk is still offline.
For some reason ImageShack seems to be sucking balls today and I am having a tiny bit of difficulty getting images hosted.
The space doesn't have a drive letter because it isn't actually partitioned yet. Partitioning and formatting a drive is something I can swing pretty easily. I just don't see any reason why it was not already done.
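If I do end up carving it up from the command line instead of the Disk Management GUI, my understanding is it's roughly the following (the disk number and drive letter are placeholders, I'd confirm with list disk first, and on Server 2003 the actual format step has to happen outside diskpart):

diskpart
DISKPART> list disk
DISKPART> select disk 0
DISKPART> list partition
DISKPART> create partition primary
DISKPART> assign letter=F
DISKPART> exit

format F: /FS:NTFS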
No, RAIDs are nice, not arguing that, and RAID 5s in particular are the way to go IMHO. Early in my IT career I was swapping out a drive in a PowerVault, and the backplane blew and killed 2 more disks (I have no idea how the other 5 survived). There were no backups, and that was a bad day for me. This is why I like to interject a nice backup scheme into my posts sometimes. Shit happens, ya know?
These days I run a hot spare as well, and while I've not had a disk eat it yet (knock on wood), it's comforting to know it's there, so I'll second that motion.
OP, consult your documentation. The server may not natively support some of the stuff we're going on about, but you might find supplemental hardware or software to help you out (if you have the budget).
Edit:
If you're using NTBackup, it doesn't back up parity bits. It also uses a kinda wonky proprietary file format (BKF). Come to think of it, I don't know of anything that backs up RAID parity bits, but I've only used Veritas and NTBackup. Eh, YMMV.
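For reference, the scheduled job is basically a one-liner if you go the command-line route; the source path, job name, and .bkf destination below are placeholders, not anything from the OP's box:

ntbackup backup D:\shares /J "Nightly file server" /F "E:\Backups\nightly.bkf" /V:yes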
Nothing would want to back up parity bits; the parity bits are themselves a backup. Parity is the first line of defense in a RAID 5 drive failure: the other drives can keep on going, and the data from the dead drive can be rebuilt using the parity. The next line of defense is external backups of the data.
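If it helps to see why that works: RAID 5 parity is just an XOR across the stripe, so any one missing drive can be recomputed from whatever is left. A toy sketch, nothing to do with the ServeRAID itself:

from functools import reduce

# Three "drives" worth of data in one stripe, plus the parity block.
data = [b"\x10\x20\x30", b"\x0a\x0b\x0c", b"\xff\x00\x55"]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data))

# Pretend drive 1 died: XOR the survivors with the parity to rebuild it.
survivors = [data[0], data[2], parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

assert rebuilt == data[1]   # the dead drive's data comes back intact
# Lose a second drive before the rebuild finishes and there's nothing left to XOR against.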
My point was that the hot swap was only large enough to contain the information on the logical drive, but is not... per se. I was implying that I doubted it needed to be, and that there wasn't a problem with that.
Really, my main question was whether there would be any serious reason that a large portion of the logical drive was not being used (partitioned).
For example, we like to use four-disk RAID 5 here:
LogicalDisk1, approximately 112GB:
-PhysicalDisk1, 37.4GB
-PhysicalDisk2, 37.4GB
-PhysicalDisk3, 37.4GB
-PhysicalDisk4, 37.4GB
Hotspare:
-PhysicalDisk5, 37.4GB
If PhysicalDisk1, 2, 3, or 4 dies, PhysicalDisk5 automatically replaces it. Then you replace the dead disk, and it becomes the new hotspare.
I managed to track down my old boss's cell, and apparently the unused stuff was going to be for some sort of project that he never got around to finishing.
Anyway, thanks for the help and info; this shit is solved.
edit: sooo..... with the new system we can't edit thread titles anymore? Keen.