08-29-2013 12:22 PM - last edited on 08-29-2013 08:44 PM by RASHMI
I have two HP P4300 storage clusters. They are separate, and both are at capacity. I'd like to combine them into one 4-node cluster so I can use Network RAID-5 on volumes and gain more space. The catch is that I'd prefer to keep some or all of the VMs running on the second cluster alive during the process.
Is it possible to break my second cluster, leave one node operational (in a degraded state), and move the other node into the first cluster? I could then change my Network RAID-10 volumes to RAID-5 and wait for them to restripe. My idea is to create new LUNs on the 3-node cluster, present them to VMware, and vMotion the VMs from the degraded node over to the new 3-node cluster. I understand performance takes a hit while the cluster is degraded, but isn't the selling point of Network RAID-10 that one node keeps serving while the downed node is repaired?
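For anyone weighing the same move, the usable-capacity math behind combining the clusters can be sketched roughly as below. This is a minimal sketch assuming Network RAID-10 keeps two copies of every block and Network RAID-5 carries one node's worth of parity per stripe; the exact SAN/iQ accounting may differ, and the per-node capacity is a made-up figure, not my actual size.

```python
def usable_capacity(nodes, per_node_tb, level):
    """Rough usable capacity for HP LeftHand network RAID levels.

    Assumes NR-10 = two copies of each block (50% efficiency) and
    NR-5 = single parity striped across the cluster ((n-1)/n
    efficiency). The real SAN/iQ accounting may differ slightly.
    """
    raw = nodes * per_node_tb
    if level == "NR10":
        return raw / 2
    if level == "NR5":
        return raw * (nodes - 1) / nodes
    raise ValueError(f"unknown level: {level}")

# Two separate 2-node NR-10 clusters vs one combined 4-node NR-5 cluster.
# 7.2 TB per node is a hypothetical figure.
two_clusters = 2 * usable_capacity(2, 7.2, "NR10")  # ~14.4 TB usable
combined = usable_capacity(4, 7.2, "NR5")           # ~21.6 TB usable
```

Under those assumptions the combined cluster nets roughly half again as much usable space from the same four nodes, which is the whole motivation here.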
This isn't exactly what HP had in mind, I realize. And before you all bash me for lack of proper planning, trust me, I've beaten myself up for months now, but I have to get it fixed soon.
(These are set up to spec... FOM is on local storage on one of our ESX hosts. I have full backups of all data involved.)
PS. This thread has been moved from Storage > SAN (Small and Medium Business) to the HP LeftHand category - Forum Moderator
08-29-2013 10:38 PM
For P4000 SAN solution queries you can also visit the HP Guided Troubleshooting tree.
Below is the link for HPGT:
09-04-2013 05:28 AM
I called HP for support on this. In case anyone else needs to perform the same operation, here's what they told me.
In my scenario, we converted all of the volumes on Cluster 2 to thin provisioning to see exactly how much space was available. It turns out there will be enough to hold the LUNs from Cluster 1. (Nodes 1 and 2 are in Cluster 1, and nodes 3 and 4 are in Cluster 2.)
From HP, the solution is this:
--Put node 2 in repair mode, leaving node 1 up
--Move node 2 into Cluster 2 and let it restripe all of the volumes
--After the restripe, move all of the LUNs from node 1 to Cluster 2 and, again, let them restripe
--Delete Cluster 1 and move the remaining node into Cluster 2
If space were a concern, some or all of the volumes on Cluster 2 could have been switched to Network RAID-0 to free up space.
The only potential for disaster is if node 1 somehow blows up while node 2 is in repair mode. HP took the logs from that cluster to make sure there aren't any errors. I have backups as well.
09-04-2013 07:12 AM
Did you test your performance with NR-5? Generally it's only acceptable for mostly-read workloads with very little write traffic. It sounds like you were running your NR-10 volumes fully provisioned; really, you should run everything thin provisioned even if you don't want to overprovision your SAN.
Beyond that... good luck. If you had the extra space on the SAN, I don't know why you couldn't just move a LUN over to the final cluster before you removed it, but either way it should work. Personally, I would avoid NR-5 if at all possible; its performance is very limited, and even HP cautions against its use in production.
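To put rough numbers on the NR-5 write caveat: classic single-parity RAID turns one small random write into four back-end I/Os (read old data, read old parity, write new data, write new parity), versus two for a mirror. A minimal sketch of that penalty, using a made-up back-end IOPS figure rather than anything measured on a P4300 (real SAN/iQ behavior also depends on caching and stripe alignment):

```python
def effective_write_iops(backend_iops, level):
    """Rough front-end small-random-write IOPS after the RAID penalty.

    Assumes the classic penalties: mirroring (NR-10) costs 2 back-end
    I/Os per write; single parity (NR-5) costs 4 (read-modify-write).
    """
    penalty = {"NR10": 2, "NR5": 4}[level]
    return backend_iops // penalty

backend = 4000  # hypothetical aggregate back-end write IOPS for the cluster
nr10_writes = effective_write_iops(backend, "NR10")  # 2000
nr5_writes = effective_write_iops(backend, "NR5")    # 1000
```

Under those assumptions NR-5 halves your small-write throughput relative to NR-10, which is why it only makes sense for read-heavy volumes.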