03-03-2014 01:04 PM
I had a question regarding disk space on a P4300 G2. Right now the total raw space on the node is 4.37TB, but the node shows only 2.81TB of usable space. I am pretty sure you do not lose 1.56TB to RAID 5.
Is there a limitation on the SAN on how much space it can see?
03-03-2014 01:28 PM - edited 03-03-2014 01:29 PM
Nope, there's no limitation on how much space a SAN can see.
Sure, on each NSM you'll lose some space to RAID - i.e. your usable space would be at most ((disk size * n) - disk size), n being the number of disks in the NSM.
Additionally, the StoreVirtual OS takes some space, and cluster metadata takes some space as well.
Lastly, if the NSM is part of a cluster, the usable space is limited to the size of the smallest member of the cluster.
I suspect that what you're seeing is actually the effect of the last item.
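Not from the original reply, but here's a rough sketch of that per-node arithmetic (the disk count, disk size, and function names below are just illustrative examples, not anything reported in this thread):

    # Rough capacity math for one node with 8 x 600GB disks in RAID 5 (example values).
    def raid5_usable_gb(disk_count, disk_size_gb):
        # RAID 5 reserves one disk's worth of capacity for parity,
        # so usable space is (n - 1) * disk size.
        return (disk_count - 1) * disk_size_gb

    def gb_to_tb_binary(gb):
        # Drive sizes are quoted in decimal GB; the CMC reports binary TB.
        return gb * 1e9 / 2**40

    print(round(gb_to_tb_binary(8 * 600), 2))                  # raw: ~4.37
    print(round(gb_to_tb_binary(raid5_usable_gb(8, 600)), 2))  # usable: ~3.82, before OS/metadata overhead

The gap between that ~3.82 figure and what the CMC actually reports is the OS and metadata overhead mentioned above.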
03-03-2014 02:03 PM
I have 3 nodes, all with 600GB disks. According to my calculations the cluster should be seeing 12.851TB of usable space. I guess the other thing to add is that there used to be 2 older nodes with smaller disks, but those 2 nodes have been removed from the cluster and management group and have been turned off. Any ideas on how to make it recalculate properly?
03-03-2014 03:00 PM - edited 03-03-2014 03:00 PM
Not sure if there's a way to do that.
Also, at first glance, there does not seem to be anything wrong with those numbers.
What Hardware RAID level are you using?
Here's a good resource for LH data storage capacities.
03-03-2014 03:16 PM
The usable space IS limited by the smallest node in the cluster, so if you had smaller nodes in there, that could explain why it showed less usable space. Assuming the smaller nodes are out of the cluster now, you might need to reboot the nodes to get the extra usable space back... if a single node reboot doesn't work, I would contact support; it might take a reboot of the management group or a forced CLI command to rescan the disks.
03-04-2014 11:51 AM
Yes, the smaller nodes are shut down and separated from the cluster. Rebooting a single node did not fix the issue: raw space is 4.37TB but only 2.81TB is usable. So I am about to call support to see what they have to say.
03-04-2014 02:53 PM - edited 03-04-2014 02:58 PM
I would suggest that you check all of your nodes to make sure that they're exactly the same.
That is, make sure that they all have 600GB disks in them.
2.81TB usable is exactly what you would get on a P4300G2 with 8 x 450GB disks in a RAID 5 setup.
This is before even joining them to a cluster.
I know this because I just re-imaged one of mine to v.11, and that's what it shows.
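For reference, the back-of-the-envelope arithmetic behind that figure (assuming the CMC reports in binary TB):
    8 x 450GB = 3600GB raw (about 3.27TB binary)
    RAID 5 usable: 7 x 450GB = 3150GB (about 2.86TB binary), and StoreVirtual OS/metadata overhead brings that down to roughly the 2.81TB being reported.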
04-07-2014 09:07 AM
So the solution is to reconfigure the Network RAID on the node. I had 3 nodes, so I was able to reconfigure the nodes without downtime. After reconfiguring the network RAID, all nodes have 4.37TB of raw space and 3.7TB usable, which is consistent with RAID 5.
04-07-2014 10:30 AM
Can you clarify? Network RAID is done at the LUN level, not at the node level. Are you talking about the physical RAID of the nodes?
Typically the physical nodes are running RAID 5 and the LUNs operate at NR10. The usable space for each node is limited by the mix of physical nodes in your system: the usable space for every node maxes out at the usable space of the smallest node. For example, if you have 5 nodes that are 10TB and one that is 2TB, every node would show 2TB usable as long as that small node is allowed to remain in the cluster.
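A minimal sketch of that smallest-node rule (the node sizes are just the example values above, and the function name is mine):

    # Each node only contributes as much usable space as the smallest node in the cluster.
    def per_node_effective_tb(node_usable_tb):
        smallest = min(node_usable_tb)
        return [smallest] * len(node_usable_tb)

    print(per_node_effective_tb([10, 10, 10, 10, 10, 2]))  # every node shows 2TB usable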
04-09-2014 12:59 PM
I am sorry, I meant the physical RAID on the nodes. We wanted to upgrade the disks on our nodes to gain more space without having to buy another node, so we ended up swapping all 300GB drives for 600GB drives. At that point the nodes recognized that bigger drives were installed, but the usable space never changed. The only way we were able to make the nodes calculate the correct usable space with the new drives was to kick each node out of the cluster, reconfigure the physical RAID on each node, and then add it back. Hopefully this makes more sense.
04-09-2014 01:40 PM
FWIW, you didn't really need to kick them out of the cluster to reconfigure the RAID.
A "Reconfigure RAID" command issued to each node should have taken care of this problem.