Two ESXi hosts (same physical location), each running a VSA, in a cluster with a RAID10 mirrored volume so each VMware host has full resilience. I also have a third ESXi server running the FOM, which is active within the management group. This FOM server sits at a remote site, as it's the host for DR. Comms seem good and no errors appear in the CMC (9.0).
My Management Group shows "3 of 3 managers running, 2 regular managers, 1 failover manager" and I have a Quorum value of 2.
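For what it's worth, the majority-quorum arithmetic behind that "Quorum value of 2" can be sketched like this (an illustrative model only, not HP's actual code; the function name is mine):

```python
# Minimal sketch of majority-quorum logic, as I understand it.
# Illustrative only -- not HP's implementation.
def has_quorum(running_managers: int, total_managers: int) -> bool:
    """A management group keeps quorum while a strict majority of managers run."""
    return running_managers > total_managers // 2

# 3 managers total (2 regular VSA managers + 1 FOM): quorum value is 2.
print(has_quorum(2, 3))  # one VSA + the FOM -> True, quorum held
print(has_quorum(1, 3))  # a lone VSA -> False, no quorum
```

This is exactly why I'd expect the FOM plus one surviving VSA to keep the volume online.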
I am able to shut down either one of the VSAs individually and the cluster still works as expected (degraded, of course). But there's a scenario I don't understand: if I shut down the management group (both VSAs power off) and then power up just one VSA, the RAID10 volume is unavailable. It only becomes available once the second VSA comes back online. It would be entirely possible to lose power at the site hosting the cluster and then have one VSA fail to come back up for whatever reason.
Can anyone shed light on what I'm missing? I assumed the FOM plus the one remaining VSA would have been enough for this to function.
It's the "Shut Down Management Group" function that causes this problem, as it places the group into Maintenance Mode. If I shut everything down manually (either from the CMC or the VMware client), I can then bring up the FOM and one VSA with the volume available. But if the management group is shut down with that function, it stays in Maintenance Mode until all the VSAs are back online, and only then automatically switches to Normal. I can manually switch back to Normal Mode by editing the management group, and the volume is then available with just the one VSA online. I was using "Shut Down Management Group" as a clean shutdown option.
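The behavior I'm seeing is consistent with the two modes gating volume availability differently. Here's my mental model of it as a sketch (an assumption on my part, not anything from HP's docs; the function and mode strings are hypothetical):

```python
# Hedged sketch of the observed behavior -- my mental model, not HP's code.
# Maintenance Mode appears to wait for ALL managers before offering the volume,
# while Normal Mode only needs majority quorum.
def volume_available(mode: str, running_managers: int, total_managers: int) -> bool:
    if mode == "maintenance":
        # Group waits for every manager to return before leaving this state.
        return running_managers == total_managers
    # Normal mode: a strict majority of managers suffices.
    return running_managers > total_managers // 2

print(volume_available("maintenance", 2, 3))  # FOM + one VSA, still blocked -> False
print(volume_available("normal", 2, 3))       # after manually switching modes -> True
```

Which matches what I observed: manually flipping the group back to Normal Mode makes the volume available with just one VSA plus the FOM.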