07-04-2011 04:35 AM
I'm looking at doing a cold install of 11.31 to upgrade an 11.23 system.
It's currently in a 2-node Serviceguard 11.17 cluster.
Can anyone recommend a nice, easy way to do this?
I was roughly going to do the following:
Copy network configs, ssh keys, time settings
Copy the /etc/cmcluster dir
Swap the HDDs out with extras I have
Copy over configs
Recreate packages (or run them in legacy mode?)
I was going to leave the other node up while I do this. When the upgraded node comes back up and the packages are created again, it should join the cluster?
Because the two versions are different, I won't be able to make any changes until the other node is upgraded?
Anything else I should look out for?
I don't want to change the existing configuration on the current cluster setup, e.g. if I upgrade one node, will cmapplyconf roll that new config out onto the old active node?
I need to be able to put the original disks back in and boot off them again as if the node were never upgraded.
Am I way off here?
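The copy steps in the plan above could be sketched roughly as follows (a sketch only, not a verified procedure; the backup location and the exact file list are assumptions, and paths such as the SSH directory vary by HP-UX bundle):

```shell
# Rough pre-rebuild backup, run on the node whose disks will be swapped.
# BACKUP_DIR and the file list are assumptions -- adjust for your environment.
BACKUP_DIR=/var/tmp/pre-1131-backup
mkdir -p "$BACKUP_DIR"

# Network configuration, name resolution, time settings
cp -p /etc/rc.config.d/netconf    "$BACKUP_DIR"
cp -p /etc/hosts /etc/resolv.conf "$BACKUP_DIR"
cp -p /etc/ntp.conf /etc/TIMEZONE "$BACKUP_DIR"

# SSH host keys so the rebuilt node keeps its identity
tar -cf "$BACKUP_DIR/ssh-keys.tar" /opt/ssh/etc   # path varies by SSH bundle

# Entire Serviceguard configuration directory
tar -cf "$BACKUP_DIR/cmcluster.tar" /etc/cmcluster
```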
07-05-2011 08:24 AM
1) Install 11.31 on new hardware. Provide SCSI or SAN access to all the Serviceguard managed resources (disks) in the old cluster.
2) Install the newest version of SG 11.20 on the new systems.
3) Copy the serviceguard configuration and package scripts from the old cluster.
4) Migrate or re-generate all packages. This will require an outage. One of the things SG 11.20 provides is the ability to set up package dependencies. Package B can be dependent on Package A already running. This provides you the ability to restructure your packages and make sure things run more logically and reliably.
An alternative to item 3 would be to generate a new cluster and build new packages, using the old packages as a reference for the functionality you need. This path might be simpler to implement.
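Step 3 above might look something like this (a sketch; the cluster name "prodcl", package name "pkg1", and destination host "newnode" are placeholders):

```shell
# On a node of the OLD cluster: dump the running configuration as ASCII
# templates to use as a reference when building the new cluster.
cmgetconf -c prodcl /tmp/prodcl.ascii   # cluster configuration
cmgetconf -p pkg1   /tmp/pkg1.ascii     # one ASCII file per package
cmviewcl -v > /tmp/cluster-state.txt    # record current state for comparison

# Copy the templates and package control scripts to a new node
rcp -r /etc/cmcluster newnode:/var/tmp/old-cmcluster
rcp /tmp/prodcl.ascii /tmp/pkg1.ascii newnode:/var/tmp/
```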
Owner of ISN Corporation
07-05-2011 11:18 PM
Thanks for the reply.
This sounds good. I would prefer to copy the configs over if I can, but I'll just have to decide when it comes down to it, based on how the new configs look.
I guess my only problem here is that I need to be able to put the original hardware back in to revert until later.
Thanks for your notes, much appreciated.
07-06-2011 10:32 AM
The lab does not provide any direction on a cold-install upgrade of a node in a cluster because the Serviceguard cluster binary file (/etc/cmcluster/cmclconfig) contains hardware paths that must be maintained for Serviceguard to work properly.
# cmviewcl -v -f line | grep path
For that matter, if any path changes for a particular volume group, lvmtab must be corrected with the new path(s) by way of a vgexport/vgimport refresh.
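A vgexport/vgimport refresh along those lines might look like this (a sketch; the volume group name /dev/vg01 and the minor number in mknod are placeholders, and the minor number must be unique on the node):

```shell
# Preview export: -p writes the map file without actually removing the VG.
vgexport -p -s -m /tmp/vg01.map /dev/vg01

# After the path change, drop the stale lvmtab entry and re-import using
# the map file so lvmtab picks up the new device paths (-s scans by VGID).
vgexport /dev/vg01
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000   # placeholder minor number; must be unique
vgimport -s -m /tmp/vg01.map /dev/vg01
```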
If the lock disk path (if configured) is altered as a result of the install, Serviceguard will complain about it and you won't be able to update the cluster binary with the new path while the other node is at a different version of Serviceguard - cmapplyconf will refuse to update the cluster binary file until both nodes are at the same version.
08-08-2011 09:48 PM
OK, so it's been a while, but I'm revisiting this now.
I've built the new HP-UX HA servers on the new disks.
I've copied /etc/cmcluster from the old servers into the new servers' /etc/ directory.
I have one old SG node online and one new, unconfigured node online.
How can I import the configuration from the old cluster into the new cluster node without making it active or affecting the old cluster node's configuration?
I haven't mounted the disks yet; I will be doing this shortly.
Considering both cluster nodes have their own disks, am I better off shutting down all of the old nodes, putting the new disks in, and trying to configure both new nodes at the same time?
08-10-2011 04:31 AM
Officially, HP does not support cold-install upgrades of nodes in a configured
cluster. The A.11.20 Release Notes state this on page 47:
Rolling Upgrade Exceptions
HP-UX Cold Install
A rolling upgrade cannot include a cold install of HP-UX on any node. A
cold install will remove configuration information; for example, device
file names (DSFs) are not guaranteed to remain the same after a cold
install.
All versions of Serviceguard record the lock disk and LAN device special file names, hardware paths, and IP addresses in the cluster binary file, so if these references are changed as a result of the cold install, the cluster binary file becomes outdated and invalid. This uncertainty is one reason why HP will not support a cold-install upgrade.
Furthermore, when different versions of Serviceguard exist on nodes in the cluster, cmapplyconf is disabled, so discrepancies in the cluster binary file cannot be corrected until all nodes are loaded with the same version of Serviceguard.
Another factor to consider after a cold install: cluster-related files and configuration information have to be restored to the cold-installed system. For example, package control scripts must be restored, and the LVM configuration file /etc/lvmtab must be updated with the cluster-aware volume groups.
On the same page in the Release Notes mentioned above, it states:
Requirements for Rolling Upgrade to A.11.20
To perform a rolling upgrade to Serviceguard A.11.20, you must be running:
- HP-UX 11i v3, and
- Serviceguard A.11.19
The reason for this is that A.11.19 is the first version of Serviceguard to use
a new cluster manager (CM2) engine and the only version that can convert
pre-A.11.19 cluster binary files to the new engine. Therefore it is not
supported to upgrade A.11.18 or older directly to A.11.20 or newer.
Assuming these limitations can be met, then it may be possible to upgrade a
node using a cold-install.
Essentially, after the install (of both UX and SG), try the following on the updated node (no guarantees this will work):
1) Copy /etc/hosts, /etc/resolv.conf, /etc/nsswitch.conf from the active node to the updated node
2) Import the cluster volume groups using map files created on the active node
3) Copy the /etc/cmcluster directory to the newly installed node
4) Remove log files in /etc/cmcluster (they apply to the other node)
5) Convert the cluster binary to the new SG version on the system:
# /usr/sbin/convert -f /etc/cmcluster/cmclconfig
6) Test cmviewcl and cmrunnode on the updated node.
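Taken together, the six steps above might be sketched as shell along these lines (untested, no guarantees, as stated above; "activenode", /dev/vg01, and the minor number are placeholders):

```shell
# Run on the freshly cold-installed node.

# 1) Name resolution files from the surviving node
rcp activenode:/etc/hosts activenode:/etc/resolv.conf \
    activenode:/etc/nsswitch.conf /etc/

# 2) Import cluster VGs from a map file made on the active node
#    (created there with: vgexport -p -s -m /tmp/vg01.map /dev/vg01)
rcp activenode:/tmp/vg01.map /tmp/
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000   # minor number must be unique on this node
vgimport -s -m /tmp/vg01.map /dev/vg01

# 3) Serviceguard configuration directory
rcp -r activenode:/etc/cmcluster /etc/

# 4) Drop log files that belong to the other node
rm -f /etc/cmcluster/*.log

# 5) Convert the cluster binary to this node's SG version
/usr/sbin/convert -f /etc/cmcluster/cmclconfig

# 6) Verify, then try to join the cluster
cmviewcl -v
cmrunnode
```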