VG information (vgdisplay, vgversion) (862 Views)
Valued Contributor
support_billa
Posts: 192
Registered: ‎06-27-2011
Message 1 of 6 (862 Views)

VG information (vgdisplay, vgversion)

Hello,

I want to use vgversion to upgrade VGs that are configured in clusters, as well as VGs on servers without a cluster.

 

Many of my questions were already answered in the thread:

Change VG name and upgrade LVM version

 

I read the "LVM Volume Group Version Migration" guide:

http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01916196/c01916196.pdf

 

I have questions about the following requirements:

"To migrate a volume group from one version to another, you must meet the following prerequisites:
- The volume group must be deactivated during migration.
- The volume group must be cluster unaware (vgchange -c n) before changes are made. After the migration, you can make the volume group cluster aware again (vgchange -c y).
- No physical volume in the volume group is configured as a cluster lock disk for a high availability cluster.
- All physical volumes belonging to the volume group must be accessible."

 

- How can I detect whether a volume group is active or inactive?

- How can I detect whether a volume group is a cluster VG or not?

 

   Info: this concerns a Serviceguard VG.

 

   I found /usr/sam/lbin/vglist; does it handle all of this?

  When a VG is down (configured on an alternate host, or really deactivated), how can I detect that?

 

   I made a test with:

vgchange -a n vgtest

    Afterwards I only get a result with:

 

vgdisplay /dev/vgtest >vgtest_down.txt 2>&1

 Output:
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgtest".
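The test above can be scripted: vgdisplay succeeds for an activated VG and fails with "Volume group not activated." for a deactivated one, so the exit status is enough. A minimal sketch (the function name is mine, not an HP-UX command):

```shell
# A sketch: treat a VG as active when vgdisplay can read it.
# Assumes (as in the output above) that vgdisplay exits non-zero
# with "Volume group not activated." for a deactivated VG.
vg_is_active() {
  vgdisplay "$1" >/dev/null 2>&1
}

# usage:
#   if vg_is_active /dev/vgtest; then echo "active"; else echo "inactive"; fi
```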

- But when the VG is down, I don't know whether it is a cluster VG that belongs to the alternate host. Can I activate it with "vgchange -a r" and read the "VG Status" with vgdisplay?

 

- To detect whether a volume group is a cluster VG: run vgdisplay and parse the "VG Status" field; does "exclusive" mean a cluster VG?

   Do other status values exist? The vgdisplay man page explains nothing about this!

 

  - How can I detect whether a cluster lock disk is configured in the volume group?

 

  - How can I detect whether spare disks are configured in the volume group?

 

Run vgdisplay and parse these fields?

Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0   
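Those counters can indeed be pulled out of captured vgdisplay output. A sketch (the sample text mimics the fields above; the non-zero values are made up for illustration):

```shell
# A sketch: extract the spare-PV counters from captured vgdisplay output.
vg_out='Total PVG                   0
Total Spare PVs             2
Total Spare PVs in use      1'

# match only the line where a number follows "Total Spare PVs" directly,
# so the "Total Spare PVs in use" line is not picked up by mistake
spares=$(printf '%s\n' "$vg_out" | awk '/^Total Spare PVs +[0-9]/ {print $4}')
spares_in_use=$(printf '%s\n' "$vg_out" | awk '/^Total Spare PVs in use/ {print $6}')
echo "spare PVs: $spares (in use: $spares_in_use)"
```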


- After the vgversion command has finished, what do I have to do on the alternate cluster node?

   These steps (the guide explains nothing about the alternate host)?
"vgimport data VG on secondary node without de-activate data VG on primary node in MC-SG"


Attached are:

- vgdisplay (vgtest_act.txt)

- vgdisplay of the deactivated VG (vgtest_down.txt)

- vglist (vglist.txt)

 

regards

Acclaimed Contributor
Torsten.
Posts: 23,451
Registered: ‎10-02-2001
Message 2 of 6 (854 Views)

Re: VG information (vgdisplay, vgversion)

[ Edited ]

To answer a few of your questions:

 

 

- If vgdisplay shows information, the VG is activated.

 

see also

 

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02273766/c02273766.pdf

 

 

--- Volume groups ---
VG Name                     /dev/vgtest
VG Write Access             read/write    
VG Status                   available                 <-- not a cluster VG; would be "exclusive" or "shared"
Max LV                      2047  
Cur LV                      1     
Open LV                     1     
Cur Snapshot LV             0             
Max PV                      2048  
Cur PV                      2     
Act PV                      2     
Max PE per PV               51184         
VGDA                        4  
PE Size (Mbytes)            32             
Unshare unit size (Kbytes)  1024                     
Total PE                    6398          
Alloc PE                    6394          
Current pre-allocated PE    0                      
Free PE                     4             
Total PVG                   0       
Total Spare PVs             0                                 <-- no spares
Total Spare PVs in use      0                    
VG Version                  2.2                               <-- it is already the highest version
VG Max Size                 1637888m  
VG Max Extents              51184         
Cur Snapshot Capacity       0p                  
Max Snapshot Capacity       1637888m 
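Following Torsten's annotation, the "VG Status" line can be classified mechanically: "exclusive" or "shared" indicates a cluster VG, a plain "available" an activated non-cluster VG. A sketch (the helper name is mine):

```shell
# A sketch: classify a VG from its "VG Status" line, per the
# annotations in the output above.
classify_vg_status() {
  case "$1" in
    *exclusive*|*shared*) echo "cluster" ;;
    *available*)          echo "non-cluster" ;;
    *)                    echo "unknown (not activated?)" ;;
  esac
}

classify_vg_status 'VG Status                   available'
```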

 

 

To import the "new" vg to another host in the cluster, use the well-known vgexport/import process.

For other cluster related topics use e.g. cmcheckconf etc ...

 


Hope this helps!
Regards
Torsten.

__________________________________________________

There are only 10 types of people in the world -
those who understand binary, and those who don't.

__________________________________________________

No support by private messages. Please ask the forum!

Valued Contributor
support_billa
Posts: 192
Registered: ‎06-27-2011
Message 3 of 6 (841 Views)

Re: VG information (vgdisplay, vgversion)

hello,

 

I asked the questions above because of the following issue:

 

When I want to upgrade the VG version of a Serviceguard VG, I test:

 

vgversion -V 2.2 -r -v /dev/vgtest

Output

vgversion: Error: Cannot change the version of a cluster aware Volume Group. 

 

Also: when the Serviceguard VG is inactive, does vgversion know whether the VG is a shared volume?

Can I also detect, while the VG is down, whether it is a shared volume?

 

I found this document, where the LVM metadata is explained:

 

HPUX Serviceguard - vgchange: Activation Mode Conflicts with Configured Mode

 

- In a maintenance window I will stop the Serviceguard package.

 

So I have to prepare the following steps:

 1. List the VGs of the Serviceguard package (write the list of VGs to a temporary file)

 2. Review the feasibility of the upgrade with vgversion -r

 3. Stop the Serviceguard package with cmhaltpkg

 4. Check whether the VGs (from the temporary file) are inactive (question above)

 5. Check whether the VGs are "shared" (how, while the VG is inactive?)

     I also have to write this status to a temporary file

 6. vgchange -c n

 7. vgversion

 8. vgchange -c y (check against the temporary file)

 9. Do the same steps on the Serviceguard alternate host

10. Start the Serviceguard package with cmrunpkg
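Steps 4 to 8 can be sketched as a small loop over the VG list. This is only a sketch under assumptions: the list file name and function name are mine, it covers only the commands named in the steps, and it presumes the package was already halted with cmhaltpkg:

```shell
# Sketch: migrate every VG of a halted Serviceguard package to
# version 2.2 (steps 4-8 above). The file passed in holds one
# VG path per line.
migrate_vgs() {
  while read -r vg; do
    # step 4: the VG must be inactive; vgdisplay fails for inactive VGs
    if vgdisplay "$vg" >/dev/null 2>&1; then
      echo "ERROR: $vg is still active, skipping" >&2
      continue
    fi
    vgchange -c n "$vg"          # step 6: drop the cluster flag
    vgversion -V 2.2 -v "$vg"    # step 7: migrate the VG version
    vgchange -c y "$vg"          # step 8: restore the cluster flag
  done < "$1"
}

# usage: migrate_vgs /tmp/vg_list.txt
```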

 

Did I forget anything?

 

Many, many VGs are configured in a Serviceguard package, so I have to create a script that is "sure" to handle all the steps. Once more: I think I can only work with temporary files, because I cannot query the status while a VG is down?

Do the vg commands have such features?

regards

Honored Contributor
Matti_Kurkela
Posts: 6,271
Registered: ‎12-02-2001
Message 4 of 6 (834 Views)

Re: VG information (vgdisplay, vgversion)

A non-cluster VG in HP-UX LVM has three possible states:

  • inactive (vgchange -a n)
  • active (vgchange -a y)
  • active/read-only (vgchange -a r)

A cluster VG in HP-UX LVM has four possible states:

  • inactive (vgchange -a n)
  • active/exclusive (vgchange -a e, can be active on one cluster node only)
  • active/shared (vgchange -a s, can be active on many cluster nodes simultaneously, cannot be changed while in this state)
  • active/read-only (vgchange -a r)

So, when a cluster VG is inactive on all nodes, it cannot be shared at the moment, because you already know it is inactive on all nodes.

 

However, the cluster VG may still be shareable. (vgchange -S y)


There does not seem to be a way to view the state of the "shareable" setting directly, but once you have the VG in inactive state, it will be easy to test: try to activate the VG in shared mode in any one cluster node (vgchange -a s <vgname>). If this is successful, you know that the VG is configured as shareable. Remember to make the VG inactive again before continuing (vgchange -a n).
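That probe can be wrapped in a small function: try shared activation, deactivate again on success, and report the exit status. A sketch (the function name is mine; it only issues the vgchange calls described above):

```shell
# Sketch: probe whether an inactive cluster VG is shareable by trying
# shared activation, then leaving the VG inactive again.
vg_is_shareable() {
  if vgchange -a s "$1" >/dev/null 2>&1; then
    vgchange -a n "$1" >/dev/null 2>&1   # deactivate again before continuing
    return 0                             # shared activation worked: shareable
  fi
  return 1                               # activation refused: not shareable
}

# usage: vg_is_shareable /dev/vgtest_cluster && echo "shareable"
```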

 

If the Serviceguard package is down (but Serviceguard is still running = the node is up), you have confirmed that the VG is inactive on all cluster nodes, and your attempt to activate the VG in shared mode fails, you now know the VG is definitely not shareable (vgchange -S n = the default state).

 

In general, only cluster VGs can be shareable, so vgchange -c n also implies vgchange -S n.

Sharing a VG with normal filesystems is not very useful, so shareable VGs should contain either special cluster filesystems (e.g. CFS), or raw data for multi-node databases (e.g. for Oracle RAC).

MK
Valued Contributor
support_billa
Posts: 192
Registered: ‎06-27-2011
Message 5 of 6 (829 Views)

Re: VG information (vgdisplay, vgversion)

Hello,

 

So, when a cluster VG is inactive on all nodes, it cannot be shared at the moment, because you already know it is inactive on all nodes.

However, the cluster VG may still be shareable. (vgchange -S y)

 

I ran your tests and got errors (see vgchange -S y below).

 

- The cluster is running; then I stop the cluster package test_cluster_pkg.

- Now I see that "vgversion" activates an inactive VG itself and then tests whether it is a cluster VG.

- I think I must collect the VG status while the VG is active and write it to temporary files; once the Serviceguard package is stopped, it isn't easy to get the status. I can only activate the VG temporarily?

 

# vgdisplay -v vgtest_cluster

--- Volume groups ---
VG Name                     /dev/vgtest_cluster
VG Write Access             read/write
VG Status                   available, exclusive     <---- cluster VG
Max LV                      255
Cur LV                      8
Open LV                     8
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               16000
VGDA                        4
PE Size (Mbytes)            32
Total PE                    6398
Alloc PE                    4034
Free PE                     2364
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 8000g
VG Max Extents              256000

 

# cmhaltpkg test_cluster_pkg

# vgdisplay -v vgtest_cluster
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "vgtest_cluster".

# vgchange -a r vgtest_cluster
# vgdisplay -v vgtest_cluster

--- Volume groups ---
VG Name                     /dev/vgtest_cluster
VG Write Access             read-only                <---- READ ONLY
VG Status                   available
Max LV                      255
Cur LV                      8
Open LV                     8
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               16000
VGDA                        4
PE Size (Mbytes)            32
Total PE                    6398
Alloc PE                    4034
Free PE                     2364
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 8000g
VG Max Extents              256000

 

 

# vgchange -c y -S y vgtest_cluster
vgchange: Couldn't activate volume group "/dev/vgtest_cluster":
Shared server activation requires all mirrored Logical Volumes
to have Consistency Recovery of NOMWC or NONE.

# vgversion -V 2.2 -r -v /dev/vgtest_cluster

Activated volume group.
Volume group "/dev/vgtest_cluster" has been successfully activated.
vgversion: Error: Cannot change the version of a cluster aware Volume Group.
Volume group "/dev/vgtest_cluster" has been successfully deactivated.
Performing "vgchange -a r -l -p -s /dev/vgtest_cluster" to collect data

Warning: Volume Group version 2.2 does not support bad block
relocation. The bad block relocation policy of all logical volumes
will be set to NONE.
Deactivating Volume Group "/dev/vgtest_cluster"
Review complete. Volume group not modified

 

regards

Valued Contributor
support_billa
Posts: 192
Registered: ‎06-27-2011
Message 6 of 6 (793 Views)

Re: VG information (vgdisplay, vgversion)

Here is my workflow for vgversion of a cluster VG or a non-cluster VG.
I set a defined minor number for the version 2.x VG:

 LVM1 :  /dev/vgtest_cluster/group c 64 0x210000
 LVM2 :  /dev/vgtest_cluster/group c 128 0x021000

Workflow (if possible, stop the cluster package with cmhaltpkg first):

1.  Unmount all filesystems of VG /dev/vgtest_cluster (/usr/sbin/umount)

 1.1.  /usr/sbin/vgchange -a n /dev/vgtest_cluster
 2.  /usr/sbin/vgdisplay /dev/vgtest_cluster  # Check Status of VG
 3.  /usr/sbin/vgexport -p -s -v -m /tmp/save/vgtest_cluster_1_lvm1.map /dev/vgtest_cluster
Beginning the export process on Volume Group "/dev/vgtest_cluster".
vgexport: Preview of vgexport on volume group "/dev/vgtest_cluster" succeeded.

 3.1./usr/sbin/vgchange -c n /dev/vgtest_cluster  # Remove Cluster Flag
Configuration change completed.
Volume group "/dev/vgtest_cluster" has been successfully changed.
 4.  Backup of the VG info from LVM1 /dev/vgtest_cluster to /tmp/save/vgtest_cluster_2_lvm1.def
 5.  Backup the permissions, owner, group, etc. of the files of VG /dev/vgtest_cluster to /tmp/save/vgtest_cluster_3_save_perm_files.def
   
     Example /dev/vgtest_cluster/lvol
     Owner/Group root:sys
     Permission:  0640
     Time: 201004091206.01

 6.  /usr/sbin/vgversion -V 2.2 -v /dev/vgtest_cluster
Activated volume group.
Volume group "/dev/vgtest_cluster" has been successfully activated.
Volume group "/dev/vgtest_cluster" has been successfully deactivated.
Performing "vgchange -a y -l -p -s /dev/vgtest_cluster" to collect data
Warning: Volume Group version 2.2 does not support bad block
relocation. The bad block relocation policy of all logical volumes
will be set to NONE.

Old Volume Group configuration for "/dev/vgtest_cluster" has been saved in "/etc/lvmconf/vgversion_vgtest_cluster/vgtest_cluster_1.0.conf"
Deactivating Volume Group "/dev/vgtest_cluster"

New Volume Group configuration for "/dev/vgtest_cluster" has been saved in "/etc/lvmconf/vgversion_vgtest_cluster/vgtest_cluster_2.2.conf"
Removing the Volume Group /dev/vgtest_cluster from /etc/lvmtab

Applying the configuration to all Physical Volumes from "/etc/lvmconf/vgversion_vgtest_cluster/vgtest_cluster_2.2.conf"
Volume Group configuration has been restored to /dev/rdisk/disk218
Volume Group configuration has been restored to /dev/rdisk/disk132
Creating the Volume Group of version 2.2 with minor number 0x21000.
Adding the Volume Group /dev/vgtest_cluster to /etc/lvmtab_p
Original Volume Group Version was 1.0
New Volume Group Version is 2.2
Volume Group version has been successfully changed to 2.2
Volume Group configuration for /dev/vgtest_cluster has been saved in /etc/lvmconf/vgtest_cluster.conf
 7.  /usr/sbin/vgexport -s -v -m /tmp/save/vgtest_cluster_4_lvm2_del.map /dev/vgtest_cluster
Beginning the export process on Volume Group "/dev/vgtest_cluster".
vgexport: Volume group "/dev/vgtest_cluster" has been successfully removed.
 8.  mkdir -p /dev/vgtest_cluster
 9.  mknod /dev/vgtest_cluster/group c 128 0x021000
10.  /usr/sbin/vgimport -s -N -v -m /tmp/save/vgtest_cluster_4_lvm2_del.map /dev/vgtest_cluster
vgimport: Beginning the import process on Volume Group "/dev/vgtest_cluster".
.
Volume group "/dev/vgtest_cluster" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.

11.  - Restore the file owner and group of the device files of /dev/vgtest_cluster
       - Restore the file access permissions of the device files of /dev/vgtest_cluster
       - Restore the access, modification, and change times of these files

     Example:
     chown root:sys /dev/vgtest_cluster/lvol
     chmod  0640 /dev/vgtest_cluster/lvol
     touch -t 201004091206.01 /dev/vgtest_cluster/lvol

11.1./usr/sbin/vgchange -c y /dev/vgtest_cluster  # Set Cluster Flag
Configuration change completed.
Volume group "/dev/vgtest_cluster" has been successfully changed.
12.1./usr/sbin/vgchange -a r /dev/vgtest_cluster
Activated volume group.
Volume group "/dev/vgtest_cluster" has been successfully changed.
12.2./usr/sbin/vgcfgbackup /dev/vgtest_cluster
Volume Group configuration for /dev/vgtest_cluster has been saved in /etc/lvmconf/vgtest_cluster.conf
12.3./usr/sbin/vgchange -a n /dev/vgtest_cluster
Volume group "/dev/vgtest_cluster" has been successfully changed.

# When you stopped a cluster package, these steps aren't necessary (starting the package activates the VG and mounts the filesystems)
13. /usr/sbin/vgchange -a y /dev/vgtest_cluster
14. Mount all filesystems of VG /dev/vgtest_cluster (/usr/sbin/mount)
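The primary-host steps above can be condensed into a script skeleton. This is only a sketch: the function name is mine, the map file names and the minor number are the examples used above, and it issues only the commands the workflow lists (no error handling):

```shell
# Sketch of the primary-host workflow above (VG version 1.0 -> 2.2).
# Assumes the cluster package is already halted and all filesystems
# of the VG are unmounted.
migrate_primary() {
  vg=$1                 # e.g. /dev/vgtest_cluster
  name=${vg##*/}
  vgchange -a n "$vg"                                             # step 1.1
  vgexport -p -s -v -m "/tmp/save/${name}_1_lvm1.map" "$vg"       # step 3
  vgchange -c n "$vg"                                             # step 3.1
  vgversion -V 2.2 -v "$vg"                                       # step 6
  vgexport -s -v -m "/tmp/save/${name}_4_lvm2_del.map" "$vg"      # step 7
  mkdir -p "$vg"                                                  # step 8
  mknod "$vg/group" c 128 0x021000                                # step 9
  vgimport -s -N -v -m "/tmp/save/${name}_4_lvm2_del.map" "$vg"   # step 10
  vgchange -c y "$vg"                                             # step 11.1
  vgchange -a r "$vg" && vgcfgbackup "$vg" && vgchange -a n "$vg" # steps 12.x
}
```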

 

Primary host:

Copy the map files and the saved owner/group/permission info from the primary host to the alternate host.


Alternate host:

Upgrade the following VG on the alternate host:

 LVM2 :  /dev/vgtest_cluster/group c 128 0x021000

 1.  Make sure the VG is deactivated!
 2.  /usr/sbin/vgdisplay /dev/vgtest_cluster
 3.  /usr/sbin/vgexport -p -s -v -m /tmp/save/vgtest_cluster_8_lvm1_save.map /dev/vgtest_cluster
Beginning the export process on Volume Group "/dev/vgtest_cluster".
vgexport: Preview of vgexport on volume group "/dev/vgtest_cluster" succeeded.
 4.  /usr/sbin/vgexport -s -v -m /tmp/save/vgtest_cluster_9_lvm1_del.map /dev/vgtest_cluster
Beginning the export process on Volume Group "/dev/vgtest_cluster".
vgexport: Volume group "/dev/vgtest_cluster" has been successfully removed.
 5.  mkdir -p /dev/vgtest_cluster
 6.  mknod /dev/vgtest_cluster/group c 128 0x021000
 7.  /usr/sbin/vgimport -s -N -v -m /tmp/save/vgtest_cluster_5_lvm2.map /dev/vgtest_cluster
vgimport: Beginning the import process on Volume Group "/dev/vgtest_cluster".
Volume group "/dev/vgtest_cluster" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.
 8.  - Restore the file owner and group of /dev/vgtest_cluster/group and the /dev/vgtest_cluster directory
     - Restore the file access permissions of /dev/vgtest_cluster/group and the /dev/vgtest_cluster directory
     - Restore the access, modification, and change times of these files

     Example:
     chown root:sys /dev/vgtest_cluster/group
     chmod 0644 /dev/vgtest_cluster/group
     touch -t 201211290704.34 /dev/vgtest_cluster/group
     chown root:sys /dev/vgtest_cluster
     chmod 0755 /dev/vgtest_cluster
     touch -t 201211290704.34 /dev/vgtest_cluster

 9.  - Restore the file owner and group of the device files of /dev/vgtest_cluster
     - Restore the file access permissions of the device files of /dev/vgtest_cluster
     - Restore the access, modification, and change times of these files

     chown root:sys /dev/vgtest_cluster/lvol
     chmod  0640 /dev/vgtest_cluster/lvol
     touch -t 201004091206.01 /dev/vgtest_cluster/lvol
     chown root:sys /dev/vgtest_cluster/rlvol
     chmod  0640 /dev/vgtest_cluster/rlvol
     touch -t 201004091206.01 /dev/vgtest_cluster/rlvol

 9.1./usr/sbin/vgchange -a r /dev/vgtest_cluster
Activated volume group.
Volume group "/dev/vgtest_cluster" has been successfully changed.
 9.2. /usr/sbin/vgcfgbackup /dev/vgtest_cluster
Volume Group configuration for /dev/vgtest_cluster has been saved in /etc/lvmconf/vgtest_cluster.conf
 9.3./usr/sbin/vgchange -a n /dev/vgtest_cluster
Volume group "/dev/vgtest_cluster" has been successfully changed.
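The alternate-host side can be sketched the same way. Again only a sketch under assumptions: the function name is mine, the map file names are the ones used in this post, and permission/timestamp restoration is left out:

```shell
# Sketch of the alternate-host workflow above: drop the old LVM1
# definition and re-import the migrated VG. Assumes the VG is already
# deactivated and the LVM2 map file was copied over from the primary.
migrate_alternate() {
  vg=$1                 # e.g. /dev/vgtest_cluster
  name=${vg##*/}
  vgexport -p -s -v -m "/tmp/save/${name}_8_lvm1_save.map" "$vg"  # step 3
  vgexport -s -v -m "/tmp/save/${name}_9_lvm1_del.map" "$vg"      # step 4
  mkdir -p "$vg"                                                  # step 5
  mknod "$vg/group" c 128 0x021000                                # step 6
  vgimport -s -N -v -m "/tmp/save/${name}_5_lvm2.map" "$vg"       # step 7
  vgchange -a r "$vg" && vgcfgbackup "$vg" && vgchange -a n "$vg" # steps 9.x
}
```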

  regards
