Occasional Contributor
hp omni backup
Posts: 7
Registered: ‎03-04-2009
Message 1 of 7 (300 Views)
Accepted Solution

Cluster switching fail

Hi,

In my cluster environment we are trying to switch a package back to its primary node.

The package is currently running on the alternate (failover) node. I'm posting some of the outputs below. Can anyone please point me in the right direction?

dev001 root# cmviewcl -v

CLUSTER STATUS
MDDB up

NODE STATUS STATE
dev001 up running

Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY down 0/0/0 lan0
STANDBY up 0/1/0 lan1

NODE STATUS STATE
dev002 up running

Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/0/0 lan0
STANDBY down 1/2/0 lan1

PACKAGE STATUS STATE AUTO_RUN NODE
MDDB up running enabled dev002

Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual

Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Subnet up 12.10.10.0

Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled dev001
Alternate up enabled dev002 (current)




May 29 08:19:20 dev001 CM-CMD[12837]: cmrunpkg -n dev001 MDDB
May 29 08:19:20 dev001 cmcld: Executing '/etc/cmcluster/oracle/control.sh start' for package MDDB, as service PKG*16385.
May 29 08:19:20 dev001 LVM[12851]: vgchange -a n vgapp
May 29 08:19:20 dev001 LVM[12854]: vgchange -a n vgdata1
May 29 08:19:20 dev001 LVM[12857]: vgchange -a n vgdata2
May 29 08:19:20 dev001 LVM[12860]: vgchange -a n vgdata3
May 29 08:19:20 dev001 LVM[12863]: vgchange -a n vgdata4
May 29 08:19:28 dev001 cmcld: Processing exit status for service PKG*16385
May 29 08:19:28 dev001 cmcld: Service PKG*16385 terminated due to an exit(1).
May 29 08:19:28 dev001 cmcld: Package MDDB run script exited with NO_RESTART.
May 29 08:19:28 dev001 cmcld: Examine the file /etc/cmcluster/oracle/control.sh.log for more details.






########### Node "dev001": Starting package at Fri May 29 08:18:02 GMT 2009 ###########
May 29 08:18:02 - "dev001": Activating volume group vgapp with exclusive option.
vgchange: Activation of volume group "/dev/vgapp" denied by another node in the cluster.
Request on this system conflicts with Activation Mode on remote system.
ERROR: Function activate_volume_group
ERROR: Failed to activate vgapp
May 29 08:18:02 - Node "dev001": Deactivating volume group vgapp
vgchange: Volume group "vgapp" has been successfully changed.
May 29 08:18:02 - Node "dev001": Deactivating volume group vgdata1
vgchange: Volume group "vgdata1" has been successfully changed.
May 29 08:18:02 - Node "dev001": Deactivating volume group vgdata2
vgchange: Volume group "vgdata2" has been successfully changed.
May 29 08:18:02 - Node "dev001": Deactivating volume group vgdata3
vgchange: Volume group "vgdata3" has been successfully changed.
May 29 08:18:02 - Node "dev001": Deactivating volume group vgdata4
vgchange: Volume group "vgdata4" has been successfully changed.

########### Node "dev001": Starting package at Fri May 29 08:19:20 GMT 2009 ###########
[... identical vgapp activation failure and deactivation sequence as above ...]

########### Node "dev001": Starting package at Fri May 29 08:20:36 GMT 2009 ###########
[... identical vgapp activation failure and deactivation sequence as above ...]
dev001 root#

Exalted Contributor
Steven E. Protter
Posts: 33,806
Registered: ‎08-15-2002
Message 2 of 7

Re: Cluster switching fail

Shalom,

The other node (dev002) is not permitting volume group activation in exclusive mode.

There may be error data on the second node, or you may wish to run cmhaltnode to bring that node down and then try again on this node.

vgchange: Activation of volume group "/dev/vgapp" denied by another node in the cluster.
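
If you go the cmhaltnode route, the sequence might look like this (a sketch only; the package and node names are taken from the cmviewcl output earlier in the thread):

```shell
# On any cluster node: halt cluster services on dev002; -f also halts
# any packages (here MDDB) still running on that node.
cmhaltnode -f dev002

# Start the package on the primary node and re-enable its switching,
# which gets disabled when the package is halted.
cmrunpkg -n dev001 MDDB
cmmodpkg -e MDDB

# Once MDDB is up on dev001, bring dev002 back into the cluster.
cmrunnode dev002
```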

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
HP Pro
melvyn burnard
Posts: 6,068
Registered: ‎04-06-1997
Message 3 of 7

Re: Cluster switching fail

Well, I would first halt the package and THEN try to start it on the other node, as it appears you did not do this.
Also, I suggest you investigate your network, as it looks like you have some issues:

NODE STATUS STATE
dev001 up running

Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY down 0/0/0 lan0 <<<<<<<
STANDBY up 0/1/0 lan1

NODE STATUS STATE
dev002 up running

Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/0/0 lan0
STANDBY down 1/2/0 lan1 <<<<<<<
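
The halt-then-start sequence described above might look like this (a sketch; package and node names are taken from the cmviewcl output earlier in the thread):

```shell
# Halt the package where it currently runs (dev002); this also
# deactivates its volume groups there, releasing the exclusive lock.
cmhaltpkg MDDB

# Start it on the primary node.
cmrunpkg -n dev001 MDDB

# Re-enable global package switching, which cmhaltpkg disabled.
cmmodpkg -e MDDB

# Confirm where the package is now running.
cmviewcl -v
```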
My house is the bank's, my money the wife's, But my opinions belong to me, not HP!
Honored Contributor
Rita C Workman
Posts: 3,791
Registered: ‎08-03-2000
Message 4 of 7

Re: Cluster switching fail

Your answer is right there in the cluster message:
########### Node "dev001": Starting package at Fri May 29 08:20:36 GMT 2009 ###########
May 29 08:20:36 - "dev001": Activating volume group vgapp with exclusive option.
vgchange: Activation of volume group "/dev/vgapp" denied by another node in the cluster.
Request on this system conflicts with Activation Mode on remote system.

>>>>>>>>>> It's already running on the other node. So follow Melvyn's suggestion: run cmhaltpkg on the second node, and then you could just run cmmodpkg -e to bring it up on the first node. But, again just like Melvyn told you, check out your LAN connections, because it looks like you have issues there.

Rgrds,
Rita
Regular Advisor
Anoop P_2
Posts: 91
Registered: ‎01-21-2004
Message 5 of 7

Re: Cluster switching fail

Well, it can be due to various reasons, but below is what comes to mind first.

1. The package shutdown failed on the failover node. Whatever caused the shutdown failure, the VG remained active; hence you are unable to start it on the primary.

Try to manually unmount the file systems mounted from lvols in those VGs (killing any processes using them first) and do vgchange -a n for each VG on the failover node. If you get errors trying to unmount a file system, you might have to reboot that node.

2. The VG was manually activated on the failover server during an earlier failed failover. This means the package script was not updated correctly.

Either way, you'll need to unmount the file systems and deactivate the VGs manually.
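
A sketch of that manual cleanup on the failover node (the mount point /u01/app is hypothetical; substitute the real ones from your package control script, and the VG names are those shown in the package log above):

```shell
# On dev002: kill any processes holding the file system open
# (-c: argument is a mount point, -k: kill, -u: show user names).
fuser -cku /u01/app      # /u01/app is a hypothetical mount point

# Unmount each file system that belongs to the package's VGs.
umount /u01/app

# Deactivate every volume group so dev001 can activate them exclusively.
for vg in vgapp vgdata1 vgdata2 vgdata3 vgdata4
do
    vgchange -a n $vg
done
```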
Advisor
Syed Nazer Abbas
Posts: 20
Registered: ‎11-09-2007
Message 6 of 7

Re: Cluster switching fail

As Melvyn pointed out, your error message is very clear: on dev001 the PRIMARY interface lan0 is DOWN.

Check your LAN connections.

What do lanscan and ioscan -fnkClan show?
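
Beyond lanscan and ioscan, two commands that might help narrow down why lan0 is down (a sketch; PPA 0 corresponds to lan0 in the lanscan output, and the MAC address shown is a placeholder, not a real one from this cluster):

```shell
# Show the current speed/duplex setting of lan0 (PPA 0); a mismatch
# with the switch port is a common cause of a "down" interface.
lanadmin -x 0

# Link-level loopback test from lan0 to a remote station; replace the
# placeholder MAC with the real MAC of dev002's lan0 from its lanscan.
linkloop -i 0 0x08000971ABCD
```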

Occasional Contributor
hp omni backup
Posts: 7
Registered: ‎03-04-2009
Message 7 of 7

Re: Cluster switching fail

Thank you all for your support.

I need to do this cluster switch over the weekend.

As you all suggested, here is the output I got when I ran lanscan and ioscan -fnkClan:

dev001 root#lanscan
Hardware Station Crd Hdw Net-Interface NM MAC HP-DLPI DLPI
Path Address In# State NamePPA ID Type Support Mjr#
0/0/0 0x001083F7B33A 0 UP lan0 snap0 1 ETHER Yes 119
0/1/0 0x001083F7B3BB 1 UP lan1 snap1 2 ETHER Yes 119
dev001 root# ioscan -fnkClan
Class I H/W Path Driver S/W State H/W Type Description
===================================================================
lan 0 0/0/0 btlan6 CLAIMED INTERFACE HP A3738A PCI 10/100Base-TX Ultimate Combo
/dev/diag/lan0 /dev/ether0 /dev/lan0
lan 1 0/1/0 btlan6 CLAIMED INTERFACE HP A3738A PCI 10/100Base-TX Ultimate Combo
/dev/diag/lan1 /dev/ether1 /dev/lan1

