03-28-2007 09:15 PM
VxVM vxdg ERROR V-5-1-587 Disk group omcdg: import failed: No valid disk found containing disk group
Also, the command "vxdisk -o alldgs list" does not list any VxVM disks.
Please help with this scenario. It is very urgently required for one of our systems.
03-28-2007 11:14 PM
Have a look at the vxdg -f (force) option.
But use it with care!
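A sketch of what a forced import could look like, assuming the disk group name omcdg from the error message above; the -C flag (clear host locks) is only needed if the group is still locked by the other node:

```shell
# First check whether VxVM can see any disks carrying the disk group
vxdisk -o alldgs list

# Force the import of the disk group (use with care: -f overrides
# the "no valid disk found" check, -C clears stale host import locks)
vxdg -C -f import omcdg

# Start all volumes in the group once the import succeeds
vxvol -g omcdg startall
```

The risk is that -f can import a group whose disks are incomplete or are still in use by another host, which can lead to data corruption, hence the warning above.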
Please also read:
Your profile indicates you have awarded points to only two of 15 answers.
03-28-2007 11:29 PM
I am a new user. From now on I will take care of assigning the points. :)
But I wasn't clear about the answer you gave me. Please explain what risks are involved.
03-29-2007 12:03 AM
DEVICE TYPE DISK GROUP STATUS
c2t0d0s2 auto:LVM - - LVM
c2t1d0 auto:LVM - - LVM
c4t0d1s2 auto:LVM - - LVM
c4t0d2s2 auto:LVM - - LVM
c4t0d3 auto:LVM - - LVM
c4t0d4 auto:LVM - - LVM
c8t0d1 auto:LVM - - LVM
c8t0d2 auto:LVM - - LVM
c8t0d3 auto:LVM - - LVM
c8t0d4 auto:LVM - - LVM
I don't see a VxVM disk at all.
03-29-2007 01:19 AM
What you have, sir, are LVM-managed disks, so you'll need vgimport and vgchange to do the trick.
Does your cluster not have any documentation? Is this a production cluster, or a non-production one that you, as a novice, are allowed to manage?
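A minimal sketch of the LVM import on HP-UX, assuming a hypothetical volume group name /dev/vgomc and two of the disks listed above; the VG name, minor number, and device paths are placeholders, so substitute your actual values:

```shell
# Create the VG directory and group device file
# (the minor number 0x010000 is an example; it must be unique on the node)
mkdir /dev/vgomc
mknod /dev/vgomc/group c 64 0x010000

# Import the volume group from its physical volumes
vgimport /dev/vgomc /dev/dsk/c4t0d3 /dev/dsk/c4t0d4

# Activate it (in a Serviceguard cluster, use "vgchange -a e" for
# exclusive activation instead of plain "-a y")
vgchange -a y /dev/vgomc
```

Only do this once you know which VG the disks actually belong to; importing and activating disks that another node has active is as dangerous as a forced VxVM import.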
03-29-2007 01:55 AM
My main concern is only this: how did the disks become LVM disks?
03-29-2007 02:14 AM
Other things you can check:
1) The disks that you listed -- are you sure they are the same disks that you claim *were* VxVM before?
2) If you are, then someone or something initialized them as LVM disks.
3) See whether the VGIDs of all the disks are the same. If they are, trace from your other node(s) where they also show up with the same VGID, check which LVM VG they now belong to, and chase up your peers, your SAN admins, etc. on what happened and whether there were changes.
4) A distant possibility -- you are in a SAN Twilight Zone...
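Two ways to chase the VGIDs on HP-UX are sketched below; the device path is one of the disks listed above, and the header offset in the second command is an assumption (it can differ between LVM layouts), so treat it as a starting point rather than a recipe:

```shell
# Preview (-p) which physical volumes group together, verbosely (-v),
# without rewriting /etc/lvmtab; run on each node and compare the output
vgscan -pv

# Alternatively, dump part of the on-disk LVM header of a suspect disk;
# the VGID sits in the PVRA near offset 8200 (offset is an assumption
# here and may vary), shown as hex (-tx) without addresses (-An)
xd -An -j8200 -N16 -tx /dev/rdsk/c4t0d3
```

If the same VGID shows up on all ten disks, they were initialized into one LVM volume group, which points to a deliberate vgcreate/pvcreate rather than random corruption.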
03-29-2007 03:06 AM
I can list the scenario that led to this situation.
1) Node1 and Node2 were working fine in the cluster with the shared file system on VxVM disks.
2) Service was now running on node2.
3) We wanted to try recovery procedures so we created node1 from scratch.
4) Now we failed over the cluster package from node2 to node1.
5) Node1 couldn't start the package. Then we checked the listing, and the disks were now shown as LVM.
6) We tried to start the package back on node2, and the package didn't start there either.
03-29-2007 03:09 AM
Continue with your hunt: look at the VGIDs of those disks.