05-28-2013 07:59 AM - edited 05-28-2013 10:16 AM
I've looked through some posts, but haven't found anything exactly matching what I am running across today. The short of it is that all logical volumes in vg00 are on a RAIDed volume. However, the previous administrators of this server added two non-RAIDed disks into vg00. Last week a disk failure occurred on one of those non-RAIDed disks, and it was replaced before I could remove it from the VG. I think I have the steps for this straight, but want to confirm since this is a production server.
In the attached file, disk40 is the disk that went bad, disk3 is the new disk.
I'm assuming what I need to do is:
pvreduce /dev/vg00 /dev/disk/disk40 (-f if it doesn't work)
vgcfgrestore -n /dev/vg00 /dev/disk/disk3 (-R if it doesn't work)
vgextend /dev/vg00 /dev/disk/disk3
Are there any other steps I might be missing, or issues I will run into? I've also been thinking about simply removing the non-RAIDed volumes from vg00, but I need to ensure the server will still boot properly.
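Whatever the exact sequence turns out to be, on a production server it's worth snapshotting the LVM metadata before touching vg00. A minimal sketch, assuming standard HP-UX 11.31 paths (the backup filenames under /var/tmp are my own choice):

```shell
# Snapshot LVM metadata before modifying vg00 (HP-UX 11.31)
vgcfgbackup -f /var/tmp/vg00.conf.bak vg00     # copy of the on-disk VG config
cp /etc/lvmtab   /var/tmp/lvmtab.bak           # legacy-DSF device map
cp /etc/lvmtab_p /var/tmp/lvmtab_p.bak         # persistent-DSF device map
vgexport -p -s -v -m /var/tmp/vg00.map vg00    # -p: preview only, writes map file
```

The `-p` on vgexport means nothing is actually exported; it just produces a map file you can hand to vgimport later if recovery is needed.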
05-28-2013 10:24 AM
It provides cookbooks for many scenarios.
05-28-2013 04:37 PM
That procedure will definitely not work: pvreduce doesn't exist (the command is vgreduce), and you can't vgreduce disk40 before you do the vgcfgrestore.
I also wouldn't have allowed anyone to replace the defective boot disk hardware before the necessary commands had been run to remove the problem disk from the LVM configuration.
Interesting that vgdisplay -v /dev/vg00 doesn't give any "real" output. Since it's not the boot disk, i.e. disk39_p2, that went defective, I would still have expected full output.
The vgcfgrestore command is also not correct. I think you need to do something like putting the LVM metadata contents of the previous disk40 onto the new replacement disk, disk3. But that won't be easy with disk40 still part of vg00.
In short: get your Ignite backup ready and restore that. ;)
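For context, the documented restore-in-place flow that vgcfgrestore belongs to looks roughly like the sketch below, and it is only valid when the failed PV's extents are mirrored elsewhere in the VG, which is not the case for this unmirrored disk. The raw-DSF path for disk3 is an assumption:

```shell
# Classic HP-UX failed-disk replacement (mirrored extents only --
# does NOT apply to the unmirrored disk in this thread)
vgcfgrestore -n vg00 /dev/rdisk/disk3   # write saved LVM headers to the new disk
vgchange -a y vg00                      # reactivate so LVM attaches the PV
vgsync vg00                             # resynchronize stale mirrored extents
```

This is why the OP's `vgcfgrestore -n /dev/vg00 /dev/disk/disk3` line can't work as written: there is no mirror copy to resync from on an unmirrored PV, and vgcfgrestore expects the raw device file.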
05-30-2013 06:42 AM
First check the lvdisplay output of the lvols in vg00, and check for any stale LEs. Since it is an unused disk, there should not be any lvols on it.
Then do vgreduce -f vg00, followed by vgextend vg00 pvname. It should work, though I'm not completely sure.
05-30-2013 06:45 AM
Or else boot it in LVM maintenance mode and do:
vgexport -p -s -v -m /tmp/vg00.map vg00
vgimport vg00 /dev/disk/disk39_p2 /dev/disk/disk41
vgchange -a y vg00
(The current and actual PV counts in vgdisplay will differ.) Then do vgreduce -f vg00.
06-03-2013 06:45 AM
Yeah, unfortunately I was out the day the disk was replaced and the HP engineer didn't ask about the configuration. I cut down some of the output so that the text file wasn't as large, but it displayed properly (albeit with the errors on the replaced disk). To answer the lvdisplay question: all logical volumes show no pieces on the disk that was removed; they only show the RAID disk (i.e. no stale extents).
06-06-2013 01:57 AM
If you are still facing the issue, run the commands below and post the output:
# vgdisplay -v vg00
# lvdisplay -v /dev/vg00/lvol3
# lvdisplay -v -k /dev/vg00/lvol3
# strings /etc/lvmtab
06-18-2013 05:48 AM - edited 06-18-2013 05:49 AM
Thank you all for your ideas on this (I don't know why I had the wrong commands in my initial post). Over this past weekend I was able to reduce out the non-existent disk without any major issues. What worked in my favor was that the bad disk was part of vg00 but did not contain any lvols or lvlnboot information. Below is what I ran successfully:
[ server_name:/etc ] vgreduce -f vg00
PV with key 1 successfully deleted from vg vg00
Repair completed, please perform the following steps..:
1. vgscan -k -f vg00
Please note, for anyone who may try this in the future: this only works if no lvols were on the disk and the disk did not contain any boot information. "When Good Disks Go Bad" actually lists the step as vgreduce -f vg00 pvname; vgscan -f vgname. That does not work, as vgreduce -f will not accept anything other than just a volume group name; apparently this is due to a newer LVM patch on 11.31. Backing up the /etc/lvmconf/vg00.conf, /etc/lvmtab, and /etc/lvmtab_p files also ensures that you can go back in the event of a problem.
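Condensed, the sequence that worked in this case (valid only because the failed PV held no lvols and no boot data) was the following; the final two lines re-adding the replacement disk are what the OP originally planned rather than something confirmed in the thread, and the raw-DSF path is an assumption:

```shell
# Remove the missing PV by key, then rebuild the lvmtab entries
vgreduce -f vg00                 # drops the unreachable PV; takes no pvname
vgscan -k -f vg00                # rebuild /etc/lvmtab entries for vg00

# Optional follow-up to put the replacement disk into the VG
pvcreate /dev/rdisk/disk3        # initialize the new disk as a PV
vgextend vg00 /dev/disk/disk3    # add it back into vg00
```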
06-18-2013 05:54 AM
Most of the time the problem is that disks get a new device file when replaced, so you need to work with scsimgr and io_redirect_dsf to adjust.
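A minimal sketch of that adjustment on 11.31, using the disk names from this thread as placeholders (verify the WWIDs with scsimgr before redirecting anything):

```shell
# Confirm the identity of the replacement LUN first
scsimgr get_info -D /dev/rdisk/disk3

# Case 1: replacement came up under a NEW persistent DSF --
# point the old DSF at the new hardware so LVM config is unchanged
io_redirect_dsf -d /dev/disk/disk40 -n /dev/disk/disk3

# Case 2: same DSF, new WWID after the swap --
# tell the mass storage stack to accept the new WWID
scsimgr replace_wwid -D /dev/rdisk/disk40
```

Which case applies depends on how the engineer swapped the drive; `ioscan -m dsf` can help map legacy to persistent DSFs before choosing.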
So - if possible - use hardware RAID for your boot (and other) disks.
Hope this helps!