Occasional Contributor
BR70551
Posts: 5
Registered: ‎11-04-2011
Message 1 of 5 (350 Views)

lvconvert -m1 shows Unknown message

Hi @all

 

I am trying to mirror a logical volume in a VG that contains 2 multipath disks from the SAN.

 

OS: Suse Linux SLES 11 SP1

 

This is what I see

**********

# lvconvert -m1 /dev/blc1bl01-vg05/lvol1
  Logical volume lvol1 has multiple mirror segments.

**********

 

Some information:

 

pvscan shows:

  PV /dev/dm-28          VG blc1bl01-vg05   lvm2 [100.00 GB / 0    free]
  PV /dev/dm-27          VG blc1bl01-vg05   lvm2 [100.00 GB / 0    free]

 

vgscan shows:

  Found volume group "blc1bl01-vg05" using metadata type lvm2

 

lvscan shows:

  ACTIVE            '/dev/blc1bl01-vg05/lvol1' [199.99 GB] inherit

 

vgs shows:

  VG            #PV #LV #SN Attr   VSize   VFree
  blc1bl01-vg05   2   1   0 wz--n- 199.99G    0

 

lvs shows:

  LV    VG            Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lvol1 blc1bl01-vg05 -wi-a- 199.99G                                     

 

pvs shows:

  /dev/dm-27        blc1bl01-vg05 lvm2 a-   100.00G    0
  /dev/dm-28        blc1bl01-vg05 lvm2 a-   100.00G    0

 

 

What does the error message of the lvconvert command mean?

Where is my mistake (in my configuration or in my thinking :-) )?

 

Thanks for all helpful replies.

 

Franziska

Honored Contributor
Matti_Kurkela
Posts: 6,271
Registered: ‎12-02-2001
Message 2 of 5 (336 Views)

Re: lvconvert -m1 shows Unknown message

What does "lvdisplay -m /dev/blc1bl01-vg05/lvol1" say?

 

The --- Segments --- section at the end of the output would be particularly important.

MK
Occasional Contributor
BR70551
Posts: 5
Registered: ‎11-04-2011
Message 3 of 5 (325 Views)

Re: lvconvert -m1 shows Unknown message

Hi MK

 

Sorry for the late reply, I was out of office for some days.

 

The output shows the following:

 

blc1bl01:~ # lvdisplay -m /dev/blc1bl01-vg05/lvol1
  --- Logical volume ---
  LV Name                /dev/blc1bl01-vg05/lvol1
  VG Name                blc1bl01-vg05
  LV UUID                Ze14e8-80uw-dX7y-7dB2-747l-lfde-GNREfd
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                199.99 GB
  Current LE             51198
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:29

 

  --- Segments ---
  Logical extent 0 to 25598:
    Type                linear
    Physical volume     /dev/dm-28
    Physical extents    0 to 25598

  Logical extent 25599 to 51197:
    Type                linear
    Physical volume     /dev/dm-27
    Physical extents    0 to 25598

 

Regards

Franziska

Honored Contributor
Matti_Kurkela
Posts: 6,271
Registered: ‎12-02-2001
Message 4 of 5 (318 Views)

Re: lvconvert -m1 shows Unknown message

OK... I should have noticed this before, but it appears that both PVs in your VG are already completely allocated by the LV, so there is no free space for the mirror. Your "vgs" output confirms that there is 0 free space in the VG.

 

(This will be a long post, as I think I need to cover several important concepts.)

 

With your "lvconvert -m1 /dev/blc1bl01-vg05/lvol1" command, you're essentially telling LVM: "Find a way to mirror the LV /dev/blc1bl01-vg05/lvol1 using any free PVs in the VG blc1bl01-vg05." You would need about 200 GB more disk space to hold the new mirror, and a bit more to hold the mirror log (the bit of metadata that tracks whether the mirror is in sync or not, and which parts of the mirror need syncing if any).

 

Your LV consists of two chunks, one on the first PV and another on the second PV. In other words, it's a JBOD-style configuration.  If you wanted to maintain a symmetric mirror set-up, you would need two new PVs of about 100 GB each, preferably a little more so that a disk-based mirror log can be used. (A RAM-based log is only good for very short-lived and/or small mirrors, in my opinion.) It would also be possible to use a single chunk of 200 GB or more for the mirror.

 

LVM is now trying to figure out what to do, and failing. The error message should more properly be something like "Cannot find a place to put the new mirror chunk(s)."
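To make that concrete, here is a rough sketch of what the fix would look like once free space is available. The LUN names /dev/mapper/mpathc and /dev/mapper/mpathd are placeholders I made up for illustration; substitute whatever new devices your storage admin presents to the host:

```shell
# Sketch only: device names below are hypothetical placeholders.
# 1. Turn the newly presented LUNs into PVs and add them to the VG:
pvcreate /dev/mapper/mpathc /dev/mapper/mpathd
vgextend blc1bl01-vg05 /dev/mapper/mpathc /dev/mapper/mpathd

# 2. Retry the conversion; "--mirrorlog disk" keeps the sync log on disk
#    (rather than a RAM-based "core" log), so it survives reboots:
lvconvert -m1 --mirrorlog disk /dev/blc1bl01-vg05/lvol1

# 3. Watch the initial synchronization progress in the Copy% column:
lvs -a -o +devices blc1bl01-vg05
```

The disk-based log consumes one extra extent, which is why having the new PVs slightly larger than 100 GB each makes life easier.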

 

But your PVs are listed as /dev/dm-* devices, so they might be something more than simple disk devices. Besides the LVM mirroring, there is a separate RAID layer that could have been used to mirror disks. Your PVs might already be mirrored at a level "below" the LVM.

 

/dev/dm-* means a Device Mapper device. Device Mapper is a Linux kernel subsystem that is used by several storage-related functions. So the Device Mapper devices can be many things:

  • software RAID, including mirroring.
  • multipathed devices, and/or partitions on multipathed devices.
  • encrypted devices.
  • LVM logical volumes (although usually these are referred to by their LVM names instead of generic, opaque /dev/dm-* names)
  • LVM snapshots
  • and various other things that are mainly useful for more esoteric storage configurations, or for storage testing/debugging.

When the LVM commands are looking for PVs, they will go through the entire /dev directory hierarchy looking for disk devices. A disk device may have several device nodes and/or symlinks pointing to it, using different naming conventions. The LVM tools aren't confused by this: they will gather all the names associated with a given major/minor disk device number, and unless otherwise specified, will use the first name off that list for the device. If that happens to be the /dev/dm-* name, that is somewhat unhelpful.

 

With the command "dmsetup ls --tree", you can easily figure out the more complex storage combinations. It will output a small tree diagram of each Device Mapper device. For example, if you had a LVM LV named /dev/vgdata/lvol1 on a multipathed PV /dev/mapper/dataLUN with four separate physical paths to the LUN, the output would look something like this:

vgdata-lvol1 (major:minor of the LV device)
  \- dataLUN (major:minor of the multipath device)
       |- (major:minor of physical path 1)
       |- (major:minor of physical path 2)
       |- (major:minor of physical path 3)
       \- (major:minor of physical path 4)

Unfortunately, the physical path devices (usually /dev/sd* devices) are only listed by their major/minor device numbers, so you might have to look them up with an "ls -l /dev/sd*" or similar listing. Still, it should help in figuring out what /dev/dm-27 and /dev/dm-28 actually are. If it turns out that your PVs are already mirrored by a lower Device Mapper layer, the command for manipulating those mirror sets will probably be "mdadm" (or in some cases "dmraid"), not "lvconvert".
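If looking the numbers up by eye gets tedious, the mapping can also be done from sysfs, where every block device exposes its major:minor pair in a "dev" file. A minimal sketch (the SYS_BLOCK variable is only there so the function is easy to exercise against sample data; on a live system it defaults to the real sysfs path):

```shell
# Resolve a "major:minor" pair to a kernel block device name by scanning
# /sys/class/block, where each device directory contains a "dev" file
# holding its major:minor numbers.
SYS_BLOCK="${SYS_BLOCK:-/sys/class/block}"

majmin_to_name() {
    want="$1"
    for d in "$SYS_BLOCK"/*; do
        [ -r "$d/dev" ] || continue
        if [ "$(cat "$d/dev")" = "$want" ]; then
            basename "$d"
            return 0
        fi
    done
    return 1
}

# Example usage (numbers taken from the tree diagram above):
#   majmin_to_name 66:160
# would print the matching sd* name on your system.
```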

 

Please run "dmsetup ls --tree" and paste to this thread the tree diagram that has blc1bl01-vg05/lvol1 at the top.

 

It is also possible to tell LVM to prefer certain kinds of names over others. In /etc/lvm/lvm.conf, in the "devices" section, there should be a setting "preferred_names". Unless the maintainers of your Linux distribution have chosen otherwise, it is an empty list by default. Next to the default setting, there is usually a commented-out example that makes the LVM tools more helpful in a multipathed environment:

 

    # If several entries in the scanned directories correspond to the
    # same block device and the tools need to display a name for device,
    # all the pathnames are matched against each item in the following
    # list of regular expressions in turn and the first match is used.
    preferred_names = [ ]

    # Try to avoid using undescriptive /dev/dm-N names, if present.
    # preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]

Just comment out the first preferred_names line and uncomment the second one, and the unhelpful /dev/dm-* names in the output of the LVM commands should immediately be replaced by more informative names.
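In other words, the edit amounts to swapping which of the two lines is commented out, and the effect can be checked immediately (no reboot or VG reactivation needed):

```shell
# In /etc/lvm/lvm.conf, "devices" section, the result of the edit:
#     # preferred_names = [ ]
#     preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
#
# Then verify: pvs should now show /dev/mapper/* (or /dev/mpath/*) names
# instead of the opaque /dev/dm-* names:
pvs -o pv_name,vg_name,pv_size,pv_free
```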

LVM itself operates within the kernel, where all the device references it actually uses are based on major/minor device numbers (or the kernel-internal structures corresponding to them), so changing this setting only changes how the devices are displayed to the user. As a result, it is safe to change even while VGs are active and filesystems are mounted.

 

(If the setting does not exist at all, your Linux distribution probably has a rather old version of the LVM userspace tools.)

 

Device Mapper can be confusing if you are not familiar with it, but it is a very powerful feature. With it, you can "stack" storage features (like encryption, LVM, RAID, multipathing) on top of each other practically in any order you wish.

MK
Occasional Contributor
BR70551
Posts: 5
Registered: ‎11-04-2011
Message 5 of 5 (313 Views)

Re: lvconvert -m1 shows Unknown message

Hi MK

 

Thank you for your detailed answer.

 

 

Here is the output of "dmsetup ls --tree":

...

blc1bl01--vg05-lvol1 (253:27)
 |- 3600508b40006e6e00001100000720000 (253:26)   -> dm-26
 |   |- (66:160)  -> /dev/sdaq
 |   |- (66:96)   -> /dev/sdam
 |   |- (66:176)  -> /dev/sdar
 |   \- (66:112)  -> /dev/sdan
 \- 3600508b40010533b0000d00000300000 (253:25)   -> dm-25
     |- (66:144)  -> /dev/sdap
     |- (66:80)   -> /dev/sdal
     |- (66:128)  -> /dev/sdac
     \- (66:64)   -> /dev/sdak

 

The fact is that we have installed the multipath-tools on our blade systems because of the SAN connections to the EVA 6500.

 

I think I must check my LVM userspace tools, because I cannot find any entry in my lvm.conf that looks like

preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]

or anything similar.

That seems strange, because a 2.6 kernel should normally come with current LVM tools, and my kernel is 2.6.32.24.

 

Kind regards

 

Franziska
