This blog shares our knowledge and expertise on Linux System Administration and VMware Administration.


Monday, September 3, 2018

How to reduce (lvreduce) the Logical Volume in Linux Server


Situation

Here, /app1 is a 100 GB filesystem. We need to reduce it to 70 GB.

[root@testserver ~]# df -hP
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_main-lv_root     31G  2.0G   28G   7% /
tmpfs                          3.7G     0  3.7G   0% /dev/shm
/dev/xvdb1                     477M   93M  355M  21% /boot
/dev/mapper/vg_DPFERT-lv_app1   99G   11G   84G  11% /app1


[root@testserver ~]# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  vg_DPFERT   1   1   0 wz--n- 100.00g    0
  vg_main     1   2   0 wz--n-  31.50g    0

[root@testserver ~]# fdisk -l /dev/xvdc

Disk /dev/xvdc: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

    Device Boot      Start         End      Blocks   Id  System
/dev/xvdc1               1       13054   104856254+  8e  Linux LVM

STEP 1 : First Unmount the LV
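
For example, using the /app1 mount point from the df output above:

[root@testserver ~]# umount /app1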

STEP 2 : Run e2fsck command to check the file system

[root@testserver ~]# e2fsck -f /dev/vg_DPFERT/lv_app1
e2fsck 1.43-WIP (20-Jun-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vg_DPFERT/lv_app1: 57971/6553600 files (0.2% non-contiguous), 3073907/26213376 blocks


STEP 3 : Run the resize2fs command to shrink the file system to the target size.

[root@testserver ~]# resize2fs /dev/vg_DPFERT/lv_app1 70G 
resize2fs 1.43-WIP (20-Jun-2013)
Resizing the filesystem on /dev/vg_DPFERT/lv_app1 to 18350080 (4k) blocks.
The filesystem on /dev/vg_DPFERT/lv_app1 is now 18350080 blocks long.

STEP 4 : Run the lvreduce command to reduce the LV size.

[root@testserver ~]# lvreduce -L 70G /dev/vg_DPFERT/lv_app1
  WARNING: Reducing active logical volume to 70.00 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv_app1? [y/n]: y
  Size of logical volume vg_DPFERT/lv_app1 changed from 100.00 GiB (25599 extents) to 70.00 GiB (17920 extents).
  Logical volume lv_app1 successfully resized
[root@testserver ~]#

STEP 5 : Mount the LV
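
For example, remounting the device on its original mount point:

[root@testserver ~]# mount /dev/vg_DPFERT/lv_app1 /app1
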
[root@testserver ~]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_app1 vg_DPFERT -wi-a-----  70.00g
  lv_root vg_main   -wi-ao----  31.22g
  lv_swap vg_main   -wi-ao---- 288.00m

[root@testserver ~]# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  vg_DPFERT   1   1   0 wz--n- 100.00g 30.00g
  vg_main     1   2   0 wz--n-  31.50g      0
  
[root@testserver home]# df -hP
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_main-lv_root     31G  2.0G   28G   7% /
tmpfs                          3.7G     0  3.7G   0% /dev/shm
/dev/xvdb1                     477M   93M  355M  21% /boot
/dev/mapper/vg_DPFERT-lv_app1   69G   11G   56G  16% /app1

Hope it helps.

Sunday, January 7, 2018

How to modify the Default Physical extent size of Physical Volume?


There are two situations for modifying or setting the default physical extent (PE) size in LVM.

1. Create a volume group with a new physical extent size. Use this method before creating any logical volumes on that volume group.

# vgcreate -s PE_SIZE VG_NAME PV ...

Here,
-s | --physicalextentsize Size[m|UNIT] - Sets the physical extent size of PVs in the VG. The value must be either a power of 2 of at least 1 sector (where the sector size is the largest sector size of the PVs currently used in the VG), or at least 128 KiB. Once this value has been set, it is difficult to change without recreating the VG, unless no extents need moving.
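
For example, to create a volume group with a 32 MiB extent size (an illustrative sketch; the vg_data name and /dev/sdb1 device are hypothetical):

# vgcreate -s 32M vg_data /dev/sdb1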

2. Modify the existing physical extent size of a volume group (a command sketch follows the steps below).

- remove all Logical Volumes of the Volume Group with lvremove
- do a vgreduce on that VG
- run "vgchange -an" on that VG
- vgremove that VG
- set up the VG with the larger PE size (vgcreate -s PE_SIZE)
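
A minimal sketch of that sequence, assuming a VG named vg_data on /dev/sdb1 (hypothetical names; this wipes the VG, so back up the data first):

# lvremove vg_data                     # remove all LVs in the VG
# vgchange -an vg_data                 # deactivate the VG
# vgremove vg_data                     # remove the VG
# vgcreate -s 32M vg_data /dev/sdb1    # recreate it with the larger PE size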

A more "Forceful" approach is:
- "vgchange -a n" on the VG
- "pvcreate -ff" on all its PVs
- setup the VG with large PE size (vgcreate -s PE_SIZE)

Saturday, January 6, 2018

Display what extents are allocated on the physical volume to logical volume

We can see which extents on a physical volume are allocated to which logical volume.

NAME
       pvdisplay - Display various attributes of physical volume(s)
   
pvdisplay shows the attributes of PVs, such as size, physical extent size, space used for the VG descriptor area, etc. Here, pvdisplay with the --maps option shows which extents on the physical volume are allocated to which LV.

[root@nsk postfix]# pvdisplay --maps /dev/sda2
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               centos
  PV Size               <19.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              4863
  Free PE               0
  Allocated PE          4863
  PV UUID               DQjmHN-fso4-Mu4t-3l1V-Yogj-ksTH-ROFiK7

  --- Physical Segments ---
  Physical extent 0 to 511:
    Logical volume      /dev/centos/swap
    Logical extents      0 to 511
  Physical extent 512 to 4862:
    Logical volume      /dev/centos/root
    Logical extents      0 to 4350

Here,
         -m  --maps  Display the mapping of physical extents to LVs and logical extents.

Sunday, December 17, 2017

Understanding the LVM Configuration Files in Linux


The following files are part of LVM configuration:

/etc/lvm/lvm.conf
    Central configuration file read by the tools.

/etc/lvm/lvm_hosttag.conf
    For each host tag, an extra configuration file is read if it exists: lvm_hosttag.conf. If that file defines new tags,
    then further configuration files will be appended to the list of files to read in.
In addition to the LVM configuration files, a system running LVM includes the following files that affect LVM system setup:

/etc/lvm/cache
    Device name filter cache file (configurable).

/etc/lvm/backup/
    Directory for automatic volume group metadata backups (configurable).

/etc/lvm/archive/
    Directory for automatic volume group metadata archives (configurable with regard to directory path and archive history depth).

/var/lock/lvm/
    In a single-host configuration, lock files prevent parallel tool runs from corrupting the metadata; in a cluster, a cluster-wide DLM is used.

Wednesday, November 1, 2017

How to Add and Remove Object Tags in Logical Volume Manager (Linux Server)?


To add or delete object tags, use the following commands.

To add or delete tags from physical volumes, use the --addtag or --deltag option of the pvchange command.
To add or delete tags from volume groups, use the --addtag or --deltag option of the vgchange or vgcreate commands.
To add or delete tags from logical volumes, use the --addtag or --deltag option of the lvchange or lvcreate commands.

As of the Red Hat Enterprise Linux 6.1 release, you can specify multiple --addtag and --deltag arguments within a single pvchange, vgchange, or lvchange command. For example, the following command deletes the tags T9 and T10 and adds the tags T13 and T14 to the volume group VGTEST:

vgchange --deltag T9 --deltag T10 --addtag T13 --addtag T14 VGTEST
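
To confirm the change, the tags on a volume group can be listed with the vgs tags field (a small illustrative example using the VGTEST name from above):

# vgs -o +vg_tags VGTEST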

Thursday, October 19, 2017

How to Activate the Logical Volumes on Individual Cluster Member Nodes in a RHEL Cluster?

If you have LVM installed in a cluster environment, you may at times need to activate logical volumes exclusively on one node.

To activate logical volumes exclusively on one node, use the lvchange -aey command. Alternatively, you can use the lvchange -aly command to activate logical volumes only on the local node, but not exclusively.


You can later activate them on additional nodes concurrently.
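
A brief illustration with a hypothetical clustered LV named vg_cluster/lv_data:

# lvchange -aey vg_cluster/lv_data    # exclusive activation on this node
# lvchange -aly vg_cluster/lv_data    # local, non-exclusive activation (alternative)
# lvchange -an vg_cluster/lv_data     # deactivate again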

Wednesday, September 13, 2017

How to Clone the LVM2 Volume Groups?

These instructions describe the steps required to clone an LVM2 volume
group by creating a duplicate copy of the physical storage (PVs). This
requires that the VG be deactivated while the clone is created and
renamed.

The volume group being cloned, CloneVG, consists of two PVs originally present
on /dev/testpv0 and /dev/testpv1. A new volume group named CloneVG-clone will
be created on devices /dev/testpv2 and /dev/testpv3.

1. Deactivate the VG

       # vgchange -an CloneVG

2. Create the cloned PV(s)

       E.g., dd, clone LUNs on storage, break mirror etc.

       # dd if=/dev/testpv0 of=/dev/testpv2
       # dd if=/dev/testpv1 of=/dev/testpv3

3. For each original PV, create a filter entry in /etc/lvm/lvm.conf to
temporarily mask the PV from the LVM tools.

Preserve a copy of the original filtering rules so that it can be
restored at the end of the process, for example:

       # cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.orig

To exclude the original devices /dev/testpv0 and /dev/testpv1, the
filter line could look like this:

       filter = [ "r|/dev/testpv0|", "r|/dev/testpv1|", "a|.*|" ]

Or, using a regex to match both devices with a single rule:

       filter = [ "r|/dev/loop[01]|", "a|.*|" ]

Once the filters are set up, remove the LVM persistent cache:

       # rm -f /etc/lvm/.cache [versions before 2.02.23]
OR
       # rm -f /etc/lvm/cache/.cache [version 2.02.23 or later]

Verify that the filtering is correct by running pvscan:

       # pvscan
         PV /dev/testpv2   VG CloneVG         lvm2 [60.00 MB / 40.00 MB free]
         PV /dev/testpv3   VG CloneVG         lvm2 [60.00 MB / 40.00 MB free]
         Total: 2 [120.00 MB] / in use: 2 [120.00 MB] / in no VG: 0 [0   ]

Only the cloned PVs should be displayed. If the original PVs appear,
check the syntax of the filtering rule and clear the persistent cache
again.

4. Modify the cloned volume group name, ID and physical volume IDs to
avoid name and UUID clashes between the original and cloned devices:

For each cloned physical volume, run:

       # pvchange --uuid /path/to/physical/volume

This will generate a new random UUID for the specified physical volume
and update the volume group metadata to reflect the changed identity.

For example:

       # pvchange --uuid /dev/testpv2
         Physical volume "/dev/testpv2" changed
         1 physical volume changed / 0 physical volumes not changed
       # pvchange --uuid /dev/testpv3
         Physical volume "/dev/testpv3" changed
         1 physical volume changed / 0 physical volumes not changed

Generate a new UUID for the entire volume group using vgchange:

       # vgchange --uuid CloneVG
         Volume group "CloneVG" successfully changed

Finally, rename the cloned VG:

       # vgrename CloneVG CloneVG-clone

5. Remove filtering rules & verify both VGs co-exist correctly

Restore the original filtering configuration and wipe the persistent cache:

       # cp /etc/lvm/lvm.conf.orig /etc/lvm/lvm.conf
       cp: overwrite `/etc/lvm/lvm.conf'? y
       # rm -f /etc/lvm/.cache

Run pvscan to verify the new and old VGs are correctly displayed:

       # pvscan
         PV /dev/testpv0   VG CloneVG         lvm2 [60.00 MB / 40.00 MB free]
         PV /dev/testpv1   VG CloneVG         lvm2 [60.00 MB / 40.00 MB free]

         PV /dev/testpv2   VG CloneVG-clone   lvm2 [60.00 MB / 40.00 MB free]
         PV /dev/testpv3   VG CloneVG-clone   lvm2 [60.00 MB / 40.00 MB free]
         Total: 4 [240.00 MB] / in use: 4 [240.00 MB] / in no VG: 0 [0  ]

6. Activate volume groups

Both the original and cloned VGs can now be activated simultaneously:

       # vgchange -ay CloneVG
         1 logical volume(s) in volume group "CloneVG" now active

       # vgchange -ay CloneVG-clone
         1 logical volume(s) in volume group "CloneVG-clone" now active

Wednesday, June 15, 2016

LVM export and import: How to move a VG to another Machine or Group

It is quite easy to move a whole volume group to another system if, for example, a user department acquires a new server. To do this, we use the vgexport and vgimport commands.

    Unmount the file system
    Mark the volume group inactive
    Export the volume group
    Import the volume group
    Activate the volume group
    Mount the file system
Exporting Volume Group

1. Unmount the file system

First, make sure no users are accessing files on the active volume, then unmount it:

# df -h

Filesystem                        Size  Used Avail Use% Mounted on
/dev/sda1                          25G  4.9G   19G  21% /
tmpfs                             593M     0  593M   0% /dev/shm
/dev/mapper/vg--nagavg-lvm--naga  664M  542M   90M  86% /lvm-naga

# umount /lvm-naga

2. Mark the Volume Group inactive

Marking the volume group inactive removes it from the kernel and prevents any further activity on it.

# vgchange -an vg-nagavg

  0 logical volume(s) in volume group "vg-nagavg" now active

3. Export the VG

It is now necessary to export the volume group; this prevents it from being accessed on the "old" host system and prepares it to be removed.

# vgexport vg-nagavg

  Volume group "vg-nagavg" successfully exported

When the machine is next shut down, the disk can be unplugged and then connected to its new machine.
Importing the Volume Group (VG)

When plugged into the new system, the disk may show up as /dev/sdb or another device name depending on the system, so an initial pvscan shows:

1. # pvscan

PV /dev/sda3 is in exported VG vg-nagavg [580.00 MB / 0 free]
PV /dev/sda4 is in exported VG vg-nagavg [484.00 MB / 312.00 MB free]
PV /dev/sda5 is in exported VG vg-nagavg [288.00 MB / 288.00 MB free]
 Total: 3 [1.32 GB] / in use: 3 [1.32 GB] / in no VG: 0 [0]

2. We can now import the Volume Group and then mount the file system.

If you are importing on an LVM2 system run,

# vgimport vg-nagavg

Volume group "vg-nagavg" successfully imported
If you are importing on an LVM1 system, specify the PVs that need to be imported:

# vgimport vg-nagavg /dev/sda3 /dev/sda4 /dev/sda5

3. Activate the Volume Group

You must activate the volume group before you can access it

# vgchange -ay vg-nagavg

1 logical volume(s) in volume group "vg-nagavg" now active

Now mount the file system

# mount /dev/vg-nagavg/lvm-naga /LVM-import/

# mount

/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/mapper/vg--nagavg-lvm--naga on /LVM-import type ext3 (rw)

[root@localhost ~]# df -h

Filesystem                        Size  Used Avail Use% Mounted on
/dev/sda1                          25G  4.9G   19G  21% /
tmpfs                             593M     0  593M   0% /dev/shm
/dev/mapper/vg--nagavg-lvm--naga  664M  542M   90M  86% /LVM-import

Using vgscan

# pvs

  PV         VG        Fmt  Attr PSize   PFree
  /dev/sda3  vg-nagavg lvm2 ax-  580.00M       0
  /dev/sda4  vg-nagavg lvm2 ax-  484.00M 312.00M
  /dev/sda5  vg-nagavg lvm2 ax-  288.00M 288.00M

pvs shows which disks are attached to the VG.

# vgscan

Reading all physical volumes.  This may take a while...
Found exported volume group "vg-nagavg" using metadata type lvm2

# vgimport vg-nagavg
Volume group "vg-nagavg" successfully imported

# vgchange -ay vg-nagavg

1 logical volume(s) in volume group "vg-nagavg" now active

# mkdir /LVM-vgscan
# mount /dev/vg-nagavg/lvm-naga /LVM-vgscan

# df -h

Filesystem                        Size  Used Avail Use% Mounted on
/dev/sda1                          25G  4.9G   19G  21% /
tmpfs                             593M     0  593M   0% /dev/shm
/dev/mapper/vg--nagavg-lvm--naga  664M  542M   90M  86% /LVM-vgscan


# mount

/dev/sda1 on / type ext3 (rw)

proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/mapper/vg--nagavg-lvm--naga on /LVM-vgscan type ext3 (rw)


vgscan is used when the VG has not been exported: first unmount the logical volume, move the disk and attach it to the other system, and then run vgscan.

It detects the volume group on the disk so it can be activated and mounted on the new system.

Thursday, November 5, 2015

Explain about the LVM DUMPCONFIG command in Linux Server?

The lvm dumpconfig Command

You can display the current LVM configuration, or save the configuration to a file, with the dumpconfig option of the lvm command. The lvm dumpconfig command provides a variety of features, including the following:


1. You can dump the current LVM configuration merged with any tag configuration files.
2. You can dump all current configuration settings for which the values differ from the defaults.
3. You can dump all new configuration settings introduced in the current LVM version or in a specific LVM version.
4. You can dump all profilable configuration settings, either in their entirety or separately for command and metadata profiles.
5. You can dump only the configuration settings for a specific version of LVM.
6. You can validate the current configuration.

For a full list of supported features and information on specifying the lvm dumpconfig options, see the lvm-dumpconfig man page.
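
A few illustrative invocations (the exact options available depend on the LVM version installed):

# lvm dumpconfig                    # dump the full current configuration
# lvm dumpconfig --type diff        # show only settings that differ from the defaults
# lvm dumpconfig --validate         # validate the current configuration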

What are the Metadata Contents available in LVM?

The volume group metadata contains:
    ·         Information about how and when it was created
    ·         Information about the volume group:

The volume group information contains:
    ·         Name and unique id
    ·         A version number which is incremented whenever the metadata gets updated
    ·         Any properties: Read/Write? Resizeable?
    ·         Any administrative limit on the number of physical volumes and logical volumes it may contain
    ·         The extent size (in units of sectors which are defined as 512 bytes)

An unordered list of physical volumes making up the volume group, each with:
    ·         Its UUID, used to determine the block device containing it
    ·         Any properties, such as whether the physical volume is allocatable
    ·         The offset to the start of the first extent within the physical volume (in sectors)
    ·         The number of extents

 An unordered list of logical volumes, each consisting of:
        An ordered list of logical volume segments. For each segment, the metadata includes a mapping applied to an ordered list of physical volume segments or logical volume segments.

Sample Metadata Contents.

# Generated by LVM2 version 2.02.88(2)-RHEL5 (2012-01-20): Sat Mar 21 15:44:51 2015

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/usr/sbin/vgs --noheadings -o name'"

creation_host = "testserver.com"    # Linux testserver.com 2.6.32-300.10.1.el5uek #1 SMP Wed Feb 22 17:37:40 EST 2012 x86_64
creation_time = 1426945491      # Sat Mar 21 15:44:51 2015

VolGroup00 {
        id = "ZfQCQ1-suTc-ykV9-TwvN-ACpB-XcEM-NuWlnE"
        seqno = 3
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 65536             # 32 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "36bcud-E3uI-NPeG-BfTe-ePx0-FEpQ-un5N5F"
                        device = "/dev/xvda2"   # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 104647410    # 49.8998 Gigabytes
                        pe_start = 384
                        pe_count = 1596 # 49.875 Gigabytes
                }
        }
        logical_volumes {

                LogVol00 {
                        id = "SWOjo1-qFZZ-CztY-CSXb-zQdX-pwRH-jDNI3o"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 1024     # 32 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                }
                LogVol01 {
                        id = "LoJOLg-5TDC-5ity-l5a6-qLJ5-fuju-oRRzWb"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 572      # 17.875 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 1024
                                ]
                        }
                }
        }
}

Monday, November 2, 2015

How to resolve the Insufficient Free Extents for a Logical Volume in Linux Server?

You may get the error message "Insufficient free extents" when creating a logical volume, even when you think you have enough extents based on the output of the vgdisplay or vgs commands. This is because these commands round figures to two decimal places to provide human-readable output. To specify an exact size, use the free physical extent count instead of some multiple of bytes to determine the size of the logical volume.

The vgdisplay command, by default, includes this line of output that indicates the free physical extents.

# vgdisplay
  --- Volume group ---
  ...
  Free  PE / Size       8780 / 34.30 GB

Alternately, you can use the vg_free_count and vg_extent_count arguments of the vgs command to display the free extents
and the total number of extents.

[root@tng3-1 ~]# vgs -o +vg_free_count,vg_extent_count
  VG     #PV #LV #SN Attr   VSize  VFree  Free #Ext
  testvg   2   0   0 wz--n- 34.30G 34.30G 8780 8780

With 8780 free physical extents, you can run the following command, using the lower-case l argument to use extents instead of bytes:

# lvcreate -l8780 -n testlv testvg

This uses all the free extents in the volume group.

# vgs -o +vg_free_count,vg_extent_count
  VG     #PV #LV #SN Attr   VSize  VFree Free #Ext
  testvg   2   1   0 wz--n- 34.30G    0     0 8780

Alternately, you can create a logical volume that uses a percentage of the remaining free space in the volume group by using the -l argument of the lvcreate command, as shown below.
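
For example, the following would create a logical volume using all of the remaining free space in the volume group (reusing the testvg and testlv names from above):

# lvcreate -l 100%FREE -n testlv testvg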

How to Recover from LVM Mirror Failure in Linux Server?

This section provides an example of recovering from a situation where one leg of an LVM mirrored volume fails because the underlying device for a physical volume goes down. When a mirror leg fails, LVM converts the mirrored volume into a linear volume, which continues to operate as before but without the mirrored redundancy. At that point, you can add a new disk device to the system to use as a replacement physical device and rebuild the mirror.

The following command creates the physical volumes which will be used for the mirror.

[root@test ~]# pvcreate /dev/sd[abcdef][12]
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sda2" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdb2" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdc2" successfully created
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sdd2" successfully created
  Physical volume "/dev/sde1" successfully created
  Physical volume "/dev/sde2" successfully created
  Physical volume "/dev/sdf1" successfully created
  Physical volume "/dev/sdf2" successfully created

The following commands create the volume group vg and the mirrored volume lvgroupfs:

[root@test ~]# vgcreate vg /dev/sd[abcdef][12]
  Volume group "vg" successfully created


[root@test ~]# lvcreate -L 750M -n lvgroupfs -m 1 vg /dev/sda1 /dev/sdb1 /dev/sdc1
  Rounding up size to full physical extent 752.00 MB
  Logical volume "lvgroupfs" created

We can use the lvs command to verify the layout of the mirrored volume and the underlying devices for the mirror leg and the mirror log. Note that in the first example the mirror is not yet completely synced; you should wait until the Copy% field displays 100.00 before continuing.

[root@test ~]# lvs -a -o +devices
  LV                                  VG   Attr   LSize   Origin Snap%  Move Log          Copy% Devices
  lvgroupfs                        vg   mwi-a- 752.00M                  lvgroupfs_mlog 21.28 lvgroupfs_mimage_0(0),lvgroupfs_mimage_1(0)
  [lvgroupfs_mimage_0]   vg   iwi-ao 752.00M                                       /dev/sda1(0)
  [lvgroupfs_mimage_1]   vg   iwi-ao 752.00M                                       /dev/sdb1(0)
  [lvgroupfs_mlog]            vg   lwi-ao   4.00M                                       /dev/sdc1(0)

[root@test ~]# lvs -a -o +devices
  LV                                VG   Attr   LSize   Origin Snap%  Move Log          Copy%  Devices
  lvgroupfs                      vg   mwi-a- 752.00M                  lvgroupfs_mlog 100.00  lvgroupfs_mimage_0(0),lvgroupfs_mimage_1(0)
  [lvgroupfs_mimage_0] vg   iwi-ao 752.00M                                        /dev/sda1(0)
  [lvgroupfs_mimage_1] vg   iwi-ao 752.00M                                        /dev/sdb1(0)
  [lvgroupfs_mlog]          vg   lwi-ao   4.00M                                       /dev/sdc1(0)

In this example, the primary leg of the mirror, /dev/sda1, fails. Any write activity to the mirrored volume causes LVM to detect the failed mirror. When this occurs, LVM converts the mirror into a single linear volume. In this case, to trigger the conversion, we execute a dd command:

[root@test ~]# dd if=/dev/zero of=/dev/vg/lvgroupfs count=10
10+0 records in
10+0 records out

You can use the lvs command to verify that the device is now a linear device. Because of the failed disk, I/O errors occur.

[root@test ~]# lvs -a -o +devices
  /dev/sda1: read failed after 0 of 2048 at 0: Input/output error
  /dev/sda2: read failed after 0 of 2048 at 0: Input/output error
  LV                     VG   Attr   LSize   Origin Snap%  Move Log Copy%  Devices
  lvgroupfs           vg   -wi-a- 752.00M                               /dev/sdb1(0)

 At this point you should still be able to use the logical volume, but there will be no mirror redundancy.

To rebuild the mirrored volume, you replace the broken drive and recreate the physical volume. If you use the same disk rather than replacing it with a new one, you will see "inconsistent" warnings when you run the pvcreate command.

[root@test ~]# pvcreate /dev/sda[12]
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sda2" successfully created

[root@test ~]# pvscan
  PV /dev/sdb1   VG vg   lvm2 [67.83 GB / 67.10 GB free]
  PV /dev/sdb2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdc1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdc2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdd1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdd2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sde1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sde2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdf1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdf2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sda1              lvm2 [603.94 GB]
  PV /dev/sda2              lvm2 [603.94 GB]

Next you extend the original volume group with the new physical volume.

[root@test ~]# vgextend vg /dev/sda[12]
  Volume group "vg" successfully extended

[root@test ~]# pvscan
  PV /dev/sdb1   VG vg   lvm2 [67.83 GB / 67.10 GB free]
  PV /dev/sdb2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdc1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdc2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdd1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdd2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sde1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sde2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdf1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdf2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sda1   VG vg   lvm2 [603.93 GB / 603.93 GB free]
  PV /dev/sda2   VG vg   lvm2 [603.93 GB / 603.93 GB free]

Convert the linear volume back to its original mirrored state.

[root@test ~]# lvconvert -m 1 /dev/vg/lvgroupfs /dev/sda1 /dev/sdb1 /dev/sdc1
  Logical volume mirror converted.

You can use the lvs command to verify that the mirror is restored.

[root@test ~]# lvs -a -o +devices
  LV                                   VG   Attr   LSize   Origin Snap%  Move Log          Copy% Devices
  lvgroupfs                         vg   mwi-a- 752.00M                  lvgroupfs_mlog 68.62 lvgroupfs_mimage_0(0),lvgroupfs_mimage_1(0)
  [lvgroupfs_mimage_0]    vg   iwi-ao 752.00M                                       /dev/sdb1(0)
  [lvgroupfs_mimage_1]     vg   iwi-ao 752.00M                                       /dev/sda1(0)
  [lvgroupfs_mlog]             vg   lwi-ao   4.00M                                       /dev/sdc1(0)