This Blog is to share our knowledge and expertise on Linux System Administration and VMware Administration

Wednesday, June 15, 2016

Brief - multipath Command

Wednesday, June 15, 2016

Use the Linux multipath command to configure and manage multipathed devices.

General syntax for the multipath command:

multipath [-v verbosity] [-d] [-h|-l|-ll|-f|-F] [-p failover|multibus|group_by_serial|group_by_prio|group_by_node_name] 

Configure multipath devices

#multipath

Configure a specific multipath device

#multipath devicename

Replace devicename with the device node name (such as /dev/sdb, as shown by udev in the $DEVNAME variable) or with the device in major:minor format.

Selectively suppress a multipath map, and its device-mapped partitions:

#multipath -f devicename

Display potential multipath devices

Display potential multipath devices, but do not create any devices and do not update device maps (dry run):
#multipath -d

Configure multipath devices and display multipath map information

#multipath -v2  
#multipath -v3

The -v2 option in multipath -v2 -d shows only local disks. Use the -v3 option to show the full path list. For example:

#multipath -v3 -d

Display the status of all multipath devices, or a specified multipath device

#multipath -ll
#multipath -ll devicename

Flush all unused multipath device maps 

Flush all unused multipath device maps (this unresolves the multiple paths; it does not delete the devices):

#multipath -F

Set the group policy

multipath -p [failover|multibus|group_by_serial|group_by_prio|group_by_node_name] 
Group Policy Options for the multipath -p Command

Policy Option        Description
failover             One path per priority group. You can use only one path at a time.
multibus             All paths in one priority group.
group_by_serial      One priority group per detected SCSI serial number.
group_by_prio        One priority group per path priority value. Paths with the same priority are in the same priority group. Priorities are determined by callout programs specified as a global, per-controller, or per-multipath option in the /etc/multipath.conf configuration file.
group_by_node_name   One priority group per target node name. Target node names are fetched from the /sys/class/fc_transport/target*/node_name location.
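When scripting around multipath, the -ll output can be parsed with standard tools. Below is a minimal sketch that counts the path lines per map from sample `multipath -ll` output; the map name, WWID, and path lines in the here-document are illustrative (not from this article), and on a real host you would pipe the live command output in instead.

```shell
# Count paths in `multipath -ll` output. Path lines contain an
# H:C:T:L tuple such as 3:0:0:1, which header/policy lines do not.
count_paths() {
  awk '/[0-9]+:[0-9]+:[0-9]+:[0-9]+/ { n++ } END { print n+0 }'
}

# Illustrative sample of `multipath -ll` output (hypothetical device).
sample_output() {
  cat <<'EOF'
mpatha (360a98000324669436c2b45666c567942) dm-0 NETAPP,LUN
size=2.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=4 status=active
  |- 3:0:0:1 sdb 8:16 active ready running
  `- 4:0:0:1 sdc 8:32 active ready running
EOF
}

sample_output | count_paths   # prints: 2
```

On a live system the same filter would be `multipath -ll | count_paths`.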

How to fix the delay in SSH Login

Wednesday, June 15, 2016
Have you ever faced login delays when connecting to Linux systems over SSH? If so, it is usually caused by the reverse DNS lookup query that the SSH server makes to the DNS server.

We can fix this issue with the following steps:

1) Take a backup of /etc/ssh/sshd_config

# cp -p /etc/ssh/sshd_config /etc/ssh/sshd_config.`date '+%m-%d-%Y_%H:%M:%S'`

2) Edit  /etc/ssh/sshd_config  on sshd  Server

vi /etc/ssh/sshd_config

  And add this DNS option to the file:
  UseDNS no

3) Now add the following line to your /etc/resolv.conf

   options single-request-reopen

4) Restart ssh daemon

 service sshd restart

As an alternative, adding the client's network address to the server's /etc/hosts file can sometimes fix this issue as well.
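The backup-and-edit steps above can be sketched as a small script. This hedged demo operates on a scratch copy under /tmp (a made-up path) rather than the real /etc/ssh/sshd_config, so it is safe to run anywhere; on a real system you would point it at the actual file and then restart sshd.

```shell
# Work on a scratch copy so nothing on the real system is touched.
cfg=/tmp/sshd_config.demo
printf '%s\n' 'Port 22' '#UseDNS yes' > "$cfg"

# 1) Back up the file with a timestamp suffix, as in the article
cp -p "$cfg" "$cfg.$(date '+%m-%d-%Y_%H:%M:%S')"

# 2) Disable reverse DNS lookups (append the option if not already set)
grep -q '^UseDNS' "$cfg" || echo 'UseDNS no' >> "$cfg"

grep '^UseDNS' "$cfg"   # prints: UseDNS no
```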

Compressing files by using Linux commands

Wednesday, June 15, 2016
Compress Files


gzip {filename}
        Compresses the given files using Lempel-Ziv coding (LZ77). Whenever possible, each file is replaced by one with the extension .gz.
        Examples:
        gzip mydata.doc
        gzip *.jpg
        ls -l

bzip2 {filename}
        Compresses files using the Burrows-Wheeler block-sorting text compression algorithm and Huffman coding. Compression is generally considerably better than that achieved by more conventional LZ77/LZ78-based compressors such as gzip. Whenever possible, each file is replaced by one with the extension .bz2.
        Examples:
        bzip2 mydata.doc
        bzip2 *.jpg
        ls -l

zip {.zip-filename} {filename-to-compress}
        zip is a compression and file-packaging utility for Unix/Linux. Each file is stored in a single {.zip-filename} file with the extension .zip.
        Examples:
        zip mydata.zip mydata.doc
        zip data.zip *.doc
        ls -l

tar -zcvf {.tgz-file} {files}
tar -jcvf {.tbz2-file} {files}
        GNU tar is an archiving utility, but it can be used to compress large file(s). GNU tar supports archive compression through both gzip and bzip2. If you have more than two files, it is recommended to use tar instead of gzip or bzip2.
        -z: use gzip compression
        -j: use bzip2 compression
        Examples:
        tar -zcvf data.tgz *.doc
        tar -zcvf pics.tar.gz *.jpg *.png
        tar -jcvf data.tbz2 *.doc
        ls -l
De-Compressing Files

gzip -d {.gz-file}
gunzip {.gz-file}
        Decompresses a file that was created using the gzip command. The file is restored to its original form.
        Examples:
        gzip -d mydata.doc.gz
        gunzip mydata.doc.gz

bzip2 -d {.bz2-file}
bunzip2 {.bz2-file}
        Decompresses a file that was created using the bzip2 command. The file is restored to its original form.
        Examples:
        bzip2 -d mydata.doc.bz2
        bunzip2 mydata.doc.bz2

unzip {.zip-file}
        Extracts compressed files from a ZIP archive.
        Examples:
        unzip file.zip
        unzip data.zip resume.doc

tar -zxvf {.tgz-file}
tar -jxvf {.tbz2-file}
        Untars or decompresses file(s) that were created using tar with the gzip or bzip2 filter.
        Examples:
        tar -zxvf data.tgz
        tar -zxvf pics.tar.gz *.jpg
        tar -jxvf data.tbz2
List the contents of an archive/compressed file

Sometimes you just want to look at the files inside an archive or compressed file. All of the above commands support a file-list option.

gzip -l {.gz-file}
        Lists files in a GZIP archive.
        Example: gzip -l mydata.doc.gz

unzip -l {.zip-file}
        Lists files in a ZIP archive.
        Example: unzip -l mydata.zip

tar -ztvf {.tar.gz-file}
tar -jtvf {.tbz2-file}
        Lists files in a TAR archive.
        Examples:
        tar -ztvf pics.tar.gz
        tar -jtvf data.tbz2
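The compress, list, and extract commands above can be exercised end-to-end in a scratch directory. The file names and contents below are made up for illustration:

```shell
# Round-trip demo of gzip and tar in a throwaway directory.
dir=$(mktemp -d)
cd "$dir"
echo 'hello compression' > mydata.doc

# gzip replaces the file with mydata.doc.gz; gunzip restores it
gzip mydata.doc
gunzip mydata.doc.gz

# tar -z archives and gzip-compresses; -t lists; -x extracts
tar -zcf data.tgz mydata.doc
tar -ztf data.tgz              # prints: mydata.doc
rm mydata.doc
tar -zxf data.tgz
cat mydata.doc                 # prints: hello compression
```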

Explain the Resource limits on UNIX systems

Wednesday, June 15, 2016
On UNIX systems, the ulimit command controls the limits on system resources, such as process data size, process virtual memory, and process file size.

On UNIX systems, each user can either inherit resource limits from the root user or have specific limits defined. When setting resource limits for a process, it is important to know that the limits that apply are those that are in effect for the parent process and not the limits for the user under which the process runs. For example, the IBM Directory server runs under the ldap user account that was created at install time. However, the IBM Directory server is typically started while logged in as the root user. Starting while logged in as the root user means that any limits that are in effect for the ldap user have no effect on the IBM Directory server process unless the IBM Directory server process is started while logged in as the ldap user.

To display the current user’s resource limits, use the ulimit command (see the following example):

# ulimit -Ha
time(seconds) unlimited
file(blocks) 2097151
data(kbytes) unlimited
stack(kbytes) unlimited
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) unlimited

# ulimit -Sa
time(seconds) unlimited
file(blocks) 2097151
data(kbytes) 131072
stack(kbytes) 32768
memory(kbytes) 32768
coredump(blocks) 2097151
nofiles(descriptors) 2000

The -H option instructs the command to display hard resource limits.
The -S option instructs the command to display soft resource limits.

The hard resource limit values are set by the root user using the chuser command for each user. The soft resource limit values can be relaxed by the individual user using the ulimit command, as long as the values are smaller than the hard resource limit values.
Increasing process memory size limit

Enter the following command to check the current process data size and virtual memory size limits:

ulimit -d
ulimit -m

It is recommended that the process data size and virtual memory size be set to unlimited. Setting to unlimited can be done by modifying the following lines in the /etc/security/limits file:

default:
data = -1
rss = -1

For changes to the /etc/security/limits file to take effect, the user must log out of the current login session and log back in.

At minimum, set these size limits to 256 MB, which is the value of 256000 in the /etc/security/limits file. Increase these limits when a larger-than-default IBM Directory server cache is to be used. For more information, see the IBM Directory Server documentation.

In addition to the /etc/security/limits file, the process virtual memory size is limited by the number of segments that a process can use. By default, a process can only use one memory segment, which limits a process to 128 MB. AIX supports a large memory model that is enabled through the LDR_CNTRL environment variable.
Increase file size limit

Enter the following command to check the current file size limits:

ulimit -f

It is recommended that the file size limit be set to unlimited. Setting to unlimited can be done by modifying the following lines in the /etc/security/limits file:

default:
fsize = -1

Create file systems with large file support

The standard file system on AIX has a 2 GB file size limit, regardless of the ulimit setting. One way to enable files larger than the 2 GB limit is to create the file system with the Large File Enabled option. This option can be found through the Add a Journaled File System option of the smit menu. Refer to AIX documentation for additional information and file system options.
Edit/Change the Ulimit  Values for uid:

    Edit the limits file under /etc/security/limits (takes effect after reboot)
    Use the chuser command to change individual user settings (logout and login required)

Here are a few of the flags that can be set:

chuser rss=-1 username
chuser fsize=-1 username
chuser data=-1 username
chuser nofiles=4000 username
chuser stack=8388608 username
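A quick way to see the hard/soft distinction in action: a soft limit can be lowered by any unprivileged user, and a change made in a subshell does not affect the current session. A minimal sketch (the value 256 is arbitrary and must be at or below your hard limit):

```shell
# Lower the open-files soft limit inside a command-substitution
# subshell, then read it back; the parent shell is unaffected.
soft_in_subshell=$( ulimit -n 256; ulimit -n )
echo "$soft_in_subshell"   # prints: 256
```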

Explain the Linux OS Return Codes

Wednesday, June 15, 2016
When computer programs are executed, the operating system creates an abstract entity called a process in which the book-keeping for that program is maintained. In multitasking operating systems such as Unix or Linux, new processes can be created by active processes.

The process that spawns another is called a parent process, while those created are child processes. Child processes run concurrently with the parent process.

The technique of spawning child processes is used to delegate some work to a child process when there is no reason to stop the execution of the parent. When the child finishes executing, it exits by calling the exit system call. This system call facilitates passing the exit status code back to the parent, which can retrieve this value using the wait system call.

There is no straightforward way to look up what a return code means on Linux/AIX operating systems, so I have compiled this indirect reference. Remember: 0 always means success; anything else is an error.
Return Codes:

The exit status or return code of a process in computer programming is a small number passed from a child process (or callee) to a parent process (or caller) when it has finished executing a specific procedure or delegated task. In DOS, this may be referred to as an errorlevel.

0       Success
1       Operation not permitted
2       No such file or directory
3       No such process
4       Interrupted system call
5       Input/output error
6       No such device or address
7       Argument list too long
8       Exec format error
9       Bad file descriptor
10      No child processes
11      Resource temporarily unavailable
12      Cannot allocate memory
13      Permission denied
14      Bad address
15      Block device required
16      Device or resource busy
17      File exists
18      Invalid cross-device link
19      No such device
20      Not a directory
21      Is a directory
22      Invalid argument
23      Too many open files in system
24      Too many open files
25      Inappropriate ioctl for device
26      Text file busy
27      File too large
28      No space left on device
29      Illegal seek
30      Read-only file system
31      Too many links
32      Broken pipe
33      Numerical argument out of domain
34      Numerical result out of range
35      Resource deadlock avoided
36      File name too long
37      No locks available
38      Function not implemented
39      Directory not empty
40      Too many levels of symbolic links
41      Unknown error 41
42      No message of desired type
43      Identifier removed
44      Channel number out of range
45      Level 2 not synchronized
46      Level 3 halted
47      Level 3 reset
48      Link number out of range
49      Protocol driver not attached
50      No CSI structure available
51      Level 2 halted
52      Invalid exchange
53      Invalid request descriptor
54      Exchange full
55      No anode
56      Invalid request code
57      Invalid slot
58      Unknown error 58
59      Bad font file format
60      Device not a stream
61      No data available
62      Timer expired
63      Out of streams resources
64      Machine is not on the network
65      Package not installed
66      Object is remote
67      Link has been severed
68      Advertise error
69      Srmount error
70      Communication error on send
71      Protocol error
72      Multihop attempted
73      RFS specific error
74      Bad message
75      Value too large for defined data type
76      Name not unique on network
77      File descriptor in bad state
78      Remote address changed
79      Can not access a needed shared library
80      Accessing a corrupted shared library
81      .lib section in a.out corrupted
82      Attempting to link in too many shared libraries
83      Cannot exec a shared library directly
84      Invalid or incomplete multibyte or wide character
85      Interrupted system call should be restarted
86      Streams pipe error
87      Too many users
88      Socket operation on non-socket
89      Destination address required
90      Message too long
91      Protocol wrong type for socket
92      Protocol not available
93      Protocol not supported
94      Socket type not supported
95      Operation not supported
96      Protocol family not supported
97      Address family not supported by protocol
98      Address already in use
99      Cannot assign requested address
100     Network is down
101     Network is unreachable
102     Network dropped connection on reset
103     Software caused connection abort
104     Connection reset by peer
105     No buffer space available
106     Transport endpoint is already connected
107     Transport endpoint is not connected
108     Cannot send after transport endpoint shutdown
109     Too many references: cannot splice
110     Connection timed out
111     Connection refused
112     Host is down
113     No route to host
114     Operation already in progress
115     Operation now in progress
116     Stale NFS file handle
117     Structure needs cleaning
118     Not a XENIX named type file
119     No XENIX semaphores available
120     Is a named type file
121     Remote I/O error
122     Disk quota exceeded
123     No medium found
124     Wrong medium type
125     Operation canceled
126     Required key not available
127     Key has expired

Hope it is useful.
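The return codes above surface in the shell through the $? variable, which always holds the exit status of the last command. A small sketch (the exit_status helper is my own, for the demo):

```shell
# Run a command, discard its output, and print its exit status.
exit_status() { "$@" >/dev/null 2>&1; echo $?; }

exit_status true              # prints: 0  (success)
exit_status false             # prints: 1  (generic failure)
exit_status sh -c 'exit 42'   # prints: 42 (a child may return any 0-255 value)
```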

How to enable the Name Service cache Daemon (NSCD)

Wednesday, June 15, 2016
By enabling the Name Service cache Daemon (NSCD) of the operating system, a significant performance improvement can be achieved when using naming services like DNS, NIS, NIS+, LDAP.

Benefit of name service cache daemon (NSCD) for ClearCase
Example:

Without NSCD:

[user@host]$ time cleartool co -nc "/var/tmp/file"
Checked out "/var/tmp/file" from version "/main/10".
real    0m3.355s
user    0m0.020s
sys     0m0.018s

With NSCD

[user@host]$ time cleartool co -nc "/var/tmp/file"
Checked out "/var/tmp/file" from version "/main/11".
real    0m0.556s
user    0m0.021s
sys     0m0.016s
Enabling NSCD

Solaris:

/etc/init.d/nscd start

Linux

service nscd start

AIX:

startsrc -s netcd

Note: In addition to starting nscd, it is mandatory to make sure the service will be started again after a reboot. For instance, on Red Hat and SuSE you can run:

chkconfig nscd  on

For more details on how to configure and or enable NSCD refer to your respective operating system vendor's manpage.

Useful TSM client commands for UNIX Admins

Wednesday, June 15, 2016
TSM (Tivoli Storage Manager) is a centralized, policy-based, enterprise-class data backup and recovery package from IBM Corporation. The software enables the user to insert objects not only via backup, but also through space management and archive tools. It also allows retrieval of the same data via similar restore, recall, and retrieve methods.

As Unix admins we get a lot of requests from application teams for TSM backup restores. I would like to discuss the 14 most useful TSM client commands.

Let's discuss them category-wise: query, backup, and restore.

Generally we use dsmc/dsm for the TSM client commands.

In this article we are going to discuss the following topics with practical examples.

  1) Querying the server

    A. Querying your scheduled backup slot

    B. Querying what files are included / excluded for backup

    C. Querying what partitions have been backed up

    D. Querying what files have been backed up

 2) Backing Up data

    A. Backing up your local filesystems

    B. Backing up selected files
 
 3) Restore Data

    A. Restore a file to its original directory

    B. Restore the most recent backup version of a file

    C. Display a list of active and inactive backup versions of files from which you can select versions to restore

    D. Restore with a directory including subdirectories

    E. Restore the  file under a new name and directory

    F. Restore all files in a directory  as of their current state

    G. Restore all files from a directory that end with .bak to another directory

    H. Restore files specified in the text file to a different location

1) Querying the server
A. Querying your scheduled backup slot

To query your scheduled backup slot enter dsmc q sched (which is short for query schedule). The output should look similar to that below:

#dsmc q sched

    Schedule Name: WEEKLY_UM
    Description: UM weekly incremental backup
   Schedule Style: Classic

         Action: Incremental
        Options:
        Objects:
         Priority: 5

   Next Execution: 135 Hours and 25 Minutes

         Duration: 20 Minutes
          Period: 1 Week

      Day of Week: Thursday
           Expire: Never

B. Querying what files are included / excluded for backup

Use "dsmc q inclexcl" to list output similar to the following:

#dsmc q inclexcl

*** FILE INCLUDE/EXCLUDE ***

Mode Function  Pattern (match from top down)  Source File
---- --------- ------------------------------ -----------------
Excl Filespace /var/run                       /opt/tivoli/tsm/client/ba/bin/incl.excl
Excl Filespace /tmp                           /opt/tivoli/tsm/client/ba/bin/incl.excl
Excl Directory /.../.opera/.../cache4         /opt/tivoli/tsm/client/ba/bin/incl.excl
Excl Directory /.../.mozilla/.../Cache        /opt/tivoli/tsm/client/ba/bin/incl.excl
Excl Directory /.../.netscape/.../cache       /opt/tivoli/tsm/client/ba/bin/incl.excl
Excl Directory /var/tmp                       /opt/tivoli/tsm/client/ba/bin/incl.excl
Excl All       /.../dsmsched.log              /opt/tivoli/tsm/client/ba/bin/incl.excl
Excl All       /.../core                      /opt/tivoli/tsm/client/ba/bin/incl.excl
Excl All       /.../a.out                     /opt/tivoli/tsm/client/ba/bin/incl.excl

C. Querying what partitions have been backed up

Use "dsmc q fi" to list which partitions have been backed up:

** Unix/Linux  **

#dsmc q fi
  #     Last Incr Date      Type    File Space Name
---     --------------      ----    ---------------
  1   02-05-2013 02:13:13   UFS     /       
  2   25-07-2012 12:26:09   UFS     /export/home
  3   02-05-2013 02:13:26   UFS     /home   
  4   16-01-2013 11:26:37   UFS     /scratch 
  5   02-05-2013 02:13:54   UFS     /usr/local
  6   12-02-2013 02:52:41   UFS     /var   

** Netware **
  #     Last Incr Date      Type       File Space Name
---     --------------      ----       ---------------
  1   02-05-2013 00:23:46   NTW:LONG   Oracle_data\usr:
  2   02-07-2013 00:22:42   NDS        Oracle_data\bin:
  3   02-07-2013 00:25:33   NTW:LONG   Oracle_data\apps:
  4   02-07-2013 00:25:11   NTW:LONG   Oracle_data\usr:
D. Querying what files have been backed up

In order to query the files or directories that were backed up earlier, you can use "dsmc q ba".
The example below gives you only the directory information.
#dsmc q ba /home/oraadmin
   Size      Backup Date                Mgmt Class           A/I File
   ----      -----------                ----------           --- ----
   1024  B  15-10-2013 02:52:09          STANDARD             A  /home/oraadmin

If you just add a trailing * (star) as a wildcard in the above query, TSM will only return those files and directories backed up immediately below the directory path given in the query
#dsmc q ba /home/oraadm/*
   Size      Backup Date        Mgmt Class A/I File
   ----      -----------        ---------- --- ----
    512  12-09-2012 19:57:09    STANDARD    A  /home/oraadm/data1.dtf
  1,024  08-12-2012 02:46:53    STANDARD    A  /home/oraadm/data2.dtf
    512  12-09-2012 19:57:09    STANDARD    A  /home/oraadm/data3.dtf
    512  24-04-2002 00:22:56    STANDARD    A  /home/oraadm/data4.dtf

If you want to query all the current files and directories backed up under a directory and all its sub-directories you need to add the -subdir=yes option as below:
#dsmc q ba /home/oraadm/* -subdir=yes
   Size      Backup Date        Mgmt Class A/I File
   ----      -----------        ---------- --- ----
    512  12-09-2012 19:57:09    STANDARD    A  /home/oraadm/data1.dtf
  1,024  08-12-2012 02:46:53    STANDARD    A  /home/oraadm/data2.dtf
    512  12-09-2012 19:57:09    STANDARD    A  /home/oraadm/data3.dtf
    512  24-04-2002 00:22:56    STANDARD    A  /home/oraadm/data4.dtf
  1,024  12-09-2012 19:57:09    STANDARD    A  /home/oraadm/datasmart1/test
  1,024  12-09-2012 19:57:09    STANDARD    A  /home/oraadm/datasmart1/test/test2
 12,048  04-12-2012 02:01:29    STANDARD    A  /home/oraadm/datasmart2/tables
 50,326  30-04-2013 01:35:26    STANDARD    A  /home/oraadm/datasmart3/data_file1
 50,326  27-04-2013 00:28:15    STANDARD    A  /home/oraadm/datasmart3/data_file2
 11,013  24-04-2013 00:22:56    STANDARD    A  /home/oraadm/datasmart3/data_file3

2. Backing Up data
A. Backing up your local filesystems
The syntax for this is "dsmc backup-type filesystem", where backup-type is one of incremental or selective.

Incremental Backup : It is one that backs up only the data that changed since the last backup — be it a full or incremental backup

Selective Backup : A type of backup where only the user specified files and directories are backed up. A selective backup is commonly used for backing up files which change frequently or in situations where the space available to store backups is limited. Also called a partial backup.

I would suggest you always go with incremental. The command is "dsmc incremental" or "dsmc incr", where "incr" is an abbreviation for incremental.

Perform an incremental backup of your client server.

#dsmc incr

Note that this will omit the filesystems mentioned in the exclude file.
To incrementally back up specific file-systems enter:

#dsmc incr /  /usr  /usr/local  /home

To back up entire filesystem irrespective of whether files have changed since the last backup, use the selective command with a wild-card and -subdir=yes as below:

#dsmc sel /*  /usr/*   /home/*  -su=yes
B. Backing up selected files

The procedure for backing up selected files is similar to that for backing up filesystems. Be aware, however, that you cannot use wildcards in directory / folder names:

#dsmc incr /home/oradm/data*/* -su=yes
ANS1071E Invalid domain name entered: '/home/oradm/data*/*'

#dsmc sel /home/oradm/data*/* -su=yes

Selective Backup function invoked.
ANS1081E Invalid search file specification '/home/oradm/data*/*' entered

You can, however, enter several file specifications on the command line, as below:

#dsmc incr /home/surya/*  /usr/bin/* -su=yes
3) Restore Data

We use the "restore" command to restore  files
 A. Restore a file to its original directory

 Restore the /home/oraadm/data.txt  file to its original directory.

 #dsmc restore /home/oraadm/data.txt

 If you do not specify a destination, the files are restored to their original location.
B. Restore the most recent backup version of a file

Here is an example to restore  /home/oraadm/data.txt file, even if the backup is inactive.

#dsmc restore /home/oraadm/data.txt -latest

If the file you are restoring no longer resides on your client machine, and you have run an incremental backup since deleting the file, there is no active backup of the file on the server. In this case, use the latest option to restore the most recent backup version. Tivoli Storage Manager restores the latest backup version, whether it is active or inactive.

C. Display a list of active and inactive backup versions of files from which you can select versions to restore

#dsmc restore "/home/oraadmin/*" -pick -inactive
D. Restore a directory including its subdirectories

Restore the files in the /oradata1 directory and all of its sub-directories (-subdir=yes):
#dsmc restore /oradata1/ -subdir=yes

When restoring a specific path and file, Tivoli Storage Manager recursively restores all sub-directories under that path, and any instances of the specified file that exist under any of those sub-directories.

E. Restore the  file under a new name and directory
In order to restore the /home/oraadm/data.txt file under a new name and directory:

#dsmc restore /home/oraadm/data.txt /tmp/data-renamed.txt
F. Restore all files in a directory  as of their current state

Restore all files in the /usr/oradata/docs directory to their state as of 5:00 PM on October 16, 2013.

#dsmc restore -pitd=10/16/2013 -pitt=17:00:00 /usr/oradata/docs/

Use the pitdate option with the pittime option to establish a point in time for which you want to display or restore the latest version of your backups. Files that were backed up on or before the date and time you specified, and which were not deleted before the date and time you specified, are processed. Backup versions that you create after this date and time are ignored.

G. Restore all files from a directory that end with .bak to another directory
Restore all files from the /usr/oradata/docs/ directory that end with .bak to the /usr/oradata/projects/ directory.

# dsmc restore "/usr/oradata/docs/*.bak" /usr/oradata/projects/

If the destination is a directory, specify the delimiter (/) as the last character of the destination. If you omit the delimiter and your specified source is a directory or a file spec with a wildcard, you will receive an error. If the projects directory does not exist, it is created.
H. Restore files specified in the text file to a different location

Restore files specified in the restorelist.txt file to a different location.

# dsmc restore -filelist=/tmp/restorelist.txt /usr/ora_backup/

The files (entries) listed in the filelist must adhere to the following rules:

    Each entry must be a fully or partially qualified path to a file or directory or a relative path.
    Each entry must be on a new line.
    Do not use wildcard characters.
    Each entry results in the processing of only one object (file or directory).
    If the file name contains any spaces, enclose the file name with quotes.
    The filelist can be an MBCS file or a Unicode file with all Unicode entries.
    Tivoli Storage Manager ignores any entry that is not valid.
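Output from the query commands above lends itself to scripting. Below is a sketch that filters `dsmc q ba` style output down to active ("A") versions; the sample listing mirrors the format shown earlier, the helper names are my own, and on a real client you would pipe live `dsmc q ba` output in instead of the here-document.

```shell
# Print only the active (A) backup versions from `dsmc q ba` output.
# Fields: Size, Date, Time, MgmtClass, A/I flag, File path.
active_files() {
  awk '$5 == "A" { print $6 }'
}

# Illustrative sample mirroring the dsmc q ba listing format.
sample_q_ba() {
  cat <<'EOF'
   Size      Backup Date        Mgmt Class A/I File
   ----      -----------        ---------- --- ----
    512  12-09-2012 19:57:09    STANDARD    A  /home/oraadm/data1.dtf
  1,024  08-12-2012 02:46:53    STANDARD    I  /home/oraadm/data2.dtf
EOF
}

sample_q_ba | active_files   # prints: /home/oraadm/data1.dtf
```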

LVM export and import: How to move a VG to another Machine or Group

Wednesday, June 15, 2016
 It is quite easy to move a whole volume group to another system if, for example, a user department acquires a new server. To do this we use the vgexport and vgimport commands.

    Unmount the file system
    Mark the volume group inactive
    Export the volume group
    Import the volume group
    Activate the volume group
    Mount the file system
Exporting Volume Group

1. Unmount the file system

First make sure no users are accessing files on the active volume, then unmount it:

# df -h

Filesystem                                               Size  Used Avail Use% Mounted on

/dev/sda1                                                 25G  4.9G   19G  21% /
tmpfs                                                        593M     0  593M   0% /dev/shm
/dev/mapper/vg--nagavg-lvm--naga        664M  542M   90M  86% /lvm-naga

# umount /lvm-naga

2. Mark the Volume Group inactive

Marking the volume group inactive removes it from the kernel and prevents any further activity on it.

# vgchange -an vg-nagavg

0 logical volume(s) in volume group "vg-nagavg" now active

3. Export the VG

It is now necessary to export the volume group; this prevents it from being accessed on the "old" host system and prepares it to be removed.

# vgexport vg-nagavg

Volume group "vg-nagavg" successfully exported

When the machine is next shut down, the disk can be unplugged and then connected to its new machine.
Import the Volume Group(VG)

When plugged into the new system, the disk becomes /dev/sdb (or whatever the system assigns), so an initial pvscan shows:

1. # pvscan

PV /dev/sda3 is in exported VG vg-nagavg [580.00 MB / 0 free]
PV /dev/sda4 is in exported VG vg-nagavg [484.00 MB / 312.00 MB free]
PV /dev/sda5 is in exported VG vg-nagavg [288.00 MB / 288.00 MB free]
Total: 3 [1.32 GB] / in use: 3 [1.32 GB] / in no VG: 0 [0]

2. We can now import the volume group (which also activates it) and mount the file system.

If you are importing on an LVM2 system run,

# vgimport vg-nagavg

Volume group "vg-nagavg" successfully imported
If you are importing on an LVM1 system, add the PVs that need to be imported:

# vgimport vg-nagavg /dev/sda3 /dev/sda4 /dev/sda5

3. Activate the Volume Group

You must activate the volume group before you can access it

# vgchange -ay vg-nagavg

1 logical volume(s) in volume group "vg-nagavg" now active

Now mount the file system:

# mount /dev/vg-nagavg/lvm-naga /LVM-import/

# mount

/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/mapper/vg--nagavg-lvm--naga on /LVM-import type ext3 (rw)

[root@localhost ~]# df -h

Filesystem                                             Size  Used Avail Use% Mounted on

/dev/sda1                                               25G  4.9G   19G  21% /
tmpfs                                                      593M     0  593M   0% /dev/shm
/dev/mapper/vg--nagavg-lvm--naga       664M  542M   90M  86% /LVM-import

Using vgscan

# pvs

  PV                 VG        Fmt  Attr   PSize   PFree
  /dev/sda3  vg-nagavg lvm2 ax-  580.00M 0
  /dev/sda4  vg-nagavg lvm2 ax-  484.00M 312.00M
  /dev/sda5  vg-nagavg lvm2 ax-  288.00M 288.00M

# pvs shows which disks are attached to the VG

# vgscan

Reading all physical volumes.  This may take a while...
Found exported volume group "vg-nagavg" using metadata type lvm2

# vgimport vg-nagavg
Volume group "vg-nagavg" successfully imported

# vgchange -ay vg-nagavg

1 logical volume(s) in volume group "vg-nagavg" now active

# mkdir /LVM-vgscan
# mount /dev/vg-nagavg/lvm-naga /LVM-vgscan

# df -h

Filesystem                                                  Size  Used Avail Use% Mounted on
/dev/sda1                                                    25G  4.9G   19G  21% /
tmpfs                                                           593M     0  593M   0% /dev/shm
/dev/mapper/vg--nagavg-lvm--naga            664M  542M   90M  86% /LVM-vgscan


# mount

/dev/sda1 on / type ext3 (rw)

proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/mapper/vg--nagavg-lvm--naga on /LVM-vgscan type ext3 (rw)


vgscan is used when we have not exported the VG: first unmount the logical volume, move the disk, attach it to the other machine, and then run # vgscan.

It will detect the volume group on the disk so that it can be imported and mounted on the new system.
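When several disks are involved, the pvscan output can be filtered to find which PVs belong to an exported VG. A sketch using sample output in the same format as the listing above (the helper names are my own; on a real system you would pipe live `pvscan` output in):

```shell
# Print the PV device names that pvscan reports as being in an
# exported volume group.
exported_pvs() {
  awk '/is in exported VG/ { print $2 }'
}

# Sample pvscan output mirroring the listing in this article.
sample_pvscan() {
  cat <<'EOF'
PV /dev/sda3 is in exported VG vg-nagavg [580.00 MB / 0 free]
PV /dev/sda4 is in exported VG vg-nagavg [484.00 MB / 312.00 MB free]
PV /dev/sda5 is in exported VG vg-nagavg [288.00 MB / 288.00 MB free]
 Total: 3 [1.32 GB] / in use: 3 [1.32 GB] / in no VG: 0 [0]
EOF
}

sample_pvscan | exported_pvs   # prints /dev/sda3, /dev/sda4, /dev/sda5 on separate lines
```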