This Blog is to share our knowledge and expertise on Linux System Administration and VMware Administration

Wednesday, September 13, 2017

Collecting diagnostic information from all ESXi hosts using a PowerCLI script

Wednesday, September 13, 2017
VMware Technical Support routinely requests diagnostic information from you when a support request is handled. This diagnostic information contains product-specific logs, configuration files, and data appropriate to the situation. The information is gathered using a specific script or tool for each product and can include a host support bundle from the ESXi host and a vCenter Server support bundle. Data collected in a host support bundle may be considered sensitive.



This article provides the procedure for obtaining this diagnostic information from all ESX/ESXi hosts using VMware vSphere PowerCLI.

#Variable declaration
$vCenterIPorFQDN = "VCName"
$vCenterUsername = "domain\user" #Any user account that has access to vCenter
$vCenterPassword = "xxxx"
$destination = "C:\Users\user\Desktop\naga\" #Folder where the support bundles will be downloaded
 
Write-Host "Connecting to vCenter" -ForegroundColor "magenta"
Connect-VIServer -Server $vCenterIPorFQDN -User $vCenterUsername -Password $vCenterPassword
 
$hosts = Get-VMHost #Retrieve all hosts from vCenter
 
Write-Host "Downloading vCenter support bundle" -ForegroundColor "magenta"
Get-Log -Bundle -DestinationPath $destination
 
foreach ($esxihost in $hosts){
    Write-Host "Downloading support bundle for ESXi host $($esxihost.Name)" -ForegroundColor "magenta"
    Get-Log -VMHost $esxihost -Bundle -DestinationPath $destination
}
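Once the downloads finish, it is good practice to close the PowerCLI session; Disconnect-VIServer is the standard cmdlet for that:

Disconnect-VIServer -Server $vCenterIPorFQDN -Confirm:$false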


/lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

Wednesday, September 13, 2017
When you encounter the error "/lib/ld-linux.so.2: bad ELF interpreter: No such file or directory", this is how you can fix it:

This happens only on 64-bit systems: a 32-bit program is being run, but the 32-bit libraries are missing from the system, so the fix is simply to install them.
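To confirm that the failing program really is a 32-bit binary, you can inspect it with file (the path below is just a placeholder):

file /path/to/program

A 32-bit program is reported with something like "ELF 32-bit LSB executable" in the output.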

On RHEL:

yum -y install glibc.i686


If the issue happens on Debian/Ubuntu/Mint:

apt-get update
apt-get install ia32-libs
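Note that on newer Debian and Ubuntu releases the ia32-libs package has been removed in favour of multiarch; in that case, something along these lines installs the 32-bit C library (exact package names depend on what the program needs):

dpkg --add-architecture i386
apt-get update
apt-get install libc6:i386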

That's it.

How to Clone an LVM2 Volume Group?

Wednesday, September 13, 2017
These instructions describe the steps required to clone an LVM2 volume
group by creating a duplicate copy of its physical storage (PVs). The
VG must be deactivated while the clone is created and renamed.

The volume group being cloned, CloneVG, consists of two PVs originally present
on /dev/testpv0 and /dev/testpv1. A new volume group named CloneVG-clone will
be created on devices /dev/testpv2 and /dev/testpv3.

1. Deactivate the VG

       # vgchange -an CloneVG

2. Create the cloned PV(s)

       E.g., dd, clone LUNs on storage, break mirror etc.

       # dd if=/dev/testpv0 of=/dev/testpv2
       # dd if=/dev/testpv1 of=/dev/testpv3

3. For each original PV, create a filter entry in /etc/lvm/lvm.conf to
temporarily mask the PV from the LVM tools.

Preserve a copy of the original filtering rules so that it can be
restored at the end of the process, for example:

       # cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.orig

To exclude the original devices /dev/testpv0 and /dev/testpv1, the
filter line could look like this:

       filter = [ "r|/dev/testpv0|", "r|/dev/testpv1|", "a|.*|" ]

Or, using a regex to match both devices with a single rule:

       filter = [ "r|/dev/testpv[01]|", "a|.*|" ]

Once the filters are set up, remove the LVM persistent cache:

       # rm -f /etc/lvm/.cache [versions before 2.02.23]
OR
       # rm -f /etc/lvm/cache/.cache [version 2.02.23 or later]

Verify that the filtering is correct by running pvscan:

       # pvscan
         PV /dev/testpv2   VG CloneVG         lvm2 [60.00 MB / 40.00 MB free]
         PV /dev/testpv3   VG CloneVG         lvm2 [60.00 MB / 40.00 MB free]
         Total: 2 [120.00 MB] / in use: 2 [120.00 MB] / in no VG: 0 [0   ]

Only the cloned PVs should be displayed. If the original PVs appear,
check the syntax of the filtering rule and clear the persistent cache
again.

4. Modify the cloned volume group name, ID and physical volume IDs to
avoid name and UUID clashes between the original and cloned devices:

For each cloned physical volume, run:

       # pvchange --uuid /path/to/physical/volume

This will generate a new random UUID for the specified physical volume
and update the volume group metadata to reflect the changed identity.

For example:

       # pvchange --uuid /dev/testpv2
         Physical volume "/dev/testpv2" changed
         1 physical volume changed / 0 physical volumes not changed
       # pvchange --uuid /dev/testpv3
         Physical volume "/dev/testpv3" changed
         1 physical volume changed / 0 physical volumes not changed

Generate a new UUID for the entire volume group using vgchange:

       # vgchange --uuid CloneVG
         Volume group "CloneVG" successfully changed

Finally, rename the cloned VG:

       # vgrename CloneVG CloneVG-clone
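On reasonably recent LVM2 releases, the UUID changes and the rename in this step can also be done in a single operation with vgimportclone (a sketch, assuming the cloned PVs are the only ones visible through the filter):

       # vgimportclone --basevgname CloneVG-clone /dev/testpv2 /dev/testpv3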

5. Remove filtering rules & verify both VGs co-exist correctly

Restore the original filtering configuration and wipe the persistent cache:

       # cp /etc/lvm/lvm.conf.orig /etc/lvm/lvm.conf
       cp: overwrite `/etc/lvm/lvm.conf'? y
       # rm -f /etc/lvm/.cache

Run pvscan to verify the new and old VGs are correctly displayed:

       # pvscan
         PV /dev/testpv0   VG CloneVG         lvm2 [60.00 MB / 40.00 MB free]
         PV /dev/testpv1   VG CloneVG         lvm2 [60.00 MB / 40.00 MB free]

         PV /dev/testpv2   VG CloneVG-clone   lvm2 [60.00 MB / 40.00 MB free]
         PV /dev/testpv3   VG CloneVG-clone   lvm2 [60.00 MB / 40.00 MB free]
         Total: 4 [240.00 MB] / in use: 4 [240.00 MB] / in no VG: 0 [0  ]

6. Activate volume groups

Both the original and cloned VGs can now be activated simultaneously:

       # vgchange -ay CloneVG
         1 logical volume(s) in volume group "CloneVG" now active

       # vgchange -ay CloneVG-clone
         1 logical volume(s) in volume group "CloneVG-clone" now active
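To double-check which physical devices back the logical volumes in each group, lvs can report them (standard LVM reporting fields):

       # lvs -o vg_name,lv_name,devices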

Friday, September 8, 2017

How to set up passwordless `sudo` on Linux easily

Friday, September 08, 2017
To allow all users to run sudo without a password, edit the sudoers file with visudo and add:

ALL     ALL = (ALL) NOPASSWD: ALL

This allows all users to run all commands without a password.
To set up password-less sudo for a single user called username, add the following line instead:

username ALL=(ALL) NOPASSWD: ALL

To set up password-less sudo for a group called wheel, uncomment (or add) the following line:

%wheel        ALL=(ALL)       NOPASSWD: ALL
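To verify the change, sudo ships with tools to check the sudoers syntax and to list a user's effective rules:

visudo -c
sudo -l -U username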

Adding a user to sudoers in a simple way

Friday, September 08, 2017
Log in to the server as root. The root user is the only one with the privilege to add a new user.

Once you are logged in, run the following commands:

    Create a new user:
    useradd [username]
    Set a password for the user:
    passwd [username]
    Grant root privileges to the user by editing the sudoers file with visudo.

Find the following line:

root ALL=(ALL) ALL

Then add this line below it:

[username] ALL=(ALL) ALL

Alternatively, switch to root with

su - root

and, after entering the root password, append the entry directly to the sudoers file (visudo is the safer way to edit this file, since it validates the syntax):

echo 'USERNAME ALL=(ALL:ALL) ALL' >> /etc/sudoers
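Another common approach, assuming the distribution's admin group is already enabled in sudoers, is to add the user to that group instead of editing the file.

On RHEL/CentOS, where the admin group is wheel:

usermod -aG wheel username

On Debian/Ubuntu, where it is sudo:

usermod -aG sudo username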

Thursday, September 7, 2017

How to create a software RAID & how to replace a failed disk?

Thursday, September 07, 2017

Now let's create our RAID arrays /dev/md0, /dev/md1, and /dev/md2.
/dev/sdb1 will be added to /dev/md0,
/dev/sdb2 to /dev/md1, and /dev/sdb3 to /dev/md2.

/dev/sda1, /dev/sda2, and /dev/sda3 can't be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following three commands:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1

mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2

mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3



The command

cat /proc/mdstat

should now show that you have three degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):

server1:~# cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]

md2 : active raid1 sdb3[1]

      4594496 blocks [2/1] [_U]



md1 : active raid1 sdb2[1]

      497920 blocks [2/1] [_U]



md0 : active raid1 sdb1[1]

      144448 blocks [2/1] [_U]



unused devices: <none>



Next we create filesystems on our RAID arrays (ext3 on /dev/md0 and /dev/md2 and swap on /dev/md1):

mkfs.ext3 /dev/md0

mkswap /dev/md1

mkfs.ext3 /dev/md2

Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig

mdadm --examine --scan >> /etc/mdadm/mdadm.conf



At the bottom of the file you should now see details about our three (degraded) RAID arrays:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# This file was auto-generated on Mon, 26 Nov 2007 21:22:04 +0100
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=72d23d35:35d103e3:01b5209e:be9ff10a
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a50c4299:9e19f9e4:01b5209e:be9ff10a
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=99fee3a5:ae381162:01b5209e:be9ff10a



Now on to the GRUB boot loader. Open /boot/grub/menu.lst and add fallback 1 right after default 0:

vi /boot/grub/menu.lst

[...]
default         0
fallback        1
[...]





This means that if the first kernel (counting starts with 0, so the first kernel is 0) fails to boot, the second kernel will be booted.

In the same file, go to the bottom where you should find some kernel stanzas. Copy the first of them and paste it before the first existing stanza; replace root=/dev/sda3 with root=/dev/md2 and root (hd0,0) with root (hd1,0):



[...]
## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd1)
root            (hd1,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486 (single-user mode)
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro single
initrd          /initrd.img-2.6.18-4-486
savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST



root (hd1,0) refers to /dev/sdb which is already part of our RAID arrays. We will reboot the system in a few moments; the system will then try to boot from our (still degraded) RAID arrays; if it fails, it will boot from /dev/sda (-> fallback 1).

Next we adjust our ramdisk to the new situation:

update-initramfs -u

Now we copy the contents of /dev/sda1 and /dev/sda3 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2):
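If the arrays have not been mounted yet, that can be done first (the mount points /mnt/md0 and /mnt/md2 come from the sentence above; create them if they do not exist):

mkdir -p /mnt/md0 /mnt/md2

mount /dev/md0 /mnt/md0

mount /dev/md2 /mnt/md2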

cp -dpRx / /mnt/md2

cd /boot

cp -dpRx . /mnt/md0



Preparing GRUB (Part 1)

Afterwards we must install the GRUB bootloader on the second hard drive /dev/sdb:

grub

On the GRUB shell, type in the following commands:

root (hd0,0)

grub> root (hd0,0)

 Filesystem type is ext2fs, partition type 0x83

grub>

setup (hd0)

grub> setup (hd0)

 Checking if "/boot/grub/stage1" exists... no

 Checking if "/grub/stage1" exists... yes

 Checking if "/grub/stage2" exists... yes

 Checking if "/grub/e2fs_stage1_5" exists... yes

 Running "embed /grub/e2fs_stage1_5 (hd0)"...  15 sectors are embedded.

succeeded

 Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded

Done.



grub>

root (hd1,0)

grub> root (hd1,0)

 Filesystem type is ext2fs, partition type 0xfd



grub>

setup (hd1)

grub> setup (hd1)

 Checking if "/boot/grub/stage1" exists... no

 Checking if "/grub/stage1" exists... yes

 Checking if "/grub/stage2" exists... yes

 Checking if "/grub/e2fs_stage1_5" exists... yes

 Running "embed /grub/e2fs_stage1_5 (hd1)"...  15 sectors are embedded.

succeeded

 Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded

Done.



grub>

quit

Now, back on the normal shell, we reboot the system and hope that it boots ok from our RAID arrays:

reboot



Preparing /dev/sda

If all goes well, you should now find /dev/md0 and /dev/md2 in the output of

df -h

server1:~# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/md2              4.4G  730M  3.4G  18% /

tmpfs                 126M     0  126M   0% /lib/init/rw

udev                   10M   68K   10M   1% /dev

tmpfs                 126M     0  126M   0% /dev/shm

/dev/md0              137M   17M  114M  13% /boot



The output of

cat /proc/mdstat

should be as follows:

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sdb3[1]

      4594496 blocks [2/1] [_U]



md1 : active raid1 sdb2[1]

      497920 blocks [2/1] [_U]



md0 : active raid1 sdb1[1]

      144448 blocks [2/1] [_U]



unused devices: <none>

server1:~#

Now we must change the partition types of our three partitions on /dev/sda to Linux raid autodetect as well:



fdisk /dev/sda

server1:~# fdisk /dev/sda



Command (m for help): <-- t

Partition number (1-4): <-- 1

Hex code (type L to list codes): <-- fd

Changed system type of partition 1 to fd (Linux raid autodetect)



Command (m for help): <-- t

Partition number (1-4): <-- 2

Hex code (type L to list codes): <-- fd

Changed system type of partition 2 to fd (Linux raid autodetect)



Command (m for help): <-- t

Partition number (1-4): <-- 3

Hex code (type L to list codes): <-- fd

Changed system type of partition 3 to fd (Linux raid autodetect)



Command (m for help): <-- w

The partition table has been altered!



Calling ioctl() to re-read partition table.



WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

server1:~#

Now we can add /dev/sda1, /dev/sda2, and /dev/sda3 to the respective RAID arrays:

mdadm --add /dev/md0 /dev/sda1

mdadm --add /dev/md1 /dev/sda2

mdadm --add /dev/md2 /dev/sda3

Now take a look at

cat /proc/mdstat

... and you should see that the RAID arrays are being synchronized:

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sda3[2] sdb3[1]

      4594496 blocks [2/1] [_U]

      [=====>...............]  recovery = 29.7% (1367040/4594496) finish=0.6min speed=85440K/sec



md1 : active raid1 sda2[0] sdb2[1]

      497920 blocks [2/2] [UU]



md0 : active raid1 sda1[0] sdb1[1]

      144448 blocks [2/2] [UU]



unused devices: <none>

server1:~#

(You can run

watch cat /proc/mdstat

to get an ongoing output of the process. To leave watch, press CTRL+C.)

Wait until the synchronization has finished; the output should then look like this:

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sda3[0] sdb3[1]

      4594496 blocks [2/2] [UU]



md1 : active raid1 sda2[0] sdb2[1]

      497920 blocks [2/2] [UU]



md0 : active raid1 sda1[0] sdb1[1]

      144448 blocks [2/2] [UU]



unused devices: <none>

server1:~#


Then adjust /etc/mdadm/mdadm.conf to the new situation:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf

mdadm --examine --scan >> /etc/mdadm/mdadm.conf

/etc/mdadm/mdadm.conf should now look something like this:



cat /etc/mdadm/mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# This file was auto-generated on Mon, 26 Nov 2007 21:22:04 +0100
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=72d23d35:35d103e3:2b3d68b9:a903a704
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a50c4299:9e19f9e4:2b3d68b9:a903a704
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=99fee3a5:ae381162:2b3d68b9:a903a704



Preparing GRUB (Part 2)

We are almost done. Now we must modify /boot/grub/menu.lst again. Right now it is configured to boot from /dev/sdb (hd1,0). Of course, we still want the system to be able to boot in case /dev/sdb fails. Therefore we copy the first kernel stanza (which contains hd1), paste it below, and replace hd1 with hd0. Furthermore, we comment out all other kernel stanzas so that it looks as follows:

vi /boot/grub/menu.lst

[...]
## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd1)
root            (hd1,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd0)
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

#title          Debian GNU/Linux, kernel 2.6.18-4-486
#root           (hd0,0)
#kernel         /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro
#initrd         /initrd.img-2.6.18-4-486
#savedefault

#title          Debian GNU/Linux, kernel 2.6.18-4-486 (single-user mode)
#root           (hd0,0)
#kernel         /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro single
#initrd         /initrd.img-2.6.18-4-486
#savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST

In the same file, there's a kopt line; replace /dev/sda3 with /dev/md2 (don't remove the # at the beginning of the line!):

[...]
# kopt=root=/dev/md2 ro
[...]



Afterwards, update your ramdisk:

update-initramfs -u

... and reboot the system:

reboot

Testing

Now let's simulate a hard drive failure. It doesn't matter if you select /dev/sda or /dev/sdb here. In this example I assume that /dev/sdb has failed.

To simulate the hard drive failure, you can either shut down the system and remove /dev/sdb from the system, or you (soft-)remove it like this:

mdadm --manage /dev/md0 --fail /dev/sdb1

mdadm --manage /dev/md1 --fail /dev/sdb2

mdadm --manage /dev/md2 --fail /dev/sdb3

mdadm --manage /dev/md0 --remove /dev/sdb1

mdadm --manage /dev/md1 --remove /dev/sdb2

mdadm --manage /dev/md2 --remove /dev/sdb3

Shut down the system:

shutdown -h now

Then put in a new /dev/sdb drive (if you simulate a failure of /dev/sda, you should now put /dev/sdb in /dev/sda's place and connect the new HDD as /dev/sdb!) and boot the system. It should still start without problems.

Now run

cat /proc/mdstat

and you should see that we have a degraded array:

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sda3[0]

      4594496 blocks [2/1] [U_]



md1 : active raid1 sda2[0]

      497920 blocks [2/1] [U_]



md0 : active raid1 sda1[0]

      144448 blocks [2/1] [U_]



unused devices: <none>

server1:~#

The output of

fdisk -l

should look as follows:

server1:~# fdisk -l



Disk /dev/sda: 5368 MB, 5368709120 bytes

255 heads, 63 sectors/track, 652 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          18      144553+  fd  Linux raid autodetect

/dev/sda2              19          80      498015   fd  Linux raid autodetect

/dev/sda3              81         652     4594590   fd  Linux raid autodetect



Disk /dev/sdb: 5368 MB, 5368709120 bytes

255 heads, 63 sectors/track, 652 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



Disk /dev/sdb doesn't contain a valid partition table



Disk /dev/md0: 147 MB, 147914752 bytes

2 heads, 4 sectors/track, 36112 cylinders

Units = cylinders of 8 * 512 = 4096 bytes



Disk /dev/md0 doesn't contain a valid partition table



Disk /dev/md1: 509 MB, 509870080 bytes

2 heads, 4 sectors/track, 124480 cylinders

Units = cylinders of 8 * 512 = 4096 bytes



Disk /dev/md1 doesn't contain a valid partition table



Disk /dev/md2: 4704 MB, 4704763904 bytes

2 heads, 4 sectors/track, 1148624 cylinders

Units = cylinders of 8 * 512 = 4096 bytes



Disk /dev/md2 doesn't contain a valid partition table

server1:~#

Now we copy the partition table of /dev/sda to /dev/sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb

(If you get an error, you can try the --force option:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

)

server1:~# sfdisk -d /dev/sda | sfdisk /dev/sdb

Checking that no-one is using this disk right now ...

OK



Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track



sfdisk: ERROR: sector 0 does not have an msdos signature

 /dev/sdb: unrecognized partition table type

Old situation:

No partitions found

New situation:

Units = sectors of 512 bytes, counting from 0



   Device Boot    Start       End   #sectors  Id  System

/dev/sdb1   *        63    289169     289107  fd  Linux raid autodetect

/dev/sdb2        289170   1285199     996030  fd  Linux raid autodetect

/dev/sdb3       1285200  10474379    9189180  fd  Linux raid autodetect

/dev/sdb4             0         -          0   0  Empty

Successfully wrote the new partition table



Re-reading the partition table ...



If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)

to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1

(See fdisk(8).)

server1:~#

Afterwards we remove any remains of a previous RAID array from /dev/sdb...

mdadm --zero-superblock /dev/sdb1

mdadm --zero-superblock /dev/sdb2

mdadm --zero-superblock /dev/sdb3

... and add /dev/sdb to the RAID array:

mdadm -a /dev/md0 /dev/sdb1

mdadm -a /dev/md1 /dev/sdb2

mdadm -a /dev/md2 /dev/sdb3

Now take a look at

cat /proc/mdstat

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sdb3[2] sda3[0]

      4594496 blocks [2/1] [U_]

      [======>..............]  recovery = 30.8% (1416256/4594496) finish=0.6min speed=83309K/sec



md1 : active raid1 sdb2[1] sda2[0]

      497920 blocks [2/2] [UU]



md0 : active raid1 sdb1[1] sda1[0]

      144448 blocks [2/2] [UU]



unused devices: <none>

server1:~#

Wait until the synchronization has finished:

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sdb3[1] sda3[0]

      4594496 blocks [2/2] [UU]



md1 : active raid1 sdb2[1] sda2[0]

      497920 blocks [2/2] [UU]



md0 : active raid1 sdb1[1] sda1[0]

      144448 blocks [2/2] [UU]



unused devices: <none>

server1:~#

Then run

grub

and install the bootloader on both HDDs:

root (hd0,0)

setup (hd0)

root (hd1,0)

setup (hd1)

quit

That's it. You've just replaced a failed hard drive in your RAID1 array.
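As a final sanity check, mdadm can report the detailed state of each array; for example, for /dev/md2:

mdadm --detail /dev/md2

Healthy members show up as "active sync" in the device list at the end of the output.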