This Blog is to share our knowledge and expertise on Linux System Administration and VMware Administration

Tuesday, October 13, 2015

VMware PowerCLI script to list all snapshots older than a given number of days

Tuesday, October 13, 2015
Connect-VIServer MYVISERVER

 Get-VM | Get-Snapshot | Where { $_.Created -lt (Get-Date).AddDays(-15)}

Just replace 15 with the number of days (n) that fits your requirement.

PowerCLI command to do snapshot consolidation in vSphere 5.x

Tuesday, October 13, 2015
Command to perform snapshot consolidation in vSphere 5.x:
 
Get-VM | Where-Object {$_.Extensiondata.Runtime.ConsolidationNeeded} | ForEach-Object {  $_.ExtensionData.ConsolidateVMDisks()}

Monday, October 12, 2015

How to Kill a VM from the command line?

Monday, October 12, 2015
If you want to power off or kill a virtual machine running on an ESXi host, you can do this using the esxcli command:
  • Connect a console to your ESXi host (e.g. SSH or the ESXi Shell)
To get a list of all VMs running on the host use this command:
esxcli vm process list
The list contains the World ID, Process ID, VMX Cartel ID, UUID, display name and the path to the vmx config file.

To kill / power off the virtual machine use the following command:
esxcli vm process kill --type=xxxx --world-id=yyyyy
for --type=xxxx use: soft, hard or force
for --world-id=yyyyy use the World ID listed by the command above (e.g. World ID 39731 for the example VM "Cold")

Some information about the three possible shutdown methods:
soft = prefer this if you want to shut down "softly"
hard = equal to an immediate shutdown
force = hard kill of the VM
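
Putting the steps together, a typical session might look like this (World ID 39731 is just the example value from the listing above; substitute the one reported on your host):

esxcli vm process list
esxcli vm process kill --type=soft --world-id=39731

Running 'esxcli vm process list' again should no longer show the VM.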



How to restart the management agents on ESXi 6.x

Monday, October 12, 2015
ESXi 6.x

Log in to SSH or Local console as root.
Run these commands:
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
Alternatively:
To reset the management network on a specific VMkernel interface, by default vmk0, run the command:
esxcli network ip interface set -e false -i vmk0; esxcli network ip interface set -e true -i vmk0
Note: Using a semicolon (;) between the two commands ensures the VMkernel interface is disabled and then re-enabled in succession. If the management interface is not running on vmk0, change the above command according to the VMkernel interface used.
To restart all management agents on the host, run the command:

services.sh restart
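
As a quick sanity check after the restart, you can confirm that the agents are running again; on recent ESXi builds the init scripts accept a status argument:

/etc/init.d/hostd status
/etc/init.d/vpxa status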

How to Restart the Management agents on an ESXi or ESX host

Monday, October 12, 2015
Restarting the Management agents on ESXi

To restart the management agents on ESXi:

From the Direct Console User Interface (DCUI):

    Connect to the console of your ESXi host.
    Press F2 to customize the system.
    Log in as root.
    Use the Up/Down arrows to navigate to Restart Management Agents.

    Note: In ESXi 4.1 and ESXi 5.0, 5.1, 5.5 and 6.0 this option is available under Troubleshooting Options.
    Press Enter.
    Press F11 to restart the services.
    When the service restarts, press Enter.
    Press Esc to log out of the system.

From the Local Console or SSH:

    Log in to SSH or Local console as root.
    Run these commands:

    /etc/init.d/hostd restart
    /etc/init.d/vpxa restart

    Note: In ESXi 4.x, run this command to restart the vpxa agent:

    service vmware-vpxa restart
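
    On a classic ESX host (with a Service Console), the management agents are usually restarted with the service command instead; the commonly documented equivalents are:

    service mgmt-vmware restart
    service vmware-vpxa restart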

Friday, October 9, 2015

Step-By-Step Configuration Guide for NAT with IPTABLES

Friday, October 09, 2015
This guide shows how to set up network address translation (NAT) on a Linux system with iptables so that the system can act as a gateway and provide Internet access to multiple hosts on a local area network using a single public IP address. This is achieved by rewriting the source and/or destination addresses of IP packets as they pass through the NAT system.

Assuming that you have:

OS - Any Linux distribution
Software - Iptables
Network Interface Cards: 2

WAN = eth0 with public IP xx.xx.xx.xx (Replace xx.xx.xx.xx with your WAN IP)
LAN = eth1 with private IP yy.yy.yy.yy / 255.255.0.0 (Replace yy.yy.yy.yy with your LAN IP)

Step by Step Procedure:

Step #1. Configure eth0 for the Internet with a public IP (external network / Internet)

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Edit the following in that file.

IPADDR=xx.xx.xx.xx
NETMASK=255.255.255.0    # Provided by the ISP
GATEWAY=xx.xx.xx.1    # Provided by the ISP

Step #2. Configure eth1 for LAN with a Private IP (Internal Local Area network)

vi /etc/sysconfig/network-scripts/ifcfg-eth1

NETMASK=255.255.0.0        # Specify based on your requirement
IPADDR=192.168.2.1        # Gateway of the LAN

Step #3. Gateway Configuration

vi /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=nat
    GATEWAY=xx.xx.xx.1    # Internet Gateway, provided by the ISP

Step #4. DNS Configuration

vi /etc/resolv.conf
    nameserver 4.2.2.2
    nameserver 8.8.8.8
    nameserver 202.56.250.5

Step #5. NAT configuration with IP Tables

    # Delete and flush. Default table is "filter". Others like "nat" must be explicitly stated.
iptables --flush # Flush all the rules in filter and nat tables
iptables --table nat --flush
iptables --delete-chain
# Delete all chains that are not in default filter and nat table
iptables --table nat --delete-chain
# Set up IP FORWARDing and Masquerading
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
iptables --append FORWARD --in-interface eth1 -j ACCEPT
# Enables packet forwarding by kernel
echo 1 > /proc/sys/net/ipv4/ip_forward
#Apply the configuration
service iptables restart
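
One caveat: 'service iptables restart' reloads whatever is stored in /etc/sysconfig/iptables, and the echo into /proc only lasts until reboot. A minimal sketch for making both pieces persistent on RHEL/CentOS:

# Save the rules entered above to /etc/sysconfig/iptables
service iptables save
# Make packet forwarding permanent
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p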

Step #6. Configuring PCs on the network (Clients)

All PCs on the private office network should set their gateway to the local private network IP address of the Linux gateway computer (192.168.2.1 in this example).
The DNS should be set to that of the ISP on the Internet.

Step #7. Testing
# Ping the Gateway of the network and some website from the client system
ping 192.168.2.1
ping www.google.com
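
If the pings fail, it can help to verify on the gateway itself that forwarding is enabled and that the MASQUERADE rule is actually matching packets:

cat /proc/sys/net/ipv4/ip_forward     # should print 1
iptables -t nat -L POSTROUTING -n -v  # packet counters should increase as clients generate traffic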

Thursday, October 8, 2015

KVM Virtualization in RHEL 7 - Detailed document

Thursday, October 08, 2015
Purpose of this document
 
This document describes how to quickly set up and manage a virtualized environment with KVM (Kernel-based Virtual Machine) in Red Hat Enterprise Linux 7. It is not an in-depth discussion of virtualization or KVM, but rather an easy-to-follow step-by-step description of how to install and manage Virtual Machines (VMs) on a physical server.

A very brief overview of KVM

KVM is a Linux kernel module that allows a user space program access to the hardware virtualization features of Intel and AMD processors. With the KVM kernel module, VMs run as ordinary user-space processes.

KVM uses QEMU for I/O hardware emulation. QEMU is a user-space emulator that can emulate a variety of guest processors on host processors with decent performance. Using the KVM kernel module allows it to approach native speeds.


KVM is managed via the libvirt API and tools. Some libvirt tools used in this article include virsh, virt-install and virt-clone.

Virtualization Technology

Verify that Virtualization Technology (VT) is enabled in your server’s BIOS.
Another item to check once your server boots up is whether your processors support VT. KVM requires these hardware virtualization extensions, so verify they are present before proceeding. Check for these CPU extensions:


# grep -E 'svm|vmx' /proc/cpuinfo
- vmx is for Intel processors
- svm is for AMD processors


Required packages
 

There are several packages to install that are not part of the base RHEL 7 installation. Assuming that you have a yum repository already defined, install the following:

# yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install

 
Enable and start the libvirtd service:


# systemctl enable libvirtd && systemctl start libvirtd

 
Verify the following kernel modules are loaded, and if not load manually:
kvm
kvm_intel (only on Intel-based systems)
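
A quick way to check for and, if necessary, load them (on AMD systems the module is kvm_amd instead of kvm_intel):

# lsmod | grep kvm
# modprobe kvm_intel
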
OS Installation Source


You need to have an OS installation source ready for your VMs. You can either use an iso or a network installation source that can be accessed via http, ftp or nfs.

 
Disk Space

 
When a VM is created, image files are created in the default directory /var/lib/libvirt/images, but you can choose any directory you’d like. Regardless of what directory you choose, you will have to verify there is enough disk space available in that partition.  I use directory /vm-images.


KVM supports several types of VM image formats, which determine the amount of actual disk space each VM uses on the host. Here, we will only create VMs with raw file formats, which use the exact amount of disk space you specify. So for example if you specify that a VM will have 10 GB of disk space, the VM install tool will create a file image of exactly 10 GB on the host, regardless whether the VM uses all 10 GB or not.


Best practice here is to allocate more than enough disk space on the host to safely fit all your VMs. For example, if you want to create 4 VMs with 20 GB storage each, be sure you have at least 85-90 GB space available on your host
 

Networking
 
By default, VMs will only have network access to other VMs on the same server (and to the host itself) via private network 192.168.122.0. If you want the VMs to have access to your LAN, then you must create a network bridge on the host that is connected to the NIC that connects to your LAN. Follow these steps to create a network bridge:


1. We will create a bridge named ‘br0’. Add to your network controller configuration file (i.e. /etc/sysconfig/network-scripts/ifcfg-em1) this line:
BRIDGE=br0
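
For reference, a minimal sketch of what ifcfg-em1 might look like after this change (em1 is just the example interface name used above; the IP configuration moves to the bridge itself):
DEVICE="em1"
TYPE="Ethernet"
ONBOOT="yes"
BOOTPROTO="none"
BRIDGE="br0"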


2. Create /etc/sysconfig/network-scripts/ifcfg-br0 and add:
DEVICE="br0"
# BOOTPROTO is up to you. If you prefer “static”, you will need to
# specify the IP address, netmask, gateway and DNS information.
BOOTPROTO="dhcp"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"


3. Enable network forwarding. Add to /etc/sysctl.conf:
net.ipv4.ip_forward = 1


And read the file:
# sysctl -p /etc/sysctl.conf

 
4. Restart the ‘NetworkManager’ service so that the bridge you just created can get an IP address:
# systemctl restart NetworkManager
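
To confirm the bridge came up and received an address, the following checks should work: ip addr show br0 should now display the host's IP address, and ip link show em1 should report br0 as the interface's master (brctl is part of the bridge-utils package, which may not be installed by default):
# ip addr show br0
# ip link show em1
# brctl show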


Firewalld
In RHEL 6, the default packet filtering and forwarding service is ‘iptables’. In RHEL 7, the default service is ‘firewalld’, which provides the same packet filtering and forwarding capabilities as iptables, but implements rules dynamically and has additional features such as network zones, which give you added flexibility when managing different networks.
Please note that the iptables tool is still available in RHEL 7, and in fact it is used by firewalld to talk to the kernel packet filter (it is the iptables service that has been replaced by firewalld). If you prefer, you can install the iptables-service package to use the iptables service instead of firewalld.


SELinux
If you are using SELinux in Enforcing mode, then there are some things to consider. The most common issue is when you use a non-default directory for your VM images. If you use a directory other than /var/lib/libvirt/images, then you must change the security context for that directory. For example, let’s say you select /vm-images to place your VM images:


1. Create the directory:
# mkdir /vm-images


2. Install the policycoreutils-python package (which contains the semanage SELinux utility):
# yum -y install policycoreutils-python


3. Set the security context for the directory and everything under it:
# semanage fcontext --add -t virt_image_t '/vm-images(/.*)?'


Verify it:

# semanage fcontext -l | grep virt_image_t
/var/lib/imagefactory/images(/.*)? all files system_u:object_r:virt_image_t:s0
/var/lib/libvirt/images(/.*)? all files system_u:object_r:virt_image_t:s0
/vm-images(/.*)? all files system_u:object_r:virt_image_t:s0


4. Restore the security context. This will effectively change the context to virt_image_t:
# restorecon -R -v /vm-images
Verify that the context was changed:
# ls -aZ /vm-images
drwxr-xr-x. root root system_u:object_r:virt_image_t:s0 .
dr-xr-xr-x. root root system_u:object_r:root_t:s0 ..


5. If you are going to export the directory /vm-images as a samba or NFS share, there are SELinux Booleans that need to be set as well:
# setsebool -P virt_use_samba 1
# setsebool -P virt_use_nfs 1


Creating VMs

 
Installation of VMs using the virt-install tool is very straight-forward. This tool can run in interactive or non-interactive mode. Let’s use virt-install in non-interactive mode to create a RHEL 7 x64 VM named vm1 with one virtual CPU, 1 GB memory and 10 GB of disk space:
# virt-install \
--network bridge:br0 \
--name vm1 \
--ram=1024 \
--vcpus=1 \
--disk path=/vm-images/vm1.img,size=10 \
--graphics none \
--location=http://my.server.com/pub/rhel7/install-x86_64/ \
--extra-args="console=tty0 console=ttyS0,115200"
    --network bridge:br0


If you created a network bridge (as described in the Networking section above) and want to use it for full inbound and outbound connectivity, then you must specify it.
     --name vm1
No big mystery here, this is the name of the VM
     --ram=1024
This is the amount of memory in the VM in MBs
    --vcpus=1
You guessed it, this is the number of virtual CPUs
     --disk path=/vm-images/vm1.img,size=10
This is the image file for the VM, the size is specified in GBs.
     --graphics none


This tells the installer not to launch a VNC window to access the VM's main console. Instead, it will use a text console on the VM's serial port. If you'd rather use an X window with graphics to install the OS on the VM, omit this parameter.
     --location=http://my.server.com/pub/rhel7/install-x86_64/


This is the location of the RHEL 7 x64 installation directory, which of course will be different for you. If you don’t have a remote installation location for the OS, you can install from an iso instead. Instead of using the location parameter, use the cdrom parameter:
    --cdrom /root/RHEL-7.0-20140507.0-Server-x86_64-dvd1.iso
     --extra-args="console=tty0 console=ttyS0,115200"


The extra-args parameter is used to pass kernel boot parameters to the OS installer. In this case, since we are connecting to the VM’s serial port, we must use the proper kernel parameters to set it up, just like we would on any server, virtual or not.
The extra-args parameter can also be used to specify a kickstart file for non-interactive installations. So if we had a kickstart file we would use:


    --extra-args="ks=http://my.server.com/pub/ks.cfg console=tty0 console=ttyS0,115200"
The OS installation on the VM proceeds as with a physical server, where you provide information such as disk partitions, time zone, root password, etc.


Here is another example: Install a RHEL 7 x86 VM with 2 VCPUs, 2GB of memory, 15GB disk space, using the default network (private VM network), install from a local iso on the host and use VNC to interact with the VM (must have an X server running):
# virt-install \
--name vm1 \
--ram=2048 \
--vcpus=2 \
--disk path=/vm-images/vm1.img,size=15 \
--cdrom /root/RHEL-7.0-20140507.0-Server-x86_64-dvd1.iso


For more information on all virt-install parameters, refer to the virt-install man page
Cloning VMs


If you want several VMs with the same OS and same configuration, I recommend cloning existing VMs rather than installing the OS on each one, which can quickly become a time-consuming & tedious task. In this example, we clone vm1 to create a new VM clone called vm1-clone:


1. Suspend the VM to be cloned. This is a requirement since it ensures that all data and network I/O on the VM is stopped.
# virsh suspend vm1


2. Run the virt-clone command:
# virt-clone \
--connect qemu:///system \
--original vm1 \
--name vm1-clone \
--file /vm-images/vm1-clone.img
This operation will take 2-3 minutes, depending on the size of the VM.


3. When done, you can resume the original VM:
# virsh resume vm1


4. The cloned VM is placed in shutdown mode. To start it:
# virsh start vm1-clone


The cloned VM is an exact copy of the original VM; all VM properties (VCPUs, memory, disk space) and disk contents will be the same. The virt-clone command takes care to generate a new MAC address for the VM clone in its libvirt configuration, thus avoiding duplicate MAC addresses; you may still need to update the matching HWADDR entry in the guest's own network controller configuration file (e.g. /etc/sysconfig/network-scripts/ifcfg-em1).


These are some of the commands I use to administer my VMs. As always, for a list of all available commands, your best bet is the virsh man page


List all VMs on a host, running or otherwise:
# virsh list --all

Show VM information:
# virsh dominfo vm1

Show VCPU/memory usage for all running VMs:
# virt-top

Show VM disk partitions (will take a few moments):
# virt-df vm1

Stop a VM (shutdown the OS):
# virsh shutdown vm1

Start VM:
# virsh start vm1

Mark VM for autostart (VM will start automatically after host reboots):
# virsh autostart vm1

Mark VM for manual start (VM will not start automatically after host reboots):
# virsh autostart --disable vm1


Getting access to a VM’s console

 
If you do not have an X server running on your host, connecting to a VM's serial console might be the only way to log in to a VM if networking is not available. Setting up access to a VM's console is no different than on a physical server, where you simply add the proper kernel boot parameters to the VM. For example, for a RHEL 6 VM, append the following parameters to the kernel boot line in /etc/grub.conf and then reboot the VM (for a RHEL 7 guest see the GRUB 2 sketch below):
console=tty0 console=ttyS0,115200
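
On a RHEL 7 guest, which uses GRUB 2, the same parameters are typically added to GRUB_CMDLINE_LINUX in /etc/default/grub and the configuration regenerated; a rough sketch:
# vi /etc/default/grub      (append console=tty0 console=ttyS0,115200 to GRUB_CMDLINE_LINUX)
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot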


Then, after the VM boots, run in the host:
# virsh console vm1


Attaching storage device to a VM
Say you have files on a USB key that you want to copy to your VM. Rather than copying the files to your VM via the network, you can directly attach the USB key (or any other storage device, not just USB)


to your VM, which will then appear as an additional storage device on your VM. First identify the device name of your storage device after you plug it in on the host. In this example, it will be /dev/sdb:


# virsh attach-disk vm1 /dev/sdb vdb --driver qemu --mode shareable
     vdb is the device name you want to map to inside the VM
     you can mount your device to more than one VM at a time, but be careful as there is no write access control here.


You can now access the storage device directly from your VM at /dev/vdb. When you are done with it, simply detach it from your VM:
# virsh detach-disk vm1 vdb


GUI Tools


Up until now we’ve been working only with the CLI, but there are a couple of GUI tools that can be very useful when managing and interacting with VMs.


     virt-viewer – Launches a VNC window that gives you access to a VM's main console.
     virt-manager – Launches a window where you can manage all your VMs. Among other things, you can start, pause & shutdown VMs, display VM details (VCPUs, memory, disk space), add devices to VMs and even create new VMs. I don’t cover virt-manager here, but it is rather intuitive to use.
To install these tools, run:
# yum install virt-manager virt-viewer


Changing VM parameters


You can easily change VM parameters after creating them, such as memory, VCPUs and disk space


Memory

 
You can dynamically change the memory in a VM up to what its maximum memory setting is. Note that by default the maximum memory setting in a VM will always equal the amount of memory you specified when you created the VM with the ram parameter in virt-install.
For example if you created a VM with 1 GB of memory, you can dynamically reduce this amount without having to shut down the VM. If you want to increase the memory above 1 GB, you will have to first increase its maximum memory setting which requires shutting down the VM first.


In our first example, let’s reduce the amount of memory in vm1 from 1 GB to 512 MB


1. View the VM’s current memory settings:
# virsh dominfo vm1 | grep memory
Max memory: 1048576 kB
Used memory: 1048576 kB


2. To dynamically set to 512 MB, run:
# virsh setmem vm1 524288
Value must be specified in KB, so 512 MB x 1024 = 524288 KB


3. View memory settings:
# virsh dominfo vm1 | grep memory
Max memory: 1048576 kB
Used memory: 524288 kB


In our second example, let’s increase the amount of memory in vm1 above from 512 MB to 2 GB:


1. In this case we will first need to increase the maximum memory setting. The best way to do it is by editing the VM’s configuration file. Shutdown the VM or you might see unexpected results:
# virsh shutdown vm1


2. Edit the VM’s configuration file:
# virsh edit vm1
Change the value in the memory tag (value is in KB):
<memory>2097152</memory>


3. Restart the VM from its updated configuration file:
# virsh create /etc/libvirt/qemu/vm1.xml


4. View memory settings:
# virsh dominfo vm1 | grep memory
Max memory: 2097152 kB
Used memory: 524288 kB


5. Now you can dynamically change the memory:
# virsh setmem vm1 2097152
Verify:
# virsh dominfo vm1 | grep memory
Max memory: 2097152 kB
Used memory: 2097152 kB


VCPUs

 
To change the number of virtual CPUs in a VM, change the number in the vcpu tag in the VM’s configuration file. For example, let’s change the number of virtual CPUs to 2:


# virsh shutdown vm1
# virsh edit vm1
<vcpu>2</vcpu>
# virsh create /etc/libvirt/qemu/vm1.xml


Disk capacity

 
You can always add additional ‘disks’ to your VMs by attaching additional file images. Say that you want to add an additional 10 GB of disk space in your VM, here is what you do:


1. Create a 10-GB non-sparse file:
# dd if=/dev/zero of=/vm-images/vm1-add.img bs=1M count=10240
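
If the dd takes too long, fallocate can preallocate a non-sparse file of the same size much faster on filesystems that support it (ext4 and xfs do):
# fallocate -l 10G /vm-images/vm1-add.img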


2. Shutdown the VM:
# virsh shutdown vm1


3. Add an extra entry for 'disk' in the VM's XML file in /etc/libvirt/qemu. You can copy and paste the entry for your main storage device and just change the target and address tags.


For example:
# virsh edit vm1
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='threads'/>
<source file='/vm-images/vm1.img'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
Add:
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='threads'/>
<source file='/vm-images/vm1-add.img'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    Make sure that the name of the device (i.e. vdb) follows the first one in sequential order.
    In the address tag, use a unique slot address (check the address tag of ALL devices, not just storage devices).


4. Restart the VM from the updated XML configuration file:
# virsh create /etc/libvirt/qemu/vm1.xml


Deleting VMs


When you no longer need a VM, it is best practice to remove it to free up its resources. A VM that is shutdown is not taking up a VCPU or memory, but its image file is still taking up disk space.


Deleting a VM is a lot faster than creating one, just a few quick commands. Let’s delete vm1-clone:


1. First, shutdown the VM:
# virsh shutdown vm1-clone
If the VM is not responding or fails to shut down, shut it down forcefully:
# virsh destroy vm1-clone
2. Undefine the VMs configuration:
# virsh undefine vm1-clone
3. Finally, remove the VM’s image file:
# rm /vm-images/vm1-clone.img




Thursday, October 1, 2015

How to Configure Linux Cluster with 2 Nodes on RedHat and CentOS

Thursday, October 01, 2015
In an active-standby Linux cluster configuration, all the critical services, including the IP address and filesystem, will fail over from one node to another node in the cluster.

This article explains how to create and configure a two-node Red Hat cluster using command-line utilities.
The following are the high-level steps involved in configuring Linux cluster on Redhat or CentOS:
• Install and start RICCI cluster service
• Create cluster on active node
• Add a node to cluster
• Add fencing to cluster
• Configure failover domain
• Add resources to cluster
• Sync cluster configuration across nodes
• Start the cluster
• Verify failover by shutting down an active node

1. Required Cluster Packages

First make sure the following cluster packages are installed. If you don't have these packages, install them using the yum command.
[root@rh1 ~]# rpm -qa | egrep -i "ricci|luci|cluster|ccs|cman"
modcluster-0.16.2-28.el6.x86_64
luci-0.26.0-48.el6.x86_64
ccs-0.16.2-69.el6.x86_64
ricci-0.16.2-69.el6.x86_64
cman-3.0.12.1-59.el6.x86_64
clusterlib-3.0.12.1-59.el6.x86_64

2. Start RICCI service and Assign Password

Next, start ricci service on both the nodes.

[root@rh1 ~]# service ricci start
Starting oddjobd: [ OK ]
generating SSL certificates... done
Generating NSS database... done
Starting ricci: [ OK ]

You also need to assign a password for the ricci user on both the nodes.

[root@rh1 ~]# passwd ricci
Changing password for user ricci.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Also, if you are running an iptables firewall, keep in mind that you need appropriate rules on both nodes so that they can talk to each other; a sketch of the commonly required ports follows.
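
As a rough sketch (not an exhaustive list), the ports most commonly opened for this stack are TCP 11111 for ricci, UDP 5404-5405 for cman/corosync and TCP 21064 for dlm; adjust to your environment and apply on both nodes:

iptables -I INPUT -p tcp --dport 11111 -j ACCEPT
iptables -I INPUT -p udp --dport 5404:5405 -j ACCEPT
iptables -I INPUT -p tcp --dport 21064 -j ACCEPT
service iptables save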

3. Create Cluster on Active Node

From the active node, please run the below command to create a new cluster.
The following command will create the cluster configuration file /etc/cluster/cluster.conf. If the file already exists, it will replace the existing
cluster.conf with the newly created cluster.conf.

[root@rh1 ~]# ccs -h rh1.mydomain.net --createcluster mycluster
rh1.mydomain.net password:

[root@rh1 ~]# ls -l /etc/cluster/cluster.conf
-rw-r-----. 1 root root 188 Sep 26 17:40 /etc/cluster/cluster.conf
Also keep in mind that we are running these commands only from one node on the cluster and we are not yet ready to propagate the changes
to the other node on the cluster.

4. Initial Plain cluster.conf File

After creating the cluster, the cluster.conf file will look like the following:
[root@rh1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="1" name="mycluster">
<fence_daemon/>
<clusternodes/>
<cman/>
<fencedevices/>
<rm>
<failoverdomains/>
<resources/>
</rm>
</cluster>


5. Add a Node to the Cluster

Once the cluster is created, we need to add the participating nodes to the cluster using the ccs command as shown below.
First, add the first node rh1 to the cluster as shown below.

[root@rh1 ~]# ccs -h rh1.mydomain.net --addnode rh1.mydomain.net
Node rh1.mydomain.net added.

Next, add the second node rh2 to the cluster as shown below.
[root@rh1 ~]# ccs -h rh1.mydomain.net --addnode rh2.mydomain.net
Node rh2.mydomain.net added.
Once the nodes are created, you can use the following command to view all the available nodes in the cluster. This will also display the node
id for the corresponding node.

[root@rh1 ~]# ccs -h rh1 --lsnodes
rh1.mydomain.net: nodeid=1
rh2.mydomain.net: nodeid=2

6. cluster.conf File After Adding Nodes

The above will also add the nodes to the cluster.conf file as shown below.

[root@rh1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="3" name="mycluster">
<fence_daemon/>
<clusternodes>
<clusternode name="rh1.mydomain.net" nodeid="1"/>
<clusternode name="rh2.mydomain.net" nodeid="2"/>
</clusternodes>
<cman/>
<fencedevices/>
<rm>
<failoverdomains/>
<resources/>
</rm>
</cluster>

7. Add Fencing to Cluster

Fencing is the disconnection of a node from shared storage. Fencing cuts off I/O from shared storage, thus ensuring data integrity.
A fence device is a hardware device that can be used to cut a node off from shared storage.
This can be accomplished in a variety of ways: powering off the node via a remote power switch, disabling a Fiber Channel switch port, or revoking a host’s SCSI 3 reservations.
A fence agent is a software program that connects to a fence device in order to ask the fence device to cut off access to a node’s shared storage (via powering off the node or removing access to the shared storage by other means).
Execute the following command to enable fencing.

[root@rh1 ~]# ccs -h rh1 --setfencedaemon post_fail_delay=0
[root@rh1 ~]# ccs -h rh1 --setfencedaemon post_join_delay=25
Next, add a fence device. There are different types of fencing devices available. If you are using virtual machines to build the cluster, use the fence_virt device as shown below.

[root@rh1 ~]# ccs -h rh1 --addfencedev myfence agent=fence_virt
Next, add the fencing method. After creating the fencing device, you need to create the fencing method and add the hosts to it.
[root@rh1 ~]# ccs -h rh1 --addmethod mthd1 rh1.mydomain.net
Method mthd1 added to rh1.mydomain.net.
[root@rh1 ~]# ccs -h rh1 --addmethod mthd1 rh2.mydomain.net
Method mthd1 added to rh2.mydomain.net.

Finally, associate fence device to the method created above as shown below:
[root@rh1 ~]# ccs -h rh1 --addfenceinst myfence rh1.mydomain.net mthd1
[root@rh1 ~]# ccs -h rh1 --addfenceinst myfence rh2.mydomain.net mthd1

8. cluster.conf File after Fencing

Your cluster.conf will look like the following after the fencing devices and methods have been added.
[root@rh1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="10" name="mycluster">
<fence_daemon post_join_delay="25"/>
<clusternodes>
<clusternode name="rh1.mydomain.net" nodeid="1">
<fence>
<method name="mthd1">
<device name="myfence"/>
</method>
</fence>
</clusternode>
<clusternode name="rh2.mydomain.net" nodeid="2">
<fence>
<method name="mthd1">
<device name="myfence"/>
</method>
</fence>
</clusternode>
</clusternodes>
<cman/>
<fencedevices>
<fencedevice agent="fence_virt" name="myfence"/>

</fencedevices>
<rm>
<failoverdomains/>
<resources/>
</rm>
</cluster>

9. Types of Failover Domain

A failover domain is an ordered subset of cluster members to which a resource group or service may be bound.
The following are the different types of failover domains:
• Restricted failover domain: Resource groups or services bound to the domain may only run on cluster members which are also members of the failover domain. If no members of the failover domain are available, the resource group or service is placed in the stopped state.
• Unrestricted failover domain: Resource groups bound to this domain may run on all cluster members, but will run on a member of the domain whenever one is available. This means that if a resource group is running outside of the domain and a member of the domain transitions online, the resource group or service will migrate to that cluster member.
• Ordered domain: Nodes in the ordered domain are assigned a priority level from 1-100, with priority 1 being the highest and 100 the lowest. The node with the highest priority will run the resource group; for example, a resource running on node 2 will migrate to node 1 when node 1 comes online.
• Unordered domain: Members of the domain have no order of preference. Any member may run the resource group, and resource groups will always migrate to members of their failover domain whenever possible.

10. Add a Failover Domain

To add a failover domain, execute the following command. In this example, I created a domain named "webserverdomain".
[root@rh1 ~]# ccs -h rh1 --addfailoverdomain webserverdomain ordered
Once the failover domain is created, add both the nodes to the failover domain as shown below:
[root@rh1 ~]# ccs -h rh1 --addfailoverdomainnode webserverdomain rh1.mydomain.net priority=1
[root@rh1 ~]# ccs -h rh1 --addfailoverdomainnode webserverdomain rh2.mydomain.net priority=2
You can view all the nodes in the failover domain using the following command.
[root@rh1 ~]# ccs -h rh1 --lsfailoverdomain
webserverdomain: restricted=0, ordered=1, nofailback=0
rh1.mydomain.net: 1
rh2.mydomain.net: 2

11. Add Resources to Cluster

Now it is time to add resources. These are the services that should also fail over along with the IP and filesystem when a node fails. For example, the Apache web server can be part of the failover in the Red Hat Linux cluster.
When you are ready to add resources, there are 2 ways you can do this.
You can add them as global resources, or add a resource directly to a resource group or service.
The advantage of adding a global resource is that if you want to add the resource to more than one service group, you can just reference the global resource in your service or resource group. In this example, we added the filesystem on shared storage as a global resource and referenced it in the service.
[root@rh1 ~]# ccs -h rh1 --addresource fs name=web_fs device=/dev/cluster_vg/vol01 mountpoint=/var/www fstype=ext4
To add a service to the cluster, create a service and add the resource to the service.
[root@rh1 ~]# ccs -h rh1 --addservice webservice1 domain=webserverdomain recovery=relocate autostart=1
Now add the following lines to cluster.conf to add the resource references to the service. In this example, we also added a failover IP address to our service; see the sketch after these lines for how they fit inside the service definition.
<fs ref="web_fs"/>
<ip address="192.168.1.12" monitor_link="yes" sleeptime="10"/>