This Blog is to share our knowledge and expertise on Linux System Administration and VMware Administration


Thursday, November 8, 2018

Extend the cluster file system by extending an existing NetApp storage LUN on an RHEL server

If the storage team extended an existing LUN instead of creating a new one, follow the steps below.

Run the multipath -ll command and find the devices mapped to the LUN.

root@nsk# multipath -ll | grep -A 6 -i 3600a09634224747a367d4b55357c4f87
3600a09634224747a367d4b55357c4f87 dm-6 NETAPP,LUN C-Mode
size=400G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 2:0:0:1 sdg 8:96  active ready running
  `- 1:0:0:1 sdc 8:32  active ready running
3600a09803830436a345d4b51506c4f43 dm-2 NETAPP,LUN C-Mode
size=110G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='0' wp=rw

As per the above output, sdg and sdc are the devices. Now rescan the LUN.

root@nsk# echo "1" > /sys/block/sdg/device/rescan
root@nsk# echo "1" > /sys/block/sdc/device/rescan 

Reload the multipathd service:

root@nsk# /etc/init.d/multipathd reload
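
On systemd-based releases (RHEL 7 and later), the equivalent is to reload the service with systemctl; the multipath map itself can also be told to resize through multipathd. These are alternatives to the init script above, not additional required steps:

root@nsk# systemctl reload multipathd
root@nsk# multipathd -k"resize map 3600a09634224747a367d4b55357c4f87"

After the reload, multipath -ll should report the new size for the map.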

Resize the PV (provide the complete /dev/mapper path, as shown below):

root@nsk# pvresize /dev/mapper/3600a09634224747a367d4b55357c4f87  
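
To confirm that the PV picked up the extra space, pvs can be checked against the same path:

root@nsk# pvs /dev/mapper/3600a09634224747a367d4b55357c4f87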

Extend the LV

root@nsk# lvextend -L +199.95g /dev/mapper/oracle_vg-oracledata
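
The extra space is not usable until the file system itself is grown. A minimal sketch, assuming the LV holds an ext3/ext4 file system (for XFS, use xfs_growfs against the mount point instead); the mount point /oracledata is hypothetical:

root@nsk# resize2fs /dev/mapper/oracle_vg-oracledata
root@nsk# df -h /oracledata

Alternatively, lvextend -r extends the LV and resizes the file system in a single step.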

Saturday, November 4, 2017

How to Manually Deactivate a Highly Available cluster volume group?

Follow the steps below:

Unmount all filesystems associated with the VG:

#umount <mount_point>

Deactivate the cluster VG:

#vgchange -a n HAVG_<vgname>

Remove all hostname tags from the VG:

#vgchange --deltag <server_name>.testdomain.com HAVG_<vgname>
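
To verify that the tag has been removed and the VG is inactive, the tag and attribute columns can be checked:

#vgs -o +vg_tags HAVG_<vgname>
#lvs HAVG_<vgname>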

Thursday, November 2, 2017

How to create a New Volume Group in an RHEL Cluster?


Follow the steps below to create a Volume Group in an RHEL Cluster.

1. Add the disk to the server.

2. Run pvcreate on the new partitions (see the sketch after this list).

3. Create the VGs:
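
A minimal sketch of step 2, assuming the new multipath partition shows up as /dev/mapper/mpathbp1 (the device name is hypothetical):

pvcreate /dev/mapper/mpathbp1

Step 3 then creates the volume group on that physical volume: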


vgcreate -c n -s 32M HAVG_<vg_name> /dev/mapper/mpath#p#

-s : Sets the physical extent size on physical volumes of this volume group (32 MB here).
-c : Sets the clustered attribute. With -c n the VG is created as non-clustered so that access can be controlled with host tags (HA-LVM). If clustered locking is enabled, this option defaults to y, indicating that the Volume Group is shared with other nodes in the cluster.

Creating a new Logical Volume in an HA Volume Group


To create an LV in the newly created volume group, you first have to add the hostname tag to the VG. The tag permits only one server in the cluster to access the volume group at any given time.

1. vgchange --addtag <hostname>.testdomain.com HAVG_<vg_name>
2. lvcreate -L ##M -n lv-<lvname> HAVG_<vg_name>
3. mke2fs -j /dev/HAVG_<vg_name>/lv-<lvname>
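
As a worked example with hypothetical names (a node called node1.testdomain.com, a VG called HAVG_oracle and a 10 GB LV named lv-oradata):

vgchange --addtag node1.testdomain.com HAVG_oracle
lvcreate -L 10240M -n lv-oradata HAVG_oracle
mke2fs -j /dev/HAVG_oracle/lv-oradata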


Create mount points on all nodes in the cluster. Do not add them to /etc/fstab; the cluster service will manage mounting the filesystem.

1. mkdir <mount_point>
2. mount /dev/HAVG_<vg_name>/lv-<lvname> <mount_point>
3. Change permissions on the mount point.

Thursday, October 15, 2015

What are the performance enhancements in GFS2 as compared to GFS?

 GFS2 features  
  • Better performance for heavy usage in a single directory.
  • Faster synchronous I/O operations.
  • Faster cached reads (no locking overhead).
  • Faster direct I/O with preallocated files (provided I/O size is reasonably large, such as 4M blocks).
  • Faster I/O operations in general.
  • Faster execution of the df command, because of faster statfs calls.
  • Improved atime mode to reduce the number of write I/O operations generated by atime.
  When compared with GFS, GFS2 supports the following additional features:
  • Extended file attributes (xattr)
  • The lsattr() and chattr() attribute settings via standard ioctl() calls
  • Nanosecond timestamps
  • GFS2 uses less kernel memory.
  • GFS2 requires no metadata generation numbers.
  • Allocating GFS2 metadata does not require reads. Copies of metadata blocks in multiple journals are managed by revoking blocks from the journal before lock release.
  • GFS2 includes a much simpler log manager that knows nothing about unlinked inodes or quota changes.
  • The gfs2_grow and gfs2_jadd commands use locking to prevent multiple instances running at the same time.
  • The ACL code has been simplified for calls like creat() and mkdir().
  • Unlinked inodes, quota changes, and statfs changes are recovered without remounting the journal

What is a Quorum Disk in cluster?

Quorum Disk is a disk-based quorum daemon, qdiskd, that provides supplemental heuristics to determine node fitness.
With heuristics you can determine factors that are important to the operation of the node in the event of a network partition.


For a 3-node cluster, quorum is maintained as long as 2 of the 3 nodes are active, i.e. more than half. But what if, for some reason, the 2nd node also stops communicating with the 3rd node? In that case, under a normal architecture, the cluster would dissolve and stop working.

But for mission-critical environments and such scenarios we use a quorum disk: an additional shared disk is configured, accessible from all the nodes running the qdiskd service, and a vote value is assigned to it.
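
The quorum disk itself is a small shared LUN that is labelled with mkqdisk; the device name and label below are hypothetical:

#mkqdisk -c /dev/sdh -l rhel_qdisk
#mkqdisk -L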

So suppose in the above case the quorum disk is assigned 2 votes (one less than the number of nodes, a common choice). Even after two nodes stop communicating with the third node, the surviving partition still holds 3 votes (2 from the qdisk + 1 from the third node) out of an expected 5, which is still more than half of the total vote count for the 3-node cluster. The two unresponsive nodes are then fenced and the third node stays up and running as part of the cluster.
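
In /etc/cluster/cluster.conf this vote assignment looks roughly like the sketch below; the label, heuristic and timing values are illustrative, not a definitive configuration:

<cman expected_votes="5"/>
<quorumd label="rhel_qdisk" votes="2" interval="1" tko="10" min_score="1">
    <heuristic program="ping -c1 -w1 192.168.1.1" score="1" interval="2" tko="3"/>
</quorumd>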