This Blog is to share our knowledge and expertise on Linux System Administration and VMware Administration

Tuesday, December 29, 2015

Differences between upgraded and newly created VMFS-5 datastores:

Tuesday, December 29, 2015 0
  • VMFS-5 upgraded from VMFS-3 continues to use the previous file block size which may be larger than the unified 1MB file block size. Copy operations between datastores with different block sizes won’t be able to leverage VAAI.  This is the primary reason I would recommend the creation of new VMFS-5 datastores and migrating virtual machines to new VMFS-5 datastores rather than performing in place upgrades of VMFS-3 datastores.
  • VMFS-5 upgraded from VMFS-3 continues to use 64KB sub-blocks and not new 8K sub-blocks.
  • VMFS-5 upgraded from VMFS-3 continues to have a file limit of 30,720 rather than the new file limit of > 100,000 for newly created VMFS-5.
  • VMFS-5 upgraded from VMFS-3 continues to use MBR (Master Boot Record) partition type; when the VMFS-5 volume is grown above 2TB, it automatically switches from MBR to GPT (GUID Partition Table) without impact to the running VMs.
  • VMFS-5 upgraded from VMFS-3 will continue to have a partition starting on sector 128; newly created VMFS-5 partitions start at sector 2,048.

Based on the information above, the best approach to migrate to VMFS-5 is to create brand-new VMFS-5 datastores, provided you have the extra storage space, can afford the number of Storage vMotions required, and have a VAAI-capable storage array whose existing datastores use 2, 4, or 8 MB block sizes.
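
To confirm the VMFS version and file block size of an existing datastore before deciding between an in-place upgrade and a rebuild, you can query it from the ESXi shell; the datastore name below is a placeholder:

vmkfstools -Ph /vmfs/volumes/datastore1

The output reports the VMFS version and the file block size of the volume.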

Difference between VMFS 3 and VMFS 5 -- Part1

Tuesday, December 29, 2015 0
  • This post explains the major differences between VMFS 3 and VMFS 5. VMFS 5 is available as part of vSphere 5 and was introduced with a lot of performance enhancements. 
  • A freshly installed ESXi 5 host formats its datastores with VMFS 5, but if you upgrade ESX 4.0 or ESX 4.1 to ESXi 5, the datastore version remains VMFS 3. 
  • You will be able to upgrade VMFS 3 to VMFS 5 via the vSphere Client once the ESXi upgrade is complete. This post lists some of the major differences between VMFS 3 and VMFS 5.




How to Identify the virtual machines with Raw Device Mappings (RDMs) using PowerCLI

Tuesday, December 29, 2015 0
Open the vSphere PowerCLI command-line.
Run the command:

Get-VM | Get-HardDisk -DiskType "RawPhysical","RawVirtual" | Select Parent,Name,DiskType,ScsiCanonicalName,DeviceName | fl

This command produces a list of virtual machines with RDMs, along with the backing SCSI device for the RDMs.

The output looks similar to:

Parent            : Virtual Machine Display Name
Name              : Hard Disk n
DiskType          : RawVirtual
ScsiCanonicalName : naa.646892957789abcdef0892957789abcde
DeviceName        : vml.020000000060912873645abcdef0123456789abcde9128736450ab

If you need to save the output to a file, the command can be modified:

Get-VM | Get-HardDisk -DiskType "RawPhysical","RawVirtual" | Select Parent,Name,DiskType,ScsiCanonicalName,DeviceName | fl | Out-File -FilePath RDM-list.txt

Identify the backing SCSI device from either the ScsiCanonicalName or DeviceName identifiers.

Snapshot consolidation "error: maximum consolidate retries was exceeded for scsix:x"

Tuesday, December 29, 2015 0
You cannot perform snapshot consolidation in VMware ESXi 5.5 and ESXi 6.0.x, and performing a snapshot consolidation in ESXi 5.5 fails.

or

When attempting to consolidate snapshots using the vSphere Client, you see the error:

maximum consolidate retries was exceeded for scsix:x

Consolidate Disks message: The virtual machine has exceeded the maximum downtime of 12 seconds for disk consolidation.

 This issue occurs because ESXi 5.5 introduced a different behavior to prevent the virtual machine from being stunned for an extended period of time.

This message is reported if the virtual machine is powered on and the asynchronous consolidation fails after 10 iterations. An additional iteration is performed if the estimated stun time is over 12 seconds. This occurs when the virtual machine generates data faster than the consolidation rate.

To resolve this issue, turn off the snapshot consolidation enhancement in ESXi 5.5 and ESXi 6.0.x so that it works like earlier versions of ESX/ESXi. This can be done by setting the advanced parameter snapshot.asyncConsolidate.forceSync to TRUE.

  Note: If the parameter is set to true, the virtual machine is stunned for a long time to perform the snapshot consolidation, and it may not respond to ping during the consolidation.

To set the parameter snapshot.asyncConsolidate.forceSync to TRUE using the vSphere client:

Shut down the virtual machine.

Right-click the virtual machine and click Edit settings.

Click the Options tab.

Under Advanced, right-click General

Click Configuration Parameters, then click Add Row.

In the left pane, add this parameter:

snapshot.asyncConsolidate.forceSync

In the right pane, add this value:


TRUE

Click OK to save your change, and power on the virtual machine.

To set the parameter snapshot.asyncConsolidate.forceSync to TRUE without shutting down the virtual machine, run this PowerCLI command:

get-vm virtual_machine_name | New-AdvancedSetting -Name snapshot.asyncConsolidate.forceSync -Value TRUE -Confirm:$False
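
To confirm the parameter was actually written to the virtual machine's configuration, you can check the .vmx file from the ESXi shell; the datastore and VM paths below are placeholders:

grep -i asyncConsolidate /vmfs/volumes/datastore1/myvm/myvm.vmx
# expected entry: snapshot.asyncConsolidate.forceSync = "TRUE"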

How to resolve : Cannot take a quiesced snapshot of Windows 2008 R2 virtual machine

Tuesday, December 29, 2015 0
When creating a snapshot on a Windows 2008 R2 virtual machine on ESXi/ESX 4.1 and later versions, you may experience these symptoms:
  • The snapshot operation fails to complete.
  • Unable to create a quiesced snapshot of the virtual machine.
  • Unable to back up the virtual machine.
  • Cloning a Windows 2008 R2 virtual machine fails.
  • In the Application section of the Event Viewer in the virtual machine, the Windows guest operating system reports a VSS error similar to:
           Volume Shadow Copy Service error: Unexpected error calling routine IOCTL_DISK_SET_SNAPSHOT_INFO(\\.\PHYSICALDRIVE1) fails with winerror 1168. hr = 0x80070490, Element not found.
  •  Any process that creates a quiesced snapshot fails.
  •  You see the error:
    Can not create a quiesced snapshot because the create snapshot operation exceeded the time limit for holding off I/O in the frozen virtual machine.

Backup applications, such as VMware Data Recovery, fail. You see the error:

  • Failed to create snapshot for vmname, error -3960 (cannot quiesce virtual machine)
  • This is a known issue with VSS application snapshots; it is not caused by VMware software. It affects ESXi/ESX 4.1 and later versions.
  • Currently, there is no resolution.
  • To work around this issue, disable VSS quiesced application-based snapshots and revert to file system quiesced snapshots. You can disable VSS application quiescing with either the VMware vSphere Client or with VMware Tools. Use one of these procedures:
 Disable VSS application quiescing using the vSphere Client:
  •  Power off the virtual machine.
  •  Log in to the vCenter Server or the ESXi/ESX host through the vSphere Client.
  •  Right-click the virtual machine and click Edit settings.
  •  Click the Options tab.
  •  Navigate to Advanced > General > Configuration Parameters.
  •  Add or modify the row disk.EnableUUID with the value FALSE.
  •  Click OK to save.
  •  Click OK to exit.
  •  Reboot the virtual machine for the changes to take effect.
Note: If this change is done through the command line using a text editor, running the vim-cmd command to reload the vmx is enough for the change to take effect.
Alternatively, un-register the virtual machine from the vCenter Server inventory. To un-register, right-click the virtual machine and click Remove from Inventory.
        Re-register the virtual machine back to the inventory.
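
If disk.EnableUUID was edited directly in the .vmx file, the configuration can instead be reloaded from the ESXi shell without re-registering; the VM ID below comes from the first command:

vim-cmd vmsvc/getallvms          # note the numeric Vmid of the affected VM
vim-cmd vmsvc/reload <vmid>      # reload the .vmx for that VM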

Disable VSS application quiescing using VMware Tools:

  • Open the C:\ProgramData\VMware\VMware Tools\Tools.conf file in a text editor, such as Notepad. If the file does not exist, create it.
  • Add these lines to the file
            [vmbackup]
            vss.disableAppQuiescing = true
  • Save and close the file.
  • Restart the VMware Tools Service for the changes to take effect.
  • Click Start > Run, type services.msc, and click OK.
  • Right-click the VMware Tools Service and click Restart.

Taking a snapshot fails with the Error "Failed to take a memory snapshot, since the virtual machine is configured with independent disks"

Tuesday, December 29, 2015 0
When attempting to take a snapshot of a powered on virtual machine, you experience these symptoms:
You cannot take a snapshot with the Snapshot the virtual machine's memory option selected.

You see this error:

Failed to take a memory snapshot, since the virtual machine is configured with independent disks.

Resolution


This is expected behavior; virtual machines with independent disks cannot use memory snapshots or quiesced snapshots.

To resolve this issue, use one of these options:
When taking a snapshot of a virtual machine, deselect the Snapshot the virtual machine's memory and Quiesce Snapshot options.
Deselect the independent option in the virtual disk options.

To change the options for the virtual disk(s):

  • Open the vSphere Client.
  • Right-click the virtual machine and click Edit Settings.
  • Find the affected virtual disk(s) and deselect the Independent option.
  • Click OK to apply and save the changes to the virtual machine configuration.
 Note: This change requires the virtual machine to be powered off. If not, the option is grayed out.
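
To confirm from the ESXi shell which virtual disks are currently set to independent mode, a quick check of the VM's .vmx file can be used; the datastore and VM paths below are placeholders:

grep -i independent /vmfs/volumes/datastore1/myvm/myvm.vmx
# disks in independent mode show entries such as scsiX:Y.mode = "independent-persistent"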

How to Troubleshoot the NTP issue on ESX and ESXi 4.x / 5.x

Tuesday, December 29, 2015 0
Validate network connectivity between the ESXi/ESX host and the NTP server using the ping command.

Query ntpd service using ntpq

Use the NTP Query utility program ntpq to remotely query the ESXi/ESX host's ntpd service.
The ntpq utility is commonly installed on Linux clients and is also available in the ESX service console and the vSphere Management Assistant. For more information on the installation and use of the ntpq utility program on a given Linux distribution, see your Linux distribution's documentation.

For an ESXi 5.x host, the ntpq utility is included by default and does not need to be installed. It can be run locally from the ESXi 5.x host.


The ntpq utility is not available on ESXi 3.x/4.x. To query an ESXi host's NTP service ntpd, install ntpq on a remote Linux client and query the ESXi host's ntpd service from the Linux client.

To use the NTP Query utility ntpq to remotely query the ESX host's NTP service (ntpd) and determine whether it is successfully synchronizing with the upstream NTP server:

When using a Linux client, open a console session on the client where ntpq is installed.
Run this command:


When using an SSH shell or local console session on ESXi 5.5 and 5.1:
# "watch ntpq -p localhost_or_127.0.0.1"

When using a Linux client for ESXi/ESX 4.x:
# watch "ntpq -p ESX_host_IP_or_domain_name"

Monitor the output for 30 seconds and press Ctrl+C on your keyboard to stop the watch command.


Note: In ESXi 5.5 and 5.1, the output shows either localhost or the loopback address (127.0.0.1) as the queried host.

remote              refid    st  t  when poll reach delay  offset  jitter
======================================================
*10.11.12.130  1.0.0.0  1  u   46   64   377   43.76  5.58   40000
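
If ntpq shows no peers or the query times out, also confirm from the ESXi shell that the ntpd service is running and that its firewall rule is enabled; for example:

/etc/init.d/ntpd status
esxcli network firewall ruleset list | grep -i ntp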


How to resolve : vMotion fails with network errors

Tuesday, December 29, 2015 0
Network misconfiguration can cause random vMotion failures. Retrying the vMotion operation may succeed; to isolate and correct the problem, check the items below as suggested by VMware.

To resolve this issue:
  • Check for IP address conflicts on the vMotion network. Each host in the cluster should have a vMotion vmknic, assigned a unique IP address.
  • Check for packet loss over the vMotion network. Try having the source host ping (vmkping) the destination host's vMotion vmknic IP address for the duration of the vMotion.
  • Check for connectivity between the two hosts (use the same ping test as above).
  • Check for potential interaction with firewall hardware or software that prevents connectivity between the source and the destination TCP port 8000.
For the Connection refused error, after confirming a lack of IP address conflicts, check to see that the vmotionServer process is running. If it is running, it exists as a kernel process visible in the output of the ps or esxtop command.
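
For the vmkping and TCP port 8000 checks mentioned above, a quick test from the source host's ESXi shell might look like this; the destination IP is a placeholder, and nc availability depends on the ESXi build:

vmkping 192.168.10.22        # ping the destination host's vMotion vmknic IP
nc -z 192.168.10.22 8000     # test TCP connectivity to the vMotion port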

Remediating an ESXi 5.x and 6.0 host with Update Manager fails with the error: There was an error checking file system on altbootbank

Tuesday, December 29, 2015 0
Whenever you find the below symptoms:

You cannot remediate an ESXi 5.x and 6.0 host.
Remediation of an ESXi 5.x and 6.0 host using vCenter Update Manager fails.
You see the error:


The host returns esxupdate error code:15. The package manager transaction is not successful. Check the Update Manager log files and esxupdate log files for more details

In the /var/log/esxupdate.log file, you see entries similar to:

esxupdate: esxupdate: ERROR: InstallationError: ('', 'There was an error checking file system on altbootbank, please see log for detail.')

The resolution is as follows.

To resolve the issue, repair the altbootbank partition.

To repair the altbootbank partition:

    Run this command to determine the device for /altbootbank:

    vmkfstools -P /altbootbank

    You see output similar to:

mpx.vmhba32:C0:T0:L0:5


Run this command to repair the altbootbank filesystem:

dosfsck -a -w /dev/disks/device_name
   
#dosfsck -a -w /dev/disks/mpx.vmhba32:C0:T0:L0:5 


 If remediation fails at this stage, reboot the host.

Esxupdate error code:15. The package manager transaction is not successful error While Remediating an ESXi 5.x or 6.0 host

Tuesday, December 29, 2015 2
Whenever you cannot remediate an ESXi 5.x or 6.0 host using vCenter Update Manager.

 Remediating ESXi 5.x or 6.0 hosts fails.


 This occurs when a package to be updated on the host, particularly VMware_locker_tools-light*, is corrupt. You see the error:


error code:15. The package manager transaction is not successful. Check the Update Manager log files and esxupdate log files for more details.

To resolve this issue

 Recreate the /locker/packages/version/ folder, where version is:
        ESXi 5.0 – /locker/packages/5.0.0/
        ESXi 5.1 – /locker/packages/5.1.0/
        ESXi 5.5 – /locker/packages/5.5.0/
        ESXi 6.0 – /locker/packages/6.0.0/

To verify the store folders contents and symbolic link:

 Connect to the ESXi host using an SSH session.
 Check for information in the /store folder by running this command:
        ls /store

This folder must contain the packages and var folders.
Run this command to verify that the symbolic link is valid:
        ls -l /

The /store folder should be linked to /locker and appear as:
        locker  -> /store

If that link is not displayed, run this command to add the symbolic link:
        ln -s /store /locker

To recreate the /locker/packages/version/ folder:
 Put the host in maintenance mode.
 Navigate to the /locker/packages/version/ folder on the host.
 Rename the /locker/packages/version/ folder to /locker/packages/version.old (see the example after this list).
 Remediate the host using Update Manager.

The /locker/packages/version/ folder is recreated and the remediation should now be successful.
 Note: Verify whether you can change into the other folders in /locker/packages/version/. If not, rename all three folders, including floppies.
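
For example, on an ESXi 5.5 host the rename step might look like this from an SSH session (the version folder is an assumption; match it to your host's build):

cd /locker/packages
mv 5.5.0 5.5.0.old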

An alternative resolution for ESXi:
Put the host in the maintenance mode.
Navigate to the /locker/packages/version/ folder on the host.
Rename the folder to:
       /locker/packages/version.old

Run this command as the root user to recreate the folder:
       mkdir /locker/packages/version/

For example:

In ESXi 5.0:
        mkdir /locker/packages/5.0.0/

In ESXi 5.1:
        mkdir /locker/packages/5.1.0/

 In ESXi 5.5:
        mkdir /locker/packages/5.5.0/

In ESXi 6.0:
        mkdir /locker/packages/6.0.0/


Use WinSCP to copy the folders and files from the /locker/packages/version/ directory on a working host to the affected host.


If the preceding methods do not resolve the issue:
Verify and ensure that there is sufficient free space in the root folder by running this command:
        vdf -h

Check the locker location by running this command:
        ls -ltr /

If the locker is not pointing to a datastore:
Rename the old locker file by running this command:
        mv /locker /locker.old

Recreate the symbolic link by running this command:
        ln -s /store /locker

Monday, December 28, 2015

Configuring Network Time Protocol (NTP) on ESX/ESXi hosts using the vSphere Client

Monday, December 28, 2015 0
This blog provides steps to enable Network Time Protocol (NTP) on an ESX/ESXi host using the vSphere Client.

To configure NTP on ESX/ESXi 4.1 and ESXi 5.x hosts using the vSphere Client:

  • Connect to the ESX/ESXi host using the vSphere Client.
  • Select a host in the inventory.
  • Click the Configuration tab.

  • Click Time Configuration.
  • Click Properties.
  • Click Options.


  • Click NTP Settings.
  • Click Add.
  • Enter the NTP Server name. For example, pool.ntp.org.
    Note: When entering the multiple NTP Server names, use a comma (,) followed by a space ( ) between the entries.
  • Click OK.



  • Click the General tab.
  • Click Start automatically under Startup Policy.
    Note: It is recommended to set the time manually prior to starting the service.
  • Click Start and click OK.
  • Click OK to exit.

Monday, December 21, 2015

Install/Upgrade VMware Tools by Suppressing Reboot

Monday, December 21, 2015 0
Using the vSphere Client

The easiest way to update VMware Tools is via the vSphere Client. It’s simple and straightforward.

    Right click on a VM in the vSphere Client
    Choose Guest
    Choose Install/Upgrade VMware Tools

You will be prompted to choose how you would like the upgrade to take place, either Interactively or Automatically. Along with the Automatic option comes the ability to enter some arguments, listed as “advanced options” in the GUI, that will be passed to the install.
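
For Windows guests, a commonly cited advanced-options string for suppressing the automatic reboot is shown below; these are standard setup/MSI reboot switches, so verify them against the VMware Tools version in your environment before relying on them:

/S /v "/qn REBOOT=R"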


Upgrade VMware Tools without reboot

Monday, December 21, 2015 0
Upgrade VMware Tools, selected virtual machine:

Get-Cluster "Productie" | Get-VM "vmname" | Update-Tools -NoReboot

Get VMware-Tools versions:

Get-View -ViewType VirtualMachine | Select Name, @{Name="ToolsVersion"; Expression={$_.Config.Tools.ToolsVersion}}

Troubleshooting Syslog Collector in VMware vSphere

Monday, December 21, 2015 0
Whenever syslog files aren’t updating in the repository on the vSphere Syslog Collector server.

Here are some basic steps that can be used to troubleshoot this problem.
VMware ESXi hosts

On the VMware ESXi hosts check the following settings:
– Syslog destination. Open the vSphere Client. On the ESXi server, open the Configuration tab and select Advanced Settings. Check the Syslog.global.logHost value. The format is protocol://FQDN:port, for example udp://syslog.beerens.local:514

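The Syslog.global.logHost setting can also be checked and changed from the ESXi shell (ESXi 5.x); the log host below is the example value used above:

esxcli system syslog config get
esxcli system syslog config set --loghost='udp://syslog.beerens.local:514'
esxcli system syslog reload
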
– Is the ESXi firewall port open for syslog traffic? Open the vSphere Client, on the ESXi server open the Configuration tab, select Security Profile, Firewall and select Properties. Check whether the syslog service is enabled.

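From the ESXi shell, the syslog firewall ruleset can be checked and, if needed, enabled like this:

esxcli network firewall ruleset list | grep -i syslog
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
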
 vSphere Syslog Collector
On the vSphere Syslog Collector server check the following settings:
– Is the syslog port 514 (default) listening:

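For example, if the Syslog Collector runs on a Windows vCenter Server, netstat can confirm the listener; on the vCenter Server Appliance use the grep form:

netstat -an | findstr 514     (Windows)
netstat -an | grep 514        (Linux appliance)
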
-  Reload and update the syslog configuration.  On the ESXi host use the following command:
esxcli system syslog reload
– Is the Syslog Collector service started? Restart the Syslog Collector service if needed.

After reloading the syslog settings and restarting the Syslog Collector service, the files began to update again in the repository.

How to Enable the Execute Disable/No Execute CPU Feature on ESXi

Monday, December 21, 2015 0

ESXi requires the Execute Disable/No Execute CPU feature to be enabled

Restart the host and press F9 during boot to enter the BIOS setup.

Advanced Options --> Processor Options --> No-Execute Memory Protection, then configure: Enabled




 

Hope it helps.

VMware: How to rollback ESXi 5.1 to 5.0

Monday, December 21, 2015 0
Whenever you find issues after upgrading to ESXi 5.1 from 5.0, the rollback is as simple as below.

Reboot the host and press Shift+R while the hypervisor is loading to start Recovery Mode.
Installed hypervisors:
HYPERVISOR1: 5.0.0-623860
HYPERVISOR2: 5.1.0-799733 (Default)
CURRENT DEFAULT HYPERVISOR WILL BE REPLACED PERMANENTLY
DO YOU REALLY WANT TO ROLL BACK?

Press Y to start the roll back


Result:
The host is downgraded and back online again with VMware vSphere ESXi 5.0.0

How to Disable the interrupt remapping on ESXi

Monday, December 21, 2015 0
 ESXi/ESX 4.1

To disable interrupt remapping on ESXi/ESX 4.1, perform one of these options:

    Run this command from a console or SSH session to disable interrupt mapping:

    # esxcfg-advcfg -k TRUE iovDisableIR

    To back up the current configuration, run this command twice:

    # auto-backup.sh

    Note: It must be run twice to save the change.

    Reboot the ESXi/ESX host:

    # reboot

    To check if interrupt mapping is set after the reboot, run the command:

    # esxcfg-advcfg -j iovDisableIR

    iovDisableIR=TRUE
    In the vSphere Client:
        Click Configuration > (Software) Advanced Settings > VMkernel.
        Click VMkernel.Boot.iovDisableIR, then click OK.
        Reboot the ESXi/ESX host.

ESXi 5.x and ESXi 6.0.x

ESXi 5.x and ESXi 6.0.x do not provide this parameter as a GUI client configurable option. It can only be changed using the esxcli command or via PowerCLI.


    To set the interrupt mapping using the esxcli command:

    List the current setting by running the command:

    # esxcli system settings kernel list -o iovDisableIR

    The output is similar to:

    Name          Type  Description                              Configured  Runtime  Default
    ------------  ----  ---------------------------------------  ----------  -------  -------
    iovDisableIR  Bool  Disable Interrupt Routing in the IOMMU   FALSE        FALSE    FALSE

    Disable interrupt mapping on the host using this command:

    # esxcli system settings kernel set --setting=iovDisableIR -v TRUE

    Reboot the host after running the command.

    Note: If the hostd service fails or is not running, the esxcli command does not work. In such cases, you may have to use the localcli instead. However, the changes made using localcli do not persist across reboots. Therefore, ensure that you repeat the configuration changes using the esxcli command after the host reboots and the hostd service starts responding. This ensures that the configuration changes persist across reboots.
    To set the interrupt mapping through PowerCLI:

    Note: The PowerCLI commands do not work with ESXi 5.1. You must use the esxcli commands as detailed above.

    PowerCLI> Connect-VIServer -Server 10.21.69.233 -User Administrator -Password passwd
    PowerCLI> $myesxcli = Get-EsxCli -VMHost 10.21.69.111
    PowerCLI> $myesxcli.system.settings.kernel.list("iovDisableIR")

    Configured  : FALSE
    Default     : FALSE
    Description : Disable Interrupt Routing in the IOMMU
    Name        : iovDisableIR
    Runtime     : FALSE
    Type        : Bool

    PowerCLI> $myesxcli.system.settings.kernel.set("iovDisableIR","TRUE")
    true

    PowerCLI> $myesxcli.system.settings.kernel.list("iovDisableIR")

    Configured  : TRUE
    Default     : FALSE
    Description : Disable Interrupt Routing in the IOMMU
    Name        : iovDisableIR
    Runtime     : FALSE
    Type        : Bool
    After the host has finished booting, you see this entry in the /var/log/boot.gz log file confirming that interrupt mapping has been disabled:

    TSC: 543432 cpu0:0)BootConfig: 419: iovDisableIR = TRUE

How to resolve - vCenter Server task migration fails with the error: Failed to create journal file provider, Failed to open for write

Monday, December 21, 2015 0
For vCenter Server

The journal files for vCenter Server on Windows are located at:

    Windows 2003 and earlier – %ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\journal\
    Windows 2008 and later – %PROGRAMDATA%\VMware\VMware VirtualCenter\journal\

Whenever you..

    Cannot perform provisioning operations, such as vMotion, Clone, storage DRS
    Cannot create new virtual machines.
    Cannot add RDM disk to a virtual machine.
    Provisioning operations (such as vMotion, Clone, or Migrate) fail.
    vCenter Server task migration fails. You see the error:

    A general system error occurred: Failed to create journal file providerFailed to open "<filename>" for write   

Cause

This issue occurs if there is not enough free disk space to store the journal information. The management components in vCenter Server and ESX/ESXi record transactions to journal files when tracking long-running operations. The path and filename cited in the error message indicate the layer that failed to create a journal file.

The Resolution should be ..

Delete or archive the unnecessary files on this filesystem to free up disk space. Depending on your vCenter Server implementation, it is recommended to have a minimum of 40GB of disk space free on the system.



Explain about VSS writers in Virtual Machines and how to disable the specific VSS writers with VMware Tools

Monday, December 21, 2015 0
VMware products may require file systems within a guest operating system to be quiesced prior to a snapshot operation for the purposes of backup and data integrity.
VMware products which use quiesced snapshots include, but are not limited to, VMware Consolidated Backup and VMware Data Recovery.
As of ESX 3.5 Update 2, quiescing can be done by Microsoft Volume Shadow Copy Service (VSS), which is available in Windows Server 2003.


Operating systems which do not have VSS make use of the SYNC driver for quiescing operations. When VSS is invoked, all VSS providers must be running. If there is an issue with any third-party providers or the VSS service itself, the snapshot operation may fail.
Before verifying a VSS quiescing issue, ensure that you are able to create a manual non-quiesced snapshot using the vSphere Snapshot Manager.


With vSphere 4.0, VMware introduced the ability to disable specific VSS writers for the benefit of troubleshooting a specific VSS writer issue.

If you experience an issue backing up a specific virtual machine using snapshot technology and you have identified an issue with a specific VSS writer within the virtual machine, this blog  explains how to disable that VSS writer from being called during a snapshot operation.

To disable a specific VSS writer being called during a snapshot operation:

    Determine the name of the VSS writer that you want to exclude from the snapshot operation. Run this command from within Windows:
    vssadmin list writers
    Note: With Windows Vista, 7, and 2008 the command prompt may need to be run with administrator elevation.

    You see output similar to:

    Writer name: 'Task Scheduler Writer'
       Writer Id: {d61d61c8-d73a-4eee-8cdd-f6f9786b7124}
       Writer Instance Id: {1bddd48e-5052-49db-9b07-b96f96727e6b}
       State: [1] Stable
       Last error: No error
    Note: The name of the VSS Writer is highlighted.

    Create or edit the vmbackup.conf file which is located at %ALLUSERSPROFILE%\Application Data\VMware\VMware Tools\ .

    Note: If the vmbackup.conf file does not exist then create it.
    Place the name of the VSS writer you want to disable on a separate line. If you want to disable more than one VSS writer, ensure that you place each VSS writer name on a separate line. For example:
    Task Scheduler Writer
    NTDS
    SqlServerWriter
    Microsoft Exchange Replica Writer
    Microsoft Exchange Writer
    Restart the VMware Tools service.
    When the writer issue has been resolved, you can remove the offending writer from the vmbackup.conf file.

Note: VMware does not provide these VSS writers. Engage the provider of the VSS writer to troubleshoot the writer issue to ensure application consistency with the writer.

Wednesday, December 9, 2015

Explain about SysRq command and How to reboot the hanged physical Linux & Xen Linux VM Server

Wednesday, December 09, 2015 0
The magic SysRq key is a key combination in the Linux kernel which allows the user to perform various low level commands regardless of the system’s state.

It is often used to recover from freezes, or to reboot a computer without corrupting the filesystem. The key combination consists of Alt+SysRq+commandkey. In many systems the SysRq key is the printscreen key.

First, you need to enable the SysRq key, as shown below.

echo "1" > /proc/sys/kernel/sysrq

List of SysRq Command Keys

Following are the command keys available for Alt+SysRq+commandkey.

    ‘k’ – Kills all the process running on the current virtual console.
    ‘s’ – This will attempt to sync all the mounted file system.
    ‘b’ – Immediately reboot the system, without unmounting partitions or syncing.
    ‘e’ – Sends SIGTERM to all process except init.
    ‘m’ – Output current memory information to the console.
    ‘i’ – Send the SIGKILL signal to all processes except init
    ‘r’ – Switch the keyboard from raw mode (the mode used by programs such as X11), to XLATE mode.
    ‘t’ – Output a list of current tasks and their information to the console.
    ‘u’ – Remount all mounted filesystems in readonly mode.
    ‘o’ – Shutdown the system immediately.
    ‘p’ – Print the current registers and flags to the console.
    ‘0-9’ – Sets the console log level, controlling which kernel messages will be printed to your console.
    ‘f’ – Will call oom_kill to kill process which takes more memory.
    ‘h’ – Used to display the help. But any other keys than the above listed will print help.

Perform a Safe reboot of Linux

To perform a safe reboot of a Linux computer which hangs, do the following. This will avoid the fsck during the next reboot, i.e., press Alt+SysRq+the letter highlighted in each step below (a sketch using /proc/sysrq-trigger follows the list).

  •     unRaw (r) – take control of the keyboard back from X11,
  •     tErminate (e) – send SIGTERM to all processes, allowing them to terminate gracefully,
  •     kIll (i) – send SIGKILL to all processes, forcing them to terminate immediately,
  •     Sync (s) – flush data to disk,
  •     Unmount (u) – remount all filesystems read-only,
  •     reBoot (b).
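
If the console keyboard is unusable but you still have a working SSH session, the same r-e-i-s-u-b sequence can be driven through /proc/sysrq-trigger; note that the final step reboots the machine immediately:

echo 1 > /proc/sys/kernel/sysrq        # enable SysRq
for key in r e i s u b; do
    echo $key > /proc/sysrq-trigger    # send each command key in order
    sleep 2                            # give sync/unmount a moment to complete
done
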
VM Server

 To perform a safe reboot of a Linux Xen Virtual Server which hangs up, do the following. This will avoid the fsck during the next re-booting.

Run the below command in Xen Dom0.

#xm sysrq <domainid> s
#xm sysrq <domainid> u
#xm sysrq <domainid> b


Thursday, December 3, 2015

What are the tools available to properly diagnose a network performance problem in Linux Server?

Thursday, December 03, 2015 0
The Linux tools listed below are used to diagnose network performance on a Linux server.

netstat

    A command-line utility that prints network connections, routing tables, interface statistics, masquerade connections and multicast memberships. It retrieves information about the networking subsystem from the /proc/net/ file system. These files include:

        /proc/net/dev (device information)
        /proc/net/tcp (TCP socket information)
        /proc/net/unix (Unix domain socket information)

    For more information about netstat and its referenced files from /proc/net/, refer to the netstat man page: man netstat.
dropwatch
    A monitoring utility that monitors packets dropped by the kernel. For more information, refer to the dropwatch man page: man dropwatch.

ip
    A utility for managing and monitoring routes, devices, policy routing, and tunnels.

ethtool
    A utility for displaying and changing NIC settings.

/proc/net/snmp
    A file that displays ASCII data needed for the IP, ICMP, TCP, and UDP management information bases for an snmp agent. It also displays real-time UDP-lite statistics.
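
As a quick start, the following commands (eth0 is a placeholder interface name) pull the most commonly needed counters from these tools:

netstat -s | head -20        # per-protocol statistics summary
ip -s link show eth0         # per-interface packet, error and drop counters
ethtool eth0                 # link speed, duplex and link status
head /proc/net/snmp          # raw IP/ICMP/TCP/UDP counters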

Wednesday, December 2, 2015

Explain about Linux Memory Huge Pages & Transparent Huge Pages

Wednesday, December 02, 2015 0
1. Memory is managed in blocks known as pages.
2. A page is 4096 bytes.
3. 1MB of memory is equal to 256 pages.
4. 1GB of memory is equal to approximately 256,000 pages (262,144 exactly), etc.
5. CPUs have a built-in memory management unit that contains a list of these pages, with each page referenced through a page table entry.

There are two ways to enable the system to manage large amounts of memory:

    Increase the number of page table entries in the hardware memory management unit
    Increase the page size

The first method is expensive, since the hardware memory management unit in a modern processor only supports hundreds or thousands of page table entries.

Red Hat Enterprise Linux 6 implements the second method

  • Simply put, huge pages are blocks of memory that come in 2MB and 1GB sizes.
  • The page tables used by the 2MB pages are suitable for managing multiple gigabytes of memory, whereas the page tables of 1GB pages are best for scaling to terabytes of memory.
  • Huge pages must be assigned at boot time.
  • They are also difficult to manage manually, and often require significant changes to code in order to be used effectively.

THP (transparent huge pages) is an abstraction layer that automates most aspects of creating, managing, and using huge pages.

  • THP hides much of the complexity in using huge pages from system administrators and developers.
  • As the goal of THP is improving performance, its developers (both from the community and Red Hat) have tested and optimized THP across a wide range of systems, configurations, applications, and workloads.
  • This allows the default settings of THP to improve the performance of most system configurations
  • THP is not recommended for database workloads.
  • THP can currently only map anonymous memory regions such as heap and stack space.
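
To see whether huge pages are allocated and whether THP is active on a given host, check /proc/meminfo and the THP sysfs switch (the path is redhat_transparent_hugepage on Red Hat Enterprise Linux 6 and transparent_hugepage on later kernels):

grep -i huge /proc/meminfo
cat /sys/kernel/mm/redhat_transparent_hugepage/enabled    # RHEL 6
cat /sys/kernel/mm/transparent_hugepage/enabled           # later kernels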

Saturday, November 21, 2015

Brief about ESXTOP - Batch Mode

Saturday, November 21, 2015 0
Batch mode – statistics can be collected and the output saved to a file (CSV), which can then be viewed and analyzed later using Windows Perfmon and other tools.

To run esxtop in batch mode and save the output file for future analysis, use the command syntax below:

esxtop -b -d 10 -n 5 > /home/nagu/esxstats.csv

-d switch is used for the number of seconds between refreshes
-n switch is the number of iterations to run esxtop


In our above example, the esxtop command will run for about 50 seconds (10-second delay * 5 iterations), redirecting the esxtop statistics into a CSV file stored at

/home/nagu/esxstats.csv




Once the command has completed, browse to /home/nagu to see the esxtop output file “esxstats.csv”. Transfer the CSV file using WinSCP to your Windows desktop and analyze it using Windows Perfmon or esxplot.

VMWare Interview Questions and answers on vMotion

Saturday, November 21, 2015 0
1.What is vMotion?

      Live migration of a virtual machine from one ESX server to another with Zero downtime.

2. What are the use cases of vMotion ?
  • Balance the load on ESX servers (DRS)
  • Save power by shutting down ESX hosts using DPM
  • Perform patching and maintenance on the ESX server (Update Manager or HW maintenance)
3.  What are Pre-requisites for the vMotion to Work?
  • ESX host must be licensed for VMotion
  • ESX  servers must be configured with vMotion Enabled VMkernel Ports.   
  • ESX servers must have compatible CPU’s for the vMotion to work
  • ESX servers should have shared storage (FC, iSCSI or NFS) and VMs should be stored on that storage.
  • ESX servers should have exact similar network & network names
4. What are the Limitations of vMotion?
  • Virtual machines configured with Raw Device Mappings (RDMs) for clustering features cannot be migrated with vMotion.
  • VM cannot be connected to a CD-ROM or floppy drive that is using an ISO or floppy image stored on a drive that is local to the host server. The device should be disconnected before initiating the vMotion.
  • Virtual Machine cannot be migrated with VMotion unless the destination swapfile location is the same as the source swapfile location. As a best practice, Place the virtual machine swap files with the virtual  machine configuration file.
  • Virtual machine CPU affinity must not be set (i.e., the VM must not be bound to physical CPUs).
5. Steps involved in VMWare vMotion ?
  • A request has been made that VM-1 should be migrated (or “VMotioned”) from ESX A to ESX B.
  • VM-1’s memory is pre-copied from ESX A to ESX B while ongoing changes are written to a memory bitmap on ESX A.
  • VM-1 is quiesced on ESX A and VM-1’s memory bitmap is copied to ESX B.
  • VM-1 is started on ESX B and all access to VM-1 is now directed to the copy running on ESX B.
  • The rest of VM-1’s memory is copied from ESX A; in the meantime, when applications on VM-1 on ESX B attempt to access memory that has not yet been copied, those pages are read from ESX A.
  • If the migration is successful, VM-1 is unregistered on ESX A. 

PowerShell Script to List all VM’s with a connected CD-ROM/floppy device

Saturday, November 21, 2015 0
This script will report all VMs with a connected CD-ROM/floppy device. It will give you information about the device status – e.g. connected, connect at power on, client device

Replace vCenter Server with your vCenter Server name in the first line:

Connect-VIServer vCenter_name

$vms = Get-VM
write "VMs with a connected CD-ROM:"
foreach ($vm in $vms | where { $_ | Get-CDDrive | where { $_.ConnectionState.Connected -eq "true"}}) {
write $vm.name
}
write "VMs with CD-ROM connected at power on:"
foreach ($vm in $vms | where { $_ | Get-CDDrive | where { $_.ConnectionState.StartConnected -eq "true"}}) {
write $vm.name
}
write "VMs with CD-ROM connected as 'Client Device':"
foreach ($vm in $vms | where { $_ | Get-CDDrive | where { $_.RemoteDevice.Length -ge 0}}) {
write $vm.name
}
write "VMs with CD-ROM connected to 'Datastore ISO file':"
foreach ($vm in $vms | where { $_ | Get-CDDrive | where { $_.ISOPath -like "*.ISO*"}}) {
write $vm.name
}
write "VMs with connected Floppy:"
foreach ($vm in $vms | where { $_ | Get-FloppyDrive | where { $_.ConnectionState.Connected -eq "true"}}) {
write $vm.name
}
write "VMs with floppy connected at power on:"
foreach ($vm in $vms | where { $_ | Get-FloppyDrive | where { $_.ConnectionState.StartConnected -eq "true"}}) {
write $vm.name
}
write "VMs with floppy connected as 'Client Device':"
foreach ($vm in $vms | where { $_ | Get-FloppyDrive | where { $_.RemoteDevice.Length -ge 0}}) {
write $vm.name
}

Note: Copy this code in a notepad and save the file as .ps1

vSphere 6.0 -Difference between vSphere 5.0, 5.1, 5.5 and vSphere 6.0

Saturday, November 21, 2015 0
vSphere 6.0 was released with a lot of new features and enhancements compared to the previous vSphere releases. Below is the difference between vSphere 5.0, 5.1, 5.5 and vSphere 6.0:



VMWare HA Slots Calculation

Saturday, November 21, 2015 0
 What is SLOT?

As per VMWare’s Definition,
“A slot is a logical representation of the memory and CPU resources that satisfy the requirements for any powered-on virtual machine in the cluster.”
If you have configured reservations at the VM level, they influence the HA slot calculation. The highest memory reservation and the highest CPU reservation of the VMs in your cluster determine the slot size for the cluster.

Here is the Example,

If you have a VM configured with the highest memory reservation of 8192 MB (8 GB) and the highest CPU reservation of 4096 MHz among the other VMs in the cluster, then the slot size for memory is 8192 MB and the slot size for CPU is 4096 MHz in the cluster.



If no VM-level reservation is configured, a minimum CPU size of 256 MHz and a memory size of 0 MB + the VM memory overhead are considered as the CPU and memory slot sizes.
Calculation for Number of Slots in cluster :-
Once we got the Slot size for memory and CPU by the above method , Use the below calculation
Num of CPU Slots  = Total available CPU resource of ESX or cluster   /  CPU Slot Size
Num of memory slots = Total available memory resource of ESX or cluster minus memory used for service console & ESX system /  Memory Slot size

Let’s take an example.
I have 3 hosts in the cluster and 6 virtual machines running on it, and each host’s capacity is as follows:
RAM = 50 GB per Host
CPU = 8 X 2.666 GHZ  per host
Cluster RAM Resources = 50 X 3 = 150 GB – Memory for service console and system = 143 GB
Cluster CPU resources = 8 X 2.6 X 3 =  63 GHZ (63000 MHZ) of total CPU capacity in the cluster – CPU Capacity used by the ESX System = 60384 MHZ



I don’t have any memory or CPU reservation in my cluster, so the default CPU slot size of 256 MHz applies, and one of my virtual machines is assigned 8 vCPUs with a memory overhead of 344.98 MB (which is the highest overhead among my 6 virtual machines in the cluster).
Let’s calculate the num of  CPU  & Memory slots
Num of CPU Slots  = Total available CPU resource of cluster /  CPUSlot size in MHZ
No of CPU Slots = 60384 MHZ / 256 MHZ = 235.875 Approx
Num of Memory Slots =  Total available Memory resource of cluster  /  memory Slot Size  in MB
Num of Memory Slots =  146432 / 345 =  424 Approx
The most restrictive number among CPU and Memory slots determines the amount of slots for this cluster. We have 235 slots available for  CPU and 424 Slots available for Memory. So the most restrictive number is 235.
So, the total number of slots for my cluster is approximately 235.
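
If a single large reservation skews the slot size, vSphere HA also allows the slot size to be capped with cluster advanced options; the values below are placeholders, not recommendations:

das.slotCpuInMHz = 512
das.slotMemInMB  = 1024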





Installing ESXi Patches by Using the CLI

Saturday, November 21, 2015 0
Pre-requisites steps for installing ESXi patches

    Download the patches applicable to our ESX/ESXi version manually.

    We can install patches using the esxcli command over an SSH connection, or via the ESXi shell using remote console connections like iLO or DRAC.
    Now the downloaded patches need to be transferred to a datastore on the ESX/ESXi host.

 Implementation steps

    1. Log in to your ESXi host using SSH or the ESXi shell with your root credentials.
    2. Browse to the patch location in your datastore, verify the downloaded patches are already there, and note down the complete path for the patch.
    3. Before installing patches, placing your ESXi host in maintenance mode is very important.
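
    For example, on ESXi 5.x/6.x the host can be placed in maintenance mode from the shell with:

    esxcli system maintenanceMode set --enable true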

    esxcli software vib install -d /vmfs/volumes/datastore1/ESXi\ patches/ESXi510-201210001.zip

    To verify the VIBs installed on your host, execute the below command:

esxcli software vib list

    Reboot your ESXi host for the changes to take effect and exit the host from maintenance mode.

Explain the Vmotion Background Process

Saturday, November 21, 2015 0
 Vmotion Background Process
  • The virtual machine memory state is copied over the vMotion network from the source host to the target host. Users continue to access the virtual machine and potentially update pages in memory. A list of modified pages in memory is kept in a memory bitmap on the source host.
  • After most of the virtual machine memory is copied from the source host to the target host, the virtual machine is quiesced and no additional activity occurs on it. During the quiesce period, vMotion transfers the virtual machine device state and memory bitmap to the destination host.
  • Immediately after the virtual machine is quiesced on the source host, the virtual machine is initialized and starts running on the target host.
  • Users access the virtual machine on the target host instead of the source host.
  • The memory pages that the virtual machine was using on the source host are marked as free.

Difference Between Esx and Esxi

Saturday, November 21, 2015 0
Difference Between Esx and Esxi



Thursday, November 19, 2015

How to Ignore the Local Disks when Generating Multipath Devices in Linux Server

Thursday, November 19, 2015
Some machines have local SCSI cards for their internal disks. DM-Multipath is not recommended for these devices.

The following procedure shows how to modify the multipath  configuration file to ignore the local disks when configuring multipath.

1.  Determine which disks are the internal disks and mark them as the ones to blacklist.

In this example, /dev/sda is the internal disk. Note that as originally configured in the default multipath configuration file, executing the multipath -v2 shows the local disk, /dev/sda, in the multipath map.

[root@test ~]# multipath -v2
create: SIBM-ESXSST336732LC____F3ET0EP0Q000072428BX1
[size=33 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 0:0:0:0 sda  8:0    [---------
device-mapper ioctl cmd 9 failed: Invalid argument
device-mapper ioctl cmd 14 failed: No such device or address
create: 3600a0b80001327d80000006d43621677
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:0 sdb  8:16  
  \_ 3:0:0:0 sdf  8:80  
create: 3600a0b80001327510000009a436215ec
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:1 sdc  8:32  
  \_ 3:0:0:1 sdg  8:96  
create: 3600a0b80001327d800000070436216b3
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:2 sdd  8:48  
  \_ 3:0:0:2 sdh  8:112 

2. In order to prevent the device mapper from mapping /dev/sda in its multipath maps, edit the blacklist section of the /etc/multipath.conf file to include this device. Although you could blacklist the sda device using a devnode type, that would not be a safe procedure since /dev/sda is not guaranteed to be the same on reboot. To blacklist individual devices, you can blacklist using the WWID of that device.
Note that in the output of the multipath -v2 command, the WWID of the /dev/sda device is SIBM-ESXSST336732LC____F3ET0EP0Q000072428BX1.
To blacklist this device, include the following in the /etc/multipath.conf file.

blacklist {
      wwid SIBM-ESXSST336732LC____F3ET0EP0Q000072428BX1
}

3. After you have updated the /etc/multipath.conf file, you must manually tell the multipathd daemon to reload the file.

The following command reloads the updated /etc/multipath.conf file.
service multipathd reload

4. Run the following commands:

multipath -F
multipath -v2
[root@test~]# multipath -F
[root@test ~]# multipath -v2

create: 3600a0b80001327d80000006d43621677
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:0 sdb  8:16  
  \_ 3:0:0:0 sdf  8:80  
create: 3600a0b80001327510000009a436215ec
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:1 sdc  8:32  
  \_ 3:0:0:1 sdg  8:96  
create: 3600a0b80001327d800000070436216b3
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:2 sdd  8:48  
  \_ 3:0:0:2 sdh  8:112 

Tuesday, November 17, 2015

Explain Multipath command output in Linux Server

Tuesday, November 17, 2015
When you create, modify, or list a multipath device, you get a printout of the current device setup. The format is as follows.

For each multipath device:

 action_if_any: alias (wwid_if_different_from_alias) [size][features][hardware_handler]

For each path group:

\_ scheduling_policy [path_group_priority_if_known] [path_group_status_if_known]

For each path:

\_ host:channel:id:lun devnode major:minor [path_status] [dm_status_if_known]

For example, the output of a multipath command might appear as follows:

mpath1 (3600d0230003228bc000339414edb8101) [size=10 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [prio=1][active]
 \_ 2:0:0:6 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 3:0:0:6 sdc 8:64 [active][ready]

If the path is up and ready for I/O, the status of the path is ready or active. If the path is down, the status is faulty or failed.


 The path status is updated periodically by the multipathd daemon based on the polling interval defined in the /etc/multipath.conf file.

The dm status is similar to the path status, but from the kernel's point of view. The dm status has two states: failed, which is analogous to faulty, and active, which covers all other path states. Occasionally, the path state and the dm state of a device will temporarily not agree.

Friday, November 13, 2015

How to setup DM-Multipath in Linux server?

Friday, November 13, 2015 0
DM-Multipath includes compiled-in default settings that are suitable for common multipath configurations.

Setting up DM-multipath is often a simple procedure.

The basic procedure for configuring your system with DM-Multipath is as follows:

1. Install device-mapper-multipath rpm.
 
Before setting up DM-Multipath on your system, ensure that your system has been updated and includes the device-mapper-multipath package.
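
For example, on a Red Hat based system the package can be installed and verified with:

yum install device-mapper-multipath
rpm -q device-mapper-multipath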

2. Edit the multipath.conf configuration file:

  Edit the /etc/multipath.conf file by commenting out the following lines at the top of the file. This section of the configuration   file, in its initial state, blacklists all devices. You must comment it out to enable multipathing.
     
       blacklist {
        devnode "*"
}

The default settings for DM-Multipath are compiled in to the system and do not need to be explicitly set in the /etc/multipath.conf file.

The default value of path_grouping_policy is set to failover, so in this example you do not need to change the default value.

The initial defaults section of the configuration file configures your system so that the names of the multipath devices are of the form mpathn; without this setting, the names of the multipath devices would be aliased to the WWID of the device.

Save the configuration file and exit the editor.

3. Start the multipath daemons.

modprobe dm-multipath
service multipathd start
multipath -v2

The multipath -v2 command prints out multipathed paths that show which devices are multipathed. If the command does not print anything out, ensure that all SAN connections are set up properly and the system is multipathed.

4. Execute the following command to ensure that the multipath daemon starts on bootup:

    chkconfig multipathd on

Since the value of user_friendly_names is set to yes in the configuration file, the multipath devices will be created as /dev/mapper/mpathn.
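
A minimal defaults section that makes these choices explicit (a sketch, not a complete configuration) would look like this in /etc/multipath.conf:

defaults {
    user_friendly_names yes
    path_grouping_policy failover
}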

Monday, November 9, 2015

Understanding the TCPDUMP command with an example - Linvirtshell

Monday, November 09, 2015 0
In most cases you will need root permission to be able to capture packets on an interface. Use tcpdump (as root) to capture the packets and save them to a file for later analysis.

See the list of interfaces on which tcpdump can listen:

tcpdump -D

[root@nsk-linux nsk]# tcpdump -D

1.usbmon1 (USB bus number 1)
2.eth4
3.any (Pseudo-device that captures on all interfaces)
4.lo

Listen on interface eth0:

tcpdump -i eth0

Listen on any available interface (cannot be done in promiscuous mode. Requires Linux kernel 2.2 or greater)

tcpdump -i any

Capture only N number of packets using tcpdump -c

 [root@nsk-linux nsk]# tcpdump -c 2 -i eth4

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth4, link-type EN10MB (Ethernet), capture size 65535 bytes
18:35:51.382706 IP 10.0.2.15.ssh > 10.0.2.2.51879: Flags [P.], seq 4037059562:4037059770, ack 3747030, win 36432, length 208
18:35:51.383008 IP 10.0.2.2.51879 > 10.0.2.15.ssh: Flags [.], ack 208, win 65535, length 0
2 packets captured
6 packets received by filter
0 packets dropped by kernel

Display Captured Packets in ASCII using tcpdump -A

# tcpdump -A -i eth0

Display Captured Packets in HEX and ASCII using tcpdump -XX

#tcpdump -XX -i eth0

Be verbose while capturing packets

#tcpdump -v

Be very verbose while capturing packets

#tcpdump -vvv

Be verbose and print the data of each packet in both hex and ASCII, excluding the link level header

tcpdump -v -X

Be verbose and print the data of each packet in both hex and ASCII, also including the link level header

tcpdump -v -XX

Be less verbose (than the default) while capturing packets

tcpdump -q

Limit the capture to 100 packets

tcpdump -c 100

Record the packet capture to a file called capture.cap

tcpdump -w capture.cap

Record the packet capture to a file called capture.cap but display on-screen how many packets have been captured in real-time

tcpdump -v -w capture.cap

Display the packets of a file called capture.cap

tcpdump -r capture.cap

Display the packets using maximum detail of a file called capture.cap

tcpdump -vvv -r capture.cap

Display IP addresses and port numbers instead of domain and service names when capturing packets (note: on some systems you need to specify -nn to display port numbers)

tcpdump -n

Capture any packets where the destination host is 10.0.2.2. Display IP addresses and port numbers

tcpdump -n dst host 10.0.2.2

Capture any packets where the source host is 10.0.2.2. Display IP addresses and port numbers

tcpdump -n src host 10.0.2.2

Capture any packets where the source or destination host is 10.0.2.15. Display IP addresses and port numbers

tcpdump -n host 10.0.2.15

Capture any packets where the destination network is 10.0.2.0/24. Display IP addresses and port numbers

tcpdump -n dst net 10.0.2.0/24

Capture any packets where the source network is 10.0.2.0/24. Display IP addresses and port numbers

tcpdump -n src net 10.0.2.0/24


Capture any packets where the source or destination network is 10.0.2.0/24. Display IP addresses and port numbers

[root@nsk ~]# tcpdump -n net 10.0.2.0/24

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes

18:56:07.471583 IP 10.0.2.15.ssh > 10.0.2.2.60038: Flags [P.], seq 312243348:312243556, ack 3492510, win 65136, length 208
18:56:07.471790 IP 10.0.2.15.ssh > 10.0.2.2.60038: Flags [P.], seq 208:384, ack 1, win 65136, length 176
18:56:07.471947 IP 10.0.2.15.ssh > 10.0.2.2.60038: Flags [P.], seq 384:544, ack 1, win 65136, length 160
18:56:07.472093 IP 10.0.2.15.ssh > 10.0.2.2.60038: Flags [P.], seq 544:704, ack 1, win 65136, length 160
18:56:07.472247 IP 10.0.2.15.ssh > 10.0.2.2.60038: Flags [P.], seq 704:864, ack 1, win 65136, length 160
18:56:07.472370 IP 10.0.2.15.ssh > 10.0.2.2.60038: Flags [P.], seq 864:1024, ack 1, win 65136, length 160
18:56:07.472576 IP 10.0.2.15.ssh > 10.0.2.2.60038: Flags [P.], seq 1024:1184, ack 1, win 65136, length 160
18:56:07.472605 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 208, win 65535, length 0
18:56:07.472619 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 384, win 65535, length 0
18:56:07.472624 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 544, win 65535, length 0
18:56:07.472627 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 704, win 65535, length 0
18:56:07.472629 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 864, win 65535, length 0
18:56:07.472632 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 1024, win 65535, length 0

Capture any packets where the destination port is 22. Display IP addresses and port numbers

[root@nsk ~]# tcpdump -n dst port 22

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
18:54:41.047546 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 312125892, win 65535, length 0
18:54:41.047856 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 161, win 65535, length 0
18:54:41.048086 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 305, win 65535, length 0
18:54:41.048309 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 449, win 65535, length 0
18:54:41.048535 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 593, win 65535, length 0
18:54:41.048744 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 737, win 65535, length 0
18:54:41.048969 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 881, win 65535, length 0

Capture any packets where the destination port is between 1 and 1023 inclusive. Display IP addresses and port numbers

[root@nsk ~]# tcpdump -n dst portrange 1-1023

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
18:53:33.082176 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 311660756, win 65535, length 0
18:53:33.082872 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 161, win 65535, length 0
18:53:33.083288 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 305, win 65535, length 0
18:53:33.083668 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 449, win 65535, length 0
18:53:33.083860 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 593, win 65535, length 0
18:53:33.084131 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 737, win 65535, length 0
18:53:33.084410 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 881, win 65535, length 0
18:53:33.084655 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 1025, win 65535, length 0

Capture only TCP packets where the destination port is between 1 and 1023 inclusive. Display IP addresses and port numbers

[root@nsk ~]# tcpdump -n tcp dst portrange 1-1023

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
18:51:43.154211 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 311537732, win 65535, length 0
18:51:43.155095 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 161, win 65535, length 0
18:51:43.155509 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 305, win 65535, length 0
18:51:43.155805 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 449, win 65535, length 0
18:51:43.156082 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 593, win 65535, length 0
18:51:43.156352 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 737, win 65535, length 0
18:51:43.156619 IP 10.0.2.2.60038 > 10.0.2.15.ssh: Flags [.], ack 881, win 65535, length 0


Capture only UDP packets where the destination port is between 1 and 1023 inclusive. Display IP addresses and port numbers

[root@nsk ~]# tcpdump -n udp dst portrange 1-1023


Capture any packets with destination IP 10.0.2.15 and destination port 23. Display IP addresses and port numbers

[root@nsk ~]# tcpdump -n "dst host 10.0.2.15 and dst port 23"


Capture any packets with destination IP 10.0.2.15 and destination port 80 or 443. Display IP addresses and port numbers

[root@nsk ~]# tcpdump -n "dst host 10.0.2.15 and (dst port 80 or dst port 443)"


Capture any ICMP packets

[root@nsk ~]# tcpdump -v icmp


Capture any ARP packets

[root@nsk ~]# tcpdump -v arp


Capture 500 bytes of data for each packet rather than the default of 68 bytes

[root@nsk-linux nsk]# tcpdump -s 500


Capture all bytes of data within the packet

[root@nsk-linux nsk]# tcpdump -s 0


Capture the particular interface traffic and save as .cap file

[root@nsk-linux nsk]# tcpdump -i enp0s3 -s 0 -vvv -w /home/nsk/file_18:03:54.pcap
tcpdump: listening on enp0s3, link-type EN10MB (Ethernet), capture size 65535 bytes
^C97390 packets captured
97855 packets received by filter
460 packets dropped by kernel