This Blog is to share our knowledge and expertise on Linux System Administration and VMware Administration


Thursday, September 14, 2017

Getting ESXi network driver and firmware details using a script

The following PowerCLI script connects to each host over SSH with plink.exe, runs ethtool, and collects the NIC driver and firmware details into a CSV report.

$ExportFilePath = "C:\Users\user\Desktop\esxi.txt"
$PuttyUser = "root"
$PuttyPwd = 'w2k8the$'        # single quotes so the trailing $ is not treated as a variable
$HostNic = "vmnic0"           # ethtool -i queries one NIC at a time
$Plink = "C:\Users\user\Desktop\plink.exe"
$PlinkOptions = "-v -batch -pw $PuttyPwd"
$RCommand = '"' + "ethtool -i " + $HostNic + '"'

# Collect all HP ProLiant blade hosts from the connected vCenter
$ESXHosts = Get-VMHost | Where-Object {$_.Model -match "ProLiant BL"} | Sort-Object Name
$Report = @()

ForEach ($ESXHost in $ESXHosts) {
        $Message = ""
        # Empty object to hold the per-host result
        $HostInfo = "" | Select-Object HostName,ESXVersion,Cluster,pNic,DriverName,DriverVersion,DriverFirmware
        $HostInfo.HostName = $ESXHost.Name
        $HostInfo.ESXVersion = $ESXHost.Version
        $HostInfo.Cluster = (Get-Cluster -VMHost $ESXHost.Name).Name
        $HostInfo.pNic = $HostNic
        Write-Host "Connecting to: " $ESXHost.Name -ForegroundColor Green
        # Run ethtool on the host over SSH via plink
        $Command = $Plink + " " + $PlinkOptions + " " + $PuttyUser + "@" + $ESXHost.Name + " " + $RCommand
        $Message = Invoke-Expression -Command $Command
        # ethtool -i returns "driver:", "version:" and "firmware-version:" lines
        $HostInfo.DriverName = ($Message[0] -split "driver:")[1].Trim()
        $HostInfo.DriverVersion = ($Message[1] -split "version:")[1].Trim()
        $HostInfo.DriverFirmware = ($Message[2] -split "firmware-version:")[1].Trim()
        $Report += $HostInfo
}

$Report = $Report | Sort-Object HostName
If ($Report) {
        $Report | Export-Csv $ExportFilePath -NoTypeInformation
}
Invoke-Item $ExportFilePath
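
If SSH access to the hosts is not an option, roughly the same information can be pulled through PowerCLI's esxcli interface instead of plink. The sketch below is a minimal alternative, assuming PowerCLI with Get-EsxCli -V2 and a NIC named vmnic0; the nested property names (DriverInfo.Driver, DriverInfo.Version, DriverInfo.FirmwareVersion) are as returned by esxcli network nic get in recent releases, so verify them with Get-Member in your environment.

# Query driver/firmware details without SSH, via the esxcli bridge in PowerCLI
$Nic = "vmnic0"
$Results = ForEach ($ESXHost in (Get-VMHost | Sort-Object Name)) {
    $EsxCli = Get-EsxCli -VMHost $ESXHost -V2
    $Info = $EsxCli.network.nic.get.Invoke(@{nicname = $Nic})
    "" | Select-Object @{N="HostName";E={$ESXHost.Name}},
                       @{N="Driver";E={$Info.DriverInfo.Driver}},
                       @{N="DriverVersion";E={$Info.DriverInfo.Version}},
                       @{N="Firmware";E={$Info.DriverInfo.FirmwareVersion}}
}
$Results | Export-Csv "C:\Users\user\Desktop\esxi-esxcli.csv" -NoTypeInformation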

Tuesday, June 14, 2016

NIC bonding in VMware ESXi and ESX


To utilize NIC teaming, two or more network adapters must be uplinked to a virtual switch. The main advantages of NIC teaming are:
  • Increased network capacity for the virtual switch hosting the team.
  • Passive failover in the event one of the adapters in the team goes down.
Observe these guidelines to choose the correct NIC Teaming policy:
  • Route based on the originating port ID: Choose an uplink based on the virtual port where the traffic entered the virtual switch.
  • Route based on an IP hash: Choose an uplink based on a hash of the source and destination IP addresses of each packet. For non-IP packets, whatever is at those offsets is used to compute the hash.
  • Route based on a source MAC hash: Choose an uplink based on a hash of the source Ethernet MAC address.
  • Use explicit failover order: Always use the highest order uplink from the list of Active adapters which passes failover detection criteria.
  • Route based on physical NIC load (Only available on Distributed Switch): Choose an uplink based on the current loads of physical NICs.
Before you begin:
  • The default load balancing policy is Route based on the originating virtual port ID. If the physical switch is using link aggregation, Route based on IP hash load balancing must be used.
  • LACP support was introduced in vSphere 5.1 on distributed vSwitches and requires additional configuration.
  • Ensure VLAN and link aggregation protocol (if any) are configured correctly on the physical switch ports.
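
Before changing anything, it can help to see how the existing standard vSwitches are already configured. A quick read-only PowerCLI check, assuming you are connected to vCenter with Connect-VIServer, might look like the sketch below; the property names shown (LoadBalancingPolicy, ActiveNic, StandbyNic) can be confirmed with Get-Member.

# Report the current NIC teaming policy of every standard vSwitch
Get-VirtualSwitch -VMHost (Get-VMHost) -Standard |
    Get-NicTeamingPolicy |
    Format-Table VirtualSwitch, LoadBalancingPolicy, ActiveNic, StandbyNic -AutoSize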

To configure NIC teaming for standard vSwitch using the vSphere / VMware Infrastructure Client:
  1. Highlight the host and click the Configuration tab.
  2. Click the Networking link.
  3. Click Properties.
  4. Under the Network Adapters tab, click Add.
  5. Select the appropriate (unclaimed) network adapter(s) and click Next.
  6. Ensure that the selected adapter(s) are under Active Adapters.
  7. Click Next > Finish.
  8. Under the Ports tab, highlight the name of the port group and click Edit.
  9. Click the NIC Teaming tab.
  10. Select the correct Teaming policy under the Load Balancing field.
  11. Click OK.
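
The steps above can also be scripted with PowerCLI. The sketch below adds an unclaimed adapter to an existing standard vSwitch and sets the load balancing policy; the host, switch, and vmnic names are placeholders, and it is worth confirming the Set-NicTeamingPolicy parameters with Get-Help in your PowerCLI version.

# Add a free physical NIC to vSwitch0 and set the teaming policy (names are examples)
$VMHost  = Get-VMHost -Name "esxi01.lab.local"
$vSwitch = Get-VirtualSwitch -VMHost $VMHost -Name "vSwitch0" -Standard
$pNic    = Get-VMHostNetworkAdapter -VMHost $VMHost -Physical -Name "vmnic2"

Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vSwitch -VMHostPhysicalNic $pNic -Confirm:$false

# Valid LoadBalancingPolicy values: LoadBalanceSrcId, LoadBalanceIP, LoadBalanceSrcMac, ExplicitFailover
Get-NicTeamingPolicy -VirtualSwitch $vSwitch |
    Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId -MakeNicActive $pNic
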
To configure NIC teaming for standard vSwitch using the vSphere Web Client:
  1. Under vCenter Home, click Hosts and Clusters.
  2. Click on the host.
  3. Click Manage > Networking > Virtual Switches.
  4. Click on the vSwitch.
  5. Click Manage the physical network adapters.
  6. Select the appropriate (unclaimed) network adapter(s) and use the arrow to move the adapter(s) to Active Adapters.
  7. Click Edit settings.
  8. Select the correct Teaming policy under the Load Balancing field.
  9. Click OK.
To configure NIC teaming for Distributed portgroup for VMware vSphere Distributed Switch (VDS) using the vSphere/VMware Infrastructure Client:
  1. From Inventory, go to Networking.
  2. Click on the Distributed switch.
  3. Click the Configuration tab.
  4. Click Manage Hosts. A window pops up.
  5. Click the host.
  6. From the Select Physical Adapters option, select the correct vmnics.
  7. Click Next for the rest of the options.
  8. Click Finish.
  9. Expand the Distributed switch.
  10. Right-click the Distributed Port Group.
  11. Click Edit Settings.
  12. Click Teaming and Failover.
  13. Select the correct Teaming policy under the Load Balancing field.
  14. Click OK.
To configure NIC teaming for Distributed portgroup for VDS using the vSphere Web Client:
  1. Under vCenter Home, click Networking.
  2. Click on the Distributed switch.
  3. Click the Getting Started tab.
  4. Under Basic tasks, click Add and manage hosts. A window pops up.
  5. Click Manage host networking.
  6. Click Next > Attached hosts.
  7. Select the host(s).
  8. Click Next.
  9. Select Manage physical adapters and deselect the rest.
  10. Click Next.
  11. Select the correct vmnics.
  12. Click Assign Uplink > Next.
  13. Click Next for the rest of the options.
  14. Click Finish.
  15. Expand the Distributed Switch.
  16. Click the Distributed Port Group, then Manage > Settings.
  17. Under Properties, click Edit.
  18. Click Teaming and failover.
  19. Select the correct Teaming policy under the Load Balancing field.
  20. Click OK.
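
For distributed port groups, the equivalent PowerCLI sketch uses the VDS cmdlets (VMware.VimAutomation.Vds). The switch and port group names below are placeholders; check Get-Help Set-VDUplinkTeamingPolicy for the exact parameter set in your version.

# Set the teaming policy on a distributed port group (names are examples)
$vds = Get-VDSwitch -Name "dvSwitch01"
$pg  = Get-VDPortgroup -VDSwitch $vds -Name "dvPG-Production"

# LoadBalanceLoadBased = "Route based on physical NIC load" (distributed switch only)
Get-VDUplinkTeamingPolicy -VDPortgroup $pg |
    Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceLoadBased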

NIC Teaming in VMware Explained

Tuesday, June 14, 2016

NIC Teaming

Let’s take a well-deserved break from networking math for a moment and shift into the fun world of NIC teaming. The concept of teaming goes by many different names: bonding, grouping, and trunking to name a few. Really, it just means that we’re taking multiple physical NICs on a given ESXi host and combining them into a single logical link that provides bandwidth aggregation and redundancy to a vSwitch. You might think that this sounds a little bit like port channels from earlier in the book. And you’re partially right—the goal is very similar, but the methods are vastly different.



  Let’s go over all of the configuration options for NIC teaming within a vSwitch. These options are a bit more relevant when your vSwitch is using multiple uplinks but are still valid configuration points no matter the quantity of uplinks.

Load Balancing

The first point of interest is the load-balancing policy. This is basically how we tell the vSwitch to handle outbound traffic, and there are four choices on a standard vSwitch:
  1. Route based on the originating virtual port
  2. Route based on IP hash
  3. Route based on source MAC hash
  4. Use explicit failover order
Keep in mind that we’re not concerned with the inbound traffic because that’s not within our control. Traffic arrives on whatever uplink the upstream switch decided to put it on, and the vSwitch is only responsible for making sure it reaches its destination.

The first option, route based on the originating virtual port, is the default selection for a new vSwitch. Every VM and VMkernel port on a vSwitch is connected to a virtual port. When the vSwitch receives traffic from either of these objects, it assigns the virtual port an uplink and uses it for traffic. 

The chosen uplink will typically not change unless there is an uplink failure, the VM changes power state, or the VM is migrated around via vMotion.

The second option, route based on IP hash, is used in conjunction with a link aggregation group (LAG), also called an EtherChannel or port channel. When traffic enters the vSwitch, the load-balancing policy will create a hash value of the source and destination IP addresses in the packet. The resulting hash value dictates which uplink will be used.
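
To make the idea concrete, here is a simplified, hypothetical illustration of hash-based uplink selection written in PowerShell. It is not VMware's exact algorithm; it only shows the general principle that a hash of the source and destination IPs is reduced modulo the number of uplinks, so a given IP pair always lands on the same uplink.

# Simplified illustration only: hash the IP pair, then pick an uplink by modulo
function Get-UplinkIndex {
    param(
        [string]$SourceIP,
        [string]$DestinationIP,
        [int]$UplinkCount
    )
    # XOR the 32-bit representations of the two addresses, then take the modulo
    $src = [BitConverter]::ToUInt32(([IPAddress]$SourceIP).GetAddressBytes(), 0)
    $dst = [BitConverter]::ToUInt32(([IPAddress]$DestinationIP).GetAddressBytes(), 0)
    return ($src -bxor $dst) % $UplinkCount
}

# The same source/destination pair always maps to the same uplink index
Get-UplinkIndex -SourceIP "10.0.0.25" -DestinationIP "192.168.1.10" -UplinkCount 2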

The third option, route based on source MAC hash, is similar to the IP hash idea, except the policy examines only the source MAC address in the Ethernet frame. To be honest, we have rarely seen this policy used in a production environment, but it can be handy for a nested hypervisor VM to help balance its nested VM traffic over multiple uplinks.


The fourth and final option, use explicit failover order, really doesn’t do any sort of load balancing. Instead, the first Active NIC on the list is used. If that one fails, the next Active NIC on the list is used, and so on, until you reach the Standby NICs. Keep in mind that if you select the Explicit Failover option and you have a vSwitch with many uplinks, only one of them will be actively used at any given time. 

Use this policy only in circumstances where using only one link rather than load balancing over all links is desired or required.

Network Failure Detection

When a network link fails (and they definitely do), the vSwitch is aware of the failure because the link status reports the link as being down. This can usually be verified by seeing if anyone tripped over the cable or mistakenly unplugged the wrong one. 

In most cases, the default network failure detection setting of “link status only” is good enough to satisfy your needs.
But what if you want to determine a failure further up the network, such as a failure beyond your upstream connected switch? This is where beacon probing might be able to help you out. Beacon probing is actually a great term because it does roughly what it sounds like it should do. 

A beacon is regularly sent out from the vSwitch through its uplinks to see if the other uplinks can “hear” it.


The image below shows an example of a vSwitch with three uplinks. Uplink1 sends out a beacon that Uplink2 receives but Uplink3 does not, because upstream aggregation switch 2 is down; traffic is therefore unable to reach Uplink3.
 
An example where beacon probing finds upstream switch failures
Are you curious why we use an example with three uplinks? Imagine you only had two uplinks and sent out a beacon that the other uplink did not hear. Does the sending uplink have a failure, or does the receiving uplink have a failure? It’s impossible to know who is at fault. Therefore, you need at least three uplinks in order for beacon probing to work.
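
Beacon probing is enabled per vSwitch (or per port group) through the failover detection policy. A minimal PowerCLI sketch, assuming a standard vSwitch named vSwitch0 on a host named esxi01.lab.local (both placeholders), might look like this; confirm the parameter name with Get-Help Set-NicTeamingPolicy in your version.

# Switch network failure detection from link status to beacon probing
# Remember: beacon probing only gives a clear answer with three or more uplinks
$vSwitch = Get-VirtualSwitch -VMHost (Get-VMHost "esxi01.lab.local") -Name "vSwitch0" -Standard

Get-NicTeamingPolicy -VirtualSwitch $vSwitch |
    Set-NicTeamingPolicy -NetworkFailoverDetectionPolicy BeaconProbing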


Notify Switches

The Notify Switches configuration is a bit mystifying at first. Notify the switches about what, exactly? By default, it’s set to “Yes,” and as we cover here, that’s almost always a good thing.

Remember that all of your upstream physical switches have a MAC address table that they use to map ports to MAC addresses. This avoids the need to flood their ports—which means sending frames to all ports except the port they arrived on (which is the required action when a frame’s destination MAC address doesn’t appear in the switch’s MAC address table).

But what happens when one of your uplinks in a vSwitch fails and all of the VMs begin using a new uplink? The upstream physical switch would have no idea which port the VM is now using and would have to resort to flooding the ports or wait for the VM to send some traffic so it can re-learn the new port. 

Instead, the Notify Switches option speeds things along by sending Reverse Address Resolution Protocol (RARP) frames to the upstream physical switch on behalf of the VM or VMs so that the upstream switch updates its MAC address table. This is all done before frames start arriving from the newly vMotioned VM, the newly powered-on VM, or from the VMs that are behind the uplink port that failed and was replaced.

These RARP announcements are just a fancy way of saying that the ESXi host will send out a special update letting the upstream physical switch know that the MAC address is now on a new uplink so that the switch will update its MAC address table before actually needing to send frames to that MAC address. It’s sort of like ESXi is shouting to the upstream physical switch and saying, “Hey! This VM is over here now!”
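
Whether this behavior is enabled shows up as the NotifySwitches value in the teaming policy. A quick, read-only PowerCLI check across the standard port groups of a host (the host name is a placeholder, and the property name can be confirmed with Get-Member) could look like:

# List the NotifySwitches setting for each standard port group on a host
Get-VirtualPortGroup -VMHost (Get-VMHost "esxi01.lab.local") -Standard |
    Get-NicTeamingPolicy |
    Format-Table VirtualPortGroup, NotifySwitches -AutoSize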

Failback

Since we’re already on the topic of an uplink failure, let’s talk about Failback. If you have a Standby NIC in your NIC Team, it will become Active if there are no more Active NICs in the team. Basically, it will provide some hardware redundancy while you go figure out what went wrong with the failed NIC.

 When you fix the problem with the failed Active NIC, the Failback setting determines if the previously failed Active NIC should now be returned to Active duty.

If you set this value to Yes, the now-operational NIC will immediately go back to being Active again, and the Standby NIC returns to being Standby. Things return to the way they were before the failure.

If you choose the No value, the replaced NIC will simply remain inactive until either another NIC fails or you return it to Active status.
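
Failback is a per-vSwitch (or per-port-group) boolean in the teaming policy. A short sketch of turning it off with PowerCLI, reusing the placeholder names from earlier and assuming the parameter is -Failback in your PowerCLI version, would be:

# Disable automatic failback so a recovered NIC stays inactive until you review it
Get-VirtualSwitch -VMHost (Get-VMHost "esxi01.lab.local") -Name "vSwitch0" -Standard |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -Failback:$false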

Failover Order

The final section in a NIC team configuration is the failover order. It consists of three different adapter states:
  • Active adapters: Adapters that are actively used to pass traffic.
  • Standby adapters: These adapters will only become Active if the defined Active adapters have failed.
  • Unused adapters: Adapters that will never be used by the vSwitch, even if all the Active and Standby adapters have failed.
While the Standby and Unused statuses do have value for some specific configurations, such as with balancing vMotion and management traffic on a specific pair of uplinks, it’s common to just set all the adapters to Active and let the load-balancing policy do the rest.
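
Adapter states are set through the same teaming policy cmdlets. The brief sketch below, with placeholder host, switch, and vmnic names, pins one adapter as Active and one as Standby on a port group (for example, to separate vMotion from management traffic); -MakeNicActive and -MakeNicStandby expect physical NIC objects.

# Pin vmnic0 as Active and vmnic1 as Standby on the port group (names are examples)
$VMHost  = Get-VMHost "esxi01.lab.local"
$Active  = Get-VMHostNetworkAdapter -VMHost $VMHost -Physical -Name "vmnic0"
$Standby = Get-VMHostNetworkAdapter -VMHost $VMHost -Physical -Name "vmnic1"

Get-VirtualPortGroup -VMHost $VMHost -Name "vMotion" -Standard |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive $Active -MakeNicStandby $Standby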