Dear readers,
As you are probably aware, NSX-T uses its own vSwitch, called the N-VDS. The N-VDS is primarily used to encapsulate and decapsulate GENEVE overlay traffic between NSX-T transport nodes, along with supporting the distributed firewall (dFW) for micro-segmentation. The N-VDS requires its own dedicated pNIC interfaces; these pNICs cannot be shared with vSphere vSwitches (vDS or vSS). In a typical NSX-T deployment, each transport node has one or two Tunnel End Points (TEPs) to terminate the GENEVE overlay traffic. The number of TEPs is directly related to the attached Uplink Profile: if you use an uplink teaming policy of "Failover", only a single TEP is used, whereas with a teaming policy of "Load Balance Source", a TEP is assigned to each physical NIC. Such a "Load Balance Source" Uplink Profile is shown below and will be used for this lab exercise.
The mapping of the "Uplinks" is as follows:
- ActiveUplink1 is the pNIC (vmnic2) connected to ToR switch NY-CAT3750G-A
- ActiveUplink2 is the pNIC (vmnic3) connected to ToR switch NY-CAT3750G-B
Additionally, you can see that VLAN 150 is used to carry the GENEVE-encapsulated traffic.
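As a side note, you can verify the TEPs directly on a transport node; a minimal sketch (NSX-T typically creates the TEP VMkernel interfaces as vmk10/vmk11 on a dedicated "vxlan" netstack, but the exact names may differ in your environment):

    # list all VMkernel interfaces on the host; with the "Load Balance Source"
    # Uplink Profile above you should see two TEP vmks on the "vxlan" netstack
    esxcli network ip interface list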
However, the N-VDS can also be used for VLAN-based segments. VLAN-based segments are very similar to vDS port groups. In deployments where your hosts have only two pNICs and both pNICs are used for the N-VDS (for redundancy reasons), you have to use VLAN-based segments to carry the VMkernel interfaces (e.g. mgmt, vMotion or vSAN). When your VLAN-based segments carry VMkernel interface traffic and you use an Uplink Profile as shown above, it is difficult to figure out on which pNIC the VMkernel traffic is carried, as this traffic follows the default teaming policy, in our case "Load Balance Source". Please note that VLAN-based segments are not limited to VMkernel traffic; such segments can also carry regular virtual machine traffic.
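If you want to see which pNIC a given VMkernel port is currently hashed to, esxtop's network view is one of the quickest ways; a small sketch:

    # start esxtop on the host and press "n" for the network panel;
    # the TEAM-PNIC column shows the physical NIC (vmnic2 or vmnic3)
    # currently carrying each VMkernel port under "Load Balance Source"
    esxtop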
There are often good reasons to steer traffic for a predictable traffic flow behavior; for example, you may want to transport Management and vMotion VMkernel traffic on pNIC_A and vSAN on pNIC_B under normal conditions (all physical links up). The two top reasons are:
1.) Predict the traffic forwarding pattern under normal conditions (all links are up) and align, for example, the VMkernel traffic with the active First Hop Redundancy Protocol gateway (e.g. HSRP)
2.) Reduce ISL traffic between the two ToR switches, or ToR-to-spine traffic, for high-load traffic (e.g. vSAN or vMotion), along with predictable and low-latency traffic forwarding (assume, for example, you have 20 hosts in a single rack and all hosts use the left ToR switch for vSAN; in such a situation the ISL is not carrying vSAN traffic)
This is where NSX-T "VLAN Pinning" comes into play. The term "VLAN Pinning" is referred to as "Named Teaming Policy" in the public NSX-T documentation; personally, I like the term "VLAN Pinning". In this blog's lab exercise, I would like to show you how to configure "VLAN Pinning". The physical lab setup looks like the diagram below:
Only host NY-ESX72A is relevant for this exercise. Host NY-ESX72A is attached to two Top of Rack (ToR) Layer 3 switches, called NY-CAT3750G-A and NY-CAT3750G-B. As you can see, this host has four pNICs (vmnic0...3), but only the pNICs vmnic2 and vmnic3 assigned to the N-VDS are relevant for this lab exercise. On host NY-ESX72A, I have created three additional "artificial/dummy" VMkernel interfaces (vmk3, vmk4, vmk5). Each of the three VMkernel interfaces is assigned to a dedicated NSX-T VLAN-based segment. The diagram below shows the three VMkernel interfaces, all attached to a dedicated VLAN-based segment owned by the N-VDS (NY-NVDS), along with the MAC address of vmk3 as an example.
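These interfaces and their MAC addresses can also be listed from the host itself; a minimal sketch (the output columns vary slightly between builds):

    # list all VMkernel NICs with their port, IP and MAC address
    esxcfg-vmknic -l
    # esxcli equivalent
    esxcli network ip interface list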
The simplified logical setup is shown below:
From the NSX-T perspective, we have actually configured three VLAN-based segments. These VLAN-based segments are created with the new policy UI/API.
The policy UI/API, available since NSX-T 2.4.0, is the preferred interface for the majority of NSX-T deployments. The "legacy" UI/API is still available and is visible in the UI under the tab "Advanced Networking & Security".
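For reference, such a VLAN-based segment could also be created through the policy API; a hedged sketch (the segment ID "vmk3-segment", the VLAN ID 151 and the transport zone UUID are made-up placeholders for this lab):

    # create (or update) a VLAN-based segment via the NSX-T policy API
    curl -k -u admin -X PATCH \
      "https://<nsx-manager>/policy/api/v1/infra/segments/vmk3-segment" \
      -H "Content-Type: application/json" \
      -d '{
            "display_name": "vmk3-segment",
            "vlan_ids": ["151"],
            "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<vlan-tz-id>"
          }'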
As already mentioned, the three VLAN-based segments use the default teaming policy (Load Balance Source), so the VMkernel traffic is distributed over the two pNICs (vmnic2 or vmnic3). Hence, we typically cannot predict which of the ToR switches will learn the MAC address of each of the three individual VMkernel interfaces. Before we move forward and configure "VLAN Pinning", let's see how the traffic of the three VMkernel interfaces is distributed. One of the easiest ways is to check the MAC address table of the two ToR switches for interface Gi1/0/10.
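On the Catalyst 3750G switches this is a single show command; a trimmed sketch from NY-CAT3750G-A (the segment VLAN ID 151 is a placeholder, the MAC address is the one of vmk3):

    NY-CAT3750G-A#show mac address-table interface Gi1/0/10
              Mac Address Table
    -------------------------------------------
    Vlan    Mac Address       Type        Ports
    ----    -----------       --------    -----
     151    0050.5663.f4eb    DYNAMIC     Gi1/0/10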
As you can see, NY-CAT3750G-A learns only the MAC address of vmk3 (0050.5663.f4eb), whereas NY-CAT3750G-B learns the MAC addresses of vmk4 (0050.5667.50eb) and vmk5 (0050.566d.410d). With the default teaming option "Load Balance Source", the administrator has no option to steer this traffic. Please ignore the two MAC addresses learned on VLAN 150; these are TEP MAC addresses.
Before we configure VLAN Pinning, let's assume we would like vmk3 and vmk4 to be learned on NY-CAT3750G-A and vmk5 on NY-CAT3750G-B (when all links are up). We would like to use two new "Named Teaming Policies" with failover. The traffic flows should look like the diagram below (a dotted line means "standby link").
The first step is to create two additional "Named Teaming Policies". Please compare this diagram with the very first diagram above, and please make sure you use exactly the same uplink names (ActiveUplink1 and ActiveUplink2) as in the default teaming policy.
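In the manager API these named policies are part of the Uplink Profile (UplinkHostSwitchProfile); the fragment below is a sketch of how the two policies could look (the names "Failover-Uplink1" and "Failover-Uplink2" are example names I chose for this lab):

    # relevant fragment of GET /api/v1/host-switch-profiles/<profile-id>
    "named_teamings": [
      { "name": "Failover-Uplink1", "policy": "FAILOVER_ORDER",
        "active_list":  [ { "uplink_name": "ActiveUplink1", "uplink_type": "PNIC" } ],
        "standby_list": [ { "uplink_name": "ActiveUplink2", "uplink_type": "PNIC" } ] },
      { "name": "Failover-Uplink2", "policy": "FAILOVER_ORDER",
        "active_list":  [ { "uplink_name": "ActiveUplink2", "uplink_type": "PNIC" } ],
        "standby_list": [ { "uplink_name": "ActiveUplink1", "uplink_type": "PNIC" } ] }
    ]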
The second step is to make these two new "Named Teaming Policies" available on the associated VLAN transport zone (TZ).
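Via the manager API this corresponds to the "uplink_teaming_policy_names" field of the transport zone; a sketch (note that a PUT requires the full existing object, including the current "_revision" value, so fetch it first):

    # fetch the current VLAN transport zone object first
    curl -k -u admin "https://<nsx-manager>/api/v1/transport-zones/<vlan-tz-id>"
    # then PUT the object back with the named teaming policies added, e.g.
    #   "uplink_teaming_policy_names": [ "Failover-Uplink1", "Failover-Uplink2" ]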
The third and last step is to edit the three VLAN-based segments according to your traffic steering policy. As you can see, we unfortunately need to edit the VLAN-based segments in the "legacy" "Advanced Networking & Security" UI section. We plan to make this editing option available in the new policy UI/API in one of the future NSX-T releases.
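On the API side this maps to the "uplink_teaming_policy_name" field of the underlying logical switch in the manager ("legacy") API; a sketch:

    # fetch the logical switch behind the VLAN-based segment (for "_revision")
    curl -k -u admin "https://<nsx-manager>/api/v1/logical-switches/<ls-id>"
    # then PUT the object back with the desired named teaming policy, e.g.
    #   "uplink_teaming_policy_name": "Failover-Uplink1"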
As soon as you edit the VLAN-based segments with the new "Named Teaming Policies", the ToR switches will immediately learn the MAC addresses on the associated physical interfaces.
After applying "VLAN Pinning" through the two new "Named Teaming Policies", the two ToR switches learn the MAC addresses in the following way:
As you can see, NY-CAT3750G-A is now learning the MAC addresses of vmk3 and vmk4, whereas NY-CAT3750G-B is learning only the MAC address of vmk5.
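The same quick check as before confirms it; a trimmed sketch from NY-CAT3750G-A (again, the segment VLAN IDs 151 and 152 are placeholders; the MAC addresses are those of vmk3 and vmk4):

    NY-CAT3750G-A#show mac address-table interface Gi1/0/10
              Mac Address Table
    -------------------------------------------
    Vlan    Mac Address       Type        Ports
    ----    -----------       --------    -----
     151    0050.5663.f4eb    DYNAMIC     Gi1/0/10
     152    0050.5667.50eb    DYNAMIC     Gi1/0/10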
I hope you had a little bit of fun reading this NSX-T VLAN Pinning write-up.
Software Inventory:
vSphere version: 6.5.0, build 13635690
vCenter version: 6.5.0, build 10964411
NSX-T version: 2.4.1.0.0.13716575
Blog history
Version 1.0 - 19.08.2019 - first published version