
VMware TAM Source 12.04




FROM THE EDITORS VIRTUAL DESK
Hi everyone, welcome to the latest VMware TAM newsletter. This week I would like to introduce you to a new blog from our friends at VMware on VMware. From this week we will feature posts and other information courtesy of the VoV team. To kick this off, we have the latest report - "Increasing Business Agility Through Digital Transformation: VMware IT Performance Annual Report 2019" - as well as the corresponding CIO article. We are sure you will find this information very valuable, and of course we are here to answer any of your questions. The regular VoV blog is also presented below as part of the VMware News section, so please take a look.

We also feature two upcoming webinars: one from the TAM program on "Operationalizing Hyperconverged Infrastructure" and one from Support on "Skyline Day 2", so be sure to check those out.

Don't forget to check the latest KB articles, and keep your TAMs, SAMs and account teams in the loop with what you are doing so they are able to better support you in your transformation journey.

Virtually Yours
VMware TAM Team

Newsletter | SignUp | Archive | Twitter | FB | YT | LI

-
TAM @ VMWARE
TAM CUSTOMER WEBINAR
March 2020 – Operationalizing Hyperconverged Infrastructure. Ensuring long-term customer success with vSAN-powered HCI Solutions
Date: Thursday, March 12th
Time: 11:00am EST/ 10:00am CST/ 8:00am PST
Duration: 1.5 Hour
Synopsis:
What are the approaches we can take to support HCI based solutions (Native vSAN, VxRail & VCF) while ensuring platform stability and safeguarding data?
In this session, Paul McSharry will discuss theory, processes, & gotchas to help prepare for production HCI
Guest speaker:
Paul McSharry – vSAN Customer Success Architect
Registration Link:
https://vmware.zoom.us/webinar/register/WN_uhcz7bHGT_CORoO34hJBsQ

SUPPORT@VMWARE
Skyline "Day 2" Webcast Feb 27, 2020

Now that you have installed Skyline proactive support technology, what's next? Join an informative webcast on Feb 27, 2020 at 7am PST where we'll be featuring "Day 2" content to answer the question, "I've installed Skyline -- now what?"
During this live webcast, the Skyline team will show you how to get maximum value from Skyline proactive support so you can increase the reliability, security and productivity of your environments. Using Skyline helps you prevent issues so you don't have to call VMware Tech Support. You will learn how best to leverage Skyline features, as well as best practices and tips and tricks for getting the most out of Skyline.
Register: https://vmwarelearningzone.vmware.com/oltpublish/site/openlearn.do?dispatch=previewLesson&id=411b5d07-4cbc-11ea-9f48-0cc47adeb5f8&playlistId=8c8c15e5-4970-11ea-9f48-0cc47adeb5f8

NEWS AND DEVELOPMENTS FROM VMWARE

Open Source Blog

Network Virtualization Blog

vSphere Blog

Cloud management Blog

Cloud Native Blog

EUC Blog

Cloud Foundation Blog

VMware on VMware Blog

EXTERNAL NEWS FROM 3RD PARTY BLOGGERS

Virtually Ghetto

ESX Virtualization

Cormac Hogan

Scott's Weblog

vSphere-land

NTPRO.NL

Virten.net

vEducate.co.uk

vSwitchZero

vNinja

VMExplorer

KB Articles

DISCLAIMER
While I do my best to publish unbiased information specifically related to VMware solutions, there is always the possibility that unrelated, competitive or potentially conflicting blog posts may creep into the newsletter. I apologize in advance if any such content causes offence, and I do my best to ensure this does not happen. Please get in touch if you feel any inappropriate material has been published. All information in this newsletter is copyright of the original author. If you are an author and no longer wish your content to be used in this newsletter, please get in touch.

© 2019 VMware Inc. All rights reserved.

VMware TAM Source 12.05




FROM THE EDITORS VIRTUAL DESK
Hi everyone, this week I wanted to give you information on another exciting new Kubernetes-related solution from VMware, in the form of the next version of VMware Fusion for Mac. As many of you know, Fusion has been a leading type 2 virtualization platform on the Mac for a very long time. As the Mac has progressed and the types of applications we consume have changed, Fusion has remained a leader in this space. The next version of Fusion has just been announced and is available to download and try as a Tech Preview called 'Project Nautilus'. Adding full support for containers on the desktop, via an easy-to-use command line built directly into Fusion, means developers can create apps for many different platforms and with complex technical requirements - for example, requiring suitable databases - that are virtualized via Fusion alongside the accompanying containers. This allows them to easily extend their desktop development workflow in a single product. The tech preview is a great way to get to know this new technology. Head over to the blog here for more information.

This week we have a full newsletter with Webinars, new KB articles and of course many interesting articles from the world of virtualization so please enjoy.

Virtually Yours
VMware TAM Team

Newsletter | SignUp | Archive | Twitter | FB | YT | LI
-
TAM@VMWARE
TAM LAB

The VMware TAM Lab videos are a series run out of the TAM program. They are very informative and are not presentations, but rather live sessions of TAMs building and working on VMware technology in their labs. There are many more on the TAM Lab YouTube page.

TAM CUSTOMER WEBINAR
March 2020 – Operationalizing Hyperconverged Infrastructure. Ensuring long-term customer success with vSAN-powered HCI Solutions
Date: Thursday, March 12th
Time: 11:00am EST/ 10:00am CST/ 8:00am PST
Duration: 1.5 Hour
Synopsis:
What are the approaches we can take to support HCI based solutions (Native vSAN, VxRail & VCF) while ensuring platform stability and safeguarding data?
In this session, Paul McSharry will discuss theory, processes, & gotchas to help prepare for production HCI
Guest speaker:
Paul McSharry – vSAN Customer Success Architect
Registration Link:
https://vmware.zoom.us/webinar/register/WN_uhcz7bHGT_CORoO34hJBsQ

KUBERNETES@VMWARE
VMware Fusion - Project Nautilus
Project Nautilus enables Fusion to run OCI compliant containers on the Mac in a different way than folks might be used to. Our initial release can run containers, but as we grow we're working towards being able to declare full Kubernetes clusters on the desktop. By leveraging innovations we're making in Project Pacific, and a bevy of incredible open source projects such as runC, containerd, CRI-O, Kubernetes and more, we're aiming to make containers first-class citizens, in both Fusion and Workstation, right beside virtual machines. The user experience is currently command-line oriented: we've introduced a new tool, vctl, for controlling containers and the necessary system services in VMware Fusion and Workstation.

VMWARE on VMWARE
Topic: How VMware IT Manages Large-Scale Mac Deployments with Workspace ONE
Description: Get an inside look at how VMware IT manages its Macs using Workspace ONE and how it transitioned to modern management. In this session, we will share our journey of Mac management in an enterprise environment, including how VMware transformed its Mac estate, from certificate management to application delivery. After successfully enrolling more than 15,000 Macs, VMware now easily manages policies, certificates and application deployments, resulting in fewer requests for IT help. We will also discuss our challenges and lessons learned during the implementation, as well as the benefits achieved.
Date: Wednesday, March 4, 2020 from 9 – 10 a.m. PT
Register

Topic: Envisioning the Future of the Workplace
Description: Cast out your antiquated notions of what an office setting should look like and find out how VMware IT uses our Workplace X initiative to re-envision the traditional workplace. This industry-leading program enables IT to deliver highly personalized (and secure) experiences that maximize intelligence. We'll showcase how next-generation authentication (voice, fingerprint, near-field communication (NFC), face ID) and advancements in mixed reality hardware and software let colleagues work together seamlessly around the world, where barriers like distance and language are not an issue.
Date: Thursday, March 19, 2020 from 9 – 10 a.m. PT
Register
-
NEWS AND DEVELOPMENTS FROM VMWARE
Open Source Blog

Network Virtualization Blog

vSphere Blog

Cloud management Blog

Cloud Native Blog

EUC Blog

Cloud Foundation Blog

VMware on VMware Blog

EXTERNAL NEWS FROM 3RD PARTY BLOGGERS

Virtually Ghetto

ESX Virtualization

Cormac Hogan

Scott's Weblog

vSphere-land

NTPRO.NL

Virten.net

vEducate.co.uk

vSwitchZero

vNinja

VMExplorer

KB Articles

© 2019 VMware Inc. All rights reserved.

 

DISCLAIMER
While I do my best to publish unbiased information specifically related to VMware solutions, there is always the possibility that unrelated, competitive or potentially conflicting blog posts may creep into the newsletter. I apologize in advance if any such content causes offence, and I do my best to ensure this does not happen. Please get in touch if you feel any inappropriate material has been published. All information in this newsletter is copyright of the original author. If you are an author and no longer wish your content to be used in this newsletter, please get in touch.

VMware TAM Source 12.06




FROM THE EDITORS VIRTUAL DESK
Hi everyone, today VMware held a virtual event to announce vSphere 7, Tanzu and more specifically focusing on Application Modernization. You can watch the replay of the event here! I wanted to ensure no delay in getting this and other new product announcements to you as a special edition, so please read on for more info, and please reach out to your VMware representative for any further info on these and other announcements.

Virtually Yours
VMware TAM Team

Newsletter | SignUp | Archive | Twitter | FB | YT | LI
-
APPLICATION MODERNIZATION AND OTHER NEW SOLUTION ANNOUNCEMENTS
Become a Modern Software Organization with VMware Tanzu
With VMware, you now can modernize the applications that matter most and automate the path to production. You also can modernize your infrastructure by establishing a unified operating model for virtual machines and containers within your private cloud. Moreover, by taking full advantage of Kubernetes, VMware can help you extend that consistent operating model across clouds.

Introducing vSphere 7: Essential Services for the Modern Hybrid Cloud
vSphere 7 is the biggest release of vSphere in over a decade. It delivers these innovations and the rearchitecting of vSphere with native Kubernetes that we introduced at VMworld 2019 as Project Pacific.

vSphere 7 with Kubernetes
What does the future of computing look like? This video is an introduction to vSphere with Kubernetes, a revolutionary advancement that will help IT administrators and developers run complex, modern applications as easily as they run their virtualized environments today.

Overview of vSphere 7
vSphere 7 has many hundreds of new and improved features and we will take a stroll through the big areas, from lifecycle all the way through to security features like Identity Federation.

What’s New in VMware Cloud Foundation 4
Today, VMware announced VMware Cloud Foundation™ 4 during the App Modernization in a Multi-Cloud World online launch event.  VMware Cloud Foundation 4 brings together the latest innovations in VMware vSphere 7, VMware vSAN 7, VMware NSX-T, and VMware vRealize Suite 2019, along with new capabilities from VMware Tanzu to support Kubernetes, cloud native architectures and app transformation in your business.

Announcing vSAN 7
We’re excited to announce the latest release of the industry-leading hyperconverged infrastructure software, VMware vSAN 7. vSAN, with its rich ecosystem of partners, has evolved as the platform of choice for private and public clouds. This article discusses how vSAN 7 accelerates modernizing the data center with newer features and enhancements.

What’s New in SRM and vSphere Replication 8.3
SRM and vSphere Replication have been available since 2008 and 2012 respectively. They both started as good products and have only gotten better over time. I'm still surprised and impressed by our engineering team’s ability to add significant new capabilities while at the same time keeping the core functionality solid. This post will cover what's new in both products at a high level with more detailed posts diving into features in more detail over the next couple weeks.

What’s New in vRealize Operations 8.1
Today we are very excited to announce the upcoming release of VMware vRealize Operations 8.1, which is part of today’s announcement about VMware Cloud Foundation 4.0 and VMware vSphere 7.0 with Kubernetes, to support modernizing infrastructure and applications. vRealize Operations 8.1 will deliver new and enhanced capabilities for self-driving operations to help customers optimize, plan and scale VMware Cloud, whether on-premises private cloud or VMware SDDC in multiple public clouds such as VMware Cloud on AWS, while at the same time unifying multi-cloud monitoring. Powered by artificial intelligence (AI), this release will provide a unified operations platform, delivering continuous performance optimization, efficient capacity and cost management, proactive planning, app-aware intelligent remediation and integrated compliance.

Announcing VMware vRealize Automation 8.1
Today we are announcing VMware vRealize Automation 8.1, the latest release of VMware’s industry-leading, modern infrastructure automation platform. This release delivers new and enhanced capabilities to enable IT/Cloud admins, DevOps admins, and SREs to further accelerate their on-going datacenter infrastructure modernization and cloud migration initiatives, focused on the following key use cases: self-service hybrid cloud, multicloud automation, infrastructure DevOps, and Kubernetes infrastructure automation. In addition, vRealize Automation 8.1 supports the latest release of VMware Cloud Foundation 4.0 to enable self-service automation and infrastructure DevOps for VMware Cloud-based private and hybrid clouds, as well as integration with vSphere 7.0 with Kubernetes to automate Kubernetes supervisor cluster and namespace management.

vExpert Blog Posts: VMware Announces vSphere 7 with Kubernetes, Tanzu, vSAN 7, VCF 4 and vRA 8.1
Today we announced vSphere 7 with Kubernetes, Tanzu, vSAN 7, VCF 4 and vRA 8.1. The vExpert community has some excellent insight into what these releases are about and how they can help your business. The articles below are written by VMware vExperts, who are among the best IT professionals in virtualization.
-
NEWS AND DEVELOPMENTS FROM VMWARE

Open Source Blog

Network Virtualization Blog

vSphere Blog

Cloud management Blog

Cloud Native Blog

EUC Blog

Cloud Foundation Blog

VMware on VMware Blog

EXTERNAL NEWS FROM 3RD PARTY BLOGGERS

Virtually Ghetto

ESX Virtualization

Cormac Hogan

Scott's Weblog

vSphere-land

NTPRO.NL

Virten.net

vEducate.co.uk

vSwitchZero

vNinja

VMExplorer

KB Articles

 

 

DISCLAIMER
While I do my best to publish unbiased information specifically related to VMware solutions, there is always the possibility that unrelated, competitive or potentially conflicting blog posts may creep into the newsletter. I apologize in advance if any such content causes offence, and I do my best to ensure this does not happen. Please get in touch if you feel any inappropriate material has been published. All information in this newsletter is copyright of the original author. If you are an author and no longer wish your content to be used in this newsletter, please get in touch.

© 2020 VMware Inc. All rights reserved.

Virtual News 12.07


FROM THE EDITORS "VIRTUAL NEWS" DESK
Hi everyone, before we begin today's newsletter and cover some of the changes, I wanted to share a great way that you can help researchers find treatments for diseases such as Coronavirus, Cancer, Alzheimer's, and more via the folding@home project.

- A Force for Good: VMware Appliance for Folding@Home Blog Post
- VMware Appliance for Folding@Home

This Fling is a vSphere Appliance that contains the Folding@Home client software. Upon deploying the VMware Appliance for Folding@Home, the user will be prompted to enter information to configure the Folding@Home software. Once the appliance is deployed, the Folding@Home client is running and ready for Working Units. The Fling is also pre-configured to allow remote management of the Folding@Home client. For more information on the Folding@Home Project and how we can be a Force for Good against diseases like the Coronavirus, visit the website www.foldingathome.org.
-
Our newsletter, which has been around in various forms for the past 11+ years, has a new name: Virtual News. So will we be adding or changing anything? Cloud Native will be given a larger focus in each edition. We will still cover all of the standard topics, but from this edition onwards there will be an extra section specifically focused on Cloud Native Applications and the related items that flow from this technology. I hope you find this useful; we will start slowly, but I suspect that our news in this area will grow exponentially over the next few editions.

I hope that this newsletter finds everyone well during this time, and that I can continue to provide some quality reading for you on a regular basis. Please feel free to reach out anytime to our new email address vmwnews@vmware.com.

Virtually Yours

Newsletter | SignUp | Archive | Twitter | FB | YT | LI

-
CLOUD NATIVE APPS @ VMWARE
Managing Kubernetes at enterprise scale: A closer look at Tanzu Mission Control
As Kubernetes continues to mature—rounding the corner toward its 6th birthday—we’ve started to see a shift in terms of the challenges our customers need to solve. Initially, Kubernetes installation was complex. As multiple solutions for installation and lifecycle management sprang up, companies seeking to adopt Kubernetes had to figure out the right approach. With the open source community standardizing on technologies like Cluster API for installation and declarative lifecycle management of multiple clusters, we’re now seeing a path toward consistency in this respect across clouds....[continue reading!]

FEATURED
Kubernetes without Microservices
First, let's see what Kubernetes brings to the table: The biggest benefit of running Kubernetes as the basis of your infrastructure is removing "snowflakes". In a non-Kubernetes infrastructure, servers or VM turn into snowflakes: they are all the same but also different. Servers that run your MySQL are different from those running your Redis and those running your Java API are different from those running your frontend Node.js web application. There are ways to make those snowflakes easier to manage for sure: Chef, Puppet, Ansible, and all other configuration management systems are parts of that toolset, but they just make building snowflakes easier, they don't turn them into identical servers.
Kubernetes, on the other hand, takes in your servers and turns them into uniform "nodes". You can take a node out and replace it with another one while the workload sitting on top of them can be your API, frontend or database, completely oblivious to the type of node they are running on. This has huge implications on many aspects of your operations: It makes applying security patches easier and more standard, leading your infrastructure to be more secure. It also increases your high availability by allowing you to swap faulty nodes with new ones or resizing small ones with bigger ones without the downtime and little impact.
Another major benefit of using Kubernetes is that it mostly forces any infrastructure change to be performed through configuration files that enforce and standardize the documentation of infrastrcuture. On top of that, using Kubernetes means running applications in containers which in turn means using files like Dockerfile to define the application's operational requirements.
None of these great benefits above have anything to do with Microservices or exclusively benefit large organizations.
They work to benefit both small and large teams with or without DevOps personnel. They don't require breaking up monoliths into Microservices and having to deal with the challenges that it brings about.
Microservices, like any other technology, can be beneficial for certain circumstances, depending on your requirements. Kubernetes is also the same: not all teams need to run on a Kubernetes cluster. But coupling these two together limits the scope of their usefulness and reduces the size of the applicable audience they can benefit enormously.

PODCAST
- Don’t Break the Bank, Run IT and Change IT
Introducing a new podcast from VMware for curious minds in the financial services industry.
Working in IT in the banking sector, it is easy to focus on keeping the lights on. In this new podcast series, host Matthew O'Neill, Financial Services Industry Managing Director in the Office of the CTO at VMware, along with fellow VMware colleagues, explores the challenges and different facets of life in banking that keep CIOs and their teams awake at night. The goal is to transform the way banks and other financial service providers work, and to provide the hose to put out the fires!

- VMware Community Podcast #504 - vSphere 7 Core Features w/ Bob Plankers

WEBINARS
- VMware on VMware Webinar: Save the Date

Our next VMware on VMware webinar "How VMware IT Solved Load Balancer Challenges Using Avi Networks" will be held April 28 from 9-10am PST. Planning, implementation, and lessons learned will be covered. Register here.

- Join us on March 24 for a free NSX-T vHoL live webcast
Would you like to get more immersive with using VMware NSX-T v2.4 without leaving home?  Please join our NSX-T experts on March 24 at 10:00am Pacific time for a free virtual hands-on-labs telecast.
This is a 90-minute session that will go beyond the surface of NSX-T.   In this hands-on lab environment, we demonstrate the logical switching, logical routing, and distributed firewall capabilities that NSX-T brings to your fingertips. Then, take the Hands-on-Labs Odyssey challenge to test your skills and abilities as a network architect. You’ll compete against other lab attendees to complete a set of NSX-T tasks and reach the top of the leaderboard. 
Showcase your knowledge, sharpen your skills, and share your achievements as you tackle each task.  By the end of the session, you will feel more comfortable in unleashing the capabilities of NSX-T v2.4.
Learn more and register for this exciting learning opportunity.

Empowering your Remote Workforce with VMware Solutions – Brian Madden
March 2020 – Empowering your Remote Workforce with VMware Solutions
Date: Thursday, April 9th
Time: 11:00am EST/ 10:00am CST/ 8:00am PST
Duration: 1.5 Hour
Synopsis:
Brian Madden will discuss how VMware EUC solutions like Workspace ONE and Horizon can enable users to work from home, or anywhere else they need. This is something that everyone knows in theory, but now with increased demand for remote working, we’ll dig into the specific scenarios with concrete steps that can be taken today. Topics discussed will include:
- How real is stretching an on-prem Horizon environment to the cloud?
- How quickly can you stand up a new Horizon cloud-based environment, and what’s really required?
- Is it possible to manage BYO / employee-owned laptops from anywhere, possibly negating the need for VDI or RDS?
- How real are VMware’s Zero Trust capabilities which ensure you can trust workers’ devices regardless of where they are or who owns them?
Guest speaker:
Our presenter will be Brian Madden, from VMware’s EUC Office of the CTO. Brian has been focusing on EUC for 25 years. Prior to joining VMware in 2018, he was the creator of BrianMadden.com and the BriForum conference, as well as author of thousands of blog posts and six books.
Registration Link:
https://VMware.zoom.us/webinar/register/WN_F0PMszqmQTymWWR65p7TWw

EDUCATION
Earn your 2020 VMware Certifications Online, Get Free Exam Vouchers

VMware Education Services is pleased to offer a special "Certification on Demand" promotion that will help you earn your desired VCP 2020 designation. It's easy to participate: enroll in any 2020 certification-eligible On Demand course and receive two complimentary exam vouchers valued at $375. With more prerequisite courses for certification available On Demand, you can access our best-in-class instructional courses - all without leaving your desk. Virtual Hands-on Labs are also available as an optional module.
VMware Certification paths for the year 2020 are now available across these functional disciplines:

  • Data Center Virtualization
  • Network Virtualization and Security
  • Cloud Management and Automation
  • Desktop and Mobility
  • Digital Workspace

You will receive both exam vouchers shortly after you complete your course registration. Act now because this offer expires on May 1, 2020.  Applies only to customers based in the U.S., Canada, Caribbean, Mexico, Central and Latin America. Learn more by sending us an e-mail or contacting your designated Americas Learning Specialist by scrolling down this page.

NEWS AND DEVELOPMENTS FROM VMWARE

Open Source Blog

Network Virtualization Blog

vSphere Blog

Cloud management Blog

Cloud Native Blog

EUC Blog

Cloud Foundation Blog

VMware on VMware Blog

EXTERNAL NEWS FROM 3RD PARTY BLOGGERS

Virtually Ghetto

ESX Virtualization

Cormac Hogan

Scott's Weblog

vSphere-land

NTPRO.NL

Virten.net

vEducate.co.uk

vSwitchZero

vNinja

VMExplorer

KB Articles


DISCLAIMER
While I do my best to publish unbiased information specifically related to VMware solutions, there is always the possibility that unrelated, competitive or potentially conflicting blog posts may creep into the newsletter. I apologize in advance if any such content causes offence, and I do my best to ensure this does not happen. Please get in touch if you feel any inappropriate material has been published. All information in this newsletter is copyright of the original author. If you are an author and no longer wish your content to be used in this newsletter, please get in touch.

Network Readiness - Part 1: Physical Network MTU


Dear readers

Welcome to a new series of blogs talking about network readiness. As you might already be aware, NSX-T mainly requires two things from the physical underlay network:

  • IP Connectivity – IP connectivity between all NSX-T components and compute hosts. This includes the Geneve Tunnel Endpoint (TEP) interfaces as well as the management interfaces (typically vmk0) on the hosts and on the NSX-T Edge nodes - both bare metal and virtual NSX-T Edge nodes.
  • Jumbo Frame Support – The minimum required MTU is 1600 bytes; however, an MTU of 1700 bytes is recommended to cover the full variety of functions and to future-proof the environment for an expanding Geneve header. To get the most out of your VMware SDDC, your physical underlay network should support an MTU of at least 9000 bytes.

This blog focuses on MTU readiness for NSX-T. Beyond the VMkernel interface used for the Geneve overlay encapsulation, there are other VMkernel interfaces, such as vSAN or vMotion, that also perform better with a higher MTU, so we keep this MTU discussion more general. Physical network vendors, like Cisco with the Nexus data center switch family, typically support an MTU of 9216 bytes; other vendors may have a similar upper limit.
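To put these numbers into context, here is a rough worked example of the Geneve encapsulation overhead for a standard 1500-byte guest frame (a sketch; the exact Geneve option length varies with the NSX-T features in use):

Geneve Overhead Estimation (approximate)

  Inner Ethernet frame (1500 bytes payload + 14 bytes Ethernet header) : 1514 bytes
  + Geneve base header                                                 :    8 bytes
  + variable Geneve options (NSX-T metadata)                           :    n bytes
  + outer UDP header                                                   :    8 bytes
  + outer IPv4 header                                                  :   20 bytes
  = outer IP packet size                                               : 1550 bytes + options

This is why 1600 bytes is the minimum, 1700 bytes leaves headroom for an expanding Geneve option space, and an underlay MTU of 9000+ bytes additionally allows the guests and the other VMkernel services to use jumbo frames themselves.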

 

This blog is about the correct MTU configuration and its verification within a data center spine-leaf architecture with Nexus 3K switches running NX-OS. Let's have a look at a very basic and simple lab spine-leaf topology with only three Nexus N3K-C3048TP-1GE switches:

[Figure: Lab spine-leaf topology]

Out of the box, the Nexus 3048 switches are configured with an MTU of 1500 bytes only. For an MTU of 9216 bytes we need to configure three pieces:

  • Layer 3 Interface MTU Configuration – This type of interface is used between the Leaf-10 and the Borderspine-12 switch and between the Leaf-11 and the Borderspine-12 switch. On this interface we run OSPF to announce the Loopback0 interface for the iBGP peering connectivity. As an example, the Layer 3 interface MTU configuration on interface e1/49 of Leaf-10 is shown below:
Nexus 3048 Layer 3 Interface MTU Configuration

NY-N3K-LEAF-10# show run inter e1/49
---snip---
interface Ethernet1/49
  description **L3 to NY-N3K-BORDERSPINE-12**
  no switchport
  mtu 9216
  no ip redirects
  ip address 172.16.3.18/30
  ip ospf network point-to-point
  no ip ospf passive-interface
  ip router ospf 1 area 0.0.0.0
NY-N3K-LEAF-10#

 

  • Layer 3 Switch Virtual Interface (SVI) MTU Configuration – This type of interface is required, for example, to establish IP connectivity between the Leaf-10 and Leaf-11 switches when the interfaces between the Leaf switches are configured as Layer 2 interfaces. We are using a dedicated SVI for VLAN 3 for the OSPF neighborship and the iBGP peering connectivity between Leaf-10 and Leaf-11. In this lab topology the interfaces e1/51 and e1/52 are configured as a dot1q trunk to carry multiple VLANs (including VLAN 3), and these two interfaces are combined into a port channel running LACP for redundancy reasons. As an example, the MTU configuration of the SVI for VLAN 3 on Leaf-10 is shown below:
Nexus 3048 Switch Virtual Interface (SVI) MTU Configuration

NY-N3K-LEAF-10# show run inter vlan 3
---snip---
interface Vlan3
  description *iBGP-OSPF-Peering*
  no shutdown
  mtu 9216
  no ip redirects
  ip address 172.16.3.1/30
  ip ospf network point-to-point
  no ip ospf passive-interface
  ip router ospf 1 area 0.0.0.0
NY-N3K-LEAF-10#

 

  • Global Layer 2 Interface MTU Configuration – This global configuration is required for this type of Nexus switch and a few other Nexus models (please see footnote 1 for more details). The Nexus 3000 does not support individual Layer 2 interface MTU configuration; the MTU for Layer 2 interfaces must be configured via a network-qos policy. All interfaces configured as access or trunk ports for host connectivity, as well as the dot1q trunk between the Leaf switches (e1/51 and e1/52), require the network-qos configuration shown below:
Nexus 3048 Global MTU QoS Policy Configuration

NY-N3K-LEAF-10# show run
---snip---
policy-map type network-qos POLICY-MAP-JUMBO
  class type network-qos class-default
   mtu 9216
system qos
  service-policy type network-qos POLICY-MAP-JUMBO
NY-N3K-LEAF-10#

 

The network-qos global MTU configuration can be verified with the command shown below:

Nexus 3048 Global MTU QoS Policy Verification

NY-N3K-LEAF-10# show queuing interface ethernet 1/51-52 | include MTU
HW MTU of Ethernet1/51 : 9216 bytes
HW MTU of Ethernet1/52 : 9216 bytes
NY-N3K-LEAF-10#

 

The verification of the end-to-end MTU of 9216 bytes within the physical network should typically be done before you attach your first ESXi hypervisor hosts. Please keep in mind that the vSphere Distributed Switch (vDS) and the NSX-T N-VDS (e.g. the uplink profile MTU configuration) today support up to 9000 bytes. This MTU includes the overhead for the Geneve encapsulation. As you can see in the output below from an ESXi host, the MTU is set to the maximum of 9000 bytes for the VMkernel interfaces used for Geneve (unfortunately still labelled vxlan) as well as for vMotion and IP storage.

ESXi Host MTU VMkernel Interface Verification

[root@NY-ESX50A:~] esxcfg-vmknic -l
Interface  Port Group/DVPort/Opaque Network        IP Family IP Address      Netmask         Broadcast       MAC Address       MTU     TSO MSS   Enabled Type     NetStack
vmk0       2                                       IPv4      172.16.50.10    255.255.255.0   172.16.50.255   b4:b5:2f:64:f9:48 1500    65535     true    STATIC   defaultTcpipStack
vmk2       17                                      IPv4      172.16.52.10    255.255.255.0   172.16.52.255   00:50:56:63:4c:85 9000    65535     true    STATIC   defaultTcpipStack
vmk10      10                                      IPv4      172.16.150.12   255.255.255.0   172.16.150.255  00:50:56:67:d5:b4 9000    65535     true    STATIC   vxlan
vmk50      910dba45-2f63-40aa-9ce5-85c51a138a7d    IPv4      169.254.1.1     255.255.0.0     169.254.255.255 00:50:56:69:68:74 1500    65535     true    STATIC   hyperbus
vmk1       8                                       IPv4      172.16.51.10    255.255.255.0   172.16.51.255   00:50:56:6c:7c:f9 9000    65535     true    STATIC   vmotion
[root@NY-ESX50A:~]

 

For the verification of the end-to-end MTU between two ESXi hosts, I still highly recommend sending VMkernel pings with the don't-fragment bit set (e.g. vmkping ++netstack=vxlan -d -c 3 -s 8972 -I vmk10 172.16.150.13).
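The -s 8972 value in this vmkping example is not arbitrary: it is the ICMP payload size that exactly fills a 9000-byte IP packet. A quick sanity check of the arithmetic:

VMkernel Ping Payload Size Calculation

  VMkernel interface MTU             : 9000 bytes
  - IPv4 header                      :   20 bytes
  - ICMP header                      :    8 bytes
  = maximum ICMP payload (-s value)  : 8972 bytes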

 

But for a serious end-to-end MTU 9216 verification of the physical network we need to look for another tool than the VMkernel ping. In my case I am simply using BGP running on the Nexus 3048 switches. BGP runs on top of TCP, and TCP supports the "Maximum Segment Size" option to maximize the size of the TCP datagrams.

 

The TCP Maximum Segment Size (MSS) is a parameter in the options field of the TCP header that specifies the largest amount of data, in bytes, that the receiver is willing to accept. This information is exchanged as part of the SYN packets of the TCP three-way handshake, as the diagram below from a Wireshark sniffer trace shows.

[Figure: Wireshark trace - TCP three-way handshake negotiating an MSS of 9176 on an MTU 9216 link]

The TCP MSS defines the maximum amount of data that an IPv4 endpoint is willing to accept in a single TCP/IPv4 datagram. RFC 879 explicitly mentions that the MSS counts only data octets in the segment; it does not count the TCP header or the IP header. In the Wireshark trace example, the two IPv4 endpoints (Loopbacks 172.16.3.10 and 172.16.3.12) have accepted an MSS of 9176 bytes on a physical Layer 3 link with MTU 9216 during the TCP three-way handshake. The difference of 40 bytes comes from the default TCP header of 20 bytes and the IP header of again 20 bytes.

Please keep in mind that a small MSS value will reduce or eliminate IP fragmentation for any TCP-based application, but will result in higher overhead. This is also true for BGP messages.

BGP update messages carry all the BGP prefixes as part of the Network Layer Reachability Information (NLRI) path attribute. For optimal BGP performance in a spine-leaf architecture running BGP, it is advisable to set the MSS for BGP to the maximum value that avoids fragmentation. As defined in RFC 879, all IPv4 endpoints are required to handle an MSS of 536 bytes (= MTU of 576 bytes minus 20 bytes TCP header** minus 20 bytes IP header).

But are these Nexus switches using an MSS of only 536 bytes? Nope!

These Nexus 3048 switches running NX-OS 7.0(3)I7(6) are by default configured to discover the maximum path MTU between the two IPv4 endpoints by leveraging the Path MTU Discovery (PMTUD) feature. Other Nexus switches may require the global command "ip tcp path-mtu-discovery" to be configured to enable PMTUD.

 

MSS is sometimes mistaken for PMTUD. MSS is a concept used by TCP in the transport layer and it specifies the largest amount of data that a computer or communications device can receive in a single TCP segment, while PMTUD is used to discover the largest packet size that can be sent over a path without suffering fragmentation.

 

But how can we verify the MSS used for the BGP peering sessions between the Nexus 3048 switches?

Nexus 3048 switches running NX-OS software allow the administrator to check the MSS of the BGP TCP session with the following command: show sockets connection tcp detail.

Below we see two BGP TCP sessions between the IPv4 endpoints (switch Loopback interfaces), and each of the sessions shows an MSS of 9164 bytes.

BGP TCP Session Maximum Segment Size Verification

NY-N3K-LEAF-10# show sockets connection tcp local 172.16.3.10 detail
---snip---
Kernel Socket Connection:
State      Recv-Q Send-Q        Local Address:Port          Peer Address:Port

ESTAB      0      0               172.16.3.10:24415          172.16.3.11:179    ino:78187 sk:ffff88011f352700
     skmem:(r0,rb262144,t0,tb262144,f0,w0,o0) ts sack cubic wscale:2,2 rto:210 rtt:12.916/14.166 ato:40 mss:9164 cwnd:10 send 56.8Mbps rcv_space:18352

ESTAB      0      0               172.16.3.10:45719          172.16.3.12:179    ino:79218 sk:ffff880115de6800
     skmem:(r0,rb262144,t0,tb262144,f0,w0,o0) ts sack cubic wscale:2,2 rto:203.333 rtt:3.333/1.666 ato:40 mss:9164 cwnd:10 send 220.0Mbps rcv_space:18352

NY-N3K-LEAF-10#

Please always reset the BGP session when you change the MTU, as the MSS is only exchanged during the initial TCP three-way handshake.

 

The MSS value of 9164 bytes confirms that the underlay physical network is ready with an end-to-end MTU of 9216 bytes. But why is the MSS value of the BGP session (9164) 12 bytes smaller than the TCP MSS value (9176) negotiated during the TCP three-way handshake?

Again, in many TCP/IP stack implementations we see an MSS of 1460 bytes with an interface MTU of 1500 bytes, and an MSS of 9176 bytes for an interface MTU of 9216 bytes (a 40-byte difference), but there are other factors that can change this. For example, if both sides support RFC 1323/7323 (enhanced timestamps, window scaling, PAWS***), this adds 12 bytes to the TCP header, reducing the payload to 1448 bytes and 9164 bytes respectively.

And indeed, the Nexus NX-OS TCP/IP stack used for BGP supports the TCP enhanced timestamps option by default and leverages the PMTUD (RFC 1191) feature to handle the 12 bytes of extra room, and hence reduces the maximum payload (the payload in our case being BGP) to an MSS of 9164 bytes.
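Putting the numbers together, the MSS values seen in this lab can be reconstructed as follows (assuming default 20-byte IPv4 and TCP headers plus the 12-byte TCP timestamps option):

MSS Calculation for the BGP Sessions

  Physical link MTU                               : 9216 bytes
  - IPv4 header                                   :   20 bytes
  - TCP header (without options)                  :   20 bytes
  = MSS advertised in the TCP three-way handshake : 9176 bytes
  - TCP timestamps option (RFC 1323/7323)         :   12 bytes
  = effective MSS used by the BGP session         : 9164 bytes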

 

The diagram below from a Wireshark sniffer trace confirms the extra 12 bytes used for the TCP timestamps option.

[Figure: Wireshark trace - 12-byte TCP timestamps option]

I hope you had a little bit of fun reading this small Network Readiness write-up.

 

Footnote 1: Configure and Verify Maximum Transmission Unit on Cisco Nexus Platforms - Cisco

** A 20-byte TCP header is only correct when default TCP header options are used; RFC 1323 - TCP Extensions for High Performance (replaced by RFC 7323 - TCP Extensions for High Performance) defines TCP extensions which require up to 12 additional bytes.

*** PAWS = Protect Against Wrapped Sequences

 

Software Inventory:

vSphere version: VMware ESXi, 6.5.0, 15256549

vCenter version: 6.5.0, 10964411

NSX-T version: 2.5.1.0.0.15314288 (GA)

Cisco Nexus 3048 NX-OS version: 7.0(3)I7(6)

 

Blog history:

Version 1.0 - 23.03.2020 - first published version

VMware vSphere Distributed Switch Design & Configuration Tutorial (Video Series)


Upgrade vCenter Server 6.7 to 7.0

Single NSX-T Edge Node N-VDS with correct VLAN pinning


Dear readers

Welcome to a new blog talking about a specific NSX-T Edge Node VM deployment with only a single Edge Node N-VDS. You may have seen the 2019 VMworld session "Next-Generation Reference Design with NSX-T: Part 1" (CNET2061BU or CNET2061BE) from Nimish Desai. On one of his slides he mentions how we can deploy a single NSX-T Edge Node N-VDS instead of three Edge Node N-VDS. This new approach (available since NSX-T 2.5 for the Edge Node VM) with a single Edge Node N-VDS has the following advantages:

  • Multiple TEPs to load balance overlay traffic for different overlay segments
  • The same NSX-T Edge Node N-VDS design for VM-based and bare metal Edge Nodes (with 2 pNICs)
  • Only two Transport Zones (Overlay & VLAN) assigned to a single N-VDS

The diagram below shows the slide with a single Edge Node N-VDS from the VMware session (CNET2061BU):

[Slide: Edge support with Multi-TEP for the Edge Node VM (Nimish Desai, CNET2061BU)]

However, the single NSX-T Edge Node N-VDS design comes with an additional requirement and a recommendation:

  • vDS trunk port group configuration to carry multiple VLANs (requirement)
  • VLAN pinning for deterministic North/South flows (recommendation)

This blog talks mainly about the second bullet point and how we can achieve the correct VLAN pinning. Correct VLAN pinning requires multiple individual configuration steps at different levels, for example the vDS trunk port group teaming or the N-VDS named teaming policy configuration. The goal behind this VLAN pinning is a deterministic end-to-end path.

When configured correctly, the BGP sessions are forced to be aligned with the data forwarding path, and hence the MAC addresses of the Tier-0 Gateway Layer 3 interfaces (LIFs) are only learnt on the expected ToR/Leaf switch trunk interfaces.

 

In this blog the NSX-T Edge Node VMs are deployed on ESXi hosts which are NOT prepared for NSX-T. The two ESXi hosts belong to a single vSphere cluster used exclusively for NSX-T Edge Node VMs. There are a few good reasons NOT to prepare ESXi hosts with NSX-T when they host only NSX-T Edge Node VMs:

  • It is not required and does not cost you extra licenses
  • Better NSX-T upgradability (you don't need to evacuate the NSX-T Edge Node VM with vMotion to enter maintenance mode during a host NSX-T software upgrade; every vMotion of an NSX-T Edge Node VM causes a short, unnecessary data plane glitch)
  • Shorter NSX-T upgrade cycles (for every NSX-T upgrade you only need to upgrade the ESXi hosts used for the payload VMs and the NSX-T Edge Node VMs themselves, but not the ESXi hosts where you have your Edge Nodes deployed)
  • vSphere HA can be turned off (do we want to move a highly loaded packet forwarding node with vMotion in a host vSphere HA event? No, I don't think so - the routing HA model is much quicker)
  • Simplified DRS settings (do we want to move an NSX-T Edge Node with vMotion to balance resources?)
  • Typically a resource pool is not required

We should never underestimate how important smooth upgrade cycles are. Upgrade cycles are time-consuming events and are typically required multiple times per year.

Leaving the ESXi hosts NOT prepared for NSX-T is considered best practice and should always be used in any NSX-T deployment which can afford a dedicated vSphere cluster just for NSX-T Edge Node VMs. Installing NSX-T on the ESXi hosts where you have deployed your NSX-T Edge Node VMs (a collapsed design) is appropriate for customers which have a low number of ESXi hosts, to keep the CAPEX costs low.

 

The diagram below shows the lab test bed of a single ESXi host with a single Edge Node appliance which has only a single N-VDS. The relevant configuration steps are marked 1 to 4.

[Figure: NSX-T Edge topology (lab test bed)]

 

The NSX-T Edge Node VM is configured with two transport zones. The same overlay transport zone is used for the compute ESXi hosts where I host the payload VMs. Both transport zones are assigned to a single N-VDS, called NY-HOST-NVDS. The name of the N-VDS might confuse you a little bit, but the same NY-HOST-NVDS is used for all compute ESXi hosts prepared with NSX-T; this indicates that only a single N-VDS is required, regardless of whether it is an Edge Node or a compute ESXi host. However, you might select a different name for the N-VDS.


The single N-VDS (NY-HOST-NVDS) on the Edge Node is configured with an Uplink Profile (please see more details below) with two static TEP IP addresses, which allows us to load balance the Geneve encapsulated overlay traffic for North/South. Both Edge Node FastPath interfaces (fp-eth0 & fp-eth1) are mapped to a labelled Active Uplink name as part of the default teaming policy.

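As a quick summary, the relevant Edge Transport Node settings in this lab look roughly as follows (a sketch based on the description above; the exact fp-eth to uplink-label mapping is an assumption, and the TEP VLAN and node names are taken from later sections of this blog):

Edge Transport Node N-VDS Configuration Summary (sketch)

  Transport Zones   : Overlay TZ and VLAN TZ, both mapped to the same N-VDS
  N-VDS Name        : NY-HOST-NVDS
  Uplink Profile    : single profile with two Active Uplinks (EDGE-UPLINK1, EDGE-UPLINK2)
  TEP IP Assignment : two static TEP IP addresses in the Edge TEP VLAN 151
  fp-eth0           : mapped to EDGE-UPLINK1 (assumed)
  fp-eth1           : mapped to EDGE-UPLINK2 (assumed)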

There are 4 areas where we need to take care of the correct settings.

<1> - At the physical ToR/Leaf Switch Level

The trunk ports will allow only the required VLANs

  • VLAN 60 - NSX-T Edge Node management interface
  • VLAN 151 - Geneve TEP VLAN
  • VLAN 160 - Northbound Uplink VLAN for NY-N3K-LEAF-10
  • VLAN 161 - Northbound Uplink VLAN for NY-N3K-LEAF-11

The resulting interface configuration, along with the relevant BGP configuration, is shown in the table below. Please note that for redundancy reasons both Northbound Uplink VLANs 160 and 161 are allowed in the trunk configuration. Under normal conditions, NY-N3K-LEAF-10 will learn only MAC addresses from VLANs 60, 151 and 160, and NY-N3K-LEAF-11 will learn only MAC addresses from VLANs 60, 151 and 161.

Table 1 - Nexus ToR/LEAF Switch Configuration

NY-N3K-LEAF-10 Interface Configuration:

interface Ethernet1/2
  description *NY-ESX50A-VMNIC2*
  switchport mode trunk
  switchport trunk allowed vlan 60,151,160-161
  spanning-tree port type edge trunk

interface Ethernet1/4
  description *NY-ESX51A-VMNIC2*
  switchport mode trunk
  switchport trunk allowed vlan 60,151,160-161
  spanning-tree port type edge trunk

router bgp 64512
  router-id 172.16.3.10
  log-neighbor-changes
  ---snip---
  neighbor 172.16.160.20 remote-as 64513
    update-source Vlan160
    timers 4 12
    address-family ipv4 unicast
  neighbor 172.16.160.21 remote-as 64513
    update-source Vlan160
    timers 4 12
    address-family ipv4 unicast

NY-N3K-LEAF-11 Interface Configuration:

interface Ethernet1/2
  description *NY-ESX50A-VMNIC3*
  switchport mode trunk
  switchport trunk allowed vlan 60,151,160-161
  spanning-tree port type edge trunk

interface Ethernet1/4
  description *NY-ESX51A-VMNIC3*
  switchport mode trunk
  switchport trunk allowed vlan 60,151,160-161
  spanning-tree port type edge trunk

router bgp 64512
  router-id 172.16.3.11
  log-neighbor-changes
  ---snip---
  neighbor 172.16.161.20 remote-as 64513
    update-source Vlan161
    timers 4 12
    address-family ipv4 unicast
  neighbor 172.16.161.21 remote-as 64513
    update-source Vlan161
    timers 4 12
    address-family ipv4 unicast

As part of the Cisco Nexus 3048 BGP configuration we see that only NY-N3K-LEAF-10 terminates the BGP session on VLAN 160 and only NY-N3K-LEAF-11 terminates the BGP session on VLAN 161.

 

<2> - At the vDS Port Group Level

The vDS is configured with a total of four vDS port groups:

  • Port Group (Type VLAN): NY-VDS-PG-ESX5x-NSXT-EDGE-MGMT60: carries only VLAN 60 and has an active/standby teaming policy
  • Port Group (Type VLAN): NY-vDS-PG-ESX5x-EDGE2-Dummy999: this dummy port group is used for the remaining unused Edge Node FastPath interface (fp-eth2) to avoid NSX-T reporting the admin status as down
  • Port Group (Type VLAN trunking): NY-vDS-PG-ESX5x-EDGE2-EDGE-TrunkA: carries the Edge Node TEP VLAN 151 and Uplink VLAN 160
  • Port Group (Type VLAN trunking): NY-vDS-PG-ESX5x-EDGE2-EDGE-TrunkB: carries the Edge Node TEP VLAN 151 and Uplink VLAN 161

The two trunk port groups have only one vDS-Uplink active; the other vDS-Uplink is set to standby. This is required so that the Uplink VLAN traffic, along with the BGP session, can only be forwarded on the specific vDS-Uplink (each vDS-Uplink is mapped to the corresponding pNIC) under normal conditions. With these settings we achieve the following:

  • Failover order gets deterministic
  • Symmetric Bandwidth for both overlay and North/South traffic
  • The BGP sessions between the Tier-0 Gateway and the ToR/Leaf switches should stay UP even when one or both physical links between the ToR/Leaf switches and the ESXi hosts go down (the BGP session is then carried over the trunk link between the ToR/Leaf switches).

 

The table below highlights the relevant VLAN and Teaming settings:

Table 2 - vDS Port Group Configuration

NY-vDS-PG-ESX5x-EDGE2-EDGE-TrunkA Configuration / NY-vDS-PG-ESX5x-EDGE2-EDGE-TrunkB Configuration
(Screenshots of the VLAN trunking and teaming/failover settings for the two trunk port groups.)
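In essence, the two trunk port groups are configured as follows (a sketch based on the description above; the exact active/standby vDS-Uplink assignment per trunk is an assumption consistent with the intended VLAN pinning):

NY-vDS-PG-ESX5x-EDGE2-EDGE-TrunkA (sketch)
  VLAN type        : VLAN trunking
  VLAN trunk range : 151, 160
  Teaming          : Active = vDS-Uplink towards NY-N3K-LEAF-10, Standby = vDS-Uplink towards NY-N3K-LEAF-11

NY-vDS-PG-ESX5x-EDGE2-EDGE-TrunkB (sketch)
  VLAN type        : VLAN trunking
  VLAN trunk range : 151, 161
  Teaming          : Active = vDS-Uplink towards NY-N3K-LEAF-11, Standby = vDS-Uplink towards NY-N3K-LEAF-10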

 

<3> - At the NSX-T Uplink Profile Level

The NSX-T Uplink Profile is a global construct that defines how traffic leaves a Transport Node or an Edge Transport Node.

The single Uplink Profile used for the two Edge Node FastPath interfaces (fp-eth0 & fp-eth1) needs to be extended with two additional Named Teaming Policies to steer the North/South uplink traffic to the corresponding ToR/Leaf switch.

  • The default teaming policy needs to be configured as source port ID with the two Active Uplinks (I am using the labels EDGE-UPLINK1 & EDGE-UPLINK2)
  • An additional teaming policy called NY-Named-Teaming-N3K-LEAF-10 is configured as a failover teaming policy with a single Active Uplink (label EDGE-UPLINK1)
  • An additional teaming policy called NY-Named-Teaming-N3K-LEAF-11 is configured as a failover teaming policy with a single Active Uplink (label EDGE-UPLINK2)

Please note, the Active Uplink labels for the default and the additional Named Teaming Policies need to be the same.

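A sketch of the resulting Uplink Profile teamings, based on the bullets above (the teaming policy option names may be worded slightly differently in the NSX-T UI):

NSX-T Edge Uplink Profile Teamings (sketch)

  [Default Teaming]               Policy: Load Balance Source (source port ID)   Active Uplinks: EDGE-UPLINK1, EDGE-UPLINK2
  NY-Named-Teaming-N3K-LEAF-10    Policy: Failover Order                         Active Uplink : EDGE-UPLINK1
  NY-Named-Teaming-N3K-LEAF-11    Policy: Failover Order                         Active Uplink : EDGE-UPLINK2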

 

<4> - At the NSX-T Uplink VLAN Segment Level

To activate the previously configured Named Teaming Policies for the specific Tier-0 VLAN segments 160 and 161, we first need to assign the Named Teaming Policies to the VLAN transport zone.


The last step is to configure each of the two Uplink VLAN segments (160 & 161) with the corresponding Named Teaming Policy. NSX-T 2.5.1 requires the VLAN Segment to be configured with the Named Teaming Policy in the "legacy" Advanced Networking & Security UI; the recently released NSX-T 3.0 supports this in the Policy UI.

Table 3 - NSX-T VLAN Segment Configuration

VLAN Segment NY-T0-EDGE-UPLINK-SEGMENT-160 / VLAN Segment NY-T0-EDGE-UPLINK-SEGMENT-161
(Screenshots of the two Uplink VLAN segments and their assigned Named Teaming Policies.)
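In essence, each Uplink VLAN segment is configured as follows (a sketch based on the text above):

NY-T0-EDGE-UPLINK-SEGMENT-160 : VLAN 160, Uplink Teaming Policy = NY-Named-Teaming-N3K-LEAF-10
NY-T0-EDGE-UPLINK-SEGMENT-161 : VLAN 161, Uplink Teaming Policy = NY-Named-Teaming-N3K-LEAF-11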

 

Verification

The resulting topology with both NSX-T Edge Nodes and the previously shown steps is shown below. It shows how the Tier-0 VLAN Segments 160 and 161 are "routed" through the different levels, from the Tier-0 Gateway towards the Nexus Leaf switches via the vDS trunk port groups.

[Figure: NSX-T Edge topology with pinned VLANs]

The best option to verify that all your settings are correct is to validate on which ToR/Leaf trunk port you learn the MAC addresses of the Tier-0 Gateway Layer 3 interfaces. These Layer 3 interfaces belong to the Tier-0 Service Router (SR). You can get the MAC addresses via the CLI.

Table 4 - NSX-T Tier-0 Layer 3 Interface Configuration

ny-edge-transport-node-20(tier0_sr)> get interfaces

Interface: 2f83fda5-0da5-4764-87ea-63c0989bf059
  Ifuid: 276
  Name: NY-T0-LIF160-EDGE-20
  Internal name: uplink-276
  Mode: lif
  IP/Mask: 172.16.160.20/24
  MAC: 00:50:56:97:51:65
  LS port: 40102113-c8af-4d4e-a94d-ca44f9efe9a5
  Urpf-mode: STRICT_MODE
  DAD-mode: LOOSE
  RA-mode: SLAAC_DNS_TRHOUGH_RA(M=0, O=0)
  Admin: up
  Op_state: up
  MTU: 9000

Interface: a1f0d5d0-3883-4e04-b985-e391ec1d9711
  Ifuid: 281
  Name: NY-T0-LIF161-EDGE-20
  Internal name: uplink-281
  Mode: lif
  IP/Mask: 172.16.161.20/24
  MAC: 00:50:56:97:a7:33
  LS port: d180ee9a-8e82-4c59-8195-ea65660ea71a
  Urpf-mode: STRICT_MODE
  DAD-mode: LOOSE
  RA-mode: SLAAC_DNS_TRHOUGH_RA(M=0, O=0)
  Admin: up
  Op_state: up
  MTU: 9000

ny-edge-transport-node-21(tier0_sr)> get interfaces

Interface: a3d7669a-e81c-43ea-81c0-dd60438284bc
  Ifuid: 289
  Name: NY-T0-LIF160-EDGE-21
  Internal name: uplink-289
  Mode: lif
  IP/Mask: 172.16.160.21/24
  MAC: 00:50:56:97:84:c3
  LS port: 045cd486-d8c5-4df5-8784-2e49862771f4
  Urpf-mode: STRICT_MODE
  DAD-mode: LOOSE
  RA-mode: SLAAC_DNS_TRHOUGH_RA(M=0, O=0)
  Admin: up
  Op_state: up
  MTU: 9000

Interface: 2de46a54-3dba-4ddc-abe7-5b713260e7d4
  Ifuid: 296
  Name: NY-T0-LIF161-EDGE-21
  Internal name: uplink-296
  Mode: lif
  IP/Mask: 172.16.161.21/24
  MAC: 00:50:56:97:ec:1b
  LS port: c32e2109-32d0-4c0f-a916-bfba01fdd6ac
  Urpf-mode: STRICT_MODE
  DAD-mode: LOOSE
  RA-mode: SLAAC_DNS_TRHOUGH_RA(M=0, O=0)
  Admin: up
  Op_state: up
  MTU: 9000

 

The MAC address table shows that ToR/Leaf switch NY-N3K-LEAF-10 learns the Tier-0 Layer 3 MAC addresses of VLAN 160 locally and those of VLAN 161 via Port-channel 1 (Po1).

The MAC address table also shows that ToR/Leaf switch NY-N3K-LEAF-11 learns the Tier-0 Layer 3 MAC addresses of VLAN 161 locally and those of VLAN 160 via Port-channel 1 (Po1).

Table 5 - ToR/Leaf Switch MAC Address Table for Northbound Uplink VLAN 160 and 161

ToR/Leaf Switch NY-N3K-LEAF-10
ToR/Leaf Switch NY-N3K-LEAF-11

NY-N3K-LEAF-10# show mac address-table dynamic vlan 160

Legend:

        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC

        age - seconds since last seen,+ - primary entry using vPC Peer-Link,

        (T) - True, (F) - False, C - ControlPlane MAC, ~ - vsan

   VLAN     MAC Address      Type      age     Secure NTFY Ports

---------+-----------------+--------+---------+------+----+------------------

*  160     0050.5697.5165   dynamic  0         F      F    Eth1/2

*  160     0050.5697.84c3   dynamic  0         F      F    Eth1/4

NY-N3K-LEAF-11# show mac address-table dynamic vlan 160

Legend:

        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC

        age - seconds since last seen,+ - primary entry using vPC Peer-Link,

        (T) - True, (F) - False, C - ControlPlane MAC, ~ - vsan

   VLAN     MAC Address      Type      age     Secure NTFY Ports

---------+-----------------+--------+---------+------+----+------------------

*  160     0050.5697.5165   dynamic  0         F      F    Po1

*  160     0050.5697.84c3   dynamic  0         F      F    Po1

*  160     780c.f049.0c81   dynamic  0         F      F    Po1

NY-N3K-LEAF-10# show mac address-table dynamic vlan 161

Legend:

        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC

        age - seconds since last seen,+ - primary entry using vPC Peer-Link,

        (T) - True, (F) - False, C - ControlPlane MAC, ~ - vsan

   VLAN     MAC Address      Type      age     Secure NTFY Ports

---------+-----------------+--------+---------+------+----+------------------

*  161     0050.5697.a733   dynamic  0         F      F    Po1

*  161     0050.5697.ec1b   dynamic  0         F      F    Po1

*  161     502f.a8a8.717c   dynamic  0         F      F    Po1

NY-N3K-LEAF-11# show mac address-table dynamic vlan 161

Legend:

        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC

        age - seconds since last seen,+ - primary entry using vPC Peer-Link,

        (T) - True, (F) - False, C - ControlPlane MAC, ~ - vsan

   VLAN     MAC Address      Type      age     Secure NTFY Ports

---------+-----------------+--------+---------+------+----+------------------

*  161     0050.5697.a733   dynamic  0         F      F    Eth1/2

*  161     0050.5697.ec1b   dynamic  0         F      F    Eth1/4

*  161     780c.f049.0c81   dynamic  0         F      F    Po1

 

As we have seen in the Edge Transport Node configuration, each Edge Node has two TEP IP addresses statically configured, and both FastPath interfaces load balance the Geneve encapsulated overlay traffic. Table 7 shows the MAC addresses learned on the Edge Node TEP VLAN 151, and Table 8 lists the Edge Node TEP MAC addresses so that they can be verified against it.

Table 7 - ToR/Leaf Switch MAC Address Table for Edge Node TEP VLAN 151

ToR/Leaf Switch NY-N3K-LEAF-10 / ToR/Leaf Switch NY-N3K-LEAF-11

NY-N3K-LEAF-10# show mac address-table dynamic vlan 151

Legend:

        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC

        age - seconds since last seen,+ - primary entry using vPC Peer-Link,

        (T) - True, (F) - False, C - ControlPlane MAC, ~ - vsan

   VLAN     MAC Address      Type      age     Secure NTFY Ports

---------+-----------------+--------+---------+------+----+------------------

*  151     0050.5697.5165   dynamic  0         F      F    Eth1/2

*  151     0050.5697.84c3   dynamic  0         F      F    Eth1/4

*  151     0050.5697.a733   dynamic  0         F      F    Po1

*  151     0050.5697.ec1b   dynamic  0         F      F    Po1

*  151     502f.a8a8.717c   dynamic  0         F      F    Po1

NY-N3K-LEAF-11# show mac address-table dynamic vlan 151

Legend:

        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC

        age - seconds since last seen,+ - primary entry using vPC Peer-Link,

        (T) - True, (F) - False, C - ControlPlane MAC, ~ - vsan

   VLAN     MAC Address      Type      age     Secure NTFY Ports

---------+-----------------+--------+---------+------+----+------------------

*  151     0000.0c9f.f097   dynamic  0         F      F    Po1

*  151     0050.5697.5165   dynamic  0         F      F    Po1

*  151     0050.5697.84c3   dynamic  0         F      F    Po1

*  151     0050.5697.a733   dynamic  0         F      F    Eth1/2

*  151     0050.5697.ec1b   dynamic  0         F      F    Eth1/4

*  151     780c.f049.0c81   dynamic  0         F      F    Po1

 

Table 8 - NSX-T Edge Node TEP MAC Addresses

ny-edge-transport-node-20>
ny-edge-transport-node-21>

ny-edge-transport-node-20> get interface fp-eth0 | find MAC

  MAC address: 00:50:56:97:51:65

 

ny-edge-transport-node-20> get interface fp-eth1 | find MAC

  MAC address: 00:50:56:97:a7:33

ny-edge-transport-node-21> get interface fp-eth0 | find MAC

  MAC address: 00:50:56:97:84:c3

 

ny-edge-transport-node-21> get interface fp-eth1 | find MAC

MAC address: 00:50:56:97:ec:1b

 

For the sake of completeness, the table below shows that only ToR/Leaf Switch NY-N3K-LEAF-10 learns the two Edge Node management MAC addresses from VLAN 60 locally; ToR/Leaf Switch NY-N3K-LEAF-11 learns them only via Port-Channel 1 (Po1). This is expected, because the teaming policy is configured as active/standby at the vDS port group level. The Edge Node N-VDS is not relevant for the Edge Node management interface.

Table 9 - ToR/Leaf Switch MAC Address Table for Edge Node Management VLAN 60

ToR/Leaf Switch NY-N3K-LEAF-10
ToR/Leaf Switch NY-N3K-LEAF-11

NY-N3K-LEAF-10# show mac address-table dynamic vlan 60

Legend:

        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC

        age - seconds since last seen,+ - primary entry using vPC Peer-Link,

        (T) - True, (F) - False, C - ControlPlane MAC, ~ - vsan

   VLAN     MAC Address      Type      age     Secure NTFY Ports

---------+-----------------+--------+---------+------+----+------------------

*   60     0050.5697.1e49   dynamic  0         F      F    Eth1/4

*   60     0050.5697.4555   dynamic  0         F      F    Eth1/2

*   60     502f.a8a8.717c   dynamic  0         F      F    Po1

NY-N3K-LEAF-11# show mac address-table dynamic vlan 60

Legend:

        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC

        age - seconds since last seen,+ - primary entry using vPC Peer-Link,

        (T) - True, (F) - False, C - ControlPlane MAC, ~ - vsan

   VLAN     MAC Address      Type      age     Secure NTFY Ports

---------+-----------------+--------+---------+------+----+------------------

*   60     0000.0c9f.f03c   dynamic  0         F      F    Po1

*   60     0050.5697.1e49   dynamic  0         F      F    Po1

*   60     0050.5697.4555   dynamic  0         F      F    Po1

 

Please note that I always highly recommend running a few failover tests to confirm that the NSX-T Edge Node deployment works as expected.
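As a minimal sketch of such a test (assuming a maintenance window, and using the port names from the tables above), one could shut a single Edge uplink on the ToR switch and watch the MAC table converge; the exact failover behaviour to expect depends on the uplink and teaming design described earlier in this blog:

NY-N3K-LEAF-10# configure terminal

NY-N3K-LEAF-10(config)# interface Ethernet1/2

NY-N3K-LEAF-10(config-if)# shutdown

NY-N3K-LEAF-10(config-if)# end

NY-N3K-LEAF-10# show mac address-table dynamic vlan 151

(verify where the affected Edge Node TEP MAC is now learned and that overlay traffic still flows, then restore the port with "no shutdown")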

 

I hope you had a little bit of fun reading this blog about a single N-VDS on the Edge Node with VLAN pinning.

 

Software Inventory:

vSphere version: VMware ESXi, 6.5.0, 15256549

vCenter version: 6.5.0, 10964411

NSX-T version: 2.5.1.0.0.15314288 (GA)

Cisco Nexus 3048 NX-OS version: 7.0(3)I7(6)

 

 

Blog history

Version 1.0 - 13.04.2020 - first published version


vSphere 7 on an old DellEMC Poweredge T710 #Part1


As vSphere 7 hit the road, I decided to try to reuse my ten-year-old homelab hardware ... and it worked. Let's have a look at the installation.

 

From a learning perspective, it is worth writing down your findings when using repurposed hardware. Do not use hardware with expired maintenance support or software with expired interoperability support in production; if you run into hardware- or software-related issues, you are on your own.

In my definition of an IT homelab, I should be able to explain it to 1) family members, 2) people interested in backend IT but without hands-on practice or accreditation yet, 3) other IT pros, and 4) use it to pave my way to becoming a business economist. This blog entry belongs to category 3.

 

Here's the homelab hardware list:

  • DellEMC PowerEdge T710: BIOS 6.6.0, 2 x Intel Xeon CPU E5645 @ 2.4 GHz, 48 GB memory, local SSD Samsung 840 and 850, local SAS disk
  • 8-Port Switch Cisco SG200-08
  • Dell Vostro 1700 Laptop with Windows 10 1909 Home and with Internet Access

 

According to the server compatibility matrix for the PowerEdge T710 model, the hardware had compatibility support up to vSphere 6.5 U3. The T710 has nevertheless been capable of running the latest DellEMC ESXi 6.7 release so far.

 

 

According to the release notes, compared with the processors supported by vSphere 6.7, vSphere 7.0 no longer supports the following processors:

  • Intel Family 6, Model = 2C (Westmere-EP)
  • Intel Family 6, Model = 2F (Westmere-EX)

 

The processor ID of the T710 is shown as part of the server compatibility matrix.

 

 

The CPUID Series Detail shows CPUID Info 6.2C. Hence, the T710's Intel Xeon E5645 is not supported for a vSphere 7.0 installation.

 

For all homelabs running on old hardware, ESXi 7.0 offers a boot option called allowLegacyCPU=true. The blogger William Lam describes in his blog how this option can be used to bypass the CPU check in ESXi 7.0. It works for an Intel Xeon E5645 as well.

Without the CPU bypass the installation stops with the following CPU_SUPPORT ERROR:

 

 

The option allowLegacyCPU=true still produces a warning but allows the installation to complete. This is fine for the moment.
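For reference, a minimal sketch of how the option is typically supplied during an interactive installation (as described in William Lam's write-up): at the ESXi installer boot screen, press Shift+O to edit the boot options, append the parameter to the existing boot line and press Enter:

allowLegacyCPU=true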

 

 

The unsupported devices warning relates to the PERC H700 adapter, which isn't supported anymore.

 

 

For a nested lab without the boot option, it is necessary to know the CPU make, model, and features that are exposed to the virtual hardware of a vESXi 7.0 VM.

The processor version information is returned in the CPUID register EAX. The value 0x000206c2, which forms part of the processor ID, can be determined on an ESXi hypervisor using the CLI utility smbiosDump.
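As a rough sketch (the exact section names in the smbiosDump output vary between hardware generations, so treat the grep pattern as an assumption), the processor information can be located from the ESXi shell like this; the CPU section should then show an ID field containing the raw value, 0x000206c2 in this case:

smbiosDump | grep -i -A 8 "processor"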

 

In vSphere you can modify the CPUID of the virtual hardware. Let's pretend the Dell hardware were instead a T630 with an Intel Xeon E5-2620 processor. According to the vSphere 7 release notes, the CPUID Intel Family 6, Model = 2D (Sandy Bridge EP, GA 2012) is still supported. The CPUID would change from 0x000206c2 to 0x000206d7.

To specify the CPUID of the virtual hardware by adding a cpuid.1.eax entry to the virtual machine's .vmx file, the hex value must be expressed in binary form:

 

cpuid.1.eax = " 0010 0000 0110 1101 0111"

 

We must keep in mind that exposing an Intel Xeon E5-2620 CPUID to a guest OS does not by itself prevent the guest from trying to use hardware features of that specific CPUID. As a mitigation, for example, avoiding the AVX (Advanced Vector Extensions) instruction set is a must, because AVX isn't available on the Intel Xeon E5645. This is especially important if we want to make use of vMotion. To get an idea, have a look at blogger and VMware Communities entries like here, here or here.

 

All in all, ESXi 7.0 on the old Dell T710 hardware is up and running. A bare-metal installation with ESXi 6.7 U3 can host a vESXi 7.0 virtual machine.

 

And a bare-metal installation with ESXi 7.0 works as well.

 

The VMFS datastores on the local SATA-attached SSD disks did not show up after the installation. It was necessary to remount them; I came across this when analyzing the VMFS volumes.

 

 

To remount the datastores, type esxcfg-volume -M {UUID}
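A minimal sketch of the full workflow (the UUID shown below is a made-up placeholder; use the value reported by the list command):

esxcfg-volume -l

(lists VMFS volumes detected as snapshots, including their UUID and label)

esxcfg-volume -M 5e8f1a2b-c3d4e5f6-7890-0050569751aa

(-M mounts the volume persistently across reboots; lowercase -m would mount it only until the next reboot)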

 

Now the vmfs snapshots are gone.

 

 

At first sight it seems that the old DellEMC T710 hardware can still be useful for a nested vSphere homelab. So far, the findings in comparison to vSphere 6.7 U3 are:

  • The Intel Xeon CPU E5645 isn't supported anymore. There are some options to bypass the CPU check during installation to get ESXi 7.0 running.
  • The Dell PERC H700 storage adapter isn't supported anymore.
  • SSD disks on a SATA controller with existing VMFS5 datastores from an ESXi 6.7 installation had to be remounted manually.
  • There is a remarkable decrease in idle CPU usage and a slight decrease in idle memory usage.
  • The UI shows an information alert to check https://kb.vmware.com/s/article/55636 before running any VM.

Error - Could not download from depot at Zip


If you encounter the below error while installing VIBs:

 

>>Could not download from depot at Zip:........ Error while installing zip vib on esxi host .

 

If you get this error message, use the full path of the VIB with the datastore UUID:

d1.jpg

 

 

To install, use: esxcli software vib install -d "full path to the zip bundle"

To update the VIB: esxcli software vib update -d "full path to the zip bundle"
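For example, with an offline bundle uploaded to a datastore, the command could look like the sketch below (the path is an illustrative placeholder; for a single .vib file, -v is used instead of -d):

esxcli software vib install -d "/vmfs/volumes/datastore1/drivers/net-driver-offline-bundle.zip"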

Error while Applying Host Profile with third party VIBs


So recently I was trying to apply a host profile to a newly built ESXi host, and the extracted profile contained some third-party VIBs which needed to be installed on the new host. We are using Cisco Nexus 1000V (VSM) switches, and when the profile tried to install the Nexus 1000V driver on the new host, I got the error below:

 

 

On checking the logs, I found that the ESXi host acceptance level was not set to "PartnerSupported". Since I was trying to install a third-party VIB, I set the acceptance level as below and it worked.

#esxcli software acceptance set --level=PartnerSupported

 

There are 4 acceptance levels:

 

VMwareCertified

VMwareAccepted 

PartnerSupported 

CommunitySupported
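To double-check the current level before and after the change, esxcli also provides a corresponding get command (a minimal sketch):

esxcli software acceptance get

esxcli software acceptance set --level=PartnerSupported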

Horizon 7.x: How to configure Horizon Client air-gapped network download


Once the Connection Server is installed, we can access the Connection Server at https://<Server FQDN>. On this page, there are two options to access the Horizon environment.

 

  • VMware Horizon HTML Access
  • Install VMware Horizon Client

 

Let's assume that a business user needs to install the native Horizon Client on their Windows system in order to access the VDI desktops/apps. Here are a few challenges that we may face.

  • One of the challenges is that if your infrastructure is configured for air-gapped networks, the client download redirection for the user might not complete successfully.
  • Even if Internet connectivity is not a problem, another concern is whether the user will be able to decide on the correct Horizon Client version and bitness. Not every user is going to be tech savvy, and this might lead to confusion and frustration.
  • Also, the user might be connecting from a remote location, perhaps due to business travel, rather than from the office premises, so helping a remote user in this situation will require some extra effort.

 

So How do we handle this?

 

To overcome these challenges, as Horizon administrators we will change the way the user gets the Horizon Client installer.

 

Basically, an administrator can download the required version of the Horizon Client as per the infrastructure requirements and store it on the Connection Server for end users to download. So when the user clicks on the download client link, as shown below, they receive the installer the administrator has staged.

 

For more details, check

 

http://blogs.virtualmaestro.in/2020/04/19/horizon-7-x-how-to-configure-horizon-client-air-gapped-network-download/

What is URL Content Redirection in Horizon View 7.x?


With the URL Content Redirection feature, we configure specific URLs to open on the Horizon client machine and specific URLs on the Horizon virtual desktop or application.

 

How does this feature help?

 

URL Content Redirection helps both improve security and reduce unnecessary bandwidth and resource consumption in the virtual desktop when users browse certain Internet-based content.

 

Let us understand this with an example.

 

Let us assume that a user connects to a virtual desktop and we do not want certain websites (say "yahoo.com") to be accessed inside the virtual desktop, but at the same time we do not want to block the website outright. Instead of blocking it, whenever the user tries to access yahoo.com inside the virtual desktop, the moment the user types the URL into the IE address bar and hits Enter, the browser session gets redirected to the client machine's IE, as shown in the image below. So yahoo.com will be opened, but using the IE of the client machine and not that of the virtual desktop.

 

For more details, check

 

https://blogs.virtualmaestro.in/2020/04/21/what-is-url-content-redirection-in-horizon-view-7-x/

VMware Tools 11.0.6 compatibility with vSphere


VMware Tools version 11 was released around September 2019 and added features such as:

 

  • Ability to map a specific OS volume to the respective VMDK.
  • Updated drivers of PVSCSI, VMXNET3 & VMCI available through Microsoft Update Service for Windows Server 2016 & later.
  • Upgraded the compiler to Visual Studio 2017 for the VMware Tools drivers.
  • VMware Tools notarized for macOS 10.14.5.
  • Added support for additional drivers for AppDefense.
  • Added appInfo to publish information about running applications inside the guest.

 

VMware Tools has had a separate release cycle from vSphere since version 10. In line with that, the VMware Tools 11.0.6 update introduces a new key/certificate pair for signing the VMware Tools ISO files; the old signing key/certificate expired in December 2019.

 

If you download VMware Tools 11.0.6 from the VMware download portal and install it on ESXi hosts older than 6.5 U2, you will not be able to install or upgrade VMware Tools in virtual machines using the existing UI.

 

To fix this issue, either upgrade the ESXi hosts to version 6.5 U2 or later, or perform a manual VMware Tools installation/upgrade inside the guest OS.
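As a quick sanity check, the Tools version actually running inside a guest can be confirmed with the toolbox command (Linux shown first, Windows second):

vmware-toolbox-cmd -v

"C:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe" -v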

 

Check additional details at

 

http://blogs.virtualmaestro.in/2020/04/17/vmware-tools-11-0-6-compatibility-with-vsphere/

A personal summary of the new features in vSphere 7


Change from vSphere Update Manager (VUM) to vSphere Lifecycle Manager (vLCM)

The standard vSphere upgrade tool has changed from VUM to vLCM. A precheck feature that runs before the upgrade has been added, so upgrades can now be performed more safely.

In addition, by using plugins provided by Dell and HPE, firmware and driver upgrades are now possible as well.

However, in a VxRail environment this feature cannot be used and fails with an error like the one below.

 

1.png

 

Reference URLs

https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere-lifecycle-manager.doc/GUID-74295A37-E8BB-4EB9-BFBA-47B78F0C570D.html

https://tinkertry.com/how-to-upgrade-from-vsphere-6-7-to-7

 

 

vSAN 7 provides a native file service

Until now, vSAN could act as a datastore for storing virtual machines and could present iSCSI storage to consumers outside the cluster; now a NAS storage capability has been added on top of that.

When the file service is enabled, the virtual machines that provide the NAS service are deployed from an OVA and placed on each host (up to 8 hosts), so they do consume resources.

The vSAN Skyline health check also gains additional items for the File Service.

Previously, when NAS was needed, it either had to be built separately outside of vSAN or a NAS service had to be installed inside a virtual machine on the vSAN datastore; that is no longer necessary.

NFS v4.1 and v3 are offered as protocols, but there is no CIFS/SMB capability.

The NFS exports provided by vSAN cannot be mounted by ESXi and used as a datastore for storing virtual machine VMDKs, but they can be mounted directly by virtual machines running on the vSAN cluster.

The VMs deployed for the file service run containers, so during failures or maintenance the container services move automatically instead of relying on vMotion or HA.
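As a side note on the consumption model above, a Linux VM running on the vSAN cluster could mount such a share with a standard NFS client; the server name and share below are made-up placeholders:

mount -t nfs -o vers=4.1 vsan-fs.example.local:/myshare /mnt/myshare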

 

 

https://blogs.vmware.com/virtualblocks/2020/03/10/announcing-vsan-7/

https://storagehub.vmware.com/t/vsan-frequently-asked-questions-faq/file-service-7/

https://www.youtube.com/watch?v=Zzp1E4d4eIw

 

 

2.png

3.png

 

4.png

The PSC becomes embedded-only

The external PSC topology that was recommended in vSphere 6.x is no longer supported; only the embedded PSC is supported.

If you were using the internal VCSA on VxRail, the deployment is automatically converted to the embedded PSC topology during the upgrade.

If you were using an external VC, then after upgrading the VC a reconciliation step in VxRail Manager is required to bring the configurations back in line.

https://blogs.vmware.com/vsphere/2020/03/vsphere-7-vcenter-server-7-migration-upgrades.html

 

 

The Windows Server version of vCenter can no longer be used

From 7.0 onward only the VCSA is available, so if you are currently using the Windows version of vCenter, a migration is required.

https://blogs.vmware.com/vsphere/2020/03/vsphere-7-vcenter-server-7-migration-upgrades.html

 

Upgrading from vSphere 6.0.x (VxRail 4.0.x) is not possible

Only vSphere 6.5/6.7 can be upgraded to vSphere 7.0; a direct upgrade from 6.0 is not possible.

https://blogs.vmware.com/vsphere/2020/03/vsphere-7-vcenter-server-7-migration-upgrades.html

 

 

DRS enhancements

DRS used to balance load based on ESXi-level metrics, but in vSphere 7.0 it now makes placement and migration decisions based on virtual machine-level performance.

The monitoring interval has also been shortened from 5 minutes to 1 minute.

https://blogs.vmware.com/vsphere/2020/03/vsphere-7-improved-drs.html

 

vMotion enhancements

vMotion performance has been improved, reducing the impact on running workloads.

https://blogs.vmware.com/vsphere/2020/03/vsphere-7-vmotion-enhancements.html

 

A license upgrade is required

In vSphere 6.x a common license could be used across 6.0/6.5/6.7, but for vSphere 7 the licenses need to be upgraded.

https://kb.vmware.com/s/article/2006974

 

VCF 4 & Kubernetes

vSphere 7.0 corresponds to VCF 4, and the integration with Kubernetes has been strengthened.

https://vmusketeers.com/2020/03/10/vcf4-vsphere-7-vsan7-vrops-8-1-and-everything-else/

https://blocksandfiles.com/2020/03/10/vmware-vsphere-7-kubernetes/

https://blogs.vmware.com/vsphere/2020/03/vsphere-7-tanzu-kubernetes-clusters.html

https://blogs.vmware.com/vsphere/2020/04/how-to-get-vsphere-with-kubernetes.html

 

Multiple vNICs can now be configured on the VCSA

Multi-homing of vCenter is now possible.

However, whether this can be used with the VxRail internal VCSA still needs to be confirmed.

https://www.vladan.fr/what-is-vcenter-server-7-multi-homing/

https://www.youtube.com/watch?v=2kFZXa9lloM

 

 

Reference blogs and documents

https://www.vmware.com/products/vsan/whats-new.html

https://lab8010.com/announcing-vsan-7-summary/

https://lab8010.com/introduction-vsphere-7-summary/

https://vmusketeers.com/2020/03/10/vcf4-vsphere-7-vsan7-vrops-8-1-and-everything-else/


ESXi 6.5 & later Password policy


ESXi uses the Linux PAM module pam_passwdqc for password management and control. We can change the required length, the character class requirements, or allow passphrases using the ESXi advanced setting Security.PasswordQualityControl.

I will be using ESXi 6.7 from my test lab for this discussion.

As you can see, it is similar to what we had in ESXi 6.0 and its predecessors as well.

retry=3 min=disabled,disabled,disabled,7,7

A superficially similar setting would be:

retry=3 min=8,8,8,7,7

Note, however, that the two are not identical: "disabled" in the first three positions rejects passwords built from one or two character classes (and simple passphrases) outright, whereas a numeric value such as 8 would allow them once they reach that length.
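For example, the current value can be inspected and changed from the ESXi shell with esxcli (shown here with the default policy string; adjust the string to your own requirements):

esxcli system settings advanced list -o /Security/PasswordQualityControl

esxcli system settings advanced set -o /Security/PasswordQualityControl -s "retry=3 min=disabled,disabled,disabled,7,7"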

Let us understand the changes, if any, in the password policy of ESXi 6.5 and later. I have tried to simplify the ESXi password policy as much as possible at the below link.

 

https://blogs.virtualmaestro.in/2020/04/24/esxi-6-5-later-password-policy/

What is RDSH App Session Pre-Launch feature in Horizon 7.3 & later?


Application Session Pre-Launch feature greatly reduces the application launch time for end users.

 

Generally, when a user launches any RDSH application, the Horizon Client establishes a session with the backend RDSH server, showing the message "Connection server is preparing your application", and then launches the application from it. This usually takes some time (mostly under a minute or so).

 

With the session pre-launch feature, the Horizon Client establishes the session to the RDSH host (not the application itself, just the session) even before the user actually launches the application from the Horizon Client. So the initial connection process is already done by the time the user actually launches the application.

 

  • What does it mean?
  • Will it open Applications automatically for user?
  • How does it work exactly?
  • Does this work in cloud pod architecture?

 

To explore all these questions, we will set up the RDS farm settings and an application pool in Horizon with the session pre-launch option enabled, along with the Horizon Client settings. By the way, I am using Horizon 7.7 in my test lab.

 

See detailed information at https://blogs.virtualmaestro.in/2020/04/23/what-is-rdsh-session-pre-launch-feature-in-horizon-7-3-later/

[VxRail] How to check the SRS configuration and the health monitoring setting


This article explains how to check the SRS configuration and the health monitoring setting from the VxRail GUI.

 

 

 

[VxRail 4.0.x/4.5.x/4.7.0x] Checking from the VxRail Manager GUI

Checking the SRS status

When disabled

Japanese UI

1.png

English UI

2.png

When enabled

Japanese UI

3.png

English UI

4.png

Checking the health monitoring setting

Note: the health monitoring item may be displayed as "Suppress Mode" (or its Japanese equivalent, "抑制モード"). In that case, please refer to the Dell EMC community article below.

https://dell.to/2zq9qQI

 

When disabled

Japanese UI

7.png

English UI

8.png

When enabled

Japanese UI

5.png

English UI

6.png

 

 

[VxRail 4.7.x/7.x] Checking from the vCenter plugin

Checking the SRS status

When disabled

Japanese UI

9.png

English UI

10.png

When enabled

Japanese UI

11.png

English UI

12.png

 

Checking the health monitoring setting

When disabled

Japanese UI

13.png

English UI

14.png

When enabled

Japanese UI

15.png

English UI

16.png

How to indent tables in VMware blog posts


There is something about the VMware blogs (the Jive platform) that has bothered me for a while.

Namely, indentation is not applied when you use the quote or table features.

 

When writing an article, you usually indent items here and there to make the post easier to read, but when you use a table or a quote, that part alone does not get indented, which looks a bit off.

For example, see below.

 

 

Indented

 

Not indented

 

Not indented

 

Indented

 

 

As shown above, even if you apply indentation to a table or a quote, the frame itself is not indented; only the content inside gets indented, somewhat pointlessly.

One way around this is to edit the article's HTML directly.

You can open the HTML editor by clicking the spot shown below.

0.PNG

 

 

Once the HTML editor is open, wrap the target table or quote with <div style="position: relative;left: 80px;"> and </div>.

Specifically, it looks like this:

 

1.PNG

 

The result then appears as if it has been indented, as shown below.

By changing the number where it says 80px in the element, you can adjust how far the block is shifted from the left.

 

 

Indented

 

Indented

 

 

Now that I have found this method, writing articles should be less stressful from here on.

I only wish I had looked into it sooner.

Step by step guide for basic setup and troubleshooting of VMware vRealize Network Insight - Part 1: Deployment and initial config


I have provided a document about how to deploy vRealize Network Insight (vRNI), a great solution for managing the networking components of vSphere and NSX and their network flows in the virtualization infrastructure. The document includes some command lines for the vRNI Proxy (Collector) appliance, used for managing and troubleshooting its connectivity to the vRNI primary server. I hope it is helpful for you all.
