In this section we will go over the mostly standard steps of installing the NSX-T 3.0 appliance: deploying the downloaded OVA on a physical ESXi host and selecting the datastore and the network/portgroup for NSX-T management.
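For repeat lab builds, the same OVA deployment can be scripted with ovftool instead of clicking through the vSphere wizard. The sketch below just assembles such a command; every hostname, path, IP, and password in it is a placeholder, and the `nsx_*` property names are as I recall them from the NSX-T 3.0 OVA, so verify them against the OVA (e.g. `ovftool --hideEula <ova>`) before running anything.

```python
# Sketch: building an unattended ovftool command for the NSX-T Manager OVA.
# All names, IPs, and passwords are placeholders for illustration only.
import shlex

ova = "nsx-unified-appliance-3.0.0.ova"  # downloaded OVA (placeholder filename)
target = "vi://administrator@vsphere.local@vcsa.lab.local/DC/host/HW-Clu01/"  # placeholder

cmd = [
    "ovftool", "--acceptAllEulas", "--allowExtraConfig",
    "--datastore=NFS-DS01",                      # placeholder datastore name
    "--network=Mgmt-PG",                         # management portgroup (placeholder)
    "--deploymentOption=small",                  # "small" is enough for a lab
    "--prop:nsx_role=NSX Manager",
    "--prop:nsx_hostname=nsxmgr01",              # placeholder hostname
    "--prop:nsx_ip_0=192.168.10.15",             # placeholder mgmt IP
    "--prop:nsx_netmask_0=255.255.255.0",
    "--prop:nsx_gateway_0=192.168.10.1",
    "--prop:nsx_dns1_0=192.168.10.2",
    "--prop:nsx_passwd_0=VMware1!VMware1!",      # root password (placeholder)
    "--prop:nsx_cli_passwd_0=VMware1!VMware1!",  # admin password (placeholder)
    ova, target,
]
print(shlex.join(cmd))
```

This only prints the command, so you can inspect it before pasting it into a shell on a machine that has ovftool installed.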
As this is a HomeLab and NSX-T is deployed mainly for hands-on practice, I will not be deploying the three-node management cluster, which is a must in a production environment.
Next, in Part 3, we will look at adding the compute manager and creating the TEP IP pool and Transport Zone.
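To get ahead of Part 3, a quick back-of-the-envelope sizing of that TEP IP pool: roughly one TEP IP per NSX uplink per host, plus spares for the Edge VMs later. The subnet and range below are placeholders I picked for illustration, not the actual values from this lab.

```python
# Sketch: sizing a TEP IP pool with the stdlib ipaddress module.
# Subnet and starting offset are placeholders, not the lab's real values.
import ipaddress

tep_subnet = ipaddress.ip_network("172.16.50.0/24")  # placeholder TEP subnet
host_teps = 3 * 1 + 3 * 2   # 3 physical hosts x 1 NSX NIC, 3 nested x 2 NSX NICs
edge_teps = 2               # room for two Edge VMs down the line
needed = host_teps + edge_teps

pool = list(tep_subnet.hosts())[10 : 10 + needed]    # start at .11, arbitrarily
print(f"pool range: {pool[0]} - {pool[-1]} ({len(pool)} addresses)")
```

That gives an 11-address range, which maps directly onto the start/end fields of the IP pool form in the NSX-T UI.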
Before we dive into the NSX-T part, let's take a quick look at the physical hosts and nested ESXi hosts setup.
As seen above, we have 3x physical hosts in one cluster (HW-Clu01) and 3x nested ESXi hosts in another cluster (nClu01), and all are connected to the same vDSwitch for management.
HW-Clu01 hosts the vCSA, the 3x nested ESXi VMs, and 2x test VMs, and will also host the NSX-T Manager and Edge VMs down the line. All 6 hosts have access to the same NFS datastore, where only the test VMs are located for the ease of this demo. I did think of enabling vSAN on the nested hosts, but will leave that detour for some other blog post.
A quick overview of the vDS, the nested ESXi VM NIC config, and the portgroups used for the physical and nested hosts' vmkernel adapters and TEP.
A quick overview of the subnets and VLANs used for this demo.
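To show the shape of such a plan in code: the snippet below lays out an example VLAN-to-subnet mapping and sanity-checks that no subnets overlap. The VLAN IDs and CIDRs are illustrative placeholders, not the actual values used in this demo.

```python
# Sketch: a VLAN/subnet plan of the kind used in the demo, with an overlap
# check. All VLAN IDs and subnets below are placeholders for illustration.
import ipaddress
from itertools import combinations

plan = {
    "Management": (10, "192.168.10.0/24"),  # vmk0 on all hosts (placeholder)
    "vMotion":    (20, "192.168.20.0/24"),  # placeholder
    "TEP":        (50, "172.16.50.0/24"),   # tunnel endpoints (placeholder)
}

nets = {name: ipaddress.ip_network(cidr) for name, (_vlan, cidr) in plan.items()}
for (a, na), (b, nb) in combinations(nets.items(), 2):
    assert not na.overlaps(nb), f"{a} overlaps {b}"

for name, (vlan, cidr) in plan.items():
    print(f"VLAN {vlan:>3}  {name:<11} {cidr}")
```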
Ever since VMUG added NSX-T to its offerings, I have been meaning to deploy it in my homelab and try it myself, but due to limited CPU and memory resources I was not able to, until recently.
There are some great blogs out there explaining NSX-T while recreating real-world scenarios as closely as possible, and they are very helpful. In my case, however, I initially just wanted an overlay network between the physical and nested ESXi hosts, where 3 VMs in 3 different NSX-T logical segments are able to communicate with each other.
I'm not able to answer the silent "but why?" that may follow, but nonetheless I get hands-on with an amazing product. Thanks to VMUG Advantage.
In short, all 3 physical hosts in my homelab have 1 NIC (or more) for management and 1 NIC for NSX, which is patched to the TEP VLAN on the physical switch. All 3 nested hosts have 2 NICs for management and 2 NICs for NSX; the NSX NICs are connected to the TEP VLAN on a vDSwitch portgroup. The aim is to have a logical segment spanning the physical and nested hosts in two clusters in the same vCenter Server.
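One thing worth flagging for that TEP VLAN, end to end across the physical switch and the vDS portgroup: Geneve encapsulation adds outer headers on top of the inner frame, which is why NSX-T requires at least 1600 MTU on the TEP path (1700 or jumbo frames are the common recommendation). A small sketch of the arithmetic:

```python
# Sketch: Geneve encapsulation overhead on the TEP path.
# Header sizes are the standard ones; option length varies per deployment.
OUTER_IPV4 = 20
OUTER_UDP = 8
GENEVE_BASE = 8   # base Geneve header; options add more
INNER_ETH = 14

def min_tep_mtu(inner_mtu: int = 1500, geneve_options: int = 0) -> int:
    """Smallest TEP-path MTU that carries an inner frame without fragmenting."""
    return inner_mtu + INNER_ETH + GENEVE_BASE + geneve_options + OUTER_UDP + OUTER_IPV4

print(min_tep_mtu())  # 1550 for a plain 1500-byte inner MTU, before options
```

The 1600 minimum in the NSX-T docs leaves headroom for Geneve options on top of that 1550 baseline.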
FreeNAS in a VM – with 2x 1G NICs and a RAID card in passthrough mode.
AP – Meraki MR18 – running OpenWrt.
As seen above, I have a mixture of an old desktop and custom-built hosts, and was only recently able to add a Supermicro server. This has its own limitations and is the opposite of the best practice of identical hardware and the same ESXi version across a cluster, but as it is a HomeLab, approval was not that difficult to get. 😉