NSX-T 2.5: Create an NSX Edge Transport Node


An NSX Edge Node is a transport node that runs the local control plane. The Edge Nodes are service appliances dedicated to running centralized network services that cannot be distributed to the hypervisors.

Important:

An NSX Edge can belong to one overlay transport zone and multiple VLAN transport zones. An NSX Edge must belong to at least one VLAN transport zone to provide uplink access.

Follow the steps below to create an NSX Edge transport node.

1 – Select System > Fabric > Nodes > Edge Transport Nodes > Add Edge VM.

image

2 – Type a name for the NSX Edge and the Host name or FQDN.

image

3 – Specify the CLI and the root passwords for the NSX Edge.

image

4 – Enter the NSX Edge details.

image

5 – Enter the NSX Edge interface details.

image

6 – Select the transport zones that this transport node belongs to, and enter the N-VDS information for the overlay network.

Note: An NSX Edge transport node belongs to at least two transport zones, an overlay for NSX-T Data Center connectivity and a VLAN for uplink connectivity.

image

7 – Enter the N-VDS information for the VLAN network.

image

Note: Uplink interfaces are displayed as DPDK Fastpath Interfaces if the NSX Edge is installed using NSX Manager or on a Bare Metal server.

8 – Validate the configuration.

image

9 – Validate the configuration over SSH with the following commands.

# get interface eth0

# get managers

image
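
As an additional cross-check (a hedged sketch: the manager FQDN, the admin credentials, and <node-id> are placeholders; <node-id> is the UUID returned by the first call), the NSX Manager REST API can list the registered transport nodes and report their realized state:

# curl -k -u admin 'https://nsx-manager.example.com/api/v1/transport-nodes'

# curl -k -u admin 'https://nsx-manager.example.com/api/v1/transport-nodes/<node-id>/state'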

Hoping you have enjoyed this post, see you next time.

NSX-T 2.5: Configure a Managed Host Transport Node


If we have a vCenter Server, we can automate the installation and creation of transport nodes on all the NSX-T Data Center hosts instead of configuring them manually.

If the transport node is already configured, then automated transport node creation is not applicable for that node.

Follow the steps below to configure it.

1 – Select System > Fabric > Nodes > Host Transport Nodes. From the Managed By drop-down menu, select an existing vCenter Server, select the hosts from the list, and click Configure NSX.

image

2 – Select the Transport Node Profile and click Save.

image

3 – Validate that the Configuration State shows Success and the Node Status shows Up.

image

4 – Visualize the N-VDS from vCenter Server.

image

To identify the TEP IP and run a basic connectivity test, follow the steps below.

1 – Select System > Fabric > Nodes > Host Transport Nodes, select Site A's vCenter Server from the Managed By menu, and select the host. Click Physical Adapters, look for vmk10, and click the Interface Details icon.

It displays the configured TEP IP.

image

image

2 – View the ESXi connection status by running:

# esxcli network ip connection list | grep 1235

image

3 – Log in to the ESXi host and run the command below:

# esxcli network ip interface ipv4 get

image

4 – Run the following command to perform a ping test using the TEP interface:

# vmkping ++netstack=vxlan 10.10.2.202

image
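
To also validate the overlay MTU end to end (a hedged example: 1572 bytes of payload leaves headroom for the Geneve encapsulation on a 1600-byte transport MTU, and the destination TEP IP is the one used above), add the don't-fragment and size options to vmkping:

# vmkping ++netstack=vxlan -d -s 1572 10.10.2.202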

Hoping you have enjoyed this post, see you next time.

NSX-T 2.5: Configure Transport Node Profile


A transport node profile captures the configuration required to create a transport node. Transport node profiles define an IP pool, transport zone(s), and uplink profile (created in the previous posts) in a single configuration profile that is applied to the transport node hypervisors.

The transport node profile can be applied to an existing vCenter Server cluster to create transport nodes for the member hosts.
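
As a side note (a hedged sketch: the manager address and credentials are placeholders), the profiles created in the UI can also be listed through the NSX Manager REST API, which is handy for double-checking what was configured:

# curl -k -u admin 'https://nsx-manager.example.com/api/v1/transport-node-profiles'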

Follow the steps below to configure it.

1 – Select System > Fabric > Profiles > Transport Node Profiles > Add.

image

2 – Enter a name to identify the transport node profile and select the available transport zones.

image

3 – Click the N-VDS tab and enter the switch details.

image

4 – Validate the configuration.

image

Hoping you have enjoyed this post, see you next time.

NSX-T 2.5: Create an Uplink Profile

An uplink profile defines policies for the uplinks. The settings defined by uplink profiles can include teaming policies, active and standby links, transport VLAN ID, and MTU setting.

Considerations for uplink profiles on Edge VM and host transport nodes:

  • If the Failover Order teaming policy is configured for an uplink profile, you can only configure a single active uplink in the teaming policy. Standby uplinks are not supported and must not be configured in the failover teaming policy. When you install NSX Edge as a virtual appliance or host transport node, use the default uplink profile.

  • If the Load Balanced Source teaming policy is configured for an uplink profile, then you can configure multiple active uplinks on the same N-VDS. Each uplink is associated with one physical NIC with a distinct name and IP address. The IP address assigned to an uplink endpoint is configurable using IP Assignment for the N-VDS.

To create an uplink profile, follow the steps below.

1 – Select System > Fabric > Profiles > Uplink Profiles > Add.

image

2 – Enter an uplink profile name.

image

3 – We can enter a default teaming policy, or we can choose to enter a named teaming policy. Click Add to add a named teaming policy. A teaming policy defines how the N-VDS uses its uplinks for redundancy and traffic load balancing.

Teaming policy modes:

  • Failover Order: An active uplink is specified along with an optional list of standby uplinks. If the active uplink fails, the next uplink in the standby list replaces the active uplink. No actual load balancing is performed with this option.
  • Load Balance Source: A list of active uplinks is specified, and each interface on the transport node is pinned to one active uplink. This configuration allows use of several active uplinks at the same time.

image

Note: The uplink profile MTU default value is 1600.

4 – Validate the configuration.

image
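
For reference, an equivalent uplink profile can also be created through the NSX Manager REST API. This is a hedged sketch only: the manager address, credentials, profile name, transport VLAN, and uplink names are placeholders, and the payload follows the UplinkHostSwitchProfile schema as I understand it.

# curl -k -u admin -X POST 'https://nsx-manager.example.com/api/v1/host-switch-profiles' \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "uplink-profile-edge",
        "mtu": 1600,
        "transport_vlan": 21,
        "teaming": {
          "policy": "FAILOVER_ORDER",
          "active_list": [ { "uplink_name": "uplink-1", "uplink_type": "PNIC" } ]
        }
      }'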

Hoping you have enjoyed this post, see you next time.

NSX-T 2.5: Configure Transport Zones


Transport zones dictate which hosts, and therefore which VMs, can participate in the use of a particular network.

Types of Transport Zones

  • Overlay: An overlay transport zone is used by both host and NSX Edge transport nodes. When a host or NSX Edge transport node is added to an overlay transport zone, an N-VDS is installed on it.
  • VLAN: A VLAN transport zone is also used by both host and NSX Edge transport nodes, but for their VLAN uplinks.

What is N-VDS?

The N-VDS allows virtual-to-physical packet flow by binding logical router uplinks and downlinks to physical NICs.

Follow the steps below to create a Transport Zone.

1 – Select System > Fabric > Transport Zones > Add.

image

2 – Enter a name for the transport zone and the N-VDS, and select an N-VDS mode.

  • Standard – Mode that applies to all the supported hosts.
  • Enhanced Datapath – A networking stack mode available only to ESXi host transport nodes running version 6.7 or later.

Note:

  • For both the Standard and Enhanced Datapath modes, select a traffic type; the options are Overlay and VLAN.
  • In Enhanced Datapath mode, only specific NIC configurations are supported, so make sure you use supported NICs.

We are going to create three transport zones.

Transport Zone Name | Transport Zone Type | Details/Relevance
TZ-STD-OVERLAY-21 | Overlay | Geneve-encapsulated traffic
TZ-STD-VLAN-21 | VLAN | NSX-T distributed switch
TZ-STD-VLAN-ToR-21 | VLAN | Top-of-rack (ToR) switch network

image

image

image

3 – Validate the configuration.

image
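
Equivalently, each transport zone can be created through the NSX Manager REST API (a hedged sketch: the manager address, credentials, and N-VDS name are placeholders, and host_switch_name must match the N-VDS name used by the transport nodes):

# curl -k -u admin -X POST 'https://nsx-manager.example.com/api/v1/transport-zones' \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "TZ-STD-OVERLAY-21",
        "host_switch_name": "NVDS-OVERLAY-21",
        "transport_type": "OVERLAY"
      }'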

 

Hoping you have enjoyed this post, see you next time.

NSX-T 2.5: Deploy and add Compute Manager


First, we need to deploy the NSX Manager OVF appliance.

1 – In the vCenter inventory, right-click and select Deploy OVF Template.

image

2 – Select the OVF file.

image

3 – Select the name and location.

image

4 – Select the compute resource.

image

5 – Select the appliance size.

Note: The Extra Small size is only for the Cloud Service Manager.

image

6 – Select the datastore where you want to deploy the appliance.

image

7 – Select the network that will be used to manage the appliance.

image

8 – Customize the template: DNS, NTP, IP address, gateway, passwords, and so on.

Important: Check Enable SSH before deploying the appliance.

image

9 – Review the summary and click Finish.

image

Second, validate the appliance configuration.

1 – When the deployment is finished, power on the virtual appliance, log in to the console as the admin user, and validate the interface configuration.

image

2 – Validate the following services by running get services.

Note: The following services are not running by default: liagent, migration-coordinator, and snmp.

image

We can start them as follows:

# start service liagent

# start service migration-coordinator

image

3 – Verify the connectivity by testing the following tasks (a quick example follows the list).

  • The NSX Manager can ping its default gateway.
  • The NSX Manager can ping the hypervisor hosts that are in the same network as the NSX Manager using the management interface.
  • The NSX Manager can ping its DNS server and its NTP server.
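
From the NSX Manager console these checks come down to simple pings (a hedged example: the three addresses stand in for this lab's default gateway, an ESXi host on the management network, and the DNS/NTP server):

# ping 10.10.1.1

# ping 10.10.1.51

# ping 10.10.1.10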

image

image

Third, we are ready to connect NSX-T to vCenter Server.

1 – Connect to the NSX-T GUI by opening a web browser and navigating to the NSX-T Manager IP address.

image

2 – After logging in with the admin credentials, read and accept the EULA terms.

image

3 – Now select System > Fabric > Compute Managers > Add.

image

4 – Fill in the fields as follows.

image

If we leave the thumbprint value blank, we are prompted to accept the server-provided thumbprint.

5 – Click the compute manager's name to view and validate the details.

image
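
For reference (a hedged sketch: the manager address and credentials are placeholders), the registered compute managers can also be retrieved through the API to confirm their registration and connection status:

# curl -k -u admin 'https://nsx-manager.example.com/api/v1/fabric/compute-managers'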

 

Now we are ready to use NSX-T 2.5!

Hoping you have enjoyed this post, see you next time.

KB001: Backup and Restore VCSA vPostgres database 6.X


First, we need to download the backup and restore package from the official KB (https://kb.vmware.com/s/article/2091961) and copy it to the VCSA by following the steps below.

1. Download the package from the official VMware KB website.

image

2. Log in to the VCSA, create a directory, and copy the file into the VCSA using WinSCP.

# mkdir /backups/

image

Second, execute the backup procedure.

1. Unzip the files by running this command.

# unzip 2091961_linux_backup_restore.zip

image

2. Make the file backup_lin.py executable.

# chmod 700 /backups/backup_lin.py

3. Run the following command to execute the backup.

# python backup_lin.py -f backup1_VCDB.bak

image

4. Copy the backup file to a safe location.

image
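
For example, the backup can be copied off the appliance with scp (a hedged example: the destination host, user, and path are placeholders, and any SCP/SFTP client such as WinSCP works just as well):

# scp /backups/backup1_VCDB.bak backupuser@backup-server.example.com:/backups/vcsa/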

Third, execute the restore procedure.

1. Log in to the new VCSA and create a directory.

image

# mkdir /backups/

image

2. Copy the file you saved into the new VCSA.

image

3. Unzip the files by running this command.

# unzip 2091961_linux_backup_restore.zip

image

4. Make the file restore_lin.py executable.

# chmod 700 /backups/restore_lin.py

5. Stop the following services.

For 6.7 and 6.5:

# service-control --stop vmware-vpxd

# service-control --stop vmware-content-library

For 6.0:

# service-control --stop vmware-vpxd

# service-control --stop vmware-vdcs

6. Run the following command to execute the restore.

# python restore_lin.py -f backup1_VCDB.bak

At the end of the restore, you will see something like this.

image

7. Start the VMware services.

For 6.7 and 6.5:
# service-control --start vmware-vpxd
# service-control --start vmware-content-library

For 6.0:
# service-control --start vmware-vpxd
# service-control --start vmware-vdcs
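
To confirm that the services came back up, service-control can also report their status (a quick example; checking all services with --all is optional):

# service-control --status vmware-vpxd

# service-control --status --all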


IMPORTANT:

When you log in to vCenter Server again, you need to reconnect the ESXi hosts.

Hoping you have enjoyed this post, see you next time.

vCloud Director 9.1: Install vCloud Director Second Node – Part 6


First, we need to prepare the virtual machine where we are going to deploy the second cell. These are its characteristics.

  • O.S: CentOS 7 (64-bit)
  • vCPU: 2
  • RAM: 16 GB
  • HDD: 16 GB
  • IP: 10.161.115.167 (Used for HTTP)
  • IP: 10.161.115.168 (Used for Proxy Console)

Next, we need to prepare the operating system with the following packages.

# yum install alsa-lib bash chkconfig coreutils findutils glibc grep initscripts 
krb5-libs libgcc libICE libSM libstdc++ libX11 libXau libXdmcp libXext libXi libXt 
libXtst module-init-tools net-tools pciutils procps redhat-lsb sed tar which wget

Then, we need to install the latest operating system updates.

# yum update

And finally, we need to stop and disable the operating system firewall to avoid communication issues.

# systemctl stop firewalld

# systemctl disable firewalld

Second, follow these steps to deploy vCloud Director 9.1.

1. Copy the installation file using WinSCP.

01

2. Ensure the file has execute permission.

# chmod u+x vmware-vcloud-director-distribution-9.1.0-8825802.bin

3. Run the installation file.

# ./vmware-vcloud-director-distribution-9.1.0-8825802.bin

Note: After the software is installed, the installer prompts you to run the configuration script, which configures the certificates, the server's network, and the database connections.

In this case, choose n; before executing the script, we must configure the certificates.

02

Third, follow these steps to create the self-signed SSL certificates.

1. Go to the directory /opt/vmware/vcloud-director/jre/bin.

2. Execute the following commands to generate the Self-Signed SSL Certificates.

  • For HTTP service
# ./keytool -keystore certificates.ks -alias http -storepass passwd -keypass passwd -storetype JCEKS -genkeypair -keyalg RSA -keysize 2048 -validity 365 -dname "CN=vcd02-prd.example.com, OU=Engineering, O=Example Corp, L=Palo Alto, S=California, C=US" -ext "san=dns:vcd02-prd.example.com,dns:vcd02-prd,ip:10.161.115.167"
  • For Console Proxy service
# ./keytool -keystore certificates.ks -alias consoleproxy -storepass passwd -keypass passwd -storetype JCEKS -genkeypair -keyalg RSA -keysize 2048 -validity 365 -dname "CN=vcd02-proxy-prd.example.com, OU=Engineering, O=Example Corp, L=Palo Alto, S=California, C=US" -ext "san=dns:vcd02-proxy-prd.example.com,dns:vcd02-proxy-prd,ip:10.161.115.168"

3. To verify that all the certificates were generated, list the contents of the keystore file.

# ./keytool -storetype JCEKS -storepass passwd -keystore certificates.ks -list

03

4. Validate the certificates using WinSCP in the directory /opt/vmware/vcloud-director/jre/bin.

04

5. Important: Copy the certificates keystore to a directory that is readable by the vcloud user and group (vcloud.vcloud). The vCloud Director installer creates this user and group. In this case, /opt/vmware/.

05
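
A minimal sketch of that step, assuming the keystore was generated in /opt/vmware/vcloud-director/jre/bin and /opt/vmware/ is the target directory: copy the file, hand it to the vcloud user and group, and restrict its permissions.

# cp /opt/vmware/vcloud-director/jre/bin/certificates.ks /opt/vmware/

# chown vcloud.vcloud /opt/vmware/certificates.ks

# chmod 600 /opt/vmware/certificates.ks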


vCloud Director 9.1: Migrate and Transfer NFS – Part 5


First, we need to stop the vCloud Director service by running the following command.

# service vmware-vcd stop

Then, copy the data to another location.

# cp -r /opt/vmware/vcloud-director/data/transfer/ /tmp/copy-of-transfer

Then, delete the existing data.

# rm -fR /opt/vmware/vcloud-director/data/transfer/*

Next, we need to mount the shared NFS export by running this command.

# mount -t nfs 10.161.115.160:/nfs /opt/vmware/vcloud-director/data/transfer

Then copy the data back from /tmp/copy-of-transfer/ to /opt/vmware/vcloud-director/data/transfer/.

# cp -r /tmp/copy-of-transfer/* /opt/vmware/vcloud-director/data/transfer/

We will confirm the mount point.

01
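
A quick way to confirm it from the shell (standard Linux tools, nothing vCloud-specific):

# df -h /opt/vmware/vcloud-director/data/transfer

# mount | grep transfer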

Then we need to make sure the NFS share is mounted after a reboot by adding an entry to /etc/fstab.

# nano /etc/fstab

10.161.115.160:/nfs /opt/vmware/vcloud-director/data/transfer nfs rw 0 0

02

We have to confirm that the “cells” directory is now owned by vcloud. To configure this:

# chown -R vcloud:vcloud /opt/vmware/vcloud-director/data/transfer/*

Finally, start the vCloud Director service.

# service vmware-vcd start

 

Hoping you have enjoyed this post, see you next time.