
Building a Cisco NX-OS EVPN-VXLAN Multisite Fabric with Cisco NDFC - Part 2

Writer: Chun Fung Wong

Updated: Dec 16, 2023

Bringing up the ND cluster with NDFC service


Installing ND (Nexus Dashboard) VM

Installing and configuring NDFC is pretty straightforward. In this lab, a VMware VM is deployed through an OVA downloaded from Cisco. In the real world, there are options to choose between deploying ND as a physical appliance cluster or a virtual machine cluster. Pay attention to the terminology: ND (Nexus Dashboard) is the cluster product Cisco built on top of Kubernetes, while NDFC (Nexus Dashboard Fabric Controller) is the service running on the ND cluster. I've found that people sometimes use the two terms interchangeably, but there are other services, such as NDO (Nexus Dashboard Orchestrator) and NDI (Nexus Dashboard Insights), that can also run on the ND cluster. Therefore, for clarity, I use ND to refer to the cluster itself and NDFC to refer to the service that manages the EVPN-VXLAN fabric.


First things first, start by deploying the OVA and entering the corresponding parameters. I'll skip the self-explanatory items such as the VM name and datastore. However, pay attention to the resource requirements of a single ND node (a quick capacity check is sketched below):

  • 16 vCPUs

  • 64GB memory

  • 500GB storage

One ND node is good enough in a lab, but it is always recommended to have three in a real-world deployment. I will cover cross-site ND cluster design in a later part of this series, so please stay tuned.
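If you want a quick sanity check before deploying, the Python sketch below simply multiplies the per-node figures listed above by the number of ND nodes and compares the total against the free resources on your hypervisor host. The host headroom numbers are placeholders you would fill in yourself; this is a rough pre-flight check, not anything official from Cisco.

```python
# Rough capacity check for an ND cluster deployment (per-node figures from the list above).
ND_NODE_REQUIREMENTS = {"vcpu": 16, "memory_gb": 64, "storage_gb": 500}

def check_capacity(node_count, host_free):
    """Compare total ND requirements against free host resources (both plain dicts)."""
    for resource, per_node in ND_NODE_REQUIREMENTS.items():
        needed = per_node * node_count
        available = host_free.get(resource, 0)
        status = "OK" if available >= needed else "INSUFFICIENT"
        print(f"{resource:>10}: need {needed}, have {available} -> {status}")

# Example: a single-node lab cluster against hypothetical host headroom.
check_capacity(1, {"vcpu": 24, "memory_gb": 96, "storage_gb": 800})
```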


Once you reach step 4 of the OVA deployment, you have two options to choose from for the Configuration mode: App or Data. According to Cisco documentation, App is used for EVPN fabric, and Data is used for SAN fabric. Hence, choose App in this scenario.


Moving on to step 6, you'll encounter the storage requirements. Then, in step 7, details about the networks to be connected to the ND node are provided. Please note that mgmt0 refers to the out-of-band interface in a physical appliance, and fabric0 refers to the in-band interface. With a VM, make sure mgmt0 connects to the VLAN of the management network. ND will use this to reach the switches' out-of-band IP addresses.


In a lab setup, it's okay to have fabric0 on the same network as mgmt0 and later assign an arbitrary IP address to it. We will not use fabric0 in the entire lab. However, a well-planned IP addressing scheme should be in place with fabric0 on an in-band network for a production deployment.


Next, in step 8, you'll be customizing the template. There are a couple of key parameters to enter:

  • Disk size: You can adjust this accordingly, but 500GB is fine for a lab.

  • rescue-user Password: ND uses a specific user named rescue-user to log in at the CLI prompt. This password is set for that user.

  • Management IP and gateway: This is self-explanatory; a quick subnet sanity check is sketched below.
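As a small sanity check on the management IP, prefix length, and gateway you plan to enter, the sketch below uses Python's standard ipaddress module to confirm the gateway sits inside the same subnet as the management address. The addresses shown are placeholders for your own plan.

```python
import ipaddress

def validate_mgmt_addressing(mgmt_ip_with_prefix, gateway_ip):
    """Confirm the default gateway is inside the management subnet."""
    iface = ipaddress.ip_interface(mgmt_ip_with_prefix)   # e.g. "192.0.2.10/24"
    gw = ipaddress.ip_address(gateway_ip)
    if gw not in iface.network:
        raise ValueError(f"Gateway {gw} is not inside {iface.network}")
    if gw == iface.ip:
        raise ValueError("Gateway must not be the same as the node management IP")
    print(f"{iface.ip} with gateway {gw} in {iface.network}: looks good")

# Hypothetical lab values.
validate_mgmt_addressing("192.0.2.10/24", "192.0.2.1")
```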



 

Configuring ND and Installing NDFC


Configuring ND cluster

At this stage, once the VM deployment is done, you can confirm the system status in the VM console. When the system is ready, ND can be managed through a web browser using the management IP address as the URL (use HTTPS).
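If you would rather not keep refreshing the browser, a small polling loop like the one below (Python standard library only, ignoring the self-signed certificate that ND presents out of the box) tells you when the GUI starts answering on HTTPS. The IP address and timings are placeholders, not values mandated by Cisco.

```python
import ssl
import time
import urllib.request
import urllib.error

def wait_for_nd_gui(nd_ip, timeout_minutes=40, interval=30):
    """Poll https://<nd_ip>/ until it answers or the timeout expires."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE        # ND ships with a self-signed certificate
    deadline = time.time() + timeout_minutes * 60
    url = f"https://{nd_ip}/"
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
                print(f"GUI is up (HTTP {resp.status})")
                return True
        except (urllib.error.URLError, OSError):
            print("Not ready yet, retrying...")
            time.sleep(interval)
    return False

wait_for_nd_gui("192.0.2.10")   # replace with your ND management IP
```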

First, assign the admin password. I've found that some versions may not give you access via HTTPS right away. In that case, the following steps and parameters can be configured via the VM console instead.

Then assign a name for the cluster.

Provide the NTP and DNS server addresses. ND requires reachability to both NTP and DNS.
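Before entering these values, it can be worth confirming that the NTP and DNS servers actually respond, assuming your workstation sits on (or can reach) the same management network as ND. The sketch below resolves a name through the local resolver and sends a minimal SNTP query; the server address is a placeholder.

```python
import socket
import struct
import time

def check_dns(name="cisco.com"):
    """Resolve a name through the locally configured resolver."""
    try:
        addrs = {info[4][0] for info in socket.getaddrinfo(name, None)}
        print(f"DNS OK: {name} -> {sorted(addrs)}")
    except socket.gaierror as exc:
        print(f"DNS FAILED for {name}: {exc}")

def check_ntp(server, timeout=3):
    """Send a minimal SNTP (version 3, client mode) request and print the server time."""
    packet = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        try:
            s.sendto(packet, (server, 123))
            data, _ = s.recvfrom(48)
        except OSError as exc:
            print(f"NTP FAILED for {server}: {exc}")
            return
    # Transmit timestamp starts at byte 40; NTP epoch (1900) to Unix epoch (1970) offset.
    secs = struct.unpack("!I", data[40:44])[0] - 2208988800
    print(f"NTP OK: {server} reports {time.ctime(secs)}")

check_dns()
check_ntp("192.0.2.123")   # replace with your NTP server
```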

Below the Proxy Server field, if you are not using one, click the small (i) icon; a prompt will confirm that you can skip that configuration.


Click Next to proceed to the Node Details screen. Notice that the Next button is greyed out because the data interface and the node name have not been configured yet. Click the edit icon to configure those details.

Once completed, click the Update button at the lower-right corner, and you will be taken back to the cluster Node Details screen. Notice the Next button is now available.


Within a lab setup, simply click Next and accept the "Confirm Installation" warning regarding insufficient nodes. In a production environment, please add at least two (2) more nodes and complete the cluster node details. The cluster does not allow master nodes to be added beyond this point.


When everything is ready, click Configure for the cluster to boot up.

Please allow 30 minutes for the cluster to come up.



Installing NDFC service

The next step is setting up the NDFC service on ND.


When the ND cluster has finished its initial setup, log back into the management interface as you may have been logged out by the browser.


The first screen you will see is a welcome and Getting Started wizard. I recommend that newcomers go through the steps in the Getting Started wizard, as it provides thorough guidance on configuring the minimum parameters for ND to be operational.

I will not cover all of the details in the Getting Started wizard, but I will highlight one key item.


Persistent IP addresses

These are the IP addresses that ND uses for certain services, for example, SNMP trap hosts. ND requires configuring at least two (2) of them. This screen can be found under the Admin > System Settings menu, or when going through the Getting Started wizard for the first time.
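A quick way to keep the persistent IP pool honest is to check that each address sits inside the management subnet and does not collide with addresses already assigned to the node or the gateway. The sketch below does that with the standard ipaddress module; all values are placeholders for your own addressing plan.

```python
import ipaddress

def validate_persistent_ips(mgmt_subnet, reserved_ips, persistent_ips, minimum=2):
    """Check the persistent IP pool against the management subnet and known hosts."""
    network = ipaddress.ip_network(mgmt_subnet)
    reserved = {ipaddress.ip_address(ip) for ip in reserved_ips}
    pool = [ipaddress.ip_address(ip) for ip in persistent_ips]

    if len(pool) < minimum:
        raise ValueError(f"Need at least {minimum} persistent IPs, got {len(pool)}")
    for ip in pool:
        if ip not in network:
            raise ValueError(f"{ip} is outside the management subnet {network}")
        if ip in reserved:
            raise ValueError(f"{ip} collides with an already-assigned address")
    print(f"Persistent pool {persistent_ips} fits inside {network}")

# Hypothetical lab plan: /24 management network, gateway and node IP already in use.
validate_persistent_ips("192.0.2.0/24",
                        reserved_ips=["192.0.2.1", "192.0.2.10"],
                        persistent_ips=["192.0.2.21", "192.0.2.22"])
```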

When all parameters are set, go to the left-hand side menu and choose Operate > Services to start installing NDFC.

You can choose to install from the Cisco DC App Center or upload from a downloaded file.

In my case, I use a local file named Cisco-ndfc-12.1.3b.nap, which is a version directly downloaded from Cisco.


It takes about 30 to 45 minutes to install NDFC. Go back to the Operate > Services screen to enable the service. It takes another 30 to 45 minutes for all services to come online. When you see a green icon (don't worry about the exact number of services, whether it is 27/27 or 28/28; I find this value varies with the configuration), everything should be ready, and you are good to go configuring NDFC!

- End of Part 2, move on to Part 3 to configure NDFC -


Go back to Part 1.


Gary Wong@Geelong, Australia. Nov 2023.
