The Intention
Cisco is positioning NDFC (Nexus Dashboard Fabric Controller, which runs on ND, a.k.a. Nexus Dashboard) as the replacement for DCNM (Data Center Network Manager), in parallel with ACI (Application Centric Infrastructure), now some eight to nine years old, as the two prongs of its data center network management portfolio.
Depending on the use case, ACI can still feel like overkill: the upfront effort to learn its specific networking constructs remains a significant concern for organizations without ACI expertise. Meanwhile, EVPN-VXLAN is quickly becoming the industry standard for data center network deployments, as it is not bound to a single vendor: Juniper, Arista, NVIDIA (Cumulus), Huawei, and others offer comparable solutions, and its networking concepts are comparatively easier for network engineers to grasp. Hence, a group of customers prefers deploying EVPN-VXLAN to retire their ageing three-tier data center networks, whose replacement is long overdue.
One of the major challenges with EVPN-VXLAN is the overlay it introduces: VTEPs, VNIs, NVE interfaces, and so on, which may seem alien at first. Configuring them by hand across a fabric of, say, 20 switches is a daunting task. To address this, almost every vendor offers a management tool that automates the configuration of the data center fabric; NDFC is Cisco's. However, practical deployment resources, aside from the online Cisco docs and Cisco Live session recordings, are scarce. Therefore, I am writing a series of posts to document a recent project I assisted with for an Australian customer, in the hope that it serves as a good reference for others in need.
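To make the scale of that task concrete, here is a rough sketch of the kind of per-switch overlay configuration a VTEP needs when built by hand on NX-OS. The VLAN, VNI, and loopback values are arbitrary examples, not taken from this lab, and the underlay IGP, BGP EVPN address family, and per-VRF pieces are all omitted:
NXOS-1(config)# feature nv overlay
NXOS-1(config)# feature vn-segment-vlan-based
NXOS-1(config)# vlan 2300
NXOS-1(config-vlan)# vn-segment 30000
NXOS-1(config-vlan)# exit
NXOS-1(config)# interface nve1
NXOS-1(config-if-nve)# no shutdown
NXOS-1(config-if-nve)# host-reachability protocol bgp
NXOS-1(config-if-nve)# source-interface loopback1
NXOS-1(config-if-nve)# member vni 30000
NXOS-1(config-if-nve-vni)# ingress-replication protocol bgp
Multiply that by every VNI and every switch in the fabric, and the appeal of having a controller generate and push the configuration becomes obvious.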
This series will reference a lab built with EVE-NG and VMware. It is assumed that readers have experience deploying labs with EVE-NG and VMware. Images used in the posts have been obtained through official channels, so please refrain from requesting them.
The EVPN-VXLAN Lab Topology
Back-to-Back Border Gateway (BGW)
Below is the topology of the EVPN-VXLAN lab, which simulates physically dispersed data centers joined into a multisite fabric with EVPN-VXLAN. Each data center has two Cisco Nexus 9000v switches (Nexus 9300-series hardware in a real deployment), each serving as a Border Gateway (BGW) and connected back-to-back to the other site's BGWs via dark fiber links.
Versions used in the lab:
N9000v: 10.2(5)
NDFC: 12.1(3)

A summary of the switch roles is below:
Switch | Function
NXOS1 | DC1 Border Gateway
NXOS2 | DC1 Border Gateway
NXOS3 | DC2 Border Gateway
NXOS4 | DC2 Border Gateway
NXOS on the left | DC1 external device, not using service networks
NXOS on the right | DC2 external device, not using service networks
Bootstrapping the N9000v
Upon initial boot-up, configure each Nexus switch for management reachability to NDFC via its mgmt0 interface.
The following configuration is completed at first-time login to a newly deployed Nexus 9000v and can typically be skipped thereafter:
NXOS-1(config)# username admin password <nxos_password> role network-admin
NXOS-1(config)# snmp-server user admin network-admin auth md5 <snmp_password>
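As a quick sanity check, the local account and SNMP user can be listed with the standard show commands (output omitted here):
NXOS-1# show user-account
NXOS-1# show snmp user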
Then, configure the management connectivity:
NXOS-1(config)# interface mgmt0
NXOS-1(config-if)# ip address 192.168.1.200/24
NXOS-1(config-if)# exit
NXOS-1(config)# vrf context management
NXOS-1(config-vrf)# ip route 0.0.0.0/0 192.168.1.254
At this stage, confirm that SSH to the switch from the management network works with the admin credentials.
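A minimal way to verify reachability from the switch side, assuming the example addressing above, is to ping the gateway in the management VRF and review the mgmt0 settings:
NXOS-1# ping 192.168.1.254 vrf management
NXOS-1# show ip interface mgmt0
NXOS-1# show ip route vrf management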
Continue and complete the management configuration on the remaining switches, save the running configuration for now, and let NDFC do the hard work!
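On each switch, the configuration is saved with the usual:
NXOS-1# copy running-config startup-config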
- End of Part 1, continue to install and configure NDFC in Part 2 -
Gary Wong@Geelong, Australia. Nov 2023.