NSX distributed logical router appliance Part 2

In a previous post I talked about UDLR/DLR differences, why you might need to deploy an appliance with your (U)DLR and how high availability works for the optional appliance.

This post will cover more on IP addressing for the appliance and how configuration changes reach it, after a brief rant on naming.

What’s in a name

Would a DLR that wasn’t referred to as an Edge smell as sweet? – W. Shakespeare

So in the 6.3 documentation, the DLR falls under Routing / Add Logical (Distributed) Router.

And there it mentions that “An NSX Edge appliance provides dynamic routing ability if needed.”

In the GUI you find the (U)DLR under NSX Edges, even though it’s not on “the edge” and deploying a VM based off an Edge appliance isn’t required for a distributed logical router.

In the KB article referenced in the docs, the appliance is called a DLR Control VM, which is what the Design Guide calls it as well.  Most VMware presentations refer to it as a “Control” appliance.

At least it’s explicitly mentioned in the docs now; it was kind of glossed over in the install section previously (6.0/6.1).

Speaking of names, it’s still referenced repeatedly as a vShield Edge appliance.

vApp details

First time logging in – and second.

Could we please pick one name and stick with it?

IPs

As mentioned in my last post, each DLR VM created will show all configured IPs for the DLR.

However, the only one that normally sees traffic is the APIPA address used for the HA heartbeat, and in the event of a failover the HA Interface IP will be ARPed.

If you configure OSPF or BGP, you will be asked for a Protocol Address and a Forwarding Address.

BGP config also sets up the Forwarding and Protocol Addresses.

The Forwarding Address is the “next hop” for all networks managed by the DLR; those networks, with that “next hop”, are what get pushed out to the OSPF neighbors.

OSPF Forwarding Address is the Transit IP for the DLR.

The Protocol Address is assigned to the DLR VM, and that is the address the DLR VM uses to exchange routing information with the OSPF neighbors.
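If you prefer to script it, something like the Python sketch below could push the same Forwarding/Protocol Address pair (plus an area) through the NSX-v REST API instead of the GUI. The manager name, edge ID, credentials, addresses, vnic index and the exact XML layout here are all assumptions on my part, so check the API guide for your version before using it.

```python
# Sketch only: push an OSPF config with a Forwarding and Protocol Address to a
# DLR through the NSX-v REST API. The hostname, edge ID, credentials, vnic
# index and XML layout are assumptions -- verify against the API guide for
# your NSX version.
import requests

NSX_MGR = "nsxmgr.lab.local"   # assumed NSX Manager FQDN
EDGE_ID = "edge-1"             # assumed DLR edge ID
AUTH = ("admin", "password")   # assumed credentials

ospf_xml = """<ospf>
  <enabled>true</enabled>
  <protocolAddress>192.168.10.3</protocolAddress>     <!-- lives on the DLR VM -->
  <forwardingAddress>192.168.10.2</forwardingAddress> <!-- the DLR transit LIF / next hop -->
  <ospfAreas>
    <ospfArea>
      <areaId>10</areaId>
    </ospfArea>
  </ospfAreas>
  <ospfInterfaces>
    <ospfInterface>
      <vnic>2</vnic>          <!-- assumed index of the transit uplink -->
      <areaId>10</areaId>
    </ospfInterface>
  </ospfInterfaces>
</ospf>"""

resp = requests.put(
    f"https://{NSX_MGR}/api/4.0/edges/{EDGE_ID}/routing/config/ospf",
    data=ospf_xml,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
print("OSPF config accepted:", resp.status_code)
```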

DLR VM with OSPF Protocol Address replacing Transit Uplink

Looking at the interfaces on the DLR VM, all internal links are added to a single VDR interface and used to generate the dynamic routing updates.

The DLR HA interface gets its own interface (assuming it’s not on the same network as another uplink).

Uplink DLR interfaces (whether or not a Protocol Address is created) each get their own interface as well.
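To compare that against what’s configured on the DLR itself, you can pull the interface list from NSX Manager. This is only a sketch; the endpoint and XML field names are what I’d expect from the logical router interfaces API, so verify them against your API guide.

```python
# Sketch only: list the DLR's configured interfaces (name, type, primary IP)
# from NSX Manager, to compare with what shows up inside the DLR VM. Endpoint
# and XML field names are assumptions -- adjust for your NSX version.
import requests
import xml.etree.ElementTree as ET

NSX_MGR = "nsxmgr.lab.local"   # assumed NSX Manager FQDN
EDGE_ID = "edge-1"             # assumed DLR edge ID

resp = requests.get(
    f"https://{NSX_MGR}/api/4.0/edges/{EDGE_ID}/interfaces",
    auth=("admin", "password"),  # assumed credentials
    verify=False,                # lab only
)
resp.raise_for_status()

for iface in ET.fromstring(resp.text).iter("interface"):
    name = iface.findtext("name")
    itype = iface.findtext("type")            # "uplink" or "internal"
    addr = iface.findtext(".//primaryAddress")
    print(name, itype, addr)
```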

Uplink vs Internal

Dynamic routing protocol (BGP and OSPF) neighbors are supported only on uplink interfaces.  And of course, dynamic routing protocols are the only reason to deploy a DLR VM.

Firewall rules are applicable only on uplink interfaces of the DLR VM and are limited to control and management traffic that is destined to the DLR VM.
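As a hypothetical example of that kind of rule, allowing SSH to the control VM could look like the sketch below; the firewall-rules endpoint and XML field names are assumptions based on the edge firewall API, so double-check them before relying on it.

```python
# Sketch only: append a firewall rule on the DLR permitting SSH to the control
# VM (management traffic destined to the DLR VM itself). Endpoint and XML
# layout are assumptions based on the edge firewall API -- double-check them.
import requests

NSX_MGR = "nsxmgr.lab.local"   # assumed NSX Manager FQDN
EDGE_ID = "edge-1"             # assumed DLR edge ID

rule_xml = """<firewallRules>
  <firewallRule>
    <name>mgmt-ssh-to-dlr-vm</name>
    <action>accept</action>
    <destination>
      <ipAddress>192.168.10.3</ipAddress>  <!-- assumed Protocol Address on the VM -->
    </destination>
    <application>
      <service>
        <protocol>tcp</protocol>
        <port>22</port>
      </service>
    </application>
  </firewallRule>
</firewallRules>"""

resp = requests.post(
    f"https://{NSX_MGR}/api/4.0/edges/{EDGE_ID}/firewall/config/rules",
    data=rule_xml,
    headers={"Content-Type": "application/xml"},
    auth=("admin", "password"),  # assumed credentials
    verify=False,                # lab only
)
resp.raise_for_status()
```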

Configuration

The VM appears to be configured via VMware Tools: NSX Manager sends an update to the host holding the VM, and the host pushes the update into the guest via VMware Tools.

The Design Guide, on the other hand, states on pg 53:

The controller leverages the control plane with the ESXi hosts to push the new DLR configuration, including LIFs and their associated IP and vMAC addresses.

I don’t believe that statement is correct.

Because without a Controller:

Adding a new OSPF area:

The new changes show up on the DLR VM:

I believe updates are pushed via the “message bus” to the DLR VIB on the host and then on to the DLR VM, which would explain why changes can be sent with no Controllers and no network connectivity to the DLR VM.

Speaking of Controllers, the NSX Troubleshooting guide says the “Controller Cluster allocates one of the Controller nodes to be the master for this DLR instance” and references the Controller cluster receiving route tables and updates from the DLR VM and sending that information on to the hosts.

This does appear to be the case, as deleting my controllers stopped route updates from being delivered to the hosts.
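If you want to check that chain yourself, the sketch below runs the relevant show commands over SSH (using the paramiko library). The hostnames, credentials, instance name and even the exact CLI syntax (`show control-cluster logical-routers instance all` on a controller, `net-vdr --route -l` on a host) are assumptions from memory, so verify them on your build.

```python
# Sketch only: check which controller is master for the DLR instance and
# whether the routes made it to a host, over SSH. Requires the paramiko
# library; hostnames, credentials, instance name and CLI syntax are
# assumptions -- verify them on your build.
import paramiko


def run(host: str, user: str, password: str, command: str) -> str:
    """Run one command over SSH and return its stdout."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        _, stdout, _ = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()


# Which controller node owns (is master for) each logical router instance?
print(run("controller-1.lab.local", "admin", "password",
          "show control-cluster logical-routers instance all"))

# Did the DLR's routes actually land on this host? (instance name assumed)
print(run("esx01.lab.local", "root", "password",
          "net-vdr --route -l default+edge-1"))
```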

 
