What is Overlay Transport Virtualization (OTV) and How to Configure It

Created by Deepak Sharma in Articles, 16 Dec 2024

Overlay Transport Virtualization (OTV) is a Cisco proprietary technology designed to extend Layer 2 networks across Layer 3 domains, allowing organizations to stretch their VLANs across data centers. This enables seamless application and workload mobility without compromising network efficiency or security.

In other words, OTV is an Ethernet-over-IP tunnel that lets you span the broadcast domain of two data center sites over an IP transport.

Overlay Transport Virtualization works similarly to other Layer 2 tunneling protocols such as L2TPv3, Any Transport over MPLS (AToM), and Virtual Private LAN Service (VPLS), but it includes enhancements specific to Data Center Interconnect (DCI) environments.

In this Overlay Transport Virtualization practice guide, we will cover all the important concepts related to OTV, including its features, benefits, and configuration steps, in simple language.

To master the Overlay Transport Virtualization (OTV) concept in computer networks, you can enroll in our Cisco Data Center Courses. In these courses, we explain OTV and similar Cisco data center technologies in great detail using a virtual lab and practical real-life scenarios.

What is Overlay Transport Virtualization (OTV) and How Does It Work?

Overlay Transport Virtualization (OTV) is a Cisco technology that enables organizations to extend their Layer 2 networks across multiple locations using Layer 3 infrastructure.

When a device needs to send data to another data center, OTV encapsulates the Layer 2 Ethernet frames into IP packets. These packets are then transmitted across the Layer 3 network to the destination site.

Upon arrival, the packets are de-encapsulated back into their original Ethernet frames, allowing for seamless communication between geographically dispersed data centers. 

This approach eliminates the need for complex tunneling configurations or VLAN stretching via traditional methods.


Overlay Transport Virtualization Features

The key features of Overlay Transport Virtualization are as follows:

● MAC Address Advertisement: Uses IS-IS as the control plane to advertise (route) MAC addresses between sites.

● Control-Plane Learning Instead of Flood and Learn: Reduces broadcast and unknown unicast traffic across the interconnect.

● Overlay Network: Decouples the Layer 2 domain from the transport network.



Benefits of Overlay Transport Virtualization (OTV)

Overlay Transport Virtualization (OTV) offers several advantages for organizations looking to extend their Layer 2 networks over Layer 3 infrastructures. Here are the key benefits:

● Organizations can connect multiple data centers without complex per-site tunnel configurations.

● OTV boosts performance by using available bandwidth efficiently, including multicast support for control plane communication.

● OTV saves money by reusing the existing IP transport network.

● Adding and removing data center sites becomes an easy task.


If you want to learn more networking technologies, check out our IT Infrastructure Training Courses, where we provide training for various technologies from Cisco, Juniper, VMware, F5, and more.





Overlay Transport Virtualization Configuration Example

Let's look at an example of OTV configuration on Cisco Nexus 7000 switches. By following this example, you will learn how to configure OTV (Overlay Transport Virtualization) step by step.

OTV Topology

In our topology, we have four Nexus switches. NXOS01 and NXOS02 are connected to each other, NXOS03 is connected to NXOS01 and VPC01, and NXOS04 is connected to NXOS02 and VPC02. Here is the topology diagram.

[Topology diagram: VPC01 - NXOS03 - NXOS01 - NXOS02 - NXOS04 - VPC02]

In this Cisco Nexus OTV configuration example, we first need to configure basic Layer 2 reachability from the end hosts as follows:

● Configure VPC01’s link to NXOS03 with the IP address 10.0.0.1/24. (VPCS > 10.0.0.1/24)

● Configure VPC02’s link to NXOS04 with the IP address 10.0.0.2/24. (VPCS > 10.0.0.2/24)

VPC01:


VPCS> hostname VPC01
VPC01> ip 10.0.0.1/24

VPC02:


VPCS> hostname VPC02
VPC02> ip 10.0.0.2/24

To achieve the desired Cisco OTV configuration, we also need to configure basic Nexus switching on all switches as follows:

● Configure NXOS03’s link to VPC01 in VLAN 10

● Configure NXOS04’s link to VPC02 in VLAN 10

● Configure VLAN 10 trunking on the links between NXOS01 and NXOS03

● Configure VLAN 10 trunking on the links between NXOS02 and NXOS04

● Configure NXOS01 and NXOS02 as the STP root bridges for VLAN 10

● Disable all other unused ports on all four switches

NXOS03:


!
license grace-period
!
vlan 10
!
! Access port to end host VPC01
interface Ethernet2/7
 switchport
 switchport access vlan 10
 no shutdown
!
! Trunk links to NXOS01
interface Ethernet2/3-4
 switchport
 switchport mode trunk
 no shutdown
!
! Unused ports disabled, per the task list
interface Ethernet2/1-2
 shutdown
!
interface Ethernet2/5-6
 shutdown
!

NXOS04:


!
license grace-period
!
vlan 10
!
! Access port to end host VPC02
interface Ethernet2/7
 switchport
 switchport access vlan 10
 no shutdown
!
! Trunk links to NXOS02
interface Ethernet2/3-4
 switchport
 switchport mode trunk
 no shutdown
!
! Unused ports disabled, per the task list
interface Ethernet2/1-2
 shutdown
!
interface Ethernet2/5-6
 shutdown
!

NXOS01:


!
license grace-period
!
vlan 10
!
! Make NXOS01 the STP root for VLAN 10 in its site
spanning-tree vlan 10 priority 4096
!
! Trunk links to NXOS03
interface Ethernet2/3-4
 switchport
 switchport mode trunk
 no shutdown
!
! Unused ports disabled
interface Ethernet2/2
 shutdown
!
interface Ethernet2/5-6
 shutdown
!

NXOS02:


!
license grace-period
!
vlan 10
!
! Make NXOS02 the STP root for VLAN 10 in its site
spanning-tree vlan 10 priority 4096
!
! Trunk links to NXOS04
interface Ethernet2/3-4
 switchport
 switchport mode trunk
 no shutdown
!
! Unused ports disabled
interface Ethernet2/2
 shutdown
!
interface Ethernet2/5-6
 shutdown
!
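
Before moving on to OTV, the Layer 2 groundwork can be sanity-checked with standard NX-OS show commands; a minimal sketch, run on any of the four switches:

show vlan brief
show interface trunk
show spanning-tree vlan 10

VLAN 10 should be active on the expected ports, the inter-switch links should be trunking, and NXOS01 and NXOS02 should each report themselves as the VLAN 10 root within their own site.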

Configuring Overlay Transport Virtualization 

Now add the OTV-related configuration between NXOS01 and NXOS02 to tunnel traffic between the end hosts VPC01 and VPC02 as follows:

● Enable the OTV feature on NXOS01 and NXOS02.

● Create Overlay Transport Virtualization VLAN 999 on NXOS01 and NXOS02 and configure it as the OTV Site VLAN.

● Configure NXOS01 with the OTV Site Identifier 0x101 and configure NXOS02 with the OTV Site Identifier 0x102.

● Configure the Eth2/1 ports connecting NXOS01 and NXOS02 as native Layer 3 routed interfaces using the addresses 169.254.0.71/24 and 169.254.0.72/24, respectively. This will be the OTV Join Interface.

● Enable IGMPv3 on the OTV Join Interface.

● Configure interface Overlay 1 on both NXOS01 and NXOS02. Use the routed interface between them as the OTV Join Interface, 224.1.1.1 as the OTV Control Group, 232.1.1.0/24 as the OTV Data Group range, and VLAN 10 as an Extend VLAN.

Note: Assign the MAC addresses on the Cisco Nexus switches as follows:


Nexus Switch    Interface       MAC Address
NXOS01          Ethernet 2/1    0000.0000.1021
NXOS02          Ethernet 2/1    0000.0000.2021

Step-by-Step OTV Configuration on Nexus Switches

Step 1: Enable the Overlay Transport Virtualization feature on both Cisco Nexus switches

NXOS01 and NXOS02:


!
feature otv
!
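
To confirm the feature is active, filter the feature list with standard NX-OS syntax:

show feature | include otv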

Step 2: Create the OTV Site VLAN and set the Site Identifier on both Nexus switches

NXOS01:

!
vlan 999
 name OTV_SITE_VLAN
!
otv site-vlan 999
otv site-identifier 0x101
!

NXOS02:

!
vlan 999
 name OTV_SITE_VLAN
!
otv site-vlan 999
otv site-identifier 0x102
!

The Site VLAN is the same on both switches, but the Site Identifier differs because NXOS01 and NXOS02 sit in different data center sites.
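
The site settings can be spot-checked per switch with the OTV show commands, for example:

show otv site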

Step 3: Configure OTV Overlay Interface on both Nexus switches

NXOS01 and NXOS02:


!
interface Overlay1
 ! Tunnel through the Join Interface configured in Step 4
 otv join-interface Ethernet2/1
 ! ASM group used for edge-device discovery and control-plane traffic
 otv control-group 224.1.1.1
 ! SSM range used for tunneled multicast data traffic
 otv data-group 232.1.1.0/24
 ! VLANs bridged across the overlay
 otv extend-vlan 10
 no shutdown
!
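
Note that the overlay stays down until the Join Interface from Step 4 is configured and up. Once it is, the overlay state can be reviewed with:

show otv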

Step 4: Configure the Join Interface, the Layer 3 interface used for OTV encapsulation

NXOS01:


!
interface Ethernet2/1
 no switchport
 ip address 169.254.0.71/24
 ! Static MAC per the addressing table above
 mac-address 0000.0000.1021
 ! IGMPv3 is required because the OTV data group uses SSM
 ip igmp version 3
 no shutdown
!

NXOS02:


!
interface Ethernet2/1
 no switchport
 ip address 169.254.0.72/24
 ! Static MAC per the addressing table above
 mac-address 0000.0000.2021
 ! IGMPv3 is required because the OTV data group uses SSM
 ip igmp version 3
 no shutdown
!
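
At this point, unicast reachability and the OTV control plane can be spot-checked from NXOS01 with commands from the NX-OS OTV feature set (a sketch, not full output):

ping 169.254.0.72
show otv adjacency
show otv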

In this example, our OTV Edge Devices NXOS01 and NXOS02 are directly connected over a point-to-point layer 3 routed interface. However, OTV can run over any transport as long as unicast and multicast IP connectivity exists.

This means that Overlay Transport Virtualization can run over dark fibre, MPLS L2VPN/L3VPN, DMVPN, and so on, as long as reachability is there. This connectivity requirement can also be relaxed, using OTV's unicast-only (adjacency server) mode, so that only unicast IP reachability is required, as opposed to both unicast and multicast.

The layer 3 routed interface used to form the OTV tunnel is called the Join Interface.

Note that as of the release used in this example, the Join Interface cannot be an SVI or a Loopback; it must be a native layer 3 routed physical interface, sub-interface, or port channel. This link is essentially what you would think of as the “tunnel source” in a normal GRE tunnel configuration.

The logical interface where traffic is OTV encapsulated is called the Overlay Interface, which is analogous to the “interface tunnel” in normal GRE tunnel configurations.

A concise verification of the state of both the Join and Overlay interfaces is shown below.


[Output of show otv on NXOS01: unicast and multicast transport addresses, extended VLANs, Site VLAN state, and AED status]

The above output shows us both the Unicast and Multicast transport addresses used, as well as other pertinent information such as the VLANs extended (bridged) over the tunnel, whether the local Site VLAN is up (which it must be), and whether the edge router has elected itself as Authoritative (which it must be).

Another important field in the output above is the Site Identifier, which must be the same for Edge Devices in the same DC site and different between different sites (as is the case here). The deeper role of the Site VLAN and Site Identifier is beyond the scope of this scenario.

For the transport addresses, unicast reachability is simple to verify: if the OTV Edge Devices can ping each other's Join Interface addresses, that's basically all there is to it. For multicast reachability, we would, of course, require a multicast-capable transport, such as a point-to-point link like in this case, or an MPLS L3VPN with MDT service enabled.

Note that the multicast transport must be both ASM (*,G) and SSM (S,G) capable, because the discovery of other OTV edge devices uses shared trees (*,G), whereas actual multicast data plane over the tunnel uses shortest path trees (S,G).

Because SSM is used for certain flows, IGMPv3 support must be enabled on the Join Interface as well as the actual PIM router on the other side of the link (if applicable).
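
For reference, if the transit were a multi-hop routed network rather than this point-to-point link, the transport routers would need PIM for both the ASM control group and the SSM data range. A minimal hypothetical sketch on an NX-OS transit router (not required in this lab; an RP would also be needed for the ASM group):

!
feature pim
! 232.0.0.0/8 is the default SSM range and already covers the 232.1.1.0/24 data group
ip pim ssm range 232.0.0.0/8
!
interface Ethernet2/1
 ip pim sparse-mode
!

On the OTV edge devices themselves, IGMPv3 on the Join Interface can be confirmed with show ip igmp interface ethernet 2/1.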

Without doing verification in the DCI transit network itself, multicast reachability can be quickly verified from the OTV edge routers themselves simply by checking whether the OTV IS-IS adjacencies have formed.

IS-IS traffic is encapsulated in the OTV control-group multicast, which is the (*,G) feed. This means that if IS-IS is up, multicast likewise is up. This verification is shown below.


[Output of show otv adjacency: the OTV IS-IS adjacency between NXOS01 and NXOS02]

Unlike FabricPath, which uses IS-IS simply to establish shortest path trees between the FabricPath spine and leaf switches, the OTV IS-IS process is used to advertise reachability information about the end hosts. Before traffic can flow over the tunnel, the end devices' MAC addresses must be advertised into IS-IS.

Further verification can be performed using the following commands; note, however, that their output is not supported in the lab environment used for this example.

● show otv route

Note: MAC addresses local to the site should be reachable out a physical link or port-channel, whereas MAC addresses in other sites will be reachable out the overlay tunnel interface. The detailed advertisements in the IS-IS database are shown here. 

● show otv isis database detail

Note: Another indication that the data plane is working in OTV is to check the ARP and ICMP ND cache on the OTV Edge Devices. As a flooding optimization, the AEDs cache ARP and ND responses that they have learned so that further ARP/ND requests do not need to flood over the overlay tunnel.

● show otv arp-nd-cache
● debug otv arp-nd

OTV Packet MTU Adjustment

An important point to remember about Cisco Overlay Transport Virtualization and data plane traffic is that the OTV tunnel adds 42 bytes of overhead and does not support fragmentation. This means that if the DCI does not support Jumbo Frames, it is possible that traffic flows that work over other interconnects such as dark fiber won’t work over OTV.

An ICMP ping with the Don't Fragment bit set works up to a payload of 1430 bytes. With the 8-byte ICMP header and the 20-byte IP header, the total IP packet length is 1458 bytes; adding the 42 bytes of OTV overhead brings the total to 1500 bytes, which is the default MTU.

A ping with an ICMP payload of 1431 bytes is dropped, because the OTV edge device silently discards the packet instead of fragmenting it. To increase the payload size that the tunnel supports, the DCI links (the Join Interface and everything else in the transit path) must support Jumbo Frames. On Nexus, this is a single command at the interface level, shown here.

NXOS01:


!
interface Ethernet2/1
 mtu 9216
!
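
With jumbo frames enabled on the DCI path, a larger don't-fragment ping should now succeed end to end; a sketch assuming VPCS supports the -l (packet size) and -D (don't fragment) options:

VPC01> ping 10.0.0.2 -D -l 1472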

Limitations of Overlay Transport Virtualization (OTV)

While Overlay Transport Virtualization (OTV) offers several advantages for connecting data centers, it also has some limitations that organizations should consider:

1. Slower Convergence: When a change occurs, all known MAC addresses may need to be flushed, requiring re-learning and signaling across the network. This process can lead to temporary "black-holing" of traffic until the new MAC addresses are recognized.

2. Limited Multicast Support: OTV does not natively support multicast replication, which can complicate scenarios where multicast traffic is required across multiple sites. 

3. Potential for Suboptimal Routing: OTV may create suboptimal routing paths, especially when dealing with virtual machines that move between data centers. 

4. Complexity in Troubleshooting: The flat and non-hierarchical nature of Ethernet addressing in OTV can make troubleshooting more challenging, especially in large topologies where identifying issues may require significant effort.

5. Loop Prevention Challenges: While OTV limits the extension of Spanning Tree Protocol (STP) across the transport infrastructure, this can potentially create undetected end-to-end loops if not managed carefully.

Overlay Transport Virtualization Use Cases

Here are some key use cases where OTV proves beneficial:

1. Data Center Interconnect (DCI): OTV is ideal for connecting multiple data centers located in different geographical areas. 

2. Cloud Services Integration: As businesses increasingly adopt cloud solutions, OTV facilitates the connection between on-premises data centers and cloud environments. 

3. Disaster Recovery Solutions: OTV enhances disaster recovery strategies by maintaining connectivity between primary and backup data centers.

4. Workload Mobility: OTV supports the dynamic relocation of virtual machines across data centers without the need for reconfiguring the network. 

5. Migration Projects: When migrating applications or services from one data center to another, OTV allows for a smooth transition. 

Summing Up!

Overlay Transport Virtualization (OTV) enables seamless Layer 2 network extension over Layer 3 infrastructures, offering benefits like simplified data center interconnectivity and improved resource utilization.

While it enhances scalability, OTV also presents challenges in configuration and management. Understanding its features, benefits, limitations, and configuration is essential for network administrators to effectively implement OTV and optimize their network architecture across geographically dispersed data centers.



FAQ

Q: How does OTV prevent Layer 2 loops?
OTV incorporates built-in mechanisms like the Authoritative Edge Device (AED) concept, which ensures that only one edge device in the OTV overlay handles specific VLANs for unicast and multicast traffic. It also performs MAC address learning only through the OTV control plane, reducing the chance of loops.

Q: Which platforms support OTV?
Supported platforms: Cisco Nexus 7000, 7700, and some Nexus 9000 series switches. Line cards: ensure you have the correct OTV-supported M-series or F-series line cards for your hardware. Software version: verify that your Cisco NX-OS version supports OTV.

Q: What is the OTV Join Interface?
The Join Interface is a Layer 3 interface on the OTV edge device that connects to the transport network. It establishes adjacency with other OTV edge devices, allowing the encapsulated Layer 2 traffic to traverse the underlying Layer 3 network.

Q: Does OTV support multicast and broadcast traffic?
Yes. OTV supports multicast and broadcast traffic using an efficient control plane mechanism. By default, OTV suppresses unnecessary broadcast and multicast flooding and sends such traffic only when necessary (e.g., ARP requests or IGMP joins).
