EVPN Underlay Configuration Guide: IGP and BGP Implementation with Cisco Examples
Table of Contents
- Introduction to EVPN Configuration
- Cisco Platform Support and Vendor Compatibility
- Lab Topology and Design Principles
- IGP-Based Underlay Configuration
- Multicast Underlay for BUM Traffic
- BGP Underlay: IBGP Implementation
- EBGP Two-AS Design and Considerations
- EBGP Multi-AS Design and Route Target Challenges
- Complete Configuration Examples
- Best Practices and Recommendations
- Conclusion and Next Steps
Introduction to EVPN Configuration
After mastering the theoretical concepts of BGP EVPN, it's time to see these technologies in action through practical configuration examples. This comprehensive configuration guide demonstrates underlay preparation for EVPN deployments, covering both IGP-based and BGP-based underlay options with real-world implementation considerations.
The underlay network provides the foundation for EVPN overlays, ensuring loopback-to-loopback reachability required for VXLAN tunnel establishment. This guide covers multiple underlay design options, from simple OSPF implementations to complex multi-AS BGP designs, with detailed configuration examples and troubleshooting considerations.
Configuration Scope and Approach
While this guide uses Cisco IOS XE for demonstrations, the concepts and principles apply universally across vendors. EVPN is an open standard, ensuring configuration consistency across Cisco, Juniper, Arista, and other vendor platforms.
Cisco Platform Support and Vendor Compatibility
Understanding platform support is crucial for EVPN deployment planning. Cisco supports EVPN across multiple product lines, reflecting the technology's broad applicability:
Cisco EVPN Platform Support
- IOS XR: Service Provider platforms for large-scale deployments
- NX-OS: Data Center platforms optimized for high-performance switching
- IOS XE: Enterprise-grade operating system on Catalyst platforms
This multi-platform support aligns with EVPN's penetration across Service Provider, Data Center, and Enterprise market segments. Regardless of your deployment vertical, mastering EVPN provides significant value across diverse networking environments.
Multi-Vendor Compatibility
EVPN's open standard nature ensures interoperability across vendors:
- Juniper Networks: Comprehensive EVPN support across EX and QFX platforms
- Arista Networks: Native EVPN implementation on EOS platforms
- Multi-vendor Deployments: Standards-based interoperability for heterogeneous networks
Lab Topology and Design Principles
Our demonstration topology represents a typical leaf-spine data center architecture optimized for EVPN deployment:
Leaf Switches:
- L1: 1.1.1.3 (Loopback0)
- L2: 1.1.1.4 (Loopback0)
- L3: 1.1.1.5 (Loopback0)
Spine Switches:
- S1: 1.1.1.1 (Loopback0)
- S2: 1.1.1.2 (Loopback0)
Additional Infrastructure:
- Multicast RP: 1.1.55.x (Loopback55)
- VXLAN Source: Loopback0 addresses
Fundamental Design Principles
The underlay design follows critical principles that ensure optimal EVPN performance:
Underlay Simplicity Principle
"Keep the underlay as simple as possible." This fundamental principle guides all underlay design decisions. Avoid complex features that don't directly contribute to loopback-to-loopback reachability.
- Loopback Reachability: Primary requirement for VXLAN tunnel establishment
- Protocol Simplicity: OSPF preferred for familiarity and operational ease
- Point-to-Point Links: Eliminates DR/BDR elections and Type-2 LSAs
- Physical Interface Preference: Faster convergence compared to SVI implementations
IGP-Based Underlay Configuration
OSPF represents the most common underlay protocol choice due to operational familiarity and simplicity. The configuration focuses on advertising loopback addresses and establishing efficient convergence characteristics.
Basic OSPF Configuration
The fundamental OSPF configuration requires minimal complexity while ensuring optimal performance:
! Loopback Interface
interface Loopback0
ip address 1.1.1.3 255.255.255.255
ip ospf 1 area 0
! Physical Interfaces (Point-to-Point)
interface GigabitEthernet0/0/1
ip address 10.1.13.3 255.255.255.0
ip ospf network point-to-point
ip ospf 1 area 0
interface GigabitEthernet0/0/2
ip address 10.1.23.3 255.255.255.0
ip ospf network point-to-point
ip ospf 1 area 0
! OSPF Process
router ospf 1
router-id 1.1.1.3
Ingress Replication Compatibility
For ingress replication BUM traffic handling, the basic IGP configuration provides sufficient functionality:
Ingress Replication Requirements
- Loopback Reachability: OSPF provides necessary routing information
- No Additional Configuration: Ingress replication works with basic underlay
- Overlay Handles BUM: EVPN control plane manages BUM traffic replication
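As a rough illustration of what ingress (head-end) replication does, the sketch below — plain Python with hypothetical names, not vendor code — unicasts one copy of a BUM frame to each remote VTEP loopback:

```python
def ingress_replicate(frame: bytes, remote_vteps: list[str]) -> list[tuple[str, bytes]]:
    """Head-end replication: the ingress VTEP sends one unicast
    VXLAN-encapsulated copy of the BUM frame per remote VTEP."""
    return [(vtep, frame) for vtep in remote_vteps]

# Leaf L1 replicating a broadcast frame toward the other leaves' Loopback0 addresses
copies = ingress_replicate(b"\xff" * 64, ["1.1.1.4", "1.1.1.5"])
print(len(copies))  # one copy per remote VTEP
```

The cost is bandwidth on the ingress leaf (N-1 copies), which is exactly why larger fabrics consider a multicast underlay instead.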
Convergence Optimization
Enhanced convergence capabilities can be implemented through additional mechanisms:
- BFD (Bidirectional Forwarding Detection): Sub-second failure detection
- OSPF Timers: Tuned hello and dead intervals for specific requirements
- Physical Interface Benefits: Immediate link-down detection
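For a sense of the convergence math: BFD declares a neighbor down after a run of missed control packets, so detection time is simply the negotiated interval times the multiplier. The values below are typical illustrative settings, not recommendations:

```python
def bfd_detection_time_ms(tx_interval_ms: int, multiplier: int) -> int:
    """BFD detection time: the neighbor is declared down after
    `multiplier` consecutive missed packets at `tx_interval_ms` spacing."""
    return tx_interval_ms * multiplier

# Common aggressive settings: 300 ms interval with a multiplier of 3
print(bfd_detection_time_ms(300, 3))  # 900 ms detection
```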
Multicast Underlay for BUM Traffic
When implementing multicast underlay for BUM traffic handling, additional configuration is required while maintaining underlay simplicity principles.
Multicast Architecture Components
The multicast underlay introduces dedicated infrastructure for efficient BUM traffic distribution:
Multicast Complexity Consideration
If you lack multicast experience, stick with ingress replication for lab environments. The multicast underlay, while powerful, introduces operational complexity that may not justify the benefits in smaller deployments.
- Rendezvous Points (RP): Configured on spine switches (1.1.55.1, 1.1.55.2)
- MSDP (Multicast Source Discovery Protocol): Enables RP redundancy
- PIM Sparse Mode: Efficient multicast distribution mechanism
- Static RP Configuration: Simplified RP assignment for operational ease
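With a multicast underlay, each L2VNI is associated with an underlay multicast group. The mapping scheme below is purely illustrative — the group block and the modulo mapping are local design choices, not a standard:

```python
import ipaddress

def vni_to_group(vni: int, base: str = "239.1.1.0") -> str:
    """Illustrative L2VNI -> multicast group mapping: offset the VNI into
    a /24 block of the administratively scoped 239.0.0.0/8 range."""
    return str(ipaddress.ip_address(base) + (vni % 256))

print(vni_to_group(10100))  # a deterministic group per VNI within the chosen block
```

Whatever scheme you choose, document it: troubleshooting BUM delivery is far easier when the VNI-to-group mapping is predictable.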
Leaf Switch Multicast Configuration
! Enable IP Multicast Routing
ip multicast-routing
! Loopback Interface (NVE Source)
interface Loopback0
ip pim sparse-mode
! Physical Uplink Interfaces
interface GigabitEthernet0/0/1
ip pim sparse-mode
interface GigabitEthernet0/0/2
ip pim sparse-mode
! Static RP Configuration
! (with two plain rp-address statements, IOS selects the highest RP
! address; split the group range or use Anycast RP for true redundancy)
ip pim rp-address 1.1.55.1
ip pim rp-address 1.1.55.2
Spine Switch Multicast Configuration
! Enable IP Multicast Routing
ip multicast-routing
! RP Loopback Interface
interface Loopback55
ip address 1.1.55.1 255.255.255.255
ip pim sparse-mode
ip ospf 1 area 0
! Physical Interfaces
interface GigabitEthernet0/0/1
ip pim sparse-mode
! MSDP Peer Configuration (RP Redundancy)
ip msdp peer 1.1.55.2 connect-source Loopback55
ip msdp originator-id Loopback55
BGP Underlay: IBGP Implementation
While IGP handles underlay reachability in most deployments, some customers prefer BGP-only architectures. IBGP provides one approach to BGP-based underlay implementation.
IBGP Design Considerations
IBGP underlay implementation follows traditional BGP scalability principles:
IBGP Architecture Characteristics
- Route Reflector Design: Spine switches serve as route reflectors
- Full Mesh Avoidance: Eliminates scalability limitations of full mesh IBGP
- Overlay Integration: Same BGP process handles both underlay and overlay
- Hybrid Approach: Often combined with IGP for loopback reachability
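The route-reflector behavior referenced above can be summarized in a few lines. This is a simplified model (it ignores ORIGINATOR_ID and CLUSTER_LIST loop checks), with hypothetical names:

```python
def rr_readvertise_to(learned_from_client: bool, peers: dict[str, bool]) -> list[str]:
    """Simplified iBGP route-reflector rule: a route learned from a
    client is reflected to all other iBGP peers; a route learned from
    a non-client is reflected only to clients.
    `peers` maps peer name -> is_client."""
    return [p for p, is_client in peers.items()
            if learned_from_client or is_client]

# Spine S1 learns a route from client L1 and reflects it to the other leaf clients
print(rr_readvertise_to(True, {"L2": True, "L3": True}))  # ['L2', 'L3']
```

This is what lets the leaves peer only with the two spines instead of maintaining a full iBGP mesh among themselves.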
IBGP Configuration Example
router bgp 65001
bgp router-id 1.1.1.3
bgp log-neighbor-changes
! Spine Neighbors
neighbor 1.1.1.1 remote-as 65001
neighbor 1.1.1.1 update-source Loopback0
neighbor 1.1.1.2 remote-as 65001
neighbor 1.1.1.2 update-source Loopback0
! L2VPN EVPN Address Family
address-family l2vpn evpn
neighbor 1.1.1.1 activate
neighbor 1.1.1.1 send-community extended
neighbor 1.1.1.2 activate
neighbor 1.1.1.2 send-community extended
EBGP Two-AS Design and Considerations
EBGP two-AS design represents a popular architecture where spine switches operate in one AS while all leaf switches share a different AS. This design requires specific considerations to function correctly with EVPN.
Two-AS Architecture Overview
Spine Switches: AS 65001
- S1: 1.1.1.1 (AS 65001)
- S2: 1.1.1.2 (AS 65001)
Leaf Switches: AS 65002
- L1: 1.1.1.3 (AS 65002)
- L2: 1.1.1.4 (AS 65002)
- L3: 1.1.1.5 (AS 65002)
Critical EBGP Considerations
EBGP two-AS design requires three critical configuration adjustments to function with EVPN:
Consideration 1: AS Path Loop Prevention
Problem: When a spine re-advertises one leaf's routes to the other leaves, each receiving leaf finds its own AS (65002) in the AS path and discards the route as a loop. Solution: Configure allowas-in on the leaves (or as-override on the spines) so these routes are accepted.
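The loop-prevention behavior, and what allowas-in relaxes, can be modeled in a couple of lines (a simplified sketch, not router code):

```python
def accept_ebgp_route(as_path: list[int], local_as: int, allowas_in: int = 0) -> bool:
    """eBGP loop prevention: discard a route if the local AS appears in
    the AS path more times than allowas-in permits (default: not at all)."""
    return as_path.count(local_as) <= allowas_in

# Leaf L2 (AS 65002) receives a route originated by L1 (also AS 65002)
# via spine S1 (AS 65001): AS path = [65001, 65002]
print(accept_ebgp_route([65001, 65002], 65002))                # False: discarded
print(accept_ebgp_route([65001, 65002], 65002, allowas_in=1))  # True: accepted
```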
Consideration 2: Next-Hop Preservation
Problem: EBGP rewrites the next hop to the advertising router (the spine), but VXLAN tunnels must terminate on the originating leaf's VTEP loopback. Solution: Configure next-hop unchanged on the spines so the original next-hop address is preserved for proper tunnel establishment.
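A minimal model of the next-hop behavior (the route dictionary and field names are hypothetical):

```python
def ebgp_advertise(route: dict, advertising_router_ip: str,
                   next_hop_unchanged: bool = False) -> dict:
    """By default, eBGP rewrites NEXT_HOP to the advertising router;
    'next-hop unchanged' preserves the originating VTEP's address."""
    out = dict(route)
    if not next_hop_unchanged:
        out["next_hop"] = advertising_router_ip
    return out

route = {"nlri": "MAC/IP from L1", "next_hop": "1.1.1.3"}  # L1's VTEP loopback
print(ebgp_advertise(route, "1.1.1.1")["next_hop"])                           # 1.1.1.1: tunnel breaks
print(ebgp_advertise(route, "1.1.1.1", next_hop_unchanged=True)["next_hop"])  # 1.1.1.3: tunnel to L1
```

Without the knob, every leaf would try to build its VXLAN tunnel to the spine, which has no VTEP at all.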
Consideration 3: Route Target Filtering
Problem: Spine switches have no VRFs and therefore no import route targets; by default, BGP discards VPN routes whose route targets match no local import policy. Solution: Configure no bgp default route-target filter on the spines so all EVPN routes are retained and re-advertised.
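The default route-target filter on the spines behaves roughly like this (simplified sketch):

```python
def accept_vpn_route(route_rts: set[str], local_import_rts: set[str],
                     default_rt_filter: bool = True) -> bool:
    """With the default RT filter, a BGP speaker keeps a VPN route only
    if one of its route targets matches a locally imported RT. Spines
    have no VRFs (no import RTs), so the filter must be disabled."""
    if not default_rt_filter:
        return True
    return bool(route_rts & local_import_rts)

rts = {"65002:100"}
print(accept_vpn_route(rts, set()))                           # False: spine drops the route
print(accept_vpn_route(rts, set(), default_rt_filter=False))  # True: kept and re-advertised
```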
EBGP Two-AS Configuration
router bgp 65002
bgp router-id 1.1.1.3
neighbor 1.1.1.1 remote-as 65001
neighbor 1.1.1.1 ebgp-multihop 2
neighbor 1.1.1.1 update-source Loopback0
address-family l2vpn evpn
neighbor 1.1.1.1 activate
neighbor 1.1.1.1 allowas-in 1
neighbor 1.1.1.1 send-community extended
Spine Switch Configuration:
router bgp 65001
bgp router-id 1.1.1.1
no bgp default route-target filter
neighbor 1.1.1.3 remote-as 65002
neighbor 1.1.1.3 ebgp-multihop 2
neighbor 1.1.1.3 update-source Loopback0
address-family l2vpn evpn
neighbor 1.1.1.3 activate
neighbor 1.1.1.3 route-map NH-UNCHANGED out
neighbor 1.1.1.3 send-community extended
route-map NH-UNCHANGED permit 10
set ip next-hop unchanged
EBGP Multi-AS Design and Route Target Challenges
EBGP multi-AS design places each leaf switch in its own AS, creating additional complexity related to auto-derived route target values. This design, while less common, requires specific configuration to function correctly.
Multi-AS Architecture Characteristics
Spine Switches: AS 65001
- S1: 1.1.1.1 (AS 65001)
- S2: 1.1.1.2 (AS 65001)
Individual Leaf ASNs:
- L1: 1.1.1.3 (AS 65002)
- L2: 1.1.1.4 (AS 65003)
- L3: 1.1.1.5 (AS 65004)
Route Target Auto-Derivation Challenge
The multi-AS design introduces a critical challenge related to route target generation:
Auto-Derived Route Target Problem
Most platforms auto-derive route-target values as ASN:EVI. In multi-AS designs, each leaf therefore generates a different route target for the same EVPN instance (e.g., 65002:100 vs. 65003:100), preventing proper route import/export between leaves.
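The mismatch is easy to see if you compute the auto-derived values yourself (format ASN:EVI):

```python
def auto_derived_rt(asn: int, evi: int) -> str:
    """Auto-derived route target: <ASN>:<EVI>."""
    return f"{asn}:{evi}"

# The same EVPN instance (EVI 100) on leaves in different ASes:
print(auto_derived_rt(65002, 100))  # 65002:100 on L1
print(auto_derived_rt(65003, 100))  # 65003:100 on L2 -- does not match L1's RT,
                                    # so L2 will not import L1's routes
```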
Route Target Rewrite Solution
The solution is the rewrite-evpn-rt-asn feature, which normalizes route-target values as EVPN routes are exchanged:
router bgp 65002
bgp router-id 1.1.1.3
neighbor 1.1.1.1 remote-as 65001
neighbor 1.1.1.1 ebgp-multihop 2
neighbor 1.1.1.1 update-source Loopback0
address-family l2vpn evpn
neighbor 1.1.1.1 activate
neighbor 1.1.1.1 rewrite-evpn-rt-asn
neighbor 1.1.1.1 send-community extended
Spine Multi-AS Configuration:
router bgp 65001
bgp router-id 1.1.1.1
no bgp default route-target filter
neighbor 1.1.1.3 remote-as 65002
neighbor 1.1.1.3 ebgp-multihop 2
neighbor 1.1.1.3 update-source Loopback0
address-family l2vpn evpn
neighbor 1.1.1.3 activate
neighbor 1.1.1.3 rewrite-evpn-rt-asn
neighbor 1.1.1.3 route-map NH-UNCHANGED out
neighbor 1.1.1.3 send-community extended
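Functionally, the rewrite replaces the ASN portion of the received route target with the local AS. A simplified model of that behavior:

```python
def rewrite_evpn_rt_asn(rt: str, local_as: int) -> str:
    """Simplified model of rewrite-evpn-rt-asn: replace the ASN half of
    an <ASN>:<EVI> route target with the receiving router's local AS."""
    _, evi = rt.split(":")
    return f"{local_as}:{evi}"

# L2 (AS 65003) receives L1's auto-derived RT and normalizes it:
print(rewrite_evpn_rt_asn("65002:100", 65003))  # 65003:100 -- now matches L2's own auto-derived RT
```

With the EVI preserved and only the ASN rewritten, every leaf ends up importing routes under its own auto-derived value, and no manual route-target configuration is needed.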
Alternative: Manual Route Target Configuration
If the platform doesn't support route target rewriting, manual configuration provides an alternative:
- Manual RT Configuration: Explicitly configure identical route targets on all leaves
- Configuration Overhead: Increases management complexity similar to L3VPN implementations
- Operational Trade-off: More configuration vs. automatic derivation benefits
Complete Configuration Examples
This section provides complete, deployable configuration examples for each underlay design option, ready for lab implementation and testing.
Complete OSPF + Ingress Replication Configuration
hostname Leaf1
! Loopback Interface
interface Loopback0
description VTEP Source
ip address 1.1.1.3 255.255.255.255
ip ospf 1 area 0
! Uplink Interfaces
interface GigabitEthernet0/0/1
description Link to Spine1
ip address 10.1.13.3 255.255.255.0
ip ospf network point-to-point
ip ospf 1 area 0
no shutdown
interface GigabitEthernet0/0/2
description Link to Spine2
ip address 10.1.23.3 255.255.255.0
ip ospf network point-to-point
ip ospf 1 area 0
no shutdown
! OSPF Process
router ospf 1
router-id 1.1.1.3
! BGP for EVPN Overlay
router bgp 65001
bgp router-id 1.1.1.3
bgp log-neighbor-changes
no bgp default ipv4-unicast
neighbor SPINE-PEERS peer-group
neighbor SPINE-PEERS remote-as 65001
neighbor SPINE-PEERS update-source Loopback0
neighbor SPINE-PEERS send-community extended
neighbor 1.1.1.1 peer-group SPINE-PEERS
neighbor 1.1.1.2 peer-group SPINE-PEERS
address-family l2vpn evpn
neighbor SPINE-PEERS activate
Complete EBGP Two-AS Configuration
router bgp 65002
bgp router-id 1.1.1.3
bgp log-neighbor-changes
no bgp default ipv4-unicast
neighbor SPINE-PEERS peer-group
neighbor SPINE-PEERS remote-as 65001
neighbor SPINE-PEERS ebgp-multihop 2
neighbor SPINE-PEERS update-source Loopback0
neighbor SPINE-PEERS send-community extended
neighbor 1.1.1.1 peer-group SPINE-PEERS
neighbor 1.1.1.2 peer-group SPINE-PEERS
address-family l2vpn evpn
neighbor SPINE-PEERS activate
neighbor SPINE-PEERS allowas-in 1
Spine EBGP Configuration:
router bgp 65001
bgp router-id 1.1.1.1
bgp log-neighbor-changes
no bgp default ipv4-unicast
no bgp default route-target filter
neighbor LEAF-PEERS peer-group
neighbor LEAF-PEERS remote-as 65002
neighbor LEAF-PEERS ebgp-multihop 2
neighbor LEAF-PEERS update-source Loopback0
neighbor LEAF-PEERS send-community extended
neighbor 1.1.1.3 peer-group LEAF-PEERS
neighbor 1.1.1.4 peer-group LEAF-PEERS
neighbor 1.1.1.5 peer-group LEAF-PEERS
address-family l2vpn evpn
neighbor LEAF-PEERS activate
neighbor LEAF-PEERS route-map NH-UNCHANGED out
route-map NH-UNCHANGED permit 10
set ip next-hop unchanged
Best Practices and Recommendations
Based on real-world deployment experience, these best practices ensure optimal EVPN underlay performance and operational simplicity:
Underlay Design Recommendations
Primary Recommendations
- OSPF + Ingress Replication: The right choice for the large majority of deployments
- Physical Interface Preference: Faster convergence than SVI-based designs
- Point-to-Point OSPF: Eliminates DR/BDR election overhead
- Underlay Simplicity: Avoid complex features that don't provide value
BGP Underlay Considerations
| Design Option | Use Case | Complexity | Recommendation |
|---|---|---|---|
| OSPF + IBGP | Traditional DC/Enterprise | Low | Preferred |
| EBGP Two-AS | BGP-only preference | Medium | Acceptable |
| EBGP Multi-AS | Specialized requirements | High | Avoid unless required |
Operational Best Practices
- Lab Validation: Test configurations in lab environments before production deployment
- Ping Connectivity: Verify loopback-to-loopback reachability before overlay configuration
- Version Compatibility: Use recent IOS XE releases for optimal EVPN feature support
- Documentation: Maintain clear AS number assignments and IP addressing schemes
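The "verify loopback reachability" step can be expressed as a simple check of every remote VTEP /32 against the local routing table. The sketch below mocks the RIB as a set of prefixes; the names and the RIB contents are hypothetical:

```python
# All VTEP and spine Loopback0 addresses from the lab topology
EXPECTED_LOOPBACKS = {"1.1.1.1", "1.1.1.2", "1.1.1.3", "1.1.1.4", "1.1.1.5"}

def missing_loopbacks(rib_prefixes: set[str], local_loopback: str) -> set[str]:
    """Return remote loopbacks that have no /32 route in the RIB."""
    remote = EXPECTED_LOOPBACKS - {local_loopback}
    return {ip for ip in remote if f"{ip}/32" not in rib_prefixes}

# Mocked RIB on L1, missing one remote loopback:
rib = {"1.1.1.1/32", "1.1.1.2/32", "1.1.1.4/32"}
print(missing_loopbacks(rib, "1.1.1.3"))  # {'1.1.1.5'} -- fix the underlay before configuring the overlay
```

In a live lab the same idea is a loop of pings sourced from Loopback0; any gap here will surface later as a VXLAN tunnel that never comes up.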
Conclusion and Next Steps
This comprehensive configuration guide provides the foundation for implementing EVPN underlay networks across various design options. The key to successful EVPN deployment lies in underlay simplicity and reliable loopback-to-loopback connectivity.
Implementation Pathway
- Start Simple: Begin with OSPF + ingress replication for initial deployments
- Validate Connectivity: Ensure loopback reachability before overlay configuration
- Progress Methodically: Add complexity only when business requirements justify it
- Test Thoroughly: Validate all configuration options in lab environments
The configuration examples and best practices presented here reflect real-world deployment experience across multiple customer environments. Whether you choose IGP-based or BGP-based underlay designs, the fundamental principle remains: establish reliable, simple underlay connectivity to enable robust EVPN overlay services.
Next Steps
With the underlay foundation properly configured, the next phase involves EVPN overlay configuration, including L2VNI and L3VNI implementation, route type advertisement, and service validation. These topics will be covered in subsequent configuration modules.
Thank you for following this comprehensive EVPN underlay configuration guide. The solid foundation established here will support robust overlay services and ensure optimal EVPN performance in production deployments.