Sunday, July 31, 2022

Cisco ACI Fabric Access Policies (Physical) Constructs

 

Fabric Access Policies enable communication for systems that are attached to the Cisco ACI fabric.

You build fabric access policies from multiple configuration elements:

  • Pool: Defines a range of identifiers, such as VLANs
  • Physical domain: References a pool. You can think of it as a resource container
  • Attachable Access Entity Profile (AAEP): References a physical domain, and therefore specifies the VLAN pool that can be activated on an interface.
  • Interface policy: Defines a protocol or interface properties that are applied to interfaces.
  • Interface policy group: Gathers multiple interface policies into one set and binds them to an AAEP.
  • Interface profile: Chooses one or more access ports and associates them with an interface policy group.
  • Switch Profile: Chooses one or more leaf switches and associates them with an interface profile.


VLAN Pool

A pool represents a range of traffic encapsulation identifiers (for example, VLAN IDs, VNIDs, and multicast addresses). A pool is a shared resource and can be consumed by multiple domains, physical or virtual. A leaf switch does not support overlapping VLAN pools, so you must not associate overlapping VLAN pools with the same VMM domain.

When you create a VLAN pool, you must define the allocation mode the pool uses. There are two modes:

Static Allocation:

  • It requires the administrator to choose which VLAN will be used. This mode is used primarily to attach physical devices to the fabric.
  • The EPG has a relation to the domain, and the domain has a relation to the pool. The pool contains a range of encapsulations (VLANs or VXLANs). For static EPG deployment, the user defines the interface and the encapsulation. The encapsulation must be within the range of a pool that is associated with a domain with which the EPG is associated.

Dynamic Allocation:

  • It means that ACI decides which VLAN is used for a specific EPG. Most often you'll see this when integrating with a hypervisor such as VMware.
  • In this case ACI chooses the VLAN that will be used (and configures the port group on the hypervisor to use that specific VLAN). This is ideal for situations in which you don't care which VLAN carries the traffic, as long as it is mapped into the right EPG.

Note: For completeness, there are also VXLAN pools. You can use these to attach devices that support VXLAN, such as a hypervisor. Most fabrics only use VLAN pools; just be aware that VXLAN pools exist and that you can use them if required.

Steps to navigate to Access Policies and create a VLAN pool for a physical domain:

Step A: Navigate

  1. Click Fabric
  2. Click Access Policies
  3. Expand Pools by clicking the toggle arrow (>)
  4. Right-click on VLAN
  5. Click Create VLAN Pool


Step B1: Create a Static VLAN Pool and its VLAN Range

  1. Name the VLAN Pool: <User defined name>
  2. Ensure Static Allocation is selected
  3. Then click the plus sign (+) button to add your VLAN pool range
  4. VLAN Range: For example, 2900 – 2949
  5. Click Ok


Step B2: Create a Dynamic VLAN Pool and its VLAN Range

    1. Name the VLAN Pool: <User defined name>
    2. Ensure Dynamic Allocation is selected
    3. Then click the plus sign (+) button to add your VLAN pool range
    4. VLAN Range: For example, 2950 – 2999
    5. Click Ok
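
The same pools can also be created through the APIC REST API instead of the GUI. Below is a minimal Python sketch (the APIC address, credentials and pool names are placeholders I've assumed, and it uses the requests library); it posts an fvnsVlanInstP object with an fvnsEncapBlk child, which is the object model behind the GUI steps above:

```python
# Minimal sketch: create a static VLAN pool via the APIC REST API.
# Assumptions: APIC reachable at https://apic.example.com, admin/password
# credentials, the "requests" library installed, placeholder pool name.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.verify = False  # lab only: skip TLS certificate validation
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# fvnsVlanInstP is the VLAN pool object; allocMode is "static" or "dynamic"
pool = {
    "fvnsVlanInstP": {
        "attributes": {"name": "aci_p29_stat_vlp", "allocMode": "static"},
        "children": [
            {"fvnsEncapBlk": {"attributes": {"from": "vlan-2900",
                                             "to": "vlan-2949"}}}
        ],
    }
}
# The pool's DN embeds both the name and the allocation mode
r = s.post(f"{APIC}/api/mo/uni/infra/vlanns-[aci_p29_stat_vlp]-static.json",
           json=pool)
print(r.status_code)
```

For a dynamic pool you would change allocMode to "dynamic" and the -static suffix in the DN to -dynamic.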



    Physical Domain

    A domain is used to define the scope of VLANs in the Cisco ACI fabric; in other words, where and how a VLAN pool will be used.
    Domains are used to map an EPG to a VLAN pool. An EPG must be a member of a domain, and the domain must reference a VLAN pool. This makes it possible for an EPG to have a VLAN encap.

    There are several types of domains:

    • Physical domains (physDomP): Typically used for bare metal server attachment and management access.
    • Virtual domains (vmmDomP): Required for virtual machine hypervisor integration
    • External Bridged domains (l2extDomP): Typically used to connect a bridged external network trunk switch to a leaf switch in the ACI fabric.
    • External Routed domains (or L3 domains) (l3extDomP): Used to connect a router to a leaf switch in the ACI fabric. Within this domain protocols like OSPF and BGP can be used to exchange routes
    • Fibre Channel domains (fcDomP): Used to connect Fibre Channel VLANs and VSANs

     Steps to Navigate to Physical Domains and Create a Physical Domain for L2 Connections

    1. Click Fabric
    2. Click Access Policies
    3. In the left navigation pane, all the way at the bottom, expand Physical and External Domains by clicking the toggle arrow (>)
    4. Right-click on Physical Domains
    5. Click Create Physical Domain
    6. Name the Physical Domain: <User-defined Name>
    7.  In the VLAN Pool dropdown, select your VLAN Pool created in the previous section
    8.  Click Submit
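
For reference, the same physical domain can be pushed through the REST API. The sketch below (placeholder names and credentials, assuming the VLAN pool from the earlier sketch exists) creates a physDomP object and ties it to the VLAN pool with an infraRsVlanNs relation:

```python
# Minimal sketch: create a physical domain (physDomP) and link it to a VLAN pool.
# Assumptions: placeholder APIC address/credentials and the pool created earlier.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

dom = {
    "physDomP": {
        "attributes": {"name": "aci_p29_physdom"},
        "children": [
            # infraRsVlanNs ties the domain to the VLAN pool it may use
            {"infraRsVlanNs": {"attributes": {
                "tDn": "uni/infra/vlanns-[aci_p29_stat_vlp]-static"}}}
        ],
    }
}
r = s.post(f"{APIC}/api/mo/uni/phys-aci_p29_physdom.json", json=dom)
print(r.status_code)
```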

       Steps to Navigate to L3 Domains and Create a Layer 3 External Domain

      1. Click Fabric
      2. Click Access Policies
      3. In the left navigation pane, all the way at the bottom, expand Physical and External Domains by clicking the toggle arrow (>)
      4. Right-click on L3 Domains and Click Create Layer 3 Domain
      5. Name the Layer 3 Domain: <User-defined Name> (for example, aci_p29_extrtdom)
      6. Click Submit

      Attachable Access Entity Profile (AAEP)

      The AAEP is another connector. It connects the domain (and thereby the vlan and the EPG) to the Policy Group which defines the policy on a physical port. When defining an AAEP you need to specify which domains are to be available to the AAEP. These domains (and their vlans) will be usable by the physical port.

      Sometimes you need to configure a lot of EPGs on a lot of ports. Say for example you’re not doing any VMware integration, but you do need to have ESXi hosts connected to your fabric. The old way of doing this was to create trunk ports and trunk all the required vlans to the VMware host. In ACI you’d need to configure a static port to the ESXi host on all EPGs that need to be available on the ESXi host. If you’re not automating this, it could take a lot of work. Even with automation this might be a messy way to do this.

      That's why you can configure an EPG directly under the AAEP. This causes every port that is a member of that AAEP to automatically carry all the EPGs defined at the AAEP level.


      Steps to Navigate and Create AAEP

      1. Click Fabric
      2. Click Access Policies
      3. Expand Policies by clicking the toggle arrow (>)
      4. Expand Global by clicking the toggle arrow (>)
      5. Right-click on Attachable Access Entity Profiles
      6. Click Create Attachable Access Entity Profile
      For Physical Domain
      1. Name the AEP: <User-defined Name>
      2. Click the plus button (+) to add a Domain
      3. In the Domain Profile dropdown, select your Physical Domain created in the previous section:
      4.  Click Update
      5.  Click Next
      For L3 Domain
      1. Name the AEP: <User-defined Name>
      2. Click the plus button (+) to add a Domain
      3.  In the Domain Profile dropdown, select your Layer 3 Domain created in the previous section:
      4.  Click Update
      5.  Click Next
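
If you prefer the API, the sketch below shows roughly the same thing for a physical domain: an infraAttEntityP object with an infraRsDomP child pointing at the domain. Names and credentials are placeholders:

```python
# Minimal sketch: create an AAEP (infraAttEntityP) and associate a domain with it.
# Assumptions: placeholder names; the physical domain from the previous sketch exists.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

aaep = {
    "infraAttEntityP": {
        "attributes": {"name": "aci_p29_aep"},
        "children": [
            # infraRsDomP links the AAEP to a domain (physical, VMM, L2 or L3)
            {"infraRsDomP": {"attributes": {"tDn": "uni/phys-aci_p29_physdom"}}}
        ],
    }
}
r = s.post(f"{APIC}/api/mo/uni/infra/attentp-aci_p29_aep.json", json=aaep)
print(r.status_code)
```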

      Interface policy group

      The Interface Policy Group is a group of policies. These policies define the operation of the physical interface: think of settings like interface speed, CDP, BPDU handling, LACP, and more.

      This is also where the AAEP is referenced. So, the Interface Policy Group takes care of attaching the VLAN, domain and EPG to an interface through the AAEP.

      The specific policies are interface policies which are configured beforehand.

      Steps to Navigate to Interface Policy Groups and Create an Access Port Policy Group

      1. Fabric
      2. Access Policies
      3. Expand Interfaces by clicking the toggle arrow (>)
      4. Expand Leaf Interfaces by clicking the toggle arrow (>)
      5. Expand Policy Groups by clicking the toggle arrow (>)
      6. Right-click on Leaf Access Port
      7. Click Create Leaf Access Port Policy Group


      • Name the Policy Group: <User-defined name>
      • For the AEP, select aci_p29_l3_aep
      • For Link Level Policy, select aci_lab_10G
      • For CDP Policy, select aci_lab_cdp
      • For LLDP Policy, select aci_lab_lldp
      • For MCP Policy, select aci_lab_mcp
      • For L2 Interface Policy, select aci_lab_l2global
      • Scroll-down
      • Click Submit

      Similarly, we can create a Port Channel policy group that will be used as a Layer 2 connectivity policy for a single-node port channel. In ACI, each policy group for either a Port Channel or a Virtual Port Channel identifies the bundle of interfaces as a single interface policy in the fabric.
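
As a rough API equivalent of the access port policy group above, the sketch below posts an infraAccPortGrp object that references the AAEP and a few pre-created interface policies. The relation class and attribute names reflect my understanding of the ACI object model, so treat them as assumptions and verify with the APIC API Inspector; all names are the lab placeholders used above:

```python
# Minimal sketch: create a leaf access port policy group (infraAccPortGrp)
# that references the AAEP and existing interface policies. Placeholder names;
# verify relation/attribute names against your APIC version.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

polgrp = {
    "infraAccPortGrp": {
        "attributes": {"name": "aci_p29_intpolg_access"},
        "children": [
            {"infraRsAttEntP":   {"attributes": {"tDn": "uni/infra/attentp-aci_p29_aep"}}},
            {"infraRsHIfPol":    {"attributes": {"tnFabricHIfPolName": "aci_lab_10G"}}},
            {"infraRsCdpIfPol":  {"attributes": {"tnCdpIfPolName": "aci_lab_cdp"}}},
            {"infraRsLldpIfPol": {"attributes": {"tnLldpIfPolName": "aci_lab_lldp"}}},
            {"infraRsMcpIfPol":  {"attributes": {"tnMcpIfPolName": "aci_lab_mcp"}}},
        ],
    }
}
r = s.post(f"{APIC}/api/mo/uni/infra/funcprof/accportgrp-aci_p29_intpolg_access.json",
           json=polgrp)
print(r.status_code)
```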

      Interface Profile/Selector

      Interface Profiles are the way the Policy Group is attached to a switch. Part of an Interface Profile is the Interface Selector. The Interface Selector specifies the interfaces and attaches the policy group to those specific interfaces. However, it does not specify which switch(es) those interfaces belong to.

      You can have multiple interface selectors listed under a single Interface Profile. How you use them depends on the way you like to work:

      • Interface Profiles per switch
      • Interface Profiles per policy group

      The advantage of using an Interface Profile per policy group is that you can use consistent naming to map policy groups to Interface Profiles, making it easier to find the Interface Profile a policy group is attached to. However, if you have a lot of policy groups, this could cause long lists in the GUI. This way of working is better suited for automation when you're working with large fabrics.

      Steps to Create Interface Profiles

      1. Fabric
      2. Access Policies
      3. Expand Quick Start by clicking the toggle arrow (>)
      4. Right-click on Interface Configuration
      5. Click Configure Interface


      Now you can create the interface profiles for:
      • Access port Interface
      • Port-channel Interface
      • VPC Interface

      Steps to Create Access Port Interface

      1. Set the Leafs to 203
      2. Set the Interfaces to 1/29
      3. Ensure the Interface Type is set to Individual
      4. In the dropdown, select your Leaf Access Port Policy Group you created earlier: aci_p29_intpolg_access
      5. The Leaf Profile Name will be aci_p29_access_sp
      6. The Interface Profile Name will be aci_p29_acc_intf_p

      Steps to Create Port-channel Interface

        1. Set the Leafs to 205
        2. Set the Interfaces to 1/57-58
        3. Ensure the Interface Type is set to Port Channel (PC)
        4. In the dropdown, select your Port Channel Policy Group you created earlier: aci_p29_intpolg_pc
        5. The Leaf Profile Name will be aci_p29_pc_sp
        6. The Interface Profile Name will be aci_p29_pc_intf_p
        7. Click Next

        Steps to Create VPC Interface

        1. Set the Leafs to 207 - 208
        2. Set the Interfaces to 1/29
        3. Ensure the Interface Type is set to Virtual Port Channel (VPC)
        4. In the dropdown, select your VPC Port Policy Group you created earlier: aci_p29_intpolg_vpc
        5. The Leaf Profile Name will be aci_p29_vpc_sp
        6. The Interface Profile Name will be aci_p29_vpc_intf_p
        7. Click Next
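
For completeness, here is a hedged sketch of the access port interface profile as objects: an infraAccPortP profile containing an infraHPortS selector, which in turn holds the port block and the relation to the policy group. Names and port numbers are the lab placeholders from the steps above:

```python
# Minimal sketch: create an interface profile (infraAccPortP) with a port
# selector (infraHPortS) for eth1/29 that points at the access port policy group.
# Placeholder names; verify object/attribute names against your APIC version.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

intprof = {
    "infraAccPortP": {
        "attributes": {"name": "aci_p29_acc_intf_p"},
        "children": [{
            "infraHPortS": {
                "attributes": {"name": "port_1_29", "type": "range"},
                "children": [
                    # The port block selects module 1, port 29
                    {"infraPortBlk": {"attributes": {"name": "blk1",
                                                     "fromCard": "1", "toCard": "1",
                                                     "fromPort": "29", "toPort": "29"}}},
                    # Attach the interface policy group to the selected ports
                    {"infraRsAccBaseGrp": {"attributes": {
                        "tDn": "uni/infra/funcprof/accportgrp-aci_p29_intpolg_access"}}},
                ],
            }
        }],
    }
}
r = s.post(f"{APIC}/api/mo/uni/infra/accportprof-aci_p29_acc_intf_p.json", json=intprof)
print(r.status_code)
```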

        Switch Profiles

        A switch profile is the mapping between the policy model and the actual physical switch. The switch profile maps the Leaf Interface Profile, containing the interface selectors, to the physical switch. So, as soon as you apply an Interface Profile to a Switch Profile, the ports are programmed according to the policy group you defined.

        Steps to Create Switch Profiles

        1. Fabric
        2. Access Policies
        3. Expand Quick Start by clicking the toggle arrow (>)
        4. Expand the Switch policies
        5. Right-click on Profiles to create a Switch Profile
        6. Configure the Switch Profile name and assign the leaf switch and the Interface Profile created above
        7. Click Submit
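
The switch profile can be sketched the same way through the API: an infraNodeP object with a leaf selector (infraLeafS plus infraNodeBlk for node 203) and a relation to the interface profile. Again, all names and node IDs are placeholders:

```python
# Minimal sketch: create a switch (leaf) profile (infraNodeP) selecting leaf 203
# and binding the interface profile created above. Placeholder names/node IDs.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

swprof = {
    "infraNodeP": {
        "attributes": {"name": "aci_p29_access_sp"},
        "children": [
            {"infraLeafS": {
                "attributes": {"name": "leaf203", "type": "range"},
                "children": [
                    # Node block selects leaf switch ID 203
                    {"infraNodeBlk": {"attributes": {"name": "blk203",
                                                     "from_": "203", "to_": "203"}}}
                ],
            }},
            # Bind the interface profile to this switch profile
            {"infraRsAccPortP": {"attributes": {
                "tDn": "uni/infra/accportprof-aci_p29_acc_intf_p"}}},
        ],
    }
}
r = s.post(f"{APIC}/api/mo/uni/infra/nprof-aci_p29_access_sp.json", json=swprof)
print(r.status_code)
```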

        Wrapping it all up together

        So, we’ve just read that all these policies in the end configure a port with specific parameters. We’ve also read that the domain and the AAEP ensure that an EPG can be programmed onto a port. But how does the ACI fabric know which EPGs to put onto the port?

        Several options exist. The most common ones are:

        • Static configuration
        • Dynamic configuration through VMM domains

        Static configuration

        As for static configuration: you as an administrator configure static ports at the EPG level. You need to define which port (or port channel) to use and which encap must be used. Encap in this context is usually a VLAN tag but could in theory also be a VXLAN or QinQ tag.


        Another way is to attach an EPG directly onto the AAEP. This causes the EPG with the specified encap to be attached to all policy groups that are configured with this AAEP as described earlier.
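
As an illustration of a static binding pushed via the API (the tenant, application profile, EPG, node and interface below are all hypothetical placeholders), you post an fvRsPathAtt child under the EPG with the path and the encap:

```python
# Minimal sketch: statically bind an EPG to a port with a VLAN encap (fvRsPathAtt).
# Tenant/AP/EPG names, pod, node and interface are placeholders.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

binding = {
    "fvRsPathAtt": {
        "attributes": {
            # Static path: leaf 203, interface eth1/29, trunked with VLAN 2900
            "tDn": "topology/pod-1/paths-203/pathep-[eth1/29]",
            "encap": "vlan-2900",
            "mode": "regular",   # "regular" = trunk; "untagged"/"native" also exist
        }
    }
}
epg_dn = "uni/tn-example_tn/ap-example_ap/epg-example_epg"
r = s.post(f"{APIC}/api/mo/{epg_dn}.json", json=binding)
print(r.status_code)
```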

        Dynamic Configuration

        With dynamic configuration based on VMM domains, ACI automatically creates a port group in the virtual machine manager that corresponds to the EPG when the EPG is configured as a member of the VMM domain.

        Saturday, July 30, 2022

        Application Centric Infrastructure (ACI) Fabric Initialization

        How do I connect the APICs to the fabric?

        To set up the Application Centric Infrastructure (ACI) fabric, the following tasks need to be done:

        • Rack and Cable the Hardware
        • Configure each Cisco APIC's Integrated Management Controller (CIMC)
        • Check APIC firmware and software
        • Check the image type (NX-OS/Cisco ACI) and software version of your switches
        • APIC1 initial setup
        • Fabric discovery
        • Setup the remainder of APIC Cluster

        Rack and Cable the Hardware

        APIC Connectivity

        The APICs will be connected to Leaf switches. When using multiple APICs, we recommend connecting APICs to separate Leafs for redundancy purposes.

        If the APIC is an M3/L3 model, the VIC 1445 has four ports (port-1, port-2, port-3, and port-4 from left to right). Port-1 and port-2 make a single pair corresponding to eth2-1 on the APIC; port-3 and port-4 make another pair corresponding to eth2-2 on the APIC. Only a single connection is allowed for each pair. For example, you can connect one cable to either port-1 or port-2 and another cable to either port-3 or port-4, but not two cables to both ports of the same pair. All ports must be configured for the same speed, either 10G or 25G.

        Switch Connectivity

        All Leaf switches will need to connect to spine switches and vice versa. This provides your fabric with a fully redundant switching fabric.  In addition to the fabric network connections, you'll also connect redundant PSUs to separate power sources, Management Interface to your 1G out-of-band management network, and a console connection to a Terminal server (optional, but highly recommended).

        Configure each Cisco APIC's Integrated Management Controller (CIMC)

        When you first connect to the CIMC port marked "mgmt." on the rear of the appliance, it will be configured for DHCP by default. Cisco recommends that you assign a static address for this purpose to avoid any loss of connectivity or changes to address leases. You can modify the CIMC details by connecting a crash cart (physical monitor, USB keyboard and mouse) to the server and powering it on. During the boot sequence, it will prompt you to press "F8" to configure the CIMC. From here you will be presented with the CIMC configuration utility; the exact layout depends on your firmware version.

        • For the "NIC mode" we recommend using Dedicated which utilizes the dedicated "mgmt." interface in the rear of the APIC appliance for CIMC platform management traffic. 
        • Using "Shared LOM" mode which will send your CIMC traffic over the LAN on Motherboard (LOM) port along with the APICs OS management traffic.  This can cause issues with fabric discovery if not properly configured and not recommended by Cisco. 

        Aside from the IP address details, the rest of the options can be left alone unless there's a specific reason to modify them.  Once a static address has been configured you will need to Save the settings & reboot.  After a few minutes you should then be able to reach the CIMC Web Interface using the newly assigned IP along with the default CIMC credentials of admin and password.  It’s recommended that you change the CIMC default admin password after first use.

        Logging into the CIMC Web Interface

        To log into the CIMC, open a web browser to https://<CIMC_IP>. You'll need to ensure you have flash installed & permitted for the URL.  Once you've logged in with the default credentials, you'll be able to manage all the CIMC features including launching the KVM console.

        Note: Launching the KVM console will require that you have Java version 1.6 or later installed.  Depending on your client security settings, you may need to whitelist the IMC address within your local Java settings for the KVM applet to load.   Open the KVM console and you should be at the Setup Dialog for the APIC assuming the server is powered on.  If not powered up, you can do so from the IMC Web utility. 

        Check APIC firmware and software

        Equally important to note is that all your APICs must run the same software version when joining a cluster. This may require manually upgrading/downgrading your APICs prior to joining them to the fabric. Instructions on upgrading standalone APICs using KVM vMedia can be found in the "Cisco APIC Management, Installation, Upgrade, and Downgrade Guide" for your respective version.

        Switch nodes can be running any version of ACI switch image and can be upgraded/downgraded once joined to the fabric via firmware policy.

        Check the image type (NX-OS/Cisco ACI) and software version of switches

        For a Nexus 9000 series switch to be added to an ACI fabric, it needs to be running an ACI image.  Switches that are ordered as "ACI Switches" will typically be shipped with an ACI image.  If you have existing standalone Nexus 9000 switches running traditional NXOS, then you may need to install the appropriate image (For example, aci-n9000-dk9.14.0.1h.bin).  For detailed instructions on converting a standalone NXOS switch to ACI mode, please see the "Cisco Nexus 9000 Series NX-OS Software Upgrade and Downgrade Guide" on CCO for your respective version of NXOS.

        APIC1 initial setup

        Now that you have basic remote connectivity, you can complete the setup of your ACI fabric from any workstation with network access to the APIC. If the server is not powered on, do so now from the CIMC interface. The APIC will take 3-4 minutes to fully boot. The next thing we'll do is open a console session via the CIMC KVM console. Assuming the APIC has completed the boot process, it should be sitting at the prompt "Press any key to continue…". Doing so will begin the setup utility.

        From here, the APIC will guide you through the initial setup dialogue. Carefully answer each question. Some of the items configured can't be changed after initial setup, so review your configuration before submitting it.

        Fabric Name: User defined, will be the logical friendly name of your fabric.

        Fabric ID: Leave this ID as the default 1.

        Number of Controllers in fabric: Set this to the number of APICs you plan to configure. This can be increased/decreased later.

        Pod ID: The Pod ID to which this APIC is connected. If this is your first APIC or you don't have more than a single Pod installed, this will always be 1. If you are locating additional APICs across multiple Pods, you'll want to assign the appropriate Pod ID where each one is connected.

        Standby Controller: Beyond your active controllers (typically 3) you can designate additional APICs as standby.  In the event you have an APIC failure, you can promote a standby to assume the identity of the failed APIC.

        APIC-X: A special-use APIC model used for telemetry and other heavy ACI App purposes. For your initial setup this typically would not be applicable. Note: In future releases this feature may be referenced as "ACI Services Engine".

        TEP Pool: This will be a subnet of addresses used for internal fabric communication. This subnet will NOT be exposed to your legacy network unless you're deploying the Cisco AVS or Cisco ACI Virtual Edge. Regardless, our recommendation is to assign an unused subnet with a size between /16 and /21. The size of the subnet used will impact the scale of your Pod. Most customers allocate an unused /16 and move on. This value can NOT be changed once configured; modifying it requires a wipe of the fabric.

        Note: The 172.17.0.0/16 subnet is not supported for the infra TEP pool due to a conflict of address space with the docker0 interface. If you must use the 172.17.0.0/16 subnet for the infra TEP pool, you must manually configure the docker0 IP address to be in a different address space in each Cisco APIC before you attempt to put the Cisco APICs in a cluster.

        Infra VLAN: This is another important item.  This is the VLAN ID for all fabric connectivity.  This VLAN ID should be allocated solely to ACI, and not used by any other legacy device in your network.  Though this VLAN is used for fabric communication, there are certain instances where this VLAN ID may need to be extended outside of the fabric such as the deployment of the Cisco AVS/AVE.   Due to this, we also recommend you ensure the Infra VLAN ID selected does not overlap with any "reserved" VLANs found on your networks.  Cisco recommends a VLAN smaller than VLAN 3915 as being a safe option as it is not a reserved VLAN on Cisco DC platforms as of today. This value can NOT be changed once configured. Having to modify this value requires a wipe of the fabric.

        BD Multicast Pool (GIPO): Used for internal connectivity.  We recommend leaving this as the default or assigning a unique range not used elsewhere in your infrastructure. This value can NOT be changed once configured. Having to modify this value requires a wipe of the fabric.

        Once the Setup Dialogue has been completed, it will allow you to review your entries before submitting. If you need to make any changes, enter "y"; otherwise enter "n" to apply the configuration. After applying the configuration, allow the APIC 4-5 minutes to fully bring all services online and initialize the REST login services before attempting to log in through a web browser.

        Fabric discovery

        With our first APIC fully configured, now we will login to the GUI and complete the discovery process for our switch nodes.

        When logging in for the first time, you may have to accept the Cert warnings and/or add your APIC to the exception list.

        Now we'll proceed with the fabric discovery procedure.  We'll need to navigate to Fabric tab > Inventory sub-tab > Fabric Membership folder.

        From this view you are presented with your registered fabric nodes. Click on the Nodes Pending Registration tab in the work pane and we should see our first Leaf switch awaiting discovery. Note that this will be one of the Leaf switches to which the APIC is directly connected.

        To register our first node, click on the first row, then from the Actions menu (Tool Icon) select Register.

        The Register wizard will pop up and require some details to be entered including the Node ID you wish to assign, and the Node Name (hostname).

        Hostnames can be modified, but the Node ID will remain assigned until the switch is decommissioned and removed from the APIC. This information is provided to the APIC via LLDP TLVs. If a switch was previously registered to another fabric without being erased, it will never appear as an unregistered node. It's important that all switches have been wiped clean prior to discovery. It's a common practice for Leaf switches to be assigned Node IDs from 100 upward, and Spine switches to be assigned IDs from 200 upward. To accommodate your own numbering convention or larger fabrics you can implement your own scheme. The RL TEP Pool is reserved for Remote Leaf usage only and doesn't apply to local fabric-connected Leaf switches. Rack Name is an optional field.

        Once the registration details have been submitted, the entry for this leaf node will move from the Nodes Pending Registration tab to the Registered Nodes tab under Fabric Membership.  The node will take 3 to 4 minutes to complete the discovery, which includes the bootstrap process and bringing the switch to an "Active" state.  During the process, you will notice a tunnel endpoint (TEP) address gets assigned.  This will be pulled from the available addresses in your Infra TEP pool (such as 10.0.0.0/16).

        In depth, Fabric Discovery process:

        First, Cisco APIC uses LLDP neighbor discovery to discover a switch.

        After a successful discovery, the switch sends a request for an IP address via DHCP

        Cisco APIC then allocates an address from the DHCP pool. The switch uses this address as a TEP address. You can verify the allocated address from shell by using the acidiag fnvread command and by pinging the switch from the Cisco APIC.

        In the DHCP Offer packet, Cisco APIC passes the boot file information for the switch. The switch uses this information to acquire the boot file from Cisco APIC via HTTP GET to port 7777 of Cisco APIC.

        The boot file HTTP GET 200 OK response from the Cisco APIC contains the firmware that the switch will load. The switch then retrieves this file from the Cisco APIC with another HTTP GET to port 7777 on the Cisco APIC. 

        Finally, Cisco APIC initiates an encrypted TCP session, with the switch listening on TCP port 12183, to establish the policy element Intra-Fabric Messaging (IFM) channel.


        In summary, the initial steps of the discovery process are:

        • LLDP neighbor discovery
        • Cisco APIC assigns TEP address to the switch via DHCP
        • The switch downloads the boot file from Cisco APIC and performs firmware upgrade if necessary.
        • Policy element exchange via intra-fabric messaging (IFM)

        Note: Communication between the various nodes and processes in the Cisco ACI Fabric uses IFM, and IFM uses SSL-encrypted TCP communication. Each Cisco APIC and fabric node has 1024-bit SSL keys that are embedded in secure storage. The SSL certificates are signed by Cisco Manufacturing Certificate Authority (CMCA).

        In the discovery process, a fabric node is considered active when the Cisco APIC and the node can exchange heartbeats through the IFM process.

        Node status may fluctuate between several states during the fabric registration process. The states are shown in the Fabric Node Vector table; the APIC CLI command to display this table is acidiag fnvread.
        Following are the states and their descriptions:

        • Unknown – Node discovered but no Node ID policy configured
        • Undiscovered – Node ID configured but node not yet discovered
        • Discovering – Node discovered but no IP address assigned yet
        • Unsupported – Node is not a supported model
        • Disabled – Node has been decommissioned
        • Inactive – No IP connectivity to the node
        • Active – Node is active and part of the fabric

        Note: ACI uses intra-fabric messaging (IFM) packets to communicate between the different nodes, for example between leaf and spine. These IFM packets are TCP packets secured by 1024-bit SSL encryption, and the keys used for encryption are stored in secure storage. The certificates are signed by the Cisco Manufacturing Certificate Authority (CMCA). Any issues with the IFM process can prevent fabric nodes from communicating and from joining the fabric.

        After the first Leaf has been discovered and moved to an Active state, it will then discover every Spine switch it is connected to. Go ahead and register each Spine switch in the same manner.

        Since each Leaf switch connects to every Spine switch, once the first Spine completes the discovery process you should see all remaining Leaf switches pending registration. Go ahead and register all remaining nodes and wait for all switches to transition to an Active state.
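
At any point you can verify the node states programmatically instead of (or in addition to) acidiag fnvread. The sketch below (placeholder APIC address and credentials) queries the fabricNode class and prints each node's role, TEP address and fabric state:

```python
# Minimal sketch: list every fabric node and its registration state via the REST API.
# The CLI equivalent on the APIC is "acidiag fnvread". Placeholder address/credentials.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# fabricNode holds one object per switch/controller with its role, TEP and state
r = s.get(f"{APIC}/api/class/fabricNode.json")
for obj in r.json()["imdata"]:
    a = obj["fabricNode"]["attributes"]
    print(f'node-{a["id"]:>4} {a["name"]:<15} role={a["role"]:<10} '
          f'tep={a["address"]:<15} state={a["fabricSt"]}')
```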

        With all the switches online & active, our next step is to finish the APIC cluster configuration for the remaining nodes.  Navigate to System > Controllers sub menu > Controllers Folder > apic1 > Clusters as Seen by this Node folder.

        From here you will see your single APIC along with other important details such as the Target Cluster Size and Current Cluster Size.  Assuming you configured apic1 with a cluster size of 3, we'll have two more APICs to setup.

        Setup the remainder of APIC Cluster

        At this point we would want to now open the KVM console for APIC2 and begin running through the setup Dialogue just as we did for APIC1 previously.  When joining additional APICs to an existing cluster it's imperative that you configure the same Fabric Name, Infra VLAN and TEP Pool.  The controller ID should be set to ID 2.  You'll notice that you will not be prompted to configure Admin credentials.  This is expected as they will be inherited from APIC1 once you join the cluster.

        Allow APIC2 to fully boot and bring its services online. You can confirm everything was successfully configured as soon as you see the entry for APIC2 in the Active Controllers view. During this time, it will also begin syncing with APIC1's config. Allow 4-5 minutes for this process to complete. During this time, you may see the state of the APICs transition back and forth between Fully Fit and Data Layer Synchronization in Progress. Continue through the same process for APIC3, ensuring you assign the correct controller ID.

        This concludes the entire fabric discovery process.  All your switches & controllers will now be in sync and under a single pane of management.  Your ACI fabric can be managed from any APIC IP.  All APICs are active and maintain a consistent operational view of your fabric.
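
If you want to check the cluster state without the GUI, the sketch below queries what I believe is the relevant class, infraWiNode (the per-APIC appliance vector entries with a health attribute such as "fully-fit"); verify the class and attribute names against your APIC version, and treat the address and credentials as placeholders:

```python
# Minimal sketch: verify the APIC cluster state over the REST API.
# Assumption: the infraWiNode class exposes per-APIC cluster membership and health;
# confirm class/attribute names on your APIC version before relying on this.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

r = s.get(f"{APIC}/api/class/infraWiNode.json")
for obj in r.json()["imdata"]:
    a = obj["infraWiNode"]["attributes"]
    print(f'apic id={a["id"]} name={a.get("nodeName", "?")} health={a["health"]}')
```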

        The Complete Steps of IFM (Intra-Fabric Messaging) Setup

        After all of these steps are completed, the fabric is ready for production configuration.

        1. Link Layer Discovery Protocol (LLDP) Neighbor Discovery
        2. Tunnel End Point (TEP) IP address assignment to the node via DHCP
        3. Node software upgraded if necessary
        4. IS-IS adjacency established
        5. Certificate validation
        6. Start of DME processes on the switch
        7. Tunnel setup (iVXLAN)
        8. Policy Element IFM Setup

        Fabric Initialization Tasks

        • Configure APIC1
        • Add first Leaf to fabric.
        • Add all spines to fabric.
        • Add remaining leafs to fabric.
        • Add remaining APICs to fabric
        • Set up NTP
        • Configure OOB Management IP Pool
        • Configure Export Policies for Configuration and Tech Support Exports
        • Configure Firmware Policies (For Upgrades)

        Friday, July 29, 2022

        Application Centric Infrastructure (ACI) Overview

        Cisco ACI Complete Guide - Application Centric Infrastructure Deep Dive

        🎯 What You'll Master:
        Complete understanding of Cisco ACI architecture, from declarative policy models to spine-leaf topology, VXLAN overlay networks, and fabric initialization with real-world implementation

        What is Cisco ACI?

        The Paradigm Shift

        Cisco ACI represents a fundamental transformation in data center networking:

        Traditional vs Modern Approach:
        • Old: IP endpoint-based network → New: Application-based network
        • Old: Manually configured network → New: Software-based network
        • Old: Imperative configuration → New: Declarative model with Promise Theory

        Understanding Promise Theory

        What is Promise Theory?

        • Instead of configuring every single port explicitly, you describe the desired application behavior
        • ACI translates your intent from fabric-level policies down to hardware implementation
        • You define what you want to accomplish, not how to accomplish it
        🚖 Real-World Analogy:
        Think of taking a taxi. You tell the driver your destination, not every turn, which route to take, or how fast to drive. Similarly, with ACI's Promise Theory:
        • You declare: "Application A needs to communicate with Application B"
        • ACI handles: VLAN assignments, routing, ACLs, QoS, and all underlying configuration

        Core Architecture Principles

        • Separation of Control Plane and Data Plane - Centralized policy management with distributed forwarding
        • Holistic Architecture - Centralized automation with policy-driven application profiles
        • Software Flexibility + Hardware Performance - Best of both worlds for dynamic workloads

        Cisco Application Centric Infrastructure (Cisco ACI) in the data center is a holistic architecture with centralized automation and policy-driven application profiles. Cisco ACI delivers software flexibility with the scalability of hardware performance that provides a robust transport network for today's dynamic workloads.

        This system-based approach simplifies, optimizes, and accelerates the entire application deployment lifecycle across data center, WAN, access, and cloud environments. This empowers IT to be more responsive to changing business and application needs, enhancing agility and adding business value.

        Cisco ACI Key Characteristics

        • Application-Centric Fabric Connectivity:
          • Multi-tier applications
          • Traditional applications
          • Virtualized applications
        • Multivendor Support - Works with diverse hardware and software ecosystems
        • Physical and Virtual Endpoints - Seamless integration of bare-metal and virtualized workloads
        • Policy Abstraction - Simplify network configuration through high-level policies

        ACI Starts with a Better Switch – Nexus 9000

        The Cisco Nexus 9000 platform is the foundation of ACI, offering two distinct modes of operation.

        Nexus 9000 Operating Modes

        Mode 1: Standalone (NX-OS) Mode

        Characteristics:
        • Platforms: Nexus 9300 and 9500 series
        • Behavior: Functions as a traditional Nexus L2/L3 switch
        • OS: Enhanced version of NX-OS with automation capabilities
        • Features: Best-in-class efficiency, low latency, high 10G/40G port density
        • Use Case: Traditional switching with advanced programmability

        Mode 2: ACI Mode

        Characteristics:
        • Platforms: Nexus 9300, Nexus 9500 switches
        • Software: Runs "ACI version" of firmware
        • Management: Managed by APIC (Application Policy Infrastructure Controller)
        • Topology: Spine and Leaf fabric design
        • Features: Application-centric representation, profile-based deployments, advanced automation
        • Use Case: Modern data center with policy-driven networking

        ACI Network Topology

        ACI Topology is a CLOS Fabric

        The ACI fabric follows a CLOS (non-blocking) topology design for optimal performance and scalability.

        Fabric Design Rules

        • All leafs uplink to all spines with 40/100 GigE connections
        • APICs connect to leafs with redundant 10 GigE links
        • Leafs do not plug into leafs - No horizontal connections
        • Spines do not plug into spines - No horizontal connections
        • Traffic flow pattern: Host → Leaf → Spine → Leaf → Host
        • Scalability: Add more spines to scale out bandwidth

        ACI Main Components

        • Nexus 9K Spine Switches - Provide high-speed interconnection between all leaf switches
        • Nexus 9K Leaf Switches - Connect to endpoints (servers, storage, services)
        • Application Policy Infrastructure Controller (APIC) - Centralized policy and management controller

        Cisco APIC (Application Policy Infrastructure Controller)

        Cisco APIC is the brain of the ACI fabric - a policy controller that relays the intended state of policies to the fabric.

        🔑 Critical Understanding:
        • APIC is NOT in the data path - It's a management/policy plane controller
        • APIC is NOT the control plane - Control plane is distributed across the fabric
        • APIC holds the policy - It defines and pushes configuration to switches
        • Fabric continues operating even if APIC is temporarily unavailable

        APIC Key Features

        • Policy Controller - Holds and distributes the defined policies
        • Management Plane - Not in the control plane or traffic path
        • Redundant Cluster - Three or more servers in highly available configuration
        • Dual-Homed - Each APIC server connects to two leaf switches for resilience
        • Scalability-Based Sizing - Cluster requirements determined by leaf port density (Verified Scalability Guide)
        • Policy Instantiation - Translates high-level policies into switch configurations

        APIC Hardware Platform

        The Cisco APIC software runs on Cisco UCS C-Series server appliances with pre-installed software.

        • Current Models (Two Generations):
          • Generation 2: APIC-L2 (Large) and APIC-M2 (Medium) - UCS C220 M4
          • Generation 1: APIC-L1 (Large) and APIC-M1 (Medium) - UCS C220 M3
        • Management Interfaces:
          • GUI - Single pane of glass for entire topology (similar to UCS Manager)
          • CLI - Command-line interface for automation
          • APIs - RESTful APIs for programmatic access

        ACI Fabric Initialization

        ACI fabric supports automated discovery, boot, inventory, and system maintenance processes via the APIC.

        Fabric Discovery and Addressing

        The discovery process follows an automated sequence:

        1. APIC finds a leaf - Initial connection established
        2. Leaf finds the spines - Discovers all spine switches
        3. Spines find all other leafs - Complete fabric topology mapped
        4. Minimal GUI configuration - Simple setup steps required

        Additional Initialization Functions

        • Image Management - Centralized firmware distribution and upgrades
        • Topology Validation - Verifies wiring diagram and performs system checks
        • Automated Configuration - Self-configuring fabric with zero-touch provisioning
        💡 Note: More detailed fabric initialization procedures covered in advanced sections

        Spine-Leaf Topology

        The spine-leaf topology makes the fabric easier to build, test, and support. Scalability is achieved by simply adding more nodes as needed.

        Scalability Model

        • Need more ports? Add more leaf nodes for connecting hosts
        • Need more bandwidth? Add more spine nodes to increase fabric capacity
        • Predictable growth - Linear scaling with deterministic performance

        Spine-Leaf Advantages

        🎯 Key Benefits:
        • Simple and Consistent Topology - Predictable design pattern
        • Scalability - For both connectivity (ports) and bandwidth (throughput)
        • Symmetry - Optimized forwarding behavior across fabric
        • Least-Cost Design - High bandwidth at minimal cost
        • Low Latency - Maximum two hops for any host-to-host connection
        • Low Oversubscription - Predictable bandwidth availability

        Traffic Flow Characteristics

        The symmetrical topology allows for optimized forwarding behavior:

        • Any-to-Any Connectivity - All hosts can reach each other with same hop count
        • Two-Hop Maximum - Host → Leaf → Spine → Leaf → Host
        • Equal Cost Paths - Multiple paths available for load balancing
        • No Spanning Tree - All links active simultaneously

        IS-IS Fabric Infrastructure Routing

        The fabric leverages a densely tuned IS-IS environment utilizing Level 1 connections within the topology for advertising loopback addresses.

        IS-IS Role in ACI

        Primary Responsibilities:

        • Infrastructure Connectivity - Establishes routing between all fabric nodes
        • VTEP Advertisement - Advertises VXLAN Tunnel Endpoint addresses (loopbacks)
        • Multicast Trees - Computes multicast forwarding trees using FTAG (Forwarding TAG)
        • Tunnel Announcement - Announces overlay tunnels from every leaf to all other fabric nodes

        Technical Details

        • Protocol Level: IS-IS Level 1 only
        • Optimization: Tuned specifically for densely connected fabric environments
        • VTEP Usage: Loopback addresses serve as VTEPs for integrated overlay
        • Multicast FTAG: Vendor TLVs generate multicast forwarding tag trees
        🔑 Key Understanding:
        IS-IS in ACI is not used for routing application traffic. It's purely for:
        • Infrastructure connectivity between switches
        • Distributing VTEP addresses for VXLAN overlay
        • Building multicast distribution trees
        Application traffic uses the VXLAN overlay with distributed endpoint database.

        Decoupling of Endpoint Location and Policy

        The Cisco ACI fabric decouples the endpoint address from the location of that endpoint, defining endpoints by their locator or VTEP address.

        How It Works

        • Endpoints Identified - By IP or MAC address
        • Endpoint Location - Specified by VTEP address (which leaf switch)
        • Forwarding Mechanism - Occurs between VTEPs using VXLAN
        • Transport Protocol - Enhanced VXLAN header format
        • Reachability Database - Distributed database maps endpoints to VTEP locations

        Benefits of Location-Policy Separation

        Advantages:
        • Mobility - Endpoints can move without policy changes
        • Flexibility - Policy independent of physical location
        • Scalability - Efficient use of network resources
        • Simplification - Reduces configuration complexity

        Physical, Virtual and Distributed

        Multi-Hypervisor Support

        Modern data centers require support for diverse workload types:

        • Bare-Metal Servers - Physical servers directly connected
        • Virtualized Workloads - VMs running on various hypervisors
        • Containerized Applications - Modern microservices architectures
        • Mixed Environments - Combination of all above

        ACI Universal Support

        ACI supports any type of endpoint with consistent policy application:

        • Hypervisor Support:
          • VMware vSphere/ESXi
          • Microsoft Hyper-V
          • KVM (Kernel-based Virtual Machine)
          • Red Hat OpenStack
        • Bare-Metal Servers - Direct server connectivity
        • Containers - Kubernetes, Docker, OpenShift
        • Unified Policy - Same policies apply regardless of endpoint type

        Traffic Normalization

        One of ACI's powerful features is encapsulation normalization:

        How Normalization Works:
        • Incoming Traffic: Can use different encapsulations:
          • Standard VLAN 802.1Q tags
          • VXLAN IDs
          • NVGRE (Network Virtualization using Generic Routing Encapsulation)
        • ACI Normalization: Converts all traffic into Application Endpoint Groups (EPGs)
        • Unified Treatment: ACI speaks any "language" and treats all endpoints with the same policy
        • Benefit: Regardless of encapsulation type, consistent policy enforcement

        ACI Behind the Scenes - Technical Deep Dive

        ⚠️ Under the Hood:
        • Automated VXLAN Overlay - Tunnel system managed automatically
        • Layer 2 and Layer 3 Gateways - Both supported for VXLAN
        • VLANs with Local Significance - VLANs now have port-local meaning (not fabric-wide)
        • IS-IS for Underlay - IS-IS routing protocol builds the transport network
        • Leafs are VTEPs - Leaf switches serve as VXLAN Tunnel Endpoints
        • VTEP to VTEP Transport - IP transport through spines connects all VTEPs

        Important Points to Remember

        Essential ACI Concepts

        🎯 Critical Takeaways:

        Topology & Protocols:

        • No STP (Spanning Tree) - STP is not used because it blocks links; ACI uses all links simultaneously
        • ECMP (Equal Cost Multi-Pathing) - Load balances traffic between leaf switches
        • Layer 3 Fabric - ACI is fundamentally a Layer 3 fabric using IS-IS routing
        • VXLAN Overlay - Used for building the overlay network on top of IP fabric
        • LLDP Discovery - Protocol for discovering switches at Layer 2

        Addressing & Assignment:

        • Host-Based Networks - Every network in ACI is /32 (host-based routing)
        • DHCP for Switch IPs - APIC uses DHCP for allocating IPs to each switch during discovery

        Security Model:

        • Whitelist Model - By default, everything is blocked unless explicitly allowed
        • Security-First Approach - Excellent from a security perspective
        • Explicit Contracts - Communication requires defined contracts between EPGs

        Configuration & Management:

        • Object-Based Storage - Everything configured is stored as objects and policies
        • API Access - All configurations accessible using Cisco APIs
        • XML/JSON Format - Configuration stored in standard formats
        • API Configuration - Can be configured using RESTful APIs for automation (see the sketch below)
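
To make that last point concrete, here is a minimal sketch of API access (placeholder APIC address and credentials): it authenticates against /api/aaaLogin.json and then runs a class-level query that returns every tenant object in JSON:

```python
# Minimal sketch: everything in ACI is an object reachable over the REST API
# in JSON (or XML). This authenticates and lists all tenant (fvTenant) objects.
# Placeholder APIC address and credentials.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Class-level query: one call returns every object of the class, fabric-wide
r = s.get(f"{APIC}/api/class/fvTenant.json")
for obj in r.json()["imdata"]:
    print(obj["fvTenant"]["attributes"]["dn"])
```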

        Key Takeaways & Next Steps

        What You've Learned

        • ACI Fundamentals - Promise Theory, declarative model, application-centric approach
        • Nexus 9000 - Two modes (Standalone NX-OS and ACI), each with distinct use cases
        • CLOS Topology - Spine-leaf architecture with predictable performance
        • APIC Controller - Policy management plane (not data or control plane)
        • Fabric Technology - IS-IS underlay, VXLAN overlay, distributed endpoint database
        • Versatility - Support for physical, virtual, and containerized workloads

        Design Principles Summary

        Core Design Tenets:
        • Simplicity: Consistent topology pattern
        • Scalability: Add leafs for ports, spines for bandwidth
        • Performance: Low latency, high bandwidth, low oversubscription
        • Reliability: No single point of failure, APIC cluster redundancy
        • Flexibility: Multi-hypervisor, multi-vendor, multi-encapsulation
        • Security: Default-deny whitelist model with policy enforcement

        What's Next?

        Now that you understand the ACI architecture foundation, the next topics to explore include:

        • Application Endpoint Groups (EPGs) - How applications are grouped and policies applied
        • Contracts - How communication is permitted between EPGs
        • Tenants - Multi-tenancy and resource isolation
        • Bridge Domains & VRFs - Layer 2 and Layer 3 forwarding constructs
        • Fabric Access Policies - Configuring physical connectivity
        • Integration with External Networks - L3Out, L2Out configurations
        🎯 Practice Challenge:
        To solidify your understanding:
        1. Draw an ACI fabric with 2 spines and 4 leafs - identify all connections
        2. Trace a packet flow from Host A on Leaf1 to Host B on Leaf3
        3. Identify what happens if one spine fails
        4. Calculate bandwidth requirements for different oversubscription ratios
        5. Design an APIC cluster for a 50-leaf fabric

        Ready for the next level? Understanding ACI architecture is the foundation for implementing modern, automated data center networks. The next steps involve hands-on configuration of tenants, EPGs, contracts, and fabric policies to build production-ready ACI deployments.