What is Cisco ACI?

Cisco ACI is a software-defined networking (SDN) technology from Cisco that enables centralized application policy enforcement across a data center network.
16 min read
July 24, 2024

The quick download

Cisco ACI replaces per-device network configuration with centralized, policy-driven control, and it’s built for data centers that have outgrown manual management.

  • The APIC controller defines and manages your entire network and security policy. Leaf and spine switches enforce it, which is what makes ACI scale where traditional networking breaks.

  • ACI blocks communication between EPGs by default unless a contract explicitly permits the flow. That gives you microsegmentation, audit-ready evidence for PCI DSS and HIPAA, and a real barrier to lateral movement after a breach.

  • Virtual APIC deployments carry your EPG and contract rules into AWS, Azure, and GCP, so workloads keep their policy when they move.

  • Pull fabric health, node health, tenant and EPG health, faults, interface telemetry, and contract statistics into your observability stack.

Cisco Application Centric Infrastructure (ACI) is a data center networking system that lets you define network and security policy once, then automatically enforce it across every switch in your fabric. 

Instead of configuring each device by hand, you describe what your applications need, and ACI handles the rest.

If you run a data center at scale, you’ve probably reached the limits of manual switch configuration. ACI is Cisco’s answer to that. 

Let’s see how it works, where it fits, and what your organization needs to run it.

What are the key Cisco ACI terms you need to know?

These eight terms define how the policy model is structured and how it gets enforced across the ACI fabric:

Term | What it means in ACI
ACI Fabric | The collective term for the complete leaf-spine network, including all switches and the APIC, managed as a single system.
Tenant | A logical container for policies, applications, and resources belonging to one customer, team, or environment. Tenants are isolated from each other by default, with optional controlled communication via contracts.
VRF (Virtual Routing and Forwarding) | A Layer 3 routing context that allows multiple independent routing tables to exist on the same physical network device. In Cisco ACI, a VRF exists within a tenant and provides traffic isolation by keeping routing domains separate. One tenant can have multiple VRFs to further isolate traffic segments.
Bridge Domain | A Layer 2 broadcast domain within a VRF. Functions like a VLAN but decoupled from physical infrastructure and capable of spanning the entire fabric.
EPG (Endpoint Group) | A logical group of similar endpoints (servers, VMs, containers) that share the same security and QoS policies.
Contract | The policy that controls what traffic is allowed between EPGs.
Application Profile | A container for a set of EPGs and the contracts between them, representing the full network requirements of one application.
VXLAN / Overlay | The encapsulation protocol ACI uses to carry tenant traffic across the leaf-spine fabric.

Why do enterprises use Cisco ACI?

Manual network configuration doesn’t scale, and the gaps it creates show up as security failures and compliance violations. If you’re evaluating ACI, one or more of these four pressures is probably why: 

  • Hybrid cloud sprawl: Workloads now span on-premises hardware, colocation facilities, and public cloud accounts. Consistent policies across all of them are difficult to maintain manually at scale.
  • Microsegmentation requirements: Zero-trust security models and regulatory frameworks demand traffic isolation between applications at a granular level. Traditional VLAN-based segmentation becomes difficult to scale because of limited flexibility and high operational overhead.
  • Manual configuration bottlenecks: As data centers grow, per-device CLI changes become the main source of human error and the slowest step in any deployment.
  • Application velocity: Development teams deploy faster than network teams can configure infrastructure by hand. ACI closes that gap through APIs and automated policy enforcement.

What does Cisco ACI architecture look like?

ACI runs on three components: the APIC controller, leaf switches, and spine switches. Together they form the ACI fabric. 

APIC: the central controller

The Application Policy Infrastructure Controller (APIC) is where policy is defined and managed. It distributes configuration to leaf and spine switches, exposes a REST API for automation tools, and provides the web interface for manual management. 

The APIC runs as a clustered system (typically three or more nodes) for redundancy and high availability. It sits outside the data path: it manages the network but doesn’t carry production traffic.

Leaf switches: the access layer

Leaf switches connect directly to servers, VMs, storage, and external networks. Every endpoint attaches to a leaf. Leaf switches enforce contracts at line rate using policy distributed by the APIC.

Physical leaf switches are Cisco Nexus 9000 Series switches. However, ACI also integrates with environments such as VMware and Nutanix through virtual switching and hypervisor integration.

Spine switches: the fabric backbone

Spine switches interconnect all leaf switches and carry inter-leaf traffic. Every leaf connects to every spine. This creates a high-performance, low-latency topology without a spanning tree. 

Spines never connect directly to endpoints; they exist solely to move traffic between leaf switches predictably.

How does Cisco ACI work?

ACI uses a declarative model: you define the connectivity and security rules an application needs, and the system translates those definitions into switch-level configuration automatically. This flow from intent to enforcement has five steps:

  1. Define requirements: Describe which endpoint groups need to communicate, which must stay isolated, and what traffic rules apply.
  2. Create a policy in the APIC: Those requirements become tenants, VRFs, bridge domains, EPGs, and contracts inside the controller. This can happen through the GUI or via the APIC’s REST API from tools like Terraform or Ansible.
  3. Map endpoints to EPGs: When a server or VM connects to the fabric, the APIC places it in the correct Endpoint Group based on its attributes (such as VLAN, IP, or VM metadata). Policy applies immediately.
  4. Enforce across the fabric: Leaf switches enforce contracts between EPGs at line rate over a VXLAN overlay. Traffic not explicitly permitted by a contract is blocked.
  5. Adapt when applications change: When a workload moves or a policy update is needed, the APIC pushes the change to all relevant switches automatically.
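Because the APIC exposes everything through its REST API, steps 1 and 2 can be scripted. As a minimal sketch (standard library only; the APIC address, credentials, and tenant name are placeholders, not real values), this logs in and pushes a tenant definition under the policy universe:

```python
import json
import urllib.request
import http.cookiejar

def login_payload(user: str, pwd: str) -> dict:
    """Body for POST /api/aaaLogin.json; the APIC replies with a session token cookie."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}

def tenant_payload(name: str) -> dict:
    """Body for POST /api/mo/uni.json to create (or update) a tenant (fvTenant object)."""
    return {"fvTenant": {"attributes": {"name": name, "status": "created,modified"}}}

def post_json(opener, url: str, body: dict) -> dict:
    """POST a JSON body and decode the APIC's JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with opener.open(req) as resp:
        return json.loads(resp.read())

def create_tenant(apic: str, user: str, pwd: str, tenant: str) -> dict:
    """Log in, then push the tenant definition. The cookie jar carries the auth token."""
    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    post_json(opener, f"{apic}/api/aaaLogin.json", login_payload(user, pwd))
    return post_json(opener, f"{apic}/api/mo/uni.json", tenant_payload(tenant))
```

In practice you’d call something like `create_tenant("https://apic.example.com", "admin", "password", "demo-app")`; tools like Terraform and Ansible wrap these same API calls.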

What problems does Cisco ACI solve?

Traditional networking creates four problems at scale, and ACI is designed to address each of these challenges: 

  1. Slow provisioning
  2. Inconsistent security
  3. High operational overhead
  4. No consistent way to extend policy into the cloud

Slow provisioning

In a traditional network, each new application requires manual configuration across multiple devices. In ACI, you define policy centrally in the APIC and it propagates automatically. Because of this, teams that previously spent days on network changes can provision in minutes through API calls. 

Inconsistent security

ACI enforces a zero-trust allow-list model by default. Traffic between EPGs is blocked unless a contract explicitly permits it. If one EPG is compromised, the attacker can’t reach other EPGs unless explicitly permitted by a contract. 

Security policy is defined centrally and enforced at the leaf switch, at line rate, with no performance penalty. ACI also supports line-rate encryption, role-based access control for APIC access, and built-in intrusion detection and prevention. 

For regulated industries, the contract model provides clear audit evidence of what traffic is permitted between which systems.

High operational overhead

Traditional networks require administrators to log into individual devices to make changes, which multiplies effort and error rate as environments grow. ACI centralizes all configuration in the APIC. When something degrades, the APIC shows health scores across the entire fabric rather than requiring SSH sessions to each device. 

Cloud policy gaps

ACI extends its policy model to AWS, Microsoft Azure, and Google Cloud Platform using cloud-integrated controllers and services (such as Cloud APIC). The EPG and contract model you use on-premises applies to cloud workloads too. 

When a workload migrates, its policy travels with it; you don’t have to rewrite rules for each target platform.

What Cisco ACI deployment models are available?

Cisco ACI offers several deployment models, each targeting a different scale or infrastructure type. The right choice depends on how many sites you operate, where your workloads run, and how large a footprint you need:

  1. Single-site ACI: One APIC cluster manages one leaf-spine fabric in a single data center. It’s the standard entry point for teams new to ACI.
  2. Multi-Pod: Extends a single ACI fabric across multiple physical pods within the same or nearby data centers, connected by an inter-pod network (IPN). 
  3. Multi-Site: Connects independent ACI fabrics across geographically separate data centers using the Nexus Dashboard Orchestrator. Each site has its own APIC cluster, but policy is centrally orchestrated and deployed consistently across sites. It’s the right choice for active-active and disaster recovery deployments.
  4. Remote Leaf: Extends the ACI fabric to a branch office or colocation facility over a standard IP network. These switches are managed by the main APIC cluster and appear as part of the primary fabric.
  5. Mini ACI Fabric: A smaller-footprint deployment for environments that need ACI’s policy model without a full leaf-spine hardware count.
  6. Cloud ACI (vAPIC): Deploys a virtual APIC in AWS or Azure to extend ACI policy to public cloud environments. It’s also available for VMware vSphere and Nutanix hypervisor environments. 

How does Cisco ACI enforce security and microsegmentation?

ACI’s security model runs on contracts between EPGs. By default, every EPG is isolated from every other one, with no implicit trust even inside the same data center. Traffic flows only when a contract explicitly permits it.

Contracts replace traditional ACLs with application-aware policy that travels with your workloads wherever they run. If your web tier needs to reach a database EPG on port 5432, you write a contract that permits exactly that flow, and nothing else gets through. 

Leaf switches enforce the policy in hardware, so your isolation doesn’t depend on a software firewall that someone can misconfigure or bypass.
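A contract like the web-to-database example above is just another object posted to the APIC. The sketch below builds the JSON body for a filter and contract that permit only TCP/5432; the tenant, contract, and filter names are illustrative, and the object classes follow ACI’s documented model (vzFilter, vzEntry, vzBrCP, vzSubj):

```python
def postgres_contract(tenant: str) -> dict:
    """JSON body for a contract permitting only TCP/5432 between EPGs.
    Posted to /api/mo/uni.json; names here are illustrative."""
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [
                # Filter: match TCP traffic to destination port 5432 only
                {"vzFilter": {
                    "attributes": {"name": "postgres"},
                    "children": [{"vzEntry": {"attributes": {
                        "name": "tcp-5432",
                        "etherT": "ip",
                        "prot": "tcp",
                        "dFromPort": "5432",
                        "dToPort": "5432",
                    }}}],
                }},
                # Contract: one subject that references the filter above
                {"vzBrCP": {
                    "attributes": {"name": "web-to-db"},
                    "children": [{"vzSubj": {
                        "attributes": {"name": "db-traffic"},
                        "children": [{"vzRsSubjFiltAtt": {
                            "attributes": {"tnVzFilterName": "postgres"},
                        }}],
                    }}],
                }},
            ],
        }
    }
```

To take effect, the web EPG consumes the contract and the database EPG provides it (via consumer/provider relations on each EPG), which this sketch omits.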

This approach aligns with zero-trust principles by enforcing segmentation and least-privilege communication. No workload inside the data center gets implicit access to any other workload. Lateral movement, which is an attacker’s primary tool after the initial breach, hits a wall at every EPG boundary.

Security teams can prove compliance by exporting the contract list from the APIC, because that list is the authoritative, complete record of what can communicate with what.

For PCI DSS and HIPAA compliance, ACI’s EPG-to-EPG contract model produces a useful audit trail. Scoped environments like cardholder data zones can be isolated at the EPG level, with contracts that permit only the minimum required flows.

What do teams need to monitor in a Cisco ACI environment?

The APIC surfaces a lot of data, and it’s tempting to watch all of it. Don’t. In our experience running monitoring for ACI environments, six key areas matter most for catching problems before they hit production:

  1. Fabric health score: The aggregate health of the entire fabric. A single degraded leaf or spine pulls this score down, making problems visible without scanning individual devices.
  2. Leaf and spine node health: CPU utilization, memory, interface error rates, and hardware faults per switch. It’s critical for capacity decisions and early detection of hardware failure.
  3. Tenant and EPG health: Policy-related faults and contract violations at the application level. These surface when unexpected traffic patterns hit the policy model, which can indicate misconfigurations or security events.
  4. Faults and events: The APIC raises faults for configuration errors, connectivity issues, and hardware problems. Filtering by severity can help you focus on what requires action.
  5. Interface telemetry: Traffic rates, error rates, and drops per interface. It’s the raw data behind capacity and troubleshooting decisions.
  6. Contract statistics: Traffic volumes between EPGs, both permitted and dropped. This makes application communication patterns visible and anomalies detectable.

Most of this data is available through the APIC’s REST API, which is how LogicMonitor pulls it into a single view alongside the rest of your stack, including your servers, storage, cloud workloads, and the applications riding on top of ACI. 
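As a sketch of what that API pull looks like, the fabric-wide health score can be read with a class query against `fabricHealthTotal` (GET {apic}/api/node/class/fabricHealthTotal.json). The sample response body below is illustrative of the reply shape, not captured from a real APIC:

```python
import json

def health_scores(response_body: str) -> list[int]:
    """Extract the current health score (0-100) from an APIC class-query reply."""
    data = json.loads(response_body)
    return [
        int(obj["fabricHealthTotal"]["attributes"]["cur"])
        for obj in data["imdata"]
    ]

# Trimmed-down example of the reply shape (illustrative values)
sample = '{"imdata": [{"fabricHealthTotal": {"attributes": {"cur": "95"}}}]}'
```

A monitoring integration polls this endpoint on an interval and alerts when the score drops below a threshold.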

That cross-layer view is what turns a fabric health score into an answer to the question you care about: is my application slow because of the network, or something else? The Cisco Nexus Dashboard, covered in the next section, handles the ACI-only view across multiple sites.

What is the Cisco Nexus Dashboard and how does it relate to ACI?

The Nexus Dashboard is Cisco’s unified operations platform above the APIC. It provides analytics, anomaly detection, and lifecycle management for ACI environments. 

The APIC manages policy and device configuration. The Nexus Dashboard provides centralized visibility, analytics, and orchestration across multiple ACI sites and fabrics.

For teams with a single ACI site, the APIC alone is sufficient. But if you have Multi-Site or Multi-Pod deployments, the Nexus Dashboard should be your central operations layer, the place where cross-site policy is managed, software upgrades are coordinated, and issues that span multiple fabrics become visible in one interface.

Its core capabilities include: 

  • Multi-site visibility
  • Software upgrade coordination across the fabric
  • Single sign-on across Cisco management tools
  • Capacity planning analytics
  • Integration with application observability tools like Cisco AppDynamics for application-aware network insights

What is Cisco ACI used for?

ACI shows up in seven kinds of production deployments:

  • Private cloud networking: Network teams define policy guardrails in the APIC, and application teams provision their own network resources through an API or portal; no tickets required for every new service.
  • Hybrid cloud with consistent policy: Workloads on AWS, Azure, or GCP use the same EPG and contract model as on-premises environments, so when a workload moves to the cloud, its policy travels with it.
  • Disaster recovery across sites: The Nexus Dashboard Orchestrator replicates policy across Multi-Site ACI deployments, so when an application fails over, the recovery site network can be pre-configured with consistent policy.
  • Segmentation for regulated environments: EPGs isolate cardholder data environments and patient record systems from general corporate traffic, and the contract model produces a clear audit trail for PCI DSS and HIPAA reviews.
  • Kubernetes and container networking: The ACI CNI plugin maps Kubernetes constructs (such as namespaces, labels, or pods) to EPGs and uses contracts to govern traffic between them, so container workloads follow the same policy model as VMs and bare-metal servers.
  • Infrastructure as a Service: Teams provision VMs, storage, and networking through the APIC API and scale infrastructure without manual hardware changes.
  • DevOps and CI/CD pipeline integration: Terraform, Ansible, and ServiceNow trigger network changes through the APIC API as part of a deployment pipeline, so network policy becomes code that’s versioned, tested, and deployed like any other infrastructure change.

What tools does Cisco ACI integrate with?

ACI integrates with external tools through its open REST API. Any platform that can make HTTP calls can read or write to the APIC. 

Cisco also provides certified integrations for the most common tool categories:

Category | Tools
Infrastructure as code / automation | Terraform (Cisco ACI provider), Ansible (ACI modules), Puppet
IT service management | ServiceNow; can integrate with ACI workflows to automate or trigger network changes via the APIC API
Security and identity | Cisco ISE (identity-based policy), Cisco SD-Access (extends segmentation concepts between campus and data center environments)
Observability | Cisco AppDynamics, LogicMonitor
Cloud platforms | AWS (vAPIC), Microsoft Azure (vAPIC), Google Cloud Platform
Virtualization and containers | VMware vSphere, Nutanix, Kubernetes (ACI CNI plugin), Cisco UCS
WAN and campus networking | Cisco SD-WAN, Cisco Catalyst (via SD-Access integration)

Who is Cisco ACI designed for?

Cisco ACI is built for network engineers, data center architects, and NetOps teams running multi-application environments. You’ll get the most out of it if you:

  • Manage large numbers of applications or workloads across one or more data centers
  • Need consistent policy intent across physical, virtual, and cloud environments
  • Need to demonstrate and support network segmentation for PCI DSS, HIPAA, or FedRAMP
  • Work with developers who ship faster than manual network changes can keep up with
  • Run (or plan to run) a multi-cloud or hybrid cloud setup

Picture a bank running card transactions through a set of ACI EPGs. The cardholder environment is in its own EPGs, with contracts permitting only the specific flows auditors expect to see. 

When the PCI audit rolls around, your team exports the contract list from the APIC and hands it over. That list is the authoritative record of what can talk to what, which is exactly what the auditor is trying to confirm.

Smaller organizations can run the Mini ACI Fabric (fewer leaf and spine switches), but the operational payoff shows up most clearly in larger, more complex environments.

One caveat: the learning curve is steep. Your operators need to understand the EPG-and-contract model before they can work productively in the APIC, and if your team’s background is traditional networking, give them a few weeks before the policy model clicks.

How does Cisco ACI differ from traditional networking?

In traditional networking, policy is distributed across individual devices. In ACI, policy is placed in the APIC, and switches enforce it. That one structural shift is what drives every other difference.

Dimension | Traditional networking | Cisco ACI
Policy model | Per-device ACLs and VLANs, configured manually on each switch | Centralized, declarative policy in the APIC, distributed to the fabric automatically
Segmentation | VLAN-based; limited granularity; sprawl accumulates over time | EPG/contract microsegmentation; application-aware; enforced at the hardware level
Provisioning | Manual CLI configuration per device for every change | API or GUI policy changes propagate to all relevant switches automatically
Scaling | Add switches and reconfigure existing ones manually | Add leaf or spine nodes; the APIC integrates them into the fabric without manual reconfiguration
Visibility | Per-device logs and SNMP polling | Centralized health scores, faults, and tenant/EPG telemetry across the entire fabric
Security default | Implicit trust inside the network; perimeter-focused | Zero-trust default deny between EPGs; all inter-EPG traffic blocked unless explicitly permitted
Multicloud | Separate tools and policies per cloud environment | Consistent ACI policy model extended to AWS, Azure, and GCP via virtual APIC
Operational model | High manual overhead; errors multiply with scale | Centralized management and open APIs reduce manual intervention

What is SDN and where does ACI fit into it?

SDN (software-defined networking) separates the control plane (the system that decides how traffic routes) from the data plane (the hardware that forwards packets). 

In a traditional network, both are in the same physical device. In an SDN, a central controller manages routing decisions in software and informs switches how to forward traffic, without manual reconfiguration of each one.

Cisco ACI is Cisco’s enterprise SDN implementation for data centers. While some early SDN models used protocols like OpenFlow to communicate with commodity hardware, ACI is an integrated system comprising the APIC controller, ACI firmware, and Nexus 9000 switches, designed to work together. 

This gives ACI more capability than open SDN implementations, but it also means ACI runs on Cisco hardware.

Where to Start With Cisco ACI

If you’re weighing ACI for your data center, two questions matter most: 

  1. Does your environment have the scale and complexity to justify it?
  2. Does your team have the runway to learn the policy model?

Map one application’s network requirements using ACI’s building blocks, including tenants, EPGs, and contracts. If that exercise clarifies your security and segmentation needs, ACI will likely pay off in production. If it feels like overkill for what a few VLANs already handle, you probably don’t need it yet.

From there, a Mini ACI Fabric or single-site deployment is a reasonable proof of concept. Get your network engineers hands-on with the APIC before you commit to a full rollout; most ACI projects stall on the learning curve, and closing that gap early is what separates smooth rollouts from painful ones.

And once you’re running ACI, your monitoring has to keep pace because fabric health scores and contract statistics do nothing for you if no one sees them in time to act.

See your entire ACI fabric — and everything running on it — in one place with LogicMonitor

Pull fabric health, EPG contracts, and interface telemetry alongside your servers, cloud workloads, and applications. Spot the real cause of slowdowns before your users do.

FAQs

1. What is the difference between Cisco ACI and NX-OS?

NX-OS is Cisco’s traditional switch operating system. ACI is a completely different operational model that runs on the same Nexus 9000 hardware but uses ACI firmware instead of NX-OS mode.

2. How does ACI support Kubernetes?

Cisco ACI integrates with Kubernetes through the ACI CNI (Container Network Interface) plugin. The plugin connects the Kubernetes cluster to the ACI fabric and applies ACI network policy to container workloads. Kubernetes constructs such as namespaces map to EPGs, and contracts govern inter-namespace traffic. Container workloads participate in the same policy model as VMs and bare-metal servers, so there’s no separate policy system for containers.
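As an illustrative sketch, the namespace-to-EPG mapping is typically expressed as a namespace annotation. The annotation key and value shape below are assumptions based on the ACI CNI plugin’s documented namespace-annotation approach; verify them against your plugin version:

```python
import json

def epg_annotation(tenant: str, app_profile: str, epg: str) -> dict:
    """Build the assumed ACI CNI namespace annotation mapping a
    Kubernetes namespace to a specific ACI EPG."""
    return {
        "opflex.cisco.com/endpoint-group": json.dumps({
            "tenant": tenant,
            "app-profile": app_profile,
            "name": epg,
        })
    }
```

In practice this would be applied with something like `kubectl annotate namespace payments-ns 'opflex.cisco.com/endpoint-group=...'`, after which pods in that namespace are placed in the named EPG and governed by its contracts.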

3. How do teams monitor ACI fabric health?

The APIC assigns health scores to the fabric, individual nodes, tenants, and EPGs on a 0-to-100 scale. Faults identify configuration errors, connectivity problems, and hardware issues in real time. Your teams can pull this data via the APIC REST API for integration with third-party monitoring tools, or use the Cisco Nexus Dashboard for a consolidated view across multiple ACI sites.
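As a sketch of that API pull, faults can be fetched with a class query against `faultInfo` filtered by severity. The endpoint and filter syntax below follow the APIC REST query conventions; treat the exact values as illustrative:

```python
import urllib.parse

def fault_query_url(apic: str, severity: str) -> str:
    """Build a class-query URL for APIC faults at one severity level.
    Severity values include 'critical', 'major', 'minor', and 'warning'."""
    flt = f'eq(faultInfo.severity,"{severity}")'
    return (
        f"{apic}/api/node/class/faultInfo.json?"
        + urllib.parse.urlencode({"query-target-filter": flt})
    )
```

A GET against this URL (with an authenticated session) returns the matching faults as JSON, ready for a monitoring tool to ingest.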

4. What skills do you need to operate Cisco ACI?

You need to understand ACI’s policy model, including tenants, VRFs, bridge domains, EPGs, and contracts, before you can work effectively in the APIC. 
