Cisco ACI Interview Questions
Answer: We have the Cisco Nexus 9000 series, which mainly includes the Nexus 9500
modular and Nexus 9300 non-modular (fixed) switches. In my course, I used the 9500
as spine and the 9300 as leaf switches.
Answer: We can only connect Leaf switches to Spine Switches and vice versa.
5. In ACI mode of operation, can we connect a Spine with another Spine switch?
Answer: No. Connections only work between spine and leaf; no spine-to-spine
connectivity can be established.
Answer: You may choose to have only one APIC controller; however, Cisco
recommends a minimum of three APIC controllers, scaling in odd numbers (3, 5, 7).
In a large L3 fabric, we can use up to 200 leaf switches and 24 spine switches per
fabric (6 spines per pod), 650 FEX per fabric (20 FEX per leaf switch), and up to
3000 tenants.
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/verified-scalability/Cisco-ACI-Verified-Scalability-Guide-422.html
10. What are the benefits of Nexus ACI compared to a traditional network
solution/architecture?
· Hypervisor compatibility and integration without the need to add software to the
hypervisor.
· ACI is tailor-made for data centers requiring a multi-tenancy (virtualized)
setup, with easy-to-configure steps in the GUI.
· The switches can run as conventional NX-OS switches or in “ACI” mode, and FEX is
supported.
· Enables seamless connectivity between on-premises and remote data centers, and
between geographically dispersed data centers, under a single pane of policy
orchestration.
· Open APIs allow easy integration with third-party devices like firewalls and
ADCs.
o In ACI networks, network admins use the APIC to manage the network -
they no longer need to access the CLI on every node to configure or
provision network resources.
o Cisco APIC includes a CLI and a GUI as central points of management for
the entire Cisco ACI fabric.
o Cisco APIC also has completely open APIs, so users can make
Representational State Transfer (REST) calls (through XML or JSON) to
provision, manage, monitor, or troubleshoot the system, as shown in the
sketch after this list.
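As a rough illustration of those REST APIs, here is a minimal Python sketch
(assuming the `requests` library, a reachable APIC at the hypothetical address
`apic.example.com`, and lab credentials) that authenticates and lists the tenants
on the fabric:

```python
import requests

APIC = "https://apic.example.com"   # hypothetical lab APIC
session = requests.Session()

# Authenticate: aaaLogin returns a token that requests keeps as a session cookie.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
resp = session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)
resp.raise_for_status()

# Read example: query every tenant object (class fvTenant) on the fabric.
tenants = session.get(f"{APIC}/api/node/class/fvTenant.json", verify=False).json()
for obj in tenants["imdata"]:
    print(obj["fvTenant"]["attributes"]["name"])
```

The same session can POST JSON payloads to create or modify objects, which is how
the later sketches in this document are framed.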
Answer: The Cisco APIC controller does not sit in the data plane; therefore, it
does not forward data-plane traffic. It works as the orchestrator of the ACI fabric.
Answer: If all the APIC controllers go down, there won’t be any outage in
data-plane forwarding; however, we cannot make any changes to the fabric. We need
to bring the APIC controllers back up to be able to create new policies or
monitor/troubleshoot the ACI fabric.
16. Once the fabric is up, can endpoints (like servers, firewalls, IDS, IPS,
bare-metal servers, etc.) communicate with each other?
Answer: Not by default. ACI follows a whitelist model: endpoints within the same
EPG can communicate freely, but traffic between different EPGs is dropped until a
contract explicitly permits it.
The bridge domain (BD) is like a container for subnets - it defines an L2
boundary, but unlike a VLAN it is represented in the fabric by a VXLAN Network
Identifier (VNI).
The BD defines the unique Layer 2 MAC address space and a Layer 2 flood domain if
such flooding is enabled. It can carry multiple subnets in a single bridge domain.
We can create multiple bridge domains inside a single VRF, but we cannot link one
BD to two different VRFs.
Bridge domains can be public, private, or shared. A public bridge domain is one
whose subnet can be exported to a routed connection, whereas a private one applies
only within the tenant. Shared bridge domains can be exported to multiple VRFs
within the same tenant, or across tenants when they are part of a shared service.
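To make the BD-to-VRF relationship concrete, here is a hedged sketch of the JSON
payload such a bridge domain might take on the APIC REST API (reusing the
`session` from the earlier sketch; the tenant, BD, and VRF names and the subnets
are made up):

```python
# Hypothetical BD "BD1" in tenant "Tenant1": two subnets in one BD, and a single
# fvRsCtx relation tying the BD to exactly one VRF ("VRF1").
bd_payload = {
    "fvBD": {
        "attributes": {"name": "BD1"},
        "children": [
            {"fvSubnet": {"attributes": {"ip": "10.1.1.1/24", "scope": "private"}}},
            {"fvSubnet": {"attributes": {"ip": "10.1.2.1/24", "scope": "public"}}},
            {"fvRsCtx": {"attributes": {"tnFvCtxName": "VRF1"}}},
        ],
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Tenant1.json", json=bd_payload, verify=False)
```

The per-subnet `scope` attribute is where the public/private/shared behaviour
described above is selected.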
Answer: Endpoints are the devices that are connected to the network directly or
indirectly. They have an address, a location, and attributes (like version or
patch level), and can be virtual or physical, e.g. a bare-metal server, switch,
router, firewall, IDS, or IPS.
Tenants allow re-use of IP address space, i.e. multiple tenants can have the same
IP addressing schemes.
A Cisco ACI tenant can contain multiple private networks (VRF instances).
By default, one user-created tenant can’t talk to another tenant.
The mgmt tenant provides for the management and configuration of host and fabric
nodes (leaf, spine, and controllers). It is used for in-band and out-of-band
services and provides a convenient means to configure access policies for fabric
nodes.
Access policies govern the operation of interfaces that provide external access
to the fabric. They are used for configuring the interfaces or ports on leaf
switches that connect to servers, hosts, routers, firewalls, or other endpoint
devices.
Through access policies we can enable port channels, vPCs, and protocols like
LLDP, CDP, and LACP, as well as features like monitoring and diagnostics. Once an
ACI access policy is set up, it can automate the configuration for the rest of
the interfaces.
Answer: Taboo contracts are used to deny and log traffic related to regular
contracts, and they are programmed into the hardware before the regular contract.
For example, if the objective is to allow traffic with source ports 100 through
900 with the exception of port 415, the regular contract would allow all ports in
the range 100 through 900 while the taboo contract would have a single entry
denying port 415.
Answer: Yes, we can have the same VRF name in multiple tenants. Each tenant is a
different logical unit, so we can have duplicate VRF names between tenants.
34. Can we link one EPG (endpoint group) to multiple bridge domains?
Answer: No, a single EPG cannot be referenced to multiple bridge domains.
Answer: No, policies can only be applied to EPGs. Rather than configuring and
managing endpoints individually, they are placed in an EPG and managed as a
group; therefore, policies are applied to EPGs.
Answer: Yes, we can always create more than one bridge domain in the same VRF;
however, we cannot duplicate the subnets. A bridge domain is a Layer 2 construct
within the fabric, used to define a flood domain, and is also represented by a
VNI (VXLAN Network Identifier).
VRFs can have duplicate names if they are part of different tenants.
Answer: Using a Layer 3 Out (L3Out), ACI can extend its connectivity to external
devices. These external devices may be external routers, firewalls, or Layer 3
switches, and they are connected to leaf switches (therefore known as border leaf
switches). Border leaves use dynamic routing protocols (EIGRP, OSPF, BGP) or
static routing to exchange external prefixes and networks. We create an external
L3 EPG based on the prefixes we receive from the external network; a single
external EPG can also match all networks, i.e. 0.0.0.0/0, as in the sketch below.
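As a hedged sketch of that catch-all case (again reusing the earlier `session`;
the tenant, L3Out, and EPG names are hypothetical), an external EPG classifying
all external prefixes could be posted like this:

```python
# Hypothetical external EPG under an existing L3Out "L3OUT1" in tenant "Tenant1".
# The 0.0.0.0/0 l3extSubnet classifies every external prefix into this EPG.
ext_epg = {
    "l3extInstP": {
        "attributes": {"name": "ALL-EXTERNAL"},
        "children": [
            {"l3extSubnet": {"attributes": {"ip": "0.0.0.0/0"}}},
        ],
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Tenant1/out-L3OUT1.json",
             json=ext_epg, verify=False)
```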
41. Which routing protocol runs for internal communication between the ACI spine
and leaf switches?
Answer: MP-BGP. Only one AS is used in the ACI fabric; therefore, the leaf-spine
relationship is iBGP.
42. In the ACI fabric, which node is configured as the BGP route reflector? Why
is it required?
Answer: Since prefixes learned from one iBGP peer can’t be advertised to another
iBGP peer, we need either a full iBGP mesh or a BGP route reflector (RR).
The ACI fabric is a two-tier architecture where a full mesh is not practical, so
we use a BGP RR: the spines act as route reflectors and the leaf switches become
BGP RR clients.
Therefore, we configure all spine switches as BGP RRs; a hedged configuration
sketch follows.
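A hedged sketch of what that could look like through the REST API (assuming the
default fabric BGP policy at `uni/fabric/bgpInstP-default`, a made-up AS number,
and hypothetical spine node IDs 101 and 102; again reusing the earlier `session`):

```python
# bgpAsP carries the single fabric AS; each bgpRRNodePEp under bgpRRP names one
# spine node acting as route reflector.
bgp_rr = {
    "bgpInstPol": {
        "attributes": {"name": "default"},
        "children": [
            {"bgpAsP": {"attributes": {"asn": "65001"}}},
            {"bgpRRP": {"children": [
                {"bgpRRNodePEp": {"attributes": {"id": "101"}}},
                {"bgpRRNodePEp": {"attributes": {"id": "102"}}},
            ]}},
        ],
    }
}
session.post(f"{APIC}/api/mo/uni/fabric/bgpInstP-default.json",
             json=bgp_rr, verify=False)
```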
43. Which Cisco 9K models are used as Spine Nodes in ACI Setup?
44. Which Cisco 9K models are used as Leaf Nodes in ACI Setup?
46. I have trunk ports configured in one EPG. Can access ports also be added in
the same EPG?
Answer: Yes, this can be configured. In the App EPG-1 example, one port is
attached as a trunk (tagged) while another is attached as access (untagged).
Note: we cannot use the front-panel VGA and the rear-panel VGA at the same time.
· LST (Local Station Table) - this table contains the addresses of all hosts
attached directly to the leaf. When endpoints are discovered, this table is
populated and synchronized with the spine-proxy full GST. When a bridge domain is
not configured for routing, the LST learns only MAC addresses; if the BD has
routing enabled, the table learns both the IP and MAC addresses of endpoints.
· Shards are evenly distributed across the appliances that comprise the APIC
cluster.
One or more shards are located on each APIC appliance. The shard data assignments
are based on a predetermined hash function, and a static shard layout determines
the assignment of shards to appliances; a conceptual sketch follows.
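The following is a conceptual Python sketch of the idea only, not the actual APIC
algorithm (the real hash function and shard layout are internal to the controller;
the cluster names and shard count are made up):

```python
import hashlib

NUM_SHARDS = 32
APPLIANCES = ["apic1", "apic2", "apic3"]   # hypothetical 3-node cluster

def shard_of(key: str) -> int:
    # Deterministic hash maps a data key to a shard bucket.
    return hashlib.sha256(key.encode()).digest()[0] % NUM_SHARDS

def replicas_of(shard: int) -> list[str]:
    # Static layout: in a 3-node cluster every shard has a replica on each
    # appliance; rotating the order spreads shard leadership across nodes.
    n = len(APPLIANCES)
    return [APPLIANCES[(shard + i) % n] for i in range(n)]

key = "tenant:Tenant1"
print(f"{key} -> shard {shard_of(key)} on {replicas_of(shard_of(key))}")
```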
Those separate ACI fabrics are named “pods,” and each of them looks like a
regular two-tier spine-leaf fabric.
· Microsoft vSwitch
Endpoint groups (EPGs) are used to group virtual machines (VMs) within a
tenant and apply filtering and forwarding policies to them.
Microsegmentation with Cisco ACI adds the ability to associate EPGs with
network or VM-based attributes, enabling you to filter with those attributes
and apply more dynamic policies. Microsegmentation with Cisco ACI also
allows you to apply policies to any endpoints within the tenant.
User traffic is encapsulated into VXLAN at the edge of the fabric, and the VXLAN
overlay provides Layer 2 adjacency wherever it is needed.
In this way we can emulate Layer 2 connectivity while gaining the extensibility
of VXLAN for scalability and flexibility.
All traffic within the ACI fabric is encapsulated with an extended VXLAN header
and carried between VTEPs (VXLAN Tunnel End Points).
The ACI VXLAN packet contains both the Layer 2 MAC and Layer 3 IP source and
destination fields, which enables efficient and scalable forwarding within the
fabric.
When traffic is received from a host at the leaf, frames are translated to VXLAN
and transported across the fabric to the destination. The ACI fabric can
completely normalize traffic coming in on one leaf and send it to another (which
can even be the same leaf). When the frames exit the destination leaf, they are
re-encapsulated into whatever format the destination network is asking for:
untagged frames, an 802.1Q trunk, VXLAN, or NVGRE. A sketch of a VXLAN frame
follows.
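To show the shape of such an encapsulated frame, here is a hedged Scapy sketch of
a standard VXLAN packet (note that ACI’s header is an extended iVXLAN variant
that stock Scapy does not model; all addresses and the VNI below are made up):

```python
from scapy.layers.inet import IP, UDP
from scapy.layers.l2 import Ether
from scapy.layers.vxlan import VXLAN

# Outer headers: VTEP-to-VTEP transport, UDP destination port 4789 (IANA VXLAN).
outer = (Ether(src="00:aa:00:00:00:01", dst="00:aa:00:00:00:02") /
         IP(src="10.0.0.1", dst="10.0.0.2") /
         UDP(dport=4789))

# Inner frame: the original host-to-host Ethernet/IP traffic.
inner = (Ether(src="00:bb:00:00:00:01", dst="00:bb:00:00:00:02") /
         IP(src="192.168.1.10", dst="192.168.1.20"))

# VXLAN header carries the VNI that identifies the segment (e.g. the BD).
frame = outer / VXLAN(vni=10100) / inner
frame.show()
```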