VPC Lab Instructions


Lab: Nexus 5000/2000: Ethernet and VPC

Nexus 5000 and Nexus 2000 Overview

The Cisco Nexus™ 5000 Series Switches comprise a family of line-rate, low-
latency, lossless 10 Gigabit Ethernet, Cisco® Data Center Ethernet, and Fibre
Channel over Ethernet (FCoE) switches for data center applications. The switch
family is highly serviceable, with redundant, hot-pluggable power supplies and fan
modules. Its software is based on data center-class Cisco NX-OS Software for high
reliability and ease of management.

The Cisco Nexus 2248T provides 48 1Gbit/100Mbit Ethernet server ports and 4 10
Gigabit Ethernet uplink ports in a compact one-rack-unit (1RU) form factor. The
Cisco Nexus 2000 Series Fabric Extender integrates with the parent switch, allowing
"zero-touch" provisioning as well as automatic configuration and software upgrade
when a rack is connected to the parent switch. This integration allows large numbers
of servers to be supported with the same feature set of the parent Cisco Nexus 5000
Series switch, along with specific configuration parameters including security and
quality of service (QoS). The FEX does not run Spanning Tree Protocol even when
multiple links are connected to the parent switch, thus giving customers loop-free,
active-active connectivity between the two.

Lab Topology and Access (the Big Picture)

[Topology diagram: Core-1 and Core-2 at the top, reached from each pod 5K on ports
e1/15 and e1/16. 5K-1 and 5K-2 are interconnected on e1/17 and e1/18. Each 5K has
ports e1/7-e1/10 down to the two 2Ks: 2K "Fex 1" (Fex id 100) and 2K "Fex 2"
(Fex id 101). The server connects to fex ports e100/1/1, e100/1/2, e101/1/1, and
e101/1/2.]

This picture shows the layout for the entire set of Nexus5K/Nexus2K VPC labs. You will be
doing all configuration of the switches, fexes, and server ports in your pod, as indicated by the
shaded area.

The core switches are a shared resource and are preconfigured for you; you will have no access
to administering these. Certain connections will be configured during certain parts of the lab.
Subsets of the picture will be redrawn to focus on the task at hand as it arises. A text summary
goes like this:

 Each N5K is connected to one 2K (2248) using ports (1/7, 1/8) and a second 2K using
ports (1/9, 1/10)
o We will always refer to the fex connected to 5K ports 7 and 8 as “Fex 1” and use
Fex id 100
o We will always refer to the fex connected to 5K ports 9 and 10 as “Fex 2” and use
Fex id 101
o In certain parts of the lab, Fex 1 will be connected only to 5K-1 and Fex 2 only to
5K-2. In other parts, all of the cross-connections will be active
 Each N5K is connected northbound to a pair of core switches using ports (1/15, 1/16)
 Each Server on each pod has four ports on a quad Intel card connected to the N2K fexes

 Servers have a built-in Ethernet port that does not connect through the POD at all,
for consistent lab access. DO NOT TOUCH THIS Ethernet port! It will have the (10.2.8.x)
address. If you mess up the configuration of the management connection to the server,
you will lose all access to the lab.

Accessing the Lab

Identify your pod number. This is a one-digit or two-digit number as shown. Later in this
lab this pod number is referred to as X (always in italics). Please do not literally substitute an
X; that would be what is referred to as an "I, state your name" problem.

You will be accessing equipment in two ways:

 Through TC connections to the two N5K’s. This is required for initial setup, since
you will be treating your pod N5K’s as if they were “straight out of the box”.
 Through a remote desktop connection to the server (the same server that is
connected to the pod fexes). You will be "coming in" through a management
connection that does not run through the Pod.

Connect through the terminal concentrator to the console of each Pod 5K

 Click each pod 5K (labeled 5K-1 and 5K-2) in the topology diagram to access a
terminal concentrator telnet session to the serial console for that 5K. You may
have to associate the telnet action with your own desktop telnet client (such as
putty, if you are using Windows). If you are locked out of the particular serial line
(you have probably locked yourself out), you can invoke the "ClearLine" action as
well.

Connect to the remote student desktop ("the Server"). You will be doing all of your
work after initial 5K setup on this remote desktop. In order to connect, left-click on the
Server icon on your lab web page and select "RDP Client", as shown:


An RDP configuration file is downloaded. Open it using your native RDP client (on
Windows clients you can just select “Open with” on the download dialogue, and you should
not even have to browse for an application.)

Nexus 5K User: admin


Nexus 5K Password: nexus5k

Remote Desktop Windows User: administrator


Remote Desktop Windows Password: nexus5k

Task 1: Perform Initial Setup of Your Pod 5K’s
1. Access the serial console of each of your 5K's, as explained in the previous section.

Please be accurate in which 5K you designate as #1 and which one is #2, so that the
lab reads consistently (matching which console is which)

2. Your 5K's should be sitting at a switch prompt. You will be rebooting them and entering
the setup dialogue as if they were fresh out of the box. Answer the questions as
indicated in the example:

If (maybe because of a timeout) you are at a login prompt rather than the switch#
prompt, login using user admin and password nexus5k .

You can do this setup in parallel (both 5K at the same time) so you don't have to
wait for switch reboots separately.

switch# write erase


Warning: This command will erase the startup-configuration.
Do you wish to proceed anyway? (y/n) [n] y
switch# reload
WARNING: This command will reboot the system
Do you want to continue? (y/n) [n] y
// watch your switch reboot – takes a minute or two

// It's conceivable you could get stuck at the loader> prompt


// If so then type boot kickstart-latest system-latest

// The message below will repeat ad nauseam. Type yes just once

2013 Feb 23 04:16:25 switch %$ VDC-1 %$ %POAP-2-POAP_INFO: - Abort


Power On Auto Provisioning and continue with normal setup
?(yes/no)[n]: yes

---- System Admin Account Setup ----

Do you want to enforce secure password standard (yes/no): no


Enter the password for "admin": nexus5k
Confirm the password for "admin": nexus5k

---- Basic System Configuration Dialog ----

This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.

*Note: setup is mainly used for configuring the system initially,
when no configuration is present. So setup always assumes system
defaults and not the current system configuration values.

Press Enter at anytime to skip a dialog. Use ctrl-c at anytime


to skip the remaining dialogs.

Would you like to enter the basic configuration dialog (yes/no): yes

Create another login account (yes/no) [n]:no

Configure read-only SNMP community string (yes/no) [n]:no

Configure read-write SNMP community string (yes/no) [n]:no

Enter the switch name : podX-n5k-1 or podX-n5k-2


Use your pod number in place of X. Name them accurately according to which console
is which!
Continue with Out-of-band (mgmt0) management configuration? (yes/no)
[y]:yes

Mgmt0 IPv4 address : 10.2.8.X3 (5K-1) or 10.2.8.X4 (5K-2)


For X, use your Pod number. For example, if you have pod 5, the two IP addresses
should be .53 and .54. If you have pod 12, they should be .123 and .124.

Mgmt0 IPv4 netmask : 255.255.255.0

Configure the default gateway? (yes/no) [y]:yes

IPv4 address of the default gateway : 10.2.8.1

Enable the telnet service? (yes/no) [n]: no

Enable the ssh service? (yes/no) [y]: yes

Type of ssh key you would like to generate (dsa/rsa) : rsa

Number of key bits <768-2048> : 1024

Configure the ntp server? (yes/no) [n]: n

Enter basic FC configurations (yes/no) [n]: n

The following configuration will be applied:

switchname podX-n5k-2

interface mgmt0

ip address 10.2.8.XX 255.255.255.0

no shutdown

exit

vrf context management

ip route 0.0.0.0/0 10.2.8.1


exit

no telnet server enable

ssh key rsa 1024 force

ssh server enable

(If anything looks wrong, you can answer yes when asked whether you want to edit the
configuration below, and the whole dialog will repeat.)

Would you like to edit the configuration? (yes/no) [n]: n

Use this configuration and save it? (yes/no) [y]: y

[########################################] 100%

podX-n5k-2 login:

Make sure you complete the setup dialog for each N5K. While you can continue to
configure the 5K on the serial console, it will be easier to do the entire remainder of the
lab on your remote Server desktop, and access the 5K’s from there via ssh (putty).
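
If you prefer a command-line client instead of putty, a minimal hedged alternative
(assuming the ssh client bundled with the Cygwin shell on the Server) is:

ssh admin@10.2.8.X3 (your 5K-1)
ssh admin@10.2.8.X4 (your 5K-2)

substituting your pod number for X as usual.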

Task 2: Access your remote Server Desktop, examine
network adapters and access your 5K
1. Access the Server remote desktop, as explained earlier in the lab

This is the server connected to your 5K/2K pod. You should be able to do all of your
remaining work here.

2. Examine the network adapters on your server. We have given them descriptive adapter
names to make your life in this lab easier; in the real world, you would be well advised
to do the same in a similar situation. Right-click on any adapter in the system tray and
open network connections:

Examine the adapters. If the window doesn't show this detail view, use View->Details. You
might want to click on the Name column header to make sure the adapters are sorted
alphabetically by name:

We have changed the “Windows Adapter name” to indicate exactly where each adapter is
plugged in.
3. Use putty (the icon is on your remote desktop) to open two windows to your two Pod 5K's,
using the IP addresses you assigned in the initial setup. It will be easiest to do all your
remaining 5K configuration here.

Login using the name admin and the password nexus5k .

Task 3: Examine Basic 5K Information


1. Type the following commands on each 5K to examine sanity, version, and basic port
information. Any ports on which there is a physical link will be up in access mode --
we will be configuring them correctly later.

podX-n5k-n# show version

podX-n5k-n# show int mgmt0

podX-n5k-n# show feature

podX-n5k-n# show int brief

Confirm that only the following ports are connected: 1, 2, 7, 8, 9, 10, 15, 16, 17, 18.
This should match the diagram on the first page of the lab (except for ports 1 and 2,
which are parts of the FCOE and VM-FEX add-on labs -- don't worry about them)

podX-n5k-n# show cdp neighbors

Note that Cisco Discovery Protocol (CDP) is enabled by default on all ports, and you
can see neighboring switches (the Core and your other Pod 5K) although those ports
are not yet configured the way we will want them.
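
If you would like more detail about each neighbor (platform, remote interface,
management addresses), an optional variant of the same command -- not a required lab
step -- is:

podX-n5k-n# show cdp neighbors detail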

Task 4: Configure a Data VLAN, and Connectivity "Northbound" to the Core
1. Configure a new data vlan, 930. Note how a new vlan is created simply by mentioning its
number. Perform the following on each 5K. You will always use VLAN 930 (precisely,
don't make up your own) as the data VLAN throughout the lab.

podX-n5k-n# conf t
Enter configuration commands, one per line. End with
CNTL/Z.
podX-n5k-n(config)# vlan 930

podX-n5k-n(config-vlan)# sh vlan

VLAN Name                             Status    Ports
---- -------------------------------- --------- ------------------------------
1    default                          active    Eth1/1, Eth1/2, Eth1/3, etc

930  VLAN0930                         active

2. Configure ports 15 and 16 as trunks allowing your new data VLAN northward to the
core.
podX-n5k-n(config-vlan)# int e1/15-16
podX-n5k-n(config-if-range)# switchport mode trunk
podX-n5k-n(config-if-range)# switchport trunk allow vlan 1,930
3. Verify your new port settings.

podX-n5k-n(config-vlan)# end
podX-n5k-n# sh int e1/15
podX-n5k-n# sh int e1/15 switchport
podX-n5k-n# sh int e1/16
podX-n5k-n# sh int e1/16 switchport

Remember to perform the entire task on both 5K.
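
As an optional extra sanity check (not required by the lab), NX-OS can summarize all
trunk ports and the VLANs allowed and forwarding on them; VLAN 930 should show up
for e1/15-16 on each 5K:

podX-n5k-n# sh int trunk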

Task 5: Configure Access to the N2K Fexes (No VPC)
Note: in this section, each N2K fex will be attached to only one 5K (Fex 1 to 5K-1, Fex 2 to
5K-2). You will be able to create teams on your Server that fail over or balance using
different MAC addresses across both fexes, but you will not be able to create port channels
across both fexes.

The diagram shows the configuration, including IP’s preconfigured on your data VLAN
northbound for you to test connectivity:

[Diagram: test IPs 100.1.1.21 and 100.1.1.22 are preconfigured northbound on the data
VLAN, behind core ports e1/15-16. The server runs a failover or load-balancing team
(separate MAC on each adapter) – NO port channel.]

1. Enable fex 1 access on 5K-1 (ports 7 and 8). Note the connection between the 5K and
the fex will be a port channel. Note that both fexes are visible (using sh fex) from both
5K, but we will be configuring access to each fex from only 1 5K at this time:

podX-n5k-1# conf t
podX-n5k-1 (config)# feature fex
wait about 10 seconds….

podX-n5k-1(config)# sh fex
  FEX        FEX          FEX             FEX
Number   Description     State        Model            Serial
------------------------------------------------------------------------
---      --------        Discovered   N2K-C2248TP-1GE  JAF1450BBCS
---      --------        Discovered   N2K-C2248TP-1GE  JAF1449EEFG

You cannot really tell which fex is which here.

Now we create a port channel (call it po78) from ports 7 and 8 and connect it to the fex.
This is not an LACP port channel (LACP cannot be used between a 5K and a fex).

The "fex 100" is an arbitrary designation (must be 100-199) that we will always use to
identify this fex (connected to ports 7 and 8)

podX-n5k-1(config)# fex 100


podX-n5k-1(config-fex)# pinning max-links 1
Change in Max-links will cause traffic disruption.
podX-n5k-1(config-fex)# description "Fex 1 (ports 7,8)"
podX-n5k-1(config-fex)# int e1/7-8
podX-n5k-1(config-if-range)# channel-group 78
podX-n5k-1(config-if-range)# int po78
podX-n5k-1(config-if)# switchport mode fex
podX-n5k-1(config-if)# fex associate 100

ignore any environment alarms. After 30 seconds or so your fex should come online
(you can repeat the "sh fex" command)

podX-n5k-1(config-if)# sh fex
  FEX        FEX                 FEX               FEX
Number   Description            State          Model            Serial
------------------------------------------------------------------------
100      Fex 1 (ports 7,8)   Online Sequence   N2K-C2248TP-1GE  JAF1449EEFG
---      --------            Discovered        N2K-C2248TP-1GE  JAF1450BBCS

podX-n5k-1(config-if)# sh fex
  FEX        FEX                 FEX               FEX
Number   Description            State          Model            Serial
------------------------------------------------------------------------
100      Fex 1 (ports 7,8)   Online            N2K-C2248TP-1GE  JAF1449EEFG
---      --------            Discovered        N2K-C2248TP-1GE  JAF1450BBCS

2. Show how the fex "satellite ports" are created on the 5K and appear exactly as if they
were line card ports on the 5K. The port designation is e100/1/port# : the first number is
the fex id we assigned, and the middle number is always 1.
podX-n5k-1(config-if)# sh int brief
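
Since sh int brief lists every port on the switch, an optional convenience (assuming your
NX-OS build supports output filtering with | include, which current releases do) is to
filter down to just the new satellite ports:

podX-n5k-1(config-if)# sh int brief | include Eth100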

3. Show details about the fex's connectivity to the 5K. Note how all fex port traffic up to
the 5K travels over the port channel (if you lose a single 5K port, you will not lose access
to any fex ports):

podX-n5k-1(config-if)# sh fex 100 detail

4. Configure fex ports 1 and 2 for access mode on your data vlan (930):
podX-n5k-1(config-if)# int e100/1/1-2
podX-n5k-1(config-if-range)# switch mode access
podX-n5k-1(config-if-range)# switch access vlan 930
podX-n5k-1(config-if-range)# sh int e100/1/1 switchport

5. Enable fex 2 access on 5K-2 (ports 9 and 10). Note the connection between the 5K and
the fex will be a port channel. Note that both fexes are visible (using sh fex) from both
5K, but we will be configuring access to each fex from only 1 5K at this time:

podX-n5k-2# conf t
podX-n5k-2(config)# feature fex
wait about 10 seconds….

podX-n5k-2(config)# sh fex
  FEX        FEX          FEX             FEX
Number   Description     State        Model            Serial
------------------------------------------------------------------------
---      --------        Discovered   N2K-C2248TP-1GE  JAF1450BBCS
---      --------        Discovered   N2K-C2248TP-1GE  JAF1449EEFG

You can cross-reference the serial numbers from 5K-1. Here we will connect to the other
fex.

Now we create a port channel (call it po91) from ports 9 and 10 and connect it to the
fex. This is not an LACP port channel (LACP cannot be used between a 5K and a fex).

The "fex 101" is an arbitrary designation (must be 100-199) that we will always use to
identify this fex (connected to ports 9 and 10)

podX-n5k-2(config)# fex 101


podX-n5k-2(config-fex)# pinning max-links 1
Change in Max-links will cause traffic disruption.
podX-n5k-2(config-fex)# description "Fex 2 (ports 9,10)"
podX-n5k-2(config-fex)# int e1/9-10
podX-n5k-2(config-if-range)# channel-group 91
podX-n5k-2(config-if-range)# int po91
podX-n5k-2(config-if)# switchport mode fex
podX-n5k-2(config-if)# fex associate 101

ignore any environment alarms. After 30 seconds or so your fex should come online
(you can repeat the "sh fex" command)

podX-n5k-2(config-if)# sh fex
  FEX        FEX                  FEX               FEX
Number   Description             State          Model            Serial
------------------------------------------------------------------------
101      Fex 2 (ports 9,10)   Online Sequence   N2K-C2248TP-1GE  JAF1450BBCS
---      --------             Discovered        N2K-C2248TP-1GE  JAF1449EEFG

podX-n5k-2(config-if)# sh fex
  FEX        FEX                  FEX               FEX
Number   Description             State          Model            Serial
------------------------------------------------------------------------
101      Fex 2 (ports 9,10)   Online            N2K-C2248TP-1GE  JAF1450BBCS
---      --------             Discovered        N2K-C2248TP-1GE  JAF1449EEFG

6. Show how the fex "satellite ports" are created on the 5K and appear exactly as if they
were line card ports on the 5K. The port designation is e101/1/port# : the first number is
the fex id we assigned, and the middle number is always 1.

podX-n5k-2(config-if)# sh int brief

7. Show details about the fex's connectivity to the 5K. Note how all fex port traffic up to
the 5K travels over the port channel (if you lose a single 5K port, you will not lose access
to any fex ports):

podX-n5k-2(config-if)# sh fex 101 detail

8. Configure fex ports 1 and 2 for access mode on your data vlan (930):
podX-n5k-2(config-if)# int e101/1/1-2
podX-n5k-2(config-if-range)# switch mode access
podX-n5k-2(config-if-range)# switch access vlan 930
podX-n5k-2(config-if-range)# sh int e101/1/1 switchport

Task 6: Configure Teaming on your Server
1. On your server, look at the network connections view as before. Run commands to shut
down fex ports so that you can verify that the connections match the adapter names
(labels) that we have given the ports on the quad card.

As you shut down a particular port, you should see the status of the corresponding server
adapter change to "Network cable unplugged"

(on 5K-1)
podX-n5k-1# conf t
podX-n5k-1(config)# int e100/1/1
podX-n5k-1(config-if)# shut
podX-n5k-1(config-if)# no shut
podX-n5k-1(config-if)# int e100/1/2
podX-n5k-1(config-if)# shut
podX-n5k-1(config-if)# no shut

(on 5K-2)
podX-n5k-2# conf t
podX-n5k-2(config)# int e101/1/1
podX-n5k-2(config-if)# shut
podX-n5k-2(config-if)# no shut
podX-n5k-2(config-if)# int e101/1/2
podX-n5k-2(config-if)# shut
podX-n5k-2(config-if)# no shut

2. In the network connections view, right-click your first fex (quad card) port and click
“Properties”:

3. Click the “Configure” Button on the adapter properties window:

4. Click the Teaming tab, check the “Team this adapter….” box and click “New Team”:

5. Create a new team name and click next.
6. Select all four quad-port adapters (not any other adapters).
7. Select Adapter Fault Tolerance (this is a basic failover team) and click Finish.
8. Be patient as the team is configured.
9. You may be asked to reboot the server (this one time only). If so, say yes. You will get
disconnected from the remote desktop, and will need to wait 5 minutes or so before you
can reconnect.

Task 7: Configure the Team Virtual Adapter, Access your
Data VLAN (930) northbound, and watch Team behavior as
you shut down fex and 5K ports
1. Reconnect to your Server remote desktop if you need to. Examine the network
connections window again and find the newly created virtual team adapter:

2. Rename this adapter (right click on the “Local Area Connection ..” name and choose
Rename) so it is easier to find.

3. The adapter should have received an IP address beginning with 100.1.1 via DHCP, if
you have configured everything correctly.

Open a cmd window (or you can be cool and use the Cygwin shell), and run

ipconfig /all

to verify that the team adapter has received its IP address. The original Intel Quad
adapters should have disappeared from the output of ipconfig /all

4. In your command or Cygwin shell window, run "ping –t 100.1.1.21" (or .22).

The very first ping request may time out, but subsequent pings should succeed. You can
keep this window open and keep the ping going to see what happens as you simulate
failures by shutting down individual fex ports.

5. Examine properties of your team virtual adapter (not any individual quad port). Click
Configure as shown:

On the properties window, click the Settings tab to examine the state of the individual
adapters within the team. Note the adapters are referred to by device name, not by the
friendlier adapter name. You can cross-reference your network adapters window:

Fex 1 Port 1 – Quad Port Server Adapter
Fex 1 Port 2 – Quad Port Server Adapter #2
Fex 2 Port 1 – Quad Port Server Adapter #3
Fex 2 Port 2 – Quad Port Server Adapter #4

6. Shut down various individual ports on the two fexes, and combinations of ports.
Remember, you have access to fex 1 only from 5k-1 and fex 2 only from 5K-2. As you
shut down and then “no shut” ports, watch what happens both in your “ping” window,
and in the team settings window above.

Your pings may be interrupted for a second or two if the active adapter shuts down (as
long as others are available).

Sometimes it takes a while (could even be up to 30-45 sec) for the team status as seen
through "Settings" to correctly show the individual adapter state (even though you know
it must have failed over, because your ping is still running)

podX-n5k-1(config)# int e100/1/2
podX-n5k-1(config-if)# shut
(if this was the active adapter, see what happens; try various port combinations)
podX-n5k-1(config-if)# no shut

7. Shut down an individual connection between 5K and the fex. Since these are connected
with a port-channel, there should be no effect on your server (verify). The only place to
see the change is looking at the individual 5K ports or port channel:

podX-n5k-1(config)# int e1/7


podX-n5k-1(config-if)# shut
podX-n5k-1(config-if)# sh fex 100 detail
nothing should look any different. Fex to 5K traffic is on the port channel

podX-n5k-1(config-if)# sh int e1/7-8 brief


podX-n5k-1(config-if)# sh port-channel summary
podX-n5k-1(config-if)# no shut

Task 8: (Optional) Play with Some Teaming Variations in the
“no port-channel” configuration
1. Keep your pinging going during all of this to watch what happens.
2. In the team settings, click Modify Team

3. Highlight one of your quad port adapters and assign it the Primary role. This adapter will
be the "preferred adapter" for the failover: if this adapter gets repaired, the team will fail
back to it.
4. Click OK, back in the settings window, that adapter should become active.
5. Shut down whichever fex port corresponds to that adapter and watch the team fail over to
another adapter.
6. Repair that fex port (no shut), and watch it fail back.
7. In the Team Settings window, click Modify Team, and access the Type tab on the new
window.

8. Change the type to Adaptive Load Balancing and click OK.

This is "active-active-active-active" teaming, using a separate MAC address on each
adapter, with no port channel. It will work on any switch. Essentially, your Windows
server will round-robin transmitted packets, including ARP responses for the team IP,
which will "naturally" do some load balancing on inbound packets to the IP
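
If you want to see those separate per-adapter MAC addresses for yourself (an optional
check, not a lab requirement), you can list them from your command window on the
server:

getmac /v

Each quad-port adapter should report its own physical MAC address.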

9. Back in the team settings window, watch your adapters change to “Active-Active-Active-
Active”.

10. Shut down fex ports (the remaining ports will stay active). There could still be a second-
or-two interruption to pings. Machines out there that have a "dead" MAC address cached in
their ARP tables should have it updated automatically by a team-generated gratuitous ARP.

11. Restore all your Fex ports (no shut)

12. Switch the teaming type back to Adapter Fault Tolerance

Task 9: Modify connections from a 5K to a Fex to Use Static
Pinning Rather than Port Channel (then switch it back)
You can connect a 5K to a fex with static pinning --- fex ports use individual
connections to send and receive data to the 5K, and loss of a single connection implies
loss of the fex ports pinned to it. The advantage is more customized "traffic shaping".

1. On 5k-1 (only), undo the current connection to Fex 1 (id 100)


podX-n5k-1(config-if)# int po78
podX-n5k-1(config-if)# no fex associate 100
2011 Mar 18 21:36:14 podX-n5k-1 %$ VDC-1 %$ %NOHMS-2-
NOHMS_ENV_FEX_OFFLINE: FEX-100 Off-line (Serial Number )
2011 Mar 18 21:36:15 podX-n5k-1 %$ VDC-1 %$ %PFMA-2-
FEX_STATUS: Fex 100 is offline
podX-n5k-1(config-if)# no int po78

2. On 5K-1 (only), redo the fex association with static pinning. You need to change the fex
properties (pinning max-links) so that the fex will send traffic for different ports over
different connections:
podX-n5k-1(config)# fex 100
podX-n5k-1(config-fex)# pinning max-links 2
Change in Max-links will cause traffic disruption.
podX-n5k-1(config-fex)# int e1/7-8
podX-n5k-1(config-if-range)# fex associate 100
podX-n5k-1(config-if-range)# sh fex (repeat until fex is online)
podX-n5k-1(config-if-range)# sh fex 100 detail
Note how the first 24 fex ports are connected via 5K port 7, and the second 24 through 5K port 8
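
An optional way to see the pinning (mentioned here as a hedged extra, not a required
step) is to ask each fabric port which satellite ports it is carrying:

podX-n5k-1(config-if-range)# sh int e1/7 fex-intf
podX-n5k-1(config-if-range)# sh int e1/8 fex-intf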

3. Switch it back so it is pinned to port channel again:


podX-n5k-1(config-if-range)# int e1/7-8
podX-n5k-1(config-if-range)# no fex associate 100
2011 Mar 18 21:42:21 podX-n5k-1 %$ VDC-1 %$ %NOHMS-2-
NOHMS_ENV_FEX_OFFLINE: FEX-100 Off-line (Serial Number )
2011 Mar 18 21:42:21 podX-n5k-1 %$ VDC-1 %$ %PFMA-2-
FEX_STATUS: Fex 100 is offline

podX-n5k-1(config-if-range)# channel-group 78
podX-n5k-1(config-if-range)# fex 100
podX-n5k-1(config-fex)# pinning max-links 1
Change in Max-links will cause traffic disruption.
podX-n5k-1(config-fex)# int po78
podX-n5k-1(config-if)# fex associate 100

podX-n5k-1(config-if)# sh fex (repeat until fex is online)
podX-n5k-1(config-if)# sh fex 100 detail
(back the way it was before)

Task 10: Set up the VPC (Virtual Port Channel) Infrastructure
on the two pod 5K’s
Setting up the VPC infrastructure on (precisely) a pair of 5K's involves:
 Creating the VPC domain (an ID which identifies the two 5K's as vpc partners)
 Creating the peer-keepalive (typically over the management network)
 Creating the vpc peer-link (this is a real data channel that carries VLAN traffic
between the pair); see the optional verification sketch after this list
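
Once both switches are configured (steps 1 and 2 below), a couple of optional commands --
hedged here, since the required check is the sh vpc brief in step 3 -- let you inspect
each piece individually:

podX-n5k-n# sh vpc peer-keepalive
podX-n5k-n# sh vpc role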

1. On 5k-1, configure the VPC infrastructure. For your domain, use X where X is your pod
number.

Ports 17-18 are the 5K-5K links (for vpc peer) on every pod.

podX-n5k-1# conf t
podX-n5k-1(config)# feature lacp
podX-n5k-1(config)# feature vpc
podX-n5k-1(config)# vpc domain X
podX-n5k-1(config-vpc-domain)# peer-keepalive des 10.2.8.X4
(make sure you use the mgmt IP of the other 5K in YOUR Pod; it won't let you use the IP
of the 5K you are typing on)

Note:
--------:: Management VRF will be used as the default VRF
::--------
podX-n5k-1(config-vpc-domain)# int e1/17-18
podX-n5k-1(config-if-range)# channel-group 1718 mode active
podX-n5k-1(config-if-range)# int po 1718
podX-n5k-1(config-if)# switch mode trunk
podX-n5k-1(config-if)# switch trunk allow vlan all
podX-n5k-1(config-if)# vpc peer-link
Please note that spanning tree port type is changed to
"network" port type on vPC peer-link.
This will enable spanning tree Bridge Assurance on vPC
peer-link provided the STP Bridge Assurance
(which is enabled by default) is not disabled.

2. On 5K-2, configure the VPC infrastructure. For your domain, use X where X is your pod
number.

Ports 17-18 are the 5K-5K links (for vpc peer) on every pod.

podX-n5k-2# conf t
podX-n5k-2(config)# feature lacp
podX-n5k-2(config)# feature vpc
podX-n5k-2(config)# vpc domain X
podX-n5k-2(config-vpc-domain)# peer-keepalive des 10.2.8.X3
(make sure you use the mgmt IP of the other 5K in YOUR Pod; it won't let you use the IP
of the 5K you are typing on)

Note:
--------:: Management VRF will be used as the default VRF
::--------
podX-n5k-2(config-vpc-domain)# int e1/17-18
podX-n5k-2(config-if-range)# channel-group 1718 mode active
podX-n5k-2(config-if-range)# int po 1718
podX-n5k-2(config-if)# switch mode trunk
podX-n5k-2(config-if)# switch trunk allow vlan all
podX-n5k-2(config-if)# vpc peer-link
Please note that spanning tree port type is changed to
"network" port type on vPC peer-link.
This will enable spanning tree Bridge Assurance on vPC
peer-link provided the STP Bridge Assurance
(which is enabled by default) is not disabled.

3. On each 5K, verify that the VPC infrastructure is healthy.

Note it can take up to 30 seconds or so after correct configuration before it shows that the
infrastructure is healthy. Keep running sh vpc brief until it looks like the example below.
If it persists in being unhealthy, you have done something wrong.

podX-n5k-X(config-if)# sh vpc brief


Legend:
(*) - local vPC is down, forwarding via vPC
peer-link

vPC domain id : X
Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status: success
Per-vlan consistency status : success
Type-2 consistency status : success
vPC role : secondary
Number of vPCs configured : 0

Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled

vPC Peer-link status


----------------------------------------------------
id Port Status Active vlans
-- ---- ------ --------------------------------
1 Po1718 up 1,930
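
If sh vpc brief ever reports a consistency problem instead, an optional way to see
exactly which parameters disagree between the two peers is:

podX-n5k-X# sh vpc consistency-parameters global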

Task 11: Set up VPC between 5K and Fex (so both 5K connected to both Fex)
This scenario uses VPC so that a single 2K can be connected to two 5K's. The four "uplinks"
are exposed to each fex as a virtual port channel that distributes over the two 5K's.

You can lose an entire 5K in this scenario, and none of your fex ports are affected.

In this scenario, you cannot expose an LACP port channel down from the fex to the
server.

[Diagram: both 5K's connected to both fexes; the server runs a failover or load-balancing
team (separate MAC on each adapter) – NO port channel.]

1. On 5k-1 (existing “po78” already connected to fex 1), identify po78 as a vpc (the fex
goes offline and then back online as it recognizes “half the vpc” is up)

podX-n5k-1(config-if)# int po78


podX-n5k-1(config-if)# vpc 78

2. On 5K-2, create the other half of the vpc connected to fex 1. What has to match the first
5K is the vpc identifier.

At the end, you will make the port settings on fex 1 consistent. The individual fex ports are
not themselves vpc's, but they require manual vpc consistency across the two 5K's:
podX-n5k-2(config-if)# fex 100
podX-n5k-2(config-fex)# description "Fex 1 (ports 7,8)"
podX-n5k-2(config-fex)# pinning max-links 1
Change in Max-links will cause traffic disruption.
podX-n5k-2(config-fex)# int e1/7-8
podX-n5k-2(config-if-range)# channel-group 78
podX-n5k-2(config-if-range)# int po78
podX-n5k-2(config-if)# switch mode fex
podX-n5k-2(config-if)# fex associate 100
podX-n5k-2(config-if)# vpc 78
you will see the fex come online on both 5k after a little while
podX-n5k-2(config-if)# sh fex
podX-n5k-2(config-if)# int e100/1/1-2
podX-n5k-2(config-if-range)# switch access vlan 930
podX-n5k-2(config-if-range)# sh vpc consistency int
e100/1/1

3. On 5K-2 (existing "po91" already connected to fex 2), identify po91 as a vpc (the fex goes
offline and then back online as it recognizes "half the vpc" is up)

podX-n5k-2(config-if)# int po91


podX-n5k-2(config-if)# vpc 91

4. On 5k-1, create the other half of the vpc connected to fex 2.


podX-n5k-1(config-if)# fex 101
podX-n5k-1(config-fex)# description "Fex 2 (ports 9,10)"
podX-n5k-1(config-fex)# pinning max-links 1
Change in Max-links will cause traffic disruption.
podX-n5k-1(config-fex)# int e1/9-10
podX-n5k-1(config-if-range)# channel-group 91
podX-n5k-1(config-if-range)# int po91

podX-n5k-1(config-if)# switch mode fex
podX-n5k-1(config-if)# fex associate 101
podX-n5k-1(config-if)# vpc 91
you will see the fex come online on both 5k after a while
podX-n5k-1(config-if)# sh fex

podX-n5k-1(config-if)# int e101/1/1-2


podX-n5k-1(config-if-range)# switch access vlan 930
podX-n5k-1(config-if-range)# sh vpc consistency int
e101/1/1

5. On your Windows server, failover teaming (or adaptive load-balancing teaming) should
still work the same as before. You can shut down any fex port (e100/1/1, e100/1/2,
e101/1/1, e101/1/2) from either 5K. Verify behavior as you shut down fex ports by
keeping your "ping –t 100.1.1.21" (or .22) running, and looking at the team settings that
show the adapter status as before.

6. Make sure all the fex ports are now in a “no shut” mode.

7. Reboot an entire 5K. Note that all your fex ports stay alive.

podX-n5k-1# copy run start


podX-n5k-1# reload
WARNING: This command will reboot the system
Do you want to continue? (y/n) [n] y
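
While the rebooted 5K is down, you can optionally watch the surviving peer (5K-2 in this
example) with sh vpc brief: the peer status should show the peer as down while your vpc's
(and all the fex ports) keep forwarding, then return to "peer adjacency formed ok" once
the reboot completes.

podX-n5k-2# sh vpc brief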

Task 12: Set up VPC / LACP on Fex Ports across Multiple
Fexes (with each 5K connected to only 1 Fex)
This scenario allows you to configure VPC across fexes down to the Server. You have to
revert to each 5K connected to only one fex (you cannot have VPC on "both sides of the
fex"); VPC (LACP) down to the server requires straight-through connections to the fex.

It looks like this: [Diagram: each 5K connected straight-through to a single fex, with the
server's LACP port channel spanning ports on both fexes.]

1. Disconnect the extra 5K-fex connections you made from 5K-1 in the previous task, and
"un-vpc-ify" the remaining connection.
podX-n5k-1# conf t
Enter configuration commands, one per line. End with
CNTL/Z.
podX-n5k-1(config)# int po91
podX-n5k-1(config-if)# no fex assoc 101
podX-n5k-1(config-if)# no int po91
podX-n5k-1(config)# no fex 101

podX-n5k-1(config)# int po78


podX-n5k-1(config-if)# no vpc 78
(the remaining fex will stay offline until the corresponding vpc is removed on the other
N5K. The disconnected fex will maintain its numerical identity and description, for no
particular reason; we would rather have them disappear)
podX-n5k-1(config-if)# sh fex

2. Disconnect the extra 5K-fex connections you made from 5K-2 in the previous task, and
"un-vpc-ify" the remaining connection.
podX-n5k-2# conf t
Enter configuration commands, one per line. End with
CNTL/Z.
podX-n5k-2(config)# int po78
podX-n5k-2(config-if)# no fex assoc 100
2011 Mar 18 23:30:05 podX-n5k-2 %$ VDC-1 %$ %NOHMS-2-
NOHMS_ENV_FEX_OFFLINE: FEX-100 Off-line (Serial Number )
2011 Mar 18 23:30:05 podX-n5k-2 %$ VDC-1 %$ %PFMA-2-
FEX_STATUS: Fex 100 is offline
podX-n5k-2(config-if)# no int po78

podX-n5k-2(config)# int po91


podX-n5k-2(config-if)# no vpc 91
(the remaining fex comes back online after a while. The disconnected fex will maintain its
numerical identity and description, for no particular reason; we would rather have them
disappear)

podX-n5k-2(config-if)# sh fex

3. On 5K-1, create an LACP port channel using the two fex ports, and give it a VPC
identity:
podX-n5k-1(config)# int e100/1/1-2
podX-n5k-1(config-if-range)# channel-group 1212 mode active
podX-n5k-1(config-if-range)# int po1212
podX-n5k-1(config-if)# vpc 1212

4. On 5K-2, create an LACP port channel using two ports on other fex, and match the VPC
identity from the first fex on the first 5K.
podX-n5k-2(config)# int e101/1/1-2
podX-n5k-2(config-if-range)# channel-group 1212 mode active
podX-n5k-2(config-if-range)# int po1212
podX-n5k-2(config-if)# vpc 1212
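
Before moving to the server side, you can optionally confirm that the two halves agree
(the port channel itself will stay down until the matching LACP team is configured on the
server in the next step):

podX-n5k-n# sh vpc 1212
podX-n5k-n# sh port-channel summary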

5. On your server, configure your team adapter and change the type to use LACP (IEEE
802.3ad Dynamic Link Aggregation). Click Modify Team.. on the team settings page,
and then choose the new type as shown, and click OK:

6. After a little while (it could take up to 30 seconds), your team should go "Active-Active-
Active-Active" using a real LACP port channel (single MAC address).

You can verify the state of the port channel on the two 5K. It should look like the
example below (with E101/1/1 and E101/1/2 on the other 5K)
podX-n5k-2(config-if)# sh port-c sum
….
78 Po78(SU) Eth NONE Eth1/7(P) Eth1/8(P)
1212 Po1212(SU) Eth LACP Eth100/1/1(P) Eth100/1/2(P)
1718 Po1718(SU) Eth LACP Eth1/17(P) Eth1/18(P)
7. Experiment with failure of individual fex ports, as before. Keep your pings going and
your team status window open.

Lab: Multihop FCOE Add-on (now with FCOE-NPV extra add-
on!)

There is a separate FCOE lab for Nexus 5000 which does the full set up for FCOE (ignoring the
fexes), configuring from scratch as you have already done in this lab.

Since you already have your N5K framework set up – if you have a little extra time, feel free to
do this version of the FCOE lab. It is identical to the "official FCOE lab", without the initial
setup parts.

Lab Topology:

Your 5K's will communicate FCOE northbound using the same connections they are using for
the northbound Ethernet ('though not the "crossovers" --- FCOE will pass only from your 5K-1
to core 5K-1 and your 5K-2 to core 5K-2). From there, the core 5K's "translate" FCOE to
traditional fibre channel --- Core-1 is connected to the A controller of an EMC array, and
Core-2 to the B controller.

Task F1: Configure a new Data VLAN (Just so we can
distinguish Ethernet traffic on the CNA from the Quad
adapters)
1. Configure a new data vlan, 1422, and allow it northbound. Perform the following on
each 5K.

podX-n5k-n# conf t
Enter configuration commands, one per line. End with
CNTL/Z.
podX-n5k-n(config)# vlan 1422
podX-n5k-n(config-vlan)# int e1/15-16
podX-n5k-n(config-if-range)# switchport trunk allow vlan add 1422
podX-n5k-n(config-if-range)# end
podX-n5k-n# sh int e1/15
podX-n5k-n# sh int e1/15 switchport
podX-n5k-n# sh int e1/16
podX-n5k-n# sh int e1/16 switchport

Remember to do this on both 5K

Task F2: Initialize FCOE, Configure separate fabric VSAN's
(and FCOE VLAN's to carry them)
As is typical best practice, you will be using two different VSAN's on the A (to and from
your 5K-1) and B (to and from your 5K-2) fabrics.

Because the northbound core is preconfigured for you, you must use the exact VSAN's
and FCOE VLAN's specified in the lab (11 on 5K-1, 12 on 5K-2)

1. On your 5K-1 (only), enable FCOE, VSAN 11, and FCOE VLAN 11:

podX-n5k-1# conf t
podX-n5k-1 (config)# feature fcoe
FC license checked out successfully
fc_plugin extracted successfully
FC plugin loaded successfully
FCoE manager enabled successfully
FC enabled on all modules successfully
Enabled FCoE QoS policies successfully

podX-n5k-1(config)# vsan datab


podX-n5k-1(config-vsan-db)# vsan 11
podX-n5k-1(config-vsan-db)# vlan 11
podX-n5k-1(config-vlan)# fcoe vsan 11
podX-n5k-1(config-vlan)# exit
podX-n5k-1(config)# show vlan fcoe

Original VLAN ID Translated VSAN ID Association State


---------------- ------------------ -----------------
11 11 Operational

podX-n5k-1(config)# int e1/15


podX-n5k-1(config-if)# switchport trunk allow vlan add 11
podX-n5k-1(config-if)# show int e1/15 switchport
Name: Ethernet1/15
Switchport: Enabled
Switchport Monitor: Not enabled
Operational Mode: trunk
Access Mode VLAN: 1 (default)
Trunking Native Mode VLAN: 1 (default)
Trunking VLANs Enabled: 1,11,930,1422

2. On your 5K-2 (only), enable FCOE, VSAN 12, and FCOE VLAN 12:

podX-n5k-2# conf t
podX-n5k-2(config)# feature fcoe
FC license checked out successfully
fc_plugin extracted successfully
FC plugin loaded successfully
FCoE manager enabled successfully
FC enabled on all modules successfully
Enabled FCoE QoS policies successfully

podX-n5k-2(config)# vsan datab


podX-n5k-2(config-vsan-db)# vsan 12
podX-n5k-2(config-vsan-db)# vlan 12
podX-n5k-2(config-vlan)# fcoe vsan 12
podX-n5k-2(config-vlan)# exit
podX-n5k-2(config)# show vlan fcoe

Original VLAN ID Translated VSAN ID Association State


---------------- ------------------ -----------------
12 12 Operational

podX-n5k-2(config)# int e1/16


podX-n5k-2(config-if)# switchport trunk allow vlan add 12
podX-n5k-2(config-if)# show int e1/16 switchport
Name: Ethernet1/16
Switchport: Enabled
Switchport Monitor: Not enabled
Operational Mode: trunk
Access Mode VLAN: 1 (default)
Trunking Native Mode VLAN: 1 (default)
Trunking VLANs Enabled: 1,12,930,1422

Task F3: Configure VE Ports (Inter-switch FCOE ports)
The northbound ports on the core are preconfigured for you. The ports on your side will
come up only if you do everything absolutely correctly!

1. Run the following commands on 5K-1 (only):

podX-n5k-1# conf t
Enter configuration commands, one per line. End with
CNTL/Z.
podX-n5k-1(config)# int vfc1
podX-n5k-1(config-if)# switchport mode e
podX-n5k-1(config-if)# bind int e1/15
podX-n5k-1(config-if)# no shut

2. Run the following command until you see VSAN 11 become active on the link, as
indicated in the output below. It may take 20 seconds or so (just keep running the
command until it looks good). If VSAN 11 never becomes active on the link, you have
done something wrong previously:

podX-n5k-1(config-if)# sh int vfc1


vfc1 is trunking (Not all VSANs UP on the trunk)
Bound interface is Ethernet1/15
Hardware is Virtual Fibre Channel
Port WWN is 20:00:00:05:9b:22:91:ff
Admin port mode is E, trunk mode is on
snmp link state traps are enabled
Port mode is TE
Port vsan is 1
Trunk vsans (admin allowed and active) (1,11)
Trunk vsans (up) (11)
Trunk vsans (isolated) ()
Trunk vsans (initializing) (1)
etc etc etc
podX-n5k-1(config-if)# sh int vfc1 brief
note it is fine that this interface is officially a "member" of VSAN 1 – by default VE ports
trunk all possible VSAN's (as long as the corresponding FCOE VLAN is allowed on the
underlying ethernet port)

3. Examine the zoning (active zoneset) that is automatically loaded from the northbound
Core N5K. There is a separate zone for each pod (but all zoned to the same EMC port
50:xx:xx:xx:xx:xx:xx:xx) – this is recommended practice.

Do NOT activate your own zoneset (it would be pushed to the fabric and you would
pollute the zoning for every pod in the entire lab)

podX-n5k-1(config-if)# sh zoneset active


4. Run the following commands on 5K-2 (only):

podX-n5k-2# conf t
Enter configuration commands, one per line. End with
CNTL/Z.
podX-n5k-2(config)# int vfc1
podX-n5k-2(config-if)# switchport mode e
podX-n5k-2(config-if)# bind int e1/16
podX-n5k-2(config-if)# no shut

5. Run the following command until you see VSAN 12 become active on the link, as
indicated in the output below. It may take 20 seconds or so (just keep running the
command until it looks good). If VSAN 12 never becomes active on the link, you have
done something wrong previously:

podX-n5k-2(config-if)# sh int vfc1


vfc1 is trunking (Not all VSANs UP on the trunk)
Bound interface is Ethernet1/16
Hardware is Virtual Fibre Channel
Port WWN is 20:00:00:05:9b:22:91:ff
Admin port mode is E, trunk mode is on
snmp link state traps are enabled
Port mode is TE
Port vsan is 1
Trunk vsans (admin allowed and active) (1,12)
Trunk vsans (up) (12)
Trunk vsans (isolated) ()
Trunk vsans (initializing) (1)
etc etc etc

podX-n5k-2(config-if)# sh int vfc1 brief


note it is fine that this interface is officially a "member" of VSAN 1 – by default VE ports
trunk all possible VSAN's (as long as the corresponding FCOE VLAN is allowed on the
underlying ethernet port)

6. Examine the zoning (active zoneset) that is automatically loaded from the northbound
Core N5K. There is a separate zone for each pod (but all zoned to the same EMC port
50:xx:xx:xx:xx:xx:xx:xx) – this is recommended practice.

Do NOT activate your own zoneset (it would be pushed to the fabric and you would
pollute the zoning for every pod in the entire lab)

podX-n5k-2(config-if)# sh zoneset active

Task F4: Configure Port e1/1 on each 5K as a VPC (down to
the Server)
1. On each 5K, issue the following:

podX-n5k-X(config-if)# int e1/1


podX-n5k-X(config-if)# channel-group 1111 mode active
podX-n5k-X(config-if)# int po1111
podX-n5k-X(config-if)# switch mode trunk
podX-n5k-X(config-if)# switch trunk native vlan 1422
podX-n5k-X(config-if)# spanning-tree port type edge trunk
you will get warnings that you can ignore. You must use this setting, otherwise traffic may
be blocked on the port by spanning tree even if it seems connected.

podX-n5k-X(config-if)# vpc 1111

podX-n5k-X(config-if)# show int po1111 switchport


since all VLANs are allowed on the trunk by default, you should see your FCOE VLAN (11 on
5K-1, 12 on 5K-2) allowed (but not native) on the trunk, as well as the data vlans. This is the
requirement for FCOE.

podX-n5k-X(config-if)# show port-channel summary

Remember to do both 5K. The port-channel will not come up until the equivalent
LACP configuration is created on the Server Side

2. On your server, run the QLogic QConvergeConsole CLI – there is an icon on your
desktop, or you can run it from the Start menu. It runs in a cmd shell.
3. It takes some time (30 sec. or so) for the utility to scan your CNA. Once it starts talking
for real, interact with the setup dialogue as follows (not all menus are shown in full; you
always just type numbers, or sometimes the word ALL):
a. Main Menu (choose 2) [Adapter Configuration]
b. Choose 1 [CNA Configuration]
c. Choose 2 [CNA NIC Configuration]
d. Choose 4 [Team Configuration]
e. Choose 3 [Configure New Team]
f. Choose 4 [802.3ad Dynamic Team – Active LACP]
g. Enter ALL when asked to choose ports (you only have 2)
h. Enter no for setting primary (doesn't matter)
i. Enter no so it doesn't add non-Qlogic ports
j. User assigned name: QTeam (or make up your own, use ATeam if you are a fan
of Mr. T)
k. Configure Team Parameters: no

l. After it starts to create the team, you will get a Windows popup warning that the
drivers have not passed Microsoft Logo Testing (now shame on them!!!!). Click
Continue Anyway.
m. Wait until the CLI confirms it has created the team and has finished rescanning
the adapters.
n. You can quit the CLI (you'll figure out how).
4. Look again at your Windows network connections window. You will have a new virtual
adapter representing the Qlogic Team:

5. Rename the new adapter if you like (right-click on the "Local Area" name as shown, and
choose Rename).
6. Back on your 5K, you should see your VPC come alive. Type the following on each 5K
to verify that both sides of the VPC are connected.

podX-n5k-X# sh port-channel summary


podX-n5k-X# sh vpc 1111

7. Back on your server, your new virtual adapter (which you may have renamed, above)
should receive an IP address via DHCP beginning with 14.22.1.

8. Open up a command window. If you are "cool" and can't live life without a Unix-like
shell, use the Cygwin icon on the desktop (a regular cmd window is fine too).

Run ipconfig /all and verify that your new virtual adapter has such an IP address.

If it does not have such an IP (can't get an IP via DHCP), you can try to get a DHCP
address again with ipconfig /renew adapter-name . If you really can't get a
DHCP address, you have likely done something wrong in the previous configuration.

9. In your command window, run "ping –t 14.22.1.21" (or .22).

The very first ping request may time out, but subsequent pings should succeed. You can
keep this window open and keep the ping going to see what happens as you simulate
failures by shutting down individual ports on the 5K.

Task F5: Configure the VFC "F-Port" Down to the Server
1. On 5K-1 (only), enter the following:
podX-n5k-1# conf t
podX-n5k-1(config)# int vfc2
podX-n5k-1(config-if)# vsan datab
podX-n5k-1(config-vsan-db)# vsan 11 interface vfc2
podX-n5k-1(config-vsan-db)# int vfc2
podX-n5k-1(config-if)# bind int po1111
podX-n5k-1(config-if)# no shut
podX-n5k-1(config-if)# sh flogi datab
after a few seconds (you may have to run it once or twice more) you should see the
WWPN of the CNA port 1 on the server logged in

podX-n5k-1(config-if)# sh int vfc2


podX-n5k-1(config-if)# sh int vfc2 brief

VFC F-ports (to a server or to storage) are always TF (trunking F) ports in the current
NX-OS. The QLogic Gen-2 CNA cards are not VSAN aware and will only pass traffic on
the "native" VSAN.

podX-n5k-1(config-if)# sh zoneset active

Find the zone for your pod within the active zoneset --- now both the EMC port WWN
and the server initiator port WWN (the same one you see in sh flogi datab) should have
"*" and a fibre channel id (fcid) in front of them to show they are logged in to the fabric.

2. On 5k-2 (only), enter the following:


podX-n5k-2# conf t
Enter configuration commands, one per line. End with
CNTL/Z.
podX-n5k-2(config)# int vfc2
podX-n5k-2(config-if)# vsan datab
podX-n5k-2(config-vsan-db)# vsan 12 interface vfc2
podX-n5k-2(config-vsan-db)# int vfc2
podX-n5k-2(config-if)# bind int po1111
podX-n5k-2(config-if)# no shut
podX-n5k-2(config-if)# sh flogi datab
after a few seconds (you may have to run it once or twice more) you should see the
WWPN of the CNA port 2 on the server logged in

podX-n5k-2(config-if)# sh int vfc2


podX-n5k-2(config-if)# sh int vfc2 brief
podX-n5k-2(config-if)# sh zoneset active

Task F6: Verify that You can See and Use Multipathed SAN
Storage on the Server

1. Invoke the PowerPath monitor. After just a second or two it should show a new SAN
disk with two paths:

2. Invoke the Disk Manager (Start -> right-click on My Computer -> Manage, and click
Disk Management)
3. You should see a popup asking if you want to initialize a new disk:

4. Accept the defaults for the wizard (it will have you initialize but not convert the new
disk)
5. In the disk manager, right-click on the layout indicator for your new disk and click New
Partition..:

6. Proceed through the partition wizard, accepting all the defaults, which will create a new
NTFS filesystem on your new multipathed SAN LUN mounted on E:

7. Use your new E: disk to your heart's delight.

8. Cause a failure of one of your FCOE paths: On 5K-1 (only!):

podX-n5k-1# conf
podX-n5k-1(config)# int e1/1
podX-n5k-1(config-if)# shut

9. Check what happens in your PowerPath monitor (you should see the icon in your system
tray flashing and, if you open it, one dead path) – be patient; it may take 20-30 seconds
to respond.

10. Check what happens with your E: storage (it should be fine).

11. Check what happens with pinging 14.22.1.21 (it should be fine: ethernet-wise, this is an
LACP port-channel)

12. Restore your dead path:

podX-n5k-1# conf
podX-n5k-1(config)# int e1/1
podX-n5k-1(config-if)# no shut

13. Check PowerPath again and make sure all paths are restored.

Using NPV Mode over FCOE
The following diagram gives a very terse description of NPV (N-port virtualization) and NPIV
(N-port ID virtualization). NPV over FCOE is supported starting with version 5.0(3)N2(1).

[Diagram: the core switch (NPIV) accepts multiple FC logins on a single F or VF port, and
zoning is done there only; edge servers show up as logged in directly on the core. The edge
switch (NPV) "proxies" server logins through its northbound NP or VNP ports, and servers
attach to its F or VF ports as FC or FCOE N or VN ports. The NPV edge is simpler: no FC
Domain ID, and no zoning!]

A brief summary is:

 An edge switch running in NPV mode proxies its downstream server logins up to
the core
 A core switch must have NPIV enabled in order to support a downstream edge
switch running in NPV mode
 Originally, the NPV (edge) to F (core) connections were single-VSAN only
(you needed separate such connections for each VSAN). Today, VSAN trunking is
supported on both sides (you need separate connections only for redundancy). The
running mode of the virtual ports will be reported as "TNP" and "TF"; see the
optional status sketch after this list
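
Once NPV is actually enabled on your edge switches (Task NPV3), an optional status
command -- mentioned here as a hedged extra, not a required lab step -- shows the edge's
view of its external (NP) interfaces and their VSANs:

podX-n5k-n# sh npv status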

Task NPV1: Switch the Core Port to F Mode

1. Log into Core N5K 1 (also known as A):

Pods 1-8 : 10.2.8.9
Pods 11-18: 10.2.8.201

Use the restricted user credentials coreuserX / nexus5k . Use your pod number in
place of X.

2. Show the status of the virtual interface going down to your pod (X is your pod
number – this has been intentionally set up so that vfc1 goes to pod 1, vfc2 goes
to pod 2, and so on). Note this is currently an "E" mode port.

N5K-Core-1# sh int vfcX

3. Change the port mode (keeping the virtual interface bound to the same physical
interface). Since we have not yet modified the edge switch, you will not see
VSAN 11 become active:

N5K-Core-1# conf t
N5K-Core-1(config)# int vfcX
N5K-Core-1(config-if)# shut
N5K-Core-1(config-if)# switch mode F
N5K-Core-1(config-if)# no shut
N5K-Core-1(config-if)# end
N5K-Core-1# sh int vfcX

4. Invoke the EMC Powerpath monitor on your server (remote desktop) and note
that one of your paths to the SAN storage is now dead

5. Repeat steps 1-3 on Core N5K 2 (also known as B):

Pods 1-8 : 10.2.8.10
Pods 11-18: 10.2.8.202

Same credentials as on the first core. You will be modifying the vfcX with the exact
same X as on the first core switch (matching your pod number)

6. Note (in the EMC Powerpath monitor on your server) that both paths to the SAN
storage are now dead.

Task NPV2: Undo FCOE Configurations on Your Pod (Edge)
Switches and Reload
1. Undo all FCOE configurations (you will still be keeping your Ethernet, port channel, and
vpc configurations) on your pod's n5k-1.

You will see (expected) error messages about your vfc ports being down as you delete
them.

Make sure you save your startup configuration (as shown) before you reload!!

podX-n5k-1# conf t
podX-n5k-1(config)# no int vfc1
podX-n5k-1(config)# no int vfc2
podX-n5k-1(config)# vsan datab
podX-n5k-1(config-vsan-db)# no vsan 11
Do you want to continue? (y/n) y
podX-n5k-1(config-vsan-db)# vlan 11
podX-n5k-1(config-vlan)# no fcoe vsan 11
podX-n5k-1(config-vlan)# no feature fcoe
podX-n5k-1(config)# copy run start
[########################################] 100%
podX-n5k-1(config)# reload
WARNING: This command will reboot the system
Do you want to continue? (y/n) [n] y
//you can proceed to step 2 and do the other switch as the first one reboots
2. Do the same process on your pod's n5k-2

podX-n5k-2# conf t
podX-n5k-2(config)# no int vfc1
podX-n5k-2(config)# no int vfc2
podX-n5k-2(config)# vsan datab
podX-n5k-2(config-vsan-db)# no vsan 12
Do you want to continue? (y/n) y
podX-n5k-2(config-vsan-db)# vlan 12
podX-n5k-2(config-vlan)# no fcoe vsan 12
podX-n5k-2(config-vlan)# no feature fcoe
podX-n5k-2(config)# copy run start
[########################################] 100%
podX-n5k-2(config)# reload
WARNING: This command will reboot the system
Do you want to continue? (y/n) [n] y

You have to wait for your switch(es) to reload. You can monitor them back on the console
ports, or just wait until you are able to connect via ssh using putty again.

Task NPV3: Enable FCOE-NPV and Set up the Northbound
NP Port (to the Core)

1. On switch 1, (once it has reloaded and you can login again):

podX-n5k-1# conf t
podX-n5k-1(config)# feature fcoe-npv
podX-n5k-1(config)# vsan datab
podX-n5k-1(config-vsan-db)# vsan 11
podX-n5k-1(config-vsan-db)# vlan 11
podX-n5k-1(config-vlan)# fcoe vsan 11
podX-n5k-1(config-vlan)#
podX-n5k-1(config-vlan)# int vfc1
podX-n5k-1(config-if)# switch mode np
podX-n5k-1(config-if)# bind int e1/15
podX-n5k-1(config-if)# no shut
podX-n5k-1(config-if)# sh int vfc1

make sure the output here shows that vsan 11 is up, including this line of output:
Trunk vsans (up) (11)

If after a couple of "sh int vfc1" it is not up, you have done something wrong previously
and you cannot proceed.

podX-n5k-1(config-if)# sh zoneset active


(there are no zonesets in NPV mode --- "sh zoneset" is now a syntax error!)

2. On Core Switch 1 show what is happening with the core F-port that is enabled for NPIV.

N5K-Core-1# sh int vfcX


N5K-Core-1# sh flogi datab int vfcX

Note the NP (northbound) port on the edge logs in to the Core port --- that is the WWN
that you are seeing.

3. On switch 2 (once it has reloaded and you can log in again):

podX-n5k-2# conf t
podX-n5k-2(config)# feature fcoe-npv
podX-n5k-2(config)# vsan datab
podX-n5k-2(config-vsan-db)# vsan 12
podX-n5k-2(config-vsan-db)# vlan 12
podX-n5k-2(config-vlan)# fcoe vsan 12
podX-n5k-2(config-vlan)#
podX-n5k-2(config-vlan)# int vfc1
podX-n5k-2(config-if)# switch mode np
podX-n5k-2(config-if)# bind int e1/16
podX-n5k-2(config-if)# no shut
podX-n5k-2(config-if)# sh int vfc1

make sure the output here shows that vsan 12 is up, including this line of output:
Trunk vsans (up) (12)

If after a couple of "sh int vfc1" it is not up, you have done something wrong previously
and you cannot proceed.

podX-n5k-2(config-if)# sh zoneset active


(there are no zonesets in NPV mode --- "sh zoneset" is now a syntax error!)

4. On Core Switch 2 show what is happening with the core F-port that is enabled for NPIV.

N5K-Core-2# sh int vfcX


N5K-Core-2# sh flogi datab int vfcX

Note the NP (northbound) port on the edge logs in to the Core port --- that is the WWN
that you are seeing.

Task NPV4: Enable the Port from Your Switch Back Down to
the Server

1. On your switch 1 (NOT the Core):

podX-n5k-1# conf t
podX-n5k-1(config)# int vfc2
podX-n5k-1(config-if)# vsan datab
podX-n5k-1(config-vsan-db)# vsan 11 interface vfc2
podX-n5k-1(config-vsan-db)# int vfc2
podX-n5k-1(config-if)# bind int po1111
podX-n5k-1(config-if)# no shut
podX-n5k-1(config-if)# sh flogi datab
^
% Invalid command at '^' marker.

podX-n5k-1(config-if)# sh npv flogi

Note that the "sh flogi datab" does not show your server as logged in (since its login is
proxied up to the Core). However you can see the server's WWN being proxied using "sh
npv flogi" .

2. On Core Switch 1 (10.2.8.9)

N5K-Core-1# sh flogi datab int vfcX

N5K-Core-1# sh zoneset active

Note how both the edge's northbound NP port, and the server (and as many servers as
you might have in the real world connected to your edge) are logged in on the same "VF"
port on the Core. Zones are still configured/enforced on the core – you should see your
server's WWN and the SAN Array's WWN "connecting" (stars in front of both in the
zoneset output).

3. Back on your server, your storage should be reconnected on that path (check with the
EMC PowerPath monitor).

4. On your switch 2 (NOT the Core):

podX-n5k-2# conf t
podX-n5k-2(config)# int vfc2
podX-n5k-2(config-if)# vsan datab
podX-n5k-2(config-vsan-db)# vsan 12 interface vfc2
podX-n5k-2(config-vsan-db)# int vfc2
podX-n5k-2(config-if)# bind int po1111
podX-n5k-2(config-if)# no shut
podX-n5k-2(config-if)# sh flogi datab
^
% Invalid command at '^' marker.

podX-n5k-2(config-if)# sh npv flogi

Note that the "sh flogi datab" does not show your server as logged in (since its login is
proxied up to the Core). However you can see the server's WWN being proxied using "sh
npv flogi" .

5. On Core Switch 2 (10.2.8.10)

N5K-Core-2# sh flogi datab int vfcX

N5K-Core-2# sh zoneset active

Note how both the edge's northbound NP port, and the server (and as many servers as
you might have in the real world connected to your edge) are logged in on the same "VF"
port on the Core. Zones are still configured/enforced on the core – you should see your
server's WWN and the SAN Array's WWN "connecting" (stars in front of both in the
zoneset output).

6. Back on your server, your storage should be reconnected on the second path (check with
the EMC PowerPath monitor).

Optional Add-on: Adapter-Fex and VM-Fex for Nexus 5548

Note --- the previous lab sections in this write-up are prerequisites for this lab, except for
the NPV part (you can continue here whether or not you completed it).

Discussion:
Adapter-Fex and VM-Fex are features of the Nexus 5548 that allow virtual NICs (vNICs) on a
Cisco P81e Virtual Interface Card (VIC) to be managed as virtual Ethernet ports on the 5548. The
server VIC acts very much like a physical Nexus 2xxx fex, where all port management is
centralized on the controlling Nexus 5xxx.

Adapter-Fex
The adapter-fex feature allows the Nexus 5548 to control vNICs and Fibre Channel vHBAs on
the server's virtual interface card. These virtual NICs appear precisely as if they were physical
adapter ports on the server containing the VIC. This feature has nothing specifically to do with
VMware ESX or virtual machines. The server can be running any OS that has the drivers required
for the NICs and Fibre Channel HBAs presented to it. That OS could be ESX; in that case the
adapters created on the VIC would just look like virtual switch uplinks, with no relationship to
specific virtual machines.

Adapter-fex can be implemented on a single 5548, but it is most likely implemented on a 5548
vPC pair, so that every vNIC on the VIC can be protected by redundancy. The picture looks
like this:

VM-Fex
VM-Fex is a feature whereby the vNICs (not the Fibre Channel vHBAs) created on the server's VIC:

 can be dedicated, vNIC by vNIC, to specific virtual machines.
 make virtual machines visible as consumers of individual virtual Ethernet
ports on the 5548.
 extend the entire layer 2 per-port functionality of the 5548 (every bell and whistle) to
the control of individual virtual machine adapters.
 allow migration of virtual machines (e.g. ESX vMotion) with no change of the
virtual port identification, settings, or counters on the controlling 5548's.

VM-FEX integrates with ESX's "Distributed Virtual Switch" (DVS) feature. DVS is also called
vDS (vSphere Distributed Switch) --- they are one and the same. The DVS concept presents an
abstraction of a single switch across multiple (2-64) instances of ESX, which must be part of the
same Datacenter container in vCenter.

The DVS concept implies that virtual network port configuration is done from some kind of
central manager, rather than separately on each ESX instance as is done with the old "vSwitch"
model. Here, the central point of management for VM networking is the 5548s themselves.

Software related to this feature, known as the VEM (Virtual Ethernet Module), is installed in
ESX. While this is the same VEM software you would use with the Nexus 1000v virtual switch, it
modifies its behavior for VM-Fex: here the VEM works strictly in a pass-through mode and does
no local switching between VMs on the same ESX.

While DVS in general and the VM-fex feature are most interesting with multiple instances of
ESX, at the current time our lab pods provide only one UCS C-series server with a compatible
UCS P81e VIC. The picture looks like this:

As mentioned earlier, a more typical "real picture" has multiple ESX servers inside the same pair
of 5548's, with special value for VM migration – the virtual port identity of a VM on the 5548's,
its configuration, and even its counters are guaranteed to be unchanged across VM migration.

VM-FEX with DirectPath I/O (High-Performance Mode)
The normal functionality of the VM-FEX feature is not hypervisor-bypass (although sometimes
it is incorrectly described as such). Instead it is pass-through --- the VEM running in software in
the ESX is still responsible for providing a virtual port attached to a VM virtual adapter, and
passes through the traffic to the appropriate uplink. This normal functionality works with any
choice of virtual adapter provided to the VM.

UCS VM-FEX has a "high performance mode" that can be requested via the port-profile for all
or some VMs (mixtures of VMs running in normal mode and VMs running in high-performance
mode are fine).

In high performance mode, VM traffic is migrated from the normal "pass-through" model to a
real bypass model, where it is not handled by the VEM. Traffic is routed directly to the Cisco
virtual interface card (the P81e in our example), which itself performs the virtual adapter
emulation. In this case the virtual adapter presented to the VM must be of type vmxnet3. The
picture looks like this:

Lab Topology For This Add-on

Task V1: Enable 5K Features for Virtualization and Set Up the
Connection down to the C-series server VIC

Perform the following on each 5K. After features are enabled (pay no attention to any license
warnings), the connection down to the C-series VIC is set up as switchport mode vntag.

podX-n5k-n# conf t
podX-n5k-n (config)# install feature-set virtualization
podX-n5k-n (config)# feature-set virtualization
podX-n5k-n (config)# vethernet auto-create
// this will cause automatic creation of N5K vethernet interfaces for the adapter-fex
// interfaces (the ones visible to the physical machine)

podX-n5k-n (config)# feature vmfex

podX-n5k-n (config)# int e1/2


podX-n5k-n (config-if)# switchport mode vntag

Remember to perform the task on each 5K.
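
A quick way to verify the result on each switch (a sketch; commands as on our NX-OS release):

podX-n5k-n# sh feature-set
// the virtualization feature-set should show as enabled
podX-n5k-n# sh run int e1/2
// the output should include "switchport mode vntag"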

Task V2: Connect to CIMC For Your C-series Server, Set P81e
VIC Adapter Properties and Reboot

1. On your remote desktop, invoke a browser and enter:


http://10.2.8.X5 where X is your pod number (e.g. 10.2.8.25 for pod 2)

2. If you get any certificate or security warnings, proceed on through – accept and yes to
everything.

3. Log into the CIMC interface using the credentials admin/nexus5k.

4. Invoke the KVM console for your C-series server. We will not need it much, but use it now
just to verify what's going on (use the Launch KVM Console link or the little
keyboard icon at the top). This is a Java Web Start application – please accept any
warnings and watch the console launch. You should see the splash screen for ESXi 5.

5. Back on the CIMC page click Inventory on the left and go to the Network
Adapters tab. You should be here:

6. Click Modify Adapter Properties near the lower left.


7. On the popup, check Enable NIV Mode (both NIV mode and FIP mode enabled).
//Note without NIV mode the P81e is just a plain vanilla two-port CNA

8. Change the Number of VM FEX Interfaces to 10.


9. Click Save Changes
10. Halt and restart the server. Use the little red-down arrow and then green-up arrow at the
top of the CIMC screen (and confirm the pop-ups). You should see your server halt and
start up again in the KVM Console.

Task V3: Create Port-Profiles to be used by the Adapter-Fex
Interfaces

The adapter-fex interfaces (VIC virtual adapters that are visible to the physical OS running on the
server, not to virtual machines) can be configured using port profiles. These profiles can be
either:
a. Applied to N5K vethernet interfaces that are created manually
b. Applied on the CIMC, causing (after a server reboot) the automatic creation of N5K
vethernet interfaces.

On each 5K, create a port-profile to be used with the adapter-fex interfaces.

(Since we will be configuring VM-Fex later, the "static" vNICs we are configuring here are
really just placeholders and it doesn't matter what VLANs flow on them.)

podX-n5k-n# conf t
podX-n5k-n(config)# port-profile type veth dummy-downlink
podX-n5k-n(config-port-prof)# switch mode access
podX-n5k-n(config-port-prof)# switch access vlan 1
podX-n5k-n(config-port-prof)# no shut
podX-n5k-n(config-port-prof)# state enabled

Remember to do the task on both 5Ks.
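
If you want to confirm what you just built, a hedged check:

podX-n5k-n# sh port-profile name dummy-downlink
// should show a vethernet-type profile with access vlan 1 and state enabled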

Task V4: Apply Port-Profiles to Adapter-Fex Interfaces,
Modify vHBAs, and Reboot Server One More Time

1. Go to the CIMC web interface (you might have to log in again if your session times out)
2. Go to the Inventory page and Network Adapters tab as before if not already
there
3. Hit the little Refresh icon (blue circle arrow thing next to the power buttons near the top)
4. Examine the vNICs tab on the bottom section. Note that these are the adapter-fex
interfaces. You could add more (we will not), but they would all be visible to the
physical server OS, not VM's. Note there are no port profiles assigned.

5. Click on the VM FEXs tab. This is kind of a misnomer since it means VM-Fex
interfaces. These are the dynamic vNIC's that will be mapped to individual virtual
machines later. There is nothing to configure here.
6. Go back to the vNICs tab and highlight eth0 as shown above.
7. Click the Properties button above.
8. Scroll down just a tiny bit and choose your dummy-downlink port-profile (note how
it got pushed automatically from the 5K):

9. Click Save Changes. Note the profile is now listed in the display of vNICs.
10. Repeat steps 6-9 for eth1
11. Click on the vHBAs tab:

12. Highlight the first vHBA fc0 (it should already be highlighted) and click Properties
13. In the VLAN field on the right, click the radio button next to the fill-in box and enter 11
// This is the FCoE VLAN – it cannot be native, so the "default" choice here is a
// little confusing

14. Click Save Changes


15. Repeat steps 12-14 for fc1 but use VLAN 12 (make sure to save changes)
16. Halt and restart the server (use the red down and green up at the top) one more time
(sorry, we swear that's the last time).
17. Wait for the server to boot
18. Note the IP of your ESX server on the splash screen in the KVM console. We will use it
to add the ESX host to vCenter in the next step. You can leave the KVM console up or
quit out of it, whichever you like. We will not do anything more with it.

Task V5: Observe Automatically-Created vethernet Interfaces
on N5K

On each 5K, run the following commands. You should see the veth interfaces (one per 5K) get
automatically created. These are for the vNICs (dummy downlinks), not the vHBAs (the
interfaces for those must be created manually, in the next task).

podX-n5k-n# sh int brief


// you will see the veth interface near the bottom

podX-n5k-n# sh int virtual summary


// Note you can see the port-profile associated with the veth interface

podX-n5k-n# sh int virtual status


// You can even see the vntag value used for this vNIC, if that sounds exciting (it is not)

Task V6: Manually Create veth and vfc Interfaces for the FCoE
Connection Down to the Server

A vHBA on the VIC in NIV mode (as we are using it) is represented as a vfc interface bound on
top of a veth interface, which is in turn bound on top of the physical interface. All of this is
created manually.

1. On your 5K-1 (only), run the following to create the proper veth interface on which to
run FCoE (this must be a trunk and the FCoE VLAN must not be native):

podX-n5k-1# conf t
podX-n5k-1(config)# int veth11
podX-n5k-1(config-if)# switch mode trunk
podX-n5k-1(config-if)# switch trunk allow vlan 1,11
podX-n5k-1(config-if)# bind int e1/2 channel 3
podX-n5k-1(config-if)# no shut

2. On your 5K-1 (only), run the following to create the vfc interface. At the end of this step
you should see the initiator on the server log into the fabric:

podX-n5k-1(config-if)# int vfc11


podX-n5k-1(config-if)# vsan datab
podX-n5k-1(config-vsan-db)# vsan 11 interface vfc11
podX-n5k-1(config-vsan-db)# int vfc11
podX-n5k-1(config-if)# bind int veth11
podX-n5k-1(config-if)# no shut
podX-n5k-1(config-if)# sh flogi datab
// you should see the initiator logged in. If you see no output from that last command,
// wait a few seconds and run (just the last command) again
3. On your 5K-2 (only), run the following to create the proper veth interface on which to
run FCoE (this must be a trunk and the FCoE VLAN must not be native):

podX-n5k-2# conf t
podX-n5k-2(config)# int veth12
podX-n5k-2(config-if)# switch mode trunk
podX-n5k-2(config-if)# switch trunk allow vlan 1,12
podX-n5k-2(config-if)# bind int e1/2 channel 4
podX-n5k-2(config-if)# no shut

4. On your 5K-2 (only), run the following to create the vfc interface. At the end of this step
you should see the initiator on the server log into the fabric:

podX-n5k-2(config-if)# int vfc12


podX-n5k-2(config-if)# vsan datab
podX-n5k-2(config-vsan-db)# vsan 12 interface vfc12
podX-n5k-2(config-vsan-db)# int vfc12
podX-n5k-2(config-if)# bind int veth12
podX-n5k-2(config-if)# no shut
podX-n5k-2(config-if)# sh flogi datab
// you should see the initiator logged in. If you see no output from that last command,
// wait a few seconds and run (just the last command) again

Note that the veth names have to be unique across the two 5K's. They do not need to
match any VLAN or VSAN numbers; the naming here is just mnemonic.
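
If the initiator does not log in, a couple of hedged checks can narrow things down (the vfc
cannot come up unless the veth underneath it is up and trunking the FCoE VLAN):

podX-n5k-1# sh int veth11
// the veth should be up and trunking vlans 1 and 11
podX-n5k-1# sh int vfc11
// look for "Trunk vsans (up) (11)", just as on the NP uplink earlier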

Task V7: Use vCenter and build a vCenter Data Center with
your ESX host
1. On your remote desktop, invoke the vSphere client. Log in to server localhost (check the
Use Windows session credentials checkbox)
2. Ignore certificate warnings.
3. Highlight the name of vCenter, right click and create a new DataCenter. Name it MyDC
4. Highlight MyDC, right-click and select Add Host... On the wizard:
a. Host: enter the ESX (C-series) IP( 10.2.8.X6). Enter the user name and
password (root/cangetin) and click Next
b. Accept the ssh key (click yes)
c. Confirm the information and click Next
d. Keep it in Evaluation mode and click Next
e. Keep Lockdown Mode unchecked and click Next
f. Highlight the MyDC and click Next
g. Review the information and click Finish

Your ESX host will be added to vCenter. If your ESX host still shows a red warning,
that is OK (it may just be warning about the amount of RAM), as long as it is connected.

Task V8: Copy VEM Software (for VM-Fex) to ESX and Install
We will manually copy the VEM software, then log into ESX and install it. The VEM software is
normally retrieved from the cisco.com website (if you searched downloads for "5548" you would
find it easily). This is the same VEM as used in Nexus1000v and VM-Fex on UCS. We have
already made a copy for you in My Documents on your desktop.

1. Launch WinSCP from your remote desktop


2. For "Host Name", enter the IP of your ESX server (10.2.8.X6). Use the user/password
root/cangetin and keep everything else the same (SFTP is fine). Click Login…
3. Accept any ssh key warning
4. On the right (remote) side, move into (double-click) the tmp folder
5. On the left side (local machine) navigate to My Documents (you may be there already)
6. Drag the .vib file (cross_cisco-vem-v132-4.2.1.1.4.1.0-3.0.4.vib)
over to the remote side (not into any of the sub-folders). Confirm on the popup that you
want to copy the file.
7. Quit out of WinSCP

8. Open a putty (ssh) connection to the ESX server to which you just copied the VEM
software (same IP). Log in with root/cangetin
9. Install the VEM:
# ls -l /tmp/*.vib
# esxcli software vib install -v /tmp/*.vib
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: Cisco_bootbank_cisco-vem-v132-
esx_4.2.1.1.4.1.0-3.0.4
VIBs Removed:
VIBs Skipped:
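
To double-check the installation, a sketch (the vem command is provided by the Cisco VIB
itself, so it only exists after a successful install):

# esxcli software vib list | grep cisco
// the cisco-vem VIB should appear among the installed VIBs
# vem status
// should report that the VEM agent is running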

Task V9: Install the Nexus 5K VM-Fex plug-in on vCenter
Before you can tell your N5K to connect to vCenter as part of the VM-Fex feature, you need to
retrieve an "extension" file from the 5K and install it on the vCenter server. This serves as
authentication so that vCenter will allow the connection from the 5K.

1. Make sure you know which 5K is the vPC primary. Look in the vPC role field of:
podX-n5k-X# sh vpc brief

2. On whichever 5K is primary (only), enable the http feature:


podX-n5k-X# conf t
podX-n5k-X(config)# feature http

3. Access http://IP_of_your_5K_which_is_primary from a web browser.


(this is 10.2.8.X3 or 10.2.8.X4, whichever is primary vPC)

4. Right click on cisco_nexus_5000_extension.xml and save the file. You can


save it anywhere (Downloads or My Documents is fine) as long as you remember
where it is.

5. In the vSphere client, choose Plug-ins → Manage Plug-ins:

6. At the white space near the bottom of the Plug-in Manager window, right-click to pop up
the New Plug-in button, and click on it.

7. In the Register Plug-in window, click on Browse… and navigate to and select the
extension file that you saved.

8. After you double-click or Open the extension file, its contents will appear in the View Xml
(read-only) area. Click Register Plug-in at the bottom.

9. Click Ignore on the certificate warning popup.

10. Your extension will be registered in vCenter and you will see it in the Plug-in Manager
under Available Plug-ins. There is nothing else to do.

11. Click "Close" at the bottom of the Plug-in Manager window to close it.

Task V10: Switch Between "Hosts and Clusters" and
"Networking" Views in vSphere Client
In many of the future tasks we will be switching back and forth between the "Hosts and Clusters"
view in the vSphere client (the only one we have been using so far) and the Networking view,
which shows information specifically from the point of view of the DVS. There are a few ways to
navigate back and forth; the easiest is to click on the word "Inventory" in the white location bar
and follow the menus as shown:

1. Switch to the "Networking" view, as shown.

2. Stay on this View for now. As we go on, the instructions will say "switch to the Hosts
and Clusters view" and "switch to the Networking view" and you will know what to do.

Task V11: Create the Connection from N5K to vCenter (and
the DVS)
Now that the extension key is successfully installed in vCenter we can connect our 5K to
vCenter. This will automatically create the VM-Fex DVS within vCenter.

1. On your remote desktop (which is the same as the vCenter server), figure out which IP
address you have on the 10.2.8 network (call this the vCenter IP – it should be 10.2.8.X1)

[ run ipconfig /all in a cmd window. Make sure you get the one that begins with
10.2.8]

2. Configure the connection parameters on the primary vPC partner.


podX-n5k-X # conf t
podX-n5k-X(config)# svs conn MyVC
podX-n5k-X(config-svs-conn)# remote ip address 10.2.8.X1 port 80 vrf management

podX-n5k-X(config-svs-conn)# protocol vmware-vim


podX-n5k-X(config-svs-conn)# vmware dvs datacenter MyDC
podX-n5k-X(config-svs-conn)# dvs MyVMFex
podX-n5k-X(config-svs-conn)# sh svs conn

3. Repeat step 2 in its entirety on the other 5K (secondary vPC partner).

4. Back on the primary vPC partner (only), connect to the vCenter:


podX-n5k-X # conf t
podX-n5k-X(config)# svs conn MyVC
podX-n5k-X(config-svs-conn)# connect
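
The connection can take a few seconds to establish; a hedged re-check with the same show
command used in step 2:

podX-n5k-X(config-svs-conn)# sh svs conn
// look for an operational status of "connected", along with the datacenter (MyDC)
// and DVS (MyVMFex) names you configured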

5. Observe in the vCenter Networking view – your DVS should be automatically created. If
you expand the tree out you should be here:

Task V12: Create a Dummy Port Profile for DVS Uplink
On each 5K, create a port-profile used by the DVS to add the host.

podX-n5k-X# conf t
podX-n5k-X (config)# port-profile type eth dummy-uplink
podX-n5k-X (config-port-prof)# vmware port-group
podX-n5k-X (config-port-prof)# no shut
podX-n5k-X (config-port-prof)# state enabled

Make sure you repeat the configuration on both 5Ks.

The configuration will be pushed to the vCenter as you complete configuration on the vPC
primary, and in the Networking View you should see a new port group get created.
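
You can also confirm from the 5K side; a sketch:

podX-n5k-X# sh port-profile brief
// dummy-uplink should be listed as an eth-type profile in the enabled state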

Task V13: Add ESX Host to the DVS


This procedure adds the ESX host as a member of the DVS. You will be specifying the
appropriate uplink port group (the dummy you just created).
1. In vSphere client, go to the Networking view (if you are not already there)
2. Single click to highlight the MyVMFex DVS (not the folder, the DVS inside the folder)
3. Right-click → Add Host…
4. Set the next window precisely as in the screenshot below (IP of your host will be
different). For both hosts, we are just identifying the (placeholder) DVS uplinks.

5. Click Next
6. Leave the page alone (everything says "Do not migrate"). Click Next.
7. Leave the page alone. Click Next.
8. Click Finish.
9. Your host will be added to the DVS. To confirm:
a. Click on the Hosts tab in the Networking view and you will see the host's status on the DVS
b. Go back to the Hosts and Clusters view. Highlight your ESX instance. Click the
Configuration tab. Click on Networking in the Hardware box. Click on
vSphere Distributed Switch
Task V14: Create VM-Fex port profile for VM data
1. On each N5K, create a port-profile that will be used for VM data

podX-n5k-X# conf t
podX-n5k-X (config)# port-profile type veth vmdata
podX-n5k-X (config-port-prof)# switch mode access
podX-n5k-X (config-port-prof)# switch access vlan 1422
podX-n5k-X (config-port-prof)# vmware port-group
podX-n5k-X (config-port-prof)# no shut
podX-n5k-X (config-port-prof)# state enabled

Make sure you repeat the configuration on both 5Ks.

The configuration will be pushed to the vCenter as you complete configuration on the vPC
primary, and in the Networking View you should see a new port group get created.

Task V15: Add your Shared Storage


In vSphere Client

1. Go to Hosts and Clusters view if not already there


2. Highlight your ESX instance
3. Click the Configuration tab.
4. Click Storage (inside the Hardware box on the left)
5. Click Rescan All… just above the list of existing datastores (to the far right)
6. Confirm you want to rescan
7. Click Add Storage… just next to the previous button. In the wizard:
a. Choose Disk/LUN, and click Next (it may scan for a while)
b. Highlight the shared LUN. It will be LUN #2 on the shared storage (DGC).
c. Select “Assign a new signature” and click Next
d. Review (nothing to choose) and click Next, then click Finish
8. Your new datastore should appear in the list; you may have to wait up to 30 seconds or so
(it will be named snap-xxxx). Click Refresh… if it seems to be taking a really long time.

Task V16: Bring in some Prefab Virtual Machines and Attach
to the VMFex DVS
1. Bring in a prefab virtual machine.
a. Highlight your datastore (snap-xxxx) in the list
b. Right-click → Browse Datastore
c. In the datastore browser, single-click the directory (RH1) on the left
d. Highlight the vmx file on the right (e.g. RH1.vmx)
e. Right-click and select Add to Inventory. In the wizard:
a. Name: RH1 and click Next
b. Ensure your ‘MyDC’ datacenter is selected
c. Highlight your ESX and click Next.
d. Verify and click Finish
f. Close the datastore browser
2. You will see your new VM (not yet powered on) underneath the ESX hosting it
3. Highlight the RH1 VM
4. Right-click → Edit Settings
5. Highlight the Network Adapter 1
6. On the right side, choose the label vmdata (MyVMFex) from the pulldown menu
7. Click OK
8. Power on the RH1 virtual machine by clicking the green arrow above
9. One of the following will happen (why it varies is a mystery…)
a. You will get a pop-up asking if you moved or copied the VM. Select “I
copied it” and click OK
b. You will get a little, hard-to-see yellow balloon on the VM icon. Right-click
the icon → Guest → Answer Question… and you
will get the popup. Select "I copied it" and click OK

10. Highlight the name of your new virtual machine, access the Summary tab, and wait until
it reports that VMware Tools is running and reports the virtual machine IP address (next
to the label IP Addresses, not the bottom blue one, which is the ESX IP). You should see
an IP beginning with 14.22.1

11. Verify that you can access the virtual machine using ssh (putty) from your remote desktop
(login root/cangetin). Congratulations, you are accessing a VM through the DVS!

12. Repeat steps 1-10 for the other two prefab virtual machines (WinXP and Win2K08).
Once they are booted you can see their IP addresses (14.22.1.x) from the vSphere client.
To prove that they are accessible through the DVS, you can also access these via RDP
from your remote desktop. The password for each default user (StudentDude on XP,
Administrator on Win2K08) is cangetin.

Task V17: View VM Networking from the 5K

1. Run the following commands on each of your 5K.

podX-n5k-X# show int virtual summary

podX-n5k-X# show int virtual status

2. Now, on either one of your 5K (but not both!) simulate a failure:

podX-n5k-X# conf t
podX-n5k-X(config)# int e1/2
podX-n5k-X(config-if)# shut

3. Rerun the commands in step 1 (especially sh int virtual status, to see the
active and standby connections on the two 5K's)

4. Undo your failure (with no shut) and observe again.

Task V18: Implement High Performance Mode
Recall that a VM in high performance mode on the VM-FEX DVS will bypass the hypervisor
completely using DirectPath I/O.

The request for high performance mode is in the port profile.

It should be considered just a request. Reasons that a VM cannot honor the request are:
a. vmxnet3 must be supplied as the virtual adapter type. Some OS's may not have
drivers for this virtual adapter.
b. The VM must have a full memory reservation (reserving in advance from ESX the
full capacity of the VM's memory). This is a setting you can change on the fly
(without rebooting the VM).
c. Some OS's (WinXP, for example) simply cannot implement the feature. To be
technical, the feature requires message signaled interrupts (MSI or MSI-X), which
exist in the modern Windows varieties (Vista, Win7, 2008) but not in XP.

1. On the vSphere client, go to the Hosts and Clusters view, if not there already
2. Highlight your RH1 VM
3. Right-click → Edit Settings
4. Highlight Network Adapter 1 on the left
5. Confirm (near upper right) that the adapter type is VMXNET 3
6. Confirm that DirectPath I/O is Inactive (toward middle right)
7. In the same Settings window, click on the Resources tab
8. Highlight the Memory label on the left
9. Pull the Reservation slider as far right as it will go (1024MB). You should be here:

10. Click OK

11. Modify the port-profile for vmdata on each 5K:

podX-n5k-X# conf t
podX-n5k-X (config)# port-profile type veth vmdata
podX-n5k-X (config-port-prof)# high-performance host-netio

Remember to perform the commands on each 5K.
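
A hedged check that the change took effect on each switch:

podX-n5k-X# sh port-profile name vmdata
// the profile's configuration should now include "high-performance host-netio"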

12. Back on vSphere client, edit the settings for RH1 again as in steps 1-6

13. Woo-hoo! DirectPath I/O should be Active. (You won't notice any other difference, but
networking for this adapter is optimized and bypasses ESX completely.) Click Cancel.

14. Repeat steps 1-13 for the Win2K08 VM. Make sure you slide the memory reservation as
far right as you can (you should be successful in seeing DirectPath I/O become active)

15. Repeat steps 1-13 for the WinXP VM (DirectPath I/O will never become active, since
there is a prerequisite not satisfied by the OS).
