vSphere ICM 8 Lab 20
INSTALL, CONFIGURE, MANAGE
Contents
Introduction
Objectives
Lab Topology
Lab Settings
1 Prepare the Lab Environment
2 Configure vSphere vMotion Networking on sa-esxi-01.vclass.local
3 Configure vSphere vMotion Networking on sa-esxi-02.vclass.local
4 Prepare Virtual Machines for vSphere vMotion Migration
5 Migrate Virtual Machines Using vSphere vMotion
Introduction
In this lab, you will configure vSphere vMotion networking and migrate Virtual Machines (VMs) using
vSphere vMotion. Proper configuration of virtual networks is essential for ensuring that VMs function
and communicate reliably in a production environment, and the hands-on practice in this lab
reinforces that concept.
vSphere vMotion is a feature in VMware's virtualization platform that allows for the live migration of
VMs from one host to another without interruption to their operation. This feature provides several
benefits, including:
• Improved resource utilization: vMotion allows for the dynamic balancing of workloads across
hosts in a cluster, which can help ensure that resources are used more efficiently.
• Reduced downtime for maintenance: vMotion enables maintenance and upgrades to be
performed on hosts without affecting the VMs running on them, which can help reduce
downtime for maintenance and improve availability.
• Improved disaster recovery: vMotion can be used in conjunction with other VMware features,
such as vSphere High Availability (HA) and vSphere Distributed Resource Scheduler (DRS), to
provide improved disaster recovery capabilities.
• Increased flexibility: vMotion allows for the movement of VMs between hosts, clusters, and
even datacenters, providing increased flexibility for managing virtualized environments.
• Better performance: vMotion can also be used to move VMs to hosts with more resources or
better performance, which can improve the overall performance of the VMs.
Overall, vMotion provides a powerful tool for managing virtualized environments, enabling greater
flexibility, better resource utilization, and improved availability and performance.
Objectives
Lab Topology
Lab Settings
The information in the table below is needed to complete the lab. The task sections that follow
provide details on how to use this information.
1 Prepare the Lab Environment
In this task, you will prepare the lab environment for vSphere vMotion migrations.
To launch the console window for a VM, either click on the machine’s
graphic image from the topology page, or click on the machine’s
respective tab from the Navigator.
2. Launch the Mozilla Firefox web browser by either clicking on the icon found in the bottom toolbar
or by navigating to Start Menu > Internet > Firefox Web Browser.
If the VMware Getting Started webpage does not load, please wait an
additional 3 - 5 minutes, and refresh the page to continue. This is
because the vCenter Server Appliance is still booting up and requires
extra time to initialize.
4. To log in to the vCenter Server Appliance, enter sysadmin@vclass.local as the username and
NDGlabpass123! as the password. Click LOGIN.
5. In the Navigator, on the Hosts and Clusters tab, select sa-vcsa.vclass.local. In the right pane, select
Datastores and right-click iSCSI-Datastore. In the Actions menu, click Increase Datastore
Capacity….
6. In the Increase Datastore Capacity window on the Select Device step, select LUN 3 and click NEXT.
7. On the Specify Configuration step, leave the defaults, and click NEXT.
8. On the Ready to Complete step, review the information, and click FINISH.
9. Repeat steps 5 – 8 to expand the iSCSI-Datastore using LUN 4 and LUN 2. For LUN 2, you will only
increase the size by 10 GB for lab purposes.
10. Ensure you are still viewing the Datastores tab. Verify that the iSCSI-Datastore is showing a
capacity of 59 GB and at least 32 GB of free space.
11. In the Recent Tasks pane, verify that the iSCSI-Datastore tasks have successfully completed.
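If you have shell or SSH access to an ESXi host (not required for this lab), the expanded datastore can also be cross-checked from the command line. This is only a reference sketch:
# List mounted filesystems, including VMFS datastores, with capacity and free space
esxcli storage filesystem list
# Show the LUN extents backing the expanded VMFS datastore
esxcli storage vmfs extent list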
13. In the ICM-Datacenter main workspace, click the Virtual Machines tab, and select LinuxGUI-01 and
LinuxGUI-02.
16. In the 2 Virtual Machines – Migrate window on the Select a migration type step, select Change
storage only. Click NEXT.
17. On the Select storage step, ensure iSCSI-Datastore is selected. In the Select virtual disk format
drop-down menu, select Thin Provision, and click NEXT.
18. On the Ready to complete pane, review the information, and click FINISH.
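For reference only, the same storage-only migration can be scripted. The sketch below uses the open-source govc CLI, which is not part of this lab; the environment variables assume the lab credentials from the Lab Settings, and the Thin Provision disk-format conversion is something you select in the vSphere Client wizard rather than in this command:
# Point govc at the lab vCenter (govc itself is an assumption; it is not installed as part of this lab)
export GOVC_URL=https://sa-vcsa.vclass.local
export GOVC_USERNAME=sysadmin@vclass.local
export GOVC_PASSWORD='NDGlabpass123!'
export GOVC_INSECURE=1
# Relocate the storage of both VMs to the iSCSI-Datastore
govc vm.migrate -ds iSCSI-Datastore LinuxGUI-01 LinuxGUI-02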
19. In the Navigator, on the Hosts and Clusters tab, expand sa-vcsa.vclass.local and ICM-Datacenter.
Select sa-esxi-01.vclass.local and in the right pane, and click Configure. Navigate to Networking >
Virtual switches.
20. In the Virtual Switches pane, expand Standard Switch: vSwitch1. Click the More ellipsis (…) and
click Remove.
23. In the Add Networking window, on the Select connection type step, select the radio button for
Virtual Machine Port Group for a Standard Switch and click NEXT.
24. On the Select target device step, select the radio button for New standard switch and click
NEXT.
25. On the Create a Standard Switch step, select vmnic1 and click MOVE DOWN until (New) vmnic1
appears under Active adapters. Click NEXT.
26. On the Connections settings step, enter Production for the Network label. Click NEXT.
27. Review the information for the new active adapter and click FINISH.
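For reference, roughly the same networking change can be made from an ESXi shell with esxcli. This is a sketch only: the vSphere Client wizard assigns the new switch name automatically, so the name vSwitch1 below is an assumption. The same commands would apply to sa-esxi-02 in the steps that follow:
# Remove the old standard switch (this also removes its port groups)
esxcli network vswitch standard remove --vswitch-name=vSwitch1
# Recreate a standard switch, attach vmnic1 as an uplink, and add the Production port group
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name=Production --vswitch-name=vSwitch1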
28. In the Navigator, on the Hosts and Clusters tab, select sa-esxi-02.vclass.local. In the right pane,
click Configure. Navigate to Networking > Virtual switches.
29. In the Virtual Switches pane, expand Standard Switch: vSwitch1. Click the More ellipsis (…) and
click Remove.
32. In the Add Networking window, on the Select connection type step, select the radio button for
Virtual Machine Port Group for a Standard Switch and click NEXT.
33. On the Select target device step, select the radio button for New standard switch and click
NEXT.
34. On the Create a Standard Switch step, select (New) vmnic1 and click MOVE DOWN until (New)
vmnic1 appears under Active adapters. Click NEXT.
35. On the Connections settings step, enter Production for the Network label. Click NEXT.
36. Review the information for the new active adapter, and click FINISH.
37. Leave the vSphere Client open, and continue to the next task.
2 Configure vSphere vMotion Networking on sa-esxi-01.vclass.local
In this task, you will create a standard switch and a VMkernel port group on sa-esxi-01.vclass.local that
can be used to move VMs from one host to another while maintaining continuous service availability.
Configuring vMotion networking is necessary because it allows for the live migration of VMs from one
host to another. vMotion requires a dedicated network connection between the source and
destination hosts to transfer VM memory and other state information. This dedicated network is
known as the vMotion network.
By having a dedicated vMotion network, it is possible to minimize the amount of network traffic on the
production network, which can improve the performance and security of the VMs. Also, it can prevent
vMotion traffic from impacting other network traffic and vice versa. Additionally, it is possible to
configure vMotion traffic to use a specific NIC (Network Interface Card) or VLAN (Virtual LAN) that is
separate from the production network. This ensures that vMotion traffic is isolated from other network
traffic and can be controlled more easily.
In summary, configuring vMotion networking enables live migration of VMs, improves performance
and security by isolating vMotion traffic, and allows for more control over vMotion traffic.
4. In the Add Networking window, on the Select connection type step, select VMkernel Network
Adapter. Click NEXT.
5. On the Select target device step, click New standard switch. Click NEXT.
6. On the Create a Standard Switch step, select vmnic2 and click MOVE DOWN until vmnic2 appears
under Active adapters. Click NEXT.
7. On the Port properties step, enter vMotion for the Network label. Check the vMotion box, and click
NEXT.
8. On the IPv4 settings step, select the radio button for Use static IPv4 settings, and configure the
settings with the information below:
9. On the Ready to complete step, review the information, and click FINISH.
10. In the Virtual switches pane, expand Standard Switch: vSwitch4. Verify that vSwitch4 contains
the vMotion port group, the vmk1 VMkernel port, and the vmnic2 physical adapter.
11. Leave vSphere Client open, and continue with the next task.
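If you would like to double-check the result from the host's shell (SSH access is assumed and is not required for this lab), the following sketch shows one way to verify the configuration with esxcli:
# Show the new standard switch with its uplink and port group
esxcli network vswitch standard list --vswitch-name=vSwitch4
# List VMkernel interfaces and confirm the IPv4 configuration of vmk1
esxcli network ip interface list
esxcli network ip interface ipv4 get -i vmk1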
3 Configure vSphere vMotion Networking on sa-esxi-02.vclass.local
In this task, you will create a standard switch and a VMkernel port group on sa-esxi-02.vclass.local that
can be used to move VMs from one host to another while maintaining continuous service availability.
Configuring vMotion networking is necessary because it allows for the live migration of VMs from one
host to another. vMotion requires a dedicated network connection between the source and
destination hosts to transfer VM memory and other state information. This dedicated network is
known as the vMotion network.
By having a dedicated vMotion network, it is possible to minimize the amount of network traffic on the
production network, which can improve the performance and security of the VMs. Also, it can prevent
vMotion traffic from impacting other network traffic and vice versa. Additionally, it is possible to
configure vMotion traffic to use a specific NIC (Network Interface Card) or VLAN (Virtual LAN) that is
separate from the production network. This ensures that vMotion traffic is isolated from other network
traffic and can be controlled more easily.
In summary, configuring vMotion networking enables live migration of VMs, improves performance
and security by isolating the vMotion traffic, and allows for more control over vMotion traffic.
4. In the Add Networking window, on the Select connection type step, select VMkernel Network
Adapter. Click NEXT.
5. On the Select target device step, click New standard switch. Click NEXT.
6. On the Create a Standard Switch step, select vmnic2 and click MOVE DOWN until vmnic2 appears
under Active adapters. Click NEXT.
7. On the Port properties step, enter vMotion for the Network label. Check the vMotion box, and click
NEXT.
8. On the IPv4 settings step, click the radio button for Use static IPv4 settings, and configure the
settings with the information below:
a. IPv4 address: 172.20.12.52
b. Subnet mask: 255.255.255.0
c. Click NEXT
9. On the Ready to complete step, review the information, and click FINISH.
10. In the Virtual switches pane, expand Standard Switch: vSwitch4. Verify that vSwitch4 contains the
vMotion port group, the vmk1 VMkernel port, and the vmnic2 physical adapter.
11. Leave vSphere Client open, and continue with the next task.
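For reference, the equivalent vMotion networking configuration can be sketched with esxcli from the host's shell (not part of this lab). The IPv4 address below is the one assigned to sa-esxi-02 in step 8; on sa-esxi-01 you would substitute that host's vMotion address:
# Create the standard switch and attach vmnic2 as its uplink
esxcli network vswitch standard add --vswitch-name=vSwitch4
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch4
# Add the vMotion port group and a VMkernel interface on it
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch4
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
# Assign the static IPv4 address and tag the interface for vMotion traffic
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.20.12.52 --netmask=255.255.255.0 --type=static
esxcli network ip interface tag add -i vmk1 -t VMotion
From sa-esxi-01, the command vmkping -I vmk1 172.20.12.52 can then confirm that the vMotion VMkernel interfaces on the two hosts can reach each other.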
4 Prepare Virtual Machines for vSphere vMotion Migration
In this task, you will prepare VMs for hot migration between hosts using vSphere vMotion.
Preparing VMs for a vSphere vMotion migration is important because it can help ensure that the
migration process goes smoothly and that VM operations are not interrupted. Some of the key reasons
for preparing VMs for a vSphere vMotion migration include:
• Compatibility: VMs must be compatible with the version of vSphere that is running on the
destination host. This means that VM hardware versions and virtual devices must be
compatible with the version of vSphere running on the destination host.
• Shared storage: Both the source and destination hosts must have access to the same shared
storage, so it is important to ensure that VM files are located on shared storage that is
accessible by both the source and destination hosts.
• Networking: The source and destination hosts must be connected to the same network, so it is
important to ensure that VM network settings are configured correctly, and that the VM
network adapters are connected to the appropriate networks.
• Resources: The destination host must have enough resources to accommodate the VMs, so it is
important to ensure that VM resource requirements are met on the destination host.
• Applications: Some applications may require additional preparation steps before migration,
such as shutting down or disconnecting from specific resources.
By preparing VMs for a vSphere vMotion migration, it is possible to minimize the risk of issues and to
ensure that the migration process goes as smoothly as possible.
1. In the Navigator, select the VMs and Templates tab. Expand sa-vcsa.vclass.local and ICM-
Datacenter.
3. In the Edit Settings window, on the Virtual Hardware tab, click the drop-down menu for Network
adapter 1, and click Browse.
4. In the Select Network window, select the Production network, and click OK.
5. Expand the view for Network adapter 1 by clicking the arrow, and verify that the Connect At
Power On checkbox is selected. Click OK to save the configurations.
6. Power on the machine by right-clicking on LinuxGUI-01 from the Navigator, and selecting Power >
Power On. You may also use the Power On icon highlighted in orange below.
10. Open the Terminal by double-clicking on the QTerminal icon on the desktop.
11. At the prompt, enter the command below, and verify that the IP address is set to 172.20.11.131 on
the ens33 interface.
sysadmin@linuxgui-01:~$ ip a
12. Ping the default gateway IP address 172.20.11.10 (sa-aio) by entering the command below at the
command prompt.
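sysadmin@linuxgui-01:~$ ping 172.20.11.10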
13. Notice the successful pings, which show that LinuxGUI-01 can now communicate with sa-aio. Repeat
steps 8 - 12 for the LinuxGUI-02 VM, and verify that the IP address is 172.20.11.132. Once you have
verified the IP address, ping the default gateway 172.20.11.10.
Ensure you have repeated steps 8 - 12 for the LinuxGUI-02 VM. In the next
task, you will migrate both LinuxGUI-01 and LinuxGUI-02 between hosts.
15. Leave the vSphere Client open, and continue to the next task.
5 Migrate Virtual Machines Using vSphere vMotion
In this task, you will perform hot migrations of VMs residing on a shared datastore that is accessible to
both the source and the target ESXi hosts.
A hot migration in vSphere refers to the process of live migrating a VM from one host to another with
no interruption to VM operation. This is made possible through the vSphere vMotion feature, which
allows for the transfer of VM memory and other state information over a dedicated vMotion network,
while the VM remains powered on and continues to operate.
During a hot migration, VM memory and other state information is transferred to the destination host
while VMs continue to run on the source host. Once the transfer is complete, the VMs are switched
over to the destination host, and the migration is complete. The entire process is transparent to the
VMs, and there is no interruption to the services they provide.
The term hot migration is used to emphasize that the VMs remain powered on and continue to
operate throughout the migration process. This is in contrast to a cold migration, in which VMs are
powered off before the migration and remain offline until the migration is complete.
There are several reasons why you might want to migrate a VM to a new host:
• Resource optimization: By migrating a VM to a new host, you can ensure that the VM is
running on a host that has the resources it needs to perform optimally. This can be especially
important if the current host is running low on resources or if the VM’s resource requirements
have changed over time.
• Maintenance: Migrating a VM to a new host can also be useful when performing maintenance
or upgrades on the current host. This can help to minimize downtime and ensure that the VM’s
operation is not interrupted while maintenance is being performed.
• Improved performance: Migrating a VM to a new host can also be used to improve the overall
performance of the VM. For example, if the new host has a faster CPU, more memory, or faster
storage, the VM may perform better on the new host.
In summary, migrating a VM to a new host can be used to optimize resources, perform maintenance,
and improve performance.
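For reference, a compute-only live migration like the one performed below can also be driven from the command line, for example with the govc CLI sketched earlier in this lab. This is not part of the lab, and the host argument may need the full inventory path in your environment:
# Live-migrate LinuxGUI-01 to sa-esxi-02; storage stays on the shared iSCSI-Datastore
govc vm.migrate -host sa-esxi-02.vclass.local LinuxGUI-01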
a. On the VMs and Templates tab, right-click LinuxGUI-01 and select Migrate.
b. In the Migrate window, on the Select the migration type step, click the radio button for
Change compute resource only and click NEXT.
c. On the Select a compute resource step, choose sa-esxi-02.vclass.local and click NEXT.
d. On the Select networks step, ensure that Production is selected from the Destination
Network drop-down menu, and click NEXT.
e. On the Select vMotion priority step, leave Schedule vMotion with high priority
(recommended) selected, and click NEXT.
f. On the Ready to complete step, review the information, and click FINISH.
2. If the Web Console closes, reopen the LinuxGUI-01 Web Console and monitor that no pings are
dropped during migration. If the login window appears, type NDGlabpass123! for the password and
click Unlock.
3. Go back to the vSphere Client. In the Navigator, select Hosts and Clusters. Expand the ICM-
Datacenter, sa-esxi-01.vclass.local, and sa-esxi-02.vclass.local objects. Verify that LinuxGUI-01 has
been successfully migrated to sa-esxi-02.vclass.local.
a. Verify that the VMs and Templates tab is selected. Right-click LinuxGUI-02 and select
Migrate.
b. In the Migrate window, on the Select the migration type step, select the radio button
for Change compute resource only and click NEXT.
c. On the Select a compute resource step, choose sa-esxi-01.vclass.local and click NEXT.
d. On the Select networks step, ensure that Production is selected from the Destination
Network drop-down menu, and click NEXT.
e. On the Select vMotion priority step, leave Schedule vMotion with high priority
(recommended) selected, and click NEXT.
f. On the Ready to complete step, review the information, and click FINISH.
5. If the Web Console closes, reopen the LinuxGUI-02 Web Console and monitor that no pings are
dropped during migration. If the login window appears, type NDGlabpass123! for the password and
click Unlock.
6. Go back to the vSphere Client. In the Navigator, select Hosts and Clusters. Expand the ICM-
Datacenter, sa-esxi-01.vclass.local, and sa-esxi-02.vclass.local objects. Verify that LinuxGUI-02 has
been successfully migrated to sa-esxi-01.vclass.local.
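As an optional cross-check from the command line (again assuming the govc setup sketched earlier, which is not part of this lab), the host each VM is now running on can be listed:
# Show summary information for both VMs, including the host each one is running on
govc vm.info LinuxGUI-01 LinuxGUI-02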