Apache CloudStack 4.2.0 Admin Guide en US
Apache CloudStack
Legal Notice
Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Abstract
Administration Guide for CloudStack.
1. Concepts 1.1. What Is CloudStack? 1.2. What Can CloudStack Do? 1.3. Deployment Architecture Overview 1.3.1. Management Server Overview 1.3.2. Cloud Infrastructure Overview 1.3.3. Networking Overview 2. Cloud Infrastructure Concepts 2.1. About Regions 2.2. About Zones 2.3. About Pods 2.4. About Clusters 2.5. About Hosts 2.6. About Primary Storage 2.7. About Secondary Storage 2.8. About Physical Networks 2.8.1. Basic Zone Network Traffic Types 2.8.2. Basic Zone Guest IP Addresses 2.8.3. Advanced Zone Network Traffic Types 2.8.4. Advanced Zone Guest IP Addresses 2.8.5. Advanced Zone Public IP Addresses 2.8.6. System Reserved IP Addresses
3. Accounts 3.1. Accounts, Users, and Domains 3.1.1. Dedicating Resources to Accounts and Domains 3.2. Using an LDAP Server for User Authentication 3.2.1. Example LDAP Configuration Commands 3.2.2. Search Base 3.2.3. Query Filter 3.2.4. Search User Bind DN 3.2.5. SSL Keystore Path and Password 4. User Services Overview 4.1. Service Offerings, Disk Offerings, Network Offerings, and Templates 5. User Interface 5.1. Log In to the UI 5.1.1. End User's UI Overview 5.1.2. Root Administrator's UI Overview 5.1.3. Logging In as the Root Administrator 5.1.4. Changing the Root Password 5.2. Using SSH Keys for Authentication 5.2.1. Creating an Instance Template that Supports SSH Keys 5.2.2. Creating the SSH Keypair 5.2.3. Creating an Instance 5.2.4. Logging In Using the SSH Keypair 5.2.5. Resetting SSH Keys 6. Using Projects to Organize Users and Resources 6.1. Overview of Projects 6.2. Configuring Projects 6.2.1. Setting Up Invitations 6.2.2. Setting Resource Limits for Projects 6.2.3. Setting Project Creator Permissions 6.3. Creating a New Project 6.4. Adding Members to a Project 6.4.1. Sending Project Membership Invitations 6.4.2. Adding Project Members From the UI 6.5. Accepting a Membership Invitation 6.6. Suspending or Deleting a Project 6.7. Using the Project View 7. Steps to Provisioning Your Cloud Infrastructure 7.1. Overview of Provisioning Steps 7.2. Adding Regions (optional) 7.2.1. The First Region: The Default Region 7.2.2. Adding a Region 7.2.3. Adding Third and Subsequent Regions 7.2.4. Deleting a Region 7.3. Adding a Zone 7.3.1. Basic Zone Configuration 7.3.2. Advanced Zone Configuration 7.4. Adding a Pod 7.5. Adding a Cluster 7.5.1. Add Cluster: KVM or XenServer 7.5.2. Add Cluster: vSphere 7.6. Adding a Host 7.6.1. Adding a Host (XenServer or KVM) 7.6.2. Adding a Host (vSphere) 7.7. Add Primary Storage 7.7.1. System Requirements for Primary Storage 7.7.2. Adding Primary Storage 7.7.3. Configuring a Storage Plug-in 7.8. Add Secondary Storage
7.8. Add Secondary Storage 7.8.1. System Requirements for Secondary Storage 7.8.2. Adding Secondary Storage 7.8.3. Adding an NFS Secondary Staging Store for Each Zone 7.9. Initialize and Test 8. Service Offerings 8.1. Compute and Disk Service Offerings 8.1.1. Creating a New Compute Offering 8.1.2. Creating a New Disk Offering 8.1.3. Modifying or Deleting a Service Offering 8.2. System Service Offerings 8.2.1. Creating a New System Service Offering 8.3. Network Throttling 8.4. Changing the Default System Offering for System VMs 9. Setting Up Networking for Users 9.1. Overview of Setting Up Networking for Users 9.2. About Virtual Networks 9.2.1. Isolated Networks 9.2.2. Shared Networks 9.2.3. Runtime Allocation of Virtual Network Resources 9.3. Network Service Providers 9.4. Network Offerings 9.4.1. Creating a New Network Offering 10. Working With Virtual Machines 10.1. About Working with Virtual Machines 10.2. Best Practices for Virtual Machines 10.2.1. Monitor VMs for Max Capacity 10.2.2. Install Required Tools and Drivers 10.3. VM Lifecycle 10.4. Creating VMs 10.5. Accessing VMs 10.6. Stopping and Starting VMs 10.7. Assigning VMs to Hosts 10.7.1. Affinity Groups 10.8. Virtual Machine Snapshots for VMware 10.8.1. Limitations on VM Snapshots 10.8.2. Configuring VM Snapshots 10.8.3. Using VM Snapshots 10.9. Changing the VM Name, OS, or Group 10.10. Appending a Display Name to the Guest VMs Internal Name 10.11. Changing the Service Offering for a VM 10.11.1. CPU and Memory Scaling for Running VMs 10.11.2. Updating Existing VMs 10.11.3. Configuring Dynamic CPU and RAM Scaling 10.11.4. How to Dynamically Scale CPU and RAM 10.11.5. Limitations 10.12. Resetting the Virtual Machine Root Volume on Reboot 10.13. Moving VMs Between Hosts (Manual Live Migration) 10.14. Deleting VMs 10.15. Working with ISOs 10.15.1. Adding an ISO 10.15.2. Attaching an ISO to a VM 10.15.3. Changing a VM's Base Image 11. Working With Hosts 11.1. Adding Hosts 11.2. Scheduled Maintenance and Maintenance Mode for Hosts 11.2.1. vCenter and Maintenance Mode 11.2.2. XenServer and Maintenance Mode 11.3. Disabling and Enabling Zones, Pods, and Clusters
11.3. Disabling and Enabling Zones, Pods, and Clusters 11.4. Removing Hosts 11.4.1. Removing XenServer and KVM Hosts 11.4.2. Removing vSphere Hosts 11.5. Re-Installing Hosts 11.6. Maintaining Hypervisors on Hosts 11.7. Changing Host Password 11.8. Over-Provisioning and Service Offering Limits 11.8.1. Limitations on Over-Provisioning in XenServer and KVM 11.8.2. Requirements for Over-Provisioning 11.8.3. Setting Over-Provisioning Ratios 11.8.4. Service Offering Limits and Over-Provisioning 11.9. VLAN Provisioning 11.9.1. VLAN Allocation Example 11.9.2. Adding Non Contiguous VLAN Ranges 11.9.3. Assigning VLANs to Isolated Networks 12. Working with Templates 12.1. Creating Templates: Overview 12.2. Requirements for Templates 12.3. Best Practices for Templates 12.4. The Default Template 12.5. Private and Public Templates 12.6. Creating a Template from an Existing Virtual Machine 12.7. Creating a Template from a Snapshot 12.8. Uploading Templates 12.9. Exporting Templates 12.10. Creating a Windows Template 12.10.1. System Preparation for Windows Server 2008 R2 12.10.2. System Preparation for Windows Server 2003 R2 12.11. Importing Amazon Machine Images 12.12. Converting a Hyper-V VM to a Template 12.13. Adding Password Management to Your Templates 12.13.1. Linux OS Installation 12.13.2. Windows OS Installation 12.14. Deleting Templates 13. Working With Storage 13.1. Storage Overview 13.2. Primary Storage 13.2.1. Best Practices for Primary Storage 13.2.2. Runtime Behavior of Primary Storage 13.2.3. Hypervisor Support for Primary Storage 13.2.4. Storage Tags 13.2.5. Maintenance Mode for Primary Storage 13.3. Secondary Storage 13.4. Working With Volumes 13.4.1. Creating a New Volume 13.4.2. Uploading an Existing Volume to a Virtual Machine 13.4.3. Attaching a Volume 13.4.4. Detaching and Moving Volumes 13.4.5. VM Storage Migration 13.4.6. Resizing Volumes 13.4.7. Reset VM to New Root Disk on Reboot 13.4.8. Volume Deletion and Garbage Collection 13.5. Working with Snapshots 13.5.1. Automatic Snapshot Creation and Retention 13.5.2. Incremental Snapshots and Backup 13.5.3. Volume Status 13.5.4. Snapshot Restore 13.5.5. Snapshot Job Throttling 13.5.6. VMware Volume Snapshot Performance 14. Working with Usage 14.1. Configuring the Usage Server 14.2. Setting Usage Limits 14.3. Globally Configured Limits 14.4. Limiting Resource Usage
14.4. Limiting Resource Usage 14.4.1. User Permission 14.4.2. Limit Usage Considerations 14.4.3. Limiting Resource Usage in a Domain 14.4.4. Default Account Resource Limits 15. Managing Networks and Traffic 15.1. Guest Traffic 15.2. Networking in a Pod 15.3. Networking in a Zone 15.4. Basic Zone Physical Network Configuration 15.5. Advanced Zone Physical Network Configuration 15.5.1. Configure Guest Traffic in an Advanced Zone 15.5.2. Configure Public Traffic in an Advanced Zone 15.5.3. Configuring a Shared Guest Network 15.6. Using Multiple Guest Networks 15.6.1. Adding an Additional Guest Network 15.6.2. Reconfiguring Networks in VMs 15.6.3. Changing the Network Offering on a Guest Network 15.7. IP Reservation in Isolated Guest Networks 15.7.1. IP Reservation Considerations 15.7.2. Limitations 15.7.3. Best Practices 15.7.4. Reserving an IP Range 15.8. Reserving Public IP Addresses and VLANs for Accounts 15.8.1. Dedicating IP Address Ranges to an Account 15.8.2. Dedicating VLAN Ranges to an Account 15.9. Configuring Multiple IP Addresses on a Single NIC 15.9.1. Use Cases 15.9.2. Guidelines 15.9.3. Assigning Additional IPs to a VM 15.9.4. Port Forwarding and StaticNAT Services Changes 15.10. About Multiple IP Ranges 15.11. About Elastic IP 15.12. Portable IPs 15.12.1. About Portable IP 15.12.2. Configuring Portable IPs 15.12.3. Acquiring a Portable IP 15.12.4. Transferring Portable IP 15.13. Multiple Subnets in Shared Network 15.13.1. Prerequisites and Guidelines 15.13.2. Adding Multiple Subnets to a Shared Network 15.14. Isolation in Advanced Zone Using Private VLAN 15.14.1. About Private VLAN 15.14.2. Prerequisites 15.14.3. Creating a PVLAN-Enabled Guest Network 15.15. Security Groups 15.15.1. About Security Groups 15.15.2. Adding a Security Group 15.15.3. Security Groups in Advanced Zones (KVM Only) 15.15.4. Enabling Security Groups 15.15.5. Adding Ingress and Egress Rules to a Security Group 15.16. External Firewalls and Load Balancers 15.16.1. About Using a NetScaler Load Balancer 15.16.2. Configuring SNMP Community String on a RHEL Server 15.16.3. Initial Setup of External Firewalls and Load Balancers 15.16.4. Ongoing Configuration of External Firewalls and Load Balancers 15.16.5. Load Balancer Rules 15.16.6. Configuring AutoScale 15.17. Global Server Load Balancing Support 15.17.1. About Global Server Load Balancing 15.17.2. Configuring GSLB 15.17.3. Known Limitation
15.17.3. Known Limitation 15.18. Guest IP Ranges 15.19. Acquiring a New IP Address 15.20. Releasing an IP Address 15.21. Static NAT 15.21.1. Enabling or Disabling Static NAT 15.22. IP Forwarding and Firewalling 15.22.1. Firewall Rules 15.22.2. Egress Firewall Rules in an Advanced Zone 15.22.3. Port Forwarding 15.23. IP Load Balancing 15.24. DNS and DHCP 15.25. Remote Access VPN 15.25.1. Configuring Remote Access VPN 15.25.2. Using Remote Access VPN with Windows 15.25.3. Using Remote Access VPN with Mac OS X 15.25.4. Setting Up a Site-to-Site VPN Connection 15.26. About Inter-VLAN Routing (nTier Apps) 15.27. Configuring a Virtual Private Cloud 15.27.1. About Virtual Private Clouds 15.27.2. Adding a Virtual Private Cloud 15.27.3. Adding Tiers 15.27.4. Configuring Network Access Control List 15.27.5. Adding a Private Gateway to a VPC 15.27.6. Deploying VMs to the Tier 15.27.7. Deploying VMs to VPC Tier and Shared Networks 15.27.8. Acquiring a New IP Address for a VPC 15.27.9. Releasing an IP Address Alloted to a VPC 15.27.10. Enabling or Disabling Static NAT on a VPC 15.27.11. Adding Load Balancing Rules on a VPC 15.27.12. Adding a Port Forwarding Rule on a VPC 15.27.13. Removing Tiers 15.27.14. Editing, Restarting, and Removing a Virtual Private Cloud 15.28. Persistent Networks 15.28.1. Persistent Network Considerations 15.28.2. Creating a Persistent Guest Network 16. Working with System Virtual Machines 16.1. The System VM Template 16.2. Accessing System VMs 16.3. Multiple System VM Support for VMware 16.4. Console Proxy 16.4.1. Using a SSL Certificate for the Console Proxy 16.4.2. Changing the Console Proxy SSL Certificate and Domain 16.5. Virtual Router 16.5.1. Configuring the Virtual Router 16.5.2. Upgrading a Virtual Router with System Service Offerings 16.5.3. Best Practices for Virtual Routers 16.6. Secondary Storage VM 17. System Reliability and High Availability 17.1. HA for Management Server 17.2. Management Server Load Balancing 17.3. HA-Enabled Virtual Machines 17.4. HA for Hosts 17.4.1. Dedicated HA Hosts 17.5. Primary Storage Outage and Data Loss 17.6. Secondary Storage Outage and Data Loss 17.7. Limiting the Rate of API Requests 17.7.1. Configuring the API Request Rate 17.7.2. Limitations on API Throttling 18. Managing the Cloud 18.1. Using Tags to Organize Resources in the Cloud 18.2. Changing the Database Configuration
18.2. Changing the Database Configuration 18.3. Changing the Database Password 18.4. Administrator Alerts 18.4.1. Sending Alerts to External SNMP and Syslog Managers 18.5. Customizing the Network Domain Name 18.6. Stopping and Restarting the Management Server 19. Setting Configuration Parameters 19.1. About Configuration Parameters 19.2. Setting Global Configuration Parameters 19.3. Setting Local Configuration Parameters 19.4. Granular Global Configuration Parameters 20. CloudStack API 20.1. Provisioning and Authentication API 20.2. Allocators 20.3. User Data and Meta Data 21. Tuning 21.1. Performance Monitoring 21.2. Increase Management Server Maximum Memory 21.3. Set Database Buffer Pool Size 21.4. Set and Monitor Total VM Limits per Host 21.5. Configure XenServer dom0 Memory 22. Troubleshooting 22.1. Events 22.1.1. Event Logs 22.1.2. Event Notification 22.1.3. Standard Events 22.1.4. Long Running Job Events 22.1.5. Event Log Queries 22.1.6. Deleting and Archiving Events and Alerts 22.2. Working with Server Logs 22.3. Data Loss on Exported Primary Storage 22.4. Recovering a Lost Virtual Router 22.5. Maintenance mode not working on vCenter 22.6. Unable to deploy VMs from uploaded vSphere template 22.7. Unable to power on virtual machine on VMware 22.8. Load balancer rules fail after changing network offering A. Time Zones B. Event Types C. Alerts D. Revision History
Chapter 1. Concepts
1.1. What Is CloudStack? 1.2. What Can CloudStack Do? 1.3. Deployment Architecture Overview 1.3.1. Management Server Overview 1.3.2. Cloud Infrastructure Overview 1.3.3. Networking Overview
Set up an on-premises private cloud for use by employees. Rather than managing virtual machines in the same way as physical machines, with CloudStack an enterprise can offer self-service virtual machines to users without involving the IT department.
A more full-featured installation consists of a highly-available multi-node Management Server installation and up to tens of thousands of hosts using any of several advanced networking setups. For information about deployment options, see the "Choosing a Deployment Architecture" section of the CloudStack Installation Guide.
More Information
For more information, see documentation on cloud infrastructure concepts.
Regions are visible to the end user. When a user starts a guest VM on a particular CloudStack Management Server, the user is implicitly selecting that region for their guest. Users might also be required to copy their private templates to additional regions to enable creation of guest VMs using their templates in those regions.
Zones are visible to the end user. When a user starts a guest VM, the user must select a zone for their guest. Users might also be required to copy their private templates to additional zones to enable creation of guest VMs using their templates in those zones. Zones can be public or private. Public zones are visible to all users. This means that any user may create a guest in that zone. Private zones are reserved for a specific domain. Only users in that domain or its subdomains may create guests in that zone. Hosts in the same zone are directly accessible to each other without having to go through a firewall. Hosts in different zones can access each other through statically configured VPN tunnels. For each zone, the administrator must decide the following.
How many pods to place in each zone.
How many clusters to place in each pod.
How many hosts to place in each cluster.
(Optional) How many primary storage servers to place in each zone and total capacity for these storage servers.
How many primary storage servers to place in each cluster and total capacity for these storage servers.
How much secondary storage to deploy in a zone.
When you add a new zone using the CloudStack UI, you will be prompted to configure the zone's physical network and add the first pod, cluster, host, primary storage, and secondary storage. In order to support zone-wide functions for VMware, CloudStack is aware of VMware Datacenters and can map each Datacenter to a CloudStack zone. To enable features like storage live migration and zone-wide primary storage for VMware hosts, CloudStack has to make sure that a zone contains only a single VMware Datacenter. Therefore, when you are creating a new CloudStack zone, you can select a VMware Datacenter for the zone. If you are provisioning multiple VMware Datacenters, each one will be set up as a single zone in CloudStack.
Note
If you are upgrading from a previous CloudStack version, and your existing deployment contains a zone with clusters from multiple VMware Datacenters, that zone will not be forcibly migrated to the new model. It will continue to function as before. However, any new zone-wide operations, such as zone-wide primary storage and live storage migration, will not be available in that zone.
CloudStack allows multiple clusters in a cloud deployment. Even when local storage is used exclusively, clusters are still required organizationally, even if there is just one host per cluster. When VMware is used, every VMware cluster is managed by a vCenter server. The administrator must register the vCenter server with CloudStack. There may be multiple vCenter servers per zone. Each vCenter server may manage multiple VMware clusters.
For example, a Linux KVM-enabled server, a Citrix XenServer server, and an ESXi server are hosts. The host is the smallest organizational unit within a CloudStack deployment. Hosts are contained within clusters, clusters are contained within pods, and pods are contained within zones. Hosts in a CloudStack deployment:
Provide the CPU, memory, storage, and networking resources needed to host the virtual machines
Interconnect using a high bandwidth TCP/IP network and connect to the Internet
May reside in multiple data centers across different geographic locations
May have different capacities (different CPU speeds, different amounts of RAM, etc.), although the hosts within a cluster must all be homogeneous
Additional hosts can be added at any time to provide more capacity for guest VMs. CloudStack automatically detects the amount of CPU and memory resources provided by the hosts. Hosts are not visible to the end user. An end user cannot determine which host their guest has been assigned to. For a host to function in CloudStack, you must do the following:
Install hypervisor software on the host
Assign an IP address to the host
Ensure the host is connected to the CloudStack Management Server
Each physical network can carry one or more types of network traffic. The choices of traffic type for each network vary depending on whether you are creating a zone with basic networking or advanced networking. A physical network is the actual network hardware and wiring in a zone. A zone can have multiple physical networks. An administrator can:
Add/Remove/Update physical networks in a zone
Configure VLANs on the physical network
Configure a name so the network can be recognized by hypervisors
Configure the service providers (firewalls, load balancers, etc.) available on a physical network
Configure the IP addresses trunked to a physical network
Specify what type of traffic is carried on the physical network, as well as other properties like network speed
Note
We strongly recommend the use of separate NICs for management traffic and guest traffic.
Public. Public traffic is generated when VMs in the cloud access the Internet. Publicly accessible IPs must be allocated for this purpose. End users can use the CloudStack UI to acquire these IPs to implement NAT between their guest network and the public network, as described in Acquiring a New IP Address.
Storage. While labeled "storage," this traffic type refers specifically to secondary storage; it does not affect traffic for primary storage. It includes traffic such as VM templates and snapshots, which is sent between the secondary storage VM and secondary storage servers. CloudStack uses a separate Network Interface Controller (NIC) named storage NIC for storage network traffic. Use of a storage NIC that always operates on a high bandwidth network allows fast template and snapshot copying. You must configure the IP range to use for the storage network.
In a basic network, configuring the physical network is fairly straightforward. In most cases, you only need to configure one guest network to carry traffic that is generated by guest VMs. If you use a NetScaler load balancer and enable its elastic IP and elastic load balancing (EIP and ELB) features, you must also configure a network to carry public traffic. CloudStack takes care of presenting the necessary network configuration steps to you in the UI when you add a new zone.
These guest networks are defined by a VLAN ID, IP range, and gateway. The administrator may provision thousands of these networks if desired. Additionally, the administrator can reserve a part of the IP address space for non-CloudStack VMs and servers.
Chapter 3. Accounts
3.1. Accounts, Users, and Domains 3.1.1. Dedicating Resources to Accounts and Domains 3.2. Using an LDAP Server for User Authentication 3.2.1. Example LDAP Configuration Commands 3.2.2. Search Base 3.2.3. Query Filter 3.2.4. Search User Bind DN 3.2.5. SSL Keystore Path and Password
Accounts are grouped by domains. Domains usually contain multiple accounts that have some logical relationship to each other and a set of delegated administrators with some authority over the domain and its subdomains. For example, a service provider with several resellers could create a domain for each reseller. For each account created, the Cloud installation creates three different types of user accounts: root administrator, domain administrator, and user.
Users
Users are like aliases in the account. Users in the same account are not isolated from each other, but they are isolated from users in other accounts. Most installations need not surface the notion of users; they just have one user per account. The same user cannot belong to multiple accounts. Username is unique in a domain across accounts in that domain. The same username can exist in other domains, including sub-domains. Domain name can repeat only if the full pathname from root is unique. For example, you can create root/d1, as well as root/foo/d1, and root/sales/d1.
Administrators are accounts with special privileges in the system. There may be multiple administrators in the system. Administrators can create or delete other administrators, and change the password for any user in the system.
Domain Administrators
Domain administrators can perform administrative operations for users who belong to that domain. Domain administrators do not have visibility into physical servers or other domains.
Root Administrator
Root administrators have complete access to the system, including managing templates, service offerings, customer care administrators, and domains.
Resource Ownership
Resources belong to the account, not individual users in that account. For example, billing, resource limits, and so on are maintained by the account, not the users. A user can operate on any resource in the account provided the user has privileges for that operation. The privileges are determined by the role. A root administrator can change the ownership of any virtual machine from one account to any other account by using the assignVirtualMachine API. A domain or subdomain administrator can do the same for VMs within the domain from one account to any other account in the domain or any of its sub-domains.
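For example, a root administrator could reassign a VM to another account with a call along these lines. This is only a sketch: it uses the unauthenticated integration port shown elsewhere in this guide, the IDs are placeholders you must substitute, and the VM typically must be stopped before it can be reassigned.
curl --globoff "http://localhost:8096/?command=assignVirtualMachine&virtualmachineid=<vm-id>&account=<new-account-name>&domainid=<domain-id-of-new-account>"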
the host tag requested by the user, then the VM will not deploy. If you delete an account or domain, any hosts, clusters, pods, and zones that were dedicated to it are freed up. They will now be available to be shared by any account or domain, or the administrator may choose to re-dedicate them to a different account or domain. System VMs and virtual routers affect the behavior of host dedication. System VMs and virtual routers are owned by the CloudStack system account, and they can be deployed on any host. They do not adhere to explicit dedication. The presence of system VMs and virtual routers on a host makes it unsuitable for strict implicit dedication. The host cannot be used for strict implicit dedication, because the host already has VMs of a specific account (the default system account). However, a host with system VMs or virtual routers can be used for preferred implicit dedication.
The command must be URL-encoded. Here is the same example without the URL encoding:
http://127.0.0.1:8080/client/api?command=ldapConfig &hostname=127.0.0.1 &searchbase=ou=testing,o=project &queryfilter=(&(uid=%u)) &binddn=cn=John+Singh,ou=testing,o=project &bindpass=secret &port=10389 &ssl=true &truststore=C:/company/info/trusted.ks &truststorepass=secret &response=json &apiKey=YourAPIKey&signature=YourSignatureHash
The following shows a similar command for Active Directory. Here, the search base is the testing group within a company, and the users are matched up based on email address.
http://10.147.29.101:8080/client/api?command=ldapConfig &hostname=10.147.28.250 &searchbase=OU%3Dtesting%2CDC%3Dcompany &queryfilter=%28%26%28mail%3D%25e%29%29 &binddn=CN%3DAdministrator%2COU%3Dtesting%2CDC%3Dcompany &bindpass=1111_aaaa &port=389 &response=json &apiKey=YourAPIKey &signature=YourSignatureHash
The next few sections explain some of the concepts you will need to know when filling out the ldapConfig parameters.
The following examples assume you are using Active Directory, and refer to user attributes from the Active Directory schema. If the CloudStack user name is the same as the LDAP user ID:
(uid=%u)
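If, instead, the CloudStack user name matches the LDAP user's email address (as in the Active Directory example above, where %e is the token CloudStack substitutes with the user's email address), a filter along these lines can be used:
(mail=%e)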
On a fresh Management Server installation, a guided tour splash screen appears. On later visits, you'll see a login screen where you specify the following to proceed to your Dashboard:
Username. The user ID of your account. The default username is admin.
Password. The password associated with the user ID. The password for the default username is password.
Domain. If you are a root user, leave this field blank. If you are a user in a sub-domain, enter the full path to the domain, excluding the root domain. For example, suppose multiple levels are created under the root domain, such as Comp1/hr. The users in the Comp1 domain should enter Comp1 in the Domain field, whereas the users in the Comp1/sales domain should enter Comp1/sales.
For more guidance about the choices that appear when you log in to this UI, see Logging In as the Root Administrator.
After logging into a fresh Management Server installation, a guided tour splash screen appears. On later visits, you'll be taken directly into the Dashboard.
2. If you see the first-time splash screen, choose one of the following.
Continue with basic setup. Choose this if you're just trying CloudStack, and you want a guided walkthrough of the simplest possible configuration so that you can get started right away. We'll help you set up a cloud with the following features: a single machine that runs CloudStack software and uses NFS to provide storage; a single machine running VMs under the XenServer or KVM hypervisor; and a shared public network. The prompts in this guided tour should give you all the information you need, but if you want just a bit more detail, you can follow along in the Trial Installation Guide.
I have used CloudStack before. Choose this if you have already gone through a design phase and planned a more sophisticated deployment, or you are ready to start scaling up a trial cloud that you set up earlier with the basic setup screens. In the Administrator UI, you can start using the more powerful features of CloudStack, such as advanced VLAN networking, high availability, additional network elements such as load balancers and firewalls, and support for multiple hypervisors including Citrix XenServer, KVM, and VMware vSphere. The root administrator Dashboard appears.
3. You should set a new root administrator password. If you chose basic setup, you'll be prompted to create a new password right away. If you chose experienced user, use the steps in Section 5.1.4, Changing the Root Password.
Warning
You are logging in as the root administrator. This account manages the CloudStack deployment, including physical infrastructure. The root administrator can modify configuration settings to change basic functionality, create or delete user accounts, and take many actions that should be performed only by an authorized person. Please change the default password to a new, unique password.
2. Log in to the UI using the current root user ID and password. The default is admin, password. 3. Click Accounts. 4. Click the admin account name. 5. Click View Users. 6. Click the admin user name. 7. Click the Change Password button. 8. Type the new password, and click OK.
Note
Ensure that you adjust these values to meet your needs. If you are making the API call from a different server, your URL/PORT will be different, and you will need to use the API keys. 1. Run the following curl command:
curl --globoff "http://localhost:8096/?command=createSSHKeyPair&name=keypairdoc&account=admin&domainid=5163440e-c44b-42b5-9109-ad75cae8e8a2"
2. Copy the key data into a file. The file looks like this:
-----BEGIN RSA PRIVATE KEY-----
MIICXQIBAAKBgQCSydmnQ67jP6lNoXdX3noZjQdrMAWNQZ7y5SrEu4wDxplvhYci
dXYBeZVwakDVsU2MLGl/K+wefwefwefwefwefJyKJaogMKn7BperPD6n1wIDAQAB
AoGAdXaJ7uyZKeRDoy6wA0UmF0kSPbMZCR+UTIHNkS/E0/4U+6lhMokmFSHtu
mfDZ1kGGDYhMsdytjDBztljawfawfeawefawfawfawQQDCjEsoRdgkduTy
QpbSGDIa11Jsc+XNDx2fgRinDsxXI/zJYXTKRhSl/LIPHBw/brW8vzxhOlSOrwm7
VvemkkgpAkEAwSeEw394LYZiEVv395ar9MLRVTVLwpo54jC4tsOxQCBlloocK
lYaocpk0yBqqOUSBawfIiDCuLXSdvBo1Xz5ICTM19vgvEp/+kMuECQBzm
nVo8b2Gvyagqt/KEQo8wzH2THghZ1qQ1QRhIeJG2aissEacF6bGB2oZ7Igim5L14
4KR7OeEToyCLC2k+02UCQQCrniSnWKtDVoVqeK/zbB32JhW3Wullv5p5zUEcd
KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3
-----END RSA PRIVATE KEY-----
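Save the key data (everything from BEGIN to END) into a file such as ~/.ssh/keypair-doc, the path used in the login example later in this section, and restrict its permissions so the ssh client will accept it:
chmod 400 ~/.ssh/keypair-doc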
Note
At this time, you cannot use the GUI to create the instance and associate it with the newly created SSH keypair. A sample curl command to create a new instance is:
curl --globoff http://localhost:<port number>/?command=deployVirtualMachine\&zoneId=1\&serviceOfferingId=18727021-7556-4110-9322-d625b52e0813\&templateId=e899c18a-ce13-4bbf-98a9-625c5026e0b5\&securitygroupids=ff03f02f-9e3b-48f8-834d-91b822da40c5\&account=admin\&domainid=1\&keypair=keypair-doc
Substitute the template, service offering and security group IDs (if you are using the security group feature) that are in your cloud environment.
The -i parameter tells the ssh client to use an SSH key found at ~/.ssh/keypair-doc.
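For example, assuming the private key was saved to ~/.ssh/keypair-doc and substituting the public IP address of your instance, the login command might look like this:
ssh -i ~/.ssh/keypair-doc root@<vm.ip.address>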
project.invite.timeout. Amount of time to allow for a new member to respond to the invitation.
project.smtp.host. Name of the host that acts as an email server to handle invitations.
project.smtp.password. (Optional) Password required by the SMTP server. You must also set project.smtp.username and set project.smtp.useAuth to true.
project.smtp.port. SMTP server's listening port.
project.smtp.useAuth. Set to true if the SMTP server requires a username and password.
project.smtp.username. (Optional) User name required by the SMTP server for authentication. You must also set project.smtp.password and set project.smtp.useAuth to true.
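These values can be edited in the UI under Global Settings, or set through the API. The following is a sketch of setting the SMTP host with the updateConfiguration command over the unauthenticated integration port used elsewhere in this guide; the host name is a placeholder, and most global settings take effect only after the Management Server is restarted.
curl --globoff "http://localhost:8096/?command=updateConfiguration&name=project.smtp.host&value=<smtp.server.name>"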
2. In the left navigation, click Global Settings. 3. In the search box, type allow.user.create.projects. 4. Click the edit button to set the parameter. allow.user.create.projects Set to true to allow end users to create projects. Set to false if you want only the CloudStack root administrator and domain administrators to create projects.
1. Log in to the CloudStack UI. 2. In the left navigation, click Projects. 3. In Select View, choose Invitations. 4. If you see the invitation listed onscreen, click the Accept button. Invitations listed on screen were sent to you using your CloudStack account name. 5. If you received an email invitation, click the Enter Token button, and provide the project ID and unique ID code (token) from the email.
7.6.1. Adding a Host (XenServer or KVM) 7.6.2. Adding a Host (vSphere) 7.7. Add Primary Storage 7.7.1. System Requirements for Primary Storage 7.7.2. Adding Primary Storage 7.7.3. Configuring a Storage Plug-in 7.8. Add Secondary Storage 7.8.1. System Requirements for Secondary Storage 7.8.2. Adding Secondary Storage 7.8.3. Adding an NFS Secondary Staging Store for Each Zone 7.9. Initialize and Test This section tells how to add regions, zones, pods, clusters, hosts, storage, and networks to your cloud. If you are unfamiliar with these entities, please begin by looking through Chapter 2, Cloud Infrastructure Concepts.
2. By the end of the installation procedure, the Management Server should have been started. Be sure that the Management Server installation was successful and complete. 3. Now add the new region to region 1 in CloudStack. a. Log in to CloudStack in the first region as root administrator (that is, log in to <region.1.IP.address>:8080/client). b. In the left navigation bar, click Regions. c. Click Add Region. In the dialog, fill in the following fields: ID. A unique identifying number. Use the same number you set in the database during Management Server installation in the new region; for example, 2. Name. Give the new region a descriptive name. Endpoint. The URL where you can log in to the Management Server in the new region. This has the format <region.2.IP.address>:8080/client. 4. Now perform the same procedure in reverse. Log in to region 2, and add region 1. 5. Copy the account, user, and domain tables from the region 1 database to the region 2 database. In the following commands, it is assumed that you have set the root password on the database, which is a CloudStack recommended best practice. Substitute your own MySQL root password. a. First, run this command to copy the contents of the database:
# mysqldump -u root -p<mysql_password> -h <region1_db_host> cloud account user domain > region1.sql
b. Then run this command to put the data onto the region 2 database:
# mysql -u root -p<mysql_password> -h <region2_db_host> cloud < region1.sql
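If the region 2 database is reachable from the host where you run the dump, the two steps can also be combined into a single pipeline. This is only a convenience; the separate dump and restore shown above work just as well.
# mysqldump -u root -p<mysql_password> -h <region1_db_host> cloud account user domain | mysql -u root -p<mysql_password> -h <region2_db_host> cloud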
2. Once the Management Server is running, add your new region to all existing regions by repeatedly using the Add Region button in the UI. For example, if you were adding region 3: a. Log in to CloudStack in the first region as root administrator (that is, log in to <region.1.IP.address>:8080/client), and add a region with ID 3, the name of region 3, and the endpoint <region.3.IP.address>:8080/client. b. Log in to CloudStack in the second region as root administrator (that is, log in to <region.2.IP.address>:8080/client), and add a region with ID 3, the name of region 3, and the endpoint <region.3.IP.address>:8080/client. 3. Repeat the procedure in reverse to add all existing regions to the new region. For example, for the third region, add the other two existing regions: a. Log in to CloudStack in the third region as root administrator (that is, log in to <region.3.IP.address>:8080/client). b. Add a region with ID 1, the name of region 1, and the endpoint <region.1.IP.address>:8080/client. c. Add a region with ID 2, the name of region 2, and the endpoint <region.2.IP.address>:8080/client. 4. Copy the account, user, and domain tables from any existing region's database to the new region's database. In the following commands, it is assumed that you have set the root password on the database, which is a CloudStack recommended best practice. Substitute your own MySQL root password. a. First, run this command to copy the contents of the database:
# mysqldump -u root -p<mysql_password> -h <region1_db_host> cloud account user domain > region1.sql
b. Then run this command to put the data onto the new region's database. For example, for region 3:
# mysql -u root -p<mysql_password> -h <region3_db_host> cloud < region1.sql
DefaultSharedNetworkOffering
DefaultSharedNetscalerEIPandELBNetworkOffering
Network Domain. (Optional) If you want to assign a special domain name to the guest VM network, specify the DNS suffix. Public. A public zone is available to all users. A zone that is not public will be assigned to a particular domain. Only users in that domain will be allowed to create guest VMs in this zone. 2. Choose which traffic types will be carried by the physical network. The traffic types are management, public, guest, and storage traffic. For more information about the types, roll over the icons to display their tool tips, or see Basic Zone Network Traffic Types. This screen starts out with some traffic types already assigned. To add more, drag and drop traffic types onto the network. You can also change the network name if desired. 3. Assign a network traffic label to each traffic type on the physical network. These labels must match the labels you have already defined on the hypervisor host. To assign each label, click the Edit button under the traffic type icon. A popup dialog appears where you can type the label, then click OK. These traffic labels will be defined only for the hypervisor selected for the first cluster. For all other hypervisors, the labels can be configured after the zone is created. 4. Click Next. 5. (NetScaler only) If you chose the network offering for NetScaler, you have an additional screen to fill out. Provide the requested details to set up the NetScaler, then click Next.
the requested details to set up the NetScaler, then click Next. IP address. The NSIP (NetScaler IP) address of the NetScaler device. Username/Password. The authentication credentials to access the device. CloudStack uses these credentials to access the device. Type. NetScaler device type that is being added. It could be NetScaler VPX, NetScaler MPX, or NetScaler SDX. For a comparison of the types, see About Using a NetScaler Load Balancer. Public interface. Interface of NetScaler that is configured to be part of the public network. Private interface. Interface of NetScaler that is configured to be part of the private network. Number of retries. Number of times to attempt a command on the device before considering the operation failed. Default is 2. Capacity. Number of guest networks/accounts that will share this NetScaler device. Dedicated. When marked as dedicated, this device will be dedicated to a single account. When Dedicated is checked, the value in the Capacity field has no significance implicitly, its value is 1. 6. (NetScaler only) Configure the IP range for public traffic. The IPs in this range will be used for the static NAT capability which you enabled by selecting the network offering for NetScaler with EIP and ELB. Enter the following details, then click Add. If desired, you can repeat this step to add more IP ranges. When done, click Next. Gateway. The gateway in use for these IP addresses. Netmask. The netmask associated with this IP range. VLAN. The VLAN that will be used for public traffic. Start IP/End IP. A range of IP addresses that are assumed to be accessible from the Internet and will be allocated for access to guest VMs. 7. In a new zone, CloudStack adds the first pod for you. You can always add more pods later. For an overview of what a pod is, see Section 2.3, About Pods. To configure the first pod, enter the following, then click Next: Pod Name. A name for the pod. Reserved system gateway. The gateway for the hosts in that pod. Reserved system netmask. The network prefix that defines the pod's subnet. Use CIDR notation. Start/End Reserved System IP. The IP range in the management network that CloudStack uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses. 8. Configure the network for guest traffic. Provide the following, then click Next: Guest gateway. The gateway that the guests should use. Guest netmask. The netmask in use on the subnet the guests will use. Guest start IP/End IP. Enter the first and last IP addresses that define a range that CloudStack can assign to guests. We strongly recommend the use of multiple NICs. If multiple NICs are used, they may be in a different subnet. If one NIC is used, these IPs should be in the same CIDR as the pod CIDR. 9. In a new pod, CloudStack adds the first cluster for you. You can always add more clusters later. For an overview of what a cluster is, see About Clusters. To configure the first cluster, enter the following, then click Next: Hypervisor. (Version 3.0.0 only; in 3.0.1, this field is read only) Choose the type of hypervisor software that all hosts in this cluster will run. If you choose VMware, additional fields appear so you can give information about a vSphere cluster. For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to CloudStack. See Add Cluster: vSphere. Cluster name. Enter a name for the cluster. 
This can be text of your choosing and is not used by CloudStack. 10. In a new cluster, CloudStack adds the first host for you. You can always add more hosts later. For an overview of what a host is, see About Hosts.
Note
When you add a hypervisor host to CloudStack, the host must not have any VMs already running. Before you can configure the host, you need to install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudStack and what additional configuration is required to ensure the host will work with CloudStack. To find these installation details, see: Citrix XenServer Installation and Configuration VMware vSphere Installation and Configuration KVM Installation and Configuration To configure the first host, enter the following, then click Next: Host Name. The DNS name or IP address of the host. Username. The username is root. Password. This is the password for the user named above (from your XenServer or KVM install). Host Tags. (Optional) Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. For more information, see HA-Enabled Virtual Machines as well as HA for Hosts. 11. In a new cluster, CloudStack adds the first primary storage server for you. You can always add more servers later. For an overview of what primary storage is, see About Primary Storage. To configure the first primary storage server, enter the following, then click Next: Name. The name of the storage device. Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS, SharedMountPoint, CLVM, or RBD. For vSphere, choose either VMFS (iSCSI or FiberChannel) or NFS. The remaining fields in the screen vary depending on what you choose here.
Note
When you deploy CloudStack, the hypervisor host must not have any VMs already running.
Before you can configure the host, you need to install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudStack and what additional configuration is required to ensure the host will work with CloudStack. To find these installation details, see: Citrix XenServer Installation for CloudStack VMware vSphere Installation and Configuration KVM Installation and Configuration To configure the first host, enter the following, then click Next: Host Name. The DNS name or IP address of the host. Username. Usually root. Password. This is the password for the user named above (from your XenServer or KVM install). Host Tags. (Optional) Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. For more information, see HA-Enabled Virtual Machines as well as HA for Hosts, both in the Administration Guide. 10. In a new cluster, CloudStack adds the first primary storage server for you. You can always add more servers later. For an overview of what primary storage is, see Section 2.6, About Primary Storage. To configure the first primary storage server, enter the following, then click Next: Name. The name of the storage device. Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS, SharedMountPoint, CLVM, or RBD. For vSphere, choose either VMFS (iSCSI or FiberChannel) or NFS. The remaining fields in the screen vary depending on what you choose here. NFS Server. The IP address or DNS name of the storage device. Path. The exported path from the server. Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2. iSCSI Server. The IP address or DNS name of the storage device. Target IQN. The IQN of the target. For example, iqn.1986-03.com.sun:02:01ec9bb549-1271378984. Lun. The LUN number. For example, 3. Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2. preSetup Server. The IP address or DNS name of the storage device. SR Name-Label. Enter the name-label of the SR that has been set up outside CloudStack. Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2. SharedMountPoint Path. The path on each host where this primary storage is mounted. For example, "/mnt/primary". Tags (optional).
The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2. VMFS Server. The IP address or DNS name of the vCenter server.
Path. A combination of the datacenter name and the datastore name. The format is "/" datacenter name "/" datastore name. For example, "/cloud.dc.VM/cluster1datastore". Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2. 11. In a new zone, CloudStack adds the first secondary storage server for you. For an overview of what secondary storage is, see Section 2.7, About Secondary Storage. Before you can fill out this screen, you need to prepare the secondary storage by setting up NFS shares and installing the latest CloudStack System VM template. See Adding Secondary Storage. NFS Server. The IP address of the server or fully qualified domain name of the server. Path. The exported path from the server. 12. Click Launch.
2. Log in to the UI. 3. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add the cluster. 4. Click the Compute tab, and click View All on Pods. Choose the pod to which you want to add the cluster. 5. Click View Clusters. 6. Click Add Cluster. 7. In Hypervisor, choose VMware. 8. Provide the following information in the dialog. The fields below make reference to the values from vCenter.
Cluster Name : Enter the name of the cluster you created in vCenter. For example, "cloud.cluster.2.2.1" vCenter Username : Enter the username that CloudStack should use to connect to vCenter. This user must have all the administrative privileges. CPU overcommit ratio: Enter the CPU overcommit ratio for the cluster. The value you enter determines the CPU consumption of each VM in the selected cluster. By increasing the over-provisioning ratio, more resource capacity will be used. If no value is specified, the value is defaulted to 1, which implies no over-provisioning is done. RAM overcommit ratio: Enter the RAM overcommit ratio for the cluster. The value you enter determines the memory consumption of each VM in the selected cluster. By increasing the over-provisioning ratio, more resource capacity will be used. If no value is specified, the value is defaulted to 1, which implies no overprovisioning is done. vCenter Host: Enter the hostname or IP address of the vCenter server. vCenter Password: Enter the password for the user named above. vCenter Datacenter : Enter the vCenter datacenter that the cluster is in. For example, "cloud.dc.VM". Override Public Traffic : Enable this option to override the zone-wide public traffic for the cluster you are creating. Public Traffic vSwitch Type : This option is displayed only if you enable the Override Public Traffic option. Select a desirable switch. If the vmware.use.dvswitch global parameter is true, the default option will be VMware vNetwork Distributed Virtual Switch. If you have enabled Nexus dvSwitch in the environment, the following parameters for dvSwitch configuration are displayed: Nexus dvSwitch IP Address: The IP address of the Nexus VSM appliance. Nexus dvSwitch Username: The username required to access the Nexus VSM appliance. Nexus dvSwitch Password: The password associated with the username specified above. Override Guest Traffic : Enable this option to override the zone-wide guest traffic for the cluster you are creating. Guest Traffic vSwitch Type : This option is displayed only if you enable the Override Guest Traffic option. Select a desirable switch. If the vmware.use.dvswitch global parameter is true, the default option will be VMware vNetwork Distributed Virtual Switch. If you have enabled Nexus dvSwitch in the environment, the following parameters for dvSwitch configuration are displayed: Nexus dvSwitch IP Address: The IP address of the Nexus VSM appliance. Nexus dvSwitch Username: The username required to access the Nexus VSM appliance. Nexus dvSwitch Password: The password associated with the username specified above. There might be a slight delay while the cluster is provisioned. It will automatically display in the UI.
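The same cluster can also be registered programmatically with the addCluster API. The following is only a sketch: all values are placeholders, and the parameter names and the vCenter URL format shown here (http://<vcenter-host>/<datacenter>/<cluster>) should be verified against the API reference for your release.
http://<management-server>:8080/client/api?command=addCluster&zoneid=<zone-id>&podid=<pod-id>&hypervisor=VMware&clustertype=ExternalManaged&clustername=<vcenter-cluster-name>&url=http://<vcenter-host>/<datacenter>/<cluster>&username=<vcenter-user>&password=<vcenter-password>&apiKey=YourAPIKey&signature=YourSignatureHash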
Warning
Be sure you have performed the additional CloudStack-specific configuration steps described in the hypervisor installation section for your particular hypervisor. 2. Now add the hypervisor host to CloudStack. The technique to use varies depending on the hypervisor. Section 7.6.1, Adding a Host (XenServer or KVM) Section 7.6.2, Adding a Host (vSphere)
Warning
Make sure the hypervisor host does not have any VMs already running before you add it to CloudStack. Configuration requirements: Each cluster must contain only hosts with the identical hypervisor. For XenServer, do not put more than 8 hosts in a cluster. For KVM, do not put more than 16 hosts in a cluster. For hardware requirements, see the installation section for your hypervisor in the CloudStack Installation Guide.
7.6.1.1.1. XenServer Host Additional Requirements If network bonding is in use, the administrator must cable the new host identically to other hosts in the cluster. For all additional hosts to be added to the cluster, run the following command. This will cause the host to join the master in a XenServer pool.
# xe pool-join master-address=[master IP] master-username=root master-password=[your password]
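To confirm that the new host has joined, you can list the members of the pool from the master; the exact output format depends on your XenServer version.
# xe host-list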
Note
When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
With all hosts added to the XenServer pool, run the cloud-setup-bonding script. This script will complete the configuration and setup of the bonds on the new hosts in the cluster.
1. Copy the script from the Management Server in /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/cloud-setup-bonding.sh to the master host and ensure it is executable.
2. Run the script:
# ./cloud-setup-bonding.sh
7.6.1.1.2. KVM Host Additional Requirements If shared mountpoint storage is in use, the administrator should ensure that the new host has all the same mountpoints (with storage mounted) as the other hosts in the cluster. Make sure the new host has the same network configuration (guest, private, and public network) as other hosts in the cluster. If you are using OpenVswitch bridges, edit the agent.properties file on the KVM host and set the parameter network.bridge.type to openvswitch before adding the host to CloudStack.
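For example, the property can be appended to the agent configuration file on the KVM host before the host is added; the path shown is the usual location of the CloudStack agent configuration, but verify it on your installation.
# echo "network.bridge.type=openvswitch" >> /etc/cloudstack/agent/agent.properties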
If you do not provision shared primary storage, you must set the global configuration parameter system.vm.local.storage.required to true, or else you will not be able to start VMs.
Warning
When using preallocated storage for primary storage, be sure there is nothing on the storage (ex. you have an empty SAN volume or an empty NFS share). Adding the storage to CloudStack will destroy any existing data.
Note
Primary storage can also be added at the zone level through the CloudStack API (adding zone-level primary storage is not yet supported through the CloudStack UI). Once primary storage has been added at the zone level, it can be managed through the CloudStack UI. 1. Log in to the CloudStack UI (see Section 5.1, Log In to the UI). 2. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add the primary storage. 3. Click the Compute tab. 4. In the Primary Storage node of the diagram, click View All. 5. Click Add Primary Storage. 6. Provide the following information in the dialog. The information required varies depending on your choice in Protocol. Scope. Indicate whether the storage is available to all hosts in the zone or only to hosts in a single cluster. Pod. (Visible only if you choose Cluster in the Scope field.) The pod for the storage device. Cluster. (Visible only if you choose Cluster in the Scope field.) The cluster for the storage device. Name. The name of the storage device. Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS or SharedMountPoint. For vSphere, choose either VMFS (iSCSI or FiberChannel) or NFS. Server (for NFS, iSCSI, or PreSetup). The IP address or DNS name of the storage device. Server (for VMFS). The IP address or DNS name of the vCenter server. Path (for NFS). In NFS this is the exported path from the server. Path (for VMFS). In vSphere this is a combination of the datacenter name and the datastore name. The format is "/" datacenter name "/" datastore name. For example, "/cloud.dc.VM/cluster1datastore". Path (for SharedMountPoint). With KVM this is the path on each host where this primary storage is mounted. For example, "/mnt/primary". SR Name-Label (for PreSetup). Enter the name-label of the SR that has been set up outside CloudStack. Target IQN (for iSCSI). In iSCSI this is the IQN of the target. For example, iqn.1986-03.com.sun:02:01ec9bb549-1271378984. Lun # (for iSCSI). In iSCSI this is the LUN number. For example, 3. Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2. 7. Click OK.
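Cluster-scoped NFS primary storage can also be added with the createStoragePool API. The following is only a sketch; the IDs and the NFS export path are placeholders, and the nfs:// URL form should be verified against the API reference for your release.
http://<management-server>:8080/client/api?command=createStoragePool&zoneid=<zone-id>&podid=<pod-id>&clusterid=<cluster-id>&name=<storage-name>&url=nfs://<nfs-server-ip>/export/primary&apiKey=YourAPIKey&signature=YourSignatureHash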
Note
Primary storage that is based on a custom plug-in (for example, SolidFire) must be added through the CloudStack API (described later in this section). There is no support at this time through the CloudStack UI to add this type of primary storage (although most of its features are available through the CloudStack UI).
Note
At this time, a custom storage plug-in, such as the SolidFire storage plug-in, can only be leveraged for data disks (through Disk Offerings).
Note
The SolidFire storage plug-in for CloudStack is part of the standard CloudStack install. There is no additional work required to add this component.
Adding primary storage that is based on the SolidFire plug-in enables CloudStack to provide hard quality-of-service (QoS) guarantees. When used with Disk Offerings, an administrator is able to build an environment in which a data disk that a user creates leads to the dynamic creation of a SolidFire volume, which has guaranteed performance. Such a SolidFire volume is associated with one (and only ever one) CloudStack volume, so performance of the CloudStack volume does not vary depending on how heavily other tenants are using the system. The createStoragePool API has been augmented to support pluggable storage providers. The following is a list of parameters to use when adding storage to CloudStack that is based on the SolidFire plug-in:
command=createStoragePool
scope=zone
zoneId=[your zone id]
name=[name for primary storage]
hypervisor=Any
provider=SolidFire
capacityIops=[whole number of IOPS from the SAN to give to CloudStack]
capacityBytes=[whole number of bytes from the SAN to give to CloudStack]
The url parameter is somewhat unique in that its value can contain additional key/value pairs.
url=[key/value pairs detailed below (values are URL encoded; for example, '=' is represented as '%3D')]
MVIP%3D[Management Virtual IP Address] (can be suffixed with :[port number])
SVIP%3D[Storage Virtual IP Address] (can be suffixed with :[port number])
clusterAdminUsername%3D[cluster admin's username]
clusterAdminPassword%3D[cluster admin's password]
clusterDefaultMinIops%3D[Min IOPS (whole number) to set for a volume; used if Min IOPS is not specified by administrator or user]
clusterDefaultMaxIops%3D[Max IOPS (whole number) to set for a volume; used if Max IOPS is not specified by administrator or user]
clusterDefaultBurstIopsPercentOfMaxIops%3D[Burst IOPS is determined by (Max IOPS * clusterDefaultBurstIopsPercentOfMaxIops) (can be a decimal value)]
Example URL to add primary storage to CloudStack based on the SolidFire plug-in (note that URL encoding is used with the value of the url key, so '%3A' equals ':', '%3B' equals ';', and '%3D' equals '='):
http://127.0.0.1:8080/client/api?command=createStoragePool
&scope=zone
&zoneId=cf4e6ddf-8ae7-4194-8270d46733a52b55
&name=SolidFire_121258566
&url=MVIP%3D192.168.138.180%3A443
%3BSVIP%3D192.168.56.7
%3BclusterAdminUsername%3Dadmin
%3BclusterAdminPassword%3Dpassword
%3BclusterDefaultMinIops%3D200
%3BclusterDefaultMaxIops%3D300
%3BclusterDefaultBurstIopsPercentOfMaxIops%3D2.5
&provider=SolidFire
&tags=SolidFire_SAN_1
&capacityIops=4000000
&capacityBytes=2251799813685248
&hypervisor=Any
&response=json
&apiKey=VrrkiZQWFFgSdA6k3DYtoKLcrgQJjZXoSWzicHXt8rYd9Bl47p8L39p0p8vfDpiljtlcMLn_jatMSqCWv5CsQ&signature=wqf8KzcPpY2JmT1Sxk%2F%2BWbgX3l8%3D
Warning
Be sure there is nothing stored on the server. Adding the server to CloudStack will destroy any existing data. 1. To prepare for the zone-based Secondary Staging Store, you should have created and mounted an NFS share during Management Server installation. See Preparing NFS Shares in the Installation Guide. 2. Make sure you prepared the system VM template during Management Server installation. See Prepare the System VM Template in the Installation Guide. 3. Log in to the CloudStack UI as root administrator. 4. In the left navigation bar, click Infrastructure. 5. In Secondary Storage, click View All. 6. Click Add Secondary Storage. 7. Fill in the following fields:
Name. Give the storage a descriptive name. Provider. Choose S3, Swift, or NFS, then fill in the related fields which appear. The fields will vary depending on the storage provider; for more information, consult the provider's documentation (such as the S3 or Swift website). NFS can be used for zone-based storage, and the others for region-wide storage.
Warning
You can use only a single S3 or Swift account per region.
Create NFS Secondary Staging Store. This box must always be checked.
Warning
Even if the UI allows you to uncheck this box, do not do so. This checkbox and the three fields below it must be filled in. Even when Swift or S3 is used as the secondary storage provider, an NFS staging store in each zone is still required.
Zone. The zone where the NFS Secondary Staging Store is to be located.
NFS server. The name of the zone's Secondary Staging Store.
Path. The path to the zone's Secondary Staging Store.
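Secondary storage can also be registered through the API rather than the UI. The following addImageStore call is an illustrative sketch for an NFS-backed store; the zone ID, server address, and export path are placeholders:
command=addImageStore
provider=NFS
name=[name for secondary storage]
zoneid=[your zone id]
url=nfs://[NFS server IP]/[exported path]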
8.1. Compute and Disk Service Offerings 8.1.1. Creating a New Compute Offering 8.1.2. Creating a New Disk Offering 8.1.3. Modifying or Deleting a Service Offering 8.2. System Service Offerings 8.2.1. Creating a New System Service Offering 8.3. Network Throttling 8.4. Changing the Default System Offering for System VMs In this chapter we discuss compute, disk, and system service offerings. Network offerings are discussed in the section on setting up networking for users.
To create a new disk offering:
1. Log in with admin privileges to the CloudStack UI.
2. In the left navigation bar, click Service Offerings.
3. In Select Offering, choose Disk Offering.
4. Click Add Disk Offering.
5. In the dialog, make the following choices:
Name. Any desired name for the disk offering.
Description. A short description of the offering that can be displayed to users.
Custom Disk Size. If checked, the user can set their own disk size. If not checked, the root administrator must define a value in Disk Size.
Disk Size. Appears only if Custom Disk Size is not selected. Define the volume size in GB.
QoS Type. Three options: Empty (no Quality of Service), hypervisor (rate limiting enforced on the hypervisor side), and storage (guaranteed minimum and maximum IOPS enforced on the storage side). If leveraging QoS, make sure that the hypervisor or storage system supports this feature.
Custom IOPS. If checked, the user can set their own IOPS. If not checked, the root administrator can define values. If the root admin does not set values when using storage QoS, default values are used (the defaults can be overridden if the proper parameters are passed into CloudStack when creating the primary storage in question).
Min IOPS. Appears only if storage QoS is to be used. Set a guaranteed minimum number of IOPS to be enforced on the storage side.
Max IOPS. Appears only if storage QoS is to be used. Set a maximum number of IOPS to be enforced on the storage side (the system may go above this limit in certain circumstances for short intervals).
(Optional) Storage Tags. The tags that should be associated with the primary storage for this disk. Tags are a comma-separated list of attributes of the storage. For example "ssd,blue". Tags are also added on Primary Storage. CloudStack matches tags on a disk offering to tags on the storage. If a tag is present on a disk offering, that tag (or tags) must also be present on Primary Storage for the volume to be provisioned. If no such primary storage exists, allocation from the disk offering will fail.
Public. Indicate whether the service offering should be available to all domains or only some domains. Choose Yes to make it available to all domains. Choose No to limit the scope to a subdomain; CloudStack will then prompt for the subdomain's name.
6. Click Add.
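Disk offerings can also be created through the API. The following createDiskOffering call is a rough sketch for a 20 GB offering bound to a storage tag; the name, description, size, and tag values are illustrative only:
command=createDiskOffering
name=Medium-SSD
displaytext=20GB data disk on SSD-tagged storage
disksize=20
tags=ssd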
Host Tags. (Optional) Any tags that you use to organize your hosts. CPU cap. Whether to limit the level of CPU usage even if spare capacity is available. Public. Indicate whether the service offering should be available to all domains or only some domains. Choose Yes to make it available to all domains. Choose No to limit the scope to a subdomain; CloudStack will then prompt for the subdomain's name. 6. Click Add.
A guest VM must have a default network, and can also have many additional networks. Depending on various parameters, such as the host and virtual switch used, you can observe a difference in the network rate in your cloud. For example, on a VMware host the actual network rate varies based on where the rate is configured (compute offering, network offering, or both); the network type (shared or isolated); and traffic direction (ingress or egress). The network rate set for a network offering used by a particular network in CloudStack is used for the traffic shaping policy of a port group, for example port group A, for that network: a particular subnet or VLAN on the actual network. The virtual routers for that network connect to port group A, and by default instances in that network connect to this port group. However, if an instance is deployed with a compute offering with the network rate set, and if this rate is used for the traffic shaping policy of another port group for the network, for example port group B, then instances using this compute offering are connected to port group B, instead of connecting to port group A. The traffic shaping policy on standard port groups in VMware only applies to the egress traffic, and the net effect depends on the type of network used in CloudStack. In shared networks, ingress traffic is unlimited for CloudStack, and egress traffic is limited to the rate that applies to the port group used by the instance, if any. If the compute offering has a network rate configured, this rate applies to the egress traffic; otherwise the network rate set for the network offering applies. For isolated networks, the network rate set for the network offering, if any, effectively applies to the ingress traffic. This is mainly because the network rate set for the network offering applies to the egress traffic from the virtual router to the instance. The egress traffic is limited by the rate that applies to the port group used by the instance, if any, similar to shared networks. For example: Network rate of network offering = 10 Mbps. Network rate of compute offering = 200 Mbps. In shared networks, ingress traffic will not be limited for CloudStack, while egress traffic will be limited to 200 Mbps. In an isolated network, ingress traffic will be limited to 10 Mbps and egress to 200 Mbps.
For more information, see Creating a New System Service Offering. 2. Back up the database:
mysqldump -u root -p cloud | bzip2 > cloud_backup.sql.bz2
4. Run the following queries on the cloud database. a. In the disk_offering table, identify the original default offering and the new offering you want to use by default. Take a note of the ID of the new offering.
select id,name,unique_name,type from disk_offering;
b. For the original default offering, set the value of unique_name to NULL.
# update disk_offering set unique_name = NULL where id = 10;
Ensure that you use the correct value for the ID. c. For the new offering that you want to use by default, set the value of unique_name as follows: For the default Console Proxy VM (CPVM) offering, set unique_name to 'Cloud.com-ConsoleProxy'. For the default Secondary Storage VM (SSVM) offering, set unique_name to 'Cloud.com-SecondaryStorage'. For example:
update disk_offering set unique_name = 'Cloud.com-ConsoleProxy' where id = 16;
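Similarly, if you are changing the default Secondary Storage VM offering, a statement along the following lines would be used; the ID value 17 here is purely illustrative, so substitute the ID of your own new offering:
update disk_offering set unique_name = 'Cloud.com-SecondaryStorage' where id = 17;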
5. Restart CloudStack Management Server. Restarting is required because the default offerings are loaded into the memory at startup.
service cloudstack-management restart
6. Destroy the existing CPVM or SSVM offerings and wait for them to be recreated. The new CPVM or SSVM are configured with the new offering.
Resources such as VLAN are allocated and garbage collected dynamically. There is one network offering for the entire network. The network offering can be upgraded or downgraded, but it is for the entire network. For more information, see Section 15.5.1, Configure Guest Traffic in an Advanced Zone.
Source NAT
Static NAT
Port Forwarding
Load Balancing
Firewall
VPN
(Optional) Name one of several available providers to use for a given service, such as Juniper for the firewall
(Optional) Network tag to specify which physical network to use
When creating a new VM, the user chooses one of the available network offerings, and that determines which network services the VM can use. The CloudStack administrator can create any number of custom network offerings, in addition to the default network offerings provided by CloudStack. By creating multiple custom network offerings, you can set up your cloud to offer different classes of service on a single multi-tenant physical network. For example, while the underlying physical wiring may be the same for two tenants, tenant A may only need simple firewall protection for their website, while tenant B may be running a web server farm and require a scalable firewall solution, load balancing solution, and alternate networks for accessing the database backend.
Note
If you create load balancing rules while using a network service offering that includes an external load balancer device such as NetScaler, and later change the network service offering to one that uses the CloudStack virtual router, you must create a firewall rule on the virtual router for each of your existing load balancing rules so that they continue to function. When creating a new virtual network, the CloudStack administrator chooses which network offering to enable for that network. Each virtual network is associated with one network offering. A virtual network can be upgraded or downgraded by changing its associated network offering. If you do this, be sure to reprogram the physical network to match. CloudStack also has internal network offerings for use by CloudStack system VMs. These network offerings are not visible to users but can be modified by administrators.
The following table shows which services are supported in Isolated and Shared networks.

Service            Isolated        Shared
DNS                Supported       Supported
Load Balancer      Supported       Supported
Firewall           Supported       Supported
Source NAT         Supported       Supported
Static NAT         Supported       Supported
Port Forwarding    Supported       Not Supported
VPN                Supported       Not Supported
User Data          Not Supported   Supported
Network ACL        Supported       Not Supported
Security Groups    Not Supported   Supported

For Firewall, see the Administration Guide for more information. If you select Source NAT, you can choose the CloudStack virtual router or any other Source NAT providers that have been configured in the cloud. If you select Static NAT, you can choose the CloudStack virtual router or any other Static NAT providers that have been configured in the cloud. If you select Port Forwarding, you can choose the CloudStack virtual router or any other Port Forwarding providers that have been configured in the cloud. For VPN, see Section 15.25, Remote Access VPN. For User Data, see Section 20.3, User Data and Meta Data. For Network ACL, see Section 15.27.4, Configuring Network Access Control List. For Security Groups, see Section 15.15.2, Adding a Security Group.
System Offering. If the service provider for any of the services selected in Supported Services is a virtual router, the System Offering field appears. Choose the system service offering that you want virtual routers to use in this network. For example, if you selected Load Balancer in Supported Services and selected a virtual router to provide load balancing, the System Offering field appears so you can choose between the CloudStack default system service offering and any custom system service offerings that have been defined by the CloudStack root administrator. For more information, see Section 8.2, System Service Offerings. LB Isolation: Specify what type of load balancer isolation you want for the network: Shared or Dedicated. Dedicated: If you select dedicated LB isolation, a dedicated load balancer device is assigned for the network from the pool of dedicated load balancer devices provisioned in the zone. If sufficient dedicated load balancer devices are not available in the zone, network creation fails. A dedicated device is a good choice for high-traffic networks that make full use of the device's resources. Shared: If you select shared LB isolation, a shared load balancer device is assigned for the network from the pool of shared load balancer devices provisioned in the zone. While provisioning, CloudStack picks the shared load balancer device that is used by the least number of accounts. Once the device reaches its maximum capacity, the device will not be allocated to a new account. Mode: You can select either Inline mode or Side by Side mode. Inline mode: Supported only for Juniper SRX firewall and F5 BigIP load balancer devices. In inline mode, a firewall device is placed in front of a load balancing device. The firewall acts as the gateway for all the incoming traffic, then redirects the load balancing traffic to the load balancer behind it. The load balancer in this case will not have direct access to the public network. Side by Side: In side by side mode, a firewall device is deployed in parallel with the load balancer device. So the traffic to the load balancer public IP is not routed through the firewall, and therefore, is exposed to the public network. Associate Public IP: Select this option if you want to assign a public IP address to the VMs deployed in the guest network. This option is available only if the guest network is shared, StaticNAT is enabled, and Elastic IP is enabled. For information on Elastic IP, see Section 15.11, About Elastic IP. Redundant router capability: Available only when Virtual Router is selected as the Source NAT provider. Select this option if you want to use two virtual routers in the network for uninterrupted connection: one operating as the master virtual router and the other as the backup. The master virtual router receives requests from and sends responses to the user's VM. The backup virtual router is activated only when the master is down. After the failover, the backup becomes the master virtual router. CloudStack deploys the routers on different hosts to ensure reliability if one host is down. Conserve mode: Indicate whether to use conserve mode. In this mode, network resources are allocated only when the first virtual machine starts in the network. When conserve mode is off, the public IP can only be used for a single service. For example, a public IP used for a port forwarding rule cannot be used for defining other services, such as StaticNAT or load balancing.
When the conserve mode is on, you can define more than one service on the same public IP.
Note
If StaticNAT is enabled, irrespective of the status of the conserve mode, no port forwarding or load balancing rule can be created for the IP. However, you can add firewall rules by using the createFirewallRule command, as shown in the example after this list. Tags: Network tag to specify which physical network to use. Default egress policy: Configure the default policy for firewall egress rules. Options are Allow and Deny. Default is Allow if no egress policy is specified, which indicates that all the egress traffic is accepted when a guest network is created from this offering. To block the egress traffic for a guest network, select Deny. In this case, when you configure egress rules for an isolated guest network, rules are added to allow the specified traffic. 6. Click Add.
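For instance, a createFirewallRule call of roughly the following shape opens TCP port 22 on a public IP; the IP address ID and CIDR list shown are placeholders for illustration:
command=createFirewallRule
ipaddressid=[your public IP address id]
protocol=TCP
startport=22
endport=22
cidrlist=0.0.0.0/0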
Instance name: a unique, immutable ID that is generated by CloudStack and cannot be modified by the user. This name conforms to the requirements in IETF RFC 1123.
Display name: the name displayed in the CloudStack web UI. Can be set by the user. Defaults to instance name.
Name: host name that the DHCP server assigns to the VM. Can be set by the user. Defaults to instance name.
Note
You can append the display name of a guest VM to its internal name. For more information, see Section 10.10, Appending a Display Name to the Guest VM's Internal Name. Guest VMs can be configured to be Highly Available (HA). An HA-enabled VM is monitored by the system. If the system detects that the VM is down, it will attempt to restart the VM, possibly on a different host. For more information, see HA-Enabled Virtual Machines. Each new VM is allocated one public IP address. When the VM is started, CloudStack automatically creates a static NAT between this public IP address and the private IP address of the VM. If elastic IP is in use (with the NetScaler load balancer), the IP address initially allocated to the new VM is not marked as elastic. The user must replace the automatically configured IP with a specifically acquired elastic IP, and set up the static NAT mapping between this new IP and the guest VM's private IP. The VM's original IP address is then released and returned to the pool of available public IPs. Optionally, you can also decide not to allocate a public IP to a VM in an EIP-enabled Basic zone. For more information on Elastic IP, see Section 15.11, About Elastic IP. CloudStack cannot distinguish a guest VM that was shut down by the user (such as with the shutdown command in Linux) from a VM that shut down unexpectedly. If an HA-enabled VM is shut down from inside the VM, CloudStack will restart it. To shut down an HA-enabled VM, you must go through the CloudStack UI or API.
10.3. VM Lifecycle
Virtual machines can be in the following states:
Once a virtual machine is destroyed, it cannot be recovered. All the resources used by the virtual machine will be reclaimed by the system. This includes the virtual machine's IP address. A stop will attempt to gracefully shut down the operating system, which typically involves terminating all the running applications. If the operating system cannot be stopped, it will be forcefully terminated. This has the same effect as pulling the power cord on a physical machine. A reboot is a stop followed by a start. CloudStack preserves the state of the virtual machine hard disk until the machine is destroyed. A running virtual machine may fail because of hardware or network issues. A failed virtual machine is in the down state. The system places the virtual machine into the down state if it does not receive the heartbeat from the hypervisor for three minutes. The user can manually restart the virtual machine from the down state. The system will start the virtual machine from the down state automatically if the virtual machine is marked as HA-enabled.
Note
You can create a VM without starting it. You can determine whether the VM needs to be started as part of the VM deployment. A request parameter, startVM, in the deployVirtualMachine API provides this feature. For more information, see the Developer's Guide and the example after this list. To create a VM from a template: 1. Log in to the CloudStack UI as an administrator or user. 2. In the left navigation bar, click Instances. 3. Click Add Instance. 4. Select a zone. 5. Select a template, then follow the steps in the wizard. For more information about how the templates came to be in this list, see Chapter 12, Working with Templates. 6. Be sure that the hardware you have allows starting the selected service offering. 7. Click Submit and your VM will be created and started.
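As a rough sketch, a deployVirtualMachine call that creates a VM without starting it might look like the following; all IDs are placeholders to be replaced with values from your own cloud:
command=deployVirtualMachine
serviceofferingid=[your service offering id]
templateid=[your template id]
zoneid=[your zone id]
startvm=false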
Note
For security reasons, the internal name of the VM is visible only to the root admin. To create a VM from an ISO:
Note
(XenServer) Windows VMs running on XenServer require PV drivers, which may be provided in the template or added after the VM is created. The PV drivers are necessary for essential management functions such as mounting additional volumes and ISO images, live migration, and graceful shutdown. 1. Log in to the CloudStack UI as an administrator or user. 2. In the left navigation bar, click Instances. 3. Click Add Instance. 4. Select a zone. 5. Select ISO Boot, and follow the steps in the wizard. 6. Click Submit and your VM will be created and started.
To access a VM directly over the network: 1. The VM must have some port open to incoming traffic. For example, in a basic zone, a new VM might be assigned to a security group which allows incoming traffic. This depends on what security group you picked when creating the VM. In other cases, you can open a port by setting up a port forwarding policy; see Section 15.22, IP Forwarding and Firewalling, and the example below. 2. If a port is open but you cannot access the VM using ssh, it's possible that ssh is not already enabled on the VM. This will depend on whether ssh is enabled in the template you picked when creating the VM. Access the VM through the CloudStack UI and enable ssh on the machine using the commands for the VM's operating system. 3. If the network has an external firewall device, you will need to create a firewall rule to allow access. See Section 15.22, IP Forwarding and Firewalling.
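For example, a port forwarding rule that maps public port 22 to a VM's private port 22 can be created with an API call along these lines; the IDs shown are placeholders:
command=createPortForwardingRule
ipaddressid=[your public IP address id]
virtualmachineid=[your VM id]
protocol=TCP
publicport=22
privateport=22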
that your installation of CloudStack has been extended with customized affinity group plugins. Assign a New VM to an Affinity Group To assign a new VM to an affinity group: Create the VM as usual, as described in Section 10.4, Creating VMs. In the Add Instance wizard, there is a new Affinity tab where you can select the affinity group. Change Affinity Group for an Existing VM To assign an existing VM to an affinity group: 1. Log in to the CloudStack UI as an administrator or user. 2. In the left navigation bar, click Instances. 3. Click the name of the VM you want to work with. 4. Stop the VM by clicking the Stop button. 5. Click the Change Affinity button. View Members of an Affinity Group To see which VMs are currently assigned to a particular affinity group: 1. In the left navigation bar, click Affinity Groups. 2. Click the name of the group you are interested in. 3. Click View Instances. The members of the group are listed. From here, you can click the name of any VM in the list to access all its details and controls. Delete an Affinity Group To delete an affinity group: 1. In the left navigation bar, click Affinity Groups. 2. Click the name of the group you are interested in. 3. Click Delete. Any VM that is a member of the affinity group will be disassociated from the group. The former group members will continue to run normally on the current hosts, but if the VM is restarted, it will no longer follow the host allocation rules from its former affinity group.
vmsnapshot.max: The maximum number of VM snapshots that can be saved for any given virtual machine in the cloud. The total possible number of VM snapshots in the cloud is (number of VMs) * vmsnapshot.max. If the number of snapshots for any VM ever hits the maximum, the older ones are removed by the snapshot expunge job.
vmsnapshot.create.wait: Number of seconds to wait for a snapshot job to succeed before declaring failure and issuing an error.
Note
If a snapshot is already in progress, then clicking this button will have no effect. 5. Provide a name and description. These will be displayed in the VM Snapshots list. 6. (For running VMs only) If you want to include the VM's memory in the snapshot, click the Memory checkbox. This saves the CPU and memory state of the virtual machine. If you don't check this box, then only the current state of the VM disk is saved. Checking this box makes the snapshot take longer. 7. Click OK. To delete a snapshot or restore a VM to the state saved in a particular snapshot: 1. Navigate to the VM as described in the earlier steps. 2. Click View VM Snapshots. 3. In the list of snapshots, click the name of the snapshot you want to work with. 4. Depending on what you want to do: To delete the snapshot, click the Delete button. To revert to the snapshot, click the Revert button.
Note
VM snapshots are deleted automatically when a VM is destroyed. You don't have to manually delete the snapshots in this case.
guest VM, the display name is appended to the internal name of the guest VM on the host. This makes the internal name format i-<user_id>-<vm_id>-<displayName>. The default value of vm.instancename.flag is set to false. This feature is intended to make the correlation between instance names and internal names easier in large data center deployments. The following table explains how a VM name is displayed in different scenarios.

User-Provided Display Name | vm.instancename.flag | Hostname on the VM | Name on vCenter                   | Internal Name
Yes                        | True                 | Display name       | i-<user_id>-<vm_id>-<displayName>   | i-<user_id>-<vm_id>-<displayName>
No                         | True                 | UUID               | i-<user_id>-<vm_id>-<instance.name> | i-<user_id>-<vm_id>-<instance.name>
Yes                        | False                | Display name       | i-<user_id>-<vm_id>-<instance.name> | i-<user_id>-<vm_id>-<instance.name>
No                         | False                | UUID               | i-<user_id>-<vm_id>-<instance.name> | i-<user_id>-<vm_id>-<instance.name>
compute offering that has the desired CPU and RAM values. You can use the same steps described above in Section 10.11, Changing the Service Offering for a VM, but skip the step where you stop the virtual machine. Of course, you might have to create a new compute offering first. When you submit a dynamic scaling request, the resources will be scaled up on the current host if possible. If the host does not have enough resources, the VM will be live migrated to another host in the same cluster. If there is no host in the cluster that can fulfill the requested level of CPU and RAM, the scaling operation will fail. The VM will continue to run as it was before.
10.11.5. Limitations
You cannot do dynamic scaling for system VMs on XenServer. CloudStack will not check to be sure that the new CPU and RAM levels are compatible with the OS running on the VM. When scaling memory or CPU for a Linux VM on VMware, you might need to run scripts in addition to the other steps mentioned above. For more information, see Hot adding memory in Linux (1012764) in the VMware Knowledge Base. (VMware) If resources are not available on the current host, scaling up will fail on VMware because of a known issue where CloudStack and vCenter calculate the available capacity differently. For more information, see https://issues.apache.org/jira/browse/CLOUDSTACK-1809. On VMs running Linux 64-bit and Windows 7 32-bit operating systems, if the VM is initially assigned a RAM of less than 3 GB, it can be dynamically scaled up to 3 GB, but not more. This is due to a known issue with these operating systems, which will freeze if an attempt is made to dynamically scale from less than 3 GB to more than 3 GB.
Note
If the VM's storage has to be migrated along with the VM, this will be noted in the host list. CloudStack will take care of the storage migration for you. 6. Click OK.
ISOs may be public or private, like templates. ISOs are not hypervisor-specific: a guest on vSphere can mount the exact same image that a guest on KVM can mount. ISO images may be stored in the system and made available with a privacy level similar to templates. ISO images are classified as either bootable or not bootable. A bootable ISO image is one that contains an OS image. CloudStack allows a user to boot a guest VM off of an ISO image. Users can also attach ISO images to guest VMs. For example, this enables installing PV drivers into Windows.
Note
It is not recommended to choose an older version of the OS than the version in the image. For example, choosing CentOS 5.4 to support a CentOS 6.2 image will usually not work. In these cases, choose Other. Extractable : Choose Yes if the ISO should be available for extraction. Public : Choose Yes if this ISO should be available to other users. Featured: Choose Yes if you would like this ISO to be more prominent for users to select. The ISO will appear in the Featured ISOs list. Only an administrator can make an ISO Featured. 6. Click OK. The Management Server will download the ISO. Depending on the size of the ISO, this may take a long time. The ISO status column will display Ready once it has been successfully downloaded into secondary storage. Clicking Refresh updates the download percentage. 7. Important: Wait for the ISO to finish downloading. If you move on to the next task and try to use the ISO right away, it will appear to fail. The entire ISO must be available before CloudStack can work with it.
software patch. The administrator or user naturally wants to apply the patch and then make sure existing VMs start using it. Whether a software update is involved or not, it's also possible to simply switch a VM from its current template to any other desired template. To change a VM's base image, call the restoreVirtualMachine API command and pass in the virtual machine ID and a new template ID. The template ID parameter may refer to either a template or an ISO, depending on which type of base image the VM was already using (it must match the previous type of image). When this call occurs, the VM's root disk is first destroyed, then a new root disk is created from the source designated in the template ID parameter. The new root disk is attached to the VM, and now the VM is based on the new template. You can also omit the template ID parameter from the restoreVirtualMachine call. In this case, the VM's root disk is destroyed and recreated, but from the same template or ISO that was already in use by the VM.
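As an illustrative sketch, switching a VM to a new base image might look like the following restoreVirtualMachine call; both IDs are placeholders:
command=restoreVirtualMachine
virtualmachineid=[your VM id]
templateid=[id of the new template or ISO]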
When the host comes back online, the VMs that were migrated off of it may be migrated back to it manually and new VMs can be added.
5. If you are disabling or enabling a pod or cluster, click the name of the zone that contains the pod or cluster. 6. Click the Compute tab. 7. In the Pods or Clusters node of the diagram, click View All. 8. Click the pod or cluster name in the list. 9. Click the Enable/Disable button.
your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches.
Note
The lack of up-to-date hotfixes can lead to data corruption and lost VMs. (XenServer) For more information, see Highly Recommended Hotfixes for XenServer in the CloudStack Knowledge Base.
4. This should return a single ID. Record the set of such IDs for these hosts. 5. Update the passwords for the host in the database. In this example, we change the passwords for hosts with IDs 5, 10, and 12 to "password".
mysql> update cloud.host set password='password' where id=5 or id=10 or id=12;
Note
It is safer not to deploy additional new VMs while the capacity recalculation is underway, in case the new values for available capacity are not high enough to accommodate the new VMs. Just wait for the new used/available values to become available, to be sure there is room for all the new VMs you want. To change the over-provisioning ratios for an existing cluster: 1. Log in as administrator to the CloudStack UI. 2. In the left navigation bar, click Infrastructure. 3. Under Clusters, click View All. 4. Select the cluster you want to work with, and click the Edit button. 5. Fill in your desired over-provisioning multipliers in the fields CPU overcommit ratio and RAM overcommit ratio. The value which is initially shown in these fields is the default value inherited from the global configuration settings.
Note
In XenServer, due to a constraint of this hypervisor, you cannot use an over-provisioning factor greater than 4.
service offering. Guests receive a CPU allocation that is proportionate to the GHz in the service offering. For example, a guest created from a 2 GHz service offering will receive twice the CPU allocation as a guest created from a 1 GHz service offering. CloudStack does not perform memory over-provisioning.
800-899
Specify VLAN: Select the option. For more information, see the CloudStack Installation Guide. 2. Using this network offering, create a network. You can create a VPC tier or an Isolated network. 3. Specify the VLAN when you create the network. When VLAN is specified, a CIDR and gateway are assigned to this network and the state is changed to Setup. In this state, the network will not be garbage collected.
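As a sketch, an API call to create an isolated network with an administrator-specified VLAN might look like the following; the name, IDs, VLAN number, and addresses are placeholder values for illustration:
command=createNetwork
name=Tier1
displaytext=Tier1 network with preset VLAN
networkofferingid=[id of the offering with Specify VLAN enabled]
zoneid=[your zone id]
vlan=805
gateway=10.1.1.1
netmask=255.255.255.0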
Note
You cannot change a VLAN once it's assigned to the network. The VLAN remains with the network for its entire life cycle.
For XenServer, install PV drivers / Xen tools on each template that you create. This will enable live migration and clean guest shutdown. For vSphere, install VMware Tools on each template that you create. This will enable console view to work properly.
anywhere
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
RH-Firewall-1-INPUT  all  --  anywhere   anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain RH-Firewall-1-INPUT (2 references)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
ACCEPT     icmp --  anywhere             anywhere    icmp any
ACCEPT     esp  --  anywhere             anywhere
ACCEPT     ah   --  anywhere             anywhere
ACCEPT     udp  --  anywhere             224.0.0.251 udp dpt:mdns
ACCEPT     udp  --  anywhere             anywhere    udp dpt:ipp
ACCEPT     tcp  --  anywhere             anywhere    tcp dpt:ipp
ACCEPT     all  --  anywhere             anywhere    state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere    state NEW tcp dpt:ssh
REJECT     all  --  anywhere             anywhere    reject-with icmp-host-prohibited
Note
Generally you should not choose an older version of the OS than the version in the image. For example, choosing CentOS 5.4 to support a CentOS 6.2 image will in general not work. In those cases you should choose Other. Public. Choose Yes to make this template accessible to all users of this CloudStack installation. The template will appear in the Community Templates list. See Section 12.5, Private and Public Templates.
Password Enabled. Choose Yes if your template has the CloudStack password change script installed. See Section 12.13, Adding Password Management to Your Templates. 5. Click Add. The new template will be visible in the Templates section when the template creation process has been completed. The template is then available when creating a new VM.
Note
You should not choose an older version of the OS than the version in the image. For example, choosing CentOS 5.4 to support a CentOS 6.2 image will in general not work. In those cases you should choose Other. Hypervisor : The supported hypervisors are listed. Select the desired one. Format. The format of the template upload file, such as VHD or OVA. Password Enabled. Choose Yes if your template has the CloudStack password change script installed. See Adding Password Management to Your Templates Extractable . Choose Yes if the template is available for extraction. If this option is selected, end users can download a full image of a template. Public . Choose Yes to make this template accessible to all users of this CloudStack installation. The template will appear in the Community Templates list. See Section 12.5, Private and Public Templates. Featured. Choose Yes if you would like this template to be more prominent for users to select. The template will appear in the Featured Templates list. Only an administrator can make a template Featured.
Note
(XenServer) Windows VMs running on XenServer require PV drivers, which may be provided in the template or added after the VM is created. The PV drivers are necessary for essential management functions such as mounting additional volumes and ISO images, live migration, and graceful shutdown. An overview of the procedure is as follows:
1. Upload your Windows ISO. For more information, see Section 10.15.1, Adding an ISO. 2. Create a VM Instance with this ISO. For more information, see Section 10.4, Creating VMs. 3. Follow the steps in Sysprep for Windows Server 2008 R2 (below) or Sysprep for Windows Server 2003 R2, depending on your version of Windows Server. 4. The preparation steps are complete. Now you can actually create the template as described in Creating the Windows Template.
Note
The steps outlined here are derived from the excellent guide by Charity Shelbourne, originally published at Windows Server 2008 Sysprep Mini-Setup. 1. Download and install the Windows AIK
Note
Windows AIK should not be installed on the Windows 2008 R2 VM you just created. Windows AIK should not be part of the template you create. It is only used to create the sysprep answer file. 2. Copy the install.wim file from the \sources directory of the Windows 2008 R2 installation DVD to the hard disk. This is a very large file and may take a long time to copy. Windows AIK requires the WIM file to be writable. 3. Start the Windows System Image Manager, which is part of the Windows AIK. 4. In the Windows Image pane, right click the Select a Windows image or catalog file option to load the install.wim file you just copied. 5. Select the Windows 2008 R2 Edition. You may be prompted with a warning that the catalog file cannot be opened. Click Yes to create a new catalog file. 6. In the Answer File pane, right click to create a new answer file. 7. Generate the answer file from the Windows System Image Manager using the following steps: a. The first page you need to automate is the Language and Country or Region Selection page. To automate this, expand Components in your Windows Image pane, right-click and add the Microsoft-Windows-International-Core setting to Pass 7 oobeSystem. In your Answer File pane, configure the InputLocale, SystemLocale, UILanguage, and UserLocale with the appropriate settings for your language and country or region. Should you have a question about any of these settings, you can right-click on the specific setting and select Help. This will open the appropriate CHM help file with more information, including examples on the setting you are attempting to configure.
b. You need to automate the Software License Terms Selection page, otherwise known as the End-User License Agreement (EULA). To do this, expand the Microsoft-Windows-Shell-Setup component. Highlight the OOBE setting, and add the setting to the Pass 7 oobeSystem. In Settings, set HideEULAPage to true.
c. Make sure the license key is properly set. If you use a MAK key, you can just enter the MAK key on the Windows 2008 R2 VM. You need not input the MAK into the Windows System Image Manager. If you use a KMS host for activation, you need not enter the Product Key. Details of Windows Volume Activation can be found at http://technet.microsoft.com/en-us/library/bb892849.aspx d. The next page to automate is the Change Administrator Password page. Expand the Microsoft-Windows-Shell-Setup component (if it is not still expanded), expand UserAccounts, right-click on AdministratorPassword, and add the setting to the Pass 7 oobeSystem configuration pass of your answer file. Under Settings, specify a password next to Value.
You may read the AIK documentation and set many more options that suit your deployment. The steps above are the minimum needed to make Windows unattended setup work. 8. Save the answer file as unattend.xml. You can ignore the warning messages that appear in the validation window. 9. Copy the unattend.xml file into the c:\windows\system32\sysprep directory of the Windows 2008 R2 Virtual Machine 10. Once you place the unattend.xml file in c:\windows\system32\sysprep directory, you run the sysprep tool as follows:
cd c:\Windows\System32\sysprep
sysprep.exe /oobe /generalize /shutdown
The Windows 2008 R2 VM will automatically shut down after sysprep is complete.
d. On the License Agreement screen, select Yes fully automate the installation. e. Provide your name and organization. f. Leave display settings at default. g. Set the appropriate time zone. h. Provide your product key. i. Select an appropriate license mode for your deployment j. Select Automatically generate computer name. k. Type a default administrator password. If you enable the password reset feature, the users will not actually use this password. This password will be reset by the instance manager after the guest boots up. l. Leave Network Components at Typical Settings. m. Select the WORKGROUP option. n. Leave Telephony options at default. o. Select appropriate Regional Settings. p. Select appropriate language settings. q. Do not install printers. r. Do not specify Run Once commands. s. You need not specify an identification string. t. Save the Answer File as c:\sysprep\sysprep.inf. 3. Run the following command to sysprep the image:
c:\sysprep\sysprep.exe -reseal -mini -activated
Note
When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text. To import an AMI: 1. Set up loopback on image file:
# mkdir -p /mnt/loop/centos62
# mount -o loop CentOS_6.2_x64 /mnt/loop/centos62
2. Install the kernel-xen package into the image. This downloads the PV kernel and ramdisk to the image.
# yum -c /mnt/loop/centos62/etc/yum.conf --installroot=/mnt/loop/centos62/ -y install kernel-xen
4. Determine the name of the PV kernel that has been installed into the image.
# cd /mnt/loop/centos62
# ls lib/modules/
2.6.16.33-xenU  2.6.16-xenU  2.6.18-164.15.1.el5xen  2.6.18-164.6.1.el5.centos.plus  2.6.18-xenU-ec2-v1.0  2.6.21.7-2.fc8xen  2.6.31-302-ec2
# ls boot/initrd*
boot/initrd-2.6.18-164.6.1.el5.centos.plus.img  boot/initrd-2.6.18-164.15.1.el5xen.img
# ls boot/vmlinuz*
boot/vmlinuz-2.6.18-164.15.1.el5xen  boot/vmlinuz-2.6.18-164.6.1.el5.centos.plus  boot/vmlinuz-2.6.18-xenU-ec2-v1.0  boot/vmlinuz-2.6.21-2952.fc8xen
Xen kernels/ramdisks always end with "xen". For the kernel version you choose, there must be an entry for that version under lib/modules, and there must be a corresponding initrd and vmlinuz. Above, the only kernel that satisfies this condition is 2.6.18-164.15.1.el5xen. 5. Based on your findings, create an entry in the grub.conf file. Below is an example entry.
default=0
timeout=5
hiddenmenu
title CentOS (2.6.18-164.15.1.el5xen)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.18-164.15.1.el5xen ro root=/dev/xvda
        initrd /boot/initrd-2.6.18-164.15.1.el5xen.img
7. Enable login via the console. The default console device in a XenServer system is xvc0. Ensure that etc/inittab and etc/securetty have the following lines respectively:
# grep xvc0 etc/inittab
co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav
# grep xvc0 etc/securetty
xvc0
8. Ensure the ramdisk supports PV disk and PV network. Customize this for the kernel version you have determined above.
# chroot /mnt/loop/centos62
# cd /boot/
# mv initrd-2.6.18-164.15.1.el5xen.img initrd-2.6.18-164.15.1.el5xen.img.bak
# mkinitrd -f /boot/initrd-2.6.18-164.15.1.el5xen.img --with=xennet --preload=xenblk --omit-scsi-modules 2.6.18-164.15.1.el5xen
11. Check etc/ssh/sshd_config for lines allowing ssh login using a password.
# egrep "PermitRootLogin|PasswordAuthentication" /mnt/loop/centos54/etc/ssh/sshd_config PermitRootLogin yes PasswordAuthentication yes
12. If you need the template to be enabled to reset passwords from the CloudStack UI or API, install the password change script into the image at this point. See Section 12.13, Adding Password Management to Your Templates. 13. Unmount and delete loopback mount.
# umount /mnt/loop/centos62
# losetup -d /dev/loop0
14. Copy the image file to your XenServer host's file-based storage repository. In the example below, the XenServer host is "xenhost". This XenServer has an NFS repository whose uuid is a9c5b8c8-536b-a193-a6dc-51af3e5ff799.
# scp CentOS_6.2_x64 xenhost:/var/run/sr-mount/a9c5b8c8-536b-a193-a6dc-51af3e5ff799/
15. Log in to the XenServer host and create a VDI the same size as the image.
[root@xenhost ~]# cd /var/run/sr-mount/a9c5b8c8-536b-a193-a6dc-51af3e5ff799
[root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# ls -lh CentOS_6.2_x64
-rw-r--r-- 1 root root 10G Mar 16 16:49 CentOS_6.2_x64
[root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# xe vdi-create virtual-size=10GiB sr-uuid=a9c5b8c8-536b-a193-a6dc-51af3e5ff799 type=user name-label="Centos 6.2 x86_64"
cad7317c-258b-4ef7-b207-cdf0283a7923
16. Import the image file into the VDI. This may take 10 to 20 minutes.
[root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# xe vdi-import filename=CentOS_6.2_x64 uuid=cad7317c-258b-4ef7-b207-cdf0283a7923
17. Locate the VHD file. This is the file with the VDI's UUID as its name. Compress it and upload it to your web server.
[root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# bzip2 -c cad7317c-258b-4ef7-b207-cdf0283a7923.vhd > CentOS_6.2_x64.vhd.bz2
[root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# scp CentOS_6.2_x64.vhd.bz2 webserver:/var/www/html/templates/
Integration Components files). 2. Restore the original initrd from backup in /boot/ (the backup is named *.backup0). 3. Remove the "hdX=noprobe" entries from /boot/grub/menu.lst. 4. Check /etc/fstab for any partitions mounted by device name. Change those entries (if any) to mount by LABEL or UUID. You can get that information with the blkid command. The next step is to make sure the VM is not running in Hyper-V, then get the VHD into XenServer. There are two options for doing this. Option one: 1. Import the VHD using XenCenter. In XenCenter, go to Tools > Virtual Appliance Tools > Disk Image Import. 2. Choose the VHD, then click Next. 3. Name the VM, choose the NFS VHD SR under Storage, enable "Run Operating System Fixups" and choose the NFS ISO SR. 4. Click Next, then Finish. A VM should be created. Option two: 1. Run XenConvert; under From choose VHD, under To choose XenServer. Click Next. 2. Choose the VHD, then click Next. 3. Input the XenServer host info, then click Next. 4. Name the VM, then click Next, then Convert. A VM should be created. Once you have a VM created from the Hyper-V VHD, prepare it using the following steps: 1. Boot the VM, uninstall Hyper-V Integration Services, and reboot. 2. Install XenServer Tools, then reboot. 3. Prepare the VM as desired. For example, run sysprep on Windows VMs. See Section 12.10, Creating a Windows Template. Either option above will create a VM in HVM mode. This is fine for Windows VMs, but Linux VMs may not perform optimally. Converting a Linux VM to PV mode will require additional steps and will vary by distribution. 1. Shut down the VM and copy the VHD from the NFS storage to a web server; for example, mount the NFS share on the web server and copy it, or from the XenServer host use sftp or scp to upload it to the web server. 2. In CloudStack, create a new template using the following values: URL. Give the URL for the VHD. OS Type. Use the appropriate OS. For PV mode on CentOS, choose Other PV (32-bit) or Other PV (64-bit). This choice is available only for XenServer. Hypervisor. XenServer. Format. VHD. The template will be created, and you can create instances from it.
4. Depending on the Linux distribution, continue with the appropriate step. On Fedora, CentOS/RHEL, and Debian, run:
chkconfig --add cloud-set-guest-password
CloudStack volume. This is highly useful for features such as storage Quality of Service. Currently this feature is supported for data disks (Disk Offerings).
XenServer uses a clustered LVM system to store VM images on iSCSI and Fiber Channel volumes and does not support over-provisioning in the hypervisor. The storage server itself, however, can support thin-provisioning. As a result CloudStack can still support storage over-provisioning by running on thin-provisioned storage volumes. KVM supports "Shared Mountpoint" storage. A shared mountpoint is a file system path local to each server in a given cluster. The path must be the same across all Hosts in the cluster, for example /mnt/primary1. This shared mountpoint is assumed to be a clustered filesystem such as OCFS2. In this case CloudStack does not attempt to mount or unmount the storage as is done with NFS. CloudStack requires that the administrator ensure that the storage is available. With NFS storage, CloudStack manages the overprovisioning. In this case the global configuration parameter storage.overprovisioning.factor controls the degree of overprovisioning. This is independent of hypervisor type. Local storage is an option for primary storage for vSphere, XenServer, and KVM. When the local disk option is enabled, a local disk storage pool is automatically created on each host. To use local storage for the System Virtual Machines (such as the Virtual Router), set system.vm.use.local.storage to true in global configuration. CloudStack supports multiple primary storage pools in a Cluster. For example, you could provision 2 NFS servers in primary storage. Or you could provision 1 iSCSI LUN initially and then add a second iSCSI LUN when the first approaches capacity.
Note
CloudStack supports attaching up to 13 data disks to a VM on XenServer hypervisor versions 6.0 and above. For the VMs on other hypervisor types, the data disk limit is 6.
3. In the left navigation bar, click Storage. 4. Click Upload Volume. 5. Provide the following: Name and Description. Any desired name and a brief description that can be shown in the UI. Availability Zone. Choose the zone where you want to store the volume. VMs running on hosts in this zone can attach the volume. Format. Choose one of the following to indicate the disk image format of the volume.
Hypervisor    Disk Image Format
XenServer     VHD
VMware        OVA
KVM           QCOW2
URL. The secure HTTP or HTTPS URL that CloudStack can use to access your disk. The type of file at the URL must match the value chosen in Format. For example, if Format is VHD, the URL might look like the following: http://yourFileServerIP/userdata/myDataDisk.vhd MD5 checksum. (Optional) Use the hash that you created in step 1. 6. Wait until the status of the volume shows that the upload is complete. Click Instances - Volumes, find the name you specified in step 5, and make sure the status is Uploaded.
Note
This procedure is different from moving volumes from one storage pool to another as described in Section 13.4.5, VM Storage Migration. A volume can be detached from a guest VM and attached to another guest. Both CloudStack administrators and users can detach volumes from VMs and move them to other VMs. If the two VMs are in different clusters, and the volume is large, it may take several minutes for the volume to be moved to the new VM. 1. Log in to the CloudStack UI as a user or admin. 2. In the left navigation bar, click Storage, and choose Volumes in Select View. Alternatively, if you know which VM the volume is attached to, you can click Instances, click the VM name, and click View Volumes. 3. Click the name of the volume you want to detach, then click the Detach Disk button. 4. To move the volume to another VM, follow the steps in Section 13.4.3, Attaching a Volume.
Note
This procedure is different from moving disk volumes from one VM to another as described in Section 13.4.4, Detaching and Moving Volumes. You can migrate a virtual machine's root disk volume or any additional data disk volume from one storage pool to another in the same zone. You can use the storage migration feature to achieve some commonly desired administration goals, such as balancing the load on storage pools and increasing the reliability of virtual machines by moving them away from any storage pool that is experiencing issues. On XenServer and VMware, live migration of VM storage is enabled through CloudStack support for XenMotion and vMotion. Live storage migration allows VMs to be moved from one host to another, where the VMs are not located on storage shared between the two hosts. It provides the option to live migrate a VM's disks along with the VM itself. It is possible to migrate a VM from one XenServer resource pool / VMware cluster to another, or to migrate a VM whose disks are on local storage, or even to migrate a VM's disks from one storage repository to another, all while the VM is running.
Note
Because of a limitation in VMware, live migration of storage for a VM is allowed only if the source and target storage pool are accessible to the source host; that is, the host where the VM is running when the live migration operation is requested.
6. Watch for the volume status to change to Migrating, then back to Ready. 13.4.5.1.2. Migrating Storage and Attaching to a Different VM 1. Log in to the CloudStack UI as a user or admin. 2. Detach the disk from the VM. See Section 13.4.4, Detaching and Moving Volumes but skip the reattach step at the end. You will do that after migrating to new storage. 3. Click the Migrate Volume button and choose the destination from the dropdown list.
4. Watch for the volume status to change to Migrating, then back to Ready. You can find the volume by clicking Storage in the left navigation bar. Make sure that Volumes is displayed at the top of the window, in the Select View dropdown. 5. Attach the volume to any desired VM running in the same cluster as the new storage server. See Section 13.4.3, Attaching a Volume
Note
If the VM's storage has to be migrated along with the VM, this will be noted in the host list. CloudStack will take care of the storage migration for you. 5. Watch for the volume status to change to Migrating, then back to Running (or Stopped, in the case of KVM). This can take some time. 6. (KVM only) Restart the VM.
Before you try to resize a volume, consider the following: The VMs associated with the volume are stopped. The data disks associated with the volume are removed. When a volume is shrunk, the disk associated with it is simply truncated, and doing so would put its content at risk of data loss. Therefore, resize any partitions or file systems before you shrink a data disk so that all the data is moved off from that disk. To resize a volume: 1. Log in to the CloudStack UI as a user or admin. 2. In the left navigation bar, click Storage. 3. In Select View, choose Volumes. 4. Select the volume name in the Volumes list, then click the Resize Volume button 5. In the Resize Volume pop-up, choose desired characteristics for the storage.
a. If you select Custom Disk, specify a custom size. b. Click Shrink OK to confirm that you are reducing the size of a volume. This parameter protects against inadvertent shrinking of a disk, which might lead to the risk of data loss. You must sign off that you know what you are doing. 6. Click OK.
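Volumes can also be resized through the resizeVolume API. A minimal sketch, assuming the unauthenticated integration API port 8096 is enabled; the volume UUID is a placeholder, the new size shown is an example value in GB, and shrinkok plays the same role as the Shrink OK confirmation in the UI:
http://localhost:8096/client/api?command=resizeVolume&response=json&id=<volume-UUID>&size=20&shrinkok=false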
Note
For upgrading customers: This process applies only to newly created snapshots after upgrade to CloudStack 4.2. Snapshots that have already been taken and stored in OVA format will continue to exist in that format, and will continue to work as expected.
The following global configuration settings control the behavior of the Usage Server.
enable.usage.server: Whether the Usage Server is active.
usage.aggregation.timezone: Time zone of usage records. Set this if the usage records and daily job execution are in different time zones. For example, with the following settings, the usage job will run at PST 00:15 and generate usage records for the 24 hours from 00:00:00 GMT to 23:59:59 GMT:
usage.stats.job.exec.time = 00:15
usage.execution.timezone = PST
usage.aggregation.timezone = GMT
Valid values for the time zone are specified in Appendix A, Time Zones. Default: GMT
usage.execution.timezone: The time zone of usage.stats.job.exec.time. Valid values for the time zone are specified in Appendix A, Time Zones. Default: The time zone of the management server.
usage.sanity.check.interval: The number of days between sanity checks. Set this in order to periodically search for records with erroneous data before issuing customer invoices. For example, this checks for VM usage records created after the VM was destroyed, and similar checks for templates, volumes, and so on. It also checks for usage times longer than the aggregation range. If any issue is found, the alert ALERT_TYPE_USAGE_SANITY_RESULT = 21 is sent.
usage.stats.job.aggregation.range: The time period in minutes between Usage Server processing jobs. For example, if you set it to 1440, the Usage Server will run once per day. If you set it to 600, it will run every ten hours. In general, when a Usage Server job runs, it processes all events generated since usage was last run. There is special handling for the case of 1440 (once per day). In this case the Usage Server does not necessarily process all records since usage was last run. CloudStack assumes that you require processing once per day for the previous, complete day's records. For example, if the current day is October 7, then it is assumed you would like to process records for October 6, from midnight to midnight. CloudStack assumes this midnight to midnight is relative to the usage.execution.timezone. Default: 1440
usage.stats.job.exec.time: The time when the Usage Server processing will start. It is specified in 24-hour format (HH:MM) in the time zone of the server, which should be GMT. For example, to start the Usage job at 10:30 GMT, enter 10:30. If usage.stats.job.aggregation.range is also set, and its value is not 1440, then its value will be added to usage.stats.job.exec.time to get the time to run the Usage Server job again. This is repeated until 24 hours have elapsed, and the next day's processing begins again at usage.stats.job.exec.time. Default: 00:15.
For example, suppose that your server is in GMT, your user population is predominantly on the East Coast of the United States, and you would like to process usage records every night at 2 AM local (EST) time. Choose these settings:
enable.usage.server = true
usage.execution.timezone = America/New_York
usage.stats.job.exec.time = 07:00 (this will run the Usage job at 2:00 AM EST; note that this will shift by an hour as the East Coast of the U.S. enters and exits Daylight Savings Time)
usage.stats.job.aggregation.range = 1440
With this configuration, the Usage job will run every night at 2 AM EST and will process records for the previous day's midnight-midnight as defined by the EST (America/New_York) time zone.
Note
Because the special value 1440 has been used for usage.stats.job.aggregation.range, the Usage Server will ignore the data between midnight and 2 AM. That data will be included in the next day's run.
snapshot.max.daily: Maximum recurring daily snapshots to be retained for a volume. If the limit is reached, snapshots from the beginning of the day are deleted so that newer ones can be saved. This limit does not apply to manual snapshots. If set to 0, recurring daily snapshots cannot be scheduled.
snapshot.max.weekly: Maximum recurring weekly snapshots to be retained for a volume. If the limit is reached, snapshots from the beginning of the week are deleted so that newer ones can be saved. This limit does not apply to manual snapshots. If set to 0, recurring weekly snapshots cannot be scheduled.
snapshot.max.monthly: Maximum recurring monthly snapshots to be retained for a volume. If the limit is reached, snapshots from the beginning of the month are deleted so that newer ones can be saved. This limit does not apply to manual snapshots. If set to 0, recurring monthly snapshots cannot be scheduled.
To modify global configuration parameters, use the global configuration screen in the CloudStack UI. See Setting Global Configuration Parameters
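Global configuration parameters can also be changed with the updateConfiguration API, which is convenient when scripting the Usage Server or snapshot settings described above. This is a minimal sketch, assuming the unauthenticated integration API port 8096 is enabled; note that some global settings require a Management Server restart before they take effect:
http://localhost:8096/client/api?command=updateConfiguration&response=json&name=usage.stats.job.exec.time&value=07:00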
max.account.ram (MB)
max.account.primary.storage (GB)
max.account.secondary.storage (GB)
max.project.cpus
max.project.ram (MB)
max.project.primary.storage (GB)
account tries to execute a new operation using any of these resources. For example, the existing behavior in the case of a VM is as follows: migrateVirtualMachine: The users under that account will be able to migrate the running VM into any other host without facing any limit issue. recoverVirtualMachine: Destroyed VMs cannot be recovered. For any resource type, if a domain has limit X, sub-domains or accounts under that domain can have their own limits. However, the sum of resources allocated to the sub-domains and accounts under the domain at any point of time should not exceed the value X. For example, if a domain has a CPU limit of 40, the sub-domain D1 and account A1 can each have a limit of 30, but at any point of time the resources allocated to D1 and A1 together should not exceed the limit of 40. If an operation needs to pass through two or more resource limit checks, the lower of the two limits is enforced. For example, if an account has a VM limit of 10 and a CPU limit of 20, and a user under that account requests 5 VMs of 4 CPUs each, the VM limit of 10 would allow the user to deploy 5 more VMs; however, the user cannot deploy any more instances because the CPU limit has been exhausted.
Public IP Limits: The number of public IP addresses that can be owned by an account. The default is 20.
Volume Limits: The number of disk volumes that can be created in an account. The default is 20.
Snapshot Limits: The number of snapshots that can be created in an account. The default is 20.
Template Limits: The number of templates that can be registered in an account. The default is 20.
VPC Limits: The number of VPCs that can be created in an account. The default is 20.
CPU Limits: The number of CPU cores that can be used for an account. The default is 40.
Memory Limits (MB): The amount of RAM that can be used for an account. The default is 40960.
Primary Storage Limits (GB): The primary storage space that can be used for an account. The default is 200.
Secondary Storage Limits (GB): The secondary storage space that can be used for an account. The default is 400.
6. Click Apply.
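Limits can also be set programmatically with the updateResourceLimit API. This is a minimal sketch, assuming the unauthenticated integration API port 8096 is enabled; resourcetype=8 corresponds to CPU, and the account name and domain UUID are placeholders for your own values:
http://localhost:8096/client/api?command=updateResourceLimit&response=json&resourcetype=8&max=40&account=exampleAccount&domainid=<domain-UUID>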
15.12.3. Acquiring a Portable IP 15.12.4. Transferring Portable IP 15.13. Multiple Subnets in Shared Network 15.13.1. Prerequisites and Guidelines 15.13.2. Adding Multiple Subnets to a Shared Network 15.14. Isolation in Advanced Zone Using Private VLAN 15.14.1. About Private VLAN 15.14.2. Prerequisites 15.14.3. Creating a PVLAN-Enabled Guest Network 15.15. Security Groups 15.15.1. About Security Groups 15.15.2. Adding a Security Group 15.15.3. Security Groups in Advanced Zones (KVM Only) 15.15.4. Enabling Security Groups 15.15.5. Adding Ingress and Egress Rules to a Security Group 15.16. External Firewalls and Load Balancers 15.16.1. About Using a NetScaler Load Balancer 15.16.2. Configuring SNMP Community String on a RHEL Server 15.16.3. Initial Setup of External Firewalls and Load Balancers 15.16.4. Ongoing Configuration of External Firewalls and Load Balancers 15.16.5. Load Balancer Rules 15.16.6. Configuring AutoScale 15.17. Global Server Load Balancing Support 15.17.1. About Global Server Load Balancing 15.17.2. Configuring GSLB 15.17.3. Known Limitation 15.18. Guest IP Ranges 15.19. Acquiring a New IP Address 15.20. Releasing an IP Address 15.21. Static NAT 15.21.1. Enabling or Disabling Static NAT 15.22. IP Forwarding and Firewalling 15.22.1. Firewall Rules 15.22.2. Egress Firewall Rules in an Advanced Zone 15.22.3. Port Forwarding 15.23. IP Load Balancing 15.24. DNS and DHCP 15.25. Remote Access VPN 15.25.1. Configuring Remote Access VPN 15.25.2. Using Remote Access VPN with Windows 15.25.3. Using Remote Access VPN with Mac OS X 15.25.4. Setting Up a Site-to-Site VPN Connection 15.26. About Inter-VLAN Routing (nTier Apps) 15.27. Configuring a Virtual Private Cloud 15.27.1. About Virtual Private Clouds 15.27.2. Adding a Virtual Private Cloud 15.27.3. Adding Tiers 15.27.4. Configuring Network Access Control List 15.27.5. Adding a Private Gateway to a VPC 15.27.6. Deploying VMs to the Tier 15.27.7. Deploying VMs to VPC Tier and Shared Networks 15.27.8. Acquiring a New IP Address for a VPC 15.27.9. Releasing an IP Address Alloted to a VPC 15.27.10. Enabling or Disabling Static NAT on a VPC 15.27.11. Adding Load Balancing Rules on a VPC 15.27.12. Adding a Port Forwarding Rule on a VPC 15.27.13. Removing Tiers 15.27.14. Editing, Restarting, and Removing a Virtual Private Cloud 15.28. Persistent Networks 15.28.1. Persistent Network Considerations
15.28.2. Creating a Persistent Guest Network In CloudStack, guest VMs can communicate with each other using shared infrastructure with the security and user perception that the guests have a private LAN. The CloudStack virtual router is the main component providing networking features for guest traffic.
Typically, the Management Server automatically creates a virtual router for each network. A virtual router is a special virtual machine that runs on the hosts. Each virtual router in an isolated network has three network interfaces. If multiple public VLANs are used, the router will have multiple public interfaces. Its eth0 interface serves as the gateway for the guest traffic and has the IP address of 10.1.1.1. Its eth1 interface is used by the system to configure the virtual router. Its eth2 interface is assigned a public IP address for public traffic. The virtual router provides DHCP and will automatically assign an IP address for each guest VM within the IP range assigned for the network. The user can manually reconfigure guest VMs to assume different IP addresses. Source NAT is automatically configured in the virtual router to forward outbound traffic for all guest VMs.
Servers are connected as follows: Storage devices are connected to only the network that carries management traffic. Hosts are connected to networks for both management traffic and public traffic. Hosts are also connected to one or more networks carrying guest traffic. We recommend the use of multiple physical Ethernet cards to implement each network interface as well as redundant switch fabric in order to maximize throughput and improve reliability.
A firewall for management traffic operates in the NAT mode. The network typically is assigned IP addresses in the 192.168.0.0/16 Class B private address space. Each pod is assigned IP addresses in the 192.168.*.0/24 Class C private address space. Each zone has its own set of public IP addresses. Public IP addresses from different zones do not overlap.
4. Provide the following information: Name: The name of the network. This will be user-visible. Display Text: The description of the network. This will be user-visible. Zone: The zone in which you are configuring the guest network. Network offering: If the administrator has configured multiple network offerings, select the one you want to use for this network. Guest Gateway: The gateway that the guests should use. Guest Netmask: The netmask in use on the subnet the guests will use. 5. Click OK.
In zones that use advanced networking, additional networks for guest traffic may be added at any time after the initial installation. You can also customize the domain name associated with the network by specifying a DNS suffix for each network. A VM's networks are defined at VM creation time. A VM cannot add or remove networks after it has been created, although the user can go into the guest and remove the IP address from the NIC on a particular network. Each VM has just one default network. The virtual router's DHCP reply will set the guest's default gateway as that for the default network. Multiple non-default networks may be added to a guest in addition to the single, required default network. The administrator can control which networks are available as the default network. Additional networks can either be available to all accounts or be assigned to a specific account. Networks that are available to all accounts are zone-wide. Any user with access to the zone can create a VM with access to that network. These zone-wide networks provide little or no isolation between guests. Networks that are assigned to a specific account provide strong isolation.
15.6.2.1. Prerequisites
Ensure that vm-tools is running on guest VMs so that adding or removing networks works on the VMware hypervisor.
1. Log in to the CloudStack UI as an administrator or end user. 2. In the left navigation, click Instances. 3. Choose the VM that you want to work with. 4. Click the NICs tab. 5. Locate the NIC you want to work with. 6. Click the Set default NIC button. 7. Click Yes to confirm.
Case 1: CIDR: 10.1.1.0/24; Network CIDR: None; Reserved IP Range for Non-CloudStack VMs: None; Description: No IP Reservation.
Case 2: CIDR: 10.1.1.0/26; Network CIDR: 10.1.1.0/24; Reserved IP Range for Non-CloudStack VMs: the addresses in 10.1.1.0/24 that fall outside 10.1.1.0/26; Description: IP Reservation configured by the UpdateNetwork API with guestvmcidr=10.1.1.0/26 or enter 10.1.1.0/26 in the CIDR field in the UI.
Case 3: CIDR: 10.1.1.0/24; Network CIDR: None; Reserved IP Range for Non-CloudStack VMs: None; Description: Removing IP Reservation by the UpdateNetwork API with guestvmcidr=10.1.1.0/24 or enter 10.1.1.0/24 in the CIDR field in the UI.
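For reference, the UpdateNetwork call used in cases 2 and 3 above looks like the following. This is a sketch only, assuming the unauthenticated integration API port 8096 is enabled and using a placeholder network UUID:
http://localhost:8096/client/api?command=updateNetwork&response=json&id=<network-UUID>&guestvmcidr=10.1.1.0/26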
15.7.2. Limitations
IP Reservation is not supported if active IPs are found outside the Guest VM CIDR. Upgrading a network offering which causes a change in CIDR (such as upgrading an offering with no external devices to one with external devices) voids the IP Reservation, if any. Reconfigure the IP Reservation in the newly reimplemented network.
b. Click Add.
15.9.2. Guidelines
To prevent IP conflict, configure different subnets when multiple networks are connected to the same VM.
CloudStack provides you with the flexibility to add guest IP ranges from different subnets in Basic zones and security group-enabled Advanced zones. For security group-enabled Advanced zones, this implies that multiple subnets can be added to the same VLAN. With the addition of this feature, you will be able to add IP address ranges from the same subnet or from a different one when IP addresses are exhausted. This in turn allows you to employ a higher number of subnets and thus reduce the address management overhead. To support this feature, the createVlanIpRange API is extended to add IP ranges also from a different subnet. Ensure that you manually configure the gateway of the new subnet before adding the IP range. Note that CloudStack supports only one gateway for a subnet; overlapping subnets are not currently supported. Use the deleteVlanIpRange API to delete IP ranges. This operation fails if an IP from the remove range is in use. If the remove range contains the IP address on which the DHCP server is running, CloudStack acquires a new IP from the same subnet. If no IP is available in the subnet, the remove operation fails. This feature is supported on KVM, XenServer, and VMware hypervisors.
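For example, a new range from a different subnet could be added to an existing shared network with a call like the following. This is a sketch only, assuming the unauthenticated integration API port 8096 is enabled; the network UUID and all addresses are placeholders, and the gateway of the new subnet must already be configured as described above:
http://localhost:8096/client/api?command=createVlanIpRange&response=json&networkid=<network-UUID>&forvirtualnetwork=false&gateway=10.1.2.1&netmask=255.255.255.0&startip=10.1.2.10&endip=10.1.2.100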
In the illustration, a NetScaler appliance is the default entry or exit point for the CloudStack instances, and the firewall is the default entry or exit point for the rest of the data center. The NetScaler provides LB services and static NAT service to the guest networks. The guest traffic in the pods and the Management Server are on different subnets / VLANs. The policy-based routing in the data center core switch sends the public traffic through the NetScaler, whereas the rest of the data center goes through the firewall. The EIP work flow is as follows: When a user VM is deployed, a public IP is automatically acquired from the pool of public IPs configured in the zone. This IP is owned by the VM's account. Each VM will have its own private IP. When the user VM starts, Static NAT is provisioned on the NetScaler device by using the Inbound Network Address Translation (INAT) and Reverse NAT (RNAT) rules between the public IP and the private IP.
Note
Inbound NAT (INAT) is a type of NAT supported by NetScaler, in which the destination IP address is replaced in the packets from the public network, such as the Internet, with the private IP address of a VM in the private network. Reverse NAT (RNAT) is a type of NAT supported by NetScaler, in which the source IP address is replaced in the packets generated by a VM in the private network with the public IP address. This default public IP will be released in two cases: When the VM is stopped. When the VM starts, it again receives a new public IP, not necessarily the same one allocated initially, from the pool of public IPs. When the user acquires a public IP (Elastic IP). This public IP is associated with the account, but will not be mapped to any private IP. However, the user can enable Static NAT to associate this IP with the private IP of a VM in the account. The Static NAT rule for the public IP can be disabled at any time. When Static NAT is disabled, a new public IP is allocated from the pool, which is not necessarily the same one allocated initially.
For deployments where public IPs are a limited resource, you have the flexibility to choose not to allocate a public IP by default. You can use the Associate Public IP option to turn on or off the automatic public IP assignment in the EIP-enabled Basic zones. If you turn off the automatic public IP assignment while creating a network offering, only a private IP is assigned to a VM when the VM is deployed with that network offering. Later, the user can acquire an IP for the VM and enable static NAT. For more information on the Associate Public IP option, see Section 9.4.1, Creating a New Network Offering.
Note
The Associate Public IP feature is designed only for use with user VMs. The System VMs continue to get both a public IP and a private IP by default, irrespective of the network offering configuration. New deployments which use the default shared network offering with EIP and ELB services to create a shared network in the Basic zone will continue allocating public IPs to each user VM.
Replace the UUID with the appropriate UUID. For example, if you want to transfer a portable IP to network X and VM Y in a network, execute the following:
http://localhost:8096/client/api?command=enableStaticNat&response=json&ipaddressid=a4bc37b2-4b4e-461d-9a62-b66414618e36&virtualmachineid=Y&networkid=X
10. Specify the following: All the fields are mandatory. Gateway: The gateway for the tier you create. Ensure that the gateway is within the Super CIDR range that you specified while creating the VPC, and does not overlap with the CIDR of any existing tier within the VPC. Netmask: The netmask for the tier you create. For example, if the VPC CIDR is 10.0.0.0/16 and the network tier CIDR is 10.0.1.0/24, the gateway of the tier is 10.0.1.1, and the netmask of the tier is 255.255.255.0. Start IP/End IP: A range of IP addresses that are accessible from the Internet and will be allocated to guest VMs. Enter the first and last IP addresses that define a range that CloudStack can assign to guest VMs. 11. Click OK.
15.14.2. Prerequisites
Use a PVLAN-supported switch. See the Private VLAN Catalyst Switch Support Matrix for more information. All the layer 2 switches, which are PVLAN-aware, are connected to each other, and one of them is connected to a router. All the ports connected to the host would be configured in trunk mode. Open Management VLAN, Primary VLAN (public) and Secondary Isolated VLAN ports. Configure the switch port connected to the router in PVLAN promiscuous trunk mode, which would translate an isolated VLAN to the primary VLAN for the PVLAN-unaware router. Note that only the Cisco Catalyst 4500 has the PVLAN promiscuous trunk mode to connect both normal VLAN and PVLAN to a PVLAN-unaware switch. For other Catalyst switches that support PVLAN, connect the switch to the upper switch by using cables, one for each PVLAN pair. Configure private VLAN on your physical switches out-of-band. Before you use PVLAN on XenServer and KVM, enable Open vSwitch (OVS).
Note
OVS on XenServer and KVM does not support PVLAN natively. Therefore, CloudStack simulates PVLAN on OVS for XenServer and KVM by modifying the flow table.
7. On the Guest node of the diagram, click Configure. 8. Click the Network tab. 9. Click Add guest network. The Add guest network window is displayed. 10. Specify the following: Name: The name of the network. This will be visible to the user. Description: The short description of the network that can be displayed to users. VLAN ID: The unique ID of the VLAN. Secondary Isolated VLAN ID: The unique ID of the Secondary Isolated VLAN. For a description of Secondary Isolated VLAN, see Section 15.14.1, About Private VLAN. Scope: The available scopes are Domain, Account, Project, and All. Domain: Selecting Domain limits the scope of this guest network to the domain you specify. The network will not be available for other domains. If you select Subdomain Access, the guest network is available to all the subdomains within the selected domain. Account: The account for which the guest network is being created. You must specify the domain the account belongs to. Project: The project for which the guest network is being created. You must specify the domain the project belongs to. All: The guest network is available for all the domains, accounts, and projects within the selected zone. Network Offering: If the administrator has configured multiple network offerings, select the one you want to use for this network. Gateway: The gateway that the guests should use. Netmask: The netmask in use on the subnet the guests will use. IP Range: A range of IP addresses that are accessible from the Internet and are assigned to the guest VMs. Network Domain: A custom DNS suffix at the level of a network. If you want to assign a special domain name to the guest VM network, specify a DNS suffix. 11. Click OK to confirm.
Note
In a zone that uses advanced networking, you can instead define multiple guest networks to isolate traffic to VMs. Each CloudStack account comes with a default security group that denies all inbound traffic and allows all outbound traffic. The default security group can be modified so that all new VMs inherit some other desired set of rules. Any CloudStack user can set up any number of additional security groups. When a new VM is launched, it is assigned to the default security group unless another user-defined security group is specified. A VM can be a member of any number of security groups. Once a VM is assigned to a security group, it remains in that group for its entire lifetime; you can not move a running VM from one security group to another. You can modify a security group by deleting or adding any number of ingress and egress rules. When you do, the new rules apply to all VMs in the group, whether running or stopped. If no ingress rules are specified, then no traffic will be allowed in, except for responses to any traffic that has been allowed out through an egress rule.
Limitations The following are not supported for this feature: Two IP ranges with the same VLAN and different gateway or netmask in security group-enabled shared network. Two IP ranges with the same VLAN and different gateway or netmask in account-specific shared networks. Multiple VLAN ranges in security group-enabled shared network. Multiple VLAN ranges in account-specific shared networks. Security groups must be enabled in the zone in order for this feature to be used.
5. To add an egress rule, click the Egress Rules tab and fill out the following fields to specify what type of traffic is allowed to be sent out of VM instances in this security group. If no egress rules are specified, then all traffic will be allowed out. Once egress rules are specified, the following types of traffic are allowed out: traffic specified in egress rules; queries to DNS and DHCP servers; and responses to any traffic that has been allowed in through an ingress rule Add by CIDR/Account. Indicate whether the destination of the traffic will be defined by IP address (CIDR) or an existing security group in a CloudStack account (Account). Choose Account if you want to allow outgoing traffic to all VMs in another security group. Protocol. The networking protocol that VMs will use to send outgoing traffic. TCP and UDP are typically used for data exchange and end-user communications. ICMP is typically used to send error messages or network monitoring data. Start Port, End Port. (TCP, UDP only) A range of listening ports that are the destination for the outgoing traffic. If you are opening a single port, use the same number in both fields. ICMP Type, ICMP Code . (ICMP only) The type of message and error code that will be sent CIDR. (Add by CIDR only) To send traffic only to IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. The CIDR is the base IP address of the destination. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0. Account, Security Group. (Add by Account only) To allow traffic to be sent to another security group, enter the CloudStack account and name of a security group that has already been defined in that account. To allow traffic between VMs within the security group you are editing now, enter its name. 6. Click Add.
Note
In a Basic zone, load balancing service is supported only if Elastic IP or Elastic LB services are enabled. When a NetScaler load balancer is used to provide EIP or ELB services in a Basic zone, ensure that all guest VM traffic enters and exits through the NetScaler device. When inbound traffic goes through the NetScaler device, traffic is routed by using the NAT protocol depending on the EIP/ELB configured on the public IP to the private IP. The traffic that originates from the guest VMs usually goes through the layer 3 router. To ensure that outbound traffic goes through the NetScaler device providing EIP/ELB, the layer 3 router must have a policy-based route. The policy-based route must be set up so that all traffic originating from the guest VMs is directed to the NetScaler device. This is required to ensure that the outbound traffic from the guest VMs is routed to a public IP by using NAT. For more information on Elastic IP, see Section 15.11, About Elastic IP. The NetScaler can be set up in direct (outside the firewall) mode. It must be added before any load balancing rules are deployed on guest VMs in the zone. The functional behavior of the NetScaler with CloudStack is the same as described in the CloudStack documentation for using an F5 external load balancer. The only exception is that the F5 supports routing domains, and NetScaler does not. NetScaler cannot yet be used as a firewall. To install and enable an external load balancer for CloudStack management, see External Guest Load Balancer Integration in the Installation Guide. The Citrix NetScaler comes in three varieties. The following summarizes how these variants are treated in CloudStack.
MPX: Physical appliance. Capable of deep packet inspection. Can act as application firewall and load balancer. CloudStack supported features: In advanced zones, load balancer functionality is fully supported without limitation. In basic zones, static NAT, elastic IP (EIP), and elastic load balancing (ELB) are also provided.
VPX: Virtual appliance. Can run as a VM on XenServer, ESXi, and Hyper-V hypervisors. Same functionality as MPX. CloudStack supported features: Supported on ESXi and XenServer. Same functional support as for MPX. CloudStack will treat VPX and MPX as the same device type.
SDX: Physical appliance. Can create multiple fully isolated VPX instances on a single appliance to support multi-tenant usage. CloudStack supported features: CloudStack will dynamically provision, configure, and manage the life cycle of VPX instances on the SDX. Provisioned instances are added into CloudStack automatically; no manual configuration by the administrator is required. Once a VPX instance is added into CloudStack, it is treated the same as a VPX on an ESXi host.
2. Edit the /etc/snmp/snmpd.conf file to allow the SNMP polling from the NetScaler device. a. Map the community name into a security name (local and mynetwork, depending on where the request is coming from):
Note
Use a strong password instead of public when you edit the following table.
#          sec.name     source        community
com2sec    local        localhost     public
com2sec    mynetwork    0.0.0.0       public
Note
Setting the source to 0.0.0.0 allows polling from any IP address, including the NetScaler device. b. Map the security names into group names:
#        group.name    sec.model    sec.name
group    MyRWGroup     v1           local
group    MyRWGroup     v2c          local
group    MyROGroup     v1           mynetwork
group    MyROGroup     v2c          mynetwork
d. Grant access with different write permissions to the two groups to the view you created.
#                      context    sec.model    sec.level    prefix    read    write    notif
access    MyROGroup    ""         any          noauth       exact     all     none     none
access    MyRWGroup    ""         any          noauth       exact     all     all      all
5. Ensure that the SNMP service is started automatically during the system startup:
chkconfig snmpd on
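After restarting the snmpd service so that the new configuration is loaded, the counters can be polled manually from another machine to verify the setup before pointing the NetScaler at the host. This is a sketch only; it assumes the community string is still public, the host IP is a placeholder, and that the UCD-SNMP CPU statistics subtree is the data the AutoScale counters read:
service snmpd restart
snmpwalk -v 2c -c public <host-IP> .1.3.6.1.4.1.2021.11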
Note
If you create load balancing rules while using a network service offering that includes an external load balancer device such as NetScaler, and later change the network service offering to one that uses the CloudStack virtual router, you must create a firewall rule on the virtual router for each of your existing load balancing rules so that they continue to function.
For details on how to set a health check policy using the UI, see Section 15.16.5.1, Adding a Load Balancer Rule.
Note
AutoScale is supported on NetScaler Release 10 Build 73.e and beyond.
Prerequisites Before you configure an AutoScale rule, consider the following: Ensure that the necessary template is prepared before configuring AutoScale. When a VM is deployed by using a template and when it comes up, the application should be up and running.
Note
If the application is not running, the NetScaler device considers the VM as ineffective and continues provisioning the VMs unconditionally until the resource limit is exhausted. Deploy the templates you prepared. Ensure that the applications come up on the first boot and are ready to take traffic. Observe the time required to deploy the template. Consider this time when you specify the quiet time while configuring AutoScale. The AutoScale feature supports the SNMP counters that can be used to define conditions for taking scale up or scale down actions. To monitor the SNMP-based counter, ensure that the SNMP agent is installed in the template used for creating the AutoScale VMs, and the SNMP operations work with the configured SNMP community and port by using standard SNMP managers. For example, see Section 15.16.2, Configuring SNMP Community String on a RHEL Server to configure SNMP on a RHEL machine. Ensure that the endpointe.url parameter present in the Global Settings is set to the Management Server API URL. For example, http://10.102.102.22:8080/client/api. In a multi-node Management Server deployment, use the virtual IP address configured in the load balancer for the management server cluster. Additionally, ensure that the NetScaler device has access to this IP address to provide AutoScale support. If you update the endpointe.url, disable the AutoScale functionality of the load balancer rules in the system, then enable them again to reflect the changes. For more information, see Updating an AutoScale Configuration. If the API Key and Secret Key are regenerated for an AutoScale user, ensure that the AutoScale functionality of the load balancers that the user participates in is disabled and then enabled to reflect the configuration changes in the NetScaler. In an advanced Zone, ensure that at least one VM is present before configuring a load balancer rule with AutoScale. Having one VM in the network ensures that the network is in the implemented state for configuring AutoScale. Configuration Specify the following:
Template : A template consists of a base OS image and application. A template is used to provision the new instance of an application on a scaleup action. When a VM is deployed from a template, the VM can start taking the traffic from the load balancer without any admin intervention. For example, if the VM is deployed for a Web service, it should have the Web server running, the database connected, and so on. Compute offering: A predefined set of virtual hardware attributes, including CPU speed, number of CPUs, and RAM size, that the user can select when creating a new virtual machine instance. Choose one of the compute offerings to be used while provisioning a VM instance as part of a scaleup action. Min Instance : The minimum number of active VM instances that is assigned to a load balancing rule. The active VM instances are the application instances that are up and serving the traffic, and are being load balanced. This parameter ensures that a load balancing rule has at least the configured number of active VM instances available to serve the traffic.
Note
If an application, such as SAP, running on a VM instance is down for some reason, the VM is then not counted as part of the Min Instance parameter, and the AutoScale feature initiates a scaleup action if the number of active VM instances is below the configured value. Similarly, when an application instance comes up from its earlier down state, this application instance is counted as part of the active instance count and the AutoScale process initiates a scaledown action when the active instance count breaches the Max Instance value. Max Instance : Maximum number of active VM instances that should be assigned to a load balancing rule. This parameter defines the upper limit of active VM instances that can be assigned to a load balancing rule. Specifying a large value for the maximum instance parameter might result in provisioning a large number of VM instances, which in turn leads to a single load balancing rule exhausting the VM instances limit specified at the account or domain level.
Note
If an application, such as SAP, running on a VM instance is down for some reason, the VM is not counted as part of the Max Instance parameter. So there may be scenarios where the number of VMs provisioned for a scaleup action might be more than the configured Max Instance value. Once the application instances in the VMs are up from an earlier down state, the AutoScale feature starts aligning to the configured Max Instance value. Specify the following scale-up and scale-down policies: Duration: The duration, in seconds, for which the conditions you specify must be true to trigger a scaleup action. The conditions defined should hold true for the entire duration you specify for an AutoScale action to be invoked. Counter : The performance counters expose the state of the monitored instances. By default, CloudStack offers four performance counters: three SNMP counters and one NetScaler counter. The SNMP counters are Linux User CPU, Linux System CPU, and Linux CPU Idle. The NetScaler counter is ResponseTime. The root administrator can add additional counters into CloudStack by using the CloudStack API. Operator : The following five relational operators are supported in the AutoScale feature: Greater than, Less than, Less than or equal to, Greater than or equal to, and Equal to. Threshold: Threshold value to be used for the counter. Once the counter defined above breaches the threshold value, the AutoScale feature initiates a scaleup or scaledown action. Add: Click Add to add the condition. Additionally, if you want to configure the advanced settings, click Show advanced settings, and specify the following: Polling interval: The frequency at which the conditions, a combination of counter, operator, and threshold, are evaluated before taking a scale up or down action. The default polling interval is 30 seconds. Quiet Time : This is the cool down period after an AutoScale action is initiated. The time includes the time taken to complete provisioning a VM instance from its template and the time taken by an application to be ready to serve traffic. This quiet time allows the fleet to come up to a stable state before any action can take place. The default is 300 seconds. Destroy VM Grace Period: The duration in seconds, after a scaledown action is initiated, to wait before the VM is destroyed as part of the scaledown action. This is to ensure graceful close of any pending sessions or transactions being served by the VM marked for destroy. The default is 120 seconds. Security Groups : Security groups provide a way to isolate traffic to the VM instances. A security group is a group of VMs that filter their incoming and outgoing traffic according to a set of rules, called ingress and egress rules. These rules filter network traffic according to the IP address that is attempting to communicate with the VM. Disk Offerings : A predefined set of disk sizes for primary data storage. SNMP Community : The SNMP community string to be used by the NetScaler device to query the configured counter value from the provisioned VM instances. Default is public. SNMP Port: The port number on which the SNMP agent that runs on the provisioned VMs is listening. Default port is 161. User : This is the user that the NetScaler device uses to invoke scaleup and scaledown API calls to the cloud. If no option is specified, the user who configures AutoScaling is applied. Specify another user name to override.
Apply : Click Apply to create the AutoScale configuration. Disabling and Enabling an AutoScale Configuration If you want to perform any maintenance operation on the AutoScale VM instances, disable the AutoScale configuration. When the AutoScale configuration is disabled, no scaleup or scaledown action is performed. You can use this downtime for the maintenance activities. To disable the AutoScale configuration, click the Disable AutoScale button.
The button toggles between enable and disable, depending on whether AutoScale is currently enabled or not. After the maintenance operations are done, you can enable the AutoScale configuration again. To enable it, open the AutoScale configuration page again, then click the Enable AutoScale button. Updating an AutoScale Configuration You can update the various parameters and add or delete the conditions in a scaleup or scaledown rule. Before you update an AutoScale configuration, ensure that you disable the AutoScale load balancer rule by clicking the Disable AutoScale button. After you modify the required AutoScale parameters, click Apply. To apply the new AutoScale policies, open the AutoScale configuration page again, then click the Enable AutoScale button. Runtime Considerations An administrator should not assign a VM to a load balancing rule which is configured for AutoScale. If NetScaler is shut down or restarted before a VM provisioning is completed, the provisioned VM cannot be a part of the load balancing rule even though the intent was to assign it to a load balancing rule. To work around this, rename the AutoScale provisioned VMs based on the rule name or ID so that at any point in time the VMs can be reconciled to their load balancing rule. Making API calls outside the context of AutoScale, such as destroyVM, on an autoscaled VM leaves the load balancing configuration in an inconsistent state. Though the VM is destroyed from the load balancer rule, NetScaler continues to show the VM as a service assigned to a rule.
queries, such as web site IP address. In a GSLB environment, an ADNS service responds only to DNS requests for domains for which the GSLB service provider is authoritative. When an ADNS service is configured, the service provider owns that IP address and advertises it. When you create an ADNS service, the NetScaler responds to DNS queries on the configured ADNS service IP and port.
Tenant-A wishes to leverage the GSLB service provided by the xyztelco cloud. Tenant-A configures a GSLB rule to load balance traffic across virtual server 1 at Zone-1 and virtual server 2 at Zone-2. The domain name is provided as A.xyztelco.com. CloudStack orchestrates setting up GSLB virtual server 1 on the GSLB service provider at Zone-1. CloudStack binds virtual server 1 of Zone-1 and virtual server 2 of Zone-2 to GSLB virtual server 1. GSLB virtual server 1 is configured to start monitoring the health of virtual server 1 and 2 in Zone-1. CloudStack will also orchestrate setting up GSLB virtual server 2 on the GSLB service provider at Zone-2. CloudStack will bind virtual server 1 of Zone-1 and virtual server 2 of Zone-2 to GSLB virtual server 2. GSLB virtual server 2 is configured to start monitoring the health of virtual server 1 and 2. CloudStack will bind the domain A.xyztelco.com to both GSLB virtual server 1 and 2. At this point, the Tenant-A service will be globally reachable at A.xyztelco.com. The private DNS server for the domain xyztelco.com is configured by the admin out-of-band to resolve the domain A.xyztelco.com to the GSLB providers at both the zones, which are configured as ADNS for the domain A.xyztelco.com. When a client sends a DNS request to resolve A.xyztelco.com, it will eventually get DNS delegation to the address of the GSLB providers at zones 1 and 2. The client DNS request will be received by the GSLB provider. The GSLB provider, depending on the domain it needs to resolve, will pick the GSLB virtual server associated with the domain. Depending on the health of the virtual servers being load balanced, the DNS request for the domain will be resolved to the public IP associated with the selected virtual server.
As per the example given above, the site names are A.xyztelco.com and B.xyztelco.com. For more information, see Configuring a Basic GSLB Site. d. Configure a GSLB virtual server. For more information, see Configuring a GSLB Virtual Server. e. Configure a GSLB service for each virtual server. For more information, see Configuring a GSLB Service. f. Bind the GSLB services to the GSLB virtual server. For more information, see Binding GSLB Services to a GSLB Virtual Server. g. Bind the domain name to the GSLB virtual server. The domain name is obtained from the domain details. For more information, see Binding a Domain to a GSLB Virtual Server. 3. In each zone that is participating in GSLB, add a GSLB-enabled NetScaler device. For more information, see Section 15.17.2.2, Enabling GSLB in NetScaler. As a domain administrator or user, perform the following: 1. Add a GSLB rule on both the sites. See Section 15.17.2.3, Adding a GSLB Rule. 2. Assign load balancer rules. See Section 15.17.2.4, Assigning Load Balancing Rules to GSLB.
Number of Retries : Number of times to attempt a command on the device before considering the operation failed. Default is 2. Capacity : The number of networks the device can handle. Dedicated: When marked as dedicated, this device will be dedicated to a single account. When Dedicated is checked, the value in the Capacity field has no significance; implicitly, its value is 1. 9. Click OK.
6. Specify the following: Name : Name for the GSLB rule. Description: (Optional) A short description of the GSLB rule that can be displayed to users. GSLB Domain Name : A preferred domain name for the service. Algorithm : (Optional) The algorithm to use to load balance the traffic across the zones. The options are Round Robin, Least Connection, and Proximity. Service Type : The transport protocol to use for GSLB. The options are TCP and UDP. Domain: (Optional) The domain for which you want to create the GSLB rule. Account: (Optional) The account on which you want to apply the GSLB rule. 7. Click OK to confirm.
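The same rule can be created through the API with createGlobalLoadBalancerRule. The following is a sketch only, assuming the unauthenticated integration API port 8096 is enabled; the rule name and region ID are placeholders, the domain name reuses the xyztelco example above, and the parameter names shown (gslbdomainname, gslbservicetype, gslblbmethod) should be checked against the API reference for this release. The zone-level load balancer rules are then attached to it with assignToGlobalLoadBalancerRule, as in step 2 above:
http://localhost:8096/client/api?command=createGlobalLoadBalancerRule&response=json&name=ExampleGSLBRule&regionid=<region-ID>&gslbdomainname=A.xyztelco.com&gslbservicetype=tcp&gslblbmethod=roundrobin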
network in a fashion that will enable VPN linking between their guest network and their clients. In shared networks in Basic zone and Security Group-enabled Advanced networks, you will have the flexibility to add multiple guest IP ranges from different subnets. You can add or remove one IP range at a time. For more information, see Section 15.10, About Multiple IP Ranges.
Section 15.22.2, Egress Firewall Rules in an Advanced Zone. Firewall rules can be created using the Firewall tab in the Management Server UI. This tab is not displayed by default when CloudStack is installed. To display the Firewall tab, the CloudStack administrator must set the global configuration parameter firewall.rule.ui.enabled to "true." To create a firewall rule: 1. Log in to the CloudStack UI as an administrator or end user. 2. In the left navigation, choose Network. 3. Click the name of the network you want to work with. 4. Click View IP Addresses. 5. Click the IP address you want to work with. 6. Click the Configuration tab and fill in the following values. Source CIDR. (Optional) To accept only traffic from IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. Example: 192.168.0.0/22. Leave empty to allow all CIDRs. Protocol. The communication protocol in use on the opened port(s). Start Port and End Port. The port(s) you want to open on the firewall. If you are opening a single port, use the same number in both fields. ICMP Type and ICMP Code . Used only if Protocol is set to ICMP. Provide the type and code required by the ICMP protocol to fill out the ICMP header. Refer to ICMP documentation for more details if you are not sure what to enter. 7. Click Add.
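Firewall rules can also be created with the createFirewallRule API. This is a minimal sketch, assuming the unauthenticated integration API port 8096 is enabled and using a placeholder public IP UUID; the CIDR list, protocol, and ports mirror the UI fields above:
http://localhost:8096/client/api?command=createFirewallRule&response=json&ipaddressid=<public-IP-UUID>&protocol=TCP&cidrlist=192.168.0.0/22&startport=22&endport=22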
CIDR: (Add by CIDR only) To send traffic only to the IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. The CIDR is the base IP address of the destination. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0. Protocol: The networking protocol that VMs use to send outgoing traffic. The TCP and UDP protocols are typically used for data exchange and end-user communications. The ICMP protocol is typically used to send error messages or network monitoring data. Start Port, End Port: (TCP, UDP only) A range of listening ports that are the destination for the outgoing traffic. If you are opening a single port, use the same number in both fields. ICMP Type, ICMP Code : (ICMP only) The type of message and error code that are sent. 5. Click Add.
from the guest network that you create by using this network offering. You have two options: Allow and Deny. Allow If you select Allow for a network offering, by default egress traffic is allowed. However, when an egress rule is configured for a guest network, rules are applied to block the specified traffic and the rest is allowed. If no egress rules are configured for the network, egress traffic is accepted. Deny If you select Deny for a network offering, by default egress traffic for the guest network is blocked. However, when an egress rule is configured for a guest network, rules are applied to allow the specified traffic. While implementing a guest network, CloudStack adds the firewall egress rule specific to the default egress policy for the guest network. This feature is supported only on the virtual router and Juniper SRX. 1. Create a network offering with your desired default egress policy: a. Log in with admin privileges to the CloudStack UI. b. In the left navigation bar, click Service Offerings. c. In Select Offering, choose Network Offering. d. Click Add Network Offering. e. In the dialog, make the necessary choices, including the firewall provider. f. In the Default egress policy field, specify the behavior. g. Click OK. 2. Create an isolated network by using this network offering. Based on your selection, the network will have the egress public traffic blocked or allowed.
Note
Make sure that not all traffic goes through the VPN. That is, the route installed by the VPN should be only for the guest network and not for all traffic. Road Warrior / Remote Access . Users want to be able to connect securely from a home or office to a private network in the cloud. Typically, the IP address of the connecting client is dynamic and cannot be preconfigured on the VPN server. Site to Site . In this scenario, two private subnets are connected over the public Internet with a secure VPN tunnel. The cloud user's subnet (for example, an office network) is connected through a gateway to the network in the cloud. The address of the user's gateway must be preconfigured on the VPN server in the cloud. Note that although L2TP-over-IPsec can be used to set up Site-to-Site VPNs, this is not the primary intent of this feature. For more information, see Section 15.25.4, Setting Up a Site-to-Site VPN Connection.
Note, these instructions were written on Mac OS X 10.7.5. They may differ slightly in older or newer releases of Mac OS X. 1. On your Mac, open System Preferences and click Network. 2. Make sure Send all traffic over VPN connection is not checked. 3. If your preferences are locked, you'll need to click the lock in the bottom left-hand corner to make any changes and provide your administrator credentials. 4. You will need to create a new network entry. Click the plus icon on the bottom left-hand side and you'll see a dialog that says "Select the interface and enter a name for the new service." Select VPN from the Interface dropdown menu, and "L2TP over IPSec" for the VPN Type. Enter whatever you like within the "Service Name" field. 5. You'll now have a new network interface with the name of whatever you put in the "Service Name" field. For the purposes of this example, we'll assume you've named it "CloudStack." Click on that interface and provide the IP address of the interface for your VPN under the Server Address field, and the user name for your VPN under Account Name. 6. Click Authentication Settings, and add the user's password under User Authentication and enter the pre-shared IPSec key in the Shared Secret field under Machine Authentication. Click OK. 7. You may also want to click the "Show VPN status in menu bar" but that's entirely optional. 8. Now click "Connect" and you will be connected to the CloudStack VPN.
Note
In addition to the specific Cisco and Juniper devices listed above, the expectation is that any Cisco or Juniper device running on the supported operating systems is able to establish VPN connections. To set up a Site-to-Site VPN connection, perform the following: 1. Create a Virtual Private Cloud (VPC). See Section 15.27, Configuring a Virtual Private Cloud. 2. Create a VPN Customer Gateway. 3. Create a VPN gateway for the VPC that you created. 4. Create a VPN connection from the VPC VPN gateway to the customer VPN gateway.
Note
A VPN customer gateway can be connected to only one VPN gateway at a time. To add a VPN Customer Gateway: 1. Log in to the CloudStack UI as an administrator or end user. 2. In the left navigation, choose Network. 3. In the Select view, select VPN Customer Gateway. 4. Click Add site-to-site VPN.
Provide the following information: Name : A unique name for the VPN customer gateway you create. Gateway : The IP address for the remote gateway. CIDR list: The guest CIDR list of the remote subnets. Enter a CIDR or a comma-separated list of CIDRs. Ensure that the guest CIDR list does not overlap with the VPC's CIDR or another guest CIDR. The CIDR must be RFC1918-compliant. IPsec Preshared Key : Preshared keying is a method where the endpoints of the VPN share a secret key. This key value is used to authenticate the customer gateway and the VPC VPN gateway to each other.
Note
The IKE peers (VPN end points) authenticate each other by computing and sending a keyed hash of data that includes the Preshared key. If the receiving peer is able to create the same hash independently by using its Preshared key, it knows that both peers must share the same secret, thus authenticating the customer gateway. IKE Encryption: The Internet Key Exchange (IKE) policy for phase-1. The supported encryption algorithms are AES128, AES192, AES256, and 3DES. Authentication is accomplished through the Preshared Keys.
Note
The phase-1 is the first phase in the IKE process. In this initial negotiation phase, the two VPN endpoints agree on the methods to be used to provide security for the underlying IP traffic. The phase-1 authenticates the two VPN gateways to each other, by confirming that the remote gateway has a matching Preshared Key. IKE Hash: The IKE hash for phase-1. The supported hash algorithms are SHA1 and MD5. IKE DH: A public-key cryptography protocol which allows two parties to establish a shared secret over an insecure communications channel. The 1536-bit Diffie-Hellman group is used within IKE to establish session keys. The supported options are None, Group-5 (1536-bit) and Group-2 (1024-bit). ESP Encryption: Encapsulating Security Payload (ESP) algorithm within phase-2. The supported encryption algorithms are AES128, AES192, AES256, and 3DES.
Note
The phase-2 is the second phase in the IKE process. The purpose of IKE phase-2 is to negotiate IPSec security associations (SA) to set up the IPSec tunnel. In phase-2, new keying material is extracted from the Diffie-Hellman key exchange in phase-1, to provide session keys to use in protecting the VPN data flow. ESP Hash: Encapsulating Security Payload (ESP) hash for phase-2. Supported hash algorithms are SHA1 and MD5. Perfect Forward Secrecy : Perfect Forward Secrecy (or PFS) is the property that ensures that a session key derived from a set of long-term public and private keys will not be compromised. This property enforces a new Diffie-Hellman key exchange. It provides keying material that has a greater key material life and thereby greater resistance to cryptographic attacks. The available options are None, Group-5 (1536-bit) and Group-2 (1024-bit). The security of the key exchanges increases as the DH groups grow larger, as does the time of the exchanges.
Note
When PFS is turned on, for every negotiation of a new phase-2 SA the two gateways must generate a new set of phase-1 keys. This is the extra layer of protection that PFS adds: it ensures that if the phase-2 SAs have expired, the keys used for new phase-2 SAs have not been generated from the current phase-1 keying material. IKE Lifetime (seconds): The phase-1 lifetime of the security association in seconds. Default is 86400 seconds (1 day). Whenever the time expires, a new phase-1 exchange is performed. ESP Lifetime (seconds): The phase-2 lifetime of the security association in seconds. Default is 3600 seconds (1 hour). Whenever the value is exceeded, a re-key is initiated to provide new IPsec encryption and authentication session keys. Dead Peer Detection: A method to detect an unavailable Internet Key Exchange (IKE) peer. Select this option if you want the virtual router to query the liveliness of its IKE peer at regular intervals. It is recommended to have the same DPD configuration on both sides of the VPN connection. 5. Click OK. Updating and Removing a VPN Customer Gateway You can update a customer gateway either when it has no VPN connection, or when the related VPN connection is in an error state. 1. Log in to the CloudStack UI as an administrator or end user. 2. In the left navigation, choose Network. 3. In the Select view, select VPN Customer Gateway. 4. Select the VPN customer gateway you want to work with. 5. To modify the required parameters, click the Edit VPN Customer Gateway button. 6. To remove the VPN customer gateway, click the Delete VPN Customer Gateway button. 7. Click OK.
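A customer gateway can also be created through the createVpnCustomerGateway API. The following is a sketch only, assuming the unauthenticated integration API port 8096 is enabled; the gateway name, remote gateway address, CIDR list, and preshared key are placeholders, and the ikepolicy and esppolicy strings combine the encryption and hash choices described above:
http://localhost:8096/client/api?command=createVpnCustomerGateway&response=json&name=ExampleCustomerGW&gateway=<remote-gateway-IP>&cidrlist=10.2.0.0/16&ipsecpsk=<preshared-key>&ikepolicy=aes128-sha1&esppolicy=aes128-sha1&ikelifetime=86400&esplifetime=3600&dpd=true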
Note
CloudStack supports creating up to 8 VPN connections. 1. Log in to the CloudStack UI as an administrator or end user. 2. In the left navigation, choose Network. 3. In the Select view, select VPC. All the VPCs that you create for the account are listed in the page.
4. Click the Configure button of the VPC to which you want to deploy the VMs. The VPC page is displayed where all the tiers you created are listed in a diagram. 5. Click the Settings icon. For each tier, the following options are displayed: Internal LB Public LB IP Static NAT Virtual Machines CIDR The following router information is displayed: Private Gateways Public IP Addresses Site-to-Site VPNs Network ACL Lists 6. Select Site-to-Site VPN. The Site-to-Site VPN page is displayed. 7. From the Select View drop-down, ensure that VPN Connection is selected. 8. Click Create VPN Connection. The Create VPN Connection dialog is displayed:
9. Select the desired customer gateway, then click OK to confirm. Within a few moments, the VPN Connection is displayed. The following information on the VPN connection is displayed: IP Address Gateway State IPSec Preshared Key IKE Policy ESP Policy
To restart a VPN connection, click the Reset VPN connection button present in the Details tab.
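The connection can also be created and reset through the API. A minimal CloudMonkey sketch, assuming the site-to-site VPN gateway and customer gateway IDs have already been looked up (the angle-bracket values are placeholders):

create vpnconnection s2svpngatewayid=<vpn-gateway-id> s2scustomergatewayid=<customer-gateway-id>
reset vpnconnection id=<vpn-connection-id>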
Note
A VLAN allocated for an account cannot be shared between multiple accounts. The administrator can allow users to create their own VPCs and deploy applications. In this scenario, the VMs that belong to the account are deployed on the VLANs allotted to that account. Both administrators and users can create multiple VPCs. The guest network NIC is plugged into the VPC virtual router when the first VM is deployed in a tier. The administrator can create the following gateways to send traffic to or receive traffic from the VMs: VPN Gateway: For more information, see Section 15.25.4.2, Creating a VPN gateway for the VPC. Public Gateway: The public gateway for a VPC is added to the virtual router when the virtual router is created for the VPC. The public gateway is not exposed to the end users. You are not allowed to list it, nor allowed to create any static routes. Private Gateway: For more information, see Section 15.27.5, Adding a Private Gateway to a VPC. Both administrators and users can create various possible destination-gateway combinations. However, only one gateway of each type can be used in a deployment. For example: VLANs and Public Gateway: For example, an application is deployed in the cloud, and the Web application VMs communicate with the Internet. VLANs, VPN Gateway, and Public Gateway: For example, an application is deployed in the cloud; the Web application VMs communicate with the Internet; and the database VMs communicate with the on-premise devices. The administrator can define a Network Access Control List (ACL) on the virtual router to filter the traffic among the VLANs or between the Internet and a VLAN. You can define an ACL based on CIDR, port range, protocol, type code (if ICMP protocol is selected), and Ingress/Egress type. The following figure shows the possible deployment scenarios of an Inter-VLAN setup:
To set up a multi-tier Inter-VLAN deployment, see Section 15.27, Configuring a Virtual Private Cloud.
For example, if a VPC has the private range 10.0.0.0/16, its guest networks can have the network ranges 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24, and so on. Major Components of a VPC: A VPC is comprised of the following network components: VPC: A VPC acts as a container for multiple isolated networks that can communicate with each other via its virtual router. Network Tiers: Each tier acts as an isolated network with its own VLANs and CIDR list, where you can place groups of resources, such as VMs. The tiers are segmented by means of VLANs. The NIC of each tier acts as its gateway. Virtual Router: A virtual router is automatically created and started when you create a VPC. The virtual router connects the tiers and directs traffic among the public gateway, the VPN gateways, and the NAT instances. For each tier, a corresponding NIC and IP exist in the virtual router. The virtual router provides DNS and DHCP services through its IP. Public Gateway: The traffic to and from the Internet is routed to the VPC through the public gateway. In a VPC, the public gateway is not exposed to the end user; therefore, static routes are not supported for the public gateway. Private Gateway: All the traffic to and from a private network is routed to the VPC through the private gateway. For more information, see Section 15.27.5, Adding a Private Gateway to a VPC. VPN Gateway: The VPC side of a VPN connection. Site-to-Site VPN Connection: A hardware-based VPN connection between your VPC and your datacenter, home network, or co-location facility. For more information, see Section 15.25.4, Setting Up a Site-to-Site VPN Connection. Customer Gateway: The customer side of a VPN Connection. For more information, see Section 15.25.4.1, Creating and Updating a VPN Customer Gateway. NAT Instance: An instance that provides Port Address Translation for instances to access the Internet via the public gateway. For more information, see Section 15.27.10, Enabling or Disabling Static NAT on a VPC. Network ACL: A Network ACL is a group of Network ACL items. Network ACL items are numbered rules that are evaluated in order, starting with the lowest numbered rule. These rules determine whether traffic is allowed in or out of any tier associated with the network ACL. For more information, see Section 15.27.4, Configuring Network Access Control List. Network Architecture in a VPC: In a VPC, the following four basic options of network architectures are present: VPC with a public gateway only; VPC with public and private gateways; VPC with public and private gateways and site-to-site VPN access; VPC with a private gateway only and site-to-site VPN access. Connectivity Options for a VPC: You can connect your VPC to: the Internet through the public gateway; the corporate datacenter by using a site-to-site VPN connection through the VPN gateway; both the Internet and your corporate datacenter by using both the public gateway and a VPN gateway. VPC Network Considerations: Consider the following before you create a VPC: A VPC, by default, is created in the enabled state. A VPC can be created in an Advanced zone only, and can't belong to more than one zone at a time. The default number of VPCs an account can create is 20. However, you can change it by using the max.account.vpcs global parameter, which controls the maximum number of VPCs an account is allowed to create. The default number of tiers an account can create within a VPC is 3. You can configure this number by using the vpc.max.networks parameter.
Each tier should have a unique CIDR in the VPC. Ensure that the tier's CIDR is within the VPC CIDR range. A tier belongs to only one VPC. All network tiers inside the VPC should belong to the same account. When a VPC is created, by default, a SourceNAT IP is allocated to it. The Source NAT IP is released only when the VPC is removed. A public IP can be used for only one purpose at a time. If the IP is a sourceNAT, it cannot be used for StaticNAT or port forwarding. The instances can only have a private IP address that you provision. To communicate with the Internet, enable NAT on an instance that you launch in your VPC. Only new networks can be added to a VPC. The maximum number of networks per VPC is limited by the value you specify in the vpc.max.networks parameter. The default value is three. The load balancing service can be supported by only one tier inside the VPC. If an IP address is assigned to a tier: that IP can't be used by more than one tier at a time in the VPC. For example, if you have tiers A and B, and a public IP1, you can create a port forwarding rule by using the IP either for A or B, but not for both. That IP can't be used for StaticNAT, load balancing, or port forwarding rules for another guest network inside the VPC. Remote access VPN is not supported in VPC networks.
Provide the following information: Name: A short name for the VPC that you are creating. Description: A brief description of the VPC. Zone: Choose the zone where you want the VPC to be available. Super CIDR for Guest Networks: Defines the CIDR range for all the tiers (guest networks) within a VPC. When you create a tier, ensure that its CIDR is within the Super CIDR value you enter. The CIDR must be RFC1918 compliant. DNS domain for Guest Networks: If you want to assign a special domain name, specify the DNS suffix. This parameter is applied to all the tiers within the VPC. This implies that all the tiers you create in the VPC belong to the same DNS domain. If the parameter is not specified, a DNS domain name is generated automatically. Public Load Balancer Provider: You have two options: VPC Virtual Router and Netscaler. 5. Click OK.
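The equivalent API call is createVPC. A minimal sketch at the CloudMonkey prompt, assuming the zone and VPC offering IDs have been looked up beforehand (all values shown are placeholders):

create vpc name=WebAppVPC displaytext="Web application VPC" cidr=10.0.0.0/16 zoneid=<zone-id> vpcofferingid=<vpc-offering-id> networkdomain=webapp.internal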
Note
The end users can see their own VPCs, while root and domain admin can see any VPC they are authorized to see. 4. Click the Configure button of the VPC for which you want to set up tiers. 5. Click Create network. The Add new tier dialog is displayed, as follows:
If you have already created tiers, the VPC diagram is displayed. Click Create Tier to add a new tier. 6. Specify the following: All the fields are mandatory. Name: A unique name for the tier you create. Network Offering: The following default network offerings are listed: Internal LB, DefaultIsolatedNetworkOfferingForVpcNetworksNoLB, DefaultIsolatedNetworkOfferingForVpcNetworks. In a VPC, only one tier can be created by using an LB-enabled network offering. Gateway: The gateway for the tier you create. Ensure that the gateway is within the Super CIDR range that you specified while creating the VPC, and does not overlap with the CIDR of any existing tier within the VPC. VLAN: The VLAN ID for the tier that the root admin creates. This option is only visible if the network offering you selected is VLAN-enabled. For more information, see Section 11.9.3, Assigning VLANs to Isolated Networks. Netmask: The netmask for the tier you create. For example, if the VPC CIDR is 10.0.0.0/16 and the network tier CIDR is 10.0.1.0/24, the gateway of the tier is 10.0.1.1, and the netmask of the tier is 255.255.255.0. 7. Click OK. 8. Continue with configuring the access control list for the tier.
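A tier can also be added with the createNetwork API by passing the VPC ID together with the gateway and netmask described above. A minimal CloudMonkey sketch with placeholder IDs, assuming a VPC-enabled network offering such as DefaultIsolatedNetworkOfferingForVpcNetworks:

create network name=WebTier displaytext="Web tier" networkofferingid=<network-offering-id> zoneid=<zone-id> vpcid=<vpc-id> gateway=10.0.1.1 netmask=255.255.255.0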
6. Click Add ACL Lists, and specify the following: ACL List Name : A name for the ACL list. Description: A short description of the ACL list that can be displayed to users.
The following options are displayed. Internal LB Public LB IP Static NAT Virtual Machines CIDR The following router information is displayed: Private Gateways Public IP Addresses Site-to-Site VPNs Network ACL Lists 6. Select Private Gateways. The Gateways page is displayed. 7. Click Add new gateway:
8. Specify the following: Physical Network: The physical network you have created in the zone. IP Address: The IP address associated with the VPC gateway. Gateway: The gateway through which the traffic is routed to and from the VPC. Netmask: The netmask associated with the VPC gateway. VLAN: The VLAN associated with the VPC gateway. Source NAT: Select this option to enable the source NAT service on the VPC private gateway. See Section 15.27.5.1, Source NAT on Private Gateway. ACL: Controls both ingress and egress traffic on a VPC private gateway. By default, all the traffic is blocked. See Section 15.27.5.2, ACL on Private Gateway. The new gateway appears in the list. You can repeat these steps to add more gateways for this VPC.
Use the Quickview. See 3. Use the Details tab. See 4 through . 3. In the Quickview of the selected Private Gateway, click Replace ACL, select the ACL rule, then click OK. 4. Click the IP address of the Private Gateway you want to work with. 5. In the Details tab, click the Replace ACL button. The Replace ACL dialog is displayed. 6. Select the ACL rule, then click OK. Wait a few seconds. You can see that the new ACL rule is displayed in the Details page.
The Add Instance page is displayed. Follow the on-screen instruction to add an instance. For information on adding an instance, see the Installation Guide.
8. Click Next, review the configuration and click Launch. Your VM will be deployed to the selected VPC tier and shared network.
5. Select Public IP Addresses. The IP Addresses page is displayed. 6. Click the IP you want to release. 7. In the Details tab, click the Release IP button
9. Select the tier and the destination VM, then click Apply.
15.27.11.1.2. Creating a Network Offering for External LB To have external LB support on VPC, create a network offering as follows: 1. Log in to the CloudStack UI as a user or admin. 2. From the Select Offering drop-down, choose Network Offering. 3. Click Add Network Offering. 4. In the dialog, make the following choices: Name: Any desired name for the network offering. Description: A short description of the offering that can be displayed to users. Network Rate: Allowed data transfer rate in MB per second. Traffic Type: The type of network traffic that will be carried on the network. Guest Type: Choose whether the guest network is isolated or shared. Persistent: Indicate whether the guest network is persistent or not. A network that you can provision without having to deploy a VM on it is termed a persistent network. VPC: This option indicates whether the guest network is Virtual Private Cloud-enabled. A Virtual Private Cloud (VPC) is a private, isolated part of CloudStack. A VPC can have its own virtual network topology that resembles a traditional physical network. For more information on VPCs, see Section 15.27.1, About Virtual Private Clouds. Specify VLAN: (Isolated guest networks only) Indicate whether a VLAN should be specified when this offering is used. Supported Services: Select Load Balancer. Use Netscaler or VpcVirtualRouter. Load Balancer Type: Select Public LB from the drop-down. LB Isolation: Select Dedicated if Netscaler is used as the external LB provider. System Offering: Choose the system service offering that you want virtual routers to use in this network. Conserve mode: Indicate whether to use conserve mode. In this mode, network resources are allocated only when the first virtual machine starts in the network. 5. Click OK and the network offering is created. 15.27.11.1.3. Creating an External LB Rule 1. Log in to the CloudStack UI as an administrator or end user. 2. In the left navigation, choose Network. 3. In the Select view, select VPC. All the VPCs that you have created for the account are listed on the page. 4. Click the Configure button of the VPC for which you want to configure load balancing rules. The VPC page is displayed where all the tiers you created are listed in a diagram. For each tier, the following options are displayed: Internal LB Public LB IP Static NAT Virtual Machines CIDR The following router information is displayed: Private Gateways Public IP Addresses Site-to-Site VPNs Network ACL Lists 5. In the Router node, select Public IP Addresses. The IP Addresses page is displayed. 6. Click the IP address for which you want to create the rule, then click the Configuration tab. 7. In the Load Balancing node of the diagram, click View All. 8. Select the tier to which you want to apply the rule. 9. Specify the following: Name: A name for the load balancer rule. Public Port: The port that receives the incoming traffic to be balanced. Private Port: The port that the VMs will use to receive the traffic. Algorithm: Choose the load balancing algorithm you want CloudStack to use. CloudStack supports the following well-known algorithms: Round-robin, Least connections, Source. Stickiness: (Optional) Click Configure and choose the algorithm for the stickiness policy. See Sticky Session Policies for Load Balancer Rules. Add VMs: Click Add VMs, then select two or more VMs that will divide the load of incoming traffic, and click Apply. The new load balancing rule appears in the list. You can repeat these steps to add more load balancing rules for this IP address.
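The same public LB rule can also be scripted with the createLoadBalancerRule API, and VMs can be attached with assignToLoadBalancerRule. A minimal CloudMonkey sketch with placeholder IDs; the ports and algorithm mirror the fields described above:

create loadbalancerrule name=web-lb publicipid=<public-ip-id> networkid=<tier-network-id> publicport=80 privateport=80 algorithm=roundrobin
assign toloadbalancerrule id=<lb-rule-id> virtualmachineids=<vm1-id>,<vm2-id>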
CloudStack supports sharing workload across different tiers within your VPC. Assume that multiple tiers are set up in your environment, such as Web tier and Application tier. Traffic to each tier is balanced on the VPC virtual router on the public side, as explained in Section 15.27.11, Adding Load Balancing Rules on a VPC. If you want the traffic coming from the Web tier to the Application tier to be balanced, use the internal load balancing feature offered by CloudStack. 15.27.11.2.1. How Does Internal LB Work in VPC? In this figure, a public LB rule is created for the public IP 72.52.125.10 with public port 80 and private port 81. The LB rule, created on the VPC virtual router, is applied on the traffic coming from the Internet to the VMs on the Web tier. On the Application tier two internal load balancing rules are created. An internal LB rule for the guest IP 10.10.10.4 with load balancer port 23 and instance port 25 is configured on the VM, InternalLBVM1. Another internal LB rule for the guest IP 10.10.10.4 with load balancer port 45 and instance port 46 is configured on the VM, InternalLBVM1. Another internal LB rule for the guest IP 10.10.10.6, with load balancer port 23 and instance port 25 is configured on the VM, InternalLBVM2.
15.27.11.2.2. Guidelines Internal LB and Public LB are mutually exclusive on a tier. If the tier has LB on the public side, then it can't have the Internal LB. Internal LB is supported only on VPC networks in the CloudStack 4.2 release. Only the Internal LB VM can act as the Internal LB provider in the CloudStack 4.2 release. Network upgrade is not supported from the network offering with Internal LB to the network offering with Public LB. Multiple tiers can have internal LB support in a VPC. Only one tier can have Public LB support in a VPC. 15.27.11.2.3. Enabling Internal LB on a VPC Tier 1. Create a network offering, as given in Section 15.27.11.2.4, Creating a Network Offering for Internal LB. 2. Create an internal load balancing rule and apply it, as given in Section 15.27.11.2.5, Creating an Internal LB Rule. 15.27.11.2.4. Creating a Network Offering for Internal LB To have internal LB support on VPC, either use the default offering, DefaultIsolatedNetworkOfferingForVpcNetworksWithInternalLB, or create a network offering as follows: 1. Log in to the CloudStack UI as a user or admin. 2. From the Select Offering drop-down, choose Network Offering. 3. Click Add Network Offering. 4. In the dialog, make the following choices: Name: Any desired name for the network offering. Description: A short description of the offering that can be displayed to users. Network Rate: Allowed data transfer rate in MB per second. Traffic Type: The type of network traffic that will be carried on the network. Guest Type: Choose whether the guest network is isolated or shared. Persistent: Indicate whether the guest network is persistent or not. A network that you can provision without having to deploy a VM on it is termed a persistent network. VPC: This option indicates whether the guest network is Virtual Private Cloud-enabled. A Virtual Private Cloud (VPC) is a private, isolated part of CloudStack. A VPC can have its own virtual network topology that resembles a traditional physical network. For more information on VPCs, see Section 15.27.1, About Virtual Private Clouds.
Specify VLAN: (Isolated guest networks only) Indicate whether a VLAN should be specified when this offering is used. Supported Services: Select Load Balancer. Select InternalLbVM from the provider list. Load Balancer Type: Select Internal LB from the drop-down. System Offering: Choose the system service offering that you want virtual routers to use in this network. Conserve mode: Indicate whether to use conserve mode. In this mode, network resources are allocated only when the first virtual machine starts in the network. 5. Click OK and the network offering is created. 15.27.11.2.5. Creating an Internal LB Rule When you create an Internal LB rule and apply it to a VM, an Internal LB VM, which is responsible for load balancing, is created. You can view the created Internal LB VM in the Instances page if you navigate to Infrastructure > Zones > <zone_name> > <physical_network_name> > Network Service Providers > Internal LB VM. You can manage the Internal LB VMs as and when required from this location. 1. Log in to the CloudStack UI as an administrator or end user. 2. In the left navigation, choose Network. 3. In the Select view, select VPC. All the VPCs that you have created for the account are listed on the page. 4. Locate the VPC for which you want to configure internal LB, then click Configure. The VPC page is displayed where all the tiers you created are listed in a diagram. 5. Locate the Tier for which you want to configure an internal LB rule, click Internal LB. In the Internal LB page, click Add Internal LB. 6. In the dialog, specify the following: Name: A name for the load balancer rule. Description: A short description of the rule that can be displayed to users. Source IP Address: (Optional) The source IP from which traffic originates. The IP is acquired from the CIDR of that particular tier on which you want to create the Internal LB rule. If not specified, the IP address is automatically allocated from the network CIDR. For every Source IP, a new Internal LB VM is created for load balancing. Source Port: The port associated with the source IP. Traffic on this port is load balanced. Instance Port: The port of the internal LB VM. Algorithm: Choose the load balancing algorithm you want CloudStack to use. CloudStack supports the following well-known algorithms: Round-robin, Least connections, Source.
TCP UDP Add VM: Click Add VM. Select the name of the instance to which this rule applies, and click Apply. You can test the rule by opening an SSH session to the instance.
Note
Ensure that all the tiers are removed before you remove a VPC. 1. Log in to the CloudStack UI as an administrator or end user. 2. In the left navigation, choose Network. 3. In the Select view, select VPC. All the VPCs that you have created for the account are listed on the page. 4. Select the VPC you want to work with. 5. In the Details tab, click the Remove VPC button. You can also remove the VPC by using the remove button in the Quick View. You can edit the name and description of a VPC. To do that, select the VPC, then click the Edit button. To restart a VPC, select the VPC, then click the Restart button.
Note
You can configure the system.vm.random.password parameter to have a random system VM password generated, to ensure higher security. If you set the value of system.vm.random.password to true and restart the Management Server, a random password is generated and stored encrypted in the database. You can view the decrypted password under the system.vm.password global parameter on the CloudStack UI or by calling the listConfigurations API.
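For example, the setting can be changed and the generated password viewed with the API mentioned above. A minimal CloudMonkey sketch; remember to restart the Management Server after changing the setting:

update configuration name=system.vm.random.password value=true
list configurations name=system.vm.password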
c. Take a note of the 'Host', 'Private IP Address' and 'Link Local IP Address' of the System VM you wish to access.
2. XenServer/KVM Hypervisors a. Connect to the Host on which the System VM is running. b. SSH to the 'Link Local IP Address' of the System VM from the Host on which the VM is running. c. Format: ssh -i <path-to-private-key> <link-local-ip> -p 3922 d. Example: root@faith:~# ssh -i /root/.ssh/id_rsa.cloud 169.254.3.93 -p 3922 3. ESXi Hypervisors a. Connect to your CloudStack Management Server. b. ESXi users should SSH to the private IP address of the System VM. c. Format: ssh -i <path-to-private-key> <vm-private-ip> -p 3922 d. Example: root@management:~# ssh -i /var/lib/cloudstack/management/.ssh/id_rsa 172.16.0.250 -p 3922
Note
The hypervisors will have many ports assigned to VNC usage so that multiple VNC sessions can occur simultaneously. There is never any traffic to the guest virtual IP, and there is no need to enable VNC within the guest. The console proxy VM will periodically report its active session count to the Management Server. The default reporting interval is five seconds. This can be changed through standard Management Server configuration with the parameter consoleproxy.loadscan.interval. Assignment of a guest VM to a console proxy is determined by first checking whether the guest VM has a previous session associated with a console proxy. If it does, the Management Server will assign the guest VM to the target Console Proxy VM regardless of the load on the proxy VM. Failing that, the first available running Console Proxy VM that has the capacity to handle new sessions is used.
Console proxies can be restarted by administrators but this will interrupt existing console sessions for users.
c. Head to the website of your favorite trusted Certificate Authority, purchase an SSL certificate, and submit the CSR. You should receive a valid certificate in return. d. Convert your private key format into PKCS#8 encrypted format.
openssl pkcs8 -topk8 -in yourprivate.key -out yourprivate.pkcs8.encrypted.key
e. Convert your PKCS#8 encrypted private key into the PKCS#8 format that is compliant with CloudStack
openssl pkcs8 -in yourprivate.pkcs8.encrypted.key -out yourprivate.pkcs8.key
3. In the Update SSL Certificate screen of the CloudStack UI, paste the following: The certificate you've just generated. The private key you've just generated. The desired new domain name; for example, company.com
4. Click OK. This stops all currently running console proxy VMs, then restarts them with the new certificate and key. Users might notice a brief interruption in console availability. The Management Server generates URLs of the form "aaa-bbb-ccc-ddd.company.com" after this change is made. The new console requests will be served with the new DNS domain name, certificate, and key.
persistence is required. Even if persistence is not required, enabling it is permitted.
Source Port    Destination Port            Protocol        Persistence Required?
80 or 443      8080 (or 20400 with AJP)    HTTP (or AJP)   Yes
8250           8250                        TCP             Yes
8096           8096                        HTTP            No
In addition to the above settings, the administrator is responsible for changing the 'host' global config value from the management server IP to the load balancer virtual IP address. If the 'host' value is not set to the VIP for Port 8250 and one of your management servers crashes, the UI is still available but the system VMs will not be able to contact the management server.
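A minimal sketch of that change at the CloudMonkey prompt; the VIP shown is a placeholder, and, as with most global settings, the Management Server must be restarted for the new value to take effect:

update configuration name=host value=192.0.2.100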
Note
If you set ha.tag, be sure to actually use that tag on at least one host in your cloud. If the tag specified in ha.tag is not set for any host in the cloud, the HA-enabled VMs will fail to restart after a crash.
Note
Even with these limitations, CloudStack is still able to effectively use API throttling to avoid malicious attacks causing denial of service. In a deployment with multiple Management Servers, the cache is not synchronized across them. In this case, CloudStack might not be able to ensure that only the exact desired number of API requests are allowed. In the worst case, the number of API calls that might be allowed is (number of Management Servers) * (api.throttling.max). The API commands resetApiLimit and getApiLimit are limited to the Management Server where the API is invoked.
The following API commands have the "tags" input parameter: listVirtualMachines listVolumes listSnapshots listNetworks listTemplates listIsos
listFirewallRules listPortForwardingRules listPublicIpAddresses listSecurityGroups listLoadBalancerRules listProjects listVPCs listNetworkACLs listStaticRoutes
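The tags parameter is passed as a map. A minimal CloudMonkey sketch, assuming a VM has already been tagged with the key city and the value Toronto (both values are placeholders):

list virtualmachines tags[0].key=city tags[0].value=Toronto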
2. Next, you'll update the password for the CloudStack user on the MySQL server.
# mysql -u root -p
At the MySQL shell, you'll change the password and flush privileges:
update mysql.user set password=PASSWORD("newpassword123") where User='cloud'; flush privileges; quit;
3. The next step is to encrypt the password and copy the encrypted password to CloudStack's database configuration (/etc/cloudstack/management/db.properties).
# java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.0.jar \ org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI encrypt.sh \ input="newpassword123" password="`cat /etc/cloudstack/management/key`" \ verbose=false
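The command prints the encrypted string, which then replaces the old value in db.properties. A hypothetical sketch of the resulting line, assuming the tool printed ENCRYPTEDVALUE; the property name and the ENC() wrapper should match the existing entries already present in that file:

db.cloud.password=ENC(ENCRYPTEDVALUE)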
5. After copying the new password over, you can now start CloudStack (and the usage engine, if necessary).
# service cloudstack-management start # service cloud-usage start
to use an SNMP or Syslog manager to monitor your cloud. The alerts which can be sent are listed in Appendix C, Alerts. You can also display the most up to date list by calling the API command listAlerts.
For example:
Mar  4 10:13:47 WARN localhost alertType:: managementNode message:: Management server node 127.0.0.1 is up
3. Add an entry using the syntax shown below. Follow the appropriate example depending on whether you are adding an SNMP manager or a Syslog manager. To specify multiple external managers, separate the IP addresses and other configuration values with commas (,).
Note
The recommended maximum number of SNMP or Syslog managers is 20 for each. The following example shows how to configure two SNMP managers at IP addresses 10.1.1.1 and 10.1.1.2. Substitute your own IP addresses, ports, and communities. Do not change the other values (name, threshold, class, and layout values).
<appender name="SNMP" class="org.apache.cloudstack.alert.snmp.SnmpTrapAppender"> <param name="Threshold" value="WARN"/> <!-- Do not edit. The alert feature assumes WARN. --> <param name="SnmpManagerIpAddresses" value="10.1.1.1,10.1.1.2"/> <param name="SnmpManagerPorts" value="162,162"/> <param name="SnmpManagerCommunities" value="public,public"/> <layout class="org.apache.cloudstack.alert.snmp.SnmpEnhancedPatternLayout"> <!-- Do not edit --> <param name="PairDelimeter" value="//"/> <param name="KeyValueDelimeter" value="::"/> </layout> </appender>
The following example shows how to configure two Syslog managers at IP addresses 10.1.1.1 and 10.1.1.2. Substitute your own IP addresses. You can set Facility to any syslog-defined value, such as LOCAL0 - LOCAL7. Do not change the other values.
<appender name="ALERTSYSLOG"> <param name="Threshold" value="WARN"/> <param name="SyslogHosts" value="10.1.1.1,10.1.1.2"/> <param name="Facility" value="LOCAL6"/> <layout> <param name="ConversionPattern" value=""/> </layout> </appender>
4. If your cloud has multiple Management Server nodes, repeat these steps to edit log4j-cloud.xml on every instance. 5. If you have made these changes while the Management Server is running, wait a few minutes for the change to take effect. Troubleshooting: If no alerts appear at the configured SNMP or Syslog manager after a reasonable amount of time, it is likely that there is an error in the syntax of the <appender> entry in log4j-cloud.xml. Check to be sure that the format and settings are correct.
xen.setup.multipath: For XenServer nodes, this is a true/false variable that instructs CloudStack to enable iSCSI multipath on the XenServer Hosts when they are added. This defaults to false. Set it to true if you would like CloudStack to enable multipath. If this is true for a NFS-based deployment, multipath will still be enabled on the XenServer host. However, this does not impact NFS operation and is harmless.
secstorage.allowed.internal.sites: This is used to protect your internal network from rogue attempts to download arbitrary files using the template download feature. This is a comma-separated list of CIDRs. If a requested URL matches any of these CIDRs, the Secondary Storage VM will use the private network interface to fetch the URL. Other URLs will go through the public interface. We suggest you set this to 1 or 2 hardened internal machines where you keep your templates. For example, set it to 192.168.1.66/32.
use.local.storage: Determines whether CloudStack will use storage that is local to the Host for data disks, templates, and snapshots. By default CloudStack will not use this storage. You should change this to true if you want to use local storage and you understand the reliability and feature drawbacks to choosing local storage.
host: This is the IP address of the Management Server. If you are using multiple Management Servers you should enter a load balanced IP address that is reachable via the private network.
default.page.size: Maximum number of items per page that can be returned by a CloudStack API command. The limit applies at the cloud level and can vary from cloud to cloud. You can override this with a lower value on a particular API call by using the page and pagesize API command parameters. For more information, see the Developer's Guide. Default: 500.
ha.tag: The label you want to use throughout the cloud to designate certain hosts as dedicated HA hosts. These hosts will be used only for HA-enabled VMs that are restarting due to the failure of another host. For example, you could set this to ha_host. Specify the ha.tag value as a host tag when you add a new host to the cloud.
remotely access the VPN clients. The first IP in the range is used by the VPN server.
account - allow.public.user.templates: If false, users will not be able to create public templates.
account - use.system.public.ips: If true and if an account has one or more dedicated public IP ranges, IPs are acquired from the system pool after all the IPs dedicated to the account have been consumed.
account - use.system.guest.vlans: If true and if an account has one or more dedicated guest VLAN ranges, VLANs are allocated from the system pool after all the VLANs dedicated to the account have been consumed.
cluster - cluster.storage.allocated.capacity.notificationthreshold: The percentage, as a value between 0 and 1, of allocated storage utilization above which alerts are sent that the storage is below the threshold.
cluster - cluster.storage.capacity.notificationthreshold: The percentage, as a value between 0 and 1, of storage utilization above which alerts are sent that the available storage is below the threshold.
cluster - cluster.cpu.allocated.capacity.notificationthreshold: The percentage, as a value between 0 and 1, of CPU utilization above which alerts are sent that the available CPU is below the threshold.
cluster - cluster.memory.allocated.capacity.notificationthreshold: The percentage, as a value between 0 and 1, of memory utilization above which alerts are sent that the available memory is below the threshold.
cluster - cluster.cpu.allocated.capacity.disablethreshold: The percentage, as a value between 0 and 1, of CPU utilization above which allocators will disable that cluster from further usage. Keep the corresponding notification threshold lower than this value to be notified beforehand.
cluster - cluster.memory.allocated.capacity.disablethreshold: The percentage, as a value between 0 and 1, of memory utilization above which allocators will disable that cluster from further usage. Keep the corresponding notification threshold lower than this value to be notified beforehand.
cluster - cpu.overprovisioning.factor: Used for CPU over-provisioning calculation; the available CPU will be the mathematical product of actualCpuCapacity and cpu.overprovisioning.factor.
cluster - mem.overprovisioning.factor: Used for memory over-provisioning calculation.
cluster - vmware.reserve.cpu: Specify whether or not to reserve CPU when not over-provisioning; in case of CPU over-provisioning, CPU is always reserved.
cluster - vmware.reserve.mem: Specify whether or not to reserve memory when not over-provisioning; in case of memory over-provisioning, memory is always reserved.
zone - pool.storage.allocated.capacity.disablethreshold: The percentage, as a value between 0 and 1, of allocated storage utilization above which allocators will disable that pool because the available allocated storage is below the threshold.
zone - pool.storage.capacity.disablethreshold: The percentage, as a value between 0 and 1, of storage utilization above which allocators will disable the pool because the available storage capacity is below the threshold.
zone - storage.overprovisioning.factor: Used for storage over-provisioning calculation; available storage will be the mathematical product of actualStorageSize and storage.overprovisioning.factor.
zone - network.throttling.rate: Default data transfer rate in megabits per second allowed in a network.
zone - guest.domain.suffix: Default domain name for VMs inside a virtual network with a router.
zone - (additional settings): Name of the default router template on XenServer. Name of the default router template on KVM. Name of the default router template on VMware. Enable or disable dynamic scaling of a VM. Bypass internal DNS, and use the external DNS1 and DNS2. Routes that are blacklisted cannot be used for creating static routes for a VPC Private Gateway.
20.2. Allocators
CloudStack enables administrators to write custom allocators that will choose the Host on which to place a new guest and the storage host from which to allocate guest virtual disk images.
2. Access user data by running the following command using the result of the above command
# curl http://10.1.1.1/latest/user-data
Meta Data can be accessed similarly, using a URL of the form http://10.1.1.1/latest/meta-data/{metadata type}. (For backwards compatibility, the previous URL http://10.1.1.1/latest/{metadata type} is also supported.) For metadata type, use one of the following:
service-offering. A description of the VM's service offering
availability-zone. The Zone name
local-ipv4. The guest IP of the VM
local-hostname. The hostname of the VM
public-ipv4. The first public IP for the router. (E.g. the first IP of eth2)
public-hostname. This is the same as public-ipv4
instance-id. The instance name of the VM
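For example, the following command, run inside the VM, retrieves the guest IP of the VM using the same virtual router address as in the user-data example above:

# curl http://10.1.1.1/latest/meta-data/local-ipv4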
2. Change the command-line parameter -XmxNNNm to a higher value of N. For example, if the current value is -Xmx128m, change it to -Xmx1024m or higher. 3. To put the new setting into effect, restart the Management Server.
# service cloudstack-management restart
For more information about memory issues, see "FAQ: Memory" at Tomcat Wiki.
2. Insert the following line in the [mysqld] section, below the datadir line. Use a value that is appropriate for your situation. We recommend setting the buffer pool at 40% of RAM if MySQL is on the same server as the management server or 70% of RAM if MySQL has a dedicated server. The following example assumes a dedicated server with 1024M of RAM.
innodb_buffer_pool_size=700M
For more information about the buffer pool, see "The InnoDB Buffer Pool" at MySQL Reference Manual.
22.1. Events
An event is essentially a significant or meaningful change in the state of both virtual and physical resources associated with a cloud environment. Events are used by monitoring systems, usage and billing systems, or any other event-driven workflow systems to discern a pattern and make the right business decision. In CloudStack an event could be a state change of virtual or physical resources, an action performed by a user (action events), or a policy-based event (alerts).
Configuration As a CloudStack administrator, perform the following one-time configuration to enable the event notification framework. The behavior cannot be changed at run time. 1. Open componentContext.xml. 2. Define a bean named eventNotificationBus as follows: name: Specify a name for the bean. server: The name or the IP address of the RabbitMQ AMQP server. port: The port on which the RabbitMQ server is running. username: The username associated with the account to access the RabbitMQ server. password: The password associated with the username of the account to access the RabbitMQ server. exchange: The exchange name on the RabbitMQ server where CloudStack events are published. A sample bean is given below:
<bean id="eventNotificationBus" class="org.apache.cloudstack.mom.rabbitmq.RabbitMQEventBus"> <property name="name" value="eventNotificationBus"/> <property name="server" value="127.0.0.1"/> <property name="port" value="5672"/> <property name="username" value="guest"/> <property name="password" value="guest"/> <property name="exchange" value="cloudstack-events"/> </bean>
The eventNotificationBus bean represents the org.apache.cloudstack.mom.rabbitmq.RabbitMQEventBus class. 3. Restart the Management Server.
Note
Archived alerts or events cannot be viewed in the UI or by using the API. They are maintained in the database for auditing or compliance purposes.
22.1.6.1. Permissions
Consider the following: The root admin can delete or archive one or multiple alerts or events. The domain admin or end user can delete or archive one or multiple events.
22.1.6.2. Procedure
1. Log in as administrator to the CloudStack UI. 2. In the left navigation, click Events. 3. Perform either of the following: To archive events, click Archive Events, and specify event type and date. To delete events, click Delete Events, and specify event type and date. 4. Click OK.
Note
When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
CloudStack processes requests with a Job ID. If you find an error in the logs and you are interested in debugging the issue, you can grep for this job ID in the management server log. For example, suppose that you find the following ERROR message:
2010-10-04 13:49:32,595 ERROR [cloud.vm.UserVmManagerImpl] (Job-Executor-11:job1076) Unable to find any host for [User|i-8-42-VM-untagged]
Note that the job ID is 1076. You can track back the events relating to job 1076 with the following grep:
grep "job-1076)" management-server.log
Adjust the above command to suit your deployment needs. More Information: See the export procedure in the "Secondary Storage" section of the CloudStack Installation Guide.
Symptom: A virtual router is running, but the host is disconnected. A virtual router no longer functions as expected. Cause: The virtual router is lost or down. Solution: If you are sure that a virtual router is down forever, or no longer functions as expected, destroy it. You must create one afresh while keeping the backup router up and running (it is assumed this is in a redundant router setup): 1. Force stop the router. Use the stopRouter API with the forced=true parameter to do so. Before you continue with destroying this router, ensure that the backup router is running. Otherwise the network connection will be lost. 2. Destroy the router by using the destroyRouter API. 3. Recreate the missing router by using the restartNetwork API with the cleanup=false parameter. For more information about redundant router setup, see Creating a New Network Offering. For more information about the API syntax, see the API Reference at http://docs.cloudstack.org/CloudStack_Documentation/API_Reference%3A_CloudStack.
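A minimal CloudMonkey sketch of that sequence, with placeholder IDs; verify that the backup router is running between the first and second commands, as described above:

stop router id=<router-id> forced=true
destroy router id=<router-id>
restart network id=<network-id> cleanup=false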
Time Zones
The following time zone identifiers are accepted by CloudStack. There are several places that have a time zone as a required or optional parameter. These include scheduling recurring snapshots, creating a user, and specifying the usage time zone in the Configuration table. Etc/GMT+12 Pacific/Honolulu Mexico/BajaNorte America/Chihuahua America/Mexico_City America/New_York America/Cuiaba America/Santiago America/Argentina/Buenos_Aires America/Montevideo Atlantic/Cape_Verde Atlantic/Reykjavik Europe/Bucharest Africa/Cairo Europe/Moscow Asia/Kolkata Asia/Kuala_Lumpur Asia/Tokyo Australia/Darwin Pacific/Guam Etc/GMT+11 US/Alaska US/Arizona America/Chicago Canada/Saskatchewan America/Caracas America/Halifax America/St_Johns America/Cayenne Etc/GMT+2 Africa/Casablanca Europe/London Africa/Johannesburg Asia/Jerusalem Africa/Nairobi Asia/Bangkok Australia/Perth Asia/Seoul Australia/Brisbane Pacific/Auckland Pacific/Samoa America/Los_Angeles US/Mountain America/Costa_Rica America/Bogota America/Asuncion America/La_Paz America/Araguaina America/Godthab Atlantic/Azores Etc/UTC CET Asia/Beirut Europe/Minsk Asia/Karachi Asia/Shanghai Asia/Taipei Australia/Adelaide Australia/Canberra
Event Types
VM.CREATE VM.DESTROY VM.START VM.STOP VM.REBOOT VM.UPGRADE VM.RESETPASSWORD ROUTER.CREATE ROUTER.DESTROY ROUTER.START ROUTER.STOP ROUTER.REBOOT ROUTER.HA PROXY.CREATE PROXY.DESTROY PROXY.START PROXY.STOP PROXY.REBOOT PROXY.HA VNC.CONNECT NET.IPRELEASE NET.RULEMODIFY LB.ASSIGN.TO.RULE TEMPLATE.EXTRACT TEMPLATE.UPLOAD TEMPLATE.CLEANUP VOLUME.CREATE VOLUME.DELETE VOLUME.ATTACH VOLUME.DETACH VOLUME.UPLOAD SERVICEOFFERING.CREATE SERVICEOFFERING.UPDATE SERVICEOFFERING.DELETE DOMAIN.CREATE DOMAIN.DELETE DOMAIN.UPDATE SNAPSHOT.CREATE SNAPSHOT.DELETE SNAPSHOTPOLICY.CREATE SNAPSHOTPOLICY.UPDATE SNAPSHOTPOLICY.DELETE VNC.DISCONNECT NET.RULEADD NETWORK.CREATE LB.REMOVE.FROM.RULE SG.REVOKE.INGRESS HOST.RECONNECT MAINT.CANCEL MAINT.CANCEL.PS MAINT.PREPARE MAINT.PREPARE.PS VPN.REMOTE.ACCESS.CREATE VPN.USER.ADD VPN.USER.REMOVE NETWORK.RESTART UPLOAD.CUSTOM.CERTIFICATE UPLOAD.CUSTOM.CERTIFICATE STATICNAT.DISABLE SSVM.CREATE SSVM.DESTROY SSVM.START SSVM.STOP SSVM.REBOOT SSVM.H NET.IPASSIGN NET.RULEDELETE NETWORK.DELETE LB.CREATE
LB.ASSIGN.TO.RULE LB.DELETE USER.LOGOUT USER.UPDATE TEMPLATE.DELETE TEMPLATE.DOWNLOAD.START ISO.CREATE ISO.ATTACH ISO.UPLOAD SERVICE.OFFERING.DELETE DISK.OFFERING.DELETE NETWORK.OFFERING.DELETE POD.DELETE ZONE.DELETE CONFIGURATION.VALUE.EDIT
LB.REMOVE.FROM.RULE LB.UPDATE USER.CREATE USER.DISABLE TEMPLATE.UPDATE TEMPLATE.DOWNLOAD.SUCCESS ISO.DELETE ISO.DETACH SERVICE.OFFERING.CREATE DISK.OFFERING.CREATE NETWORK.OFFERING.CREATE POD.CREATE ZONE.CREATE VLAN.IP.RANGE.CREATE SG.AUTH.INGRESS
LB.CREATE USER.LOGIN USER.DELETE TEMPLATE.CREATE TEMPLATE.COPY TEMPLATE.DOWNLOAD.FAILED ISO.COPY ISO.EXTRACT SERVICE.OFFERING.EDIT DISK.OFFERING.EDIT NETWORK.OFFERING.EDIT POD.EDIT ZONE.EDIT VLAN.IP.RANGE.DELETE
Alerts
The following is the list of alert type numbers. The current alerts can be found by calling listAlerts.
MEMORY = 0
CPU = 1
STORAGE = 2
STORAGE_ALLOCATED = 3
PUBLIC_IP = 4
PRIVATE_IP = 5
HOST = 6
USERVM = 7
DOMAIN_ROUTER = 8
CONSOLE_PROXY = 9
ROUTING = 10 // lost connection to default route (to the gateway)
STORAGE_MISC = 11 // lost connection to default route (to the gateway)
USAGE_SERVER = 12 // lost connection to default route (to the gateway)
MANAGMENT_NODE = 13 // lost connection to default route (to the gateway)
DOMAIN_ROUTER_MIGRATE = 14
CONSOLE_PROXY_MIGRATE = 15
USERVM_MIGRATE = 16
VLAN = 17
SSVM = 18
USAGE_SERVER_RESULT = 19
STORAGE_DELETE = 20;
UPDATE_RESOURCE_COUNT = 21; // Generated when we fail to update the resource count
USAGE_SANITY_RESULT = 22;
DIRECT_ATTACHED_PUBLIC_IP = 23;
LOCAL_STORAGE = 24;
RESOURCE_LIMIT_EXCEEDED = 25; // Generated when the resource limit exceeds the limit. Currently used for recurring snapshots only
Revision History
Revision 0-0 Tue May 29 2012 Initial creation of book by publican Jessica Tomechak