AHV Admin Guide v501
Acropolis 5.0
26-Jan-2017
Notice
Copyright
Copyright 2017 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.
Conventions
Convention Description
user@host$ command The commands are executed as a non-privileged user (such as nutanix)
in the system shell.
root@host# command The commands are executed as the root user in the vSphere or Acropolis
host shell.
> command The commands are executed in the Hyper-V host shell.
Version
Last modified: January 26, 2017 (2017-01-26 22:32:23 GMT-8)
1: Node Management...................................................................................5
Controller VM Access.......................................................................................................................... 5
Shutting Down a Node in a Cluster (AHV)......................................................................................... 5
Starting a Node in a Cluster (AHV).................................................................................................... 6
Changing CVM Memory Configuration (AHV).....................................................................................7
Changing the Acropolis Host Name.................................................................................................... 8
Changing the Acropolis Host Password..............................................................................................8
Upgrading the KVM Hypervisor to Use Acropolis Features................................................................ 9
Nonconfigurable AHV Components...................................................................................................11
5: Event Notifications................................................................................ 33
List of Events..................................................................................................................................... 33
Creating a Webhook.......................................................................................................................... 34
Listing Webhooks...............................................................................................................................35
Updating a Webhook......................................................................................................................... 36
Deleting a Webhook.......................................................................................................................... 37
1: Node Management
Controller VM Access
Most administrative functions of a Nutanix cluster can be performed through the web console or nCLI.
Nutanix recommends using these interfaces whenever possible and disabling Controller VM SSH access
with password or key authentication. Some functions, however, require logging on to a Controller VM
with SSH. Exercise caution whenever connecting directly to a Controller VM as the risk of causing cluster
issues is increased.
Warning: When you connect to a Controller VM with SSH, ensure that the SSH client does
not import or change any locale settings. The Nutanix software is not localized, and executing
commands with any locale other than en_US.UTF-8 can cause severe cluster issues.
To check the locale used in an SSH session, run /usr/bin/locale. If any environment variables
are set to anything other than en_US.UTF-8, reconnect with an SSH configuration that does not
import or change any locale settings.
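For reference, a safe SSH session reports en_US.UTF-8 for the locale variables, as in this abbreviated example:
nutanix@cvm$ /usr/bin/locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_ALL=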
Replace cvm_name with the name of the Controller VM that you found from the preceding command.
5. If the node is in maintenance mode, log on to the Controller VM and take the node out of maintenance
mode.
nutanix@cvm$ acli
<acropolis> host.exit_maintenance_mode host
<acropolis> exit
If the cluster is running properly, output similar to the following is displayed for each node in the cluster:
CVM: 10.1.64.60 Up
Zeus UP [3704, 3727, 3728, 3729, 3807, 3821]
Scavenger UP [4937, 4960, 4961, 4990]
SSLTerminator UP [5034, 5056, 5057, 5139]
Hyperint UP [5059, 5082, 5083, 5086, 5099, 5108]
Medusa UP [5534, 5559, 5560, 5563, 5752]
DynamicRingChanger UP [5852, 5874, 5875, 5954]
Pithos UP [5877, 5899, 5900, 5962]
Stargate UP [5902, 5927, 5928, 6103, 6108]
Cerebro UP [5930, 5952, 5953, 6106]
Chronos UP [5960, 6004, 6006, 6075]
Curator UP [5987, 6017, 6018, 6261]
Prism UP [6020, 6042, 6043, 6111, 6818]
CIM UP [6045, 6067, 6068, 6101]
AlertManager UP [6070, 6099, 6100, 6296]
Arithmos UP [6107, 6175, 6176, 6344]
SysStatCollector UP [6196, 6259, 6260, 6497]
Tunnel UP [6263, 6312, 6313]
Caution: To avoid impacting cluster availability, shut down one Controller VM at a time. Wait until
cluster services are up before proceeding to the next Controller VM.
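As a quick check, you can filter the status output so that only the per-CVM header lines and any services that are not UP remain visible; the grep filter is a convenience, not a required step:
nutanix@cvm$ cluster status | grep -v UP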
Replace cvm_name with the name of the Controller VM that you found from the preceding command.
Replace cvm_name with the name of the Controller VM that you found in step 2.
5. Increase the memory of the Controller VM (if needed), depending on your configuration settings for
deduplication and other advanced features.
See CVM Memory and vCPU Configurations (G4/Haswell/Ivy Bridge) on page 14 for memory sizing
guidelines.
root@ahv# virsh setmaxmem cvm_name --config --size ram_gbGiB
root@ahv# virsh setmem cvm_name --config --size ram_gbGiB
Replace cvm_name with the name of the Controller VM and ram_gb with the recommended amount
from the sizing guidelines in GiB (for example, 1GiB).
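For example, to set a Controller VM named NTNX-CVM (a placeholder) to 32 GiB, which is an illustrative size rather than a recommendation:
root@ahv# virsh setmaxmem NTNX-CVM --config --size 32GiB
root@ahv# virsh setmem NTNX-CVM --config --size 32GiB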
2. Use a text editor such as vi to set the value of the HOSTNAME parameter in the /etc/sysconfig/
network file.
HOSTNAME=my_hostname
Replace my_hostname with the name that you want to assign to the host.
3. Use the text editor to replace the host name in the /etc/hostname file.
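For example, if the new host name is ahv-host-01 (a placeholder), both files should reference the same name:
root@ahv# grep HOSTNAME /etc/sysconfig/network
HOSTNAME=ahv-host-01
root@ahv# cat /etc/hostname
ahv-host-01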
3. Respond to the prompts, providing the current and new root password.
Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.
The password you choose must meet the following complexity requirements:
In configurations with high-security requirements, the password must contain:
At least 15 characters.
At least one upper case letter (A-Z).
At least one lower case letter (a-z).
At least one digit (0-9).
At least one printable ASCII special (non-alphanumeric) character. For example, a tilde (~),
exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
At least eight characters different from the previous password.
At most three consecutive occurrences of any given character.
The password cannot be the same as the last 24 passwords.
In both types of configuration, if a password for an account is entered three times unsuccessfully within
a 15-minute period, the account is locked for 15 minutes.
Note: If you are currently deploying NOS 4.1.x/4.1.1.x and later, and previously upgraded to an
Acropolis-compatible version of the KVM hypervisor (for example, version KVM-20150120):
Do not use the script or procedure described in this topic.
Upgrade to the latest available Nutanix version of the KVM hypervisor using the Upgrade
Software feature through the Prism web console. See Software and Firmware Upgrades in the
Web Console Guide for the upgrade instructions.
Use this procedure if you are currently using a legacy, non-Acropolis version of KVM and want to use the
Acropolis distributed VM management service features. The first generally-available Nutanix KVM version
with Acropolis is KVM-20150120; the Nutanix support portal always makes the latest version available.
You can determine whether you are running an Acropolis-compatible hypervisor in any of the following ways:
Log in to the hypervisor host and type cat /etc/nutanix-release. For example, the following result indicates
that you are running an Acropolis-compatible hypervisor: el6.nutanix.2015412. The minimum result for AHV
is el6.nutanix.20150120.
Log in to the hypervisor host and type cat /etc/centos-release. For example, the following result indicates
that you are running an Acropolis-compatible hypervisor: CentOS release 6.6 (Final). Any result that
returns CentOS 6.4 or earlier is non-Acropolis (that is, KVM).
Log in to the Prism web console and view the Hypervisor Summary on the home page. If it shows a version
of 20150120 or later, you are running AHV.
NOS 3.5.5 and KVM CentOS 6.4: 1. Upgrade KVM using the upgrade script. 2. Import existing VMs.
NOS 3.5.4.6 or earlier and KVM CentOS 6.4: 1. Upgrade to NOS 3.5.5. 2. Upgrade KVM using the upgrade script. 3. Import existing VMs.
NOS 4.0.2/4.0.2.x and KVM CentOS 6.4: 1. Upgrade KVM using the upgrade script. 2. Import existing VMs.
NOS 4.1 and KVM CentOS 6.4: 1. Upgrade KVM using the upgrade script. 2. Import existing VMs.
Note:
See the Nutanix Support Portal for the latest information on Acropolis Upgrade Paths.
This procedure requires that you shut down any VMs running on the host and leave them off
until the hypervisor and AOS upgrades are completed.
Do not run the upgrade script on the same Controller VM where you are upgrading the node's
hypervisor. You can run it from another Controller VM in the cluster.
1. Download the hypervisor upgrade bundle from the Nutanix support portal at the Downloads link.
You must copy this bundle to the Controller VM you are upgrading. This procedure assumes you copy it
to and extract it from the /home/nutanix directory.
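For example, copy the bundle from your workstation to the Controller VM with scp (the Controller VM IP address is a placeholder, and version in the file name stands for the actual release string):
user@host$ scp upgrade_kvm-el6.nutanix.version.tar.gz nutanix@10.1.64.60:/home/nutanix/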
2. Log on to the Controller VM of the hypervisor host to be upgraded to shut down each VM and shut
down the Controller VM.
a. Shut down each VM, specified by vm_name, running on the host to be upgraded.
nutanix@cvm$ virsh shutdown vm_name
b. Shut down the Controller VM once all VMs are powered off.
nutanix@cvm$ sudo shutdown -h now
4. Copy the upgrade bundle you downloaded to /home/nutanix and extract the upgrade tar file.
nutanix@cvm$ tar -xzvf upgrade_kvm-el6.nutanix.version.tar.gz
5. Change to the upgrade_kvm/bin directory and run the upgrade_kvm upgrade script, where host_ip is the
IP address of the hypervisor host to be upgraded (the host where you shut down the Controller VM in
step 2).
nutanix@cvm$ cd upgrade_kvm/bin
nutanix@cvm$ ./upgrade_kvm --host_ip host_ip
The Controller VM of the upgraded host restarts and messages similar to the following are displayed.
This message shows the first generally-available KVM version with Acropolis (KVM-20150120).
...
6. Log on to the upgraded Controller VM and verify that cluster services have started by noting that all
services are listed as UP.
nutanix@cvm$ cluster status
After the hypervisor is upgraded, you can now import any existing powered-off VMs according to
procedures described in the Acropolis App Mobility Fabric Guide.
Warning: Modifying any of the settings listed here may render your cluster inoperable.
Warning: You must not run any commands on a Controller VM that are not covered in the Nutanix
documentation.
Nutanix Software
Settings and contents of any Controller VM, including the name and the virtual hardware configuration
(except memory when required to enable certain features)
Note: Nutanix Engineering has determined that memory requirements for each Controller VM in
your cluster are likely to increase for subsequent releases. Nutanix recommends that you plan to
upgrade memory.
Platform Default
The following table shows the minimum amount of memory required for the Controller VM on each node for
platforms that do not follow the default. For the workload translation into models, see Platform Workload
Translation (G5/Broadwell) on page 14.
Note: To calculate the number of vCPUs for your model, use the number of physical cores per
socket in your model. The minimum number of vCPUs your Controller VM can have is eight and
the maximum number is 12.
If your CPU has fewer than eight logical cores, allocate a maximum of 75 percent of the cores of a
single CPU to the Controller VM. For example, if your CPU has 6 cores, allocate 4 vCPUs.
Platform Default
The following tables show the minimum amount of memory and vCPU requirements and recommendations
for the Controller VM on each node for platforms that do not follow the default.
Dell Platforms
XC730xd-24 32 16 8
XC6320-6AF
XC630-10AF
Lenovo Platforms
HX-3500 24 8
HX-5500
HX-7500
Note:
SSP requires a minimum of 24 GB of memory for the CVM. If the CVMs
already have 24 GB of memory, no additional memory is necessary to run
SSP.
If the CVMs have less than 24 GB of memory, increase the memory to 24 GB
to use SSP.
If the cluster is using any other features that require additional CVM memory,
add 4 GB for SSP in addition to the amount needed for the other features.
Open vSwitch: Do not modify the OpenFlow tables that are associated with the default OVS bridge br0.
VLANs: Add the Controller VM and the AHV host to the same VLAN. By default, the Controller VM and the hypervisor are assigned to VLAN 0, which effectively places them on the native VLAN configured on the upstream physical switch. Do not add any other device, including guest VMs, to the VLAN to which the Controller VM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.
OVS bonded port (bond0): Aggregate the 10 GbE interfaces on the physical host to an OVS bond on the default OVS bridge br0 and trunk these interfaces on the physical switch. By default, the 10 GbE interfaces in the OVS bond operate in the recommended active-backup mode. LACP configurations are known to work, but support might be limited.
1 GbE and 10 GbE interfaces (physical host): If you want to use the 10 GbE interfaces for guest VM traffic, make sure that the guest VMs do not use the VLAN over which the Controller VM and hypervisor communicate. If you want to use the 1 GbE interfaces for guest VM connectivity, follow the hypervisor manufacturer's switch port and networking configuration guidelines. Do not include the 1 GbE interfaces in the same bond as the 10 GbE interfaces. Also, to avoid loops, do not add the 1 GbE interfaces to bridge br0, either individually or in a second bond. Use them on other bridges.
IPMI port on the hypervisor host: Do not trunk switch ports that connect to the IPMI interface. Configure the switch ports as access ports for management simplicity.
Upstream physical switch: Nutanix does not recommend the use of Fabric Extenders (FEX) or similar technologies for production use cases. While initial, low-load implementations might run smoothly with such technologies, poor performance, VM lockups, and other issues might occur as implementations scale upward (see Knowledge Base article KB1612). Nutanix recommends the use of 10 Gbps, line-rate, non-blocking switches with larger buffers for production workloads. Use an 802.3-2012 standards-compliant switch that has a low-latency, cut-through design and provides predictable, consistent traffic latency regardless of packet size, traffic pattern, or the features enabled on the 10 GbE interfaces. Port-to-port latency should be no higher than 2 microseconds. Use fast-convergence technologies (such as Cisco PortFast) on switch ports that are connected to the hypervisor host. Avoid using shared buffers for the 10 GbE ports. Use a dedicated buffer for each port.
Physical Network Layout: Use redundant top-of-rack switches in a traditional leaf-spine architecture. This simple, flat network design is well suited for a highly distributed, shared-nothing compute and storage architecture. Add all the nodes that belong to a given cluster to the same Layer-2 network segment. Other network layouts are supported as long as all other Nutanix recommendations are followed.
Controller VM: Do not remove the Controller VM from either the OVS bridge br0 or the native Linux bridge virbr0.
This diagram shows the recommended network configuration for an Acropolis cluster. The interfaces in the
diagram are connected with colored lines to indicate membership to different VLANs:
Figure:
The following diagram illustrates the default factory configuration of OVS on an Acropolis node:
The Controller VM has two network interfaces. As shown in the diagram, one network interface connects to
bridge br0. The other network interface connects to a port on virbr0. The Controller VM uses this bridge to
communicate with the hypervisor host.
To show interface properties such as link speed and status, log on to the Controller VM, and then list the
physical interfaces.
nutanix@cvm$ manage_ovs show_interfaces
To show the ports and interfaces that are configured as uplinks, log on to the Controller VM, and then
list the uplink configuration.
nutanix@cvm$ manage_ovs --bridge_name bridge show_uplinks
Replace bridge with the name of the bridge for which you want to view uplink information. Omit the --
bridge_name parameter if you want to view uplink information for the default OVS bridge br0.
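For example, to view the uplink configuration of a non-default bridge named br1 (assuming such a bridge exists on the host):
nutanix@cvm$ manage_ovs --bridge_name br1 show_uplinks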
To show the virtual switching configuration, log on to the Acropolis host with SSH, and then list the
configuration of Open vSwitch.
root@ahv# ovs-vsctl show
59ce3252-f3c1-4444-91d1-b5281b30cdba
Bridge "br0"
Port "br0"
Interface "br0"
type: internal
Port "vnet0"
Interface "vnet0"
Port "br0-arp"
Interface "br0-arp"
type: vxlan
options: {key="1", remote_ip="192.168.5.2"}
Port "bond0"
Interface "eth3"
Interface "eth2"
Port "bond1"
Interface "eth1"
Interface "eth0"
Port "br0-dhcp"
Interface "br0-dhcp"
type: vxlan
options: {key="1", remote_ip="192.0.2.131"}
ovs_version: "2.3.1"
To show the configuration of an OVS bond, log on to the Acropolis host with SSH, and then list the
configuration of the bond.
root@ahv# ovs-appctl bond/show bond_name
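For example, to show the configuration of the default bonded port bond0:
root@ahv# ovs-appctl bond/show bond0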
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
Replace bridge with a name for the bridge. The output does not indicate success explicitly, so you can
append && echo success to the command. If the bridge is created, the text success is displayed.
For example, create a bridge and name it br1.
nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success'
Executing ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success on the
cluster
================== 192.0.2.203 =================
FIPS mode initialized
Nutanix KVM
success
...
Note: Perform this procedure on factory-configured nodes to remove the 1 GbE interfaces from
the bonded port bond0. You cannot configure failover priority for the interfaces in an OVS bond, so
the disassociation is necessary to help prevent any unpredictable performance issues that might
result from a 10 GbE interface failing over to a 1 GbE interface. Nutanix recommends that you
aggregate only the 10 GbE interfaces on bond0 and use the 1 GbE interfaces on a separate OVS
bridge.
To create an OVS bond with the desired interfaces, do the following:
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
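The bond itself is created with the manage_ovs update_uplinks action. As a sketch, assuming the interfaces to aggregate are eth0 and eth1 on the default bridge br0 (the interface, bridge, and bond names are illustrative; substitute your own), run a command of the following form:
nutanix@cvm$ manage_ovs --bridge_name br0 --interfaces eth0,eth1 --bond_name bond1 update_uplinks
Output similar to the following is displayed.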
2015-03-05 11:17:17 WARNING manage_ovs:291 Interface eth1 does not have link state
2015-03-05 11:17:17 INFO manage_ovs:325 Deleting OVS ports: bond1
2015-03-05 11:17:18 INFO manage_ovs:333 Adding bonded OVS ports: eth0 eth1
2015-03-05 11:17:22 INFO manage_ovs:364 Sending gratuitous ARPs for 192.0.2.21
To assign an AHV host to a VLAN, do the following on every AHV host in the cluster:
2. Assign port br0 (the internal port on the default OVS bridge, br0) to the VLAN that you want the host to
be on.
root@ahv# ovs-vsctl set port br0 tag=host_vlan_tag
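For example, to place the host on VLAN 10 (an illustrative tag; substitute your own VLAN ID):
root@ahv# ovs-vsctl set port br0 tag=10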
5. Verify connectivity to the IP address of the AHV host by performing a ping test.
By default, the public interface of a Controller VM is assigned to VLAN 0. To assign the Controller VM to
a different VLAN, change the VLAN ID of its public interface. After the change, you can access the public
interface from a device that is on the new VLAN.
Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you are
logged on to the Controller VM through its public interface. To change the VLAN ID, log on to the
internal interface that has IP address 192.168.5.254.
Perform these steps on every Controller VM in the cluster. To assign the Controller VM to a VLAN, do the
following:
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.
For example, add the Controller VM to VLAN 10.
nutanix@cvm$ change_cvm_vlan 10
new XML:
<interface type="bridge">
<mac address="52:54:00:02:23:48" />
<model type="virtio" />
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
<source bridge="br0" />
<virtualport type="openvswitch" />
</interface>
CVM external NIC successfully updated.
By default, a virtual NIC on a guest VM operates in access mode. In this mode, the virtual NIC can
send and receive traffic only over its own VLAN, which is the VLAN of the virtual network to which it is
connected.
a. Create a virtual NIC on the VM and configure the NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_create vm network=network [vlan_mode={kAccess | kTrunked}]
[trunked_networks=networks]
Note: Both commands include optional parameters that are not directly associated with this
procedure and are therefore not described here. For the complete command reference, see the
"VM" section in the "Acropolis Command-Line Interface" chapter of the Acropolis App Mobility
Fabric Guide.
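For example, the following command (the VM name example_vm and network name vlan10.br1 are placeholders) creates a NIC that operates in trunked mode:
nutanix@cvm$ acli vm.nic_create example_vm network=vlan10.br1 vlan_mode=kTrunked
Add the trunked_networks parameter, as shown in the syntax above, to restrict the VLANs that the NIC carries.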
Caution: All Controller VMs and hypervisor hosts must be on the same subnet. The hypervisor
can be multihomed provided that one interface is on the same subnet as the Controller VM.
1. Edit the settings of port br0, which is the internal port on the default bridge br0.
b. Open the network interface configuration file for port br0 in a text editor.
root@ahv# vi /etc/sysconfig/network-scripts/ifcfg-br0
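The IP-related entries in this file typically look like the following. Leave the existing bridge and device settings in place and edit only the address values; the addresses shown here are placeholders:
BOOTPROTO=none
IPADDR=10.1.64.11
NETMASK=255.255.255.0
GATEWAY=10.1.64.1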
For information about how to log on to a Controller VM, see Controller VM Access on page 5.
3. Assign the host to a VLAN. For information about how to add a host to a VLAN, see Assigning an
Acropolis Host to a VLAN on page 24.
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
4. If the 1 GbE interfaces are in a bond with the 10 GbE interfaces, as shown in the sample output in the
previous step, dissociate the 1 GbE interfaces from the bond. Assume that the bridge name and bond
name are br0 and br0-up, respectively.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br0 --interfaces 10g --bond_name br0-up
update_uplinks'
The command removes the bond and then re-creates the bond with only the 10 GbE interfaces.
6. Aggregate the 1 GbE interfaces to a separate bond on the new bridge. For example, aggregate them to
a bond named br1-up.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces 1g --bond_name br1-up
update_uplinks'
7. Log on to each Controller VM and create a network on a separate VLAN for the guest VMs, and
associate the new bridge with the network. For example, create a network named vlan10.br1 on VLAN
10.
nutanix@cvm$ acli net.create vlan10.br1 vlan=10 vswitch_name=br1
8. To enable guest VMs to use the 1 GbE interfaces, log on to the web console and assign interfaces on
the guest VMs to the network.
For information about assigning guest VM interfaces to a network, see "Creating a VM" in the Prism
Web Console Guide.
Memory OS Limitations
1. On Linux operating systems, the Linux kernel might not bring the hot-plugged memory online
automatically. If the memory is not online, you cannot use the new memory. Perform the following
procedure to bring the memory online.
a. Identify the memory block that is offline.
Display the status of all of the memory.
$ cat /sys/devices/system/memory/memoryXXX/state
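If the state is offline, bringing the block online typically involves writing online to the same file (memoryXXX is a placeholder for the offline memory block):
$ echo online > /sys/devices/system/memory/memoryXXX/state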
2. If your VM has CentOS 7.2 as the guest OS and less than 3 GB of memory, hot plugging more memory
to that VM so that the final memory size is greater than 3 GB results in a memory-overflow condition. To
resolve the issue, restart the guest OS (CentOS 7.2) with the following kernel command-line setting:
swiotlb=force
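One common way to apply this kernel parameter on a CentOS 7 guest, assuming a standard GRUB 2 boot configuration, is to append swiotlb=force to GRUB_CMDLINE_LINUX in /etc/default/grub, regenerate the GRUB configuration, and reboot:
$ sudo vi /etc/default/grub
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
$ sudo reboot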
CPU OS Limitations
1. On CentOS operating systems, if the hot-plugged CPUs are not displayed in /proc/cpuinfo, you might
have to bring the CPUs online. For each hot-plugged CPU, run the following command to bring the CPU
online.
$ echo 1 > /sys/devices/system/cpu/cpu<n>/online
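If several CPUs are offline, a short loop (a sketch; adjust for your guest OS) brings them all online:
$ for c in /sys/devices/system/cpu/cpu[0-9]*/online; do echo 1 > "$c"; done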
Replace vm with the name of the VM and new_memory_size with the memory size.
Replace vm with the name of the VM and n with the number of CPUs.
List of Events
You can register webhook listeners to receive notifications for events described here.
Event Description
VM.UPDATE A VM is updated.
VM.MIGRATE A VM is migrated from one host to another.
VM.ON A VM is powered on.
VM.OFF A VM is powered off.
VM.NIC_PLUG A virtual NIC is plugged into a network.
VM.NIC_UNPLUG A virtual NIC is unplugged from a network.
NETWORK.CREATE A virtual network is created.
NETWORK.DELETE A virtual network is deleted.
NETWORK.UPDATE A virtual network is updated.
Note: Each POST request creates a separate webhook with a unique UUID, even if the data
in the body is identical. Each of those webhooks generates a notification when an event occurs,
so duplicate webhooks result in multiple notifications for the same event. If you want to change a
webhook, do not send another POST request with the changes. Instead, update the existing
webhook. See Updating a Webhook on page 36.
To create a webhook, send the Nutanix cluster an API request of the following form:
POST https://cluster_IP_address:9440/api/nutanix/v3/webhooks
{
"metadata": {
"kind": "webhook"
},
"spec": {
"name": "string",
"resources": {
"post_url": "string",
"credentials": {
"username":"string",
"password":"string"
},
"events_filter_list": [
string
]
},
"description": "string"
},
"api_version": "string"
}
Replace cluster_IP_address with the IP address of the Nutanix cluster and specify appropriate values
for the following parameters:
name. Name for the webhook.
post_url. URL at which the webhook listener receives notifications.
username and password. User name and password to use for authentication.
events_filter_list. Comma-separated list of events for which notifications must be generated.
description. Description of the webhook.
api_version. Version of Nutanix REST API in use.
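For example, the following request body registers a listener for VM power events; all values, including the API version string, are illustrative placeholders:
{
  "metadata": {
    "kind": "webhook"
  },
  "spec": {
    "name": "vm-power-events",
    "resources": {
      "post_url": "https://listener.example.com/notifications",
      "credentials": {
        "username": "listener_user",
        "password": "listener_password"
      },
      "events_filter_list": [
        "VM.ON",
        "VM.OFF"
      ]
    },
    "description": "Notify on VM power state changes"
  },
  "api_version": "3.0"
}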
The Nutanix cluster responds to the API request with a 200 OK HTTP response that contains the UUID
of the webhook that is created. The following response is an example:
{
"status": {
"state": "kPending"
},
"spec": {
. . .
"uuid": "003f8c42-748d-4c0b-b23d-ab594c087399"
}
}
The notification contains metadata about the entity along with information about the type of event that
occurred. The event type is specified by the event_type parameter.
Listing Webhooks
You can list webhooks to view their specifications or to verify that they were created successfully.
To list webhooks, do the following:
To show a single webhook, send the Nutanix cluster an API request of the following form:
GET https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid
Replace cluster_IP_address with the IP address of the Nutanix cluster. Replace webhook_uuid with the
UUID of the webhook that you want to show.
Replace cluster_IP_address with the IP address of the Nutanix cluster and specify appropriate values
for the following parameters:
filter. Filter to apply to the list of webhooks.
sort_order. Order in which to sort the list of webhooks. Ordering is performed on webhook names.
offset.
total_matches. Number of matches to list.
sort_column. Parameter on which to sort the list.
length.
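As a sketch, assuming the standard Nutanix v3 list-endpoint convention, a request of the following form retrieves the webhooks that match these parameters:
POST https://cluster_IP_address:9440/api/nutanix/v3/webhooks/list
{
  "kind": "webhook",
  "filter": "string",
  "sort_order": "string",
  "sort_column": "string",
  "offset": 0,
  "length": 0,
  "total_matches": 0
}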
Updating a Webhook
You can update a webhook by sending a PUT request to the Nutanix cluster. You can update the name,
listener URL, event list, and description.
To update a webhook, send the Nutanix cluster an API request of the following form:
PUT https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid
{
"metadata": {
"kind": "webhook"
},
"spec": {
"name": "string",
"resources": {
"post_url": "string",
"credentials": {
"username":"string",
"password":"string"
},
"events_filter_list": [
string
]
},
"description": "string"
},
"api_version": "string"
}
Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID of
the webhook you want to update, respectively. For a description of the parameters, see Creating a
Webhook on page 34.
To delete a webhook, send the Nutanix cluster an API request of the following form:
DELETE https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid
Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID of the
webhook you want to delete, respectively.
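For example, using curl with HTTP basic authentication (the cluster address, credentials, and webhook UUID shown are placeholders):
user@host$ curl -X DELETE -k -u admin:password https://10.0.0.10:9440/api/nutanix/v3/webhooks/003f8c42-748d-4c0b-b23d-ab594c087399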