
Conversation

gophercloud-backport-bot[bot]

Backport: #3332

Nice to meet you, and thank you for your continued work on this project.
This pull request is ready, so could you please review it?

I implemented the missing Block Storage API calls to manage an existing volume and to unmanage a volume.
The API documents for these endpoints are linked in the references at the end of this description.

I wrote the unit and acceptance tests and checked that they worked.

In addition, I tested the new code with the environment and the script below.

  • cinder-api/noble-updates,now 2:25.0.0-0ubuntu1~cloud0 all [installed]
  • cinder-volume/noble-updates,now 2:25.0.0-0ubuntu1~cloud0 all [installed]
  • using cinder lvm driver
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/gophercloud/gophercloud/v2"
	"github.com/gophercloud/gophercloud/v2/openstack"
	"github.com/gophercloud/gophercloud/v2/openstack/blockstorage/v3/manageablevolumes"
	"github.com/gophercloud/gophercloud/v2/openstack/blockstorage/v3/volumes"
	"github.com/gophercloud/gophercloud/v2/openstack/config"
)

func main() {
	ctx := context.Background()

	authOpts := gophercloud.AuthOptions{
		IdentityEndpoint: os.Getenv("OS_AUTH_URL"),
		Username:         os.Getenv("OS_USERNAME"),
		Password:         os.Getenv("OS_PASSWORD"),
		DomainName:       os.Getenv("OS_USER_DOMAIN_NAME"),
		Scope: &gophercloud.AuthScope{
			ProjectID: os.Getenv("OS_PROJECT_ID"),
		},
	}
	endpointOpts := gophercloud.EndpointOpts{
		Region: os.Getenv("OS_REGION_NAME"),
	}
	providerClient, err := config.NewProviderClient(ctx, authOpts)
	if err != nil {
		log.Fatal(err)
	}
	client, err := openstack.NewBlockStorageV3(providerClient, endpointOpts)
	if err != nil {
		log.Fatal(err)
	}
	// Manage existing volume requires microversion at least 3.8
	client.Microversion = "3.8"

	createOpts := volumes.CreateOpts{
		Name:             "New Volume",
		Size:             8,
		AvailabilityZone: "nova",
		Description:      "Volume newly created",
		VolumeType:       "lvm",
		Metadata:         map[string]string{"key1": "value1"},
	}
	created, err := volumes.Create(ctx, client, createOpts, nil).Extract()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Created volume: %+v
", created)

	time.Sleep(3 * time.Second)
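	// The sleep keeps the example simple; a real caller would poll the volume
	// status until it reports "available" before unmanaging it.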

	if err := volumes.Unmanage(ctx, client, created.ID).ExtractErr(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("Unmanaged volume")

	time.Sleep(3 * time.Second)

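	// Host uses Cinder's host@backend#pool format (see `cinder get-pools` below),
	// and the "source-name" ref points at the backing LV that the unmanage call
	// above left behind (named volume-<original ID> by the LVM driver).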
	manageOpts := manageablevolumes.ManageExistingOpts{
		Host:             "storage01@lvm#LVM",
		Ref:              map[string]string{"source-name": fmt.Sprintf("volume-%s", created.ID)},
		Name:             "Managed Volume",
		AvailabilityZone: "nova",
		Description:      "Volume imported from existingLV",
		VolumeType:       "lvm",
		Bootable:         false,
		Metadata:         map[string]string{"key1": "value1"},
	}
	managed, err := manageablevolumes.ManageExisting(ctx, client, manageOpts).Extract()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Managed volume: %+v
", managed)
}

And it worked.

$ cinder get-pools
+----------+-------------------+
| Property | Value             |
+----------+-------------------+
| name     | storage01@lvm#LVM |
+----------+-------------------+

$ go run .
Created volume: &{ID:b62ccb65-0261-473d-924e-c40717fd8b5a Status:creating Size:8 AvailabilityZone:nova CreatedAt:2025-03-23 15:17:34.787783 +0000 UTC UpdatedAt:0001-01-01 00:00:00 +0000 UTC Attachments:[] Name:New Volume Description:Volume newly created VolumeType:lvm SnapshotID: SourceVolID: BackupID:<nil> Metadata:map[key1:value1] UserID:149dd9648b9647c8b4e6cbf509f2078f Bootable:false Encrypted:false ReplicationStatus: ConsistencyGroupID: Multiattach:false VolumeImageMetadata:map[] Host: TenantID:}
Unmanaged volume
Managed volume: &{ID:b9ecbba7-fce5-4198-ae67-1b1b7163b761 Status:creating Size:0 AvailabilityZone:nova CreatedAt:2025-03-23 15:17:40.952282 +0000 UTC UpdatedAt:0001-01-01 00:00:00 +0000 UTC Attachments:[] Name:Managed Volume Description:Volume imported from existingLV VolumeType:lvm SnapshotID: SourceVolID: BackupID:<nil> Metadata:map[key1:value1] UserID:149dd9648b9647c8b4e6cbf509f2078f Bootable:false Encrypted:false ReplicationStatus: ConsistencyGroupID: Multiattach:false VolumeImageMetadata:map[] Host: TenantID:}

$ openstack volume list
+--------------------------------------+----------------+-----------+------+--------------------------------------+
| ID                                   | Name           | Status    | Size | Attached to                          |
+--------------------------------------+----------------+-----------+------+--------------------------------------+
| b9ecbba7-fce5-4198-ae67-1b1b7163b761 | Managed Volume | available |    8 |                                      |
+--------------------------------------+----------------+-----------+------+--------------------------------------+

Fixes #3324

Links to the line numbers/files in the OpenStack source code that support the
code in this PR:

API doc

source code

Added acceptance tests for the manage-existing-volume and unmanage-volume API
calls.
@github-actions github-actions bot added the v2 (This PR targets v2), edit:blockstorage (This PR updates blockstorage code), and semver:minor (Backwards-compatible change) labels on May 23, 2025
@coveralls

coveralls commented May 23, 2025

Coverage Status

coverage: 63.586% (-0.09%) from 63.672%
when pulling ed64c54 on bp-v2-ea86f48-469ca6a-6fb7d11
into ac09823 on v2.

@TOMOFUMI-KONDO
Contributor

Hello.

This PR originates from my PR, #3332.

May I fix it so that it passes the tests?

kayrus and others added 2 commits May 27, 2025 13:22
We get an unexpected response code, not the actual response code. Handle
accordingly.

Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
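
For reference, here is a minimal sketch of how a caller can branch on the status code the API actually returned; it assumes gophercloud v2's ResponseCodeIs helper, and the handleUnmanageErr helper and the 404 case are only illustrative, not part of this PR.

package main

import (
	"log"
	"net/http"

	"github.com/gophercloud/gophercloud/v2"
)

// handleUnmanageErr is a hypothetical helper: instead of comparing against the
// code we expected, it branches on the status code the API actually returned.
func handleUnmanageErr(err error) {
	if err == nil {
		return
	}
	// ResponseCodeIs reports whether err wraps an unexpected-response-code
	// error whose actual status matches the given one.
	if gophercloud.ResponseCodeIs(err, http.StatusNotFound) {
		log.Println("volume not found; nothing to unmanage")
		return
	}
	log.Fatal(err)
}

func main() {
	// e.g. handleUnmanageErr(volumes.Unmanage(ctx, client, id).ExtractErr())
	handleUnmanageErr(nil)
}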
@kayrus kayrus merged commit 82aec7d into v2 May 27, 2025
21 checks passed
@kayrus kayrus deleted the bp-v2-ea86f48-469ca6a-6fb7d11 branch May 27, 2025 12:05