From 4c8364cfe46ea57cbf2337f6257764ad4c8799b3 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 27 Oct 2020 14:58:52 -0400
Subject: [PATCH 001/276] Do not require pgBackRest Secret for cluster creation
This changes how a pgBackRest Secret is generated so that it is
no longer required at the time a cluster is created via the
pgcluster custom resource.
Instead, the Operator follows this heuristic:
- If a pgBackRest Secret is provided, this Secret is used
- If the pgBackRest Secret is partially filled out, the
missing pieces are filled in
- If no Secret is provided, a Secret is generated.
Note that if you want to use S3 or an S3-like storage system,
you will still need to create the Secret with the appropriate
S3 credentials.
This also updates various documentation to show the easier
workflow.
Issue: [ch9451]
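The fill-in-missing behavior described above can be sketched as follows. This is an illustrative Go sketch, not the Operator's actual API: the key names and the `generate` callback are assumptions for demonstration only.

```go
package main

import "fmt"

// fillSecretDefaults keeps any value the user already supplied and generates
// only the missing entries, mirroring the heuristic above. The key names and
// the generate callback are illustrative, not the Operator's actual API.
func fillSecretDefaults(provided map[string][]byte, keys []string,
	generate func(key string) []byte) map[string][]byte {
	if provided == nil {
		provided = map[string][]byte{}
	}
	for _, k := range keys {
		if len(provided[k]) == 0 {
			provided[k] = generate(k)
		}
	}
	return provided
}

func main() {
	// a partially filled-out Secret: only the public key is provided
	partial := map[string][]byte{"authorized_keys": []byte("user-supplied")}
	out := fillSecretDefaults(partial, []string{"authorized_keys", "id_ed25519"},
		func(k string) []byte { return []byte("generated-" + k) })
	fmt.Println(string(out["authorized_keys"])) // the provided value is kept
	fmt.Println(string(out["id_ed25519"]))      // the missing value is generated
}
```

A fully provided Secret passes through unchanged, and a nil map yields a fully generated one, which covers all three branches of the heuristic.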
---
docs/content/custom-resources/_index.md | 65 +--------
examples/create-by-resource/run.sh | 39 -----
examples/helm/README.md | 53 ++++---
.../apiserver/clusterservice/clusterimpl.go | 30 ++--
internal/operator/backrest/repo.go | 13 ++
internal/operator/cluster/cluster.go | 8 +
internal/util/cluster.go | 137 ++++++++++++------
7 files changed, 165 insertions(+), 180 deletions(-)
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index e27adfd3e6..af913755e9 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -255,68 +255,7 @@ create additional secrets.
The following guide goes through how to create a PostgreSQL cluster called
`hippo` by creating a new custom resource.
-#### Step 1: Create the pgBackRest Secret
-
-pgBackRest is a fundamental part of a PostgreSQL deployment with the PostgreSQL
-Operator: not only is it a backup and archive repository, but it also helps with
-operations such as self-healing. A PostgreSQL instance a pgBackRest communicate
-using ssh, and as such, we need to generate a unique ssh keypair for
-communication for each PostgreSQL cluster we deploy.
-
-In this example, we generate a ssh keypair using ED25519 keys, but if your
-environment requires it, you can also use RSA keys.
-
-In your working directory, run the following commands:
-
-
-# this variable is the name of the cluster being created
-export pgo_cluster_name=hippo
-# this variable is the namespace the cluster is being deployed into
-export cluster_namespace=pgo
-
-# generate a SSH public/private keypair for use by pgBackRest
-ssh-keygen -t ed25519 -N '' -f "${pgo_cluster_name}-key"
-
-# base64 encoded the keys for the generation of the Kubernetes secret, and place
-# them into variables temporarily
-public_key_temp=$(cat "${pgo_cluster_name}-key.pub" | base64)
-private_key_temp=$(cat "${pgo_cluster_name}-key" | base64)
-export pgbackrest_public_key="${public_key_temp//[$'\n']}" pgbackrest_private_key="${private_key_temp//[$'\n']}"
-
-# create the backrest-repo-config example file and substitute in the newly
-# created keys
-#
-# (Note: that the "config" / "sshd_config" entries contain configuration to
-# ensure that PostgreSQL instances are able to communicate with the pgBackRest
-# repository, which houses backups and archives, and vice versa. Most of the
-# settings follow the sshd defaults, with a few overrides. Edit at your own
-# discretion.)
-cat <<-EOF > "${pgo_cluster_name}-backrest-repo-config.yaml"
-apiVersion: v1
-kind: Secret
-type: Opaque
-metadata:
- labels:
- pg-cluster: ${pgo_cluster_name}
- pgo-backrest-repo: "true"
- name: ${pgo_cluster_name}-backrest-repo-config
- namespace: ${cluster_namespace}
-data:
- authorized_keys: ${pgbackrest_public_key}
- id_ed25519: ${pgbackrest_private_key}
- ssh_host_ed25519_key: ${pgbackrest_private_key}
- config: SG9zdCAqClN0cmljdEhvc3RLZXlDaGVja2luZyBubwpJZGVudGl0eUZpbGUgL3RtcC9pZF9lZDI1NTE5ClBvcnQgMjAyMgpVc2VyIHBnYmFja3Jlc3QK
- sshd_config: IwkkT3BlbkJTRDogc3NoZF9jb25maWcsdiAxLjEwMCAyMDE2LzA4LzE1IDEyOjMyOjA0IG5hZGR5IEV4cCAkCgojIFRoaXMgaXMgdGhlIHNzaGQgc2VydmVyIHN5c3RlbS13aWRlIGNvbmZpZ3VyYXRpb24gZmlsZS4gIFNlZQojIHNzaGRfY29uZmlnKDUpIGZvciBtb3JlIGluZm9ybWF0aW9uLgoKIyBUaGlzIHNzaGQgd2FzIGNvbXBpbGVkIHdpdGggUEFUSD0vdXNyL2xvY2FsL2JpbjovdXNyL2JpbgoKIyBUaGUgc3RyYXRlZ3kgdXNlZCBmb3Igb3B0aW9ucyBpbiB0aGUgZGVmYXVsdCBzc2hkX2NvbmZpZyBzaGlwcGVkIHdpdGgKIyBPcGVuU1NIIGlzIHRvIHNwZWNpZnkgb3B0aW9ucyB3aXRoIHRoZWlyIGRlZmF1bHQgdmFsdWUgd2hlcmUKIyBwb3NzaWJsZSwgYnV0IGxlYXZlIHRoZW0gY29tbWVudGVkLiAgVW5jb21tZW50ZWQgb3B0aW9ucyBvdmVycmlkZSB0aGUKIyBkZWZhdWx0IHZhbHVlLgoKIyBJZiB5b3Ugd2FudCB0byBjaGFuZ2UgdGhlIHBvcnQgb24gYSBTRUxpbnV4IHN5c3RlbSwgeW91IGhhdmUgdG8gdGVsbAojIFNFTGludXggYWJvdXQgdGhpcyBjaGFuZ2UuCiMgc2VtYW5hZ2UgcG9ydCAtYSAtdCBzc2hfcG9ydF90IC1wIHRjcCAjUE9SVE5VTUJFUgojClBvcnQgMjAyMgojQWRkcmVzc0ZhbWlseSBhbnkKI0xpc3RlbkFkZHJlc3MgMC4wLjAuMAojTGlzdGVuQWRkcmVzcyA6OgoKSG9zdEtleSAvc3NoZC9zc2hfaG9zdF9lZDI1NTE5X2tleQoKIyBDaXBoZXJzIGFuZCBrZXlpbmcKI1Jla2V5TGltaXQgZGVmYXVsdCBub25lCgojIExvZ2dpbmcKI1N5c2xvZ0ZhY2lsaXR5IEFVVEgKU3lzbG9nRmFjaWxpdHkgQVVUSFBSSVYKI0xvZ0xldmVsIElORk8KCiMgQXV0aGVudGljYXRpb246CgojTG9naW5HcmFjZVRpbWUgMm0KUGVybWl0Um9vdExvZ2luIG5vClN0cmljdE1vZGVzIG5vCiNNYXhBdXRoVHJpZXMgNgojTWF4U2Vzc2lvbnMgMTAKClB1YmtleUF1dGhlbnRpY2F0aW9uIHllcwoKIyBUaGUgZGVmYXVsdCBpcyB0byBjaGVjayBib3RoIC5zc2gvYXV0aG9yaXplZF9rZXlzIGFuZCAuc3NoL2F1dGhvcml6ZWRfa2V5czIKIyBidXQgdGhpcyBpcyBvdmVycmlkZGVuIHNvIGluc3RhbGxhdGlvbnMgd2lsbCBvbmx5IGNoZWNrIC5zc2gvYXV0aG9yaXplZF9rZXlzCiNBdXRob3JpemVkS2V5c0ZpbGUJL3BnY29uZi9hdXRob3JpemVkX2tleXMKQXV0aG9yaXplZEtleXNGaWxlCS9zc2hkL2F1dGhvcml6ZWRfa2V5cwoKI0F1dGhvcml6ZWRQcmluY2lwYWxzRmlsZSBub25lCgojQXV0aG9yaXplZEtleXNDb21tYW5kIG5vbmUKI0F1dGhvcml6ZWRLZXlzQ29tbWFuZFVzZXIgbm9ib2R5CgojIEZvciB0aGlzIHRvIHdvcmsgeW91IHdpbGwgYWxzbyBuZWVkIGhvc3Qga2V5cyBpbiAvZXRjL3NzaC9zc2hfa25vd25faG9zdHMKI0hvc3RiYXNlZEF1dGhlbnRpY2F0aW9uIG5vCiMgQ2hhbmdlIHRvIHllcyBpZiB5b3UgZG9uJ3QgdHJ1c3Qgfi8uc3NoL2tub3duX2hvc3RzIGZvcgojIEhvc3RiYXNlZEF1dGhlbnRpY2F0aW9uC
iNJZ25vcmVVc2VyS25vd25Ib3N0cyBubwojIERvbid0IHJlYWQgdGhlIHVzZXIncyB+Ly5yaG9zdHMgYW5kIH4vLnNob3N0cyBmaWxlcwojSWdub3JlUmhvc3RzIHllcwoKIyBUbyBkaXNhYmxlIHR1bm5lbGVkIGNsZWFyIHRleHQgcGFzc3dvcmRzLCBjaGFuZ2UgdG8gbm8gaGVyZSEKI1Bhc3N3b3JkQXV0aGVudGljYXRpb24geWVzCiNQZXJtaXRFbXB0eVBhc3N3b3JkcyBubwpQYXNzd29yZEF1dGhlbnRpY2F0aW9uIG5vCgojIENoYW5nZSB0byBubyB0byBkaXNhYmxlIHMva2V5IHBhc3N3b3JkcwpDaGFsbGVuZ2VSZXNwb25zZUF1dGhlbnRpY2F0aW9uIHllcwojQ2hhbGxlbmdlUmVzcG9uc2VBdXRoZW50aWNhdGlvbiBubwoKIyBLZXJiZXJvcyBvcHRpb25zCiNLZXJiZXJvc0F1dGhlbnRpY2F0aW9uIG5vCiNLZXJiZXJvc09yTG9jYWxQYXNzd2QgeWVzCiNLZXJiZXJvc1RpY2tldENsZWFudXAgeWVzCiNLZXJiZXJvc0dldEFGU1Rva2VuIG5vCiNLZXJiZXJvc1VzZUt1c2Vyb2sgeWVzCgojIEdTU0FQSSBvcHRpb25zCiNHU1NBUElBdXRoZW50aWNhdGlvbiB5ZXMKI0dTU0FQSUNsZWFudXBDcmVkZW50aWFscyBubwojR1NTQVBJU3RyaWN0QWNjZXB0b3JDaGVjayB5ZXMKI0dTU0FQSUtleUV4Y2hhbmdlIG5vCiNHU1NBUElFbmFibGVrNXVzZXJzIG5vCgojIFNldCB0aGlzIHRvICd5ZXMnIHRvIGVuYWJsZSBQQU0gYXV0aGVudGljYXRpb24sIGFjY291bnQgcHJvY2Vzc2luZywKIyBhbmQgc2Vzc2lvbiBwcm9jZXNzaW5nLiBJZiB0aGlzIGlzIGVuYWJsZWQsIFBBTSBhdXRoZW50aWNhdGlvbiB3aWxsCiMgYmUgYWxsb3dlZCB0aHJvdWdoIHRoZSBDaGFsbGVuZ2VSZXNwb25zZUF1dGhlbnRpY2F0aW9uIGFuZAojIFBhc3N3b3JkQXV0aGVudGljYXRpb24uICBEZXBlbmRpbmcgb24geW91ciBQQU0gY29uZmlndXJhdGlvbiwKIyBQQU0gYXV0aGVudGljYXRpb24gdmlhIENoYWxsZW5nZVJlc3BvbnNlQXV0aGVudGljYXRpb24gbWF5IGJ5cGFzcwojIHRoZSBzZXR0aW5nIG9mICJQZXJtaXRSb290TG9naW4gd2l0aG91dC1wYXNzd29yZCIuCiMgSWYgeW91IGp1c3Qgd2FudCB0aGUgUEFNIGFjY291bnQgYW5kIHNlc3Npb24gY2hlY2tzIHRvIHJ1biB3aXRob3V0CiMgUEFNIGF1dGhlbnRpY2F0aW9uLCB0aGVuIGVuYWJsZSB0aGlzIGJ1dCBzZXQgUGFzc3dvcmRBdXRoZW50aWNhdGlvbgojIGFuZCBDaGFsbGVuZ2VSZXNwb25zZUF1dGhlbnRpY2F0aW9uIHRvICdubycuCiMgV0FSTklORzogJ1VzZVBBTSBubycgaXMgbm90IHN1cHBvcnRlZCBpbiBSZWQgSGF0IEVudGVycHJpc2UgTGludXggYW5kIG1heSBjYXVzZSBzZXZlcmFsCiMgcHJvYmxlbXMuClVzZVBBTSB5ZXMKCiNBbGxvd0FnZW50Rm9yd2FyZGluZyB5ZXMKI0FsbG93VGNwRm9yd2FyZGluZyB5ZXMKI0dhdGV3YXlQb3J0cyBubwpYMTFGb3J3YXJkaW5nIHllcwojWDExRGlzcGxheU9mZnNldCAxMAojWDExVXNlTG9jYWxob3N0IHllcwojUGVybWl0VFRZIHllcwojUHJpbnRNb3RkIHllcwojU
HJpbnRMYXN0TG9nIHllcwojVENQS2VlcEFsaXZlIHllcwojVXNlTG9naW4gbm8KI1Blcm1pdFVzZXJFbnZpcm9ubWVudCBubwojQ29tcHJlc3Npb24gZGVsYXllZAojQ2xpZW50QWxpdmVJbnRlcnZhbCAwCiNDbGllbnRBbGl2ZUNvdW50TWF4IDMKI1Nob3dQYXRjaExldmVsIG5vCiNVc2VETlMgeWVzCiNQaWRGaWxlIC92YXIvcnVuL3NzaGQucGlkCiNNYXhTdGFydHVwcyAxMDozMDoxMDAKI1Blcm1pdFR1bm5lbCBubwojQ2hyb290RGlyZWN0b3J5IG5vbmUKI1ZlcnNpb25BZGRlbmR1bSBub25lCgojIG5vIGRlZmF1bHQgYmFubmVyIHBhdGgKI0Jhbm5lciBub25lCgojIEFjY2VwdCBsb2NhbGUtcmVsYXRlZCBlbnZpcm9ubWVudCB2YXJpYWJsZXMKQWNjZXB0RW52IExBTkcgTENfQ1RZUEUgTENfTlVNRVJJQyBMQ19USU1FIExDX0NPTExBVEUgTENfTU9ORVRBUlkgTENfTUVTU0FHRVMKQWNjZXB0RW52IExDX1BBUEVSIExDX05BTUUgTENfQUREUkVTUyBMQ19URUxFUEhPTkUgTENfTUVBU1VSRU1FTlQKQWNjZXB0RW52IExDX0lERU5USUZJQ0FUSU9OIExDX0FMTCBMQU5HVUFHRQpBY2NlcHRFbnYgWE1PRElGSUVSUwoKIyBvdmVycmlkZSBkZWZhdWx0IG9mIG5vIHN1YnN5c3RlbXMKU3Vic3lzdGVtCXNmdHAJL3Vzci9saWJleGVjL29wZW5zc2gvc2Z0cC1zZXJ2ZXIKCiMgRXhhbXBsZSBvZiBvdmVycmlkaW5nIHNldHRpbmdzIG9uIGEgcGVyLXVzZXIgYmFzaXMKI01hdGNoIFVzZXIgYW5vbmN2cwojCVgxMUZvcndhcmRpbmcgbm8KIwlBbGxvd1RjcEZvcndhcmRpbmcgbm8KIwlQZXJtaXRUVFkgbm8KIwlGb3JjZUNvbW1hbmQgY3ZzIHNlcnZlcgo=
-EOF
-
-# remove the pgBackRest ssh keypair from the shell session
-unset pgbackrest_public_key pgbackrest_private_key
-
-# create the pgBackRest secret
-kubectl apply -f "${pgo_cluster_name}-backrest-repo-config.yaml"
-
-
-#### Step 2: Creating the PostgreSQL User Secrets
+#### Step 1: Creating the PostgreSQL User Secrets
As mentioned above, there are a minimum of three PostgreSQL user accounts that
you must create in order to bootstrap a PostgreSQL cluster. These are:
@@ -354,7 +293,7 @@ kubectl label secrets -n "${cluster_namespace}" "${pgo_cluster_name}-primaryuser
kubectl label secrets -n "${cluster_namespace}" "${pgo_cluster_name}-hippo-secret" "pg-cluster=${pgo_cluster_name}"
```
-#### Step 3: Create the PostgreSQL Cluster
+#### Step 2: Create the PostgreSQL Cluster
With the Secrets in place, it is now time to create the PostgreSQL cluster.
diff --git a/examples/create-by-resource/run.sh b/examples/create-by-resource/run.sh
index 1cdefdda77..ea034a4fe2 100755
--- a/examples/create-by-resource/run.sh
+++ b/examples/create-by-resource/run.sh
@@ -41,49 +41,10 @@ rm $DIR/fromcrd-key $DIR/fromcrd-key.pub
# EXAMPLE RUN #
###############
-# generate a SSH public/private keypair for use by pgBackRest
-ssh-keygen -t ed25519 -N '' -f $DIR/fromcrd-key
-
-# base64 encoded the keys for the generation of the Kube secret, and place
-# them into variables temporarily
-PUBLIC_KEY_TEMP=$(cat $DIR/fromcrd-key.pub | base64)
-PRIVATE_KEY_TEMP=$(cat $DIR/fromcrd-key | base64)
-
-export PUBLIC_KEY="${PUBLIC_KEY_TEMP//[$'\n']}"
-export PRIVATE_KEY="${PRIVATE_KEY_TEMP//[$'\n']}"
-
-unset PUBLIC_KEY_TEMP
-unset PRIVATE_KEY_TEMP
-
-# create the backrest-repo-config example file and substitute in the newly
-# created keys
-cat <<-EOF > $DIR/backrest-repo-config.yaml
-apiVersion: v1
-data:
- authorized_keys: ${PUBLIC_KEY}
- id_ed25519: ${PRIVATE_KEY}
- ssh_host_ed25519_key: ${PRIVATE_KEY}
- config: SG9zdCAqClN0cmljdEhvc3RLZXlDaGVja2luZyBubwpJZGVudGl0eUZpbGUgL3RtcC9pZF9lZDI1NTE5ClBvcnQgMjAyMgpVc2VyIHBnYmFja3Jlc3QK
- sshd_config: IwkkT3BlbkJTRDogc3NoZF9jb25maWcsdiAxLjEwMCAyMDE2LzA4LzE1IDEyOjMyOjA0IG5hZGR5IEV4cCAkCgojIFRoaXMgaXMgdGhlIHNzaGQgc2VydmVyIHN5c3RlbS13aWRlIGNvbmZpZ3VyYXRpb24gZmlsZS4gIFNlZQojIHNzaGRfY29uZmlnKDUpIGZvciBtb3JlIGluZm9ybWF0aW9uLgoKIyBUaGlzIHNzaGQgd2FzIGNvbXBpbGVkIHdpdGggUEFUSD0vdXNyL2xvY2FsL2JpbjovdXNyL2JpbgoKIyBUaGUgc3RyYXRlZ3kgdXNlZCBmb3Igb3B0aW9ucyBpbiB0aGUgZGVmYXVsdCBzc2hkX2NvbmZpZyBzaGlwcGVkIHdpdGgKIyBPcGVuU1NIIGlzIHRvIHNwZWNpZnkgb3B0aW9ucyB3aXRoIHRoZWlyIGRlZmF1bHQgdmFsdWUgd2hlcmUKIyBwb3NzaWJsZSwgYnV0IGxlYXZlIHRoZW0gY29tbWVudGVkLiAgVW5jb21tZW50ZWQgb3B0aW9ucyBvdmVycmlkZSB0aGUKIyBkZWZhdWx0IHZhbHVlLgoKIyBJZiB5b3Ugd2FudCB0byBjaGFuZ2UgdGhlIHBvcnQgb24gYSBTRUxpbnV4IHN5c3RlbSwgeW91IGhhdmUgdG8gdGVsbAojIFNFTGludXggYWJvdXQgdGhpcyBjaGFuZ2UuCiMgc2VtYW5hZ2UgcG9ydCAtYSAtdCBzc2hfcG9ydF90IC1wIHRjcCAjUE9SVE5VTUJFUgojClBvcnQgMjAyMgojQWRkcmVzc0ZhbWlseSBhbnkKI0xpc3RlbkFkZHJlc3MgMC4wLjAuMAojTGlzdGVuQWRkcmVzcyA6OgoKSG9zdEtleSAvc3NoZC9zc2hfaG9zdF9lZDI1NTE5X2tleQoKIyBDaXBoZXJzIGFuZCBrZXlpbmcKI1Jla2V5TGltaXQgZGVmYXVsdCBub25lCgojIExvZ2dpbmcKI1N5c2xvZ0ZhY2lsaXR5IEFVVEgKU3lzbG9nRmFjaWxpdHkgQVVUSFBSSVYKI0xvZ0xldmVsIElORk8KCiMgQXV0aGVudGljYXRpb246CgojTG9naW5HcmFjZVRpbWUgMm0KUGVybWl0Um9vdExvZ2luIG5vClN0cmljdE1vZGVzIG5vCiNNYXhBdXRoVHJpZXMgNgojTWF4U2Vzc2lvbnMgMTAKClB1YmtleUF1dGhlbnRpY2F0aW9uIHllcwoKIyBUaGUgZGVmYXVsdCBpcyB0byBjaGVjayBib3RoIC5zc2gvYXV0aG9yaXplZF9rZXlzIGFuZCAuc3NoL2F1dGhvcml6ZWRfa2V5czIKIyBidXQgdGhpcyBpcyBvdmVycmlkZGVuIHNvIGluc3RhbGxhdGlvbnMgd2lsbCBvbmx5IGNoZWNrIC5zc2gvYXV0aG9yaXplZF9rZXlzCkF1dGhvcml6ZWRLZXlzRmlsZQkvc3NoZC9hdXRob3JpemVkX2tleXMKCiNBdXRob3JpemVkUHJpbmNpcGFsc0ZpbGUgbm9uZQoKI0F1dGhvcml6ZWRLZXlzQ29tbWFuZCBub25lCiNBdXRob3JpemVkS2V5c0NvbW1hbmRVc2VyIG5vYm9keQoKIyBGb3IgdGhpcyB0byB3b3JrIHlvdSB3aWxsIGFsc28gbmVlZCBob3N0IGtleXMgaW4gL2V0Yy9zc2gvc3NoX2tub3duX2hvc3RzCiNIb3N0YmFzZWRBdXRoZW50aWNhdGlvbiBubwojIENoYW5nZSB0byB5ZXMgaWYgeW91IGRvbid0IHRydXN0IH4vLnNzaC9rbm93bl9ob3N0cyBmb3IKIyBIb3N0YmFzZWRBdXRoZW50aWNhdGlvbgojSWdub3JlVXNlcktub3duSG9zdHMgbm8KIyBEb24ndCByZWFkIHRoZSB1c
2VyJ3Mgfi8ucmhvc3RzIGFuZCB+Ly5zaG9zdHMgZmlsZXMKI0lnbm9yZVJob3N0cyB5ZXMKCiMgVG8gZGlzYWJsZSB0dW5uZWxlZCBjbGVhciB0ZXh0IHBhc3N3b3JkcywgY2hhbmdlIHRvIG5vIGhlcmUhCiNQYXNzd29yZEF1dGhlbnRpY2F0aW9uIHllcwojUGVybWl0RW1wdHlQYXNzd29yZHMgbm8KUGFzc3dvcmRBdXRoZW50aWNhdGlvbiBubwoKIyBDaGFuZ2UgdG8gbm8gdG8gZGlzYWJsZSBzL2tleSBwYXNzd29yZHMKQ2hhbGxlbmdlUmVzcG9uc2VBdXRoZW50aWNhdGlvbiB5ZXMKI0NoYWxsZW5nZVJlc3BvbnNlQXV0aGVudGljYXRpb24gbm8KCiMgS2VyYmVyb3Mgb3B0aW9ucwojS2VyYmVyb3NBdXRoZW50aWNhdGlvbiBubwojS2VyYmVyb3NPckxvY2FsUGFzc3dkIHllcwojS2VyYmVyb3NUaWNrZXRDbGVhbnVwIHllcwojS2VyYmVyb3NHZXRBRlNUb2tlbiBubwojS2VyYmVyb3NVc2VLdXNlcm9rIHllcwoKIyBHU1NBUEkgb3B0aW9ucwojR1NTQVBJQXV0aGVudGljYXRpb24geWVzCiNHU1NBUElDbGVhbnVwQ3JlZGVudGlhbHMgbm8KI0dTU0FQSVN0cmljdEFjY2VwdG9yQ2hlY2sgeWVzCiNHU1NBUElLZXlFeGNoYW5nZSBubwojR1NTQVBJRW5hYmxlazV1c2VycyBubwoKIyBTZXQgdGhpcyB0byAneWVzJyB0byBlbmFibGUgUEFNIGF1dGhlbnRpY2F0aW9uLCBhY2NvdW50IHByb2Nlc3NpbmcsCiMgYW5kIHNlc3Npb24gcHJvY2Vzc2luZy4gSWYgdGhpcyBpcyBlbmFibGVkLCBQQU0gYXV0aGVudGljYXRpb24gd2lsbAojIGJlIGFsbG93ZWQgdGhyb3VnaCB0aGUgQ2hhbGxlbmdlUmVzcG9uc2VBdXRoZW50aWNhdGlvbiBhbmQKIyBQYXNzd29yZEF1dGhlbnRpY2F0aW9uLiAgRGVwZW5kaW5nIG9uIHlvdXIgUEFNIGNvbmZpZ3VyYXRpb24sCiMgUEFNIGF1dGhlbnRpY2F0aW9uIHZpYSBDaGFsbGVuZ2VSZXNwb25zZUF1dGhlbnRpY2F0aW9uIG1heSBieXBhc3MKIyB0aGUgc2V0dGluZyBvZiAiUGVybWl0Um9vdExvZ2luIHdpdGhvdXQtcGFzc3dvcmQiLgojIElmIHlvdSBqdXN0IHdhbnQgdGhlIFBBTSBhY2NvdW50IGFuZCBzZXNzaW9uIGNoZWNrcyB0byBydW4gd2l0aG91dAojIFBBTSBhdXRoZW50aWNhdGlvbiwgdGhlbiBlbmFibGUgdGhpcyBidXQgc2V0IFBhc3N3b3JkQXV0aGVudGljYXRpb24KIyBhbmQgQ2hhbGxlbmdlUmVzcG9uc2VBdXRoZW50aWNhdGlvbiB0byAnbm8nLgojIFdBUk5JTkc6ICdVc2VQQU0gbm8nIGlzIG5vdCBzdXBwb3J0ZWQgaW4gUmVkIEhhdCBFbnRlcnByaXNlIExpbnV4IGFuZCBtYXkgY2F1c2Ugc2V2ZXJhbAojIHByb2JsZW1zLgpVc2VQQU0geWVzIAoKI0FsbG93QWdlbnRGb3J3YXJkaW5nIHllcwojQWxsb3dUY3BGb3J3YXJkaW5nIHllcwojR2F0ZXdheVBvcnRzIG5vClgxMUZvcndhcmRpbmcgeWVzCiNYMTFEaXNwbGF5T2Zmc2V0IDEwCiNYMTFVc2VMb2NhbGhvc3QgeWVzCiNQZXJtaXRUVFkgeWVzCiNQcmludE1vdGQgeWVzCiNQcmludExhc3RMb2cgeWVzCiNUQ1BLZWVwQWxpdmUgeWVzCiNVc2VMb2dpb
iBubwpVc2VQcml2aWxlZ2VTZXBhcmF0aW9uIG5vCiNQZXJtaXRVc2VyRW52aXJvbm1lbnQgbm8KI0NvbXByZXNzaW9uIGRlbGF5ZWQKI0NsaWVudEFsaXZlSW50ZXJ2YWwgMAojQ2xpZW50QWxpdmVDb3VudE1heCAzCiNTaG93UGF0Y2hMZXZlbCBubwojVXNlRE5TIHllcwojUGlkRmlsZSAvdmFyL3J1bi9zc2hkLnBpZAojTWF4U3RhcnR1cHMgMTA6MzA6MTAwCiNQZXJtaXRUdW5uZWwgbm8KI0Nocm9vdERpcmVjdG9yeSBub25lCiNWZXJzaW9uQWRkZW5kdW0gbm9uZQoKIyBubyBkZWZhdWx0IGJhbm5lciBwYXRoCiNCYW5uZXIgbm9uZQoKIyBBY2NlcHQgbG9jYWxlLXJlbGF0ZWQgZW52aXJvbm1lbnQgdmFyaWFibGVzCkFjY2VwdEVudiBMQU5HIExDX0NUWVBFIExDX05VTUVSSUMgTENfVElNRSBMQ19DT0xMQVRFIExDX01PTkVUQVJZIExDX01FU1NBR0VTCkFjY2VwdEVudiBMQ19QQVBFUiBMQ19OQU1FIExDX0FERFJFU1MgTENfVEVMRVBIT05FIExDX01FQVNVUkVNRU5UCkFjY2VwdEVudiBMQ19JREVOVElGSUNBVElPTiBMQ19BTEwgTEFOR1VBR0UKQWNjZXB0RW52IFhNT0RJRklFUlMKCiMgb3ZlcnJpZGUgZGVmYXVsdCBvZiBubyBzdWJzeXN0ZW1zClN1YnN5c3RlbQlzZnRwCS91c3IvbGliZXhlYy9vcGVuc3NoL3NmdHAtc2VydmVyCgojIEV4YW1wbGUgb2Ygb3ZlcnJpZGluZyBzZXR0aW5ncyBvbiBhIHBlci11c2VyIGJhc2lzCiNNYXRjaCBVc2VyIGFub25jdnMKIwlYMTFGb3J3YXJkaW5nIG5vCiMJQWxsb3dUY3BGb3J3YXJkaW5nIG5vCiMJUGVybWl0VFRZIG5vCiMJRm9yY2VDb21tYW5kIGN2cyBzZXJ2ZXI=
-kind: Secret
-metadata:
- labels:
- pg-cluster: fromcrd
- pgo-backrest-repo: "true"
- name: fromcrd-backrest-repo-config
- namespace: ${NS}
-type: Opaque
-EOF
-
-# unset the *_KEY environmental variables
-unset PUBLIC_KEY
-unset PRIVATE_KEY
-
# create the required postgres credentials for the fromcrd cluster
$PGO_CMD -n $NS create -f $DIR/postgres-secret.yaml
$PGO_CMD -n $NS create -f $DIR/primaryuser-secret.yaml
$PGO_CMD -n $NS create -f $DIR/testuser-secret.yaml
-$PGO_CMD -n $NS create -f $DIR/backrest-repo-config.yaml
# create the pgcluster CRD for the fromcrd cluster
$PGO_CMD -n $NS create -f $DIR/fromcrd.json
diff --git a/examples/helm/README.md b/examples/helm/README.md
index 09d06cbeb3..390bfbbaae 100644
--- a/examples/helm/README.md
+++ b/examples/helm/README.md
@@ -1,23 +1,32 @@
# create-cluster
This is a working example of how to create a cluster via the custom resource (CRD) workflow
-using a helm chart
+using a [Helm](https://helm.sh/) chart.
+
+## Prerequisites
+
+### Postgres Operator
-## Assumptions
This example assumes you have the Crunchy PostgreSQL Operator installed
-in a namespace called pgo.
+in a namespace called `pgo`.
+
+### Helm
-## Helm
Helm will also need to be installed for this example to run.
-## Documenation
+## Documentation
+
Please see the documentation for more guidance using custom resources:
https://access.crunchydata.com/documentation/postgres-operator/latest/custom-resources/
+## Setup
+
+If you are running Postgres Operator 4.5.1 or later, you can skip the step
+below.
+
+### Before 4.5.1
-## Example set up and execution
-create a certs directy and generate certs
```
cd postgres-operator/examples/helm/create-cluster
@@ -29,27 +38,32 @@ export pgo_cluster_name=hippo
# generate a SSH public/private keypair for use by pgBackRest
ssh-keygen -t ed25519 -N '' -f "${pgo_cluster_name}-key"
-
```
-For this example we will deploy the cluster into the pgo
-namespace where the opertor is installed and running.
-return to the create-cluster directory
+## Running the Example
+
+For this example we will deploy the cluster into the `pgo` namespace where the
+Postgres Operator is installed and running.
+
+Return to the `create-cluster` directory:
+
```
cd postgres-operator/examples/helm/create-cluster
```
-The following commands will allow you to execute a dry run first with debug
-if you want to verify everthing is set correctly. Then after everything looks good
-run the install command with out the flags
+The following commands allow you to execute a dry run with debug output first,
+in order to verify that everything is set correctly. Once everything looks
+good, run the install command without the flags:
+
```
helm install --dry-run --debug postgres-operator-create-cluster . -n pgo
-
helm install postgres-operator-create-cluster . -n pgo
```
+
## Verify
-Now you can your Hippo cluster has deployed into the pgo
-namespace by running these few commands
+
+Now you can verify that your Hippo cluster has deployed into the `pgo`
+namespace by running these few commands:
```
kubectl get all -n pgo
@@ -58,7 +72,8 @@ pgo test hippo -n pgo
pgo show cluster hippo -n pgo
```
+
## NOTE
-As of operator version 4.5.0 when using helm uninstall you will have to manually
-clean up some left over artifacts afer running the unistall
+As of operator version 4.5.0, when using `helm uninstall` you will have to
+manually clean up some leftover artifacts after running the uninstall.
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 67eaea056c..37ce1f0eab 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -1001,17 +1001,25 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
backrestS3CACert = backrestSecret.Data[util.BackRestRepoSecretKeyAWSS3KeyAWSS3CACert]
}
- err := util.CreateBackrestRepoSecrets(apiserver.Clientset,
- util.BackrestRepoConfig{
- BackrestS3CA: backrestS3CACert,
- BackrestS3Key: request.BackrestS3Key,
- BackrestS3KeySecret: request.BackrestS3KeySecret,
- ClusterName: clusterName,
- ClusterNamespace: request.Namespace,
- OperatorNamespace: apiserver.PgoNamespace,
- })
-
- if err != nil {
+ // set up the secret for the cluster that contains the pgBackRest
+ // information
+ secret := &v1.Secret{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: secretName,
+ Labels: map[string]string{
+ config.LABEL_VENDOR: config.LABEL_CRUNCHY,
+ config.LABEL_PG_CLUSTER: clusterName,
+ config.LABEL_PGO_BACKREST_REPO: "true",
+ },
+ },
+ Data: map[string][]byte{
+ util.BackRestRepoSecretKeyAWSS3KeyAWSS3CACert: backrestS3CACert,
+ util.BackRestRepoSecretKeyAWSS3KeyAWSS3Key: []byte(request.BackrestS3Key),
+ util.BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret: []byte(request.BackrestS3KeySecret),
+ },
+ }
+
+ if _, err := apiserver.Clientset.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{}); err != nil && !kubeapi.IsAlreadyExists(err) {
resp.Status.Code = msgs.Error
resp.Status.Msg = fmt.Sprintf("could not create backrest repo secret: %s", err)
return resp
diff --git a/internal/operator/backrest/repo.go b/internal/operator/backrest/repo.go
index 68c1152056..e53427fd1d 100644
--- a/internal/operator/backrest/repo.go
+++ b/internal/operator/backrest/repo.go
@@ -159,6 +159,19 @@ func CreateRepoDeployment(clientset kubernetes.Interface, cluster *crv1.Pgcluste
return nil
}
+// CreateRepoSecret allows for the creation of the Secret used to populate
+// some (mostly) sensitive fields for managing the pgBackRest repository.
+//
+// If the Secret already exists, then missing fields will be overwritten.
+func CreateRepoSecret(clientset kubernetes.Interface, cluster *crv1.Pgcluster) error {
+ return util.CreateBackrestRepoSecrets(clientset,
+ util.BackrestRepoConfig{
+ ClusterName: cluster.Name,
+ ClusterNamespace: cluster.Namespace,
+ OperatorNamespace: operator.PgoNamespace,
+ })
+}
+
// setBootstrapRepoOverrides overrides certain fields used to populate the pgBackRest repository template
// as needed to support the creation of a bootstrap repository need to bootstrap a new cluster from an
// existing data source.
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 474e92a52a..651cba0aa6 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -89,6 +89,14 @@ func AddClusterBase(clientset kubeapi.Interface, cl *crv1.Pgcluster, namespace s
return
}
+	// ensure the pgBackRest Secret is created. If this fails, we have to
+ // abort
+ if err := backrest.CreateRepoSecret(clientset, cl); err != nil {
+ log.Error(err)
+ publishClusterCreateFailure(cl, err.Error())
+ return
+ }
+
if err := annotateBackrestSecret(clientset, cl); err != nil {
log.Error(err)
publishClusterCreateFailure(cl, err.Error())
diff --git a/internal/util/cluster.go b/internal/util/cluster.go
index b0b72ea6dd..185bc04035 100644
--- a/internal/util/cluster.go
+++ b/internal/util/cluster.go
@@ -28,6 +28,7 @@ import (
log "github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
+ kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
@@ -117,72 +118,112 @@ func CreateBackrestRepoSecrets(clientset kubernetes.Interface,
backrestRepoConfig BackrestRepoConfig) error {
ctx := context.TODO()
- keys, err := NewPrivatePublicKeyPair()
- if err != nil {
- return err
+ // first: determine if a Secret already exists. If it does, we are going to
+ // work on modifying that Secret.
+ secretName := fmt.Sprintf("%s-%s", backrestRepoConfig.ClusterName,
+ config.LABEL_BACKREST_REPO_SECRET)
+ secret, secretErr := clientset.CoreV1().Secrets(backrestRepoConfig.ClusterNamespace).Get(
+ ctx, secretName, metav1.GetOptions{})
+
+	// only return an error if this is **not** a "not found" error
+ if secretErr != nil && !kerrors.IsNotFound(secretErr) {
+ log.Error(secretErr)
+ return secretErr
+ }
+
+ // determine if we need to create a new secret, i.e. this is a not found error
+ newSecret := secretErr != nil
+ if newSecret {
+ // set up the secret for the cluster that contains the pgBackRest information
+ secret = &v1.Secret{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: secretName,
+ Labels: map[string]string{
+ config.LABEL_VENDOR: config.LABEL_CRUNCHY,
+ config.LABEL_PG_CLUSTER: backrestRepoConfig.ClusterName,
+ config.LABEL_PGO_BACKREST_REPO: "true",
+ },
+ },
+ Data: map[string][]byte{},
+ }
}
- // Retrieve the S3/SSHD configuration files from secret
- configs, err := clientset.
+	// next, load the Operator-level pgBackRest secret templates, which contain
+	// the SSH/SSHD configuration and possibly the default S3 credentials
+ configs, configErr := clientset.
CoreV1().Secrets(backrestRepoConfig.OperatorNamespace).
Get(ctx, "pgo-backrest-repo-config", metav1.GetOptions{})
- if err != nil {
- log.Error(err)
- return err
+ if configErr != nil {
+ log.Error(configErr)
+ return configErr
+ }
+
+ // set the SSH/SSHD configuration, if it is not presently set
+ for _, key := range []string{backRestRepoSecretKeySSHConfig, backRestRepoSecretKeySSHDConfig} {
+ if len(secret.Data[key]) == 0 {
+ secret.Data[key] = configs.Data[key]
+ }
}
- // if an S3 key has been provided via the request, then use key and key secret
- // included in the request instead of the default credentials that are
- // available in the Operator pgBackRest secret
- backrestS3Key := []byte(backrestRepoConfig.BackrestS3Key)
+ // set the SSH keys if any appear to be unset
+ if len(secret.Data[backRestRepoSecretKeyAuthorizedKeys]) == 0 ||
+ len(secret.Data[backRestRepoSecretKeySSHPrivateKey]) == 0 ||
+ len(secret.Data[backRestRepoSecretKeySSHHostPrivateKey]) == 0 {
+ // generate the keypair and then assign it to the values in the Secret
+ keys, keyErr := NewPrivatePublicKeyPair()
+
+ if keyErr != nil {
+ log.Error(keyErr)
+ return keyErr
+ }
+
+ secret.Data[backRestRepoSecretKeyAuthorizedKeys] = keys.Public
+ secret.Data[backRestRepoSecretKeySSHPrivateKey] = keys.Private
+ secret.Data[backRestRepoSecretKeySSHHostPrivateKey] = keys.Private
+ }
- if backrestRepoConfig.BackrestS3Key == "" {
- backrestS3Key = configs.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3Key]
+ // Set the S3 credentials
+ // If explicit S3 credentials are passed in, use those.
+ // If the Secret already has S3 credentials, use those.
+ // Otherwise, try to load in the default credentials from the Operator Secret.
+ if len(backrestRepoConfig.BackrestS3CA) != 0 {
+ secret.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3CACert] = backrestRepoConfig.BackrestS3CA
}
- backrestS3KeySecret := []byte(backrestRepoConfig.BackrestS3KeySecret)
+ if len(secret.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3CACert]) == 0 &&
+ len(configs.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3CACert]) != 0 {
+ secret.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3CACert] = configs.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3CACert]
+ }
- if backrestRepoConfig.BackrestS3KeySecret == "" {
- backrestS3KeySecret = configs.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret]
+ if backrestRepoConfig.BackrestS3Key != "" {
+ secret.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3Key] = []byte(backrestRepoConfig.BackrestS3Key)
}
- // determine if there is a CA override provided, and if not, use the default
- // from the configuration
- caCert := backrestRepoConfig.BackrestS3CA
- if len(caCert) == 0 {
- caCert = configs.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3CACert]
+ if len(secret.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3Key]) == 0 &&
+ len(configs.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3Key]) != 0 {
+ secret.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3Key] = configs.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3Key]
}
- // set up the secret for the cluster that contains the pgBackRest information
- secret := v1.Secret{
- ObjectMeta: metav1.ObjectMeta{
- Name: fmt.Sprintf("%s-%s", backrestRepoConfig.ClusterName,
- config.LABEL_BACKREST_REPO_SECRET),
- Labels: map[string]string{
- config.LABEL_VENDOR: config.LABEL_CRUNCHY,
- config.LABEL_PG_CLUSTER: backrestRepoConfig.ClusterName,
- config.LABEL_PGO_BACKREST_REPO: "true",
- },
- },
- Data: map[string][]byte{
- BackRestRepoSecretKeyAWSS3KeyAWSS3CACert: caCert,
- BackRestRepoSecretKeyAWSS3KeyAWSS3Key: backrestS3Key,
- BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret: backrestS3KeySecret,
- backRestRepoSecretKeyAuthorizedKeys: keys.Public,
- backRestRepoSecretKeySSHConfig: configs.Data[backRestRepoSecretKeySSHConfig],
- backRestRepoSecretKeySSHDConfig: configs.Data[backRestRepoSecretKeySSHDConfig],
- backRestRepoSecretKeySSHPrivateKey: keys.Private,
- backRestRepoSecretKeySSHHostPrivateKey: keys.Private,
- },
+ if backrestRepoConfig.BackrestS3KeySecret != "" {
+ secret.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret] = []byte(backrestRepoConfig.BackrestS3KeySecret)
}
- _, err = clientset.CoreV1().Secrets(backrestRepoConfig.ClusterNamespace).
- Create(ctx, &secret, metav1.CreateOptions{})
- if kubeapi.IsAlreadyExists(err) {
- _, err = clientset.CoreV1().Secrets(backrestRepoConfig.ClusterNamespace).
- Update(ctx, &secret, metav1.UpdateOptions{})
+ if len(secret.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret]) == 0 &&
+ len(configs.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret]) != 0 {
+ secret.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret] = configs.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret]
}
+
+ // time to create or update the secret!
+ if newSecret {
+ _, err := clientset.CoreV1().Secrets(backrestRepoConfig.ClusterNamespace).Create(
+ ctx, secret, metav1.CreateOptions{})
+ return err
+ }
+
+ _, err := clientset.CoreV1().Secrets(backrestRepoConfig.ClusterNamespace).Update(
+ ctx, secret, metav1.UpdateOptions{})
+
return err
}
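The per-key S3 precedence that `CreateBackrestRepoSecrets` applies above (explicit request value, then the value already in the cluster Secret, then the Operator-level default) can be sketched as a single helper. The helper name is illustrative; the real code inlines this logic for each key.

```go
package main

import "fmt"

// resolveCredential sketches the per-key S3 credential precedence used in
// CreateBackrestRepoSecrets: an explicit value from the request wins, then a
// value already present in the cluster Secret, then the Operator-level
// default. The helper name is illustrative; the real code inlines this logic.
func resolveCredential(explicit, existing, operatorDefault []byte) []byte {
	switch {
	case len(explicit) != 0:
		return explicit
	case len(existing) != 0:
		return existing
	default:
		return operatorDefault
	}
}

func main() {
	// no explicit credential in the request, so the cluster Secret's value wins
	fmt.Println(string(resolveCredential(nil, []byte("from-secret"), []byte("operator-default"))))
}
```

Because the existing Secret value outranks the Operator default, a partially filled-out Secret is never clobbered on update, which is what makes the heuristic in this patch safe to re-run.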
From cc2aa0cc648d4f38c85397b601a8bf415d10f634 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 27 Oct 2020 15:03:32 -0400
Subject: [PATCH 002/276] Reorder custom resource documentation
Move the attributes to the latter half of the page, and showcase
the workflows at the top.
---
docs/content/custom-resources/_index.md | 396 ++++++++++++------------
1 file changed, 198 insertions(+), 198 deletions(-)
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index af913755e9..7e024900f4 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -35,204 +35,6 @@ need to interface through the [`pgo` client]({{< relref "/pgo-client/_index.md"
The following sections will describe the functionality that is available today
when manipulating the PostgreSQL Operator Custom Resources directly.
-## PostgreSQL Operator Custom Resource Definitions
-
-There are several PostgreSQL Operator Custom Resource Definitions (CRDs) that
-are installed in order for the PostgreSQL Operator to successfully function:
-
-- `pgclusters.crunchydata.com`: Stores information required to manage a
-PostgreSQL cluster. This includes things like the cluster name, what storage and
-resource classes to use, which version of PostgreSQL to run, information about
-how to maintain a high-availability cluster, etc.
-- `pgreplicas.crunchydata.com`: Stores information required to manage the
-replicas within a PostgreSQL cluster. This includes things like the number of
-replicas, what storage and resource classes to use, special affinity rules, etc.
-- `pgtasks.crunchydata.com`: A general purpose CRD that accepts a type of task
-that is needed to run against a cluster (e.g. take a backup) and tracks the
-state of said task through its workflow.
-- `pgpolicies.crunchydata.com`: Stores a reference to a SQL file that can be
-executed against a PostgreSQL cluster. In the past, this was used to manage RLS
-policies on PostgreSQL clusters.
-
-Below takes an in depth look for what each attribute does in a Custom Resource
-Definition, and how they can be used in the creation and update workflow.
-
-### Glossary
-
-- `create`: if an attribute is listed as `create`, it means it can affect what
-happens when a new Custom Resource is created.
-- `update`: if an attribute is listed as `update`, it means it can affect the
-Custom Resource, and by extension the objects it manages, when the attribute is
-updated.
-
-### `pgclusters.crunchydata.com`
-
-The `pgclusters.crunchydata.com` Custom Resource Definition is the fundamental
-definition of a PostgreSQL cluster. Most attributes only affect the deployment
-of a PostgreSQL cluster at the time the PostgreSQL cluster is created. Some
-attributes can be modified during the lifetime of the PostgreSQL cluster and
-make changes, as described below.
-
-#### Specification (`Spec`)
-
-| Attribute | Action | Description |
-|-----------|--------|-------------|
-| Annotations | `create`, `update` | Specify Kubernetes [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) that can be applied to the different deployments managed by the PostgreSQL Operator (PostgreSQL, pgBackRest, pgBouncer). For more information, please see the "Annotations Specification" below. |
-| BackrestConfig | `create` | Optional references to pgBackRest configuration files
-| BackrestLimits | `create`, `update` | Specify the container resource limits that the pgBackRest repository should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| BackrestResources | `create`, `update` | Specify the container resource requests that the pgBackRest repository should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| BackrestS3Bucket | `create` | An optional parameter that specifies a S3 bucket that pgBackRest should use. |
-| BackrestS3Endpoint | `create` | An optional parameter that specifies the S3 endpoint pgBackRest should use. |
-| BackrestS3Region | `create` | An optional parameter that specifies a cloud region that pgBackRest should use. |
-| BackrestS3URIStyle | `create` | An optional parameter that specifies if pgBackRest should use the `path` or `host` S3 URI style. |
-| BackrestS3VerifyTLS | `create` | An optional parameter that specifies if pgBackRest should verify the TLS endpoint. |
-| BackrestStorage | `create` | A specification that gives information about the storage attributes for the pgBackRest repository, which stores backups and archives, of the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This is required. |
-| CCPImage | `create` | The name of the PostgreSQL container image to use, e.g. `crunchy-postgres-ha` or `crunchy-postgres-ha-gis`. |
-| CCPImagePrefix | `create` | If provided, the image prefix (or registry) of the PostgreSQL container image, e.g. `registry.developers.crunchydata.com/crunchydata`. The default is to use the image prefix set in the PostgreSQL Operator configuration. |
-| CCPImageTag | `create` | The tag of the PostgreSQL container image to use, e.g. `{{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}}`. |
-| CollectSecretName | `create` | An optional attribute unless `crunchy-postgres-exporter` is specified in the `UserLabels`; contains the name of a Kubernetes Secret that contains the credentials for a PostgreSQL user that is used for metrics collection, and is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.|
-| ClusterName | `create` | The name of the PostgreSQL cluster, e.g. `hippo`. This is used to group PostgreSQL instances (primary, replicas) together. |
-| CustomConfig | `create` | If specified, references a custom ConfigMap to use when bootstrapping a PostgreSQL cluster. For the shape of this file, please see the section on [Custom Configuration]({{< relref "/advanced/custom-configuration.md" >}}) |
-| Database | `create` | The name of a database that the PostgreSQL user can log into after the PostgreSQL cluster is created. |
-| ExporterLimits | `create`, `update` | Specify the container resource limits that the `crunchy-postgres-exporter` sidecar uses when it is deployed with a PostgreSQL instance. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| ExporterPort | `create` | If the `"crunchy-postgres-exporter"` label is set in `UserLabels`, then this specifies the port that the metrics sidecar runs on (e.g. `9187`) |
-| ExporterResources | `create`, `update` | Specify the container resource requests that the `crunchy-postgres-exporter` sidecar uses when it is deployed with a PostgreSQL instance. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| Limits | `create`, `update` | Specify the container resource limits that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| Name | `create` | The name of the PostgreSQL instance that is the primary. On creation, this should be set to be the same as `ClusterName`. |
-| Namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
-| PGBadgerPort | `create` | If the `"crunchy-pgbadger"` label is set in `UserLabels`, then this specifies the port that the pgBadger sidecar runs on (e.g. `10000`) |
-| PGDataSource | `create` | Used to indicate if a PostgreSQL cluster should bootstrap its data from a pgBackRest repository. This uses the PostgreSQL Data Source Specification, described below. |
-| PGOImagePrefix | `create` | If provided, the image prefix (or registry) of any PostgreSQL Operator images that are used for jobs, e.g. `registry.developers.crunchydata.com/crunchydata`. The default is to use the image prefix set in the PostgreSQL Operator configuration. |
-| PgBouncer | `create`, `update` | If specified, defines the attributes to use for the pgBouncer connection pooling deployment that can be used in conjunction with this PostgreSQL cluster. Please see the specification defined below. |
-| PodAntiAffinity | `create` | A required section. Sets the [pod anti-affinity rules]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity" >}}) for the PostgreSQL cluster and associated deployments. Please see the `Pod Anti-Affinity Specification` section below. |
-| Policies | `create` | If provided, a comma-separated list referring to `pgpolicies.crunchydata.com.Spec.Name` that should be run once the PostgreSQL primary is first initialized. |
-| Port | `create` | The port that PostgreSQL will run on, e.g. `5432`. |
-| PrimaryStorage | `create` | A specification that gives information about the storage attributes for the primary instance in the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This is required. |
-| RootSecretName | `create` | The name of a Kubernetes Secret that contains the credentials for a PostgreSQL _replication user_ that is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.|
-| ReplicaStorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to that of `PrimaryStorage`. |
-| Replicas | `create` | The number of replicas to create after a PostgreSQL primary is first initialized. This only works on create; to scale a cluster after it is initialized, please use the [`pgo scale`]({{< relref "/pgo-client/reference/pgo_scale.md" >}}) command. |
-| Resources | `create`, `update` | Specify the container resource requests that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| RootSecretName | `create` | The name of a Kubernetes Secret that contains the credentials for a PostgreSQL superuser that is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.|
-| SyncReplication | `create` | If set to `true`, specifies the PostgreSQL cluster to use [synchronous replication]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity#synchronous-replication-guarding-against-transactions-loss" >}}).|
-| User | `create` | The name of the PostgreSQL user that is created when the PostgreSQL cluster is first created. |
-| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" for things that really should be modeled in the CRD. These values do get copied to the actually CR labels. If you want to set up metrics collection or pgBadger, you would specify `"crunchy-postgres-exporter": "true"` and `"crunchy-pgbadger": "true"` here, respectively. However, this structure does need to be set, so just follow whatever is in the example. |
-| UserSecretName | `create` | The name of a Kubernetes Secret that contains the credentials for a standard PostgreSQL user that is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.|
-| TablespaceMounts | `create`,`update` | Lists any tablespaces that are attached to the PostgreSQL cluster. Tablespaces can be added at a later time by updating the `TablespaceMounts` entry, but they cannot be removed. Stores a map of information, with the key being the name of the tablespace, and the value being a Storage Specification, defined below. |
-| TLS | `create` | Defines the attributes for enabling TLS for a PostgreSQL cluster. See TLS Specification below. |
-| TLSOnly | `create` | If set to true, requires client connections to use only TLS to connect to the PostgreSQL database. |
-| Standby | `create`, `update` | If set to true, indicates that the PostgreSQL cluster is a "standby" cluster, i.e. is in read-only mode entirely. Please see [Kubernetes Multi-Cluster Deployments]({{< relref "/architecture/high-availability/multi-cluster-kubernetes.md" >}}) for more information. |
-| Shutdown | `create`, `update` | If set to true, indicates that a PostgreSQL cluster should shutdown. If set to false, indicates that a PostgreSQL cluster should be up and running. |
-
-##### Storage Specification
-
-The storage specification is a spec that defines attributes about the storage to
-be used for a particular function of a PostgreSQL cluster (e.g. a primary
-instance or for the pgBackRest backup repository). The below describes each
-attribute and how it works.
-
-| Attribute | Action | Description |
-|-----------|--------|-------------|
-| AccessMode| `create` | The name of the Kubernetes Persistent Volume [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) to use. |
-| MatchLabels | `create` | Only used with `StorageType` of `create`, used to match a particular subset of provisioned Persistent Volumes. |
-| Name | `create` | Only needed for `PrimaryStorage` in `pgclusters.crunchydata.com`.Used to identify the name of the PostgreSQL cluster. Should match `ClusterName`. |
-| Size | `create` | The size of the [Persistent Volume Claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVC). Must use a Kubernetes resource value, e.g. `20Gi`. |
-| StorageClass | `create` | The name of the Kubernetes [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) to use. |
-| StorageType | `create` | Set to `create` if storage is provisioned (e.g. using `hostpath`). Set to `dynamic` if using a dynamic storage provisioner, e.g. via a `StorageClass`. |
-| SupplementalGroups | `create` | If provided, a comma-separated list of group IDs to use in case it is needed to interface with a particular storage system. Typically used with NFS or hostpath storage. |
-
-##### Pod Anti-Affinity Specification
-
-Sets the [pod anti-affinity]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity" >}})
-for the PostgreSQL cluster and associated deployments. Each attribute can
-contain one of the following values:
-
-- `required`
-- `preferred` (which is also the recommended default)
-- `disabled`
-
-For a detailed explanation for how this works. Please see the [high-availability]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity" >}})
-documentation.
-
-| Attribute | Action | Description |
-|-----------|--------|-------------|
-| Default | `create` | The default pod anti-affinity to use for all Pods managed in a given PostgreSQL cluster. |
-| PgBackRest | `create` | If set to a value that differs from `Default`, specifies the pod anti-affinity to use for just the pgBackRest repository. |
-| PgBouncer | `create` | If set to a value that differs from `Default`, specifies the pod anti-affinity to use for just the pgBouncer Pods. |
-
-##### PostgreSQL Data Source Specification
-
-This specification is used when one wants to bootstrap the data in a PostgreSQL
-cluster from a pgBackRest repository. This can be a pgBackRest repository that
-is attached to an active PostgreSQL cluster or is kept around to be used for
-spawning new PostgreSQL clusters.
-
-| Attribute | Action | Description |
-|-----------|--------|-------------|
-| RestoreFrom | `create` | The name of a PostgreSQL cluster, active or former, that will be used for bootstrapping the data of a new PostgreSQL cluster. |
-| RestoreOpts | `create` | Additional pgBackRest [restore options](https://pgbackrest.org/command.html#command-restore) that can be used as part of the bootstrapping operation, for example, point-in-time-recovery options. |
-
-##### TLS Specification
-
-The TLS specification makes a reference to the various secrets that are required
-to enable TLS in a PostgreSQL cluster. For more information on how these secrets
-should be structured, please see [Enabling TLS in a PostgreSQL Cluster]({{< relref "/pgo-client/common-tasks.md#enable-tls" >}}).
-
-| Attribute | Action | Description |
-|-----------|--------|-------------|
-| CASecret | `create` | A reference to the name of a Kubernetes Secret that specifies a certificate authority for the PostgreSQL cluster to trust. |
-| ReplicationTLSSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair for authenticating the replication user. Must be used with `CASecret` and `TLSSecret`. |
-| TLSSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair that is used for the PostgreSQL instance to identify itself and perform TLS communications with PostgreSQL clients. Must be used with `CASecret`. |
-
-##### pgBouncer Specification
-
-The pgBouncer specification defines how a pgBouncer deployment can be deployed
-alongside the PostgreSQL cluster. pgBouncer is a PostgreSQL connection pooler
-that can also help manage connection state, and is helpful to deploy alongside
-a PostgreSQL cluster to help with failover scenarios too.
-
-| Attribute | Action | Description |
-|-----------|--------|-------------|
-| Limits | `create`, `update` | Specify the container resource limits that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| Replicas | `create`, `update` | The number of pgBouncer instances to deploy. Must be set to at least `1` to deploy pgBouncer. Setting to `0` removes an existing pgBouncer deployment for the PostgreSQL cluster. |
-| Resources | `create`, `update` | Specify the container resource requests that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-
-##### Annotations Specification
-
-The `pgcluster.crunchydata.com` specification contains a block that allows for
-custom [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/)
-to be applied to the Deployments that are managed by the PostgreSQL Operator,
-including:
-
-- PostgreSQL
-- pgBackRest
-- pgBouncer
-
-This also includes the option to apply Annotations globally across the three
-different deployment groups.
-
-| Attribute | Action | Description |
-|-----------|--------|-------------|
-| Backrest | `create`, `update` | Specify annotations that are only applied to the pgBackRest deployments |
-| Global | `create`, `update` | Specify annotations that are applied to the PostgreSQL, pgBackRest, and pgBouncer deployments |
-| PgBouncer | `create`, `update` | Specify annotations that are only applied to the pgBouncer deployments |
-| Postgres | `create`, `update` | Specify annotations that are only applied to the PostgreSQL deployments |
-
-### `pgreplicas.crunchydata.com`
-
-The `pgreplicas.crunchydata.com` Custom Resource Definition contains information
-pertaning to the structure of PostgreSQL replicas associated within a PostgreSQL
-cluster. All of the attributes only affect the replica when it is created.
-
-#### Specification (`Spec`)
-
-| Attribute | Action | Description |
-|-----------|--------|-------------|
-| ClusterName | `create` | The name of the PostgreSQL cluster, e.g. `hippo`. This is used to group PostgreSQL instances (primary, replicas) together. |
-| Name | `create` | The name of this PostgreSQL replica. It should be unique within a `ClusterName`. |
-| Namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
-| ReplicaStorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section in the `pgclusters.crunchydata.com` description. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to that of `PrimaryStorage`. |
-| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" for things that really should be modeled in the CRD. These values do get copied to the actually CR labels. If you want to set up metrics collection, you would specify `"crunchy-postgres-exporter": "true"` here. This also allows for node selector pinning using `NodeLabelKey` and `NodeLabelValue`. However, this structure does need to be set, so just follow whatever is in the example. |
-
## Custom Resource Workflows
### Create a PostgreSQL Cluster
@@ -629,3 +431,201 @@ spec:
Save your edits, and in a short period of time, you should see these annotations
applied to the managed Deployments.
+
+## PostgreSQL Operator Custom Resource Definitions
+
+There are several PostgreSQL Operator Custom Resource Definitions (CRDs) that
+are installed in order for the PostgreSQL Operator to successfully function:
+
+- `pgclusters.crunchydata.com`: Stores information required to manage a
+PostgreSQL cluster. This includes things like the cluster name, what storage and
+resource classes to use, which version of PostgreSQL to run, information about
+how to maintain a high-availability cluster, etc.
+- `pgreplicas.crunchydata.com`: Stores information required to manage the
+replicas within a PostgreSQL cluster. This includes things like the number of
+replicas, what storage and resource classes to use, special affinity rules, etc.
+- `pgtasks.crunchydata.com`: A general purpose CRD that accepts a type of task
+that is needed to run against a cluster (e.g. take a backup) and tracks the
+state of said task through its workflow.
+- `pgpolicies.crunchydata.com`: Stores a reference to a SQL file that can be
+executed against a PostgreSQL cluster. In the past, this was used to manage RLS
+policies on PostgreSQL clusters.
+
+Below is an in-depth look at what each attribute in a Custom Resource
+Definition does, and how it can be used in the creation and update workflow.
+
+### Glossary
+
+- `create`: if an attribute is listed as `create`, it means it can affect what
+happens when a new Custom Resource is created.
+- `update`: if an attribute is listed as `update`, it means it can affect the
+Custom Resource, and by extension the objects it manages, when the attribute is
+updated.
+
+### `pgclusters.crunchydata.com`
+
+The `pgclusters.crunchydata.com` Custom Resource Definition is the fundamental
+definition of a PostgreSQL cluster. Most attributes only affect the deployment
+of a PostgreSQL cluster at the time the PostgreSQL cluster is created. Some
+attributes can be modified during the lifetime of the PostgreSQL cluster and
+take effect when changed, as described below.
+
+#### Specification (`Spec`)
+
+| Attribute | Action | Description |
+|-----------|--------|-------------|
+| Annotations | `create`, `update` | Specify Kubernetes [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) that can be applied to the different deployments managed by the PostgreSQL Operator (PostgreSQL, pgBackRest, pgBouncer). For more information, please see the "Annotations Specification" below. |
+| BackrestConfig | `create` | Optional references to pgBackRest configuration files. |
+| BackrestLimits | `create`, `update` | Specify the container resource limits that the pgBackRest repository should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| BackrestResources | `create`, `update` | Specify the container resource requests that the pgBackRest repository should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| BackrestS3Bucket | `create` | An optional parameter that specifies an S3 bucket that pgBackRest should use. |
+| BackrestS3Endpoint | `create` | An optional parameter that specifies the S3 endpoint pgBackRest should use. |
+| BackrestS3Region | `create` | An optional parameter that specifies a cloud region that pgBackRest should use. |
+| BackrestS3URIStyle | `create` | An optional parameter that specifies if pgBackRest should use the `path` or `host` S3 URI style. |
+| BackrestS3VerifyTLS | `create` | An optional parameter that specifies if pgBackRest should verify the TLS endpoint. |
+| BackrestStorage | `create` | A specification that gives information about the storage attributes for the pgBackRest repository, which stores backups and archives, of the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This is required. |
+| CCPImage | `create` | The name of the PostgreSQL container image to use, e.g. `crunchy-postgres-ha` or `crunchy-postgres-ha-gis`. |
+| CCPImagePrefix | `create` | If provided, the image prefix (or registry) of the PostgreSQL container image, e.g. `registry.developers.crunchydata.com/crunchydata`. The default is to use the image prefix set in the PostgreSQL Operator configuration. |
+| CCPImageTag | `create` | The tag of the PostgreSQL container image to use, e.g. `{{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}}`. |
+| CollectSecretName | `create` | An optional attribute unless `crunchy-postgres-exporter` is specified in the `UserLabels`; contains the name of a Kubernetes Secret that contains the credentials for a PostgreSQL user that is used for metrics collection, and is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.|
+| ClusterName | `create` | The name of the PostgreSQL cluster, e.g. `hippo`. This is used to group PostgreSQL instances (primary, replicas) together. |
+| CustomConfig | `create` | If specified, references a custom ConfigMap to use when bootstrapping a PostgreSQL cluster. For the shape of this file, please see the section on [Custom Configuration]({{< relref "/advanced/custom-configuration.md" >}}) |
+| Database | `create` | The name of a database that the PostgreSQL user can log into after the PostgreSQL cluster is created. |
+| ExporterLimits | `create`, `update` | Specify the container resource limits that the `crunchy-postgres-exporter` sidecar uses when it is deployed with a PostgreSQL instance. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| ExporterPort | `create` | If the `"crunchy-postgres-exporter"` label is set in `UserLabels`, then this specifies the port that the metrics sidecar runs on (e.g. `9187`) |
+| ExporterResources | `create`, `update` | Specify the container resource requests that the `crunchy-postgres-exporter` sidecar uses when it is deployed with a PostgreSQL instance. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| Limits | `create`, `update` | Specify the container resource limits that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| Name | `create` | The name of the PostgreSQL instance that is the primary. On creation, this should be set to be the same as `ClusterName`. |
+| Namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
+| PGBadgerPort | `create` | If the `"crunchy-pgbadger"` label is set in `UserLabels`, then this specifies the port that the pgBadger sidecar runs on (e.g. `10000`) |
+| PGDataSource | `create` | Used to indicate if a PostgreSQL cluster should bootstrap its data from a pgBackRest repository. This uses the PostgreSQL Data Source Specification, described below. |
+| PGOImagePrefix | `create` | If provided, the image prefix (or registry) of any PostgreSQL Operator images that are used for jobs, e.g. `registry.developers.crunchydata.com/crunchydata`. The default is to use the image prefix set in the PostgreSQL Operator configuration. |
+| PgBouncer | `create`, `update` | If specified, defines the attributes to use for the pgBouncer connection pooling deployment that can be used in conjunction with this PostgreSQL cluster. Please see the specification defined below. |
+| PodAntiAffinity | `create` | A required section. Sets the [pod anti-affinity rules]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity" >}}) for the PostgreSQL cluster and associated deployments. Please see the `Pod Anti-Affinity Specification` section below. |
+| Policies | `create` | If provided, a comma-separated list referring to `pgpolicies.crunchydata.com.Spec.Name` that should be run once the PostgreSQL primary is first initialized. |
+| Port | `create` | The port that PostgreSQL will run on, e.g. `5432`. |
+| PrimaryStorage | `create` | A specification that gives information about the storage attributes for the primary instance in the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This is required. |
+| PrimarySecretName | `create` | The name of a Kubernetes Secret that contains the credentials for the PostgreSQL _replication user_ that is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.|
+| ReplicaStorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to that of `PrimaryStorage`. |
+| Replicas | `create` | The number of replicas to create after a PostgreSQL primary is first initialized. This only works on create; to scale a cluster after it is initialized, please use the [`pgo scale`]({{< relref "/pgo-client/reference/pgo_scale.md" >}}) command. |
+| Resources | `create`, `update` | Specify the container resource requests that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| RootSecretName | `create` | The name of a Kubernetes Secret that contains the credentials for a PostgreSQL superuser that is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.|
+| SyncReplication | `create` | If set to `true`, specifies that the PostgreSQL cluster should use [synchronous replication]({{< relref "/architecture/high-availability/_index.md#synchronous-replication-guarding-against-transactions-loss" >}}).|
+| User | `create` | The name of the PostgreSQL user that is created when the PostgreSQL cluster is first created. |
+| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" for things that really should be modeled in the CRD. These values do get copied to the actual CR labels. If you want to set up metrics collection or pgBadger, you would specify `"crunchy-postgres-exporter": "true"` and `"crunchy-pgbadger": "true"` here, respectively. However, this structure does need to be set, so just follow whatever is in the example. |
+| UserSecretName | `create` | The name of a Kubernetes Secret that contains the credentials for a standard PostgreSQL user that is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.|
+| TablespaceMounts | `create`, `update` | Lists any tablespaces that are attached to the PostgreSQL cluster. Tablespaces can be added at a later time by updating the `TablespaceMounts` entry, but they cannot be removed. Stores a map of information, with the key being the name of the tablespace, and the value being a Storage Specification, defined below. |
+| TLS | `create` | Defines the attributes for enabling TLS for a PostgreSQL cluster. See TLS Specification below. |
+| TLSOnly | `create` | If set to true, requires client connections to use only TLS to connect to the PostgreSQL database. |
+| Standby | `create`, `update` | If set to true, indicates that the PostgreSQL cluster is a "standby" cluster, i.e. is in read-only mode entirely. Please see [Kubernetes Multi-Cluster Deployments]({{< relref "/architecture/high-availability/multi-cluster-kubernetes.md" >}}) for more information. |
+| Shutdown | `create`, `update` | If set to true, indicates that a PostgreSQL cluster should shut down. If set to false, indicates that a PostgreSQL cluster should be up and running. |
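+
+As an illustrative sketch (not a complete manifest; see the full example in the
+"Create a PostgreSQL Cluster" workflow), a `pgclusters.crunchydata.com` custom
+resource for a cluster named `hippo` might begin as follows. The field names are
+the lowercased forms of the attributes above, and the namespace and values here
+are assumptions for demonstration:
+
+```yaml
+apiVersion: crunchydata.com/v1
+kind: Pgcluster
+metadata:
+  name: hippo
+spec:
+  clustername: hippo
+  name: hippo
+  namespace: pgo
+  ccpimage: crunchy-postgres-ha
+  port: "5432"
+```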
+
+##### Storage Specification
+
+The storage specification is a spec that defines attributes about the storage to
+be used for a particular function of a PostgreSQL cluster (e.g. a primary
+instance or for the pgBackRest backup repository). The below describes each
+attribute and how it works.
+
+| Attribute | Action | Description |
+|-----------|--------|-------------|
+| AccessMode| `create` | The name of the Kubernetes Persistent Volume [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) to use. |
+| MatchLabels | `create` | Only used with a `StorageType` of `create`; used to match a particular subset of provisioned Persistent Volumes. |
+| Name | `create` | Only needed for `PrimaryStorage` in `pgclusters.crunchydata.com`. Used to identify the name of the PostgreSQL cluster. Should match `ClusterName`. |
+| Size | `create` | The size of the [Persistent Volume Claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVC). Must use a Kubernetes resource value, e.g. `20Gi`. |
+| StorageClass | `create` | The name of the Kubernetes [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) to use. |
+| StorageType | `create` | Set to `create` if storage is provisioned (e.g. using `hostpath`). Set to `dynamic` if using a dynamic storage provisioner, e.g. via a `StorageClass`. |
+| SupplementalGroups | `create` | If provided, a comma-separated list of group IDs to use if needed to interface with a particular storage system. Typically used with NFS or hostpath storage. |
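
As an illustration of the attributes above, a storage specification block inside a `pgclusters.crunchydata.com` manifest could look like the following sketch (all values are examples and should be adjusted for your environment):

```
PrimaryStorage:
  accessmode: ReadWriteOnce
  matchLabels: ""
  name: hippo
  size: 20Gi
  storageclass: standard
  storagetype: dynamic
  supplementalgroups: ""
```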
+
+##### Pod Anti-Affinity Specification
+
+Sets the [pod anti-affinity]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity" >}})
+for the PostgreSQL cluster and associated deployments. Each attribute can
+contain one of the following values:
+
+- `required`
+- `preferred` (which is also the recommended default)
+- `disabled`
+
+For a detailed explanation of how this works, please see the [high-availability]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity" >}})
+documentation.
+
+| Attribute | Action | Description |
+|-----------|--------|-------------|
+| Default | `create` | The default pod anti-affinity to use for all Pods managed in a given PostgreSQL cluster. |
+| PgBackRest | `create` | If set to a value that differs from `Default`, specifies the pod anti-affinity to use for just the pgBackRest repository. |
+| PgBouncer | `create` | If set to a value that differs from `Default`, specifies the pod anti-affinity to use for just the pgBouncer Pods. |
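
For example, to require pod anti-affinity for all Pods in the cluster while keeping the preferred behavior for the pgBackRest and pgBouncer Pods, the corresponding block of the custom resource could look like this sketch:

```
podAntiAffinity:
  default: required
  pgBackRest: preferred
  pgBouncer: preferred
```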
+
+##### PostgreSQL Data Source Specification
+
+This specification is used when one wants to bootstrap the data in a PostgreSQL
+cluster from a pgBackRest repository. This can be a pgBackRest repository that
+is attached to an active PostgreSQL cluster, or one that is kept around to be
+used for spawning new PostgreSQL clusters.
+
+| Attribute | Action | Description |
+|-----------|--------|-------------|
+| RestoreFrom | `create` | The name of a PostgreSQL cluster, active or former, that will be used for bootstrapping the data of a new PostgreSQL cluster. |
+| RestoreOpts | `create` | Additional pgBackRest [restore options](https://pgbackrest.org/command.html#command-restore) that can be used as part of the bootstrapping operation, for example, point-in-time-recovery options. |
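
For instance, to bootstrap a new cluster from the pgBackRest repository of a cluster named `hippo` and perform a point-in-time-recovery, the data source block could look like the following sketch (the options are standard pgBackRest restore options; the timestamp is only an example):

```
pgDataSource:
  restoreFrom: hippo
  restoreOpts: "--type=time --target='2020-10-27 09:00:00-04'"
```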
+
+##### TLS Specification
+
+The TLS specification makes a reference to the various secrets that are required
+to enable TLS in a PostgreSQL cluster. For more information on how these secrets
+should be structured, please see [Enabling TLS in a PostgreSQL Cluster]({{< relref "/pgo-client/common-tasks.md#enable-tls" >}}).
+
+| Attribute | Action | Description |
+|-----------|--------|-------------|
+| CASecret | `create` | A reference to the name of a Kubernetes Secret that specifies a certificate authority for the PostgreSQL cluster to trust. |
+| ReplicationTLSSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair for authenticating the replication user. Must be used with `CASecret` and `TLSSecret`. |
+| TLSSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair that is used for the PostgreSQL instance to identify itself and perform TLS communications with PostgreSQL clients. Must be used with `CASecret`. |
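
Putting this together with the `TLSOnly` attribute described earlier, a TLS block referencing hypothetical Secret names (these Secrets must be created beforehand, as described in the linked documentation) might look like:

```
tls:
  caSecret: postgresql-ca
  replicationTLSSecret: hippo-tls-replication
  tlsSecret: hippo-tls-keypair
tlsOnly: true
```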
+
+##### pgBouncer Specification
+
+The pgBouncer specification defines how pgBouncer can be deployed alongside the
+PostgreSQL cluster. pgBouncer is a PostgreSQL connection pooler that can also
+help manage connection state, and deploying it alongside a PostgreSQL cluster
+can help with failover scenarios as well.
+
+| Attribute | Action | Description |
+|-----------|--------|-------------|
+| Limits | `create`, `update` | Specify the container resource limits that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| Replicas | `create`, `update` | The number of pgBouncer instances to deploy. Must be set to at least `1` to deploy pgBouncer. Setting to `0` removes an existing pgBouncer deployment for the PostgreSQL cluster. |
+| Resources | `create`, `update` | Specify the container resource requests that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
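
For example, to deploy two pgBouncer instances with container resource requests and limits set (the values shown are illustrative, not recommendations):

```
pgBouncer:
  replicas: 2
  resources:
    memory: 24Mi
  limits:
    memory: 48Mi
```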
+
+##### Annotations Specification
+
+The `pgcluster.crunchydata.com` specification contains a block that allows for
+custom [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/)
+to be applied to the Deployments that are managed by the PostgreSQL Operator,
+including:
+
+- PostgreSQL
+- pgBackRest
+- pgBouncer
+
+This also includes the option to apply Annotations globally across the three
+different deployment groups.
+
+| Attribute | Action | Description |
+|-----------|--------|-------------|
+| Backrest | `create`, `update` | Specify annotations that are only applied to the pgBackRest deployments |
+| Global | `create`, `update` | Specify annotations that are applied to the PostgreSQL, pgBackRest, and pgBouncer deployments |
+| PgBouncer | `create`, `update` | Specify annotations that are only applied to the pgBouncer deployments |
+| Postgres | `create`, `update` | Specify annotations that are only applied to the PostgreSQL deployments |
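
As a sketch, annotations applied globally and to only the PostgreSQL Deployments could be specified as follows (the keys and values are examples; the lowercase attribute names follow the manifest convention used elsewhere in this guide):

```
annotations:
  global:
    favorite-animal: hippo
  postgres:
    sidecar-injection: "disabled"
```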
+
+### `pgreplicas.crunchydata.com`
+
+The `pgreplicas.crunchydata.com` Custom Resource Definition contains information
+pertaining to the structure of PostgreSQL replicas associated with a PostgreSQL
+cluster. All of the attributes only affect the replica when it is created.
+
+#### Specification (`Spec`)
+
+| Attribute | Action | Description |
+|-----------|--------|-------------|
+| ClusterName | `create` | The name of the PostgreSQL cluster, e.g. `hippo`. This is used to group PostgreSQL instances (primary, replicas) together. |
+| Name | `create` | The name of this PostgreSQL replica. It should be unique within a `ClusterName`. |
+| Namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
+| ReplicaStorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section in the `pgclusters.crunchydata.com` description. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to that of `PrimaryStorage`. |
+| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" for things that really should be modeled in the CRD. These values do get copied to the actual CR labels. If you want to set up metrics collection, you would specify `"crunchy-postgres-exporter": "true"` here. This also allows for node selector pinning using `NodeLabelKey` and `NodeLabelValue`. However, this structure does need to be set, so just follow whatever is in the example. |
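
Putting the specification together, a minimal `pgreplicas.crunchydata.com` custom resource that adds a replica to the `hippo` cluster might look like the following sketch (the storage values mirror the cluster examples earlier in this guide):

```
apiVersion: crunchydata.com/v1
kind: Pgreplica
metadata:
  name: hippo-rpl1
  namespace: pgo
spec:
  clustername: hippo
  name: hippo-rpl1
  namespace: pgo
  replicastorage:
    accessmode: ReadWriteMany
    matchLabels: ""
    name: hippo-rpl1
    size: 1G
    storageclass: ""
    storagetype: dynamic
    supplementalgroups: ""
  userlabels: {}
```
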
From 050e063dc67cc4e4b653a5aaed97369c85298601 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 27 Oct 2020 15:24:17 -0400
Subject: [PATCH 003/276] Add custom resource example for pgBackRest repo in S3
This examples shows how one can create a new PostgreSQL cluster
where the pgBackRest backups and archives exist in a S3
repository via creating a custom resource.
---
docs/content/custom-resources/_index.md | 192 ++++++++++++++++++++++++
1 file changed, 192 insertions(+)
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index 7e024900f4..f4384e311e 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -212,6 +212,198 @@ EOF
kubectl apply -f "${pgo_cluster_name}-pgcluster.yaml"
```
+### Create a PostgreSQL Cluster With Backups in S3
+
+A frequent use case is to create a PostgreSQL cluster with S3 or an S3-like
+storage system for storing backups. This requires adding a Secret that contains
+the S3 key and key secret for your account, and adding some additional
+information to the custom resource.
+
+#### Step 1: Create the pgBackRest S3 Secrets
+
+As mentioned above, it is necessary to create a Secret containing the S3 key and
+key secret that will allow a user to create backups in S3.
+
+The below code will help you set up this Secret.
+
+```
+# this variable is the name of the cluster being created
+pgo_cluster_name=hippo
+# this variable is the namespace the cluster is being deployed into
+cluster_namespace=pgo
+# the following variables are your S3 key and key secret
+backrest_s3_key=yours3key
+backrest_s3_key_secret=yours3keysecret
+
+kubectl -n "${cluster_namespace}" create secret generic "${pgo_cluster_name}-backrest-repo-config" \
+ --from-literal="aws-s3-key=${backrest_s3_key}" \
+ --from-literal="aws-s3-key-secret=${backrest_s3_key_secret}"
+
+unset backrest_s3_key
+unset backrest_s3_key_secret
+```
+
+#### Step 2: Create the PostgreSQL User Secrets
+
+Similar to the basic create cluster example, there are a minimum of three
+PostgreSQL user accounts that you must create in order to bootstrap a PostgreSQL
+cluster. These are:
+
+- A PostgreSQL superuser
+- A replication user
+- A standard PostgreSQL user
+
+The below code will help you set up these Secrets.
+
+```
+# this variable is the name of the cluster being created
+pgo_cluster_name=hippo
+# this variable is the namespace the cluster is being deployed into
+cluster_namespace=pgo
+
+# this is the superuser secret
+kubectl create secret generic -n "${cluster_namespace}" "${pgo_cluster_name}-postgres-secret" \
+ --from-literal=username=postgres \
+ --from-literal=password=Supersecurepassword*
+
+# this is the replication user secret
+kubectl create secret generic -n "${cluster_namespace}" "${pgo_cluster_name}-primaryuser-secret" \
+ --from-literal=username=primaryuser \
+ --from-literal=password=Anothersecurepassword*
+
+# this is the standard user secret
+kubectl create secret generic -n "${cluster_namespace}" "${pgo_cluster_name}-hippo-secret" \
+ --from-literal=username=hippo \
+ --from-literal=password=Moresecurepassword*
+
+
+kubectl label secrets -n "${cluster_namespace}" "${pgo_cluster_name}-postgres-secret" "pg-cluster=${pgo_cluster_name}"
+kubectl label secrets -n "${cluster_namespace}" "${pgo_cluster_name}-primaryuser-secret" "pg-cluster=${pgo_cluster_name}"
+kubectl label secrets -n "${cluster_namespace}" "${pgo_cluster_name}-hippo-secret" "pg-cluster=${pgo_cluster_name}"
+```
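
The passwords above are hardcoded for illustration only. A minimal sketch for generating random passwords in the shell instead (the `gen_password` helper and the 24-character length are assumptions, not part of the Operator):

```shell
# generate a 24-character alphanumeric password from /dev/urandom
# (the helper name and the length are illustrative choices)
gen_password() {
  LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24
}

pg_superuser_password="$(gen_password)"
pg_replication_password="$(gen_password)"
pg_user_password="$(gen_password)"
```

These variables can then be passed to the `--from-literal=password=...` flags above in place of the fixed strings.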
+
+#### Step 3: Create the PostgreSQL Cluster
+
+With the Secrets in place, it is now time to create the PostgreSQL cluster.
+
+The below manifest references the Secrets created in the previous step to add a
+custom resource to the `pgclusters.crunchydata.com` custom resource definition.
+There are some additions in this example specifically for storing backups in S3.
+
+```
+# this variable is the name of the cluster being created
+export pgo_cluster_name=hippo
+# this variable is the namespace the cluster is being deployed into
+export cluster_namespace=pgo
+# the following variables store the information for your S3 storage system. You
+# may need to adjust them for your actual settings
+export backrest_s3_bucket=your-bucket
+export backrest_s3_endpoint=s3.region-name.amazonaws.com
+export backrest_s3_region=region-name
+
+cat <<-EOF > "${pgo_cluster_name}-pgcluster.yaml"
+apiVersion: crunchydata.com/v1
+kind: Pgcluster
+metadata:
+ annotations:
+ current-primary: ${pgo_cluster_name}
+ labels:
+ autofail: "true"
+ backrest-storage-type: "s3"
+ crunchy-pgbadger: "false"
+ crunchy-pgha-scope: ${pgo_cluster_name}
+ crunchy-postgres-exporter: "false"
+ deployment-name: ${pgo_cluster_name}
+ name: ${pgo_cluster_name}
+ pg-cluster: ${pgo_cluster_name}
+ pg-pod-anti-affinity: ""
+ pgo-backrest: "true"
+ pgo-version: {{< param operatorVersion >}}
+ pgouser: admin
+ name: ${pgo_cluster_name}
+ namespace: ${cluster_namespace}
+spec:
+ BackrestStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ""
+ size: 1G
+ storageclass: ""
+ storagetype: dynamic
+ supplementalgroups: ""
+ PrimaryStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ${pgo_cluster_name}
+ size: 1G
+ storageclass: ""
+ storagetype: dynamic
+ supplementalgroups: ""
+ ReplicaStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ""
+ size: 1G
+ storageclass: ""
+ storagetype: dynamic
+ supplementalgroups: ""
+ annotations:
+ backrestLimits: {}
+ backrestRepoPath: ""
+ backrestResources:
+ memory: 48Mi
+ backrestS3Bucket: ${backrest_s3_bucket}
+ backrestS3Endpoint: ${backrest_s3_endpoint}
+ backrestS3Region: ${backrest_s3_region}
+ backrestS3URIStyle: ""
+ backrestS3VerifyTLS: ""
+ ccpimage: crunchy-postgres-ha
+ ccpimageprefix: registry.developers.crunchydata.com/crunchydata
+ ccpimagetag: {{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}}
+ clustername: ${pgo_cluster_name}
+ customconfig: ""
+ database: ${pgo_cluster_name}
+ exporterport: "9187"
+ limits: {}
+ name: ${pgo_cluster_name}
+ namespace: ${cluster_namespace}
+ pgBouncer:
+ limits: {}
+ replicas: 0
+ pgDataSource:
+ restoreFrom: ""
+ restoreOpts: ""
+ pgbadgerport: "10000"
+ pgoimageprefix: registry.developers.crunchydata.com/crunchydata
+ podAntiAffinity:
+ default: preferred
+ pgBackRest: preferred
+ pgBouncer: preferred
+ policies: ""
+ port: "5432"
+ primarysecretname: ${pgo_cluster_name}-primaryuser-secret
+ replicas: "0"
+ rootsecretname: ${pgo_cluster_name}-postgres-secret
+ shutdown: false
+ standby: false
+ tablespaceMounts: {}
+ tls:
+ caSecret: ""
+ replicationTLSSecret: ""
+ tlsSecret: ""
+ tlsOnly: false
+ user: hippo
+ userlabels:
+ backrest-storage-type: "s3"
+ crunchy-postgres-exporter: "false"
+ pg-pod-anti-affinity: ""
+ pgo-version: {{< param operatorVersion >}}
+ usersecretname: ${pgo_cluster_name}-hippo-secret
+EOF
+
+kubectl apply -f "${pgo_cluster_name}-pgcluster.yaml"
+```
+
### Modify a Cluster
The following modification operations are supported on the
From 5c832c5ecd77e9b7bff928fff929cda78761d193 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 27 Oct 2020 20:40:12 -0400
Subject: [PATCH 004/276] Create "pgo-config" ConfigMap if not provided
The Operator (and associated containers) will create the pgo-config
ConfigMap from the stored default configuration if the pgo-config
ConfigMap is not present.
This allows for the editing of "default configuration" after
the Operator is deployed, and allows for the removal of the
pgo-config creation step from several of the installation
methods.
Issue: [ch9451]
---
docs/content/Configuration/configuration.md | 6 +-
.../installation/other/operator-hub.md | 14 --
installers/olm/description.openshift.md | 14 --
installers/olm/description.upstream.md | 14 --
internal/config/pgoconfig.go | 178 +++++++++++-------
5 files changed, 114 insertions(+), 112 deletions(-)
diff --git a/docs/content/Configuration/configuration.md b/docs/content/Configuration/configuration.md
index e85823a865..e6bae33be9 100644
--- a/docs/content/Configuration/configuration.md
+++ b/docs/content/Configuration/configuration.md
@@ -16,9 +16,9 @@ The configuration files used by the Operator are found in 2 places:
* the pgo-config ConfigMap in the namespace the Operator is running in
* or, a copy of the configuration files are also included by default into the Operator container images themselves to support a very simplistic deployment of the Operator
-If the pgo-config ConfigMap is not found by the Operator, it will use
-the configuration files that are included in the Operator container
-images.
+If the `pgo-config` ConfigMap is not found by the Operator, it will create a
+`pgo-config` ConfigMap using the configuration files that are included in the
+Operator container.
## conf/postgres-operator/pgo.yaml
The *pgo.yaml* file sets many different Operator configuration settings and is described in the [pgo.yaml configuration]({{< ref "pgo-yaml-configuration.md" >}}) documentation section.
diff --git a/docs/content/installation/other/operator-hub.md b/docs/content/installation/other/operator-hub.md
index 9b077ef073..b610ee2664 100644
--- a/docs/content/installation/other/operator-hub.md
+++ b/docs/content/installation/other/operator-hub.md
@@ -43,19 +43,6 @@ git clone -b v{{< param operatorVersion >}} https://github.com/CrunchyData/postg
cd postgres-operator
```
-### PostgreSQL Operator Configuration
-
-Edit `conf/postgres-operator/pgo.yaml` to configure the deployment. Look over all of the options and make any
-changes necessary for your environment. A full description of each option is available in the
-[`pgo.yaml` configuration guide]({{< relref "configuration/pgo-yaml-configuration.md" >}}).
-
-When the file is ready, upload the entire directory to the `pgo-config` ConfigMap.
-
-```
-kubectl -n "$PGO_OPERATOR_NAMESPACE" create configmap pgo-config \
- --from-file=./conf/postgres-operator
-```
-
### Secrets
Configure pgBackRest for your environment. If you do not plan to use AWS S3 to store backups, you can omit
@@ -152,4 +139,3 @@ pgo version
# pgo client version {{< param operatorVersion >}}
# pgo-apiserver version {{< param operatorVersion >}}
```
-
diff --git a/installers/olm/description.openshift.md b/installers/olm/description.openshift.md
index ad31cbe1e5..e4eb3c0831 100644
--- a/installers/olm/description.openshift.md
+++ b/installers/olm/description.openshift.md
@@ -63,20 +63,6 @@ edit `conf/postgres-operator/pgo.yaml` and set `DisableFSGroup` to `true`.
[Security Context Constraint]: https://docs.openshift.com/container-platform/latest/authentication/managing-security-context-constraints.html
-### PostgreSQL Operator Configuration
-
-Edit `conf/postgres-operator/pgo.yaml` to configure the deployment. Look over all of the options and make any
-changes necessary for your environment. A [full description of each option][pgo-yaml-reference] is available in the documentation.
-
-[pgo-yaml-reference]: https://access.crunchydata.com/documentation/postgres-operator/${PGO_VERSION}/configuration/pgo-yaml-configuration/
-
-When the file is ready, upload the entire directory to the `pgo-config` ConfigMap.
-
-```
-oc -n "$PGO_OPERATOR_NAMESPACE" create configmap pgo-config \
- --from-file=./conf/postgres-operator
-```
-
### Secrets
Configure pgBackRest for your environment. If you do not plan to use AWS S3 to store backups, you can omit
diff --git a/installers/olm/description.upstream.md b/installers/olm/description.upstream.md
index 8838098032..1e192fa9c2 100644
--- a/installers/olm/description.upstream.md
+++ b/installers/olm/description.upstream.md
@@ -56,20 +56,6 @@ git clone -b v${PGO_VERSION} https://github.com/CrunchyData/postgres-operator.gi
cd postgres-operator
```
-### PostgreSQL Operator Configuration
-
-Edit `conf/postgres-operator/pgo.yaml` to configure the deployment. Look over all of the options and make any
-changes necessary for your environment. A [full description of each option][pgo-yaml-reference] is available in the documentation.
-
-[pgo-yaml-reference]: https://access.crunchydata.com/documentation/postgres-operator/${PGO_VERSION}/configuration/pgo-yaml-configuration/
-
-When the file is ready, upload the entire directory to the `pgo-config` ConfigMap.
-
-```
-kubectl -n "$PGO_OPERATOR_NAMESPACE" create configmap pgo-config \
- --from-file=./conf/postgres-operator
-```
-
### Secrets
Configure pgBackRest for your environment. If you do not plan to use AWS S3 to store backups, you can omit
diff --git a/internal/config/pgoconfig.go b/internal/config/pgoconfig.go
index b867aa8d93..ddb04cbf00 100644
--- a/internal/config/pgoconfig.go
+++ b/internal/config/pgoconfig.go
@@ -21,6 +21,7 @@ import (
"fmt"
"io/ioutil"
"os"
+ "path/filepath"
"strconv"
"strings"
"text/template"
@@ -29,6 +30,7 @@ import (
log "github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
+ kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/validation"
@@ -37,8 +39,7 @@ import (
)
const CustomConfigMapName = "pgo-config"
-const DefaultConfigsPath = "/default-pgo-config/"
-const CustomConfigsPath = "/pgo-config/"
+const defaultConfigPath = "/default-pgo-config/"
var PgoDefaultServiceAccountTemplate *template.Template
@@ -513,29 +514,17 @@ func (c *PgoConfig) GetStorageSpec(name string) (crv1.PgStorageSpec, error) {
func (c *PgoConfig) GetConfig(clientset kubernetes.Interface, namespace string) error {
- cMap, rootPath := getRootPath(clientset, namespace)
-
- var yamlFile []byte
- var err error
+	cMap, err := initialize(clientset, namespace)
+	if err != nil {
+		return err
+	}
//get the pgo.yaml config file
- if cMap != nil {
- str := cMap.Data[CONFIG_PATH]
- if str == "" {
- errMsg := fmt.Sprintf("could not get %s from ConfigMap", CONFIG_PATH)
- return errors.New(errMsg)
- }
- yamlFile = []byte(str)
- } else {
- yamlFile, err = ioutil.ReadFile(rootPath + CONFIG_PATH)
- if err != nil {
- log.Errorf("yamlFile.Get err #%v ", err)
- return err
- }
+ str := cMap.Data[CONFIG_PATH]
+ if str == "" {
+ return fmt.Errorf("could not get %s from ConfigMap", CONFIG_PATH)
}
- err = yaml.Unmarshal(yamlFile, c)
- if err != nil {
+ yamlFile := []byte(str)
+
+ if err := yaml.Unmarshal(yamlFile, c); err != nil {
log.Errorf("Unmarshal: %v", err)
return err
}
@@ -549,178 +538,178 @@ func (c *PgoConfig) GetConfig(clientset kubernetes.Interface, namespace string)
c.CheckEnv()
//load up all the templates
- PgoDefaultServiceAccountTemplate, err = c.LoadTemplate(cMap, rootPath, PGODefaultServiceAccountPath)
+ PgoDefaultServiceAccountTemplate, err = c.LoadTemplate(cMap, PGODefaultServiceAccountPath)
if err != nil {
return err
}
- PgoBackrestServiceAccountTemplate, err = c.LoadTemplate(cMap, rootPath, PGOBackrestServiceAccountPath)
+ PgoBackrestServiceAccountTemplate, err = c.LoadTemplate(cMap, PGOBackrestServiceAccountPath)
if err != nil {
return err
}
- PgoTargetServiceAccountTemplate, err = c.LoadTemplate(cMap, rootPath, PGOTargetServiceAccountPath)
+ PgoTargetServiceAccountTemplate, err = c.LoadTemplate(cMap, PGOTargetServiceAccountPath)
if err != nil {
return err
}
- PgoTargetRoleBindingTemplate, err = c.LoadTemplate(cMap, rootPath, PGOTargetRoleBindingPath)
+ PgoTargetRoleBindingTemplate, err = c.LoadTemplate(cMap, PGOTargetRoleBindingPath)
if err != nil {
return err
}
- PgoBackrestRoleTemplate, err = c.LoadTemplate(cMap, rootPath, PGOBackrestRolePath)
+ PgoBackrestRoleTemplate, err = c.LoadTemplate(cMap, PGOBackrestRolePath)
if err != nil {
return err
}
- PgoBackrestRoleBindingTemplate, err = c.LoadTemplate(cMap, rootPath, PGOBackrestRoleBindingPath)
+ PgoBackrestRoleBindingTemplate, err = c.LoadTemplate(cMap, PGOBackrestRoleBindingPath)
if err != nil {
return err
}
- PgoTargetRoleTemplate, err = c.LoadTemplate(cMap, rootPath, PGOTargetRolePath)
+ PgoTargetRoleTemplate, err = c.LoadTemplate(cMap, PGOTargetRolePath)
if err != nil {
return err
}
- PgoPgServiceAccountTemplate, err = c.LoadTemplate(cMap, rootPath, PGOPgServiceAccountPath)
+ PgoPgServiceAccountTemplate, err = c.LoadTemplate(cMap, PGOPgServiceAccountPath)
if err != nil {
return err
}
- PgoPgRoleTemplate, err = c.LoadTemplate(cMap, rootPath, PGOPgRolePath)
+ PgoPgRoleTemplate, err = c.LoadTemplate(cMap, PGOPgRolePath)
if err != nil {
return err
}
- PgoPgRoleBindingTemplate, err = c.LoadTemplate(cMap, rootPath, PGOPgRoleBindingPath)
+ PgoPgRoleBindingTemplate, err = c.LoadTemplate(cMap, PGOPgRoleBindingPath)
if err != nil {
return err
}
- PVCTemplate, err = c.LoadTemplate(cMap, rootPath, pvcPath)
+ PVCTemplate, err = c.LoadTemplate(cMap, pvcPath)
if err != nil {
return err
}
- PolicyJobTemplate, err = c.LoadTemplate(cMap, rootPath, policyJobTemplatePath)
+ PolicyJobTemplate, err = c.LoadTemplate(cMap, policyJobTemplatePath)
if err != nil {
return err
}
- ContainerResourcesTemplate, err = c.LoadTemplate(cMap, rootPath, containerResourcesTemplatePath)
+ ContainerResourcesTemplate, err = c.LoadTemplate(cMap, containerResourcesTemplatePath)
if err != nil {
return err
}
- PgoBackrestRepoServiceTemplate, err = c.LoadTemplate(cMap, rootPath, pgoBackrestRepoServiceTemplatePath)
+ PgoBackrestRepoServiceTemplate, err = c.LoadTemplate(cMap, pgoBackrestRepoServiceTemplatePath)
if err != nil {
return err
}
- PgoBackrestRepoTemplate, err = c.LoadTemplate(cMap, rootPath, pgoBackrestRepoTemplatePath)
+ PgoBackrestRepoTemplate, err = c.LoadTemplate(cMap, pgoBackrestRepoTemplatePath)
if err != nil {
return err
}
- PgmonitorEnvVarsTemplate, err = c.LoadTemplate(cMap, rootPath, pgmonitorEnvVarsPath)
+ PgmonitorEnvVarsTemplate, err = c.LoadTemplate(cMap, pgmonitorEnvVarsPath)
if err != nil {
return err
}
- PgbackrestEnvVarsTemplate, err = c.LoadTemplate(cMap, rootPath, pgbackrestEnvVarsPath)
+ PgbackrestEnvVarsTemplate, err = c.LoadTemplate(cMap, pgbackrestEnvVarsPath)
if err != nil {
return err
}
- PgbackrestS3EnvVarsTemplate, err = c.LoadTemplate(cMap, rootPath, pgbackrestS3EnvVarsPath)
+ PgbackrestS3EnvVarsTemplate, err = c.LoadTemplate(cMap, pgbackrestS3EnvVarsPath)
if err != nil {
return err
}
- PgAdminTemplate, err = c.LoadTemplate(cMap, rootPath, pgAdminTemplatePath)
+ PgAdminTemplate, err = c.LoadTemplate(cMap, pgAdminTemplatePath)
if err != nil {
return err
}
- PgAdminServiceTemplate, err = c.LoadTemplate(cMap, rootPath, pgAdminServiceTemplatePath)
+ PgAdminServiceTemplate, err = c.LoadTemplate(cMap, pgAdminServiceTemplatePath)
if err != nil {
return err
}
- PgbouncerTemplate, err = c.LoadTemplate(cMap, rootPath, pgbouncerTemplatePath)
+ PgbouncerTemplate, err = c.LoadTemplate(cMap, pgbouncerTemplatePath)
if err != nil {
return err
}
- PgbouncerConfTemplate, err = c.LoadTemplate(cMap, rootPath, pgbouncerConfTemplatePath)
+ PgbouncerConfTemplate, err = c.LoadTemplate(cMap, pgbouncerConfTemplatePath)
if err != nil {
return err
}
- PgbouncerUsersTemplate, err = c.LoadTemplate(cMap, rootPath, pgbouncerUsersTemplatePath)
+ PgbouncerUsersTemplate, err = c.LoadTemplate(cMap, pgbouncerUsersTemplatePath)
if err != nil {
return err
}
- PgbouncerHBATemplate, err = c.LoadTemplate(cMap, rootPath, pgbouncerHBATemplatePath)
+ PgbouncerHBATemplate, err = c.LoadTemplate(cMap, pgbouncerHBATemplatePath)
if err != nil {
return err
}
- ServiceTemplate, err = c.LoadTemplate(cMap, rootPath, serviceTemplatePath)
+ ServiceTemplate, err = c.LoadTemplate(cMap, serviceTemplatePath)
if err != nil {
return err
}
- RmdatajobTemplate, err = c.LoadTemplate(cMap, rootPath, rmdatajobPath)
+ RmdatajobTemplate, err = c.LoadTemplate(cMap, rmdatajobPath)
if err != nil {
return err
}
- BackrestjobTemplate, err = c.LoadTemplate(cMap, rootPath, backrestjobPath)
+ BackrestjobTemplate, err = c.LoadTemplate(cMap, backrestjobPath)
if err != nil {
return err
}
- PgDumpBackupJobTemplate, err = c.LoadTemplate(cMap, rootPath, pgDumpBackupJobPath)
+ PgDumpBackupJobTemplate, err = c.LoadTemplate(cMap, pgDumpBackupJobPath)
if err != nil {
return err
}
- PgRestoreJobTemplate, err = c.LoadTemplate(cMap, rootPath, pgRestoreJobPath)
+ PgRestoreJobTemplate, err = c.LoadTemplate(cMap, pgRestoreJobPath)
if err != nil {
return err
}
- PVCMatchLabelsTemplate, err = c.LoadTemplate(cMap, rootPath, pvcMatchLabelsPath)
+ PVCMatchLabelsTemplate, err = c.LoadTemplate(cMap, pvcMatchLabelsPath)
if err != nil {
return err
}
- PVCStorageClassTemplate, err = c.LoadTemplate(cMap, rootPath, pvcSCPath)
+ PVCStorageClassTemplate, err = c.LoadTemplate(cMap, pvcSCPath)
if err != nil {
return err
}
- AffinityTemplate, err = c.LoadTemplate(cMap, rootPath, affinityTemplatePath)
+ AffinityTemplate, err = c.LoadTemplate(cMap, affinityTemplatePath)
if err != nil {
return err
}
- PodAntiAffinityTemplate, err = c.LoadTemplate(cMap, rootPath, podAntiAffinityTemplatePath)
+ PodAntiAffinityTemplate, err = c.LoadTemplate(cMap, podAntiAffinityTemplatePath)
if err != nil {
return err
}
- ExporterTemplate, err = c.LoadTemplate(cMap, rootPath, exporterTemplatePath)
+ ExporterTemplate, err = c.LoadTemplate(cMap, exporterTemplatePath)
if err != nil {
return err
}
- BadgerTemplate, err = c.LoadTemplate(cMap, rootPath, badgerTemplatePath)
+ BadgerTemplate, err = c.LoadTemplate(cMap, badgerTemplatePath)
if err != nil {
return err
}
- DeploymentTemplate, err = c.LoadTemplate(cMap, rootPath, deploymentTemplatePath)
+ DeploymentTemplate, err = c.LoadTemplate(cMap, deploymentTemplatePath)
if err != nil {
return err
}
- BootstrapTemplate, err = c.LoadTemplate(cMap, rootPath, bootstrapTemplatePath)
+ BootstrapTemplate, err = c.LoadTemplate(cMap, bootstrapTemplatePath)
if err != nil {
return err
}
@@ -728,20 +717,75 @@ func (c *PgoConfig) GetConfig(clientset kubernetes.Interface, namespace string)
return nil
}
-func getRootPath(clientset kubernetes.Interface, namespace string) (*v1.ConfigMap, string) {
+// getOperatorConfigMap returns the config map that contains all of the
+// configuration for the Operator
+func getOperatorConfigMap(clientset kubernetes.Interface, namespace string) (*v1.ConfigMap, error) {
+ ctx := context.TODO()
+
+ return clientset.CoreV1().ConfigMaps(namespace).Get(ctx, CustomConfigMapName, metav1.GetOptions{})
+}
+
+// initialize attempts to get the configuration ConfigMap based on a name.
+// If the ConfigMap does not exist, a ConfigMap is created from the default
+// configuration path
+func initialize(clientset kubernetes.Interface, namespace string) (*v1.ConfigMap, error) {
ctx := context.TODO()
- cMap, err := clientset.CoreV1().ConfigMaps(namespace).Get(ctx, CustomConfigMapName, metav1.GetOptions{})
- if err == nil {
- log.Infof("Config: %s ConfigMap found, using config files from the configmap", CustomConfigMapName)
- return cMap, ""
+
+	// if the ConfigMap exists, return it
+ if cm, err := getOperatorConfigMap(clientset, namespace); err == nil {
+ log.Infof("Config: %q ConfigMap found, using config files from the configmap", CustomConfigMapName)
+ return cm, nil
+ }
+
+ // otherwise, create a ConfigMap
+ log.Infof("Config: %q ConfigMap NOT found, creating ConfigMap from files from %q", CustomConfigMapName, defaultConfigPath)
+
+ cm := &v1.ConfigMap{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: CustomConfigMapName,
+ },
+ Data: map[string]string{},
+ }
+
+ // get all of the file names that are in the default configuration directory
+ if err := filepath.Walk(defaultConfigPath, func(path string, info os.FileInfo, err error) error {
+ if err != nil {
+ return err
+ }
+
+ // skip if a directory
+ if info.IsDir() {
+ return nil
+ }
+
+	// read the contents of each default configuration file and load it into
+	// the ConfigMap
+	contents, err := ioutil.ReadFile(path)
+	if err != nil {
+		return err
+	}
+
+	cm.Data[info.Name()] = string(contents)
+
+ return nil
+ }); err != nil {
+ return nil, err
+ }
+
+ // create the ConfigMap. If the error is that the ConfigMap was already
+ // created, then grab the new ConfigMap
+ if _, err := clientset.CoreV1().ConfigMaps(namespace).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
+ if kerrors.IsAlreadyExists(err) {
+ return getOperatorConfigMap(clientset, namespace)
+ }
+
+ return nil, err
}
- log.Infof("Config: %s ConfigMap NOT found, using default baked-in config files from %s", CustomConfigMapName, DefaultConfigsPath)
- return nil, DefaultConfigsPath
+ return cm, nil
}
// LoadTemplate will load a JSON template from a path
-func (c *PgoConfig) LoadTemplate(cMap *v1.ConfigMap, rootPath, path string) (*template.Template, error) {
+func (c *PgoConfig) LoadTemplate(cMap *v1.ConfigMap, path string) (*template.Template, error) {
var value string
var err error
@@ -771,7 +815,7 @@ func (c *PgoConfig) LoadTemplate(cMap *v1.ConfigMap, rootPath, path string) (*te
func (c *PgoConfig) DefaultTemplate(path string) (string, error) {
// set the lookup value for the file path based on the default configuration
// path and the template file requested to be loaded
- fullPath := DefaultConfigsPath + path
+ fullPath := defaultConfigPath + path
log.Debugf("No entry in cmap loading default path [%s]", fullPath)
From 05191d4232b753683ab5380097185b33c9cd6711 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 9 Nov 2020 15:01:08 -0500
Subject: [PATCH 005/276] Remove pgBackRest Secret creation from Helm chart
example
This is no longer needed, as the Operator will reconcile this
information if it is missing.
---
.../templates/backrest-repo-config.yaml | 16 ----------------
1 file changed, 16 deletions(-)
delete mode 100644 examples/helm/create-cluster/templates/backrest-repo-config.yaml
diff --git a/examples/helm/create-cluster/templates/backrest-repo-config.yaml b/examples/helm/create-cluster/templates/backrest-repo-config.yaml
deleted file mode 100644
index 166d0b3dcd..0000000000
--- a/examples/helm/create-cluster/templates/backrest-repo-config.yaml
+++ /dev/null
@@ -1,16 +0,0 @@
-apiVersion: v1
-data:
- authorized_keys: {{ .Files.Get "certs/hippo-key.pub" | b64enc }}
- config: SG9zdCAqClN0cmljdEhvc3RLZXlDaGVja2luZyBubwpJZGVudGl0eUZpbGUgL3RtcC9pZF9lZDI1NTE5ClBvcnQgMjAyMgpVc2VyIHBnYmFja3Jlc3QK
- id_ed25519: {{ .Files.Get "certs/hippo-key" | b64enc }}
- ssh_host_ed25519_key: {{ .Files.Get "certs/hippo-key" | b64enc }}
- sshd_config: IwkkT3BlbkJTRDogc3NoZF9jb25maWcsdiAxLjEwMCAyMDE2LzA4LzE1IDEyOjMyOjA0IG5hZGR5IEV4cCAkCgojIFRoaXMgaXMgdGhlIHNzaGQgc2VydmVyIHN5c3RlbS13aWRlIGNvbmZpZ3VyYXRpb24gZmlsZS4gIFNlZQojIHNzaGRfY29uZmlnKDUpIGZvciBtb3JlIGluZm9ybWF0aW9uLgoKIyBUaGlzIHNzaGQgd2FzIGNvbXBpbGVkIHdpdGggUEFUSD0vdXNyL2xvY2FsL2JpbjovdXNyL2JpbgoKIyBUaGUgc3RyYXRlZ3kgdXNlZCBmb3Igb3B0aW9ucyBpbiB0aGUgZGVmYXVsdCBzc2hkX2NvbmZpZyBzaGlwcGVkIHdpdGgKIyBPcGVuU1NIIGlzIHRvIHNwZWNpZnkgb3B0aW9ucyB3aXRoIHRoZWlyIGRlZmF1bHQgdmFsdWUgd2hlcmUKIyBwb3NzaWJsZSwgYnV0IGxlYXZlIHRoZW0gY29tbWVudGVkLiAgVW5jb21tZW50ZWQgb3B0aW9ucyBvdmVycmlkZSB0aGUKIyBkZWZhdWx0IHZhbHVlLgoKIyBJZiB5b3Ugd2FudCB0byBjaGFuZ2UgdGhlIHBvcnQgb24gYSBTRUxpbnV4IHN5c3RlbSwgeW91IGhhdmUgdG8gdGVsbAojIFNFTGludXggYWJvdXQgdGhpcyBjaGFuZ2UuCiMgc2VtYW5hZ2UgcG9ydCAtYSAtdCBzc2hfcG9ydF90IC1wIHRjcCAjUE9SVE5VTUJFUgojClBvcnQgMjAyMgojQWRkcmVzc0ZhbWlseSBhbnkKI0xpc3RlbkFkZHJlc3MgMC4wLjAuMAojTGlzdGVuQWRkcmVzcyA6OgoKSG9zdEtleSAvc3NoZC9zc2hfaG9zdF9lZDI1NTE5X2tleQoKIyBDaXBoZXJzIGFuZCBrZXlpbmcKI1Jla2V5TGltaXQgZGVmYXVsdCBub25lCgojIExvZ2dpbmcKI1N5c2xvZ0ZhY2lsaXR5IEFVVEgKU3lzbG9nRmFjaWxpdHkgQVVUSFBSSVYKI0xvZ0xldmVsIElORk8KCiMgQXV0aGVudGljYXRpb246CgojTG9naW5HcmFjZVRpbWUgMm0KUGVybWl0Um9vdExvZ2luIG5vClN0cmljdE1vZGVzIG5vCiNNYXhBdXRoVHJpZXMgNgojTWF4U2Vzc2lvbnMgMTAKClB1YmtleUF1dGhlbnRpY2F0aW9uIHllcwoKIyBUaGUgZGVmYXVsdCBpcyB0byBjaGVjayBib3RoIC5zc2gvYXV0aG9yaXplZF9rZXlzIGFuZCAuc3NoL2F1dGhvcml6ZWRfa2V5czIKIyBidXQgdGhpcyBpcyBvdmVycmlkZGVuIHNvIGluc3RhbGxhdGlvbnMgd2lsbCBvbmx5IGNoZWNrIC5zc2gvYXV0aG9yaXplZF9rZXlzCiNBdXRob3JpemVkS2V5c0ZpbGUJL3BnY29uZi9hdXRob3JpemVkX2tleXMKQXV0aG9yaXplZEtleXNGaWxlCS9zc2hkL2F1dGhvcml6ZWRfa2V5cwoKI0F1dGhvcml6ZWRQcmluY2lwYWxzRmlsZSBub25lCgojQXV0aG9yaXplZEtleXNDb21tYW5kIG5vbmUKI0F1dGhvcml6ZWRLZXlzQ29tbWFuZFVzZXIgbm9ib2R5CgojIEZvciB0aGlzIHRvIHdvcmsgeW91IHdpbGwgYWxzbyBuZWVkIGhvc3Qga2V5cyBpbiAvZXRjL3NzaC9zc2hfa25vd25faG9zdHMKI0hvc3RiYXNlZEF1dGhlbnRpY2F0aW9uIG5vCiMgQ2hhbmdlIHRvIHllcyBpZiB5b3UgZG9uJ3QgdHJ1c3Qgfi8uc3NoL2tub3duX2hvc3RzIGZvcgojIEhvc3RiYXNlZEF1dGhlbnRpY2F0aW9uC
iNJZ25vcmVVc2VyS25vd25Ib3N0cyBubwojIERvbid0IHJlYWQgdGhlIHVzZXIncyB+Ly5yaG9zdHMgYW5kIH4vLnNob3N0cyBmaWxlcwojSWdub3JlUmhvc3RzIHllcwoKIyBUbyBkaXNhYmxlIHR1bm5lbGVkIGNsZWFyIHRleHQgcGFzc3dvcmRzLCBjaGFuZ2UgdG8gbm8gaGVyZSEKI1Bhc3N3b3JkQXV0aGVudGljYXRpb24geWVzCiNQZXJtaXRFbXB0eVBhc3N3b3JkcyBubwpQYXNzd29yZEF1dGhlbnRpY2F0aW9uIG5vCgojIENoYW5nZSB0byBubyB0byBkaXNhYmxlIHMva2V5IHBhc3N3b3JkcwpDaGFsbGVuZ2VSZXNwb25zZUF1dGhlbnRpY2F0aW9uIHllcwojQ2hhbGxlbmdlUmVzcG9uc2VBdXRoZW50aWNhdGlvbiBubwoKIyBLZXJiZXJvcyBvcHRpb25zCiNLZXJiZXJvc0F1dGhlbnRpY2F0aW9uIG5vCiNLZXJiZXJvc09yTG9jYWxQYXNzd2QgeWVzCiNLZXJiZXJvc1RpY2tldENsZWFudXAgeWVzCiNLZXJiZXJvc0dldEFGU1Rva2VuIG5vCiNLZXJiZXJvc1VzZUt1c2Vyb2sgeWVzCgojIEdTU0FQSSBvcHRpb25zCiNHU1NBUElBdXRoZW50aWNhdGlvbiB5ZXMKI0dTU0FQSUNsZWFudXBDcmVkZW50aWFscyBubwojR1NTQVBJU3RyaWN0QWNjZXB0b3JDaGVjayB5ZXMKI0dTU0FQSUtleUV4Y2hhbmdlIG5vCiNHU1NBUElFbmFibGVrNXVzZXJzIG5vCgojIFNldCB0aGlzIHRvICd5ZXMnIHRvIGVuYWJsZSBQQU0gYXV0aGVudGljYXRpb24sIGFjY291bnQgcHJvY2Vzc2luZywKIyBhbmQgc2Vzc2lvbiBwcm9jZXNzaW5nLiBJZiB0aGlzIGlzIGVuYWJsZWQsIFBBTSBhdXRoZW50aWNhdGlvbiB3aWxsCiMgYmUgYWxsb3dlZCB0aHJvdWdoIHRoZSBDaGFsbGVuZ2VSZXNwb25zZUF1dGhlbnRpY2F0aW9uIGFuZAojIFBhc3N3b3JkQXV0aGVudGljYXRpb24uICBEZXBlbmRpbmcgb24geW91ciBQQU0gY29uZmlndXJhdGlvbiwKIyBQQU0gYXV0aGVudGljYXRpb24gdmlhIENoYWxsZW5nZVJlc3BvbnNlQXV0aGVudGljYXRpb24gbWF5IGJ5cGFzcwojIHRoZSBzZXR0aW5nIG9mICJQZXJtaXRSb290TG9naW4gd2l0aG91dC1wYXNzd29yZCIuCiMgSWYgeW91IGp1c3Qgd2FudCB0aGUgUEFNIGFjY291bnQgYW5kIHNlc3Npb24gY2hlY2tzIHRvIHJ1biB3aXRob3V0CiMgUEFNIGF1dGhlbnRpY2F0aW9uLCB0aGVuIGVuYWJsZSB0aGlzIGJ1dCBzZXQgUGFzc3dvcmRBdXRoZW50aWNhdGlvbgojIGFuZCBDaGFsbGVuZ2VSZXNwb25zZUF1dGhlbnRpY2F0aW9uIHRvICdubycuCiMgV0FSTklORzogJ1VzZVBBTSBubycgaXMgbm90IHN1cHBvcnRlZCBpbiBSZWQgSGF0IEVudGVycHJpc2UgTGludXggYW5kIG1heSBjYXVzZSBzZXZlcmFsCiMgcHJvYmxlbXMuClVzZVBBTSB5ZXMKCiNBbGxvd0FnZW50Rm9yd2FyZGluZyB5ZXMKI0FsbG93VGNwRm9yd2FyZGluZyB5ZXMKI0dhdGV3YXlQb3J0cyBubwpYMTFGb3J3YXJkaW5nIHllcwojWDExRGlzcGxheU9mZnNldCAxMAojWDExVXNlTG9jYWxob3N0IHllcwojUGVybWl0VFRZIHllcwojUHJpbnRNb3RkIHllcwojU
HJpbnRMYXN0TG9nIHllcwojVENQS2VlcEFsaXZlIHllcwojVXNlTG9naW4gbm8KVXNlUHJpdmlsZWdlU2VwYXJhdGlvbiBubwojUGVybWl0VXNlckVudmlyb25tZW50IG5vCiNDb21wcmVzc2lvbiBkZWxheWVkCiNDbGllbnRBbGl2ZUludGVydmFsIDAKI0NsaWVudEFsaXZlQ291bnRNYXggMwojU2hvd1BhdGNoTGV2ZWwgbm8KI1VzZUROUyB5ZXMKI1BpZEZpbGUgL3Zhci9ydW4vc3NoZC5waWQKI01heFN0YXJ0dXBzIDEwOjMwOjEwMAojUGVybWl0VHVubmVsIG5vCiNDaHJvb3REaXJlY3Rvcnkgbm9uZQojVmVyc2lvbkFkZGVuZHVtIG5vbmUKCiMgbm8gZGVmYXVsdCBiYW5uZXIgcGF0aAojQmFubmVyIG5vbmUKCiMgQWNjZXB0IGxvY2FsZS1yZWxhdGVkIGVudmlyb25tZW50IHZhcmlhYmxlcwpBY2NlcHRFbnYgTEFORyBMQ19DVFlQRSBMQ19OVU1FUklDIExDX1RJTUUgTENfQ09MTEFURSBMQ19NT05FVEFSWSBMQ19NRVNTQUdFUwpBY2NlcHRFbnYgTENfUEFQRVIgTENfTkFNRSBMQ19BRERSRVNTIExDX1RFTEVQSE9ORSBMQ19NRUFTVVJFTUVOVApBY2NlcHRFbnYgTENfSURFTlRJRklDQVRJT04gTENfQUxMIExBTkdVQUdFCkFjY2VwdEVudiBYTU9ESUZJRVJTCgojIG92ZXJyaWRlIGRlZmF1bHQgb2Ygbm8gc3Vic3lzdGVtcwpTdWJzeXN0ZW0Jc2Z0cAkvdXNyL2xpYmV4ZWMvb3BlbnNzaC9zZnRwLXNlcnZlcgoKIyBFeGFtcGxlIG9mIG92ZXJyaWRpbmcgc2V0dGluZ3Mgb24gYSBwZXItdXNlciBiYXNpcwojTWF0Y2ggVXNlciBhbm9uY3ZzCiMJWDExRm9yd2FyZGluZyBubwojCUFsbG93VGNwRm9yd2FyZGluZyBubwojCVBlcm1pdFRUWSBubwojCUZvcmNlQ29tbWFuZCBjdnMgc2VydmVyCg==
-kind: Secret
-metadata:
- labels:
- pg-cluster: {{ .Values.pgclustername }}
- pgo-backrest-repo: "true"
- vendor: crunchydata
- name: {{ .Values.pgclustername }}-backrest-repo-config
- namespace: {{ .Values.namespace }}
-type: Opaque
From c0588be9d7df3ffc7c2c8ba9190a69db558c6cae Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 9 Nov 2020 18:11:57 -0500
Subject: [PATCH 006/276] Indicate support for PL/Perl
PL/Perl support was added to the PostGIS-enabled containers; as such,
the Operator can now support it.
Issue: CrunchyData/crunchy-containers#1287
---
README.md | 1 +
docs/content/_index.md | 1 +
2 files changed, 2 insertions(+)
diff --git a/README.md b/README.md
index f94ec04a6c..a58837efa6 100644
--- a/README.md
+++ b/README.md
@@ -189,6 +189,7 @@ There is also a `pgo-client` container if you wish to deploy the client directly
- [PostgreSQL](https://www.postgresql.org)
- [PostgreSQL Contrib Modules](https://www.postgresql.org/docs/current/contrib.html)
- [PL/Python + PL/Python 3](https://www.postgresql.org/docs/current/plpython.html)
+ - [PL/Perl](https://www.postgresql.org/docs/current/plperl.html)
- [pgAudit](https://www.pgaudit.org/)
- [pgAudit Analyze](https://github.com/pgaudit/pgaudit_analyze)
- [pgnodemx](https://github.com/CrunchyData/pgnodemx)
diff --git a/docs/content/_index.md b/docs/content/_index.md
index f83a7c49e1..b879f3db2a 100644
--- a/docs/content/_index.md
+++ b/docs/content/_index.md
@@ -104,6 +104,7 @@ The Crunchy PostgreSQL Operator extends Kubernetes to provide a higher-level abs
- [PostgreSQL](https://www.postgresql.org)
- [PostgreSQL Contrib Modules](https://www.postgresql.org/docs/current/contrib.html)
- [PL/Python + PL/Python 3](https://www.postgresql.org/docs/current/plpython.html)
+ - [PL/Perl](https://www.postgresql.org/docs/current/plperl.html)
- [pgAudit](https://www.pgaudit.org/)
- [pgAudit Analyze](https://github.com/pgaudit/pgaudit_analyze)
- [pgnodemx](https://github.com/CrunchyData/pgnodemx)
From 70d17aaff3cf1bad47a871ad0c492d6020877a25 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 10 Nov 2020 08:34:09 -0500
Subject: [PATCH 007/276] Add explanation for transient monitoring issue
This issue only occurs during the initialization phase of a
PostgreSQL cluster, but can be confusing, as it appears to indicate
an error.
Issue: #1962
---
docs/content/tutorial/create-cluster.md | 23 ++++++++++++++++++++++
docs/content/tutorial/customize-cluster.md | 23 ++++++++++++++++++++++
2 files changed, 46 insertions(+)
diff --git a/docs/content/tutorial/create-cluster.md b/docs/content/tutorial/create-cluster.md
index eeb798faf5..423f0a4dbb 100644
--- a/docs/content/tutorial/create-cluster.md
+++ b/docs/content/tutorial/create-cluster.md
@@ -120,6 +120,29 @@ Also ensure that you have enough persistent volumes available: your Kubernetes a
The most common occurrence of this is due to the Kubernetes network blocking SSH connections between Pods. Ensure that your Kubernetes networking layer allows for SSH connections over port 2022 in the Namespace that you are deploying your PostgreSQL clusters into.
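+
+If you use NetworkPolicies, the following sketch shows a policy allowing
+ingress on the pgBackRest SSH port. The namespace and the empty Pod selector
+are assumptions; adjust them to match your environment:
+
+```
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-pgbackrest-ssh
+  namespace: pgo    # assumed namespace
+spec:
+  podSelector: {}   # applies to all Pods in the namespace
+  ingress:
+  - ports:
+    - protocol: TCP
+      port: 2022
+```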
+### PostgreSQL Pod reports "Authentication Failed for `ccp_monitoring`"
+
+This is a transient error that can occur when a new PostgreSQL cluster is first
+initialized with the `--metrics` flag. The `crunchy-postgres-exporter` container
+within the PostgreSQL Pod may become ready before the PostgreSQL container is
+ready. If a message further down in your logs displays a timestamp, e.g.:
+
+```
+ now
+-------------------------------
+2020-11-10 08:23:15.968196-05
+```
+
+Then the `ccp_monitoring` user is properly reconciled with the PostgreSQL
+cluster.
+
+If the error message does not go away, this could indicate a few things:
+
+- The PostgreSQL instance has not initialized. Check to ensure that PostgreSQL
+has successfully started.
+- The password for the `ccp_monitoring` user has changed. In this case you will
+need to update the Secret with the monitoring credentials.
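+
+As a sketch, updating the monitoring credentials might look like the
+following. The Secret name and namespace below are assumptions; check
+`kubectl get secrets` in your cluster's namespace for the actual name:
+
+```
+apiVersion: v1
+kind: Secret
+metadata:
+  name: hippo-exporter-secret  # assumed name; verify in your cluster
+  namespace: pgo               # assumed namespace
+type: Opaque
+stringData:
+  username: ccp_monitoring
+  password: new-password-here  # replace with the new password
+```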
+
## Next Steps
Once your cluster is created, the next step is to [connect to your PostgreSQL cluster]({{< relref "tutorial/connect-cluster.md" >}}). You can also [learn how to customize your PostgreSQL cluster]({{< relref "tutorial/customize-cluster.md" >}})!
diff --git a/docs/content/tutorial/customize-cluster.md b/docs/content/tutorial/customize-cluster.md
index e9be31c268..2fee92bb0a 100644
--- a/docs/content/tutorial/customize-cluster.md
+++ b/docs/content/tutorial/customize-cluster.md
@@ -184,6 +184,29 @@ There are many reasons why a PostgreSQL Pod may not be scheduled:
- **Node affinity rules cannot be satisfied**. If you assigned a node label, ensure that the Nodes with that label are available for scheduling. If they are, ensure that there are enough resources available.
- **Pod anti-affinity rules cannot be satisfied**. This most likely happens when [pod anti-affinity]({{< relref "architecture/high-availability/_index.md" >}}#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity) is set to `required` and there are not enough Nodes available for scheduling. Consider adding more Nodes or relaxing your anti-affinity rules.
+### PostgreSQL Pod reports "Authentication Failed for `ccp_monitoring`"
+
+This is a transient error that can occur when a new PostgreSQL cluster is first
+initialized with the `--metrics` flag. The `crunchy-postgres-exporter` container
+within the PostgreSQL Pod may become ready before the PostgreSQL container is
+ready. If a message further down in your logs displays a timestamp, e.g.:
+
+```
+ now
+-------------------------------
+2020-11-10 08:23:15.968196-05
+```
+
+Then the `ccp_monitoring` user is properly reconciled with the PostgreSQL
+cluster.
+
+If the error message does not go away, this could indicate a few things:
+
+- The PostgreSQL instance has not initialized. Check to ensure that PostgreSQL
+has successfully started.
+- The password for the `ccp_monitoring` user has changed. In this case you will
+need to update the Secret with the monitoring credentials.
+
## Next Steps
As mentioned at the beginning, there are a lot more customizations that you can make to your PostgreSQL cluster, and we will cover those as the tutorial progresses! This section was to get you familiar with some of the most common customizations, and to explore how many options `pgo create cluster` has!
From b25c6b1a6e646380f4847de3c26f87cabb503188 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 10 Nov 2020 09:12:40 -0500
Subject: [PATCH 008/276] Bump to v4.5.1
However, the README is staying on v4.5.0 to avoid confusion for
people trying to install the Operator before the v4.5.1 containers
are made available.
---
Makefile | 4 +-
bin/push-ccp-to-gcr.sh | 2 +-
conf/postgres-operator/pgo.yaml | 4 +-
docs/config.toml | 16 ++++----
docs/content/Configuration/compatibility.md | 10 ++++-
docs/content/releases/4.5.1.md | 38 +++++++++++++++++++
docs/content/tutorial/pgbouncer.md | 2 +-
examples/create-by-resource/fromcrd.json | 6 +--
examples/envs.sh | 2 +-
.../create-cluster/templates/pgcluster.yaml | 2 +-
examples/helm/create-cluster/values.yaml | 4 +-
installers/ansible/README.md | 2 +-
installers/ansible/values.yaml | 6 +--
installers/gcp-marketplace/Makefile | 2 +-
installers/gcp-marketplace/README.md | 2 +-
installers/gcp-marketplace/values.yaml | 6 +--
installers/helm/Chart.yaml | 2 +-
installers/helm/values.yaml | 6 +--
installers/kubectl/client-setup.sh | 2 +-
.../kubectl/postgres-operator-ocp311.yml | 8 ++--
installers/kubectl/postgres-operator.yml | 8 ++--
installers/metrics/ansible/README.md | 2 +-
installers/metrics/helm/Chart.yaml | 2 +-
installers/metrics/helm/helm_template.yaml | 2 +-
installers/metrics/helm/values.yaml | 2 +-
.../postgres-operator-metrics-ocp311.yml | 2 +-
.../kubectl/postgres-operator-metrics.yml | 2 +-
installers/olm/Makefile | 4 +-
pkg/apis/crunchydata.com/v1/doc.go | 8 ++--
pkg/apiservermsgs/common.go | 2 +-
redhat/atomic/help.1 | 2 +-
redhat/atomic/help.md | 2 +-
32 files changed, 105 insertions(+), 59 deletions(-)
create mode 100644 docs/content/releases/4.5.1.md
diff --git a/Makefile b/Makefile
index df717f4a70..19e150820a 100644
--- a/Makefile
+++ b/Makefile
@@ -5,9 +5,9 @@ PGOROOT ?= $(CURDIR)
PGO_BASEOS ?= centos7
PGO_IMAGE_PREFIX ?= crunchydata
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
-PGO_VERSION ?= 4.5.0
+PGO_VERSION ?= 4.5.1
PGO_PG_VERSION ?= 12
-PGO_PG_FULLVERSION ?= 12.4
+PGO_PG_FULLVERSION ?= 12.5
PGO_BACKREST_VERSION ?= 2.29
PACKAGER ?= yum
diff --git a/bin/push-ccp-to-gcr.sh b/bin/push-ccp-to-gcr.sh
index 3b9de84ed0..59e2e329e8 100755
--- a/bin/push-ccp-to-gcr.sh
+++ b/bin/push-ccp-to-gcr.sh
@@ -16,7 +16,7 @@
GCR_IMAGE_PREFIX=gcr.io/crunchy-dev-test
CCP_IMAGE_PREFIX=crunchydata
-CCP_IMAGE_TAG=centos7-12.4-4.5.0
+CCP_IMAGE_TAG=centos7-12.5-4.5.1
IMAGES=(
crunchy-prometheus
diff --git a/conf/postgres-operator/pgo.yaml b/conf/postgres-operator/pgo.yaml
index ff5c97ec7f..622fc268f8 100644
--- a/conf/postgres-operator/pgo.yaml
+++ b/conf/postgres-operator/pgo.yaml
@@ -2,7 +2,7 @@ Cluster:
CCPImagePrefix: registry.developers.crunchydata.com/crunchydata
Metrics: false
Badger: false
- CCPImageTag: centos7-12.4-4.5.0
+ CCPImageTag: centos7-12.5-4.5.1
Port: 5432
PGBadgerPort: 10000
ExporterPort: 9187
@@ -82,4 +82,4 @@ Storage:
Pgo:
Audit: false
PGOImagePrefix: registry.developers.crunchydata.com/crunchydata
- PGOImageTag: centos7-4.5.0
+ PGOImageTag: centos7-4.5.1
diff --git a/docs/config.toml b/docs/config.toml
index 48ef2d760d..dc39d49daa 100644
--- a/docs/config.toml
+++ b/docs/config.toml
@@ -25,14 +25,14 @@ disableNavChevron = false # set true to hide next/prev chevron, default is false
highlightClientSide = false # set true to use highlight.pack.js instead of the default hugo chroma highlighter
menushortcutsnewtab = true # set true to open shortcuts links to a new tab/window
enableGitInfo = true
-operatorVersion = "4.5.0"
-postgresVersion = "12.4"
-postgresVersion13 = "13.0"
-postgresVersion12 = "12.4"
-postgresVersion11 = "11.9"
-postgresVersion10 = "10.14"
-postgresVersion96 = "9.6.19"
-postgresVersion95 = "9.5.23"
+operatorVersion = "4.5.1"
+postgresVersion = "12.5"
+postgresVersion13 = "13.1"
+postgresVersion12 = "12.5"
+postgresVersion11 = "11.10"
+postgresVersion10 = "10.15"
+postgresVersion96 = "9.6.20"
+postgresVersion95 = "9.5.24"
postgisVersion = "3.0"
centosBase = "centos7"
diff --git a/docs/content/Configuration/compatibility.md b/docs/content/Configuration/compatibility.md
index b805ef9c08..43af7021db 100644
--- a/docs/content/Configuration/compatibility.md
+++ b/docs/content/Configuration/compatibility.md
@@ -12,7 +12,15 @@ version dependencies between the two projects. Below are the operator releases a
| Operator Release | Container Release | Postgres | PgBackrest Version
|:----------|:-------------|:------------|:--------------
-| 4.5.0 | 4.5.0 | 12.4 | 2.29 |
+| 4.5.1 | 4.5.1 | 13.1 | 2.29 |
+|||12.5|2.29|
+|||11.10|2.29|
+|||10.15|2.29|
+|||9.6.20|2.29|
+|||9.5.24|2.29|
+||||
+| 4.5.0 | 4.5.0 | 13.0 | 2.29 |
+|||12.4|2.29|
|||11.9|2.29|
|||10.14|2.29|
|||9.6.19|2.29|
diff --git a/docs/content/releases/4.5.1.md b/docs/content/releases/4.5.1.md
new file mode 100644
index 0000000000..eeed22c013
--- /dev/null
+++ b/docs/content/releases/4.5.1.md
@@ -0,0 +1,38 @@
+---
+title: "4.5.1"
+date:
+draft: false
+weight: 69
+---
+
+Crunchy Data announces the release of the PostgreSQL Operator 4.5.1 on November 13, 2020.
+
+The PostgreSQL Operator is released in conjunction with the [Crunchy Container Suite](https://github.com/CrunchyData/crunchy-containers/).
+
+The PostgreSQL Operator 4.5.1 release includes the following software version upgrades:
+
+- [PostgreSQL](https://www.postgresql.org) is now at versions 13.1, 12.5, 11.10, 10.15, 9.6.20, and 9.5.24.
+- [Patroni](https://patroni.readthedocs.io/) is now at version 2.0.1.
+- PL/Perl can now be used in the PostGIS-enabled containers.
+
+## Changes
+
+- Simplified creation of a PostgreSQL cluster from a `pgcluster` resource. A user no longer has to provide a pgBackRest repository Secret: the Postgres Operator will now automatically generate this.
+- The exposed ports for Services associated with a cluster are now available from the `pgo show cluster` command.
+- If the `pgo-config` ConfigMap is not created during the installation of the Postgres Operator, the Postgres Operator will generate one when it initializes.
+- Providing a value for `pgo_admin_password` in the installer is now optional. If no value is provided, the password for the initial administrative user is randomly generated.
+- Added an example for how to create a PostgreSQL cluster that uses S3 for pgBackRest backups via a custom resource.
+
+## Fixes
+
+- Fix readiness check for a standby leader. Previously, the standby leader would not report as ready, even though it was. Reported by Alec Rooney (@alrooney).
+- Properly determine whether a `pgcluster` custom resource creation has been processed by its corresponding Postgres Operator controller. This prevents the creation logic from processing the custom resource multiple times.
+- Prevent `initdb` (cluster reinitialization) from occurring if the PostgreSQL container cannot initialize while bootstrapping from an existing PGDATA directory.
+- Fix issue with UBI 8 / CentOS 8 when running a pgBackRest bootstrap or restore job, where duplicate "repo types" could be set. Specifically, this ensures the name of the repo type is set via the `PGBACKREST_REPO1_TYPE` environment variable. Reported by Alec Rooney (@alrooney).
+- Ensure external WAL and Tablespace PVCs are fully recreated during a restore. Reported by (@aurelien43).
+- Ensure `pgo show backup` will work regardless of the state of any of the PostgreSQL clusters. This pulls the information directly from the pgBackRest Pod itself. Reported by (@saltenhub).
+- Ensure that sidecars (e.g. metrics collection, pgAdmin 4, pgBouncer) are deployable when using the PostGIS-enabled PostgreSQL image. Reported by Jean-Denis Giguère (@jdenisgiguere).
+- Allow for special characters in pgBackRest environment variables. Reported by (@SockenSalat).
+- Ensure password for the `pgbouncer` administrative user stays synchronized between an existing Kubernetes Secret and PostgreSQL should the pgBouncer be recreated.
+- When uninstalling an instance of the Postgres Operator in a Kubernetes cluster that has multiple instances of the Postgres Operator, ensure that only the requested instance is uninstalled.
+- The logger no longer defaults to using a log level of `DEBUG`.
diff --git a/docs/content/tutorial/pgbouncer.md b/docs/content/tutorial/pgbouncer.md
index 89ba8ce993..0349ff1eaf 100644
--- a/docs/content/tutorial/pgbouncer.md
+++ b/docs/content/tutorial/pgbouncer.md
@@ -116,7 +116,7 @@ PGPASSWORD=randompassword psql -h localhost -p 5432 -U pgbouncer pgbouncer
You should see something similar to this:
```
-psql (12.4, server 1.14.0/bouncer)
+psql (12.5, server 1.14.0/bouncer)
Type "help" for help.
pgbouncer=#
diff --git a/examples/create-by-resource/fromcrd.json b/examples/create-by-resource/fromcrd.json
index 987ec53d55..ea603808ac 100644
--- a/examples/create-by-resource/fromcrd.json
+++ b/examples/create-by-resource/fromcrd.json
@@ -16,7 +16,7 @@
"pg-cluster": "fromcrd",
"pg-pod-anti-affinity": "",
"pgo-backrest": "true",
- "pgo-version": "4.5.0",
+ "pgo-version": "4.5.1",
"pgouser": "pgoadmin",
"primary": "true"
},
@@ -62,7 +62,7 @@
},
"backrestResources": {},
"ccpimage": "crunchy-postgres-ha",
- "ccpimagetag": "centos7-12.4-4.5.0",
+ "ccpimagetag": "centos7-12.5-4.5.1",
"clustername": "fromcrd",
"customconfig": "",
"database": "userdb",
@@ -95,7 +95,7 @@
"userlabels": {
"crunchy-postgres-exporter": "false",
"pg-pod-anti-affinity": "",
- "pgo-version": "4.5.0",
+ "pgo-version": "4.5.1",
"pgouser": "pgoadmin",
"pgo-backrest": "true"
},
diff --git a/examples/envs.sh b/examples/envs.sh
index 10758085bd..848a7252b5 100644
--- a/examples/envs.sh
+++ b/examples/envs.sh
@@ -20,7 +20,7 @@ export PGO_CONF_DIR=$PGOROOT/installers/ansible/roles/pgo-operator/files
# the version of the Operator you run is set by these vars
export PGO_IMAGE_PREFIX=registry.developers.crunchydata.com/crunchydata
export PGO_BASEOS=centos7
-export PGO_VERSION=4.5.0
+export PGO_VERSION=4.5.1
export PGO_IMAGE_TAG=$PGO_BASEOS-$PGO_VERSION
# for setting the pgo apiserver port, disabling TLS or not verifying TLS
diff --git a/examples/helm/create-cluster/templates/pgcluster.yaml b/examples/helm/create-cluster/templates/pgcluster.yaml
index 9dc5a4655d..0852b1f447 100644
--- a/examples/helm/create-cluster/templates/pgcluster.yaml
+++ b/examples/helm/create-cluster/templates/pgcluster.yaml
@@ -13,7 +13,7 @@ metadata:
pg-cluster: {{ .Values.pgclustername }}
pg-pod-anti-affinity: ""
pgo-backrest: "true"
- pgo-version: 4.5.0
+ pgo-version: 4.5.1
pgouser: admin
name: {{ .Values.pgclustername }}
namespace: {{ .Values.namespace }}
diff --git a/examples/helm/create-cluster/values.yaml b/examples/helm/create-cluster/values.yaml
index 09438cb745..4add0e560f 100644
--- a/examples/helm/create-cluster/values.yaml
+++ b/examples/helm/create-cluster/values.yaml
@@ -4,11 +4,11 @@
# The values is for the namespace and the postgresql cluster name
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
-ccpimagetag: centos7-12.4-4.5.0
+ccpimagetag: centos7-12.5-4.5.1
namespace: pgo
pgclustername: hippo
pgoimageprefix: registry.developers.crunchydata.com/crunchydata
-pgoversion: 4.5.0
+pgoversion: 4.5.1
hipposecretuser: "hippo"
hipposecretpassword: "Supersecurepassword*"
postgressecretuser: "postgres"
diff --git a/installers/ansible/README.md b/installers/ansible/README.md
index a9f0babd16..345d035037 100644
--- a/installers/ansible/README.md
+++ b/installers/ansible/README.md
@@ -4,7 +4,7 @@
-Latest Release: 4.5.0
+Latest Release: 4.5.1
## General
diff --git a/installers/ansible/values.yaml b/installers/ansible/values.yaml
index 4eb672bcec..ebce0ed751 100644
--- a/installers/ansible/values.yaml
+++ b/installers/ansible/values.yaml
@@ -17,7 +17,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos7-12.4-4.5.0"
+ccp_image_tag: "centos7-12.5-4.5.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -50,14 +50,14 @@ pgo_apiserver_url: "https://postgres-operator"
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
-pgo_client_version: "4.5.0"
+pgo_client_version: "4.5.1"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
-pgo_image_tag: "centos7-4.5.0"
+pgo_image_tag: "centos7-4.5.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
diff --git a/installers/gcp-marketplace/Makefile b/installers/gcp-marketplace/Makefile
index 5f4f0c6eb1..f10f5b7c27 100644
--- a/installers/gcp-marketplace/Makefile
+++ b/installers/gcp-marketplace/Makefile
@@ -6,7 +6,7 @@ MARKETPLACE_TOOLS ?= gcr.io/cloud-marketplace-tools/k8s/dev:$(MARKETPLACE_VERSIO
MARKETPLACE_VERSION ?= 0.9.4
KUBECONFIG ?= $(HOME)/.kube/config
PARAMETERS ?= {}
-PGO_VERSION ?= 4.5.0
+PGO_VERSION ?= 4.5.1
IMAGE_BUILD_ARGS = --build-arg MARKETPLACE_VERSION='$(MARKETPLACE_VERSION)' \
--build-arg PGO_VERSION='$(PGO_VERSION)'
diff --git a/installers/gcp-marketplace/README.md b/installers/gcp-marketplace/README.md
index fd686764ad..af2e60f80c 100644
--- a/installers/gcp-marketplace/README.md
+++ b/installers/gcp-marketplace/README.md
@@ -59,7 +59,7 @@ Google Cloud Marketplace.
```shell
IMAGE_REPOSITORY=gcr.io/crunchydata-public/postgres-operator
- export PGO_VERSION=4.5.0
+ export PGO_VERSION=4.5.1
export INSTALLER_IMAGE=${IMAGE_REPOSITORY}/deployer:${PGO_VERSION}
export OPERATOR_IMAGE=${IMAGE_REPOSITORY}:${PGO_VERSION}
export OPERATOR_IMAGE_API=${IMAGE_REPOSITORY}/pgo-apiserver:${PGO_VERSION}
diff --git a/installers/gcp-marketplace/values.yaml b/installers/gcp-marketplace/values.yaml
index cb0840b35b..e2ed852df1 100644
--- a/installers/gcp-marketplace/values.yaml
+++ b/installers/gcp-marketplace/values.yaml
@@ -10,7 +10,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos7-12.4-4.5.0"
+ccp_image_tag: "centos7-12.5-4.5.1"
create_rbac: "true"
db_name: ""
db_password_age_days: "0"
@@ -32,9 +32,9 @@ pgo_admin_role_name: "pgoadmin"
pgo_admin_username: "admin"
pgo_client_container_install: "false"
pgo_client_install: 'false'
-pgo_client_version: "4.5.0"
+pgo_client_version: "4.5.1"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos7-4.5.0"
+pgo_image_tag: "centos7-4.5.1"
pgo_installation_name: '${OPERATOR_NAME}'
pgo_operator_namespace: '${OPERATOR_NAMESPACE}'
scheduler_timeout: "3600"
diff --git a/installers/helm/Chart.yaml b/installers/helm/Chart.yaml
index 6d7ffeaa30..e7a55444cb 100644
--- a/installers/helm/Chart.yaml
+++ b/installers/helm/Chart.yaml
@@ -3,7 +3,7 @@ name: postgres-operator
description: Crunchy PostgreSQL Operator Helm chart for Kubernetes
type: application
version: 0.1.0
-appVersion: 4.5.0
+appVersion: 4.5.1
home: https://github.com/CrunchyData/postgres-operator
icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png
keywords:
diff --git a/installers/helm/values.yaml b/installers/helm/values.yaml
index 649436e0af..b2c5d441b2 100644
--- a/installers/helm/values.yaml
+++ b/installers/helm/values.yaml
@@ -37,7 +37,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos7-12.4-4.5.0"
+ccp_image_tag: "centos7-12.5-4.5.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -70,14 +70,14 @@ pgo_apiserver_url: "https://postgres-operator"
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
-pgo_client_version: "4.5.0"
+pgo_client_version: "4.5.1"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
-pgo_image_tag: "centos7-4.5.0"
+pgo_image_tag: "centos7-4.5.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
diff --git a/installers/kubectl/client-setup.sh b/installers/kubectl/client-setup.sh
index 6956d63f6b..496f25abd4 100755
--- a/installers/kubectl/client-setup.sh
+++ b/installers/kubectl/client-setup.sh
@@ -14,7 +14,7 @@
# This script should be run after the operator has been deployed
PGO_OPERATOR_NAMESPACE="${PGO_OPERATOR_NAMESPACE:-pgo}"
PGO_USER_ADMIN="${PGO_USER_ADMIN:-pgouser-admin}"
-PGO_CLIENT_VERSION="${PGO_CLIENT_VERSION:-v4.5.0}"
+PGO_CLIENT_VERSION="${PGO_CLIENT_VERSION:-v4.5.1}"
PGO_CLIENT_URL="https://github.com/CrunchyData/postgres-operator/releases/download/${PGO_CLIENT_VERSION}"
PGO_CMD="${PGO_CMD-kubectl}"
diff --git a/installers/kubectl/postgres-operator-ocp311.yml b/installers/kubectl/postgres-operator-ocp311.yml
index 9978d052d3..977c9ea790 100644
--- a/installers/kubectl/postgres-operator-ocp311.yml
+++ b/installers/kubectl/postgres-operator-ocp311.yml
@@ -44,7 +44,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos7-12.4-4.5.0"
+ ccp_image_tag: "centos7-12.5-4.5.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -77,14 +77,14 @@ data:
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
- pgo_client_version: "4.5.0"
+ pgo_client_version: "4.5.1"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
- pgo_image_tag: "centos7-4.5.0"
+ pgo_image_tag: "centos7-4.5.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
@@ -161,7 +161,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.0
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/kubectl/postgres-operator.yml b/installers/kubectl/postgres-operator.yml
index 2b516ef2ca..971e436d20 100644
--- a/installers/kubectl/postgres-operator.yml
+++ b/installers/kubectl/postgres-operator.yml
@@ -138,7 +138,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos7-12.4-4.5.0"
+ ccp_image_tag: "centos7-12.5-4.5.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -171,14 +171,14 @@ data:
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
- pgo_client_version: "4.5.0"
+ pgo_client_version: "4.5.1"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
- pgo_image_tag: "centos7-4.5.0"
+ pgo_image_tag: "centos7-4.5.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
@@ -268,7 +268,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.0
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/metrics/ansible/README.md b/installers/metrics/ansible/README.md
index 1c047d1a85..57f68cd878 100644
--- a/installers/metrics/ansible/README.md
+++ b/installers/metrics/ansible/README.md
@@ -4,7 +4,7 @@
-Latest Release: 4.5.0
+Latest Release: 4.5.1
## General
diff --git a/installers/metrics/helm/Chart.yaml b/installers/metrics/helm/Chart.yaml
index 603cab3982..520204c2d1 100644
--- a/installers/metrics/helm/Chart.yaml
+++ b/installers/metrics/helm/Chart.yaml
@@ -3,6 +3,6 @@ name: postgres-operator-monitoring
description: Install for Crunchy PostgreSQL Operator Monitoring
type: application
version: 0.1.0
-appVersion: 4.5.0
+appVersion: 4.5.1
home: https://github.com/CrunchyData/postgres-operator
icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png
\ No newline at end of file
diff --git a/installers/metrics/helm/helm_template.yaml b/installers/metrics/helm/helm_template.yaml
index d5e346dbc7..b328adba55 100644
--- a/installers/metrics/helm/helm_template.yaml
+++ b/installers/metrics/helm/helm_template.yaml
@@ -20,5 +20,5 @@ serviceAccount:
# the image prefix and tag to use for the 'pgo-deployer' container
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos7-4.5.0"
+pgo_image_tag: "centos7-4.5.1"
diff --git a/installers/metrics/helm/values.yaml b/installers/metrics/helm/values.yaml
index 9f2ecefb63..616001b5ec 100644
--- a/installers/metrics/helm/values.yaml
+++ b/installers/metrics/helm/values.yaml
@@ -20,7 +20,7 @@ serviceAccount:
# the image prefix and tag to use for the 'pgo-deployer' container
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos7-4.5.0"
+pgo_image_tag: "centos7-4.5.1"
# =====================
# Configuration Options
diff --git a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml b/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
index ca4daafd16..f4643fc126 100644
--- a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
+++ b/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
@@ -96,7 +96,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-metrics-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.0
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/metrics/kubectl/postgres-operator-metrics.yml b/installers/metrics/kubectl/postgres-operator-metrics.yml
index e1cc94fd5a..313698aaeb 100644
--- a/installers/metrics/kubectl/postgres-operator-metrics.yml
+++ b/installers/metrics/kubectl/postgres-operator-metrics.yml
@@ -165,7 +165,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-metrics-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.0
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/olm/Makefile b/installers/olm/Makefile
index 9fefafd441..ebc81698fa 100644
--- a/installers/olm/Makefile
+++ b/installers/olm/Makefile
@@ -2,7 +2,7 @@
.SUFFIXES:
CCP_IMAGE_PREFIX ?= registry.developers.crunchydata.com/crunchydata
-CCP_PG_FULLVERSION ?= 12.4
+CCP_PG_FULLVERSION ?= 12.5
CCP_POSTGIS_VERSION ?= 3.0
CONTAINER ?= docker
KUBECONFIG ?= $(HOME)/.kube/config
@@ -11,7 +11,7 @@ OLM_TOOLS ?= registry.localhost:5000/postgres-operator-olm-tools:$(OLM_SDK_VERSI
OLM_VERSION ?= 0.15.1
PGO_BASEOS ?= centos7
PGO_IMAGE_PREFIX ?= registry.developers.crunchydata.com/crunchydata
-PGO_VERSION ?= 4.5.0
+PGO_VERSION ?= 4.5.1
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
CCP_IMAGE_TAG ?= $(PGO_BASEOS)-$(CCP_PG_FULLVERSION)-$(PGO_VERSION)
CCP_POSTGIS_IMAGE_TAG ?= $(PGO_BASEOS)-$(CCP_PG_FULLVERSION)-$(CCP_POSTGIS_VERSION)-$(PGO_VERSION)
diff --git a/pkg/apis/crunchydata.com/v1/doc.go b/pkg/apis/crunchydata.com/v1/doc.go
index 3d5c49cd25..4c793e782f 100644
--- a/pkg/apis/crunchydata.com/v1/doc.go
+++ b/pkg/apis/crunchydata.com/v1/doc.go
@@ -53,7 +53,7 @@ cluster.
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.5.0",
+ '{"ClientVersion":"4.5.1",
"Namespace":"pgouser1",
"Name":"mycluster",
$PGO_APISERVER_URL/clusters
@@ -72,7 +72,7 @@ show all of the clusters that are in the given namespace.
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.5.0",
+ '{"ClientVersion":"4.5.1",
"Namespace":"pgouser1",
"Clustername":"mycluster"}' \
$PGO_APISERVER_URL/showclusters
@@ -82,7 +82,7 @@ $PGO_APISERVER_URL/showclusters
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.5.0",
+ '{"ClientVersion":"4.5.1",
"Namespace":"pgouser1",
"Clustername":"mycluster"}' \
$PGO_APISERVER_URL/clustersdelete
@@ -90,7 +90,7 @@ $PGO_APISERVER_URL/clustersdelete
Schemes: http, https
BasePath: /
- Version: 4.5.0
+ Version: 4.5.1
License: Apache 2.0 http://www.apache.org/licenses/LICENSE-2.0
Contact: Crunchy Data https://www.crunchydata.com/
diff --git a/pkg/apiservermsgs/common.go b/pkg/apiservermsgs/common.go
index cce7f6be40..093405b4fd 100644
--- a/pkg/apiservermsgs/common.go
+++ b/pkg/apiservermsgs/common.go
@@ -15,7 +15,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
-const PGO_VERSION = "4.5.0"
+const PGO_VERSION = "4.5.1"
// Ok status
const Ok = "ok"
diff --git a/redhat/atomic/help.1 b/redhat/atomic/help.1
index bc21518dd8..6f9bfad143 100644
--- a/redhat/atomic/help.1
+++ b/redhat/atomic/help.1
@@ -56,4 +56,4 @@ The Red Hat Enterprise Linux version from which the container was built. For exa
\fB\fCRelease=\fR
.PP
-The specific release number of the container. For example, Release="4.5.0"
+The specific release number of the container. For example, Release="4.5.1"
diff --git a/redhat/atomic/help.md b/redhat/atomic/help.md
index 8950e24d47..1a12dbc144 100644
--- a/redhat/atomic/help.md
+++ b/redhat/atomic/help.md
@@ -45,4 +45,4 @@ The Red Hat Enterprise Linux version from which the container was built. For exa
`Release=`
-The specific release number of the container. For example, Release="4.5.0"
+The specific release number of the container. For example, Release="4.5.1"
From a7c590fa7f776a784d43d71b853c2390610e8908 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 11 Nov 2020 19:58:45 -0500
Subject: [PATCH 009/276] Bump version value in README
While the packages are not yet available, this is needed to finish
the release wrap-up.
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index a58837efa6..c0607435f1 100644
--- a/README.md
+++ b/README.md
@@ -161,7 +161,7 @@ Based on your storage settings in your Kubernetes environment, you may be able t
```shell
kubectl create namespace pgo
-kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.5.0/installers/kubectl/postgres-operator.yml
+kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.5.1/installers/kubectl/postgres-operator.yml
```
Otherwise, we highly recommend following the instructions from our [Quickstart](https://access.crunchydata.com/documentation/postgres-operator/latest/quickstart/).
From b0a276ab1abf028ea806911ab5524915a68ea621 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 15 Nov 2020 14:14:49 -0500
Subject: [PATCH 010/276] Tighter check for replica service in `pgo test`
The check looked to see if the word "replica" existed anywhere in
one of the Service names, when really we should be checking for a
suffix of "replica".
Issue: [ch9764]
Issue: #2047
---
internal/apiserver/clusterservice/clusterimpl.go | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 37ce1f0eab..8369d421ec 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -489,7 +489,7 @@ func TestCluster(name, selector, ns, pgouser string, allFlag bool) msgs.ClusterT
switch {
default:
endpoint.InstanceType = msgs.ClusterTestInstanceTypePrimary
- case strings.Contains(service.Name, msgs.PodTypeReplica):
+ case strings.HasSuffix(service.Name, msgs.PodTypeReplica):
endpoint.InstanceType = msgs.ClusterTestInstanceTypeReplica
case service.Pgbouncer:
endpoint.InstanceType = msgs.ClusterTestInstanceTypePGBouncer
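The difference the patch above fixes can be shown in a few lines. This is a minimal sketch, not the Operator's actual code: the `isReplicaService` helper and the `hippo-*` service names are hypothetical, but the `strings.Contains` vs. `strings.HasSuffix` behavior is exactly the false positive the patch closes.

```go
package main

import (
	"fmt"
	"strings"
)

// isReplicaService mirrors the corrected check: a Service backs a replica
// only when its name ends in "replica", not when it merely contains it.
func isReplicaService(name string) bool {
	return strings.HasSuffix(name, "replica")
}

func main() {
	// "hippo-replicant" contains the substring "replica" but is not a
	// replica service; only the suffix check classifies it correctly.
	for _, name := range []string{"hippo", "hippo-replica", "hippo-replicant"} {
		fmt.Println(name, strings.Contains(name, "replica"), isReplicaService(name))
	}
}
```

Running this prints `true true` for `hippo-replica` but `true false` for `hippo-replicant`, which is why the substring check misclassified such services.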
From d359c355786bc83079cd5652d469cbc8db478fc1 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 16 Nov 2020 09:40:22 -0500
Subject: [PATCH 011/276] Update OLM installation instructions template
This updates a few of the OLM installation instruction steps so
that the dependency on `git` is removed; `curl` is used instead
to fetch the exact files needed from the repository.
---
installers/olm/description.openshift.md | 32 +++++++++++++------------
installers/olm/description.upstream.md | 32 +++++++++++++------------
2 files changed, 34 insertions(+), 30 deletions(-)
diff --git a/installers/olm/description.openshift.md b/installers/olm/description.openshift.md
index e4eb3c0831..76be0028da 100644
--- a/installers/olm/description.openshift.md
+++ b/installers/olm/description.openshift.md
@@ -49,13 +49,6 @@ export PGO_OPERATOR_NAMESPACE=pgo
oc create namespace "$PGO_OPERATOR_NAMESPACE"
```
-Next, clone the PostgreSQL Operator repository locally.
-
-```
-git clone -b v${PGO_VERSION} https://github.com/CrunchyData/postgres-operator.git
-cd postgres-operator
-```
-
### Security
For the PostgreSQL Operator and PostgreSQL clusters to run in the recommended `restricted` [Security Context Constraint][],
@@ -69,19 +62,23 @@ Configure pgBackRest for your environment. If you do not plan to use AWS S3 to s
the `aws-s3` keys below.
```
+curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/config > config
+curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/sshd_config > sshd_config
+curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/aws-s3-ca.crt > aws-s3-ca.crt
+
oc -n "$PGO_OPERATOR_NAMESPACE" create secret generic pgo-backrest-repo-config \
- --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/config \
- --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/sshd_config \
- --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/aws-s3-ca.crt \
+ --from-file=./config \
+ --from-file=./sshd_config \
+ --from-file=./aws-s3-ca.crt \
--from-literal=aws-s3-key="" \
--from-literal=aws-s3-key-secret=""
```
### Certificates (optional)
-The PostgreSQL Operator has an API that uses TLS to communicate securely with clients. If you have
-a certificate bundle validated by your organization, you can install it now. If not, the API will
-automatically generate and use a self-signed certificate.
+The PostgreSQL Operator has an API that uses TLS to communicate securely with clients. If a certificate is not provided, the API will automatically generate one for you.
+
+If you have a certificate bundle validated by your organization, you can install it now.
```
oc -n "$PGO_OPERATOR_NAMESPACE" create secret tls pgo.tls \
@@ -102,8 +99,13 @@ to use the [PostgreSQL Operator Client][pgo-client].
Install the first set of client credentials and download the `pgo` binary and client certificates.
```
-PGO_CMD=oc ./deploy/install-bootstrap-creds.sh
-PGO_CMD=oc ./installers/kubectl/client-setup.sh
+curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/deploy/install-bootstrap-creds.sh > install-bootstrap-creds.sh
+curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/kubectl/client-setup.sh > client-setup.sh
+
+chmod +x install-bootstrap-creds.sh client-setup.sh
+
+PGO_CMD=oc ./install-bootstrap-creds.sh
+PGO_CMD=oc ./client-setup.sh
```
The client needs to be able to reach the PostgreSQL Operator API from outside the OpenShift cluster.
diff --git a/installers/olm/description.upstream.md b/installers/olm/description.upstream.md
index 1e192fa9c2..7d1dcce69d 100644
--- a/installers/olm/description.upstream.md
+++ b/installers/olm/description.upstream.md
@@ -49,32 +49,29 @@ export PGO_OPERATOR_NAMESPACE=pgo
kubectl create namespace "$PGO_OPERATOR_NAMESPACE"
```
-Next, clone the PostgreSQL Operator repository locally.
-
-```
-git clone -b v${PGO_VERSION} https://github.com/CrunchyData/postgres-operator.git
-cd postgres-operator
-```
-
### Secrets
Configure pgBackRest for your environment. If you do not plan to use AWS S3 to store backups, you can omit
the `aws-s3` keys below.
```
+curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/config > config
+curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/sshd_config > sshd_config
+curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/aws-s3-ca.crt > aws-s3-ca.crt
+
kubectl -n "$PGO_OPERATOR_NAMESPACE" create secret generic pgo-backrest-repo-config \
- --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/config \
- --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/sshd_config \
- --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/aws-s3-ca.crt \
+ --from-file=./config \
+ --from-file=./sshd_config \
+ --from-file=./aws-s3-ca.crt \
--from-literal=aws-s3-key="" \
--from-literal=aws-s3-key-secret=""
```
### Certificates (optional)
-The PostgreSQL Operator has an API that uses TLS to communicate securely with clients. If you have
-a certificate bundle validated by your organization, you can install it now. If not, the API will
-automatically generate and use a self-signed certificate.
+The PostgreSQL Operator has an API that uses TLS to communicate securely with clients. If a certificate is not provided, the API will automatically generate one for you.
+
+If you have a certificate bundle validated by your organization, you can install it now.
```
kubectl -n "$PGO_OPERATOR_NAMESPACE" create secret tls pgo.tls \
@@ -95,8 +92,13 @@ to use the [PostgreSQL Operator Client][pgo-client].
Install the first set of client credentials and download the `pgo` binary and client certificates.
```
-PGO_CMD=kubectl ./deploy/install-bootstrap-creds.sh
-PGO_CMD=kubectl ./installers/kubectl/client-setup.sh
+curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/deploy/install-bootstrap-creds.sh > install-bootstrap-creds.sh
+curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/kubectl/client-setup.sh > client-setup.sh
+
+chmod +x install-bootstrap-creds.sh client-setup.sh
+
+PGO_CMD=kubectl ./install-bootstrap-creds.sh
+PGO_CMD=kubectl ./client-setup.sh
```
The client needs to be able to reach the PostgreSQL Operator API from outside the Kubernetes cluster.
From 9462bdb5746d7e057097c80ca4568388405bb55f Mon Sep 17 00:00:00 2001
From: Chris Bandy
Date: Tue, 3 Nov 2020 17:32:35 -0600
Subject: [PATCH 012/276] Trace Kubernetes API calls using OpenTelemetry
OpenTelemetry offers instrumentation packages that are agnostic to the
tool(s) that finally store telemetry data. When the SDK for Go reaches
GA, we can leverage the OTLP exporter to offload further configuration
to an external OpenTelemetry Collector.
For now, traces can be sent to one of a handful of destinations chosen
by environment variable: Jaeger agent, Jaeger collector, stdout as JSON,
or a file as JSON.
Issue: [ch9735]
---
cmd/postgres-operator/main.go | 21 +-
cmd/postgres-operator/open_telemetry.go | 101 +++++++++
go.mod | 9 +-
go.sum | 191 ++++++++++++++++++
.../controller/manager/controllermanager.go | 6 +-
internal/kubeapi/client_config.go | 13 +-
6 files changed, 333 insertions(+), 8 deletions(-)
create mode 100644 cmd/postgres-operator/open_telemetry.go
diff --git a/cmd/postgres-operator/main.go b/cmd/postgres-operator/main.go
index 325303c9a2..e1884991be 100644
--- a/cmd/postgres-operator/main.go
+++ b/cmd/postgres-operator/main.go
@@ -39,6 +39,12 @@ import (
)
func main() {
+ if flush, err := initOpenTelemetry(); err != nil {
+ log.Error(err)
+ os.Exit(2)
+ } else {
+ defer flush()
+ }
debugFlag := os.Getenv("CRUNCHY_DEBUG")
//add logging configuration
@@ -53,7 +59,18 @@ func main() {
//give time for pgo-event to start up
time.Sleep(time.Duration(5) * time.Second)
- client, err := kubeapi.NewClient()
+ newKubernetesClient := func() (*kubeapi.Client, error) {
+ config, err := kubeapi.LoadClientConfig()
+ if err != nil {
+ return nil, err
+ }
+
+ config.Wrap(otelTransportWrapper())
+
+ return kubeapi.NewClientForConfig(config)
+ }
+
+ client, err := newKubernetesClient()
if err != nil {
log.Error(err)
os.Exit(2)
@@ -83,6 +100,8 @@ func main() {
}
log.Debug("controller manager created")
+ controllerManager.NewKubernetesClient = newKubernetesClient
+
// If not using the "disabled" namespace operating mode, start a real namespace controller
// that is able to resond to namespace events in the Kube cluster. If using the "disabled"
// operating mode, then create a fake client containing all namespaces defined for the install
diff --git a/cmd/postgres-operator/open_telemetry.go b/cmd/postgres-operator/open_telemetry.go
new file mode 100644
index 0000000000..382079687d
--- /dev/null
+++ b/cmd/postgres-operator/open_telemetry.go
@@ -0,0 +1,101 @@
+package main
+
+/*
+Copyright 2020 Crunchy Data
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+import (
+ "fmt"
+ "io"
+ "net/http"
+ "os"
+
+ "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
+ "go.opentelemetry.io/otel/api/global"
+ "go.opentelemetry.io/otel/exporters/stdout"
+ "go.opentelemetry.io/otel/exporters/trace/jaeger"
+)
+
+func initOpenTelemetry() (func(), error) {
+ // At the time of this writing, the SDK (go.opentelemetry.io/otel@v0.13.0)
+ // does not automatically initialize any trace or metric exporter. An upcoming
+ // specification details environment variables that should facilitate this in
+ // the future.
+ //
+ // - https://github.com/open-telemetry/opentelemetry-specification/blob/f5519f2b/specification/sdk-environment-variables.md
+
+ switch os.Getenv("OTEL_EXPORTER") {
+ case "jaeger":
+ var endpoint jaeger.EndpointOption
+ agent := os.Getenv("JAEGER_AGENT_ENDPOINT")
+ collector := jaeger.CollectorEndpointFromEnv()
+
+ if agent != "" {
+ endpoint = jaeger.WithAgentEndpoint(agent)
+ }
+ if collector != "" {
+ endpoint = jaeger.WithCollectorEndpoint(collector)
+ }
+
+ provider, flush, err := jaeger.NewExportPipeline(endpoint)
+ if err != nil {
+ return nil, fmt.Errorf("unable to initialize Jaeger exporter: %w", err)
+ }
+
+ global.SetTracerProvider(provider)
+ return flush, nil
+
+ case "json":
+ var closer io.Closer
+ filename := os.Getenv("OTEL_JSON_FILE")
+ options := []stdout.Option{stdout.WithoutMetricExport()}
+
+ if filename != "" {
+ file, err := os.OpenFile(filename, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
+ if err != nil {
+ return nil, fmt.Errorf("unable to open exporter file: %w", err)
+ }
+ closer = file
+ options = append(options, stdout.WithWriter(file))
+ }
+
+ provider, pusher, err := stdout.NewExportPipeline(options, nil)
+ if err != nil {
+ return nil, fmt.Errorf("unable to initialize stdout exporter: %w", err)
+ }
+ flush := func() {
+ pusher.Stop()
+ if closer != nil {
+ _ = closer.Close()
+ }
+ }
+
+ global.SetTracerProvider(provider)
+ return flush, nil
+ }
+
+ // $OTEL_EXPORTER is unset or unknown, so no TracerProvider has been assigned.
+ // The default at this time is a single "no-op" tracer.
+
+ return func() {}, nil
+}
+
+// otelTransportWrapper creates a function that wraps the provided net/http.RoundTripper
+// with one that starts a span for each request, injects context into that request,
+// and ends the span when that request's response body is closed.
+func otelTransportWrapper(options ...otelhttp.Option) func(http.RoundTripper) http.RoundTripper {
+ return func(rt http.RoundTripper) http.RoundTripper {
+ return otelhttp.NewTransport(rt, options...)
+ }
+}
diff --git a/go.mod b/go.mod
index 0147d04c01..bef11e91fb 100644
--- a/go.mod
+++ b/go.mod
@@ -5,8 +5,6 @@ go 1.15
require (
github.com/docker/spdystream v0.0.0-20181023171402-6480d4af844c // indirect
github.com/fatih/color v1.9.0
- github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e // indirect
- github.com/google/go-cmp v0.4.1 // indirect
github.com/gorilla/mux v1.7.4
github.com/konsorten/go-windows-terminal-sequences v1.0.2 // indirect
github.com/mattn/go-colorable v0.1.6 // indirect
@@ -16,9 +14,12 @@ require (
github.com/spf13/cobra v0.0.5
github.com/spf13/pflag v1.0.5
github.com/xdg/stringprep v1.0.0
+ go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.13.0
+ go.opentelemetry.io/otel v0.13.0
+ go.opentelemetry.io/otel/exporters/stdout v0.13.0
+ go.opentelemetry.io/otel/exporters/trace/jaeger v0.13.0
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9
- golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d // indirect
- golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a
+ golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208
k8s.io/api v0.19.2
k8s.io/apimachinery v0.19.2
k8s.io/client-go v0.19.2
diff --git a/go.sum b/go.sum
index 9be2bc52bd..d3dad7eb97 100644
--- a/go.sum
+++ b/go.sum
@@ -5,11 +5,32 @@ cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6A
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
+cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
cloud.google.com/go v0.51.0/go.mod h1:hWtGJ6gnXH+KgDv+V0zFGDvpi07n3z8ZNj3T1RW0Gcw=
+cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
+cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
+cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
+cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk=
+cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
+cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
+cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
+cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
+cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
+cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
+cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
+cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
+cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
+cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
+cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
+cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
+cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
+cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
+cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
+cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI=
@@ -25,6 +46,8 @@ github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6L
github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
+github.com/DataDog/sketches-go v0.0.1 h1:RtG+76WKgZuz6FIaGsjoPePmadDBkuD/KC6+ZWu78b8=
+github.com/DataDog/sketches-go v0.0.1/go.mod h1:Q5DbzQ+3AkgGwymQO7aZFNP7ns2lZKGtvRBzRXfdi60=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/purell v1.1.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
@@ -35,9 +58,13 @@ github.com/agnivade/levenshtein v1.0.1/go.mod h1:CURSv5d9Uaml+FovSIICkLbAUZ9S4Rq
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8=
+github.com/apache/thrift v0.13.0 h1:5hryIiq9gtn+MiLVn0wP37kb/uTeRZgN08WoCsAhIhI=
+github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/asaskevich/govalidator v0.0.0-20180720115003-f9ffefc3facf/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
+github.com/benbjohnson/clock v1.0.3 h1:vkLuvpK4fmtSCuo60+yC63p7y0BmQ8gm5ZXGuBCJyXg=
+github.com/benbjohnson/clock v1.0.3/go.mod h1:bGMdMPoPVvcYyt1gHDf4J2KE153Yf9BuiUKYMaxlTDM=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
@@ -47,6 +74,7 @@ github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWR
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
+github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
@@ -77,7 +105,9 @@ github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153 h1:yUdfgN0XgIJw7fo
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
+github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
+github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v4.5.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
@@ -86,6 +116,8 @@ github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLi
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0 h1:8xPHl4/q1VyqGIPif1F+1V3Y3lSmrq01EabUW3CoW5s=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
+github.com/felixge/httpsnoop v1.0.1 h1:lvB5Jl89CsZtGIWuTcDM1E/vkVs49/Ml7JJe07l8SPQ=
+github.com/felixge/httpsnoop v1.0.1/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
@@ -94,7 +126,9 @@ github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2H
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/globalsign/mgo v0.0.0-20180905125535-1ca0a4f7cbcb/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
+github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
+github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logr/logr v0.1.0 h1:M1Tv3VzNlEHg6uyACnRdtrploV2P7wZqH8BoQMtz0cg=
@@ -160,11 +194,17 @@ github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4er
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
+github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
+github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
+github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
+github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/protobuf v0.0.0-20161109072736-4bd1920723d7/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
+github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
+github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
@@ -183,13 +223,22 @@ github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMyw
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.4.1 h1:/exdXoGamhu5ONeUJH0deniYLWYvQwW66yvlfiiKTu0=
github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.2 h1:X2ev0eStA3AbceY54o37/0PQ/UWqKEiiO2dKL5OPaFM=
+github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
+github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
@@ -343,11 +392,14 @@ github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/objx v0.2.0 h1:Hbg2NidpLE8veEBkEZTL3CvlkUIVzuU9jDplZO54c48=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
+github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
+github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
@@ -357,6 +409,10 @@ github.com/xdg/stringprep v1.0.0 h1:d9X0esnoa3dFsV0FG35rAT0RIhYFlPq7MiP+DW89La0=
github.com/xdg/stringprep v1.0.0/go.mod h1:Jhud4/sHMO4oL310DaZAKk9ZaJ08SJfe+sJh0HrGL1Y=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
+github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg=
go.mongodb.org/mongo-driver v1.0.3/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
@@ -365,6 +421,20 @@ go.mongodb.org/mongo-driver v1.1.2/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qL
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.opentelemetry.io/contrib v0.13.0 h1:q34CFu5REx9Dt2ksESHC/doIjFJkEg1oV3aSwlL5JR0=
+go.opentelemetry.io/contrib v0.13.0/go.mod h1:HzCu6ebm0ywgNxGaEfs3izyJOMP4rZnzxycyTgpI5Sg=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.13.0 h1:dnZy1afzxEDrHybTYoJE1bQ3fphNwZF2ipSsynlITP4=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.13.0/go.mod h1:SeQm4RTCcZ2/hlMSTuHb7nwIROe5odBtgfKx+7MMqEs=
+go.opentelemetry.io/otel v0.13.0 h1:2isEnyzjjJZq6r2EKMsFj4TxiQiexsM04AVhwbR/oBA=
+go.opentelemetry.io/otel v0.13.0/go.mod h1:dlSNewoRYikTkotEnxdmuBHgzT+k/idJSfDv/FxEnOY=
+go.opentelemetry.io/otel/exporters/stdout v0.13.0 h1:A+XiGIPQbGoJoBOJfKAKnZyiUSjSWvL3XWETUvtom5k=
+go.opentelemetry.io/otel/exporters/stdout v0.13.0/go.mod h1:JJt8RpNY6K+ft9ir3iKpceCvT/rhzJXEExGrWFCbv1o=
+go.opentelemetry.io/otel/exporters/trace/jaeger v0.13.0 h1:TjXcUVYbsjl3lYifrWptraZAL0OBmpMxRLm/eJ1GyZU=
+go.opentelemetry.io/otel/exporters/trace/jaeger v0.13.0/go.mod h1:RSg6E40NYGqN/aCrStCUue2e+jABeFk2bKdNucw63ao=
+go.opentelemetry.io/otel/sdk v0.13.0 h1:4VCfpKamZ8GtnepXxMRurSpHpMKkcxhtO33z1S4rGDQ=
+go.opentelemetry.io/otel/sdk v0.13.0/go.mod h1:dKvLH8Uu8LcEPlSAUsfW7kMGaJBhk/1NYvpPZ6wIMbU=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
@@ -387,7 +457,12 @@ golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
+golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
+golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
+golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -396,12 +471,18 @@ golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTk
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
+golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
+golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
+golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
+golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -419,21 +500,37 @@ golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7 h1:AeiKBIuRw3UomYXSbLy0Mc2dDLfdtbT/IVn4keq83P0=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381 h1:VXak5I6aEWmAXeQjA+QSZzlgNrpq9mjcfDemuexIKsU=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20200822124328-c89045814202 h1:VvcQYSHwXgi7W+TpUR6A9g6Up98WAHf3f/ulnJ62IyA=
+golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d h1:TzXSXBo42m9gQenoE3b9BGiEpg5IG2JkU5FkPIawgtw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43 h1:ld7aEMNHoBnnDAX15v1T6z31v8HwR2A9FYOuAhWqkwc=
+golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -442,6 +539,8 @@ golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a h1:WXEvlFVvvGxCJLG6REjsT03iWnKLEWinaScsxF2Vm2o=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208 h1:qwRHBd0NqMbJxfbotnDhm2ByMI1Shq4Y6oRJo21SGJA=
+golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20170830134202-bb24a47a89ea/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -461,8 +560,10 @@ golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -470,12 +571,26 @@ golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4 h1:5/PjkGUjvEU5Gl6BxmvKRPpqo2uNMv4rcHBMwzk/st8=
golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f h1:Fqb3ao1hUmOR3GkUOg/Y+BadLwykBIzs5q8Ez2SbHyc=
+golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
@@ -508,25 +623,66 @@ golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtn
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190920225731-5eefd052ad72/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
+golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
+golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
+golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
+golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
+golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
+golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gomodules.xyz/jsonpatch/v2 v2.0.1 h1:xyiBuvkD2g5n7cYzx6u2sxQvsAy4QJsZFCzGVdzOXZ0=
gomodules.xyz/jsonpatch/v2 v2.0.1/go.mod h1:IhYNNY4jnS53ZnfE4PAmpKtDpTCj1JFXc+3mwe7XcUU=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
+google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
+google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
+google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
+google.golang.org/api v0.32.0 h1:Le77IccnTqEa8ryp9wIpX5W3zYm7Gf9LhOp9PHcwFts=
+google.golang.org/api v0.32.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5 h1:tycE03LOZYQNhDpS27tcQdAzLCVMaj7QT2SXxebnpCM=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/appengine v1.6.6 h1:lMO5rYAqUxkmaj76jAkRUvt5JZgFymx/+Q5Mzfivuhc=
+google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
@@ -535,15 +691,42 @@ google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRn
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
+google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
+google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
+google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
+google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
+google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
+google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
+google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
+google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -555,6 +738,8 @@ google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.24.0 h1:UhZDfRO8JRQru4/+LlLE0BRKGF8L+PICnvYZmx/fEGA=
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
+google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
+google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -577,12 +762,16 @@ gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
+gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
+honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
+honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.18.6/go.mod h1:eeyxr+cwCjMdLAmr2W3RyDI0VvTawSg/3RFFBEnmZGI=
k8s.io/api v0.19.2 h1:q+/krnHWKsL7OBZg/rxnycsl9569Pud76UJ77MvKXms=
k8s.io/api v0.19.2/go.mod h1:IQpK0zFQ1xc5iNIQPqzgoOwuFugaYHK4iCknlAQP9nI=
@@ -616,6 +805,8 @@ k8s.io/utils v0.0.0-20200603063816-c1c6865ac451/go.mod h1:jPW/WVKK9YHAvNhRxK0md/
k8s.io/utils v0.0.0-20200729134348-d5654de09c73 h1:uJmqzgNWG7XyClnU/mLPBWwfKKF1K8Hf8whTseBgJcg=
k8s.io/utils v0.0.0-20200729134348-d5654de09c73/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
+rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
+rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.7/go.mod h1:PHgbrJT7lCHcxMU+mDHEm+nx46H4zuuHZkDP6icnhu0=
sigs.k8s.io/controller-runtime v0.6.3 h1:SBbr+inLPEKhvlJtrvDcwIpm+uhDvp63Bl72xYJtoOE=
sigs.k8s.io/controller-runtime v0.6.3/go.mod h1:WlZNXcM0++oyaQt4B7C2lEE5JYRs8vJUzRP4N4JpdAY=
diff --git a/internal/controller/manager/controllermanager.go b/internal/controller/manager/controllermanager.go
index bb2c2e1039..236ed1bb74 100644
--- a/internal/controller/manager/controllermanager.go
+++ b/internal/controller/manager/controllermanager.go
@@ -61,6 +61,8 @@ type ControllerManager struct {
pgoConfig config.PgoConfig
pgoNamespace string
sem *semaphore.Weighted
+
+ NewKubernetesClient func() (*kubeapi.Client, error)
}
// controllerGroup is a struct for managing the various controllers created to handle events
@@ -90,6 +92,8 @@ func NewControllerManager(namespaces []string,
pgoConfig: pgoConfig,
pgoNamespace: pgoNamespace,
sem: semaphore.NewWeighted(1),
+
+ NewKubernetesClient: kubeapi.NewClient,
}
// create controller groups for each namespace provided
@@ -229,7 +233,7 @@ func (c *ControllerManager) addControllerGroup(namespace string) error {
}
// create a client for kube resources
- client, err := kubeapi.NewClient()
+ client, err := c.NewKubernetesClient()
if err != nil {
log.Error(err)
return err
diff --git a/internal/kubeapi/client_config.go b/internal/kubeapi/client_config.go
index 5d070fc25e..22fe39e9b2 100644
--- a/internal/kubeapi/client_config.go
+++ b/internal/kubeapi/client_config.go
@@ -55,7 +55,9 @@ var _ Interface = &Client{}
// CrunchydataV1 retrieves the CrunchydataV1Client
func (c *Client) CrunchydataV1() crunchydatav1.CrunchydataV1Interface { return c.crunchydataV1 }
-func loadClientConfig() (*rest.Config, error) {
+// LoadClientConfig prepares a configuration from the environment or home directory,
+// falling back to in-cluster when applicable.
+func LoadClientConfig() (*rest.Config, error) {
// The default loading rules try to read from the files specified in the
// environment or from the home directory.
loader := clientcmd.NewDefaultClientConfigLoadingRules()
@@ -69,11 +71,18 @@ func loadClientConfig() (*rest.Config, error) {
// NewClient returns a kubernetes.Clientset and its underlying configuration.
func NewClient() (*Client, error) {
- config, err := loadClientConfig()
+ config, err := LoadClientConfig()
if err != nil {
return nil, err
}
+ return NewClientForConfig(config)
+}
+
+// NewClientForConfig returns a kubernetes.Clientset using config.
+func NewClientForConfig(config *rest.Config) (*Client, error) {
+ var err error
+
// Match the settings applied by sigs.k8s.io/controller-runtime@v0.6.0;
// see https://github.com/kubernetes-sigs/controller-runtime/issues/365.
if config.QPS == 0.0 {
From 20c67488f54062c93d4fd0fe5dfd9c51cc01d4bd Mon Sep 17 00:00:00 2001
From: Chris Bandy
Date: Fri, 13 Nov 2020 10:22:56 -0600
Subject: [PATCH 013/276] Use log.Fatal rather than os.Exit in
cmd/postgres-operator
---
cmd/postgres-operator/main.go | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/cmd/postgres-operator/main.go b/cmd/postgres-operator/main.go
index e1884991be..aa7e99ab27 100644
--- a/cmd/postgres-operator/main.go
+++ b/cmd/postgres-operator/main.go
@@ -40,8 +40,7 @@ import (
func main() {
if flush, err := initOpenTelemetry(); err != nil {
- log.Error(err)
- os.Exit(2)
+ log.Fatal(err)
} else {
defer flush()
}
@@ -72,8 +71,7 @@ func main() {
client, err := newKubernetesClient()
if err != nil {
- log.Error(err)
- os.Exit(2)
+ log.Fatal(err)
}
operator.Initialize(client)
@@ -83,8 +81,7 @@ func main() {
// list of target namespaces for the operator install
namespaceList, err := operator.SetupNamespaces(client)
if err != nil {
- log.Errorf("Error configuring operator namespaces: %v", err)
- os.Exit(2)
+ log.Fatalf("Error configuring operator namespaces: %v", err)
}
// set up signals so we handle the first shutdown signal gracefully
@@ -95,8 +92,7 @@ func main() {
controllerManager, err := manager.NewControllerManager(namespaceList, operator.Pgo,
operator.PgoNamespace, operator.InstallationName, operator.NamespaceOperatingMode())
if err != nil {
- log.Error(err)
- os.Exit(2)
+ log.Fatal(err)
}
log.Debug("controller manager created")
From 41dd921b53d8fcd1507b25b42714d33cd40f6174 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 17 Nov 2020 13:57:16 -0500
Subject: [PATCH 014/276] Bump Kubernetes client libs to 0.19.4
---
go.mod | 6 +++---
go.sum | 12 ++++++------
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/go.mod b/go.mod
index bef11e91fb..e85242cd76 100644
--- a/go.mod
+++ b/go.mod
@@ -20,9 +20,9 @@ require (
go.opentelemetry.io/otel/exporters/trace/jaeger v0.13.0
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208
- k8s.io/api v0.19.2
- k8s.io/apimachinery v0.19.2
- k8s.io/client-go v0.19.2
+ k8s.io/api v0.19.4
+ k8s.io/apimachinery v0.19.4
+ k8s.io/client-go v0.19.4
sigs.k8s.io/controller-runtime v0.6.3
sigs.k8s.io/yaml v1.2.0
)
diff --git a/go.sum b/go.sum
index d3dad7eb97..36a21fea4b 100644
--- a/go.sum
+++ b/go.sum
@@ -773,17 +773,17 @@ honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.18.6/go.mod h1:eeyxr+cwCjMdLAmr2W3RyDI0VvTawSg/3RFFBEnmZGI=
-k8s.io/api v0.19.2 h1:q+/krnHWKsL7OBZg/rxnycsl9569Pud76UJ77MvKXms=
-k8s.io/api v0.19.2/go.mod h1:IQpK0zFQ1xc5iNIQPqzgoOwuFugaYHK4iCknlAQP9nI=
+k8s.io/api v0.19.4 h1:I+1I4cgJYuCDgiLNjKx7SLmIbwgj9w7N7Zr5vSIdwpo=
+k8s.io/api v0.19.4/go.mod h1:SbtJ2aHCItirzdJ36YslycFNzWADYH3tgOhvBEFtZAk=
k8s.io/apiextensions-apiserver v0.18.6 h1:vDlk7cyFsDyfwn2rNAO2DbmUbvXy5yT5GE3rrqOzaMo=
k8s.io/apiextensions-apiserver v0.18.6/go.mod h1:lv89S7fUysXjLZO7ke783xOwVTm6lKizADfvUM/SS/M=
k8s.io/apimachinery v0.18.6/go.mod h1:OaXp26zu/5J7p0f92ASynJa1pZo06YlV9fG7BoWbCko=
-k8s.io/apimachinery v0.19.2 h1:5Gy9vQpAGTKHPVOh5c4plE274X8D/6cuEiTO2zve7tc=
-k8s.io/apimachinery v0.19.2/go.mod h1:DnPGDnARWFvYa3pMHgSxtbZb7gpzzAZ1pTfaUNDVlmA=
+k8s.io/apimachinery v0.19.4 h1:+ZoddM7nbzrDCp0T3SWnyxqf8cbWPT2fkZImoyvHUG0=
+k8s.io/apimachinery v0.19.4/go.mod h1:DnPGDnARWFvYa3pMHgSxtbZb7gpzzAZ1pTfaUNDVlmA=
k8s.io/apiserver v0.18.6/go.mod h1:Zt2XvTHuaZjBz6EFYzpp+X4hTmgWGy8AthNVnTdm3Wg=
k8s.io/client-go v0.18.6/go.mod h1:/fwtGLjYMS1MaM5oi+eXhKwG+1UHidUEXRh6cNsdO0Q=
-k8s.io/client-go v0.19.2 h1:gMJuU3xJZs86L1oQ99R4EViAADUPMHHtS9jFshasHSc=
-k8s.io/client-go v0.19.2/go.mod h1:S5wPhCqyDNAlzM9CnEdgTGV4OqhsW3jGO1UM1epwfJA=
+k8s.io/client-go v0.19.4 h1:85D3mDNoLF+xqpyE9Dh/OtrJDyJrSRKkHmDXIbEzer8=
+k8s.io/client-go v0.19.4/go.mod h1:ZrEy7+wj9PjH5VMBCuu/BDlvtUAku0oVFk4MmnW9mWA=
k8s.io/code-generator v0.18.6/go.mod h1:TgNEVx9hCyPGpdtCWA34olQYLkh3ok9ar7XfSsr8b6c=
k8s.io/component-base v0.18.6/go.mod h1:knSVsibPR5K6EW2XOjEHik6sdU5nCvKMrzMt2D4In14=
k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
From 29ef4855cba68ddcc4dee13a21b697315e5fc88e Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 17 Nov 2020 13:49:57 -0500
Subject: [PATCH 015/276] Allow for Operator generation of default pgBackRest
 configuration
The "pgo-backrest-repo-config" Secret consists of several default
files that can be generated when the Operator is initially loaded
(and reconciled on initialization as well). This removes additional
manual steps required when deploying the Operator through methods
such as OLM, making it easier to get started.
---
build/postgres-operator/Dockerfile | 1 +
deploy/deploy.sh | 12 +--
.../installation/other/operator-hub.md | 39 +-------
.../ansible/roles/pgo-operator/tasks/main.yml | 22 ++---
installers/olm/description.openshift.md | 12 +--
installers/olm/description.upstream.md | 12 +--
internal/config/secrets.go | 18 ++++
internal/operator/common.go | 90 +++++++++++++++++--
internal/util/cluster.go | 2 +-
9 files changed, 127 insertions(+), 81 deletions(-)
create mode 100644 internal/config/secrets.go
diff --git a/build/postgres-operator/Dockerfile b/build/postgres-operator/Dockerfile
index d88621d73f..dd9895987e 100644
--- a/build/postgres-operator/Dockerfile
+++ b/build/postgres-operator/Dockerfile
@@ -28,6 +28,7 @@ RUN if [ "$DFSET" = "rhel" ] ; then \
fi
ADD bin/postgres-operator /usr/local/bin
+ADD installers/ansible/roles/pgo-operator/files/pgo-backrest-repo /default-pgo-backrest-repo
ADD installers/ansible/roles/pgo-operator/files/pgo-configs /default-pgo-config
ADD conf/postgres-operator/pgo.yaml /default-pgo-config/pgo.yaml
diff --git a/deploy/deploy.sh b/deploy/deploy.sh
index 823671c7d9..67478acb8b 100755
--- a/deploy/deploy.sh
+++ b/deploy/deploy.sh
@@ -46,12 +46,12 @@ fi
pgbackrest_aws_s3_key=$(awsKeySecret "aws-s3-key")
pgbackrest_aws_s3_key_secret=$(awsKeySecret "aws-s3-key-secret")
-$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE create secret generic pgo-backrest-repo-config \
- --from-file=config=${PGO_CONF_DIR}/pgo-backrest-repo/config \
- --from-file=sshd_config=${PGO_CONF_DIR}/pgo-backrest-repo/sshd_config \
- --from-file=aws-s3-ca.crt=${PGO_CONF_DIR}/pgo-backrest-repo/aws-s3-ca.crt \
- --from-literal=aws-s3-key="${pgbackrest_aws_s3_key}" \
- --from-literal=aws-s3-key-secret="${pgbackrest_aws_s3_key_secret}"
+if [[ ! -z $pgbackrest_aws_s3_key ]] || [[ ! -z $pgbackrest_aws_s3_key_secret ]]
+then
+ $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE create secret generic pgo-backrest-repo-config \
+ --from-literal=aws-s3-key="${pgbackrest_aws_s3_key}" \
+ --from-literal=aws-s3-key-secret="${pgbackrest_aws_s3_key_secret}"
+fi
#
# credentials for pgo-apiserver TLS REST API
diff --git a/docs/content/installation/other/operator-hub.md b/docs/content/installation/other/operator-hub.md
index b610ee2664..caebfb3a95 100644
--- a/docs/content/installation/other/operator-hub.md
+++ b/docs/content/installation/other/operator-hub.md
@@ -15,44 +15,14 @@ that is available in OperatorHub.io.
## Before You Begin
-There are a few manual steps that the cluster administrator must perform prior to installing the PostgreSQL Operator.
-At the very least, it must be provided with an initial configuration.
+There are some optional Secrets you can add before installing the PostgreSQL Operator into your cluster.
-First, make sure OLM and the OperatorHub.io catalog are installed by running
-`kubectl get CatalogSources --all-namespaces`. You should see something similar to the following:
+### Secrets (optional)
-```
-NAMESPACE NAME DISPLAY TYPE PUBLISHER
-olm operatorhubio-catalog Community Operators grpc OperatorHub.io
-```
-
-Take note of the name and namespace above, you will need them later on.
-
-Next, select a namespace in which to install the PostgreSQL Operator. PostgreSQL clusters will also be deployed here.
-If it does not exist, create it now.
-
-```
-export PGO_OPERATOR_NAMESPACE=pgo
-kubectl create namespace "$PGO_OPERATOR_NAMESPACE"
-```
-
-Next, clone the PostgreSQL Operator repository locally.
-
-```
-git clone -b v{{< param operatorVersion >}} https://github.com/CrunchyData/postgres-operator.git
-cd postgres-operator
-```
-
-### Secrets
-
-Configure pgBackRest for your environment. If you do not plan to use AWS S3 to store backups, you can omit
-the `aws-s3` keys below.
+If you plan to use AWS S3 to store backups and would like to have the keys available for every backup, you can create a Secret as described below:
```
kubectl -n "$PGO_OPERATOR_NAMESPACE" create secret generic pgo-backrest-repo-config \
- --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/config \
- --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/sshd_config \
- --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/aws-s3-ca.crt \
--from-literal=aws-s3-key="" \
--from-literal=aws-s3-key-secret=""
```
@@ -69,9 +39,6 @@ kubectl -n "$PGO_OPERATOR_NAMESPACE" create secret tls pgo.tls \
--key=/path/to/server.key
```
-Once these resources are in place, the PostgreSQL Operator can be installed into the cluster.
-
-
## Installation
Create an `OperatorGroup` and a `Subscription` in your chosen namespace.
diff --git a/installers/ansible/roles/pgo-operator/tasks/main.yml b/installers/ansible/roles/pgo-operator/tasks/main.yml
index c9fc36e6a0..92d14725ca 100644
--- a/installers/ansible/roles/pgo-operator/tasks/main.yml
+++ b/installers/ansible/roles/pgo-operator/tasks/main.yml
@@ -106,7 +106,7 @@
when: pgorole_pgoadmin_result.rc == 1
- name: PGO Service Account
- when:
+ when:
- create_rbac|bool
tags:
- install
@@ -128,7 +128,7 @@
when: pgo_service_account_result.rc == 1
- name: Cluster RBAC (namespace_mode 'dynamic')
- when:
+ when:
- create_rbac|bool
- namespace_mode == "dynamic"
tags:
@@ -151,7 +151,7 @@
when: cluster_rbac_result.rc == 1
- name: Cluster RBAC (namespace_mode 'readonly')
- when:
+ when:
- create_rbac|bool
- namespace_mode == "readonly"
tags:
@@ -179,7 +179,7 @@
tags:
- install
- update
- when:
+ when:
- create_rbac|bool
- namespace_mode == "disabled"
@@ -266,13 +266,13 @@
- name: Create PGO BackRest Repo Secret
command: |
{{ kubectl_or_oc }} create secret generic pgo-backrest-repo-config \
- --from-file=config='{{ role_path }}/files/pgo-backrest-repo/config' \
- --from-file=sshd_config='{{ role_path }}/files/pgo-backrest-repo/sshd_config' \
- --from-file=aws-s3-ca.crt='{{ role_path }}/files/pgo-backrest-repo/aws-s3-ca.crt' \
--from-literal=aws-s3-key='{{ backrest_aws_s3_key }}' \
--from-literal=aws-s3-key-secret='{{ backrest_aws_s3_secret }}' \
-n {{ pgo_operator_namespace }}
- when: pgo_backrest_repo_config_result.rc == 1
+ when:
+ - pgo_backrest_repo_config_result.rc == 1
+ - (backrest_aws_s3_key | default('') != '') or
+ (backrest_aws_s3_secret | default('') != '')
- name: PGO API Secret
tags:
@@ -307,7 +307,7 @@
shell: "{{ kubectl_or_oc }} get configmap pgo-config -n {{ pgo_operator_namespace }}"
register: pgo_config_result
failed_when: false
-
+
- name: Create PGO ConfigMap
command: |
{{ kubectl_or_oc }} create configmap pgo-config \
@@ -403,8 +403,8 @@
shell: "{{ kubectl_or_oc }} get -f {{ output_dir }}/pgo-client.json"
register: pgo_client_json_result
failed_when: false
-
+
- name: Create PGO-Client deployment
command: |
{{ kubectl_or_oc }} create --filename='{{ output_dir }}/pgo-client.json'
- when: pgo_client_json_result.rc == 1
\ No newline at end of file
+ when: pgo_client_json_result.rc == 1
diff --git a/installers/olm/description.openshift.md b/installers/olm/description.openshift.md
index 76be0028da..6b1e79184b 100644
--- a/installers/olm/description.openshift.md
+++ b/installers/olm/description.openshift.md
@@ -56,20 +56,12 @@ edit `conf/postgres-operator/pgo.yaml` and set `DisableFSGroup` to `true`.
[Security Context Constraint]: https://docs.openshift.com/container-platform/latest/authentication/managing-security-context-constraints.html
-### Secrets
+### Secrets (optional)
-Configure pgBackRest for your environment. If you do not plan to use AWS S3 to store backups, you can omit
-the `aws-s3` keys below.
+If you plan to use AWS S3 to store backups, you can configure your environment to automatically provide your AWS S3 credentials to all newly created PostgreSQL clusters:
```
-curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/config > config
-curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/sshd_config > sshd_config
-curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/aws-s3-ca.crt > aws-s3-ca.crt
-
oc -n "$PGO_OPERATOR_NAMESPACE" create secret generic pgo-backrest-repo-config \
- --from-file=./config \
- --from-file=./sshd_config \
- --from-file=./aws-s3-ca.crt \
--from-literal=aws-s3-key="" \
--from-literal=aws-s3-key-secret=""
```
diff --git a/installers/olm/description.upstream.md b/installers/olm/description.upstream.md
index 7d1dcce69d..9851ee914c 100644
--- a/installers/olm/description.upstream.md
+++ b/installers/olm/description.upstream.md
@@ -49,20 +49,12 @@ export PGO_OPERATOR_NAMESPACE=pgo
kubectl create namespace "$PGO_OPERATOR_NAMESPACE"
```
-### Secrets
+### Secrets (optional)
-Configure pgBackRest for your environment. If you do not plan to use AWS S3 to store backups, you can omit
-the `aws-s3` keys below.
+If you plan to use AWS S3 to store backups, you can configure your environment to automatically provide your AWS S3 credentials to all newly created PostgreSQL clusters:
```
-curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/config > config
-curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/sshd_config > sshd_config
-curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v${PGO_VERSION}/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/aws-s3-ca.crt > aws-s3-ca.crt
-
kubectl -n "$PGO_OPERATOR_NAMESPACE" create secret generic pgo-backrest-repo-config \
- --from-file=./config \
- --from-file=./sshd_config \
- --from-file=./aws-s3-ca.crt \
--from-literal=aws-s3-key="" \
--from-literal=aws-s3-key-secret=""
```
diff --git a/internal/config/secrets.go b/internal/config/secrets.go
new file mode 100644
index 0000000000..2cc2b5ba1b
--- /dev/null
+++ b/internal/config/secrets.go
@@ -0,0 +1,18 @@
+package config
+
+/*
+ Copyright 2020 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+const SecretOperatorBackrestRepoConfig = "pgo-backrest-repo-config"
diff --git a/internal/operator/common.go b/internal/operator/common.go
index 2d4360deb7..faeb64fcba 100644
--- a/internal/operator/common.go
+++ b/internal/operator/common.go
@@ -17,8 +17,11 @@ package operator
import (
"bytes"
+ "context"
"encoding/json"
+ "io/ioutil"
"os"
+ "path"
"strings"
"github.com/crunchydata/postgres-operator/internal/config"
@@ -27,10 +30,15 @@ import (
log "github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
+ kerrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
)
const (
+ // defaultBackrestRepoConfigPath contains the default configuration files that
+ // are used to set up a pgBackRest repository
+ defaultBackrestRepoConfigPath = "/default-pgo-backrest-repo/"
// defaultRegistry is the default registry to pull the container images from
defaultRegistry = "registry.developers.crunchydata.com/crunchydata"
)
@@ -62,6 +70,10 @@ type containerResourcesTemplateFields struct {
RequestsMemory, RequestsCPU string
}
+// defaultBackrestRepoConfigKeys are the default keys expected to be in the
+// pgBackRest repo config secret
+var defaultBackrestRepoConfigKeys = []string{"config", "sshd_config", "aws-s3-ca.crt"}
+
func Initialize(clientset kubernetes.Interface) {
tmp := os.Getenv("CRUNCHY_DEBUG")
@@ -89,16 +101,15 @@ func Initialize(clientset kubernetes.Interface) {
os.Exit(2)
}
- var err error
-
- err = Pgo.GetConfig(clientset, PgoNamespace)
- if err != nil {
+ if err := Pgo.GetConfig(clientset, PgoNamespace); err != nil {
log.Error(err)
- log.Error("pgo-config files and templates did not load")
- os.Exit(2)
+ log.Fatal("pgo-config files and templates did not load")
}
- log.Printf("PrimaryStorage=%v\n", Pgo.Storage["storage1"])
+ // initialize the general pgBackRest secret
+ if err := initializeOperatorBackrestSecret(clientset, PgoNamespace); err != nil {
+ log.Fatal(err)
+ }
if Pgo.Cluster.CCPImagePrefix == "" {
log.Debugf("pgo.yaml CCPImagePrefix not set, using default %q", defaultRegistry)
@@ -360,6 +371,71 @@ func initializeControllerWorkerCounts() {
}
}
+// initializeOperatorBackrestSecret ensures the generic pgBackRest configuration
+// is available
+func initializeOperatorBackrestSecret(clientset kubernetes.Interface, namespace string) error {
+ var isNew, isModified bool
+
+ ctx := context.TODO()
+
+ // determine if the Secret already exists
+ secret, err := clientset.
+ CoreV1().Secrets(namespace).
+ Get(ctx, config.SecretOperatorBackrestRepoConfig, metav1.GetOptions{})
+
+ // if there is a true error, return. Otherwise, initialize a new Secret
+ if err != nil {
+ if !kerrors.IsNotFound(err) {
+ return err
+ }
+
+ secret = &v1.Secret{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: config.SecretOperatorBackrestRepoConfig,
+ },
+ Data: map[string][]byte{},
+ }
+ isNew = true
+ }
+
+ // set any missing defaults
+ for _, filename := range defaultBackrestRepoConfigKeys {
+ // skip if there is already content
+ if len(secret.Data[filename]) != 0 {
+ continue
+ }
+
+ file := path.Join(defaultBackrestRepoConfigPath, filename)
+
+ // if we can't read the contents of the file for whatever reason, warn,
+ // but continue
+ // otherwise, update the entry in the Secret
+ if contents, err := ioutil.ReadFile(file); err != nil {
+ log.Warn(err)
+ continue
+ } else {
+ secret.Data[filename] = contents
+ }
+
+ isModified = true
+ }
+
+ // do not make any updates if the secret is not modified at all
+ if !isModified {
+ return nil
+ }
+
+ // make the API calls based on if we are creating or updating
+ if isNew {
+ _, err := clientset.CoreV1().Secrets(namespace).Create(ctx, secret, metav1.CreateOptions{})
+ return err
+ }
+
+ _, err = clientset.CoreV1().Secrets(namespace).Update(ctx, secret, metav1.UpdateOptions{})
+
+ return err
+}
+
// SetupNamespaces is responsible for the initial namespace configuration for the Operator
// install. This includes setting the proper namespace operating mode, creating and/or updating
// namespaces as needed (or as permitted by the current operator mode), and returning a valid list
diff --git a/internal/util/cluster.go b/internal/util/cluster.go
index 185bc04035..03fbb3c69b 100644
--- a/internal/util/cluster.go
+++ b/internal/util/cluster.go
@@ -152,7 +152,7 @@ func CreateBackrestRepoSecrets(clientset kubernetes.Interface,
// SSHD(...?) and possible S3 credentials
configs, configErr := clientset.
CoreV1().Secrets(backrestRepoConfig.OperatorNamespace).
- Get(ctx, "pgo-backrest-repo-config", metav1.GetOptions{})
+ Get(ctx, config.SecretOperatorBackrestRepoConfig, metav1.GetOptions{})
if configErr != nil {
log.Error(configErr)
From cef395600f847aa836beea52151a4a72f051c0b6 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 17 Nov 2020 21:48:16 -0500
Subject: [PATCH 016/276] Provide clarity on retrieving credentials for
standbys
The `pgo show user` command has a `--show-system-accounts` flag
for retrieving the credentials of system accounts (e.g. superuser,
replication users, etc.), which can be helpful when setting up
standby clusters.
Issue: #2053
---
.../high-availability/multi-cluster-kubernetes.md | 11 +++++++++++
docs/content/pgo-client/common-tasks.md | 8 ++++++++
2 files changed, 19 insertions(+)
diff --git a/docs/content/architecture/high-availability/multi-cluster-kubernetes.md b/docs/content/architecture/high-availability/multi-cluster-kubernetes.md
index c6043adba4..f2be1e03c5 100644
--- a/docs/content/architecture/high-availability/multi-cluster-kubernetes.md
+++ b/docs/content/architecture/high-availability/multi-cluster-kubernetes.md
@@ -93,6 +93,14 @@ that matches that of the active cluster it is replicating.
- `--pgbackrest-s3-endpoint`: The S3 endpoint to use
- `--pgbackrest-s3-region`: The S3 region to use
+If you do not want to set the user credentials, you can retrieve them at a later
+time by using the [`pgo show user`]({{< relref "/pgo-client/reference/pgo_show_user.md" >}})
+command with the `--show-system-accounts` flag, e.g.
+
+```
+pgo show user --show-system-accounts hippo
+```
+
With respect to the credentials, it should be noted that when the standby
cluster is being created within the same Kubernetes cluster AND it has access to
the Kubernetes Secret created for the active cluster, one can use the
@@ -182,6 +190,9 @@ pgo create cluster hippo-standby --standby --pgbouncer --replica-count=2 \
--password=opensourcehippo
```
+(If you are unsure of your credentials, you can use
+`pgo show user hippo --show-system-accounts` to retrieve them).
+
Note the use of the `--pgbackrest-repo-path` flag as it points to the name of
the pgBackRest repository that is used for the original `hippo` cluster.
diff --git a/docs/content/pgo-client/common-tasks.md b/docs/content/pgo-client/common-tasks.md
index 50bdd46e72..33396c1214 100644
--- a/docs/content/pgo-client/common-tasks.md
+++ b/docs/content/pgo-client/common-tasks.md
@@ -1298,6 +1298,14 @@ pgo create cluster hippo-standby --standby --replica-count=2 \
--password=opensourcehippo
```
+If you are unsure of your user credentials from the original `hippo` cluster,
+you can retrieve them using the [`pgo show user`]({{< relref "/pgo-client/reference/pgo_show_user.md" >}})
+command with the `--show-system-accounts` flag:
+
+```
+pgo show user hippo --show-system-accounts
+```
+
The standby cluster will take a few moments to bootstrap, but it is now set up!
### Promoting a Standby Cluster
From 0d0c6a388b66d3fad794b14533fb3a6416ffd5d5 Mon Sep 17 00:00:00 2001
From: Chris Bandy
Date: Tue, 17 Nov 2020 15:56:18 -0600
Subject: [PATCH 017/276] Use GitHub Actions to run golangci-lint on new Go
code
- https://github.com/features/actions
- https://docs.github.com/en/free-pro-team@latest/actions
---
.github/workflows/lint.yaml | 15 +++++++++++++++
.golangci.yaml | 16 ++++++++++++++++
2 files changed, 31 insertions(+)
create mode 100644 .github/workflows/lint.yaml
create mode 100644 .golangci.yaml
diff --git a/.github/workflows/lint.yaml b/.github/workflows/lint.yaml
new file mode 100644
index 0000000000..29e173c4fa
--- /dev/null
+++ b/.github/workflows/lint.yaml
@@ -0,0 +1,15 @@
+on:
+ pull_request:
+ branches:
+ - master
+
+jobs:
+ golangci-lint:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v2
+ - uses: golangci/golangci-lint-action@v2
+ with:
+ version: v1.32
+ args: --timeout=5m
+ only-new-issues: true
diff --git a/.golangci.yaml b/.golangci.yaml
new file mode 100644
index 0000000000..c8ac7c76ed
--- /dev/null
+++ b/.golangci.yaml
@@ -0,0 +1,16 @@
+# https://golangci-lint.run/usage/configuration/
+
+linters:
+ disable:
+ - scopelint
+ enable:
+ - gosimple
+ - misspell
+ presets:
+ - bugs
+ - format
+ - unused
+
+run:
+ skip-dirs:
+ - pkg/generated
From a08ac1d7e8d6d63edbc004fe1633c32265f20cd1 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Fri, 20 Nov 2020 13:35:37 -0500
Subject: [PATCH 018/276] Add command defaults to some of the examples
Given that the variable `$PGO_CMD` can require a lot of context,
and that some of the examples may be run without first running the
precursor "envs" script, this adds a sane default of `kubectl` to
some of the example scripts, while also indicating what the
acceptable values are, should one introspect the scripts.
Issue: #1928
---
examples/create-by-resource/run.sh | 2 ++
examples/custom-config/create.sh | 7 ++-----
2 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/examples/create-by-resource/run.sh b/examples/create-by-resource/run.sh
index ea034a4fe2..e6940ead12 100755
--- a/examples/create-by-resource/run.sh
+++ b/examples/create-by-resource/run.sh
@@ -18,6 +18,8 @@
#########
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+# PGO_CMD should either be "kubectl" or "oc" -- defaulting to kubectl
+PGO_CMD=${PGO_CMD:-kubectl}
# A namespace that exists in NAMESPACE env var - see examples/envs.sh
export NS=pgouser1
diff --git a/examples/custom-config/create.sh b/examples/custom-config/create.sh
index b0599f1b37..df6c701f2a 100755
--- a/examples/custom-config/create.sh
+++ b/examples/custom-config/create.sh
@@ -28,11 +28,8 @@ function echo_info() {
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-#Error if PGO_CMD not set
-if [[ -z ${PGO_CMD} ]]
-then
- echo_err "PGO_CMD is not set."
-fi
+# PGO_CMD should either be "kubectl" or "oc" -- defaulting to kubectl
+PGO_CMD=${PGO_CMD:-kubectl}
#Error if PGO_NAMESPACE not set
if [[ -z ${PGO_NAMESPACE} ]]
From 2f32be89b2cd785640bc833d7c709d4a99b0274a Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Fri, 27 Nov 2020 11:54:40 -0500
Subject: [PATCH 019/276] Fix crash in cluster shutdown logic
The Operator would crash if a shutdown was issued but there was
no primary pod in the cluster.
Issue: [ch9825]
Issue: #2073
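The fix guards both degenerate cases before the primary pod is used: zero pods (the previously-crashing case) and more than one pod. A minimal sketch of the guard logic, with a simplified signature and a plain string slice standing in for the Operator's actual pod list type:

```go
package main

import "fmt"

// primaryPod returns the single primary pod from a listing. It rejects the
// zero-pod case (which previously caused the crash) as well as the ambiguous
// multi-pod case, mirroring the shutdown guard in clusterlogic.go.
func primaryPod(pods []string, cluster string) (string, error) {
	switch {
	case len(pods) == 0:
		return "", fmt.Errorf("could not find primary pod for shutdown of cluster %s", cluster)
	case len(pods) > 1:
		return "", fmt.Errorf("invalid number of primary pods (%d) found when shutting down cluster %s", len(pods), cluster)
	}
	return pods[0], nil
}
```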
---
internal/operator/cluster/clusterlogic.go | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index dd8a9e6827..7b78d66f00 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -630,7 +630,10 @@ func ShutdownCluster(clientset kubeapi.Interface, cluster crv1.Pgcluster) error
return err
}
- if len(pods.Items) > 1 {
+ if len(pods.Items) == 0 {
+ return fmt.Errorf("Cluster Operator: Could not find primary pod for shutdown of "+
+ "cluster %s", cluster.Name)
+ } else if len(pods.Items) > 1 {
return fmt.Errorf("Cluster Operator: Invalid number of primary pods (%d) found when "+
"shutting down cluster %s", len(pods.Items), cluster.Name)
}
From 7c22d5ef6929da44cbab21a2feb6e8b4311b0dd2 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 29 Nov 2020 14:56:01 -0500
Subject: [PATCH 020/276] Add TLS support for pgBouncer
Since 2b05f01b, the Postgres Operator has supported configuration of TLS
connections for PostgreSQL clusters. However, connections made to
pgBouncer did not support TLS directly, though one could do some heavy
work to the template files to bring about this support.
This introduces the ability to directly configure pgBouncer instances
with TLS, with the following preconditions:
- TLS MUST be enabled within the PostgreSQL cluster
- pgBouncer and the PostgreSQL cluster share the same certificate
authority (CA) bundle
When TLS is enabled, connections are facilitated with the following
default rules:
- All connections to pgBouncer MUST be over TLS. Effectively, this
is "TLS only" if connecting via pgBouncer.
- Connections coming into pgBouncer have a PGSSLMODE of "require"
- Connections going into PostgreSQL have a PGSSLMODE of "verify-ca"
This adds an attribute to the pgcluster Spec in the "pgBouncer" section
called "tlsSecret", which will store the name of the TLS secret to use
for pgBouncer.
One can also enable TLS for pgBouncer in the following ways:
- `pgo create cluster --pgbouncer-tls-secret` if `--pgbouncer` is set
- `pgo create pgbouncer --tls-secret`
Issue: [ch9777]
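The preconditions above reduce to a single predicate: TLS is configured for pgBouncer only when pgBouncer has its own TLS secret and the PostgreSQL cluster itself is TLS-enabled. A rough sketch with simplified stand-in types (not the Operator's actual `crv1` structs):

```go
package main

// tlsSpec and pgBouncerSpec are simplified stand-ins for the relevant
// pgcluster spec fields.
type tlsSpec struct {
	CASecret  string
	TLSSecret string
}

type pgBouncerSpec struct {
	TLSSecret string
}

// pgBouncerTLSEnabled reports whether TLS should be configured for pgBouncer:
// pgBouncer must have its own TLS secret, and the PostgreSQL cluster must
// already be TLS-enabled (both a CA secret and a server TLS secret set).
func pgBouncerTLSEnabled(tls tlsSpec, bouncer pgBouncerSpec) bool {
	clusterTLSEnabled := tls.CASecret != "" && tls.TLSSecret != ""
	return bouncer.TLSSecret != "" && clusterTLSEnabled
}
```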
---
cmd/pgo/cmd/cluster.go | 1 +
cmd/pgo/cmd/create.go | 9 +++
cmd/pgo/cmd/pgbouncer.go | 1 +
docs/content/custom-resources/_index.md | 1 +
docs/content/pgo-client/common-tasks.md | 17 +----
.../reference/pgo_create_cluster.md | 5 +-
.../reference/pgo_create_pgbouncer.md | 5 +-
docs/content/tutorial/pgbouncer.md | 71 +++++++++++++++++++
examples/pgo-bash-completion | 4 --
.../files/pgo-configs/pgbouncer-template.json | 39 +++++++++-
.../files/pgo-configs/pgbouncer.ini | 8 +++
.../files/pgo-configs/pgbouncer_hba.conf | 4 ++
.../apiserver/clusterservice/clusterimpl.go | 31 +++++++-
.../pgbouncerservice/pgbouncerimpl.go | 46 +++++++++++-
internal/operator/cluster/pgbouncer.go | 37 ++++++++--
internal/operator/cluster/pgbouncer_test.go | 52 ++++++++++++++
pkg/apis/crunchydata.com/v1/cluster.go | 5 ++
pkg/apiservermsgs/clustermsgs.go | 6 +-
pkg/apiservermsgs/pgbouncermsgs.go | 3 +
19 files changed, 309 insertions(+), 36 deletions(-)
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index 12179d8fcf..a125bc2b5b 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -327,6 +327,7 @@ func createCluster(args []string, ns string, createClusterCmd *cobra.Command) {
r.PgBouncerMemoryRequest = PgBouncerMemoryRequest
r.PgBouncerMemoryLimit = PgBouncerMemoryLimit
r.PgBouncerReplicas = PgBouncerReplicas
+ r.PgBouncerTLSSecret = PgBouncerTLSSecret
// determine if the user wants to create tablespaces as part of this request,
// and if so, set the values
r.Tablespaces = getTablespaces(Tablespaces)
diff --git a/cmd/pgo/cmd/create.go b/cmd/pgo/cmd/create.go
index 57e4e77eb6..25dd1119b9 100644
--- a/cmd/pgo/cmd/create.go
+++ b/cmd/pgo/cmd/create.go
@@ -122,6 +122,9 @@ var PasswordReplication string
// variables used for setting up TLS-enabled PostgreSQL clusters
var (
+ // PgBouncerTLSSecret is the name of the secret that contains the
+ // TLS information for enabling TLS for pgBouncer
+ PgBouncerTLSSecret string
// TLSOnly indicates that only TLS connections will be accepted for a
// PostgreSQL cluster
TLSOnly bool
@@ -435,6 +438,9 @@ func init() {
createClusterCmd.Flags().StringVar(&PgBouncerMemoryLimit, "pgbouncer-memory-limit", "", "Set the amount of memory to limit for "+
"pgBouncer.")
createClusterCmd.Flags().Int32Var(&PgBouncerReplicas, "pgbouncer-replicas", 0, "Set the total number of pgBouncer instances to deploy. If not set, defaults to 1.")
+ createClusterCmd.Flags().StringVar(&PgBouncerTLSSecret, "pgbouncer-tls-secret", "", "The name of the secret "+
+ "that contains the TLS keypair to use for enabling pgBouncer to accept TLS connections. "+
+ "Must also set server-tls-secret and server-ca-secret.")
createClusterCmd.Flags().StringVarP(&ReplicaStorageConfig, "replica-storage-config", "", "", "The name of a Storage config in pgo.yaml to use for the cluster replica storage.")
createClusterCmd.Flags().StringVarP(&PodAntiAffinity, "pod-anti-affinity", "", "",
"Specifies the type of anti-affinity that should be utilized when applying "+
@@ -504,6 +510,9 @@ func init() {
"pgBouncer.")
createPgbouncerCmd.Flags().Int32Var(&PgBouncerReplicas, "replicas", 0, "Set the total number of pgBouncer instances to deploy. If not set, defaults to 1.")
createPgbouncerCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.")
+ createPgbouncerCmd.Flags().StringVar(&PgBouncerTLSSecret, "tls-secret", "", "The name of the secret "+
+ "that contains the TLS keypair to use for enabling pgBouncer to accept TLS connections. "+
+ "The PostgreSQL cluster must have TLS enabled.")
// "pgo create pgouser" flags
createPgouserCmd.Flags().BoolVarP(&AllNamespaces, "all-namespaces", "", false, "specifies this user will have access to all namespaces.")
diff --git a/cmd/pgo/cmd/pgbouncer.go b/cmd/pgo/cmd/pgbouncer.go
index d787b1ebbe..9450623bf1 100644
--- a/cmd/pgo/cmd/pgbouncer.go
+++ b/cmd/pgo/cmd/pgbouncer.go
@@ -68,6 +68,7 @@ func createPgbouncer(args []string, ns string) {
Namespace: ns,
Replicas: PgBouncerReplicas,
Selector: Selector,
+ TLSSecret: PgBouncerTLSSecret,
}
if err := util.ValidateQuantity(request.CPURequest, "cpu"); err != nil {
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index f4384e311e..9869a29bb1 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -784,6 +784,7 @@ a PostgreSQL cluster to help with failover scenarios too.
| Limits | `create`, `update` | Specify the container resource limits that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
| Replicas | `create`, `update` | The number of pgBouncer instances to deploy. Must be set to at least `1` to deploy pgBouncer. Setting to `0` removes an existing pgBouncer deployment for the PostgreSQL cluster. |
| Resources | `create`, `update` | Specify the container resource requests that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| TLSSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair that is used for the pgBouncer instance to identify itself and perform TLS communications with PostgreSQL clients. Must be used with the parent Spec `TLSSecret` and `CASecret`. |
##### Annotations Specification
diff --git a/docs/content/pgo-client/common-tasks.md b/docs/content/pgo-client/common-tasks.md
index 33396c1214..8eefb412c2 100644
--- a/docs/content/pgo-client/common-tasks.md
+++ b/docs/content/pgo-client/common-tasks.md
@@ -1485,22 +1485,7 @@ You can view policies as following:
### Connection Pooling via pgBouncer
-To add a pgbouncer Deployment to your Postgres cluster, enter:
-
- pgo create cluster hacluster --pgbouncer -n pgouser1
-
-You can add pgbouncer after a Postgres cluster is created as follows:
-
- pgo create pgbouncer hacluster
- pgo create pgbouncer --selector=name=hacluster
-
-You can also specify a pgbouncer password as follows:
-
- pgo create cluster hacluster --pgbouncer --pgbouncer-pass=somepass -n pgouser1
-
-You can remove a pgbouncer from a cluster as follows:
-
- pgo delete pgbouncer hacluster -n pgouser1
+Please see the [tutorial on pgBouncer]({{< relref "tutorial/pgbouncer.md" >}}).
### Query Analysis via pgBadger
diff --git a/docs/content/pgo-client/reference/pgo_create_cluster.md b/docs/content/pgo-client/reference/pgo_create_cluster.md
index 265b3c5517..7a8661845d 100644
--- a/docs/content/pgo-client/reference/pgo_create_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_create_cluster.md
@@ -79,6 +79,7 @@ pgo create cluster [flags]
--pgbouncer-memory string Set the amount of memory to request for pgBouncer. Defaults to server value (24Mi).
--pgbouncer-memory-limit string Set the amount of memory to limit for pgBouncer.
--pgbouncer-replicas int32 Set the total number of pgBouncer instances to deploy. If not set, defaults to 1.
+ --pgbouncer-tls-secret string The name of the secret that contains the TLS keypair to use for enabling pgBouncer to accept TLS connections. Must also set server-tls-secret and server-ca-secret.
--pgo-image-prefix string The PGOImagePrefix to use for cluster creation. If specified, overrides the global configuration.
--pod-anti-affinity string Specifies the type of anti-affinity that should be utilized when applying default pod anti-affinity rules to PG clusters (default "preferred")
--pod-anti-affinity-pgbackrest string Set the Pod anti-affinity rules specifically for the pgBackRest repository. Defaults to the default cluster pod anti-affinity (i.e. "preferred"), or the value set by --pod-anti-affinity
@@ -116,7 +117,7 @@ pgo create cluster [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -130,4 +131,4 @@ pgo create cluster [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 22-Nov-2020
diff --git a/docs/content/pgo-client/reference/pgo_create_pgbouncer.md b/docs/content/pgo-client/reference/pgo_create_pgbouncer.md
index ad406e60e0..156820f023 100644
--- a/docs/content/pgo-client/reference/pgo_create_pgbouncer.md
+++ b/docs/content/pgo-client/reference/pgo_create_pgbouncer.md
@@ -25,12 +25,13 @@ pgo create pgbouncer [flags]
--memory-limit string Set the amount of memory to limit for pgBouncer.
--replicas int32 Set the total number of pgBouncer instances to deploy. If not set, defaults to 1.
-s, --selector string The selector to use for cluster filtering.
+ --tls-secret string The name of the secret that contains the TLS keypair to use for enabling pgBouncer to accept TLS connections. The PostgreSQL cluster must have TLS enabled.
```
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -44,4 +45,4 @@ pgo create pgbouncer [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 22-Nov-2020
diff --git a/docs/content/tutorial/pgbouncer.md b/docs/content/tutorial/pgbouncer.md
index 0349ff1eaf..43c2534996 100644
--- a/docs/content/tutorial/pgbouncer.md
+++ b/docs/content/tutorial/pgbouncer.md
@@ -130,6 +130,77 @@ SHOW stats;
Success, you have connected to pgBouncer!
+## Setup pgBouncer with TLS
+
+Just as you can [set up TLS for PostgreSQL]({{< relref "tutorial/tls.md" >}}), you can set up TLS connections for pgBouncer. To do this, the PostgreSQL Operator takes the following steps:
+
+- Ensuring TLS communication between a client (e.g. `psql`, your application, etc.) and pgBouncer
+- Ensuring TLS communication between pgBouncer and PostgreSQL
+
+When TLS is enabled, the PostgreSQL Operator configures pgBouncer to require each client to use TLS to communicate with pgBouncer. Additionally, the PostgreSQL Operator requires that pgBouncer and the PostgreSQL cluster share the same certificate authority (CA) bundle, which allows for pgBouncer to communicate with the PostgreSQL cluster using PostgreSQL's [`verify-ca` SSL mode](https://www.postgresql.org/docs/current/libpq-ssl.html#LIBPQ-SSL-PROTECTION).
+
+The below guide will show you how to set up TLS for pgBouncer.
+
+### Prerequisites
+
+In order to set up TLS connections for pgBouncer, you must first [enable TLS on your PostgreSQL cluster]({{< relref "tutorial/tls.md" >}}).
+
+For the purposes of this exercise, we will re-use the Secret TLS keypair `hippo-tls-keypair` that was created for the PostgreSQL server. This is only being done for convenience: you can substitute `hippo-tls-keypair` with a different TLS key pair as long as it can be verified by the certificate authority (CA) that you selected for your PostgreSQL cluster. Recall that the certificate authority (CA) bundle is stored in a Secret named `postgresql-ca`.
+
+### Create pgBouncer with TLS
+
+Knowing that our TLS key pair is stored in a Secret called `hippo-tls-keypair`, you can set up pgBouncer with TLS using the following command:
+
+```
+pgo create pgbouncer hippo --tls-secret=hippo-tls-keypair
+```
+
+And that's it! So long as the prerequisites are satisfied, this will create a pgBouncer instance that is TLS enabled.
+
+Don't believe it? Try logging in. First, ensure you have a port-forward from pgBouncer to your host machine:
+
+```
+kubectl -n pgo port-forward svc/hippo-pgbouncer 5432:5432
+```
+
+Then, connect to the pgBouncer instances:
+
+```
+PGPASSWORD=securerandomlygeneratedpassword psql -h localhost -p 5432 -U testuser hippo
+```
+
+You should see something similar to this:
+
+```
+psql (12.5)
+SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
+Type "help" for help.
+
+hippo=>
+```
+
+Still don't believe it? You can verify your connection using the PostgreSQL `pg_backend_pid()` function and the [`pg_stat_ssl`](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-SSL-VIEW) monitoring view:
+
+```
+hippo=> SELECT * FROM pg_stat_ssl WHERE pid = pg_backend_pid();
+ pid | ssl | version | cipher | bits | compression | client_dn | client_serial | issuer_dn
+-------+-----+---------+------------------------+------+-------------+-----------+---------------+-----------
+ 15653 | t | TLSv1.3 | TLS_AES_256_GCM_SHA384 | 256 | f | | |
+(1 row)
+```
+
+### Create a PostgreSQL cluster with pgBouncer and TLS
+
+Want to create a PostgreSQL cluster with pgBouncer and TLS enabled? You can with the [`pgo create cluster`]({{< relref "pgo-client/reference/pgo_create_cluster.md" >}}) command, using the `--pgbouncer-tls-secret` flag. Using the same Secrets that were created in the [creating a PostgreSQL cluster with TLS]({{< relref "tutorial/tls.md" >}}) tutorial, you can create a PostgreSQL cluster with pgBouncer and TLS with the following command:
+
+```
+pgo create cluster hippo \
+ --server-ca-secret=postgresql-ca \
+ --server-tls-secret=hippo-tls-keypair \
+ --pgbouncer \
+ --pgbouncer-tls-secret=hippo-tls-keypair
+```
+
## Customize CPU / Memory for pgBouncer
### Provisioning
diff --git a/examples/pgo-bash-completion b/examples/pgo-bash-completion
index 70271ccf0c..6c89ce0b37 100644
--- a/examples/pgo-bash-completion
+++ b/examples/pgo-bash-completion
@@ -383,8 +383,6 @@ _pgo_create_cluster()
local_nonpersistent_flags+=("--pgbadger")
flags+=("--pgbouncer")
local_nonpersistent_flags+=("--pgbouncer")
- flags+=("--pgbouncer-pass=")
- local_nonpersistent_flags+=("--pgbouncer-pass=")
flags+=("--policies=")
two_word_flags+=("-z")
local_nonpersistent_flags+=("--policies=")
@@ -454,8 +452,6 @@ _pgo_create_pgbouncer()
flags_with_completion=()
flags_completion=()
- flags+=("--pgbouncer-pass=")
- local_nonpersistent_flags+=("--pgbouncer-pass=")
flags+=("--selector=")
two_word_flags+=("-s")
local_nonpersistent_flags+=("--selector=")
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer-template.json
index 38202a7464..9e88f4bbdb 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer-template.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer-template.json
@@ -39,6 +39,11 @@
},
"spec": {
"serviceAccountName": "pgo-default",
+ {{ if not .DisableFSGroup }}
+ "securityContext": {
+ "fsGroup": 2
+ },
+ {{ end }}
"containers": [{
"name": "pgbouncer",
"image": "{{.CCPImagePrefix}}/crunchy-pgbouncer:{{.CCPImageTag}}",
@@ -59,13 +64,41 @@
"name": "PG_PRIMARY_SERVICE_NAME",
"value": "{{.PrimaryServiceName}}"
}],
- "volumeMounts": [{
+ "volumeMounts": [
+ {{if .TLSEnabled}}
+ {
+ "mountPath": "/pgconf/tls/pgbouncer",
+ "name": "tls-pgbouncer"
+ },
+ {{ end }}
+ {
"name": "pgbouncer-conf",
"mountPath": "/pgconf/",
"readOnly": false
- }]
+ }
+ ]
}],
"volumes": [
+ {{if .TLSEnabled}}
+ {
+ "name": "tls-pgbouncer",
+ "defaultMode": 288,
+ "projected": {
+ "sources": [
+ {
+ "secret": {
+ "name": "{{.TLSSecret}}"
+ }
+ },
+ {
+ "secret": {
+ "name": "{{.CASecret}}"
+ }
+ }
+ ]
+ }
+ },
+ {{ end }}
{
"name": "pgbouncer-conf",
"projected": {
@@ -78,7 +111,7 @@
{
"secret": {
"name": "{{.PGBouncerSecret}}",
- "defaultMode": 511
+ "defaultMode": 288
}
}
]
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer.ini b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer.ini
index 157f9a96e1..5310692c37 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer.ini
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer.ini
@@ -20,3 +20,11 @@ reserve_pool_size = 0
reserve_pool_timeout = 5
query_timeout = 0
ignore_startup_parameters = extra_float_digits
+{{ if .TLSEnabled }}
+client_tls_sslmode = require
+client_tls_key_file = /pgconf/tls/pgbouncer/tls.key
+client_tls_cert_file = /pgconf/tls/pgbouncer/tls.crt
+client_tls_ca_file = /pgconf/tls/pgbouncer/ca.crt
+server_tls_sslmode = verify-ca
+server_tls_ca_file = /pgconf/tls/pgbouncer/ca.crt
+{{ end }}
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer_hba.conf b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer_hba.conf
index 824c82705e..aee753cd1a 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer_hba.conf
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer_hba.conf
@@ -1 +1,5 @@
+{{ if .TLSEnabled }}
+hostssl all all 0.0.0.0/0 md5
+{{ else }}
host all all 0.0.0.0/0 md5
+{{ end }}
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 8369d421ec..e099d0303f 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -1264,6 +1264,11 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
spec.PgBouncer.Resources[v1.ResourceMemory] = apiserver.Pgo.Cluster.DefaultPgBouncerResourceMemory
}
+ // if TLS is enabled for pgBouncer, ensure the secret is specified
+ if request.PgBouncerTLSSecret != "" {
+ spec.PgBouncer.TLSSecret = request.PgBouncerTLSSecret
+ }
+
spec.PrimaryStorage, _ = apiserver.Pgo.GetStorageSpec(apiserver.Pgo.PrimaryStorage)
if request.StorageConfig != "" {
spec.PrimaryStorage, _ = apiserver.Pgo.GetStorageSpec(request.StorageConfig)
@@ -2148,12 +2153,25 @@ func validateBackrestStorageTypeOnCreate(request *msgs.CreateClusterRequest) err
func validateClusterTLS(request *msgs.CreateClusterRequest) error {
ctx := context.TODO()
- // if ReplicationTLSSecret is set, but neither TLSSecret nor CASecret is not
- // set, then return
+ // if ReplicationTLSSecret is set, but neither TLSSecret nor CASecret is set
+ // then return
if request.ReplicationTLSSecret != "" && (request.TLSSecret == "" || request.CASecret == "") {
return fmt.Errorf("Both TLS secret and CA secret must be set in order to enable certificate-based authentication for replication")
}
+ // if PgBouncerTLSSecret is set, return if:
+ // a) pgBouncer is not enabled OR
+ // b) neither TLSSecret nor CASecret is set
+ if request.PgBouncerTLSSecret != "" {
+ if !request.PgbouncerFlag {
+ return fmt.Errorf("pgBouncer must be enabled in order to enable TLS for pgBouncer")
+ }
+
+ if request.TLSSecret == "" || request.CASecret == "" {
+ return fmt.Errorf("Both TLS secret and CA secret must be set in order to enable TLS for pgBouncer")
+ }
+ }
+
// if TLSOnly is not set and neither TLSSecret nor CASecret are set, just return
if !request.TLSOnly && request.TLSSecret == "" && request.CASecret == "" {
return nil
@@ -2192,6 +2210,15 @@ func validateClusterTLS(request *msgs.CreateClusterRequest) error {
}
}
+ // then, if set, the pgBouncer TLS secret
+ if request.PgBouncerTLSSecret != "" {
+ if _, err := apiserver.Clientset.
+ CoreV1().Secrets(request.Namespace).
+ Get(ctx, request.PgBouncerTLSSecret, metav1.GetOptions{}); err != nil {
+ return err
+ }
+ }
+
// after this, we are validated!
return nil
}
diff --git a/internal/apiserver/pgbouncerservice/pgbouncerimpl.go b/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
index 6ce41ac784..85373855d6 100644
--- a/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
+++ b/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
@@ -80,7 +80,6 @@ func CreatePgbouncer(request *msgs.CreatePgbouncerRequest, ns, pgouser string) m
}
for _, cluster := range clusterList.Items {
-
// check if the current cluster is not upgraded to the deployed
// Operator version. If not, do not allow the command to complete
if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE {
@@ -89,6 +88,13 @@ func CreatePgbouncer(request *msgs.CreatePgbouncerRequest, ns, pgouser string) m
return resp
}
+ // validate the TLS settings
+ if err := validateTLS(cluster, request); err != nil {
+ resp.Status.Code = msgs.Error
+ resp.Status.Msg = err.Error()
+ return resp
+ }
+
log.Debugf("adding pgbouncer to cluster [%s]", cluster.Name)
resources := v1.ResourceList{}
@@ -132,6 +138,7 @@ func CreatePgbouncer(request *msgs.CreatePgbouncerRequest, ns, pgouser string) m
}
cluster.Spec.PgBouncer.Resources = resources
+ cluster.Spec.PgBouncer.TLSSecret = request.TLSSecret
// update the cluster CRD with these updates. If there is an error
if _, err := apiserver.Clientset.CrunchydataV1().Pgclusters(request.Namespace).
@@ -534,3 +541,40 @@ func setPgBouncerServiceDetail(cluster crv1.Pgcluster, result *msgs.ShowPgBounce
}
}
}
+
+// validateTLS validates the parameters that allow a user to enable TLS
+// connections to a pgBouncer Deployment. In essence, it requires both the
+// TLSSecret to be set for pgBouncer and a CASecret/TLSSecret to be set for
+// the cluster itself
+func validateTLS(cluster crv1.Pgcluster, request *msgs.CreatePgbouncerRequest) error {
+ ctx := context.TODO()
+
+ // if TLSSecret is not set, well, this is valid
+ if request.TLSSecret == "" {
+ return nil
+ }
+
+ // if the pgBouncer TLSSecret is set but the cluster does not have both its
+ // TLSSecret and CASecret set, then error out
+ if request.TLSSecret != "" && (cluster.Spec.TLS.TLSSecret == "" || cluster.Spec.TLS.CASecret == "") {
+ return fmt.Errorf("%s: both TLS secret and CA secret must be set on the cluster in order to enable TLS for pgBouncer", cluster.Name)
+ }
+
+ // ensure the TLSSecret and CASecret for the cluster are actually present
+ // now check for the existence of the two secrets
+ // First the TLS secret
+ if _, err := apiserver.Clientset.
+ CoreV1().Secrets(cluster.Namespace).
+ Get(ctx, cluster.Spec.TLS.TLSSecret, metav1.GetOptions{}); err != nil {
+ return fmt.Errorf("%s: cannot find TLS secret for cluster: %w", cluster.Name, err)
+ }
+
+ if _, err := apiserver.Clientset.
+ CoreV1().Secrets(cluster.Namespace).
+ Get(ctx, cluster.Spec.TLS.CASecret, metav1.GetOptions{}); err != nil {
+ return fmt.Errorf("%s: cannot find CA secret for cluster: %w", cluster.Name, err)
+ }
+
+ // after this, we are validated!
+ return nil
+}
diff --git a/internal/operator/cluster/pgbouncer.go b/internal/operator/cluster/pgbouncer.go
index 569783a3c8..9c70884faa 100644
--- a/internal/operator/cluster/pgbouncer.go
+++ b/internal/operator/cluster/pgbouncer.go
@@ -51,13 +51,20 @@ type PgbouncerPasswdFields struct {
type PgbouncerConfFields struct {
PG_PRIMARY_SERVICE_NAME string
PG_PORT string
+ TLSEnabled bool
+}
+
+type pgBouncerHBATemplateFields struct {
+ TLSEnabled bool
}
type pgBouncerTemplateFields struct {
Name string
+ CASecret string
ClusterName string
CCPImagePrefix string
CCPImageTag string
+ DisableFSGroup bool
Port string
PrimaryServiceName string
ContainerResources string
@@ -68,6 +75,8 @@ type pgBouncerTemplateFields struct {
PodAntiAffinityLabelName string
PodAntiAffinityLabelValue string
Replicas int32 `json:",string"`
+ TLSEnabled bool
+ TLSSecret string
}
// pgBouncerDeploymentFormat is the name of the Kubernetes Deployment that
@@ -518,7 +527,7 @@ func createPgbouncerConfigMap(clientset kubernetes.Interface, cluster *crv1.Pgcl
}
// generate the pgbouncer HBA file
- pgbouncerHBA, err := generatePgBouncerHBA()
+ pgbouncerHBA, err := generatePgBouncerHBA(cluster)
if err != nil {
log.Error(err)
@@ -565,6 +574,7 @@ func createPgBouncerDeployment(clientset kubernetes.Interface, cluster *crv1.Pgc
ClusterName: cluster.Name,
CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix),
CCPImageTag: util.GetStandardImageTag(cluster.Spec.CCPImage, cluster.Spec.CCPImageTag),
+ DisableFSGroup: operator.Pgo.Cluster.DisableFSGroup,
Port: cluster.Spec.Port,
PGBouncerConfigMap: util.GeneratePgBouncerConfigMapName(cluster.Name),
PGBouncerSecret: util.GeneratePgBouncerSecretName(cluster.Name),
@@ -579,6 +589,13 @@ func createPgBouncerDeployment(clientset kubernetes.Interface, cluster *crv1.Pgc
Replicas: cluster.Spec.PgBouncer.Replicas,
}
+ // set appropriate fields if TLS is enabled
+ if isPgBouncerTLSEnabled(cluster) {
+ fields.CASecret = cluster.Spec.TLS.CASecret
+ fields.TLSEnabled = true
+ fields.TLSSecret = cluster.Spec.PgBouncer.TLSSecret
+ }
+
// For debugging purposes, put the template substitution in stdout
if operator.CRUNCHY_DEBUG {
config.PgbouncerTemplate.Execute(os.Stdout, fields)
@@ -750,6 +767,7 @@ func generatePgBouncerConf(cluster *crv1.Pgcluster) (string, error) {
fields := PgbouncerConfFields{
PG_PRIMARY_SERVICE_NAME: cluster.Name,
PG_PORT: port,
+ TLSEnabled: isPgBouncerTLSEnabled(cluster),
}
// perform the substitution
@@ -770,12 +788,15 @@ func generatePgBouncerConf(cluster *crv1.Pgcluster) (string, error) {
// generatePgBouncerHBA generates the pgBouncer host-based authentication file
// using the template that is available
-func generatePgBouncerHBA() (string, error) {
- // ...apparently this is overkill, but this is here from the legacy method
- // and it seems like it's "ok" to leave it like this for now...
+func generatePgBouncerHBA(cluster *crv1.Pgcluster) (string, error) {
+ // we may have some substitutions if this is a TLS enabled cluster
+ fields := pgBouncerHBATemplateFields{
+ TLSEnabled: isPgBouncerTLSEnabled(cluster),
+ }
+
doc := bytes.Buffer{}
- if err := config.PgbouncerHBATemplate.Execute(&doc, struct{}{}); err != nil {
+ if err := config.PgbouncerHBATemplate.Execute(&doc, fields); err != nil {
log.Error(err)
return "", err
@@ -852,6 +873,12 @@ func installPgBouncer(clientset kubernetes.Interface, restconfig *rest.Config, p
return nil
}
+// isPgBouncerTLSEnabled returns true if TLS is enabled for pgBouncer, which
+// means that TLS is enabled for the PostgreSQL cluster itself
+func isPgBouncerTLSEnabled(cluster *crv1.Pgcluster) bool {
+ return cluster.Spec.PgBouncer.TLSSecret != "" && cluster.Spec.TLS.IsTLSEnabled()
+}
+
// makePostgresPassword creates the expected hash for a password type for a
// PostgreSQL password
func makePostgresPassword(passwordType pgpassword.PasswordType, password string) string {
diff --git a/internal/operator/cluster/pgbouncer_test.go b/internal/operator/cluster/pgbouncer_test.go
index 06ff30d8b6..b95d96828c 100644
--- a/internal/operator/cluster/pgbouncer_test.go
+++ b/internal/operator/cluster/pgbouncer_test.go
@@ -19,8 +19,60 @@ import (
"testing"
pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password"
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
)
+func TestIsPgBouncerTLSEnabled(t *testing.T) {
+ cluster := &crv1.Pgcluster{
+ Spec: crv1.PgclusterSpec{
+ PgBouncer: crv1.PgBouncerSpec{},
+ TLS: crv1.TLSSpec{},
+ },
+ }
+
+ t.Run("true", func(t *testing.T) {
+ cluster.Spec.PgBouncer.TLSSecret = "pgbouncer-tls"
+ cluster.Spec.TLS.CASecret = "ca"
+ cluster.Spec.TLS.TLSSecret = "postgres-tls"
+
+ if !isPgBouncerTLSEnabled(cluster) {
+ t.Errorf("expected true")
+ }
+ })
+
+ t.Run("false", func(t *testing.T) {
+ t.Run("neither enabled", func(t *testing.T) {
+ cluster.Spec.PgBouncer.TLSSecret = ""
+ cluster.Spec.TLS.CASecret = ""
+ cluster.Spec.TLS.TLSSecret = ""
+
+ if isPgBouncerTLSEnabled(cluster) {
+ t.Errorf("expected false")
+ }
+ })
+
+ t.Run("postgres TLS enabled only", func(t *testing.T) {
+ cluster.Spec.PgBouncer.TLSSecret = ""
+ cluster.Spec.TLS.CASecret = "ca"
+ cluster.Spec.TLS.TLSSecret = "postgres-tls"
+
+ if isPgBouncerTLSEnabled(cluster) {
+ t.Errorf("expected false")
+ }
+ })
+
+ t.Run("pgbouncer TLS enabled only", func(t *testing.T) {
+ cluster.Spec.PgBouncer.TLSSecret = "pgbouncer-tls"
+ cluster.Spec.TLS.CASecret = ""
+ cluster.Spec.TLS.TLSSecret = ""
+
+ if isPgBouncerTLSEnabled(cluster) {
+ t.Errorf("expected false")
+ }
+ })
+ })
+}
+
func TestMakePostgresPassword(t *testing.T) {
t.Run("md5", func(t *testing.T) {
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index bdc02406da..91b7f8dad8 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -254,6 +254,11 @@ type PgBouncerSpec struct {
// Limits, if specified, contains the container resource limits
// for any pgBouncer Deployments that are part of a PostgreSQL cluster
Limits v1.ResourceList `json:"limits"`
+ // TLSSecret contains the name of the secret to use that contains the TLS
+ // keypair for pgBouncer
+ // This follows the Kubernetes secret format ("kubernetes.io/tls") which has
+ // two keys: tls.crt and tls.key
+ TLSSecret string `json:"tlsSecret"`
}
// Enabled returns true if the pgBouncer is enabled for the cluster, i.e. there
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index e8983613c4..69bbdafe49 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -87,7 +87,11 @@ type CreateClusterRequest struct {
// PgBouncerReplicas represents the total number of pgBouncer pods to deploy with a
// PostgreSQL cluster. Only works if PgbouncerFlag is set, and if so, it must
// be at least 1. If 0 is passed in, it will automatically be set to 1
- PgBouncerReplicas int32
+ PgBouncerReplicas int32
+ // PgBouncerTLSSecret is the name of the Secret containing the TLS keypair
+ // for enabling TLS with pgBouncer. This also requires TLSSecret and
+ // CASecret to be set
+ PgBouncerTLSSecret string
CustomConfig string
StorageConfig string
WALStorageConfig string
diff --git a/pkg/apiservermsgs/pgbouncermsgs.go b/pkg/apiservermsgs/pgbouncermsgs.go
index 0feab5f15e..49669b8fe7 100644
--- a/pkg/apiservermsgs/pgbouncermsgs.go
+++ b/pkg/apiservermsgs/pgbouncermsgs.go
@@ -40,6 +40,9 @@ type CreatePgbouncerRequest struct {
// automatically be set to 1
Replicas int32
Selector string
+ // TLSSecret is the name of the secret that contains the keypair required to
+ // deploy TLS-enabled pgBouncer
+ TLSSecret string
}
// CreatePgbouncerResponse ...
From 8706c43df05498d99a757a5ecdbcfaccc6929053 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 30 Nov 2020 10:18:13 -0500
Subject: [PATCH 021/276] Catch error when generating default pgo-config
During the initialization of the default "pgo-config" ConfigMap,
there exists a case (likely a race condition that I did not track
down) that triggers an error, but we were not catching that error.
We should catch it, given that the next line could otherwise trigger
a nil-pointer panic.
Without this patch, the immediate remediation is to restart the
Operator Pod.
Issue: [ch9826]
Issue: #2075
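The pattern this patch applies is the standard Go check-before-use idiom: return on error instead of dereferencing a possibly-nil result on the next line. A minimal sketch (the `initialize`/`getConfig` names and the stub `configMap` type are simplified stand-ins for the Operator's real Kubernetes types):

```go
package main

import (
	"errors"
	"fmt"
)

// configMap is a stand-in for the Kubernetes ConfigMap returned by
// initialize(); the real type lives in k8s.io/api/core/v1.
type configMap struct {
	Data map[string]string
}

// initialize is a hypothetical stand-in for the Operator's ConfigMap
// bootstrap; it may fail, e.g. under the race described above.
func initialize(fail bool) (*configMap, error) {
	if fail {
		return nil, errors.New("configmap lookup timed out")
	}
	return &configMap{Data: map[string]string{"pgo.yaml": "..."}}, nil
}

// getConfig mirrors the patched flow: return the error immediately
// instead of dereferencing a possibly-nil pointer on the next line.
func getConfig(fail bool) (string, error) {
	cMap, err := initialize(fail)
	if err != nil {
		return "", fmt.Errorf("could not get ConfigMap: %w", err)
	}
	return cMap.Data["pgo.yaml"], nil // safe: cMap is non-nil here
}

func main() {
	if _, err := getConfig(true); err != nil {
		fmt.Println("handled:", err)
	}
	cfg, _ := getConfig(false)
	fmt.Println("config:", cfg)
}
```

Without the early return, `cMap.Data[...]` would panic whenever `initialize` fails, which is exactly the crash the patch guards against.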
---
internal/config/pgoconfig.go | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/internal/config/pgoconfig.go b/internal/config/pgoconfig.go
index ddb04cbf00..3fffa15a02 100644
--- a/internal/config/pgoconfig.go
+++ b/internal/config/pgoconfig.go
@@ -516,6 +516,11 @@ func (c *PgoConfig) GetConfig(clientset kubernetes.Interface, namespace string)
cMap, err := initialize(clientset, namespace)
+ if err != nil {
+ log.Errorf("could not get ConfigMap: %s", err.Error())
+ return err
+ }
+
//get the pgo.yaml config file
str := cMap.Data[CONFIG_PATH]
if str == "" {
From 29cf5ae83aecf2c328993da03953fd8fd5a3cb2f Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 30 Nov 2020 16:27:01 -0500
Subject: [PATCH 022/276] Allow for pg_stat_statements collections from
pgMonitor
pgMonitor 4.4 introduced the ability to scrape metrics around
pg_stat_statements, which include:
- `ccp_pg_stat_statements_total_calls_count`
Total number of queries run per user/database
- `ccp_pg_stat_statements_total_exec_time_ms`
Total runtime of all queries per user/database
- `ccp_pg_stat_statements_total_mean_exec_time_ms`
Mean runtime of all queries per user/database
- `ccp_pg_stat_statements_total_row_count`
Total rows returned from all queries per user/database
While there may not be corresponding visuals in the pgMonitor
Kubernetes overlay at this point, this at least allows for the
collection and aggregation of these metrics.
This also corrects a filename referenced in an error message.
Issue: [ch9841]
Issue: #2036
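The shell changes below repeat one gating pattern per major version: pick the base queries file for the server version, then append the matching pg_stat_statements file if present. A sketch of that mapping, using PostgreSQL's `server_version_num` encoding (e.g. 120000 = 12.0; the 9.5 lower bound here is an assumption, as the script's earlier branches are not shown):

```go
package main

import "fmt"

// queryFilesFor mirrors the version gating in start.sh: return the base
// queries file and the optional pg_stat_statements file for a server
// version expressed as server_version_num.
func queryFilesFor(version int) (base, stmts string) {
	switch {
	case version >= 90500 && version < 90600:
		return "queries_pg95.yml", "queries_pg_stat_statements_pg95.yml"
	case version >= 90600 && version < 100000:
		return "queries_pg96.yml", "queries_pg_stat_statements_pg96.yml"
	case version >= 100000 && version < 110000:
		return "queries_pg10.yml", "queries_pg_stat_statements_pg10.yml"
	case version >= 110000 && version < 120000:
		return "queries_pg11.yml", "queries_pg_stat_statements_pg11.yml"
	case version >= 120000 && version < 130000:
		return "queries_pg12.yml", "queries_pg_stat_statements_pg12.yml"
	case version >= 130000:
		return "queries_pg13.yml", "queries_pg_stat_statements_pg13.yml"
	}
	return "", "" // unknown or unsupported version
}

func main() {
	base, stmts := queryFilesFor(120004) // PostgreSQL 12.4
	fmt.Println(base, stmts)
}
```

In the script itself, a missing base file is an error (`echo_err`) while a missing pg_stat_statements file only warns, since those metrics are optional.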
---
bin/crunchy-postgres-exporter/start.sh | 38 +++++++++++++++++++++++++-
1 file changed, 37 insertions(+), 1 deletion(-)
diff --git a/bin/crunchy-postgres-exporter/start.sh b/bin/crunchy-postgres-exporter/start.sh
index f8e02e4094..2a0d543b70 100755
--- a/bin/crunchy-postgres-exporter/start.sh
+++ b/bin/crunchy-postgres-exporter/start.sh
@@ -144,6 +144,12 @@ else
else
echo_err "Custom Query file queries_pg95.yml does not exist (it should).."
fi
+ if [[ -f ${CONFIG_DIR?}/queries_pg_stat_statements_pg95.yml ]]
+ then
+ cat ${CONFIG_DIR?}/queries_pg_stat_statements_pg95.yml >> /tmp/queries.yml
+ else
+ echo_warn "Custom Query file queries_pg_stat_statements_pg95.yml not loaded."
+ fi
elif (( ${VERSION?} >= 90600 )) && (( ${VERSION?} < 100000 ))
then
if [[ -f ${CONFIG_DIR?}/queries_pg96.yml ]]
@@ -152,6 +158,12 @@ else
else
echo_err "Custom Query file queries_pg96.yml does not exist (it should).."
fi
+ if [[ -f ${CONFIG_DIR?}/queries_pg_stat_statements_pg96.yml ]]
+ then
+ cat ${CONFIG_DIR?}/queries_pg_stat_statements_pg96.yml >> /tmp/queries.yml
+ else
+ echo_warn "Custom Query file queries_pg_stat_statements_pg96.yml not loaded."
+ fi
elif (( ${VERSION?} >= 100000 )) && (( ${VERSION?} < 110000 ))
then
if [[ -f ${CONFIG_DIR?}/queries_pg10.yml ]]
@@ -160,6 +172,12 @@ else
else
echo_err "Custom Query file queries_pg10.yml does not exist (it should).."
fi
+ if [[ -f ${CONFIG_DIR?}/queries_pg_stat_statements_pg10.yml ]]
+ then
+ cat ${CONFIG_DIR?}/queries_pg_stat_statements_pg10.yml >> /tmp/queries.yml
+ else
+ echo_warn "Custom Query file queries_pg_stat_statements_pg10.yml not loaded."
+ fi
elif (( ${VERSION?} >= 110000 )) && (( ${VERSION?} < 120000 ))
then
if [[ -f ${CONFIG_DIR?}/queries_pg11.yml ]]
@@ -168,6 +186,12 @@ else
else
echo_err "Custom Query file queries_pg11.yml does not exist (it should).."
fi
+ if [[ -f ${CONFIG_DIR?}/queries_pg_stat_statements_pg11.yml ]]
+ then
+ cat ${CONFIG_DIR?}/queries_pg_stat_statements_pg11.yml >> /tmp/queries.yml
+ else
+ echo_warn "Custom Query file queries_pg_stat_statements_pg11.yml not loaded."
+ fi
elif (( ${VERSION?} >= 120000 )) && (( ${VERSION?} < 130000 ))
then
if [[ -f ${CONFIG_DIR?}/queries_pg12.yml ]]
@@ -176,13 +200,25 @@ else
else
echo_err "Custom Query file queries_pg12.yml does not exist (it should).."
fi
+ if [[ -f ${CONFIG_DIR?}/queries_pg_stat_statements_pg12.yml ]]
+ then
+ cat ${CONFIG_DIR?}/queries_pg_stat_statements_pg12.yml >> /tmp/queries.yml
+ else
+ echo_warn "Custom Query file queries_pg_stat_statements_pg12.yml not loaded."
+ fi
elif (( ${VERSION?} >= 130000 ))
then
if [[ -f ${CONFIG_DIR?}/queries_pg13.yml ]]
then
cat ${CONFIG_DIR?}/queries_pg13.yml >> /tmp/queries.yml
else
- echo_err "Custom Query file queries_pg12.yml does not exist (it should).."
+ echo_err "Custom Query file queries_pg13.yml does not exist (it should).."
+ fi
+ if [[ -f ${CONFIG_DIR?}/queries_pg_stat_statements_pg13.yml ]]
+ then
+ cat ${CONFIG_DIR?}/queries_pg_stat_statements_pg13.yml >> /tmp/queries.yml
+ else
+ echo_warn "Custom Query file queries_pg_stat_statements_pg13.yml not loaded."
fi
else
echo_err "Unknown or unsupported version of PostgreSQL. Exiting.."
From a33f61be1375e5c93e1f2e666e3f6a3a7abbe9fc Mon Sep 17 00:00:00 2001
From: jmckulk
Date: Fri, 20 Nov 2020 14:12:28 -0500
Subject: [PATCH 023/276] Compaction of pgo-sqlrunner into crunchy-postgres
As part of the compaction effort, pgo-sqlrunner is now a running mode in the
crunchy-postgres image. The pgo-sqlrunner image has been removed, and the
related files have been moved to the Crunchy Containers repo. References to
the image have been updated to use the crunchy-postgres image and the running
mode `MODE: sqlrunner`.
---
Makefile | 1 -
bin/pull-from-gcr.sh | 1 -
bin/push-to-gcr.sh | 1 -
build/pgo-sqlrunner/Dockerfile | 45 -------------------
cmd/pgo-scheduler/scheduler/policy.go | 6 +--
cmd/pgo-scheduler/scheduler/types.go | 4 +-
.../pgo-configs/pgo.sqlrunner-template.json | 12 +++--
.../olm/postgresoperator.csv.images.yaml | 1 -
.../apiserver/scheduleservice/scheduleimpl.go | 4 +-
internal/config/images.go | 2 -
10 files changed, 16 insertions(+), 61 deletions(-)
delete mode 100644 build/pgo-sqlrunner/Dockerfile
diff --git a/Makefile b/Makefile
index 19e150820a..4f9803c436 100644
--- a/Makefile
+++ b/Makefile
@@ -84,7 +84,6 @@ images = pgo-apiserver \
pgo-event \
pgo-rmdata \
pgo-scheduler \
- pgo-sqlrunner \
pgo-client \
pgo-deployer \
crunchy-postgres-exporter \
diff --git a/bin/pull-from-gcr.sh b/bin/pull-from-gcr.sh
index 25e4b267eb..3908630f43 100755
--- a/bin/pull-from-gcr.sh
+++ b/bin/pull-from-gcr.sh
@@ -21,7 +21,6 @@ IMAGES=(
pgo-event
pgo-backrest-repo
pgo-scheduler
- pgo-sqlrunner
postgres-operator
pgo-apiserver
pgo-rmdata
diff --git a/bin/push-to-gcr.sh b/bin/push-to-gcr.sh
index 3ef6a11199..78832c3bda 100755
--- a/bin/push-to-gcr.sh
+++ b/bin/push-to-gcr.sh
@@ -19,7 +19,6 @@ IMAGES=(
pgo-event
pgo-backrest-repo
pgo-scheduler
-pgo-sqlrunner
postgres-operator
pgo-apiserver
pgo-rmdata
diff --git a/build/pgo-sqlrunner/Dockerfile b/build/pgo-sqlrunner/Dockerfile
deleted file mode 100644
index 5b5dd2c45f..0000000000
--- a/build/pgo-sqlrunner/Dockerfile
+++ /dev/null
@@ -1,45 +0,0 @@
-ARG BASEOS
-ARG BASEVER
-ARG PREFIX
-FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER}
-
-ARG PGVERSION
-ARG BACKREST_VERSION
-ARG PACKAGER
-ARG DFSET
-
-LABEL name="pgo-sqlrunner" \
- summary="Crunchy PostgreSQL Operator - SQL Runner" \
- description="Crunchy PostgreSQL Operator - SQL Runner"
-
-ENV PGROOT="/usr/pgsql-${PGVERSION}"
-
-RUN if [ "$DFSET" = "centos" ] ; then \
- ${PACKAGER} -y install epel-release \
- && ${PACKAGER} -y install \
- --setopt=skip_missing_names_on_install=False \
- gettext \
- hostname \
- nss_wrapper \
- procps-ng \
- postgresql${PGVERSION} \
- && ${PACKAGER} -y clean all ; \
-fi
-
-RUN if [ "$DFSET" = "rhel" ] ; then \
- ${PACKAGER} -y install \
- --setopt=skip_missing_names_on_install=False \
- postgresql${PGVERSION} \
- && ${PACKAGER} -y clean all ; \
-fi
-
-RUN mkdir -p /opt/cpm/bin /opt/cpm/conf /pgconf \
- && chown -R 26:26 /opt/cpm /pgconf
-
-ADD bin/pgo-sqlrunner /opt/cpm/bin
-
-VOLUME ["/pgconf"]
-
-USER 26
-
-CMD ["/opt/cpm/bin/start.sh"]
diff --git a/cmd/pgo-scheduler/scheduler/policy.go b/cmd/pgo-scheduler/scheduler/policy.go
index bb81969951..e2be356d07 100644
--- a/cmd/pgo-scheduler/scheduler/policy.go
+++ b/cmd/pgo-scheduler/scheduler/policy.go
@@ -134,8 +134,8 @@ func (p PolicyJob) Run() {
policyJob := PolicyTemplate{
JobName: name,
ClusterName: p.cluster,
- PGOImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, p.ccpImagePrefix),
- PGOImageTag: p.ccpImageTag,
+ CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, p.ccpImagePrefix),
+ CCPImageTag: p.ccpImageTag,
PGHost: p.cluster,
PGPort: cluster.Spec.Port,
PGDatabase: p.database,
@@ -177,7 +177,7 @@ func (p PolicyJob) Run() {
}
// set the container image to an override value, if one exists
- operator.SetContainerImageOverride(config.CONTAINER_IMAGE_PGO_SQL_RUNNER,
+ operator.SetContainerImageOverride(config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_HA,
&newJob.Spec.Template.Spec.Containers[0])
_, err = clientset.BatchV1().Jobs(p.namespace).Create(ctx, newJob, metav1.CreateOptions{})
diff --git a/cmd/pgo-scheduler/scheduler/types.go b/cmd/pgo-scheduler/scheduler/types.go
index 3838e4d994..674ef86ad2 100644
--- a/cmd/pgo-scheduler/scheduler/types.go
+++ b/cmd/pgo-scheduler/scheduler/types.go
@@ -63,8 +63,8 @@ type Policy struct {
type PolicyTemplate struct {
JobName string
ClusterName string
- PGOImagePrefix string
- PGOImageTag string
+ CCPImagePrefix string
+ CCPImageTag string
PGHost string
PGPort string
PGDatabase string
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json
index 56dbf8b035..a301df048f 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json
@@ -5,7 +5,7 @@
"name": "{{.JobName}}",
"labels": {
"vendor": "crunchydata",
- "pgo-sqlrunner": "true",
+ "sqlrunner": "true",
"pg-cluster": "{{.ClusterName}}"
}
},
@@ -15,7 +15,7 @@
"name": "{{.JobName}}",
"labels": {
"vendor": "crunchydata",
- "pgo-sqlrunner": "true",
+ "sqlrunner": "true",
"pg-cluster": "{{.ClusterName}}"
}
},
@@ -24,8 +24,14 @@
"containers": [
{
"name": "sqlrunner",
- "image": "{{.PGOImagePrefix}}/pgo-sqlrunner:{{.PGOImageTag}}",
+ "image": "{{.CCPImagePrefix}}/crunchy-postgres-ha:{{.CCPImageTag}}",
+ "command": ["/opt/crunchy/bin/uid_postgres.sh"],
+ "args": ["/opt/crunchy/bin/start.sh"],
"env": [
+ {
+ "name": "MODE",
+ "value": "sqlrunner"
+ },
{
"name": "PG_HOST",
"value": "{{.PGHost}}"
diff --git a/installers/olm/postgresoperator.csv.images.yaml b/installers/olm/postgresoperator.csv.images.yaml
index 301d117a67..87e04f05b0 100644
--- a/installers/olm/postgresoperator.csv.images.yaml
+++ b/installers/olm/postgresoperator.csv.images.yaml
@@ -8,7 +8,6 @@
- { name: RELATED_IMAGE_PGO_BACKREST_REPO, value: '${PGO_IMAGE_PREFIX}/pgo-backrest-repo:${PGO_IMAGE_TAG}' }
- { name: RELATED_IMAGE_PGO_CLIENT, value: '${PGO_IMAGE_PREFIX}/pgo-client:${PGO_IMAGE_TAG}' }
- { name: RELATED_IMAGE_PGO_RMDATA, value: '${PGO_IMAGE_PREFIX}/pgo-rmdata:${PGO_IMAGE_TAG}' }
-- { name: RELATED_IMAGE_PGO_SQL_RUNNER, value: '${PGO_IMAGE_PREFIX}/pgo-sqlrunner:${PGO_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_POSTGRES_EXPORTER, value: '${PGO_IMAGE_PREFIX}/crunchy-postgres-exporter:${PGO_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_ADMIN, value: '${CCP_IMAGE_PREFIX}/crunchy-admin:${CCP_IMAGE_TAG}' }
diff --git a/internal/apiserver/scheduleservice/scheduleimpl.go b/internal/apiserver/scheduleservice/scheduleimpl.go
index 09830a03d2..7aa2a9e194 100644
--- a/internal/apiserver/scheduleservice/scheduleimpl.go
+++ b/internal/apiserver/scheduleservice/scheduleimpl.go
@@ -93,8 +93,8 @@ func (s scheduleRequest) createPolicySchedule(cluster *crv1.Pgcluster, ns string
Name: s.Request.PolicyName,
Database: s.Request.Database,
Secret: s.Request.Secret,
- ImagePrefix: util.GetValueOrDefault(cluster.Spec.PGOImagePrefix, apiserver.Pgo.Pgo.PGOImagePrefix),
- ImageTag: apiserver.Pgo.Pgo.PGOImageTag,
+ ImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, apiserver.Pgo.Cluster.CCPImagePrefix),
+ ImageTag: apiserver.Pgo.Cluster.CCPImageTag,
},
}
return schedule
diff --git a/internal/config/images.go b/internal/config/images.go
index 34845ee6a1..10a5227e13 100644
--- a/internal/config/images.go
+++ b/internal/config/images.go
@@ -21,7 +21,6 @@ const (
CONTAINER_IMAGE_PGO_BACKREST_REPO = "pgo-backrest-repo"
CONTAINER_IMAGE_PGO_CLIENT = "pgo-client"
CONTAINER_IMAGE_PGO_RMDATA = "pgo-rmdata"
- CONTAINER_IMAGE_PGO_SQL_RUNNER = "pgo-sqlrunner"
CONTAINER_IMAGE_CRUNCHY_ADMIN = "crunchy-admin"
CONTAINER_IMAGE_CRUNCHY_BACKREST_RESTORE = "crunchy-backrest-restore"
CONTAINER_IMAGE_CRUNCHY_POSTGRES_EXPORTER = "crunchy-postgres-exporter"
@@ -46,7 +45,6 @@ var RelatedImageMap = map[string]string{
"RELATED_IMAGE_PGO_BACKREST_REPO": CONTAINER_IMAGE_PGO_BACKREST_REPO,
"RELATED_IMAGE_PGO_CLIENT": CONTAINER_IMAGE_PGO_CLIENT,
"RELATED_IMAGE_PGO_RMDATA": CONTAINER_IMAGE_PGO_RMDATA,
- "RELATED_IMAGE_PGO_SQL_RUNNER": CONTAINER_IMAGE_PGO_SQL_RUNNER,
"RELATED_IMAGE_CRUNCHY_ADMIN": CONTAINER_IMAGE_CRUNCHY_ADMIN,
"RELATED_IMAGE_CRUNCHY_BACKREST_RESTORE": CONTAINER_IMAGE_CRUNCHY_BACKREST_RESTORE,
"RELATED_IMAGE_CRUNCHY_POSTGRES_EXPORTER": CONTAINER_IMAGE_CRUNCHY_POSTGRES_EXPORTER,
From 98a6d8f130f77e032a9f7fe51b77af52d7d4490a Mon Sep 17 00:00:00 2001
From: jmckulk
Date: Fri, 20 Nov 2020 14:16:32 -0500
Subject: [PATCH 024/276] Updates to use compacted crunchy-postgres image
This change updates code and templates that reference any old image that is now
part of the compacted crunchy-postgres image. Templates have been updated to
use crunchy-postgres as the image and pass in the relevant running mode env
variable. The code has been updated to use the new `/opt/crunchy` path instead
of `/opt/cpm`.
The crunchy-pgdump and crunchy-pgrestore images were compacted into the
crunchy-postgres image. This change removes references to the old images from
pull scripts and images.go. The update to images.go removes the related images
for each of the compacted images and updates the code to use the related image
for crunchy-postgres.
---
bin/pull-ccp-from-gcr.sh | 2 --
.../postgres-operator-containers-overview.md | 13 ++++++-------
.../files/pgo-configs/cluster-deployment.json | 8 ++++++--
.../pgo-operator/files/pgo-configs/pgdump-job.json | 8 +++++++-
.../files/pgo-configs/pgrestore-job.json | 8 +++++++-
installers/olm/postgresoperator.csv.images.yaml | 2 --
internal/config/images.go | 4 ----
internal/operator/cluster/pgbouncer.go | 4 ++--
internal/operator/cluster/standby.go | 2 +-
internal/operator/config/localdb.go | 6 +++---
internal/operator/pgdump/dump.go | 2 +-
internal/operator/pgdump/restore.go | 2 +-
12 files changed, 34 insertions(+), 27 deletions(-)
diff --git a/bin/pull-ccp-from-gcr.sh b/bin/pull-ccp-from-gcr.sh
index 0e6dc20aea..17ce4ae360 100755
--- a/bin/pull-ccp-from-gcr.sh
+++ b/bin/pull-ccp-from-gcr.sh
@@ -8,8 +8,6 @@ IMAGES=(
crunchy-postgres-ha
crunchy-pgbadger
crunchy-pgbouncer
- crunchy-pgdump
- crunchy-pgrestore
)
function echo_green() {
diff --git a/docs/content/architecture/postgres-operator-containers-overview.md b/docs/content/architecture/postgres-operator-containers-overview.md
index 028b3b1f74..4397c63841 100644
--- a/docs/content/architecture/postgres-operator-containers-overview.md
+++ b/docs/content/architecture/postgres-operator-containers-overview.md
@@ -9,9 +9,13 @@ weight: 600
The PostgreSQL Operator orchestrates a series of PostgreSQL and PostgreSQL related containers containers that enable rapid deployment of PostgreSQL, including administration and monitoring tools in a Kubernetes environment. The PostgreSQL Operator supports PostgreSQL 9.5+ with multiple PostgreSQL cluster deployment strategies and a variety of PostgreSQL related extensions and tools enabling enterprise grade PostgreSQL-as-a-Service. A full list of the containers supported by the PostgreSQL Operator is provided below.
-### PostgreSQL Server and Extensions
+### PostgreSQL Server, Tools, and Extensions
-* **PostgreSQL** (crunchy-postgres-ha). PostgreSQL database server. The crunchy-postgres container image is unmodified, open source PostgreSQL packaged and maintained by Crunchy Data.
+* **PostgreSQL** (crunchy-postgres-ha). PostgreSQL database server. The crunchy-postgres container image is unmodified, open source PostgreSQL packaged and maintained by Crunchy Data. The container supports PostgreSQL tools by running in different modes, more information on running modes can be found in the [Crunchy Container](https://access.crunchydata.com/documentation/crunchy-postgres-containers/latest/) documentation. The PostgreSQL operator uses the following running modes:
+
+ - **pgdump** (MODE: pgdump) running in pgdump mode, the image executes either a pg_dump or pg_dumpall database backup against another PostgreSQL database.
+ - **pgrestore** (MODE: pgrestore) running in pgrestore mode, the image provides a means of performing a restore of a dump from pg_dump or pg_dumpall via psql or pg_restore to a PostgreSQL container database.
+ - **sqlrunner** (MODE: sqlrunner) running in sqlrunner mode, the image will use `psql` to issue specified queries, defined in SQL files, to a PostgreSQL container database.
* **PostGIS** (crunchy-postgres-ha-gis). PostgreSQL database server including the PostGIS extension. The crunchy-postgres-gis container image is unmodified, open source PostgreSQL packaged and maintained by Crunchy Data. This image is identical to the crunchy-postgres image except it includes the open source geospatial extension PostGIS for PostgreSQL in addition to the language extension PL/R which allows for writing functions in the R statistical computing language.
@@ -19,11 +23,6 @@ The PostgreSQL Operator orchestrates a series of PostgreSQL and PostgreSQL relat
* **pgBackRest** (crunchy-postgres-ha). pgBackRest is a high performance backup and restore utility for PostgreSQL. The crunchy-postgres-ha container executes the pgBackRest utility, allowing FULL and DELTA restore capability.
-* **pgdump** (crunchy-pgdump). The crunchy-pgdump container executes either a pg_dump or pg_dumpall database backup against another PostgreSQL database.
-
-* **crunchy-pgrestore** (restore). The restore image provides a means of performing a restore of a dump from pg_dump or pg_dumpall via psql or pg_restore to a PostgreSQL container database.
-
-
### Administration Tools
* **pgAdmin4** (crunchy-pgadmin4). PGAdmin4 is a graphical user interface administration tool for PostgreSQL. The crunchy-pgadmin4 container executes the pgAdmin4 web application.
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
index 4a44785b27..7fd77e6449 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
@@ -39,7 +39,7 @@
"readinessProbe": {
"exec": {
"command": [
- "/opt/cpm/bin/health/pgha-readiness.sh"
+ "/opt/crunchy/bin/postgres-ha/health/pgha-readiness.sh"
]
},
"initialDelaySeconds": 15
@@ -47,7 +47,7 @@
"livenessProbe": {
"exec": {
"command": [
- "/opt/cpm/bin/health/pgha-liveness.sh"
+ "/opt/crunchy/bin/postgres-ha/health/pgha-liveness.sh"
]
},
"initialDelaySeconds": 30,
@@ -56,6 +56,10 @@
},
{{.ContainerResources }}
"env": [{
+ "name": "MODE",
+ "value": "postgres"
+ },
+ {
"name": "PGHA_PG_PORT",
"value": "{{.Port}}"
}, {
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgdump-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgdump-job.json
index 3b827ecaac..ef6e1b6d5a 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgdump-job.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgdump-job.json
@@ -33,7 +33,9 @@
"serviceAccountName": "pgo-default",
"containers": [{
"name": "pgdump",
- "image": "{{.CCPImagePrefix}}/crunchy-pgdump:{{.CCPImageTag}}",
+ "image": "{{.CCPImagePrefix}}/crunchy-postgres-ha:{{.CCPImageTag}}",
+ "command": ["/opt/crunchy/bin/uid_postgres.sh"],
+ "args": ["/opt/crunchy/bin/start.sh"],
"volumeMounts": [
{
"mountPath": "/pgdata",
@@ -42,6 +44,10 @@
}
],
"env": [
+ {
+ "name": "MODE",
+ "value": "pgdump"
+ },
{
"name": "PGDUMP_HOST",
"value": "{{.PgDumpHost}}"
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgrestore-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgrestore-job.json
index 4dae8fda14..3759905e95 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgrestore-job.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgrestore-job.json
@@ -34,7 +34,9 @@
"containers": [
{
"name": "pgrestore",
- "image": "{{.CCPImagePrefix}}/crunchy-pgrestore:{{.CCPImageTag}}",
+ "image": "{{.CCPImagePrefix}}/crunchy-postgres-ha:{{.CCPImageTag}}",
+ "command": ["/opt/crunchy/bin/uid_postgres.sh"],
+ "args": ["/opt/crunchy/bin/start.sh"],
"volumeMounts": [
{
"mountPath": "/pgdata",
@@ -43,6 +45,10 @@
}
],
"env": [
+ {
+ "name": "MODE",
+ "value": "pgrestore"
+ },
{
"name": "PGRESTORE_USER",
"valueFrom": {
diff --git a/installers/olm/postgresoperator.csv.images.yaml b/installers/olm/postgresoperator.csv.images.yaml
index 87e04f05b0..429a882893 100644
--- a/installers/olm/postgresoperator.csv.images.yaml
+++ b/installers/olm/postgresoperator.csv.images.yaml
@@ -15,7 +15,5 @@
- { name: RELATED_IMAGE_CRUNCHY_PGADMIN, value: '${CCP_IMAGE_PREFIX}/crunchy-pgadmin4:${CCP_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_PGBADGER, value: '${CCP_IMAGE_PREFIX}/crunchy-pgbadger:${CCP_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_PGBOUNCER, value: '${CCP_IMAGE_PREFIX}/crunchy-pgbouncer:${CCP_IMAGE_TAG}' }
-- { name: RELATED_IMAGE_CRUNCHY_PGDUMP, value: '${CCP_IMAGE_PREFIX}/crunchy-pgdump:${CCP_IMAGE_TAG}' }
-- { name: RELATED_IMAGE_CRUNCHY_PGRESTORE, value: '${CCP_IMAGE_PREFIX}/crunchy-pgrestore:${CCP_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_POSTGRES_HA, value: '${CCP_IMAGE_PREFIX}/crunchy-postgres-ha:${CCP_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_POSTGRES_GIS_HA, value: '${CCP_IMAGE_PREFIX}/crunchy-postgres-gis-ha:${CCP_POSTGIS_IMAGE_TAG}' }
diff --git a/internal/config/images.go b/internal/config/images.go
index 10a5227e13..71c0af7c1c 100644
--- a/internal/config/images.go
+++ b/internal/config/images.go
@@ -28,8 +28,6 @@ const (
CONTAINER_IMAGE_CRUNCHY_PGADMIN = "crunchy-pgadmin4"
CONTAINER_IMAGE_CRUNCHY_PGBADGER = "crunchy-pgbadger"
CONTAINER_IMAGE_CRUNCHY_PGBOUNCER = "crunchy-pgbouncer"
- CONTAINER_IMAGE_CRUNCHY_PGDUMP = "crunchy-pgdump"
- CONTAINER_IMAGE_CRUNCHY_PGRESTORE = "crunchy-pgrestore"
CONTAINER_IMAGE_CRUNCHY_POSTGRES_HA = "crunchy-postgres-ha"
CONTAINER_IMAGE_CRUNCHY_POSTGRES_GIS_HA = "crunchy-postgres-gis-ha"
CONTAINER_IMAGE_CRUNCHY_PROMETHEUS = "crunchy-prometheus"
@@ -51,8 +49,6 @@ var RelatedImageMap = map[string]string{
"RELATED_IMAGE_CRUNCHY_PGADMIN": CONTAINER_IMAGE_CRUNCHY_PGADMIN,
"RELATED_IMAGE_CRUNCHY_PGBADGER": CONTAINER_IMAGE_CRUNCHY_PGBADGER,
"RELATED_IMAGE_CRUNCHY_PGBOUNCER": CONTAINER_IMAGE_CRUNCHY_PGBOUNCER,
- "RELATED_IMAGE_CRUNCHY_PGDUMP": CONTAINER_IMAGE_CRUNCHY_PGDUMP,
- "RELATED_IMAGE_CRUNCHY_PGRESTORE": CONTAINER_IMAGE_CRUNCHY_PGRESTORE,
"RELATED_IMAGE_CRUNCHY_POSTGRES_HA": CONTAINER_IMAGE_CRUNCHY_POSTGRES_HA,
"RELATED_IMAGE_CRUNCHY_POSTGRES_GIS_HA": CONTAINER_IMAGE_CRUNCHY_POSTGRES_GIS_HA,
}
diff --git a/internal/operator/cluster/pgbouncer.go b/internal/operator/cluster/pgbouncer.go
index 9c70884faa..cf8dcaf7c0 100644
--- a/internal/operator/cluster/pgbouncer.go
+++ b/internal/operator/cluster/pgbouncer.go
@@ -88,10 +88,10 @@ const pgPort = "5432"
const (
// the path to the pgbouncer uninstallation script script
- pgBouncerUninstallScript = "/opt/cpm/bin/sql/pgbouncer/pgbouncer-uninstall.sql"
+ pgBouncerUninstallScript = "/opt/crunchy/bin/postgres-ha/sql/pgbouncer/pgbouncer-uninstall.sql"
// the path to the pgbouncer installation script
- pgBouncerInstallScript = "/opt/cpm/bin/sql/pgbouncer/pgbouncer-install.sql"
+ pgBouncerInstallScript = "/opt/crunchy/bin/postgres-ha/sql/pgbouncer/pgbouncer-install.sql"
)
const (
diff --git a/internal/operator/cluster/standby.go b/internal/operator/cluster/standby.go
index 1444e78a45..30bcc7edbe 100644
--- a/internal/operator/cluster/standby.go
+++ b/internal/operator/cluster/standby.go
@@ -58,7 +58,7 @@ const (
"create_replica_methods": [
"pgbackrest_standby"
],
- "restore_command": "source /opt/cpm/bin/pgbackrest/pgbackrest-set-env.sh && pgbackrest archive-get %f \"%p\""
+ "restore_command": "source /opt/crunchy/bin/postgres-ha/pgbackrest/pgbackrest-set-env.sh && pgbackrest archive-get %f \"%p\""
}`
)
diff --git a/internal/operator/config/localdb.go b/internal/operator/config/localdb.go
index 797c53544f..d7eef19bf8 100644
--- a/internal/operator/config/localdb.go
+++ b/internal/operator/config/localdb.go
@@ -39,13 +39,13 @@ var (
// readConfigCMD is the command used to read local cluster configuration in a database
// container
readConfigCMD []string = []string{"bash", "-c",
- "/opt/cpm/bin/yq r /tmp/postgres-ha-bootstrap.yaml postgresql | " +
- "/opt/cpm/bin/yq p - postgresql",
+ "/opt/crunchy/bin/yq r /tmp/postgres-ha-bootstrap.yaml postgresql | " +
+ "/opt/crunchy/bin/yq p - postgresql",
}
// applyAndReloadConfigCMD is the command for calling the script to apply and reload the local
// configuration for a database container. The required arguments are appended to this command
// when the script is called.
- applyAndReloadConfigCMD []string = []string{"/opt/cpm/bin/common/pgha-reload-local.sh"}
+ applyAndReloadConfigCMD []string = []string{"/opt/crunchy/bin/postgres-ha/common/pgha-reload-local.sh"}
// pghaLocalConfigName represents the name of the local configuration stored for each database
// server in the "-pgha-config" configMap, which is "-local-config"
diff --git a/internal/operator/pgdump/dump.go b/internal/operator/pgdump/dump.go
index 940713b3a0..060043d8c1 100644
--- a/internal/operator/pgdump/dump.go
+++ b/internal/operator/pgdump/dump.go
@@ -139,7 +139,7 @@ func Dump(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
}
// set the container image to an override value, if one exists
- operator.SetContainerImageOverride(config.CONTAINER_IMAGE_CRUNCHY_PGDUMP,
+ operator.SetContainerImageOverride(config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_HA,
&newjob.Spec.Template.Spec.Containers[0])
_, err = clientset.BatchV1().Jobs(namespace).Create(ctx, &newjob, metav1.CreateOptions{})
diff --git a/internal/operator/pgdump/restore.go b/internal/operator/pgdump/restore.go
index 57cc0f7b12..6d874f4a9e 100644
--- a/internal/operator/pgdump/restore.go
+++ b/internal/operator/pgdump/restore.go
@@ -115,7 +115,7 @@ func Restore(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
}
// set the container image to an override value, if one exists
- operator.SetContainerImageOverride(config.CONTAINER_IMAGE_CRUNCHY_PGRESTORE,
+ operator.SetContainerImageOverride(config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_HA,
&newjob.Spec.Template.Spec.Containers[0])
j, err := clientset.BatchV1().Jobs(namespace).Create(ctx, &newjob, metav1.CreateOptions{})
From 4de7d3f154ac65b9e7898b034ddae903bd027697 Mon Sep 17 00:00:00 2001
From: andrewlecuyer <43458182+andrewlecuyer@users.noreply.github.com>
Date: Wed, 2 Dec 2020 10:16:22 -0600
Subject: [PATCH 025/276] Remove disabling of autofailover from rmdata
The rmdata application no longer disables autofailover when deleting a
PostgreSQL cluster. In past versions of the PostgreSQL Operator, it
was necessary to first disable autofailover prior to cluster deletion
since the rmdata application itself would stop the PostgreSQL database
using 'pg_ctl stop'. However, since Patroni is now responsible for
cleanly shutting down the database (specifically upon receipt of a
SIGTERM signal), autofailover should no longer be disabled (if it is,
Patroni will not respond to the SIGTERM and will therefore not attempt
to cleanly shutdown the database).
This commit therefore ensures an attempt is made to cleanly shutdown
the database when deleting a PostgreSQL cluster. This, in turn, will
increase the likelihood that the cluster can later be recreated and
cleanly restarted.
Issue: [ch9856]
---
cmd/pgo-rmdata/process.go | 9 ---------
1 file changed, 9 deletions(-)
diff --git a/cmd/pgo-rmdata/process.go b/cmd/pgo-rmdata/process.go
index 85a7e4ce4b..d0d79744f6 100644
--- a/cmd/pgo-rmdata/process.go
+++ b/cmd/pgo-rmdata/process.go
@@ -49,15 +49,6 @@ func Delete(request Request) {
ctx := context.TODO()
log.Infof("rmdata.Process %v", request)
- // if, check to see if this is a full cluster removal...i.e. "IsReplica"
- // and "IsBackup" is set to false
- //
- // if this is a full cluster removal, first disable autofailover
- if !(request.IsReplica || request.IsBackup) {
- log.Debug("disabling autofailover for cluster removal")
- util.ToggleAutoFailover(request.Clientset, false, request.ClusterPGHAScope, request.Namespace)
- }
-
//the case of 'pgo scaledown'
if request.IsReplica {
log.Info("rmdata.Process scaledown replica use case")
From b1e5421768f5b4c9fe519384af2b9aefd069d6cc Mon Sep 17 00:00:00 2001
From: Joseph Mckulka <16840147+jmckulk@users.noreply.github.com>
Date: Wed, 2 Dec 2020 12:31:57 -0500
Subject: [PATCH 026/276] Remove orphaned error check
This error check uses an `err` variable that was assigned and handled
earlier in the code, so it logged the same (stale) error a second time.
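The bug class is easy to reproduce: once `err` has been handled, re-checking it later without an intervening assignment just reports the same failure again. A minimal sketch (the `fetch` helper is hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

// fetch is a hypothetical call that fails, standing in for whatever
// originally assigned err in CreateCluster.
func fetch() (string, error) {
	return "", errors.New("boom")
}

func main() {
	data, err := fetch()
	if err != nil {
		fmt.Println("handled once:", err)
		data = "default"
	}

	// ... unrelated work that never reassigns err ...

	if err != nil { // orphaned check: err is stale, logs a duplicate
		fmt.Println("logged again:", err)
	}
	fmt.Println("data:", data)
}
```

Deleting the second check, as this patch does, removes the duplicate log line without changing behavior.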
---
internal/apiserver/clusterservice/clusterimpl.go | 3 ---
1 file changed, 3 deletions(-)
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index e099d0303f..90a062e40d 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -722,9 +722,6 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
//set the metrics flag with the global setting first
userLabelsMap[config.LABEL_EXPORTER] = strconv.FormatBool(apiserver.MetricsFlag)
- if err != nil {
- log.Error(err)
- }
//if metrics is chosen on the pgo command, stick it into the user labels
if request.MetricsFlag {
From a8bd519aa3881c011fdfa965d08c6eace1f98ef1 Mon Sep 17 00:00:00 2001
From: andrewlecuyer <43458182+andrewlecuyer@users.noreply.github.com>
Date: Mon, 7 Dec 2020 09:15:35 -0600
Subject: [PATCH 027/276] Delta Restore for In-Place Cluster Restore
When performing an in-place PostgreSQL cluster restore (such as when
using the 'pgo restore' command), the PVC for the current primary
database (including the PGDATA PVC, along with any WAL and/or
tablespace PVCs) will now be preserved if found (as identified by the
'current-primary' annotation on the pgcluster custom resource). This
will cause the 'crunchy-postgres-ha' container to attempt a pgBackRest
"delta" restore when bootstrapping the restored cluster, leveraging any
existing data within the PGDATA directory to efficiently restore the
database.
Issue: [ch9878]
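The preservation rule the diff adds can be summarized as a name check: a PVC is kept when it belongs to the current primary, whether it is the PGDATA volume, the WAL volume, or a tablespace volume. A minimal sketch follows; the name patterns here are illustrative stand-ins for the operator's own `walPVCPattern` and `tablespacePVCSuffixPattern` constants, whose exact formats may differ.

```go
package main

import (
	"fmt"
	"strings"
)

// Assumed naming patterns for illustration only.
const (
	walPVCFmt           = "%s-wal"
	tablespacePVCPrefix = "%s-tablespace-"
)

// keepForDeltaRestore reports whether a PVC belongs to the current
// primary (PGDATA, WAL, or tablespace volume) and should therefore be
// preserved so pgBackRest can attempt a "delta" restore against it.
func keepForDeltaRestore(pvcName, primary string) bool {
	return pvcName == primary ||
		pvcName == fmt.Sprintf(walPVCFmt, primary) ||
		strings.HasPrefix(pvcName, fmt.Sprintf(tablespacePVCPrefix, primary))
}

func main() {
	primary := "hippo-abcd"
	for _, pvc := range []string{
		"hippo-abcd", "hippo-abcd-wal",
		"hippo-abcd-tablespace-ts1", "hippo-efgh",
	} {
		fmt.Printf("%s keep=%v\n", pvc, keepForDeltaRestore(pvc, primary))
	}
}
```

In the actual change this check is the `continue` guard in `getPGDatabasePVCNames`: matching PVCs are skipped from the deletion list rather than returned.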
---
internal/operator/backrest/restore.go | 19 ++++++++++++++++---
internal/operator/cluster/clusterlogic.go | 2 +-
2 files changed, 17 insertions(+), 4 deletions(-)
diff --git a/internal/operator/backrest/restore.go b/internal/operator/backrest/restore.go
index 1b37817137..e6ce666f22 100644
--- a/internal/operator/backrest/restore.go
+++ b/internal/operator/backrest/restore.go
@@ -116,7 +116,6 @@ func PrepareClusterForRestore(clientset kubeapi.Interface, cluster *crv1.Pgclust
patch, err := kubeapi.NewMergePatch().
Add("metadata", "annotations")(map[string]string{
config.ANNOTATION_BACKREST_RESTORE: "",
- config.ANNOTATION_CURRENT_PRIMARY: clusterName,
}).
Add("metadata", "labels")(map[string]string{
config.LABEL_DEPLOYMENT_NAME: clusterName,
@@ -200,7 +199,7 @@ func PrepareClusterForRestore(clientset kubeapi.Interface, cluster *crv1.Pgclust
// find all database PVCs for the entire PostgreSQL cluster. Includes the PVCs for all PGDATA
// volumes, as well as the PVCs for any WAL and/or tablespace volumes
- databasePVCList, err := getPGDatabasePVCNames(clientset, replicas, clusterName, namespace)
+ databasePVCList, err := getPGDatabasePVCNames(clientset, replicas, cluster)
if err != nil {
return nil, err
}
@@ -316,9 +315,12 @@ func PublishRestore(id, clusterName, username, namespace string) {
// instances comprising the cluster, in addition to any additional volumes used by those
// instances, e.g. PVCs for external WAL and/or tablespace volumes.
func getPGDatabasePVCNames(clientset kubeapi.Interface, replicas *crv1.PgreplicaList,
- clusterName, namespace string) ([]string, error) {
+ cluster *crv1.Pgcluster) ([]string, error) {
ctx := context.TODO()
+ namespace := cluster.Namespace
+ clusterName := cluster.Name
+
// create a slice with the names of all database instances in the cluster. Even though the
// original primary database (with a name matching the cluster name) might no longer exist,
// add the cluster name to this list in the event that it does, along with the names of any
@@ -338,9 +340,20 @@ func getPGDatabasePVCNames(clientset kubeapi.Interface, replicas *crv1.Pgreplica
}
var databasePVCList []string
+ primary := cluster.Annotations[config.ANNOTATION_CURRENT_PRIMARY]
+
for _, instance := range instances {
for _, clusterPVC := range clusterPVCList.Items {
+
pvcName := clusterPVC.GetName()
+
+ // Keep the current primary PVC's in order to attempt a pgBackRest delta restore.
+ // Includes the PGDATA PVC, as well as any WAL and/or tablespace PVC's if present.
+ if pvcName == primary || pvcName == fmt.Sprintf(walPVCPattern, primary) ||
+ strings.HasPrefix(pvcName, fmt.Sprintf(tablespacePVCSuffixPattern, primary)) {
+ continue
+ }
+
if pvcName == instance || pvcName == fmt.Sprintf(walPVCPattern, instance) ||
strings.HasPrefix(pvcName, fmt.Sprintf(tablespacePVCSuffixPattern, instance)) {
databasePVCList = append(databasePVCList, pvcName)
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index 7b78d66f00..f6ddf85fca 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -235,7 +235,7 @@ func getBootstrapJobFields(clientset kubeapi.Interface,
// Now override any backrest env vars for the bootstrap job
bootstrapBackrestVars, err := operator.GetPgbackrestBootstrapEnvVars(restoreClusterName,
- cluster.GetName(), restoreFromSecret)
+ cluster.GetAnnotations()[config.ANNOTATION_CURRENT_PRIMARY], restoreFromSecret)
if err != nil {
return bootstrapFields, err
}
From 102f345ff118d39d0bf0d5d723c9ed111f84be37 Mon Sep 17 00:00:00 2001
From: Joseph Mckulka <16840147+jmckulk@users.noreply.github.com>
Date: Mon, 7 Dec 2020 10:17:11 -0500
Subject: [PATCH 028/276] pgo-backrest and pgo-backrest-repo containers cleanup
The pgo-backrest and pgo-backrest-repo containers have been
moved to the Crunchy Containers project. As such, the associated
files are no longer needed in this repository.
Additionally, the references to these containers are now updated
to match the new naming convention being used, and the image
tag and prefix values are updated to reflect the new location
of the containers.
This change removes debug flags and references to the unused
sshd_port env variable. It also fixes a minor copy-paste error in
the docs.
Co-authored-by: TJ Moore
---
Makefile | 6 -
bin/pgo-backrest-repo/archive-push-s3.sh | 3 -
bin/pgo-backrest-repo/pgo-backrest-repo.sh | 81 ---------
bin/pgo-backrest/.gitignore | 1 -
bin/pgo-backrest/README.txt | 3 -
bin/pgo-backrest/pgo-backrest.sh | 23 ---
bin/pull-from-gcr.sh | 2 -
bin/push-to-gcr.sh | 2 -
bin/uid_pgbackrest.sh | 22 ---
build/pgo-backrest-repo/Dockerfile | 46 ------
build/pgo-backrest/Dockerfile | 31 ----
cmd/pgo-backrest/main.go | 154 ------------------
cmd/pgo-scheduler/scheduler/pgbackrest.go | 4 +-
.../files/pgo-configs/backrest-job.json | 5 +-
.../pgo-backrest-repo-template.json | 10 +-
.../olm/postgresoperator.csv.images.yaml | 4 +-
.../apiserver/backrestservice/backrestimpl.go | 2 +-
internal/config/annotations.go | 2 +-
internal/config/images.go | 4 +-
internal/operator/backrest/backup.go | 10 +-
internal/operator/backrest/repo.go | 8 +-
internal/operator/backrest/restore.go | 4 +-
internal/operator/backrest/stanza.go | 4 +-
23 files changed, 30 insertions(+), 401 deletions(-)
delete mode 100755 bin/pgo-backrest-repo/archive-push-s3.sh
delete mode 100755 bin/pgo-backrest-repo/pgo-backrest-repo.sh
delete mode 100644 bin/pgo-backrest/.gitignore
delete mode 100644 bin/pgo-backrest/README.txt
delete mode 100755 bin/pgo-backrest/pgo-backrest.sh
delete mode 100755 bin/uid_pgbackrest.sh
delete mode 100644 build/pgo-backrest-repo/Dockerfile
delete mode 100644 build/pgo-backrest/Dockerfile
delete mode 100644 cmd/pgo-backrest/main.go
diff --git a/Makefile b/Makefile
index 4f9803c436..1488c2dc7c 100644
--- a/Makefile
+++ b/Makefile
@@ -79,8 +79,6 @@ endif
# To build a specific image, run 'make -image' (e.g. 'make pgo-apiserver-image')
images = pgo-apiserver \
- pgo-backrest \
- pgo-backrest-repo \
pgo-event \
pgo-rmdata \
pgo-scheduler \
@@ -117,9 +115,6 @@ deployoperator:
build-pgo-apiserver:
$(GO_BUILD) -o bin/apiserver ./cmd/apiserver
-build-pgo-backrest:
- $(GO_BUILD) -o bin/pgo-backrest/pgo-backrest ./cmd/pgo-backrest
-
build-pgo-rmdata:
$(GO_BUILD) -o bin/pgo-rmdata/pgo-rmdata ./cmd/pgo-rmdata
@@ -216,7 +211,6 @@ clean: clean-deprecated
rm -f bin/apiserver
rm -f bin/postgres-operator
rm -f bin/pgo bin/pgo-mac bin/pgo.exe
- rm -f bin/pgo-backrest/pgo-backrest
rm -f bin/pgo-rmdata/pgo-rmdata
rm -f bin/pgo-scheduler/pgo-scheduler
[ -z "$$(ls hack/tools)" ] || rm hack/tools/*
diff --git a/bin/pgo-backrest-repo/archive-push-s3.sh b/bin/pgo-backrest-repo/archive-push-s3.sh
deleted file mode 100755
index 2cafa76d90..0000000000
--- a/bin/pgo-backrest-repo/archive-push-s3.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-#!/bin/bash
-
-pgbackrest "$@"
diff --git a/bin/pgo-backrest-repo/pgo-backrest-repo.sh b/bin/pgo-backrest-repo/pgo-backrest-repo.sh
deleted file mode 100755
index 25fdec5f69..0000000000
--- a/bin/pgo-backrest-repo/pgo-backrest-repo.sh
+++ /dev/null
@@ -1,81 +0,0 @@
-#!/bin/bash
-
-# Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-function trap_sigterm() {
- echo "Signal trap triggered, beginning shutdown.."
- killall sshd
-}
-
-trap 'trap_sigterm' SIGINT SIGTERM
-
-echo "Starting the pgBackRest repo"
-
-CONFIG=/sshd
-REPO=/backrestrepo
-
-if [ ! -d $PGBACKREST_REPO1_PATH ]; then
- echo "creating " $PGBACKREST_REPO1_PATH
- mkdir -p $PGBACKREST_REPO1_PATH
-fi
-
-# This is a workaround for changes introduced in pgBackRest v2.24. Specifically, a pg1-path
-# setting must now be visible when another container executes a pgBackRest command via SSH.
-# Since env vars, and therefore the PGBACKREST_DB_PATH setting, is not visible when another
-# container executes a command via SSH, this adds the pg1-path setting to the pgBackRest config
-# file instead, ensuring the setting is always available in the environment during SSH calls.
-# Additionally, since the value for pg1-path setting in the repository container is irrelevant
-# (i.e. the value specified by the container running the command via SSH is used instead), it is
-# simply set to a dummy directory within the config file.
-# If the URI style is set to 'path' instead of the default 'host' value, pgBackRest will
-# connect to S3 by prependinging bucket names to URIs instead of the default 'bucket.endpoint' style
-# Finally, if TLS verification is set to 'n', pgBackRest disables verification of the S3 server
-# certificate.
-mkdir -p /tmp/pg1path
-if ! grep -Fxq "[${PGBACKREST_STANZA}]" "/etc/pgbackrest/pgbackrest.conf" 2> /dev/null
-then
-
- printf "[%s]\npg1-path=/tmp/pg1path\n" "$PGBACKREST_STANZA" > /etc/pgbackrest/pgbackrest.conf
-
- # Additionally, if the PGBACKREST S3 variables are set, add them here
- if [[ "${PGBACKREST_REPO1_S3_KEY}" != "" ]]
- then
- printf "repo1-s3-key=%s\n" "${PGBACKREST_REPO1_S3_KEY}" >> /etc/pgbackrest/pgbackrest.conf
- fi
-
- if [[ "${PGBACKREST_REPO1_S3_KEY_SECRET}" != "" ]]
- then
- printf "repo1-s3-key-secret=%s\n" "${PGBACKREST_REPO1_S3_KEY_SECRET}" >> /etc/pgbackrest/pgbackrest.conf
- fi
-
- if [[ "${PGBACKREST_REPO1_S3_URI_STYLE}" != "" ]]
- then
- printf "repo1-s3-uri-style=%s\n" "${PGBACKREST_REPO1_S3_URI_STYLE}" >> /etc/pgbackrest/pgbackrest.conf
- fi
-
-fi
-
-mkdir -p ~/.ssh/
-cp $CONFIG/config ~/.ssh/
-#cp $CONFIG/authorized_keys ~/.ssh/
-cp $CONFIG/id_ed25519 /tmp
-chmod 400 /tmp/id_ed25519 ~/.ssh/config
-
-# start sshd which is used by pgbackrest for remote connections
-/usr/sbin/sshd -D -f $CONFIG/sshd_config &
-
-echo "The pgBackRest repo has been started"
-
-wait
diff --git a/bin/pgo-backrest/.gitignore b/bin/pgo-backrest/.gitignore
deleted file mode 100644
index 230c647366..0000000000
--- a/bin/pgo-backrest/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-pgo-backrest
diff --git a/bin/pgo-backrest/README.txt b/bin/pgo-backrest/README.txt
deleted file mode 100644
index 23f92ef4a4..0000000000
--- a/bin/pgo-backrest/README.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-pgo-backrest binary goes in this directory and gets
-copied into the pgo-backrest image, .gitignore is here
-to keep the binary from making its way into github
diff --git a/bin/pgo-backrest/pgo-backrest.sh b/bin/pgo-backrest/pgo-backrest.sh
deleted file mode 100755
index fda20af57c..0000000000
--- a/bin/pgo-backrest/pgo-backrest.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/sh
-
-# Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-/opt/cpm/bin/pgo-backrest
-
-echo $UID "is the UID in the script"
-
-chown -R $UID:$UID $PGBACKREST_DB_PATH
-
-chmod -R o+rx $PGBACKREST_DB_PATH
diff --git a/bin/pull-from-gcr.sh b/bin/pull-from-gcr.sh
index 3908630f43..0e57fc13db 100755
--- a/bin/pull-from-gcr.sh
+++ b/bin/pull-from-gcr.sh
@@ -19,12 +19,10 @@ REGISTRY='us.gcr.io/container-suite'
VERSION=$PGO_IMAGE_TAG
IMAGES=(
pgo-event
- pgo-backrest-repo
pgo-scheduler
postgres-operator
pgo-apiserver
pgo-rmdata
- pgo-backrest
pgo-client
pgo-deployer
crunchy-postgres-exporter
diff --git a/bin/push-to-gcr.sh b/bin/push-to-gcr.sh
index 78832c3bda..4bc46b933c 100755
--- a/bin/push-to-gcr.sh
+++ b/bin/push-to-gcr.sh
@@ -17,12 +17,10 @@ GCR_IMAGE_PREFIX=gcr.io/crunchy-dev-test
IMAGES=(
pgo-event
-pgo-backrest-repo
pgo-scheduler
postgres-operator
pgo-apiserver
pgo-rmdata
-pgo-backrest
pgo-client
pgo-deployer
crunchy-postgres-exporter
diff --git a/bin/uid_pgbackrest.sh b/bin/uid_pgbackrest.sh
deleted file mode 100755
index 3f9c9d1957..0000000000
--- a/bin/uid_pgbackrest.sh
+++ /dev/null
@@ -1,22 +0,0 @@
-#!/bin/bash
-
-if ! whoami &> /dev/null
-then
- if [[ -w /etc/passwd ]]
- then
- sed "/pgbackrest:x:2000:/d" /etc/passwd >> /tmp/uid.tmp
- cp /tmp/uid.tmp /etc/passwd
- rm -f /tmp/uid.tmp
- echo "${USER_NAME:-pgbackrest}:x:$(id -u):0:${USER_NAME:-pgbackrest} user:${HOME}:/bin/bash" >> /etc/passwd
- fi
-
- if [[ -w /etc/group ]]
- then
- sed "/pgbackrest:x:2000/d" /etc/group >> /tmp/gid.tmp
- cp /tmp/gid.tmp /etc/group
- rm -f /tmp/gid.tmp
- echo "nfsnobody:x:65534:" >> /etc/group
- echo "pgbackrest:x:$(id -g):pgbackrest" >> /etc/group
- fi
-fi
-exec "$@"
diff --git a/build/pgo-backrest-repo/Dockerfile b/build/pgo-backrest-repo/Dockerfile
deleted file mode 100644
index 0d0e1dd6b8..0000000000
--- a/build/pgo-backrest-repo/Dockerfile
+++ /dev/null
@@ -1,46 +0,0 @@
-ARG BASEOS
-ARG BASEVER
-ARG PREFIX
-FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER}
-
-ARG BACKREST_VERSION
-ARG PACKAGER
-ARG DFSET
-
-LABEL name="pgo-backrest-repo" \
- summary="Crunchy PostgreSQL Operator - pgBackRest Repository" \
- description="Crunchy PostgreSQL Operator - pgBackRest Repository"
-
-RUN ${PACKAGER} -y install \
- --setopt=skip_missing_names_on_install=False \
- crunchy-backrest-"${BACKREST_VERSION}" \
- hostname \
- openssh-clients \
- openssh-server \
- procps-ng \
- psmisc \
- rsync \
- && ${PACKAGER} -y clean all
-
-RUN groupadd pgbackrest -g 2000 && useradd pgbackrest -u 2000 -g 2000
-ADD bin/pgo-backrest-repo /usr/local/bin
-RUN chmod +x /usr/local/bin/pgo-backrest-repo.sh /usr/local/bin/archive-push-s3.sh \
- && mkdir -p /opt/cpm/bin /etc/pgbackrest \
- && chown -R pgbackrest:pgbackrest /opt/cpm \
- && chown -R pgbackrest /etc/pgbackrest
-
-ADD bin/uid_pgbackrest.sh /opt/cpm/bin
-
-RUN chmod g=u /etc/passwd \
- && chmod g=u /etc/group \
- && chmod -R g=u /etc/pgbackrest \
- && rm -f /run/nologin
-
-RUN mkdir /.ssh && chown pgbackrest:pgbackrest /.ssh && chmod o+rwx /.ssh
-
-USER 2000
-
-ENTRYPOINT ["/opt/cpm/bin/uid_pgbackrest.sh"]
-VOLUME ["/sshd", "/backrestrepo" ]
-
-CMD ["pgo-backrest-repo.sh"]
diff --git a/build/pgo-backrest/Dockerfile b/build/pgo-backrest/Dockerfile
deleted file mode 100644
index 25adb20ee3..0000000000
--- a/build/pgo-backrest/Dockerfile
+++ /dev/null
@@ -1,31 +0,0 @@
-ARG BASEOS
-ARG BASEVER
-ARG PREFIX
-FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER}
-
-ARG PGVERSION
-ARG BACKREST_VERSION
-ARG PACKAGER
-ARG DFSET
-
-LABEL name="pgo-backrest" \
- summary="Crunchy PostgreSQL Operator - pgBackRest" \
- description="pgBackRest image that is integrated for use with Crunchy Data's PostgreSQL Operator."
-
-RUN ${PACKAGER} -y install \
- --setopt=skip_missing_names_on_install=False \
- postgresql${PGVERSION}-server \
- crunchy-backrest-"${BACKREST_VERSION}" \
- && ${PACKAGER} -y clean all
-
-RUN mkdir -p /opt/cpm/bin /pgdata /backrestrepo && chown -R 26:26 /opt/cpm
-ADD bin/pgo-backrest/ /opt/cpm/bin
-ADD bin/uid_postgres.sh /opt/cpm/bin
-
-RUN chmod g=u /etc/passwd && \
- chmod g=u /etc/group
-
-USER 26
-ENTRYPOINT ["/opt/cpm/bin/uid_postgres.sh"]
-VOLUME ["/pgdata","/backrestrepo"]
-CMD ["/opt/cpm/bin/pgo-backrest"]
diff --git a/cmd/pgo-backrest/main.go b/cmd/pgo-backrest/main.go
deleted file mode 100644
index 3ea782ab35..0000000000
--- a/cmd/pgo-backrest/main.go
+++ /dev/null
@@ -1,154 +0,0 @@
-package main
-
-/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-import (
- "os"
- "strconv"
- "strings"
-
- "github.com/crunchydata/postgres-operator/internal/kubeapi"
- crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
- log "github.com/sirupsen/logrus"
-)
-
-const backrestCommand = "pgbackrest"
-
-const backrestBackupCommand = `backup`
-const backrestInfoCommand = `info`
-const backrestStanzaCreateCommand = `stanza-create`
-const containername = "database"
-const repoTypeFlagS3 = "--repo1-type=s3"
-const noRepoS3VerifyTLS = "--no-repo1-s3-verify-tls"
-
-func main() {
- log.Info("pgo-backrest starts")
-
- debugFlag := os.Getenv("CRUNCHY_DEBUG")
- if debugFlag == "true" {
- log.SetLevel(log.DebugLevel)
- log.Debug("debug flag set to true")
- } else {
- log.Info("debug flag set to false")
- }
-
- Namespace := os.Getenv("NAMESPACE")
- log.Debugf("setting NAMESPACE to %s", Namespace)
- if Namespace == "" {
- log.Error("NAMESPACE env var not set")
- os.Exit(2)
- }
-
- Command := os.Getenv("COMMAND")
- log.Debugf("setting COMMAND to %s", Command)
- if Command == "" {
- log.Error("COMMAND env var not set")
- os.Exit(2)
- }
-
- CommandOpts := os.Getenv("COMMAND_OPTS")
- log.Debugf("setting COMMAND_OPTS to %s", CommandOpts)
-
- PodName := os.Getenv("PODNAME")
- log.Debugf("setting PODNAME to %s", PodName)
- if PodName == "" {
- log.Error("PODNAME env var not set")
- os.Exit(2)
- }
-
- RepoType := os.Getenv("PGBACKREST_REPO_TYPE")
- log.Debugf("setting REPO_TYPE to %s", RepoType)
-
- // determine the setting of PGHA_PGBACKREST_LOCAL_S3_STORAGE
- // we will discard the error and treat the value as "false" if it is not
- // explicitly set
- LocalS3Storage, _ := strconv.ParseBool(os.Getenv("PGHA_PGBACKREST_LOCAL_S3_STORAGE"))
- log.Debugf("setting PGHA_PGBACKREST_LOCAL_S3_STORAGE to %v", LocalS3Storage)
-
- // parse the environment variable and store the appropriate boolean value
- // we will discard the error and treat the value as "false" if it is not
- // explicitly set
- S3VerifyTLS, _ := strconv.ParseBool(os.Getenv("PGHA_PGBACKREST_S3_VERIFY_TLS"))
- log.Debugf("setting PGHA_PGBACKREST_S3_VERIFY_TLS to %v", S3VerifyTLS)
-
- client, err := kubeapi.NewClient()
- if err != nil {
- panic(err)
- }
-
- bashcmd := make([]string, 1)
- bashcmd[0] = "bash"
- cmdStrs := make([]string, 0)
-
- switch Command {
- case crv1.PgtaskBackrestStanzaCreate:
- log.Info("backrest stanza-create command requested")
- cmdStrs = append(cmdStrs, backrestCommand)
- cmdStrs = append(cmdStrs, backrestStanzaCreateCommand)
- cmdStrs = append(cmdStrs, CommandOpts)
- case crv1.PgtaskBackrestInfo:
- log.Info("backrest info command requested")
- cmdStrs = append(cmdStrs, backrestCommand)
- cmdStrs = append(cmdStrs, backrestInfoCommand)
- cmdStrs = append(cmdStrs, CommandOpts)
- case crv1.PgtaskBackrestBackup:
- log.Info("backrest backup command requested")
- cmdStrs = append(cmdStrs, backrestCommand)
- cmdStrs = append(cmdStrs, backrestBackupCommand)
- cmdStrs = append(cmdStrs, CommandOpts)
- default:
- log.Error("unsupported backup command specified " + Command)
- os.Exit(2)
- }
-
- if LocalS3Storage {
- firstCmd := cmdStrs
- cmdStrs = append(cmdStrs, "&&")
- cmdStrs = append(cmdStrs, strings.Join(firstCmd, " "))
- cmdStrs = append(cmdStrs, repoTypeFlagS3)
- // pass in the flag to disable TLS verification, if set
- // otherwise, maintain default behavior and verify TLS
- if !S3VerifyTLS {
- cmdStrs = append(cmdStrs, noRepoS3VerifyTLS)
- }
- log.Info("backrest command will be executed for both local and s3 storage")
- } else if RepoType == "s3" {
- cmdStrs = append(cmdStrs, repoTypeFlagS3)
- // pass in the flag to disable TLS verification, if set
- // otherwise, maintain default behavior and verify TLS
- if !S3VerifyTLS {
- cmdStrs = append(cmdStrs, noRepoS3VerifyTLS)
- }
- log.Info("s3 flag enabled for backrest command")
- }
-
- log.Infof("command to execute is [%s]", strings.Join(cmdStrs, " "))
-
- log.Infof("command is %s ", strings.Join(cmdStrs, " "))
- reader := strings.NewReader(strings.Join(cmdStrs, " "))
- output, stderr, err := kubeapi.ExecToPodThroughAPI(client.Config, client, bashcmd, containername, PodName, Namespace, reader)
- if err != nil {
- log.Info("output=[" + output + "]")
- log.Info("stderr=[" + stderr + "]")
- log.Error(err)
- os.Exit(2)
- }
- log.Info("output=[" + output + "]")
- log.Info("stderr=[" + stderr + "]")
-
- log.Info("pgo-backrest ends")
-
-}
diff --git a/cmd/pgo-scheduler/scheduler/pgbackrest.go b/cmd/pgo-scheduler/scheduler/pgbackrest.go
index eba3048da8..715b48dd57 100644
--- a/cmd/pgo-scheduler/scheduler/pgbackrest.go
+++ b/cmd/pgo-scheduler/scheduler/pgbackrest.go
@@ -115,7 +115,7 @@ func (b BackRestBackupJob) Run() {
return
}
- selector := fmt.Sprintf("%s=%s,pgo-backrest-repo=true", config.LABEL_PG_CLUSTER, b.cluster)
+ selector := fmt.Sprintf("%s=%s,crunchy-pgbackrest-repo=true", config.LABEL_PG_CLUSTER, b.cluster)
pods, err := clientset.CoreV1().Pods(b.namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
if err != nil {
contextLogger.WithFields(log.Fields{
@@ -142,7 +142,7 @@ func (b BackRestBackupJob) Run() {
backupOptions: fmt.Sprintf("--type=%s %s", b.backupType, b.options),
stanza: b.stanza,
storageType: b.storageType,
- imagePrefix: cluster.Spec.PGOImagePrefix,
+ imagePrefix: cluster.Spec.CCPImagePrefix,
}
_, err = clientset.CrunchydataV1().Pgtasks(b.namespace).
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json
index 82b326c7cf..dddc0b14d9 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json
@@ -31,7 +31,7 @@
"serviceAccountName": "pgo-backrest",
"containers": [{
"name": "backrest",
- "image": "{{.PGOImagePrefix}}/pgo-backrest:{{.PGOImageTag}}",
+ "image": "{{.CCPImagePrefix}}/crunchy-pgbackrest:{{.CCPImageTag}}",
"volumeMounts": [
{{.PgbackrestRestoreVolumeMounts}}
],
@@ -39,6 +39,9 @@
"name": "COMMAND",
"value": "{{.Command}}"
}, {
+ "name": "MODE",
+ "value": "pgbackrest"
+ },{
"name": "COMMAND_OPTS",
"value": "{{.CommandOpts}}"
}, {
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json
index 5f9e5d5049..dba4a3d8d7 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json
@@ -48,7 +48,7 @@
"serviceAccountName": "pgo-default",
"containers": [{
"name": "database",
- "image": "{{.PGOImagePrefix}}/pgo-backrest-repo:{{.PGOImageTag}}",
+ "image": "{{.CCPImagePrefix}}/crunchy-pgbackrest-repo:{{.CCPImageTag}}",
"ports": [{
"containerPort": {{.SshdPort}},
"protocol": "TCP"
@@ -57,12 +57,12 @@
"env": [
{{.PgbackrestS3EnvVars}}
{
- "name": "PGBACKREST_STANZA",
- "value": "{{.PgbackrestStanza}}"
+ "name": "MODE",
+ "value": "pgbackrest-repo"
},
{
- "name": "SSHD_PORT",
- "value": "{{.SshdPort}}"
+ "name": "PGBACKREST_STANZA",
+ "value": "{{.PgbackrestStanza}}"
},
{
"name": "PGBACKREST_DB_PATH",
diff --git a/installers/olm/postgresoperator.csv.images.yaml b/installers/olm/postgresoperator.csv.images.yaml
index 429a882893..97aa48299c 100644
--- a/installers/olm/postgresoperator.csv.images.yaml
+++ b/installers/olm/postgresoperator.csv.images.yaml
@@ -4,8 +4,8 @@
- { name: PGO_IMAGE_PREFIX, value: '${PGO_IMAGE_PREFIX}' }
- { name: PGO_IMAGE_TAG, value: '${PGO_IMAGE_TAG}' }
-- { name: RELATED_IMAGE_PGO_BACKREST, value: '${PGO_IMAGE_PREFIX}/pgo-backrest:${PGO_IMAGE_TAG}' }
-- { name: RELATED_IMAGE_PGO_BACKREST_REPO, value: '${PGO_IMAGE_PREFIX}/pgo-backrest-repo:${PGO_IMAGE_TAG}' }
+- { name: RELATED_IMAGE_PGO_BACKREST, value: '${CCP_IMAGE_PREFIX}/crunchy-pgbackrest:${CCP_IMAGE_TAG}' }
+- { name: RELATED_IMAGE_PGO_BACKREST_REPO, value: '${CCP_IMAGE_PREFIX}/crunchy-pgbackrest-repo:${CCP_IMAGE_TAG}' }
- { name: RELATED_IMAGE_PGO_CLIENT, value: '${PGO_IMAGE_PREFIX}/pgo-client:${PGO_IMAGE_TAG}' }
- { name: RELATED_IMAGE_PGO_RMDATA, value: '${PGO_IMAGE_PREFIX}/pgo-rmdata:${PGO_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_POSTGRES_EXPORTER, value: '${PGO_IMAGE_PREFIX}/crunchy-postgres-exporter:${PGO_IMAGE_TAG}' }
diff --git a/internal/apiserver/backrestservice/backrestimpl.go b/internal/apiserver/backrestservice/backrestimpl.go
index 8ffd051857..3206d7fcee 100644
--- a/internal/apiserver/backrestservice/backrestimpl.go
+++ b/internal/apiserver/backrestservice/backrestimpl.go
@@ -219,7 +219,7 @@ func CreateBackup(request *msgs.CreateBackrestBackupRequest, ns, pgouser string)
getBackupParams(
cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER],
clusterName, taskName, crv1.PgtaskBackrestBackup, podname, "database",
- util.GetValueOrDefault(cluster.Spec.PGOImagePrefix, apiserver.Pgo.Pgo.PGOImagePrefix),
+ util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, apiserver.Pgo.Cluster.CCPImagePrefix),
request.BackupOpts, request.BackrestStorageType, operator.GetS3VerifyTLSSetting(cluster), jobName, ns, pgouser),
metav1.CreateOptions{},
)
diff --git a/internal/config/annotations.go b/internal/config/annotations.go
index 7cf97b96ee..db8482fe0a 100644
--- a/internal/config/annotations.go
+++ b/internal/config/annotations.go
@@ -48,7 +48,7 @@ const (
// ANNOTATION_S3_VERIFY_TLS is for storing the setting that determines whether or not TLS should
// be used to access a pgBackRest repository
ANNOTATION_S3_VERIFY_TLS = "s3-verify-tls"
- // ANNOTATION_S3_BUCKET is for storing the SSHD port used by the pgBackRest repository
+ // ANNOTATION_SSHD_PORT is for storing the SSHD port used by the pgBackRest repository
// service in a cluster
ANNOTATION_SSHD_PORT = "sshd-port"
// ANNOTATION_SUPPLEMENTAL_GROUPS is for storing the supplemental groups used with a cluster
diff --git a/internal/config/images.go b/internal/config/images.go
index 71c0af7c1c..2811e927fc 100644
--- a/internal/config/images.go
+++ b/internal/config/images.go
@@ -17,8 +17,8 @@ package config
// a list of container images that are available
const (
- CONTAINER_IMAGE_PGO_BACKREST = "pgo-backrest"
- CONTAINER_IMAGE_PGO_BACKREST_REPO = "pgo-backrest-repo"
+ CONTAINER_IMAGE_PGO_BACKREST = "crunchy-pgbackrest"
+ CONTAINER_IMAGE_PGO_BACKREST_REPO = "crunchy-pgbackrest-repo"
CONTAINER_IMAGE_PGO_CLIENT = "pgo-client"
CONTAINER_IMAGE_PGO_RMDATA = "pgo-rmdata"
CONTAINER_IMAGE_CRUNCHY_ADMIN = "crunchy-admin"
diff --git a/internal/operator/backrest/backup.go b/internal/operator/backrest/backup.go
index 2351fe0b2b..89f4b8a29f 100644
--- a/internal/operator/backrest/backup.go
+++ b/internal/operator/backrest/backup.go
@@ -48,8 +48,8 @@ type backrestJobTemplateFields struct {
CommandOpts string
PITRTarget string
PodName string
- PGOImagePrefix string
- PGOImageTag string
+ CCPImagePrefix string
+ CCPImageTag string
SecurityContext string
PgbackrestStanza string
PgbackrestDBPath string
@@ -80,8 +80,8 @@ func Backrest(namespace string, clientset kubernetes.Interface, task *crv1.Pgtas
Command: cmd,
CommandOpts: task.Spec.Parameters[config.LABEL_BACKREST_OPTS],
PITRTarget: "",
- PGOImagePrefix: util.GetValueOrDefault(task.Spec.Parameters[config.LABEL_IMAGE_PREFIX], operator.Pgo.Pgo.PGOImagePrefix),
- PGOImageTag: operator.Pgo.Pgo.PGOImageTag,
+ CCPImagePrefix: util.GetValueOrDefault(task.Spec.Parameters[config.LABEL_IMAGE_PREFIX], operator.Pgo.Cluster.CCPImagePrefix),
+ CCPImageTag: operator.Pgo.Cluster.CCPImageTag,
PgbackrestStanza: task.Spec.Parameters[config.LABEL_PGBACKREST_STANZA],
PgbackrestDBPath: task.Spec.Parameters[config.LABEL_PGBACKREST_DB_PATH],
PgbackrestRepoPath: task.Spec.Parameters[config.LABEL_PGBACKREST_REPO_PATH],
@@ -200,7 +200,7 @@ func CreateBackup(clientset pgo.Interface, namespace, clusterName, podName strin
spec.Parameters[config.LABEL_CONTAINER_NAME] = "database"
// pass along the appropriate image prefix for the backup task
// this will be used by the associated backrest job
- spec.Parameters[config.LABEL_IMAGE_PREFIX] = util.GetValueOrDefault(cluster.Spec.PGOImagePrefix, operator.Pgo.Pgo.PGOImagePrefix)
+ spec.Parameters[config.LABEL_IMAGE_PREFIX] = util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix)
spec.Parameters[config.LABEL_BACKREST_COMMAND] = crv1.PgtaskBackrestBackup
spec.Parameters[config.LABEL_BACKREST_OPTS] = backupOpts
spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE] = cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]
diff --git a/internal/operator/backrest/repo.go b/internal/operator/backrest/repo.go
index e53427fd1d..6266afd510 100644
--- a/internal/operator/backrest/repo.go
+++ b/internal/operator/backrest/repo.go
@@ -44,8 +44,8 @@ var s3RepoTypeRegex = regexp.MustCompile(`--repo-type=["']?s3["']?`)
type RepoDeploymentTemplateFields struct {
SecurityContext string
- PGOImagePrefix string
- PGOImageTag string
+ CCPImagePrefix string
+ CCPImageTag string
ContainerResources string
BackrestRepoClaimName string
SshdSecretsName string
@@ -229,8 +229,8 @@ func getRepoDeploymentFields(clientset kubernetes.Interface, cluster *crv1.Pgclu
namespace := cluster.GetNamespace()
repoFields := RepoDeploymentTemplateFields{
- PGOImagePrefix: util.GetValueOrDefault(cluster.Spec.PGOImagePrefix, operator.Pgo.Pgo.PGOImagePrefix),
- PGOImageTag: operator.Pgo.Pgo.PGOImageTag,
+ CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix),
+ CCPImageTag: operator.Pgo.Cluster.CCPImageTag,
ContainerResources: operator.GetResourcesJSON(cluster.Spec.BackrestResources, cluster.Spec.BackrestLimits),
BackrestRepoClaimName: fmt.Sprintf(util.BackrestRepoPVCName, cluster.Name),
SshdSecretsName: fmt.Sprintf(util.BackrestRepoSecretName, cluster.Name),
diff --git a/internal/operator/backrest/restore.go b/internal/operator/backrest/restore.go
index e6ce666f22..8107bcc09f 100644
--- a/internal/operator/backrest/restore.go
+++ b/internal/operator/backrest/restore.go
@@ -54,8 +54,8 @@ type BackrestRestoreJobTemplateFields struct {
WorkflowID string
ToClusterPVCName string
SecurityContext string
- PGOImagePrefix string
- PGOImageTag string
+ CCPImagePrefix string
+ CCPImageTag string
CommandOpts string
PITRTarget string
PgbackrestStanza string
diff --git a/internal/operator/backrest/stanza.go b/internal/operator/backrest/stanza.go
index 186996abd0..2607eb7a6e 100644
--- a/internal/operator/backrest/stanza.go
+++ b/internal/operator/backrest/stanza.go
@@ -89,10 +89,10 @@ func StanzaCreate(namespace, clusterName string, clientset kubeapi.Interface) {
spec.Parameters[config.LABEL_JOB_NAME] = jobName
spec.Parameters[config.LABEL_PG_CLUSTER] = clusterName
spec.Parameters[config.LABEL_POD_NAME] = podName
- spec.Parameters[config.LABEL_CONTAINER_NAME] = "pgo-backrest-repo"
+ spec.Parameters[config.LABEL_CONTAINER_NAME] = "crunchy-pgbackrest-repo"
// pass along the appropriate image prefix for the backup task
// this will be used by the associated backrest job
- spec.Parameters[config.LABEL_IMAGE_PREFIX] = util.GetValueOrDefault(cluster.Spec.PGOImagePrefix, operator.Pgo.Pgo.PGOImagePrefix)
+ spec.Parameters[config.LABEL_IMAGE_PREFIX] = util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix)
spec.Parameters[config.LABEL_BACKREST_COMMAND] = crv1.PgtaskBackrestStanzaCreate
// Handle stanza creation for a standby cluster, which requires some additional consideration.
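The image-prefix lines above rely on `util.GetValueOrDefault` to prefer a per-cluster override before falling back to the operator-wide configuration. A minimal sketch of that fallback pattern (the helper's exact signature in the repository may differ):

```go
package main

import "fmt"

// getValueOrDefault returns value if it is set, otherwise defaultValue. This
// sketches the fallback behavior assumed of util.GetValueOrDefault, used above
// to prefer the cluster's CCPImagePrefix over the operator-wide default.
func getValueOrDefault(value, defaultValue string) string {
	if value != "" {
		return value
	}
	return defaultValue
}

func main() {
	// a cluster-level override wins...
	fmt.Println(getValueOrDefault("registry.example.com/crunchydata", "crunchydata"))
	// ...and an empty spec falls back to the operator configuration
	fmt.Println(getValueOrDefault("", "crunchydata"))
}
```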
From 5c16b645e18e6a86792235e3e07c9719518c8556 Mon Sep 17 00:00:00 2001
From: andrewlecuyer <43458182+andrewlecuyer@users.noreply.github.com>
Date: Tue, 8 Dec 2020 15:00:14 -0600
Subject: [PATCH 029/276] No post-failover backups for standby clusters
A post-failover backup is now only triggered for non-standby clusters.
Therefore, if a failover occurs within a standby cluster, an automatic
backup will no longer be run.
Issue: [ch9912]
Issue: #2102
---
internal/controller/pod/promotionhandler.go | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/internal/controller/pod/promotionhandler.go b/internal/controller/pod/promotionhandler.go
index dcdcf48590..2dbc34ab6f 100644
--- a/internal/controller/pod/promotionhandler.go
+++ b/internal/controller/pod/promotionhandler.go
@@ -63,7 +63,8 @@ func (c *Controller) handlePostgresPodPromotion(newPod *apiv1.Pod, cluster crv1.
}
}
- if cluster.Status.State == crv1.PgclusterStateInitialized {
+ // create a post-failover backup if not a standby cluster
+ if !cluster.Spec.Standby && cluster.Status.State == crv1.PgclusterStateInitialized {
if err := cleanAndCreatePostFailoverBackup(c.Client,
cluster.Name, newPod.Namespace); err != nil {
log.Error(err)
From fae52d31d6bd7c3d24667a9c1aa2424f135b8520 Mon Sep 17 00:00:00 2001
From: Joseph Mckulka <16840147+jmckulk@users.noreply.github.com>
Date: Tue, 8 Dec 2020 17:25:16 -0500
Subject: [PATCH 030/276] Revert label change
As part of the compaction changes, some labels and label
checks were changed. This PR reverts those changes.
The `pgo-scheduler` code was updated to check for a
`crunchy-pgbackrest-repo` label instead of the `pgo-backrest-repo`
label. The deployment templates were not updated to use the
new label, so the scheduler would fail to create a backup
job when scheduling a backup.
The `pgo.sqlrunner` template was updated to use the `sqlrunner`
label instead of `pgo-sqlrunner`.
---
cmd/pgo-scheduler/scheduler/pgbackrest.go | 2 +-
.../files/pgo-configs/pgo.sqlrunner-template.json | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/cmd/pgo-scheduler/scheduler/pgbackrest.go b/cmd/pgo-scheduler/scheduler/pgbackrest.go
index 715b48dd57..710e1f12d2 100644
--- a/cmd/pgo-scheduler/scheduler/pgbackrest.go
+++ b/cmd/pgo-scheduler/scheduler/pgbackrest.go
@@ -115,7 +115,7 @@ func (b BackRestBackupJob) Run() {
return
}
- selector := fmt.Sprintf("%s=%s,crunchy-pgbackrest-repo=true", config.LABEL_PG_CLUSTER, b.cluster)
+ selector := fmt.Sprintf("%s=%s,pgo-backrest-repo=true", config.LABEL_PG_CLUSTER, b.cluster)
pods, err := clientset.CoreV1().Pods(b.namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
if err != nil {
contextLogger.WithFields(log.Fields{
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json
index a301df048f..56f55dd35e 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json
@@ -5,7 +5,7 @@
"name": "{{.JobName}}",
"labels": {
"vendor": "crunchydata",
- "sqlrunner": "true",
+ "pgo-sqlrunner": "true",
"pg-cluster": "{{.ClusterName}}"
}
},
@@ -15,7 +15,7 @@
"name": "{{.JobName}}",
"labels": {
"vendor": "crunchydata",
- "sqlrunner": "true",
+ "pgo-sqlrunner": "true",
"pg-cluster": "{{.ClusterName}}"
}
},
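The scheduler bug reverted above came down to a selector string that no longer matched the labels stamped on the repo pods. A standalone sketch of the matching that a Kubernetes `List` call performs for an equality selector (every key=value pair in the selector must be present in the pod's label map):

```go
package main

import "fmt"

// matches reports whether a pod's label map satisfies an equality-based
// selector, mirroring what the API server does for a LabelSelector string
// like "pg-cluster=hippo,pgo-backrest-repo=true".
func matches(labels, selector map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	podLabels := map[string]string{
		"pg-cluster":        "hippo",
		"pgo-backrest-repo": "true",
	}
	// the reverted selector matches the pod again...
	fmt.Println(matches(podLabels, map[string]string{"pg-cluster": "hippo", "pgo-backrest-repo": "true"}))
	// ...while the renamed label finds nothing, so no backup job is created
	fmt.Println(matches(podLabels, map[string]string{"pg-cluster": "hippo", "crunchy-pgbackrest-repo": "true"}))
}
```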
From 83aef439895a9de4aa52fb9383c4e4292aa9feb5 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 28 Nov 2020 16:14:17 -0500
Subject: [PATCH 031/276] Refactor wait for deployment function in context of a
cluster
This was originally written for the pgAdmin 4 integration, but
can serve multiple purposes for some of the advanced updating
logic.
---
internal/operator/cluster/clusterlogic.go | 25 +++++++++++++++++
internal/operator/cluster/pgadmin.go | 34 ++---------------------
2 files changed, 28 insertions(+), 31 deletions(-)
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index f6ddf85fca..f0c34c2007 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -798,3 +798,28 @@ func ScaleClusterDeployments(clientset kubernetes.Interface, cluster crv1.Pgclus
}
return
}
+
+// waitForDeploymentReady waits for a deployment to be ready, or times out
+func waitForDeploymentReady(clientset kubernetes.Interface, namespace, deploymentName string, periodSecs, timeoutSecs time.Duration) error {
+ ctx := context.TODO()
+
+ // set up the timer and timeout
+ // first, ensure that there is an available Pod
+ timeout := time.After(timeoutSecs)
+ tick := time.NewTicker(periodSecs)
+ defer tick.Stop()
+
+ for {
+ select {
+ case <-timeout:
+ return fmt.Errorf("readiness timeout reached for deployment %q", deploymentName)
+ case <-tick.C:
+ // check to see if the deployment is ready
+ if d, err := clientset.AppsV1().Deployments(namespace).Get(ctx, deploymentName, metav1.GetOptions{}); err != nil {
+ log.Warn(err)
+ } else if d.Status.Replicas == d.Status.ReadyReplicas {
+ return nil
+ }
+ }
+ }
+}
diff --git a/internal/operator/cluster/pgadmin.go b/internal/operator/cluster/pgadmin.go
index c49462e4d0..529bba6f13 100644
--- a/internal/operator/cluster/pgadmin.go
+++ b/internal/operator/cluster/pgadmin.go
@@ -20,7 +20,6 @@ import (
"context"
"encoding/base64"
"encoding/json"
- "errors"
"fmt"
weakrand "math/rand"
"os"
@@ -69,8 +68,8 @@ const pgAdminDeploymentFormat = "%s-pgadmin"
const initPassLen = 20
const (
- deployTimeout = 60
- pollInterval = 3
+ deployTimeout = 60 * time.Second
+ pollInterval = 3 * time.Second
)
// AddPgAdmin contains the various functions that are used to add a pgAdmin
@@ -159,7 +158,7 @@ func AddPgAdminFromPgTask(clientset kubeapi.Interface, restconfig *rest.Config,
}
deployName := fmt.Sprintf(pgAdminDeploymentFormat, clusterName)
- if err := waitForDeploymentReady(clientset, namespace, deployName, deployTimeout, pollInterval); err != nil {
+ if err := waitForDeploymentReady(clientset, namespace, deployName, pollInterval, deployTimeout); err != nil {
log.Error(err)
}
@@ -470,30 +469,3 @@ func publishPgAdminEvent(eventType string, task *crv1.Pgtask) {
log.Error(err.Error())
}
}
-
-// waitFotDeploymentReady waits for a deployment to be ready, or times out
-func waitForDeploymentReady(clientset kubernetes.Interface, namespace, deploymentName string, timeoutSecs, periodSecs time.Duration) error {
- ctx := context.TODO()
- timeout := time.After(timeoutSecs * time.Second)
- tick := time.NewTicker(periodSecs * time.Second)
- defer tick.Stop()
-
- // loop until the timeout is met, or that all the replicas are ready
- for {
- select {
- case <-timeout:
- return errors.New(fmt.Sprintf("Timed out waiting for deployment to become ready: [%s]", deploymentName))
- case <-tick.C:
- if deployment, err := clientset.AppsV1().Deployments(namespace).Get(ctx, deploymentName, metav1.GetOptions{}); err != nil {
- // if there is an error, log it but continue through the loop
- log.Error(err)
- } else {
- // check to see if the deployment status has succeed...if so, break out
- // of the loop
- if deployment.Status.ReadyReplicas == *deployment.Spec.Replicas {
- return nil
- }
- }
- }
- }
-}
From beb3c9dc147856a67321fa8bcbdf999958ef63c0 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 28 Nov 2020 15:27:26 -0500
Subject: [PATCH 032/276] Introduce rolling update interface for PostgreSQL
clusters
A rolling update of a PostgreSQL cluster involves applying any updates
that may require downtime to each replica within a PostgreSQL cluster,
followed by the promotion to a replica deemed suitable to be a primary,
followed by the update being applied to the former primary.
This commit introduces an interface to perform this exact behavior, by
allowing for any updates to the Deployments of PostgreSQL instances to
have any updates applied in a rolling fashion.
Issue: [ch9881]
---
.../architecture/high-availability/_index.md | 55 +++
internal/operator/cluster/rolling.go | 331 ++++++++++++++++++
internal/util/failover.go | 14 +-
3 files changed, 394 insertions(+), 6 deletions(-)
create mode 100644 internal/operator/cluster/rolling.go
diff --git a/docs/content/architecture/high-availability/_index.md b/docs/content/architecture/high-availability/_index.md
index c5f05eaf96..950b150105 100644
--- a/docs/content/architecture/high-availability/_index.md
+++ b/docs/content/architecture/high-availability/_index.md
@@ -276,3 +276,58 @@ The Node Affinity only uses the `preferred` scheduling strategy (similar to what
is described in the Pod Anti-Affinity section above), so if a Pod cannot be
scheduled to a particular Node matching the label, it will be scheduled to a
different Node.
+
+## Rolling Updates
+
+During the lifecycle of a PostgreSQL cluster, there are certain events that may
+require a planned restart, such as an update to a "restart required" PostgreSQL
+configuration setting (e.g. [`shared_buffers`](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-SHARED-BUFFERS))
+or a change to a Kubernetes Deployment template (e.g. [changing the memory request]({{< relref "tutorial/customize-cluster.md">}}#customize-cpu-memory)). Restarts can be disruptive in a high availability deployment, which is
+why many setups employ a ["rolling update" strategy](https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/)
+(aka a "rolling restart") to minimize or eliminate downtime during a planned
+restart.
+
+Because PostgreSQL is a stateful application, a simple rolling restart strategy
+will not work: PostgreSQL needs to ensure that there is a primary available that
+can accept reads and writes. This requires following a method that will minimize
+the amount of downtime when the primary is taken offline for a restart.
+
+The PostgreSQL Operator provides a mechanism for rolling updates implicitly on
+certain operations that change the Deployment templates (e.g. memory updates,
+CPU updates, adding tablespaces, modifying annotations) and explicitly through
+the [`pgo restart`]({{< relref "pgo-client/reference/pgo_restart.md">}})
+command with the `--rolling` flag. The PostgreSQL Operator uses the following
+algorithm to perform the rolling restart to minimize any potential
+interruptions:
+
+1. Each replica is updated in sequential order. This follows the following
+process:
+
+ 1. The replica is explicitly shut down to ensure any outstanding changes are
+ flushed to disk.
+
+ 2. If requested, the PostgreSQL Operator will apply any changes to the
+ Deployment.
+
+ 3. The replica is brought back online. The PostgreSQL Operator waits for the
+ replica to become available before it proceeds to the next replica.
+
+2. The above steps are repeated until all of the replicas are restarted.
+
+3. A controlled switchover is performed. The PostgreSQL Operator determines
+which replica is the best candidate to become the new primary. It then demotes
+the primary to become a replica and promotes the best candidate to become the
+new primary.
+
+4. The former primary follows a process similar to what is described in step 1.
+
+The downtime is thus constrained to the amount of time the switchover takes.
+
+A rolling update strategy will be used if any of the following changes are made
+to a PostgreSQL cluster, either through the `pgo update` command or from a
+modification to the custom resource:
+
+- Memory resource adjustments
+- CPU resource adjustments
+- Custom annotation changes
+- Tablespace additions
diff --git a/internal/operator/cluster/rolling.go b/internal/operator/cluster/rolling.go
new file mode 100644
index 0000000000..9f5351d7b0
--- /dev/null
+++ b/internal/operator/cluster/rolling.go
@@ -0,0 +1,331 @@
+package cluster
+
+/*
+ Copyright 2020 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+import (
+ "context"
+ "fmt"
+ "strings"
+ "time"
+
+ "github.com/crunchydata/postgres-operator/internal/config"
+ "github.com/crunchydata/postgres-operator/internal/kubeapi"
+ "github.com/crunchydata/postgres-operator/internal/operator"
+ "github.com/crunchydata/postgres-operator/internal/util"
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+ log "github.com/sirupsen/logrus"
+ appsv1 "k8s.io/api/apps/v1"
+ v1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/fields"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/rest"
+)
+
+type deploymentType int
+
+const (
+ deploymentTypePrimary deploymentType = iota
+ deploymentTypeReplica
+)
+
+const (
+ rollingUpdatePeriod = 4 * time.Second
+ rollingUpdateTimeout = 60 * time.Second
+)
+
+// RollingUpdate performs a type of "rolling update" on a series of Deployments
+// of a PostgreSQL cluster in an attempt to minimize downtime.
+//
+// The functions take a function that serves to update the contents of a
+// Deployment.
+//
+// The rolling update is performed as such:
+//
+// 1. Each replica is updated. A replica is shut down and the changes are
+//    applied. The Operator waits until the replica is back online (or a
+//    timeout elapses) and then moves on to the next one.
+// 2. A controlled switchover is performed. The Operator chooses the best
+// candidate replica for the switch over.
+// 3. The former primary is then shut down and updated.
+//
+// If this is not an HA cluster, then the lone Deployment is simply restarted
+//
+// Error handling in this process is deliberately lenient. If an error occurs
+// in the middle of a rolling update, most errors are just logged for later
+// troubleshooting, in order to avoid placing the cluster in an indeterminate
+// state
+func RollingUpdate(clientset kubernetes.Interface, restConfig *rest.Config, cluster *crv1.Pgcluster,
+ updateFunc func(*crv1.Pgcluster, *appsv1.Deployment) error) error {
+ log.Debugf("rolling update for cluster %q", cluster.Name)
+
+ // we need to determine which deployments are replicas and which is the
+ // primary. Note, that based on external factors, this can change during the
+ // execution of this function, so this is our best guess at the time of the
+ // rolling update being performed.
+ //
+ // Given the craziness of a distributed world, we may even unearth two
+ // primaries, or no primaries! So we will need to gracefully handle that as
+ // well
+ //
+ // We will get this through the Pod list as the role label is on the Pod
+ instances, err := generateDeploymentTypeMap(clientset, cluster)
+ // If we fail to generate the deployment type map, we just have to fail here.
+ // We can't do any updates
+ if err != nil {
+ return err
+ }
+
+ // go through all of the replicas and perform the modifications
+ for i := range instances[deploymentTypeReplica] {
+ deployment := instances[deploymentTypeReplica][i]
+
+ // Try to apply the update. If it returns an error during the process,
+ // continue on to the next replica
+ if err := applyUpdateToPostgresInstance(clientset, restConfig, cluster, deployment, updateFunc); err != nil {
+ log.Error(err)
+ continue
+ }
+
+ // Ensure that the replica comes back up and can be connected to, otherwise
+ // keep moving on. This involves waiting for the Deployment to come back
+ // up...
+ if err := waitForDeploymentReady(clientset, deployment.Namespace, deployment.Name,
+ rollingUpdatePeriod, rollingUpdateTimeout); err != nil {
+ log.Warn(err)
+ }
+
+	// ...followed by waiting for the PostgreSQL instance to come back up
+ if err := waitForPostgresInstance(clientset, restConfig, cluster, deployment,
+ rollingUpdatePeriod, rollingUpdateTimeout); err != nil {
+ log.Warn(err)
+ }
+ }
+
+ // if there is at least one replica and only one primary, perform a controlled
+ // switchover.
+ //
+ // if multiple primaries were found, we don't know how we would want to
+ // properly switch over, so we will let Patroni make the decision in this case
+ // as part of an uncontrolled failover. At this point, we should have eligible
+ // replicas that have the updated Deployment state.
+ if len(instances[deploymentTypeReplica]) > 0 && len(instances[deploymentTypePrimary]) == 1 {
+ // if the switchover fails, warn that it failed but continue on
+ if err := switchover(clientset, restConfig, cluster); err != nil {
+ log.Warnf("switchover failed: %s", err.Error())
+ }
+ }
+
+ // finally, go through the list of primaries (which should only be one...)
+ // and apply the update. At this point we do not need to wait for anything,
+ // as we should have either already promoted a new primary, or this is a
+ // single instance cluster
+ for i := range instances[deploymentTypePrimary] {
+ if err := applyUpdateToPostgresInstance(clientset, restConfig, cluster,
+ instances[deploymentTypePrimary][i], updateFunc); err != nil {
+ log.Error(err)
+ }
+ }
+
+ return nil
+}
+
+// applyUpdateToPostgresInstance performs an update on an individual PostgreSQL
+// instance. It first ensures that the update can be applied. If it can, it will
+// safely turn off the PostgreSQL instance before modifying the Deployment
+// template.
+func applyUpdateToPostgresInstance(clientset kubernetes.Interface, restConfig *rest.Config,
+ cluster *crv1.Pgcluster, deployment appsv1.Deployment,
+ updateFunc func(*crv1.Pgcluster, *appsv1.Deployment) error) error {
+ ctx := context.TODO()
+
+ // apply any updates, if they cannot be applied, then return an error here
+ if err := updateFunc(cluster, &deployment); err != nil {
+ return err
+ }
+
+ // Before applying the update, we want to explicitly stop PostgreSQL on each
+ // instance. This prevents PostgreSQL from having to boot up in crash
+ // recovery mode.
+ //
+ // If an error is returned, warn, but proceed with the function
+ if err := stopPostgreSQLInstance(clientset, restConfig, deployment); err != nil {
+ log.Warn(err)
+ }
+
+ // Perform the update.
+ _, err := clientset.AppsV1().Deployments(deployment.Namespace).
+ Update(ctx, &deployment, metav1.UpdateOptions{})
+
+ return err
+}
+
+// generateDeploymentTypeMap takes a list of Deployments and determines what
+// they represent: a primary (hopefully only one) or replicas
+func generateDeploymentTypeMap(clientset kubernetes.Interface, cluster *crv1.Pgcluster) (map[deploymentType][]appsv1.Deployment, error) {
+ ctx := context.TODO()
+
+ // get a list of all of the instance deployments for the cluster
+ deployments, err := operator.GetInstanceDeployments(clientset, cluster)
+ if err != nil {
+ return nil, err
+ }
+
+ options := metav1.ListOptions{
+ LabelSelector: fields.AndSelectors(
+ fields.OneTermEqualSelector(config.LABEL_PG_CLUSTER, cluster.Name),
+ fields.OneTermEqualSelector(config.LABEL_PG_DATABASE, config.LABEL_TRUE),
+ ).String(),
+ }
+
+ pods, err := clientset.CoreV1().Pods(cluster.Namespace).List(ctx, options)
+	// if we can't find any of the Pods, we can't make the proper determination
+ if err != nil {
+ return nil, err
+ }
+
+ // go through each Deployment and make a determination about its type. If we
+ // ultimately cannot do that, treat the deployment as a "replica"
+ instances := map[deploymentType][]appsv1.Deployment{
+ deploymentTypePrimary: {},
+ deploymentTypeReplica: {},
+ }
+
+ for i, deployment := range deployments.Items {
+ for _, pod := range pods.Items {
+ // if the Pod doesn't match, continue
+ if deployment.Name != pod.ObjectMeta.GetLabels()[config.LABEL_DEPLOYMENT_NAME] {
+ continue
+ }
+
+ // found matching Pod, determine if it's a primary or replica
+ if pod.ObjectMeta.GetLabels()[config.LABEL_PGHA_ROLE] == config.LABEL_PGHA_ROLE_PRIMARY {
+ instances[deploymentTypePrimary] = append(instances[deploymentTypePrimary], deployments.Items[i])
+ } else {
+ instances[deploymentTypeReplica] = append(instances[deploymentTypeReplica], deployments.Items[i])
+ }
+
+ // we found the (or at least a) matching Pod, so we can break the loop now
+ break
+ }
+ }
+
+ return instances, nil
+}
+
+// generatePostgresReadyCommand creates the command used to test if a PostgreSQL
+// instance is ready
+func generatePostgresReadyCommand(port string) []string {
+ return []string{"pg_isready", "-p", port}
+}
+
+// generatePostgresSwitchoverCommand creates the command that is used to issue
+// a switchover (demote a primary, promote a replica). Takes the name of the
+// cluster; Patroni will choose the best candidate to switchover to
+func generatePostgresSwitchoverCommand(clusterName string) []string {
+ return []string{"patronictl", "switchover", "--force", clusterName}
+}
+
+// switchover performs a controlled switchover within a PostgreSQL cluster, i.e.
+// demoting a primary and promoting a replica. The method works as such:
+//
+// 1. The function looks for all available replicas as well as the current
+// primary. We look up the primary for convenience to avoid various API calls
+//
+// 2. We then search over the list to find both a primary and a suitable
+// candidate for promotion. A candidate is suitable if:
+// - It is on the latest timeline
+// - It has the least amount of replication lag
+//
+// This is done to limit the risk of data loss.
+//
+// If either a primary or candidate is **not** found, we do not switch over.
+//
+// 3. If all of the above works successfully, a switchover is attempted.
+func switchover(clientset kubernetes.Interface, restConfig *rest.Config, cluster *crv1.Pgcluster) error {
+ // we want to find a Pod to execute the switchover command on, i.e. the
+ // primary
+ pod, err := util.GetPrimaryPod(clientset, cluster)
+ if err != nil {
+ return err
+ }
+
+ // good to generally log which instances are being used in the switchover
+ log.Infof("controlled switchover started for cluster %q", cluster.Name)
+
+ cmd := generatePostgresSwitchoverCommand(cluster.Name)
+ if _, stderr, err := kubeapi.ExecToPodThroughAPI(restConfig, clientset,
+ cmd, "database", pod.Name, cluster.Namespace, nil); err != nil {
+		return fmt.Errorf("%s", stderr)
+ }
+
+ log.Infof("controlled switchover completed for cluster %q", cluster.Name)
+
+ // and that's all
+ return nil
+}
+
+// waitForPostgresInstance waits until a PostgreSQL instance within a Pod is
+// to accept connections
+func waitForPostgresInstance(clientset kubernetes.Interface, restConfig *rest.Config,
+ cluster *crv1.Pgcluster, deployment appsv1.Deployment, periodSecs, timeoutSecs time.Duration) error {
+ ctx := context.TODO()
+
+ // try to find the Pod that should be exec'd into
+ options := metav1.ListOptions{
+ FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(),
+ LabelSelector: fields.AndSelectors(
+ fields.OneTermEqualSelector(config.LABEL_PG_CLUSTER, cluster.Name),
+ fields.OneTermEqualSelector(config.LABEL_PG_DATABASE, config.LABEL_TRUE),
+ fields.OneTermEqualSelector(config.LABEL_DEPLOYMENT_NAME, deployment.Name),
+ ).String(),
+ }
+ pods, err := clientset.CoreV1().Pods(deployment.Namespace).List(ctx, options)
+
+ // if the Pod selection errors, we can't really proceed
+ if err != nil {
+ return fmt.Errorf("could not find pods to check postgres instance readiness: %w", err)
+ } else if len(pods.Items) == 0 {
+ return fmt.Errorf("could not find any postgres pods")
+ }
+
+ // get the first pod...we'll just have to presume this is the active primary
+	// as we've done all we can to narrow it down at this point
+ pod := pods.Items[0]
+ cmd := generatePostgresReadyCommand(cluster.Spec.Port)
+
+ // set up the timer and timeout
+ // first, ensure that there is an available Pod
+ timeout := time.After(timeoutSecs)
+ tick := time.NewTicker(periodSecs)
+ defer tick.Stop()
+
+ for {
+ select {
+ case <-timeout:
+ return fmt.Errorf("readiness timeout reached for start up of cluster %q instance %q", cluster.Name, deployment.Name)
+ case <-tick.C:
+ // check to see if PostgreSQL is ready to accept connections
+ s, _, _ := kubeapi.ExecToPodThroughAPI(restConfig, clientset,
+ cmd, "database", pod.Name, pod.Namespace, nil)
+
+ // really we should find a way to get the exit code in the future, but
+ // in the interim...
+ if strings.Contains(s, "accepting connections") {
+ return nil
+ }
+ }
+ }
+}
diff --git a/internal/util/failover.go b/internal/util/failover.go
index c17cd556ca..a4abba76f6 100644
--- a/internal/util/failover.go
+++ b/internal/util/failover.go
@@ -40,6 +40,7 @@ type InstanceReplicationInfo struct {
Status string
Timeline int
PendingRestart bool
+ PodName string
Role string
}
@@ -80,10 +81,10 @@ const (
// instanceReplicationInfoTypePrimaryStandby is the label used by Patroni to indicate that an
// instance is indeed a primary PostgreSQL instance, specifically within a standby cluster
instanceReplicationInfoTypePrimaryStandby = "Standby Leader"
- // instanceRolePrimary indicates that an instance is a primary
- instanceRolePrimary = "primary"
- // instanceRoleReplica indicates that an instance is a replica
- instanceRoleReplica = "replica"
+ // InstanceRolePrimary indicates that an instance is a primary
+ InstanceRolePrimary = "primary"
+ // InstanceRoleReplica indicates that an instance is a replica
+ InstanceRoleReplica = "replica"
// instanceRoleUnknown indicates that an instance is of an unknown typ
instanceRoleUnknown = "unknown"
// instanceStatusUnavailable indicates an instance is unavailable
@@ -266,9 +267,9 @@ func ReplicationStatus(request ReplicationStatusRequest, includePrimary, include
// determine the role of the instnace
switch rawInstance.Type {
default:
- role = instanceRoleReplica
+ role = InstanceRoleReplica
case instanceReplicationInfoTypePrimary, instanceReplicationInfoTypePrimaryStandby:
- role = instanceRolePrimary
+ role = InstanceRolePrimary
}
// set up the instance that will be returned
@@ -280,6 +281,7 @@ func ReplicationStatus(request ReplicationStatusRequest, includePrimary, include
Name: instanceInfoMap[rawInstance.PodName].name,
Node: instanceInfoMap[rawInstance.PodName].node,
PendingRestart: rawInstance.PendingRestart == "*",
+ PodName: rawInstance.PodName,
}
// update the instance info if the instance is busted
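The `generateDeploymentTypeMap` function introduced above buckets each instance Deployment as primary or replica by matching it to a Pod (via the deployment-name label) and reading the Pod's role label. A simplified, standalone sketch of that partitioning; the structs and the `"primary"` role value stand in for the Kubernetes API types and the `config.LABEL_PGHA_ROLE_PRIMARY` constant:

```go
package main

import "fmt"

// pod is a stand-in for a corev1.Pod carrying the labels that matter here:
// which Deployment it belongs to and which role Patroni has assigned it.
type pod struct {
	deploymentName string
	role           string
}

// partitionByRole groups deployment names into "primary" and "replica"
// buckets, matching each deployment to the first pod that carries its name,
// as generateDeploymentTypeMap does with the real API objects.
func partitionByRole(deployments []string, pods []pod) map[string][]string {
	instances := map[string][]string{"primary": {}, "replica": {}}
	for _, d := range deployments {
		for _, p := range pods {
			if p.deploymentName != d {
				continue
			}
			if p.role == "primary" {
				instances["primary"] = append(instances["primary"], d)
			} else {
				// anything we cannot positively identify is treated as a replica
				instances["replica"] = append(instances["replica"], d)
			}
			break // the first matching pod decides the type
		}
	}
	return instances
}

func main() {
	got := partitionByRole(
		[]string{"hippo", "hippo-abcd", "hippo-efgh"},
		[]pod{
			{"hippo", "primary"},
			{"hippo-abcd", "replica"},
			{"hippo-efgh", "replica"},
		})
	fmt.Println(got["primary"], got["replica"])
}
```

Replicas are then updated first, a switchover is attempted only when the map contains exactly one primary and at least one replica, and the former primary is updated last.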
From d621d4ef959dd5eb8c3de7b7216879d49cf0f224 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 28 Nov 2020 18:29:38 -0500
Subject: [PATCH 033/276] Modify cluster update resources to utilize rolling
updates
As this change can cause downtime, it is prudent to limit that
downtime by applying a rolling update methodology.
---
.../pgcluster/pgclustercontroller.go | 2 +-
internal/operator/cluster/cluster.go | 86 ++++++-------------
2 files changed, 29 insertions(+), 59 deletions(-)
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index d8da6658d9..346ea8cb5d 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -236,7 +236,7 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
!reflect.DeepEqual(oldcluster.Spec.Limits, newcluster.Spec.Limits) ||
!reflect.DeepEqual(oldcluster.Spec.ExporterResources, newcluster.Spec.ExporterResources) ||
!reflect.DeepEqual(oldcluster.Spec.ExporterLimits, newcluster.Spec.ExporterLimits) {
- if err := clusteroperator.UpdateResources(c.Client, c.Client.Config, newcluster); err != nil {
+ if err := clusteroperator.RollingUpdate(c.Client, c.Client.Config, newcluster, clusteroperator.UpdateResources); err != nil {
log.Error(err)
return
}
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 651cba0aa6..0994e31c0d 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -498,68 +498,38 @@ func UpdateAnnotations(clientset kubernetes.Interface, restConfig *rest.Config,
// UpdateResources updates the PostgreSQL instance Deployments to reflect the
// update resources (i.e. CPU, memory)
-func UpdateResources(clientset kubernetes.Interface, restConfig *rest.Config, cluster *crv1.Pgcluster) error {
- ctx := context.TODO()
-
- // get a list of all of the instance deployments for the cluster
- deployments, err := operator.GetInstanceDeployments(clientset, cluster)
-
- if err != nil {
- return err
- }
-
+func UpdateResources(cluster *crv1.Pgcluster, deployment *apps_v1.Deployment) error {
// iterate through each PostgreSQL instance deployment and update the
// resource values for the database or exporter containers
- //
- // NOTE: a future version (near future) will first try to detect the primary
- // so that all the replicas are updated first, and then the primary gets the
- // update
- for _, deployment := range deployments.Items {
- // now, iterate through each container within that deployment
- for index, container := range deployment.Spec.Template.Spec.Containers {
- // first check for the database container
- if container.Name == "database" {
- // first, initialize the requests/limits resource to empty Resource Lists
- deployment.Spec.Template.Spec.Containers[index].Resources.Requests = v1.ResourceList{}
- deployment.Spec.Template.Spec.Containers[index].Resources.Limits = v1.ResourceList{}
-
- // now, simply deep copy the values from the CRD
- if cluster.Spec.Resources != nil {
- deployment.Spec.Template.Spec.Containers[index].Resources.Requests = cluster.Spec.Resources.DeepCopy()
- }
-
- if cluster.Spec.Limits != nil {
- deployment.Spec.Template.Spec.Containers[index].Resources.Limits = cluster.Spec.Limits.DeepCopy()
- }
- // next, check for the exporter container
- } else if container.Name == "exporter" {
- // first, initialize the requests/limits resource to empty Resource Lists
- deployment.Spec.Template.Spec.Containers[index].Resources.Requests = v1.ResourceList{}
- deployment.Spec.Template.Spec.Containers[index].Resources.Limits = v1.ResourceList{}
-
- // now, simply deep copy the values from the CRD
- if cluster.Spec.ExporterResources != nil {
- deployment.Spec.Template.Spec.Containers[index].Resources.Requests = cluster.Spec.ExporterResources.DeepCopy()
- }
-
- if cluster.Spec.ExporterLimits != nil {
- deployment.Spec.Template.Spec.Containers[index].Resources.Limits = cluster.Spec.ExporterLimits.DeepCopy()
- }
+ for index, container := range deployment.Spec.Template.Spec.Containers {
+ // first check for the database container
+ if container.Name == "database" {
+ // first, initialize the requests/limits resource to empty Resource Lists
+ deployment.Spec.Template.Spec.Containers[index].Resources.Requests = v1.ResourceList{}
+ deployment.Spec.Template.Spec.Containers[index].Resources.Limits = v1.ResourceList{}
+
+ // now, simply deep copy the values from the CRD
+ if cluster.Spec.Resources != nil {
+ deployment.Spec.Template.Spec.Containers[index].Resources.Requests = cluster.Spec.Resources.DeepCopy()
+ }
+ if cluster.Spec.Limits != nil {
+ deployment.Spec.Template.Spec.Containers[index].Resources.Limits = cluster.Spec.Limits.DeepCopy()
+ }
+ // next, check for the exporter container
+ } else if container.Name == "exporter" {
+ // first, initialize the requests/limits resource to empty Resource Lists
+ deployment.Spec.Template.Spec.Containers[index].Resources.Requests = v1.ResourceList{}
+ deployment.Spec.Template.Spec.Containers[index].Resources.Limits = v1.ResourceList{}
+
+ // now, simply deep copy the values from the CRD
+ if cluster.Spec.ExporterResources != nil {
+ deployment.Spec.Template.Spec.Containers[index].Resources.Requests = cluster.Spec.ExporterResources.DeepCopy()
+ }
+
+ if cluster.Spec.ExporterLimits != nil {
+ deployment.Spec.Template.Spec.Containers[index].Resources.Limits = cluster.Spec.ExporterLimits.DeepCopy()
}
- }
- // Before applying the update, we want to explicitly stop PostgreSQL on each
- // instance. This prevents PostgreSQL from having to boot up in crash
- // recovery mode.
- //
- // If an error is returned, we only issue a warning
- if err := stopPostgreSQLInstance(clientset, restConfig, deployment); err != nil {
- log.Warn(err)
- }
- // update the deployment with the new values
- if _, err := clientset.AppsV1().Deployments(deployment.Namespace).
- Update(ctx, &deployment, metav1.UpdateOptions{}); err != nil {
- return err
}
}
From 1a84813496a740684b821d28402fce4b96686e25 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 29 Nov 2020 14:51:22 -0500
Subject: [PATCH 034/276] Modify cluster update annotations to use rolling
updates
Modifying the annotations on the template portion of a Deployment
Spec causes each Pod under the management of that Deployment to
be restarted. For a managed database server, this can be less
than ideal.
As such, it is prudent to employ a rolling update strategy for
annotation updates on database instances to minimize downtime on
the primary instance.
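The rolling update strategy this commit introduces can be sketched as follows. This is a minimal, self-contained illustration: the `deployment` type, `rollingUpdate` function, and names are hypothetical stand-ins for the Operator's actual `appsv1.Deployment` handling, not its real API.

```go
package main

import "fmt"

// deployment is a stand-in for an appsv1.Deployment in this sketch.
type deployment struct {
	Name    string
	Primary bool
}

// rollingUpdate applies updateFunc to each replica first and to the
// primary last, so the primary keeps serving while replicas restart.
func rollingUpdate(deployments []deployment, updateFunc func(*deployment) error) error {
	var primary *deployment
	for i := range deployments {
		if deployments[i].Primary {
			primary = &deployments[i]
			continue
		}
		if err := updateFunc(&deployments[i]); err != nil {
			return err
		}
	}
	// a real implementation performs a controlled switchover before
	// updating the (now former) primary instance
	if primary != nil {
		return updateFunc(primary)
	}
	return nil
}

func main() {
	var order []string
	ds := []deployment{
		{Name: "hippo", Primary: true},
		{Name: "hippo-repl-1"},
		{Name: "hippo-repl-2"},
	}
	_ = rollingUpdate(ds, func(d *deployment) error {
		order = append(order, d.Name)
		return nil
	})
	fmt.Println(order) // [hippo-repl-1 hippo-repl-2 hippo]
}
```

Passing the update as a function value is what lets later commits in this series reuse the same rolling machinery for annotations, resources, and tablespaces.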
---
.../pgcluster/pgclustercontroller.go | 12 ++---
internal/operator/cluster/cluster.go | 46 ++++++-------------
2 files changed, 20 insertions(+), 38 deletions(-)
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index 346ea8cb5d..9966ea6a66 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -371,12 +371,6 @@ func updateAnnotations(c *Controller, oldCluster *crv1.Pgcluster, newCluster *cr
// so if there are changes, we can apply them to the various deployments,
// but only do so if we have to
- if len(annotationsPostgres) != 0 {
- if err := clusteroperator.UpdateAnnotations(c.Client, c.Client.Config, newCluster, annotationsPostgres); err != nil {
- return err
- }
- }
-
if len(annotationsBackrest) != 0 {
if err := backrestoperator.UpdateAnnotations(c.Client, newCluster, annotationsBackrest); err != nil {
return err
@@ -389,6 +383,12 @@ func updateAnnotations(c *Controller, oldCluster *crv1.Pgcluster, newCluster *cr
}
}
+ if len(annotationsPostgres) != 0 {
+ if err := clusteroperator.RollingUpdate(c.Client, c.Client.Config, newCluster, clusteroperator.UpdateAnnotations); err != nil {
+ return err
+ }
+ }
+
return nil
}
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 0994e31c0d..95c2e7dca7 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -456,44 +456,26 @@ func ScaleDownBase(clientset kubeapi.Interface, replica *crv1.Pgreplica, namespa
// UpdateAnnotations updates the annotations in the "template" portion of a
// PostgreSQL deployment
-func UpdateAnnotations(clientset kubernetes.Interface, restConfig *rest.Config,
- cluster *crv1.Pgcluster, annotations map[string]string) error {
- ctx := context.TODO()
- var updateError error
+func UpdateAnnotations(cluster *crv1.Pgcluster, deployment *apps_v1.Deployment) error {
+ log.Debugf("update annotations on [%s]", deployment.Name)
+ annotations := map[string]string{}
- // first, get a list of all of the instance deployments for the cluster
- deployments, err := operator.GetInstanceDeployments(clientset, cluster)
+ // store the global annotations first
+ for k, v := range cluster.Spec.Annotations.Global {
+ annotations[k] = v
+ }
- if err != nil {
- return err
+ // then store the postgres specific annotations
+ for k, v := range cluster.Spec.Annotations.Postgres {
+ annotations[k] = v
}
- // now update each deployment with the new annotations
- for _, deployment := range deployments.Items {
- log.Debugf("update annotations on [%s]", deployment.Name)
- log.Debugf("new annotations: %v", annotations)
+ log.Debugf("new annotations: %v", annotations)
- deployment.Spec.Template.ObjectMeta.SetAnnotations(annotations)
+ // set the annotations on the deployment object
+ deployment.Spec.Template.ObjectMeta.SetAnnotations(annotations)
- // Before applying the update, we want to explicitly stop PostgreSQL on each
- // instance. This prevents PostgreSQL from having to boot up in crash
- // recovery mode.
- //
- // If an error is returned, we only issue a warning
- if err := stopPostgreSQLInstance(clientset, restConfig, deployment); err != nil {
- log.Warn(err)
- }
-
- // finally, update the Deployment. If something errors, we'll log that there
- // was an error, but continue with processing the other deployments
- if _, err := clientset.AppsV1().Deployments(deployment.Namespace).
- Update(ctx, &deployment, metav1.UpdateOptions{}); err != nil {
- log.Error(err)
- updateError = err
- }
- }
-
- return updateError
+ return nil
}
// UpdateResources updates the PostgreSQL instance Deployments to reflect the
From b5a84cfa16c6eaa42484757f1b0f01043c951f68 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Fri, 4 Dec 2020 15:54:50 -0500
Subject: [PATCH 035/276] Modify adding tablespaces to use rolling updates
The `pgo update cluster --tablespace` functionality now leverages
the rolling update algorithm to minimize the appearance of
downtime to any connected clients.
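Because the rolling update may visit the same Deployment more than once, the tablespace logic in this commit is written to be idempotent: a volume already attached to the Deployment is skipped. A minimal sketch of that check, with a plain `[]string` standing in for the Deployment's `[]v1.Volume` list:

```go
package main

import "fmt"

// addVolumeIfMissing sketches the idempotency check performed before
// attaching a tablespace volume: names already present on the
// Deployment are skipped, so the update is safe to apply repeatedly.
func addVolumeIfMissing(volumes []string, name string) []string {
	for _, v := range volumes {
		if v == name {
			return volumes // already attached; nothing to do
		}
	}
	return append(volumes, name)
}

func main() {
	vols := []string{"tablespace-ts1"}
	vols = addVolumeIfMissing(vols, "tablespace-ts1") // no-op
	vols = addVolumeIfMissing(vols, "tablespace-ts2") // appended
	fmt.Println(vols) // [tablespace-ts1 tablespace-ts2]
}
```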
---
.../pgcluster/pgclustercontroller.go | 47 ++++--
internal/operator/cluster/cluster.go | 157 +++++++-----------
2 files changed, 90 insertions(+), 114 deletions(-)
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index 9966ea6a66..d363445106 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -25,8 +25,10 @@ import (
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/kubeapi"
+ "github.com/crunchydata/postgres-operator/internal/operator"
backrestoperator "github.com/crunchydata/postgres-operator/internal/operator/backrest"
clusteroperator "github.com/crunchydata/postgres-operator/internal/operator/cluster"
+ "github.com/crunchydata/postgres-operator/internal/operator/pvc"
"github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
informers "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/crunchydata.com/v1"
@@ -420,31 +422,46 @@ func updatePgBouncer(c *Controller, oldCluster *crv1.Pgcluster, newCluster *crv1
// updateTablespaces updates the PostgreSQL instance Deployments to reflect the
// new PostgreSQL tablespaces that should be added
func updateTablespaces(c *Controller, oldCluster *crv1.Pgcluster, newCluster *crv1.Pgcluster) error {
- // to help the Operator function do less work, we will get a list of new
- // tablespaces. Though these are already present in the CRD, this will isolate
- // exactly which PVCs need to be created
- //
- // To do this, iterate through the the tablespace mount map that is present in
- // the new cluster.
- newTablespaces := map[string]crv1.PgStorageSpec{}
+ // first, get a list of all of the instance deployments for the cluster
+ deployments, err := operator.GetInstanceDeployments(c.Client, newCluster)
+ if err != nil {
+ return err
+ }
+
+ // iterate through the tablespace mount map in the new cluster and create
+ // any new PVCs
for tablespaceName, storageSpec := range newCluster.Spec.TablespaceMounts {
// if the tablespace does not exist in the old version of the cluster,
// then add it in!
- if _, ok := oldCluster.Spec.TablespaceMounts[tablespaceName]; !ok {
- log.Debugf("new tablespace found: [%s]", tablespaceName)
+ if _, ok := oldCluster.Spec.TablespaceMounts[tablespaceName]; ok {
+ continue
+ }
+
+ log.Debugf("new tablespace found: [%s]", tablespaceName)
+
+ // This is a new tablespace, great. Create the new PVCs.
+ // The PVCs are created for each **instance** in the cluster, as every
+ // instance needs to have a distinct PVC for each tablespace
+ // get the name of the tablespace PVC for that instance.
+ for _, deployment := range deployments.Items {
+ tablespacePVCName := operator.GetTablespacePVCName(deployment.Name, tablespaceName)
- newTablespaces[tablespaceName] = storageSpec
+ log.Debugf("creating tablespace PVC [%s] for [%s]", tablespacePVCName, deployment.Name)
+
+ // Now create it! If it errors, we just need to return, which
+ // potentially leaves things in an inconsistent state, but at this point
+ // only PVC objects have been created
+ if _, err := pvc.CreateIfNotExists(c.Client, storageSpec, tablespacePVCName,
+ newCluster.Name, newCluster.Namespace); err != nil {
+ return err
+ }
}
}
// alright, update the tablespace entries for this cluster!
// if it returns an error, pass the error back up to the caller
- if err := clusteroperator.UpdateTablespaces(c.Client, c.Client.Config, newCluster, newTablespaces); err != nil {
- return err
- }
-
- return nil
+ return clusteroperator.RollingUpdate(c.Client, c.Client.Config, newCluster, clusteroperator.UpdateTablespaces)
}
// WorkerCount returns the worker count for the controller
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 95c2e7dca7..76eb3834c1 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -520,118 +520,77 @@ func UpdateResources(cluster *crv1.Pgcluster, deployment *apps_v1.Deployment) er
// UpdateTablespaces updates the PostgreSQL instance Deployments to update
// what tablespaces are mounted.
-// Though any new tablespaces are present in the CRD, to attempt to do less work
-// this function takes a map of the new tablespaces that are being added, so we
-// only have to check and create the PVCs that are being mounted at this time
-//
-// To do this, iterate through the tablespace mount map that is present in the
-// new cluster.
-func UpdateTablespaces(clientset kubernetes.Interface, restConfig *rest.Config,
- cluster *crv1.Pgcluster, newTablespaces map[string]crv1.PgStorageSpec) error {
- ctx := context.TODO()
-
- // first, get a list of all of the instance deployments for the cluster
- deployments, err := operator.GetInstanceDeployments(clientset, cluster)
-
- if err != nil {
- return err
- }
-
- tablespaceVolumes := make([]map[string]operator.StorageResult, len(deployments.Items))
-
- // now we can start creating the new tablespaces! First, create the new
- // PVCs. The PVCs are created for each **instance** in the cluster, as every
- // instance needs to have a distinct PVC for each tablespace
- for i, deployment := range deployments.Items {
- tablespaceVolumes[i] = make(map[string]operator.StorageResult)
-
- for tablespaceName, storageSpec := range newTablespaces {
- // get the name of the tablespace PVC for that instance
- tablespacePVCName := operator.GetTablespacePVCName(deployment.Name, tablespaceName)
-
- log.Debugf("creating tablespace PVC [%s] for [%s]", tablespacePVCName, deployment.Name)
-
- // and now create it! If it errors, we just need to return, which
- // potentially leaves things in an inconsistent state, but at this point
- // only PVC objects have been created
- tablespaceVolumes[i][tablespaceName], err = pvc.CreateIfNotExists(clientset,
- storageSpec, tablespacePVCName, cluster.Name, cluster.Namespace)
- if err != nil {
- return err
+func UpdateTablespaces(cluster *crv1.Pgcluster, deployment *apps_v1.Deployment) error {
+ // update the volume portion of the Deployment spec to reflect all of the
+ // available tablespaces
+ for tablespaceName, storageSpec := range cluster.Spec.TablespaceMounts {
+ // go through the volume list and see if there is already a volume for this
+ // if there is, skip
+ found := false
+ volumeName := operator.GetTablespaceVolumeName(tablespaceName)
+
+ for _, volume := range deployment.Spec.Template.Spec.Volumes {
+ if volume.Name == volumeName {
+ found = true
+ break
}
}
- }
- // now the fun step: update each deployment with the new volumes
- for i, deployment := range deployments.Items {
- log.Debugf("attach tablespace volumes to [%s]", deployment.Name)
-
- // iterate through each table space and prepare the Volume and
- // VolumeMount clause for each instance
- for tablespaceName := range newTablespaces {
- // this is the volume to be added for the tablespace
- volume := v1.Volume{
- Name: operator.GetTablespaceVolumeName(tablespaceName),
- VolumeSource: tablespaceVolumes[i][tablespaceName].VolumeSource(),
- }
-
- // add the volume to the list of volumes
- deployment.Spec.Template.Spec.Volumes = append(deployment.Spec.Template.Spec.Volumes, volume)
-
- // now add the volume mount point to that of the database container
- volumeMount := v1.VolumeMount{
- MountPath: fmt.Sprintf("%s%s", config.VOLUME_TABLESPACE_PATH_PREFIX, tablespaceName),
- Name: operator.GetTablespaceVolumeName(tablespaceName),
- }
-
- // we can do this as we always know that the "database" container is the
- // first container in the list
- deployment.Spec.Template.Spec.Containers[0].VolumeMounts = append(
- deployment.Spec.Template.Spec.Containers[0].VolumeMounts, volumeMount)
+ if found {
+ continue
+ }
- // add any supplemental groups specified in storage configuration.
- // SecurityContext is always initialized because we use fsGroup.
- deployment.Spec.Template.Spec.SecurityContext.SupplementalGroups = append(
- deployment.Spec.Template.Spec.SecurityContext.SupplementalGroups,
- tablespaceVolumes[i][tablespaceName].SupplementalGroups...)
+ // create the volume definition for the tablespace
+ storageResult := operator.StorageResult{
+ PersistentVolumeClaimName: operator.GetTablespacePVCName(deployment.Name, tablespaceName),
+ SupplementalGroups: storageSpec.GetSupplementalGroups(),
}
- // find the "PGHA_TABLESPACES" value and update it with the new tablespace
- // name list
- ok := false
- for i, envVar := range deployment.Spec.Template.Spec.Containers[0].Env {
- // yup, it's an old fashioned linear time lookup
- if envVar.Name == "PGHA_TABLESPACES" {
- deployment.Spec.Template.Spec.Containers[0].Env[i].Value = operator.GetTablespaceNames(
- cluster.Spec.TablespaceMounts)
- ok = true
- }
+ volume := v1.Volume{
+ Name: volumeName,
+ VolumeSource: storageResult.VolumeSource(),
}
- // if its not found, we need to add it to the env
- if !ok {
- envVar := v1.EnvVar{
- Name: "PGHA_TABLESPACES",
- Value: operator.GetTablespaceNames(cluster.Spec.TablespaceMounts),
- }
- deployment.Spec.Template.Spec.Containers[0].Env = append(deployment.Spec.Template.Spec.Containers[0].Env, envVar)
+ // add the volume to the list of volumes
+ deployment.Spec.Template.Spec.Volumes = append(deployment.Spec.Template.Spec.Volumes, volume)
+
+ // now add the volume mount point to that of the database container
+ volumeMount := v1.VolumeMount{
+ MountPath: fmt.Sprintf("%s%s", config.VOLUME_TABLESPACE_PATH_PREFIX, tablespaceName),
+ Name: volumeName,
}
- // Before applying the update, we want to explicitly stop PostgreSQL on each
- // instance. This prevents PostgreSQL from having to boot up in crash
- // recovery mode.
- //
- // If an error is returned, we only issue a warning
- if err := stopPostgreSQLInstance(clientset, restConfig, deployment); err != nil {
- log.Warn(err)
+ // we can do this as we always know that the "database" container is the
+ // first container in the list
+ deployment.Spec.Template.Spec.Containers[0].VolumeMounts = append(
+ deployment.Spec.Template.Spec.Containers[0].VolumeMounts, volumeMount)
+
+ // add any supplemental groups specified in storage configuration.
+ // SecurityContext is always initialized because we use fsGroup.
+ deployment.Spec.Template.Spec.SecurityContext.SupplementalGroups = append(
+ deployment.Spec.Template.Spec.SecurityContext.SupplementalGroups,
+ storageResult.SupplementalGroups...)
+ }
+
+ // find the "PGHA_TABLESPACES" value and update it with the new tablespace
+ // name list
+ ok := false
+ for i, envVar := range deployment.Spec.Template.Spec.Containers[0].Env {
+ // yup, it's an old fashioned linear time lookup
+ if envVar.Name == "PGHA_TABLESPACES" {
+ deployment.Spec.Template.Spec.Containers[0].Env[i].Value = operator.GetTablespaceNames(
+ cluster.Spec.TablespaceMounts)
+ ok = true
}
+ }
- // finally, update the Deployment. Potential to put things into an
- // inconsistent state if any of these updates fail
- if _, err := clientset.AppsV1().Deployments(deployment.Namespace).
- Update(ctx, &deployment, metav1.UpdateOptions{}); err != nil {
- return err
+ // if its not found, we need to add it to the env
+ if !ok {
+ envVar := v1.EnvVar{
+ Name: "PGHA_TABLESPACES",
+ Value: operator.GetTablespaceNames(cluster.Spec.TablespaceMounts),
}
+ deployment.Spec.Template.Spec.Containers[0].Env = append(deployment.Spec.Template.Spec.Containers[0].Env, envVar)
}
return nil
From d9959a5b968887c16abf89d7707a7922d82f14ab Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 5 Dec 2020 16:07:26 -0500
Subject: [PATCH 036/276] Add `--rolling` flag to `pgo restart`
The `--rolling` flag allows one to specify that a restart of a
PostgreSQL cluster occur in a rolling fashion, i.e. all the
replicas are restarted, then a switchover occurs, then the newly
demoted primary is restarted.
This subsequently creates a task custom resource to perform the
rolling update, as said updates can take some time to process.
Issue: [ch9881]
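The client-side guard added in `restart.go` can be distilled into a small validation helper. This is a hedged sketch: `validateRestart` is a hypothetical function name introduced for illustration, not part of the pgo client, though the rule it enforces (`--rolling` is incompatible with `--target`) matches the diff below.

```go
package main

import (
	"errors"
	"fmt"
)

// validateRestart rejects a rolling restart combined with explicit
// targets, mirroring the check performed before sending the request.
func validateRestart(rollingUpdate bool, targets []string) error {
	if rollingUpdate && len(targets) > 0 {
		return errors.New("cannot use --rolling with other flags")
	}
	return nil
}

func main() {
	fmt.Println(validateRestart(true, nil))
	fmt.Println(validateRestart(true, []string{"mycluster-abcd"}))
}
```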
---
cmd/pgo/cmd/restart.go | 12 +++
.../pgo-client/reference/pgo_restart.md | 8 +-
.../apiserver/restartservice/restartimpl.go | 42 +++++++++
internal/config/labels.go | 1 +
.../controller/pgtask/pgtaskcontroller.go | 16 ++++
pkg/apis/crunchydata.com/v1/task.go | 85 +++++++++++--------
pkg/apiservermsgs/restartmsgs.go | 1 +
7 files changed, 128 insertions(+), 37 deletions(-)
diff --git a/cmd/pgo/cmd/restart.go b/cmd/pgo/cmd/restart.go
index f784d82004..02d80a1229 100644
--- a/cmd/pgo/cmd/restart.go
+++ b/cmd/pgo/cmd/restart.go
@@ -28,6 +28,8 @@ import (
"github.com/spf13/cobra"
)
+var RollingUpdate bool
+
var restartCmd = &cobra.Command{
Use: "restart",
Short: "Restarts the PostgrSQL database within a PostgreSQL cluster",
@@ -36,6 +38,9 @@ var restartCmd = &cobra.Command{
For example, to restart the primary and all replicas:
pgo restart mycluster
+ To restart the primary and all replicas using a rolling update strategy:
+ pgo restart mycluster --rolling
+
Or target a specific instance within the cluster:
pgo restart mycluster --target=mycluster-abcd
@@ -78,6 +83,7 @@ func init() {
restartCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.")
restartCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", `The output format. Supported types are: "json"`)
restartCmd.Flags().BoolVarP(&Query, "query", "", false, "Prints the list of instances that can be restarted.")
+ restartCmd.Flags().BoolVar(&RollingUpdate, "rolling", false, "Performs a rolling restart. Cannot be used with other flags.")
restartCmd.Flags().StringArrayVarP(&Targets, "target", "", []string{}, "The instance that will be restarted.")
}
@@ -91,6 +97,12 @@ func restart(clusterName, namespace string) {
request.ClusterName = clusterName
request.Targets = Targets
request.ClientVersion = msgs.PGO_VERSION
+ request.RollingUpdate = RollingUpdate
+
+ if request.RollingUpdate && len(request.Targets) > 0 {
+ fmt.Println("Error: cannot use --rolling with other flags")
+ os.Exit(1)
+ }
response, err := api.Restart(httpclient, &SessionCredentials, request)
if err != nil {
diff --git a/docs/content/pgo-client/reference/pgo_restart.md b/docs/content/pgo-client/reference/pgo_restart.md
index dc0517f1db..2a56f8ed12 100644
--- a/docs/content/pgo-client/reference/pgo_restart.md
+++ b/docs/content/pgo-client/reference/pgo_restart.md
@@ -12,6 +12,9 @@ Restarts one or more PostgreSQL databases within a PostgreSQL cluster.
For example, to restart the primary and all replicas:
pgo restart mycluster
+ To restart the primary and all replicas using a rolling update strategy:
+ pgo restart mycluster --rolling
+
Or target a specific instance within the cluster:
pgo restart mycluster --target=mycluster-abcd
@@ -29,13 +32,14 @@ pgo restart [flags]
--no-prompt No command line confirmation.
-o, --output string The output format. Supported types are: "json"
--query Prints the list of instances that can be restarted.
+ --rolling Performs a rolling restart. Cannot be used with other flags.
--target stringArray The instance that will be restarted.
```
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -49,4 +53,4 @@ pgo restart [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 5-Dec-2020
diff --git a/internal/apiserver/restartservice/restartimpl.go b/internal/apiserver/restartservice/restartimpl.go
index 5d1545d8e4..dc1b4f95ce 100644
--- a/internal/apiserver/restartservice/restartimpl.go
+++ b/internal/apiserver/restartservice/restartimpl.go
@@ -23,8 +23,10 @@ import (
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/patroni"
"github.com/crunchydata/postgres-operator/internal/util"
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
+ kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -63,6 +65,46 @@ func Restart(request *msgs.RestartRequest, pgouser string) msgs.RestartResponse
return resp
}
+ // if a rolling update is requested, this takes a detour to create a pgtask
+ // to accomplish this
+ if request.RollingUpdate {
+ // since a rolling update takes time, this needs to be performed as a
+ // separate task
+ // Create a pgtask
+ task := &crv1.Pgtask{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: fmt.Sprintf("%s-%s", cluster.Name, config.LABEL_RESTART),
+ Namespace: cluster.Namespace,
+ Labels: map[string]string{
+ config.LABEL_PG_CLUSTER: cluster.Name,
+ config.LABEL_PGOUSER: pgouser,
+ },
+ },
+ Spec: crv1.PgtaskSpec{
+ TaskType: crv1.PgtaskRollingUpdate,
+ Parameters: map[string]string{
+ config.LABEL_PG_CLUSTER: cluster.Name,
+ },
+ },
+ }
+
+ // remove any previous rolling restart, then add a new one
+ if err := apiserver.Clientset.CrunchydataV1().Pgtasks(task.Namespace).Delete(ctx, task.Name,
+ metav1.DeleteOptions{}); err != nil && !kerrors.IsNotFound(err) {
+ resp.Status.Code = msgs.Error
+ resp.Status.Msg = err.Error()
+ return resp
+ }
+
+ if _, err := apiserver.Clientset.CrunchydataV1().Pgtasks(cluster.Namespace).Create(ctx, task,
+ metav1.CreateOptions{}); err != nil {
+ resp.Status.Code = msgs.Error
+ resp.Status.Msg = err.Error()
+ }
+
+ return resp
+ }
+
var restartResults []patroni.RestartResult
// restart either the whole cluster, or just any targets specified
patroniClient := patroni.NewPatroniClient(apiserver.RESTConfig, apiserver.Clientset,
diff --git a/internal/config/labels.go b/internal/config/labels.go
index 6a24b72494..eb5522b092 100644
--- a/internal/config/labels.go
+++ b/internal/config/labels.go
@@ -27,6 +27,7 @@ const LABEL_PGTASK = "pg-task"
const LABEL_AUTOFAIL = "autofail"
const LABEL_FAILOVER = "failover"
+const LABEL_RESTART = "restart"
const LABEL_TARGET = "target"
const LABEL_RMDATA = "pgrmdata"
diff --git a/internal/controller/pgtask/pgtaskcontroller.go b/internal/controller/pgtask/pgtaskcontroller.go
index 3d3706d5fa..0a60af9eb6 100644
--- a/internal/controller/pgtask/pgtaskcontroller.go
+++ b/internal/controller/pgtask/pgtaskcontroller.go
@@ -29,7 +29,9 @@ import (
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned"
informers "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/crunchydata.com/v1"
+
log "github.com/sirupsen/logrus"
+ appsv1 "k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/cache"
@@ -128,6 +130,20 @@ func (c *Controller) processNextItem() bool {
} else {
log.Debugf("skipping duplicate onAdd failover task %s/%s", keyNamespace, keyResourceName)
}
+ case crv1.PgtaskRollingUpdate:
+ log.Debug("rolling update task added")
+ // first, attempt to get the pgcluster object
+ clusterName := tmpTask.Spec.Parameters[config.LABEL_PG_CLUSTER]
+
+ if cluster, err := c.Client.CrunchydataV1().Pgclusters(tmpTask.Namespace).
+ Get(ctx, clusterName, metav1.GetOptions{}); err == nil {
+ if err := clusteroperator.RollingUpdate(c.Client, c.Client.Config, cluster,
+ func(*crv1.Pgcluster, *appsv1.Deployment) error { return nil }); err != nil {
+ log.Errorf("rolling update failed: %q", err.Error())
+ }
+ } else {
+ log.Debugf("rolling update failed: could not find cluster %q", clusterName)
+ }
case crv1.PgtaskDeleteData:
log.Debug("delete data task added")
diff --git a/pkg/apis/crunchydata.com/v1/task.go b/pkg/apis/crunchydata.com/v1/task.go
index 1475b61ad7..7b79896e67 100644
--- a/pkg/apis/crunchydata.com/v1/task.go
+++ b/pkg/apis/crunchydata.com/v1/task.go
@@ -22,41 +22,56 @@ import (
// PgtaskResourcePlural ...
const PgtaskResourcePlural = "pgtasks"
-const PgtaskDeleteBackups = "delete-backups"
-const PgtaskDeleteData = "delete-data"
-const PgtaskFailover = "failover"
-const PgtaskAutoFailover = "autofailover"
-const PgtaskAddPolicies = "addpolicies"
-
-const PgtaskUpgrade = "clusterupgrade"
-const PgtaskUpgradeCreated = "cluster upgrade - task created"
-const PgtaskUpgradeInProgress = "cluster upgrade - in progress"
-
-const PgtaskPgAdminAdd = "add-pgadmin"
-const PgtaskPgAdminDelete = "delete-pgadmin"
-
-const PgtaskWorkflow = "workflow"
-const PgtaskWorkflowCreateClusterType = "createcluster"
-const PgtaskWorkflowBackrestRestoreType = "pgbackrestrestore"
-const PgtaskWorkflowBackupType = "backupworkflow"
-const PgtaskWorkflowSubmittedStatus = "task submitted"
-const PgtaskWorkflowCompletedStatus = "task completed"
-const PgtaskWorkflowID = "workflowid"
-
-const PgtaskWorkflowBackrestRestorePVCCreatedStatus = "restored PVC created"
-const PgtaskWorkflowBackrestRestorePrimaryCreatedStatus = "restored Primary created"
-const PgtaskWorkflowBackrestRestoreJobCreatedStatus = "restore job created"
-
-const PgtaskBackrest = "backrest"
-const PgtaskBackrestBackup = "backup"
-const PgtaskBackrestInfo = "info"
-const PgtaskBackrestRestore = "restore"
-const PgtaskBackrestStanzaCreate = "stanza-create"
-
-const PgtaskpgDump = "pgdump"
-const PgtaskpgDumpBackup = "pgdumpbackup"
-const PgtaskpgDumpInfo = "pgdumpinfo"
-const PgtaskpgRestore = "pgrestore"
+const (
+ PgtaskDeleteBackups = "delete-backups"
+ PgtaskDeleteData = "delete-data"
+ PgtaskFailover = "failover"
+ PgtaskAutoFailover = "autofailover"
+ PgtaskAddPolicies = "addpolicies"
+ PgtaskRollingUpdate = "rolling update"
+)
+
+const (
+ PgtaskUpgrade = "clusterupgrade"
+ PgtaskUpgradeCreated = "cluster upgrade - task created"
+ PgtaskUpgradeInProgress = "cluster upgrade - in progress"
+)
+
+const (
+ PgtaskPgAdminAdd = "add-pgadmin"
+ PgtaskPgAdminDelete = "delete-pgadmin"
+)
+
+const (
+ PgtaskWorkflow = "workflow"
+ PgtaskWorkflowCreateClusterType = "createcluster"
+ PgtaskWorkflowBackrestRestoreType = "pgbackrestrestore"
+ PgtaskWorkflowBackupType = "backupworkflow"
+ PgtaskWorkflowSubmittedStatus = "task submitted"
+ PgtaskWorkflowCompletedStatus = "task completed"
+ PgtaskWorkflowID = "workflowid"
+)
+
+const (
+ PgtaskWorkflowBackrestRestorePVCCreatedStatus = "restored PVC created"
+ PgtaskWorkflowBackrestRestorePrimaryCreatedStatus = "restored Primary created"
+ PgtaskWorkflowBackrestRestoreJobCreatedStatus = "restore job created"
+)
+
+const (
+ PgtaskBackrest = "backrest"
+ PgtaskBackrestBackup = "backup"
+ PgtaskBackrestInfo = "info"
+ PgtaskBackrestRestore = "restore"
+ PgtaskBackrestStanzaCreate = "stanza-create"
+)
+
+const (
+ PgtaskpgDump = "pgdump"
+ PgtaskpgDumpBackup = "pgdumpbackup"
+ PgtaskpgDumpInfo = "pgdumpinfo"
+ PgtaskpgRestore = "pgrestore"
+)
// this is ported over from legacy backup code
const PgBackupJobSubmitted = "Backup Job Submitted"
diff --git a/pkg/apiservermsgs/restartmsgs.go b/pkg/apiservermsgs/restartmsgs.go
index c0b32d3d00..a36307739c 100644
--- a/pkg/apiservermsgs/restartmsgs.go
+++ b/pkg/apiservermsgs/restartmsgs.go
@@ -47,6 +47,7 @@ type InstanceDetail struct {
type RestartRequest struct {
Namespace string
ClusterName string
+ RollingUpdate bool
Targets []string
ClientVersion string
}
From 1aefe228cd45b0af3f5f51e644e77b898c6b742c Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 8 Dec 2020 22:19:40 -0500
Subject: [PATCH 037/276] Do not consider bootstrap Pod in `pgo df`
The bootstrap Pod, a remnant of a cluster restore, gets caught
up in the `pgo df` search, but unfortunately it is not a valid
Pod for this purpose. This excludes the bootstrap Pod from being considered.
Issue: [ch2029]
Issue: #2029
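The one-line fix relies on Kubernetes label selector syntax: a bare `!key` term matches objects that do not carry that label at all. A minimal sketch of building such a selector, using illustrative label key strings rather than the Operator's exact constants:

```go
package main

import "fmt"

// clusterPodSelector builds a label selector that matches Pods of a
// cluster while excluding any Pod carrying the bootstrap label; the
// "!key" term means "this label key must not be present".
func clusterPodSelector(clusterName string) string {
	const (
		labelPGCluster     = "pg-cluster"     // illustrative key
		labelPGHABootstrap = "pgha-bootstrap" // illustrative key
	)
	return fmt.Sprintf("%s=%s,!%s", labelPGCluster, clusterName, labelPGHABootstrap)
}

func main() {
	// the resulting string is passed as metav1.ListOptions.LabelSelector
	fmt.Println(clusterPodSelector("hippo")) // pg-cluster=hippo,!pgha-bootstrap
}
```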
---
internal/apiserver/dfservice/dfimpl.go | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/internal/apiserver/dfservice/dfimpl.go b/internal/apiserver/dfservice/dfimpl.go
index 6c11bd9d36..0b2ac196af 100644
--- a/internal/apiserver/dfservice/dfimpl.go
+++ b/internal/apiserver/dfservice/dfimpl.go
@@ -147,7 +147,8 @@ func getClusterDf(cluster *crv1.Pgcluster, clusterResultsChannel chan msgs.DfDet
ctx := context.TODO()
log.Debugf("pod df: %s", cluster.Spec.Name)
- selector := fmt.Sprintf("%s=%s", config.LABEL_PG_CLUSTER, cluster.Spec.Name)
+ selector := fmt.Sprintf("%s=%s,!%s",
+ config.LABEL_PG_CLUSTER, cluster.Spec.Name, config.LABEL_PGHA_BOOTSTRAP)
pods, err := apiserver.Clientset.CoreV1().Pods(cluster.Spec.Namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
From 23c935d7a7f96d57c0157624a0bc04396185e3cb Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 8 Dec 2020 22:41:06 -0500
Subject: [PATCH 038/276] Delete bootstrap Job once it successfully completes
When the bootstrap Job completes successfully after a restore,
it contains information that ends up being consumed by other
parts of the Operator system, such as Patroni. As the logs
from the Job do not provide much, if any, helpful information
after a restore succeeds, it's best to have the Operator
eliminate the job.
As such, this changes the behavior so that the bootstrap Job
is removed.
Because leaving the Job in place has led to some buggy behavior,
this is being considered a bug fix; regular operational work
would dictate that the Job be removed anyway.
Issue: [ch9919]
---
internal/controller/job/bootstraphandler.go | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/internal/controller/job/bootstraphandler.go b/internal/controller/job/bootstraphandler.go
index 4c4f383d25..580da57041 100644
--- a/internal/controller/job/bootstraphandler.go
+++ b/internal/controller/job/bootstraphandler.go
@@ -82,7 +82,7 @@ func (c *Controller) handleBootstrapUpdate(job *apiv1.Job) error {
// If the job was successful we updated the state of the pgcluster to a "bootstrapped" status.
// This will then trigger full initialization of the cluster. We also cleanup any resources
- // from the bootstrap job.
+ // from the bootstrap job and delete the job itself
if cluster.Status.State == crv1.PgclusterStateBootstrapping {
if err := c.cleanupBootstrapResources(job, cluster, restore); err != nil {
@@ -103,6 +103,11 @@ func (c *Controller) handleBootstrapUpdate(job *apiv1.Job) error {
log.Error(err)
return err
}
+
+ // as it is no longer needed, delete the job
+ deletePropagation := metav1.DeletePropagationBackground
+ return c.Client.BatchV1().Jobs(namespace).Delete(ctx, job.Name,
+ metav1.DeleteOptions{PropagationPolicy: &deletePropagation})
}
if restore {
From 57c9815ab3ad14575725aa15daab286407371dc7 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 10 Dec 2020 12:38:36 -0500
Subject: [PATCH 039/276] Only consider running Pods for `pgo test`
By adding this limitation, Pods such as evicted Pods are not
considered as part of the `pgo test` output. Including them could
present some odd scenarios, such as the apparent presence of two
primaries.
Issue: [ch9931]
Issue: #2095
---
.../apiserver/clusterservice/clusterimpl.go | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 90a062e40d..5c51d98514 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -38,6 +38,7 @@ import (
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/util/validation"
"k8s.io/client-go/kubernetes"
)
@@ -2065,10 +2066,16 @@ func GetPrimaryAndReplicaPods(cluster *crv1.Pgcluster, ns string) ([]msgs.ShowCl
ctx := context.TODO()
output := make([]msgs.ShowClusterPod, 0)
+ // find all of the Pods that represent Postgres primary and replicas.
+ // only consider running Pods
selector := config.LABEL_SERVICE_NAME + "=" + cluster.Spec.Name + "," + config.LABEL_DEPLOYMENT_NAME
- log.Debugf("selector for GetPrimaryAndReplicaPods is %s", selector)
- pods, err := apiserver.Clientset.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
+ options := metav1.ListOptions{
+ FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(),
+ LabelSelector: selector,
+ }
+
+ pods, err := apiserver.Clientset.CoreV1().Pods(ns).List(ctx, options)
if err != nil {
return output, err
}
@@ -2088,9 +2095,12 @@ func GetPrimaryAndReplicaPods(cluster *crv1.Pgcluster, ns string) ([]msgs.ShowCl
}
selector = config.LABEL_SERVICE_NAME + "=" + cluster.Spec.Name + "-replica" + "," + config.LABEL_DEPLOYMENT_NAME
- log.Debugf("selector for GetPrimaryAndReplicaPods is %s", selector)
+ options = metav1.ListOptions{
+ FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(),
+ LabelSelector: selector,
+ }
- pods, err = apiserver.Clientset.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
+ pods, err = apiserver.Clientset.CoreV1().Pods(ns).List(ctx, options)
if err != nil {
return output, err
}
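The field selector in the hunk above, `fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String()`, serializes to the plain string `status.phase=Running`, which the API server applies when listing Pods. A stdlib-only sketch of the resulting filtering semantics (the `pod` type is a minimal stand-in for `corev1.Pod`, not the real type):

```go
package main

import "fmt"

// pod is a minimal stand-in for corev1.Pod, carrying only what the filter needs.
type pod struct {
	Name  string
	Phase string // corresponds to status.phase
}

// onlyRunning mirrors what the "status.phase=Running" field selector asks the
// API server to do: drop any Pod not in the Running phase. Evicted Pods report
// phase "Failed", so they are excluded.
func onlyRunning(pods []pod) []pod {
	out := []pod{}
	for _, p := range pods {
		if p.Phase == "Running" {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	pods := []pod{
		{Name: "hippo-primary", Phase: "Running"},
		{Name: "hippo-evicted", Phase: "Failed"},
	}
	running := onlyRunning(pods)
	fmt.Println(len(running), running[0].Name) // 1 hippo-primary
}
```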
From 1b5fe49ce9f2993c30985acc75aed34bab25b391 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 10 Dec 2020 17:06:21 -0500
Subject: [PATCH 040/276] Use poll utility provided by Kubernetes for waiting
This moves several home-grown polling loops to a similar utility
that is maintained upstream (`k8s.io/apimachinery/pkg/util/wait`).
This provides more consistency across the codebase and can serve
future implementations.
---
internal/controller/pod/promotionhandler.go | 56 ++++++++++-----------
internal/operator/backrest/backup.go | 48 +++++++++---------
internal/operator/cluster/clusterlogic.go | 46 +++++++----------
internal/operator/cluster/rolling.go | 38 +++++++-------
internal/operator/cluster/upgrade.go | 33 +++++-------
5 files changed, 100 insertions(+), 121 deletions(-)
diff --git a/internal/controller/pod/promotionhandler.go b/internal/controller/pod/promotionhandler.go
index 2dbc34ab6f..123f422d46 100644
--- a/internal/controller/pod/promotionhandler.go
+++ b/internal/controller/pod/promotionhandler.go
@@ -31,6 +31,7 @@ import (
log "github.com/sirupsen/logrus"
apiv1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
@@ -132,39 +133,36 @@ func waitForStandbyPromotion(restConfig *rest.Config, clientset kubernetes.Inter
// wait for the server to accept writes to ensure standby has truly been disabled before
// proceeding
- duration := time.After(isStandbyDisabledTimeout)
- tick := time.NewTicker(isStandbyDisabledTick)
- defer tick.Stop()
- for {
- select {
- case <-duration:
- return fmt.Errorf("timed out waiting for cluster %s to accept writes after disabling "+
- "standby mode", cluster.Name)
- case <-tick.C:
+ if err := wait.Poll(isStandbyDisabledTick, isStandbyDisabledTimeout, func() (bool, error) {
+ if !recoveryDisabled {
+ cmd := isInRecoveryCMD
+ cmd = append(cmd, cluster.Spec.Port)
+
+ isInRecoveryStr, _, _ := kubeapi.ExecToPodThroughAPI(restConfig, clientset,
+ cmd, "database", newPod.Name, newPod.Namespace, nil)
+
+ recoveryDisabled = strings.Contains(isInRecoveryStr, "f")
+
if !recoveryDisabled {
- cmd := isInRecoveryCMD
- cmd = append(cmd, cluster.Spec.Port)
-
- isInRecoveryStr, _, _ := kubeapi.ExecToPodThroughAPI(restConfig, clientset,
- cmd, newPod.Spec.Containers[0].Name, newPod.Name,
- newPod.Namespace, nil)
- if strings.Contains(isInRecoveryStr, "f") {
- recoveryDisabled = true
- }
- }
- if recoveryDisabled {
- primaryJSONStr, _, _ := kubeapi.ExecToPodThroughAPI(restConfig, clientset,
- leaderStatusCMD, newPod.Spec.Containers[0].Name, newPod.Name,
- newPod.Namespace, nil)
- var primaryJSON map[string]interface{}
- json.Unmarshal([]byte(primaryJSONStr), &primaryJSON)
- if primaryJSON["state"] == "running" && (primaryJSON["pending_restart"] == nil ||
- !primaryJSON["pending_restart"].(bool)) {
- return nil
- }
+ return false, nil
}
}
+
+ primaryJSONStr, _, _ := kubeapi.ExecToPodThroughAPI(restConfig, clientset,
+ leaderStatusCMD, newPod.Spec.Containers[0].Name, newPod.Name,
+ newPod.Namespace, nil)
+
+ primaryJSON := map[string]interface{}{}
+ _ = json.Unmarshal([]byte(primaryJSONStr), &primaryJSON)
+
+ return (primaryJSON["state"] == "running" && (primaryJSON["pending_restart"] == nil ||
+ !primaryJSON["pending_restart"].(bool))), nil
+ }); err != nil {
+ return fmt.Errorf("timed out waiting for cluster %s to accept writes after disabling "+
+ "standby mode", cluster.Name)
}
+
+ return nil
}
// cleanAndCreatePostFailoverBackup cleans up any existing backup resources and then creates
diff --git a/internal/operator/backrest/backup.go b/internal/operator/backrest/backup.go
index 89f4b8a29f..8d3cdeba4a 100644
--- a/internal/operator/backrest/backup.go
+++ b/internal/operator/backrest/backup.go
@@ -19,6 +19,7 @@ import (
"bytes"
"context"
"encoding/json"
+ "errors"
"fmt"
"os"
"regexp"
@@ -37,6 +38,7 @@ import (
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
+ "k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
)
@@ -61,14 +63,16 @@ type backrestJobTemplateFields struct {
PgbackrestRestoreVolumeMounts string
}
-var backrestPgHostRegex = regexp.MustCompile("--db-host|--pg1-host")
-var backrestPgPathRegex = regexp.MustCompile("--db-path|--pg1-path")
+var (
+ backrestPgHostRegex = regexp.MustCompile("--db-host|--pg1-host")
+ backrestPgPathRegex = regexp.MustCompile("--db-path|--pg1-path")
+)
// Backrest ...
func Backrest(namespace string, clientset kubernetes.Interface, task *crv1.Pgtask) {
ctx := context.TODO()
- //create the Job to run the backrest command
+ // create the Job to run the backrest command
cmd := task.Spec.Parameters[config.LABEL_BACKREST_COMMAND]
@@ -129,7 +133,7 @@ func Backrest(namespace string, clientset kubernetes.Interface, task *crv1.Pgtas
}
clientset.BatchV1().Jobs(namespace).Create(ctx, &newjob, metav1.CreateOptions{})
- //publish backrest backup event
+ // publish backrest backup event
if cmd == "backup" {
topics := make([]string, 1)
topics[0] = events.EventTopicBackup
@@ -151,7 +155,6 @@ func Backrest(namespace string, clientset kubernetes.Interface, task *crv1.Pgtas
log.Error(err.Error())
}
}
-
}
// CreateInitialBackup creates a Pgtask in order to initiate the initial pgBackRest backup for a cluster
@@ -244,7 +247,7 @@ func CleanBackupResources(clientset kubeapi.Interface, namespace, clusterName st
return err
}
- //remove previous backup job
+ // remove previous backup job
selector := config.LABEL_BACKREST_COMMAND + "=" + crv1.PgtaskBackrestBackup + "," +
config.LABEL_PG_CLUSTER + "=" + clusterName + "," + config.LABEL_BACKREST + "=true"
deletePropagation := metav1.DeletePropagationForeground
@@ -257,27 +260,26 @@ func CleanBackupResources(clientset kubeapi.Interface, namespace, clusterName st
log.Error(err)
}
- timeout := time.After(30 * time.Second)
- tick := time.NewTicker(1 * time.Second)
- defer tick.Stop()
- for {
- select {
- case <-timeout:
+ if err := wait.Poll(1*time.Second, 30*time.Second, func() (bool, error) {
+ jobList, err := clientset.
+ BatchV1().Jobs(namespace).
+ List(ctx, metav1.ListOptions{LabelSelector: selector})
+ if err != nil {
+ log.Error(err)
+ return false, err
+ }
+
+ return len(jobList.Items) == 0, nil
+ }); err != nil {
+ if errors.Is(err, wait.ErrWaitTimeout) {
return fmt.Errorf("Timed out waiting for deletion of pgBackRest backup job for "+
"cluster %s", clusterName)
- case <-tick.C:
- jobList, err := clientset.
- BatchV1().Jobs(namespace).
- List(ctx, metav1.ListOptions{LabelSelector: selector})
- if err != nil {
- log.Error(err)
- return err
- }
- if len(jobList.Items) == 0 {
- return nil
- }
}
+
+ return err
}
+
+ return nil
}
// getCommandOptsFromPod adds command line options from the primary pod to a backrest job.
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index f0c34c2007..7acf0fe41c 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -43,6 +43,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/types"
+ "k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
)
@@ -259,7 +260,6 @@ func getBootstrapJobFields(clientset kubeapi.Interface,
func getClusterDeploymentFields(clientset kubernetes.Interface,
cl *crv1.Pgcluster, dataVolume, walVolume operator.StorageResult,
tablespaceVolumes map[string]operator.StorageResult) operator.DeploymentTemplateFields {
-
namespace := cl.GetNamespace()
log.Infof("creating Pgcluster %s in namespace %s", cl.Name, namespace)
@@ -293,7 +293,7 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
supplementalGroups = append(supplementalGroups, v.SupplementalGroups...)
}
- //create the primary deployment
+ // create the primary deployment
deploymentFields := operator.DeploymentTemplateFields{
Name: cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
IsInit: true,
@@ -343,12 +343,11 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
// DeleteCluster ...
func DeleteCluster(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace string) error {
-
var err error
log.Info("deleting Pgcluster object" + " in namespace " + namespace)
log.Info("deleting with Name=" + cl.Spec.Name + " in namespace " + namespace)
- //create rmdata job
+ // create rmdata job
isReplica := false
isBackup := false
removeData := true
@@ -362,7 +361,6 @@ func DeleteCluster(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace
}
return err
-
}
// scaleReplicaCreateMissingService creates a service for cluster replicas if
@@ -412,7 +410,7 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
var replicaDoc bytes.Buffer
serviceName := replica.Spec.ClusterName + "-replica"
- //replicaFlag := true
+ // replicaFlag := true
// replicaLabels := operator.GetPrimaryLabels(serviceName, replica.Spec.ClusterName, replicaFlag, cluster.Spec.UserLabels)
cluster.Spec.UserLabels[config.LABEL_REPLICA_NAME] = replica.Spec.Name
@@ -424,13 +422,13 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
archiveMode = "on"
}
if cluster.Labels[config.LABEL_BACKREST] == "true" {
- //backrest requires archive mode be set to on
+ // backrest requires archive mode be set to on
archiveMode = "on"
}
image := cluster.Spec.CCPImage
- //check for --ccp-image-tag at the command line
+ // check for --ccp-image-tag at the command line
imageTag := cluster.Spec.CCPImageTag
if replica.Spec.UserLabels[config.LABEL_CCP_IMAGE_TAG_KEY] != "" {
imageTag = replica.Spec.UserLabels[config.LABEL_CCP_IMAGE_TAG_KEY]
@@ -448,7 +446,7 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
supplementalGroups = append(supplementalGroups, v.SupplementalGroups...)
}
- //create the replica deployment
+ // create the replica deployment
replicaDeploymentFields := operator.DeploymentTemplateFields{
Name: replica.Spec.Name,
ClusterName: replica.Spec.ClusterName,
@@ -550,7 +548,6 @@ func DeleteReplica(clientset kubernetes.Interface, cl *crv1.Pgreplica, namespace
})
return err
-
}
func publishScaleError(namespace string, username string, cluster *crv1.Pgcluster) {
@@ -625,7 +622,6 @@ func ShutdownCluster(clientset kubeapi.Interface, cluster crv1.Pgcluster) error
// only consider pods that are running
pods, err := clientset.CoreV1().Pods(cluster.Namespace).List(ctx, options)
-
if err != nil {
return err
}
@@ -695,7 +691,6 @@ func ShutdownCluster(clientset kubeapi.Interface, cluster crv1.Pgcluster) error
// includes changing the replica count for all clusters to 1, and then updating the pgcluster
// with a shutdown status.
func StartupCluster(clientset kubernetes.Interface, cluster crv1.Pgcluster) error {
-
log.Debugf("Cluster Operator: starting cluster %s", cluster.Name)
// ensure autofailover is enabled to ensure proper startup of the cluster
@@ -804,22 +799,17 @@ func waitForDeploymentReady(clientset kubernetes.Interface, namespace, deploymen
ctx := context.TODO()
// set up the timer and timeout
- // first, ensure that there is an available Pod
- timeout := time.After(timeoutSecs)
- tick := time.NewTicker(periodSecs)
- defer tick.Stop()
-
- for {
- select {
- case <-timeout:
- return fmt.Errorf("readiness timeout reached for deployment %q", deploymentName)
- case <-tick.C:
- // check to see if the deployment is ready
- if d, err := clientset.AppsV1().Deployments(namespace).Get(ctx, deploymentName, metav1.GetOptions{}); err != nil {
- log.Warn(err)
- } else if d.Status.Replicas == d.Status.ReadyReplicas {
- return nil
- }
+ if err := wait.Poll(periodSecs, timeoutSecs, func() (bool, error) {
+ // check to see if the deployment is ready
+ d, err := clientset.AppsV1().Deployments(namespace).Get(ctx, deploymentName, metav1.GetOptions{})
+ if err != nil {
+ log.Warn(err)
}
+
+ return err == nil && d.Status.Replicas == d.Status.ReadyReplicas, nil
+ }); err != nil {
+ return fmt.Errorf("readiness timeout reached for deployment %q", deploymentName)
}
+
+ return nil
}
diff --git a/internal/operator/cluster/rolling.go b/internal/operator/cluster/rolling.go
index 9f5351d7b0..feb2df24b1 100644
--- a/internal/operator/cluster/rolling.go
+++ b/internal/operator/cluster/rolling.go
@@ -31,6 +31,7 @@ import (
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
+ "k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
@@ -306,26 +307,21 @@ func waitForPostgresInstance(clientset kubernetes.Interface, restConfig *rest.Co
pod := pods.Items[0]
cmd := generatePostgresReadyCommand(cluster.Spec.Port)
- // set up the timer and timeout
- // first, ensure that there is an available Pod
- timeout := time.After(timeoutSecs)
- tick := time.NewTicker(periodSecs)
- defer tick.Stop()
-
- for {
- select {
- case <-timeout:
- return fmt.Errorf("readiness timeout reached for start up of cluster %q instance %q", cluster.Name, deployment.Name)
- case <-tick.C:
- // check to see if PostgreSQL is ready to accept connections
- s, _, _ := kubeapi.ExecToPodThroughAPI(restConfig, clientset,
- cmd, "database", pod.Name, pod.Namespace, nil)
-
- // really we should find a way to get the exit code in the future, but
- // in the interim...
- if strings.Contains(s, "accepting connections") {
- return nil
- }
- }
+ // start polling to test if the Postgres instance is available to accept
+ // connections
+ if err := wait.Poll(periodSecs, timeoutSecs, func() (bool, error) {
+ // check to see if PostgreSQL is ready to accept connections
+ s, _, _ := kubeapi.ExecToPodThroughAPI(restConfig, clientset,
+ cmd, "database", pod.Name, pod.Namespace, nil)
+
+ // really we should find a way to get the exit code in the future, but
+ // in the interim, we know that we can accept connections if the below
+ // string is present
+ return strings.Contains(s, "accepting connections"), nil
+ }); err != nil {
+ return fmt.Errorf("readiness timeout reached for start up of cluster %q instance %q",
+ cluster.Name, deployment.Name)
}
+
+ return nil
}
diff --git a/internal/operator/cluster/upgrade.go b/internal/operator/cluster/upgrade.go
index d497753c28..084f828505 100644
--- a/internal/operator/cluster/upgrade.go
+++ b/internal/operator/cluster/upgrade.go
@@ -36,6 +36,7 @@ import (
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
+ "k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"sigs.k8s.io/yaml"
)
@@ -131,7 +132,6 @@ func AddUpgrade(clientset kubeapi.Interface, upgrade *crv1.Pgtask, namespace str
PublishUpgradeEvent(events.EventUpgradeClusterCreateSubmitted, namespace, upgrade, "")
log.Debugf("finished main upgrade workflow for cluster: %s", upgradeTargetClusterName)
-
}
// getPrimaryPodDeploymentName searches through the pods associated with this pgcluster for the 'primary' pod,
@@ -151,7 +151,6 @@ func getPrimaryPodDeploymentName(clientset kubernetes.Interface, cluster *crv1.P
// only consider pods that are running
pods, err := clientset.CoreV1().Pods(cluster.Namespace).List(ctx, options)
-
if err != nil {
log.Errorf("no pod with the primary role label was found for cluster %s. Error: %s", cluster.Name, err.Error())
return ""
@@ -251,7 +250,6 @@ func handleReplicas(clientset kubeapi.Interface, clusterName, currentPrimaryPVC,
// (e.g. pgo create cluster hippo --replica-count=2) but will not included any replicas
// created using the 'pgo scale' command
func SetReplicaNumber(pgcluster *crv1.Pgcluster, numReplicas string) {
-
pgcluster.Spec.Replicas = numReplicas
}
@@ -280,10 +278,12 @@ func deleteBeforeUpgrade(clientset kubeapi.Interface, clusterName, currentPrimar
}
// wait until the backrest shared repo pod deployment has been deleted before continuing
- waitStatus := deploymentWait(clientset, namespace, clusterName+"-backrest-shared-repo", 180, 10)
+ waitStatus := deploymentWait(clientset, namespace, clusterName+"-backrest-shared-repo",
+ 180*time.Second, 10*time.Second)
log.Debug(waitStatus)
// wait until the primary pod deployment has been deleted before continuing
- waitStatus = deploymentWait(clientset, namespace, currentPrimary, 180, 10)
+ waitStatus = deploymentWait(clientset, namespace, currentPrimary,
+ 180*time.Second, 10*time.Second)
log.Debug(waitStatus)
// delete the pgcluster
@@ -318,21 +318,15 @@ func deleteBeforeUpgrade(clientset kubeapi.Interface, clusterName, currentPrimar
// deletion to complete before proceeding with the rest of the pgcluster upgrade.
func deploymentWait(clientset kubernetes.Interface, namespace, deploymentName string, timeoutSecs, periodSecs time.Duration) string {
ctx := context.TODO()
- timeout := time.After(timeoutSecs * time.Second)
- tick := time.NewTicker(periodSecs * time.Second)
- defer tick.Stop()
-
- for {
- select {
- case <-timeout:
- return fmt.Sprintf("Timed out waiting for deployment to be deleted: [%s]", deploymentName)
- case <-tick.C:
- _, err := clientset.AppsV1().Deployments(namespace).Get(ctx, deploymentName, metav1.GetOptions{})
- if err != nil {
- return fmt.Sprintf("Deployment %s has been deleted.", deploymentName)
- }
- }
+
+ if err := wait.Poll(periodSecs, timeoutSecs, func() (bool, error) {
+ _, err := clientset.AppsV1().Deployments(namespace).Get(ctx, deploymentName, metav1.GetOptions{})
+ return err != nil, nil
+ }); err != nil {
+ return fmt.Sprintf("Timed out waiting for deployment to be deleted: [%s]", deploymentName)
}
+
+ return fmt.Sprintf("Deployment %s has been deleted.", deploymentName)
}
// deleteNonupgradePgtasks deletes all existing pgtasks by selector with the exception of the
@@ -459,7 +453,6 @@ func recreateBackrestRepoSecret(clientset kubernetes.Interface, clustername, nam
// for the current Postgres Operator version, updating or deleting values where appropriate, and sets
// an expected status so that the CRD object can be recreated.
func preparePgclusterForUpgrade(pgcluster *crv1.Pgcluster, parameters map[string]string, oldpgoversion, currentPrimary string) {
-
// first, update the PGO version references to the current Postgres Operator version
pgcluster.ObjectMeta.Labels[config.LABEL_PGO_VERSION] = parameters[config.LABEL_PGO_VERSION]
pgcluster.Spec.UserLabels[config.LABEL_PGO_VERSION] = parameters[config.LABEL_PGO_VERSION]
From e42fe12699a6bfeba768e625e5ba892c6aacd14f Mon Sep 17 00:00:00 2001
From: tjmoore4 <42497036+tjmoore4@users.noreply.github.com>
Date: Fri, 11 Dec 2020 11:27:24 -0500
Subject: [PATCH 041/276] Remove references to crunchy-backrest-restore
The functionality of the crunchy-backrest-restore container is
now included in the new crunchy-pgbackrest container. As such,
the existing references to the obsolete container can now be removed.
---
installers/olm/postgresoperator.csv.images.yaml | 1 -
internal/config/images.go | 2 --
2 files changed, 3 deletions(-)
diff --git a/installers/olm/postgresoperator.csv.images.yaml b/installers/olm/postgresoperator.csv.images.yaml
index 97aa48299c..21d1f3c10f 100644
--- a/installers/olm/postgresoperator.csv.images.yaml
+++ b/installers/olm/postgresoperator.csv.images.yaml
@@ -11,7 +11,6 @@
- { name: RELATED_IMAGE_CRUNCHY_POSTGRES_EXPORTER, value: '${PGO_IMAGE_PREFIX}/crunchy-postgres-exporter:${PGO_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_ADMIN, value: '${CCP_IMAGE_PREFIX}/crunchy-admin:${CCP_IMAGE_TAG}' }
-- { name: RELATED_IMAGE_CRUNCHY_BACKREST_RESTORE, value: '${CCP_IMAGE_PREFIX}/crunchy-backrest-restore:${CCP_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_PGADMIN, value: '${CCP_IMAGE_PREFIX}/crunchy-pgadmin4:${CCP_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_PGBADGER, value: '${CCP_IMAGE_PREFIX}/crunchy-pgbadger:${CCP_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_PGBOUNCER, value: '${CCP_IMAGE_PREFIX}/crunchy-pgbouncer:${CCP_IMAGE_TAG}' }
diff --git a/internal/config/images.go b/internal/config/images.go
index 2811e927fc..3c7fdf4285 100644
--- a/internal/config/images.go
+++ b/internal/config/images.go
@@ -22,7 +22,6 @@ const (
CONTAINER_IMAGE_PGO_CLIENT = "pgo-client"
CONTAINER_IMAGE_PGO_RMDATA = "pgo-rmdata"
CONTAINER_IMAGE_CRUNCHY_ADMIN = "crunchy-admin"
- CONTAINER_IMAGE_CRUNCHY_BACKREST_RESTORE = "crunchy-backrest-restore"
CONTAINER_IMAGE_CRUNCHY_POSTGRES_EXPORTER = "crunchy-postgres-exporter"
CONTAINER_IMAGE_CRUNCHY_GRAFANA = "crunchy-grafana"
CONTAINER_IMAGE_CRUNCHY_PGADMIN = "crunchy-pgadmin4"
@@ -44,7 +43,6 @@ var RelatedImageMap = map[string]string{
"RELATED_IMAGE_PGO_CLIENT": CONTAINER_IMAGE_PGO_CLIENT,
"RELATED_IMAGE_PGO_RMDATA": CONTAINER_IMAGE_PGO_RMDATA,
"RELATED_IMAGE_CRUNCHY_ADMIN": CONTAINER_IMAGE_CRUNCHY_ADMIN,
- "RELATED_IMAGE_CRUNCHY_BACKREST_RESTORE": CONTAINER_IMAGE_CRUNCHY_BACKREST_RESTORE,
"RELATED_IMAGE_CRUNCHY_POSTGRES_EXPORTER": CONTAINER_IMAGE_CRUNCHY_POSTGRES_EXPORTER,
"RELATED_IMAGE_CRUNCHY_PGADMIN": CONTAINER_IMAGE_CRUNCHY_PGADMIN,
"RELATED_IMAGE_CRUNCHY_PGBADGER": CONTAINER_IMAGE_CRUNCHY_PGBADGER,
From 2155584f2367251a4d3d3aed3223a444a38912bd Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 16 Dec 2020 09:43:40 -0500
Subject: [PATCH 042/276] Do not consider evicted Pods with `pgo df`
For a variety of reasons, including the need to exec into Pods
to get PVC status with `pgo df`, only running Pods should be
considered for this command; evicted Pods, in particular, should
be excluded.
Issue: [ch9959]
Issue: #2129
---
internal/apiserver/dfservice/dfimpl.go | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/internal/apiserver/dfservice/dfimpl.go b/internal/apiserver/dfservice/dfimpl.go
index 0b2ac196af..5a0186f41b 100644
--- a/internal/apiserver/dfservice/dfimpl.go
+++ b/internal/apiserver/dfservice/dfimpl.go
@@ -25,10 +25,12 @@ import (
"github.com/crunchydata/postgres-operator/internal/kubeapi"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
+
log "github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/fields"
"k8s.io/client-go/kubernetes"
)
@@ -150,7 +152,12 @@ func getClusterDf(cluster *crv1.Pgcluster, clusterResultsChannel chan msgs.DfDet
selector := fmt.Sprintf("%s=%s,!%s",
config.LABEL_PG_CLUSTER, cluster.Spec.Name, config.LABEL_PGHA_BOOTSTRAP)
- pods, err := apiserver.Clientset.CoreV1().Pods(cluster.Spec.Namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
+ options := metav1.ListOptions{
+ FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(),
+ LabelSelector: selector,
+ }
+
+ pods, err := apiserver.Clientset.CoreV1().Pods(cluster.Spec.Namespace).List(ctx, options)
// if there is an error attempting to get the pods, just return
if err != nil {
From fdef98956f763643867fb36f4126afc9659de52e Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 13 Dec 2020 14:33:30 -0500
Subject: [PATCH 043/276] Ignore a pgBouncer not found error when updating
annotations
pgBouncer is an optional Deployment; as such, the Operator should
proceed if the pgBouncer Deployment is not found.
---
internal/controller/pgcluster/pgclustercontroller.go | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index d363445106..82111a11c0 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -34,6 +34,7 @@ import (
informers "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/crunchydata.com/v1"
log "github.com/sirupsen/logrus"
+ kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/cache"
@@ -380,7 +381,7 @@ func updateAnnotations(c *Controller, oldCluster *crv1.Pgcluster, newCluster *cr
}
if len(annotationsPgBouncer) != 0 {
- if err := clusteroperator.UpdatePgBouncerAnnotations(c.Client, newCluster, annotationsPgBouncer); err != nil {
+ if err := clusteroperator.UpdatePgBouncerAnnotations(c.Client, newCluster, annotationsPgBouncer); err != nil && !kerrors.IsNotFound(err) {
return err
}
}
From 86c1b7d1ec132480f30f1a4dada8e9a0555b9f8e Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 13 Dec 2020 15:15:13 -0500
Subject: [PATCH 044/276] Aggregate rolling update triggered behavior
Updates to a PostgreSQL cluster that warrant a rolling update
are now aggregated to only trigger a single rolling update per
action taken on a PostgreSQL cluster. This allows the changes
to be rolled out more rapidly and limits the number of downtime
events that need to take place.
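The aggregation technique in this patch is: collect each change that warrants a rolling update into a slice of callbacks, then trigger one rolling update whose closure applies them all. A stdlib-only sketch of that shape (the `deployment` type and field names are stand-ins for `appsv1.Deployment`, not the real API types):

```go
package main

import "fmt"

// deployment is a minimal stand-in for appsv1.Deployment.
type deployment struct {
	Annotations map[string]string
	CPULimit    string
}

// updateFunc matches the shape of the Operator's rolling-update callbacks.
type updateFunc func(*deployment) error

// rollingUpdate applies one aggregated mutation; in the real Operator this is
// also where the instance would be restarted, exactly once.
func rollingUpdate(d *deployment, apply updateFunc) error {
	return apply(d)
}

func main() {
	// collect every change that warrants a rolling update...
	fns := []updateFunc{
		func(d *deployment) error { d.CPULimit = "500m"; return nil },
		func(d *deployment) error { d.Annotations["owner"] = "hippo"; return nil },
	}

	d := &deployment{Annotations: map[string]string{}}

	// ...then trigger a single rolling update that applies them all in order
	err := rollingUpdate(d, func(d *deployment) error {
		for _, fn := range fns {
			if err := fn(d); err != nil {
				return err
			}
		}
		return nil
	})
	fmt.Println(err, d.CPULimit, d.Annotations["owner"]) // <nil> 500m hippo
}
```

With this shape, two simultaneous spec changes cost one restart instead of two.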
---
.../pgcluster/pgclustercontroller.go | 58 +++++++++++++------
1 file changed, 39 insertions(+), 19 deletions(-)
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index 82111a11c0..701429e576 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -34,6 +34,7 @@ import (
informers "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/crunchydata.com/v1"
log "github.com/sirupsen/logrus"
+ appsv1 "k8s.io/api/apps/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
@@ -176,6 +177,9 @@ func (c *Controller) processNextItem() bool {
func (c *Controller) onUpdate(oldObj, newObj interface{}) {
oldcluster := oldObj.(*crv1.Pgcluster)
newcluster := newObj.(*crv1.Pgcluster)
+ // initialize a slice that may contain functions that need to be executed
+ // as part of a rolling update
+ rollingUpdateFuncs := [](func(*crv1.Pgcluster, *appsv1.Deployment) error){}
log.Debugf("pgcluster onUpdate for cluster %s (namespace %s)", newcluster.ObjectMeta.Namespace,
newcluster.ObjectMeta.Name)
@@ -239,10 +243,7 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
!reflect.DeepEqual(oldcluster.Spec.Limits, newcluster.Spec.Limits) ||
!reflect.DeepEqual(oldcluster.Spec.ExporterResources, newcluster.Spec.ExporterResources) ||
!reflect.DeepEqual(oldcluster.Spec.ExporterLimits, newcluster.Spec.ExporterLimits) {
- if err := clusteroperator.RollingUpdate(c.Client, c.Client.Config, newcluster, clusteroperator.UpdateResources); err != nil {
- log.Error(err)
- return
- }
+ rollingUpdateFuncs = append(rollingUpdateFuncs, clusteroperator.UpdateResources)
}
// see if any of the pgBackRest repository resource values have changed, and
@@ -271,16 +272,40 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
log.Error(err)
return
}
+ rollingUpdateFuncs = append(rollingUpdateFuncs, clusteroperator.UpdateTablespaces)
}
// check to see if any of the annotations have been modified, in particular,
// the non-system annotations
if !reflect.DeepEqual(oldcluster.Spec.Annotations, newcluster.Spec.Annotations) {
- if err := updateAnnotations(c, oldcluster, newcluster); err != nil {
+ if changed, err := updateAnnotations(c, oldcluster, newcluster); err != nil {
log.Error(err)
return
+ } else if changed {
+ // append the PostgreSQL specific functions as part of a rolling update
+ rollingUpdateFuncs = append(rollingUpdateFuncs, clusteroperator.UpdateAnnotations)
}
}
+
+ // if there is no need to perform a rolling update, exit here
+ if len(rollingUpdateFuncs) == 0 {
+ return
+ }
+
+ // otherwise, create an anonymous function that executes each of the rolling
+ // update functions as part of the rolling update
+ if err := clusteroperator.RollingUpdate(c.Client, c.Client.Config, newcluster,
+ func(cluster *crv1.Pgcluster, deployment *appsv1.Deployment) error {
+ for _, fn := range rollingUpdateFuncs {
+ if err := fn(cluster, deployment); err != nil {
+ return err
+ }
+ }
+ return nil
+ }); err != nil {
+ log.Error(err)
+ return
+ }
}
// onDelete is called when a pgcluster is deleted
@@ -317,10 +342,13 @@ func addIdentifier(clusterCopy *crv1.Pgcluster) {
// deployments, which includes:
//
// - globally applied annotations
-// - postgres instance specific annotations
// - pgBackRest instance specific annotations
// - pgBouncer instance specific annotations
-func updateAnnotations(c *Controller, oldCluster *crv1.Pgcluster, newCluster *crv1.Pgcluster) error {
+//
+// The Postgres-specific annotations must be handled by the caller, because
+// they need to be applied via a rolling update that the caller controls. We
+// indicate this to the calling function by returning "true".
+func updateAnnotations(c *Controller, oldCluster *crv1.Pgcluster, newCluster *crv1.Pgcluster) (bool, error) {
// so we have a two-tier problem we need to solve:
// 1. Which of the deployment types are being modified (or in the case of
// global, all of them)?
@@ -376,23 +404,17 @@ func updateAnnotations(c *Controller, oldCluster *crv1.Pgcluster, newCluster *cr
// but only do so if we have to
if len(annotationsBackrest) != 0 {
if err := backrestoperator.UpdateAnnotations(c.Client, newCluster, annotationsBackrest); err != nil {
- return err
+ return false, err
}
}
if len(annotationsPgBouncer) != 0 {
if err := clusteroperator.UpdatePgBouncerAnnotations(c.Client, newCluster, annotationsPgBouncer); err != nil && !kerrors.IsNotFound(err) {
- return err
- }
- }
-
- if len(annotationsPostgres) != 0 {
- if err := clusteroperator.RollingUpdate(c.Client, c.Client.Config, newCluster, clusteroperator.UpdateAnnotations); err != nil {
- return err
+ return false, err
}
}
- return nil
+ return len(annotationsPostgres) != 0, nil
}
// updatePgBouncer updates the pgBouncer Deployment to reflect any changes that
@@ -460,9 +482,7 @@ func updateTablespaces(c *Controller, oldCluster *crv1.Pgcluster, newCluster *cr
}
}
- // alright, update the tablespace entries for this cluster!
- // if it returns an error, pass the error back up to the caller
- return clusteroperator.RollingUpdate(c.Client, c.Client.Config, newCluster, clusteroperator.UpdateTablespaces)
+ return nil
}
// WorkerCount returns the worker count for the controller
From 4a8266e4e31d7fa8821ea131187687b531a05824 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 13 Dec 2020 19:29:39 -0500
Subject: [PATCH 045/276] Modify enablement of a metrics-enabled PostgreSQL
cluster
This adds a CRD attribute to pgcluster called `exporter`, which
will ultimately allow for the toggling on/off of the metrics
sidecar within a PostgreSQL cluster.
Includes an upgrade path for eliminating confusing labels for
the enablement of the exporter.
---
.../apiserver/clusterservice/clusterimpl.go | 12 +++----
internal/config/labels.go | 1 -
internal/operator/cluster/clusterlogic.go | 24 +++++++-------
internal/operator/cluster/upgrade.go | 14 ++++++++
internal/operator/clusterutilities.go | 32 ++++++++++---------
pkg/apis/crunchydata.com/v1/cluster.go | 4 ++-
6 files changed, 52 insertions(+), 35 deletions(-)
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 5c51d98514..c411e8b498 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -721,13 +721,6 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
userLabelsMap[config.LABEL_CUSTOM_CONFIG] = request.CustomConfig
}
- //set the metrics flag with the global setting first
- userLabelsMap[config.LABEL_EXPORTER] = strconv.FormatBool(apiserver.MetricsFlag)
-
- //if metrics is chosen on the pgo command, stick it into the user labels
- if request.MetricsFlag {
- userLabelsMap[config.LABEL_EXPORTER] = "true"
- }
if request.ServiceType != "" {
if request.ServiceType != config.DEFAULT_SERVICE_TYPE && request.ServiceType != config.LOAD_BALANCER_SERVICE_TYPE && request.ServiceType != config.NODEPORT_SERVICE_TYPE {
resp.Status.Code = msgs.Error
@@ -1138,6 +1131,11 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
spec.CustomConfig = userLabelsMap[config.LABEL_CUSTOM_CONFIG]
}
+ // enable the exporter sidecar based on what the user passed in or what
+ // the default value is. The user value takes precedence, unless it's false,
+ // as the legacy check only looked for enablement
+ spec.Exporter = request.MetricsFlag || apiserver.MetricsFlag
+
// if the request has overriding CPU/Memory requests/limits parameters,
// these will take precedence over the defaults
if request.CPULimit != "" {
diff --git a/internal/config/labels.go b/internal/config/labels.go
index eb5522b092..4b540a5227 100644
--- a/internal/config/labels.go
+++ b/internal/config/labels.go
@@ -37,7 +37,6 @@ const LABEL_INGEST = "ingest"
const LABEL_PGREMOVE = "pgremove"
const LABEL_PVCNAME = "pvcname"
const LABEL_EXPORTER = "crunchy-postgres-exporter"
-const LABEL_EXPORTER_PG_USER = "ccp_monitoring"
const LABEL_ARCHIVE = "archive"
const LABEL_ARCHIVE_TIMEOUT = "archive-timeout"
const LABEL_CUSTOM_CONFIG = "custom-config"
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index 7acf0fe41c..ded0c0e025 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -71,19 +71,11 @@ func addClusterCreateMissingService(clientset kubernetes.Interface, cl *crv1.Pgc
serviceFields.PGBadgerPort = cl.Spec.PGBadgerPort
}
- // ...due to legacy reasons, the exporter label may not be available yet in the
- // main labels. so we will check here first, and then check the user labels
- if val, ok := clusterLabels[config.LABEL_EXPORTER]; ok && val == config.LABEL_TRUE {
+ // set the exporter port if exporter is enabled
+ if cl.Spec.Exporter {
serviceFields.ExporterPort = cl.Spec.ExporterPort
}
- // ...this condition should be targeted for removal in the future
- if cl.Spec.UserLabels != nil {
- if val, ok := cl.Spec.UserLabels[config.LABEL_EXPORTER]; ok && val == config.LABEL_TRUE {
- serviceFields.ExporterPort = cl.Spec.ExporterPort
- }
- }
-
return CreateService(clientset, &serviceFields, namespace)
}
@@ -283,6 +275,11 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
// 'crunchy-pgha-scope' label on the pgcluster
cl.Spec.UserLabels[config.LABEL_PGHA_SCOPE] = cl.Spec.Name
+ // Set the exporter labels, if applicable
+ if cl.Spec.Exporter {
+ cl.Spec.UserLabels[config.LABEL_EXPORTER] = config.LABEL_TRUE
+ }
+
// set up a map of the names of the tablespaces as well as the storage classes
tablespaceStorageTypeMap := operator.GetTablespaceStorageTypeMap(cl.Spec.TablespaceMounts)
@@ -319,7 +316,7 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
ConfVolume: operator.GetConfVolume(clientset, cl, namespace),
ExporterAddon: operator.GetExporterAddon(clientset, namespace, &cl.Spec),
BadgerAddon: operator.GetBadgerAddon(clientset, namespace, cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY]),
- PgmonitorEnvVars: operator.GetPgmonitorEnvVars(cl.Spec.UserLabels[config.LABEL_EXPORTER], cl.Spec.CollectSecretName),
+ PgmonitorEnvVars: operator.GetPgmonitorEnvVars(cl),
ScopeLabel: config.LABEL_PGHA_SCOPE,
PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cl, cl.Labels[config.LABEL_BACKREST], cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
cl.Spec.Port, cl.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]),
@@ -436,6 +433,11 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
cluster.Spec.UserLabels[config.LABEL_DEPLOYMENT_NAME] = replica.Spec.Name
+ // Set the exporter labels, if applicable
+ if cluster.Spec.Exporter {
+ cluster.Spec.UserLabels[config.LABEL_EXPORTER] = config.LABEL_TRUE
+ }
+
// set up a map of the names of the tablespaces as well as the storage classes
tablespaceStorageTypeMap := operator.GetTablespaceStorageTypeMap(cluster.Spec.TablespaceMounts)
diff --git a/internal/operator/cluster/upgrade.go b/internal/operator/cluster/upgrade.go
index 084f828505..ecbd9d9985 100644
--- a/internal/operator/cluster/upgrade.go
+++ b/internal/operator/cluster/upgrade.go
@@ -459,6 +459,8 @@ func preparePgclusterForUpgrade(pgcluster *crv1.Pgcluster, parameters map[string
// next, capture the existing Crunchy Postgres Exporter configuration settings (previous to version
// 4.5.0 referred to as Crunchy Collect), if they exist, and store them in the current labels
+ // 4.6.0 added this value to the spec as "Exporter", so the next step ensures
+ // that the value is migrated over
if value, ok := pgcluster.ObjectMeta.Labels["crunchy_collect"]; ok {
pgcluster.ObjectMeta.Labels[config.LABEL_EXPORTER] = value
delete(pgcluster.ObjectMeta.Labels, "crunchy_collect")
@@ -469,6 +471,18 @@ func preparePgclusterForUpgrade(pgcluster *crv1.Pgcluster, parameters map[string
delete(pgcluster.Spec.UserLabels, "crunchy_collect")
}
+ // convert the metrics label over to using a proper definition. Give the user
+ // label precedence.
+ if value, ok := pgcluster.ObjectMeta.Labels[config.LABEL_EXPORTER]; ok {
+ pgcluster.Spec.Exporter, _ = strconv.ParseBool(value)
+ delete(pgcluster.ObjectMeta.Labels, config.LABEL_EXPORTER)
+ }
+
+ if value, ok := pgcluster.Spec.UserLabels[config.LABEL_EXPORTER]; ok {
+ pgcluster.Spec.Exporter, _ = strconv.ParseBool(value)
+ delete(pgcluster.Spec.UserLabels, config.LABEL_EXPORTER)
+ }
+
// since the current primary label is not used in this version of the Postgres Operator,
// delete it before moving on to other upgrade tasks
delete(pgcluster.ObjectMeta.Labels, config.LABEL_CURRENT_PRIMARY)
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index d4ae78706b..2fb2277330 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -359,11 +359,11 @@ func GetBadgerAddon(clientset kubernetes.Interface, namespace string, cluster *c
func GetExporterAddon(clientset kubernetes.Interface, namespace string, spec *crv1.PgclusterSpec) string {
- if spec.UserLabels[config.LABEL_EXPORTER] == "true" {
+ if spec.Exporter {
log.Debug("crunchy-postgres-exporter was found as a label on cluster create")
log.Debugf("creating exporter secret for cluster %s", spec.Name)
- err := util.CreateSecret(clientset, spec.Name, spec.CollectSecretName, config.LABEL_EXPORTER_PG_USER,
+ err := util.CreateSecret(clientset, spec.Name, spec.CollectSecretName, crv1.PGUserMonitor,
Pgo.Cluster.PgmonitorPassword, namespace)
if err != nil {
log.Error(err)
@@ -769,21 +769,23 @@ func GetPodAntiAffinityType(cluster *crv1.Pgcluster, deploymentType crv1.PodAnti
// GetPgmonitorEnvVars populates the pgmonitor env var template, which contains any
// pgmonitor env vars that need to be included in the Deployment spec for a PG cluster.
-func GetPgmonitorEnvVars(metricsEnabled, exporterSecret string) string {
- if metricsEnabled == "true" {
- fields := PgmonitorEnvVarsTemplateFields{
- ExporterSecret: exporterSecret,
- }
+func GetPgmonitorEnvVars(cluster *crv1.Pgcluster) string {
+ if !cluster.Spec.Exporter {
+ return ""
+ }
- var doc bytes.Buffer
- err := config.PgmonitorEnvVarsTemplate.Execute(&doc, fields)
- if err != nil {
- log.Error(err.Error())
- return ""
- }
- return doc.String()
+ fields := PgmonitorEnvVarsTemplateFields{
+ ExporterSecret: cluster.Spec.CollectSecretName,
}
- return ""
+
+ doc := bytes.Buffer{}
+
+ if err := config.PgmonitorEnvVarsTemplate.Execute(&doc, fields); err != nil {
+ log.Error(err)
+ return ""
+ }
+
+ return doc.String()
}
// GetPgbackrestS3EnvVars retrieves the values for the various configuration settings require to
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index 91b7f8dad8..63d3914f0f 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -50,7 +50,9 @@ type PgclusterSpec struct {
PGOImagePrefix string `json:"pgoimageprefix"`
Port string `json:"port"`
PGBadgerPort string `json:"pgbadgerport"`
- ExporterPort string `json:"exporterport"`
+ // Exporter, if set to true, enables the exporter sidecar
+ Exporter bool `json:"exporter"`
+ ExporterPort string `json:"exporterport"`
PrimaryStorage PgStorageSpec
WALStorage PgStorageSpec
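The two rules this patch introduces — at creation time the request flag or the server-wide default enables the exporter, and during upgrade any legacy `crunchy-postgres-exporter` label is parsed into the new boolean spec attribute — can be sketched as follows. The function names are illustrative, not the Operator's:

```go
package main

import (
	"fmt"
	"strconv"
)

// resolveExporter mirrors the creation-time rule: either the per-request
// metrics flag or the server-wide default enables the sidecar.
func resolveExporter(requestFlag, serverDefault bool) bool {
	return requestFlag || serverDefault
}

// migrateExporterLabel mirrors the upgrade path: a legacy
// "crunchy-postgres-exporter" label value is parsed into the boolean spec
// attribute and the label is removed; unparseable values become false.
func migrateExporterLabel(labels map[string]string) (exporter bool) {
	if value, ok := labels["crunchy-postgres-exporter"]; ok {
		exporter, _ = strconv.ParseBool(value)
		delete(labels, "crunchy-postgres-exporter")
	}
	return exporter
}

func main() {
	fmt.Println(resolveExporter(false, true)) // server default enables it

	labels := map[string]string{"crunchy-postgres-exporter": "true"}
	fmt.Println(migrateExporterLabel(labels), len(labels))
}
```

Note that because the check is a logical OR, a user cannot disable the sidecar at create time when the server-wide default is on — matching the legacy behavior, which only ever looked for enablement.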
From 554aea609826fd11949dd44f40b12e822728bfe6 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 14 Dec 2020 15:18:24 -0500
Subject: [PATCH 046/276] Improve compatibility between templates and
Kubernetes objects
This updates the "exporter.json" template, which is used for deploying
the "crunchy-postgres-exporter" sidecar for metrics collection in a
PostgreSQL cluster, so that it no longer begins with a preceding ",".
This in turn allows the file to be mapped onto a Kubernetes Container
object, for convenience of manipulation within a program.
---
.../pgo-operator/files/pgo-configs/cluster-deployment.json | 5 +++--
.../roles/pgo-operator/files/pgo-configs/exporter.json | 2 +-
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
index 7fd77e6449..b5c823b3fd 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
@@ -234,8 +234,9 @@
],
"imagePullPolicy": "IfNotPresent"
}{{ end }}
-
- {{.ExporterAddon }}
+ {{ if .ExporterAddon }}
+ ,{{.ExporterAddon }}
+ {{ end }}
{{.BadgerAddon }}
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/exporter.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/exporter.json
index c40a26e5ef..9d430c3c20 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/exporter.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/exporter.json
@@ -1,4 +1,4 @@
-,{
+{
"name": "exporter",
"image": "{{.PGOImagePrefix}}/crunchy-postgres-exporter:{{.PGOImageTag}}",
"ports": [{
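The fix can be reproduced with Go's `text/template`: the parent template now emits the separating comma only when the sidecar fragment is non-empty, so the fragment itself stays a standalone, parseable JSON object. The templates below are simplified stand-ins for `cluster-deployment.json` and `exporter.json`:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// The container list closes with the main container; the exporter fragment
// is appended with a leading comma only when it is present.
const containers = `[
	{"name": "database"}
	{{ if .ExporterAddon }},{{ .ExporterAddon }}{{ end }}
]`

// render executes the parent template with an optional sidecar fragment
// and parses the result to prove it is valid JSON either way.
func render(exporterAddon string) ([]map[string]string, error) {
	tmpl := template.Must(template.New("containers").Parse(containers))

	var doc bytes.Buffer
	if err := tmpl.Execute(&doc, map[string]string{"ExporterAddon": exporterAddon}); err != nil {
		return nil, err
	}

	var parsed []map[string]string
	err := json.Unmarshal(doc.Bytes(), &parsed)
	return parsed, err
}

func main() {
	// the fragment no longer starts with ",", so it parses on its own
	parsed, err := render(`{"name": "exporter"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(parsed)) // container count with the exporter enabled
}
```

With the comma inside the fragment (the old layout), the fragment alone could never be unmarshaled into a Container object; moving the comma to the caller makes both templates independently valid.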
From 0f807838d05fba6035d5670ebf14062f715f47db Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 14 Dec 2020 15:20:43 -0500
Subject: [PATCH 047/276] Update cluster Deployment match labels
Match labels are immutable, and given that some potentially mutable
labels exist within the match labels for the PostgreSQL Deployment
objects, it is necessary to modify this set of labels to use the
minimum needed for properly deploying a cluster. This reduces the
current set of match labels for a PostgreSQL instance to the following:
- vendor
- pg-cluster -- the name of the PostgreSQL cluster (group of all
instances)
- deployment-name -- the name of the PostgreSQL instance
- pgo-pg-database
---
.../files/pgo-configs/cluster-deployment.json | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
index b5c823b3fd..0081bb205f 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
@@ -12,10 +12,12 @@
"spec": {
"replicas": {{.Replicas}},
"selector": {
- "matchLabels": {
+ "matchLabels": {
"vendor": "crunchydata",
- {{.DeploymentLabels }}
- }
+ "pg-cluster": "{{.ClusterName}}",
+ "pgo-pg-database": "true",
+ "deployment-name": "{{.Name}}"
+ }
},
"template": {
"metadata": {
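Since a Deployment's `spec.selector` is immutable after creation, only labels that never change for an instance belong in it. A minimal sketch of the revised selector template, with `renderSelector` as an illustrative helper:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// selector mirrors the revised cluster-deployment.json: the match labels
// are limited to values that are stable for the life of the instance.
const selector = `{
	"matchLabels": {
		"vendor": "crunchydata",
		"pg-cluster": "{{.ClusterName}}",
		"pgo-pg-database": "true",
		"deployment-name": "{{.Name}}"
	}
}`

// renderSelector fills in the cluster and instance names and parses the
// result, proving the template yields a valid selector object.
func renderSelector(clusterName, name string) (map[string]map[string]string, error) {
	tmpl := template.Must(template.New("selector").Parse(selector))

	var doc bytes.Buffer
	if err := tmpl.Execute(&doc, map[string]string{
		"ClusterName": clusterName,
		"Name":        name,
	}); err != nil {
		return nil, err
	}

	out := map[string]map[string]string{}
	err := json.Unmarshal(doc.Bytes(), &out)
	return out, err
}

func main() {
	sel, err := renderSelector("hippo", "hippo-abcd")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(sel["matchLabels"]), sel["matchLabels"]["pg-cluster"])
}
```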
From 0ee6b6f3ea73707832320027afcba0892ea58f85 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 14 Dec 2020 15:23:46 -0500
Subject: [PATCH 048/276] Enable / disable the metrics sidecar during cluster
lifetime
This commit introduces the ability to enable/disable the metrics
collection sidecar (`crunchy-postgres-exporter`) during the lifetime of
a PostgreSQL cluster. This can be toggled in multiple ways, including:
- The `exporter` attribute in pgclusters.crunchydata.com
- `pgo update cluster --enable-metrics`, which adds the sidecar
- `pgo update cluster --disable-metrics`, which removes the sidecar
As adding/removing a sidecar results in modifying a Deployment template,
this action will trigger a rolling update of the PostgreSQL cluster in
an effort to minimize any downtime.
This also has the net effect of moving the "ccp_monitoring" user that is
created to being fully managed by the Postgres Operator. The
`CollectSecretName` attribute is now removed from the pgcluster CRD, as
is the "PgMonitorPassword" attribute from the `pgo-deployer` and other
installers.
Issue: [ch7270]
Issue: #1413
---
bin/crunchy-postgres-exporter/start.sh | 24 +-
cmd/pgo/cmd/cluster.go | 7 +
cmd/pgo/cmd/update.go | 13 +-
.../Configuration/pgo-yaml-configuration.md | 1 -
.../architecture/high-availability/_index.md | 1 +
docs/content/custom-resources/_index.md | 13 +-
docs/content/pgo-client/common-tasks.md | 1 -
.../reference/pgo_update_cluster.md | 6 +-
.../files/pgo-configs/exporter.json | 4 +-
.../apiserver/clusterservice/clusterimpl.go | 14 +-
internal/apiserver/userservice/userimpl.go | 11 +-
internal/config/pgoconfig.go | 1 -
.../pgcluster/pgclustercontroller.go | 20 ++
internal/operator/cluster/cluster.go | 3 +
internal/operator/cluster/clusterlogic.go | 15 +-
internal/operator/cluster/common.go | 116 ++++++
internal/operator/cluster/common_test.go | 39 +++
internal/operator/cluster/exporter.go | 331 ++++++++++++++++++
internal/operator/cluster/pgbouncer.go | 91 +----
internal/operator/cluster/pgbouncer_test.go | 19 -
internal/operator/clusterutilities.go | 67 ++--
internal/operator/common.go | 5 -
internal/util/exporter.go | 29 ++
internal/util/exporter_test.go | 32 ++
pkg/apis/crunchydata.com/v1/cluster.go | 1 -
pkg/apis/crunchydata.com/v1/common.go | 3 -
pkg/apiservermsgs/clustermsgs.go | 21 +-
27 files changed, 697 insertions(+), 191 deletions(-)
create mode 100644 internal/operator/cluster/common.go
create mode 100644 internal/operator/cluster/common_test.go
create mode 100644 internal/operator/cluster/exporter.go
create mode 100644 internal/util/exporter.go
create mode 100644 internal/util/exporter_test.go
diff --git a/bin/crunchy-postgres-exporter/start.sh b/bin/crunchy-postgres-exporter/start.sh
index 2a0d543b70..a7397973a9 100755
--- a/bin/crunchy-postgres-exporter/start.sh
+++ b/bin/crunchy-postgres-exporter/start.sh
@@ -76,19 +76,7 @@ set_default_pg_exporter_env() {
trap 'trap_sigterm' SIGINT SIGTERM
set_default_postgres_exporter_env
-
-if [[ ! -v DATA_SOURCE_NAME ]]
-then
- set_default_pg_exporter_env
- if [[ ! -z "${EXPORTER_PG_PARAMS}" ]]
- then
- EXPORTER_PG_PARAMS="?${EXPORTER_PG_PARAMS}"
- fi
- export DATA_SOURCE_NAME="postgresql://${EXPORTER_PG_USER}:${EXPORTER_PG_PASSWORD}\
-@${EXPORTER_PG_HOST}:${EXPORTER_PG_PORT}/${EXPORTER_PG_DATABASE}${EXPORTER_PG_PARAMS}"
-fi
-
-
+set_default_pg_exporter_env
if [[ ! ${#default_exporter_env_vars[@]} -eq 0 ]]
then
@@ -99,16 +87,16 @@ fi
# Check that postgres is accepting connections.
echo_info "Waiting for PostgreSQL to be ready.."
while true; do
- ${PG_DIR?}/bin/pg_isready -d ${DATA_SOURCE_NAME}
+ ${PG_DIR?}/bin/pg_isready -q -h "${EXPORTER_PG_HOST}" -p "${EXPORTER_PG_PORT}"
if [ $? -eq 0 ]; then
break
fi
sleep 2
done
-echo_info "Checking if PostgreSQL is accepting queries.."
+echo_info "Checking if ${EXPORTER_PG_USER} is created.."
while true; do
- ${PG_DIR?}/bin/psql "${DATA_SOURCE_NAME}" -c "SELECT now();"
+ PGPASSWORD="${EXPORTER_PG_PASSWORD}" ${PG_DIR?}/bin/psql -q -h "${EXPORTER_PG_HOST}" -p "${EXPORTER_PG_PORT}" -U "${EXPORTER_PG_USER}" -c "SELECT 1;" "${EXPORTER_PG_DATABASE}"
if [ $? -eq 0 ]; then
break
fi
@@ -135,7 +123,7 @@ else
fi
done
- VERSION=$(${PG_DIR?}/bin/psql "${DATA_SOURCE_NAME}" -qtAX -c "SELECT current_setting('server_version_num')")
+ VERSION=$(PGPASSWORD="${EXPORTER_PG_PASSWORD}" ${PG_DIR?}/bin/psql -h "${EXPORTER_PG_HOST}" -p "${EXPORTER_PG_PORT}" -U "${EXPORTER_PG_USER}" -qtAX -c "SELECT current_setting('server_version_num')" "${EXPORTER_PG_DATABASE}")
if (( ${VERSION?} >= 90500 )) && (( ${VERSION?} < 90600 ))
then
if [[ -f ${CONFIG_DIR?}/queries_pg95.yml ]]
@@ -231,7 +219,7 @@ sed -i "s/#PGBACKREST_INFO_THROTTLE_MINUTES#/${PGBACKREST_INFO_THROTTLE_MINUTES:
PG_OPTIONS="--extend.query-path=${QUERY_DIR?}/queries.yml --web.listen-address=:${POSTGRES_EXPORTER_PORT}"
echo_info "Starting postgres-exporter.."
-${PG_EXP_HOME?}/postgres_exporter ${PG_OPTIONS?} >>/dev/stdout 2>&1 &
+DATA_SOURCE_URI="${EXPORTER_PG_HOST}:${EXPORTER_PG_PORT}/${EXPORTER_PG_DATABASE}?${EXPORTER_PG_PARAMS}" DATA_SOURCE_USER="${EXPORTER_PG_USER}" DATA_SOURCE_PASS="${EXPORTER_PG_PASSWORD}" ${PG_EXP_HOME?}/postgres_exporter ${PG_OPTIONS?} >>/dev/stdout 2>&1 &
echo $! > $POSTGRES_EXPORTER_PIDFILE
wait
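The move from a single `DATA_SOURCE_NAME` connection string to the split `DATA_SOURCE_URI`/`DATA_SOURCE_USER`/`DATA_SOURCE_PASS` variables read by `postgres_exporter` can be sketched as below; the default values here are illustrative, the real ones come from `set_default_pg_exporter_env` in `start.sh`:

```shell
#!/usr/bin/env bash
# Illustrative defaults; in the container these come from
# set_default_pg_exporter_env and the mounted exporter Secret.
EXPORTER_PG_HOST="${EXPORTER_PG_HOST:-localhost}"
EXPORTER_PG_PORT="${EXPORTER_PG_PORT:-5432}"
EXPORTER_PG_DATABASE="${EXPORTER_PG_DATABASE:-postgres}"
EXPORTER_PG_USER="${EXPORTER_PG_USER:-ccp_monitoring}"
EXPORTER_PG_PASSWORD="${EXPORTER_PG_PASSWORD:-not-a-real-password}"
EXPORTER_PG_PARAMS="${EXPORTER_PG_PARAMS:-sslmode=disable}"

# postgres_exporter reads the credentials separately, which keeps the
# password out of the URI (and out of the process argument list)
export DATA_SOURCE_URI="${EXPORTER_PG_HOST}:${EXPORTER_PG_PORT}/${EXPORTER_PG_DATABASE}?${EXPORTER_PG_PARAMS}"
export DATA_SOURCE_USER="${EXPORTER_PG_USER}"
export DATA_SOURCE_PASS="${EXPORTER_PG_PASSWORD}"

echo "${DATA_SOURCE_URI}"
```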
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index a125bc2b5b..b4bf417697 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -628,6 +628,13 @@ func updateCluster(args []string, ns string) {
r.Autofail = msgs.UpdateClusterAutofailDisable
}
+ // check to see if the metrics sidecar needs to be enabled or disabled
+ if EnableMetrics {
+ r.Metrics = msgs.UpdateClusterMetricsEnable
+ } else if DisableMetrics {
+ r.Metrics = msgs.UpdateClusterMetricsDisable
+ }
+
// if the user provided resources for CPU or Memory, validate them to ensure
// they are valid Kubernetes values
if err := util.ValidateQuantity(r.CPURequest, "cpu"); err != nil {
diff --git a/cmd/pgo/cmd/update.go b/cmd/pgo/cmd/update.go
index 798ccf36f7..303f64b74b 100644
--- a/cmd/pgo/cmd/update.go
+++ b/cmd/pgo/cmd/update.go
@@ -29,9 +29,13 @@ var (
// DisableLogin allows a user to disable the ability for a PostgreSQL user to
// log in
DisableLogin bool
+ // DisableMetrics allows a user to disable metrics collection
+ DisableMetrics bool
// EnableLogin allows a user to enable the ability for a PostgreSQL user to
// log in
EnableLogin bool
+ // EnableMetrics allows a user to enable metrics collection
+ EnableMetrics bool
// ExpireUser sets a user to having their password expired
ExpireUser bool
// PgoroleChangePermissions does something with the pgouser access controls,
@@ -83,6 +87,8 @@ func init() {
UpdateClusterCmd.Flags().StringVar(&CPULimit, "cpu-limit", "", "Set the number of millicores to limit for the CPU, e.g. "+
"\"100m\" or \"0.1\".")
UpdateClusterCmd.Flags().BoolVar(&DisableAutofailFlag, "disable-autofail", false, "Disables autofail capabilities in the cluster.")
+ UpdateClusterCmd.Flags().BoolVar(&DisableMetrics, "disable-metrics", false,
+ "Disable the metrics collection sidecar. May cause brief downtime.")
UpdateClusterCmd.Flags().BoolVar(&EnableAutofailFlag, "enable-autofail", false, "Enables autofail capabilities in the cluster.")
UpdateClusterCmd.Flags().StringVar(&MemoryRequest, "memory", "", "Set the amount of RAM to request, e.g. "+
"1GiB.")
@@ -107,7 +113,8 @@ func init() {
"the Crunchy Postgres Exporter sidecar container.")
UpdateClusterCmd.Flags().StringVar(&ExporterMemoryLimit, "exporter-memory-limit", "", "Set the amount of memory to limit for "+
"the Crunchy Postgres Exporter sidecar container.")
-
+ UpdateClusterCmd.Flags().BoolVar(&EnableMetrics, "enable-metrics", false,
+ "Enable the metrics collection sidecar. May cause brief downtime.")
UpdateClusterCmd.Flags().BoolVarP(&EnableStandby, "enable-standby", "", false,
"Enables standby mode in the cluster(s) specified.")
UpdateClusterCmd.Flags().BoolVar(&Startup, "startup", false, "Restart the database cluster if it "+
@@ -248,6 +255,10 @@ var UpdateClusterCmd = &cobra.Command{
"from has been properly shutdown before proceeding!")
}
+ if EnableMetrics || DisableMetrics {
+ fmt.Println("Adding or removing a metrics collection sidecar can cause downtime.")
+ }
+
if len(Tablespaces) > 0 {
fmt.Println("Adding tablespaces can cause downtime.")
}
diff --git a/docs/content/Configuration/pgo-yaml-configuration.md b/docs/content/Configuration/pgo-yaml-configuration.md
index c1b6a894e1..b9467258d3 100644
--- a/docs/content/Configuration/pgo-yaml-configuration.md
+++ b/docs/content/Configuration/pgo-yaml-configuration.md
@@ -23,7 +23,6 @@ The *pgo.yaml* file is broken into major sections as described below:
|User | the PostgreSQL normal user name
|Database | the PostgreSQL normal user database
|Replicas | the number of cluster replicas to create for newly created clusters, typically users will scale up replicas on the pgo CLI command line but this global value can be set as well
-|PgmonitorPassword | the password to use for pgmonitor metrics collection if you specify --metrics when creating a PG cluster
|Metrics | boolean, if set to true will cause each new cluster to include crunchy-postgres-exporter as a sidecar container for metrics collection, if set to false (default), users can still add metrics on a cluster-by-cluster basis using the pgo command flag --metrics
|Badger | boolean, if set to true will cause each new cluster to include crunchy-pgbadger as a sidecar container for static log analysis, if set to false (default), users can still add pgbadger on a cluster-by-cluster basis using the pgo create cluster command flag --pgbadger
|Policies | optional, list of policies to apply to a newly created cluster, comma separated, must be valid policies in the catalog
diff --git a/docs/content/architecture/high-availability/_index.md b/docs/content/architecture/high-availability/_index.md
index 950b150105..b3dc97f290 100644
--- a/docs/content/architecture/high-availability/_index.md
+++ b/docs/content/architecture/high-availability/_index.md
@@ -330,4 +330,5 @@ modification to the custom resource:
- Memory resource adjustments
- CPU resource adjustments
- Custom annotation changes
+- Enabling/disabling the monitoring sidecar on a PostgreSQL cluster (`--metrics`)
- Tablespace additions
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index 9869a29bb1..b39dd3ae0b 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -395,7 +395,6 @@ spec:
user: hippo
userlabels:
backrest-storage-type: "s3"
- crunchy-postgres-exporter: "false"
pg-pod-anti-affinity: ""
pgo-version: {{< param operatorVersion >}}
usersecretname: ${pgo_cluster_name}-hippo-secret
@@ -492,7 +491,6 @@ spec:
userlabels:
NodeLabelKey: ""
NodeLabelValue: ""
- crunchy-postgres-exporter: "false"
pg-pod-anti-affinity: ""
pgo-version: {{< param operatorVersion >}}
EOF
@@ -502,6 +500,13 @@ kubectl apply -f "${pgo_cluster_name}-${pgo_cluster_replica_suffix}-pgreplica.ya
At this time, removing a replica must be handled through the [`pgo` client]({{< relref "/pgo-client/common-tasks.md#high-availability-scaling-up-down">}}).
+### Monitoring
+
+To enable the [monitoring]({{< relref "/architecture/monitoring.md">}})
+(aka metrics) sidecar using the `crunchy-postgres-exporter` container, you need
+to set the `exporter` attribute on the `pgclusters.crunchydata.com` custom resource.
+
+
### Add a Tablespace
Tablespaces can be added during the lifetime of a PostgreSQL cluster (tablespaces can be removed as well, but for a detailed explanation as to how, please see the [Tablespaces]({{< relref "/architecture/tablespaces.md">}}) section).
@@ -679,12 +684,12 @@ make changes, as described below.
| CCPImage | `create` | The name of the PostgreSQL container image to use, e.g. `crunchy-postgres-ha` or `crunchy-postgres-ha-gis`. |
| CCPImagePrefix | `create` | If provided, the image prefix (or registry) of the PostgreSQL container image, e.g. `registry.developers.crunchydata.com/crunchydata`. The default is to use the image prefix set in the PostgreSQL Operator configuration. |
| CCPImageTag | `create` | The tag of the PostgreSQL container image to use, e.g. `{{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}}`. |
-| CollectSecretName | `create` | An optional attribute unless `crunchy-postgres-exporter` is specified in the `UserLabels`; contains the name of a Kubernetes Secret that contains the credentials for a PostgreSQL user that is used for metrics collection, and is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.|
| ClusterName | `create` | The name of the PostgreSQL cluster, e.g. `hippo`. This is used to group PostgreSQL instances (primary, replicas) together. |
| CustomConfig | `create` | If specified, references a custom ConfigMap to use when bootstrapping a PostgreSQL cluster. For the shape of this file, please see the section on [Custom Configuration]({{< relref "/advanced/custom-configuration.md" >}}) |
| Database | `create` | The name of a database that the PostgreSQL user can log into after the PostgreSQL cluster is created. |
| ExporterLimits | `create`, `update` | Specify the container resource limits that the `crunchy-postgres-exporter` sidecar uses when it is deployed with a PostgreSQL instance. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| ExporterPort | `create` | If the `"crunchy-postgres-exporter"` label is set in `UserLabels`, then this specifies the port that the metrics sidecar runs on (e.g. `9187`) |
+| Exporter | `create`,`update` | If `true`, deploys the `crunchy-postgres-exporter` sidecar for metrics collection |
+| ExporterPort | `create` | If `Exporter` is `true`, then this specifies the port that the metrics sidecar runs on (e.g. `9187`) |
| ExporterResources | `create`, `update` | Specify the container resource requests that the `crunchy-postgres-exporter` sidecar uses when it is deployed with a PostgreSQL instance. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
| Limits | `create`, `update` | Specify the container resource limits that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
| Name | `create` | The name of the PostgreSQL instance that is the primary. On creation, this should be set to be the same as `ClusterName`. |
diff --git a/docs/content/pgo-client/common-tasks.md b/docs/content/pgo-client/common-tasks.md
index 8eefb412c2..58e7476dfa 100644
--- a/docs/content/pgo-client/common-tasks.md
+++ b/docs/content/pgo-client/common-tasks.md
@@ -128,7 +128,6 @@ Cluster:
BackrestS3URIStyle: ""
BackrestS3VerifyTLS: true
DisableAutofail: false
- PgmonitorPassword: ""
EnableCrunchyadm: false
DisableReplicaStartFailReinit: false
PodAntiAffinity: preferred
diff --git a/docs/content/pgo-client/reference/pgo_update_cluster.md b/docs/content/pgo-client/reference/pgo_update_cluster.md
index 007c34b0fc..8a5af0e066 100644
--- a/docs/content/pgo-client/reference/pgo_update_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_update_cluster.md
@@ -38,7 +38,9 @@ pgo update cluster [flags]
--cpu string Set the number of millicores to request for the CPU, e.g. "100m" or "0.1".
--cpu-limit string Set the number of millicores to limit for the CPU, e.g. "100m" or "0.1".
--disable-autofail Disables autofail capabilities in the cluster.
+ --disable-metrics Disable the metrics collection sidecar. May cause brief downtime.
--enable-autofail Enables autofail capabilities in the cluster.
+ --enable-metrics Enable the metrics collection sidecar. May cause brief downtime.
--enable-standby Enables standby mode in the cluster(s) specified.
--exporter-cpu string Set the number of millicores to request for CPU for the Crunchy Postgres Exporter sidecar container, e.g. "100m" or "0.1".
--exporter-cpu-limit string Set the number of millicores to limit for CPU for the Crunchy Postgres Exporter sidecar container, e.g. "100m" or "0.1".
@@ -70,7 +72,7 @@ pgo update cluster [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -84,4 +86,4 @@ pgo update cluster [flags]
* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Dec-2020
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/exporter.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/exporter.json
index 9d430c3c20..3a3d5edddd 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/exporter.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/exporter.json
@@ -35,7 +35,7 @@
"name": "EXPORTER_PG_USER",
"valueFrom": {
"secretKeyRef": {
- "name": "{{.CollectSecretName}}",
+ "name": "{{.ExporterSecretName}}",
"key": "username"
}
}
@@ -44,7 +44,7 @@
"name": "EXPORTER_PG_PASSWORD",
"valueFrom": {
"secretKeyRef": {
- "name": "{{.CollectSecretName}}",
+ "name": "{{.ExporterSecretName}}",
"key": "password"
}
}
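The hunk above renames the template field from `.CollectSecretName` to `.ExporterSecretName` in the exporter sidecar JSON. As a minimal sketch of how such a field substitution works: the struct and function names below are hypothetical; only the `ExporterSecretName` field name is taken from the diff.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// exporterFields is a hypothetical stand-in for the operator's template
// fields; only ExporterSecretName mirrors the diff above.
type exporterFields struct {
	ExporterSecretName string
}

// renderSecretRef substitutes the secret name into a fragment resembling
// the secretKeyRef entry in exporter.json.
func renderSecretRef(fields exporterFields) (string, error) {
	const fragment = `{"secretKeyRef": {"name": "{{.ExporterSecretName}}", "key": "username"}}`
	tmpl, err := template.New("exporter").Parse(fragment)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, fields); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := renderSecretRef(exporterFields{ExporterSecretName: "hippo-exporter-secret"})
	fmt.Println(out)
}
```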
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index c411e8b498..d1a78cba41 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -960,9 +960,6 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
resp.Result.Users = append(resp.Result.Users, user)
}
- // there's a secret for the monitoring user too
- newInstance.Spec.CollectSecretName = clusterName + crv1.ExporterSecretSuffix
-
// Create Backrest secret for S3/SSH Keys:
// We make this regardless if backrest is enabled or not because
// the deployment template always tries to mount /sshd volume
@@ -1131,7 +1128,7 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
spec.CustomConfig = userLabelsMap[config.LABEL_CUSTOM_CONFIG]
}
- // enable the exporter sidecar based on the what the user based in or what
+ // enable the exporter sidecar based on what the user passed in or what
// the default value is. the user value takes precedence, unless it's false,
// as the legacy check only looked for enablement
spec.Exporter = request.MetricsFlag || apiserver.MetricsFlag
@@ -1905,6 +1902,15 @@ func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterRespons
cluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL] = "false"
}
+ // enable or disable the metrics collection sidecar
+ switch request.Metrics {
+ case msgs.UpdateClusterMetricsEnable:
+ cluster.Spec.Exporter = true
+ case msgs.UpdateClusterMetricsDisable:
+ cluster.Spec.Exporter = false
+ case msgs.UpdateClusterMetricsDoNothing: // this is never reached -- no-op
+ }
+
// enable or disable standby mode based on UpdateClusterStandbyStatus provided in
// the request
switch request.Standby {
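The metrics update above is a tri-state flag (enable / disable / do nothing) handled with a `switch`, the same shape used for standby mode. A standalone sketch of the pattern follows; the type and constant names here are assumptions mirroring the `msgs` enums, not the operator's actual definitions.

```go
package main

import "fmt"

// metricsUpdate is a hypothetical tri-state mirroring the request enum in
// the hunk above.
type metricsUpdate int

const (
	metricsDoNothing metricsUpdate = iota
	metricsEnable
	metricsDisable
)

// applyMetricsUpdate returns the new exporter setting, leaving the current
// value untouched for the "do nothing" case.
func applyMetricsUpdate(current bool, update metricsUpdate) bool {
	switch update {
	case metricsEnable:
		return true
	case metricsDisable:
		return false
	}
	return current
}

func main() {
	fmt.Println(applyMetricsUpdate(false, metricsEnable))   // enabling turns it on
	fmt.Println(applyMetricsUpdate(true, metricsDoNothing)) // no-op keeps it on
}
```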
diff --git a/internal/apiserver/userservice/userimpl.go b/internal/apiserver/userservice/userimpl.go
index 603fad07bd..39e0396184 100644
--- a/internal/apiserver/userservice/userimpl.go
+++ b/internal/apiserver/userservice/userimpl.go
@@ -558,7 +558,16 @@ func ShowUser(request *msgs.ShowUserRequest) msgs.ShowUserResponse {
//
// We ignore any errors...if the password get set, we add it. If not, we
// don't
- secretName := fmt.Sprintf(util.UserSecretFormat, result.ClusterName, result.Username)
+ secretName := ""
+
+ // handle special cases with user names + secrets lining up
+ switch result.Username {
+ default:
+ secretName = fmt.Sprintf(util.UserSecretFormat, result.ClusterName, result.Username)
+ case "ccp_monitoring":
+ secretName = util.GenerateExporterSecretName(result.ClusterName)
+ }
+
password, _ := util.GetPasswordFromSecret(apiserver.Clientset, pod.Namespace, secretName)
if password != "" {
diff --git a/internal/config/pgoconfig.go b/internal/config/pgoconfig.go
index 3fffa15a02..bf7c6fb35d 100644
--- a/internal/config/pgoconfig.go
+++ b/internal/config/pgoconfig.go
@@ -211,7 +211,6 @@ type ClusterStruct struct {
BackrestS3URIStyle string
BackrestS3VerifyTLS string
DisableAutofail bool
- PgmonitorPassword string
EnableCrunchyadm bool
DisableReplicaStartFailReinit bool
PodAntiAffinity string
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index 701429e576..72ee582be3 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -237,6 +237,26 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
}
}
+ // see if we are adding / removing the metrics collection sidecar
+ if oldcluster.Spec.Exporter != newcluster.Spec.Exporter {
+ var err error
+
+ // determine if the sidecar is being enabled/disabled and take the precursor
+ // actions before the deployment template is modified
+ switch newcluster.Spec.Exporter {
+ case true:
+ err = clusteroperator.AddExporter(c.Client, c.Client.Config, newcluster)
+ case false:
+ err = clusteroperator.RemoveExporter(c.Client, c.Client.Config, newcluster)
+ }
+
+ if err == nil {
+ rollingUpdateFuncs = append(rollingUpdateFuncs, clusteroperator.UpdateExporterSidecar)
+ } else {
+ log.Errorf("could not update metrics collection sidecar: %q", err.Error())
+ }
+ }
+
// see if any of the resource values have changed for the database or exporter container,
// if so, update them
if !reflect.DeepEqual(oldcluster.Spec.Resources, newcluster.Spec.Resources) ||
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 76eb3834c1..74840a8113 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -58,6 +58,9 @@ type ServiceTemplateFields struct {
// ReplicaSuffix ...
const ReplicaSuffix = "-replica"
+// exporterContainerName is the name of the exporter container
+const exporterContainerName = "exporter"
+
func AddClusterBase(clientset kubeapi.Interface, cl *crv1.Pgcluster, namespace string) {
ctx := context.TODO()
var err error
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index ded0c0e025..c464da161b 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -275,9 +275,17 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
// 'crunchy-pgha-scope' label on the pgcluster
cl.Spec.UserLabels[config.LABEL_PGHA_SCOPE] = cl.Spec.Name
- // Set the exporter labels, if applicable
+ // If applicable, set the exporter labels, which are used by the scrapers,
+ // and create the secret. We don't need to take any additional actions, as
+ // the cluster creation process will handle those. Magic!
if cl.Spec.Exporter {
cl.Spec.UserLabels[config.LABEL_EXPORTER] = config.LABEL_TRUE
+
+ log.Debugf("creating exporter secret for cluster %s", cl.Spec.Name)
+
+ if _, err := CreateExporterSecret(clientset, cl); err != nil {
+ log.Error(err)
+ }
}
// set up a map of the names of the tablespaces as well as the storage classes
@@ -314,7 +322,7 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
PodAntiAffinity: operator.GetPodAntiAffinity(cl, crv1.PodAntiAffinityDeploymentDefault, cl.Spec.PodAntiAffinity.Default),
ContainerResources: operator.GetResourcesJSON(cl.Spec.Resources, cl.Spec.Limits),
ConfVolume: operator.GetConfVolume(clientset, cl, namespace),
- ExporterAddon: operator.GetExporterAddon(clientset, namespace, &cl.Spec),
+ ExporterAddon: operator.GetExporterAddon(cl.Spec),
BadgerAddon: operator.GetBadgerAddon(clientset, namespace, cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY]),
PgmonitorEnvVars: operator.GetPgmonitorEnvVars(cl),
ScopeLabel: config.LABEL_PGHA_SCOPE,
@@ -407,7 +415,6 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
var replicaDoc bytes.Buffer
serviceName := replica.Spec.ClusterName + "-replica"
- // replicaFlag := true
// replicaLabels := operator.GetPrimaryLabels(serviceName, replica.Spec.ClusterName, replicaFlag, cluster.Spec.UserLabels)
cluster.Spec.UserLabels[config.LABEL_REPLICA_NAME] = replica.Spec.Name
@@ -472,7 +479,7 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
ContainerResources: operator.GetResourcesJSON(cluster.Spec.Resources, cluster.Spec.Limits),
NodeSelector: operator.GetAffinity(replica.Spec.UserLabels["NodeLabelKey"], replica.Spec.UserLabels["NodeLabelValue"], "In"),
PodAntiAffinity: operator.GetPodAntiAffinity(cluster, crv1.PodAntiAffinityDeploymentDefault, cluster.Spec.PodAntiAffinity.Default),
- ExporterAddon: operator.GetExporterAddon(clientset, namespace, &cluster.Spec),
+ ExporterAddon: operator.GetExporterAddon(cluster.Spec),
BadgerAddon: operator.GetBadgerAddon(clientset, namespace, cluster, replica.Spec.Name),
ScopeLabel: config.LABEL_PGHA_SCOPE,
PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cluster, cluster.Labels[config.LABEL_BACKREST], replica.Spec.Name,
diff --git a/internal/operator/cluster/common.go b/internal/operator/cluster/common.go
new file mode 100644
index 0000000000..250662846f
--- /dev/null
+++ b/internal/operator/cluster/common.go
@@ -0,0 +1,116 @@
+package cluster
+
+/*
+ Copyright 2020 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/crunchydata/postgres-operator/internal/kubeapi"
+ "github.com/crunchydata/postgres-operator/internal/operator"
+ pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password"
+ "github.com/crunchydata/postgres-operator/internal/util"
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+ log "github.com/sirupsen/logrus"
+ v1 "k8s.io/api/core/v1"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/rest"
+)
+
+const (
+ // sqlDisableLogin disables a Postgres user from logging in. This is safe
+ // from SQL injection as the interpolated identifier is escaped
+ //
+ // This had the "PASSWORD NULL" feature, but this is only found in
+ // PostgreSQL 11+, and given we don't want to check for the PG version before
+ // running the command, we will not use it
+ sqlDisableLogin = `ALTER ROLE %s NOLOGIN;`
+
+ // sqlEnableLogin is the SQL to update the password
+ // NOTE: this is safe from SQL injection as we explicitly add the interpolated
+ // string as an MD5 hash, and we are using the username.
+ // However, the escaping is handled in the util.SetPostgreSQLPassword function
+ sqlEnableLogin = `ALTER ROLE %s PASSWORD %s LOGIN;`
+)
+
+// disablePostgresLogin disables the ability for a PostgreSQL user to log in
+func disablePostgresLogin(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster, username string) error {
+ log.Debugf("disable user %q on cluster %q", username, cluster.Name)
+ // first, get the primary pod. If we cannot do this, consider it an error
+ // and abort
+ pod, err := util.GetPrimaryPod(clientset, cluster)
+ if err != nil {
+ return err
+ }
+
+ // This is safe from SQL injection as we are escaping the username
+ sql := strings.NewReader(fmt.Sprintf(sqlDisableLogin, util.SQLQuoteIdentifier(username)))
+ cmd := []string{"psql", "-p", cluster.Spec.Port}
+
+ // exec into the pod to run the query
+ _, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset,
+ cmd, "database", pod.Name, pod.ObjectMeta.Namespace, sql)
+ // if there is an error, log the error from the stderr and return the error
+ if err != nil {
+ return fmt.Errorf(stderr)
+ }
+
+ return nil
+}
+
+// generatePassword generates a password that is used for the PostgreSQL user
+// system accounts. This goes off of the configured value for password length
+func generatePassword() (string, error) {
+ // first, get the length of what the password should be
+ generatedPasswordLength := util.GeneratedPasswordLength(operator.Pgo.Cluster.PasswordLength)
+ // from there, the password can be generated!
+ return util.GeneratePassword(generatedPasswordLength)
+}
+
+// makePostgresPassword creates the expected hash for a password type for a
+// PostgreSQL password
+func makePostgresPassword(passwordType pgpassword.PasswordType, username, password string) string {
+ // get the PostgreSQL password generated based on the password type.
+ // as all of these values are valid, this will not error
+ postgresPassword, _ := pgpassword.NewPostgresPassword(passwordType, username, password)
+
+ // create the PostgreSQL style hashed password and return
+ hashedPassword, _ := postgresPassword.Build()
+
+ return hashedPassword
+}
+
+// setPostgreSQLPassword updates the password of a user in a PostgreSQL
+// cluster by executing into the Pod provided (i.e. a primary) and changing it
+func setPostgreSQLPassword(clientset kubernetes.Interface, restconfig *rest.Config, pod *v1.Pod, port,
+ username, password string) error {
+ log.Debugf("set %q password in PostgreSQL", username)
+
+ // we use the PostgreSQL "md5" hashing mechanism here to pre-hash the
+ // password. This is semi-hard coded but is now prepped for SCRAM as a
+ // password type can be passed in. Almost to SCRAM!
+ passwordHash := makePostgresPassword(pgpassword.MD5, username, password)
+
+ if err := util.SetPostgreSQLPassword(clientset, restconfig, pod,
+ port, username, passwordHash, sqlEnableLogin); err != nil {
+ log.Error(err)
+ return err
+ }
+
+ // and that's all!
+ return nil
+}
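`makePostgresPassword` above delegates to the `pgpassword` package. For the MD5 password type, PostgreSQL's documented scheme is `"md5"` followed by the hex MD5 digest of the password concatenated with the username; a minimal sketch of that scheme follows. The helper name is hypothetical, and the expected value checked here is the one asserted in the repository's own `common_test.go` below.

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// md5Password builds a PostgreSQL-style md5 password hash:
// "md5" followed by hex(md5(password || username)).
func md5Password(username, password string) string {
	sum := md5.Sum([]byte(password + username))
	return fmt.Sprintf("md5%x", sum)
}

func main() {
	// matches the expected value in the repo's TestMakePostgresPassword
	fmt.Println(md5Password("pgbouncer", "datalake"))
}
```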
diff --git a/internal/operator/cluster/common_test.go b/internal/operator/cluster/common_test.go
new file mode 100644
index 0000000000..24f0423969
--- /dev/null
+++ b/internal/operator/cluster/common_test.go
@@ -0,0 +1,39 @@
+package cluster
+
+/*
+ Copyright 2020 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+import (
+ "testing"
+
+ pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password"
+)
+
+func TestMakePostgresPassword(t *testing.T) {
+ t.Run("md5", func(t *testing.T) {
+ t.Run("valid", func(t *testing.T) {
+ passwordType := pgpassword.MD5
+ username := "pgbouncer"
+ password := "datalake"
+ expected := "md56294153764d389dc6830b6ce4f923cdb"
+
+ actual := makePostgresPassword(passwordType, username, password)
+
+ if actual != expected {
+ t.Errorf("expected: %q actual: %q", expected, actual)
+ }
+ })
+ })
+}
diff --git a/internal/operator/cluster/exporter.go b/internal/operator/cluster/exporter.go
new file mode 100644
index 0000000000..957e47a7d1
--- /dev/null
+++ b/internal/operator/cluster/exporter.go
@@ -0,0 +1,331 @@
+package cluster
+
+/*
+ Copyright 2020 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "strconv"
+
+ "github.com/crunchydata/postgres-operator/internal/config"
+ "github.com/crunchydata/postgres-operator/internal/kubeapi"
+ "github.com/crunchydata/postgres-operator/internal/operator"
+ "github.com/crunchydata/postgres-operator/internal/util"
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+
+ log "github.com/sirupsen/logrus"
+ appsv1 "k8s.io/api/apps/v1"
+ v1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/rest"
+)
+
+const (
+ // exporterInstallScript references the embedded script that installs all of
+ // the pgMonitor functions
+ exporterInstallScript = "/opt/crunchy/bin/exporter/install.sh"
+
+ // exporterServicePortName is the name used to identify the exporter port in
+ // the service
+ exporterServicePortName = "postgres-exporter"
+)
+
+// AddExporter ensures that a PostgreSQL cluster can support the
+// "crunchy-postgres-exporter" sidecar, i.e.:
+//
+// - a service port is enabled so scrapers can access the metrics
+// - the "ccp_monitoring" user can authenticate; its Secret is managed as
+// well
+// - all of the monitoring views and functions are available
+func AddExporter(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error {
+ ctx := context.TODO()
+
+ // even if this is a standby, we can still set up a Secret (the password
+ // value of the Secret is of limited use until the standby is promoted, at
+ // which point it can be rotated, similar to the pgBouncer password)
+
+ // only create a password Secret if one does not already exist, which is
+ // handled in the delegated function
+ password, err := CreateExporterSecret(clientset, cluster)
+ if err != nil {
+ return err
+ }
+
+ // set up the Service, which is still needed on a standby
+ svc, err := clientset.CoreV1().Services(cluster.Namespace).Get(ctx, cluster.Name, metav1.GetOptions{})
+ if err != nil {
+ return err
+ }
+
+ // loop over the service ports to see if the exporter port is already set
+ // up. if it is, we can return early
+ for _, svcPort := range svc.Spec.Ports {
+ if svcPort.Name == exporterServicePortName {
+ return nil
+ }
+ }
+
+ // otherwise, we need to append a service port to the list
+ port, err := strconv.ParseInt(
+ util.GetValueOrDefault(cluster.Spec.ExporterPort, operator.Pgo.Cluster.ExporterPort), 10, 32)
+ if err != nil {
+ return err
+ }
+
+ svcPort := v1.ServicePort{
+ Name: exporterServicePortName,
+ Protocol: v1.ProtocolTCP,
+ Port: int32(port),
+ }
+
+ svc.Spec.Ports = append(svc.Spec.Ports, svcPort)
+
+ if _, err := clientset.CoreV1().Services(svc.Namespace).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
+ return err
+ }
+
+ // the Secret and Service are now set up; the remaining steps require
+ // executing SQL, which cannot be done on a standby, so exit early
+ if cluster.Spec.Standby {
+ return ErrStandbyNotAllowed
+ }
+
+ // get the primary pod, which is needed to update the password for the
+ // exporter user
+ pod, err := util.GetPrimaryPod(clientset, cluster)
+ if err != nil {
+ return err
+ }
+
+ // add the monitoring user and all the views associated with this
+ // user. this can be done by executing a script on the container itself
+ cmd := []string{"/bin/bash", exporterInstallScript}
+
+ if _, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset,
+ cmd, "database", pod.Name, pod.ObjectMeta.Namespace, nil); err != nil {
+ return fmt.Errorf(stderr)
+ }
+
+ // attempt to update the password in PostgreSQL, as this is how the exporter
+ // will properly interface with PostgreSQL
+ return setPostgreSQLPassword(clientset, restconfig, pod, cluster.Spec.Port, crv1.PGUserMonitor, password)
+}
+
+// CreateExporterSecret creates the Secret used by the exporter, which
+// contains the user credentials. If the Secret already exists, it is reused;
+// otherwise, a password is generated. Returns an error if it fails.
+func CreateExporterSecret(clientset kubernetes.Interface, cluster *crv1.Pgcluster) (string, error) {
+ ctx := context.TODO()
+ secretName := util.GenerateExporterSecretName(cluster.Name)
+
+ // see if this secret already exists...if it does, then take an early exit
+ if password, err := util.GetPasswordFromSecret(clientset, cluster.Namespace, secretName); err == nil {
+ log.Infof("exporter secret %s already present, will reuse", secretName)
+ return password, nil
+ }
+
+ // well, we have to generate the password
+ password, err := generatePassword()
+ if err != nil {
+ return "", err
+ }
+
+ // now, we can do what we came here to do, which is create the Secret that
+ // holds the "ccp_monitoring" credentials
+ secret := v1.Secret{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: secretName,
+ Labels: map[string]string{
+ config.LABEL_EXPORTER: config.LABEL_TRUE,
+ config.LABEL_PG_CLUSTER: cluster.Name,
+ config.LABEL_VENDOR: config.LABEL_CRUNCHY,
+ },
+ },
+ Data: map[string][]byte{
+ "username": []byte(crv1.PGUserMonitor),
+ "password": []byte(password),
+ },
+ }
+
+ if _, err := clientset.CoreV1().Secrets(cluster.Namespace).
+ Create(ctx, &secret, metav1.CreateOptions{}); err != nil {
+ log.Error(err)
+ return "", err
+ }
+
+ return password, nil
+}
+
+// RemoveExporter disables the ability for a PostgreSQL cluster to use the
+// exporter functionality. In particular this function:
+//
+// - disallows the login of the monitoring user (ccp_monitoring)
+// - removes the Secret that contains the ccp_monitoring user credentials
+// - removes the port on the cluster Service
+//
+// This does not modify the Deployment that has the exporter sidecar. That is
+// handled by the "UpdateExporter" function, so it can be handled as part of a
+// rolling update
+func RemoveExporter(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error {
+ ctx := context.TODO()
+
+ // close the service port
+ svc, err := clientset.CoreV1().Services(cluster.Namespace).Get(ctx, cluster.Name, metav1.GetOptions{})
+ if err != nil {
+ return err
+ }
+
+ svcPorts := []v1.ServicePort{}
+
+ for _, svcPort := range svc.Spec.Ports {
+ // if we find the service port for the exporter, skip it in the loop
+ if svcPort.Name == exporterServicePortName {
+ continue
+ }
+
+ svcPorts = append(svcPorts, svcPort)
+ }
+
+ svc.Spec.Ports = svcPorts
+
+ if _, err := clientset.CoreV1().Services(svc.Namespace).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
+ return err
+ }
+
+ // disable the user before clearing the Secret, so there does not end up
+ // being a race condition between the existence of the Secret and the Pod
+ // definition. if this is a standby cluster, skip this step, as we cannot
+ // execute any SQL
+ if !cluster.Spec.Standby {
+ // if this fails, warn and continue
+ if err := disablePostgresLogin(clientset, restconfig, cluster, crv1.PGUserMonitor); err != nil {
+ log.Warn(err)
+ }
+ }
+
+ // delete the Secret. If there is an error deleting the Secret, log as info
+ // and continue on
+ if err := clientset.CoreV1().Secrets(cluster.Namespace).Delete(ctx,
+ util.GenerateExporterSecretName(cluster.Name), metav1.DeleteOptions{}); err != nil {
+ log.Warnf("could not remove exporter secret: %q", err.Error())
+ }
+
+ return nil
+}
+
+// UpdateExporterSidecar either adds or removes the metrics sidecar from the
+// cluster. This is meant to be used as a rolling update callback function
+func UpdateExporterSidecar(cluster *crv1.Pgcluster, deployment *appsv1.Deployment) error {
+ // need to determine if we are adding or removing
+ if cluster.Spec.Exporter {
+ return addExporterSidecar(cluster, deployment)
+ }
+
+ removeExporterSidecar(deployment)
+
+ return nil
+}
+
+// addExporterSidecar adds the metrics collection exporter to a Deployment
+// This does two things:
+// - adds the exporter container to the manifest. If an exporter container
+// already exists, this supersedes it.
+// - adds the exporter label to the label template, so it can be discovered that
+// this container has an exporter
+func addExporterSidecar(cluster *crv1.Pgcluster, deployment *appsv1.Deployment) error {
+ // use the legacy template generation to make the appropriate substitutions,
+ // and then get said generation to be placed into an actual Container object
+ template := operator.GetExporterAddon(cluster.Spec)
+
+ container := v1.Container{}
+
+ if err := json.Unmarshal([]byte(template), &container); err != nil {
+ return fmt.Errorf("error unmarshalling exporter json into Container: %w", err)
+ }
+
+ // append the container to the deployment container list. However, we are
+ // going to do this carefully, in case the exporter container already exists.
+ // this definition will supersede any exporter container already in the
+ // containers list
+ containers := []v1.Container{}
+ for _, c := range deployment.Spec.Template.Spec.Containers {
+ // skip if this is the exporter container
+ if c.Name == exporterContainerName {
+ continue
+ }
+
+ containers = append(containers, c)
+ }
+
+ // add the exporter container and override the containers list definition
+ containers = append(containers, container)
+ deployment.Spec.Template.Spec.Containers = containers
+
+ // add the label to the deployment template
+ deployment.Spec.Template.ObjectMeta.Labels[config.LABEL_EXPORTER] = config.LABEL_TRUE
+
+ return nil
+}
+
+// removeExporterSidecar removes the metrics collection exporter from a
+// Deployment.
+//
+// This involves:
+// - Removing the container entry for the exporter
+// - Removing the label from the deployment template
+func removeExporterSidecar(deployment *appsv1.Deployment) {
+ // first, find the container entry in the list of containers and remove it
+ containers := []v1.Container{}
+ for _, c := range deployment.Spec.Template.Spec.Containers {
+ // skip if this is the exporter container
+ if c.Name == exporterContainerName {
+ continue
+ }
+
+ containers = append(containers, c)
+ }
+
+ deployment.Spec.Template.Spec.Containers = containers
+
+ // this mixes modern and legacy behavior: we also need to scan the
+ // environment variables on the "database" container and remove the one
+ // named "PGMONITOR_PASSWORD"
+ for i, c := range deployment.Spec.Template.Spec.Containers {
+ if c.Name == "database" {
+ env := []v1.EnvVar{}
+
+ for _, e := range c.Env {
+ if e.Name == "PGMONITOR_PASSWORD" {
+ continue
+ }
+
+ env = append(env, e)
+ }
+
+ deployment.Spec.Template.Spec.Containers[i].Env = env
+ break
+ }
+ }
+
+ // finally, remove the label
+ delete(deployment.Spec.Template.ObjectMeta.Labels, config.LABEL_EXPORTER)
+}
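Both `addExporterSidecar` and `removeExporterSidecar` above use the same rebuild-and-skip pattern: copy the container list into a fresh slice, dropping any entry whose name matches the exporter container. A standalone sketch of that pattern follows, with a plain string slice standing in for `deployment.Spec.Template.Spec.Containers`.

```go
package main

import "fmt"

// withoutContainer rebuilds a container-name list, dropping any entry that
// matches name — the rebuild-and-skip pattern used when adding or removing
// the exporter sidecar from a Deployment's container list.
func withoutContainer(names []string, name string) []string {
	kept := []string{}
	for _, n := range names {
		// skip the entry being removed (or superseded)
		if n == name {
			continue
		}
		kept = append(kept, n)
	}
	return kept
}

func main() {
	fmt.Println(withoutContainer([]string{"database", "exporter", "pgbadger"}, "exporter"))
}
```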
diff --git a/internal/operator/cluster/pgbouncer.go b/internal/operator/cluster/pgbouncer.go
index cf8dcaf7c0..a9a6d5ae2a 100644
--- a/internal/operator/cluster/pgbouncer.go
+++ b/internal/operator/cluster/pgbouncer.go
@@ -99,20 +99,6 @@ const (
// PostgreSQL cluster
sqlCheckPgBouncerInstall = `SELECT EXISTS (SELECT 1 FROM pg_catalog.pg_roles WHERE rolname = 'pgbouncer' LIMIT 1);`
- // disable the pgbouncer user from logging in. This is safe from SQL injection
- // as the string that is being interpolated is the util.PgBouncerUser constant
- //
- // This had the "PASSWORD NULL" feature, but this is only found in
- // PostgreSQL 11+, and given we don't want to check for the PG version before
- // running the command, we will not use it
- sqlDisableLogin = `ALTER ROLE "%s" NOLOGIN;`
-
- // sqlEnableLogin is the SQL to update the password
- // NOTE: this is safe from SQL injection as we explicitly add the inerpolated
- // string as a MD5 hash and we are using the crv1.PGUserPgBouncer constant
- // However, the escaping is handled in the util.SetPostgreSQLPassword function
- sqlEnableLogin = `ALTER ROLE %s PASSWORD %s LOGIN;`
-
// sqlGetDatabasesForPgBouncer gets all the databases where pgBouncer can be
// installed or uninstalled
sqlGetDatabasesForPgBouncer = `SELECT datname FROM pg_catalog.pg_database WHERE datname NOT IN ('template0') AND datallowconn;`
@@ -180,7 +166,7 @@ func AddPgbouncer(clientset kubernetes.Interface, restconfig *rest.Config, clust
// attempt to update the password in PostgreSQL, as this is how pgBouncer
// will properly interface with PostgreSQL
- if err := setPostgreSQLPassword(clientset, restconfig, pod, cluster.Spec.Port, pgBouncerPassword); err != nil {
+ if err := setPostgreSQLPassword(clientset, restconfig, pod, cluster.Spec.Port, crv1.PGUserPgBouncer, pgBouncerPassword); err != nil {
return err
}
}
@@ -317,7 +303,7 @@ func RotatePgBouncerPassword(clientset kubernetes.Interface, restconfig *rest.Co
// next, update the PostgreSQL primary with the new password. If this fails
// we definitely return an error
- if err := setPostgreSQLPassword(clientset, restconfig, primaryPod, cluster.Spec.Port, password); err != nil {
+ if err := setPostgreSQLPassword(clientset, restconfig, primaryPod, cluster.Spec.Port, crv1.PGUserPgBouncer, password); err != nil {
return err
}
@@ -326,7 +312,7 @@ func RotatePgBouncerPassword(clientset kubernetes.Interface, restconfig *rest.Co
// PostgreSQL to perform its authentication
secret.Data["password"] = []byte(password)
secret.Data["users.txt"] = util.GeneratePgBouncerUsersFileBytes(
- makePostgresPassword(pgpassword.MD5, password))
+ makePostgresPassword(pgpassword.MD5, crv1.PGUserPgBouncer, password))
// update the secret
if _, err := clientset.CoreV1().Secrets(cluster.Namespace).
@@ -658,7 +644,7 @@ func createPgbouncerSecret(clientset kubernetes.Interface, cluster *crv1.Pgclust
Data: map[string][]byte{
"password": []byte(password),
"users.txt": util.GeneratePgBouncerUsersFileBytes(
- makePostgresPassword(pgpassword.MD5, password)),
+ makePostgresPassword(pgpassword.MD5, crv1.PGUserPgBouncer, password)),
},
}
@@ -698,32 +684,7 @@ func createPgBouncerService(clientset kubernetes.Interface, cluster *crv1.Pgclus
// disable the "pgbouncer" role from being able to log in. It keeps the
// artificats that were created during normal pgBouncer operation
func disablePgBouncer(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error {
- log.Debugf("disable pgbouncer user on cluster [%s]", cluster.Name)
- // disable the pgbouncer user in the PostgreSQL cluster.
- // first, get the primary pod. If we cannot do this, let's consider it an
- // error and abort
- pod, err := util.GetPrimaryPod(clientset, cluster)
-
- if err != nil {
- return err
- }
-
- // This is safe from SQL injection as we are using constants and a well defined
- // string
- sql := strings.NewReader(fmt.Sprintf(sqlDisableLogin, crv1.PGUserPgBouncer))
- cmd := []string{"psql"}
-
- // exec into the pod to run the query
- _, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset,
- cmd, "database", pod.Name, pod.ObjectMeta.Namespace, sql)
-
- // if there is an error, log the error from the stderr and return the error
- if err != nil {
- log.Error(stderr)
- return err
- }
-
- return nil
+ return disablePostgresLogin(clientset, restconfig, cluster, crv1.PGUserPgBouncer)
}
// execPgBouncerScript runs a script pertaining to the management of pgBouncer
@@ -744,15 +705,6 @@ func execPgBouncerScript(clientset kubernetes.Interface, restconfig *rest.Config
}
}
-// generatePassword generates a password that is used for the "pgbouncer"
-// PostgreSQL user that provides the associated pgBouncer functionality
-func generatePassword() (string, error) {
- // first, get the length of what the password should be
- generatedPasswordLength := util.GeneratedPasswordLength(operator.Pgo.Cluster.PasswordLength)
- // from there, the password can be generated!
- return util.GeneratePassword(generatedPasswordLength)
-}
-
// generatePgBouncerConf generates the content that is stored in the secret
// for the "pgbouncer.ini" file
func generatePgBouncerConf(cluster *crv1.Pgcluster) (string, error) {
@@ -879,19 +831,6 @@ func isPgBouncerTLSEnabled(cluster *crv1.Pgcluster) bool {
return cluster.Spec.PgBouncer.TLSSecret != "" && cluster.Spec.TLS.IsTLSEnabled()
}
-// makePostgresPassword creates the expected hash for a password type for a
-// PostgreSQL password
-func makePostgresPassword(passwordType pgpassword.PasswordType, password string) string {
- // get the PostgreSQL password generate based on the password type
- // as all of these values are valid, this not not error
- postgresPassword, _ := pgpassword.NewPostgresPassword(passwordType, crv1.PGUserPgBouncer, password)
-
- // create the PostgreSQL style hashed password and return
- hashedPassword, _ := postgresPassword.Build()
-
- return hashedPassword
-}
-
// publishPgBouncerEvent publishes one of the events on the event stream
func publishPgBouncerEvent(eventType string, cluster *crv1.Pgcluster) {
var event events.EventInterface
@@ -932,26 +871,6 @@ func publishPgBouncerEvent(eventType string, cluster *crv1.Pgcluster) {
}
}
-// setPostgreSQLPassword updates the pgBouncer password in the PostgreSQL
-// cluster by executing into the primary Pod and changing it
-func setPostgreSQLPassword(clientset kubernetes.Interface, restconfig *rest.Config, pod *v1.Pod, port, password string) error {
- log.Debug("set pgbouncer password in PostgreSQL")
-
- // we use the PostgreSQL "md5" hashing mechanism here to pre-hash the
- // password. This is semi-hard coded but is now prepped for SCRAM as a
- // password type can be passed in. Almost to SCRAM!
- sqlpgBouncerPassword := makePostgresPassword(pgpassword.MD5, password)
-
- if err := util.SetPostgreSQLPassword(clientset, restconfig, pod,
- port, crv1.PGUserPgBouncer, sqlpgBouncerPassword, sqlEnableLogin); err != nil {
- log.Error(err)
- return err
- }
-
- // and that's all!
- return nil
-}
-
// updatePgBouncerReplicas updates the pgBouncer Deployment with the number
// of replicas (Pods) that it should run. Presently, this is fairly naive, but
// as pgBouncer is "semi-stateful" we may want to improve upon this in the
diff --git a/internal/operator/cluster/pgbouncer_test.go b/internal/operator/cluster/pgbouncer_test.go
index b95d96828c..0784afa58c 100644
--- a/internal/operator/cluster/pgbouncer_test.go
+++ b/internal/operator/cluster/pgbouncer_test.go
@@ -18,7 +18,6 @@ package cluster
import (
"testing"
- pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
)
@@ -72,21 +71,3 @@ func TestIsPgBouncerTLSEnabled(t *testing.T) {
})
})
}
-
-func TestMakePostgresPassword(t *testing.T) {
-
- t.Run("md5", func(t *testing.T) {
- t.Run("valid", func(t *testing.T) {
- passwordType := pgpassword.MD5
- password := "datalake"
- expected := "md56294153764d389dc6830b6ce4f923cdb"
-
- actual := makePostgresPassword(passwordType, password)
-
- if actual != expected {
- t.Errorf("expected: %q actual: %q", expected, actual)
- }
- })
-
- })
-}
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index 2fb2277330..309f6d5689 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -94,7 +94,7 @@ type exporterTemplateFields struct {
PGOImagePrefix string
PgPort string
ExporterPort string
- CollectSecretName string
+ ExporterSecretName string
ContainerResources string
TLSOnly bool
}
@@ -357,46 +357,45 @@ func GetBadgerAddon(clientset kubernetes.Interface, namespace string, cluster *c
return ""
}
-func GetExporterAddon(clientset kubernetes.Interface, namespace string, spec *crv1.PgclusterSpec) string {
-
- if spec.Exporter {
- log.Debug("crunchy-postgres-exporter was found as a label on cluster create")
-
- log.Debugf("creating exporter secret for cluster %s", spec.Name)
- err := util.CreateSecret(clientset, spec.Name, spec.CollectSecretName, crv1.PGUserMonitor,
- Pgo.Cluster.PgmonitorPassword, namespace)
- if err != nil {
- log.Error(err)
- }
+// GetExporterAddon returns the template used to create an exporter container
+// for metrics. This is semi-legacy, but updated to match the current way of
+// handling this
+func GetExporterAddon(spec crv1.PgclusterSpec) string {
+ // do not execute if metrics are not enabled
+ if !spec.Exporter {
+ return ""
+ }
- exporterTemplateFields := exporterTemplateFields{}
- exporterTemplateFields.Name = spec.Name
- exporterTemplateFields.JobName = spec.Name
- exporterTemplateFields.PGOImageTag = Pgo.Pgo.PGOImageTag
- exporterTemplateFields.ExporterPort = spec.ExporterPort
- exporterTemplateFields.PGOImagePrefix = util.GetValueOrDefault(spec.PGOImagePrefix, Pgo.Pgo.PGOImagePrefix)
- exporterTemplateFields.PgPort = spec.Port
- exporterTemplateFields.CollectSecretName = spec.CollectSecretName
- exporterTemplateFields.ContainerResources = GetResourcesJSON(spec.ExporterResources, spec.ExporterLimits)
+ log.Debug("crunchy-postgres-exporter was found as a label on cluster create")
+
+ exporterTemplateFields := exporterTemplateFields{
+ ContainerResources: GetResourcesJSON(spec.ExporterResources, spec.ExporterLimits),
+ ExporterPort: spec.ExporterPort,
+ ExporterSecretName: util.GenerateExporterSecretName(spec.ClusterName),
+ JobName: spec.Name,
+ Name: spec.Name,
+ PGOImagePrefix: util.GetValueOrDefault(spec.PGOImagePrefix, Pgo.Pgo.PGOImagePrefix),
+ PGOImageTag: Pgo.Pgo.PGOImageTag,
+ PgPort: spec.Port,
// see if TLS only is set. however, this also requires checking to see if
// TLS is enabled in this case. The reason is that even if TLS is only just
// enabled, because the connection is over an internal interface, we do not
// need to have the overhead of a TLS connection
- exporterTemplateFields.TLSOnly = spec.TLS.IsTLSEnabled() && spec.TLSOnly
+ TLSOnly: (spec.TLS.IsTLSEnabled() && spec.TLSOnly),
+ }
- var exporterDoc bytes.Buffer
- err = config.ExporterTemplate.Execute(&exporterDoc, exporterTemplateFields)
- if err != nil {
- log.Error(err.Error())
- return ""
- }
+ if CRUNCHY_DEBUG {
+ _ = config.ExporterTemplate.Execute(os.Stdout, exporterTemplateFields)
+ }
- if CRUNCHY_DEBUG {
- config.ExporterTemplate.Execute(os.Stdout, exporterTemplateFields)
- }
- return exporterDoc.String()
+ exporterDoc := bytes.Buffer{}
+
+ if err := config.ExporterTemplate.Execute(&exporterDoc, exporterTemplateFields); err != nil {
+ log.Error(err)
+ return ""
}
- return ""
+
+ return exporterDoc.String()
}
//consolidate with cluster.GetConfVolume
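The refactored `GetExporterAddon` above follows a common Go pattern: populate a fields struct, execute a template into a `bytes.Buffer`, and return an empty string on failure so the caller simply treats the addon as absent. A minimal stdlib sketch of the same pattern (the template text and field subset here are illustrative, not the operator's real exporter template):

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"text/template"
)

// exporterFields mirrors a few of the exporterTemplateFields used above;
// the JSON-ish template body is invented for illustration.
type exporterFields struct {
	Name               string
	ExporterSecretName string
	PgPort             string
}

var exporterTemplate = template.Must(template.New("exporter").Parse(
	`{"name":"{{.Name}}","secret":"{{.ExporterSecretName}}","port":"{{.PgPort}}"}`))

// renderExporter executes the template into a buffer; on error it logs and
// returns "", matching how GetExporterAddon signals "no addon" to its caller.
func renderExporter(fields exporterFields) string {
	doc := bytes.Buffer{}
	if err := exporterTemplate.Execute(&doc, fields); err != nil {
		log.Print(err)
		return ""
	}
	return doc.String()
}

func main() {
	fmt.Println(renderExporter(exporterFields{
		Name:               "hippo",
		ExporterSecretName: "hippo-exporter-secret",
		PgPort:             "5432",
	}))
}
```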
@@ -775,7 +774,7 @@ func GetPgmonitorEnvVars(cluster *crv1.Pgcluster) string {
}
fields := PgmonitorEnvVarsTemplateFields{
- ExporterSecret: cluster.Spec.CollectSecretName,
+ ExporterSecret: util.GenerateExporterSecretName(cluster.Name),
}
doc := bytes.Buffer{}
diff --git a/internal/operator/common.go b/internal/operator/common.go
index faeb64fcba..a3917dfc27 100644
--- a/internal/operator/common.go
+++ b/internal/operator/common.go
@@ -124,11 +124,6 @@ func Initialize(clientset kubernetes.Interface) {
log.Debugf("PGOImagePrefix set, using %s", Pgo.Pgo.PGOImagePrefix)
}
- if Pgo.Cluster.PgmonitorPassword == "" {
- log.Debug("pgo.yaml PgmonitorPassword not set, using default")
- Pgo.Cluster.PgmonitorPassword = "password"
- }
-
// In a RELATED_IMAGE_* world, this does not _need_ to be set, but our
// installer does set it up so we could be ok...
if Pgo.Pgo.PGOImageTag == "" {
diff --git a/internal/util/exporter.go b/internal/util/exporter.go
new file mode 100644
index 0000000000..d46ad8cf53
--- /dev/null
+++ b/internal/util/exporter.go
@@ -0,0 +1,29 @@
+package util
+
+/*
+ Copyright 2020 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+import "fmt"
+
+// exporterSecretFormat is the format of the name of the exporter secret, i.e.
+// "<clusterName>-exporter-secret"
+// #nosec G101
+const exporterSecretFormat = "%s-exporter-secret"
+
+// GenerateExporterSecretName returns the name of the secret that contains
+// information around a monitoring user
+func GenerateExporterSecretName(clusterName string) string {
+ return fmt.Sprintf(exporterSecretFormat, clusterName)
+}
diff --git a/internal/util/exporter_test.go b/internal/util/exporter_test.go
new file mode 100644
index 0000000000..ffbde3a6e1
--- /dev/null
+++ b/internal/util/exporter_test.go
@@ -0,0 +1,32 @@
+package util
+
+/*
+ Copyright 2020 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+import (
+ "testing"
+)
+
+func TestGenerateExporterSecretName(t *testing.T) {
+ t.Run("success", func(t *testing.T) {
+ clusterName := "hippo"
+ expected := clusterName + "-exporter-secret"
+ actual := GenerateExporterSecretName(clusterName)
+
+ if expected != actual {
+ t.Fatalf("expected %q actual %q", expected, actual)
+ }
+ })
+}
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index 63d3914f0f..fb121ea835 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -111,7 +111,6 @@ type PgclusterSpec struct {
UserSecretName string `json:"usersecretname"`
RootSecretName string `json:"rootsecretname"`
PrimarySecretName string `json:"primarysecretname"`
- CollectSecretName string `json:"collectSecretName"`
Status string `json:"status"`
CustomConfig string `json:"customconfig"`
UserLabels map[string]string `json:"userlabels"`
diff --git a/pkg/apis/crunchydata.com/v1/common.go b/pkg/apis/crunchydata.com/v1/common.go
index 723c2a0a60..242025f038 100644
--- a/pkg/apis/crunchydata.com/v1/common.go
+++ b/pkg/apis/crunchydata.com/v1/common.go
@@ -31,9 +31,6 @@ const UserSecretSuffix = "-secret"
// PrimarySecretSuffix ...
const PrimarySecretSuffix = "-primaryuser-secret"
-// ExporterSecretSuffix ...
-const ExporterSecretSuffix = "-exporter-secret"
-
// StorageExisting ...
const StorageExisting = "existing"
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index 69bbdafe49..c83ffb6e25 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -356,6 +356,16 @@ const (
UpdateClusterAutofailDisable
)
+// UpdateClusterMetrics determines whether or not to enable/disable the metrics
+// collection sidecar in a cluster
+type UpdateClusterMetrics int
+
+const (
+ UpdateClusterMetricsDoNothing UpdateClusterMetrics = iota
+ UpdateClusterMetricsEnable
+ UpdateClusterMetricsDisable
+)
+
// UpdateClusterStandbyStatus defines the types for updating the Standby status
type UpdateClusterStandbyStatus int
@@ -426,10 +436,13 @@ type UpdateClusterRequest struct {
// MemoryRequest is the value of how much RAM should be requested for
// deploying the PostgreSQL cluster
MemoryRequest string
- Standby UpdateClusterStandbyStatus
- Startup bool
- Shutdown bool
- Tablespaces []ClusterTablespaceDetail
+ // Metrics allows for the enabling/disabling of the metrics sidecar. This can
+ // cause downtime and triggers a rolling update
+ Metrics UpdateClusterMetrics
+ Standby UpdateClusterStandbyStatus
+ Startup bool
+ Shutdown bool
+ Tablespaces []ClusterTablespaceDetail
}
// UpdateClusterResponse ...
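The `UpdateClusterMetrics` type above is a standard three-valued `iota` enum: do nothing, enable, or disable. A hypothetical sketch of how mutually exclusive enable/disable CLI flags could map onto it (the helper name and flag handling are assumptions for illustration, not the operator's actual CLI code):

```go
package main

import "fmt"

// UpdateClusterMetrics mirrors the enum added in clustermsgs.go.
type UpdateClusterMetrics int

const (
	UpdateClusterMetricsDoNothing UpdateClusterMetrics = iota
	UpdateClusterMetricsEnable
	UpdateClusterMetricsDisable
)

// metricsUpdateFromFlags is a hypothetical helper: absent flags mean
// "leave the cluster alone", and setting both flags is rejected.
func metricsUpdateFromFlags(enable, disable bool) (UpdateClusterMetrics, error) {
	switch {
	case enable && disable:
		return UpdateClusterMetricsDoNothing,
			fmt.Errorf("enable and disable flags are mutually exclusive")
	case enable:
		return UpdateClusterMetricsEnable, nil
	case disable:
		return UpdateClusterMetricsDisable, nil
	}
	return UpdateClusterMetricsDoNothing, nil
}

func main() {
	m, _ := metricsUpdateFromFlags(true, false)
	fmt.Println(m == UpdateClusterMetricsEnable)
}
```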
From 021effba8eb56eeb06f75fe7b43db7a91404ceb1 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 16 Dec 2020 18:37:28 -0500
Subject: [PATCH 049/276] Allow for the metrics agent password to be rotated
This introduces the `--exporter-rotate-password` flag to
`pgo update cluster` so that the metrics collection password
can be rotated.
---
cmd/pgo/cmd/cluster.go | 1 +
cmd/pgo/cmd/update.go | 4 ++
.../reference/pgo_update_cluster.md | 3 +-
.../apiserver/clusterservice/clusterimpl.go | 13 +++-
.../pgcluster/pgclustercontroller.go | 5 +-
internal/controller/pod/promotionhandler.go | 8 +++
internal/operator/cluster/common.go | 34 +++++++++--
internal/operator/cluster/common_test.go | 2 +-
internal/operator/cluster/exporter.go | 35 ++++++++++-
internal/operator/cluster/pgbouncer.go | 59 ++++---------------
internal/operator/clusterutilities.go | 2 -
pkg/apiservermsgs/clustermsgs.go | 3 +
12 files changed, 108 insertions(+), 61 deletions(-)
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index b4bf417697..5c8a318fc3 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -597,6 +597,7 @@ func updateCluster(args []string, ns string) {
r.ExporterCPULimit = ExporterCPULimit
r.ExporterMemoryRequest = ExporterMemoryRequest
r.ExporterMemoryLimit = ExporterMemoryLimit
+ r.ExporterRotatePassword = ExporterRotatePassword
r.Clustername = args
r.Startup = Startup
r.Shutdown = Shutdown
diff --git a/cmd/pgo/cmd/update.go b/cmd/pgo/cmd/update.go
index 303f64b74b..140c06cecd 100644
--- a/cmd/pgo/cmd/update.go
+++ b/cmd/pgo/cmd/update.go
@@ -38,6 +38,9 @@ var (
EnableMetrics bool
// ExpireUser sets a user to having their password expired
ExpireUser bool
+ // ExporterRotatePassword rotates the password for the designated PostgreSQL
+ // user for handling metrics scraping
+ ExporterRotatePassword bool
// PgoroleChangePermissions does something with the pgouser access controls,
// I'm not sure but I wanted this at least to be documented
PgoroleChangePermissions bool
@@ -115,6 +118,7 @@ func init() {
"the Crunchy Postgres Exporter sidecar container.")
UpdateClusterCmd.Flags().BoolVar(&EnableMetrics, "enable-metrics", false,
"Enable the metrics collection sidecar. May cause brief downtime.")
+ UpdateClusterCmd.Flags().BoolVar(&ExporterRotatePassword, "exporter-rotate-password", false, "Used to rotate the password for the metrics collection agent.")
UpdateClusterCmd.Flags().BoolVarP(&EnableStandby, "enable-standby", "", false,
"Enables standby mode in the cluster(s) specified.")
UpdateClusterCmd.Flags().BoolVar(&Startup, "startup", false, "Restart the database cluster if it "+
diff --git a/docs/content/pgo-client/reference/pgo_update_cluster.md b/docs/content/pgo-client/reference/pgo_update_cluster.md
index 8a5af0e066..e480bf767f 100644
--- a/docs/content/pgo-client/reference/pgo_update_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_update_cluster.md
@@ -46,6 +46,7 @@ pgo update cluster [flags]
--exporter-cpu-limit string Set the number of millicores to limit for CPU for the Crunchy Postgres Exporter sidecar container, e.g. "100m" or "0.1".
--exporter-memory string Set the amount of memory to request for the Crunchy Postgres Exporter sidecar container.
--exporter-memory-limit string Set the amount of memory to limit for the Crunchy Postgres Exporter sidecar container.
+ --exporter-rotate-password Used to rotate the password for the metrics collection agent.
-h, --help help for cluster
--memory string Set the amount of RAM to request, e.g. 1GiB.
--memory-limit string Set the amount of RAM to limit, e.g. 1GiB.
@@ -86,4 +87,4 @@ pgo update cluster [flags]
* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster
-###### Auto generated by spf13/cobra on 14-Dec-2020
+###### Auto generated by spf13/cobra on 16-Dec-2020
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index d1a78cba41..d87c01da39 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -29,6 +29,7 @@ import (
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/kubeapi"
"github.com/crunchydata/postgres-operator/internal/operator/backrest"
+ clusteroperator "github.com/crunchydata/postgres-operator/internal/operator/cluster"
"github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
@@ -1891,7 +1892,7 @@ func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterRespons
return response
}
- for _, cluster := range clusterList.Items {
+ for i, cluster := range clusterList.Items {
//set autofail=true or false on each pgcluster CRD
// Make the change based on the value of Autofail vis-a-vis UpdateClusterAutofailStatus
@@ -2026,6 +2027,16 @@ func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterRespons
cluster.Spec.ExporterResources[v1.ResourceMemory] = quantity
}
+ // an odd one: if rotating the password is requested, we can perform this
+ // as an operational action and handle it here.
+ // if it fails, just log the error and continue.
+ if cluster.Spec.Exporter && request.ExporterRotatePassword {
+ if err := clusteroperator.RotateExporterPassword(apiserver.Clientset, apiserver.RESTConfig,
+ &clusterList.Items[i]); err != nil {
+ log.Error(err)
+ }
+ }
+
// set any user-defined annotations
// go through each annotation grouping and make the appropriate changes in the
// equivalent cluster annotation group
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index 72ee582be3..24a6b78a6b 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -243,10 +243,9 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
// determine if the sidecar is being enabled/disabled and take the precursor
// actions before the deployment template is modified
- switch newcluster.Spec.Exporter {
- case true:
+ if newcluster.Spec.Exporter {
err = clusteroperator.AddExporter(c.Client, c.Client.Config, newcluster)
- case false:
+ } else {
err = clusteroperator.RemoveExporter(c.Client, c.Client.Config, newcluster)
}
diff --git a/internal/controller/pod/promotionhandler.go b/internal/controller/pod/promotionhandler.go
index 123f422d46..2585f50653 100644
--- a/internal/controller/pod/promotionhandler.go
+++ b/internal/controller/pod/promotionhandler.go
@@ -106,6 +106,14 @@ func (c *Controller) handleStandbyPromotion(newPod *apiv1.Pod, cluster crv1.Pgcl
return err
}
+ // rotate the exporter password if the metrics sidecar is enabled
+ if cluster.Spec.Exporter {
+ if err := clusteroperator.RotateExporterPassword(c.Client, c.Client.Config, &cluster); err != nil {
+ log.Error(err)
+ return err
+ }
+ }
+
// rotate the pgBouncer passwords if pgbouncer is enabled within the cluster
if cluster.Spec.PgBouncer.Enabled() {
if err := clusteroperator.RotatePgBouncerPassword(c.Client, c.Client.Config, &cluster); err != nil {
diff --git a/internal/operator/cluster/common.go b/internal/operator/cluster/common.go
index 250662846f..ebdc5adfec 100644
--- a/internal/operator/cluster/common.go
+++ b/internal/operator/cluster/common.go
@@ -46,8 +46,8 @@ const (
sqlEnableLogin = `ALTER ROLE %s PASSWORD %s LOGIN;`
)
-// disablePostgresLogin disables the ability for a PostgreSQL user to log in
-func disablePostgresLogin(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster, username string) error {
+// disablePostgreSQLLogin disables the ability for a PostgreSQL user to log in
+func disablePostgreSQLLogin(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster, username string) error {
log.Debugf("disable user %q on cluster %q", username, cluster.Name)
// disable the pgbouncer user in the PostgreSQL cluster.
// first, get the primary pod. If we cannot do this, let's consider it an
@@ -81,9 +81,9 @@ func generatePassword() (string, error) {
return util.GeneratePassword(generatedPasswordLength)
}
-// makePostgresPassword creates the expected hash for a password type for a
+// makePostgreSQLPassword creates the expected hash for a password type for a
// PostgreSQL password
-func makePostgresPassword(passwordType pgpassword.PasswordType, username, password string) string {
+func makePostgreSQLPassword(passwordType pgpassword.PasswordType, username, password string) string {
// get the PostgreSQL password generated based on the password type
// as all of these values are valid, this does not error
postgresPassword, _ := pgpassword.NewPostgresPassword(passwordType, username, password)
@@ -94,6 +94,30 @@ func makePostgresPassword(passwordType pgpassword.PasswordType, username, passwo
return hashedPassword
}
+// rotatePostgreSQLPassword generates a new password for the specified
+// username/Secret pair and saves it both in PostgreSQL and the Secret itself
+func rotatePostgreSQLPassword(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster,
+ username string) (string, error) {
+ // determine if we are able to access the primary Pod
+ pod, err := util.GetPrimaryPod(clientset, cluster)
+ if err != nil {
+ return "", err
+ }
+
+ // generate a new password
+ password, err := generatePassword()
+ if err != nil {
+ return "", err
+ }
+
+ // update the PostgreSQL instance with the new password.
+ if err := setPostgreSQLPassword(clientset, restconfig, pod, cluster.Spec.Port, username, password); err != nil {
+ return "", err
+ }
+
+ return password, err
+}
+
// setPostgreSQLPassword updates the password of a user in a PostgreSQL
// cluster by executing into the Pod provided (i.e. a primary) and changing it
func setPostgreSQLPassword(clientset kubernetes.Interface, restconfig *rest.Config, pod *v1.Pod, port,
@@ -103,7 +127,7 @@ func setPostgreSQLPassword(clientset kubernetes.Interface, restconfig *rest.Conf
// we use the PostgreSQL "md5" hashing mechanism here to pre-hash the
// password. This is semi-hard coded but is now prepped for SCRAM as a
// password type can be passed in. Almost to SCRAM!
- passwordHash := makePostgresPassword(pgpassword.MD5, username, password)
+ passwordHash := makePostgreSQLPassword(pgpassword.MD5, username, password)
if err := util.SetPostgreSQLPassword(clientset, restconfig, pod,
port, username, passwordHash, sqlEnableLogin); err != nil {
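The new `rotatePostgreSQLPassword` helper above orders its steps deliberately: generate a new password, apply it to the database first, and only then let the caller persist it to the Secret, so a failure leaves the old credentials intact. A simplified, dependency-free sketch of that flow (the `passwordStore` interface, character set, and length are invented stand-ins for the Kubernetes clientset and `util.GeneratePassword`):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// passwordStore is a stand-in for both "the database" and "the Secret".
type passwordStore interface {
	SetPassword(username, password string) error
}

// generatePassword builds a random alphanumeric password using crypto/rand.
func generatePassword(length int) (string, error) {
	const chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
	b := make([]byte, length)
	for i := range b {
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(chars))))
		if err != nil {
			return "", err
		}
		b[i] = chars[n.Int64()]
	}
	return string(b), nil
}

// rotatePassword updates the database before the Secret: if the database
// update fails, the Secret still holds the old, working credentials.
func rotatePassword(db, secret passwordStore, username string) (string, error) {
	password, err := generatePassword(24)
	if err != nil {
		return "", err
	}
	if err := db.SetPassword(username, password); err != nil {
		return "", err
	}
	if err := secret.SetPassword(username, password); err != nil {
		return "", err
	}
	return password, nil
}

type memStore map[string]string

func (m memStore) SetPassword(u, p string) error { m[u] = p; return nil }

func main() {
	db, sec := memStore{}, memStore{}
	pw, err := rotatePassword(db, sec, "ccp_monitoring")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(pw), db["ccp_monitoring"] == sec["ccp_monitoring"])
}
```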
diff --git a/internal/operator/cluster/common_test.go b/internal/operator/cluster/common_test.go
index 24f0423969..8b83becb80 100644
--- a/internal/operator/cluster/common_test.go
+++ b/internal/operator/cluster/common_test.go
@@ -29,7 +29,7 @@ func TestMakePostgresPassword(t *testing.T) {
password := "datalake"
expected := "md56294153764d389dc6830b6ce4f923cdb"
- actual := makePostgresPassword(passwordType, username, password)
+ actual := makePostgreSQLPassword(passwordType, username, password)
if actual != expected {
t.Errorf("expected: %q actual: %q", expected, actual)
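The expected value in the test above comes from PostgreSQL's md5 auth scheme, which stores `"md5" + hex(md5(password || username))`. A minimal sketch of that arithmetic (the helper name is illustrative; the real code routes through `pgpassword.NewPostgresPassword`):

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// md5Password inlines the MD5 branch of makePostgreSQLPassword:
// hash password concatenated with username, prefix with "md5".
func md5Password(username, password string) string {
	return fmt.Sprintf("md5%x", md5.Sum([]byte(password+username)))
}

func main() {
	// same inputs as TestMakePostgresPassword above
	fmt.Println(md5Password("pgbouncer", "datalake"))
}
```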
diff --git a/internal/operator/cluster/exporter.go b/internal/operator/cluster/exporter.go
index 957e47a7d1..1da55df006 100644
--- a/internal/operator/cluster/exporter.go
+++ b/internal/operator/cluster/exporter.go
@@ -99,7 +99,6 @@ func AddExporter(clientset kubernetes.Interface, restconfig *rest.Config, cluste
return err
}
- // after the secret if this is a standby, exit early
// this can't be installed if this is a standby, so abort if that's the case
if cluster.Spec.Standby {
return ErrStandbyNotAllowed
@@ -217,7 +216,7 @@ func RemoveExporter(clientset kubernetes.Interface, restconfig *rest.Config, clu
// if this is a standby cluster, return as we cannot execute any SQL
if !cluster.Spec.Standby {
// if this fails, warn and continue
- if err := disablePostgresLogin(clientset, restconfig, cluster, crv1.PGUserMonitor); err != nil {
+ if err := disablePostgreSQLLogin(clientset, restconfig, cluster, crv1.PGUserMonitor); err != nil {
log.Warn(err)
}
}
@@ -232,6 +231,38 @@ func RemoveExporter(clientset kubernetes.Interface, restconfig *rest.Config, clu
return nil
}
+// RotateExporterPassword rotates the password for the monitoring PostgreSQL
+// user
+func RotateExporterPassword(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error {
+ ctx := context.TODO()
+
+ // let's also go ahead and get the secret that contains the exporter
+ // information. If we can't find the secret, we're basically done here
+ secretName := util.GenerateExporterSecretName(cluster.Name)
+ secret, err := clientset.CoreV1().Secrets(cluster.Namespace).Get(ctx, secretName, metav1.GetOptions{})
+ if err != nil {
+ return err
+ }
+
+ // update the password on the PostgreSQL instance
+ password, err := rotatePostgreSQLPassword(clientset, restconfig, cluster, crv1.PGUserMonitor)
+ if err != nil {
+ return err
+ }
+
+ // next, update the password field of the secret.
+ secret.Data["password"] = []byte(password)
+
+ // update the secret
+ if _, err := clientset.CoreV1().Secrets(cluster.Namespace).
+ Update(ctx, secret, metav1.UpdateOptions{}); err != nil {
+ return err
+ }
+
+ // and that's it - the changes will be propagated to the exporter sidecars
+ return nil
+}
+
// UpdateExporterSidecar either adds or removes the metrics sidecar from the
// cluster. This is meant to be used as a rolling update callback function
func UpdateExporterSidecar(cluster *crv1.Pgcluster, deployment *appsv1.Deployment) error {
diff --git a/internal/operator/cluster/pgbouncer.go b/internal/operator/cluster/pgbouncer.go
index a9a6d5ae2a..0a78e305f2 100644
--- a/internal/operator/cluster/pgbouncer.go
+++ b/internal/operator/cluster/pgbouncer.go
@@ -104,11 +104,9 @@ const (
sqlGetDatabasesForPgBouncer = `SELECT datname FROM pg_catalog.pg_database WHERE datname NOT IN ('template0') AND datallowconn;`
)
-var (
- // sqlUninstallPgBouncer provides the final piece of SQL to uninstall
- // pgbouncer, which is to remove the user
- sqlUninstallPgBouncer = fmt.Sprintf(`DROP ROLE "%s";`, crv1.PGUserPgBouncer)
-)
+// sqlUninstallPgBouncer provides the final piece of SQL to uninstall
+// pgbouncer, which is to remove the user
+var sqlUninstallPgBouncer = fmt.Sprintf(`DROP ROLE "%s";`, crv1.PGUserPgBouncer)
// AddPgbouncer contains the various functions that are used to add a pgBouncer
// Deployment to a PostgreSQL cluster
@@ -120,7 +118,6 @@ func AddPgbouncer(clientset kubernetes.Interface, restconfig *rest.Config, clust
// get the primary pod, which is needed to update the password for the
// pgBouncer administrative user
pod, err := util.GetPrimaryPod(clientset, cluster)
-
if err != nil {
return err
}
@@ -146,11 +143,9 @@ func AddPgbouncer(clientset kubernetes.Interface, restconfig *rest.Config, clust
if !cluster.Spec.Standby {
secretName := util.GeneratePgBouncerSecretName(cluster.Name)
pgBouncerPassword, err := util.GetPasswordFromSecret(clientset, cluster.Namespace, secretName)
-
if err != nil {
// set the password that will be used for the "pgbouncer" PostgreSQL account
newPassword, err := generatePassword()
-
if err != nil {
return err
}
@@ -267,22 +262,6 @@ func DeletePgbouncer(clientset kubernetes.Interface, restconfig *rest.Config, cl
func RotatePgBouncerPassword(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error {
ctx := context.TODO()
- // determine if we are able to access the primary Pod
- primaryPod, err := util.GetPrimaryPod(clientset, cluster)
-
- if err != nil {
- return err
- }
-
- // let's also go ahead and get the secret that contains the pgBouncer
- // information. If we can't find the secret, we're basically done here
- secretName := util.GeneratePgBouncerSecretName(cluster.Name)
- secret, err := clientset.CoreV1().Secrets(cluster.Namespace).Get(ctx, secretName, metav1.GetOptions{})
-
- if err != nil {
- return err
- }
-
// there are a few steps that must occur in order for the password to be
// successfully rotated:
//
@@ -294,16 +273,17 @@ func RotatePgBouncerPassword(clientset kubernetes.Interface, restconfig *rest.Co
// ...wouldn't it be nice if we could run this in a transaction? rolling back
// is hard :(
- // first, generate a new password
- password, err := generatePassword()
-
+ // let's also go ahead and get the secret that contains the pgBouncer
+ // information. If we can't find the secret, we're basically done here
+ secretName := util.GeneratePgBouncerSecretName(cluster.Name)
+ secret, err := clientset.CoreV1().Secrets(cluster.Namespace).Get(ctx, secretName, metav1.GetOptions{})
if err != nil {
return err
}
- // next, update the PostgreSQL primary with the new password. If this fails
- // we definitely return an error
- if err := setPostgreSQLPassword(clientset, restconfig, primaryPod, cluster.Spec.Port, crv1.PGUserPgBouncer, password); err != nil {
+ // update the password on the PostgreSQL instance
+ password, err := rotatePostgreSQLPassword(clientset, restconfig, cluster, crv1.PGUserPgBouncer)
+ if err != nil {
return err
}
@@ -312,7 +292,7 @@ func RotatePgBouncerPassword(clientset kubernetes.Interface, restconfig *rest.Co
// PostgreSQL to perform its authentication
secret.Data["password"] = []byte(password)
secret.Data["users.txt"] = util.GeneratePgBouncerUsersFileBytes(
- makePostgresPassword(pgpassword.MD5, crv1.PGUserPgBouncer, password))
+ makePostgreSQLPassword(pgpassword.MD5, crv1.PGUserPgBouncer, password))
// update the secret
if _, err := clientset.CoreV1().Secrets(cluster.Namespace).
@@ -351,14 +331,12 @@ func UninstallPgBouncer(clientset kubernetes.Interface, restconfig *rest.Config,
// determine if we are able to access the primary Pod. If not, then the
// journey ends right here
pod, err := util.GetPrimaryPod(clientset, cluster)
-
if err != nil {
return err
}
// get the list of databases that we need to scan through
databases, err := getPgBouncerDatabases(clientset, restconfig, pod, cluster.Spec.Port)
-
if err != nil {
return err
}
@@ -379,7 +357,6 @@ func UninstallPgBouncer(clientset kubernetes.Interface, restconfig *rest.Config,
// exec into the pod to run the query
_, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset,
cmd, "database", pod.Name, pod.ObjectMeta.Namespace, sql)
-
// if there is an error executing the command, log the error message from
// stderr and return the error
if err != nil {
@@ -439,7 +416,6 @@ func UpdatePgBouncerAnnotations(clientset kubernetes.Interface, cluster *crv1.Pg
// get a list of all of the instance deployments for the cluster
deployment, err := getPgBouncerDeployment(clientset, cluster)
-
if err != nil {
return err
}
@@ -473,7 +449,6 @@ func checkPgBouncerInstall(clientset kubernetes.Interface, restconfig *rest.Conf
// exec into the pod to run the query
stdout, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset,
cmd, "database", pod.Name, pod.ObjectMeta.Namespace, sql)
-
// if there is an error executing the command, log the error message from
// stderr and return the error
if err != nil {
@@ -506,7 +481,6 @@ func createPgbouncerConfigMap(clientset kubernetes.Interface, cluster *crv1.Pgcl
// generate the pgbouncer.ini information
pgBouncerConf, err := generatePgBouncerConf(cluster)
-
if err != nil {
log.Error(err)
return err
@@ -514,7 +488,6 @@ func createPgbouncerConfigMap(clientset kubernetes.Interface, cluster *crv1.Pgcl
// generate the pgbouncer HBA file
pgbouncerHBA, err := generatePgBouncerHBA(cluster)
-
if err != nil {
log.Error(err)
return err
@@ -644,7 +617,7 @@ func createPgbouncerSecret(clientset kubernetes.Interface, cluster *crv1.Pgclust
Data: map[string][]byte{
"password": []byte(password),
"users.txt": util.GeneratePgBouncerUsersFileBytes(
- makePostgresPassword(pgpassword.MD5, crv1.PGUserPgBouncer, password)),
+ makePostgreSQLPassword(pgpassword.MD5, crv1.PGUserPgBouncer, password)),
},
}
@@ -684,7 +657,7 @@ func createPgBouncerService(clientset kubernetes.Interface, cluster *crv1.Pgclus
// disable the "pgbouncer" role from being able to log in. It keeps the
// artifacts that were created during normal pgBouncer operation
func disablePgBouncer(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error {
- return disablePostgresLogin(clientset, restconfig, cluster, crv1.PGUserPgBouncer)
+ return disablePostgreSQLLogin(clientset, restconfig, cluster, crv1.PGUserPgBouncer)
}
// execPgBouncerScript runs a script pertaining to the management of pgBouncer
@@ -695,7 +668,6 @@ func execPgBouncerScript(clientset kubernetes.Interface, restconfig *rest.Config
// exec into the pod to run the query
_, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset,
cmd, "database", pod.Name, pod.ObjectMeta.Namespace, nil)
-
// if there is an error executing the command, log the error as a warning
// that it failed, and continue. It's hard to rollback from this one :\
if err != nil {
@@ -773,7 +745,6 @@ func getPgBouncerDatabases(clientset kubernetes.Interface, restconfig *rest.Conf
// exec into the pod to run the query
stdout, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset,
cmd, "database", pod.Name, pod.ObjectMeta.Namespace, sql)
-
// if there is an error executing the command, log the error message from
// stderr and return the error
if err != nil {
@@ -796,7 +767,6 @@ func getPgBouncerDeployment(clientset kubernetes.Interface, cluster *crv1.Pgclus
pgbouncerDeploymentName := fmt.Sprintf(pgBouncerDeploymentFormat, cluster.Name)
deployment, err := clientset.AppsV1().Deployments(cluster.Namespace).Get(ctx, pgbouncerDeploymentName, metav1.GetOptions{})
-
if err != nil {
return nil, err
}
@@ -809,7 +779,6 @@ func getPgBouncerDeployment(clientset kubernetes.Interface, cluster *crv1.Pgclus
func installPgBouncer(clientset kubernetes.Interface, restconfig *rest.Config, pod *v1.Pod, port string) error {
// get the list of databases that we need to scan through
databases, err := getPgBouncerDatabases(clientset, restconfig, pod, port)
-
if err != nil {
return err
}
@@ -881,7 +850,6 @@ func updatePgBouncerReplicas(clientset kubernetes.Interface, cluster *crv1.Pgclu
// get the pgBouncer deployment so the resources can be updated
deployment, err := getPgBouncerDeployment(clientset, cluster)
-
if err != nil {
return err
}
@@ -907,7 +875,6 @@ func updatePgBouncerResources(clientset kubernetes.Interface, cluster *crv1.Pgcl
// get the pgBouncer deployment so the resources can be updated
deployment, err := getPgBouncerDeployment(clientset, cluster)
-
if err != nil {
return err
}
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index 309f6d5689..8df208a7ad 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -366,8 +366,6 @@ func GetExporterAddon(spec crv1.PgclusterSpec) string {
return ""
}
- log.Debug("crunchy-postgres-exporter was found as a label on cluster create")
-
exporterTemplateFields := exporterTemplateFields{
ContainerResources: GetResourcesJSON(spec.ExporterResources, spec.ExporterLimits),
ExporterPort: spec.ExporterPort,
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index c83ffb6e25..251e32c7cd 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -423,6 +423,9 @@ type UpdateClusterRequest struct {
// ExporterMemoryRequest, if specified, is the value of how much RAM should
// be requested for the Crunchy Postgres Exporter instance.
ExporterMemoryRequest string
+ // ExporterRotatePassword, if specified, rotates the password of the metrics
+ // collection agent, i.e. the "ccp_monitoring" user.
+ ExporterRotatePassword bool
// CPULimit is the value of the max CPU utilization for a Pod that has a
// PostgreSQL cluster
CPULimit string
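The new `ExporterRotatePassword` field rides along on the existing update request. A minimal sketch using a trimmed-down mirror of the struct — the real `apiservermsgs.UpdateClusterRequest` carries many more fields, and the `Clustername` field here is an assumption for illustration:

```go
package main

import "fmt"

// updateClusterRequest mirrors only the fields relevant here; the real
// struct in pkg/apiservermsgs has many more.
type updateClusterRequest struct {
	Clustername            []string
	ExporterMemoryRequest  string
	ExporterRotatePassword bool
}

func main() {
	// Ask the API server to rotate the ccp_monitoring credential while
	// leaving the exporter memory request unchanged.
	req := updateClusterRequest{
		Clustername:            []string{"hippo"},
		ExporterRotatePassword: true,
	}
	fmt.Printf("%+v\n", req)
}
```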
From 7b51d0df674e089b7f0d8f499edd5d093b83621e Mon Sep 17 00:00:00 2001
From: andrewlecuyer <43458182+andrewlecuyer@users.noreply.github.com>
Date: Fri, 18 Dec 2020 07:30:08 -0600
Subject: [PATCH 050/276] Wait for PG Deployment Deletion on Restore
Since the primary PVC for the cluster is now retained during an
in-place PostgreSQL cluster restore in support of pgBackRest delta
restores, when preparing a cluster for a restore we can no longer rely
on the deletion of all PVCs as an indicator that the 'config' and
'leader' ConfigMaps created by Patroni can be removed. Therefore,
the Operator now specifically waits for all Deployments to be
successfully removed prior to deleting these resources.
Issue: [ch9926]
---
internal/operator/backrest/restore.go | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/internal/operator/backrest/restore.go b/internal/operator/backrest/restore.go
index 8107bcc09f..1f23802deb 100644
--- a/internal/operator/backrest/restore.go
+++ b/internal/operator/backrest/restore.go
@@ -184,7 +184,24 @@ func PrepareClusterForRestore(clientset kubeapi.Interface, cluster *crv1.Pgclust
}); err != nil {
return nil, err
}
- log.Debugf("restore workflow: deleted primary and replicas %v", pgInstances)
+ log.Debugf("restore workflow: deleted primary and replica deployments for cluster %s",
+ clusterName)
+
+ // Wait for all primary and replica deployments to be removed. If unable to verify that all
+ // deployments have been removed, then the restore cannot proceed and the function returns.
+ if err := wait.Poll(time.Second/2, time.Minute*3, func() (bool, error) {
+ for _, deployment := range pgInstances.Items {
+ if _, err := clientset.AppsV1().Deployments(namespace).
+ Get(ctx, deployment.GetName(), metav1.GetOptions{}); err == nil || !kerrors.IsNotFound(err) {
+ return false, nil
+ }
+ }
+ return true, nil
+ }); err != nil {
+ return nil, err
+ }
+ log.Debugf("restore workflow: finished waiting for primary and replica deployments for "+
+ "cluster %s to be removed", clusterName)
// delete all existing jobs
deletePropagation := metav1.DeletePropagationBackground
From 475d57cbe6e6ef221d329829db1eb508ebf7fe0a Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Fri, 18 Dec 2020 15:00:24 -0500
Subject: [PATCH 051/276] Add a "build" Makefile target
This is a convenience for development, allowing all of the Golang
binaries to be built from a single target.
---
Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/Makefile b/Makefile
index 1488c2dc7c..1411047cb5 100644
--- a/Makefile
+++ b/Makefile
@@ -112,6 +112,8 @@ deployoperator:
#======= Binary builds =======
+build: build-postgres-operator build-pgo-apiserver build-pgo-client build-pgo-rmdata build-pgo-scheduler
+
build-pgo-apiserver:
$(GO_BUILD) -o bin/apiserver ./cmd/apiserver
From 693a78691d116786c7cb13eb1b56ad414dcb6604 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Fri, 18 Dec 2020 16:35:02 -0500
Subject: [PATCH 052/276] Fix several edge case out-of-index panics
While these should rarely, if ever, happen, the world of distributed
computing is unpredictable and we should ensure our code can fail
gracefully in these scenarios.
---
internal/operator/cluster/failoverlogic.go | 2 ++
internal/operator/clusterutilities.go | 3 +++
2 files changed, 5 insertions(+)
diff --git a/internal/operator/cluster/failoverlogic.go b/internal/operator/cluster/failoverlogic.go
index 6462002ad7..4ffa67e4d4 100644
--- a/internal/operator/cluster/failoverlogic.go
+++ b/internal/operator/cluster/failoverlogic.go
@@ -210,6 +210,8 @@ func RemovePrimaryOnRoleChangeTag(clientset kubernetes.Interface, restconfig *re
if err != nil {
log.Error(err)
return err
+ } else if len(pods.Items) == 0 {
+ return fmt.Errorf("no pods found for cluster %q", clusterName)
} else if len(pods.Items) > 1 {
log.Error("More than one primary found after completing the post-failover backup")
}
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index 8df208a7ad..030e706404 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -941,6 +941,9 @@ func UpdatePGHAConfigInitFlag(clientset kubernetes.Interface, initVal bool, clus
case err != nil:
return fmt.Errorf("unable to find the default pgha configMap found for cluster %s using selector %s, unable to set "+
"init value to false", clusterName, selector)
+ case len(configMapList.Items) == 0:
+ return fmt.Errorf("no pgha configMaps found for cluster %s using selector %s, unable to set "+
+ "init value to false", clusterName, selector)
case len(configMapList.Items) > 1:
return fmt.Errorf("more than one default pgha configMap found for cluster %s using selector %s, unable to set "+
"init value to false", clusterName, selector)
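The guard pattern added in this commit — check for an empty list before indexing into it — can be sketched in isolation. This is an illustrative example with hypothetical names, not the Operator's actual types:

```go
package main

import "fmt"

// pickPrimary returns the single expected primary from a pod list, failing
// gracefully instead of panicking on an out-of-index access when the list
// is empty.
func pickPrimary(pods []string, clusterName string) (string, error) {
	switch {
	case len(pods) == 0:
		return "", fmt.Errorf("no pods found for cluster %q", clusterName)
	case len(pods) > 1:
		// more than one candidate is unexpected but recoverable: warn and
		// fall through to the first entry, as the failover logic does
		fmt.Println("More than one primary found")
	}
	return pods[0], nil
}

func main() {
	if _, err := pickPrimary(nil, "hippo"); err != nil {
		fmt.Println(err)
	}
	primary, _ := pickPrimary([]string{"hippo-abc"}, "hippo")
	fmt.Println(primary)
}
```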
From c209b16c6d9a88a83349b05198676df516df693a Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Fri, 18 Dec 2020 16:36:12 -0500
Subject: [PATCH 053/276] Modify syntax for checking for recovery status via
SQL (#2133)
There were cases where this was failing due to too many quotes
being used, so this should avoid said issues.
Issue: [ch9981]
Issue: #2108
---
internal/controller/pod/promotionhandler.go | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/internal/controller/pod/promotionhandler.go b/internal/controller/pod/promotionhandler.go
index 2585f50653..e420af589c 100644
--- a/internal/controller/pod/promotionhandler.go
+++ b/internal/controller/pod/promotionhandler.go
@@ -36,9 +36,14 @@ import (
"k8s.io/client-go/rest"
)
+const (
+ // recoverySQL is just the SQL to figure out if Postgres is in recovery mode
+ recoverySQL = "SELECT pg_is_in_recovery();"
+)
+
var (
// isInRecoveryCommand is the command run to determine if postgres is in recovery
- isInRecoveryCMD []string = []string{"psql", "-t", "-c", "'SELECT pg_is_in_recovery();'", "-p"}
+ isInRecoveryCMD []string = []string{"psql", "-t", "-c", recoverySQL, "-p"}
// leaderStatusCMD is the command run to get the Patroni status for the primary
leaderStatusCMD []string = []string{"curl", fmt.Sprintf("localhost:%s/master",
From 2fba41ae0759a64fddfc746c85958c24b35f263f Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 19 Dec 2020 16:51:05 -0500
Subject: [PATCH 054/276] Bump Grafana & Prometheus versions for Postgres
Operator Monitoring
This moves Grafana to 6.7.5 and Prometheus to 2.23.0. Note that
this continues to use the upstream container images.
---
.../content/installation/metrics/metrics-configuration.md | 8 ++++----
.../metrics/ansible/roles/pgo-metrics/defaults/main.yml | 4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/docs/content/installation/metrics/metrics-configuration.md b/docs/content/installation/metrics/metrics-configuration.md
index 7d343480cf..559f9273ef 100644
--- a/docs/content/installation/metrics/metrics-configuration.md
+++ b/docs/content/installation/metrics/metrics-configuration.md
@@ -108,10 +108,10 @@ and tag as needed to use the RedHat certified containers:
| `alertmanager_image_tag` | v0.21.0 | **Required** | Configures the image tag to use for the Alertmanager container. |
| `grafana_image_prefix` | grafana | **Required** | Configures the image prefix to use for the Grafana container.|
| `grafana_image_name` | grafana | **Required** | Configures the image name to use for the Grafana container. |
-| `grafana_image_tag` | 6.7.4 | **Required** | Configures the image tag to use for the Grafana container. |
+| `grafana_image_tag` | 6.7.5 | **Required** | Configures the image tag to use for the Grafana container. |
| `prometheus_image_prefix` | prom | **Required** | Configures the image prefix to use for the Prometheus container. |
| `prometheus_image_name` | prometheus | **Required** | Configures the image name to use for the Prometheus container. |
-| `prometheus_image_tag` | v2.20.0 | **Required** | Configures the image tag to use for the Prometheus container. |
+| `prometheus_image_tag` | v2.23.0 | **Required** | Configures the image tag to use for the Prometheus container. |
Additionally, these same settings can be utilized as needed to support custom image names,
tags, and additional container registries.
@@ -124,7 +124,7 @@ PostgreSQL Operator Monitoring infrastructure:
| Name | Default | Required | Description |
|------|---------|----------|-------------|
-| `pgo_image_prefix` | registry.developers.crunchydata.com/crunchydata | **Required** | Configures the image prefix used by the `pgo-deployer` container |
+| `pgo_image_prefix` | registry.developers.crunchydata.com/crunchydata | **Required** | Configures the image prefix used by the `pgo-deployer` container |
| `pgo_image_tag` | {{< param centosBase >}}-{{< param operatorVersion >}} | **Required** | Configures the image tag used by the `pgo-deployer` container |
-[k8s-service-type]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
\ No newline at end of file
+[k8s-service-type]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
diff --git a/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml b/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml
index 775d6691f5..a16a017d63 100644
--- a/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml
+++ b/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml
@@ -29,7 +29,7 @@ grafana_admin_password: ""
grafana_install: "true"
grafana_image_prefix: "grafana"
grafana_image_name: "grafana"
-grafana_image_tag: "6.7.4"
+grafana_image_tag: "6.7.5"
grafana_port: "3000"
grafana_service_name: "crunchy-grafana"
grafana_service_type: "ClusterIP"
@@ -45,7 +45,7 @@ prometheus_custom_config: ""
prometheus_install: "true"
prometheus_image_prefix: "prom"
prometheus_image_name: "prometheus"
-prometheus_image_tag: "v2.20.0"
+prometheus_image_tag: "v2.23.0"
prometheus_port: "9090"
prometheus_service_name: "crunchy-prometheus"
prometheus_service_type: "ClusterIP"
From 4e5323659f3d4bbf4e39cb8ed8f210d32a860812 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 19 Dec 2020 17:15:04 -0500
Subject: [PATCH 055/276] Provide more documentation on metrics enablement
This adds examples to the monitoring architecture and tutorial
documentation showing how to enable metrics collection on an
existing PostgreSQL cluster.
---
docs/content/architecture/monitoring.md | 8 ++++++++
docs/content/tutorial/customize-cluster.md | 6 ++++++
2 files changed, 14 insertions(+)
diff --git a/docs/content/architecture/monitoring.md b/docs/content/architecture/monitoring.md
index 75c9b0eb5f..9258b18f33 100644
--- a/docs/content/architecture/monitoring.md
+++ b/docs/content/architecture/monitoring.md
@@ -35,6 +35,14 @@ command, for example:
pgo create cluster --metrics hippo
```
+If you have already created a cluster and want to add metrics collection to it,
+you can use the `--enable-metrics` flag as part of the [`pgo update cluster`]({{< relref "pgo-client/reference/pgo_update_cluster.md" >}})
+command, for example:
+
+```
+pgo update cluster --enable-metrics hippo
+```
+
## Components
The [PostgreSQL Operator Monitoring]({{< relref "installation/metrics/_index.md" >}})
diff --git a/docs/content/tutorial/customize-cluster.md b/docs/content/tutorial/customize-cluster.md
index 2fee92bb0a..3b30d0f0f2 100644
--- a/docs/content/tutorial/customize-cluster.md
+++ b/docs/content/tutorial/customize-cluster.md
@@ -30,6 +30,12 @@ pgo create cluster hippo --metrics
Note that the `--metrics` flag just enables a sidecar that can be scraped. You will need to install the [monitoring stack]({{< relref "installation/metrics/_index.md" >}}) separately, or tie it into your existing monitoring infrastructure.
+If you have an existing cluster that you would like to add metrics collection to, you can use the `--enable-metrics` flag on the [`pgo update cluster`]({{< relref "pgo-client/reference/pgo_update_cluster.md" >}}) command:
+
+```
+pgo update cluster hippo --enable-metrics
+```
+
## Customize PVC Size
Databases come in all different sizes, and those sizes can certainly change over time. As such, it is helpful to be able to specify what size PVC you want to store your PostgreSQL data.
From 36e10018e3812b88f0dafee78178bd37ea3c0196 Mon Sep 17 00:00:00 2001
From: Val
Date: Sun, 20 Dec 2020 11:15:53 -0500
Subject: [PATCH 056/276] Add Kustomize example for creating a Postgres
cluster
This adds an example showing how to use the Kustomize configuration
management tool to manage custom resources of the
pgclusters.crunchydata.com kind.
---
examples/kustomize/createcluster/README.md | 187 ++++++++++++++++++
.../createcluster/base/kustomization.yaml | 27 +++
.../createcluster/base/pgcluster.yaml | 105 ++++++++++
.../createcluster/overlay/dev/bouncer.json | 4 +
.../createcluster/overlay/dev/devhippo.json | 18 ++
.../overlay/dev/kustomization.yaml | 22 +++
.../overlay/prod/kustomization.yaml | 15 ++
.../createcluster/overlay/prod/prodhippo.json | 19 ++
.../overlay/staging/annotations.json | 6 +
.../overlay/staging/hippo-rpl1-pgreplica.yaml | 27 +++
.../overlay/staging/kustomization.yaml | 23 +++
.../overlay/staging/staginghippo.json | 19 ++
12 files changed, 472 insertions(+)
create mode 100644 examples/kustomize/createcluster/README.md
create mode 100644 examples/kustomize/createcluster/base/kustomization.yaml
create mode 100644 examples/kustomize/createcluster/base/pgcluster.yaml
create mode 100644 examples/kustomize/createcluster/overlay/dev/bouncer.json
create mode 100644 examples/kustomize/createcluster/overlay/dev/devhippo.json
create mode 100644 examples/kustomize/createcluster/overlay/dev/kustomization.yaml
create mode 100644 examples/kustomize/createcluster/overlay/prod/kustomization.yaml
create mode 100644 examples/kustomize/createcluster/overlay/prod/prodhippo.json
create mode 100644 examples/kustomize/createcluster/overlay/staging/annotations.json
create mode 100644 examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
create mode 100644 examples/kustomize/createcluster/overlay/staging/kustomization.yaml
create mode 100644 examples/kustomize/createcluster/overlay/staging/staginghippo.json
diff --git a/examples/kustomize/createcluster/README.md b/examples/kustomize/createcluster/README.md
new file mode 100644
index 0000000000..0bfc762305
--- /dev/null
+++ b/examples/kustomize/createcluster/README.md
@@ -0,0 +1,187 @@
+# create cluster
+This is a working example that creates multiple PostgreSQL clusters via the custom
+resource workflow using Kustomize.
+
+## Prerequisites
+
+### Postgres Operator
+This example assumes you have the Crunchy PostgreSQL Operator installed
+in a namespace called `pgo`.
+
+### Kustomize
+Install the latest [kustomize](https://kubectl.docs.kubernetes.io/installation/kustomize/) version available. Kustomize is also bundled with kubectl, but that copy may not be the latest version.
+
+## Documentation
+Please see the [documentation](https://access.crunchydata.com/documentation/postgres-operator/latest/custom-resources/) for more guidance on using custom resources.
+
+## Example setup and execution
+Navigate to the `createcluster` directory under the `examples/kustomize` directory:
+```
+cd ./examples/kustomize/createcluster/
+```
+In the createcluster directory you will see a base directory and an overlay directory. The base creates a simple Crunchy PostgreSQL cluster. The overlay directory contains three variants: dev, staging, and prod. Running kustomize against each of these creates a Crunchy PostgreSQL cluster, and each one is slightly different.
+
+### base
+Let's generate the Kustomize YAML for the base:
+```
+kustomize build base/
+```
+If the YAML looks good, let's apply it:
+```
+kustomize build base/ | kubectl apply -f -
+```
+You will see that the following items are created after running the above command:
+```
+secret/hippo-hippo-secret created
+secret/hippo-postgres-secret created
+secret/hippo-primaryuser-secret created
+pgcluster.crunchydata.com/hippo created
+```
+You may need to wait a few seconds for the Crunchy PostgreSQL cluster to become available, depending on the resources allocated to your Kubernetes setup.
+
+After the cluster is finished creating, let's take a look at it with the Crunchy PostgreSQL Operator:
+```
+pgo show cluster hippo -n pgo
+```
+You will see something like this if successful:
+```
+cluster : hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
+ pod : hippo-8fb6bd96-j87wq (Running) on gke-xxxx-default-pool-38e946bd-257w (1/1) (primary)
+ pvc: hippo (1Gi)
+ deployment : hippo
+ deployment : hippo-backrest-shared-repo
+ service : hippo - ClusterIP (10.0.56.86) - Ports (2022/TCP, 5432/TCP)
+ labels : pg-pod-anti-affinity= pgo-backrest=true pgo-version=4.5.1 crunchy-postgres-exporter=false name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata autofail=true crunchy-pgbadger=false
+```
+Feel free to run other `pgo` CLI commands on the hippo cluster.
+
+### overlay
+As mentioned above, there are three overlays available in this example; each modifies the common base.
+#### development
+The development overlay will deploy a simple Crunchy PostgreSQL cluster with pgBouncer.
+
+Let's generate the Kustomize YAML for the dev overlay:
+```
+kustomize build overlay/dev/
+```
+If the YAML looks good, let's apply it:
+```
+kustomize build overlay/dev/ | kubectl apply -f -
+```
+You will see that the following items are created after running the above command:
+```
+secret/dev-hippo-hippo-secret created
+secret/dev-hippo-postgres-secret created
+secret/dev-hippo-primaryuser-secret created
+pgcluster.crunchydata.com/dev-hippo created
+```
+After the cluster is finished creating, let's take a look at it with the Crunchy PostgreSQL Operator:
+```
+pgo show cluster dev-hippo -n pgo
+```
+You will see something like this if successful:
+```
+cluster : dev-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
+ pod : dev-hippo-588d4cb746-bwrxb (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
+ pvc: dev-hippo (1Gi)
+ deployment : dev-hippo
+ deployment : dev-hippo-backrest-shared-repo
+ deployment : dev-hippo-pgbouncer
+ service : dev-hippo - ClusterIP (10.0.62.87) - Ports (2022/TCP, 5432/TCP)
+ service : dev-hippo-pgbouncer - ClusterIP (10.0.48.120) - Ports (5432/TCP)
+ labels : crunchy-pgha-scope=dev-hippo crunchy-postgres-exporter=false name=dev-hippo pg-cluster=dev-hippo pg-pod-anti-affinity= pgo-backrest=true vendor=crunchydata autofail=true crunchy-pgbadger=false deployment-name=dev-hippo environment=development pgo-version=4.5.1 pgouser=admin
+```
+#### staging
+The staging overlay will deploy a Crunchy PostgreSQL cluster with two replicas and custom annotations.
+
+Let's generate the Kustomize YAML for the staging overlay:
+```
+kustomize build overlay/staging/
+```
+If the YAML looks good, let's apply it:
+```
+kustomize build overlay/staging/ | kubectl apply -f -
+```
+You will see that the following items are created after running the above command:
+```
+secret/staging-hippo-hippo-secret created
+secret/staging-hippo-postgres-secret created
+secret/staging-hippo-primaryuser-secret created
+pgcluster.crunchydata.com/staging-hippo created
+pgreplica.crunchydata.com/staging-hippo-rpl1 created
+```
+After the cluster is finished creating, let's take a look at it with the Crunchy PostgreSQL Operator:
+```
+pgo show cluster staging-hippo -n pgo
+```
+You will see something like this if successful (notice that one of the replicas is a different size):
+```
+cluster : staging-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
+ pod : staging-hippo-85cf6dcb65-9h748 (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
+ pvc: staging-hippo (1Gi)
+ pod : staging-hippo-lnxw-cf47d8c8b-6r4wn (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (replica)
+ pvc: staging-hippo-lnxw (1Gi)
+ pod : staging-hippo-rpl1-5d89d66f9b-44znd (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (replica)
+ pvc: staging-hippo-rpl1 (2Gi)
+ deployment : staging-hippo
+ deployment : staging-hippo-backrest-shared-repo
+ deployment : staging-hippo-lnxw
+ deployment : staging-hippo-rpl1
+ service : staging-hippo - ClusterIP (10.0.56.253) - Ports (2022/TCP, 5432/TCP)
+ service : staging-hippo-replica - ClusterIP (10.0.56.57) - Ports (2022/TCP, 5432/TCP)
+ pgreplica : staging-hippo-lnxw
+ pgreplica : staging-hippo-rpl1
+ labels : deployment-name=staging-hippo environment=staging name=staging-hippo pg-pod-anti-affinity= crunchy-postgres-exporter=false crunchy-pgbadger=false crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-backrest=true pgo-version=4.5.1 pgouser=admin vendor=crunchydata autofail=true
+```
+
+#### production
+The production overlay will deploy a Crunchy PostgreSQL cluster with one replica.
+
+Let's generate the Kustomize YAML for the prod overlay:
+```
+kustomize build overlay/prod/
+```
+If the YAML looks good, let's apply it:
+```
+kustomize build overlay/prod/ | kubectl apply -f -
+```
+You will see that the following items are created after running the above command:
+```
+secret/prod-hippo-hippo-secret created
+secret/prod-hippo-postgres-secret created
+secret/prod-hippo-primaryuser-secret created
+pgcluster.crunchydata.com/prod-hippo created
+```
+After the cluster is finished creating, let's take a look at it with the Crunchy PostgreSQL Operator:
+```
+pgo show cluster prod-hippo -n pgo
+```
+You will see something like this if successful:
+```
+cluster : prod-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
+ pod : prod-hippo-5d6dd46497-rr67c (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (primary)
+ pvc: prod-hippo (1Gi)
+ pod : prod-hippo-flty-84d97c8769-2pzbh (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (replica)
+ pvc: prod-hippo-flty (1Gi)
+ deployment : prod-hippo
+ deployment : prod-hippo-backrest-shared-repo
+ deployment : prod-hippo-flty
+ service : prod-hippo - ClusterIP (10.0.56.18) - Ports (2022/TCP, 5432/TCP)
+ service : prod-hippo-replica - ClusterIP (10.0.56.101) - Ports (2022/TCP, 5432/TCP)
+ pgreplica : prod-hippo-flty
+ labels : pgo-backrest=true pgo-version=4.5.1 crunchy-pgbadger=false crunchy-postgres-exporter=false deployment-name=prod-hippo environment=production pg-cluster=prod-hippo pg-pod-anti-affinity= autofail=true crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
+```
+### Delete the clusters
+To delete the clusters, run the following `pgo` CLI commands.
+
+To delete all the clusters in the `pgo` namespace, run the following:
+```
+pgo delete cluster --all -n pgo
+```
+Or, to delete each cluster individually:
+```
+pgo delete cluster hippo -n pgo
+pgo delete cluster dev-hippo -n pgo
+pgo delete cluster staging-hippo -n pgo
+pgo delete cluster prod-hippo -n pgo
+```
\ No newline at end of file
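The overlay patches referenced by this README (e.g. `devhippo.json`) are RFC 6902 (JSON Patch) operations that Kustomize applies to the base manifest. A minimal sketch of just the `replace` operation's semantics, using a hand-rolled applier rather than Kustomize's own machinery:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// applyReplace implements just the RFC 6902 "replace" operation on a
// decoded JSON document, enough to illustrate what the overlay patches
// do to the base pgcluster manifest.
func applyReplace(doc map[string]interface{}, path string, value interface{}) error {
	parts := strings.Split(strings.TrimPrefix(path, "/"), "/")
	cur := doc
	for _, p := range parts[:len(parts)-1] {
		next, ok := cur[p].(map[string]interface{})
		if !ok {
			return fmt.Errorf("path %q not found", path)
		}
		cur = next
	}
	leaf := parts[len(parts)-1]
	if _, ok := cur[leaf]; !ok {
		return fmt.Errorf("path %q not found", path)
	}
	cur[leaf] = value
	return nil
}

func main() {
	raw := `{"metadata":{"name":"hippo"},"spec":{"clustername":"hippo"}}`
	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(raw), &doc); err != nil {
		panic(err)
	}
	// Rename the cluster the way the dev overlay does.
	applyReplace(doc, "/metadata/name", "dev-hippo")
	applyReplace(doc, "/spec/clustername", "dev-hippo")
	out, _ := json.Marshal(doc)
	fmt.Println(string(out))
}
```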
diff --git a/examples/kustomize/createcluster/base/kustomization.yaml b/examples/kustomize/createcluster/base/kustomization.yaml
new file mode 100644
index 0000000000..a93d6f8eaa
--- /dev/null
+++ b/examples/kustomize/createcluster/base/kustomization.yaml
@@ -0,0 +1,27 @@
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+namespace: pgo
+commonLabels:
+ vendor: crunchydata
+secretGenerator:
+ - name: hippo-hippo-secret
+ options:
+ disableNameSuffixHash: true
+ literals:
+ - username=hippo
+ - password=Moresecurepassword*
+ - name: hippo-primaryuser-secret
+ options:
+ disableNameSuffixHash: true
+ literals:
+ - username=primaryuser
+ - password=Anothersecurepassword*
+ - name: hippo-postgres-secret
+ options:
+ disableNameSuffixHash: true
+ literals:
+ - username=postgres
+ - password=Supersecurepassword*
+resources:
+- pgcluster.yaml
+
diff --git a/examples/kustomize/createcluster/base/pgcluster.yaml b/examples/kustomize/createcluster/base/pgcluster.yaml
new file mode 100644
index 0000000000..29aa0c6e83
--- /dev/null
+++ b/examples/kustomize/createcluster/base/pgcluster.yaml
@@ -0,0 +1,105 @@
+apiVersion: crunchydata.com/v1
+kind: Pgcluster
+metadata:
+ annotations:
+ current-primary: hippo
+ labels:
+ autofail: "true"
+ crunchy-pgbadger: "false"
+ crunchy-pgha-scope: hippo
+ crunchy-postgres-exporter: "false"
+ deployment-name: hippo
+ name: hippo
+ pg-cluster: hippo
+ pg-pod-anti-affinity: ""
+ pgo-backrest: "true"
+ pgo-version: 4.5.1
+ pgouser: admin
+ name: hippo
+ namespace: pgo
+spec:
+ BackrestStorage:
+ accessmode: ReadWriteOnce
+ matchLabels: ""
+ name: ""
+ size: 1G
+ storageclass: ""
+ storagetype: dynamic
+ supplementalgroups: ""
+ PrimaryStorage:
+ accessmode: ReadWriteOnce
+ matchLabels: ""
+ name: hippo
+ size: 1G
+ storageclass: ""
+ storagetype: dynamic
+ supplementalgroups: ""
+ ReplicaStorage:
+ accessmode: ReadWriteOnce
+ matchLabels: ""
+ name: ""
+ size: 1G
+ storageclass: ""
+ storagetype: dynamic
+ supplementalgroups: ""
+ annotations:
+ global:
+ favorite: ""
+ backrest:
+ chair: ""
+ pgBouncer:
+ pool: ""
+ postgres:
+ elephant: ""
+ backrestLimits: {}
+ backrestRepoPath: ""
+ backrestResources:
+ memory: 48Mi
+ backrestS3Bucket: ""
+ backrestS3Endpoint: ""
+ backrestS3Region: ""
+ backrestS3URIStyle: ""
+ backrestS3VerifyTLS: ""
+ ccpimage: crunchy-postgres-ha
+ ccpimageprefix: registry.developers.crunchydata.com/crunchydata
+ ccpimagetag: centos7-12.5-4.5.1
+ clustername: hippo
+ customconfig: ""
+ database: hippo
+ exporterport: "9187"
+ limits: {}
+ name: hippo
+ namespace: pgo
+ pgBouncer:
+ limits: {}
+ replicas: 0
+ resources:
+ memory: "0"
+ pgDataSource:
+ restoreFrom: ""
+ restoreOpts: ""
+ pgbadgerport: "10000"
+ pgoimageprefix: registry.developers.crunchydata.com/crunchydata
+ podAntiAffinity:
+ default: preferred
+ pgBackRest: preferred
+ pgBouncer: preferred
+ policies: ""
+ port: "5432"
+ primarysecretname: hippo-primaryuser-secret
+ replicas: "0"
+ rootsecretname: hippo-postgres-secret
+ shutdown: false
+ standby: false
+ tablespaceMounts: {}
+ tls:
+ caSecret: ""
+ replicationTLSSecret: ""
+ tlsSecret: ""
+ tlsOnly: false
+ user: hippo
+ userlabels:
+ crunchy-postgres-exporter: "false"
+ pg-pod-anti-affinity: ""
+ pgo-version: 4.5.1
+ usersecretname: hippo-hippo-secret
diff --git a/examples/kustomize/createcluster/overlay/dev/bouncer.json b/examples/kustomize/createcluster/overlay/dev/bouncer.json
new file mode 100644
index 0000000000..622283f1fe
--- /dev/null
+++ b/examples/kustomize/createcluster/overlay/dev/bouncer.json
@@ -0,0 +1,4 @@
+[
+ { "op": "add", "path": "/spec/pgBouncer/resources/memory", "value": "24Mi"},
+ { "op": "add", "path": "/spec/pgBouncer/replicas", "value": 1 }
+]
\ No newline at end of file
diff --git a/examples/kustomize/createcluster/overlay/dev/devhippo.json b/examples/kustomize/createcluster/overlay/dev/devhippo.json
new file mode 100644
index 0000000000..ab7c2e5071
--- /dev/null
+++ b/examples/kustomize/createcluster/overlay/dev/devhippo.json
@@ -0,0 +1,18 @@
+[
+ { "op": "replace", "path": "/metadata/annotations/current-primary", "value": "dev-hippo" },
+ { "op": "replace", "path": "/metadata/labels/crunchy-pgha-scope", "value": "dev-hippo" },
+ { "op": "replace", "path": "/metadata/labels/deployment-name", "value": "dev-hippo" },
+ { "op": "replace", "path": "/metadata/labels/name", "value": "dev-hippo" },
+ { "op": "replace", "path": "/metadata/labels/pg-cluster", "value": "dev-hippo" },
+ { "op": "replace", "path": "/metadata/name", "value": "dev-hippo" },
+
+  { "op": "replace", "path": "/spec/PrimaryStorage/name", "value": "dev-hippo" },
+  { "op": "replace", "path": "/spec/clustername", "value": "dev-hippo" },
+ { "op": "replace", "path": "/spec/database", "value": "dev-hippo" },
+ { "op": "replace", "path": "/spec/name", "value": "dev-hippo" },
+ { "op": "replace", "path": "/spec/primarysecretname", "value": "dev-hippo-primaryuser-secret" },
+ { "op": "replace", "path": "/spec/rootsecretname", "value": "dev-hippo-postgres-secret" },
+ { "op": "replace", "path": "/spec/usersecretname", "value": "dev-hippo-hippo-secret" }
+]
\ No newline at end of file
diff --git a/examples/kustomize/createcluster/overlay/dev/kustomization.yaml b/examples/kustomize/createcluster/overlay/dev/kustomization.yaml
new file mode 100644
index 0000000000..a78fe401af
--- /dev/null
+++ b/examples/kustomize/createcluster/overlay/dev/kustomization.yaml
@@ -0,0 +1,22 @@
+resources:
+- ../../base
+namePrefix: dev-
+namespace: pgo
+commonLabels:
+ environment: development
+
+patchesJson6902:
+ - target:
+ group: crunchydata.com
+ version: v1
+ namespace: pgo
+ kind: Pgcluster
+ name: dev-hippo
+ path: devhippo.json
+ - target:
+ group: crunchydata.com
+ version: v1
+ namespace: pgo
+ kind: Pgcluster
+ name: dev-hippo
+ path: bouncer.json
\ No newline at end of file
diff --git a/examples/kustomize/createcluster/overlay/prod/kustomization.yaml b/examples/kustomize/createcluster/overlay/prod/kustomization.yaml
new file mode 100644
index 0000000000..76e5756697
--- /dev/null
+++ b/examples/kustomize/createcluster/overlay/prod/kustomization.yaml
@@ -0,0 +1,15 @@
+resources:
+- ../../base
+namePrefix: prod-
+namespace: pgo
+commonLabels:
+ environment: production
+
+patchesJson6902:
+ - target:
+ group: crunchydata.com
+ version: v1
+ namespace: pgo
+ kind: Pgcluster
+ name: prod-hippo
+ path: prodhippo.json
\ No newline at end of file
diff --git a/examples/kustomize/createcluster/overlay/prod/prodhippo.json b/examples/kustomize/createcluster/overlay/prod/prodhippo.json
new file mode 100644
index 0000000000..ef8313629d
--- /dev/null
+++ b/examples/kustomize/createcluster/overlay/prod/prodhippo.json
@@ -0,0 +1,17 @@
+[
+ { "op": "replace", "path": "/metadata/annotations/current-primary", "value": "prod-hippo" },
+ { "op": "replace", "path": "/metadata/labels/crunchy-pgha-scope", "value": "prod-hippo" },
+ { "op": "replace", "path": "/metadata/labels/deployment-name", "value": "prod-hippo" },
+ { "op": "replace", "path": "/metadata/labels/name", "value": "prod-hippo" },
+ { "op": "replace", "path": "/metadata/labels/pg-cluster", "value": "prod-hippo" },
+ { "op": "replace", "path": "/metadata/name", "value": "prod-hippo" },
+
+ { "op": "replace", "path": "/spec/PrimaryStorage/name", "value": "prod-hippo" },
+ { "op": "replace", "path": "/spec/clustername", "value": "prod-hippo" },
+ { "op": "replace", "path": "/spec/database", "value": "prod-hippo" },
+ { "op": "replace", "path": "/spec/name", "value": "prod-hippo" },
+ { "op": "replace", "path": "/spec/primarysecretname", "value": "prod-hippo-primaryuser-secret" },
+ { "op": "replace", "path": "/spec/replicas", "value": "1"},
+ { "op": "replace", "path": "/spec/rootsecretname", "value": "prod-hippo-postgres-secret" },
+ { "op": "replace", "path": "/spec/usersecretname", "value": "prod-hippo-hippo-secret" }
+]
\ No newline at end of file
diff --git a/examples/kustomize/createcluster/overlay/staging/annotations.json b/examples/kustomize/createcluster/overlay/staging/annotations.json
new file mode 100644
index 0000000000..34983a01c7
--- /dev/null
+++ b/examples/kustomize/createcluster/overlay/staging/annotations.json
@@ -0,0 +1,6 @@
+[
+ { "op": "add", "path": "/spec/annotations/global/favorite", "value": "hippo"},
+ { "op": "add", "path": "/spec/annotations/backrest/chair", "value": "comfy"},
+ { "op": "add", "path": "/spec/annotations/pgBouncer/pool", "value": "swimming"},
+ { "op": "add", "path": "/spec/annotations/postgres/elephant", "value": "cool"}
+]
\ No newline at end of file
diff --git a/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
new file mode 100644
index 0000000000..33a36b5ef9
--- /dev/null
+++ b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
@@ -0,0 +1,27 @@
+apiVersion: crunchydata.com/v1
+kind: Pgreplica
+metadata:
+ labels:
+ name: staging-hippo-rpl1
+ pg-cluster: staging-hippo
+ pgouser: admin
+ name: hippo-rpl1
+ namespace: pgo
+spec:
+ clustername: staging-hippo
+ name: staging-hippo-rpl1
+ namespace: pgo
+ replicastorage:
+ accessmode: ReadWriteOnce
+ matchLabels: ""
+ name: staging-hippo-rpl1
+ size: 2G
+ storageclass: ""
+ storagetype: dynamic
+ supplementalgroups: ""
+ userlabels:
+ NodeLabelKey: ""
+ NodeLabelValue: ""
+ crunchy-postgres-exporter: "false"
+ pg-pod-anti-affinity: ""
+ pgo-version: 4.5.1
diff --git a/examples/kustomize/createcluster/overlay/staging/kustomization.yaml b/examples/kustomize/createcluster/overlay/staging/kustomization.yaml
new file mode 100644
index 0000000000..4fb92b8d16
--- /dev/null
+++ b/examples/kustomize/createcluster/overlay/staging/kustomization.yaml
@@ -0,0 +1,23 @@
+resources:
+- ../../base
+- hippo-rpl1-pgreplica.yaml
+namePrefix: staging-
+namespace: pgo
+commonLabels:
+ environment: staging
+
+patchesJson6902:
+ - target:
+ group: crunchydata.com
+ version: v1
+ namespace: pgo
+ kind: Pgcluster
+ name: staging-hippo
+ path: staginghippo.json
+ - target:
+ group: crunchydata.com
+ version: v1
+ namespace: pgo
+ kind: Pgcluster
+ name: staging-hippo
+ path: annotations.json
\ No newline at end of file
diff --git a/examples/kustomize/createcluster/overlay/staging/staginghippo.json b/examples/kustomize/createcluster/overlay/staging/staginghippo.json
new file mode 100644
index 0000000000..c19acb5895
--- /dev/null
+++ b/examples/kustomize/createcluster/overlay/staging/staginghippo.json
@@ -0,0 +1,17 @@
+[
+ { "op": "replace", "path": "/metadata/annotations/current-primary", "value": "staging-hippo" },
+ { "op": "replace", "path": "/metadata/labels/crunchy-pgha-scope", "value": "staging-hippo" },
+ { "op": "replace", "path": "/metadata/labels/deployment-name", "value": "staging-hippo" },
+ { "op": "replace", "path": "/metadata/labels/name", "value": "staging-hippo" },
+ { "op": "replace", "path": "/metadata/labels/pg-cluster", "value": "staging-hippo" },
+ { "op": "replace", "path": "/metadata/name", "value": "staging-hippo" },
+
+ { "op": "replace", "path": "/spec/PrimaryStorage/name", "value": "staging-hippo" },
+ { "op": "replace", "path": "/spec/clustername", "value": "staging-hippo" },
+ { "op": "replace", "path": "/spec/database", "value": "staging-hippo" },
+ { "op": "replace", "path": "/spec/name", "value": "staging-hippo" },
+ { "op": "replace", "path": "/spec/primarysecretname", "value": "staging-hippo-primaryuser-secret" },
+ { "op": "replace", "path": "/spec/replicas", "value": "1"},
+ { "op": "replace", "path": "/spec/rootsecretname", "value": "staging-hippo-postgres-secret" },
+ { "op": "replace", "path": "/spec/usersecretname", "value": "staging-hippo-hippo-secret" }
+]
\ No newline at end of file
From c18b29ec476b1353d4351addd0c86fd2103fa9cc Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 20 Dec 2020 11:37:57 -0500
Subject: [PATCH 057/276] Add missing `--no-prompt` flag to `pgo upgrade`
The mechanism for disabling the verification prompt for `pgo upgrade`
was always available, but the flag itself was not exposed.
Issue: [ch9988]
Issue: #2135
---
cmd/pgo/cmd/upgrade.go | 11 +++++------
docs/content/pgo-client/reference/pgo_upgrade.md | 11 ++++++-----
2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/cmd/pgo/cmd/upgrade.go b/cmd/pgo/cmd/upgrade.go
index 0e734ade5e..31122cebf7 100644
--- a/cmd/pgo/cmd/upgrade.go
+++ b/cmd/pgo/cmd/upgrade.go
@@ -41,10 +41,10 @@ var UpgradeCCPImageTag string
var UpgradeCmd = &cobra.Command{
Use: "upgrade",
Short: "Perform a cluster upgrade.",
- Long: `UPGRADE allows you to perform a comprehensive PGCluster upgrade
- (for use after performing a Postgres Operator upgrade).
+ Long: `UPGRADE allows you to perform a comprehensive PGCluster upgrade
+ (for use after performing a Postgres Operator upgrade).
For example:
-
+
pgo upgrade mycluster
Upgrades the cluster for use with the upgraded Postgres Operator version.`,
Run: func(cmd *cobra.Command, args []string) {
@@ -69,8 +69,9 @@ func init() {
RootCmd.AddCommand(UpgradeCmd)
// flags for "pgo upgrade"
- UpgradeCmd.Flags().BoolVarP(&IgnoreValidation, "ignore-validation", "", false, "Disables version checking against the image tags when performing an cluster upgrade.")
UpgradeCmd.Flags().StringVarP(&UpgradeCCPImageTag, "ccp-image-tag", "", "", "The image tag to use for cluster creation. If specified, it overrides the default configuration setting and disables tag validation checking.")
+ UpgradeCmd.Flags().BoolVarP(&IgnoreValidation, "ignore-validation", "", false, "Disables version checking against the image tags when performing an cluster upgrade.")
+ UpgradeCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.")
}
func createUpgrade(args []string, ns string) {
@@ -90,7 +91,6 @@ func createUpgrade(args []string, ns string) {
request.UpgradeCCPImageTag = UpgradeCCPImageTag
response, err := api.CreateUpgrade(httpclient, &SessionCredentials, &request)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -104,5 +104,4 @@ func createUpgrade(args []string, ns string) {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
}
-
}
diff --git a/docs/content/pgo-client/reference/pgo_upgrade.md b/docs/content/pgo-client/reference/pgo_upgrade.md
index 78d787f6f0..534790f189 100644
--- a/docs/content/pgo-client/reference/pgo_upgrade.md
+++ b/docs/content/pgo-client/reference/pgo_upgrade.md
@@ -7,10 +7,10 @@ Perform a cluster upgrade.
### Synopsis
-UPGRADE allows you to perform a comprehensive PGCluster upgrade
- (for use after performing a Postgres Operator upgrade).
+UPGRADE allows you to perform a comprehensive PGCluster upgrade
+ (for use after performing a Postgres Operator upgrade).
For example:
-
+
pgo upgrade mycluster
Upgrades the cluster for use with the upgraded Postgres Operator version.
@@ -24,12 +24,13 @@ pgo upgrade [flags]
--ccp-image-tag string The image tag to use for cluster creation. If specified, it overrides the default configuration setting and disables tag validation checking.
-h, --help help for upgrade
--ignore-validation Disables version checking against the image tags when performing an cluster upgrade.
+ --no-prompt No command line confirmation.
```
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -43,4 +44,4 @@ pgo upgrade [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 20-Dec-2020
From 86d327abb28aaf9281e67403a22948ca691f9bbe Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 20 Dec 2020 12:30:26 -0500
Subject: [PATCH 058/276] Ensure consistent permissions for mounting repo
Secret
While the Secret volume mount is set to be read-only for the
pgBackRest Secret information, the defaultMode on the volume itself
was set to be more permissive. While it appears that the vast
majority of Kubernetes distributions give precedence to the value
of the volume mount, some flavors do use the values set on the volume.
As such, it's prudent to remove the more permissive settings, which
this patch does.
Issue: [ch9989]
Issue: #2140
---
.../pgo-operator/files/pgo-configs/cluster-bootstrap-job.json | 3 +--
.../pgo-operator/files/pgo-configs/cluster-deployment.json | 3 +--
.../files/pgo-configs/pgo-backrest-repo-template.json | 3 +--
3 files changed, 3 insertions(+), 6 deletions(-)
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json
index ecd2cf735a..9bd5a10f21 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json
@@ -166,8 +166,7 @@
}, {
"name": "sshd",
"secret": {
- "secretName": "{{.RestoreFrom}}-backrest-repo-config",
- "defaultMode": 511
+ "secretName": "{{.RestoreFrom}}-backrest-repo-config"
}
},
{{if .TLSEnabled}}
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
index 0081bb205f..c05ee7210c 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
@@ -259,8 +259,7 @@
}, {
"name": "sshd",
"secret": {
- "secretName": "{{.ClusterName}}-backrest-repo-config",
- "defaultMode": 511
+ "secretName": "{{.ClusterName}}-backrest-repo-config"
}
}, {
"name": "root-volume",
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json
index dba4a3d8d7..30b69dc4b6 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json
@@ -102,8 +102,7 @@
"volumes": [{
"name": "sshd",
"secret": {
- "secretName": "{{.SshdSecretsName}}",
- "defaultMode": 511
+ "secretName": "{{.SshdSecretsName}}"
}
}, {
"name": "backrestrepo",
From 73b5b01864153f357d4880542f18d88cc62bdb1d Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 20 Dec 2020 16:29:57 -0500
Subject: [PATCH 059/276] Allow for explicit deletion of pgBackRest backups
This introduces the ability to delete pgBackRest backups using
the `pgo delete backup` command. The pgBackRest backup name
must be specified using the `--target` flag, and can be
determined through a call to `pgo show backup`.
This also includes obligatory language on when to explicitly
delete backups, to ensure the user does not take actions on
their pgBackRest repository that they do not intend to take.
Issue: #2111
---
cmd/pgo/api/backrest.go | 46 ++++++++-
cmd/pgo/cmd/backup.go | 33 ++++++-
cmd/pgo/cmd/delete.go | 46 +++++----
.../content/architecture/disaster-recovery.md | 93 ++++++++++++++++++
.../pgo-client/reference/pgo_delete.md | 6 +-
.../pgo-client/reference/pgo_delete_backup.md | 12 ++-
docs/content/tutorial/disaster-recovery.md | 86 +++++++++++++++++
.../apiserver/backrestservice/backrestimpl.go | 95 ++++++++++++++-----
.../backrestservice/backrestservice.go | 85 ++++++++++++++++-
internal/apiserver/routing/routes.go | 1 +
pkg/apiservermsgs/backrestmsgs.go | 21 ++++
11 files changed, 466 insertions(+), 58 deletions(-)
diff --git a/cmd/pgo/api/backrest.go b/cmd/pgo/api/backrest.go
index 13e2ff702b..e0e087efbf 100644
--- a/cmd/pgo/api/backrest.go
+++ b/cmd/pgo/api/backrest.go
@@ -25,8 +25,50 @@ import (
log "github.com/sirupsen/logrus"
)
-func ShowBackrest(httpclient *http.Client, arg, selector string, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ShowBackrestResponse, error) {
+// DeleteBackup makes an API request to delete a pgBackRest backup
+func DeleteBackup(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request msgs.DeleteBackrestBackupRequest) (msgs.DeleteBackrestBackupResponse, error) {
+ var response msgs.DeleteBackrestBackupResponse
+
+ // explicitly set the client version here
+ request.ClientVersion = msgs.PGO_VERSION
+
+ log.Debugf("DeleteBackup called [%+v]", request)
+
+ jsonValue, _ := json.Marshal(request)
+ url := SessionCredentials.APIServerURL + "/backrest"
+ action := "DELETE"
+ req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ if err != nil {
+ return response, err
+ }
+
+ req.Header.Set("Content-Type", "application/json")
+ req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password)
+
+ resp, err := httpclient.Do(req)
+ if err != nil {
+ return response, err
+ }
+
+ defer resp.Body.Close()
+
+ log.Debugf("%+v", resp)
+
+ if err := StatusCheck(resp); err != nil {
+ return response, err
+ }
+
+ if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
+ fmt.Print("Error: ")
+ fmt.Println(err)
+ return response, err
+ }
+
+ return response, nil
+}
+
+func ShowBackrest(httpclient *http.Client, arg, selector string, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ShowBackrestResponse, error) {
var response msgs.ShowBackrestResponse
url := SessionCredentials.APIServerURL + "/backrest/" + arg + "?version=" + msgs.PGO_VERSION + "&selector=" + selector + "&namespace=" + ns
@@ -58,11 +100,9 @@ func ShowBackrest(httpclient *http.Client, arg, selector string, SessionCredenti
}
return response, err
-
}
func CreateBackrestBackup(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreateBackrestBackupRequest) (msgs.CreateBackrestBackupResponse, error) {
-
var response msgs.CreateBackrestBackupResponse
jsonValue, _ := json.Marshal(request)
diff --git a/cmd/pgo/cmd/backup.go b/cmd/pgo/cmd/backup.go
index 61225cc3ea..d70877a198 100644
--- a/cmd/pgo/cmd/backup.go
+++ b/cmd/pgo/cmd/backup.go
@@ -18,8 +18,12 @@ package cmd
import (
"fmt"
+ "os"
+ "github.com/crunchydata/postgres-operator/cmd/pgo/api"
"github.com/crunchydata/postgres-operator/internal/config"
+ msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
+
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
@@ -74,7 +78,6 @@ var backupCmd = &cobra.Command{
}
}
-
},
}
@@ -89,10 +92,32 @@ func init() {
backupCmd.Flags().StringVarP(&PGDumpDB, "database", "d", "postgres", "The name of the database pgdump will backup.")
backupCmd.Flags().StringVar(&backupType, "backup-type", "pgbackrest", "The backup type to perform. Default is pgbackrest. Valid backup types are pgbackrest and pgdump.")
backupCmd.Flags().StringVarP(&BackrestStorageType, "pgbackrest-storage-type", "", "", "The type of storage to use when scheduling pgBackRest backups. Either \"local\", \"s3\" or both, comma separated. (default \"local\")")
-
}
// deleteBackup ....
-func deleteBackup(args []string, ns string) {
- log.Debugf("deleteBackup called %v", args)
+func deleteBackup(namespace, clusterName string) {
+ request := msgs.DeleteBackrestBackupRequest{
+ ClusterName: clusterName,
+ Namespace: namespace,
+ Target: Target,
+ }
+
+ // make the request
+ response, err := api.DeleteBackup(httpclient, &SessionCredentials, request)
+
+ // if everything is OK, exit early
+ if err == nil && response.Status.Code == msgs.Ok {
+ return
+ }
+
+ // if there is an error, or the response code is not ok, print the error and
+ // exit
+ if err != nil {
+ fmt.Println("Error: " + err.Error())
+ } else if response.Status.Code == msgs.Error {
+ fmt.Println("Error: " + response.Status.Msg)
+ }
+
+ // since we can only have errors at this point, exit with error
+ os.Exit(1)
}
diff --git a/cmd/pgo/cmd/delete.go b/cmd/pgo/cmd/delete.go
index c4dce568a6..df21d71ea4 100644
--- a/cmd/pgo/cmd/delete.go
+++ b/cmd/pgo/cmd/delete.go
@@ -17,6 +17,7 @@ package cmd
import (
"fmt"
+ "os"
"github.com/crunchydata/postgres-operator/cmd/pgo/util"
"github.com/spf13/cobra"
@@ -36,7 +37,7 @@ var deleteCmd = &cobra.Command{
Short: "Delete an Operator resource",
Long: `The delete command allows you to delete an Operator resource. For example:
- pgo delete backup mycluster
+ pgo delete backup mycluster --target=backup-name
pgo delete cluster mycluster
pgo delete cluster mycluster --delete-data
pgo delete cluster mycluster --delete-data --delete-backups
@@ -53,7 +54,6 @@ var deleteCmd = &cobra.Command{
pgo delete schedule mycluster
pgo delete user --username=testuser --selector=name=mycluster`,
Run: func(cmd *cobra.Command, args []string) {
-
if len(args) == 0 {
fmt.Println(`Error: You must specify the type of resource to delete. Valid resource types include:
* backup
@@ -94,7 +94,6 @@ var deleteCmd = &cobra.Command{
* user`)
}
}
-
},
}
@@ -118,6 +117,13 @@ func init() {
// "pgo delete backup"
// used to delete backups
deleteCmd.AddCommand(deleteBackupCmd)
+ // "pgo delete backup --no-prompt"
+ // disables the verification prompt for deleting a backup
+ deleteBackupCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.")
+ // "pgo delete backup --target"
+ // the backup target to expire
+ deleteBackupCmd.Flags().StringVar(&Target, "target", "", "The backup to expire, e.g. "+
+ "\"20201220-171801F\". Use \"pgo show backup\" to determine the target.")
// "pgo delete cluster"
// used to delete clusters
@@ -294,22 +300,30 @@ func init() {
var deleteBackupCmd = &cobra.Command{
Use: "backup",
Short: "Delete a backup",
- Long: `Delete a backup. For example:
+ Long: `Delete a backup from pgBackRest. Requires a target backup. For example:
- pgo delete backup mydatabase`,
+ pgo delete backup hippo --target=20201220-171801F`,
Run: func(cmd *cobra.Command, args []string) {
+ if len(args) == 0 {
+ fmt.Println("Error: A cluster name is required for this command.")
+ os.Exit(1)
+ }
+
+ if Target == "" {
+ fmt.Println("Error: --target must be specified.")
+ os.Exit(1)
+ }
+
+ if !util.AskForConfirmation(NoPrompt, "If you delete a backup that is *not* set to expire, you may be unable to meet your retention requirements. Proceed?") {
+ fmt.Println("Aborting...")
+ return
+ }
+
if Namespace == "" {
Namespace = PGONamespace
}
- if len(args) == 0 {
- fmt.Println("Error: A database or cluster name is required for this command.")
- } else {
- if util.AskForConfirmation(NoPrompt, "") {
- deleteBackup(args, Namespace)
- } else {
- fmt.Println("Aborting...")
- }
- }
+
+ deleteBackup(Namespace, args[0])
},
}
@@ -453,7 +467,6 @@ var deletePgAdminCmd = &cobra.Command{
} else {
if util.AskForConfirmation(NoPrompt, "") {
deletePgAdmin(args, Namespace)
-
} else {
fmt.Println("Aborting...")
}
@@ -477,7 +490,6 @@ var deletePgbouncerCmd = &cobra.Command{
} else {
if util.AskForConfirmation(NoPrompt, "") {
deletePgbouncer(args, Namespace)
-
} else {
fmt.Println("Aborting...")
}
@@ -542,7 +554,6 @@ var deleteUserCmd = &cobra.Command{
pgo delete user --username=someuser --selector=name=mycluster`,
Run: func(cmd *cobra.Command, args []string) {
-
if Namespace == "" {
Namespace = PGONamespace
}
@@ -551,7 +562,6 @@ var deleteUserCmd = &cobra.Command{
} else {
if util.AskForConfirmation(NoPrompt, "") {
deleteUser(args, Namespace)
-
} else {
fmt.Println("Aborting...")
}
diff --git a/docs/content/architecture/disaster-recovery.md b/docs/content/architecture/disaster-recovery.md
index 161bcedac0..7d3639e0fc 100644
--- a/docs/content/architecture/disaster-recovery.md
+++ b/docs/content/architecture/disaster-recovery.md
@@ -304,3 +304,96 @@ To enable a PostgreSQL cluster to use S3, the `--pgbackrest-storage-type` on the
Once configured, the `pgo backup` and `pgo restore` commands will work with S3
similarly to the above!
+
+## Deleting a Backup
+
+{{% notice warning %}}
+If you delete a backup that is *not* set to expire, you may be unable to meet
+your retention requirements. If you are deleting backups to free space, it is
+recommended to delete your oldest backups first.
+{{% /notice %}}
+
+A backup can be deleted using the [`pgo delete backup`]({{< relref "pgo-client/reference/pgo_delete_backup.md" >}})
+command. You must specify a specific backup to delete using the `--target` flag.
+You can get the backup names from the
+[`pgo show backup`]({{< relref "pgo-client/reference/pgo_show_backup.md" >}})
+command.
+
+For example, suppose a PostgreSQL cluster called `hippo` has a pgBackRest
+repository in the state shown below after running the
+`pgo show backup hippo` command:
+
+```
+cluster: hippo
+storage type: local
+
+stanza: db
+ status: ok
+ cipher: none
+
+ db (current)
+ wal archive min/max (12-1)
+
+ full backup: 20201220-171801F
+ timestamp start/stop: 2020-12-20 17:18:01 +0000 UTC / 2020-12-20 17:18:10 +0000 UTC
+ wal start/stop: 000000010000000000000002 / 000000010000000000000002
+ database size: 31.3MiB, backup size: 31.3MiB
+ repository size: 3.8MiB, repository backup size: 3.8MiB
+ backup reference list:
+
+ incr backup: 20201220-171801F_20201220-171939I
+ timestamp start/stop: 2020-12-20 17:19:39 +0000 UTC / 2020-12-20 17:19:41 +0000 UTC
+ wal start/stop: 000000010000000000000005 / 000000010000000000000005
+ database size: 31.3MiB, backup size: 216.3KiB
+ repository size: 3.8MiB, repository backup size: 25.9KiB
+ backup reference list: 20201220-171801F
+
+ incr backup: 20201220-171801F_20201220-172046I
+ timestamp start/stop: 2020-12-20 17:20:46 +0000 UTC / 2020-12-20 17:23:29 +0000 UTC
+ wal start/stop: 00000001000000000000000A / 00000001000000000000000A
+ database size: 65.9MiB, backup size: 37.5MiB
+ repository size: 7.7MiB, repository backup size: 4.3MiB
+ backup reference list: 20201220-171801F, 20201220-171801F_20201220-171939I
+
+ full backup: 20201220-201305F
+ timestamp start/stop: 2020-12-20 20:13:05 +0000 UTC / 2020-12-20 20:13:15 +0000 UTC
+ wal start/stop: 00000001000000000000000F / 00000001000000000000000F
+ database size: 65.9MiB, backup size: 65.9MiB
+ repository size: 7.7MiB, repository backup size: 7.7MiB
+ backup reference list:
+```
+
+The backup targets can be found after the backup type, e.g. `20201220-171801F`
+or `20201220-171801F_20201220-172046I`.
+
+One can delete the oldest backup, in this case `20201220-171801F`, by running
+the following command:
+
+```
+pgo delete backup hippo --target=20201220-171801F
+```
+
+Verify the backup is deleted with `pgo show backup hippo`:
+
+```
+cluster: hippo
+storage type: local
+
+stanza: db
+ status: ok
+ cipher: none
+
+ db (current)
+ wal archive min/max (12-1)
+
+ full backup: 20201220-201305F
+ timestamp start/stop: 2020-12-20 20:13:05 +0000 UTC / 2020-12-20 20:13:15 +0000 UTC
+ wal start/stop: 00000001000000000000000F / 00000001000000000000000F
+ database size: 65.9MiB, backup size: 65.9MiB
+ repository size: 7.7MiB, repository backup size: 7.7MiB
+ backup reference list:
+```
+
+(Note: this had the net effect of expiring all of the incremental backups
+associated with the full backup that was deleted. This is a feature of
+pgBackRest.)
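The targets shown above follow pgBackRest's label scheme: a full backup label ends in `F`, and each incremental or differential label prefixes its parent full backup's label before an underscore. A small sketch (assuming only well-formed labels of this shape) of classifying a label and listing the backups that would be expired along with a given full backup:

```go
package main

import (
	"fmt"
	"strings"
)

// dependsOn reports whether label references the given full backup:
// either the full backup itself, or an incremental/differential
// built on top of it.
func dependsOn(label, full string) bool {
	return label == full || strings.HasPrefix(label, full+"_")
}

// isFull reports whether label names a full backup.
func isFull(label string) bool {
	return !strings.Contains(label, "_") && strings.HasSuffix(label, "F")
}

func main() {
	labels := []string{
		"20201220-171801F",
		"20201220-171801F_20201220-171939I",
		"20201220-171801F_20201220-172046I",
		"20201220-201305F",
	}
	// Deleting the oldest full backup expires everything built on it,
	// which is exactly what the repository listing above showed.
	target := "20201220-171801F"
	for _, l := range labels {
		if dependsOn(l, target) {
			fmt.Println("expired:", l)
		}
	}
	fmt.Println("full?", isFull("20201220-201305F"))
}
```

Because the labels begin with a timestamp, lexicographic order on full-backup labels also happens to be chronological order, which is handy when picking the oldest backup to delete.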
diff --git a/docs/content/pgo-client/reference/pgo_delete.md b/docs/content/pgo-client/reference/pgo_delete.md
index b2233b9c50..ea25f615b0 100644
--- a/docs/content/pgo-client/reference/pgo_delete.md
+++ b/docs/content/pgo-client/reference/pgo_delete.md
@@ -9,7 +9,7 @@ Delete an Operator resource
The delete command allows you to delete an Operator resource. For example:
- pgo delete backup mycluster
+ pgo delete backup mycluster --target=backup-name
pgo delete cluster mycluster
pgo delete cluster mycluster --delete-data
pgo delete cluster mycluster --delete-data --delete-backups
@@ -39,7 +39,7 @@ pgo delete [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -64,4 +64,4 @@ pgo delete [flags]
* [pgo delete schedule](/pgo-client/reference/pgo_delete_schedule/) - Delete a schedule
* [pgo delete user](/pgo-client/reference/pgo_delete_user/) - Delete a user
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 20-Dec-2020
diff --git a/docs/content/pgo-client/reference/pgo_delete_backup.md b/docs/content/pgo-client/reference/pgo_delete_backup.md
index 22bf95e3c9..67b40d1c53 100644
--- a/docs/content/pgo-client/reference/pgo_delete_backup.md
+++ b/docs/content/pgo-client/reference/pgo_delete_backup.md
@@ -7,9 +7,9 @@ Delete a backup
### Synopsis
-Delete a backup. For example:
+Delete a backup from pgBackRest. Requires a target backup. For example:
- pgo delete backup mydatabase
+ pgo delete backup hippo --target=20201220-171801F
```
pgo delete backup [flags]
@@ -18,13 +18,15 @@ pgo delete backup [flags]
### Options
```
- -h, --help help for backup
+ -h, --help help for backup
+ --no-prompt No command line confirmation.
+ --target string The backup to expire, e.g. "20201220-171801F". Use "pgo show backup" to determine the target.
```
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -38,4 +40,4 @@ pgo delete backup [flags]
* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 20-Dec-2020
diff --git a/docs/content/tutorial/disaster-recovery.md b/docs/content/tutorial/disaster-recovery.md
index ca05361674..0ad19d0fe3 100644
--- a/docs/content/tutorial/disaster-recovery.md
+++ b/docs/content/tutorial/disaster-recovery.md
@@ -183,6 +183,92 @@ When the restore is complete, the cluster is immediately available for reads and
The PostgreSQL Operator supports the full set of pgBackRest restore options, which can be passed into the `--backup-opts` parameter. For more information, please review the [pgBackRest restore options](https://pgbackrest.org/command.html#command-restore)
+## Deleting a Backup
+
+You typically do not want to delete backups. Instead, it's better to set a backup retention policy as part of [scheduling your backups](#schedule-backups).
+
+However, there are situations where you may want to explicitly delete backups: in particular, if you need to reclaim space on your backup disk or if you accidentally created too many backups.
+
+{{% notice warning %}}
+If you delete a backup that is *not* set to expire, you may be unable to meet your retention requirements. If you are deleting backups to free space, it is recommended to delete your oldest backups first.
+{{% /notice %}}
+
+In these cases, a backup can be deleted using the [`pgo delete backup`]({{< relref "pgo-client/reference/pgo_delete_backup.md" >}})
+command. You must specify a specific backup to delete using the `--target` flag. You can get the backup names from the [`pgo show backup`]({{< relref "pgo-client/reference/pgo_show_backup.md" >}}) command.
+
+Let's say that the `hippo` cluster currently has a set of backups that look like this, obtained from running the `pgo show backup hippo` command:
+
+```
+cluster: hippo
+storage type: local
+
+stanza: db
+ status: ok
+ cipher: none
+
+ db (current)
+ wal archive min/max (12-1)
+
+ full backup: 20201220-171801F
+ timestamp start/stop: 2020-12-20 17:18:01 +0000 UTC / 2020-12-20 17:18:10 +0000 UTC
+ wal start/stop: 000000010000000000000002 / 000000010000000000000002
+ database size: 31.3MiB, backup size: 31.3MiB
+ repository size: 3.8MiB, repository backup size: 3.8MiB
+ backup reference list:
+
+ incr backup: 20201220-171801F_20201220-171939I
+ timestamp start/stop: 2020-12-20 17:19:39 +0000 UTC / 2020-12-20 17:19:41 +0000 UTC
+ wal start/stop: 000000010000000000000005 / 000000010000000000000005
+ database size: 31.3MiB, backup size: 216.3KiB
+ repository size: 3.8MiB, repository backup size: 25.9KiB
+ backup reference list: 20201220-171801F
+
+ incr backup: 20201220-171801F_20201220-172046I
+ timestamp start/stop: 2020-12-20 17:20:46 +0000 UTC / 2020-12-20 17:23:29 +0000 UTC
+ wal start/stop: 00000001000000000000000A / 00000001000000000000000A
+ database size: 65.9MiB, backup size: 37.5MiB
+ repository size: 7.7MiB, repository backup size: 4.3MiB
+ backup reference list: 20201220-171801F, 20201220-171801F_20201220-171939I
+
+ full backup: 20201220-201305F
+ timestamp start/stop: 2020-12-20 20:13:05 +0000 UTC / 2020-12-20 20:13:15 +0000 UTC
+ wal start/stop: 00000001000000000000000F / 00000001000000000000000F
+ database size: 65.9MiB, backup size: 65.9MiB
+ repository size: 7.7MiB, repository backup size: 7.7MiB
+ backup reference list:
+```
+
+Note that the backup targets can be found after the backup type, e.g. `20201220-171801F` or `20201220-171801F_20201220-172046I`.
+
+You can delete the oldest backup, in this case `20201220-171801F`, by running the following command:
+
+```
+pgo delete backup hippo --target=20201220-171801F
+```
+
+You can then verify the backup is deleted with `pgo show backup hippo`:
+
+```
+cluster: hippo
+storage type: local
+
+stanza: db
+ status: ok
+ cipher: none
+
+ db (current)
+ wal archive min/max (12-1)
+
+ full backup: 20201220-201305F
+ timestamp start/stop: 2020-12-20 20:13:05 +0000 UTC / 2020-12-20 20:13:15 +0000 UTC
+ wal start/stop: 00000001000000000000000F / 00000001000000000000000F
+ database size: 65.9MiB, backup size: 65.9MiB
+ repository size: 7.7MiB, repository backup size: 7.7MiB
+ backup reference list:
+```
+
+Note that deleting the oldest backup also had the effect of deleting all of the backups that depended on it. This is a feature of [pgBackRest](https://pgbackrest.org/)!
+
## Next Steps
There are cases where you may want to take [logical backups]({{< relref "tutorial/pgdump.md" >}}), aka `pg_dump` / `pg_dumpall`. Let's learn how to do that with the PostgreSQL Operator!
diff --git a/internal/apiserver/backrestservice/backrestimpl.go b/internal/apiserver/backrestservice/backrestimpl.go
index 3206d7fcee..7acc51d052 100644
--- a/internal/apiserver/backrestservice/backrestimpl.go
+++ b/internal/apiserver/backrestservice/backrestimpl.go
@@ -42,9 +42,15 @@ import (
const containername = "database"
-// pgBackRestInfoCommand is the baseline command used for getting the
-// pgBackRest info
-var pgBackRestInfoCommand = []string{"pgbackrest", "info", "--output", "json"}
+var (
+ // pgBackRestExpireCommand is the baseline command used for deleting a
+ // pgBackRest backup
+ pgBackRestExpireCommand = []string{"pgbackrest", "expire", "--set"}
+
+ // pgBackRestInfoCommand is the baseline command used for getting the
+ // pgBackRest info
+ pgBackRestInfoCommand = []string{"pgbackrest", "info", "--output", "json"}
+)
// repoTypeFlagS3 is used for getting the pgBackRest info for a repository that
// is stored in S3
@@ -76,7 +82,7 @@ func CreateBackup(request *msgs.CreateBackrestBackupRequest, ns, pgouser string)
clusterList := crv1.PgclusterList{}
var err error
if request.Selector != "" {
- //use the selector instead of an argument list to filter on
+ // use the selector instead of an argument list to filter on
cl, err := apiserver.Clientset.
CrunchydataV1().Pgclusters(ns).
List(ctx, metav1.ListOptions{LabelSelector: request.Selector})
@@ -165,9 +171,7 @@ func CreateBackup(request *msgs.CreateBackrestBackupRequest, ns, pgouser string)
return resp
} else {
- //remove any previous backup job
-
- //selector := config.LABEL_PG_CLUSTER + "=" + clusterName + "," + config.LABEL_BACKREST + "=true"
+ // remove any previous backup job
selector := config.LABEL_BACKREST_COMMAND + "=" + crv1.PgtaskBackrestBackup + "," + config.LABEL_PG_CLUSTER + "=" + clusterName + "," + config.LABEL_BACKREST + "=true"
deletePropagation := metav1.DeletePropagationForeground
err = apiserver.Clientset.
@@ -179,7 +183,7 @@ func CreateBackup(request *msgs.CreateBackrestBackupRequest, ns, pgouser string)
log.Error(err)
}
- //a hack sort of due to slow propagation
+ // a hack sort of due to slow propagation
for i := 0; i < 3; i++ {
jobList, err := apiserver.Clientset.BatchV1().Jobs(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
if err != nil {
@@ -195,7 +199,7 @@ func CreateBackup(request *msgs.CreateBackrestBackupRequest, ns, pgouser string)
// get pod name from cluster
var podname string
- podname, err = getBackrestRepoPodName(cluster, ns)
+ podname, err = getBackrestRepoPodName(cluster)
if err != nil {
log.Error(err)
@@ -235,6 +239,54 @@ func CreateBackup(request *msgs.CreateBackrestBackupRequest, ns, pgouser string)
return resp
}
+// DeleteBackup deletes a specific backup from a pgBackRest repository
+func DeleteBackup(request msgs.DeleteBackrestBackupRequest) msgs.DeleteBackrestBackupResponse {
+ ctx := context.TODO()
+ response := msgs.DeleteBackrestBackupResponse{
+ Status: msgs.Status{
+ Code: msgs.Ok,
+ },
+ }
+
+ // first, make an attempt to get the cluster. if it does not exist, return
+ // an error
+ cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(request.Namespace).
+ Get(ctx, request.ClusterName, metav1.GetOptions{})
+ if err != nil {
+ log.Error(err)
+ response.Code = msgs.Error
+ response.Msg = err.Error()
+ return response
+ }
+
+	// attempt the deletion. the only way to find out whether the backup can
+	// actually be deleted is to try
+	log.Debugf("attempting to delete backup %q from cluster %q", request.Target, cluster.Name)
+
+ // first, get the pgbackrest Pod name
+ podName, err := getBackrestRepoPodName(cluster)
+ if err != nil {
+ log.Error(err)
+ response.Code = msgs.Error
+ response.Msg = err.Error()
+ return response
+ }
+
+ // set up the command
+ cmd := pgBackRestExpireCommand
+ cmd = append(cmd, request.Target)
+
+ // and execute. if there is an error, return it, otherwise we are done
+ if _, stderr, err := kubeapi.ExecToPodThroughAPI(apiserver.RESTConfig,
+ apiserver.Clientset, cmd, containername, podName, cluster.Spec.Namespace, nil); err != nil {
+ log.Error(stderr)
+ response.Code = msgs.Error
+ response.Msg = stderr
+ }
+
+ return response
+}
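One subtlety in `DeleteBackup` above: `cmd := pgBackRestExpireCommand` followed by `append` relies on the append allocating a new backing array rather than mutating the shared package-level slice. A defensive sketch that copies the baseline first (a standalone illustration, not the Operator's code):

```go
package main

import "fmt"

// baseline command, mirroring pgBackRestExpireCommand in backrestimpl.go
var pgBackRestExpireCommand = []string{"pgbackrest", "expire", "--set"}

// buildExpireCommand returns a fresh slice, so the shared baseline is never
// mutated even if it were to have spare capacity.
func buildExpireCommand(target string) []string {
	cmd := make([]string, 0, len(pgBackRestExpireCommand)+1)
	cmd = append(cmd, pgBackRestExpireCommand...)
	return append(cmd, target)
}

func main() {
	fmt.Println(buildExpireCommand("20201220-171801F"))
	// → [pgbackrest expire --set 20201220-171801F]
}
```

The resulting argument vector is what gets executed in the repository Pod via `kubeapi.ExecToPodThroughAPI`.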
+
func getBackupParams(identifier, clusterName, taskName, action, podName, containerName, imagePrefix, backupOpts, backrestStorageType, s3VerifyTLS, jobName, ns, pgouser string) *crv1.Pgtask {
var newInstance *crv1.Pgtask
spec := crv1.PgtaskSpec{}
@@ -270,10 +322,10 @@ func getBackupParams(identifier, clusterName, taskName, action, podName, contain
// getBackrestRepoPodName goes through the pod list to identify the
// pgBackRest repo pod and then returns the pod name.
-func getBackrestRepoPodName(cluster *crv1.Pgcluster, ns string) (string, error) {
+func getBackrestRepoPodName(cluster *crv1.Pgcluster) (string, error) {
ctx := context.TODO()
- //look up the backrest-repo pod name
+ // look up the backrest-repo pod name
selector := "pg-cluster=" + cluster.Spec.Name + ",pgo-backrest-repo=true"
options := metav1.ListOptions{
@@ -281,7 +333,7 @@ func getBackrestRepoPodName(cluster *crv1.Pgcluster, ns string) (string, error)
LabelSelector: selector,
}
- repopods, err := apiserver.Clientset.CoreV1().Pods(ns).List(ctx, options)
+ repopods, err := apiserver.Clientset.CoreV1().Pods(cluster.Namespace).List(ctx, options)
if len(repopods.Items) != 1 {
log.Errorf("pods len != 1 for cluster %s", cluster.Spec.Name)
return "", errors.New("backrestrepo pod not found for cluster " + cluster.Spec.Name)
@@ -301,7 +353,6 @@ func isPrimary(pod *v1.Pod, clusterName string) bool {
return true
}
return false
-
}
func isReady(pod *v1.Pod) bool {
@@ -317,7 +368,6 @@ func isReady(pod *v1.Pod) bool {
return false
}
return true
-
}
// isPrimaryReady goes through the pod list to first identify the
@@ -363,7 +413,7 @@ func ShowBackrest(name, selector, ns string) msgs.ShowBackrestResponse {
}
}
- //get a list of all clusters
+ // get a list of all clusters
clusterList, err := apiserver.Clientset.
CrunchydataV1().Pgclusters(ns).
List(ctx, metav1.ListOptions{LabelSelector: selector})
@@ -375,9 +425,10 @@ func ShowBackrest(name, selector, ns string) msgs.ShowBackrestResponse {
log.Debugf("clusters found len is %d\n", len(clusterList.Items))
- for _, c := range clusterList.Items {
- podname, err := getBackrestRepoPodName(&c, ns)
+ for i := range clusterList.Items {
+ c := &clusterList.Items[i]
+ podname, err := getBackrestRepoPodName(c)
if err != nil {
log.Error(err)
response.Status.Code = msgs.Error
@@ -409,11 +460,10 @@ func ShowBackrest(name, selector, ns string) msgs.ShowBackrestResponse {
StorageType: storageType,
}
- verifyTLS, _ := strconv.ParseBool(operator.GetS3VerifyTLSSetting(&c))
+ verifyTLS, _ := strconv.ParseBool(operator.GetS3VerifyTLSSetting(c))
// get the pgBackRest info using this legacy function
info, err := getInfo(c.Name, storageType, podname, ns, verifyTLS)
-
// see if the function returned successfully, and if so, unmarshal the JSON
if err != nil {
log.Error(err)
@@ -454,7 +504,6 @@ func getInfo(clusterName, storageType, podname, ns string, verifyTLS bool) (stri
}
output, stderr, err := kubeapi.ExecToPodThroughAPI(apiserver.RESTConfig, apiserver.Clientset, cmd, containername, podname, ns, nil)
-
if err != nil {
log.Error(err, stderr)
return "", err
@@ -559,7 +608,7 @@ func Restore(request *msgs.RestoreRequest, ns, pgouser string) msgs.RestoreRespo
return resp
}
- //create a pgtask for the restore workflow
+ // create a pgtask for the restore workflow
if _, err := apiserver.Clientset.CrunchydataV1().Pgtasks(ns).
Create(ctx, pgtask, metav1.CreateOptions{}); err != nil {
resp.Status.Code = msgs.Error
@@ -616,13 +665,13 @@ func createRestoreWorkflowTask(clusterName, ns string) (string, error) {
taskName := clusterName + "-" + crv1.PgtaskWorkflowBackrestRestoreType
- //delete any existing pgtask with the same name
+ // delete any existing pgtask with the same name
if err := apiserver.Clientset.CrunchydataV1().Pgtasks(ns).
Delete(ctx, taskName, metav1.DeleteOptions{}); err != nil && !kubeapi.IsNotFound(err) {
return "", err
}
- //create pgtask CRD
+ // create pgtask CRD
spec := crv1.PgtaskSpec{}
spec.Namespace = ns
spec.Name = clusterName + "-" + crv1.PgtaskWorkflowBackrestRestoreType
diff --git a/internal/apiserver/backrestservice/backrestservice.go b/internal/apiserver/backrestservice/backrestservice.go
index e436afb878..49a1edbbdb 100644
--- a/internal/apiserver/backrestservice/backrestservice.go
+++ b/internal/apiserver/backrestservice/backrestservice.go
@@ -17,12 +17,13 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
"github.com/crunchydata/postgres-operator/internal/apiserver"
"github.com/crunchydata/postgres-operator/internal/config"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
"github.com/gorilla/mux"
log "github.com/sirupsen/logrus"
- "net/http"
)
// CreateBackupHandler ...
@@ -76,6 +77,87 @@ func CreateBackupHandler(w http.ResponseWriter, r *http.Request) {
json.NewEncoder(w).Encode(resp)
}
+// DeleteBackrestHandler deletes a targeted backup from a pgBackRest repository
+// pgo delete backup hippo --target=pgbackrest-backup-id
+func DeleteBackrestHandler(w http.ResponseWriter, r *http.Request) {
+ // swagger:operation DELETE /backrest backrestservice
+ /*```
+ Delete a pgBackRest backup
+ */
+ // ---
+ // produces:
+ // - application/json
+ // parameters:
+ // - name: "Delete pgBackRest Backup"
+ // in: "body"
+ // schema:
+ // "$ref": "#/definitions/DeleteBackrestBackupRequest"
+ // responses:
+ // '200':
+ // description: Output
+ // schema:
+ // "$ref": "#/definitions/DeleteBackrestBackupResponse"
+ log.Debug("backrestservice.DeleteBackrestHandler called")
+
+ // first, check that the requesting user is authorized to make this request
+ username, err := apiserver.Authn(apiserver.DELETE_BACKUP_PERM, w, r)
+ if err != nil {
+ return
+ }
+
+	// decode the request parameters
+ var request msgs.DeleteBackrestBackupRequest
+
+ if err := json.NewDecoder(r.Body).Decode(&request); err != nil {
+ response := msgs.DeleteBackrestBackupResponse{
+ Status: msgs.Status{
+ Code: msgs.Error,
+ Msg: err.Error(),
+ },
+ }
+ _ = json.NewEncoder(w).Encode(response)
+ return
+ }
+
+ log.Debugf("DeleteBackrestHandler parameters [%+v]", request)
+
+	// set the response headers. Note: ideally the HTTP status would not be
+	// set before the request is processed
+ w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`)
+ w.Header().Set("Content-Type", "application/json")
+ w.WriteHeader(http.StatusOK)
+
+ // check that the client versions match. If they don't, error out
+ if request.ClientVersion != msgs.PGO_VERSION {
+ response := msgs.DeleteBackrestBackupResponse{
+ Status: msgs.Status{
+ Code: msgs.Error,
+ Msg: apiserver.VERSION_MISMATCH_ERROR,
+ },
+ }
+ _ = json.NewEncoder(w).Encode(response)
+ return
+ }
+
+ // ensure that the user has access to this namespace. if not, error out
+ if _, err := apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace); err != nil {
+ response := msgs.DeleteBackrestBackupResponse{
+ Status: msgs.Status{
+ Code: msgs.Error,
+ Msg: err.Error(),
+ },
+ }
+ _ = json.NewEncoder(w).Encode(response)
+ return
+ }
+
+ // process the request
+ response := DeleteBackup(request)
+
+ // turn the response into JSON
+ _ = json.NewEncoder(w).Encode(response)
+}
+
// ShowBackrestHandler ...
// returns a ShowBackrestResponse
func ShowBackrestHandler(w http.ResponseWriter, r *http.Request) {
@@ -150,7 +232,6 @@ func ShowBackrestHandler(w http.ResponseWriter, r *http.Request) {
resp = ShowBackrest(backupname, selector, ns)
json.NewEncoder(w).Encode(resp)
-
}
// RestoreHandler ...
diff --git a/internal/apiserver/routing/routes.go b/internal/apiserver/routing/routes.go
index 96de93403a..378651e12b 100644
--- a/internal/apiserver/routing/routes.go
+++ b/internal/apiserver/routing/routes.go
@@ -75,6 +75,7 @@ func RegisterAllRoutes(r *mux.Router) {
func RegisterBackrestSvcRoutes(r *mux.Router) {
r.HandleFunc("/backrestbackup", backrestservice.CreateBackupHandler).Methods("POST")
r.HandleFunc("/backrest/{name}", backrestservice.ShowBackrestHandler).Methods("GET")
+ r.HandleFunc("/backrest", backrestservice.DeleteBackrestHandler).Methods("DELETE")
r.HandleFunc("/restore", backrestservice.RestoreHandler).Methods("POST")
}
diff --git a/pkg/apiservermsgs/backrestmsgs.go b/pkg/apiservermsgs/backrestmsgs.go
index 12d72844b9..b1c4887fa9 100644
--- a/pkg/apiservermsgs/backrestmsgs.go
+++ b/pkg/apiservermsgs/backrestmsgs.go
@@ -32,6 +32,27 @@ type CreateBackrestBackupRequest struct {
BackrestStorageType string
}
+// DeleteBackrestBackupRequest ...
+// swagger:model
+type DeleteBackrestBackupRequest struct {
+ // ClientVersion represents the version of the client that is making the API
+ // request
+ ClientVersion string
+	// ClusterName is the name of the pgcluster from which the backup should
+	// be deleted
+	ClusterName string
+	// Namespace is the namespace that the cluster is in
+	Namespace string
+	// Target is the name of the backup to be deleted
+	Target string
+}
+
+// DeleteBackrestBackupResponse ...
+// swagger:model
+type DeleteBackrestBackupResponse struct {
+ Status
+}
+
// PgBackRestInfo and its associated structs are available for parsing the info
// that comes from the output of the "pgbackrest info --output json" command
type PgBackRestInfo struct {
From efbe87706ec63510dd545e0261248796d922d246 Mon Sep 17 00:00:00 2001
From: andrewlecuyer <43458182+andrewlecuyer@users.noreply.github.com>
Date: Mon, 21 Dec 2020 16:09:07 -0600
Subject: [PATCH 060/276] Update Init Flag if PGHA ConfigMap Already Exists
If an existing PGHA ConfigMap is identified when bootstrapping a new
PostgreSQL cluster, the Operator now updates the "init" flag within
the existing ConfigMap (specifically by setting it to "true"). This
ensures proper configuration of the "init" flag (and therefore the
proper functionality of any logic within the Operator that relies on
it, e.g. the execution of init logic within the 'crunchy-postgres-ha'
container) when the PGHA ConfigMap is pre-defined prior to PostgreSQL
cluster initialization.
Issue: [ch9986]
---
internal/controller/pod/inithandler.go | 6 ++++--
internal/operator/cluster/cluster.go | 21 +++++++++++++++++----
2 files changed, 21 insertions(+), 6 deletions(-)
diff --git a/internal/controller/pod/inithandler.go b/internal/controller/pod/inithandler.go
index 64b3134e8c..c33be8d2ed 100644
--- a/internal/controller/pod/inithandler.go
+++ b/internal/controller/pod/inithandler.go
@@ -115,8 +115,10 @@ func (c *Controller) handleCommonInit(cluster *crv1.Pgcluster) error {
cluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE], cluster.Namespace)
}
- operator.UpdatePGHAConfigInitFlag(c.Client, false, cluster.Name,
- cluster.Namespace)
+ if err := operator.UpdatePGHAConfigInitFlag(c.Client, false, cluster.Name,
+ cluster.Namespace); err != nil {
+ log.Error(err)
+ }
return nil
}
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 74840a8113..09d91ce454 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -85,8 +85,13 @@ func AddClusterBase(clientset kubeapi.Interface, cl *crv1.Pgcluster, namespace s
// logic following a restart of the container.
// If the configmap already exists, the cluster creation will continue as this is required
// for certain pgcluster upgrades.
- if err = operator.CreatePGHAConfigMap(clientset, cl, namespace); err != nil &&
- !kerrors.IsAlreadyExists(err) {
+ if err = operator.CreatePGHAConfigMap(clientset, cl,
+ namespace); kerrors.IsAlreadyExists(err) {
+ log.Infof("found existing pgha ConfigMap for cluster %s, setting init flag to 'true'",
+ cl.GetName())
+ err = operator.UpdatePGHAConfigInitFlag(clientset, true, cl.Name, cl.Namespace)
+ }
+ if err != nil {
log.Error(err)
publishClusterCreateFailure(cl, err.Error())
return
@@ -234,8 +239,16 @@ func AddClusterBootstrap(clientset kubeapi.Interface, cluster *crv1.Pgcluster) e
ctx := context.TODO()
namespace := cluster.GetNamespace()
- if err := operator.CreatePGHAConfigMap(clientset, cluster, namespace); err != nil &&
- !kerrors.IsAlreadyExists(err) {
+ var err error
+
+ if err = operator.CreatePGHAConfigMap(clientset, cluster,
+ namespace); kerrors.IsAlreadyExists(err) {
+ log.Infof("found existing pgha ConfigMap for cluster %s, setting init flag to 'true'",
+ cluster.GetName())
+ err = operator.UpdatePGHAConfigInitFlag(clientset, true, cluster.Name, cluster.Namespace)
+ }
+ if err != nil {
+ log.Error(err)
publishClusterCreateFailure(cluster, err.Error())
return err
}
From ea1d3d1a3a2b81667dc07d68eaad75532d5029c7 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 21 Dec 2020 09:06:50 -0500
Subject: [PATCH 061/276] Apply linter recommendations to codebase
This brings the codebase inline with the modern linting standards
that we have defined, and thus allows us to run the full linting
checks on any new changes.
---
cmd/apiserver/main.go | 9 +-
cmd/pgo-rmdata/main.go | 1 -
cmd/pgo-rmdata/process.go | 61 ++--
cmd/pgo-scheduler/main.go | 35 +--
.../scheduler/configmapcontroller.go | 1 -
.../scheduler/controllermanager.go | 13 -
cmd/pgo-scheduler/scheduler/pgbackrest.go | 13 +-
cmd/pgo-scheduler/scheduler/policy.go | 8 +-
cmd/pgo-scheduler/scheduler/scheduler.go | 13 +-
cmd/pgo/api/backrest.go | 4 +-
cmd/pgo/api/cat.go | 4 +-
cmd/pgo/api/cluster.go | 21 +-
cmd/pgo/api/config.go | 9 +-
cmd/pgo/api/df.go | 6 +-
cmd/pgo/api/failover.go | 13 +-
cmd/pgo/api/label.go | 5 +-
cmd/pgo/api/namespace.go | 24 +-
cmd/pgo/api/pgadmin.go | 11 +-
cmd/pgo/api/pgbouncer.go | 22 +-
cmd/pgo/api/pgdump.go | 10 +-
cmd/pgo/api/pgorole.go | 23 +-
cmd/pgo/api/pgouser.go | 23 +-
cmd/pgo/api/policy.go | 23 +-
cmd/pgo/api/pvc.go | 9 +-
cmd/pgo/api/reload.go | 4 +-
cmd/pgo/api/restart.go | 9 +-
cmd/pgo/api/restore.go | 4 +-
cmd/pgo/api/restoreDump.go | 4 +-
cmd/pgo/api/scale.go | 6 +-
cmd/pgo/api/scaledown.go | 11 +-
cmd/pgo/api/schedule.go | 10 +-
cmd/pgo/api/status.go | 9 +-
cmd/pgo/api/test.go | 9 +-
cmd/pgo/api/upgrade.go | 5 +-
cmd/pgo/api/user.go | 23 +-
cmd/pgo/api/version.go | 9 +-
cmd/pgo/api/workflow.go | 9 +-
cmd/pgo/cmd/auth.go | 15 +-
cmd/pgo/cmd/backrest.go | 5 +-
cmd/pgo/cmd/cat.go | 3 -
cmd/pgo/cmd/cluster.go | 13 +-
cmd/pgo/cmd/config.go | 3 -
cmd/pgo/cmd/create.go | 96 +++---
cmd/pgo/cmd/delete.go | 6 +-
cmd/pgo/cmd/df.go | 2 -
cmd/pgo/cmd/failover.go | 4 -
cmd/pgo/cmd/flags.go | 64 ++--
cmd/pgo/cmd/label.go | 11 +-
cmd/pgo/cmd/namespace.go | 8 +-
cmd/pgo/cmd/pgadmin.go | 2 -
cmd/pgo/cmd/pgbouncer.go | 6 -
cmd/pgo/cmd/pgdump.go | 6 +-
cmd/pgo/cmd/pgorole.go | 11 +-
cmd/pgo/cmd/pgouser.go | 11 +-
cmd/pgo/cmd/policy.go | 12 +-
cmd/pgo/cmd/pvc.go | 7 +-
cmd/pgo/cmd/reload.go | 6 +-
cmd/pgo/cmd/restart.go | 4 -
cmd/pgo/cmd/restore.go | 8 +-
cmd/pgo/cmd/root.go | 6 +-
cmd/pgo/cmd/scale.go | 2 -
cmd/pgo/cmd/scaledown.go | 6 +-
cmd/pgo/cmd/schedule.go | 5 -
cmd/pgo/cmd/show.go | 9 +-
cmd/pgo/cmd/status.go | 5 -
cmd/pgo/cmd/test.go | 10 +-
cmd/pgo/cmd/update.go | 8 +-
cmd/pgo/cmd/user.go | 5 -
cmd/pgo/cmd/version.go | 2 -
cmd/pgo/cmd/watch.go | 15 +-
cmd/pgo/cmd/workflow.go | 4 -
cmd/pgo/generatedocs.go | 1 -
cmd/pgo/main.go | 1 -
cmd/pgo/util/validation.go | 3 +-
cmd/postgres-operator/main.go | 5 +-
cmd/postgres-operator/open_telemetry.go | 2 +-
.../apiserver/backrestservice/backrestimpl.go | 26 +-
.../backrestservice/backrestservice.go | 14 +-
.../backupoptions/backupoptionsutil.go | 8 +-
.../backupoptions/pgbackrestoptions.go | 3 -
.../apiserver/backupoptions/pgdumpoptions.go | 6 -
internal/apiserver/catservice/catimpl.go | 5 +-
internal/apiserver/catservice/catservice.go | 9 +-
.../apiserver/clusterservice/clusterimpl.go | 78 +++--
.../clusterservice/clusterservice.go | 34 +--
.../apiserver/clusterservice/scaleimpl.go | 9 +-
.../apiserver/clusterservice/scaleservice.go | 20 +-
internal/apiserver/common.go | 3 +-
.../apiserver/configservice/configservice.go | 9 +-
internal/apiserver/dfservice/dfimpl.go | 5 +-
internal/apiserver/dfservice/dfservice.go | 8 +-
.../apiserver/failoverservice/failoverimpl.go | 6 +-
.../failoverservice/failoverservice.go | 15 +-
internal/apiserver/labelservice/labelimpl.go | 24 +-
.../apiserver/labelservice/labelservice.go | 15 +-
.../namespaceservice/namespaceimpl.go | 12 +-
.../namespaceservice/namespaceservice.go | 18 +-
internal/apiserver/perms.go | 7 +-
.../apiserver/pgadminservice/pgadminimpl.go | 10 +-
.../pgadminservice/pgadminservice.go | 22 +-
.../pgbouncerservice/pgbouncerimpl.go | 20 +-
.../pgbouncerservice/pgbouncerservice.go | 34 +--
.../apiserver/pgdumpservice/pgdumpimpl.go | 94 +-----
.../apiserver/pgdumpservice/pgdumpservice.go | 18 +-
.../apiserver/pgoroleservice/pgoroleimpl.go | 21 +-
.../pgoroleservice/pgoroleservice.go | 19 +-
.../apiserver/pgouserservice/pgouserimpl.go | 22 +-
.../pgouserservice/pgouserservice.go | 19 +-
.../apiserver/policyservice/policyimpl.go | 31 +-
.../apiserver/policyservice/policyservice.go | 24 +-
internal/apiserver/pvcservice/pvcservice.go | 9 +-
.../apiserver/reloadservice/reloadimpl.go | 1 -
.../apiserver/reloadservice/reloadservice.go | 9 +-
.../restartservice/restartservice.go | 18 +-
internal/apiserver/root.go | 50 ++-
.../apiserver/scheduleservice/scheduleimpl.go | 12 +-
.../scheduleservice/scheduleservice.go | 12 +-
.../apiserver/statusservice/statusimpl.go | 20 +-
.../apiserver/statusservice/statusservice.go | 9 +-
.../apiserver/upgradeservice/upgradeimpl.go | 3 -
.../upgradeservice/upgradeservice.go | 6 +-
internal/apiserver/userservice/userimpl.go | 50 +--
.../apiserver/userservice/userimpl_test.go | 9 -
internal/apiserver/userservice/userservice.go | 32 +-
.../versionservice/versionservice.go | 9 +-
.../apiserver/workflowservice/workflowimpl.go | 3 +-
.../workflowservice/workflowservice.go | 9 +-
internal/config/labels.go | 285 ++++++++++--------
internal/config/pgoconfig.go | 41 +--
internal/config/secrets.go | 1 +
internal/config/volumes.go | 18 +-
.../configmap/configmapcontroller.go | 4 -
internal/controller/configmap/synchandler.go | 7 +-
internal/controller/controllerutil.go | 5 +-
internal/controller/job/backresthandler.go | 23 +-
internal/controller/job/bootstraphandler.go | 2 +-
internal/controller/job/jobcontroller.go | 12 +-
internal/controller/job/jobevents.go | 8 +-
internal/controller/job/pgdumphandler.go | 4 +-
internal/controller/job/rmdatahandler.go | 5 +-
.../controller/manager/controllermanager.go | 12 -
internal/controller/manager/rbac.go | 3 -
.../namespace/namespacecontroller.go | 4 -
.../pgcluster/pgclustercontroller.go | 15 +-
.../controller/pgpolicy/pgpolicycontroller.go | 11 +-
.../pgreplica/pgreplicacontroller.go | 20 +-
.../controller/pgtask/pgtaskcontroller.go | 33 +-
internal/controller/pod/inithandler.go | 32 +-
internal/controller/pod/podcontroller.go | 11 +-
internal/controller/pod/podevents.go | 3 +-
internal/controller/pod/promotionhandler.go | 8 +-
internal/kubeapi/client_config.go | 6 +-
internal/kubeapi/fake/fakeclients.go | 2 -
internal/kubeapi/volumes_test.go | 4 +-
internal/logging/loglib.go | 8 +-
internal/ns/nslogic.go | 42 +--
internal/operator/backrest/backup.go | 10 +-
internal/operator/backrest/repo.go | 9 +-
internal/operator/backrest/restore.go | 10 +-
internal/operator/backrest/stanza.go | 5 +-
internal/operator/cluster/cluster.go | 41 ++-
internal/operator/cluster/clusterlogic.go | 22 +-
internal/operator/cluster/common.go | 1 +
internal/operator/cluster/failover.go | 26 +-
internal/operator/cluster/failoverlogic.go | 30 +-
internal/operator/cluster/pgadmin.go | 5 +-
internal/operator/cluster/pgbouncer.go | 2 +-
internal/operator/cluster/rmdata.go | 2 +-
internal/operator/cluster/service.go | 5 +-
internal/operator/cluster/standby.go | 7 +-
internal/operator/cluster/upgrade.go | 20 +-
internal/operator/clusterutilities.go | 37 +--
internal/operator/clusterutilities_test.go | 4 -
internal/operator/common.go | 23 +-
internal/operator/config/configutil.go | 6 +-
internal/operator/config/dcs.go | 7 -
internal/operator/config/localdb.go | 16 +-
.../operator/operatorupgrade/version-check.go | 12 +-
internal/operator/pgbackrest.go | 4 +-
internal/operator/pgdump/dump.go | 7 +-
internal/operator/pgdump/restore.go | 5 +-
internal/operator/pvc/pvc.go | 6 +-
internal/operator/storage.go | 2 +-
internal/operator/storage_test.go | 12 +-
internal/operator/task/applypolicies.go | 9 +-
internal/operator/task/rmbackups.go | 42 ---
internal/operator/task/rmdata.go | 12 +-
internal/operator/task/workflow.go | 7 +-
internal/patroni/patroni.go | 18 +-
internal/pgadmin/backoff.go | 1 +
internal/pgadmin/backoff_test.go | 1 -
internal/pgadmin/crypto_test.go | 6 +-
internal/pgadmin/hash.go | 2 +-
internal/pgadmin/runner.go | 12 +-
internal/postgres/password/md5.go | 8 +-
internal/postgres/password/md5_test.go | 1 -
internal/postgres/password/password.go | 6 +-
internal/postgres/password/password_test.go | 5 +-
internal/postgres/password/scram.go | 1 -
internal/postgres/password/scram_test.go | 8 +-
internal/tlsutil/primitives_test.go | 12 +-
internal/util/backrest.go | 4 +-
internal/util/cluster.go | 49 ++-
internal/util/failover.go | 33 +-
internal/util/pgbouncer.go | 1 +
internal/util/policy.go | 7 +-
internal/util/secrets.go | 5 -
internal/util/ssh.go | 1 +
internal/util/util.go | 17 +-
internal/util/util_test.go | 1 -
pkg/apis/crunchydata.com/v1/common.go | 1 -
pkg/apis/crunchydata.com/v1/register.go | 2 +-
pkg/apis/crunchydata.com/v1/task.go | 1 -
pkg/apiservermsgs/clustermsgs.go | 16 +-
pkg/apiservermsgs/usermsgs.go | 8 +-
pkg/apiservermsgs/usermsgs_test.go | 4 +-
pkg/events/eventing.go | 13 +-
pkg/events/eventtype.go | 2 +
218 files changed, 1321 insertions(+), 1754 deletions(-)
delete mode 100644 internal/operator/task/rmbackups.go
diff --git a/cmd/apiserver/main.go b/cmd/apiserver/main.go
index 2c3858bb63..8b6b1216af 100644
--- a/cmd/apiserver/main.go
+++ b/cmd/apiserver/main.go
@@ -34,8 +34,10 @@ import (
)
// Created as part of the apiserver.WriteTLSCert call
-const serverCertPath = "/tmp/server.crt"
-const serverKeyPath = "/tmp/server.key"
+const (
+ serverCertPath = "/tmp/server.crt"
+ serverKeyPath = "/tmp/server.key"
+)
func main() {
// Environment-overridden variables
@@ -147,8 +149,9 @@ func main() {
svrCertFile.Close()
}
+ // #nosec: G402
cfg := &tls.Config{
- //specify pgo-apiserver in the CN....then, add ServerName: "pgo-apiserver",
+ // specify pgo-apiserver in the CN....then, add ServerName: "pgo-apiserver",
ServerName: "pgo-apiserver",
ClientAuth: tls.VerifyClientCertIfGiven,
InsecureSkipVerify: tlsNoVerify,
diff --git a/cmd/pgo-rmdata/main.go b/cmd/pgo-rmdata/main.go
index 201b138130..b4c5c2c4fc 100644
--- a/cmd/pgo-rmdata/main.go
+++ b/cmd/pgo-rmdata/main.go
@@ -65,5 +65,4 @@ func main() {
log.Infof("request is %s", request.String())
Delete(request)
-
}
diff --git a/cmd/pgo-rmdata/process.go b/cmd/pgo-rmdata/process.go
index d0d79744f6..b1eba3bc95 100644
--- a/cmd/pgo-rmdata/process.go
+++ b/cmd/pgo-rmdata/process.go
@@ -49,7 +49,7 @@ func Delete(request Request) {
ctx := context.TODO()
log.Infof("rmdata.Process %v", request)
- //the case of 'pgo scaledown'
+ // the case of 'pgo scaledown'
if request.IsReplica {
log.Info("rmdata.Process scaledown replica use case")
removeReplicaServices(request)
@@ -57,7 +57,7 @@ func Delete(request Request) {
if err != nil {
log.Error(err)
}
- //delete the pgreplica CRD
+ // delete the pgreplica CRD
if err := request.Clientset.
CrunchydataV1().Pgreplicas(request.Namespace).
Delete(ctx, request.ReplicaName, metav1.DeleteOptions{}); err != nil {
@@ -83,13 +83,13 @@ func Delete(request Request) {
removePVCs(pvcList, request)
}
- //scale down is its own use case so we leave when done
+ // scale down is its own use case so we leave when done
return
}
if request.IsBackup {
log.Info("rmdata.Process backup use case")
- //the case of removing a backup using `pgo delete backup`, only applies to
+ // the case of removing a backup using `pgo delete backup`, only applies to
// "backup-type=pgdump"
removeBackupJobs(request)
removeLogicalBackupPVCs(request)
@@ -104,13 +104,13 @@ func Delete(request Request) {
// executing asynchronously against any stale data
removeSchedules(request)
- //the user had done something like:
- //pgo delete cluster mycluster --delete-data
+ // the user had done something like:
+ // pgo delete cluster mycluster --delete-data
if request.RemoveData {
removeUserSecrets(request)
}
- //handle the case of 'pgo delete cluster mycluster'
+ // handle the case of 'pgo delete cluster mycluster'
removeCluster(request)
if err := request.Clientset.
CrunchydataV1().Pgclusters(request.Namespace).
@@ -122,7 +122,7 @@ func Delete(request Request) {
removePgreplicas(request)
removePgtasks(request)
removeClusterConfigmaps(request)
- //removeClusterJobs(request)
+ // removeClusterJobs(request)
if request.RemoveData {
if pvcList, err := getInstancePVCs(request); err != nil {
log.Error(err)
@@ -171,7 +171,7 @@ func removeBackrestRepo(request Request) {
log.Error(err)
}
- //delete the service for the backrest repo
+ // delete the service for the backrest repo
err = request.Clientset.
CoreV1().Services(request.Namespace).
Delete(ctx, deploymentName, metav1.DeleteOptions{})
@@ -260,12 +260,11 @@ func removeCluster(request Request) {
selector := fmt.Sprintf("%s=%s,%s!=true",
config.LABEL_PG_CLUSTER, request.ClusterName, config.LABEL_PGO_BACKREST_REPO)
+ // if there is an error here, return as we cannot iterate over the deployment
+ // list
deployments, err := request.Clientset.
AppsV1().Deployments(request.Namespace).
List(ctx, metav1.ListOptions{LabelSelector: selector})
-
- // if there is an error here, return as we cannot iterate over the deployment
- // list
if err != nil {
log.Error(err)
return
@@ -315,7 +314,7 @@ func removeReplica(request Request) error {
return err
}
- //wait for the deployment to go away fully
+ // wait for the deployment to go away fully
var completed bool
for i := 0; i < maximumTries; i++ {
_, err = request.Clientset.
@@ -337,7 +336,7 @@ func removeReplica(request Request) error {
func removeUserSecrets(request Request) {
ctx := context.TODO()
- //get all that match pg-cluster=db
+ // get all that match pg-cluster=db
selector := config.LABEL_PG_CLUSTER + "=" + request.ClusterName
secrets, err := request.Clientset.
@@ -356,12 +355,11 @@ func removeUserSecrets(request Request) {
}
}
}
-
}
func removeAddons(request Request) {
ctx := context.TODO()
- //remove pgbouncer
+ // remove pgbouncer
pgbouncerDepName := request.ClusterName + "-pgbouncer"
@@ -370,7 +368,7 @@ func removeAddons(request Request) {
AppsV1().Deployments(request.Namespace).
Delete(ctx, pgbouncerDepName, metav1.DeleteOptions{PropagationPolicy: &deletePropagation})
- //delete the service name=-pgbouncer
+ // delete the service name=-pgbouncer
_ = request.Clientset.
CoreV1().Services(request.Namespace).
@@ -380,7 +378,7 @@ func removeAddons(request Request) {
func removeServices(request Request) {
ctx := context.TODO()
- //remove any service for this cluster
+ // remove any service for this cluster
selector := config.LABEL_PG_CLUSTER + "=" + request.ClusterName
@@ -400,13 +398,12 @@ func removeServices(request Request) {
log.Error(err)
}
}
-
}
func removePgreplicas(request Request) {
ctx := context.TODO()
- //get a list of pgreplicas for this cluster
+ // get a list of pgreplicas for this cluster
replicaList, err := request.Clientset.CrunchydataV1().Pgreplicas(request.Namespace).List(ctx, metav1.ListOptions{
LabelSelector: config.LABEL_PG_CLUSTER + "=" + request.ClusterName,
})
@@ -424,13 +421,12 @@ func removePgreplicas(request Request) {
log.Warn(err)
}
}
-
}
func removePgtasks(request Request) {
ctx := context.TODO()
- //get a list of pgtasks for this cluster
+ // get a list of pgtasks for this cluster
taskList, err := request.Clientset.
CrunchydataV1().Pgtasks(request.Namespace).
List(ctx, metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER + "=" + request.ClusterName})
@@ -446,7 +442,6 @@ func removePgtasks(request Request) {
log.Warn(err)
}
}
-
}
// getInstancePVCs gets all the PVCs that are associated with PostgreSQL
@@ -461,11 +456,10 @@ func getInstancePVCs(request Request) ([]string, error) {
log.Debugf("instance pvcs overall selector: [%s]", selector)
// get all of the PVCs to analyze (see the step below)
+ // if there is an error, return here and log the error in the calling function
pvcs, err := request.Clientset.
CoreV1().PersistentVolumeClaims(request.Namespace).
List(ctx, metav1.ListOptions{LabelSelector: selector})
-
- // if there is an error, return here and log the error in the calling function
if err != nil {
return pvcList, err
}
@@ -496,14 +490,14 @@ func getInstancePVCs(request Request) ([]string, error) {
return pvcList, nil
}
-//get the pvc for this replica deployment
+// get the pvc for this replica deployment
func getReplicaPVC(request Request) ([]string, error) {
ctx := context.TODO()
pvcList := make([]string, 0)
- //at this point, the naming convention is useful
- //and ClusterName is the replica deployment name
- //when isReplica=true
+ // at this point, the naming convention is useful
+ // and ClusterName is the replica deployment name
+ // when isReplica=true
pvcList = append(pvcList, request.ReplicaName)
// see if there are any tablespaces or WAL volumes assigned to this replica,
@@ -515,11 +509,10 @@ func getReplicaPVC(request Request) ([]string, error) {
selector := fmt.Sprintf("%s=%s", config.LABEL_PG_CLUSTER, request.ClusterName)
// get all of the PVCs that are specific to this replica and remove them
+ // if there is an error, return here and log the error in the calling function
pvcs, err := request.Clientset.
CoreV1().PersistentVolumeClaims(request.Namespace).
List(ctx, metav1.ListOptions{LabelSelector: selector})
-
- // if there is an error, return here and log the error in the calling function
if err != nil {
return pvcList, err
}
@@ -547,7 +540,7 @@ func getReplicaPVC(request Request) ([]string, error) {
return pvcList, nil
}
-func removePVCs(pvcList []string, request Request) error {
+func removePVCs(pvcList []string, request Request) {
ctx := context.TODO()
for _, p := range pvcList {
@@ -560,9 +553,6 @@ func removePVCs(pvcList []string, request Request) error {
log.Error(err)
}
}
-
- return nil
-
}
// removeBackupJobs removes any job associated with a backup. These include:
@@ -591,7 +581,6 @@ func removeBackupJobs(request Request) {
jobs, err := request.Clientset.
BatchV1().Jobs(request.Namespace).
List(ctx, metav1.ListOptions{LabelSelector: selector})
-
if err != nil {
log.Error(err)
continue
diff --git a/cmd/pgo-scheduler/main.go b/cmd/pgo-scheduler/main.go
index 4de3ac6cfb..63e0c17a00 100644
--- a/cmd/pgo-scheduler/main.go
+++ b/cmd/pgo-scheduler/main.go
@@ -19,7 +19,6 @@ import (
"fmt"
"os"
"os/signal"
- "strconv"
"syscall"
"time"
@@ -40,16 +39,15 @@ import (
const (
schedulerLabel = "crunchy-scheduler=true"
pgoNamespaceEnv = "PGO_OPERATOR_NAMESPACE"
- timeoutEnv = "TIMEOUT"
namespaceWorkerCount = 1
)
-var nsRefreshInterval = 10 * time.Minute
-var installationName string
-var pgoNamespace string
-var timeout time.Duration
-var seconds int
-var clientset kubeapi.Interface
+var (
+ nsRefreshInterval = 10 * time.Minute
+ installationName string
+ pgoNamespace string
+ clientset kubeapi.Interface
+)
// NamespaceOperatingMode defines the namespace operating mode for the cluster,
// e.g. "dynamic", "readonly" or "disabled". See type NamespaceOperatingMode
@@ -61,7 +59,7 @@ func init() {
log.SetLevel(log.InfoLevel)
debugFlag := os.Getenv("CRUNCHY_DEBUG")
- //add logging configuration
+ // add logging configuration
crunchylog.CrunchyLogger(crunchylog.SetParameters())
if debugFlag == "true" {
log.SetLevel(log.DebugLevel)
@@ -82,20 +80,6 @@ func init() {
log.WithFields(log.Fields{}).Fatalf("Failed to get PGO_OPERATOR_NAMESPACE environment: %s", pgoNamespaceEnv)
}
- secondsEnv := os.Getenv(timeoutEnv)
- seconds = 300
- if secondsEnv == "" {
- log.WithFields(log.Fields{}).Info("No timeout set, defaulting to 300 seconds")
- } else {
- seconds, err = strconv.Atoi(secondsEnv)
- if err != nil {
- log.WithFields(log.Fields{}).Fatalf("Failed to convert timeout env to seconds: %s", err)
- }
- }
-
- log.WithFields(log.Fields{}).Infof("Setting timeout to: %d", seconds)
- timeout = time.Second * time.Duration(seconds)
-
clientset, err = kubeapi.NewClient()
if err != nil {
log.WithFields(log.Fields{}).Fatalf("Failed to connect to kubernetes: %s", err)
@@ -116,7 +100,7 @@ func init() {
func main() {
log.Info("Starting Crunchy Scheduler")
- //give time for pgo-event to start up
+ // give time for pgo-event to start up
time.Sleep(time.Duration(5) * time.Second)
scheduler := sched.New(schedulerLabel, pgoNamespace, clientset)
@@ -150,7 +134,7 @@ func main() {
log.WithFields(log.Fields{}).Fatalf("Failed to create controller manager: %s", err)
os.Exit(2)
}
- controllerManager.RunAll()
+ _ = controllerManager.RunAll()
// if the namespace operating mode is not disabled, then create and start a namespace
// controller
@@ -211,7 +195,6 @@ func setNamespaceOperatingMode(clientset kubernetes.Interface) error {
// createAndStartNamespaceController creates a namespace controller and then starts it
func createAndStartNamespaceController(kubeClientset kubernetes.Interface,
controllerManager controller.Manager, stopCh <-chan struct{}) error {
-
nsKubeInformerFactory := kubeinformers.NewSharedInformerFactoryWithOptions(kubeClientset,
nsRefreshInterval,
kubeinformers.WithTweakListOptions(func(options *metav1.ListOptions) {
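The `_ = controllerManager.RunAll()` change above makes the deliberate discard of the returned error explicit, which is what linters such as errcheck look for. A minimal sketch of the pattern, using a hypothetical `runAll` in place of the real method:

```go
package main

import "fmt"

// runAll is a hypothetical stand-in for controllerManager.RunAll(), which
// returns an error this caller deliberately does not act on.
func runAll() error {
	fmt.Println("controllers running")
	return nil
}

func main() {
	// Assigning to the blank identifier documents that the error is dropped
	// on purpose, so linters such as errcheck stop flagging the call.
	_ = runAll()
}
```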
diff --git a/cmd/pgo-scheduler/scheduler/configmapcontroller.go b/cmd/pgo-scheduler/scheduler/configmapcontroller.go
index 41372f96b5..95d21d883c 100644
--- a/cmd/pgo-scheduler/scheduler/configmapcontroller.go
+++ b/cmd/pgo-scheduler/scheduler/configmapcontroller.go
@@ -62,7 +62,6 @@ func (c *Controller) onDelete(obj interface{}) {
// AddConfigMapEventHandler adds the pgcluster event handler to the pgcluster informer
func (c *Controller) AddConfigMapEventHandler() {
-
c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: c.onAdd,
DeleteFunc: c.onDelete,
diff --git a/cmd/pgo-scheduler/scheduler/controllermanager.go b/cmd/pgo-scheduler/scheduler/controllermanager.go
index 41c09ef8ef..055527fc64 100644
--- a/cmd/pgo-scheduler/scheduler/controllermanager.go
+++ b/cmd/pgo-scheduler/scheduler/controllermanager.go
@@ -57,7 +57,6 @@ type controllerGroup struct {
// namespace included in the 'namespaces' parameter.
func NewControllerManager(namespaces []string, scheduler *Scheduler, installationName string,
namespaceOperatingMode ns.NamespaceOperatingMode) (*ControllerManager, error) {
-
controllerManager := ControllerManager{
controllers: make(map[string]*controllerGroup),
installationName: installationName,
@@ -87,7 +86,6 @@ func NewControllerManager(namespaces []string, scheduler *Scheduler, installatio
// informers for this resource. Each controller group also receives its own clients, which can then
// be utilized by the controller within the controller group.
func (c *ControllerManager) AddGroup(namespace string) error {
-
c.mgrMutex.Lock()
defer c.mgrMutex.Unlock()
@@ -103,7 +101,6 @@ func (c *ControllerManager) AddGroup(namespace string) error {
// AddAndRunGroup is a convenience function that adds a controller group for the
// namespace specified, and then immediately runs the controllers in that group.
func (c *ControllerManager) AddAndRunGroup(namespace string) error {
-
if c.controllers[namespace] != nil {
// first try to clean if one is not already in progress
if err := c.clean(namespace); err != nil {
@@ -137,7 +134,6 @@ func (c *ControllerManager) AddAndRunGroup(namespace string) error {
// RemoveAll removes all controller groups managed by the controller manager, first stopping all
// controllers within each controller group managed by the controller manager.
func (c *ControllerManager) RemoveAll() {
-
c.mgrMutex.Lock()
defer c.mgrMutex.Unlock()
@@ -151,7 +147,6 @@ func (c *ControllerManager) RemoveAll() {
// RemoveGroup removes the controller group for the namespace specified, first stopping all
// controllers within that group
func (c *ControllerManager) RemoveGroup(namespace string) {
-
c.mgrMutex.Lock()
defer c.mgrMutex.Unlock()
@@ -160,7 +155,6 @@ func (c *ControllerManager) RemoveGroup(namespace string) {
// RunAll runs all controllers across all controller groups managed by the controller manager.
func (c *ControllerManager) RunAll() error {
-
c.mgrMutex.Lock()
defer c.mgrMutex.Unlock()
@@ -177,7 +171,6 @@ func (c *ControllerManager) RunAll() error {
// RunGroup runs the controllers within the controller group for the namespace specified.
func (c *ControllerManager) RunGroup(namespace string) error {
-
c.mgrMutex.Lock()
defer c.mgrMutex.Unlock()
@@ -198,7 +191,6 @@ func (c *ControllerManager) RunGroup(namespace string) error {
// addControllerGroup adds a new controller group for the namespace specified
func (c *ControllerManager) addControllerGroup(namespace string) error {
-
if _, ok := c.controllers[namespace]; ok {
log.Debugf("Controller Manager: a controller for namespace %s already exists", namespace)
return controller.ErrControllerGroupExists
@@ -241,7 +233,6 @@ func (c *ControllerManager) addControllerGroup(namespace string) error {
// clean removes any controller groups that no longer correspond to a valid namespace within
// the Kubernetes cluster, e.g. in the event that a namespace has been deleted.
func (c *ControllerManager) clean(namespace string) error {
-
if !c.sem.TryAcquire(1) {
return fmt.Errorf("controller group clean already in progress, namespace %s will not "+
"clean", namespace)
@@ -278,7 +269,6 @@ func (c *ControllerManager) clean(namespace string) error {
// hasListerPrivs verifies the Operator has the privileges required to start the controllers
// for the namespace specified.
func (c *ControllerManager) hasListerPrivs(namespace string) bool {
-
controllerGroup := c.controllers[namespace]
var err error
@@ -301,7 +291,6 @@ func (c *ControllerManager) hasListerPrivs(namespace string) bool {
// runControllerGroup is responsible for running the controllers for the controller group corresponding
// to the namespace provided
func (c *ControllerManager) runControllerGroup(namespace string) error {
-
controllerGroup := c.controllers[namespace]
hasListerPrivs := c.hasListerPrivs(namespace)
@@ -335,7 +324,6 @@ func (c *ControllerManager) runControllerGroup(namespace string) error {
// queues associated with the controllers inside of the controller group are first shutdown
// prior to removing the controller group.
func (c *ControllerManager) removeControllerGroup(namespace string) {
-
if _, ok := c.controllers[namespace]; !ok {
log.Debugf("Controller Manager: no controller group to remove for ns %s", namespace)
return
@@ -351,7 +339,6 @@ func (c *ControllerManager) removeControllerGroup(namespace string) {
// done by calling the ShutdownWorker function associated with the controller. If the controller
// does not have a ShutdownWorker function then no action is taken.
func (c *ControllerManager) stopControllerGroup(namespace string) {
-
if _, ok := c.controllers[namespace]; !ok {
log.Debugf("Controller Manager: unable to stop controller group for namespace %s because "+
"a controller group for this namespace does not exist", namespace)
diff --git a/cmd/pgo-scheduler/scheduler/pgbackrest.go b/cmd/pgo-scheduler/scheduler/pgbackrest.go
index 710e1f12d2..ef0ba90a00 100644
--- a/cmd/pgo-scheduler/scheduler/pgbackrest.go
+++ b/cmd/pgo-scheduler/scheduler/pgbackrest.go
@@ -62,7 +62,8 @@ func (b BackRestBackupJob) Run() {
"container": b.container,
"backupType": b.backupType,
"cluster": b.cluster,
- "storageType": b.storageType})
+ "storageType": b.storageType,
+ })
contextLogger.Info("Running pgBackRest backup")
@@ -76,11 +77,11 @@ func (b BackRestBackupJob) Run() {
taskName := fmt.Sprintf("%s-%s-sch-backup", b.cluster, b.backupType)
- //if the cluster is found, check for an annotation indicating it has not been upgraded
- //if the annotation does not exist, then it is a new cluster and proceed as usual
- //if the annotation is set to "true", the cluster has already been upgraded and can proceed but
- //if the annotation is set to "false", this cluster will need to be upgraded before proceeding
- //log the issue, then return
+ // if the cluster is found, check for an annotation indicating it has not been upgraded
+ // if the annotation does not exist, then it is a new cluster and proceed as usual
+ // if the annotation is set to "true", the cluster has already been upgraded and can proceed but
+ // if the annotation is set to "false", this cluster will need to be upgraded before proceeding
+ // log the issue, then return
if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE {
contextLogger.WithFields(log.Fields{
"task": taskName,
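Several hunks here (pgbackrest.go, and policy.go below) move the closing `})` of a `log.Fields` literal onto its own line. That layout is only legal in Go with a trailing comma after the final element, which is why each reformatted literal gains one. A minimal sketch with illustrative keys:

```go
package main

import "fmt"

func main() {
	// When the closing brace of a composite literal is moved to its own line,
	// Go requires a comma after the last element; gofmt then keeps this layout.
	fields := map[string]string{
		"namespace": "pgo",
		"cluster":   "hippo", // trailing comma is mandatory here
	}
	fmt.Println(len(fields)) // prints "2"
}
```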
diff --git a/cmd/pgo-scheduler/scheduler/policy.go b/cmd/pgo-scheduler/scheduler/policy.go
index e2be356d07..15a93cf27f 100644
--- a/cmd/pgo-scheduler/scheduler/policy.go
+++ b/cmd/pgo-scheduler/scheduler/policy.go
@@ -60,7 +60,8 @@ func (p PolicyJob) Run() {
contextLogger := log.WithFields(log.Fields{
"namespace": p.namespace,
"policy": p.policy,
- "cluster": p.cluster})
+ "cluster": p.cluster,
+ })
contextLogger.Info("Running Policy schedule")
@@ -98,7 +99,7 @@ func (p PolicyJob) Run() {
data := make(map[string]string)
data[filename] = string(policy.Spec.SQL)
- var labels = map[string]string{
+ labels := map[string]string{
"pg-cluster": p.cluster,
}
labels["pg-cluster"] = p.cluster
@@ -146,7 +147,8 @@ func (p PolicyJob) Run() {
var doc bytes.Buffer
if err := config.PolicyJobTemplate.Execute(&doc, policyJob); err != nil {
contextLogger.WithFields(log.Fields{
- "error": err}).Error("Failed to render job template")
+ "error": err,
+ }).Error("Failed to render job template")
return
}
diff --git a/cmd/pgo-scheduler/scheduler/scheduler.go b/cmd/pgo-scheduler/scheduler/scheduler.go
index a0f9b3dc46..8d6d326936 100644
--- a/cmd/pgo-scheduler/scheduler/scheduler.go
+++ b/cmd/pgo-scheduler/scheduler/scheduler.go
@@ -32,8 +32,8 @@ import (
func New(label, namespace string, client kubeapi.Interface) *Scheduler {
clientset = client
cronClient := cv3.New()
- cronClient.AddFunc("* * * * *", phony)
- cronClient.AddFunc("* * * * *", heartbeat)
+ _, _ = cronClient.AddFunc("* * * * *", phony)
+ _, _ = cronClient.AddFunc("* * * * *", heartbeat)
return &Scheduler{
namespace: namespace,
@@ -56,17 +56,17 @@ func (s *Scheduler) AddSchedule(config *v1.ConfigMap) error {
var schedule ScheduleTemplate
for _, data := range config.Data {
if err := json.Unmarshal([]byte(data), &schedule); err != nil {
- return fmt.Errorf("Failed unmarhsaling configMap: %s", err)
+ return fmt.Errorf("Failed unmarshaling configMap: %w", err)
}
}
if err := validate(schedule); err != nil {
- return fmt.Errorf("Failed to validate schedule: %s", err)
+ return fmt.Errorf("Failed to validate schedule: %w", err)
}
id, err := s.schedule(schedule)
if err != nil {
- return fmt.Errorf("Failed to schedule configmap: %s", err)
+ return fmt.Errorf("Failed to schedule configmap: %w", err)
}
log.WithFields(log.Fields{
@@ -117,7 +117,8 @@ func phony() {
// heartbeat modifies a sentinel file used as part of the liveness test
// for the scheduler
func heartbeat() {
- err := ioutil.WriteFile("/tmp/scheduler.hb", []byte(time.Now().String()), 0644)
+ // #nosec: G303
+ err := ioutil.WriteFile("/tmp/scheduler.hb", []byte(time.Now().String()), 0o600)
if err != nil {
log.Errorln("error writing heartbeat file: ", err)
}
diff --git a/cmd/pgo/api/backrest.go b/cmd/pgo/api/backrest.go
index e0e087efbf..ff157058fe 100644
--- a/cmd/pgo/api/backrest.go
+++ b/cmd/pgo/api/backrest.go
@@ -17,6 +17,7 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
"net/http"
@@ -34,11 +35,12 @@ func DeleteBackup(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCre
log.Debugf("DeleteBackup called [%+v]", request)
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/backrest"
action := "DELETE"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
diff --git a/cmd/pgo/api/cat.go b/cmd/pgo/api/cat.go
index 00d17c7fb6..1da601649e 100644
--- a/cmd/pgo/api/cat.go
+++ b/cmd/pgo/api/cat.go
@@ -18,13 +18,13 @@ package api
import (
"bytes"
"encoding/json"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func Cat(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CatRequest) (msgs.CatResponse, error) {
-
var response msgs.CatResponse
jsonValue, _ := json.Marshal(request)
diff --git a/cmd/pgo/api/cluster.go b/cmd/pgo/api/cluster.go
index 74407c0dbc..c51425b09b 100644
--- a/cmd/pgo/api/cluster.go
+++ b/cmd/pgo/api/cluster.go
@@ -17,6 +17,7 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
"net/http"
@@ -33,15 +34,15 @@ const (
)
func ShowCluster(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ShowClusterRequest) (msgs.ShowClusterResponse, error) {
-
var response msgs.ShowClusterResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := fmt.Sprintf(showClusterURL, SessionCredentials.APIServerURL)
log.Debugf("showCluster called...[%s]", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -68,20 +69,19 @@ func ShowCluster(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCred
}
return response, err
-
}
func DeleteCluster(httpclient *http.Client, request *msgs.DeleteClusterRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.DeleteClusterResponse, error) {
-
var response msgs.DeleteClusterResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := fmt.Sprintf(deleteClusterURL, SessionCredentials.APIServerURL)
log.Debugf("delete cluster called %s", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
response.Status.Code = msgs.Error
return response, err
@@ -110,19 +110,18 @@ func DeleteCluster(httpclient *http.Client, request *msgs.DeleteClusterRequest,
}
return response, err
-
}
func CreateCluster(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreateClusterRequest) (msgs.CreateClusterResponse, error) {
-
var response msgs.CreateClusterResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := fmt.Sprintf(createClusterURL, SessionCredentials.APIServerURL)
log.Debugf("createCluster called...[%s]", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -152,15 +151,14 @@ func CreateCluster(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCr
}
func UpdateCluster(httpclient *http.Client, request *msgs.UpdateClusterRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.UpdateClusterResponse, error) {
- //func UpdateCluster(httpclient *http.Client, arg, selector string, SessionCredentials *msgs.BasicAuthCredentials, autofailFlag, ns string) (msgs.UpdateClusterResponse, error) {
-
var response msgs.UpdateClusterResponse
jsonValue, _ := json.Marshal(request)
+ ctx := context.TODO()
url := fmt.Sprintf(updateClusterURL, SessionCredentials.APIServerURL)
log.Debugf("update cluster called %s", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
response.Status.Code = msgs.Error
return response, err
@@ -189,5 +187,4 @@ func UpdateCluster(httpclient *http.Client, request *msgs.UpdateClusterRequest,
}
return response, err
-
}
diff --git a/cmd/pgo/api/config.go b/cmd/pgo/api/config.go
index 90848edcfd..c64be16cd1 100644
--- a/cmd/pgo/api/config.go
+++ b/cmd/pgo/api/config.go
@@ -16,21 +16,23 @@ package api
*/
import (
+ "context"
"encoding/json"
"fmt"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func ShowConfig(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ShowConfigResponse, error) {
-
var response msgs.ShowConfigResponse
+ ctx := context.TODO()
url := SessionCredentials.APIServerURL + "/config?version=" + msgs.PGO_VERSION + "&namespace=" + ns
log.Debug(url)
- req, err := http.NewRequest("GET", url, nil)
+ req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return response, err
}
@@ -59,5 +61,4 @@ func ShowConfig(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCrede
}
return response, err
-
}
diff --git a/cmd/pgo/api/df.go b/cmd/pgo/api/df.go
index fa993051aa..cd64ea10da 100644
--- a/cmd/pgo/api/df.go
+++ b/cmd/pgo/api/df.go
@@ -17,6 +17,7 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
"net/http"
@@ -33,12 +34,12 @@ func ShowDf(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentia
log.Debugf("ShowDf called [%+v]", request)
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/df"
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
-
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -47,7 +48,6 @@ func ShowDf(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentia
req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password)
resp, err := httpclient.Do(req)
-
if err != nil {
return response, err
}
diff --git a/cmd/pgo/api/failover.go b/cmd/pgo/api/failover.go
index 4ebbab9471..61731da12e 100644
--- a/cmd/pgo/api/failover.go
+++ b/cmd/pgo/api/failover.go
@@ -17,24 +17,26 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func CreateFailover(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreateFailoverRequest) (msgs.CreateFailoverResponse, error) {
-
var response msgs.CreateFailoverResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/failover"
log.Debugf("create failover called [%s]", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -63,15 +65,15 @@ func CreateFailover(httpclient *http.Client, SessionCredentials *msgs.BasicAuthC
}
func QueryFailover(httpclient *http.Client, arg string, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.QueryFailoverResponse, error) {
-
var response msgs.QueryFailoverResponse
+ ctx := context.TODO()
url := SessionCredentials.APIServerURL + "/failover/" + arg + "?version=" + msgs.PGO_VERSION + "&namespace=" + ns
log.Debugf("query failover called [%s]", url)
action := "GET"
- req, err := http.NewRequest(action, url, nil)
+ req, err := http.NewRequestWithContext(ctx, action, url, nil)
if err != nil {
return response, err
}
@@ -97,5 +99,4 @@ func QueryFailover(httpclient *http.Client, arg string, SessionCredentials *msgs
}
return response, err
-
}
diff --git a/cmd/pgo/api/label.go b/cmd/pgo/api/label.go
index e083f998a8..b96facc729 100644
--- a/cmd/pgo/api/label.go
+++ b/cmd/pgo/api/label.go
@@ -18,13 +18,13 @@ package api
import (
"bytes"
"encoding/json"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func LabelClusters(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.LabelRequest) (msgs.LabelResponse, error) {
-
var response msgs.LabelResponse
url := SessionCredentials.APIServerURL + "/label"
log.Debugf("label called...[%s]", url)
@@ -61,7 +61,6 @@ func LabelClusters(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCr
}
func DeleteLabel(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.DeleteLabelRequest) (msgs.LabelResponse, error) {
-
var response msgs.LabelResponse
url := SessionCredentials.APIServerURL + "/labeldelete"
log.Debugf("delete label called...[%s]", url)
diff --git a/cmd/pgo/api/namespace.go b/cmd/pgo/api/namespace.go
index 96f10ba8d7..1648e02384 100644
--- a/cmd/pgo/api/namespace.go
+++ b/cmd/pgo/api/namespace.go
@@ -17,23 +17,25 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func ShowNamespace(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ShowNamespaceRequest) (msgs.ShowNamespaceResponse, error) {
-
var resp msgs.ShowNamespaceResponse
resp.Status.Code = msgs.Ok
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/namespace"
log.Debugf("ShowNamespace called...[%s]", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
resp.Status.Code = msgs.Error
return resp, err
@@ -61,19 +63,18 @@ func ShowNamespace(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCr
}
return resp, err
-
}
func CreateNamespace(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreateNamespaceRequest) (msgs.CreateNamespaceResponse, error) {
-
var resp msgs.CreateNamespaceResponse
resp.Status.Code = msgs.Ok
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/namespacecreate"
log.Debugf("CreateNamespace called...[%s]", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
resp.Status.Code = msgs.Error
return resp, err
@@ -107,15 +108,15 @@ func CreateNamespace(httpclient *http.Client, SessionCredentials *msgs.BasicAuth
}
func DeleteNamespace(httpclient *http.Client, request *msgs.DeleteNamespaceRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.DeleteNamespaceResponse, error) {
-
var response msgs.DeleteNamespaceResponse
url := SessionCredentials.APIServerURL + "/namespacedelete"
log.Debugf("DeleteNamespace called [%s]", url)
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
response.Status.Code = msgs.Error
return response, err
@@ -142,18 +143,18 @@ func DeleteNamespace(httpclient *http.Client, request *msgs.DeleteNamespaceReque
}
return response, err
-
}
-func UpdateNamespace(httpclient *http.Client, request *msgs.UpdateNamespaceRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.UpdateNamespaceResponse, error) {
+func UpdateNamespace(httpclient *http.Client, request *msgs.UpdateNamespaceRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.UpdateNamespaceResponse, error) {
var response msgs.UpdateNamespaceResponse
url := SessionCredentials.APIServerURL + "/namespaceupdate"
log.Debugf("UpdateNamespace called [%s]", url)
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
response.Status.Code = msgs.Error
return response, err
@@ -180,5 +181,4 @@ func UpdateNamespace(httpclient *http.Client, request *msgs.UpdateNamespaceReque
}
return response, err
-
}
diff --git a/cmd/pgo/api/pgadmin.go b/cmd/pgo/api/pgadmin.go
index 0d410355cd..7f4ac09d89 100644
--- a/cmd/pgo/api/pgadmin.go
+++ b/cmd/pgo/api/pgadmin.go
@@ -17,6 +17,7 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"io/ioutil"
"net/http"
@@ -29,12 +30,13 @@ import (
func CreatePgAdmin(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatePgAdminRequest) (msgs.CreatePgAdminResponse, error) {
var response msgs.CreatePgAdminResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgadmin"
log.Debugf("createPgAdmin called...[%s]", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -68,12 +70,13 @@ func CreatePgAdmin(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCr
func DeletePgAdmin(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.DeletePgAdminRequest) (msgs.DeletePgAdminResponse, error) {
var response msgs.DeletePgAdminResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgadmin"
log.Debugf("deletePgAdmin called...[%s]", url)
action := "DELETE"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -117,13 +120,13 @@ func ShowPgAdmin(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCred
log.Debugf("ShowPgAdmin called [%+v]", request)
// put the request into JSON format and format the URL and HTTP verb
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgadmin/show"
action := "POST"
// prepare the request!
- httpRequest, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
-
+ httpRequest, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
// if there is an error preparing the request, return here
if err != nil {
return msgs.ShowPgAdminResponse{}, err
diff --git a/cmd/pgo/api/pgbouncer.go b/cmd/pgo/api/pgbouncer.go
index efee86ca53..56be678166 100644
--- a/cmd/pgo/api/pgbouncer.go
+++ b/cmd/pgo/api/pgbouncer.go
@@ -17,22 +17,24 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func CreatePgbouncer(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatePgbouncerRequest) (msgs.CreatePgbouncerResponse, error) {
-
var response msgs.CreatePgbouncerResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgbouncer"
log.Debugf("createPgbouncer called...[%s]", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -61,15 +63,15 @@ func CreatePgbouncer(httpclient *http.Client, SessionCredentials *msgs.BasicAuth
}
func DeletePgbouncer(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.DeletePgbouncerRequest) (msgs.DeletePgbouncerResponse, error) {
-
var response msgs.DeletePgbouncerResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgbouncer"
log.Debugf("deletePgbouncer called...[%s]", url)
action := "DELETE"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -108,13 +110,13 @@ func ShowPgBouncer(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCr
log.Debugf("ShowPgBouncer called [%+v]", request)
// put the request into JSON format and format the URL and HTTP verb
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgbouncer/show"
action := "POST"
// prepare the request!
- httpRequest, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
-
+ httpRequest, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
// if there is an error preparing the request, return here
if err != nil {
return msgs.ShowPgBouncerResponse{}, err
@@ -127,7 +129,6 @@ func ShowPgBouncer(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCr
// make the request! if there is an error making the request, return
httpResponse, err := httpclient.Do(httpRequest)
-
if err != nil {
return msgs.ShowPgBouncerResponse{}, err
}
@@ -162,13 +163,13 @@ func UpdatePgBouncer(httpclient *http.Client, SessionCredentials *msgs.BasicAuth
log.Debugf("UpdatePgBouncer called [%+v]", request)
// put the request into JSON format and format the URL and HTTP verb
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgbouncer"
action := "PUT"
// prepare the request!
- httpRequest, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
-
+ httpRequest, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
// if there is an error preparing the request, return here
if err != nil {
return msgs.UpdatePgBouncerResponse{}, err
@@ -181,7 +182,6 @@ func UpdatePgBouncer(httpclient *http.Client, SessionCredentials *msgs.BasicAuth
// make the request! if there is an error making the request, return
httpResponse, err := httpclient.Do(httpRequest)
-
if err != nil {
return msgs.UpdatePgBouncerResponse{}, err
}
diff --git a/cmd/pgo/api/pgdump.go b/cmd/pgo/api/pgdump.go
index 3bc0804c7b..b228954c91 100644
--- a/cmd/pgo/api/pgdump.go
+++ b/cmd/pgo/api/pgdump.go
@@ -17,6 +17,7 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
"net/http"
@@ -26,14 +27,14 @@ import (
)
func ShowpgDump(httpclient *http.Client, arg, selector string, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ShowBackupResponse, error) {
-
var response msgs.ShowBackupResponse
url := SessionCredentials.APIServerURL + "/pgdump/" + arg + "?version=" + msgs.PGO_VERSION + "&selector=" + selector + "&namespace=" + ns
log.Debugf("show pgdump called [%s]", url)
+ ctx := context.TODO()
action := "GET"
- req, err := http.NewRequest(action, url, nil)
+ req, err := http.NewRequestWithContext(ctx, action, url, nil)
if err != nil {
return response, err
}
@@ -58,13 +59,12 @@ func ShowpgDump(httpclient *http.Client, arg, selector string, SessionCredential
}
return response, err
-
}
func CreatepgDumpBackup(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatepgDumpBackupRequest) (msgs.CreatepgDumpBackupResponse, error) {
-
var response msgs.CreatepgDumpBackupResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgdumpbackup"
@@ -72,7 +72,7 @@ func CreatepgDumpBackup(httpclient *http.Client, SessionCredentials *msgs.BasicA
log.Debugf("create pgdump backup called [%s]", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
diff --git a/cmd/pgo/api/pgorole.go b/cmd/pgo/api/pgorole.go
index 804f0c1eb2..1157677fba 100644
--- a/cmd/pgo/api/pgorole.go
+++ b/cmd/pgo/api/pgorole.go
@@ -17,22 +17,24 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func ShowPgorole(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ShowPgoroleRequest) (msgs.ShowPgoroleResponse, error) {
-
var response msgs.ShowPgoroleResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgoroleshow"
log.Debugf("ShowPgorole called...[%s]", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
response.Status.Code = msgs.Error
return response, err
@@ -58,18 +60,18 @@ func ShowPgorole(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCred
}
return response, err
-
}
-func CreatePgorole(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatePgoroleRequest) (msgs.CreatePgoroleResponse, error) {
+func CreatePgorole(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatePgoroleRequest) (msgs.CreatePgoroleResponse, error) {
var resp msgs.CreatePgoroleResponse
resp.Status.Code = msgs.Ok
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgorolecreate"
log.Debugf("CreatePgorole called...[%s]", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
resp.Status.Code = msgs.Error
return resp, err
@@ -103,15 +105,15 @@ func CreatePgorole(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCr
}
func DeletePgorole(httpclient *http.Client, request *msgs.DeletePgoroleRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.DeletePgoroleResponse, error) {
-
var response msgs.DeletePgoroleResponse
url := SessionCredentials.APIServerURL + "/pgoroledelete"
log.Debugf("DeletePgorole called [%s]", url)
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
response.Status.Code = msgs.Error
return response, err
@@ -138,18 +140,17 @@ func DeletePgorole(httpclient *http.Client, request *msgs.DeletePgoroleRequest,
}
return response, err
-
}
func UpdatePgorole(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.UpdatePgoroleRequest) (msgs.UpdatePgoroleResponse, error) {
-
var response msgs.UpdatePgoroleResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgoroleupdate"
log.Debugf("UpdatePgorole called...[%s]", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
diff --git a/cmd/pgo/api/pgouser.go b/cmd/pgo/api/pgouser.go
index e0026d20ca..9f8cfaf63b 100644
--- a/cmd/pgo/api/pgouser.go
+++ b/cmd/pgo/api/pgouser.go
@@ -17,22 +17,24 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func ShowPgouser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ShowPgouserRequest) (msgs.ShowPgouserResponse, error) {
-
var response msgs.ShowPgouserResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgousershow"
log.Debugf("ShowPgouser called...[%s]", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
response.Status.Code = msgs.Error
return response, err
@@ -58,18 +60,18 @@ func ShowPgouser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCred
}
return response, err
-
}
-func CreatePgouser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatePgouserRequest) (msgs.CreatePgouserResponse, error) {
+func CreatePgouser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatePgouserRequest) (msgs.CreatePgouserResponse, error) {
var resp msgs.CreatePgouserResponse
resp.Status.Code = msgs.Ok
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgousercreate"
log.Debugf("CreatePgouser called...[%s]", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
resp.Status.Code = msgs.Error
return resp, err
@@ -103,15 +105,15 @@ func CreatePgouser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCr
}
func DeletePgouser(httpclient *http.Client, request *msgs.DeletePgouserRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.DeletePgouserResponse, error) {
-
var response msgs.DeletePgouserResponse
url := SessionCredentials.APIServerURL + "/pgouserdelete"
log.Debugf("DeletePgouser called [%s]", url)
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
response.Status.Code = msgs.Error
return response, err
@@ -138,18 +140,17 @@ func DeletePgouser(httpclient *http.Client, request *msgs.DeletePgouserRequest,
}
return response, err
-
}
func UpdatePgouser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.UpdatePgouserRequest) (msgs.UpdatePgouserResponse, error) {
-
var response msgs.UpdatePgouserResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/pgouserupdate"
log.Debugf("UpdatePgouser called...[%s]", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
diff --git a/cmd/pgo/api/policy.go b/cmd/pgo/api/policy.go
index b7e9cf5d6f..61b4f4842c 100644
--- a/cmd/pgo/api/policy.go
+++ b/cmd/pgo/api/policy.go
@@ -17,23 +17,25 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func ShowPolicy(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ShowPolicyRequest) (msgs.ShowPolicyResponse, error) {
-
var response msgs.ShowPolicyResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/showpolicies"
log.Debugf("showPolicy called...[%s]", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
response.Status.Code = msgs.Error
return response, err
@@ -59,19 +61,19 @@ func ShowPolicy(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCrede
}
return response, err
-
}
-func CreatePolicy(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatePolicyRequest) (msgs.CreatePolicyResponse, error) {
+func CreatePolicy(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatePolicyRequest) (msgs.CreatePolicyResponse, error) {
var resp msgs.CreatePolicyResponse
resp.Status.Code = msgs.Ok
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/policies"
log.Debugf("createPolicy called...[%s]", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
resp.Status.Code = msgs.Error
return resp, err
@@ -105,16 +107,16 @@ func CreatePolicy(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCre
}
func DeletePolicy(httpclient *http.Client, request *msgs.DeletePolicyRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.DeletePolicyResponse, error) {
-
var response msgs.DeletePolicyResponse
url := SessionCredentials.APIServerURL + "/policiesdelete"
log.Debugf("delete policy called [%s]", url)
+ ctx := context.TODO()
action := "POST"
jsonValue, _ := json.Marshal(request)
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
response.Status.Code = msgs.Error
return response, err
@@ -141,19 +143,18 @@ func DeletePolicy(httpclient *http.Client, request *msgs.DeletePolicyRequest, Se
}
return response, err
-
}
func ApplyPolicy(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ApplyPolicyRequest) (msgs.ApplyPolicyResponse, error) {
-
var response msgs.ApplyPolicyResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/policies/apply"
log.Debugf("applyPolicy called...[%s]", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
diff --git a/cmd/pgo/api/pvc.go b/cmd/pgo/api/pvc.go
index f4fac4ceb4..4c51b05423 100644
--- a/cmd/pgo/api/pvc.go
+++ b/cmd/pgo/api/pvc.go
@@ -17,17 +17,19 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func ShowPVC(httpclient *http.Client, request *msgs.ShowPVCRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.ShowPVCResponse, error) {
-
var response msgs.ShowPVCResponse
+ ctx := context.TODO()
url := SessionCredentials.APIServerURL + "/showpvc"
log.Debugf("ShowPVC called...[%s]", url)
@@ -35,7 +37,7 @@ func ShowPVC(httpclient *http.Client, request *msgs.ShowPVCRequest, SessionCrede
log.Debugf("ShowPVC called...[%s]", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -61,5 +63,4 @@ func ShowPVC(httpclient *http.Client, request *msgs.ShowPVCRequest, SessionCrede
}
return response, err
-
}
diff --git a/cmd/pgo/api/reload.go b/cmd/pgo/api/reload.go
index 9235cc1ea9..ee33d79b76 100644
--- a/cmd/pgo/api/reload.go
+++ b/cmd/pgo/api/reload.go
@@ -18,13 +18,13 @@ package api
import (
"bytes"
"encoding/json"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func Reload(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ReloadRequest) (msgs.ReloadResponse, error) {
-
var response msgs.ReloadResponse
jsonValue, _ := json.Marshal(request)
diff --git a/cmd/pgo/api/restart.go b/cmd/pgo/api/restart.go
index 13dc205972..f73f6cc249 100644
--- a/cmd/pgo/api/restart.go
+++ b/cmd/pgo/api/restart.go
@@ -17,6 +17,7 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
"net/http"
@@ -29,12 +30,12 @@ import (
// a PG cluster or one or more instances within it.
func Restart(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials,
request *msgs.RestartRequest) (msgs.RestartResponse, error) {
-
var response msgs.RestartResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := fmt.Sprintf("%s/%s", SessionCredentials.APIServerURL, "restart")
- req, err := http.NewRequest(http.MethodPost, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -69,11 +70,11 @@ func Restart(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredenti
// cluster specified.
func QueryRestart(httpclient *http.Client, clusterName string, SessionCredentials *msgs.BasicAuthCredentials,
namespace string) (msgs.QueryRestartResponse, error) {
-
var response msgs.QueryRestartResponse
+ ctx := context.TODO()
url := fmt.Sprintf("%s/%s/%s", SessionCredentials.APIServerURL, "restart", clusterName)
- req, err := http.NewRequest(http.MethodGet, url, nil)
+ req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
return response, err
}
diff --git a/cmd/pgo/api/restore.go b/cmd/pgo/api/restore.go
index e22cea904b..f80efe32f5 100644
--- a/cmd/pgo/api/restore.go
+++ b/cmd/pgo/api/restore.go
@@ -18,13 +18,13 @@ package api
import (
"bytes"
"encoding/json"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func Restore(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.RestoreRequest) (msgs.RestoreResponse, error) {
-
var response msgs.RestoreResponse
jsonValue, _ := json.Marshal(request)
diff --git a/cmd/pgo/api/restoreDump.go b/cmd/pgo/api/restoreDump.go
index bd911c1b75..6e1f918f7d 100644
--- a/cmd/pgo/api/restoreDump.go
+++ b/cmd/pgo/api/restoreDump.go
@@ -18,13 +18,13 @@ package api
import (
"bytes"
"encoding/json"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func RestoreDump(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.PgRestoreRequest) (msgs.RestoreResponse, error) {
-
var response msgs.RestoreResponse
jsonValue, _ := json.Marshal(request)
diff --git a/cmd/pgo/api/scale.go b/cmd/pgo/api/scale.go
index 6defb09127..87eae783f3 100644
--- a/cmd/pgo/api/scale.go
+++ b/cmd/pgo/api/scale.go
@@ -16,6 +16,7 @@ package api
*/
import (
+ "context"
"encoding/json"
"fmt"
"net/http"
@@ -28,14 +29,14 @@ import (
func ScaleCluster(httpclient *http.Client, arg string, ReplicaCount int,
StorageConfig, NodeLabel, CCPImageTag, ServiceType string,
SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ClusterScaleResponse, error) {
-
var response msgs.ClusterScaleResponse
url := fmt.Sprintf("%s/clusters/scale/%s", SessionCredentials.APIServerURL, arg)
log.Debug(url)
+ ctx := context.TODO()
action := "GET"
- req, err := http.NewRequest(action, url, nil)
+ req, err := http.NewRequestWithContext(ctx, action, url, nil)
if err != nil {
return response, err
}
@@ -71,5 +72,4 @@ func ScaleCluster(httpclient *http.Client, arg string, ReplicaCount int,
}
return response, err
-
}
diff --git a/cmd/pgo/api/scaledown.go b/cmd/pgo/api/scaledown.go
index 1cc6691b72..9075c23074 100644
--- a/cmd/pgo/api/scaledown.go
+++ b/cmd/pgo/api/scaledown.go
@@ -16,6 +16,7 @@ package api
*/
import (
+ "context"
"encoding/json"
"fmt"
"net/http"
@@ -29,13 +30,13 @@ import (
func ScaleDownCluster(httpclient *http.Client, clusterName, ScaleDownTarget string,
DeleteData bool, SessionCredentials *msgs.BasicAuthCredentials,
ns string) (msgs.ScaleDownResponse, error) {
-
var response msgs.ScaleDownResponse
url := fmt.Sprintf("%s/scaledown/%s", SessionCredentials.APIServerURL, clusterName)
log.Debug(url)
+ ctx := context.TODO()
action := "GET"
- req, err := http.NewRequest(action, url, nil)
+ req, err := http.NewRequestWithContext(ctx, action, url, nil)
if err != nil {
return response, err
}
@@ -67,19 +68,18 @@ func ScaleDownCluster(httpclient *http.Client, clusterName, ScaleDownTarget stri
}
return response, err
-
}
func ScaleQuery(httpclient *http.Client, arg string, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ScaleQueryResponse, error) {
-
var response msgs.ScaleQueryResponse
url := SessionCredentials.APIServerURL + "/scale/" + arg + "?version=" + msgs.PGO_VERSION + "&namespace=" + ns
log.Debug(url)
+ ctx := context.TODO()
action := "GET"
- req, err := http.NewRequest(action, url, nil)
+ req, err := http.NewRequestWithContext(ctx, action, url, nil)
if err != nil {
return response, err
}
@@ -105,5 +105,4 @@ func ScaleQuery(httpclient *http.Client, arg string, SessionCredentials *msgs.Ba
}
return response, err
-
}
diff --git a/cmd/pgo/api/schedule.go b/cmd/pgo/api/schedule.go
index 4007e77e1b..444bd04832 100644
--- a/cmd/pgo/api/schedule.go
+++ b/cmd/pgo/api/schedule.go
@@ -17,6 +17,7 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
"net/http"
@@ -34,12 +35,13 @@ const (
func CreateSchedule(client *http.Client, SessionCredentials *msgs.BasicAuthCredentials, r *msgs.CreateScheduleRequest) (msgs.CreateScheduleResponse, error) {
var response msgs.CreateScheduleResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(r)
url := fmt.Sprintf(createScheduleURL, SessionCredentials.APIServerURL)
log.Debugf("create schedule called [%s]", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -69,12 +71,13 @@ func CreateSchedule(client *http.Client, SessionCredentials *msgs.BasicAuthCrede
func DeleteSchedule(client *http.Client, SessionCredentials *msgs.BasicAuthCredentials, r *msgs.DeleteScheduleRequest) (msgs.DeleteScheduleResponse, error) {
var response msgs.DeleteScheduleResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(r)
url := fmt.Sprintf(deleteScheduleURL, SessionCredentials.APIServerURL)
log.Debugf("delete schedule called [%s]", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -105,11 +108,12 @@ func DeleteSchedule(client *http.Client, SessionCredentials *msgs.BasicAuthCrede
func ShowSchedule(client *http.Client, SessionCredentials *msgs.BasicAuthCredentials, r *msgs.ShowScheduleRequest) (msgs.ShowScheduleResponse, error) {
var response msgs.ShowScheduleResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(r)
url := fmt.Sprintf(showScheduleURL, SessionCredentials.APIServerURL)
log.Debugf("show schedule called [%s]", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
diff --git a/cmd/pgo/api/status.go b/cmd/pgo/api/status.go
index ad70bd2f96..9def02a132 100644
--- a/cmd/pgo/api/status.go
+++ b/cmd/pgo/api/status.go
@@ -16,21 +16,23 @@ package api
*/
import (
+ "context"
"encoding/json"
"fmt"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func ShowStatus(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.StatusResponse, error) {
-
var response msgs.StatusResponse
url := SessionCredentials.APIServerURL + "/status?version=" + msgs.PGO_VERSION + "&namespace=" + ns
log.Debug(url)
+ ctx := context.TODO()
action := "GET"
- req, err := http.NewRequest(action, url, nil)
+ req, err := http.NewRequestWithContext(ctx, action, url, nil)
if err != nil {
return response, err
}
@@ -55,5 +57,4 @@ func ShowStatus(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCrede
}
return response, err
-
}
diff --git a/cmd/pgo/api/test.go b/cmd/pgo/api/test.go
index 887d67b056..ca70b5c132 100644
--- a/cmd/pgo/api/test.go
+++ b/cmd/pgo/api/test.go
@@ -17,23 +17,25 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func ShowTest(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ClusterTestRequest) (msgs.ClusterTestResponse, error) {
-
var response msgs.ClusterTestResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/testclusters"
log.Debug(url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -58,5 +60,4 @@ func ShowTest(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredent
}
return response, err
-
}
diff --git a/cmd/pgo/api/upgrade.go b/cmd/pgo/api/upgrade.go
index 6079a29023..3613b4527a 100644
--- a/cmd/pgo/api/upgrade.go
+++ b/cmd/pgo/api/upgrade.go
@@ -17,6 +17,7 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"net/http"
@@ -25,15 +26,15 @@ import (
)
func CreateUpgrade(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreateUpgradeRequest) (msgs.CreateUpgradeResponse, error) {
-
var response msgs.CreateUpgradeResponse
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/upgrades"
log.Debugf("CreateUpgrade called...[%s]", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
diff --git a/cmd/pgo/api/user.go b/cmd/pgo/api/user.go
index 38424ab17b..ed1ad4fda5 100644
--- a/cmd/pgo/api/user.go
+++ b/cmd/pgo/api/user.go
@@ -17,25 +17,27 @@ package api
import (
"bytes"
+ "context"
"encoding/json"
"fmt"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func ShowUser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ShowUserRequest) (msgs.ShowUserResponse, error) {
-
var response msgs.ShowUserResponse
response.Status.Code = msgs.Ok
request.ClientVersion = msgs.PGO_VERSION
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/usershow"
log.Debugf("ShowUser called...[%s]", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
response.Status.Code = msgs.Error
return response, err
@@ -62,20 +64,20 @@ func ShowUser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredent
}
return response, err
-
}
-func CreateUser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreateUserRequest) (msgs.CreateUserResponse, error) {
+func CreateUser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreateUserRequest) (msgs.CreateUserResponse, error) {
var response msgs.CreateUserResponse
request.ClientVersion = msgs.PGO_VERSION
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/usercreate"
log.Debugf("createUsers called...[%s]", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -104,17 +106,17 @@ func CreateUser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCrede
}
func DeleteUser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.DeleteUserRequest) (msgs.DeleteUserResponse, error) {
-
var response msgs.DeleteUserResponse
request.ClientVersion = msgs.PGO_VERSION
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/userdelete"
log.Debugf("deleteUser called...[%s]", url)
action := "POST"
- req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, action, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
@@ -140,20 +142,19 @@ func DeleteUser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCrede
}
return response, err
-
}
func UpdateUser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.UpdateUserRequest) (msgs.UpdateUserResponse, error) {
-
var response msgs.UpdateUserResponse
request.ClientVersion = msgs.PGO_VERSION
+ ctx := context.TODO()
jsonValue, _ := json.Marshal(request)
url := SessionCredentials.APIServerURL + "/userupdate"
log.Debugf("UpdateUser called...[%s]", url)
- req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue))
+ req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
diff --git a/cmd/pgo/api/version.go b/cmd/pgo/api/version.go
index 9ca743add1..a48ac5bd8e 100644
--- a/cmd/pgo/api/version.go
+++ b/cmd/pgo/api/version.go
@@ -16,23 +16,25 @@ package api
*/
import (
+ "context"
"encoding/json"
"fmt"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func ShowVersion(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials) (msgs.VersionResponse, error) {
-
var response msgs.VersionResponse
log.Debug("ShowVersion called ")
+ ctx := context.TODO()
url := SessionCredentials.APIServerURL + "/version"
log.Debug(url)
- req, err := http.NewRequest("GET", url, nil)
+ req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return response, err
}
@@ -61,5 +63,4 @@ func ShowVersion(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCred
}
return response, err
-
}
diff --git a/cmd/pgo/api/workflow.go b/cmd/pgo/api/workflow.go
index 3289329aa1..46381889f4 100644
--- a/cmd/pgo/api/workflow.go
+++ b/cmd/pgo/api/workflow.go
@@ -16,22 +16,24 @@ package api
*/
import (
+ "context"
"encoding/json"
"fmt"
+ "net/http"
+
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
func ShowWorkflow(httpclient *http.Client, workflowID string, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ShowWorkflowResponse, error) {
-
var response msgs.ShowWorkflowResponse
url := SessionCredentials.APIServerURL + "/workflow/" + workflowID + "?version=" + msgs.PGO_VERSION + "&namespace=" + ns
log.Debugf("ShowWorkflow called...[%s]", url)
+ ctx := context.TODO()
action := "GET"
- req, err := http.NewRequest(action, url, nil)
+ req, err := http.NewRequestWithContext(ctx, action, url, nil)
if err != nil {
return response, err
}
@@ -56,5 +58,4 @@ func ShowWorkflow(httpclient *http.Client, workflowID string, SessionCredentials
}
return response, err
-
}
diff --git a/cmd/pgo/cmd/auth.go b/cmd/pgo/cmd/auth.go
index 322e5f5e9c..fda8689df6 100644
--- a/cmd/pgo/cmd/auth.go
+++ b/cmd/pgo/cmd/auth.go
@@ -109,7 +109,7 @@ func getCredentialsFromFile() msgs.BasicAuthCredentials {
fullPath := dir + "/" + ".pgouser"
var creds msgs.BasicAuthCredentials
- //look in env var for pgouser file
+ // look in env var for pgouser file
pgoUser := os.Getenv(pgoUserFileEnvVar)
if pgoUser != "" {
fullPath = pgoUser
@@ -125,7 +125,7 @@ func getCredentialsFromFile() msgs.BasicAuthCredentials {
found = true
}
- //look in home directory for .pgouser file
+ // look in home directory for .pgouser file
if !found {
log.Debugf("looking in %s for credentials", fullPath)
dat, err := ioutil.ReadFile(fullPath)
@@ -140,7 +140,7 @@ func getCredentialsFromFile() msgs.BasicAuthCredentials {
}
}
- //look in etc for pgouser file
+ // look in etc for pgouser file
if !found {
fullPath = "/etc/pgo/pgouser"
dat, err := ioutil.ReadFile(fullPath)
@@ -210,7 +210,7 @@ func GetTLSTransport() (*http.Transport, error) {
caCertPool = x509.NewCertPool()
} else {
if pool, err := x509.SystemCertPool(); err != nil {
- return nil, fmt.Errorf("while loading System CA pool - %s", err)
+ return nil, fmt.Errorf("while loading System CA pool - %w", err)
} else {
caCertPool = pool
}
@@ -227,12 +227,12 @@ func GetTLSTransport() (*http.Transport, error) {
// Open trust file and extend trust pool
if trustFile, err := os.Open(caCertPath); err != nil {
- newErr := fmt.Errorf("unable to load TLS trust from %s - [%v]", caCertPath, err)
+ newErr := fmt.Errorf("unable to load TLS trust from %s - %w", caCertPath, err)
return nil, newErr
} else {
err = tlsutil.ExtendTrust(caCertPool, trustFile)
if err != nil {
- newErr := fmt.Errorf("error reading %s - %v", caCertPath, err)
+ newErr := fmt.Errorf("error reading %s - %w", caCertPath, err)
return nil, newErr
}
trustFile.Close()
@@ -258,10 +258,11 @@ func GetTLSTransport() (*http.Transport, error) {
certPair, err := tls.LoadX509KeyPair(clientCertPath, clientKeyPath)
if err != nil {
- return nil, fmt.Errorf("client certificate/key loading: %s", err)
+ return nil, fmt.Errorf("client certificate/key loading: %w", err)
}
// create a Transport object for use by the HTTP client
+ // #nosec: G402
return &http.Transport{
TLSClientConfig: &tls.Config{
RootCAs: caCertPool,
diff --git a/cmd/pgo/cmd/backrest.go b/cmd/pgo/cmd/backrest.go
index 46d843206e..2b149097cf 100644
--- a/cmd/pgo/cmd/backrest.go
+++ b/cmd/pgo/cmd/backrest.go
@@ -57,7 +57,6 @@ func createBackrestBackup(args []string, ns string) {
fmt.Println("No clusters found.")
return
}
-
}
// showBackrest ....
@@ -84,8 +83,8 @@ func showBackrest(args []string, ns string) {
log.Debugf("response = %v", response)
log.Debugf("len of items = %d", len(response.Items))
- for _, backup := range response.Items {
- printBackrest(&backup)
+ for i := range response.Items {
+ printBackrest(&response.Items[i])
}
}
}
diff --git a/cmd/pgo/cmd/cat.go b/cmd/pgo/cmd/cat.go
index 08b3a93109..7afe28126f 100644
--- a/cmd/pgo/cmd/cat.go
+++ b/cmd/pgo/cmd/cat.go
@@ -42,7 +42,6 @@ var catCmd = &cobra.Command{
} else {
cat(args, Namespace)
}
-
},
}
@@ -58,7 +57,6 @@ func cat(args []string, ns string) {
request.Args = args
request.Namespace = ns
response, err := api.Cat(httpclient, &SessionCredentials, request)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -77,5 +75,4 @@ func cat(args []string, ns string) {
fmt.Println("No clusters found.")
return
}
-
}
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index 5c8a318fc3..4dc16a0fdc 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -75,7 +75,6 @@ func deleteCluster(args []string, ns string) {
for _, arg := range args {
r.Clustername = arg
response, err := api.DeleteCluster(httpclient, &r, &SessionCredentials)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -90,12 +89,10 @@ func deleteCluster(args []string, ns string) {
}
}
-
}
// showCluster ...
func showCluster(args []string, ns string) {
-
log.Debugf("showCluster called %v", args)
if OutputFormat != "" {
@@ -149,12 +146,11 @@ func showCluster(args []string, ns string) {
return
}
- for _, clusterDetail := range response.Results {
- printCluster(&clusterDetail)
+ for i := range response.Results {
+ printCluster(&response.Results[i])
}
}
-
}
// printCluster
@@ -173,7 +169,7 @@ func printCluster(detail *msgs.ShowClusterDetail) {
podStr := fmt.Sprintf("%spod : %s (%s) on %s (%s) %s", TreeBranch, pod.Name, string(pod.Phase), pod.NodeName, pod.ReadyStatus, podType)
fmt.Println(podStr)
for _, pvc := range pod.PVC {
- fmt.Println(fmt.Sprintf("%spvc: %s (%s)", TreeBranch+TreeBranch, pvc.Name, pvc.Capacity))
+ fmt.Printf("%spvc: %s (%s)\n", TreeBranch+TreeBranch, pvc.Name, pvc.Capacity)
}
}
@@ -237,7 +233,6 @@ func printCluster(detail *msgs.ShowClusterDetail) {
fmt.Printf("%s=%s ", k, v)
}
fmt.Println("")
-
}
func printPolicies(d *msgs.ShowClusterDeployment) {
@@ -699,7 +694,6 @@ func updateCluster(args []string, ns string) {
}
response, err := api.UpdateCluster(httpclient, &r, &SessionCredentials)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -712,5 +706,4 @@ func updateCluster(args []string, ns string) {
} else {
fmt.Println("Error: " + response.Status.Msg)
}
-
}
diff --git a/cmd/pgo/cmd/config.go b/cmd/pgo/cmd/config.go
index b8e4d89d0f..0d5f4f4335 100644
--- a/cmd/pgo/cmd/config.go
+++ b/cmd/pgo/cmd/config.go
@@ -28,11 +28,9 @@ import (
)
func showConfig(args []string, ns string) {
-
log.Debugf("showConfig called %v", args)
response, err := api.ShowConfig(httpclient, &SessionCredentials, ns)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -60,5 +58,4 @@ func showConfig(args []string, ns string) {
}
fmt.Println(string(y))
-
}
diff --git a/cmd/pgo/cmd/create.go b/cmd/pgo/cmd/create.go
index 25dd1119b9..eaa60e43fa 100644
--- a/cmd/pgo/cmd/create.go
+++ b/cmd/pgo/cmd/create.go
@@ -23,51 +23,53 @@ import (
"github.com/spf13/cobra"
)
-var ClusterReplicaCount int
-var ManagedUser bool
-var AllNamespaces bool
-var BackrestStorageConfig, ReplicaStorageConfig, StorageConfig string
-var CustomConfig string
-var ArchiveFlag, DisableAutofailFlag, EnableAutofailFlag, PgbouncerFlag, MetricsFlag, BadgerFlag bool
-var BackrestRestoreFrom string
-var CCPImage string
-var CCPImageTag string
-var CCPImagePrefix string
-var PGOImagePrefix string
-var Database string
-var Password string
-var SecretFrom string
-var PoliciesFlag, PolicyFile, PolicyURL string
-var UserLabels string
-var Tablespaces []string
-var ServiceType string
-var Schedule string
-var ScheduleOptions string
-var ScheduleType string
-var SchedulePolicy string
-var ScheduleDatabase string
-var ScheduleSecret string
-var PGBackRestType string
-var Secret string
-var PgouserPassword, PgouserRoles, PgouserNamespaces string
-var Permissions string
-var PodAntiAffinity string
-var PodAntiAffinityPgBackRest string
-var PodAntiAffinityPgBouncer string
-var SyncReplication bool
-var BackrestConfig string
-var BackrestS3Key string
-var BackrestS3KeySecret string
-var BackrestS3Bucket string
-var BackrestS3Endpoint string
-var BackrestS3Region string
-var BackrestS3URIStyle string
-var BackrestS3VerifyTLS bool
-var PVCSize string
-var BackrestPVCSize string
-var WALStorageConfig string
-var WALPVCSize string
-var RestoreFrom string
+var (
+ ClusterReplicaCount int
+ ManagedUser bool
+ AllNamespaces bool
+ BackrestStorageConfig, ReplicaStorageConfig, StorageConfig string
+ CustomConfig string
+ ArchiveFlag, DisableAutofailFlag, EnableAutofailFlag, PgbouncerFlag, MetricsFlag, BadgerFlag bool
+ BackrestRestoreFrom string
+ CCPImage string
+ CCPImageTag string
+ CCPImagePrefix string
+ PGOImagePrefix string
+ Database string
+ Password string
+ SecretFrom string
+ PoliciesFlag, PolicyFile, PolicyURL string
+ UserLabels string
+ Tablespaces []string
+ ServiceType string
+ Schedule string
+ ScheduleOptions string
+ ScheduleType string
+ SchedulePolicy string
+ ScheduleDatabase string
+ ScheduleSecret string
+ PGBackRestType string
+ Secret string
+ PgouserPassword, PgouserRoles, PgouserNamespaces string
+ Permissions string
+ PodAntiAffinity string
+ PodAntiAffinityPgBackRest string
+ PodAntiAffinityPgBouncer string
+ SyncReplication bool
+ BackrestConfig string
+ BackrestS3Key string
+ BackrestS3KeySecret string
+ BackrestS3Bucket string
+ BackrestS3Endpoint string
+ BackrestS3Region string
+ BackrestS3URIStyle string
+ BackrestS3VerifyTLS bool
+ PVCSize string
+ BackrestPVCSize string
+ WALStorageConfig string
+ WALPVCSize string
+ RestoreFrom string
+)
// group the annotation requests
var (
@@ -243,7 +245,6 @@ var createPgAdminCmd = &cobra.Command{
pgo create pgadmin mycluster`,
Run: func(cmd *cobra.Command, args []string) {
-
if Namespace == "" {
Namespace = PGONamespace
}
@@ -265,7 +266,6 @@ var createPgbouncerCmd = &cobra.Command{
pgo create pgbouncer mycluster`,
Run: func(cmd *cobra.Command, args []string) {
-
if Namespace == "" {
Namespace = PGONamespace
}
@@ -293,7 +293,6 @@ var createScheduleCmd = &cobra.Command{
pgo create schedule --schedule="* * * * *" --schedule-type=pgbackrest --pgbackrest-backup-type=full mycluster`,
Run: func(cmd *cobra.Command, args []string) {
-
if Namespace == "" {
Namespace = PGONamespace
}
@@ -317,7 +316,6 @@ var createUserCmd = &cobra.Command{
pgo create user --username=someuser -selector=name=mycluster --managed
pgo create user --username=user1 --selector=name=mycluster`,
Run: func(cmd *cobra.Command, args []string) {
-
if Namespace == "" {
Namespace = PGONamespace
}
diff --git a/cmd/pgo/cmd/delete.go b/cmd/pgo/cmd/delete.go
index df21d71ea4..86a6a68c7c 100644
--- a/cmd/pgo/cmd/delete.go
+++ b/cmd/pgo/cmd/delete.go
@@ -137,14 +137,14 @@ func init() {
// instructs that any backups associated with a cluster should be deleted
deleteClusterCmd.Flags().BoolVarP(&deleteBackups, "delete-backups", "b", false,
"Causes the backups for specified cluster to be removed permanently.")
- deleteClusterCmd.Flags().MarkDeprecated("delete-backups",
+ _ = deleteClusterCmd.Flags().MarkDeprecated("delete-backups",
"Backups are deleted by default. If you would like to keep your backups, use the --keep-backups flag")
// "pgo delete cluster --delete-data"
// "pgo delete cluster -d"
// instructs that the PostgreSQL cluster data should be deleted
deleteClusterCmd.Flags().BoolVarP(&DeleteData, "delete-data", "d", false,
"Causes the data for specified cluster to be removed permanently.")
- deleteClusterCmd.Flags().MarkDeprecated("delete-data",
+ _ = deleteClusterCmd.Flags().MarkDeprecated("delete-data",
"Data is deleted by default. You can preserve your data by keeping your backups with the --keep-backups flag")
// "pgo delete cluster --keep-backups"
// instructs that any backups associated with a cluster should be kept and not deleted
@@ -557,7 +557,7 @@ var deleteUserCmd = &cobra.Command{
if Namespace == "" {
Namespace = PGONamespace
}
- if len(args) == 0 && AllFlag == false && Selector == "" {
+ if len(args) == 0 && !AllFlag && Selector == "" {
fmt.Println("Error: --all, --selector, or a list of clusters is required for this command")
} else {
if util.AskForConfirmation(NoPrompt, "") {
diff --git a/cmd/pgo/cmd/df.go b/cmd/pgo/cmd/df.go
index 16f764830a..bad4301ceb 100644
--- a/cmd/pgo/cmd/df.go
+++ b/cmd/pgo/cmd/df.go
@@ -112,7 +112,6 @@ func init() {
dfCmd.Flags().BoolVar(&AllFlag, "all", false, "Get disk utilization for all managed clusters")
dfCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", `The output format. Supported types are: "json"`)
dfCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.")
-
}
// getPVCType returns a "human readable" form of the PVC
@@ -250,7 +249,6 @@ func showDf(namespace, selector string) {
// make the request
response, err := api.ShowDf(httpclient, &SessionCredentials, request)
-
// if there is an error, or the response code is not ok, print the error and
// exit
if err != nil {
diff --git a/cmd/pgo/cmd/failover.go b/cmd/pgo/cmd/failover.go
index 8b2b6ac8c2..1459c659ae 100644
--- a/cmd/pgo/cmd/failover.go
+++ b/cmd/pgo/cmd/failover.go
@@ -53,7 +53,6 @@ var failoverCmd = &cobra.Command{
fmt.Println("Aborting...")
}
}
-
},
}
@@ -63,7 +62,6 @@ func init() {
failoverCmd.Flags().BoolVarP(&Query, "query", "", false, "Prints the list of failover candidates.")
failoverCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.")
failoverCmd.Flags().StringVarP(&Target, "target", "", "", "The replica target which the failover will occur on.")
-
}
// createFailover ....
@@ -77,7 +75,6 @@ func createFailover(args []string, ns string) {
request.ClientVersion = msgs.PGO_VERSION
response, err := api.CreateFailover(httpclient, &SessionCredentials, request)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -91,7 +88,6 @@ func createFailover(args []string, ns string) {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
}
-
}
// queryFailover is a helper function to return the user information about the
diff --git a/cmd/pgo/cmd/flags.go b/cmd/pgo/cmd/flags.go
index bb831e4006..28afacc651 100644
--- a/cmd/pgo/cmd/flags.go
+++ b/cmd/pgo/cmd/flags.go
@@ -15,7 +15,7 @@ package cmd
limitations under the License.
*/
-//flags used by more than 1 command
+// flags used by more than 1 command
var DeleteData bool
// KeepData, If set to "true", indicates that cluster data should be stored
@@ -24,29 +24,39 @@ var KeepData bool
var Query bool
-var Target string
-var Targets []string
-
-var OutputFormat string
-var Labelselector string
-var DebugFlag bool
-var Selector string
-var DryRun bool
-var ScheduleName string
-var NodeLabel string
-
-var BackupType string
-var RestoreType string
-var BackupOpts string
-var BackrestStorageType string
-
-var RED func(a ...interface{}) string
-var YELLOW func(a ...interface{}) string
-var GREEN func(a ...interface{}) string
-
-var Namespace string
-var PGONamespace string
-var APIServerURL string
-var PGO_CA_CERT, PGO_CLIENT_CERT, PGO_CLIENT_KEY string
-var PGO_DISABLE_TLS bool
-var EXCLUDE_OS_TRUST bool
+var (
+ Target string
+ Targets []string
+)
+
+var (
+ OutputFormat string
+ Labelselector string
+ DebugFlag bool
+ Selector string
+ DryRun bool
+ ScheduleName string
+ NodeLabel string
+)
+
+var (
+ BackupType string
+ RestoreType string
+ BackupOpts string
+ BackrestStorageType string
+)
+
+var (
+ RED func(a ...interface{}) string
+ YELLOW func(a ...interface{}) string
+ GREEN func(a ...interface{}) string
+)
+
+var (
+ Namespace string
+ PGONamespace string
+ APIServerURL string
+ PGO_CA_CERT, PGO_CLIENT_CERT, PGO_CLIENT_KEY string
+ PGO_DISABLE_TLS bool
+ EXCLUDE_OS_TRUST bool
+)
diff --git a/cmd/pgo/cmd/label.go b/cmd/pgo/cmd/label.go
index db2253d8a9..bd794e706b 100644
--- a/cmd/pgo/cmd/label.go
+++ b/cmd/pgo/cmd/label.go
@@ -25,9 +25,11 @@ import (
"github.com/spf13/cobra"
)
-var LabelCmdLabel string
-var LabelMap map[string]string
-var DeleteLabel bool
+var (
+ LabelCmdLabel string
+ LabelMap map[string]string
+ DeleteLabel bool
+)
var labelCmd = &cobra.Command{
Use: "label",
@@ -61,7 +63,6 @@ func init() {
labelCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.")
labelCmd.Flags().StringVarP(&LabelCmdLabel, "label", "", "", "The new label to apply for any selected or specified clusters.")
labelCmd.Flags().BoolVarP(&DryRun, "dry-run", "", false, "Shows the clusters that the label would be applied to, without labelling them.")
-
}
func labelClusters(clusters []string, ns string) {
@@ -100,7 +101,6 @@ func labelClusters(clusters []string, ns string) {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
}
-
}
// deleteLabel ...
@@ -127,5 +127,4 @@ func deleteLabel(args []string, ns string) {
} else {
fmt.Println("Error: " + response.Status.Msg)
}
-
}
diff --git a/cmd/pgo/cmd/namespace.go b/cmd/pgo/cmd/namespace.go
index baa0f9ce92..e095328360 100644
--- a/cmd/pgo/cmd/namespace.go
+++ b/cmd/pgo/cmd/namespace.go
@@ -54,7 +54,7 @@ func showNamespace(args []string) {
r.Args = nsList
r.AllFlag = AllFlag
- if len(nsList) == 0 && AllFlag == false {
+ if len(nsList) == 0 && !AllFlag {
fmt.Println("Error: namespace args or --all is required")
os.Exit(2)
}
@@ -62,7 +62,6 @@ func showNamespace(args []string) {
log.Debugf("showNamespace called %v", nsList)
response, err := api.ShowNamespace(httpclient, &SessionCredentials, &r)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -107,7 +106,6 @@ func showNamespace(args []string) {
fmt.Printf("%s", accessible)
fmt.Printf("%s\n", iAccessible)
}
-
}
func createNamespace(args []string, ns string) {
@@ -167,8 +165,8 @@ func deleteNamespace(args []string, ns string) {
} else {
fmt.Println("Error: " + response.Status.Msg)
}
-
}
+
func updateNamespace(args []string) {
var err error
@@ -182,7 +180,6 @@ func updateNamespace(args []string) {
r.ClientVersion = msgs.PGO_VERSION
response, err := api.UpdateNamespace(httpclient, r, &SessionCredentials)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -194,5 +191,4 @@ func updateNamespace(args []string) {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
}
-
}
diff --git a/cmd/pgo/cmd/pgadmin.go b/cmd/pgo/cmd/pgadmin.go
index 8864de9abe..0d034f1710 100644
--- a/cmd/pgo/cmd/pgadmin.go
+++ b/cmd/pgo/cmd/pgadmin.go
@@ -47,7 +47,6 @@ func createPgAdmin(args []string, ns string) {
}
response, err := api.CreatePgAdmin(httpclient, &SessionCredentials, &request)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(1)
@@ -95,7 +94,6 @@ func deletePgAdmin(args []string, ns string) {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(1)
}
-
}
// makeShowPgAdminInterface returns an interface slice of the available values
diff --git a/cmd/pgo/cmd/pgbouncer.go b/cmd/pgo/cmd/pgbouncer.go
index 9450623bf1..f9096e4419 100644
--- a/cmd/pgo/cmd/pgbouncer.go
+++ b/cmd/pgo/cmd/pgbouncer.go
@@ -52,7 +52,6 @@ var PgBouncerReplicas int32
var PgBouncerUninstall bool
func createPgbouncer(args []string, ns string) {
-
if Selector == "" && len(args) == 0 {
fmt.Println("Error: The --selector flag is required.")
return
@@ -92,7 +91,6 @@ func createPgbouncer(args []string, ns string) {
}
response, err := api.CreatePgbouncer(httpclient, &SessionCredentials, &request)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(1)
@@ -115,7 +113,6 @@ func createPgbouncer(args []string, ns string) {
}
func deletePgbouncer(args []string, ns string) {
-
if Selector == "" && len(args) == 0 {
fmt.Println("Error: The --selector flag or a cluster name is required.")
return
@@ -144,7 +141,6 @@ func deletePgbouncer(args []string, ns string) {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
}
-
}
// makeShowPgBouncerInterface returns an interface slice of the available values
@@ -332,7 +328,6 @@ func showPgBouncer(namespace string, clusterNames []string) {
// and make the API request!
response, err := api.ShowPgBouncer(httpclient, &SessionCredentials, request)
-
// if there is a bona-fide error, log and exit
if err != nil {
fmt.Println("Error:", err.Error())
@@ -396,7 +391,6 @@ func updatePgBouncer(namespace string, clusterNames []string) {
// and make the API request!
response, err := api.UpdatePgBouncer(httpclient, &SessionCredentials, request)
-
// if there is a bona-fide error, log and exit
if err != nil {
fmt.Println("Error:", err.Error())
diff --git a/cmd/pgo/cmd/pgdump.go b/cmd/pgo/cmd/pgdump.go
index b9b64046bc..0ca8dfd637 100644
--- a/cmd/pgo/cmd/pgdump.go
+++ b/cmd/pgo/cmd/pgdump.go
@@ -57,7 +57,6 @@ func createpgDumpBackup(args []string, ns string) {
fmt.Println("No clusters found.")
return
}
-
}
// pgDump ....
@@ -84,8 +83,8 @@ func showpgDump(args []string, ns string) {
log.Debugf("response = %v", response)
log.Debugf("len of items = %d", len(response.BackupList.Items))
- for _, backup := range response.BackupList.Items {
- printDumpCRD(&backup)
+ for i := range response.BackupList.Items {
+ printDumpCRD(&response.BackupList.Items[i])
}
}
}
@@ -105,5 +104,4 @@ func printDumpCRD(result *msgs.Pgbackup) {
fmt.Printf("%s%s\n", TreeBranch, "Backup User Secret:\t"+result.BackupUserSecret)
fmt.Printf("%s%s\n", TreeTrunk, "Backup Port:\t"+result.BackupPort)
fmt.Printf("%s%s\n", TreeTrunk, "Backup Opts:\t"+result.BackupOpts)
-
}
diff --git a/cmd/pgo/cmd/pgorole.go b/cmd/pgo/cmd/pgorole.go
index 381542d3b4..180459d2ef 100644
--- a/cmd/pgo/cmd/pgorole.go
+++ b/cmd/pgo/cmd/pgorole.go
@@ -45,7 +45,6 @@ func updatePgorole(args []string, ns string) {
r.ClientVersion = msgs.PGO_VERSION
response, err := api.UpdatePgorole(httpclient, &SessionCredentials, r)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -57,11 +56,9 @@ func updatePgorole(args []string, ns string) {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
}
-
}
func showPgorole(args []string, ns string) {
-
r := new(msgs.ShowPgoroleRequest)
r.PgoroleName = args
r.Namespace = ns
@@ -74,7 +71,6 @@ func showPgorole(args []string, ns string) {
}
response, err := api.ShowPgorole(httpclient, &SessionCredentials, r)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -97,11 +93,9 @@ func showPgorole(args []string, ns string) {
fmt.Println("pgorole : " + pgorole.Name)
fmt.Println("permissions : " + pgorole.Permissions)
}
-
}
func createPgorole(args []string, ns string) {
-
if Permissions == "" {
fmt.Println("Error: permissions flag is required.")
return
@@ -112,7 +106,7 @@ func createPgorole(args []string, ns string) {
return
}
var err error
- //create the request
+ // create the request
r := new(msgs.CreatePgoroleRequest)
r.PgoroleName = args[0]
r.PgorolePermissions = Permissions
@@ -133,11 +127,9 @@ func createPgorole(args []string, ns string) {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
}
-
}
func deletePgorole(args []string, ns string) {
-
log.Debugf("deletePgorole called %v", args)
r := msgs.DeletePgoroleRequest{}
@@ -165,5 +157,4 @@ func deletePgorole(args []string, ns string) {
} else {
fmt.Println("Error: " + response.Status.Msg)
}
-
}
diff --git a/cmd/pgo/cmd/pgouser.go b/cmd/pgo/cmd/pgouser.go
index 9bfc58167f..31ea66b316 100644
--- a/cmd/pgo/cmd/pgouser.go
+++ b/cmd/pgo/cmd/pgouser.go
@@ -47,7 +47,6 @@ func updatePgouser(args []string, ns string) {
r.ClientVersion = msgs.PGO_VERSION
response, err := api.UpdatePgouser(httpclient, &SessionCredentials, r)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -59,11 +58,9 @@ func updatePgouser(args []string, ns string) {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
}
-
}
func showPgouser(args []string, ns string) {
-
r := new(msgs.ShowPgouserRequest)
r.PgouserName = args
r.Namespace = ns
@@ -76,7 +73,6 @@ func showPgouser(args []string, ns string) {
}
response, err := api.ShowPgouser(httpclient, &SessionCredentials, r)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -100,11 +96,9 @@ func showPgouser(args []string, ns string) {
fmt.Printf("roles : %v\n", pgouser.Role)
fmt.Printf("namespaces : %v\n", pgouser.Namespace)
}
-
}
func createPgouser(args []string, ns string) {
-
if PgouserPassword == "" {
fmt.Println("Error: pgouser-password flag is required.")
return
@@ -128,7 +122,7 @@ func createPgouser(args []string, ns string) {
return
}
var err error
- //create the request
+ // create the request
r := new(msgs.CreatePgouserRequest)
r.PgouserName = args[0]
r.PgouserPassword = PgouserPassword
@@ -152,11 +146,9 @@ func createPgouser(args []string, ns string) {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
}
-
}
func deletePgouser(args []string, ns string) {
-
log.Debugf("deletePgouser called %v", args)
r := msgs.DeletePgouserRequest{}
@@ -184,5 +176,4 @@ func deletePgouser(args []string, ns string) {
} else {
fmt.Println("Error: " + response.Status.Msg)
}
-
}
diff --git a/cmd/pgo/cmd/policy.go b/cmd/pgo/cmd/policy.go
index 90d8f04f26..96a858efc9 100644
--- a/cmd/pgo/cmd/policy.go
+++ b/cmd/pgo/cmd/policy.go
@@ -58,7 +58,6 @@ func init() {
applyCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.")
applyCmd.Flags().BoolVarP(&DryRun, "dry-run", "", false, "Shows the clusters that the label would be applied to, without labelling them.")
-
}
func applyPolicy(args []string, ns string) {
@@ -82,7 +81,6 @@ func applyPolicy(args []string, ns string) {
r.ClientVersion = msgs.PGO_VERSION
response, err := api.ApplyPolicy(httpclient, &SessionCredentials, r)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -104,10 +102,9 @@ func applyPolicy(args []string, ns string) {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
}
-
}
-func showPolicy(args []string, ns string) {
+func showPolicy(args []string, ns string) {
r := new(msgs.ShowPolicyRequest)
r.Selector = Selector
r.Namespace = ns
@@ -122,7 +119,6 @@ func showPolicy(args []string, ns string) {
r.Policyname = v
response, err := api.ShowPolicy(httpclient, &SessionCredentials, r)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -148,17 +144,15 @@ func showPolicy(args []string, ns string) {
fmt.Println(TreeTrunk + "sql : " + policy.Spec.SQL)
}
}
-
}
func createPolicy(args []string, ns string) {
-
if len(args) == 0 {
fmt.Println("Error: A policy name argument is required.")
return
}
var err error
- //create the request
+ // create the request
r := new(msgs.CreatePolicyRequest)
r.Name = args[0]
r.Namespace = ns
@@ -190,7 +184,6 @@ func createPolicy(args []string, ns string) {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
}
-
}
func getPolicyString(filename string) (string, error) {
@@ -205,7 +198,6 @@ func getPolicyString(filename string) (string, error) {
}
func deletePolicy(args []string, ns string) {
-
log.Debugf("deletePolicy called %v", args)
r := msgs.DeletePolicyRequest{}
diff --git a/cmd/pgo/cmd/pvc.go b/cmd/pgo/cmd/pvc.go
index 856e134533..3991901442 100644
--- a/cmd/pgo/cmd/pvc.go
+++ b/cmd/pgo/cmd/pvc.go
@@ -34,22 +34,20 @@ func showPVC(args []string, ns string) {
r.ClientVersion = msgs.PGO_VERSION
if AllFlag {
- //special case to just list all the PVCs
+ // special case to just list all the PVCs
r.ClusterName = ""
printPVC(&r)
} else {
- //args are a list of pvc names...for this case show details
+ // args are a list of pvc names...for this case show details
for _, arg := range args {
r.ClusterName = arg
log.Debugf("show pvc called for %s", arg)
printPVC(&r)
}
}
-
}
func printPVC(r *msgs.ShowPVCRequest) {
-
response, err := api.ShowPVC(httpclient, r, &SessionCredentials)
log.Debugf("response = %v", response)
@@ -74,5 +72,4 @@ func printPVC(r *msgs.ShowPVCRequest) {
for _, v := range response.Results {
fmt.Printf("%-20s\t%-30s\n", v.ClusterName, v.PVCName)
}
-
}
diff --git a/cmd/pgo/cmd/reload.go b/cmd/pgo/cmd/reload.go
index 415c31a567..9d8b85b2b4 100644
--- a/cmd/pgo/cmd/reload.go
+++ b/cmd/pgo/cmd/reload.go
@@ -27,7 +27,7 @@ import (
"github.com/spf13/cobra"
)
-//unused but coming soon to a theatre near you
+// unused but coming soon to a theatre near you
var ConfigMapName string
var reloadCmd = &cobra.Command{
@@ -50,7 +50,6 @@ var reloadCmd = &cobra.Command{
fmt.Println("Aborting...")
}
}
-
},
}
@@ -59,7 +58,6 @@ func init() {
reloadCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.")
reloadCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.")
-
}
// reload ....
@@ -71,7 +69,6 @@ func reload(args []string, ns string) {
request.Selector = Selector
request.Namespace = ns
response, err := api.Reload(httpclient, &SessionCredentials, request)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -95,5 +92,4 @@ func reload(args []string, ns string) {
fmt.Println("No clusters found.")
return
}
-
}
diff --git a/cmd/pgo/cmd/restart.go b/cmd/pgo/cmd/restart.go
index 02d80a1229..5349a1ff7e 100644
--- a/cmd/pgo/cmd/restart.go
+++ b/cmd/pgo/cmd/restart.go
@@ -47,7 +47,6 @@ var restartCmd = &cobra.Command{
And use the 'query' flag obtain a list of all instances within the cluster:
pgo restart mycluster --query`,
Run: func(cmd *cobra.Command, args []string) {
-
if OutputFormat != "" {
if OutputFormat != "json" {
fmt.Println("Error: ", "json is the only supported --output format value")
@@ -77,7 +76,6 @@ var restartCmd = &cobra.Command{
}
func init() {
-
RootCmd.AddCommand(restartCmd)
restartCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.")
@@ -89,7 +87,6 @@ func init() {
// restart sends a request to restart a PG cluster or one or more instances within it.
func restart(clusterName, namespace string) {
-
log.Debugf("restart called %v", clusterName)
request := new(msgs.RestartRequest)
@@ -141,7 +138,6 @@ func restart(clusterName, namespace string) {
// instances (the primary and all replicas) within a cluster. This is useful when the user
// would like to specify one or more instances for a restart using the "--target" flag.
func queryRestart(args []string, namespace string) {
-
log.Debugf("queryRestart called %v", args)
for _, clusterName := range args {
diff --git a/cmd/pgo/cmd/restore.go b/cmd/pgo/cmd/restore.go
index 4906bb7510..aa3b4bb725 100644
--- a/cmd/pgo/cmd/restore.go
+++ b/cmd/pgo/cmd/restore.go
@@ -28,8 +28,10 @@ import (
"github.com/spf13/cobra"
)
-var PITRTarget string
-var BackupPath, BackupPVC string
+var (
+ PITRTarget string
+ BackupPath, BackupPVC string
+)
var restoreCmd = &cobra.Command{
Use: "restore",
@@ -54,7 +56,6 @@ var restoreCmd = &cobra.Command{
fmt.Println("Aborting...")
}
}
-
},
}
@@ -123,5 +124,4 @@ func restore(args []string, ns string) {
fmt.Println("No clusters found.")
return
}
-
}
diff --git a/cmd/pgo/cmd/root.go b/cmd/pgo/cmd/root.go
index 249e0a8f91..3cd416c784 100644
--- a/cmd/pgo/cmd/root.go
+++ b/cmd/pgo/cmd/root.go
@@ -45,11 +45,9 @@ func Execute() {
log.Debug(err.Error())
os.Exit(-1)
}
-
}
func init() {
-
cobra.OnInitialize(initConfig)
log.Debug("init called")
GREEN = color.New(color.FgGreen).SprintFunc()
@@ -68,7 +66,6 @@ func init() {
RootCmd.PersistentFlags().BoolVar(&PGO_DISABLE_TLS, "disable-tls", false, "Disable TLS authentication to the Postgres Operator.")
RootCmd.PersistentFlags().BoolVar(&EXCLUDE_OS_TRUST, "exclude-os-trust", defExclOSTrust, "Exclude CA certs from OS default trust store")
RootCmd.PersistentFlags().BoolVar(&DebugFlag, "debug", false, "Enable additional output for debugging.")
-
}
func initConfig() {
@@ -115,10 +112,11 @@ func initConfig() {
func generateBashCompletion() {
log.Debugf("generating bash completion script")
+ // #nosec: G303
file, err2 := os.Create("/tmp/pgo-bash-completion.out")
if err2 != nil {
fmt.Println("Error: ", err2.Error())
}
defer file.Close()
- RootCmd.GenBashCompletion(file)
+ _ = RootCmd.GenBashCompletion(file)
}
diff --git a/cmd/pgo/cmd/scale.go b/cmd/pgo/cmd/scale.go
index e25709bc0e..0fd9bbdb16 100644
--- a/cmd/pgo/cmd/scale.go
+++ b/cmd/pgo/cmd/scale.go
@@ -65,12 +65,10 @@ func init() {
}
func scaleCluster(args []string, ns string) {
-
for _, arg := range args {
log.Debugf(" %s ReplicaCount is %d", arg, ReplicaCount)
response, err := api.ScaleCluster(httpclient, arg, ReplicaCount,
StorageConfig, NodeLabel, CCPImageTag, ServiceType, &SessionCredentials, ns)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
diff --git a/cmd/pgo/cmd/scaledown.go b/cmd/pgo/cmd/scaledown.go
index be60d27ed7..7b49350599 100644
--- a/cmd/pgo/cmd/scaledown.go
+++ b/cmd/pgo/cmd/scaledown.go
@@ -66,7 +66,7 @@ func init() {
scaledownCmd.Flags().StringVarP(&Target, "target", "", "", "The replica to target for scaling down")
scaledownCmd.Flags().BoolVarP(&DeleteData, "delete-data", "d", true,
"Causes the data for the scaled down replica to be removed permanently.")
- scaledownCmd.Flags().MarkDeprecated("delete-data", "Data is deleted by default.")
+ _ = scaledownCmd.Flags().MarkDeprecated("delete-data", "Data is deleted by default.")
scaledownCmd.Flags().BoolVar(&KeepData, "keep-data", false,
"Causes data for the scale down replica to *not* be deleted")
scaledownCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.")
@@ -76,7 +76,6 @@ func init() {
// available replicas that can be scaled down. This is called when the "--query"
// flag is specified
func queryCluster(args []string, ns string) {
-
// iterate through the clusters and output information about each one
for _, arg := range args {
@@ -134,7 +133,6 @@ func queryCluster(args []string, ns string) {
}
func scaleDownCluster(clusterName, ns string) {
-
// determine if the data should be deleted. The modern flag for handling this
// is "KeepData" which defaults to "false". We will honor the "DeleteData"
// flag (which defaults to "true"), but this will be removed in a future
@@ -143,7 +141,6 @@ func scaleDownCluster(clusterName, ns string) {
response, err := api.ScaleDownCluster(httpclient, clusterName, Target, deleteData,
&SessionCredentials, ns)
-
if err != nil {
fmt.Println("Error: ", err.Error())
return
@@ -156,5 +153,4 @@ func scaleDownCluster(clusterName, ns string) {
} else {
fmt.Println("Error: " + response.Status.Msg)
}
-
}
diff --git a/cmd/pgo/cmd/schedule.go b/cmd/pgo/cmd/schedule.go
index c91018c016..86a06aa8ae 100644
--- a/cmd/pgo/cmd/schedule.go
+++ b/cmd/pgo/cmd/schedule.go
@@ -81,7 +81,6 @@ func createSchedule(args []string, ns string) {
}
response, err := api.CreateSchedule(httpclient, &SessionCredentials, r)
-
if err != nil {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
@@ -100,7 +99,6 @@ func createSchedule(args []string, ns string) {
fmt.Println("No clusters found.")
return
}
-
}
func deleteSchedule(args []string, ns string) {
@@ -124,7 +122,6 @@ func deleteSchedule(args []string, ns string) {
}
response, err := api.DeleteSchedule(httpclient, &SessionCredentials, r)
-
if err != nil {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
@@ -143,7 +140,6 @@ func deleteSchedule(args []string, ns string) {
fmt.Println("No schedules found.")
return
}
-
}
func showSchedule(args []string, ns string) {
@@ -169,7 +165,6 @@ func showSchedule(args []string, ns string) {
}
response, err := api.ShowSchedule(httpclient, &SessionCredentials, r)
-
if err != nil {
fmt.Println("Error: " + response.Status.Msg)
os.Exit(2)
diff --git a/cmd/pgo/cmd/show.go b/cmd/pgo/cmd/show.go
index b42ba2a7d9..96a312b733 100644
--- a/cmd/pgo/cmd/show.go
+++ b/cmd/pgo/cmd/show.go
@@ -22,8 +22,10 @@ import (
"github.com/spf13/cobra"
)
-const TreeBranch = "\t"
-const TreeTrunk = "\t"
+const (
+ TreeBranch = "\t"
+ TreeTrunk = "\t"
+)
var AllFlag bool
@@ -80,7 +82,6 @@ Valid resource types include:
* user`)
}
}
-
},
}
@@ -327,7 +328,7 @@ var ShowUserCmd = &cobra.Command{
if Namespace == "" {
Namespace = PGONamespace
}
- if Selector == "" && AllFlag == false && len(args) == 0 {
+ if Selector == "" && !AllFlag && len(args) == 0 {
fmt.Println("Error: --selector, --all, or cluster name()s required for this command")
} else {
showUser(args, Namespace)
diff --git a/cmd/pgo/cmd/status.go b/cmd/pgo/cmd/status.go
index f7cea5a946..28c930f226 100644
--- a/cmd/pgo/cmd/status.go
+++ b/cmd/pgo/cmd/status.go
@@ -33,11 +33,9 @@ func init() {
RootCmd.AddCommand(statusCmd)
statusCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", "The output format. Currently, json is the only supported value.")
-
}
func showStatus(args []string, ns string) {
-
log.Debugf("showStatus called %v", args)
if OutputFormat != "" && OutputFormat != "json" {
@@ -46,7 +44,6 @@ func showStatus(args []string, ns string) {
}
response, err := api.ShowStatus(httpclient, &SessionCredentials, ns)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -67,11 +64,9 @@ func showStatus(args []string, ns string) {
}
printSummary(&response.Result)
-
}
func printSummary(status *msgs.StatusDetail) {
-
WID := 25
fmt.Printf("%s%d\n", util.Rpad("Databases:", " ", WID), status.NumDatabases)
fmt.Printf("%s%d\n", util.Rpad("Claims:", " ", WID), status.NumClaims)
diff --git a/cmd/pgo/cmd/test.go b/cmd/pgo/cmd/test.go
index a25ce23b9c..89da7b7b7f 100644
--- a/cmd/pgo/cmd/test.go
+++ b/cmd/pgo/cmd/test.go
@@ -57,11 +57,9 @@ func init() {
testCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.")
testCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", "The output format. Currently, json is the only supported value.")
testCmd.Flags().BoolVar(&AllFlag, "all", false, "test all resources.")
-
}
func showTest(args []string, ns string) {
-
log.Debugf("showCluster called %v", args)
log.Debugf("selector is %s", Selector)
@@ -110,7 +108,7 @@ func showTest(args []string, ns string) {
for _, result := range response.Results {
fmt.Println("")
- fmt.Println(fmt.Sprintf("cluster : %s", result.ClusterName))
+ fmt.Printf("cluster : %s\n", result.ClusterName)
// first, print the test results for the endpoints, which make up
// the services
@@ -124,15 +122,15 @@ func showTest(args []string, ns string) {
// prints out a set of test results
func printTestResults(testName string, results []msgs.ClusterTestDetail) {
// print out the header for this group of tests
- fmt.Println(fmt.Sprintf("%s%s", TreeBranch, testName))
+ fmt.Printf("%s%s\n", TreeBranch, testName)
// iterate though the results and print them!
for _, v := range results {
fmt.Printf("%s%s%s (%s): ",
TreeBranch, TreeBranch, v.InstanceType, v.Message)
if v.Available {
- fmt.Println(fmt.Sprintf("%s", GREEN("UP")))
+ fmt.Println(GREEN("UP"))
} else {
- fmt.Println(fmt.Sprintf("%s", RED("DOWN")))
+ fmt.Println(RED("DOWN"))
}
}
}
diff --git a/cmd/pgo/cmd/update.go b/cmd/pgo/cmd/update.go
index 140c06cecd..f5c5d6eb82 100644
--- a/cmd/pgo/cmd/update.go
+++ b/cmd/pgo/cmd/update.go
@@ -171,7 +171,6 @@ func init() {
UpdateUserCmd.Flags().BoolVar(&PasswordValidAlways, "valid-always", false, "Sets a password to never expire based on expiration time. Takes precedence over --valid-days")
UpdateUserCmd.Flags().BoolVar(&RotatePassword, "rotate-password", false, "Rotates the user's password with an automatically generated password. The length of the password is determine by either --password-length or the value set on the server, in that order.")
UpdateUserCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.")
-
}
// UpdateCmd represents the update command
@@ -191,7 +190,6 @@ var UpdateCmd = &cobra.Command{
pgo update pgorole somerole --pgorole-permission="Cat"
pgo update user mycluster --username=testuser --selector=name=mycluster --password=somepassword`,
Run: func(cmd *cobra.Command, args []string) {
-
if len(args) == 0 {
fmt.Println(`Error: You must specify the type of resource to update. Valid resource types include:
* cluster
@@ -214,7 +212,6 @@ var UpdateCmd = &cobra.Command{
* user`)
}
}
-
},
}
@@ -315,7 +312,6 @@ pgo update user mycluster --username=foobar --disable-login
pgo update user mycluster --username=foobar --enable-login
`,
Run: func(cmd *cobra.Command, args []string) {
-
if Namespace == "" {
Namespace = PGONamespace
}
@@ -379,7 +375,6 @@ var UpdatePgouserCmd = &cobra.Command{
pgo update pgouser myuser --pgouser-password=somepassword --pgouser-roles=somerole
pgo update pgouser myuser --pgouser-password=somepassword --no-prompt`,
Run: func(cmd *cobra.Command, args []string) {
-
if Namespace == "" {
Namespace = PGONamespace
}
@@ -391,13 +386,13 @@ var UpdatePgouserCmd = &cobra.Command{
}
},
}
+
var UpdatePgoroleCmd = &cobra.Command{
Use: "pgorole",
Short: "Update a pgorole",
Long: `UPDATE allows you to update a pgo role. For example:
pgo update pgorole somerole --permissions="Cat,Ls`,
Run: func(cmd *cobra.Command, args []string) {
-
if Namespace == "" {
Namespace = PGONamespace
}
@@ -416,7 +411,6 @@ var UpdateNamespaceCmd = &cobra.Command{
Long: `UPDATE allows you to update a Namespace. For example:
pgo update namespace mynamespace`,
Run: func(cmd *cobra.Command, args []string) {
-
if len(args) == 0 {
fmt.Println("Error: You must specify the name of a Namespace.")
} else {
diff --git a/cmd/pgo/cmd/user.go b/cmd/pgo/cmd/user.go
index a304ce4870..ae5793167e 100644
--- a/cmd/pgo/cmd/user.go
+++ b/cmd/pgo/cmd/user.go
@@ -94,7 +94,6 @@ func createUser(args []string, ns string) {
}
response, err := api.CreateUser(httpclient, &SessionCredentials, &request)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(1)
@@ -113,7 +112,6 @@ func createUser(args []string, ns string) {
// deleteUser ...
func deleteUser(args []string, ns string) {
-
log.Debugf("deleting user %s selector=%s args=%v", Username, Selector, args)
if Username == "" {
@@ -130,7 +128,6 @@ func deleteUser(args []string, ns string) {
}
response, err := api.DeleteUser(httpclient, &SessionCredentials, &request)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(1)
@@ -348,7 +345,6 @@ func showUser(args []string, ns string) {
}
response, err := api.ShowUser(httpclient, &SessionCredentials, &request)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(1)
@@ -407,7 +403,6 @@ func updateUser(clusterNames []string, namespace string) {
}
response, err := api.UpdateUser(httpclient, &SessionCredentials, &request)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(1)
diff --git a/cmd/pgo/cmd/version.go b/cmd/pgo/cmd/version.go
index f66d22e5e2..969f146e85 100644
--- a/cmd/pgo/cmd/version.go
+++ b/cmd/pgo/cmd/version.go
@@ -47,7 +47,6 @@ func init() {
}
func showVersion() {
-
// print the client version
fmt.Println("pgo client version " + msgs.PGO_VERSION)
@@ -58,7 +57,6 @@ func showVersion() {
// otherwise, get the server version
response, err := api.ShowVersion(httpclient, &SessionCredentials)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
diff --git a/cmd/pgo/cmd/watch.go b/cmd/pgo/cmd/watch.go
index 69d288a1d8..fce2bcbd74 100644
--- a/cmd/pgo/cmd/watch.go
+++ b/cmd/pgo/cmd/watch.go
@@ -17,14 +17,15 @@ package cmd
import (
"fmt"
- "github.com/nsqio/go-nsq"
- log "github.com/sirupsen/logrus"
- "github.com/spf13/cobra"
"math/rand"
"os"
"os/signal"
"syscall"
"time"
+
+ "github.com/nsqio/go-nsq"
+ log "github.com/sirupsen/logrus"
+ "github.com/spf13/cobra"
)
type TailHandler struct {
@@ -45,7 +46,7 @@ var watchCmd = &cobra.Command{
}
log.Debug("watch called")
- watch(args, Namespace)
+ watch(args)
},
}
@@ -57,7 +58,7 @@ func init() {
watchCmd.Flags().StringVarP(&PGOEventAddress, "pgo-event-address", "a", "localhost:14150", "The address (host:port) where the event stream is.")
}
-func watch(args []string, ns string) {
+func watch(args []string) {
log.Debugf("watch called %v", args)
if len(args) == 0 {
@@ -66,10 +67,11 @@ func watch(args []string, ns string) {
topic := args[0]
- var totalMessages = 0
+ totalMessages := 0
var channel string
rand.Seed(time.Now().UnixNano())
+ // #nosec: G404
channel = fmt.Sprintf("tail%06d#ephemeral", rand.Int()%999999)
sigChan := make(chan os.Signal, 1)
@@ -107,7 +109,6 @@ func watch(args []string, ns string) {
for _, consumer := range consumers {
<-consumer.StopChan
}
-
}
func (th *TailHandler) HandleMessage(m *nsq.Message) error {
diff --git a/cmd/pgo/cmd/workflow.go b/cmd/pgo/cmd/workflow.go
index 56276cce9e..74b2741f0a 100644
--- a/cmd/pgo/cmd/workflow.go
+++ b/cmd/pgo/cmd/workflow.go
@@ -34,13 +34,10 @@ func showWorkflow(args []string, ns string) {
}
printWorkflow(args[0], ns)
-
}
func printWorkflow(id, ns string) {
-
response, err := api.ShowWorkflow(httpclient, id, &SessionCredentials, ns)
-
if err != nil {
fmt.Println("Error: " + err.Error())
os.Exit(2)
@@ -58,5 +55,4 @@ func printWorkflow(id, ns string) {
for k, v := range response.Results.Parameters {
fmt.Printf("%s%s\n", util.Rpad(k, " ", 20), v)
}
-
}
diff --git a/cmd/pgo/generatedocs.go b/cmd/pgo/generatedocs.go
index ddd859f214..a5b2d38271 100644
--- a/cmd/pgo/generatedocs.go
+++ b/cmd/pgo/generatedocs.go
@@ -35,7 +35,6 @@ title: "%s"
`
func main() {
-
fmt.Println("generate CLI markdown")
filePrepender := func(filename string) string {
diff --git a/cmd/pgo/main.go b/cmd/pgo/main.go
index f868a3ac7f..68aa34dee6 100644
--- a/cmd/pgo/main.go
+++ b/cmd/pgo/main.go
@@ -28,5 +28,4 @@ func main() {
fmt.Println(err)
os.Exit(1)
}
-
}
diff --git a/cmd/pgo/util/validation.go b/cmd/pgo/util/validation.go
index 7d90f6a6ac..33690de426 100644
--- a/cmd/pgo/util/validation.go
+++ b/cmd/pgo/util/validation.go
@@ -31,7 +31,6 @@ var validResourceName = regexp.MustCompile(`^[a-z0-9.\-]+$`).MatchString
// https://kubernetes.io/docs/concepts/overview/working-with-objects/names/
//
func IsValidForResourceName(target string) bool {
-
log.Debugf("IsValidForResourceName: %s", target)
return validResourceName(target)
@@ -48,7 +47,7 @@ func IsValidForResourceName(target string) bool {
func ValidateQuantity(quantity, flag string) error {
if quantity != "" {
if _, err := resource.ParseQuantity(quantity); err != nil {
- return fmt.Errorf("Error: \"%s\" - %s", flag, err.Error())
+ return fmt.Errorf("Error: \"%s\" - %w", flag, err)
}
}
diff --git a/cmd/postgres-operator/main.go b/cmd/postgres-operator/main.go
index aa7e99ab27..f457b79a0d 100644
--- a/cmd/postgres-operator/main.go
+++ b/cmd/postgres-operator/main.go
@@ -46,7 +46,7 @@ func main() {
}
debugFlag := os.Getenv("CRUNCHY_DEBUG")
- //add logging configuration
+ // add logging configuration
crunchylog.CrunchyLogger(crunchylog.SetParameters())
if debugFlag == "true" {
log.SetLevel(log.DebugLevel)
@@ -55,7 +55,7 @@ func main() {
log.Info("debug flag set to false")
}
- //give time for pgo-event to start up
+ // give time for pgo-event to start up
time.Sleep(time.Duration(5) * time.Second)
newKubernetesClient := func() (*kubeapi.Client, error) {
@@ -130,7 +130,6 @@ func main() {
// createAndStartNamespaceController creates a namespace controller and then starts it
func createAndStartNamespaceController(kubeClientset kubernetes.Interface,
controllerManager controller.Manager, stopCh <-chan struct{}) error {
-
nsKubeInformerFactory := kubeinformers.NewSharedInformerFactoryWithOptions(kubeClientset,
time.Duration(*operator.Pgo.Pgo.NamespaceRefreshInterval)*time.Second,
kubeinformers.WithTweakListOptions(func(options *metav1.ListOptions) {
diff --git a/cmd/postgres-operator/open_telemetry.go b/cmd/postgres-operator/open_telemetry.go
index 382079687d..e4b156bb35 100644
--- a/cmd/postgres-operator/open_telemetry.go
+++ b/cmd/postgres-operator/open_telemetry.go
@@ -62,7 +62,7 @@ func initOpenTelemetry() (func(), error) {
options := []stdout.Option{stdout.WithoutMetricExport()}
if filename != "" {
- file, err := os.OpenFile(filename, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
+ file, err := os.OpenFile(filename, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
if err != nil {
return nil, fmt.Errorf("unable to open exporter file: %w", err)
}
diff --git a/internal/apiserver/backrestservice/backrestimpl.go b/internal/apiserver/backrestservice/backrestimpl.go
index 7acc51d052..6f1aee538a 100644
--- a/internal/apiserver/backrestservice/backrestimpl.go
+++ b/internal/apiserver/backrestservice/backrestimpl.go
@@ -349,10 +349,7 @@ func getBackrestRepoPodName(cluster *crv1.Pgcluster) (string, error) {
}
func isPrimary(pod *v1.Pod, clusterName string) bool {
- if pod.ObjectMeta.Labels[config.LABEL_SERVICE_NAME] == clusterName {
- return true
- }
- return false
+ return pod.ObjectMeta.Labels[config.LABEL_SERVICE_NAME] == clusterName
}
func isReady(pod *v1.Pod) bool {
@@ -364,10 +361,8 @@ func isReady(pod *v1.Pod) bool {
readyCount++
}
}
- if readyCount != containerCount {
- return false
- }
- return true
+
+ return readyCount == containerCount
}
// isPrimaryReady goes through the pod list to first identify the
@@ -385,13 +380,14 @@ func isPrimaryReady(cluster *crv1.Pgcluster, ns string) error {
if err != nil {
return err
}
- for _, p := range pods.Items {
- if isPrimary(&p, cluster.Spec.Name) && isReady(&p) {
+ for i := range pods.Items {
+ p := &pods.Items[i]
+ if isPrimary(p, cluster.Spec.Name) && isReady(p) {
primaryReady = true
}
}
- if primaryReady == false {
+ if !primaryReady {
return errors.New("primary pod is not in Ready state")
}
return nil
@@ -463,7 +459,7 @@ func ShowBackrest(name, selector, ns string) msgs.ShowBackrestResponse {
verifyTLS, _ := strconv.ParseBool(operator.GetS3VerifyTLSSetting(c))
// get the pgBackRest info using this legacy function
- info, err := getInfo(c.Name, storageType, podname, ns, verifyTLS)
+ info, err := getInfo(storageType, podname, ns, verifyTLS)
// see if the function returned successfully, and if so, unmarshal the JSON
if err != nil {
log.Error(err)
@@ -490,7 +486,7 @@ func ShowBackrest(name, selector, ns string) msgs.ShowBackrestResponse {
return response
}
-func getInfo(clusterName, storageType, podname, ns string, verifyTLS bool) (string, error) {
+func getInfo(storageType, podname, ns string, verifyTLS bool) (string, error) {
log.Debug("backrest info command requested")
cmd := pgBackRestInfoCommand
@@ -589,7 +585,7 @@ func Restore(request *msgs.RestoreRequest, ns, pgouser string) msgs.RestoreRespo
return resp
}
- pgtask, err := getRestoreParams(request, ns, *cluster)
+ pgtask, err := getRestoreParams(request, ns)
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
@@ -624,7 +620,7 @@ func Restore(request *msgs.RestoreRequest, ns, pgouser string) msgs.RestoreRespo
return resp
}
-func getRestoreParams(request *msgs.RestoreRequest, ns string, cluster crv1.Pgcluster) (*crv1.Pgtask, error) {
+func getRestoreParams(request *msgs.RestoreRequest, ns string) (*crv1.Pgtask, error) {
var newInstance *crv1.Pgtask
spec := crv1.PgtaskSpec{}
diff --git a/internal/apiserver/backrestservice/backrestservice.go b/internal/apiserver/backrestservice/backrestservice.go
index 49a1edbbdb..c9e5d4b030 100644
--- a/internal/apiserver/backrestservice/backrestservice.go
+++ b/internal/apiserver/backrestservice/backrestservice.go
@@ -69,12 +69,12 @@ func CreateBackupHandler(w http.ResponseWriter, r *http.Request) {
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = CreateBackup(&request, ns, username)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
// DeleteBackrestHandler deletes a targeted backup from a pgBackRest repository
@@ -219,19 +219,19 @@ func ShowBackrestHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = ShowBackrest(backupname, selector, ns)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
// RestoreHandler ...
@@ -276,10 +276,10 @@ func RestoreHandler(w http.ResponseWriter, r *http.Request) {
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = Restore(&request, ns, username)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/backupoptions/backupoptionsutil.go b/internal/apiserver/backupoptions/backupoptionsutil.go
index b5b7ba314a..5cf5901b29 100644
--- a/internal/apiserver/backupoptions/backupoptionsutil.go
+++ b/internal/apiserver/backupoptions/backupoptionsutil.go
@@ -35,7 +35,6 @@ type backupOptions interface {
// ValidateBackupOpts validates the backup/restore options that can be provided to the various backup
// and restore utilities supported by pgo (e.g. pg_dump, pg_restore, pgBackRest, etc.)
func ValidateBackupOpts(backupOpts string, request interface{}) error {
-
// some quick checks to make sure backup opts string is valid and should be processed and validated
if strings.TrimSpace(backupOpts) == "" {
return nil
@@ -52,7 +51,6 @@ func ValidateBackupOpts(backupOpts string, request interface{}) error {
return err
} else {
err := backupOptions.validate(setFlagFieldNames)
-
if err != nil {
return err
}
@@ -61,7 +59,6 @@ func ValidateBackupOpts(backupOpts string, request interface{}) error {
}
func convertBackupOptsToStruct(backupOpts string, request interface{}) (backupOptions, []string, error) {
-
parsedBackupOpts := parseBackupOpts(backupOpts)
optsStruct, utilityName, err := createBackupOptionsStruct(backupOpts, request)
@@ -92,6 +89,7 @@ func convertBackupOptsToStruct(backupOpts string, request interface{}) (backupOp
commandLine.BoolVarP(fieldVal.Addr().Interface().(*bool), flag, flagShort, false, "")
case reflect.Slice:
commandLine.StringArrayVarP(fieldVal.Addr().Interface().(*[]string), flag, flagShort, nil, "")
+ default: // no-op
}
}
}
@@ -109,7 +107,6 @@ func convertBackupOptsToStruct(backupOpts string, request interface{}) (backupOp
}
func parseBackupOpts(backupOpts string) []string {
-
newFields := []string{}
var newField string
for i, c := range backupOpts {
@@ -137,7 +134,6 @@ func parseBackupOpts(backupOpts string) []string {
}
func createBackupOptionsStruct(backupOpts string, request interface{}) (backupOptions, string, error) {
-
switch request := request.(type) {
case *msgs.CreateBackrestBackupRequest:
return &pgBackRestBackupOptions{}, "pgBackRest", nil
@@ -215,7 +211,7 @@ func handleCustomParseErrors(err error, usage *bytes.Buffer, optsStruct backupOp
func obtainSetFlagFieldNames(commandLine *pflag.FlagSet, structType reflect.Type) []string {
var setFlagFieldNames []string
- var visitBackupOptFlags = func(flag *pflag.Flag) {
+ visitBackupOptFlags := func(flag *pflag.Flag) {
for i := 0; i < structType.NumField(); i++ {
field := structType.Field(i)
flagName, _ := field.Tag.Lookup("flag")
diff --git a/internal/apiserver/backupoptions/pgbackrestoptions.go b/internal/apiserver/backupoptions/pgbackrestoptions.go
index 2c7a1e356e..7926f4c0da 100644
--- a/internal/apiserver/backupoptions/pgbackrestoptions.go
+++ b/internal/apiserver/backupoptions/pgbackrestoptions.go
@@ -131,7 +131,6 @@ func (backRestBackupOpts pgBackRestBackupOptions) validate(setFlagFieldNames []s
var errstrings []string
for _, setFlag := range setFlagFieldNames {
-
switch setFlag {
case "BackupType":
if !isValidValue([]string{"full", "diff", "incr"}, backRestBackupOpts.BackupType) {
@@ -194,11 +193,9 @@ func (backRestBackupOpts pgBackRestBackupOptions) validate(setFlagFieldNames []s
}
func (backRestRestoreOpts pgBackRestRestoreOptions) validate(setFlagFieldNames []string) error {
-
var errstrings []string
for _, setFlag := range setFlagFieldNames {
-
switch setFlag {
case "TargetAction":
if !isValidValue([]string{"pause", "promote", "shutdown"}, backRestRestoreOpts.TargetAction) {
diff --git a/internal/apiserver/backupoptions/pgdumpoptions.go b/internal/apiserver/backupoptions/pgdumpoptions.go
index 268aa42412..803a417e95 100644
--- a/internal/apiserver/backupoptions/pgdumpoptions.go
+++ b/internal/apiserver/backupoptions/pgdumpoptions.go
@@ -165,11 +165,9 @@ type pgRestoreOptions struct {
}
func (dumpOpts pgDumpOptions) validate(setFlagFieldNames []string) error {
-
var errstrings []string
for _, setFlag := range setFlagFieldNames {
-
switch setFlag {
case "Format":
if !isValidValue([]string{"p", "plain", "c", "custom", "t", "tar"}, dumpOpts.Format) {
@@ -214,11 +212,9 @@ func (dumpOpts pgDumpOptions) validate(setFlagFieldNames []string) error {
}
func (dumpAllOpts pgDumpAllOptions) validate(setFlagFieldNames []string) error {
-
var errstrings []string
for _, setFlag := range setFlagFieldNames {
-
switch setFlag {
case "SuperUser":
if !dumpAllOpts.DisableTriggers {
@@ -243,11 +239,9 @@ func (dumpAllOpts pgDumpAllOptions) validate(setFlagFieldNames []string) error {
}
func (restoreOpts pgRestoreOptions) validate(setFlagFieldNames []string) error {
-
var errstrings []string
for _, setFlag := range setFlagFieldNames {
-
switch setFlag {
case "Format":
if !isValidValue([]string{"p", "plain", "c", "custom", "t", "tar"}, restoreOpts.Format) {
diff --git a/internal/apiserver/catservice/catimpl.go b/internal/apiserver/catservice/catimpl.go
index af86a2a725..e21fd76266 100644
--- a/internal/apiserver/catservice/catimpl.go
+++ b/internal/apiserver/catservice/catimpl.go
@@ -101,7 +101,6 @@ func Cat(request *msgs.CatRequest, ns string) msgs.CatResponse {
// run cat on the postgres pod, remember we are assuming
// first container in the pod is always the postgres container.
func cat(pod *v1.Pod, ns string, args []string) (string, error) {
-
command := make([]string, 0)
command = append(command, "cat")
for i := 1; i < len(args); i++ {
@@ -120,10 +119,10 @@ func cat(pod *v1.Pod, ns string, args []string) (string, error) {
return stdout, err
}
-//make sure the parameters to the cat command dont' container mischief
+// make sure the parameters to the cat command don't contain mischief
func validateArgs(args []string) error {
var err error
- var bad = "&|;>"
+ bad := "&|;>"
for i := 1; i < len(args); i++ {
if strings.ContainsAny(args[i], bad) {
diff --git a/internal/apiserver/catservice/catservice.go b/internal/apiserver/catservice/catservice.go
index 439274271e..cf25ac5f9a 100644
--- a/internal/apiserver/catservice/catservice.go
+++ b/internal/apiserver/catservice/catservice.go
@@ -17,10 +17,11 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
"github.com/crunchydata/postgres-operator/internal/apiserver"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
// CatHandler ...
@@ -65,7 +66,7 @@ func CatHandler(w http.ResponseWriter, r *http.Request) {
resp := msgs.CatResponse{}
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -74,9 +75,9 @@ func CatHandler(w http.ResponseWriter, r *http.Request) {
resp := msgs.CatResponse{}
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
- json.NewEncoder(w).Encode(catResponse)
+ _ = json.NewEncoder(w).Encode(catResponse)
}
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index d87c01da39..3745d1d220 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -68,7 +68,7 @@ func DeleteCluster(name, selector string, deleteData, deleteBackups bool, ns, pg
log.Debugf("delete-data is [%t]", deleteData)
log.Debugf("delete-backups is [%t]", deleteBackups)
- //get the clusters list
+ // get the clusters list
clusterList, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
if err != nil {
response.Status.Code = msgs.Error
@@ -121,7 +121,6 @@ func DeleteCluster(name, selector string, deleteData, deleteBackups bool, ns, pg
}
return response
-
}
// ShowCluster ...
@@ -143,7 +142,7 @@ func ShowCluster(name, selector, ccpimagetag, ns string, allflag bool) msgs.Show
log.Debugf("selector on showCluster is %s", selector)
- //get a list of all clusters
+ // get a list of all clusters
clusterList, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
if err != nil {
response.Status.Code = msgs.Error
@@ -153,7 +152,8 @@ func ShowCluster(name, selector, ccpimagetag, ns string, allflag bool) msgs.Show
log.Debugf("clusters found len is %d", len(clusterList.Items))
- for _, c := range clusterList.Items {
+ for i := range clusterList.Items {
+ c := clusterList.Items[i]
detail := msgs.ShowClusterDetail{}
detail.Cluster = c
detail.Deployments, err = getDeployments(&c, ns)
@@ -192,7 +192,6 @@ func ShowCluster(name, selector, ccpimagetag, ns string, allflag bool) msgs.Show
}
return response
-
}
func getDeployments(cluster *crv1.Pgcluster, ns string) ([]msgs.ShowClusterDeployment, error) {
@@ -228,7 +227,7 @@ func GetPods(clientset kubernetes.Interface, cluster *crv1.Pgcluster) ([]msgs.Sh
ctx := context.TODO()
output := []msgs.ShowClusterPod{}
- //get pods, but exclude backup pods and backrest repo
+ // get pods, but exclude backup pods and backrest repo
selector := fmt.Sprintf("%s=%s,%s", config.LABEL_PG_CLUSTER, cluster.GetName(), config.LABEL_PG_DATABASE)
log.Debugf("selector for GetPods is %s", selector)
@@ -237,14 +236,15 @@ func GetPods(clientset kubernetes.Interface, cluster *crv1.Pgcluster) ([]msgs.Sh
return output, err
}
- for _, p := range pods.Items {
+ for i := range pods.Items {
+ p := &pods.Items[i]
d := msgs.ShowClusterPod{
PVC: []msgs.ShowClusterPodPVC{},
}
d.Name = p.Name
d.Phase = string(p.Status.Phase)
d.NodeName = p.Spec.NodeName
- d.ReadyStatus, d.Ready = getReadyStatus(&p)
+ d.ReadyStatus, d.Ready = getReadyStatus(p)
// get information about several of the PVCs. This borrows from a legacy
// method to get this information
@@ -262,7 +262,6 @@ func GetPods(clientset kubernetes.Interface, cluster *crv1.Pgcluster) ([]msgs.Sh
pvcName := v.VolumeSource.PersistentVolumeClaim.ClaimName
// query the PVC to get the storage capacity
pvc, err := clientset.CoreV1().PersistentVolumeClaims(cluster.Namespace).Get(ctx, pvcName, metav1.GetOptions{})
-
// if there is an error, ignore it, and move on to the next one
if err != nil {
log.Warn(err)
@@ -280,7 +279,7 @@ func GetPods(clientset kubernetes.Interface, cluster *crv1.Pgcluster) ([]msgs.Sh
}
d.Primary = false
- d.Type = getType(&p, cluster.Spec.Name)
+ d.Type = getType(p)
if d.Type == msgs.PodTypePrimary {
d.Primary = true
}
@@ -289,7 +288,6 @@ func GetPods(clientset kubernetes.Interface, cluster *crv1.Pgcluster) ([]msgs.Sh
}
return output, err
-
}
func getServices(cluster *crv1.Pgcluster, ns string) ([]msgs.ShowClusterService, error) {
@@ -368,7 +366,6 @@ func TestCluster(name, selector, ns, pgouser string, allFlag bool) msgs.ClusterT
// Find a list of a clusters that match the given selector
clusterList, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
-
// If the response errors, return here, as we won't be able to return any
// useful information in the test
if err != nil {
@@ -381,7 +378,8 @@ func TestCluster(name, selector, ns, pgouser string, allFlag bool) msgs.ClusterT
log.Debugf("Total clusters found: %d", len(clusterList.Items))
// Iterate through each cluster and perform the various tests against them
- for _, c := range clusterList.Items {
+ for i := range clusterList.Items {
+ c := clusterList.Items[i]
// Set up the object that will be appended to the response that
// indicates the availability of the endpoints / instances for this
// cluster
@@ -397,7 +395,6 @@ func TestCluster(name, selector, ns, pgouser string, allFlag bool) msgs.ClusterT
// Get the PostgreSQL instances!
log.Debugf("Looking up instance pods for cluster: %s", c.Name)
pods, err := GetPrimaryAndReplicaPods(&c, ns)
-
// if there is an error with returning the primary/replica instances,
// then error and continue
if err != nil {
@@ -718,7 +715,7 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
resp.Status.Msg = err.Error()
return resp
}
- //add a label for the custom config
+ // add a label for the custom config
userLabelsMap[config.LABEL_CUSTOM_CONFIG] = request.CustomConfig
}
@@ -815,7 +812,7 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
}
if request.ReplicaStorageConfig != "" {
- if apiserver.IsValidStorageName(request.ReplicaStorageConfig) == false {
+ if !apiserver.IsValidStorageName(request.ReplicaStorageConfig) {
resp.Status.Code = msgs.Error
resp.Status.Msg = request.ReplicaStorageConfig + " Storage config was not found "
return resp
@@ -897,7 +894,7 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
}
}
- validateConfigPolicies(clusterName, request.Policies, ns)
+ _ = validateConfigPolicies(clusterName, request.Policies, ns)
// create the user secrets
// first, the superuser
@@ -976,7 +973,6 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
backrestSecret, err := apiserver.Clientset.
CoreV1().Secrets(request.Namespace).
Get(ctx, request.BackrestS3CASecretName, metav1.GetOptions{})
-
if err != nil {
log.Error(err)
resp.Status.Code = msgs.Error
@@ -1019,7 +1015,7 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
return resp
}
- //create a workflow for this new cluster
+ // create a workflow for this new cluster
id, err = createWorkflowTask(clusterName, ns, pgouser)
if err != nil {
log.Error(err)
@@ -1034,7 +1030,7 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
newInstance.Spec.UserLabels[config.LABEL_WORKFLOW_ID] = id
resp.Result.Database = newInstance.Spec.Database
- //create CRD for new cluster
+ // create CRD for new cluster
_, err = apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Create(ctx, newInstance, metav1.CreateOptions{})
if err != nil {
resp.Status.Code = msgs.Error
@@ -1075,7 +1071,7 @@ func validateConfigPolicies(clusterName, PoliciesFlag, ns string) error {
log.Error("error getting pgpolicy " + v + err.Error())
return err
}
- //create a pgtask to add the policy after the db is ready
+ // create a pgtask to add the policy after the db is ready
}
spec := crv1.PgtaskSpec{}
@@ -1105,7 +1101,6 @@ func validateConfigPolicies(clusterName, PoliciesFlag, ns string) error {
}
func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabelsMap map[string]string, ns string) *crv1.Pgcluster {
-
spec := crv1.PgclusterSpec{
Annotations: crv1.ClusterAnnotations{
Backrest: map[string]string{},
@@ -1393,7 +1388,7 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
spec.UserLabels = userLabelsMap
spec.UserLabels[config.LABEL_PGO_VERSION] = msgs.PGO_VERSION
- //override any values from config file
+ // override any values from config file
str = apiserver.Pgo.Cluster.Port
log.Debugf("%s", apiserver.Pgo.Cluster.Port)
if str != "" {
@@ -1586,13 +1581,12 @@ func getReadyStatus(pod *v1.Pod) (string, bool) {
equal = true
}
return fmt.Sprintf("%d/%d", readyCount, containerCount), equal
-
}
func createWorkflowTask(clusterName, ns, pgouser string) (string, error) {
ctx := context.TODO()
- //create pgtask CRD
+ // create pgtask CRD
spec := crv1.PgtaskSpec{}
spec.Namespace = ns
spec.Name = clusterName + "-" + crv1.PgtaskWorkflowCreateClusterType
@@ -1628,9 +1622,7 @@ func createWorkflowTask(clusterName, ns, pgouser string) (string, error) {
return spec.Parameters[crv1.PgtaskWorkflowID], err
}
-func getType(pod *v1.Pod, clusterName string) string {
-
- //log.Debugf("%v\n", pod.ObjectMeta.Labels)
+func getType(pod *v1.Pod) string {
if pod.ObjectMeta.Labels[config.LABEL_PGO_BACKREST_REPO] != "" {
return msgs.PodTypePgbackrest
} else if pod.ObjectMeta.Labels[config.LABEL_PGBOUNCER] != "" {
@@ -1641,7 +1633,6 @@ func getType(pod *v1.Pod, clusterName string) string {
return msgs.PodTypeReplica
}
return msgs.PodTypeUnknown
-
}
func validateCustomConfig(configmapname, ns string) (bool, error) {
@@ -1727,7 +1718,6 @@ func createUserSecret(request *msgs.CreateClusterRequest, cluster *crv1.Pgcluste
// now attempt to load said secret
oldPassword, err := util.GetPasswordFromSecret(apiserver.Clientset, cluster.Spec.Namespace, secretFromSecretName)
-
// if there is an error, abandon here, otherwise set the oldPassword as the
// current password
if err != nil {
@@ -1745,7 +1735,6 @@ func createUserSecret(request *msgs.CreateClusterRequest, cluster *crv1.Pgcluste
}
generatedPassword, err := util.GeneratePassword(passwordLength)
-
// if the password fails to generate, return the error
if err != nil {
return "", "", err
@@ -1855,7 +1844,7 @@ func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterRespons
clusterList := crv1.PgclusterList{}
- //get the clusters list
+ // get the clusters list
if request.AllFlag {
cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(request.Namespace).List(ctx, metav1.ListOptions{})
if err != nil {
@@ -1892,15 +1881,17 @@ func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterRespons
return response
}
- for i, cluster := range clusterList.Items {
+ for i := range clusterList.Items {
+ cluster := clusterList.Items[i]
- //set autofail=true or false on each pgcluster CRD
+ // set autofail=true or false on each pgcluster CRD
// Make the change based on the value of Autofail vis-a-vis UpdateClusterAutofailStatus
switch request.Autofail {
case msgs.UpdateClusterAutofailEnable:
cluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL] = "true"
case msgs.UpdateClusterAutofailDisable:
cluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL] = "false"
+ case msgs.UpdateClusterAutofailDoNothing: // no-op
}
// enable or disable the metrics collection sidecar
@@ -1925,6 +1916,7 @@ func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterRespons
}
case msgs.UpdateClusterStandbyDisable:
cluster.Spec.Standby = false
+ case msgs.UpdateClusterStandbyDoNothing: // no-op
}
// return an error if attempting to enable standby for a cluster that does not have the
// required S3 settings
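The hunks above add explicit `DoNothing` cases to the autofail and standby switches so the no-op path is visible rather than implied. A minimal sketch of the same pattern — the constant names here are illustrative stand-ins, not the Operator's real `apiservermsgs` values:

```go
package main

import "fmt"

// Illustrative stand-ins for the UpdateClusterAutofail* request values;
// the real constants live in the apiservermsgs package.
const (
	AutofailEnable    = "enable"
	AutofailDisable   = "disable"
	AutofailDoNothing = ""
)

// autofailLabel returns the new label value for a requested change. The
// explicit DoNothing case makes the switch exhaustive, so linters and
// readers can see the no-op is intentional rather than a missed case.
func autofailLabel(request, current string) string {
	switch request {
	case AutofailEnable:
		return "true"
	case AutofailDisable:
		return "false"
	case AutofailDoNothing: // no-op: keep the existing label
	}
	return current
}

func main() {
	fmt.Println(autofailLabel(AutofailEnable, "false"))    // prints "true"
	fmt.Println(autofailLabel(AutofailDoNothing, "false")) // prints "false"
}
```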
@@ -2032,7 +2024,7 @@ func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterRespons
// if it fails...just put a in the logs.
if cluster.Spec.Exporter && request.ExporterRotatePassword {
if err := clusteroperator.RotateExporterPassword(apiserver.Clientset, apiserver.RESTConfig,
- &clusterList.Items[i]); err != nil {
+ &cluster); err != nil {
log.Error(err)
}
}
@@ -2094,15 +2086,16 @@ func GetPrimaryAndReplicaPods(cluster *crv1.Pgcluster, ns string) ([]msgs.ShowCl
if err != nil {
return output, err
}
- for _, p := range pods.Items {
+ for i := range pods.Items {
+ p := &pods.Items[i]
d := msgs.ShowClusterPod{}
d.Name = p.Name
d.Phase = string(p.Status.Phase)
d.NodeName = p.Spec.NodeName
- d.ReadyStatus, d.Ready = getReadyStatus(&p)
+ d.ReadyStatus, d.Ready = getReadyStatus(p)
d.Primary = false
- d.Type = getType(&p, cluster.Spec.Name)
+ d.Type = getType(p)
if d.Type == msgs.PodTypePrimary {
d.Primary = true
}
@@ -2119,15 +2112,16 @@ func GetPrimaryAndReplicaPods(cluster *crv1.Pgcluster, ns string) ([]msgs.ShowCl
if err != nil {
return output, err
}
- for _, p := range pods.Items {
+ for i := range pods.Items {
+ p := &pods.Items[i]
d := msgs.ShowClusterPod{}
d.Name = p.Name
d.Phase = string(p.Status.Phase)
d.NodeName = p.Spec.NodeName
- d.ReadyStatus, d.Ready = getReadyStatus(&p)
+ d.ReadyStatus, d.Ready = getReadyStatus(p)
d.Primary = false
- d.Type = getType(&p, cluster.Spec.Name)
+ d.Type = getType(p)
if d.Type == msgs.PodTypePrimary {
d.Primary = true
}
@@ -2136,7 +2130,6 @@ func GetPrimaryAndReplicaPods(cluster *crv1.Pgcluster, ns string) ([]msgs.ShowCl
}
return output, err
-
}
// setClusterAnnotationGroup helps with setting the specific annotation group
@@ -2155,7 +2148,6 @@ func setClusterAnnotationGroup(annotationGroup, annotations map[string]string) {
// a new cluster. This includes ensuring the type provided is valid, and that the required
// configuration settings (s3 bucket, region, etc.) are also present
func validateBackrestStorageTypeOnCreate(request *msgs.CreateClusterRequest) error {
-
requestBackRestStorageType := request.BackrestStorageType
if requestBackRestStorageType != "" && !util.IsValidBackrestStorageType(requestBackRestStorageType) {
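Several hunks in this file replace `for _, p := range pods.Items` with an index loop before passing `&p` along. Prior to Go 1.22 the range variable is a single reused variable, so taking its address aliases every iteration; indexing into the slice gives a pointer to each distinct element. A minimal sketch of the fixed pattern, with a stand-in `Pod` type:

```go
package main

import "fmt"

// Pod is a stand-in for the Kubernetes corev1.Pod type used in the patch.
type Pod struct{ Name string }

// podPointers indexes into the slice so each pointer refers to a distinct
// element, matching the `for i := range` rewrite in GetPrimaryAndReplicaPods.
func podPointers(pods []Pod) []*Pod {
	out := make([]*Pod, 0, len(pods))
	for i := range pods {
		p := &pods[i] // address of the slice element, not of a loop variable
		out = append(out, p)
	}
	return out
}

func main() {
	pods := []Pod{{Name: "hippo-primary"}, {Name: "hippo-replica"}}
	for _, p := range podPointers(pods) {
		fmt.Println(p.Name)
	}
}
```

A side effect worth noting: the returned pointers refer into the original slice, so mutations through them are visible to the caller — which is exactly what the original `&p` version silently failed to do.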
diff --git a/internal/apiserver/clusterservice/clusterservice.go b/internal/apiserver/clusterservice/clusterservice.go
index d0f31df636..32b904eb61 100644
--- a/internal/apiserver/clusterservice/clusterservice.go
+++ b/internal/apiserver/clusterservice/clusterservice.go
@@ -77,19 +77,18 @@ func CreateClusterHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = CreateCluster(&request, ns, username)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
// ShowClusterHandler ...
@@ -150,7 +149,7 @@ func ShowClusterHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
resp.Results = make([]msgs.ShowClusterDetail, 0)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -158,13 +157,12 @@ func ShowClusterHandler(w http.ResponseWriter, r *http.Request) {
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
resp.Results = make([]msgs.ShowClusterDetail, 0)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = ShowCluster(clustername, selector, ccpimagetag, ns, allflag)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
// DeleteClusterHandler ...
@@ -225,19 +223,18 @@ func DeleteClusterHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
resp.Results = make([]string, 0)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
resp.Results = make([]string, 0)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = DeleteCluster(clustername, selector, deleteData, deleteBackups, ns, username)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
// TestClusterHandler ...
@@ -290,19 +287,19 @@ func TestClusterHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = TestCluster(clustername, selector, ns, username, request.AllFlag)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
// UpdateClusterHandler ...
@@ -352,7 +349,7 @@ func UpdateClusterHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
resp.Results = make([]string, 0)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -360,11 +357,10 @@ func UpdateClusterHandler(w http.ResponseWriter, r *http.Request) {
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
resp.Results = make([]string, 0)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = UpdateCluster(&request)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/clusterservice/scaleimpl.go b/internal/apiserver/clusterservice/scaleimpl.go
index 4148283bfb..00be04242e 100644
--- a/internal/apiserver/clusterservice/scaleimpl.go
+++ b/internal/apiserver/clusterservice/scaleimpl.go
@@ -73,10 +73,10 @@ func ScaleCluster(name, replicaCount, storageConfig, nodeLabel,
spec := crv1.PgreplicaSpec{}
- //refer to the cluster's replica storage setting by default
+ // refer to the cluster's replica storage setting by default
spec.ReplicaStorage = cluster.Spec.ReplicaStorage
- //allow for user override
+ // allow for user override
if storageConfig != "" {
spec.ReplicaStorage, _ = apiserver.Pgo.GetStorageSpec(storageConfig)
}
@@ -97,7 +97,7 @@ func ScaleCluster(name, replicaCount, storageConfig, nodeLabel,
spec.UserLabels[config.LABEL_SERVICE_TYPE] = serviceType
}
- //set replica node lables to blank to start with, then check for overrides
+ // set replica node labels to blank to start with, then check for overrides
spec.UserLabels[config.LABEL_NODE_LABEL_KEY] = ""
spec.UserLabels[config.LABEL_NODE_LABEL_VALUE] = ""
@@ -204,7 +204,6 @@ func ScaleQuery(name, ns string) msgs.ScaleQueryResponse {
}
replicationStatusResponse, err := util.ReplicationStatus(replicationStatusRequest, false, true)
-
// if an error is return, log the message, and return the response
if err != nil {
log.Error(err.Error())
@@ -301,7 +300,7 @@ func ScaleDown(deleteData bool, clusterName, replicaName, ns string) msgs.ScaleD
return response
}
- //create the rmdata task which does the cleanup
+ // create the rmdata task which does the cleanup
clusterPGHAScope := cluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE]
deleteBackups := false
diff --git a/internal/apiserver/clusterservice/scaleservice.go b/internal/apiserver/clusterservice/scaleservice.go
index 92db853216..d0d54d6119 100644
--- a/internal/apiserver/clusterservice/scaleservice.go
+++ b/internal/apiserver/clusterservice/scaleservice.go
@@ -117,14 +117,14 @@ func ScaleClusterHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -132,7 +132,7 @@ func ScaleClusterHandler(w http.ResponseWriter, r *http.Request) {
resp = ScaleCluster(clusterName, replicaCount, storageConfig, nodeLabel,
ccpImageTag, serviceType, ns, username)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
// ScaleQueryHandler ...
@@ -190,19 +190,19 @@ func ScaleQueryHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = ScaleQuery(clusterName, ns)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
// ScaleDownHandler ...
@@ -273,23 +273,23 @@ func ScaleDownHandler(w http.ResponseWriter, r *http.Request) {
deleteData, err := strconv.ParseBool(tmp)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
if clientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = ScaleDown(deleteData, clusterName, replicaName, ns)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/common.go b/internal/apiserver/common.go
index 15c070e0dc..a4c392a817 100644
--- a/internal/apiserver/common.go
+++ b/internal/apiserver/common.go
@@ -59,7 +59,7 @@ func CreateRMDataTask(clusterName, replicaName, taskName string, deleteBackups,
ctx := context.TODO()
var err error
- //create pgtask CRD
+ // create pgtask CRD
spec := crv1.PgtaskSpec{}
spec.Namespace = ns
spec.Name = taskName
@@ -91,7 +91,6 @@ func CreateRMDataTask(clusterName, replicaName, taskName string, deleteBackups,
}
return err
-
}
func GetBackrestStorageTypes() []string {
diff --git a/internal/apiserver/configservice/configservice.go b/internal/apiserver/configservice/configservice.go
index 1f70934888..d4066b5017 100644
--- a/internal/apiserver/configservice/configservice.go
+++ b/internal/apiserver/configservice/configservice.go
@@ -17,10 +17,11 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
"github.com/crunchydata/postgres-operator/internal/apiserver"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
// ShowConfigHandler ...
@@ -68,17 +69,17 @@ func ShowConfigHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
_, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = ShowConfig()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/dfservice/dfimpl.go b/internal/apiserver/dfservice/dfimpl.go
index 5a0186f41b..1a24d235d2 100644
--- a/internal/apiserver/dfservice/dfimpl.go
+++ b/internal/apiserver/dfservice/dfimpl.go
@@ -126,7 +126,6 @@ func getClaimCapacity(clientset kubernetes.Interface, pvcName, ns string) (strin
log.Debugf("in df pvc name found to be %s", pvcName)
pvc, err := clientset.CoreV1().PersistentVolumeClaims(ns).Get(ctx, pvcName, metav1.GetOptions{})
-
if err != nil {
log.Error(err)
return "", err
@@ -158,7 +157,6 @@ func getClusterDf(cluster *crv1.Pgcluster, clusterResultsChannel chan msgs.DfDet
}
pods, err := apiserver.Clientset.CoreV1().Pods(cluster.Spec.Namespace).List(ctx, options)
-
// if there is an error attempting to get the pods, just return
if err != nil {
errorChannel <- err
@@ -307,7 +305,6 @@ func getPodDf(cluster *crv1.Pgcluster, pod *v1.Pod, podResultsChannel chan msgs.
stdout, stderr, err := kubeapi.ExecToPodThroughAPI(apiserver.RESTConfig,
apiserver.Clientset, cmd, pvcContainerName, pod.Name, cluster.Spec.Namespace, nil)
-
// if the command fails, exit here
if err != nil {
err := fmt.Errorf(stderr)
@@ -318,7 +315,7 @@ func getPodDf(cluster *crv1.Pgcluster, pod *v1.Pod, podResultsChannel chan msgs.
// have to parse the size out from the statement. Size is in bytes
if _, err = fmt.Sscan(stdout, &result.PVCUsed); err != nil {
- err := fmt.Errorf("could not find the size of pvc %s: %v", result.PVCName, err)
+ err := fmt.Errorf("could not find the size of pvc %s: %w", result.PVCName, err)
log.Error(err)
errorChannel <- err
return
diff --git a/internal/apiserver/dfservice/dfservice.go b/internal/apiserver/dfservice/dfservice.go
index 325e20257a..89c8796b27 100644
--- a/internal/apiserver/dfservice/dfservice.go
+++ b/internal/apiserver/dfservice/dfservice.go
@@ -69,7 +69,7 @@ func DfHandler(w http.ResponseWriter, r *http.Request) {
if err := json.NewDecoder(r.Body).Decode(&request); err != nil {
response := CreateErrorResponse(err.Error())
- json.NewEncoder(w).Encode(response)
+ _ = json.NewEncoder(w).Encode(response)
return
}
@@ -84,14 +84,14 @@ func DfHandler(w http.ResponseWriter, r *http.Request) {
// check that the client versions match. If they don't, error out
if request.ClientVersion != msgs.PGO_VERSION {
response := CreateErrorResponse(apiserver.VERSION_MISMATCH_ERROR)
- json.NewEncoder(w).Encode(response)
+ _ = json.NewEncoder(w).Encode(response)
return
}
// ensure that the user has access to this namespace. if not, error out
if _, err := apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace); err != nil {
response := CreateErrorResponse(err.Error())
- json.NewEncoder(w).Encode(response)
+ _ = json.NewEncoder(w).Encode(response)
return
}
@@ -99,5 +99,5 @@ func DfHandler(w http.ResponseWriter, r *http.Request) {
response := DfCluster(request)
// turn the response into JSON
- json.NewEncoder(w).Encode(response)
+ _ = json.NewEncoder(w).Encode(response)
}
diff --git a/internal/apiserver/failoverservice/failoverimpl.go b/internal/apiserver/failoverservice/failoverimpl.go
index 0e6e56df58..7886e3ddfa 100644
--- a/internal/apiserver/failoverservice/failoverimpl.go
+++ b/internal/apiserver/failoverservice/failoverimpl.go
@@ -74,7 +74,7 @@ func CreateFailover(request *msgs.CreateFailoverRequest, ns, pgouser string) msg
spec.Name = request.ClusterName + "-" + config.LABEL_FAILOVER
// previous failovers will leave a pgtask so remove it first
- apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(ctx, spec.Name, metav1.DeleteOptions{})
+ _ = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(ctx, spec.Name, metav1.DeleteOptions{})
spec.TaskType = crv1.PgtaskFailover
spec.Parameters = make(map[string]string)
@@ -109,7 +109,6 @@ func CreateFailover(request *msgs.CreateFailoverRequest, ns, pgouser string) msg
// over to
// pgo failover mycluster --query
func QueryFailover(name, ns string) msgs.QueryFailoverResponse {
-
response := msgs.QueryFailoverResponse{
Results: make([]msgs.FailoverTargetSpec, 0),
Status: msgs.Status{Code: msgs.Ok, Msg: ""},
@@ -139,7 +138,6 @@ func QueryFailover(name, ns string) msgs.QueryFailoverResponse {
}
replicationStatusResponse, err := util.ReplicationStatus(replicationStatusRequest, false, false)
-
// if an error is return, log the message, and return the response
if err != nil {
log.Error(err.Error())
@@ -175,7 +173,6 @@ func QueryFailover(name, ns string) msgs.QueryFailoverResponse {
func validateClusterName(clusterName, ns string) (*crv1.Pgcluster, error) {
ctx := context.TODO()
cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(ctx, clusterName, metav1.GetOptions{})
-
if err != nil {
return cluster, errors.New("no cluster found named " + clusterName)
}
@@ -219,5 +216,4 @@ func isValidFailoverTarget(deployName, clusterName, ns string) (*v1.Deployment,
}
return &deployments.Items[0], nil
-
}
diff --git a/internal/apiserver/failoverservice/failoverservice.go b/internal/apiserver/failoverservice/failoverservice.go
index 164d7b1545..ad42873f77 100644
--- a/internal/apiserver/failoverservice/failoverservice.go
+++ b/internal/apiserver/failoverservice/failoverservice.go
@@ -17,11 +17,12 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
"github.com/crunchydata/postgres-operator/internal/apiserver"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
"github.com/gorilla/mux"
log "github.com/sirupsen/logrus"
- "net/http"
)
// CreateFailoverHandler ...
@@ -65,20 +66,20 @@ func CreateFailoverHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = CreateFailover(&request, ns, username)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
// QueryFailoverHandler ...
@@ -137,17 +138,17 @@ func QueryFailoverHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = QueryFailover(name, ns)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/labelservice/labelimpl.go b/internal/apiserver/labelservice/labelimpl.go
index a7129249dd..ed12991f58 100644
--- a/internal/apiserver/labelservice/labelimpl.go
+++ b/internal/apiserver/labelservice/labelimpl.go
@@ -50,7 +50,7 @@ func Label(request *msgs.LabelRequest, ns, pgouser string) msgs.LabelResponse {
return resp
}
- labelsMap, err = validateLabel(request.LabelCmdLabel, ns)
+ labelsMap, err = validateLabel(request.LabelCmdLabel)
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = "labels not formatted correctly"
@@ -90,7 +90,7 @@ func Label(request *msgs.LabelRequest, ns, pgouser string) msgs.LabelResponse {
}
clusterList = *cl
} else {
- //each arg represents a cluster name
+ // each arg represents a cluster name
items := make([]crv1.Pgcluster, 0)
for _, cluster := range request.Args {
result, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(ctx, cluster, metav1.GetOptions{})
@@ -112,7 +112,6 @@ func Label(request *msgs.LabelRequest, ns, pgouser string) msgs.LabelResponse {
addLabels(clusterList.Items, request.DryRun, request.LabelCmdLabel, labelsMap, ns, pgouser)
return resp
-
}
func addLabels(items []crv1.Pgcluster, DryRun bool, LabelCmdLabel string, newLabels map[string]string, ns, pgouser string) {
@@ -134,7 +133,7 @@ func addLabels(items []crv1.Pgcluster, DryRun bool, LabelCmdLabel string, newLab
log.Error(err.Error())
}
- //publish event for create label
+ // publish event for create label
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
@@ -158,7 +157,7 @@ func addLabels(items []crv1.Pgcluster, DryRun bool, LabelCmdLabel string, newLab
}
for i := 0; i < len(items); i++ {
- //get deployments for this CRD
+ // get deployments for this CRD
selector := config.LABEL_PG_CLUSTER + "=" + items[i].Spec.Name
deployments, err := apiserver.Clientset.
AppsV1().Deployments(ns).
@@ -168,7 +167,7 @@ func addLabels(items []crv1.Pgcluster, DryRun bool, LabelCmdLabel string, newLab
}
for _, d := range deployments.Items {
- //update Deployment with the label
+ // update Deployment with the label
if !DryRun {
log.Debugf("patching deployment %s: %s", d.Name, patchBytes)
_, err := apiserver.Clientset.AppsV1().Deployments(ns).
@@ -182,7 +181,7 @@ func addLabels(items []crv1.Pgcluster, DryRun bool, LabelCmdLabel string, newLab
}
}
-func validateLabel(LabelCmdLabel, ns string) (map[string]string, error) {
+func validateLabel(LabelCmdLabel string) (map[string]string, error) {
var err error
labelMap := make(map[string]string)
userValues := strings.Split(LabelCmdLabel, ",")
@@ -225,7 +224,7 @@ func DeleteLabel(request *msgs.DeleteLabelRequest, ns string) msgs.LabelResponse
return resp
}
- labelsMap, err = validateLabel(request.LabelCmdLabel, ns)
+ labelsMap, err = validateLabel(request.LabelCmdLabel)
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = "labels not formatted correctly"
@@ -263,7 +262,7 @@ func DeleteLabel(request *msgs.DeleteLabelRequest, ns string) msgs.LabelResponse
}
clusterList = *cl
} else {
- //each arg represents a cluster name
+ // each arg represents a cluster name
items := make([]crv1.Pgcluster, 0)
for _, cluster := range request.Args {
result, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(ctx, cluster, metav1.GetOptions{})
@@ -282,7 +281,7 @@ func DeleteLabel(request *msgs.DeleteLabelRequest, ns string) msgs.LabelResponse
resp.Results = append(resp.Results, "deleting label from "+c.Spec.Name)
}
- err = deleteLabels(clusterList.Items, request.LabelCmdLabel, labelsMap, ns)
+ err = deleteLabels(clusterList.Items, labelsMap, ns)
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
@@ -290,10 +289,9 @@ func DeleteLabel(request *msgs.DeleteLabelRequest, ns string) msgs.LabelResponse
}
return resp
-
}
-func deleteLabels(items []crv1.Pgcluster, LabelCmdLabel string, labelsMap map[string]string, ns string) error {
+func deleteLabels(items []crv1.Pgcluster, labelsMap map[string]string, ns string) error {
ctx := context.TODO()
patch := kubeapi.NewMergePatch()
for key := range labelsMap {
@@ -316,7 +314,7 @@ func deleteLabels(items []crv1.Pgcluster, LabelCmdLabel string, labelsMap map[st
}
for i := 0; i < len(items); i++ {
- //get deployments for this CRD
+ // get deployments for this CRD
selector := config.LABEL_PG_CLUSTER + "=" + items[i].Spec.Name
deployments, err := apiserver.Clientset.
AppsV1().Deployments(ns).
diff --git a/internal/apiserver/labelservice/labelservice.go b/internal/apiserver/labelservice/labelservice.go
index f13054fd17..e166d2d03f 100644
--- a/internal/apiserver/labelservice/labelservice.go
+++ b/internal/apiserver/labelservice/labelservice.go
@@ -17,10 +17,11 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
"github.com/crunchydata/postgres-operator/internal/apiserver"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
// LabelHandler ...
@@ -64,20 +65,20 @@ func LabelHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
resp.Status.Code = msgs.Error
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
if err != nil {
resp.Status = msgs.Status{Msg: err.Error(), Code: msgs.Error}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = Label(&request, ns, username)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
// DeleteLabelHandler ...
@@ -120,18 +121,18 @@ func DeleteLabelHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Msg: apiserver.VERSION_MISMATCH_ERROR, Code: msgs.Error}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
if err != nil {
resp.Status = msgs.Status{Msg: err.Error(), Code: msgs.Error}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = DeleteLabel(&request, ns)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/namespaceservice/namespaceimpl.go b/internal/apiserver/namespaceservice/namespaceimpl.go
index 36af3b14e2..24b6234b48 100644
--- a/internal/apiserver/namespaceservice/namespaceimpl.go
+++ b/internal/apiserver/namespaceservice/namespaceimpl.go
@@ -36,7 +36,7 @@ func ShowNamespace(clientset kubernetes.Interface, username string, request *msg
resp.Username = username
resp.Results = make([]msgs.NamespaceResult, 0)
- //namespaceList := util.GetNamespaces()
+ // namespaceList := util.GetNamespaces()
nsList := make([]string, 0)
@@ -91,14 +91,13 @@ func ShowNamespace(clientset kubernetes.Interface, username string, request *msg
// CreateNamespace ...
func CreateNamespace(clientset kubernetes.Interface, createdBy string, request *msgs.CreateNamespaceRequest) msgs.CreateNamespaceResponse {
-
log.Debugf("CreateNamespace %v", request)
resp := msgs.CreateNamespaceResponse{}
resp.Status.Code = msgs.Ok
resp.Status.Msg = ""
resp.Results = make([]string, 0)
- //iterate thru all the args (namespace names)
+ // iterate thru all the args (namespace names)
for _, namespace := range request.Args {
if err := ns.CreateNamespace(clientset, apiserver.InstallationName,
@@ -112,7 +111,6 @@ func CreateNamespace(clientset kubernetes.Interface, createdBy string, request *
}
return resp
-
}
// DeleteNamespace ...
@@ -125,7 +123,6 @@ func DeleteNamespace(clientset kubernetes.Interface, deletedBy string, request *
for _, namespace := range request.Args {
err := ns.DeleteNamespace(clientset, apiserver.InstallationName, apiserver.PgoNamespace, deletedBy, namespace)
-
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
@@ -136,19 +133,17 @@ func DeleteNamespace(clientset kubernetes.Interface, deletedBy string, request *
}
return resp
-
}
// UpdateNamespace ...
func UpdateNamespace(clientset kubernetes.Interface, updatedBy string, request *msgs.UpdateNamespaceRequest) msgs.UpdateNamespaceResponse {
-
log.Debugf("UpdateNamespace %v", request)
resp := msgs.UpdateNamespaceResponse{}
resp.Status.Code = msgs.Ok
resp.Status.Msg = ""
resp.Results = make([]string, 0)
- //iterate thru all the args (namespace names)
+ // iterate thru all the args (namespace names)
for _, namespace := range request.Args {
if err := ns.UpdateNamespace(clientset, apiserver.InstallationName,
@@ -162,5 +157,4 @@ func UpdateNamespace(clientset kubernetes.Interface, updatedBy string, request *
}
return resp
-
}
diff --git a/internal/apiserver/namespaceservice/namespaceservice.go b/internal/apiserver/namespaceservice/namespaceservice.go
index 1e27294c96..fa5918fe45 100644
--- a/internal/apiserver/namespaceservice/namespaceservice.go
+++ b/internal/apiserver/namespaceservice/namespaceservice.go
@@ -77,12 +77,12 @@ func ShowNamespaceHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = ShowNamespace(apiserver.Clientset, username, &request)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
func CreateNamespaceHandler(w http.ResponseWriter, r *http.Request) {
@@ -132,12 +132,12 @@ func CreateNamespaceHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = CreateNamespace(apiserver.Clientset, username, &request)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
func DeleteNamespaceHandler(w http.ResponseWriter, r *http.Request) {
@@ -187,14 +187,14 @@ func DeleteNamespaceHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = DeleteNamespace(apiserver.Clientset, username, &request)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
+
func UpdateNamespaceHandler(w http.ResponseWriter, r *http.Request) {
// swagger:operation POST /namespaceupdate namespaceservice namespaceupdate
/*```
@@ -242,10 +242,10 @@ func UpdateNamespaceHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = UpdateNamespace(apiserver.Clientset, username, &request)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/perms.go b/internal/apiserver/perms.go
index 48c72099a6..71316b2c82 100644
--- a/internal/apiserver/perms.go
+++ b/internal/apiserver/perms.go
@@ -97,8 +97,10 @@ const (
UPDATE_USER_PERM = "UpdateUser"
)
-var RoleMap map[string]map[string]string
-var PermMap map[string]string
+var (
+ RoleMap map[string]map[string]string
+ PermMap map[string]string
+)
func initializePerms() {
RoleMap = make(map[string]map[string]string)
@@ -180,5 +182,4 @@ func initializePerms() {
}
log.Infof("loading PermMap with %d Permissions\n", len(PermMap))
-
}
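The `perms.go` hunk above merges two adjacent top-level `var` statements into one parenthesized block, the grouping style `gofumpt` enforces. A compilable sketch mirroring that change (the map contents here are placeholders, not the real permission set):

```go
package main

import "fmt"

// gofumpt-style grouping: adjacent top-level declarations become one
// parenthesized var block, as the patch does for RoleMap and PermMap.
var (
	RoleMap map[string]map[string]string
	PermMap map[string]string
)

// initializePerms is a minimal stand-in for the function the patch
// touches; the real one loads the full permission table.
func initializePerms() {
	RoleMap = make(map[string]map[string]string)
	PermMap = map[string]string{"ShowCluster": "ShowCluster"}
}

func main() {
	initializePerms()
	fmt.Printf("loading PermMap with %d Permissions\n", len(PermMap))
}
```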
diff --git a/internal/apiserver/pgadminservice/pgadminimpl.go b/internal/apiserver/pgadminservice/pgadminimpl.go
index 4f23a8d028..89bb7bcde3 100644
--- a/internal/apiserver/pgadminservice/pgadminimpl.go
+++ b/internal/apiserver/pgadminservice/pgadminimpl.go
@@ -183,7 +183,6 @@ func ShowPgAdmin(request *msgs.ShowPgAdminRequest, namespace string) msgs.ShowPg
// try to get the list of clusters. if there is an error, put it into the
// status and return
clusterList, err := getClusterList(request.Namespace, request.ClusterNames, request.Selector)
-
if err != nil {
response.SetError(err.Error())
return response
@@ -191,7 +190,8 @@ func ShowPgAdmin(request *msgs.ShowPgAdminRequest, namespace string) msgs.ShowPg
// iterate through the list of clusters to get the relevant pgAdmin
// information about them
- for _, cluster := range clusterList.Items {
+ for i := range clusterList.Items {
+ cluster := &clusterList.Items[i]
result := msgs.ShowPgAdminDetail{
ClusterName: cluster.Spec.Name,
HasPgAdmin: true,
@@ -228,7 +228,7 @@ func ShowPgAdmin(request *msgs.ShowPgAdminRequest, namespace string) msgs.ShowPg
// In the future, construct results to contain individual error stati
// for now log and return empty content if encountered
- qr, err := pgadmin.GetPgAdminQueryRunner(apiserver.Clientset, apiserver.RESTConfig, &cluster)
+ qr, err := pgadmin.GetPgAdminQueryRunner(apiserver.Clientset, apiserver.RESTConfig, cluster)
if err != nil {
log.Error(err)
continue
@@ -267,8 +267,7 @@ func getClusterList(namespace string, clusterNames []string, selector string) (c
cl, err := apiserver.Clientset.
CrunchydataV1().Pgclusters(namespace).
List(ctx, metav1.ListOptions{LabelSelector: selector})
-
- // if there is an error, return here with an empty cluster list
+ // if there is an error, return here with an empty cluster list
if err != nil {
return crv1.PgclusterList{}, err
}
@@ -278,7 +277,6 @@ func getClusterList(namespace string, clusterNames []string, selector string) (c
// now try to get clusters based specific cluster names
for _, clusterName := range clusterNames {
cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).Get(ctx, clusterName, metav1.GetOptions{})
-
// if there is an error, capture it here and return here with an empty list
if err != nil {
return crv1.PgclusterList{}, err
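Several hunks rewrite `for _, cluster := range clusterList.Items` as `for i := range clusterList.Items` with `cluster := &clusterList.Items[i]`. This avoids copying each (large) struct per iteration and, more importantly, avoids taking the address of the single reused loop variable. A self-contained sketch of the before/after behavior, using a stand-in struct:

```go
package main

import "fmt"

// Pgcluster stands in for the real CRD type, which carries large
// nested spec structs that the range-copy form would duplicate.
type Pgcluster struct {
	Name string
}

// clusterPtrs collects pointers into the slice using the patch's
// index-based pattern, so each pointer refers to a distinct element
// rather than to one reused loop variable.
func clusterPtrs(items []Pgcluster) []*Pgcluster {
	ptrs := make([]*Pgcluster, 0, len(items))
	for i := range items {
		cluster := &items[i]
		ptrs = append(ptrs, cluster)
	}
	return ptrs
}

func main() {
	items := []Pgcluster{{Name: "hippo"}, {Name: "rhino"}}
	ptrs := clusterPtrs(items)
	fmt.Println(ptrs[0].Name, ptrs[1].Name)
}
```

With the old `for _, cluster := range` form, `&cluster` in every iteration would point at the same variable, so all collected pointers would end up referring to the last element.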
diff --git a/internal/apiserver/pgadminservice/pgadminservice.go b/internal/apiserver/pgadminservice/pgadminservice.go
index 90378868ca..68c1b1b3db 100644
--- a/internal/apiserver/pgadminservice/pgadminservice.go
+++ b/internal/apiserver/pgadminservice/pgadminservice.go
@@ -63,20 +63,19 @@ func CreatePgAdminHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.SetError(apiserver.VERSION_MISMATCH_ERROR)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
if err != nil {
resp.SetError(err.Error())
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = CreatePgAdmin(&request, ns, username)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
// DeletePgAdminHandler ...
@@ -117,20 +116,19 @@ func DeletePgAdminHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.SetError(apiserver.VERSION_MISMATCH_ERROR)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
if err != nil {
resp.SetError(err.Error())
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = DeletePgAdmin(&request, ns)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
// ShowPgAdminHandler is the HTTP handler to get information about a pgBouncer
@@ -173,21 +171,19 @@ func ShowPgAdminHandler(w http.ResponseWriter, r *http.Request) {
// ensure the versions align...
if request.ClientVersion != msgs.PGO_VERSION {
resp.SetError(apiserver.VERSION_MISMATCH_ERROR)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
// ensure the namespace being used exists
namespace, err := apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
-
if err != nil {
resp.SetError(err.Error())
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
// get the information about a pgAdmin deployment(s)
resp = ShowPgAdmin(&request, namespace)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/pgbouncerservice/pgbouncerimpl.go b/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
index 85373855d6..bd4b0e8fa4 100644
--- a/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
+++ b/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
@@ -72,14 +72,14 @@ func CreatePgbouncer(request *msgs.CreatePgbouncerRequest, ns, pgouser string) m
// try to get the list of clusters. if there is an error, put it into the
// status and return
clusterList, err := getClusterList(request.Namespace, request.Args, request.Selector)
-
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
return resp
}
- for _, cluster := range clusterList.Items {
+ for i := range clusterList.Items {
+ cluster := clusterList.Items[i]
// check if the current cluster is not upgraded to the deployed
// Operator version. If not, do not allow the command to complete
if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE {
@@ -170,7 +170,6 @@ func DeletePgbouncer(request *msgs.DeletePgbouncerRequest, ns string) msgs.Delet
// try to get the list of clusters. if there is an error, put it into the
// status and return
clusterList, err := getClusterList(request.Namespace, request.Args, request.Selector)
-
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
@@ -191,7 +190,8 @@ func DeletePgbouncer(request *msgs.DeletePgbouncerRequest, ns string) msgs.Delet
return resp
}
- for _, cluster := range clusterList.Items {
+ for i := range clusterList.Items {
+ cluster := clusterList.Items[i]
log.Debugf("deleting pgbouncer from cluster [%s]", cluster.Name)
// check to see if the uninstall flag was set. If it was, apply the update
@@ -226,7 +226,6 @@ func DeletePgbouncer(request *msgs.DeletePgbouncerRequest, ns string) msgs.Delet
}
return resp
-
}
// ShowPgBouncer gets information about a PostgreSQL cluster's pgBouncer
@@ -249,7 +248,6 @@ func ShowPgBouncer(request *msgs.ShowPgBouncerRequest, namespace string) msgs.Sh
// try to get the list of clusters. if there is an error, put it into the
// status and return
clusterList, err := getClusterList(request.Namespace, request.ClusterNames, request.Selector)
-
if err != nil {
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
@@ -331,7 +329,6 @@ func UpdatePgBouncer(request *msgs.UpdatePgBouncerRequest, namespace, pgouser st
// try to get the list of clusters. if there is an error, put it into the
// status and return
clusterList, err := getClusterList(request.Namespace, request.ClusterNames, request.Selector)
-
if err != nil {
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
@@ -352,7 +349,8 @@ func UpdatePgBouncer(request *msgs.UpdatePgBouncerRequest, namespace, pgouser st
// iterate through the list of clusters to get the relevant pgBouncer
// information about them
- for _, cluster := range clusterList.Items {
+ for i := range clusterList.Items {
+ cluster := clusterList.Items[i]
result := msgs.UpdatePgBouncerDetail{
ClusterName: cluster.Spec.Name,
HasPgBouncer: true,
@@ -449,7 +447,6 @@ func getClusterList(namespace string, clusterNames []string, selector string) (c
// of arguments...or both. First, start with the selector
if selector != "" {
cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
-
// if there is an error, return here with an empty cluster list
if err != nil {
return crv1.PgclusterList{}, err
@@ -460,7 +457,6 @@ func getClusterList(namespace string, clusterNames []string, selector string) (c
// now try to get clusters based specific cluster names
for _, clusterName := range clusterNames {
cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).Get(ctx, clusterName, metav1.GetOptions{})
-
// if there is an error, capture it here and return here with an empty list
if err != nil {
return crv1.PgclusterList{}, err
@@ -490,7 +486,6 @@ func setPgBouncerPasswordDetail(cluster crv1.Pgcluster, result *msgs.ShowPgBounc
// attempt to get the secret, but only get the password
password, err := util.GetPasswordFromSecret(apiserver.Clientset,
cluster.Spec.Namespace, pgBouncerSecretName)
-
if err != nil {
log.Warn(err)
}
@@ -510,8 +505,7 @@ func setPgBouncerServiceDetail(cluster crv1.Pgcluster, result *msgs.ShowPgBounce
services, err := apiserver.Clientset.
CoreV1().Services(cluster.Spec.Namespace).
List(ctx, metav1.ListOptions{LabelSelector: selector})
-
- // if there is an error, return without making any adjustments
+ // if there is an error, return without making any adjustments
if err != nil {
log.Warn(err)
return
diff --git a/internal/apiserver/pgbouncerservice/pgbouncerservice.go b/internal/apiserver/pgbouncerservice/pgbouncerservice.go
index 969aabd205..773514d48d 100644
--- a/internal/apiserver/pgbouncerservice/pgbouncerservice.go
+++ b/internal/apiserver/pgbouncerservice/pgbouncerservice.go
@@ -17,10 +17,11 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
"github.com/crunchydata/postgres-operator/internal/apiserver"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
// CreatePgbouncerHandler ...
@@ -63,7 +64,7 @@ func CreatePgbouncerHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -71,13 +72,12 @@ func CreatePgbouncerHandler(w http.ResponseWriter, r *http.Request) {
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = CreatePgbouncer(&request, ns, username)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
/* The delete pgboucner handler is setup to be used by two different routes. To keep
@@ -141,7 +141,7 @@ func DeletePgbouncerHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -149,13 +149,12 @@ func DeletePgbouncerHandler(w http.ResponseWriter, r *http.Request) {
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = DeletePgbouncer(&request, ns)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
// ShowPgBouncerHandler is the HTTP handler to get information about a pgBouncer
@@ -182,7 +181,6 @@ func ShowPgBouncerHandler(w http.ResponseWriter, r *http.Request) {
// first, determine if the user is authorized to access this resource
username, err := apiserver.Authn(apiserver.SHOW_PGBOUNCER_PERM, w, r)
-
if err != nil {
return
}
@@ -202,13 +200,12 @@ func ShowPgBouncerHandler(w http.ResponseWriter, r *http.Request) {
Msg: apiserver.VERSION_MISMATCH_ERROR,
},
}
- json.NewEncoder(w).Encode(response)
+ _ = json.NewEncoder(w).Encode(response)
return
}
// ensure the namespace being used exists
namespace, err := apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
-
if err != nil {
response := msgs.ShowPgBouncerResponse{
Status: msgs.Status{
@@ -216,14 +213,13 @@ func ShowPgBouncerHandler(w http.ResponseWriter, r *http.Request) {
Msg: err.Error(),
},
}
- json.NewEncoder(w).Encode(response)
+ _ = json.NewEncoder(w).Encode(response)
return
}
// get the information about a pgbouncer deployment(s)
response := ShowPgBouncer(&request, namespace)
- json.NewEncoder(w).Encode(response)
-
+ _ = json.NewEncoder(w).Encode(response)
}
// UpdatePgBouncerHandler is the HTTP handler to perform update tasks on a
@@ -250,7 +246,6 @@ func UpdatePgBouncerHandler(w http.ResponseWriter, r *http.Request) {
// first, determine if the user is authorized to access this resource
username, err := apiserver.Authn(apiserver.UPDATE_PGBOUNCER_PERM, w, r)
-
if err != nil {
return
}
@@ -270,13 +265,12 @@ func UpdatePgBouncerHandler(w http.ResponseWriter, r *http.Request) {
Msg: apiserver.VERSION_MISMATCH_ERROR,
},
}
- json.NewEncoder(w).Encode(response)
+ _ = json.NewEncoder(w).Encode(response)
return
}
// ensure the namespace being used exists
namespace, err := apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
-
if err != nil {
response := msgs.UpdatePgBouncerResponse{
Status: msgs.Status{
@@ -284,11 +278,11 @@ func UpdatePgBouncerHandler(w http.ResponseWriter, r *http.Request) {
Msg: err.Error(),
},
}
- json.NewEncoder(w).Encode(response)
+ _ = json.NewEncoder(w).Encode(response)
return
}
// get the information about a pgbouncer deployment(s)
response := UpdatePgBouncer(&request, namespace, username)
- json.NewEncoder(w).Encode(response)
+ _ = json.NewEncoder(w).Encode(response)
}
diff --git a/internal/apiserver/pgdumpservice/pgdumpimpl.go b/internal/apiserver/pgdumpservice/pgdumpimpl.go
index 4a5f1a5d42..5a93045493 100644
--- a/internal/apiserver/pgdumpservice/pgdumpimpl.go
+++ b/internal/apiserver/pgdumpservice/pgdumpimpl.go
@@ -17,7 +17,6 @@ limitations under the License.
import (
"context"
- "errors"
"fmt"
"strconv"
"strings"
@@ -28,13 +27,14 @@ import (
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- v1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
-const pgDumpTaskExtension = "-pgdump"
-const pgDumpJobExtension = "-pgdump-job"
+const (
+ pgDumpTaskExtension = "-pgdump"
+ pgDumpJobExtension = "-pgdump-job"
+)
// CreateBackup ...
// pgo backup mycluster
@@ -50,7 +50,7 @@ func CreatepgDump(request *msgs.CreatepgDumpBackupRequest, ns string) msgs.Creat
log.Debug("CreatePgDump storage config... " + request.StorageConfig)
if request.StorageConfig != "" {
- if apiserver.IsValidStorageName(request.StorageConfig) == false {
+ if !apiserver.IsValidStorageName(request.StorageConfig) {
log.Debug("CreateBackup sc error is found " + request.StorageConfig)
resp.Status.Code = msgs.Error
resp.Status.Msg = request.StorageConfig + " Storage config was not found "
@@ -68,7 +68,7 @@ func CreatepgDump(request *msgs.CreatepgDumpBackupRequest, ns string) msgs.Creat
}
if request.Selector != "" {
- //use the selector instead of an argument list to filter on
+ // use the selector instead of an argument list to filter on
clusterList, err := apiserver.Clientset.
CrunchydataV1().Pgclusters(ns).
@@ -117,7 +117,7 @@ func CreatepgDump(request *msgs.CreatepgDumpBackupRequest, ns string) msgs.Creat
}
deletePropagation := metav1.DeletePropagationForeground
- apiserver.Clientset.
+ _ = apiserver.Clientset.
BatchV1().Jobs(ns).
Delete(ctx, clusterName+pgDumpJobExtension, metav1.DeleteOptions{PropagationPolicy: &deletePropagation})
@@ -132,9 +132,8 @@ func CreatepgDump(request *msgs.CreatepgDumpBackupRequest, ns string) msgs.Creat
} else {
log.Debugf("pgtask %s was found so we will recreate it", taskName)
- //remove the existing pgtask
+ // remove the existing pgtask
err := apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(ctx, taskName, metav1.DeleteOptions{})
-
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
@@ -142,21 +141,9 @@ func CreatepgDump(request *msgs.CreatepgDumpBackupRequest, ns string) msgs.Creat
}
}
- //get pod name from cluster
- // var podname, deployName string
- var podname string
- podname, err = getPrimaryPodName(cluster, ns)
-
- if err != nil {
- log.Error(err)
- resp.Status.Code = msgs.Error
- resp.Status.Msg = err.Error()
- return resp
- }
-
// where all the magic happens about the task.
// TODO: Needs error handling for invalid parameters in the request
- theTask := buildPgTaskForDump(clusterName, taskName, crv1.PgtaskpgDump, podname, "database", request)
+ theTask := buildPgTaskForDump(clusterName, taskName, crv1.PgtaskpgDump, "database", request)
_, err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Create(ctx, theTask, metav1.CreateOptions{})
if err != nil {
@@ -194,7 +181,7 @@ func ShowpgDump(clusterName string, selector string, ns string) msgs.ShowBackupR
}
}
- //get a list of all clusters
+ // get a list of all clusters
clusterList, err := apiserver.Clientset.
CrunchydataV1().Pgclusters(ns).
List(ctx, metav1.ListOptions{LabelSelector: selector})
@@ -217,7 +204,7 @@ func ShowpgDump(clusterName string, selector string, ns string) msgs.ShowBackupR
pgTaskName := "backup-" + c.Name + pgDumpTaskExtension
- backupItem, error := getPgBackupForTask(c.Name, pgTaskName, ns)
+ backupItem, error := getPgBackupForTask(pgTaskName, ns)
if backupItem != nil {
log.Debugf("pgTask %s was found", pgTaskName)
@@ -238,13 +225,11 @@ func ShowpgDump(clusterName string, selector string, ns string) msgs.ShowBackupR
}
return response
-
}
// builds out a pgTask structure that can be handed to kube
-func buildPgTaskForDump(clusterName, taskName, action, podName, containerName string,
+func buildPgTaskForDump(clusterName, taskName, action, containerName string,
request *msgs.CreatepgDumpBackupRequest) *crv1.Pgtask {
-
var newInstance *crv1.Pgtask
var storageSpec crv1.PgStorageSpec
var pvcName string
@@ -298,50 +283,6 @@ func buildPgTaskForDump(clusterName, taskName, action, podName, containerName st
return newInstance
}
-func getPrimaryPodName(cluster *crv1.Pgcluster, ns string) (string, error) {
- ctx := context.TODO()
- var podname string
-
- selector := config.LABEL_SERVICE_NAME + "=" + cluster.Spec.Name
-
- pods, err := apiserver.Clientset.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
- if err != nil {
- return podname, err
- }
-
- for _, p := range pods.Items {
- if isPrimary(&p, cluster.Spec.Name) && isReady(&p) {
- return p.Name, err
- }
- }
-
- return podname, errors.New("primary pod is not in Ready state")
-}
-
-func isPrimary(pod *v1.Pod, clusterName string) bool {
- if pod.ObjectMeta.Labels[config.LABEL_SERVICE_NAME] == clusterName {
- return true
- }
- return false
-
-}
-
-func isReady(pod *v1.Pod) bool {
- readyCount := 0
- containerCount := 0
- for _, stat := range pod.Status.ContainerStatuses {
- containerCount++
- if stat.Ready {
- readyCount++
- }
- }
- if readyCount != containerCount {
- return false
- }
- return true
-
-}
-
// dumpAllFlag, dumpOpts = parseOptionFlags(request.BackupOpt)
func parseOptionFlags(allFlags string) (bool, string) {
dumpFlag := false
@@ -353,14 +294,12 @@ func parseOptionFlags(allFlags string) (bool, string) {
options := strings.Split(allFlags, " ")
for _, token := range options {
-
// handle dump flag
if strings.Contains(token, "--dump-all") {
dumpFlag = true
} else {
parsedOptions = append(parsedOptions, token)
}
-
}
optionString := strings.Join(parsedOptions, " ")
@@ -368,11 +307,10 @@ func parseOptionFlags(allFlags string) (bool, string) {
log.Debugf("pgdump optionFlags: %s, dumpAll: %t", optionString, dumpFlag)
return dumpFlag, optionString
-
}
// if backup && err are nil, it simply wasn't found. Otherwise found or an error
-func getPgBackupForTask(clusterName string, taskName string, ns string) (*msgs.Pgbackup, error) {
+func getPgBackupForTask(taskName, ns string) (*msgs.Pgbackup, error) {
ctx := context.TODO()
task, err := apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Get(ctx, taskName, metav1.GetOptions{})
@@ -388,7 +326,6 @@ func getPgBackupForTask(clusterName string, taskName string, ns string) (*msgs.P
// converts pgTask to a pgBackup structure
func buildPgBackupFrompgTask(dumpTask *crv1.Pgtask) *msgs.Pgbackup {
-
backup := msgs.Pgbackup{}
spec := dumpTask.Spec
@@ -461,7 +398,7 @@ func Restore(request *msgs.PgRestoreRequest, ns string) msgs.PgRestoreResponse {
return resp
}
- //delete any existing pgtask with the same name
+ // delete any existing pgtask with the same name
err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(ctx, pgtask.Name, metav1.DeleteOptions{})
if err != nil && !kerrors.IsNotFound(err) {
resp.Status.Code = msgs.Error
@@ -469,7 +406,7 @@ func Restore(request *msgs.PgRestoreRequest, ns string) msgs.PgRestoreResponse {
return resp
}
- //create a pgtask for the restore workflow
+ // create a pgtask for the restore workflow
_, err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Create(ctx, pgtask, metav1.CreateOptions{})
if err != nil {
resp.Status.Code = msgs.Error
@@ -484,7 +421,6 @@ func Restore(request *msgs.PgRestoreRequest, ns string) msgs.PgRestoreResponse {
// builds out a pgTask structure that can be handed to kube
func buildPgTaskForRestore(taskName string, action string, request *msgs.PgRestoreRequest) (*crv1.Pgtask, error) {
-
var newInstance *crv1.Pgtask
var storageSpec crv1.PgStorageSpec
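The `pgdumpimpl.go` changes include rewriting `if apiserver.IsValidStorageName(request.StorageConfig) == false` as `if !apiserver.IsValidStorageName(...)`, the simplification staticcheck reports as S1002. A sketch with a hypothetical stand-in for the real validity check:

```go
package main

import "fmt"

// isValidStorageName is a hypothetical stand-in for
// apiserver.IsValidStorageName; the real function checks the name
// against the Operator's configured storage configs.
func isValidStorageName(name string) bool {
	return name != ""
}

func main() {
	storageConfig := "" // no storage config supplied
	// Preferred form: negate the call directly instead of
	// comparing its result against the literal false.
	if !isValidStorageName(storageConfig) {
		fmt.Println(storageConfig + " Storage config was not found")
	}
}
```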
diff --git a/internal/apiserver/pgdumpservice/pgdumpservice.go b/internal/apiserver/pgdumpservice/pgdumpservice.go
index 755a9bbd98..0b57ed2d75 100644
--- a/internal/apiserver/pgdumpservice/pgdumpservice.go
+++ b/internal/apiserver/pgdumpservice/pgdumpservice.go
@@ -17,12 +17,13 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
"github.com/crunchydata/postgres-operator/internal/apiserver"
"github.com/crunchydata/postgres-operator/internal/config"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
"github.com/gorilla/mux"
log "github.com/sirupsen/logrus"
- "net/http"
)
// BackupHandler ...
@@ -66,12 +67,12 @@ func BackupHandler(w http.ResponseWriter, r *http.Request) {
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = CreatepgDump(&request, ns)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
// ShowpgDumpHandler ...
@@ -135,7 +136,7 @@ func ShowDumpHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -143,13 +144,12 @@ func ShowDumpHandler(w http.ResponseWriter, r *http.Request) {
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = ShowpgDump(clustername, selector, ns)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
// RestoreHandler ...
@@ -195,7 +195,7 @@ func RestoreHandler(w http.ResponseWriter, r *http.Request) {
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -205,5 +205,5 @@ func RestoreHandler(w http.ResponseWriter, r *http.Request) {
resp.Status.Msg = err.Error()
}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/pgoroleservice/pgoroleimpl.go b/internal/apiserver/pgoroleservice/pgoroleimpl.go
index 47f3c11502..4192a4437b 100644
--- a/internal/apiserver/pgoroleservice/pgoroleimpl.go
+++ b/internal/apiserver/pgoroleservice/pgoroleimpl.go
@@ -34,7 +34,6 @@ import (
// CreatePgorole ...
func CreatePgorole(clientset kubernetes.Interface, createdBy string, request *msgs.CreatePgoroleRequest) msgs.CreatePgoroleResponse {
-
log.Debugf("CreatePgorole %v", request)
resp := msgs.CreatePgoroleResponse{}
resp.Status.Code = msgs.Ok
@@ -54,7 +53,7 @@ func CreatePgorole(clientset kubernetes.Interface, createdBy string, request *ms
return resp
}
- //publish event
+ // publish event
topics := make([]string, 1)
topics[0] = events.EventTopicPGOUser
@@ -77,7 +76,6 @@ func CreatePgorole(clientset kubernetes.Interface, createdBy string, request *ms
}
return resp
-
}
// ShowPgorole ...
@@ -122,7 +120,6 @@ func ShowPgorole(clientset kubernetes.Interface, request *msgs.ShowPgoroleReques
}
return resp
-
}
// DeletePgorole ...
@@ -164,7 +161,6 @@ func DeletePgorole(clientset kubernetes.Interface, deletedBy string, request *ms
}
return resp
-
}
func UpdatePgorole(clientset kubernetes.Interface, updatedBy string, request *msgs.UpdatePgoroleRequest) msgs.UpdatePgoroleResponse {
@@ -200,7 +196,7 @@ func UpdatePgorole(clientset kubernetes.Interface, updatedBy string, request *ms
return resp
}
- //publish event
+ // publish event
topics := make([]string, 1)
topics[0] = events.EventTopicPGOUser
@@ -223,13 +219,12 @@ func UpdatePgorole(clientset kubernetes.Interface, updatedBy string, request *ms
}
return resp
-
}
func createSecret(clientset kubernetes.Interface, createdBy, pgorolename, permissions string) error {
ctx := context.TODO()
- var enRolename = pgorolename
+ enRolename := pgorolename
secretName := "pgorole-" + pgorolename
@@ -269,7 +264,7 @@ func validPermissions(perms string) error {
func deleteRoleFromUsers(clientset kubernetes.Interface, roleName string) error {
ctx := context.TODO()
- //get pgouser Secrets
+ // get pgouser Secrets
selector := config.LABEL_PGO_PGOUSER + "=true"
pgouserSecrets, err := clientset.
@@ -280,7 +275,8 @@ func deleteRoleFromUsers(clientset kubernetes.Interface, roleName string) error
return err
}
- for _, s := range pgouserSecrets.Items {
+ for i := range pgouserSecrets.Items {
+ s := &pgouserSecrets.Items[i]
rolesString := string(s.Data[pgouserservice.MAP_KEY_ROLES])
roles := strings.Split(rolesString, ",")
resultRoles := make([]string, 0)
@@ -294,7 +290,7 @@ func deleteRoleFromUsers(clientset kubernetes.Interface, roleName string) error
}
}
- //update the pgouser Secret removing any roles as necessary
+ // update the pgouser Secret removing any roles as necessary
if rolesUpdated {
var resultingRoleString string
@@ -307,8 +303,7 @@ func deleteRoleFromUsers(clientset kubernetes.Interface, roleName string) error
}
s.Data[pgouserservice.MAP_KEY_ROLES] = []byte(resultingRoleString)
- _, err = clientset.CoreV1().Secrets(apiserver.PgoNamespace).Update(ctx, &s, metav1.UpdateOptions{})
- if err != nil {
+ if _, err := clientset.CoreV1().Secrets(apiserver.PgoNamespace).Update(ctx, s, metav1.UpdateOptions{}); err != nil {
return err
}
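The last hunk in `pgoroleimpl.go` folds a separate `_, err = ...Update(...)` statement and its `if err != nil` check into Go's if-with-initializer form, which keeps `err` scoped to the check. A sketch of the same shape, with `updateSecret` as a hypothetical stand-in for the `Secrets(...).Update` call:

```go
package main

import (
	"errors"
	"fmt"
)

// updateSecret is a hypothetical stand-in for the clientset call
// clientset.CoreV1().Secrets(ns).Update(...); it only models the
// error return that the handler inspects.
func updateSecret(name string) error {
	if name == "" {
		return errors.New("secret name is empty")
	}
	return nil
}

func main() {
	// err exists only inside this if statement, matching the
	// patch's `if _, err := ...Update(...); err != nil` form.
	if err := updateSecret(""); err != nil {
		fmt.Println("update failed:", err)
	}
}
```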
diff --git a/internal/apiserver/pgoroleservice/pgoroleservice.go b/internal/apiserver/pgoroleservice/pgoroleservice.go
index b3e3413e09..1ab26cdac6 100644
--- a/internal/apiserver/pgoroleservice/pgoroleservice.go
+++ b/internal/apiserver/pgoroleservice/pgoroleservice.go
@@ -17,11 +17,12 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
apiserver "github.com/crunchydata/postgres-operator/internal/apiserver"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/util/validation"
- "net/http"
)
func CreatePgoroleHandler(w http.ResponseWriter, r *http.Request) {
@@ -63,7 +64,7 @@ func CreatePgoroleHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -75,7 +76,7 @@ func CreatePgoroleHandler(w http.ResponseWriter, r *http.Request) {
resp = CreatePgorole(apiserver.Clientset, rolename, &request)
}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
func DeletePgoroleHandler(w http.ResponseWriter, r *http.Request) {
@@ -117,14 +118,13 @@ func DeletePgoroleHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = DeletePgorole(apiserver.Clientset, rolename, &request)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
func ShowPgoroleHandler(w http.ResponseWriter, r *http.Request) {
@@ -167,14 +167,13 @@ func ShowPgoroleHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = ShowPgorole(apiserver.Clientset, &request)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
func UpdatePgoroleHandler(w http.ResponseWriter, r *http.Request) {
@@ -213,5 +212,5 @@ func UpdatePgoroleHandler(w http.ResponseWriter, r *http.Request) {
resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""}
resp = UpdatePgorole(apiserver.Clientset, rolename, &request)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/pgouserservice/pgouserimpl.go b/internal/apiserver/pgouserservice/pgouserimpl.go
index aaa94fbc00..136deb2064 100644
--- a/internal/apiserver/pgouserservice/pgouserimpl.go
+++ b/internal/apiserver/pgouserservice/pgouserimpl.go
@@ -32,14 +32,15 @@ import (
"k8s.io/client-go/kubernetes"
)
-const MAP_KEY_USERNAME = "username"
-const MAP_KEY_PASSWORD = "password"
-const MAP_KEY_ROLES = "roles"
-const MAP_KEY_NAMESPACES = "namespaces"
+const (
+ MAP_KEY_USERNAME = "username"
+ MAP_KEY_PASSWORD = "password"
+ MAP_KEY_ROLES = "roles"
+ MAP_KEY_NAMESPACES = "namespaces"
+)
// CreatePgouser ...
func CreatePgouser(clientset kubernetes.Interface, createdBy string, request *msgs.CreatePgouserRequest) msgs.CreatePgouserResponse {
-
log.Debugf("CreatePgouser %v", request)
resp := msgs.CreatePgouserResponse{}
resp.Status.Code = msgs.Ok
@@ -71,7 +72,7 @@ func CreatePgouser(clientset kubernetes.Interface, createdBy string, request *ms
return resp
}
- //publish event
+ // publish event
topics := make([]string, 1)
topics[0] = events.EventTopicPGOUser
@@ -94,7 +95,6 @@ func CreatePgouser(clientset kubernetes.Interface, createdBy string, request *ms
}
return resp
-
}
// ShowPgouser ...
@@ -147,7 +147,6 @@ func ShowPgouser(clientset kubernetes.Interface, request *msgs.ShowPgouserReques
}
return resp
-
}
// DeletePgouser ...
@@ -170,7 +169,7 @@ func DeletePgouser(clientset kubernetes.Interface, deletedBy string, request *ms
resp.Results = append(resp.Results, "error deleting secret "+secretName)
} else {
resp.Results = append(resp.Results, "deleted pgouser "+v)
- //publish event
+ // publish event
topics := make([]string, 1)
topics[0] = events.EventTopicPGOUser
@@ -198,7 +197,6 @@ func DeletePgouser(clientset kubernetes.Interface, deletedBy string, request *ms
}
return resp
-
}
// UpdatePgouser - update the pgouser secret
@@ -253,7 +251,7 @@ func UpdatePgouser(clientset kubernetes.Interface, updatedBy string, request *ms
return resp
}
- //publish event
+ // publish event
topics := make([]string, 1)
topics[0] = events.EventTopicPGOUser
@@ -275,7 +273,6 @@ func UpdatePgouser(clientset kubernetes.Interface, updatedBy string, request *ms
}
return resp
-
}
func createSecret(clientset kubernetes.Interface, createdBy string, request *msgs.CreatePgouserRequest) error {
@@ -323,7 +320,6 @@ func validRoles(clientset kubernetes.Interface, roles string) error {
}
func validNamespaces(namespaces string, allnamespaces bool) error {
-
if allnamespaces {
return nil
}
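The change in this file also collapses four standalone `const` declarations into one grouped block, which is the form `gofmt -s`/gofumpt prefer for related constants. A sketch of the grouped style; the names here merely mirror the patch's `MAP_KEY_*` constants for illustration:

```go
package main

import "fmt"

// A single grouped const block keeps related constants visually aligned
// and declared together, matching the patch's treatment of the pgouser
// Secret map keys.
const (
	mapKeyUsername   = "username"
	mapKeyPassword   = "password"
	mapKeyRoles      = "roles"
	mapKeyNamespaces = "namespaces"
)

func main() {
	fmt.Println(mapKeyUsername, mapKeyPassword, mapKeyRoles, mapKeyNamespaces)
}
```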
diff --git a/internal/apiserver/pgouserservice/pgouserservice.go b/internal/apiserver/pgouserservice/pgouserservice.go
index ccf1b1ce8f..f0205c5eea 100644
--- a/internal/apiserver/pgouserservice/pgouserservice.go
+++ b/internal/apiserver/pgouserservice/pgouserservice.go
@@ -17,11 +17,12 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
apiserver "github.com/crunchydata/postgres-operator/internal/apiserver"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/util/validation"
- "net/http"
)
func CreatePgouserHandler(w http.ResponseWriter, r *http.Request) {
@@ -63,7 +64,7 @@ func CreatePgouserHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -75,7 +76,7 @@ func CreatePgouserHandler(w http.ResponseWriter, r *http.Request) {
resp = CreatePgouser(apiserver.Clientset, username, &request)
}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
func DeletePgouserHandler(w http.ResponseWriter, r *http.Request) {
@@ -117,14 +118,13 @@ func DeletePgouserHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = DeletePgouser(apiserver.Clientset, username, &request)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
func ShowPgouserHandler(w http.ResponseWriter, r *http.Request) {
@@ -167,14 +167,13 @@ func ShowPgouserHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = ShowPgouser(apiserver.Clientset, &request)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
func UpdatePgouserHandler(w http.ResponseWriter, r *http.Request) {
@@ -213,5 +212,5 @@ func UpdatePgouserHandler(w http.ResponseWriter, r *http.Request) {
resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""}
resp = UpdatePgouser(apiserver.Clientset, username, &request)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/policyservice/policyimpl.go b/internal/apiserver/policyservice/policyimpl.go
index e0302eeb6c..a3153536bc 100644
--- a/internal/apiserver/policyservice/policyimpl.go
+++ b/internal/apiserver/policyservice/policyimpl.go
@@ -68,7 +68,6 @@ func CreatePolicy(client pgo.Interface, policyName, policyURL, policyFile, ns, p
}
return false, err
-
}
// ShowPolicy ...
@@ -77,7 +76,7 @@ func ShowPolicy(client pgo.Interface, name string, allflags bool, ns string) crv
policyList := crv1.PgpolicyList{}
if allflags {
- //get a list of all policies
+ // get a list of all policies
list, err := client.CrunchydataV1().Pgpolicies(ns).List(ctx, metav1.ListOptions{})
if list != nil && err == nil {
policyList = *list
@@ -90,7 +89,6 @@ func ShowPolicy(client pgo.Interface, name string, allflags bool, ns string) crv
}
return policyList
-
}
// DeletePolicy ...
@@ -110,18 +108,19 @@ func DeletePolicy(client pgo.Interface, policyName, ns, pgouser string) msgs.Del
policyFound := false
log.Debugf("deleting policy %s", policyName)
- for _, policy := range policyList.Items {
+ for i := range policyList.Items {
+ policy := &policyList.Items[i]
if policyName == "all" || policyName == policy.Spec.Name {
- //update pgpolicy with current pgouser so that
- //we can create an event holding the pgouser
- //that deleted the policy
+ // update pgpolicy with current pgouser so that
+ // we can create an event holding the pgouser
+ // that deleted the policy
policy.ObjectMeta.Labels[config.LABEL_PGOUSER] = pgouser
- _, err = client.CrunchydataV1().Pgpolicies(ns).Update(ctx, &policy, metav1.UpdateOptions{})
+ _, err = client.CrunchydataV1().Pgpolicies(ns).Update(ctx, policy, metav1.UpdateOptions{})
if err != nil {
log.Error(err)
}
- //ok, now delete the pgpolicy
+ // ok, now delete the pgpolicy
policyFound = true
err = client.CrunchydataV1().Pgpolicies(ns).Delete(ctx, policy.Spec.Name, metav1.DeleteOptions{})
if err == nil {
@@ -145,7 +144,6 @@ func DeletePolicy(client pgo.Interface, policyName, ns, pgouser string) msgs.Del
}
return resp
-
}
// ApplyPolicy ...
@@ -159,7 +157,7 @@ func ApplyPolicy(request *msgs.ApplyPolicyRequest, ns, pgouser string) msgs.Appl
resp.Status.Msg = ""
resp.Status.Code = msgs.Ok
- //validate policy
+ // validate policy
err = util.ValidatePolicy(apiserver.Clientset, ns, request.Name)
if err != nil {
resp.Status.Code = msgs.Error
@@ -167,11 +165,11 @@ func ApplyPolicy(request *msgs.ApplyPolicyRequest, ns, pgouser string) msgs.Appl
return resp
}
- //get filtered list of Deployments
+ // get filtered list of Deployments
selector := request.Selector
log.Debugf("apply policy selector string=[%s]", selector)
- //get a list of all clusters
+ // get a list of all clusters
clusterList, err := apiserver.Clientset.
CrunchydataV1().Pgclusters(ns).
List(ctx, metav1.ListOptions{LabelSelector: selector})
@@ -232,7 +230,7 @@ func ApplyPolicy(request *msgs.ApplyPolicyRequest, ns, pgouser string) msgs.Appl
if d.ObjectMeta.Labels[config.LABEL_SERVICE_NAME] != d.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] {
log.Debugf("skipping apply policy on deployment %s", d.Name)
continue
- //skip non primary deployments
+ // skip non primary deployments
}
log.Debugf("apply policy %s on deployment %s based on selector %s", request.Name, d.ObjectMeta.Name, selector)
@@ -261,7 +259,7 @@ func ApplyPolicy(request *msgs.ApplyPolicyRequest, ns, pgouser string) msgs.Appl
log.Error(err)
}
- //update the pgcluster crd labels with the new policy
+ // update the pgcluster crd labels with the new policy
log.Debugf("patching cluster %s: %s", cl.Name, patch)
_, err = apiserver.Clientset.CrunchydataV1().Pgclusters(ns).
Patch(ctx, cl.Name, types.MergePatchType, patch, metav1.PatchOptions{})
@@ -271,7 +269,7 @@ func ApplyPolicy(request *msgs.ApplyPolicyRequest, ns, pgouser string) msgs.Appl
resp.Name = append(resp.Name, d.ObjectMeta.Name)
- //publish event
+ // publish event
topics := make([]string, 1)
topics[0] = events.EventTopicPolicy
@@ -294,5 +292,4 @@ func ApplyPolicy(request *msgs.ApplyPolicyRequest, ns, pgouser string) msgs.Appl
}
return resp
-
}
diff --git a/internal/apiserver/policyservice/policyservice.go b/internal/apiserver/policyservice/policyservice.go
index d2a3d6234f..4324faac21 100644
--- a/internal/apiserver/policyservice/policyservice.go
+++ b/internal/apiserver/policyservice/policyservice.go
@@ -65,7 +65,7 @@ func CreatePolicyHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -73,7 +73,7 @@ func CreatePolicyHandler(w http.ResponseWriter, r *http.Request) {
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -95,7 +95,7 @@ func CreatePolicyHandler(w http.ResponseWriter, r *http.Request) {
}
}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
// DeletePolicyHandler ...
@@ -145,7 +145,7 @@ func DeletePolicyHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -153,14 +153,13 @@ func DeletePolicyHandler(w http.ResponseWriter, r *http.Request) {
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = DeletePolicy(apiserver.Clientset, policyname, ns, username)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
// ShowPolicyHandler ...
@@ -212,7 +211,7 @@ func ShowPolicyHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -220,14 +219,13 @@ func ShowPolicyHandler(w http.ResponseWriter, r *http.Request) {
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp.PolicyList = ShowPolicy(apiserver.Clientset, policyname, request.AllFlag, ns)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
// ApplyPolicyHandler ...
@@ -271,10 +269,10 @@ func ApplyPolicyHandler(w http.ResponseWriter, r *http.Request) {
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = ApplyPolicy(&request, ns, username)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/pvcservice/pvcservice.go b/internal/apiserver/pvcservice/pvcservice.go
index a12979cb3c..332b3740b2 100644
--- a/internal/apiserver/pvcservice/pvcservice.go
+++ b/internal/apiserver/pvcservice/pvcservice.go
@@ -17,10 +17,11 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
"github.com/crunchydata/postgres-operator/internal/apiserver"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
// ShowPVCHandler ...
@@ -76,7 +77,7 @@ func ShowPVCHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -84,7 +85,7 @@ func ShowPVCHandler(w http.ResponseWriter, r *http.Request) {
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -94,5 +95,5 @@ func ShowPVCHandler(w http.ResponseWriter, r *http.Request) {
resp.Status.Msg = err.Error()
}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/reloadservice/reloadimpl.go b/internal/apiserver/reloadservice/reloadimpl.go
index dba6e6cd8f..25ed88be67 100644
--- a/internal/apiserver/reloadservice/reloadimpl.go
+++ b/internal/apiserver/reloadservice/reloadimpl.go
@@ -116,7 +116,6 @@ func Reload(request *msgs.ReloadRequest, ns, username string) msgs.ReloadRespons
// publishReloadClusterEvent publishes an event when a cluster is reloaded
func publishReloadClusterEvent(clusterName, username, namespace string) error {
-
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
diff --git a/internal/apiserver/reloadservice/reloadservice.go b/internal/apiserver/reloadservice/reloadservice.go
index 9d1096c3c9..149b660125 100644
--- a/internal/apiserver/reloadservice/reloadservice.go
+++ b/internal/apiserver/reloadservice/reloadservice.go
@@ -17,10 +17,11 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
"github.com/crunchydata/postgres-operator/internal/apiserver"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
// ReloadHandler ...
@@ -67,7 +68,7 @@ func ReloadHandler(w http.ResponseWriter, r *http.Request) {
resp := msgs.ReloadResponse{}
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -76,9 +77,9 @@ func ReloadHandler(w http.ResponseWriter, r *http.Request) {
resp := msgs.ReloadResponse{}
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
- json.NewEncoder(w).Encode(reloadResponse)
+ _ = json.NewEncoder(w).Encode(reloadResponse)
}
diff --git a/internal/apiserver/restartservice/restartservice.go b/internal/apiserver/restartservice/restartservice.go
index a1bfb97194..374cb3ca93 100644
--- a/internal/apiserver/restartservice/restartservice.go
+++ b/internal/apiserver/restartservice/restartservice.go
@@ -56,14 +56,14 @@ func RestartHandler(w http.ResponseWriter, r *http.Request) {
var request msgs.RestartRequest
if err := json.NewDecoder(r.Body).Decode(&request); err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
username, err := apiserver.Authn(apiserver.RESTART_PERM, w, r)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -73,17 +73,17 @@ func RestartHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
if _, err := apiserver.GetNamespace(apiserver.Clientset, username,
request.Namespace); err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
- json.NewEncoder(w).Encode(Restart(&request, username))
+ _ = json.NewEncoder(w).Encode(Restart(&request, username))
}
// QueryRestartHandler handles requests to query a cluster for instances available to use as
@@ -131,7 +131,7 @@ func QueryRestartHandler(w http.ResponseWriter, r *http.Request) {
username, err := apiserver.Authn(apiserver.RESTART_PERM, w, r)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -141,14 +141,14 @@ func QueryRestartHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
if _, err := apiserver.GetNamespace(apiserver.Clientset, username, namespace); err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
- json.NewEncoder(w).Encode(QueryRestart(clusterName, namespace))
+ _ = json.NewEncoder(w).Encode(QueryRestart(clusterName, namespace))
}
diff --git a/internal/apiserver/root.go b/internal/apiserver/root.go
index f2a5ab149b..769ee79ab7 100644
--- a/internal/apiserver/root.go
+++ b/internal/apiserver/root.go
@@ -64,8 +64,10 @@ var DebugFlag bool
var BasicAuth bool
// Namespace comes from the apiserver config in this version
-var PgoNamespace string
-var InstallationName string
+var (
+ PgoNamespace string
+ InstallationName string
+)
var CRUNCHY_DEBUG bool
@@ -90,7 +92,6 @@ var Pgo config.PgoConfig
var namespaceOperatingMode ns.NamespaceOperatingMode
func Initialize() {
-
PgoNamespace = os.Getenv("PGO_OPERATOR_NAMESPACE")
if PgoNamespace == "" {
log.Info("PGO_OPERATOR_NAMESPACE environment variable is not set and is required, this is the namespace that the Operator is to run within.")
@@ -151,7 +152,6 @@ func Initialize() {
}
func connectToKube() {
-
client, err := kubeapi.NewClient()
if err != nil {
panic(err)
@@ -193,14 +193,13 @@ func initConfig() {
func BasicAuthCheck(username, password string) bool {
ctx := context.TODO()
- if BasicAuth == false {
+ if !BasicAuth {
return true
}
- //see if there is a pgouser Secret for this username
+ // see if there is a pgouser Secret for this username
secretName := "pgouser-" + username
secret, err := Clientset.CoreV1().Secrets(PgoNamespace).Get(ctx, secretName, metav1.GetOptions{})
-
if err != nil {
log.Errorf("could not get pgouser secret %s: %s", username, err.Error())
return false
@@ -213,13 +212,12 @@ func BasicAuthzCheck(username, perm string) bool {
ctx := context.TODO()
secretName := "pgouser-" + username
secret, err := Clientset.CoreV1().Secrets(PgoNamespace).Get(ctx, secretName, metav1.GetOptions{})
-
if err != nil {
log.Errorf("could not get pgouser secret %s: %s", username, err.Error())
return false
}
- //get the roles for this user
+ // get the roles for this user
rolesString := string(secret.Data["roles"])
roles := strings.Split(rolesString, ",")
if len(roles) == 0 {
@@ -227,13 +225,12 @@ func BasicAuthzCheck(username, perm string) bool {
return false
}
- //venture thru each role this user has looking for a perm match
+ // venture thru each role this user has looking for a perm match
for _, r := range roles {
- //get the pgorole
+ // get the pgorole
roleSecretName := "pgorole-" + r
rolesecret, err := Clientset.CoreV1().Secrets(PgoNamespace).Get(ctx, roleSecretName, metav1.GetOptions{})
-
if err != nil {
log.Errorf("could not get pgorole secret %s: %s", r, err.Error())
return false
@@ -262,14 +259,12 @@ func BasicAuthzCheck(username, perm string) bool {
}
return false
-
}
-//GetNamespace determines if a user has permission for
-//a namespace they are requesting
-//a valid requested namespace is required
+// GetNamespace determines if a user has permission for
+// a namespace they are requesting
+// a valid requested namespace is required
func GetNamespace(clientset kubernetes.Interface, username, requestedNS string) (string, error) {
-
log.Debugf("GetNamespace username [%s] ns [%s]", username, requestedNS)
if requestedNS == "" {
@@ -281,11 +276,11 @@ func GetNamespace(clientset kubernetes.Interface, username, requestedNS string)
return requestedNS, fmt.Errorf("Error when determining whether user [%s] is allowed access to "+
"namespace [%s]: %s", username, requestedNS, err.Error())
}
- if iAccess == false {
+ if !iAccess {
errMsg := fmt.Sprintf("namespace [%s] is not part of the Operator installation", requestedNS)
return requestedNS, errors.New(errMsg)
}
- if uAccess == false {
+ if !uAccess {
errMsg := fmt.Sprintf("user [%s] is not allowed access to namespace [%s]", username, requestedNS)
return requestedNS, errors.New(errMsg)
}
@@ -339,7 +334,6 @@ func Authn(perm string, w http.ResponseWriter, r *http.Request) (string, error)
log.Debug("Authentication Success")
return username, err
-
}
func IsValidStorageName(name string) bool {
@@ -375,7 +369,7 @@ func UserIsPermittedInNamespace(username, requestedNS string) (bool, bool, error
}
if iAccess {
- //get the pgouser Secret for this username
+ // get the pgouser Secret for this username
userSecretName := "pgouser-" + username
userSecret, err := Clientset.CoreV1().Secrets(PgoNamespace).Get(ctx, userSecretName, metav1.GetOptions{})
if err != nil {
@@ -408,7 +402,6 @@ func UserIsPermittedInNamespace(username, requestedNS string) (bool, bool, error
func WriteTLSCert(certPath, keyPath string) error {
ctx := context.TODO()
pgoSecret, err := Clientset.CoreV1().Secrets(PgoNamespace).Get(ctx, PGOSecretName, metav1.GetOptions{})
-
// if the TLS certificate secret is not found, attempt to generate one
if err != nil {
log.Infof("%s Secret NOT found in namespace %s", PGOSecretName, PgoNamespace)
@@ -425,13 +418,13 @@ func WriteTLSCert(certPath, keyPath string) error {
log.Infof("%s Secret found in namespace %s", PGOSecretName, PgoNamespace)
log.Infof("cert key data len is %d", len(pgoSecret.Data[corev1.TLSCertKey]))
- if err := ioutil.WriteFile(certPath, pgoSecret.Data[corev1.TLSCertKey], 0644); err != nil {
+ if err := ioutil.WriteFile(certPath, pgoSecret.Data[corev1.TLSCertKey], 0o600); err != nil {
return err
}
log.Infof("private key data len is %d", len(pgoSecret.Data[corev1.TLSPrivateKeyKey]))
- if err := ioutil.WriteFile(keyPath, pgoSecret.Data[corev1.TLSPrivateKeyKey], 0644); err != nil {
+ if err := ioutil.WriteFile(keyPath, pgoSecret.Data[corev1.TLSPrivateKeyKey], 0o600); err != nil {
return err
}
@@ -444,7 +437,7 @@ func generateTLSCert(certPath, keyPath string) error {
ctx := context.TODO()
var err error
- //generate private key
+ // generate private key
var privateKey *rsa.PrivateKey
privateKey, err = tlsutil.NewPrivateKey()
if err != nil {
@@ -481,15 +474,14 @@ func generateTLSCert(certPath, keyPath string) error {
os.Exit(2)
}
- if err := ioutil.WriteFile(certPath, newSecret.Data[corev1.TLSCertKey], 0644); err != nil {
+ if err := ioutil.WriteFile(certPath, newSecret.Data[corev1.TLSCertKey], 0o600); err != nil {
return err
}
- if err := ioutil.WriteFile(keyPath, newSecret.Data[corev1.TLSPrivateKeyKey], 0644); err != nil {
+ if err := ioutil.WriteFile(keyPath, newSecret.Data[corev1.TLSPrivateKeyKey], 0o600); err != nil {
return err
}
return err
-
}
// setNamespaceOperatingMode set the namespace operating mode for the Operator by calling the
@@ -530,7 +522,6 @@ func setRandomPgouserPasswords() {
// generate the password using the default password length
generatedPassword, err := util.GeneratePassword(util.DefaultGeneratedPasswordLength)
-
if err != nil {
log.Errorf("Could not generate password for pgouser secret %s for operator installation %s in "+
"namespace %s", secret.Name, InstallationName, PgoNamespace)
@@ -539,7 +530,6 @@ func setRandomPgouserPasswords() {
// create the password patch
patch, err := kubeapi.NewMergePatch().Add("stringData", "password")(generatedPassword).Bytes()
-
if err != nil {
log.Errorf("Could not generate password patch for pgouser secret %s for operator installation "+
"%s in namespace %s", secret.Name, InstallationName, PgoNamespace)
diff --git a/internal/apiserver/scheduleservice/scheduleimpl.go b/internal/apiserver/scheduleservice/scheduleimpl.go
index 7aa2a9e194..f8c70ae059 100644
--- a/internal/apiserver/scheduleservice/scheduleimpl.go
+++ b/internal/apiserver/scheduleservice/scheduleimpl.go
@@ -143,8 +143,8 @@ func CreateSchedule(request *msgs.CreateScheduleRequest, ns string) msgs.CreateS
log.Debug("Making schedules")
var schedules []*PgScheduleSpec
- for _, cluster := range clusterList.Items {
-
+ for i := range clusterList.Items {
+ cluster := &clusterList.Items[i]
// check if the current cluster is not upgraded to the deployed
// Operator version. If not, do not allow the command to complete
if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE {
@@ -154,10 +154,10 @@ func CreateSchedule(request *msgs.CreateScheduleRequest, ns string) msgs.CreateS
}
switch sr.Request.ScheduleType {
case "pgbackrest":
- schedule := sr.createBackRestSchedule(&cluster, ns)
+ schedule := sr.createBackRestSchedule(cluster, ns)
schedules = append(schedules, schedule)
case "policy":
- schedule := sr.createPolicySchedule(&cluster, ns)
+ schedule := sr.createPolicySchedule(cluster, ns)
schedules = append(schedules, schedule)
default:
sr.Response.Status.Code = msgs.Error
@@ -232,7 +232,7 @@ func DeleteSchedule(request *msgs.DeleteScheduleRequest, ns string) msgs.DeleteS
if request.ScheduleName == "" && request.ClusterName == "" && request.Selector == "" {
sr.Status.Code = msgs.Error
- sr.Status.Msg = fmt.Sprintf("Cluster name, schedule name or selector must be provided")
+ sr.Status.Msg = "Cluster name, schedule name or selector must be provided"
return *sr
}
@@ -280,7 +280,7 @@ func ShowSchedule(request *msgs.ShowScheduleRequest, ns string) msgs.ShowSchedul
if request.ScheduleName == "" && request.ClusterName == "" && request.Selector == "" {
sr.Status.Code = msgs.Error
- sr.Status.Msg = fmt.Sprintf("Cluster name, schedule name or selector must be provided")
+ sr.Status.Msg = "Cluster name, schedule name or selector must be provided"
return *sr
}
diff --git a/internal/apiserver/scheduleservice/scheduleservice.go b/internal/apiserver/scheduleservice/scheduleservice.go
index b88fa16d7e..3ac9205a59 100644
--- a/internal/apiserver/scheduleservice/scheduleservice.go
+++ b/internal/apiserver/scheduleservice/scheduleservice.go
@@ -98,12 +98,12 @@ func CreateScheduleHandler(w http.ResponseWriter, r *http.Request) {
},
Results: make([]string, 0),
}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp := CreateSchedule(&request, ns)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
func DeleteScheduleHandler(w http.ResponseWriter, r *http.Request) {
@@ -150,13 +150,13 @@ func DeleteScheduleHandler(w http.ResponseWriter, r *http.Request) {
},
Results: make([]string, 0),
}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp := DeleteSchedule(&request, ns)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
func ShowScheduleHandler(w http.ResponseWriter, r *http.Request) {
@@ -204,10 +204,10 @@ func ShowScheduleHandler(w http.ResponseWriter, r *http.Request) {
Results: make([]string, 0),
}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp := ShowSchedule(&request, ns)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/statusservice/statusimpl.go b/internal/apiserver/statusservice/statusimpl.go
index 958da66604..7cc81278b0 100644
--- a/internal/apiserver/statusservice/statusimpl.go
+++ b/internal/apiserver/statusservice/statusimpl.go
@@ -46,7 +46,7 @@ func Status(ns string) msgs.StatusResponse {
func getNumClaims(ns string) int {
ctx := context.TODO()
- //count number of PVCs with pgremove=true
+ // count number of PVCs with pgremove=true
pvcs, err := apiserver.Clientset.
CoreV1().PersistentVolumeClaims(ns).
List(ctx, metav1.ListOptions{LabelSelector: config.LABEL_PGREMOVE})
@@ -59,7 +59,7 @@ func getNumClaims(ns string) int {
func getNumDatabases(ns string) int {
ctx := context.TODO()
- //count number of Deployments with pg-cluster
+ // count number of Deployments with pg-cluster
deps, err := apiserver.Clientset.
AppsV1().Deployments(ns).
List(ctx, metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER})
@@ -72,7 +72,7 @@ func getNumDatabases(ns string) int {
func getVolumeCap(ns string) string {
ctx := context.TODO()
- //sum all PVCs storage capacity
+ // sum all PVCs storage capacity
pvcs, err := apiserver.Clientset.
CoreV1().PersistentVolumeClaims(ns).
List(ctx, metav1.ListOptions{LabelSelector: config.LABEL_PGREMOVE})
@@ -83,18 +83,18 @@ func getVolumeCap(ns string) string {
var capTotal int64
capTotal = 0
- for _, p := range pvcs.Items {
- capTotal = capTotal + getClaimCapacity(&p)
+ for i := range pvcs.Items {
+ capTotal = capTotal + getClaimCapacity(&pvcs.Items[i])
}
q := resource.NewQuantity(capTotal, resource.BinarySI)
- //log.Infof("capTotal string is %s\n", q.String())
+ // log.Infof("capTotal string is %s\n", q.String())
return q.String()
}
func getDBTags(ns string) map[string]int {
ctx := context.TODO()
results := make(map[string]int)
- //count all pods with pg-cluster, sum by image tag value
+ // count all pods with pg-cluster, sum by image tag value
pods, err := apiserver.Clientset.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER})
if err != nil {
log.Error(err)
@@ -111,7 +111,7 @@ func getDBTags(ns string) map[string]int {
func getNotReady(ns string) []string {
ctx := context.TODO()
- //show all database pods for each pgcluster that are not yet running
+ // show all database pods for each pgcluster that are not yet running
agg := make([]string, 0)
clusterList, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(ctx, metav1.ListOptions{})
if err != nil {
@@ -152,7 +152,6 @@ func getClaimCapacity(pvc *v1.PersistentVolumeClaim) int64 {
diskSizeInt64, _ := diskSize.AsInt64()
return diskSizeInt64
-
}
func getLabels(ns string) []msgs.KeyValue {
@@ -168,7 +167,6 @@ func getLabels(ns string) []msgs.KeyValue {
}
for _, dep := range deps.Items {
-
for k, v := range dep.ObjectMeta.Labels {
lv := k + "=" + v
if results[lv] == 0 {
@@ -177,7 +175,6 @@ func getLabels(ns string) []msgs.KeyValue {
results[lv] = results[lv] + 1
}
}
-
}
for k, v := range results {
@@ -189,5 +186,4 @@ func getLabels(ns string) []msgs.KeyValue {
})
return ss
-
}
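The loop rewrite in `getVolumeCap` above switches from copying each element (`for _, p := range pvcs.Items`) to indexing (`for i := range pvcs.Items`), so the address passed to `getClaimCapacity` refers to the slice element itself rather than to a per-iteration copy. A minimal sketch of the same pattern, with a hypothetical `claim` type standing in for the much larger `v1.PersistentVolumeClaim`:

```go
package main

import "fmt"

// claim is an illustrative stand-in for v1.PersistentVolumeClaim.
type claim struct {
	Name string
	Cap  int64
}

// sumCaps indexes the slice directly: no element is copied per
// iteration, and &claims[i] points at the real element rather than
// at a loop-local copy.
func sumCaps(claims []claim) int64 {
	var total int64
	for i := range claims {
		total += (&claims[i]).Cap
	}
	return total
}

func main() {
	claims := []claim{{Name: "a", Cap: 1}, {Name: "b", Cap: 2}}
	fmt.Println(sumCaps(claims)) // 3
}
```

Beyond avoiding the copy, this shape also sidesteps the classic pitfall of taking the address of the range variable itself.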
diff --git a/internal/apiserver/statusservice/statusservice.go b/internal/apiserver/statusservice/statusservice.go
index ecab0047c2..3adecd9156 100644
--- a/internal/apiserver/statusservice/statusservice.go
+++ b/internal/apiserver/statusservice/statusservice.go
@@ -17,11 +17,12 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
"github.com/crunchydata/postgres-operator/internal/apiserver"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
//"github.com/gorilla/mux"
- "net/http"
)
// StatusHandler ...
@@ -71,7 +72,7 @@ func StatusHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp = msgs.StatusResponse{}
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -79,11 +80,11 @@ func StatusHandler(w http.ResponseWriter, r *http.Request) {
if err != nil {
resp = msgs.StatusResponse{}
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = Status(ns)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/upgradeservice/upgradeimpl.go b/internal/apiserver/upgradeservice/upgradeimpl.go
index 69e3a40927..66d6a806b0 100644
--- a/internal/apiserver/upgradeservice/upgradeimpl.go
+++ b/internal/apiserver/upgradeservice/upgradeimpl.go
@@ -230,14 +230,12 @@ func supportedOperatorVersion(version string) bool {
// If none of the above is true, the upgrade can continue
return true
-
}
// upgradeTagValid compares and validates the PostgreSQL version values stored
// in the image tag of the existing pgcluster CR against the values set in the
// Postgres Operator's configuration
func upgradeTagValid(upgradeFrom, upgradeTo string) bool {
-
log.Debugf("Validating upgrade from %s to %s", upgradeFrom, upgradeTo)
versionRegex := regexp.MustCompile(`-(\d+)\.(\d+)(\.\d+)?-`)
@@ -280,5 +278,4 @@ func upgradeTagValid(upgradeFrom, upgradeTo string) bool {
// if none of the above conditions are met, a two digit Major version upgrade is likely being
// attempted, or a tag value or general error occurred, so we cannot continue
return false
-
}
diff --git a/internal/apiserver/upgradeservice/upgradeservice.go b/internal/apiserver/upgradeservice/upgradeservice.go
index dee9c68dc2..b058345e6a 100644
--- a/internal/apiserver/upgradeservice/upgradeservice.go
+++ b/internal/apiserver/upgradeservice/upgradeservice.go
@@ -71,17 +71,17 @@ func CreateUpgradeHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = CreateUpgrade(&request, ns, username)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/userservice/userimpl.go b/internal/apiserver/userservice/userimpl.go
index 39e0396184..cc63850ba2 100644
--- a/internal/apiserver/userservice/userimpl.go
+++ b/internal/apiserver/userservice/userimpl.go
@@ -104,10 +104,8 @@ const (
sqlDelimiter = "|"
)
-var (
- // sqlCommand is the command that needs to be executed for running SQL
- sqlCommand = []string{"psql", "-A", "-t"}
-)
+// sqlCommand is the command that needs to be executed for running SQL
+var sqlCommand = []string{"psql", "-A", "-t"}
// CreateUser allows one to create a PostgreSQL user in one or more PostgreSQL
// clusters, and provides the ability to do the following:
@@ -138,7 +136,6 @@ func CreateUser(request *msgs.CreateUserRequest, pgouser string) msgs.CreateUser
// try to get a list of clusters. if there is an error, return
clusterList, err := getClusterList(request.Namespace, request.Clusters, request.Selector, request.AllFlag)
-
if err != nil {
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
@@ -159,7 +156,6 @@ func CreateUser(request *msgs.CreateUserRequest, pgouser string) msgs.CreateUser
// determine if the user passed in a valid password type
passwordType, err := msgs.GetPasswordType(request.PasswordType)
-
if err != nil {
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
@@ -182,7 +178,8 @@ func CreateUser(request *msgs.CreateUserRequest, pgouser string) msgs.CreateUser
}
// iterate through each cluster and add the new PostgreSQL role to each pod
- for _, cluster := range clusterList.Items {
+ for i := range clusterList.Items {
+ cluster := &clusterList.Items[i]
result := msgs.UserResponseDetail{
ClusterName: cluster.Spec.ClusterName,
Username: request.Username,
@@ -192,8 +189,7 @@ func CreateUser(request *msgs.CreateUserRequest, pgouser string) msgs.CreateUser
log.Debugf("creating user [%s] on cluster [%s]", result.Username, cluster.Spec.ClusterName)
// first, find the primary Pod
- pod, err := util.GetPrimaryPod(apiserver.Clientset, &cluster)
-
+ pod, err := util.GetPrimaryPod(apiserver.Clientset, cluster)
// if the primary Pod cannot be found, we're going to continue on for the
// other clusters, but provide some sort of error message in the response
if err != nil {
@@ -226,7 +222,6 @@ func CreateUser(request *msgs.CreateUserRequest, pgouser string) msgs.CreateUser
// Set the password. We want a password to be generated if the user did not
// set a password
_, password, hashedPassword, err := generatePassword(result.Username, request.Password, passwordType, true, request.PasswordLength)
-
// on the off-chance there is an error, record it and continue
if err != nil {
log.Error(err)
@@ -268,7 +263,7 @@ func CreateUser(request *msgs.CreateUserRequest, pgouser string) msgs.CreateUser
}
// if a pgAdmin deployment exists, attempt to add the user to it
- if err := updatePgAdmin(&cluster, result.Username, result.Password); err != nil {
+ if err := updatePgAdmin(cluster, result.Username, result.Password); err != nil {
log.Error(err)
result.Error = true
result.ErrorMessage = err.Error()
@@ -306,7 +301,6 @@ func DeleteUser(request *msgs.DeleteUserRequest, pgouser string) msgs.DeleteUser
// try to get a list of clusters. if there is an error, return
clusterList, err := getClusterList(request.Namespace, request.Clusters, request.Selector, request.AllFlag)
-
if err != nil {
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
@@ -315,7 +309,8 @@ func DeleteUser(request *msgs.DeleteUserRequest, pgouser string) msgs.DeleteUser
// iterate through each cluster and try to delete the user!
loop:
- for _, cluster := range clusterList.Items {
+ for i := range clusterList.Items {
+ cluster := clusterList.Items[i]
result := msgs.UserResponseDetail{
ClusterName: cluster.Spec.ClusterName,
Username: request.Username,
@@ -325,7 +320,6 @@ loop:
// first, find the primary Pod
pod, err := util.GetPrimaryPod(apiserver.Clientset, &cluster)
-
// if the primary Pod cannot be found, we're going to continue on for the
// other clusters, but provide some sort of error message in the response
if err != nil {
@@ -341,7 +335,6 @@ loop:
// first, get a list of all the databases in the cluster. We will need to
// go through each database and drop any object that the user owns
output, err := executeSQL(pod, cluster.Spec.Port, sqlFindDatabases, []string{})
-
// if there is an error, record it and move on as we cannot actually delete
// the user
if err != nil {
@@ -452,7 +445,6 @@ func ShowUser(request *msgs.ShowUserRequest) msgs.ShowUserResponse {
// them. If this returns an error, exit here
clusterList, err := getClusterList(request.Namespace,
request.Clusters, request.Selector, request.AllFlag)
-
if err != nil {
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
@@ -471,10 +463,10 @@ func ShowUser(request *msgs.ShowUserRequest) msgs.ShowUserResponse {
}
// iterate through each cluster and look up information about each user
- for _, cluster := range clusterList.Items {
+ for i := range clusterList.Items {
+ cluster := clusterList.Items[i]
// first, find the primary Pod
pod, err := util.GetPrimaryPod(apiserver.Clientset, &cluster)
-
// if the primary Pod cannot be found, we're going to continue on for the
// other clusters, but provide some sort of error message in the response
if err != nil {
@@ -503,7 +495,6 @@ func ShowUser(request *msgs.ShowUserRequest) msgs.ShowUserResponse {
// great, now we can perform the user lookup
output, err := executeSQL(pod, cluster.Spec.Port, sql, []string{})
-
// if there is an error, record it and move on to the next cluster
if err != nil {
log.Error(err)
@@ -627,7 +618,6 @@ func UpdateUser(request *msgs.UpdateUserRequest, pgouser string) msgs.UpdateUser
// try to get a list of clusters. if there is an error, return
clusterList, err := getClusterList(request.Namespace, request.Clusters, request.Selector, request.AllFlag)
-
if err != nil {
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
@@ -644,20 +634,21 @@ func UpdateUser(request *msgs.UpdateUserRequest, pgouser string) msgs.UpdateUser
return response
}
- for _, cluster := range clusterList.Items {
+ for i := range clusterList.Items {
var result msgs.UserResponseDetail
+ cluster := &clusterList.Items[i]
// determine which update user actions needs to be performed
switch {
// determine if any passwords expiring in X days should be updated
// it returns a slice of results, which are then appended to the list
case request.Expired > 0:
- results := rotateExpiredPasswords(request, &cluster)
+ results := rotateExpiredPasswords(request, cluster)
response.Results = append(response.Results, results...)
// otherwise, perform a regular "update user" request which covers all the
// other "regular" cases. It returns a result, which is append to the list
default:
- result = updateUser(request, &cluster)
+ result = updateUser(request, cluster)
response.Results = append(response.Results, result)
}
}
@@ -674,7 +665,6 @@ func deleteUserSecret(cluster crv1.Pgcluster, username string) {
secretName := fmt.Sprintf(util.UserSecretFormat, cluster.Spec.ClusterName, username)
err := apiserver.Clientset.CoreV1().Secrets(cluster.Spec.Namespace).
Delete(ctx, secretName, metav1.DeleteOptions{})
-
if err != nil {
log.Error(err)
}
@@ -746,7 +736,6 @@ func generatePassword(username, password string, passwordType pgpassword.Passwor
// generate the password
generatedPassword, err := util.GeneratePassword(passwordLength)
-
// if there is an error, return
if err != nil {
return false, "", "", err
@@ -757,13 +746,11 @@ func generatePassword(username, password string, passwordType pgpassword.Passwor
// finally, hash the password
postgresPassword, err := pgpassword.NewPostgresPassword(passwordType, username, password)
-
if err != nil {
return false, "", "", err
}
hashedPassword, err := postgresPassword.Build()
-
if err != nil {
return false, "", "", err
}
@@ -830,7 +817,6 @@ func getClusterList(namespace string, clusterNames []string, selector string, al
// of arguments...or both. First, start with the selector
if selector != "" {
cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
-
// if there is an error, return here with an empty cluster list
if err != nil {
return crv1.PgclusterList{}, err
@@ -841,7 +827,6 @@ func getClusterList(namespace string, clusterNames []string, selector string, al
// now try to get clusters based specific cluster names
for _, clusterName := range clusterNames {
cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).Get(ctx, clusterName, metav1.GetOptions{})
-
// if there is an error, capture it here and return here with an empty list
if err != nil {
return crv1.PgclusterList{}, err
@@ -876,7 +861,6 @@ func rotateExpiredPasswords(request *msgs.UpdateUserRequest, cluster *crv1.Pgclu
// first, find the primary Pod. If we can't do that, no sense in continuing
pod, err := util.GetPrimaryPod(apiserver.Clientset, cluster)
-
if err != nil {
result := msgs.UserResponseDetail{
ClusterName: cluster.Spec.ClusterName,
@@ -902,7 +886,6 @@ func rotateExpiredPasswords(request *msgs.UpdateUserRequest, cluster *crv1.Pgclu
// alright, time to find if there are any expired accounts. If this errors,
// then we will abort here
output, err := executeSQL(pod, cluster.Spec.Port, sql, []string{})
-
if err != nil {
result := msgs.UserResponseDetail{
ClusterName: cluster.Spec.ClusterName,
@@ -972,7 +955,6 @@ func rotateExpiredPasswords(request *msgs.UpdateUserRequest, cluster *crv1.Pgclu
// length of the password, or passed in a password to rotate (though that
// is not advised...). This forces the password to change
_, password, hashedPassword, err := generatePassword(result.Username, request.Password, passwordType, true, request.PasswordLength)
-
// on the off-chance there's an error in generating the password, record it
// and continue
if err != nil {
@@ -1016,7 +998,6 @@ func updatePgAdmin(cluster *crv1.Pgcluster, username, password string) error {
// Sync user to pgAdmin, if enabled
qr, err := pgadmin.GetPgAdminQueryRunner(apiserver.Clientset, apiserver.RESTConfig, cluster)
-
// if there is an error, return as such
if err != nil {
return err
@@ -1073,7 +1054,6 @@ func updateUser(request *msgs.UpdateUserRequest, cluster *crv1.Pgcluster) msgs.U
// first, find the primary Pod
pod, err := util.GetPrimaryPod(apiserver.Clientset, cluster)
-
// if the primary Pod cannot be found, we're going to continue on for the
// other clusters, but provide some sort of error message in the response
if err != nil {
@@ -1104,7 +1084,6 @@ func updateUser(request *msgs.UpdateUserRequest, cluster *crv1.Pgcluster) msgs.U
passwordType, _ := msgs.GetPasswordType(request.PasswordType)
isChanged, password, hashedPassword, err := generatePassword(result.Username,
request.Password, passwordType, request.RotatePassword, request.PasswordLength)
-
// on the off-chance there is an error generating the password, record it
// and return
if err != nil {
@@ -1171,6 +1150,7 @@ func updateUser(request *msgs.UpdateUserRequest, cluster *crv1.Pgcluster) msgs.U
sql = fmt.Sprintf("%s %s", sql, sqlEnableLoginClause)
case msgs.UpdateUserLoginDisable:
sql = fmt.Sprintf("%s %s", sql, sqlDisableLoginClause)
+ case msgs.UpdateUserLoginDoNothing: // this is never reached -- no-op
}
// execute the SQL! if there is an error, return the results
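The `case msgs.UpdateUserLoginDoNothing:` arm added above gives the switch an explicit no-op case, so every enum value is visibly considered even when one of them requires no action. A sketch of that style with hypothetical names (`loginAction`, `buildSQL`, and the constants are illustrative, not the Operator's types):

```go
package main

import "fmt"

type loginAction int

const (
	loginDoNothing loginAction = iota
	loginEnable
	loginDisable
)

// buildSQL appends a clause per action; the explicit empty case makes
// the switch exhaustive, so linters and readers can see the "do
// nothing" value was handled deliberately rather than forgotten.
func buildSQL(base string, a loginAction) string {
	switch a {
	case loginEnable:
		base += " LOGIN"
	case loginDisable:
		base += " NOLOGIN"
	case loginDoNothing: // deliberately no clause
	}
	return base
}

func main() {
	fmt.Println(buildSQL("ALTER ROLE app", loginEnable))
}
```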
diff --git a/internal/apiserver/userservice/userimpl_test.go b/internal/apiserver/userservice/userimpl_test.go
index 71d3aa5fcf..f49d171ff4 100644
--- a/internal/apiserver/userservice/userimpl_test.go
+++ b/internal/apiserver/userservice/userimpl_test.go
@@ -30,9 +30,7 @@ func TestGeneratePassword(t *testing.T) {
generatedPasswordLength := 32
t.Run("no changes", func(t *testing.T) {
-
changed, _, _, err := generatePassword(username, password, passwordType, generateNewPassword, generatedPasswordLength)
-
if err != nil {
t.Error(err)
return
@@ -48,7 +46,6 @@ func TestGeneratePassword(t *testing.T) {
t.Run("valid", func(t *testing.T) {
changed, newPassword, _, err := generatePassword(username, password, passwordType, generateNewPassword, generatedPasswordLength)
-
if err != nil {
t.Error(err)
return
@@ -66,7 +63,6 @@ func TestGeneratePassword(t *testing.T) {
t.Run("does not override custom password", func(t *testing.T) {
password := "custom"
changed, newPassword, _, err := generatePassword(username, password, passwordType, generateNewPassword, generatedPasswordLength)
-
if err != nil {
t.Error(err)
return
@@ -84,7 +80,6 @@ func TestGeneratePassword(t *testing.T) {
t.Run("password length can be adjusted", func(t *testing.T) {
generatedPasswordLength := 16
changed, newPassword, _, err := generatePassword(username, password, passwordType, generateNewPassword, generatedPasswordLength)
-
if err != nil {
t.Error(err)
return
@@ -102,7 +97,6 @@ func TestGeneratePassword(t *testing.T) {
t.Run("should be nonzero length", func(t *testing.T) {
generatedPasswordLength := 0
changed, newPassword, _, err := generatePassword(username, password, passwordType, generateNewPassword, generatedPasswordLength)
-
if err != nil {
t.Error(err)
return
@@ -125,7 +119,6 @@ func TestGeneratePassword(t *testing.T) {
t.Run("md5", func(t *testing.T) {
changed, _, hashedPassword, err := generatePassword(username, password,
passwordType, generateNewPassword, generatedPasswordLength)
-
if err != nil {
t.Error(err)
return
@@ -144,7 +137,6 @@ func TestGeneratePassword(t *testing.T) {
passwordType := pgpassword.SCRAM
changed, _, hashedPassword, err := generatePassword(username, password,
passwordType, generateNewPassword, generatedPasswordLength)
-
if err != nil {
t.Error(err)
return
@@ -159,5 +151,4 @@ func TestGeneratePassword(t *testing.T) {
}
})
})
-
}
diff --git a/internal/apiserver/userservice/userservice.go b/internal/apiserver/userservice/userservice.go
index 83994c90fa..94a9be299b 100644
--- a/internal/apiserver/userservice/userservice.go
+++ b/internal/apiserver/userservice/userservice.go
@@ -17,10 +17,11 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
"github.com/crunchydata/postgres-operator/internal/apiserver"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- "net/http"
)
// UserHandler provides a means to update a PostgreSQL user
@@ -52,7 +53,7 @@ func UpdateUserHandler(w http.ResponseWriter, r *http.Request) {
username, err := apiserver.Authn(apiserver.UPDATE_USER_PERM, w, r)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -62,20 +63,20 @@ func UpdateUserHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
_, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = UpdateUser(&request, username)
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
// CreateUserHandler ...
@@ -117,20 +118,19 @@ func CreateUserHandler(w http.ResponseWriter, r *http.Request) {
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = CreateUser(&request, username)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
// DeleteUserHandler ...
@@ -163,7 +163,7 @@ func DeleteUserHandler(w http.ResponseWriter, r *http.Request) {
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -182,13 +182,12 @@ func DeleteUserHandler(w http.ResponseWriter, r *http.Request) {
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = DeleteUser(&request, pgouser)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
// ShowUserHandler allows one to display information about PostgreSQL users that
@@ -237,18 +236,17 @@ func ShowUserHandler(w http.ResponseWriter, r *http.Request) {
resp := msgs.ShowUserResponse{}
if request.ClientVersion != msgs.PGO_VERSION {
resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
_, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
if err != nil {
resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
resp = ShowUser(&request)
- json.NewEncoder(w).Encode(resp)
-
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/apiserver/versionservice/versionservice.go b/internal/apiserver/versionservice/versionservice.go
index 49735dff1a..bcb2adc7b0 100644
--- a/internal/apiserver/versionservice/versionservice.go
+++ b/internal/apiserver/versionservice/versionservice.go
@@ -17,9 +17,10 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
"github.com/crunchydata/postgres-operator/internal/apiserver"
log "github.com/sirupsen/logrus"
- "net/http"
)
// VersionHandler ...
@@ -50,7 +51,7 @@ func VersionHandler(w http.ResponseWriter, r *http.Request) {
resp := Version()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
// HealthHandler ...
@@ -71,7 +72,7 @@ func HealthHandler(w http.ResponseWriter, r *http.Request) {
resp := Health()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
// HealthyHandler follows the health endpoint convention of HTTP/200 and
@@ -88,5 +89,5 @@ func HealthyHandler(w http.ResponseWriter, r *http.Request) {
// '200':
// description: "Healthy: server is responding as expected"
w.WriteHeader(http.StatusOK)
- w.Write([]byte("ok"))
+ _, _ = w.Write([]byte("ok"))
}
diff --git a/internal/apiserver/workflowservice/workflowimpl.go b/internal/apiserver/workflowservice/workflowimpl.go
index e07ff79d59..3f46b2a26c 100644
--- a/internal/apiserver/workflowservice/workflowimpl.go
+++ b/internal/apiserver/workflowservice/workflowimpl.go
@@ -34,7 +34,7 @@ func ShowWorkflow(id, ns string) (msgs.ShowWorkflowDetail, error) {
log.Debugf("ShowWorkflow called with id %s", id)
detail := msgs.ShowWorkflowDetail{}
- //get the pgtask for this workflow
+ // get the pgtask for this workflow
selector := crv1.PgtaskWorkflowID + "=" + id
@@ -53,5 +53,4 @@ func ShowWorkflow(id, ns string) (msgs.ShowWorkflowDetail, error) {
detail.Parameters = t.Spec.Parameters
return detail, err
-
}
diff --git a/internal/apiserver/workflowservice/workflowservice.go b/internal/apiserver/workflowservice/workflowservice.go
index 81aea9ff98..35adb2b0a4 100644
--- a/internal/apiserver/workflowservice/workflowservice.go
+++ b/internal/apiserver/workflowservice/workflowservice.go
@@ -17,11 +17,12 @@ limitations under the License.
import (
"encoding/json"
+ "net/http"
+
"github.com/crunchydata/postgres-operator/internal/apiserver"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
"github.com/gorilla/mux"
log "github.com/sirupsen/logrus"
- "net/http"
)
// ShowWorkflowHandler ...
@@ -79,7 +80,7 @@ func ShowWorkflowHandler(w http.ResponseWriter, r *http.Request) {
if clientVersion != msgs.PGO_VERSION {
resp.Status.Code = msgs.Error
resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -87,7 +88,7 @@ func ShowWorkflowHandler(w http.ResponseWriter, r *http.Request) {
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
return
}
@@ -97,5 +98,5 @@ func ShowWorkflowHandler(w http.ResponseWriter, r *http.Request) {
resp.Status.Msg = err.Error()
}
- json.NewEncoder(w).Encode(resp)
+ _ = json.NewEncoder(w).Encode(resp)
}
diff --git a/internal/config/labels.go b/internal/config/labels.go
index 4b540a5227..d6d851eb52 100644
--- a/internal/config/labels.go
+++ b/internal/config/labels.go
@@ -16,149 +16,184 @@ package config
*/
// resource labels used by the operator
-const LABEL_NAME = "name"
-const LABEL_SELECTOR = "selector"
-const LABEL_OPERATOR = "postgres-operator"
-const LABEL_PG_CLUSTER = "pg-cluster"
-const LABEL_PG_CLUSTER_IDENTIFIER = "pg-cluster-id"
-const LABEL_PG_DATABASE = "pgo-pg-database"
+const (
+ LABEL_NAME = "name"
+ LABEL_SELECTOR = "selector"
+ LABEL_OPERATOR = "postgres-operator"
+ LABEL_PG_CLUSTER = "pg-cluster"
+ LABEL_PG_CLUSTER_IDENTIFIER = "pg-cluster-id"
+ LABEL_PG_DATABASE = "pgo-pg-database"
+)
const LABEL_PGTASK = "pg-task"
-const LABEL_AUTOFAIL = "autofail"
-const LABEL_FAILOVER = "failover"
-const LABEL_RESTART = "restart"
-
-const LABEL_TARGET = "target"
-const LABEL_RMDATA = "pgrmdata"
-
-const LABEL_PGPOLICY = "pgpolicy"
-const LABEL_INGEST = "ingest"
-const LABEL_PGREMOVE = "pgremove"
-const LABEL_PVCNAME = "pvcname"
-const LABEL_EXPORTER = "crunchy-postgres-exporter"
-const LABEL_ARCHIVE = "archive"
-const LABEL_ARCHIVE_TIMEOUT = "archive-timeout"
-const LABEL_CUSTOM_CONFIG = "custom-config"
-const LABEL_NODE_LABEL_KEY = "NodeLabelKey"
-const LABEL_NODE_LABEL_VALUE = "NodeLabelValue"
-const LABEL_REPLICA_NAME = "replica-name"
-const LABEL_CCP_IMAGE_TAG_KEY = "ccp-image-tag"
-const LABEL_CCP_IMAGE_KEY = "ccp-image"
-const LABEL_IMAGE_PREFIX = "image-prefix"
-const LABEL_SERVICE_TYPE = "service-type"
-const LABEL_POD_ANTI_AFFINITY = "pg-pod-anti-affinity"
-const LABEL_SYNC_REPLICATION = "sync-replication"
-
-const LABEL_REPLICA_COUNT = "replica-count"
-const LABEL_STORAGE_CONFIG = "storage-config"
-const LABEL_NODE_LABEL = "node-label"
-const LABEL_VERSION = "version"
-const LABEL_PGO_VERSION = "pgo-version"
-const LABEL_DELETE_DATA = "delete-data"
-const LABEL_DELETE_DATA_STARTED = "delete-data-started"
-const LABEL_DELETE_BACKUPS = "delete-backups"
-const LABEL_IS_REPLICA = "is-replica"
-const LABEL_IS_BACKUP = "is-backup"
-const LABEL_STARTUP = "startup"
-const LABEL_SHUTDOWN = "shutdown"
+const (
+ LABEL_AUTOFAIL = "autofail"
+ LABEL_FAILOVER = "failover"
+ LABEL_RESTART = "restart"
+)
+
+const (
+ LABEL_TARGET = "target"
+ LABEL_RMDATA = "pgrmdata"
+)
+
+const (
+ LABEL_PGPOLICY = "pgpolicy"
+ LABEL_INGEST = "ingest"
+ LABEL_PGREMOVE = "pgremove"
+ LABEL_PVCNAME = "pvcname"
+ LABEL_EXPORTER = "crunchy-postgres-exporter"
+ LABEL_ARCHIVE = "archive"
+ LABEL_ARCHIVE_TIMEOUT = "archive-timeout"
+ LABEL_CUSTOM_CONFIG = "custom-config"
+ LABEL_NODE_LABEL_KEY = "NodeLabelKey"
+ LABEL_NODE_LABEL_VALUE = "NodeLabelValue"
+ LABEL_REPLICA_NAME = "replica-name"
+ LABEL_CCP_IMAGE_TAG_KEY = "ccp-image-tag"
+ LABEL_CCP_IMAGE_KEY = "ccp-image"
+ LABEL_IMAGE_PREFIX = "image-prefix"
+ LABEL_SERVICE_TYPE = "service-type"
+ LABEL_POD_ANTI_AFFINITY = "pg-pod-anti-affinity"
+ LABEL_SYNC_REPLICATION = "sync-replication"
+)
+
+const (
+ LABEL_REPLICA_COUNT = "replica-count"
+ LABEL_STORAGE_CONFIG = "storage-config"
+ LABEL_NODE_LABEL = "node-label"
+ LABEL_VERSION = "version"
+ LABEL_PGO_VERSION = "pgo-version"
+ LABEL_DELETE_DATA = "delete-data"
+ LABEL_DELETE_DATA_STARTED = "delete-data-started"
+ LABEL_DELETE_BACKUPS = "delete-backups"
+ LABEL_IS_REPLICA = "is-replica"
+ LABEL_IS_BACKUP = "is-backup"
+ LABEL_STARTUP = "startup"
+ LABEL_SHUTDOWN = "shutdown"
+)
// label for the pgcluster upgrade
const LABEL_UPGRADE = "upgrade"
-const LABEL_BACKREST = "pgo-backrest"
-const LABEL_BACKREST_JOB = "pgo-backrest-job"
-const LABEL_BACKREST_RESTORE = "pgo-backrest-restore"
-const LABEL_CONTAINER_NAME = "containername"
-const LABEL_POD_NAME = "podname"
-const LABEL_BACKREST_REPO_SECRET = "backrest-repo-config"
-const LABEL_BACKREST_COMMAND = "backrest-command"
-const LABEL_BACKREST_RESTORE_FROM_CLUSTER = "backrest-restore-from-cluster"
-const LABEL_BACKREST_RESTORE_OPTS = "backrest-restore-opts"
-const LABEL_BACKREST_BACKUP_OPTS = "backrest-backup-opts"
-const LABEL_BACKREST_OPTS = "backrest-opts"
-const LABEL_BACKREST_PITR_TARGET = "backrest-pitr-target"
-const LABEL_BACKREST_STORAGE_TYPE = "backrest-storage-type"
-const LABEL_BACKREST_S3_VERIFY_TLS = "backrest-s3-verify-tls"
-const LABEL_BADGER = "crunchy-pgbadger"
-const LABEL_BADGER_CCPIMAGE = "crunchy-pgbadger"
-const LABEL_BACKUP_TYPE_BACKREST = "pgbackrest"
-const LABEL_BACKUP_TYPE_PGDUMP = "pgdump"
-
-const LABEL_PGDUMP_COMMAND = "pgdump"
-const LABEL_PGDUMP_RESTORE = "pgdump-restore"
-const LABEL_PGDUMP_OPTS = "pgdump-opts"
-const LABEL_PGDUMP_HOST = "pgdump-host"
-const LABEL_PGDUMP_DB = "pgdump-db"
-const LABEL_PGDUMP_USER = "pgdump-user"
-const LABEL_PGDUMP_PORT = "pgdump-port"
-const LABEL_PGDUMP_ALL = "pgdump-all"
-const LABEL_PGDUMP_PVC = "pgdump-pvc"
-
-const LABEL_RESTORE_TYPE_PGRESTORE = "pgrestore"
-const LABEL_PGRESTORE_COMMAND = "pgrestore"
-const LABEL_PGRESTORE_HOST = "pgrestore-host"
-const LABEL_PGRESTORE_DB = "pgrestore-db"
-const LABEL_PGRESTORE_USER = "pgrestore-user"
-const LABEL_PGRESTORE_PORT = "pgrestore-port"
-const LABEL_PGRESTORE_FROM_CLUSTER = "pgrestore-from-cluster"
-const LABEL_PGRESTORE_FROM_PVC = "pgrestore-from-pvc"
-const LABEL_PGRESTORE_OPTS = "pgrestore-opts"
-const LABEL_PGRESTORE_PITR_TARGET = "pgrestore-pitr-target"
-
-const LABEL_DATA_ROOT = "data-root"
-const LABEL_PVC_NAME = "pvc-name"
-const LABEL_VOLUME_NAME = "volume-name"
-
-const LABEL_SESSION_ID = "sessionid"
-const LABEL_USERNAME = "username"
-const LABEL_ROLENAME = "rolename"
-const LABEL_PASSWORD = "password"
-
-const LABEL_PGADMIN = "crunchy-pgadmin"
-const LABEL_PGADMIN_TASK_ADD = "pgadmin-add"
-const LABEL_PGADMIN_TASK_CLUSTER = "pgadmin-cluster"
-const LABEL_PGADMIN_TASK_DELETE = "pgadmin-delete"
+const (
+ LABEL_BACKREST = "pgo-backrest"
+ LABEL_BACKREST_JOB = "pgo-backrest-job"
+ LABEL_BACKREST_RESTORE = "pgo-backrest-restore"
+ LABEL_CONTAINER_NAME = "containername"
+ LABEL_POD_NAME = "podname"
+ // #nosec: G101
+ LABEL_BACKREST_REPO_SECRET = "backrest-repo-config"
+ LABEL_BACKREST_COMMAND = "backrest-command"
+ LABEL_BACKREST_RESTORE_FROM_CLUSTER = "backrest-restore-from-cluster"
+ LABEL_BACKREST_RESTORE_OPTS = "backrest-restore-opts"
+ LABEL_BACKREST_BACKUP_OPTS = "backrest-backup-opts"
+ LABEL_BACKREST_OPTS = "backrest-opts"
+ LABEL_BACKREST_PITR_TARGET = "backrest-pitr-target"
+ LABEL_BACKREST_STORAGE_TYPE = "backrest-storage-type"
+ LABEL_BACKREST_S3_VERIFY_TLS = "backrest-s3-verify-tls"
+ LABEL_BADGER = "crunchy-pgbadger"
+ LABEL_BADGER_CCPIMAGE = "crunchy-pgbadger"
+ LABEL_BACKUP_TYPE_BACKREST = "pgbackrest"
+ LABEL_BACKUP_TYPE_PGDUMP = "pgdump"
+)
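The `// #nosec: G101` comment added above suppresses gosec's hardcoded-credentials rule for `LABEL_BACKREST_REPO_SECRET`: that string is the *name* of a Kubernetes Secret, not a credential, so the finding is a false positive. Mirroring the diff's annotation style, the comment sits directly above the constant it excuses:

```go
package main

import "fmt"

// The constant names a Secret object; it contains no credential
// material, so gosec's G101 (hardcoded credentials) finding is a
// false positive and is suppressed with the annotation below.
// #nosec: G101
const backrestRepoSecret = "backrest-repo-config"

func main() {
	fmt.Println(backrestRepoSecret)
}
```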
+
+const (
+ LABEL_PGDUMP_COMMAND = "pgdump"
+ LABEL_PGDUMP_RESTORE = "pgdump-restore"
+ LABEL_PGDUMP_OPTS = "pgdump-opts"
+ LABEL_PGDUMP_HOST = "pgdump-host"
+ LABEL_PGDUMP_DB = "pgdump-db"
+ LABEL_PGDUMP_USER = "pgdump-user"
+ LABEL_PGDUMP_PORT = "pgdump-port"
+ LABEL_PGDUMP_ALL = "pgdump-all"
+ LABEL_PGDUMP_PVC = "pgdump-pvc"
+)
+
+const (
+ LABEL_RESTORE_TYPE_PGRESTORE = "pgrestore"
+ LABEL_PGRESTORE_COMMAND = "pgrestore"
+ LABEL_PGRESTORE_HOST = "pgrestore-host"
+ LABEL_PGRESTORE_DB = "pgrestore-db"
+ LABEL_PGRESTORE_USER = "pgrestore-user"
+ LABEL_PGRESTORE_PORT = "pgrestore-port"
+ LABEL_PGRESTORE_FROM_CLUSTER = "pgrestore-from-cluster"
+ LABEL_PGRESTORE_FROM_PVC = "pgrestore-from-pvc"
+ LABEL_PGRESTORE_OPTS = "pgrestore-opts"
+ LABEL_PGRESTORE_PITR_TARGET = "pgrestore-pitr-target"
+)
+
+const (
+ LABEL_DATA_ROOT = "data-root"
+ LABEL_PVC_NAME = "pvc-name"
+ LABEL_VOLUME_NAME = "volume-name"
+)
+
+const (
+ LABEL_SESSION_ID = "sessionid"
+ LABEL_USERNAME = "username"
+ LABEL_ROLENAME = "rolename"
+ LABEL_PASSWORD = "password"
+)
+
+const (
+ LABEL_PGADMIN = "crunchy-pgadmin"
+ LABEL_PGADMIN_TASK_ADD = "pgadmin-add"
+ LABEL_PGADMIN_TASK_CLUSTER = "pgadmin-cluster"
+ LABEL_PGADMIN_TASK_DELETE = "pgadmin-delete"
+)
const LABEL_PGBOUNCER = "crunchy-pgbouncer"
-const LABEL_JOB_NAME = "job-name"
-const LABEL_PGBACKREST_STANZA = "pgbackrest-stanza"
-const LABEL_PGBACKREST_DB_PATH = "pgbackrest-db-path"
-const LABEL_PGBACKREST_REPO_PATH = "pgbackrest-repo-path"
-const LABEL_PGBACKREST_REPO_HOST = "pgbackrest-repo-host"
+const (
+ LABEL_JOB_NAME = "job-name"
+ LABEL_PGBACKREST_STANZA = "pgbackrest-stanza"
+ LABEL_PGBACKREST_DB_PATH = "pgbackrest-db-path"
+ LABEL_PGBACKREST_REPO_PATH = "pgbackrest-repo-path"
+ LABEL_PGBACKREST_REPO_HOST = "pgbackrest-repo-host"
+)
const LABEL_PGO_BACKREST_REPO = "pgo-backrest-repo"
-const LABEL_DEPLOYMENT_NAME = "deployment-name"
-const LABEL_SERVICE_NAME = "service-name"
-const LABEL_CURRENT_PRIMARY = "current-primary"
+const (
+ LABEL_DEPLOYMENT_NAME = "deployment-name"
+ LABEL_SERVICE_NAME = "service-name"
+ LABEL_CURRENT_PRIMARY = "current-primary"
+)
const LABEL_CLAIM_NAME = "claimName"
-const LABEL_PGO_PGOUSER = "pgo-pgouser"
-const LABEL_PGO_PGOROLE = "pgo-pgorole"
-const LABEL_PGOUSER = "pgouser"
-const LABEL_WORKFLOW_ID = "workflowid" // NOTE: this now matches crv1.PgtaskWorkflowID
-
-const LABEL_TRUE = "true"
-const LABEL_FALSE = "false"
-
-const LABEL_NAMESPACE = "namespace"
-const LABEL_PGO_INSTALLATION_NAME = "pgo-installation-name"
-const LABEL_VENDOR = "vendor"
-const LABEL_CRUNCHY = "crunchydata"
-const LABEL_PGO_CREATED_BY = "pgo-created-by"
-const LABEL_PGO_UPDATED_BY = "pgo-updated-by"
+const (
+ LABEL_PGO_PGOUSER = "pgo-pgouser"
+ LABEL_PGO_PGOROLE = "pgo-pgorole"
+ LABEL_PGOUSER = "pgouser"
+ LABEL_WORKFLOW_ID = "workflowid" // NOTE: this now matches crv1.PgtaskWorkflowID
+)
+
+const (
+ LABEL_TRUE = "true"
+ LABEL_FALSE = "false"
+)
+
+const (
+ LABEL_NAMESPACE = "namespace"
+ LABEL_PGO_INSTALLATION_NAME = "pgo-installation-name"
+ LABEL_VENDOR = "vendor"
+ LABEL_CRUNCHY = "crunchydata"
+ LABEL_PGO_CREATED_BY = "pgo-created-by"
+ LABEL_PGO_UPDATED_BY = "pgo-updated-by"
+)
const LABEL_FAILOVER_STARTED = "failover-started"
const GLOBAL_CUSTOM_CONFIGMAP = "pgo-custom-pg-config"
-const LABEL_PGHA_SCOPE = "crunchy-pgha-scope"
-const LABEL_PGHA_CONFIGMAP = "pgha-config"
-const LABEL_PGHA_BACKUP_TYPE = "pgha-backup-type"
-const LABEL_PGHA_ROLE = "role"
-const LABEL_PGHA_ROLE_PRIMARY = "master"
-const LABEL_PGHA_ROLE_REPLICA = "replica"
-const LABEL_PGHA_BOOTSTRAP = "pgha-bootstrap"
+const (
+ LABEL_PGHA_SCOPE = "crunchy-pgha-scope"
+ LABEL_PGHA_CONFIGMAP = "pgha-config"
+ LABEL_PGHA_BACKUP_TYPE = "pgha-backup-type"
+ LABEL_PGHA_ROLE = "role"
+ LABEL_PGHA_ROLE_PRIMARY = "master"
+ LABEL_PGHA_ROLE_REPLICA = "replica"
+ LABEL_PGHA_BOOTSTRAP = "pgha-bootstrap"
+)
diff --git a/internal/config/pgoconfig.go b/internal/config/pgoconfig.go
index bf7c6fb35d..c073d9c43f 100644
--- a/internal/config/pgoconfig.go
+++ b/internal/config/pgoconfig.go
@@ -38,8 +38,10 @@ import (
"sigs.k8s.io/yaml"
)
-const CustomConfigMapName = "pgo-config"
-const defaultConfigPath = "/default-pgo-config/"
+const (
+ CustomConfigMapName = "pgo-config"
+ defaultConfigPath = "/default-pgo-config/"
+)
var PgoDefaultServiceAccountTemplate *template.Template
@@ -260,17 +262,21 @@ type PgoConfig struct {
Storage map[string]StorageStruct
}
-const DEFAULT_SERVICE_TYPE = "ClusterIP"
-const LOAD_BALANCER_SERVICE_TYPE = "LoadBalancer"
-const NODEPORT_SERVICE_TYPE = "NodePort"
-const CONFIG_PATH = "pgo.yaml"
+const (
+ DEFAULT_SERVICE_TYPE = "ClusterIP"
+ LOAD_BALANCER_SERVICE_TYPE = "LoadBalancer"
+ NODEPORT_SERVICE_TYPE = "NodePort"
+ CONFIG_PATH = "pgo.yaml"
+)
-const DEFAULT_BACKREST_PORT = 2022
-const DEFAULT_PGADMIN_PORT = "5050"
-const DEFAULT_PGBADGER_PORT = "10000"
-const DEFAULT_EXPORTER_PORT = "9187"
-const DEFAULT_POSTGRES_PORT = "5432"
-const DEFAULT_PATRONI_PORT = "8009"
+const (
+ DEFAULT_BACKREST_PORT = 2022
+ DEFAULT_PGADMIN_PORT = "5050"
+ DEFAULT_PGBADGER_PORT = "10000"
+ DEFAULT_EXPORTER_PORT = "9187"
+ DEFAULT_POSTGRES_PORT = "5432"
+ DEFAULT_PATRONI_PORT = "8009"
+)
func (c *PgoConfig) Validate() error {
var err error
@@ -508,19 +514,16 @@ func (c *PgoConfig) GetStorageSpec(name string) (crv1.PgStorageSpec, error) {
}
return storage, err
-
}
func (c *PgoConfig) GetConfig(clientset kubernetes.Interface, namespace string) error {
-
cMap, err := initialize(clientset, namespace)
-
if err != nil {
log.Errorf("could not get ConfigMap: %s", err.Error())
return err
}
- //get the pgo.yaml config file
+ // get the pgo.yaml config file
str := cMap.Data[CONFIG_PATH]
if str == "" {
return fmt.Errorf("could not get %s from ConfigMap", CONFIG_PATH)
@@ -541,7 +544,7 @@ func (c *PgoConfig) GetConfig(clientset kubernetes.Interface, namespace string)
c.CheckEnv()
- //load up all the templates
+ // load up all the templates
PgoDefaultServiceAccountTemplate, err = c.LoadTemplate(cMap, PGODefaultServiceAccountPath)
if err != nil {
return err
@@ -729,7 +732,7 @@ func getOperatorConfigMap(clientset kubernetes.Interface, namespace string) (*v1
return clientset.CoreV1().ConfigMaps(namespace).Get(ctx, CustomConfigMapName, metav1.GetOptions{})
}
-// initialize attemps to get the configuration ConfigMap based on a name.
+// initialize attempts to get the configuration ConfigMap based on a name.
// If the ConfigMap does not exist, a ConfigMap is created from the default
// configuration path
func initialize(clientset kubernetes.Interface, namespace string) (*v1.ConfigMap, error) {
@@ -812,7 +815,6 @@ func (c *PgoConfig) LoadTemplate(cMap *v1.ConfigMap, path string) (*template.Tem
// if we have a value for the templated file, return
return template.Must(template.New(path).Parse(value)), nil
-
}
// DefaultTemplate attempts to load a default configuration template file
@@ -825,7 +827,6 @@ func (c *PgoConfig) DefaultTemplate(path string) (string, error) {
// read in the file from the default path
buf, err := ioutil.ReadFile(fullPath)
-
if err != nil {
log.Errorf("error: could not read %s", fullPath)
log.Error(err)
diff --git a/internal/config/secrets.go b/internal/config/secrets.go
index 2cc2b5ba1b..f518c813ba 100644
--- a/internal/config/secrets.go
+++ b/internal/config/secrets.go
@@ -15,4 +15,5 @@ package config
limitations under the License.
*/
+// #nosec: G101
const SecretOperatorBackrestRepoConfig = "pgo-backrest-repo-config"
diff --git a/internal/config/volumes.go b/internal/config/volumes.go
index d21c2d6a4e..8723f9670a 100644
--- a/internal/config/volumes.go
+++ b/internal/config/volumes.go
@@ -22,8 +22,10 @@ import (
)
// volume configuration settings used by the PostgreSQL data directory and mount
-const VOLUME_POSTGRESQL_DATA = "pgdata"
-const VOLUME_POSTGRESQL_DATA_MOUNT_PATH = "/pgdata"
+const (
+ VOLUME_POSTGRESQL_DATA = "pgdata"
+ VOLUME_POSTGRESQL_DATA_MOUNT_PATH = "/pgdata"
+)
// PostgreSQLWALVolumeMount returns the VolumeMount for the PostgreSQL WAL directory.
func PostgreSQLWALVolumeMount() core_v1.VolumeMount {
@@ -36,12 +38,16 @@ func PostgreSQLWALPath(cluster string) string {
}
// volume configuration settings used by the pgBackRest repo mount
-const VOLUME_PGBACKREST_REPO_NAME = "backrestrepo"
-const VOLUME_PGBACKREST_REPO_MOUNT_PATH = "/backrestrepo"
+const (
+ VOLUME_PGBACKREST_REPO_NAME = "backrestrepo"
+ VOLUME_PGBACKREST_REPO_MOUNT_PATH = "/backrestrepo"
+)
// volume configuration settings used by the SSHD secret
-const VOLUME_SSHD_NAME = "sshd"
-const VOLUME_SSHD_MOUNT_PATH = "/sshd"
+const (
+ VOLUME_SSHD_NAME = "sshd"
+ VOLUME_SSHD_MOUNT_PATH = "/sshd"
+)
// volume configuration settings used by tablespaces
diff --git a/internal/controller/configmap/configmapcontroller.go b/internal/controller/configmap/configmapcontroller.go
index a7145ea6bf..a390075b67 100644
--- a/internal/controller/configmap/configmapcontroller.go
+++ b/internal/controller/configmap/configmapcontroller.go
@@ -48,7 +48,6 @@ type Controller struct {
func NewConfigMapController(restConfig *rest.Config,
clientset kubernetes.Interface, coreInformer coreinformers.ConfigMapInformer,
pgoInformer pgoinformers.PgclusterInformer, workerCount int) (*Controller, error) {
-
controller := &Controller{
cmRESTConfig: restConfig,
kubeclientset: clientset,
@@ -77,7 +76,6 @@ func NewConfigMapController(restConfig *rest.Config,
// function in order to read and process a message on the worker queue. Once the worker queue
// is instructed to shutdown, a message is written to the done channel.
func (c *Controller) RunWorker(stopCh <-chan struct{}, doneCh chan<- struct{}) {
-
go c.waitForShutdown(stopCh)
for c.processNextWorkItem() {
@@ -105,7 +103,6 @@ func (c *Controller) ShutdownWorker() {
// so, the configMap resource is converted into a namespace/name string and is then added to the
// work queue
func (c *Controller) enqueueConfigMap(obj interface{}) {
-
configMap := obj.(*apiv1.ConfigMap)
labels := configMap.GetObjectMeta().GetLabels()
@@ -128,7 +125,6 @@ func (c *Controller) enqueueConfigMap(obj interface{}) {
// processNextWorkItem will read a single work item off the work queue and processes it via
// the ConfigMap sync handler
func (c *Controller) processNextWorkItem() bool {
-
obj, shutdown := c.workqueue.Get()
if shutdown {
diff --git a/internal/controller/configmap/synchandler.go b/internal/controller/configmap/synchandler.go
index 9309c0555c..b556f1561a 100644
--- a/internal/controller/configmap/synchandler.go
+++ b/internal/controller/configmap/synchandler.go
@@ -31,7 +31,6 @@ import (
// handleConfigMapSync is responsible for syncing a configMap resource that has obtained from
// the ConfigMap controller's worker queue
func (c *Controller) handleConfigMapSync(key string) error {
-
log.Debugf("ConfigMap Controller: handling a configmap sync for key %s", key)
namespace, configMapName, err := cache.SplitMetaNamespaceKey(key)
@@ -72,7 +71,7 @@ func (c *Controller) handleConfigMapSync(key string) error {
return nil
}
- c.syncPGHAConfig(c.createPGHAConfigs(configMap, clusterName,
+ c.syncPGHAConfig(c.createPGHAConfigs(configMap,
cluster.GetObjectMeta().GetLabels()[config.LABEL_PGHA_SCOPE]))
return nil
@@ -80,8 +79,7 @@ func (c *Controller) handleConfigMapSync(key string) error {
// createConfigurerMap creates the configs needed to sync the PGHA configMap
func (c *Controller) createPGHAConfigs(configMap *corev1.ConfigMap,
- clusterName, clusterScope string) []cfg.Syncer {
-
+ clusterScope string) []cfg.Syncer {
var configSyncers []cfg.Syncer
configSyncers = append(configSyncers, cfg.NewDCS(configMap, c.kubeclientset, clusterScope))
@@ -100,7 +98,6 @@ func (c *Controller) createPGHAConfigs(configMap *corev1.ConfigMap,
// syncAllConfigs takes a map of configurers and runs their sync functions concurrently
func (c *Controller) syncPGHAConfig(configSyncers []cfg.Syncer) {
-
var wg sync.WaitGroup
for _, configSyncer := range configSyncers {
diff --git a/internal/controller/controllerutil.go b/internal/controller/controllerutil.go
index 7bbcd2b981..4b1f5da6ba 100644
--- a/internal/controller/controllerutil.go
+++ b/internal/controller/controllerutil.go
@@ -62,7 +62,8 @@ func InitializeReplicaCreation(clientset pgo.Interface, clusterName,
log.Error(err)
return err
}
- for _, pgreplica := range pgreplicaList.Items {
+ for i := range pgreplicaList.Items {
+ pgreplica := &pgreplicaList.Items[i]
if pgreplica.Annotations == nil {
pgreplica.Annotations = make(map[string]string)
@@ -70,7 +71,7 @@ func InitializeReplicaCreation(clientset pgo.Interface, clusterName,
pgreplica.Annotations[config.ANNOTATION_PGHA_BOOTSTRAP_REPLICA] = "true"
- if _, err = clientset.CrunchydataV1().Pgreplicas(namespace).Update(ctx, &pgreplica, metav1.UpdateOptions{}); err != nil {
+ if _, err = clientset.CrunchydataV1().Pgreplicas(namespace).Update(ctx, pgreplica, metav1.UpdateOptions{}); err != nil {
log.Error(err)
return err
}
diff --git a/internal/controller/job/backresthandler.go b/internal/controller/job/backresthandler.go
index cab5f3d068..40c10110ae 100644
--- a/internal/controller/job/backresthandler.go
+++ b/internal/controller/job/backresthandler.go
@@ -32,32 +32,29 @@ import (
)
// backrestUpdateHandler is responsible for handling updates to backrest jobs
-func (c *Controller) handleBackrestUpdate(job *apiv1.Job) error {
-
+func (c *Controller) handleBackrestUpdate(job *apiv1.Job) {
// return if job wasn't successful
if !isJobSuccessful(job) {
log.Debugf("jobController onUpdate job %s was unsuccessful and will be ignored",
job.Name)
- return nil
+ return
}
// return if job is being deleted
if isJobInForegroundDeletion(job) {
log.Debugf("jobController onUpdate job %s is being deleted and will be ignored",
job.Name)
- return nil
+ return
}
labels := job.GetObjectMeta().GetLabels()
switch {
case labels[config.LABEL_BACKREST_COMMAND] == "backup":
- c.handleBackrestBackupUpdate(job)
+ _ = c.handleBackrestBackupUpdate(job)
case labels[config.LABEL_BACKREST_COMMAND] == crv1.PgtaskBackrestStanzaCreate:
- c.handleBackrestStanzaCreateUpdate(job)
+ _ = c.handleBackrestStanzaCreateUpdate(job)
}
-
- return nil
}
// handleBackrestRestoreUpdate is responsible for handling updates to backrest backup jobs
@@ -79,7 +76,7 @@ func (c *Controller) handleBackrestBackupUpdate(job *apiv1.Job) error {
if err != nil {
log.Errorf("error in patching pgtask %s: %s", job.ObjectMeta.SelfLink, err.Error())
}
- publishBackupComplete(labels[config.LABEL_PG_CLUSTER], job.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER], job.ObjectMeta.Labels[config.LABEL_PGOUSER], "pgbackrest", job.ObjectMeta.Namespace, "")
+ publishBackupComplete(labels[config.LABEL_PG_CLUSTER], job.ObjectMeta.Labels[config.LABEL_PGOUSER], "pgbackrest", job.ObjectMeta.Namespace, "")
// If the completed backup was a cluster bootstrap backup, then mark the cluster as initialized
// and initiate the creation of any replicas. Otherwise if the completed backup was taken as
@@ -87,11 +84,11 @@ func (c *Controller) handleBackrestBackupUpdate(job *apiv1.Job) error {
if labels[config.LABEL_PGHA_BACKUP_TYPE] == crv1.BackupTypeBootstrap {
log.Debugf("jobController onUpdate initial backup complete")
- controller.SetClusterInitializedStatus(c.Client, labels[config.LABEL_PG_CLUSTER],
+ _ = controller.SetClusterInitializedStatus(c.Client, labels[config.LABEL_PG_CLUSTER],
job.ObjectMeta.Namespace)
// now initialize the creation of any replica
- controller.InitializeReplicaCreation(c.Client, labels[config.LABEL_PG_CLUSTER],
+ _ = controller.InitializeReplicaCreation(c.Client, labels[config.LABEL_PG_CLUSTER],
job.ObjectMeta.Namespace)
} else if labels[config.LABEL_PGHA_BACKUP_TYPE] == crv1.BackupTypeFailover {
@@ -141,7 +138,7 @@ func (c *Controller) handleBackrestStanzaCreateUpdate(job *apiv1.Job) error {
if cluster.Spec.Standby {
log.Debugf("job Controller: standby cluster %s will now be set to an initialized "+
"status", clusterName)
- controller.SetClusterInitializedStatus(c.Client, clusterName, namespace)
+ _ = controller.SetClusterInitializedStatus(c.Client, clusterName, namespace)
return nil
}
@@ -153,7 +150,7 @@ func (c *Controller) handleBackrestStanzaCreateUpdate(job *apiv1.Job) error {
return err
}
- backrest.CreateInitialBackup(c.Client, job.ObjectMeta.Namespace,
+ _, _ = backrest.CreateInitialBackup(c.Client, job.ObjectMeta.Namespace,
clusterName, backrestRepoPodName)
}
diff --git a/internal/controller/job/bootstraphandler.go b/internal/controller/job/bootstraphandler.go
index 580da57041..7b64937642 100644
--- a/internal/controller/job/bootstraphandler.go
+++ b/internal/controller/job/bootstraphandler.go
@@ -115,7 +115,7 @@ func (c *Controller) handleBootstrapUpdate(job *apiv1.Job) error {
namespace, crv1.PgtaskWorkflowBackrestRestorePrimaryCreatedStatus); err != nil {
log.Warn(err)
}
- publishRestoreComplete(labels[config.LABEL_PG_CLUSTER], labels[config.LABEL_PG_CLUSTER_IDENTIFIER],
+ publishRestoreComplete(labels[config.LABEL_PG_CLUSTER],
labels[config.LABEL_PGOUSER], job.ObjectMeta.Namespace)
}
diff --git a/internal/controller/job/jobcontroller.go b/internal/controller/job/jobcontroller.go
index 85e5e82c57..aa11399b47 100644
--- a/internal/controller/job/jobcontroller.go
+++ b/internal/controller/job/jobcontroller.go
@@ -33,11 +33,10 @@ type Controller struct {
// onAdd is called when a postgresql operator job is created and an associated add event is
// generated
func (c *Controller) onAdd(obj interface{}) {
-
job := obj.(*apiv1.Job)
labels := job.GetObjectMeta().GetLabels()
- //only process jobs with with vendor=crunchydata label
+	// only process jobs with vendor=crunchydata label
if labels[config.LABEL_VENDOR] != "crunchydata" {
return
}
@@ -48,12 +47,11 @@ func (c *Controller) onAdd(obj interface{}) {
// onUpdate is called when a postgresql operator job is created and an associated update event is
// generated
func (c *Controller) onUpdate(oldObj, newObj interface{}) {
-
var err error
job := newObj.(*apiv1.Job)
labels := job.GetObjectMeta().GetLabels()
- //only process jobs with with vendor=crunchydata label
+	// only process jobs with vendor=crunchydata label
if labels[config.LABEL_VENDOR] != "crunchydata" {
return
}
@@ -69,7 +67,7 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
err = c.handleRMDataUpdate(job)
case labels[config.LABEL_BACKREST] == "true" ||
labels[config.LABEL_BACKREST_RESTORE] == "true":
- err = c.handleBackrestUpdate(job)
+ c.handleBackrestUpdate(job)
case labels[config.LABEL_BACKUP_TYPE_PGDUMP] == "true":
err = c.handlePGDumpUpdate(job)
case labels[config.LABEL_RESTORE_TYPE_PGRESTORE] == "true":
@@ -85,11 +83,10 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
// onDelete is called when a postgresql operator job is deleted
func (c *Controller) onDelete(obj interface{}) {
-
job := obj.(*apiv1.Job)
labels := job.GetObjectMeta().GetLabels()
- //only process jobs with with vendor=crunchydata label
+	// only process jobs with vendor=crunchydata label
if labels[config.LABEL_VENDOR] != "crunchydata" {
return
}
@@ -99,7 +96,6 @@ func (c *Controller) onDelete(obj interface{}) {
// AddJobEventHandler adds the job event handler to the job informer
func (c *Controller) AddJobEventHandler() {
-
c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: c.onAdd,
UpdateFunc: c.onUpdate,
diff --git a/internal/controller/job/jobevents.go b/internal/controller/job/jobevents.go
index ef4f1a1760..df21ba3ef6 100644
--- a/internal/controller/job/jobevents.go
+++ b/internal/controller/job/jobevents.go
@@ -22,7 +22,7 @@ import (
log "github.com/sirupsen/logrus"
)
-func publishBackupComplete(clusterName, clusterIdentifier, username, backuptype, namespace, path string) {
+func publishBackupComplete(clusterName, username, backuptype, namespace, path string) {
topics := make([]string, 2)
topics[0] = events.EventTopicCluster
topics[1] = events.EventTopicBackup
@@ -44,10 +44,9 @@ func publishBackupComplete(clusterName, clusterIdentifier, username, backuptype,
if err != nil {
log.Error(err.Error())
}
-
}
-func publishRestoreComplete(clusterName, identifier, username, namespace string) {
+func publishRestoreComplete(clusterName, username, namespace string) {
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
@@ -66,10 +65,9 @@ func publishRestoreComplete(clusterName, identifier, username, namespace string)
if err != nil {
log.Error(err.Error())
}
-
}
-func publishDeleteClusterComplete(clusterName, identifier, username, namespace string) {
+func publishDeleteClusterComplete(clusterName, username, namespace string) {
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
diff --git a/internal/controller/job/pgdumphandler.go b/internal/controller/job/pgdumphandler.go
index 5b33285789..0fd25918b2 100644
--- a/internal/controller/job/pgdumphandler.go
+++ b/internal/controller/job/pgdumphandler.go
@@ -45,7 +45,7 @@ func (c *Controller) handlePGDumpUpdate(job *apiv1.Job) error {
status = crv1.JobErrorStatus + " [" + job.ObjectMeta.Name + "]"
}
- //update the pgdump task status to submitted - updates task, not the job.
+ // update the pgdump task status to submitted - updates task, not the job.
dumpTask := labels[config.LABEL_PGTASK]
patch, err := kubeapi.NewJSONPatch().Add("spec", "status")(status).Bytes()
if err == nil {
@@ -81,7 +81,7 @@ func (c *Controller) handlePGRestoreUpdate(job *apiv1.Job) error {
status = crv1.JobErrorStatus + " [" + job.ObjectMeta.Name + "]"
}
- //update the pgdump task status to submitted - updates task, not the job.
+ // update the pgdump task status to submitted - updates task, not the job.
restoreTask := labels[config.LABEL_PGTASK]
patch, err := kubeapi.NewJSONPatch().Add("spec", "status")(status).Bytes()
if err == nil {
diff --git a/internal/controller/job/rmdatahandler.go b/internal/controller/job/rmdatahandler.go
index 0aa1624f7d..5fb8b0ed6c 100644
--- a/internal/controller/job/rmdatahandler.go
+++ b/internal/controller/job/rmdatahandler.go
@@ -48,7 +48,6 @@ func (c *Controller) handleRMDataUpdate(job *apiv1.Job) error {
log.Debugf("jobController onUpdate rmdata job succeeded")
publishDeleteClusterComplete(labels[config.LABEL_PG_CLUSTER],
- job.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER],
job.ObjectMeta.Labels[config.LABEL_PGOUSER],
job.ObjectMeta.Namespace)
@@ -77,7 +76,7 @@ func (c *Controller) handleRMDataUpdate(job *apiv1.Job) error {
return fmt.Errorf("could not remove Job %s for some reason after max tries", job.Name)
}
- //if a user has specified --archive for a cluster then
+ // if a user has specified --archive for a cluster then
// an xlog PVC will be present and can be removed
pvcName := clusterName + "-xlog"
if err := pvc.DeleteIfExists(c.Client.Clientset, pvcName, job.Namespace); err != nil {
@@ -85,7 +84,7 @@ func (c *Controller) handleRMDataUpdate(job *apiv1.Job) error {
return err
}
- //delete any completed jobs for this cluster as a cleanup
+ // delete any completed jobs for this cluster as a cleanup
jobList, err := c.Client.
BatchV1().Jobs(job.Namespace).
List(ctx, metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER + "=" + clusterName})
diff --git a/internal/controller/manager/controllermanager.go b/internal/controller/manager/controllermanager.go
index 236ed1bb74..4a6ac9dc2c 100644
--- a/internal/controller/manager/controllermanager.go
+++ b/internal/controller/manager/controllermanager.go
@@ -84,7 +84,6 @@ type controllerGroup struct {
func NewControllerManager(namespaces []string,
pgoConfig config.PgoConfig, pgoNamespace, installationName string,
namespaceOperatingMode ns.NamespaceOperatingMode) (*ControllerManager, error) {
-
controllerManager := ControllerManager{
controllers: make(map[string]*controllerGroup),
installationName: installationName,
@@ -123,7 +122,6 @@ func NewControllerManager(namespaces []string,
// easily started as needed). Each controller group also receives its own clients, which can then
// be utilized by the various controllers within that controller group.
func (c *ControllerManager) AddGroup(namespace string) error {
-
c.mgrMutex.Lock()
defer c.mgrMutex.Unlock()
@@ -139,7 +137,6 @@ func (c *ControllerManager) AddGroup(namespace string) error {
// AddAndRunGroup is a convenience function that adds a controller group for the
// namespace specified, and then immediately runs the controllers in that group.
func (c *ControllerManager) AddAndRunGroup(namespace string) error {
-
if c.controllers[namespace] != nil && !c.pgoConfig.Pgo.DisableReconcileRBAC {
// first reconcile RBAC in the target namespace if RBAC reconciliation is enabled
c.reconcileRBAC(namespace)
@@ -165,7 +162,6 @@ func (c *ControllerManager) AddAndRunGroup(namespace string) error {
// RemoveAll removes all controller groups managed by the controller manager, first stopping all
// controllers within each controller group managed by the controller manager.
func (c *ControllerManager) RemoveAll() {
-
c.mgrMutex.Lock()
defer c.mgrMutex.Unlock()
@@ -179,7 +175,6 @@ func (c *ControllerManager) RemoveAll() {
// RemoveGroup removes the controller group for the namespace specified, first stopping all
// controllers within that group
func (c *ControllerManager) RemoveGroup(namespace string) {
-
c.mgrMutex.Lock()
defer c.mgrMutex.Unlock()
@@ -188,7 +183,6 @@ func (c *ControllerManager) RemoveGroup(namespace string) {
// RunAll runs all controllers across all controller groups managed by the controller manager.
func (c *ControllerManager) RunAll() error {
-
c.mgrMutex.Lock()
defer c.mgrMutex.Unlock()
@@ -205,7 +199,6 @@ func (c *ControllerManager) RunAll() error {
// RunGroup runs the controllers within the controller group for the namespace specified.
func (c *ControllerManager) RunGroup(namespace string) error {
-
c.mgrMutex.Lock()
defer c.mgrMutex.Unlock()
@@ -226,7 +219,6 @@ func (c *ControllerManager) RunGroup(namespace string) error {
// addControllerGroup adds a new controller group for the namespace specified
func (c *ControllerManager) addControllerGroup(namespace string) error {
-
if _, ok := c.controllers[namespace]; ok {
log.Debugf("Controller Manager: a controller for namespace %s already exists", namespace)
return controller.ErrControllerGroupExists
@@ -340,7 +332,6 @@ func (c *ControllerManager) addControllerGroup(namespace string) error {
// hasListerPrivs verifies the Operator has the privileges required to start the controllers
// for the namespace specified.
func (c *ControllerManager) hasListerPrivs(namespace string) bool {
-
controllerGroup := c.controllers[namespace]
var err error
@@ -389,7 +380,6 @@ func (c *ControllerManager) hasListerPrivs(namespace string) bool {
// runControllerGroup is responsible running the controllers for the controller group corresponding
// to the namespace provided
func (c *ControllerManager) runControllerGroup(namespace string) error {
-
controllerGroup := c.controllers[namespace]
hasListerPrivs := c.hasListerPrivs(namespace)
@@ -442,7 +432,6 @@ func (c *ControllerManager) runControllerGroup(namespace string) error {
// queues associated with the controllers inside of the controller group are first shutdown
// prior to removing the controller group.
func (c *ControllerManager) removeControllerGroup(namespace string) {
-
if _, ok := c.controllers[namespace]; !ok {
log.Debugf("Controller Manager: no controller group to remove for ns %s", namespace)
return
@@ -458,7 +447,6 @@ func (c *ControllerManager) removeControllerGroup(namespace string) {
// done by calling the ShutdownWorker function associated with the controller. If the controller
// does not have a ShutdownWorker function then no action is taken.
func (c *ControllerManager) stopControllerGroup(namespace string) {
-
if _, ok := c.controllers[namespace]; !ok {
log.Debugf("Controller Manager: unable to stop controller group for namespace %s because "+
"a controller group for this namespace does not exist", namespace)
diff --git a/internal/controller/manager/rbac.go b/internal/controller/manager/rbac.go
index bb972e5202..8cad4ff247 100644
--- a/internal/controller/manager/rbac.go
+++ b/internal/controller/manager/rbac.go
@@ -83,7 +83,6 @@ func (c *ControllerManager) reconcileRBAC(targetNamespace string) {
// reconcileRoles reconciles the Roles required by the operator in a target namespace
func (c *ControllerManager) reconcileRoles(targetNamespace string) {
-
reconcileRoles := map[string]*template.Template{
ns.PGO_TARGET_ROLE: config.PgoTargetRoleTemplate,
ns.PGO_BACKREST_ROLE: config.PgoBackrestRoleTemplate,
@@ -101,7 +100,6 @@ func (c *ControllerManager) reconcileRoles(targetNamespace string) {
// reconcileRoleBindings reconciles the RoleBindings required by the operator in a
// target namespace
func (c *ControllerManager) reconcileRoleBindings(targetNamespace string) {
-
reconcileRoleBindings := map[string]*template.Template{
ns.PGO_TARGET_ROLE_BINDING: config.PgoTargetRoleBindingTemplate,
ns.PGO_BACKREST_ROLE_BINDING: config.PgoBackrestRoleBindingTemplate,
@@ -120,7 +118,6 @@ func (c *ControllerManager) reconcileRoleBindings(targetNamespace string) {
// target namespace
func (c *ControllerManager) reconcileServiceAccounts(targetNamespace string,
imagePullSecrets []v1.LocalObjectReference) (saCreatedOrUpdated bool) {
-
reconcileServiceAccounts := map[string]*template.Template{
ns.PGO_DEFAULT_SERVICE_ACCOUNT: config.PgoDefaultServiceAccountTemplate,
ns.PGO_TARGET_SERVICE_ACCOUNT: config.PgoTargetServiceAccountTemplate,
diff --git a/internal/controller/namespace/namespacecontroller.go b/internal/controller/namespace/namespacecontroller.go
index 6fc85f644f..609a0715d0 100644
--- a/internal/controller/namespace/namespacecontroller.go
+++ b/internal/controller/namespace/namespacecontroller.go
@@ -43,7 +43,6 @@ type Controller struct {
// PostgreSQL Operator are added and deleted.
func NewNamespaceController(controllerManager controller.Manager,
informer coreinformers.NamespaceInformer, workerCount int) (*Controller, error) {
-
controller := &Controller{
ControllerManager: controllerManager,
Informer: informer,
@@ -72,7 +71,6 @@ func NewNamespaceController(controllerManager controller.Manager,
// function in order to read and process a message on the worker queue. Once the worker queue
// is instructed to shutdown, a message is written to the done channel.
func (c *Controller) RunWorker(stopCh <-chan struct{}) {
-
go c.waitForShutdown(stopCh)
for c.processNextWorkItem() {
@@ -96,7 +94,6 @@ func (c *Controller) ShutdownWorker() {
// so, the namespace resource is converted into a namespace/name string and is then added to the
// work queue
func (c *Controller) enqueueNamespace(obj interface{}) {
-
var key string
var err error
if key, err = cache.MetaNamespaceKeyFunc(obj); err != nil {
@@ -109,7 +106,6 @@ func (c *Controller) enqueueNamespace(obj interface{}) {
// processNextWorkItem will read a single work item off the work queue and processes it via
// the Namespace sync handler
func (c *Controller) processNextWorkItem() bool {
-
obj, shutdown := c.workqueue.Get()
if shutdown {
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index 24a6b78a6b..5c745fec7c 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -63,7 +63,6 @@ func (c *Controller) onAdd(obj interface{}) {
// processNextWorkItem function in order to read and process a message on the
// workqueue.
func (c *Controller) RunWorker(stopCh <-chan struct{}, doneCh chan<- struct{}) {
-
go c.waitForShutdown(stopCh)
for c.processNextItem() {
@@ -101,7 +100,7 @@ func (c *Controller) processNextItem() bool {
// parallel.
defer c.Queue.Done(key)
- //get the pgcluster
+ // get the pgcluster
cluster, err := c.Client.CrunchydataV1().Pgclusters(keyNamespace).Get(ctx, keyResourceName, metav1.GetOptions{})
if err != nil {
log.Debugf("cluster add - pgcluster not found, this is invalid")
@@ -196,10 +195,10 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
// shutdown or started but its current status does not properly reflect that it is, then
// proceed with the logic needed to either shutdown or start the cluster
if newcluster.Spec.Shutdown && newcluster.Status.State != crv1.PgclusterStateShutdown {
- clusteroperator.ShutdownCluster(c.Client, *newcluster)
+ _ = clusteroperator.ShutdownCluster(c.Client, *newcluster)
} else if !newcluster.Spec.Shutdown &&
newcluster.Status.State == crv1.PgclusterStateShutdown {
- clusteroperator.StartupCluster(c.Client, *newcluster)
+ _ = clusteroperator.StartupCluster(c.Client, *newcluster)
}
// check to see if the "autofail" label on the pgcluster CR has been changed from either true to false, or from
@@ -217,7 +216,7 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
return
}
if autofailEnabledNew != autofailEnabledOld {
- util.ToggleAutoFailover(c.Client, autofailEnabledNew,
+ _ = util.ToggleAutoFailover(c.Client, autofailEnabledNew,
newcluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE],
newcluster.ObjectMeta.Namespace)
}
@@ -329,16 +328,15 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
// onDelete is called when a pgcluster is deleted
func (c *Controller) onDelete(obj interface{}) {
- //cluster := obj.(*crv1.Pgcluster)
+ // cluster := obj.(*crv1.Pgcluster)
// log.Debugf("[Controller] ns=%s onDelete %s", cluster.ObjectMeta.Namespace, cluster.ObjectMeta.SelfLink)
- //handle pgcluster cleanup
+ // handle pgcluster cleanup
// clusteroperator.DeleteClusterBase(c.PgclusterClientset, c.PgclusterClient, cluster, cluster.ObjectMeta.Namespace)
}
// AddPGClusterEventHandler adds the pgcluster event handler to the pgcluster informer
func (c *Controller) AddPGClusterEventHandler() {
-
c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: c.onAdd,
UpdateFunc: c.onUpdate,
@@ -466,7 +464,6 @@ func updatePgBouncer(c *Controller, oldCluster *crv1.Pgcluster, newCluster *crv1
func updateTablespaces(c *Controller, oldCluster *crv1.Pgcluster, newCluster *crv1.Pgcluster) error {
// first, get a list of all of the instance deployments for the cluster
deployments, err := operator.GetInstanceDeployments(c.Client, newCluster)
-
if err != nil {
return err
}
diff --git a/internal/controller/pgpolicy/pgpolicycontroller.go b/internal/controller/pgpolicy/pgpolicycontroller.go
index a9eef2e3ec..27d640475b 100644
--- a/internal/controller/pgpolicy/pgpolicycontroller.go
+++ b/internal/controller/pgpolicy/pgpolicycontroller.go
@@ -44,8 +44,8 @@ func (c *Controller) onAdd(obj interface{}) {
policy := obj.(*crv1.Pgpolicy)
log.Debugf("[pgpolicy Controller] onAdd ns=%s %s", policy.ObjectMeta.Namespace, policy.ObjectMeta.SelfLink)
- //handle the case of when a pgpolicy is already processed, which
- //is the case when the operator restarts
+ // handle the case where a pgpolicy has already been processed,
+ // which occurs when the operator restarts
if policy.Status.State == crv1.PgpolicyStateProcessed {
log.Debug("pgpolicy " + policy.ObjectMeta.Name + " already processed")
return
@@ -65,7 +65,7 @@ func (c *Controller) onAdd(obj interface{}) {
log.Errorf("ERROR updating pgpolicy status: %s", err.Error())
}
- //publish event
+ // publish event
topics := make([]string, 1)
topics[0] = events.EventTopicPolicy
@@ -84,7 +84,6 @@ func (c *Controller) onAdd(obj interface{}) {
if err != nil {
log.Error(err.Error())
}
-
}
// onUpdate is called when a pgpolicy is updated
@@ -98,7 +97,7 @@ func (c *Controller) onDelete(obj interface{}) {
log.Debugf("DELETED pgpolicy %s", policy.ObjectMeta.Name)
- //publish event
+ // publish event
topics := make([]string, 1)
topics[0] = events.EventTopicPolicy
@@ -117,12 +116,10 @@ func (c *Controller) onDelete(obj interface{}) {
if err != nil {
log.Error(err.Error())
}
-
}
// AddPGPolicyEventHandler adds the pgpolicy event handler to the pgpolicy informer
func (c *Controller) AddPGPolicyEventHandler() {
-
c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: c.onAdd,
UpdateFunc: c.onUpdate,
diff --git a/internal/controller/pgreplica/pgreplicacontroller.go b/internal/controller/pgreplica/pgreplicacontroller.go
index e3d10128c9..91325b7066 100644
--- a/internal/controller/pgreplica/pgreplicacontroller.go
+++ b/internal/controller/pgreplica/pgreplicacontroller.go
@@ -44,7 +44,6 @@ type Controller struct {
// processNextWorkItem function in order to read and process a message on the
// workqueue.
func (c *Controller) RunWorker(stopCh <-chan struct{}, doneCh chan<- struct{}) {
-
go c.waitForShutdown(stopCh)
for c.processNextItem() {
@@ -96,8 +95,8 @@ func (c *Controller) processNextItem() bool {
} else {
log.Debugf("working...no replica found, means we process")
- //handle the case of when a pgreplica is added which is
- //scaling up a cluster
+ // handle the case where a pgreplica is added, which means
+ // we are scaling up a cluster
replica, err := c.Clientset.CrunchydataV1().Pgreplicas(keyNamespace).Get(ctx, keyResourceName, metav1.GetOptions{})
if err != nil {
log.Error(err)
@@ -155,8 +154,8 @@ func (c *Controller) processNextItem() bool {
func (c *Controller) onAdd(obj interface{}) {
replica := obj.(*crv1.Pgreplica)
- //handle the case of pgreplicas being processed already and
- //when the operator restarts
+ // handle the case where a pgreplica has already been processed,
+ // which occurs when the operator restarts
if replica.Status.State == crv1.PgreplicaStateProcessed {
log.Debug("pgreplica " + replica.ObjectMeta.Name + " already processed")
return
@@ -167,7 +166,6 @@ func (c *Controller) onAdd(obj interface{}) {
log.Debugf("onAdd putting key in queue %s", key)
c.Queue.Add(key)
}
-
}
// onUpdate is called when a pgreplica is updated
@@ -215,26 +213,24 @@ func (c *Controller) onDelete(obj interface{}) {
replica := obj.(*crv1.Pgreplica)
log.Debugf("[pgreplica Controller] OnDelete ns=%s %s", replica.ObjectMeta.Namespace, replica.ObjectMeta.SelfLink)
- //make sure we are not removing a replica deployment
- //that is now the primary after a failover
+ // make sure we are not removing a replica deployment
+ // that is now the primary after a failover
dep, err := c.Clientset.
AppsV1().Deployments(replica.ObjectMeta.Namespace).
Get(ctx, replica.Spec.Name, metav1.GetOptions{})
if err == nil {
if dep.ObjectMeta.Labels[config.LABEL_SERVICE_NAME] == dep.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] {
- //the replica was made a primary at some point
- //we will not scale down the deployment
+ // the replica was made a primary at some point, so
+ // we will not scale down the deployment
log.Debugf("[pgreplica Controller] OnDelete not scaling down the replica since it is acting as a primary")
} else {
clusteroperator.ScaleDownBase(c.Clientset, replica, replica.ObjectMeta.Namespace)
}
}
-
}
// AddPGReplicaEventHandler adds the pgreplica event handler to the pgreplica informer
func (c *Controller) AddPGReplicaEventHandler() {
-
// Your custom resource event handlers.
c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: c.onAdd,
diff --git a/internal/controller/pgtask/pgtaskcontroller.go b/internal/controller/pgtask/pgtaskcontroller.go
index 0a60af9eb6..788de4606a 100644
--- a/internal/controller/pgtask/pgtaskcontroller.go
+++ b/internal/controller/pgtask/pgtaskcontroller.go
@@ -50,7 +50,6 @@ type Controller struct {
// processNextWorkItem function in order to read and process a message on the
// workqueue.
func (c *Controller) RunWorker(stopCh <-chan struct{}, doneCh chan<- struct{}) {
-
go c.waitForShutdown(stopCh)
for c.processNextItem() {
@@ -95,7 +94,7 @@ func (c *Controller) processNextItem() bool {
return true
}
- //update pgtask
+ // update pgtask
patch, err := json.Marshal(map[string]interface{}{
"status": crv1.PgtaskStatus{
State: crv1.PgtaskStateProcessed,
@@ -112,7 +111,7 @@ func (c *Controller) processNextItem() bool {
return true
}
- //process the incoming task
+ // process the incoming task
switch tmpTask.Spec.TaskType {
case crv1.PgtaskPgAdminAdd:
log.Debug("add pgadmin task added")
@@ -152,9 +151,6 @@ func (c *Controller) processNextItem() bool {
} else {
log.Debugf("skipping duplicate onAdd delete data task %s/%s", keyNamespace, keyResourceName)
}
- case crv1.PgtaskDeleteBackups:
- log.Debug("delete backups task added")
- taskoperator.RemoveBackups(keyNamespace, c.Client, tmpTask)
case crv1.PgtaskBackrest:
log.Debug("backrest task added")
backrestoperator.Backrest(keyNamespace, c.Client, tmpTask)
@@ -180,15 +176,14 @@ func (c *Controller) processNextItem() bool {
c.Queue.Forget(key)
return true
-
}
// onAdd is called when a pgtask is added
func (c *Controller) onAdd(obj interface{}) {
task := obj.(*crv1.Pgtask)
- //handle the case of when the operator restarts, we do not want
- //to process pgtasks already processed
+ // handle the case where the operator restarts: we do not want
+ // to process pgtasks that have already been processed
if task.Status.State == crv1.PgtaskStateProcessed {
log.Debug("pgtask " + task.ObjectMeta.Name + " already processed")
return
@@ -199,12 +194,11 @@ func (c *Controller) onAdd(obj interface{}) {
log.Debugf("task putting key in queue %s", key)
c.Queue.Add(key)
}
-
}
// onUpdate is called when a pgtask is updated
func (c *Controller) onUpdate(oldObj, newObj interface{}) {
- //task := newObj.(*crv1.Pgtask)
+ // task := newObj.(*crv1.Pgtask)
// log.Debugf("[Controller] onUpdate ns=%s %s", task.ObjectMeta.Namespace, task.ObjectMeta.SelfLink)
}
@@ -214,7 +208,6 @@ func (c *Controller) onDelete(obj interface{}) {
// AddPGTaskEventHandler adds the pgtask event handler to the pgtask informer
func (c *Controller) AddPGTaskEventHandler() {
-
c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: c.onAdd,
UpdateFunc: c.onUpdate,
@@ -224,14 +217,14 @@ func (c *Controller) AddPGTaskEventHandler() {
log.Debugf("pgtask Controller: added event handler to informer")
}
-//de-dupe logic for a failover, if the failover started
-//parameter is set, it means a failover has already been
-//started on this
+// de-dupe logic for a failover: if the failover started
+// parameter is set, it means a failover has already been
+// started on this
func dupeFailover(clientset pgo.Interface, task *crv1.Pgtask, ns string) bool {
ctx := context.TODO()
tmp, err := clientset.CrunchydataV1().Pgtasks(ns).Get(ctx, task.Spec.Name, metav1.GetOptions{})
if err != nil {
- //a big time error if this occurs
+ // a big time error if this occurs
return false
}
@@ -242,14 +235,14 @@ func dupeFailover(clientset pgo.Interface, task *crv1.Pgtask, ns string) bool {
return true
}
-//de-dupe logic for a delete data, if the delete data job started
-//parameter is set, it means a delete data job has already been
-//started on this
+// de-dupe logic for a delete data: if the delete data job started
+// parameter is set, it means a delete data job has already been
+// started on this
func dupeDeleteData(clientset pgo.Interface, task *crv1.Pgtask, ns string) bool {
ctx := context.TODO()
tmp, err := clientset.CrunchydataV1().Pgtasks(ns).Get(ctx, task.Spec.Name, metav1.GetOptions{})
if err != nil {
- //a big time error if this occurs
+ // a big time error if this occurs
return false
}
diff --git a/internal/controller/pod/inithandler.go b/internal/controller/pod/inithandler.go
index c33be8d2ed..2f09dbef3c 100644
--- a/internal/controller/pod/inithandler.go
+++ b/internal/controller/pod/inithandler.go
@@ -40,7 +40,6 @@ import (
// handleClusterInit is responsible for proceeding with initialization of the PG cluster once the
// primary PG pod for a new or restored PG cluster reaches a ready status
func (c *Controller) handleClusterInit(newPod *apiv1.Pod, cluster *crv1.Pgcluster) error {
-
clusterName := cluster.GetName()
// first check to see if the update is a repo pod. If so, then call repo init handler and
@@ -76,7 +75,6 @@ func (c *Controller) handleClusterInit(newPod *apiv1.Pod, cluster *crv1.Pgcluste
// handleBackRestRepoInit handles cluster initialization tasks that must be executed once
// as a result of an update to a cluster's pgBackRest repository pod
func (c *Controller) handleBackRestRepoInit(newPod *apiv1.Pod, cluster *crv1.Pgcluster) error {
-
// if the repo pod is for a cluster bootstrap, the kick of the bootstrap job and return
if _, ok := newPod.GetLabels()[config.LABEL_PGHA_BOOTSTRAP]; ok {
if err := clusteroperator.AddClusterBootstrap(c.Client, cluster); err != nil {
@@ -103,7 +101,6 @@ func (c *Controller) handleBackRestRepoInit(newPod *apiv1.Pod, cluster *crv1.Pgc
// regardless of the specific type of cluster (e.g. regualar or standby) or the reason the
// cluster is being initialized (initial bootstrap or restore)
func (c *Controller) handleCommonInit(cluster *crv1.Pgcluster) error {
-
// Disable autofailover in the cluster that is now "Ready" if the autofail label is set
// to "false" on the pgcluster (i.e. label "autofail=true")
autofailEnabled, err := strconv.ParseBool(cluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL])
@@ -111,7 +108,7 @@ func (c *Controller) handleCommonInit(cluster *crv1.Pgcluster) error {
log.Error(err)
return err
} else if !autofailEnabled {
- util.ToggleAutoFailover(c.Client, false,
+ _ = util.ToggleAutoFailover(c.Client, false,
cluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE], cluster.Namespace)
}
@@ -151,9 +148,8 @@ func (c *Controller) handleBootstrapInit(newPod *apiv1.Pod, cluster *crv1.Pgclus
taskoperator.CompleteCreateClusterWorkflow(clusterName, c.Client, namespace)
- //publish event for cluster complete
- publishClusterComplete(clusterName, namespace, cluster)
- //
+ // publish event for cluster complete
+ _ = publishClusterComplete(clusterName, namespace, cluster)
// first clean any stanza create resources from a previous stanza-create, e.g. during a
// restore when these resources may already exist from initial creation of the cluster
@@ -187,12 +183,11 @@ func (c *Controller) handleStandbyInit(cluster *crv1.Pgcluster) error {
taskoperator.CompleteCreateClusterWorkflow(clusterName, c.Client, namespace)
- //publish event for cluster complete
- publishClusterComplete(clusterName, namespace, cluster)
- //
+ // publish event for cluster complete
+ _ = publishClusterComplete(clusterName, namespace, cluster)
// now scale any replicas deployments to 1
- clusteroperator.ScaleClusterDeployments(c.Client, *cluster, 1, false, true, false, false)
+ _, _ = clusteroperator.ScaleClusterDeployments(c.Client, *cluster, 1, false, true, false, false)
// Proceed with stanza-creation of this is not a standby cluster, or if its
// a standby cluster that does not have "s3" storage only enabled.
@@ -214,7 +209,7 @@ func (c *Controller) handleStandbyInit(cluster *crv1.Pgcluster) error {
}
backrestoperator.StanzaCreate(namespace, clusterName, c.Client)
} else {
- controller.SetClusterInitializedStatus(c.Client, clusterName, namespace)
+ _ = controller.SetClusterInitializedStatus(c.Client, clusterName, namespace)
}
// If a standby cluster initialize the creation of any replicas. Replicas
@@ -222,7 +217,7 @@ func (c *Controller) handleStandbyInit(cluster *crv1.Pgcluster) error {
// stanza-creation and/or the creation of any backups, since the replicas
// will be generated from the pgBackRest repository of an external PostgreSQL
// database (which should already exist).
- controller.InitializeReplicaCreation(c.Client, clusterName, namespace)
+ _ = controller.InitializeReplicaCreation(c.Client, clusterName, namespace)
// if this is a pgbouncer enabled cluster, add a pgbouncer
// Note: we only warn if we cannot create the pgBouncer, so eecution can
@@ -262,13 +257,13 @@ func (c *Controller) labelPostgresPodAndDeployment(newpod *apiv1.Pod) {
log.Debug("which means its pod was restarted for some reason")
log.Debug("we will use the service name on the deployment")
serviceName = dep.ObjectMeta.Labels[config.LABEL_SERVICE_NAME]
- } else if replica == false {
+ } else if !replica {
log.Debugf("primary pod ADDED %s service-name=%s", newpod.Name, newpod.ObjectMeta.Labels[config.LABEL_PG_CLUSTER])
- //add label onto pod "service-name=clustername"
+ // add label onto pod "service-name=clustername"
serviceName = newpod.ObjectMeta.Labels[config.LABEL_PG_CLUSTER]
- } else if replica == true {
+ } else if replica {
log.Debugf("replica pod ADDED %s service-name=%s", newpod.Name, newpod.ObjectMeta.Labels[config.LABEL_PG_CLUSTER]+"-replica")
- //add label onto pod "service-name=clustername-replica"
+ // add label onto pod "service-name=clustername-replica"
serviceName = newpod.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] + "-replica"
}
@@ -283,12 +278,11 @@ func (c *Controller) labelPostgresPodAndDeployment(newpod *apiv1.Pod) {
return
}
- //add the service name label to the Deployment
+ // add the service name label to the Deployment
log.Debugf("patching deployment %s: %s", dep.Name, patch)
_, err = c.Client.AppsV1().Deployments(ns).Patch(ctx, dep.Name, types.MergePatchType, patch, metav1.PatchOptions{})
if err != nil {
log.Error("could not add label to deployment on pod add")
return
}
-
}
diff --git a/internal/controller/pod/podcontroller.go b/internal/controller/pod/podcontroller.go
index 95b916df5e..95d81bea32 100644
--- a/internal/controller/pod/podcontroller.go
+++ b/internal/controller/pod/podcontroller.go
@@ -40,17 +40,16 @@ type Controller struct {
// onAdd is called when a pod is added
func (c *Controller) onAdd(obj interface{}) {
-
newPod := obj.(*apiv1.Pod)
newPodLabels := newPod.GetObjectMeta().GetLabels()
- //only process pods with with vendor=crunchydata label
+ // only process pods with the vendor=crunchydata label
if newPodLabels[config.LABEL_VENDOR] == "crunchydata" {
log.Debugf("Pod Controller: onAdd processing the addition of pod %s in namespace %s",
newPod.Name, newPod.Namespace)
}
- //handle the case when a pg database pod is added
+ // handle the case when a pg database pod is added
if isPostgresPod(newPod) {
c.labelPostgresPodAndDeployment(newPod)
return
@@ -65,7 +64,7 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
newPodLabels := newPod.GetObjectMeta().GetLabels()
- //only process pods with with vendor=crunchydata label
+ // only process pods with the vendor=crunchydata label
if newPodLabels[config.LABEL_VENDOR] != "crunchydata" {
return
}
@@ -153,7 +152,6 @@ func setCurrentPrimary(clientset pgo.Interface, newPod *apiv1.Pod, cluster *crv1
// onDelete is called when a pgcluster is deleted
func (c *Controller) onDelete(obj interface{}) {
-
pod := obj.(*apiv1.Pod)
labels := pod.GetObjectMeta().GetLabels()
@@ -165,7 +163,6 @@ func (c *Controller) onDelete(obj interface{}) {
// AddPodEventHandler adds the pod event handler to the pod informer
func (c *Controller) AddPodEventHandler() {
-
c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: c.onAdd,
UpdateFunc: c.onUpdate,
@@ -190,7 +187,6 @@ func isBackRestRepoBecomingReady(oldPod, newPod *apiv1.Pod) bool {
// assumed to be present), specifically because this label will only be included on pgBackRest
// repository Pods.
func isBackRestRepoPod(newpod *apiv1.Pod) bool {
-
_, backrestRepoLabelExists := newpod.ObjectMeta.Labels[config.LABEL_PGO_BACKREST_REPO]
return backrestRepoLabelExists
@@ -237,7 +233,6 @@ func isDBContainerBecomingReady(oldPod, newPod *apiv1.Pod) bool {
// this label will only be included on primary and replica PostgreSQL database pods (and will be
// present as soon as the deployment and pod is created).
func isPostgresPod(newpod *apiv1.Pod) bool {
-
_, pgDatabaseLabelExists := newpod.ObjectMeta.Labels[config.LABEL_PG_DATABASE]
return pgDatabaseLabelExists
diff --git a/internal/controller/pod/podevents.go b/internal/controller/pod/podevents.go
index 100175baed..b3086355ca 100644
--- a/internal/controller/pod/podevents.go
+++ b/internal/controller/pod/podevents.go
@@ -25,7 +25,7 @@ import (
)
func publishClusterComplete(clusterName, namespace string, cluster *crv1.Pgcluster) error {
- //capture the cluster creation event
+ // capture the cluster creation event
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
@@ -47,5 +47,4 @@ func publishClusterComplete(clusterName, namespace string, cluster *crv1.Pgclust
return err
}
return err
-
}
diff --git a/internal/controller/pod/promotionhandler.go b/internal/controller/pod/promotionhandler.go
index e420af589c..a1eb83530a 100644
--- a/internal/controller/pod/promotionhandler.go
+++ b/internal/controller/pod/promotionhandler.go
@@ -62,7 +62,6 @@ var (
// of a failover. Specifically, this handler is triggered when a replica has been promoted, and
// it now has either the "promoted" or "primary" role label.
func (c *Controller) handlePostgresPodPromotion(newPod *apiv1.Pod, cluster crv1.Pgcluster) error {
-
if cluster.Status.State == crv1.PgclusterStateShutdown {
if err := c.handleStartupInit(cluster); err != nil {
return err
@@ -84,7 +83,6 @@ func (c *Controller) handlePostgresPodPromotion(newPod *apiv1.Pod, cluster crv1.
// handleStartupInit is resposible for handling cluster initilization for a cluster that has been
// restarted (after it was previously shutdown)
func (c *Controller) handleStartupInit(cluster crv1.Pgcluster) error {
-
// since the cluster is just being restarted, it can just be set to initialized once the
// primary is ready
if err := controller.SetClusterInitializedStatus(c.Client, cluster.Name,
@@ -94,7 +92,7 @@ func (c *Controller) handleStartupInit(cluster crv1.Pgcluster) error {
}
// now scale any replicas deployments to 1
- clusteroperator.ScaleClusterDeployments(c.Client, cluster, 1, false, true, false, false)
+ _, _ = clusteroperator.ScaleClusterDeployments(c.Client, cluster, 1, false, true, false, false)
return nil
}
@@ -103,7 +101,6 @@ func (c *Controller) handleStartupInit(cluster crv1.Pgcluster) error {
// of disabling standby mode. Specifically, this handler is triggered when a standby leader
// is turned into a regular leader.
func (c *Controller) handleStandbyPromotion(newPod *apiv1.Pod, cluster crv1.Pgcluster) error {
-
clusterName := cluster.Name
namespace := cluster.Namespace
@@ -141,7 +138,6 @@ func (c *Controller) handleStandbyPromotion(newPod *apiv1.Pod, cluster crv1.Pgcl
// done by confirming
func waitForStandbyPromotion(restConfig *rest.Config, clientset kubernetes.Interface, newPod apiv1.Pod,
cluster crv1.Pgcluster) error {
-
var recoveryDisabled bool
// wait for the server to accept writes to ensure standby has truly been disabled before
@@ -183,7 +179,7 @@ func waitForStandbyPromotion(restConfig *rest.Config, clientset kubernetes.Inter
func cleanAndCreatePostFailoverBackup(clientset kubeapi.Interface, clusterName, namespace string) error {
ctx := context.TODO()
- //look up the backrest-repo pod name
+ // look up the backrest-repo pod name
selector := fmt.Sprintf("%s=%s,%s=true", config.LABEL_PG_CLUSTER,
clusterName, config.LABEL_PGO_BACKREST_REPO)
pods, err := clientset.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
diff --git a/internal/kubeapi/client_config.go b/internal/kubeapi/client_config.go
index 22fe39e9b2..b3eeed0c2a 100644
--- a/internal/kubeapi/client_config.go
+++ b/internal/kubeapi/client_config.go
@@ -37,8 +37,10 @@ type Interface interface {
}
// Interface should satisfy both our typed Interface and the standard one.
-var _ crunchydata.Interface = Interface(nil)
-var _ kubernetes.Interface = Interface(nil)
+var (
+ _ crunchydata.Interface = Interface(nil)
+ _ kubernetes.Interface = Interface(nil)
+)
// Client provides methods for interacting with Kubernetes resources.
// It implements both kubernetes and crunchydata clientset Interfaces.
diff --git a/internal/kubeapi/fake/fakeclients.go b/internal/kubeapi/fake/fakeclients.go
index 6a263818d4..8c7395549d 100644
--- a/internal/kubeapi/fake/fakeclients.go
+++ b/internal/kubeapi/fake/fakeclients.go
@@ -55,7 +55,6 @@ var (
// initialization of the Operator in various unit tests where the various resources loaded
// during initialization (e.g. templates, config and/or global variables) are required.
func NewFakePGOClient() (kubeapi.Interface, error) {
-
if pgoRoot == "" {
return nil, errors.New("Environment variable PGOROOT must be set to the root directory " +
"of the PostgreSQL Operator project repository in order to create a fake client")
@@ -84,7 +83,6 @@ func NewFakePGOClient() (kubeapi.Interface, error) {
// utilized when testing to similate and environment containing the various PostgreSQL Operator
// configuration files (e.g. templates) required to run the Operator.
func createMockPGOConfigMap(pgoNamespace string) (*v1.ConfigMap, error) {
-
// create a configMap that will hold the default configs
pgoConfigMap := &v1.ConfigMap{
Data: make(map[string]string),
diff --git a/internal/kubeapi/volumes_test.go b/internal/kubeapi/volumes_test.go
index b793ac5269..c2933d9e87 100644
--- a/internal/kubeapi/volumes_test.go
+++ b/internal/kubeapi/volumes_test.go
@@ -26,7 +26,7 @@ func TestFindOrAppendVolume(t *testing.T) {
t.Run("empty", func(t *testing.T) {
var volumes []v1.Volume
- var volume = FindOrAppendVolume(&volumes, "v1")
+ volume := FindOrAppendVolume(&volumes, "v1")
if expected, actual := 1, len(volumes); expected != actual {
t.Fatalf("expected appended volume, got %v", actual)
}
@@ -69,7 +69,7 @@ func TestFindOrAppendVolumeMount(t *testing.T) {
t.Run("empty", func(t *testing.T) {
var mounts []v1.VolumeMount
- var mount = FindOrAppendVolumeMount(&mounts, "v1")
+ mount := FindOrAppendVolumeMount(&mounts, "v1")
if expected, actual := 1, len(mounts); expected != actual {
t.Fatalf("expected appended mount, got %v", actual)
}
diff --git a/internal/logging/loglib.go b/internal/logging/loglib.go
index b443e47b4d..e346317f6e 100644
--- a/internal/logging/loglib.go
+++ b/internal/logging/loglib.go
@@ -1,4 +1,4 @@
-//Package logging Functions to set unique configuration for use with the logrus logger
+// Package logging provides functions to set unique configuration for use with the logrus logger
package logging
/*
@@ -34,7 +34,7 @@ func SetParameters() LogValues {
return logval
}
-//LogValues holds the standard log value types
+// LogValues holds the standard log value types
type LogValues struct {
version string
}
@@ -53,9 +53,9 @@ func (f *formatter) Format(e *log.Entry) ([]byte, error) {
return f.lf.Format(e)
}
-//CrunchyLogger adds the customized logging fields to the logrus instance context
+// CrunchyLogger adds the customized logging fields to the logrus instance context
func CrunchyLogger(logDetails LogValues) {
- //Sets calling method as a field
+ // Sets calling method as a field
log.SetReportCaller(true)
crunchyTextFormatter := &log.TextFormatter{
diff --git a/internal/ns/nslogic.go b/internal/ns/nslogic.go
index 21f0499b7f..34df014bed 100644
--- a/internal/ns/nslogic.go
+++ b/internal/ns/nslogic.go
@@ -43,20 +43,28 @@ import (
"k8s.io/client-go/kubernetes/fake"
)
-const OPERATOR_SERVICE_ACCOUNT = "postgres-operator"
-const PGO_DEFAULT_SERVICE_ACCOUNT = "pgo-default"
+const (
+ OPERATOR_SERVICE_ACCOUNT = "postgres-operator"
+ PGO_DEFAULT_SERVICE_ACCOUNT = "pgo-default"
+)
-const PGO_TARGET_ROLE = "pgo-target-role"
-const PGO_TARGET_ROLE_BINDING = "pgo-target-role-binding"
-const PGO_TARGET_SERVICE_ACCOUNT = "pgo-target"
+const (
+ PGO_TARGET_ROLE = "pgo-target-role"
+ PGO_TARGET_ROLE_BINDING = "pgo-target-role-binding"
+ PGO_TARGET_SERVICE_ACCOUNT = "pgo-target"
+)
-const PGO_BACKREST_ROLE = "pgo-backrest-role"
-const PGO_BACKREST_SERVICE_ACCOUNT = "pgo-backrest"
-const PGO_BACKREST_ROLE_BINDING = "pgo-backrest-role-binding"
+const (
+ PGO_BACKREST_ROLE = "pgo-backrest-role"
+ PGO_BACKREST_SERVICE_ACCOUNT = "pgo-backrest"
+ PGO_BACKREST_ROLE_BINDING = "pgo-backrest-role-binding"
+)
-const PGO_PG_ROLE = "pgo-pg-role"
-const PGO_PG_ROLE_BINDING = "pgo-pg-role-binding"
-const PGO_PG_SERVICE_ACCOUNT = "pgo-pg"
+const (
+ PGO_PG_ROLE = "pgo-pg-role"
+ PGO_PG_ROLE_BINDING = "pgo-pg-role-binding"
+ PGO_PG_SERVICE_ACCOUNT = "pgo-pg"
+)
// PgoServiceAccount is used to populate the following ServiceAccount templates:
// pgo-default-sa.json
@@ -135,7 +143,6 @@ var (
// CreateFakeNamespaceClient creates a fake namespace client for use with the "disabled" namespace
// operating mode
func CreateFakeNamespaceClient(installationName string) (kubernetes.Interface, error) {
-
var namespaces []runtime.Object
for _, namespace := range getNamespacesFromEnv() {
namespaces = append(namespaces, &v1.Namespace{
@@ -161,7 +168,7 @@ func CreateNamespace(clientset kubernetes.Interface, installationName, pgoNamesp
log.Debugf("CreateNamespace %s %s %s", pgoNamespace, createdBy, newNs)
- //define the new namespace
+ // define the new namespace
n := v1.Namespace{}
n.ObjectMeta.Labels = make(map[string]string)
n.ObjectMeta.Labels[config.LABEL_VENDOR] = config.LABEL_CRUNCHY
@@ -177,7 +184,7 @@ func CreateNamespace(clientset kubernetes.Interface, installationName, pgoNamesp
log.Debugf("CreateNamespace %s created by %s", newNs, createdBy)
- //publish event
+ // publish event
topics := make([]string, 1)
topics[0] = events.EventTopicPGO
@@ -206,7 +213,7 @@ func DeleteNamespace(clientset kubernetes.Interface, installationName, pgoNamesp
log.Debugf("DeleteNamespace %s deleted by %s", ns, deletedBy)
- //publish the namespace delete event
+ // publish the namespace delete event
topics := make([]string, 1)
topics[0] = events.EventTopicPGO
@@ -441,7 +448,7 @@ func UpdateNamespace(clientset kubernetes.Interface, installationName, pgoNamesp
return err
}
- //publish event
+ // publish event
topics := make([]string, 1)
topics[0] = events.EventTopicPGO
@@ -567,7 +574,6 @@ func GetCurrentNamespaceList(clientset kubernetes.Interface,
func ValidateNamespacesWatched(clientset kubernetes.Interface,
namespaceOperatingMode NamespaceOperatingMode,
installationName string, namespaces ...string) error {
-
var err error
var currNSList []string
if namespaceOperatingMode != NamespaceOperatingModeDisabled {
@@ -640,7 +646,6 @@ func ValidateNamespaceNames(namespace ...string) error {
// (please see the various NamespaceOperatingMode types for a detailed explanation of each
// operating mode).
func GetNamespaceOperatingMode(clientset kubernetes.Interface) (NamespaceOperatingMode, error) {
-
// first check to see if dynamic namespace capabilities can be enabled
isDynamic, err := CheckAccessPrivs(clientset, namespacePrivsCoreDynamic, "", "")
if err != nil {
@@ -710,7 +715,6 @@ func CheckAccessPrivs(clientset kubernetes.Interface,
func GetInitialNamespaceList(clientset kubernetes.Interface,
namespaceOperatingMode NamespaceOperatingMode,
installationName, pgoNamespace string) ([]string, error) {
-
// next grab the namespaces provided using the NAMESPACE env var
namespaceList := getNamespacesFromEnv()
diff --git a/internal/operator/backrest/backup.go b/internal/operator/backrest/backup.go
index 8d3cdeba4a..41a03a3d89 100644
--- a/internal/operator/backrest/backup.go
+++ b/internal/operator/backrest/backup.go
@@ -110,7 +110,7 @@ func Backrest(namespace string, clientset kubernetes.Interface, task *crv1.Pgtas
}
if operator.CRUNCHY_DEBUG {
- config.BackrestjobTemplate.Execute(os.Stdout, jobFields)
+ _ = config.BackrestjobTemplate.Execute(os.Stdout, jobFields)
}
newjob := v1batch.Job{}
@@ -131,7 +131,7 @@ func Backrest(namespace string, clientset kubernetes.Interface, task *crv1.Pgtas
if backupType != "" {
newjob.ObjectMeta.Labels[config.LABEL_PGHA_BACKUP_TYPE] = backupType
}
- clientset.BatchV1().Jobs(namespace).Create(ctx, &newjob, metav1.CreateOptions{})
+ _, _ = clientset.BatchV1().Jobs(namespace).Create(ctx, &newjob, metav1.CreateOptions{})
// publish backrest backup event
if cmd == "backup" {
@@ -160,8 +160,7 @@ func Backrest(namespace string, clientset kubernetes.Interface, task *crv1.Pgtas
// CreateInitialBackup creates a Pgtask in order to initiate the initial pgBackRest backup for a cluster
// as needed to support replica creation
func CreateInitialBackup(clientset pgo.Interface, namespace, clusterName, podName string) (*crv1.Pgtask, error) {
- var params map[string]string
- params = make(map[string]string)
+ params := make(map[string]string)
params[config.LABEL_PGHA_BACKUP_TYPE] = crv1.BackupTypeBootstrap
return CreateBackup(clientset, namespace, clusterName, podName, params, "--type=full")
}
@@ -169,8 +168,7 @@ func CreateInitialBackup(clientset pgo.Interface, namespace, clusterName, podNam
// CreatePostFailoverBackup creates a Pgtask in order to initiate the a pgBackRest backup following a failure
// event to ensure proper replica creation and/or reinitialization
func CreatePostFailoverBackup(clientset pgo.Interface, namespace, clusterName, podName string) (*crv1.Pgtask, error) {
- var params map[string]string
- params = make(map[string]string)
+ params := make(map[string]string)
params[config.LABEL_PGHA_BACKUP_TYPE] = crv1.BackupTypeFailover
return CreateBackup(clientset, namespace, clusterName, podName, params, "")
}
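The recurring `_ =` changes in this patch make deliberately ignored return values explicit, which satisfies errcheck-style linters without altering behavior. A minimal sketch of the idiom (the `parsePort` helper is illustrative, not from the operator):

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort returns a port number and an error; some call sites only
// care about the best-effort value.
func parsePort(s string) (int, error) {
	return strconv.Atoi(s)
}

func main() {
	// Dropping the error implicitly trips errcheck-style linters:
	//   parsePort("5432")
	// The blank identifier documents that the discard is deliberate:
	port, _ := parsePort("5432")
	fmt.Println(port) // 5432
}
```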
diff --git a/internal/operator/backrest/repo.go b/internal/operator/backrest/repo.go
index 6266afd510..988b279548 100644
--- a/internal/operator/backrest/repo.go
+++ b/internal/operator/backrest/repo.go
@@ -97,7 +97,7 @@ func CreateRepoDeployment(clientset kubernetes.Interface, cluster *crv1.Pgcluste
serviceName = fmt.Sprintf(util.BackrestRepoServiceName, cluster.Name)
}
- //create backrest repo service
+ // create backrest repo service
serviceFields := RepoServiceTemplateFields{
Name: serviceName,
ClusterName: cluster.Name,
@@ -135,7 +135,7 @@ func CreateRepoDeployment(clientset kubernetes.Interface, cluster *crv1.Pgcluste
}
if operator.CRUNCHY_DEBUG {
- config.PgoBackrestRepoTemplate.Execute(os.Stdout, repoFields)
+ _ = config.PgoBackrestRepoTemplate.Execute(os.Stdout, repoFields)
}
deployment := appsv1.Deployment{}
@@ -225,7 +225,6 @@ func setBootstrapRepoOverrides(clientset kubernetes.Interface, cluster *crv1.Pgc
// specific PostgreSQL cluster.
func getRepoDeploymentFields(clientset kubernetes.Interface, cluster *crv1.Pgcluster,
replicas int) *RepoDeploymentTemplateFields {
-
namespace := cluster.GetNamespace()
repoFields := RepoDeploymentTemplateFields{
@@ -265,7 +264,6 @@ func UpdateAnnotations(clientset kubernetes.Interface, cluster *crv1.Pgcluster,
// get a list of all of the instance deployments for the cluster
deployment, err := operator.GetBackrestDeployment(clientset, cluster)
-
if err != nil {
return err
}
@@ -291,7 +289,6 @@ func UpdateResources(clientset kubernetes.Interface, cluster *crv1.Pgcluster) er
// get a list of all of the instance deployments for the cluster
deployment, err := operator.GetBackrestDeployment(clientset, cluster)
-
if err != nil {
return err
}
@@ -333,7 +330,7 @@ func createService(clientset kubernetes.Interface, fields *RepoServiceTemplateFi
}
if operator.CRUNCHY_DEBUG {
- config.PgoBackrestRepoServiceTemplate.Execute(os.Stdout, fields)
+ _ = config.PgoBackrestRepoServiceTemplate.Execute(os.Stdout, fields)
}
s := v1.Service{}
diff --git a/internal/operator/backrest/restore.go b/internal/operator/backrest/restore.go
index 1f23802deb..5d727522f6 100644
--- a/internal/operator/backrest/restore.go
+++ b/internal/operator/backrest/restore.go
@@ -73,7 +73,6 @@ type BackrestRestoreJobTemplateFields struct {
// perform a restore
func UpdatePGClusterSpecForRestore(clientset kubeapi.Interface, cluster *crv1.Pgcluster,
task *crv1.Pgtask) {
-
cluster.Spec.PGDataSource.RestoreFrom = cluster.GetName()
restoreOpts := task.Spec.Parameters[config.LABEL_BACKREST_RESTORE_OPTS]
@@ -250,8 +249,10 @@ func PrepareClusterForRestore(clientset kubeapi.Interface, cluster *crv1.Pgclust
clusterName)
// Delete the DCS and leader ConfigMaps. These will be recreated during the restore.
- configMaps := []string{fmt.Sprintf("%s-config", clusterName),
- fmt.Sprintf("%s-leader", clusterName)}
+ configMaps := []string{
+ fmt.Sprintf("%s-config", clusterName),
+ fmt.Sprintf("%s-leader", clusterName),
+ }
for _, c := range configMaps {
if err := clientset.CoreV1().ConfigMaps(namespace).
Delete(ctx, c, metav1.DeleteOptions{}); err != nil && !kerrors.IsNotFound(err) {
@@ -281,7 +282,7 @@ func PrepareClusterForRestore(clientset kubeapi.Interface, cluster *crv1.Pgclust
func UpdateWorkflow(clientset pgo.Interface, workflowID, namespace, status string) error {
ctx := context.TODO()
- //update workflow
+ // update workflow
log.Debugf("restore workflow: update workflow %s", workflowID)
selector := crv1.PgtaskWorkflowID + "=" + workflowID
taskList, err := clientset.CrunchydataV1().Pgtasks(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
@@ -324,7 +325,6 @@ func PublishRestore(id, clusterName, username, namespace string) {
if err != nil {
log.Error(err.Error())
}
-
}
// getPGDatabasePVCNames returns the names of all PostgreSQL database PVCs for a specific
diff --git a/internal/operator/backrest/stanza.go b/internal/operator/backrest/stanza.go
index 2607eb7a6e..a2a0176452 100644
--- a/internal/operator/backrest/stanza.go
+++ b/internal/operator/backrest/stanza.go
@@ -58,7 +58,7 @@ func StanzaCreate(namespace, clusterName string, clientset kubeapi.Interface) {
ctx := context.TODO()
taskName := clusterName + "-" + crv1.PgtaskBackrestStanzaCreate
- //look up the backrest-repo pod name
+ // look up the backrest-repo pod name
selector := config.LABEL_PG_CLUSTER + "=" + clusterName + "," + config.LABEL_PGO_BACKREST_REPO + "=true"
pods, err := clientset.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
if len(pods.Items) != 1 {
@@ -78,7 +78,7 @@ func StanzaCreate(namespace, clusterName string, clientset kubeapi.Interface) {
return
}
- //create the stanza-create task
+ // create the stanza-create task
spec := crv1.PgtaskSpec{}
spec.Name = taskName
@@ -133,5 +133,4 @@ func StanzaCreate(namespace, clusterName string, clientset kubeapi.Interface) {
if err != nil {
log.Error(err)
}
-
}
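Several hunks above change `Template.Execute(os.Stdout, ...)` calls that run only under `CRUNCHY_DEBUG`: the buffered render is the one whose error matters, while the stdout copy is purely diagnostic, so its error is discarded. A hedged sketch of that pattern (template text and field names are illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"text/template"
)

// debug stands in for the operator's CRUNCHY_DEBUG switch.
var debug = true

var jobTemplate = template.Must(template.New("job").Parse(
	"job: {{.Name}} cluster: {{.Cluster}}\n"))

type jobFields struct {
	Name    string
	Cluster string
}

// renderJob renders the template into a buffer for real use; the
// stdout render is debug-only, so its error is deliberately dropped.
func renderJob(f jobFields) (string, error) {
	var doc bytes.Buffer
	if err := jobTemplate.Execute(&doc, f); err != nil {
		return "", err
	}
	if debug {
		_ = jobTemplate.Execute(os.Stdout, f)
	}
	return doc.String(), nil
}

func main() {
	out, err := renderJob(jobFields{Name: "backrest-backup", Cluster: "hippo"})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```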
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 09d91ce454..464e1bd28e 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -156,8 +156,8 @@ func AddClusterBase(clientset kubeapi.Interface, cl *crv1.Pgcluster, namespace s
log.Error("error in pvcname patch " + err.Error())
}
- //publish create cluster event
- //capture the cluster creation event
+ // publish create cluster event
+ // capture the cluster creation event
pgouser := cl.ObjectMeta.Labels[config.LABEL_PGOUSER]
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
@@ -190,15 +190,15 @@ func AddClusterBase(clientset kubeapi.Interface, cl *crv1.Pgcluster, namespace s
publishClusterCreateFailure(cl, err.Error())
return
}
- //create a CRD for each replica
+ // create a CRD for each replica
for i := 0; i < replicaCount; i++ {
spec := crv1.PgreplicaSpec{}
- //get the storage config
+ // get the storage config
spec.ReplicaStorage = cl.Spec.ReplicaStorage
spec.UserLabels = cl.Spec.UserLabels
- //the replica should not use the same node labels as the primary
+ // the replica should not use the same node labels as the primary
spec.UserLabels[config.LABEL_NODE_LABEL_KEY] = ""
spec.UserLabels[config.LABEL_NODE_LABEL_VALUE] = ""
@@ -324,17 +324,16 @@ func AddBootstrapRepo(clientset kubernetes.Interface, cluster *crv1.Pgcluster) (
// DeleteClusterBase ...
func DeleteClusterBase(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace string) {
- DeleteCluster(clientset, cl, namespace)
+ _ = DeleteCluster(clientset, cl, namespace)
-
- //delete any existing configmaps
+ // delete any existing configmaps
if err := deleteConfigMaps(clientset, cl.Spec.Name, namespace); err != nil {
log.Error(err)
}
- //delete any existing pgtasks ???
+ // delete any existing pgtasks ???
- //publish delete cluster event
+ // publish delete cluster event
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
@@ -363,7 +362,7 @@ func ScaleBase(clientset kubeapi.Interface, replica *crv1.Pgreplica, namespace s
return
}
- //get the pgcluster CRD to base the replica off of
+ // get the pgcluster CRD to base the replica off of
cluster, err := clientset.CrunchydataV1().Pgclusters(namespace).
Get(ctx, replica.Spec.ClusterName, metav1.GetOptions{})
if err != nil {
@@ -378,7 +377,7 @@ func ScaleBase(clientset kubeapi.Interface, replica *crv1.Pgreplica, namespace s
return
}
- //update the replica CRD pvcname
+ // update the replica CRD pvcname
patch, err := kubeapi.NewJSONPatch().Add("spec", "replicastorage", "name")(dataVolume.PersistentVolumeClaimName).Bytes()
if err == nil {
log.Debugf("patching replica %s: %s", replica.Spec.Name, patch)
@@ -389,20 +388,20 @@ func ScaleBase(clientset kubeapi.Interface, replica *crv1.Pgreplica, namespace s
log.Error("error in pvcname patch " + err.Error())
}
- //create the replica service if it doesnt exist
+ // create the replica service if it doesn't exist
if err = scaleReplicaCreateMissingService(clientset, replica, cluster, namespace); err != nil {
log.Error(err)
publishScaleError(namespace, replica.ObjectMeta.Labels[config.LABEL_PGOUSER], cluster)
return
}
- //instantiate the replica
+ // instantiate the replica
if err = scaleReplicaCreateDeployment(clientset, replica, cluster, namespace, dataVolume, walVolume, tablespaceVolumes); err != nil {
publishScaleError(namespace, replica.ObjectMeta.Labels[config.LABEL_PGOUSER], cluster)
return
}
- //update the replica CRD status
+ // update the replica CRD status
patch, err = kubeapi.NewJSONPatch().Add("spec", "status")(crv1.CompletedStatus).Bytes()
if err == nil {
log.Debugf("patching replica %s: %s", replica.Spec.Name, patch)
@@ -413,7 +412,7 @@ func ScaleBase(clientset kubeapi.Interface, replica *crv1.Pgreplica, namespace s
log.Error("error in status patch " + err.Error())
}
- //publish event for replica creation
+ // publish event for replica creation
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
@@ -438,16 +437,16 @@ func ScaleBase(clientset kubeapi.Interface, replica *crv1.Pgreplica, namespace s
func ScaleDownBase(clientset kubeapi.Interface, replica *crv1.Pgreplica, namespace string) {
ctx := context.TODO()
- //get the pgcluster CRD for this replica
+ // get the pgcluster CRD for this replica
_, err := clientset.CrunchydataV1().Pgclusters(namespace).
Get(ctx, replica.Spec.ClusterName, metav1.GetOptions{})
if err != nil {
return
}
- DeleteReplica(clientset, replica, namespace)
+ _ = DeleteReplica(clientset, replica, namespace)
- //publish event for scale down
+ // publish event for scale down
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
@@ -467,7 +466,6 @@ func ScaleDownBase(clientset kubeapi.Interface, replica *crv1.Pgreplica, namespa
log.Error(err.Error())
return
}
-
}
// UpdateAnnotations updates the annotations in the "template" portion of a
@@ -693,10 +691,9 @@ func publishClusterCreateFailure(cl *crv1.Pgcluster, errorMsg string) {
}
func publishClusterShutdown(cluster crv1.Pgcluster) error {
-
clusterName := cluster.Name
- //capture the cluster creation event
+ // capture the cluster creation event
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index c464da161b..3af8b13729 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -86,7 +86,7 @@ func addClusterBootstrapJob(clientset kubeapi.Interface,
tablespaceVolumes map[string]operator.StorageResult) error {
ctx := context.TODO()
- bootstrapFields, err := getBootstrapJobFields(clientset, cl, dataVolume, walVolume,
+ bootstrapFields, err := getBootstrapJobFields(clientset, cl, dataVolume,
tablespaceVolumes)
if err != nil {
return err
@@ -98,7 +98,7 @@ func addClusterBootstrapJob(clientset kubeapi.Interface,
}
if operator.CRUNCHY_DEBUG {
- config.DeploymentTemplate.Execute(os.Stdout, bootstrapFields)
+ _ = config.DeploymentTemplate.Execute(os.Stdout, bootstrapFields)
}
job := &batchv1.Job{}
@@ -134,7 +134,7 @@ func addClusterDeployments(clientset kubeapi.Interface,
}
deploymentFields := getClusterDeploymentFields(clientset, cl,
- dataVolume, walVolume, tablespaceVolumes)
+ dataVolume, tablespaceVolumes)
var primaryDoc bytes.Buffer
if err := config.DeploymentTemplate.Execute(&primaryDoc, deploymentFields); err != nil {
@@ -142,7 +142,7 @@ func addClusterDeployments(clientset kubeapi.Interface,
}
if operator.CRUNCHY_DEBUG {
- config.DeploymentTemplate.Execute(os.Stdout, deploymentFields)
+ _ = config.DeploymentTemplate.Execute(os.Stdout, deploymentFields)
}
deployment := &appsv1.Deployment{}
@@ -170,7 +170,7 @@ func addClusterDeployments(clientset kubeapi.Interface,
// getBootstrapJobFields obtains the fields needed to populate the cluster bootstrap job template
func getBootstrapJobFields(clientset kubeapi.Interface,
- cluster *crv1.Pgcluster, dataVolume, walVolume operator.StorageResult,
+ cluster *crv1.Pgcluster, dataVolume operator.StorageResult,
tablespaceVolumes map[string]operator.StorageResult) (operator.BootstrapJobTemplateFields, error) {
ctx := context.TODO()
@@ -179,7 +179,7 @@ func getBootstrapJobFields(clientset kubeapi.Interface,
bootstrapFields := operator.BootstrapJobTemplateFields{
DeploymentTemplateFields: getClusterDeploymentFields(clientset, cluster, dataVolume,
- walVolume, tablespaceVolumes),
+ tablespaceVolumes),
RestoreFrom: cluster.Spec.PGDataSource.RestoreFrom,
RestoreOpts: restoreOpts[1 : len(restoreOpts)-1],
}
@@ -250,7 +250,7 @@ func getBootstrapJobFields(clientset kubeapi.Interface,
// getClusterDeploymentFields obtains the fields needed to populate the cluster deployment template
func getClusterDeploymentFields(clientset kubernetes.Interface,
- cl *crv1.Pgcluster, dataVolume, walVolume operator.StorageResult,
+ cl *crv1.Pgcluster, dataVolume operator.StorageResult,
tablespaceVolumes map[string]operator.StorageResult) operator.DeploymentTemplateFields {
namespace := cl.GetNamespace()
@@ -362,7 +362,7 @@ func DeleteCluster(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace
log.Error(err)
return err
} else {
- publishDeleteCluster(namespace, cl.ObjectMeta.Labels[config.LABEL_PGOUSER], cl.Spec.Name, cl.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER])
+ publishDeleteCluster(namespace, cl.ObjectMeta.Labels[config.LABEL_PGOUSER], cl.Spec.Name)
}
return err
@@ -513,7 +513,7 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
}
if operator.CRUNCHY_DEBUG {
- config.DeploymentTemplate.Execute(os.Stdout, replicaDeploymentFields)
+ _ = config.DeploymentTemplate.Execute(os.Stdout, replicaDeploymentFields)
}
replicaDeployment := appsv1.Deployment{}
@@ -581,7 +581,7 @@ func publishScaleError(namespace string, username string, cluster *crv1.Pgcluste
}
}
-func publishDeleteCluster(namespace, username, clusterName, identifier string) {
+func publishDeleteCluster(namespace, username, clusterName string) {
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
@@ -691,7 +691,7 @@ func ShutdownCluster(clientset kubeapi.Interface, cluster crv1.Pgcluster) error
return err
}
- publishClusterShutdown(cluster)
+ _ = publishClusterShutdown(cluster)
return nil
}
diff --git a/internal/operator/cluster/common.go b/internal/operator/cluster/common.go
index ebdc5adfec..82bf11c3de 100644
--- a/internal/operator/cluster/common.go
+++ b/internal/operator/cluster/common.go
@@ -83,6 +83,7 @@ func generatePassword() (string, error) {
// makePostgreSQLPassword creates the expected hash for a password type for a
// PostgreSQL password
+// nolint:unparam // this is set up to accept SCRAM in the not-too-distant future
func makePostgreSQLPassword(passwordType pgpassword.PasswordType, username, password string) string {
// get the PostgreSQL password generated based on the password type
// as all of these values are valid, this does not error
diff --git a/internal/operator/cluster/failover.go b/internal/operator/cluster/failover.go
index 5f64b86f08..5c2b43e173 100644
--- a/internal/operator/cluster/failover.go
+++ b/internal/operator/cluster/failover.go
@@ -40,9 +40,9 @@ func FailoverBase(namespace string, clientset kubeapi.Interface, task *crv1.Pgta
ctx := context.TODO()
var err error
- //look up the pgcluster for this task
- //in the case, the clustername is passed as a key in the
- //parameters map
+ // look up the pgcluster for this task
+ // in this case, the clustername is passed as a key in the
+ // parameters map
var clusterName string
for k := range task.Spec.Parameters {
clusterName = k
@@ -53,14 +53,14 @@ func FailoverBase(namespace string, clientset kubeapi.Interface, task *crv1.Pgta
return
}
- //create marker (clustername, namespace)
+ // create marker (clustername, namespace)
err = PatchpgtaskFailoverStatus(clientset, task, namespace)
if err != nil {
log.Errorf("could not set failover started marker for task %s cluster %s", task.Spec.Name, clusterName)
return
}
- //get initial count of replicas --selector=pg-cluster=clusterName
+ // get initial count of replicas --selector=pg-cluster=clusterName
selector := config.LABEL_PG_CLUSTER + "=" + clusterName
replicaList, err := clientset.CrunchydataV1().Pgreplicas(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
if err != nil {
@@ -69,7 +69,7 @@ func FailoverBase(namespace string, clientset kubeapi.Interface, task *crv1.Pgta
}
log.Debugf("replica count before failover is %d", len(replicaList.Items))
- //publish event for failover
+ // publish event for failover
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
@@ -90,9 +90,9 @@ func FailoverBase(namespace string, clientset kubeapi.Interface, task *crv1.Pgta
log.Error(err)
}
- Failover(cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER], clientset, clusterName, task, namespace, restconfig)
+ _ = Failover(cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER], clientset, clusterName, task, namespace, restconfig)
- //publish event for failover completed
+ // publish event for failover completed
topics = make([]string, 1)
topics[0] = events.EventTopicCluster
@@ -113,17 +113,16 @@ func FailoverBase(namespace string, clientset kubeapi.Interface, task *crv1.Pgta
log.Error(err)
}
- //remove marker
-
+ // remove marker
}
func PatchpgtaskFailoverStatus(clientset pgo.Interface, oldCrd *crv1.Pgtask, namespace string) error {
ctx := context.TODO()
- //change it
+ // change it
oldCrd.Spec.Parameters[config.LABEL_FAILOVER_STARTED] = time.Now().Format(time.RFC3339)
- //create the patch
+ // create the patch
patchBytes, err := json.Marshal(map[string]interface{}{
"spec": map[string]interface{}{
"parameters": oldCrd.Spec.Parameters,
@@ -133,10 +132,9 @@ func PatchpgtaskFailoverStatus(clientset pgo.Interface, oldCrd *crv1.Pgtask, nam
return err
}
- //apply patch
+ // apply patch
_, err6 := clientset.CrunchydataV1().Pgtasks(namespace).
Patch(ctx, oldCrd.Name, types.MergePatchType, patchBytes, metav1.PatchOptions{})
return err6
-
}
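`PatchpgtaskFailoverStatus` above assembles a Kubernetes JSON merge patch (RFC 7386) by marshaling nested maps so that only `spec.parameters` is replaced. A self-contained sketch of building that patch body, with error handling shown explicitly (the helper name and parameter key are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergePatchForParameters builds the body for a JSON merge patch that
// replaces spec.parameters, mirroring how the pgtask status patch is
// assembled in the hunk above.
func mergePatchForParameters(params map[string]string) ([]byte, error) {
	return json.Marshal(map[string]interface{}{
		"spec": map[string]interface{}{
			"parameters": params,
		},
	})
}

func main() {
	patch, err := mergePatchForParameters(map[string]string{
		"failover-started": "2020-10-27T14:58:52Z",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patch))
	// {"spec":{"parameters":{"failover-started":"2020-10-27T14:58:52Z"}}}
}
```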
diff --git a/internal/operator/cluster/failoverlogic.go b/internal/operator/cluster/failoverlogic.go
index 4ffa67e4d4..f1ec1183d6 100644
--- a/internal/operator/cluster/failoverlogic.go
+++ b/internal/operator/cluster/failoverlogic.go
@@ -54,19 +54,19 @@ func Failover(identifier string, clientset kubeapi.Interface, clusterName string
}
log.Debugf("pod selected to failover to is %s", pod.Name)
- updateFailoverStatus(clientset, task, namespace, clusterName, "deleted primary deployment "+clusterName)
+ updateFailoverStatus(clientset, task, namespace, "deleted primary deployment "+clusterName)
- //trigger the failover to the selected replica
+ // trigger the failover to the selected replica
if err := promote(pod, clientset, namespace, restconfig); err != nil {
log.Warn(err)
}
- publishPromoteEvent(identifier, namespace, task.ObjectMeta.Labels[config.LABEL_PGOUSER], clusterName, target)
+ publishPromoteEvent(namespace, task.ObjectMeta.Labels[config.LABEL_PGOUSER], clusterName, target)
- updateFailoverStatus(clientset, task, namespace, clusterName, "promoting pod "+pod.Name+" target "+target)
+ updateFailoverStatus(clientset, task, namespace, "promoting pod "+pod.Name+" target "+target)
- //relabel the deployment with primary labels
- //by setting service-name=clustername
+ // relabel the deployment with primary labels
+ // by setting service-name=clustername
upod, err := clientset.CoreV1().Pods(namespace).Get(ctx, pod.Name, metav1.GetOptions{})
if err != nil {
log.Error(err)
@@ -74,8 +74,8 @@ func Failover(identifier string, clientset kubeapi.Interface, clusterName string
return err
}
- //set the service-name label to the cluster name to match
- //the primary service selector
+ // set the service-name label to the cluster name to match
+ // the primary service selector
log.Debugf("setting label on pod %s=%s", config.LABEL_SERVICE_NAME, clusterName)
patch, err := kubeapi.NewMergePatch().Add("metadata", "labels", config.LABEL_SERVICE_NAME)(clusterName).Bytes()
@@ -100,9 +100,9 @@ func Failover(identifier string, clientset kubeapi.Interface, clusterName string
return err
}
- updateFailoverStatus(clientset, task, namespace, clusterName, "updating label deployment...pod "+pod.Name+"was the failover target...failover completed")
+ updateFailoverStatus(clientset, task, namespace, "updating label deployment...pod "+pod.Name+" was the failover target...failover completed")
- //update the pgcluster current-primary to new deployment name
+ // update the pgcluster current-primary to new deployment name
cluster, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(ctx, clusterName, metav1.GetOptions{})
if err != nil {
log.Errorf("could not find pgcluster %s with labels", clusterName)
@@ -117,15 +117,14 @@ func Failover(identifier string, clientset kubeapi.Interface, clusterName string
}
return nil
-
}
-func updateFailoverStatus(clientset pgo.Interface, task *crv1.Pgtask, namespace, clusterName, message string) {
+func updateFailoverStatus(clientset pgo.Interface, task *crv1.Pgtask, namespace, message string) {
ctx := context.TODO()
log.Debugf("updateFailoverStatus namespace=[%s] taskName=[%s] message=[%s]", namespace, task.Name, message)
- //update the task
+ // update the task
t, err := clientset.CrunchydataV1().Pgtasks(task.Namespace).Get(ctx, task.Name, metav1.GetOptions{})
if err != nil {
return
@@ -139,14 +138,12 @@ func updateFailoverStatus(clientset pgo.Interface, task *crv1.Pgtask, namespace,
return
}
*task = *t
-
}
func promote(
pod *v1.Pod,
clientset kubernetes.Interface,
namespace string, restconfig *rest.Config) error {
-
// generate the curl command that will be run on the pod selected for the failover in order
// to trigger the failover and promote that specific pod to primary
command := make([]string, 3)
@@ -165,7 +162,7 @@ func promote(
return err
}
-func publishPromoteEvent(identifier, namespace, username, clusterName, target string) {
+func publishPromoteEvent(namespace, username, clusterName, target string) {
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
@@ -185,7 +182,6 @@ func publishPromoteEvent(identifier, namespace, username, clusterName, target st
if err != nil {
log.Error(err.Error())
}
-
}
// RemovePrimaryOnRoleChangeTag sets the 'primary_on_role_change' tag to null in the
diff --git a/internal/operator/cluster/pgadmin.go b/internal/operator/cluster/pgadmin.go
index 529bba6f13..9ed1bdbea5 100644
--- a/internal/operator/cluster/pgadmin.go
+++ b/internal/operator/cluster/pgadmin.go
@@ -346,6 +346,7 @@ func createPgAdminDeployment(clientset kubernetes.Interface, cluster *crv1.Pgclu
// This password is throwaway so low entropy generation method is fine
randBytes := make([]byte, initPassLen)
// weakrand Read is always nil error
+ // #nosec: G404
weakrand.Read(randBytes)
throwawayPass := base64.RawStdEncoding.EncodeToString(randBytes)
@@ -364,7 +365,7 @@ func createPgAdminDeployment(clientset kubernetes.Interface, cluster *crv1.Pgclu
// For debugging purposes, put the template substitution in stdout
if operator.CRUNCHY_DEBUG {
- config.PgAdminTemplate.Execute(os.Stdout, fields)
+ _ = config.PgAdminTemplate.Execute(os.Stdout, fields)
}
// perform the actual template substitution
@@ -409,7 +410,7 @@ func createPgAdminService(clientset kubernetes.Interface, cluster *crv1.Pgcluste
// For debugging purposes, put the template substitution in stdout
if operator.CRUNCHY_DEBUG {
- config.PgAdminServiceTemplate.Execute(os.Stdout, fields)
+ _ = config.PgAdminServiceTemplate.Execute(os.Stdout, fields)
}
// perform the actual template substitution
diff --git a/internal/operator/cluster/pgbouncer.go b/internal/operator/cluster/pgbouncer.go
index 0a78e305f2..7ee7487bd4 100644
--- a/internal/operator/cluster/pgbouncer.go
+++ b/internal/operator/cluster/pgbouncer.go
@@ -557,7 +557,7 @@ func createPgBouncerDeployment(clientset kubernetes.Interface, cluster *crv1.Pgc
// For debugging purposes, put the template substitution in stdout
if operator.CRUNCHY_DEBUG {
- config.PgbouncerTemplate.Execute(os.Stdout, fields)
+ _ = config.PgbouncerTemplate.Execute(os.Stdout, fields)
}
// perform the actual template substitution
diff --git a/internal/operator/cluster/rmdata.go b/internal/operator/cluster/rmdata.go
index d82405fd18..27c224eaec 100644
--- a/internal/operator/cluster/rmdata.go
+++ b/internal/operator/cluster/rmdata.go
@@ -72,7 +72,7 @@ func CreateRmdataJob(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespa
}
if operator.CRUNCHY_DEBUG {
- config.RmdatajobTemplate.Execute(os.Stdout, jobFields)
+ _ = config.RmdatajobTemplate.Execute(os.Stdout, jobFields)
}
newjob := v1batch.Job{}
diff --git a/internal/operator/cluster/service.go b/internal/operator/cluster/service.go
index d2438f77c5..bca0eb9e45 100644
--- a/internal/operator/cluster/service.go
+++ b/internal/operator/cluster/service.go
@@ -37,7 +37,7 @@ func CreateService(clientset kubernetes.Interface, fields *ServiceTemplateFields
ctx := context.TODO()
var serviceDoc bytes.Buffer
- //create the service if it doesn't exist
+ // create the service if it doesn't exist
_, err := clientset.CoreV1().Services(namespace).Get(ctx, fields.Name, metav1.GetOptions{})
if err != nil {
@@ -48,7 +48,7 @@ func CreateService(clientset kubernetes.Interface, fields *ServiceTemplateFields
}
if operator.CRUNCHY_DEBUG {
- config.ServiceTemplate.Execute(os.Stdout, fields)
+ _ = config.ServiceTemplate.Execute(os.Stdout, fields)
}
service := corev1.Service{}
@@ -62,5 +62,4 @@ func CreateService(clientset kubernetes.Interface, fields *ServiceTemplateFields
}
return err
-
}
diff --git a/internal/operator/cluster/standby.go b/internal/operator/cluster/standby.go
index 30bcc7edbe..0ec759e025 100644
--- a/internal/operator/cluster/standby.go
+++ b/internal/operator/cluster/standby.go
@@ -189,10 +189,10 @@ func EnableStandby(clientset kubernetes.Interface, cluster crv1.Pgcluster) error
// grab the json stored in the config annotation
configJSONStr := dcsConfigMap.ObjectMeta.Annotations["config"]
var configJSON map[string]interface{}
- json.Unmarshal([]byte(configJSONStr), &configJSON)
+ _ = json.Unmarshal([]byte(configJSONStr), &configJSON)
var standbyJSON map[string]interface{}
- json.Unmarshal([]byte(standbyClusterConfigJSON), &standbyJSON)
+ _ = json.Unmarshal([]byte(standbyClusterConfigJSON), &standbyJSON)
// set standby_cluster to default config unless already set
if _, ok := configJSON["standby_cluster"]; !ok {
@@ -244,10 +244,9 @@ func EnableStandby(clientset kubernetes.Interface, cluster crv1.Pgcluster) error
}
func publishStandbyEnabled(cluster *crv1.Pgcluster) error {
-
clusterName := cluster.Name
- //capture the cluster creation event
+ // capture the cluster creation event
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
diff --git a/internal/operator/cluster/upgrade.go b/internal/operator/cluster/upgrade.go
index ecbd9d9985..4bb6d63a53 100644
--- a/internal/operator/cluster/upgrade.go
+++ b/internal/operator/cluster/upgrade.go
@@ -77,7 +77,7 @@ func AddUpgrade(clientset kubeapi.Interface, upgrade *crv1.Pgtask, namespace str
}
// update the workflow status to 'in progress' while the upgrade takes place
- updateUpgradeWorkflow(clientset, namespace, upgrade.ObjectMeta.Labels[crv1.PgtaskWorkflowID], crv1.PgtaskUpgradeInProgress)
+ _ = updateUpgradeWorkflow(clientset, namespace, upgrade.ObjectMeta.Labels[crv1.PgtaskWorkflowID], crv1.PgtaskUpgradeInProgress)
// grab the existing pgo version
oldpgoversion := pgcluster.ObjectMeta.Labels[config.LABEL_PGO_VERSION]
@@ -100,10 +100,10 @@ func AddUpgrade(clientset kubeapi.Interface, upgrade *crv1.Pgtask, namespace str
SetReplicaNumber(pgcluster, replicas)
// create the 'pgha-config' configmap while taking the init value from any existing 'pgha-default-config' configmap
- createUpgradePGHAConfigMap(clientset, pgcluster, namespace)
+ _ = createUpgradePGHAConfigMap(clientset, pgcluster, namespace)
// delete the existing pgcluster CRDs and other resources that will be recreated
- deleteBeforeUpgrade(clientset, pgcluster.Name, currentPrimary, namespace, pgcluster.Spec.Standby)
+ deleteBeforeUpgrade(clientset, pgcluster.Name, currentPrimary, namespace)
// recreate new Backrest Repo secret that was just deleted
recreateBackrestRepoSecret(clientset, upgradeTargetClusterName, namespace, operator.PgoNamespace)
@@ -222,14 +222,14 @@ func handleReplicas(clientset kubeapi.Interface, clusterName, currentPrimaryPVC,
log.Debugf("scaling down pgreplica: %s", replicaList.Items[index].Name)
ScaleDownBase(clientset, &replicaList.Items[index], namespace)
log.Debugf("deleting pgreplica CRD: %s", replicaList.Items[index].Name)
- clientset.CrunchydataV1().Pgreplicas(namespace).Delete(ctx, replicaList.Items[index].Name, metav1.DeleteOptions{})
+ _ = clientset.CrunchydataV1().Pgreplicas(namespace).Delete(ctx, replicaList.Items[index].Name, metav1.DeleteOptions{})
// if the existing replica PVC is not being used as the primary PVC, delete
// note this will not remove any leftover PVCs from previous failovers,
// those will require manual deletion so as to avoid any accidental
// deletion of valid PVCs.
if replicaList.Items[index].Name != currentPrimaryPVC {
deletePropagation := metav1.DeletePropagationForeground
- clientset.
+ _ = clientset.
CoreV1().PersistentVolumeClaims(namespace).
Delete(ctx, replicaList.Items[index].Name, metav1.DeleteOptions{PropagationPolicy: &deletePropagation})
log.Debugf("deleting replica pvc: %s", replicaList.Items[index].Name)
@@ -256,7 +256,7 @@ func SetReplicaNumber(pgcluster *crv1.Pgcluster, numReplicas string) {
// deleteBeforeUpgrade deletes the deployments, services, pgcluster, jobs, tasks and default configmaps before attempting
// to upgrade the pgcluster deployment. This preserves existing secrets, non-standard configmaps and service definitions
// for use in the newly upgraded cluster.
-func deleteBeforeUpgrade(clientset kubeapi.Interface, clusterName, currentPrimary, namespace string, isStandby bool) {
+func deleteBeforeUpgrade(clientset kubeapi.Interface, clusterName, currentPrimary, namespace string) {
ctx := context.TODO()
// first, get all deployments for the pgcluster in question
@@ -287,11 +287,11 @@ func deleteBeforeUpgrade(clientset kubeapi.Interface, clusterName, currentPrimar
log.Debug(waitStatus)
// delete the pgcluster
- clientset.CrunchydataV1().Pgclusters(namespace).Delete(ctx, clusterName, metav1.DeleteOptions{})
+ _ = clientset.CrunchydataV1().Pgclusters(namespace).Delete(ctx, clusterName, metav1.DeleteOptions{})
// delete all existing job references
deletePropagation := metav1.DeletePropagationForeground
- clientset.
+ _ = clientset.
BatchV1().Jobs(namespace).
DeleteCollection(ctx,
metav1.DeleteOptions{PropagationPolicy: &deletePropagation},
@@ -307,11 +307,11 @@ func deleteBeforeUpgrade(clientset kubeapi.Interface, clusterName, currentPrimar
// delete the leader configmap used by the Postgres Operator since this information may change after
// the upgrade is complete
// Note: deletion is required for cluster recreation
- clientset.CoreV1().ConfigMaps(namespace).Delete(ctx, clusterName+"-leader", metav1.DeleteOptions{})
+ _ = clientset.CoreV1().ConfigMaps(namespace).Delete(ctx, clusterName+"-leader", metav1.DeleteOptions{})
// delete the '-pgha-default-config' configmap, if it exists so the config syncer
// will not try to use it instead of '-pgha-config'
- clientset.CoreV1().ConfigMaps(namespace).Delete(ctx, clusterName+"-pgha-default-config", metav1.DeleteOptions{})
+ _ = clientset.CoreV1().ConfigMaps(namespace).Delete(ctx, clusterName+"-pgha-default-config", metav1.DeleteOptions{})
}
// deploymentWait is modified from cluster.waitForDeploymentDelete. It simply waits for the current primary deployment
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index 030e706404..da27c3c614 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -37,8 +37,10 @@ import (
)
// consolidate with cluster.affinityTemplateFields
-const AffinityInOperator = "In"
-const AFFINITY_NOTINOperator = "NotIn"
+const (
+ AffinityInOperator = "In"
+ AFFINITY_NOTINOperator = "NotIn"
+)
// PGHAConfigMapSuffix defines the suffix for the name of the PGHA configMap created for each PG
// cluster
@@ -99,7 +101,7 @@ type exporterTemplateFields struct {
TLSOnly bool
}
-//consolidate
+// consolidate
type badgerTemplateFields struct {
CCPImageTag string
CCPImagePrefix string
@@ -253,6 +255,7 @@ func GetAnnotations(cluster *crv1.Pgcluster, annotationType crv1.ClusterAnnotati
for k, v := range cluster.Spec.Annotations.Postgres {
annotations[k] = v
}
+	case crv1.ClusterAnnotationGlobal: // no-op as it's handled in the loop above
}
// if the map is empty, return an empty string
@@ -262,7 +265,6 @@ func GetAnnotations(cluster *crv1.Pgcluster, annotationType crv1.ClusterAnnotati
// let's try to create a JSON document out of the above
doc, err := json.Marshal(annotations)
-
// if there is an error, warn in our logs and return an empty string
if err != nil {
log.Errorf("could not set custom annotations: %q", err)
@@ -272,7 +274,7 @@ func GetAnnotations(cluster *crv1.Pgcluster, annotationType crv1.ClusterAnnotati
return string(doc)
}
-//consolidate with cluster.GetPgbackrestEnvVars
+// consolidate with cluster.GetPgbackrestEnvVars
func GetPgbackrestEnvVars(cluster *crv1.Pgcluster, backrestEnabled, depName, port, storageType string) string {
if backrestEnabled == "true" {
fields := PgbackrestEnvVarsTemplateFields{
@@ -294,14 +296,12 @@ func GetPgbackrestEnvVars(cluster *crv1.Pgcluster, backrestEnabled, depName, por
return doc.String()
}
return ""
-
}
// GetPgbackrestBootstrapEnvVars returns a string containing the pgBackRest environment variables
// for a bootstrap job
func GetPgbackrestBootstrapEnvVars(restoreClusterName, depName string,
restoreFromSecret *v1.Secret) (string, error) {
-
fields := PgbackrestEnvVarsTemplateFields{
PgbackrestStanza: "db",
PgbackrestDBPath: fmt.Sprintf("/pgdata/%s", depName),
@@ -331,7 +331,6 @@ func GetBackrestDeployment(clientset kubernetes.Interface, cluster *crv1.Pgclust
}
func GetBadgerAddon(clientset kubernetes.Interface, namespace string, cluster *crv1.Pgcluster, pgbadger_target string) string {
-
spec := cluster.Spec
if cluster.Labels[config.LABEL_BADGER] == "true" {
@@ -350,7 +349,7 @@ func GetBadgerAddon(clientset kubernetes.Interface, namespace string, cluster *c
}
if CRUNCHY_DEBUG {
- config.BadgerTemplate.Execute(os.Stdout, badgerTemplateFields)
+ _ = config.BadgerTemplate.Execute(os.Stdout, badgerTemplateFields)
}
return badgerDoc.String()
}
@@ -396,16 +395,16 @@ func GetExporterAddon(spec crv1.PgclusterSpec) string {
return exporterDoc.String()
}
-//consolidate with cluster.GetConfVolume
+// consolidate with cluster.GetConfVolume
func GetConfVolume(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace string) string {
ctx := context.TODO()
var configMapStr string
- //check for user provided configmap
+ // check for user provided configmap
if cl.Spec.CustomConfig != "" {
_, err := clientset.CoreV1().ConfigMaps(namespace).Get(ctx, cl.Spec.CustomConfig, metav1.GetOptions{})
if err != nil {
- //you should NOT get this error because of apiserver validation of this value!
+ // you should NOT get this error because of apiserver validation of this value!
log.Errorf("%s was not found, error, skipping user provided configMap", cl.Spec.CustomConfig)
} else {
log.Debugf("user provided configmap %s was used for this cluster", cl.Spec.CustomConfig)
@@ -413,7 +412,7 @@ func GetConfVolume(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace
}
}
- //check for global custom configmap "pgo-custom-pg-config"
+ // check for global custom configmap "pgo-custom-pg-config"
_, err := clientset.CoreV1().ConfigMaps(namespace).Get(ctx, config.GLOBAL_CUSTOM_CONFIGMAP, metav1.GetOptions{})
if err == nil {
return `"pgo-custom-pg-config"`
@@ -494,7 +493,6 @@ func GetInstanceDeployments(clientset kubernetes.Interface, cluster *crv1.Pgclus
clusterDeployments, err := clientset.
AppsV1().Deployments(cluster.Namespace).
List(ctx, metav1.ListOptions{LabelSelector: selector})
-
if err != nil {
return nil, err
}
@@ -653,7 +651,7 @@ func GetAffinity(nodeLabelKey, nodeLabelValue string, affoperator string) string
}
if CRUNCHY_DEBUG {
- config.AffinityTemplate.Execute(os.Stdout, affinityTemplateFields)
+ _ = config.AffinityTemplate.Execute(os.Stdout, affinityTemplateFields)
}
return affinityDoc.String()
@@ -662,7 +660,6 @@ func GetAffinity(nodeLabelKey, nodeLabelValue string, affoperator string) string
// GetPodAntiAffinity returns the populated pod anti-affinity json that should be attached to
// the various pods comprising the pg cluster
func GetPodAntiAffinity(cluster *crv1.Pgcluster, deploymentType crv1.PodAntiAffinityDeployment, podAntiAffinityType crv1.PodAntiAffinityType) string {
-
log.Debugf("GetPodAnitAffinity with clusterName=[%s]", cluster.Spec.Name)
// run through the checks on the pod anti-affinity type to see if it is not
@@ -689,6 +686,7 @@ func GetPodAntiAffinity(cluster *crv1.Pgcluster, deploymentType crv1.PodAntiAffi
return ""
case crv1.PodAntiAffinityRequired:
templateAffinityType = requireScheduleIgnoreExec
+	case crv1.PodAntiAffinityPreffered: // no-op as it's the default value
}
podAntiAffinityTemplateFields := podAntiAffinityTemplateFields{
@@ -708,7 +706,7 @@ func GetPodAntiAffinity(cluster *crv1.Pgcluster, deploymentType crv1.PodAntiAffi
}
if CRUNCHY_DEBUG {
- config.PodAntiAffinityTemplate.Execute(os.Stdout, podAntiAffinityTemplateFields)
+ _ = config.PodAntiAffinityTemplate.Execute(os.Stdout, podAntiAffinityTemplateFields)
}
return podAntiAffinityDoc.String()
@@ -751,6 +749,7 @@ func GetPodAntiAffinityType(cluster *crv1.Pgcluster, deploymentType crv1.PodAnti
return podAntiAffinityType
}
}
+	case crv1.PodAntiAffinityDeploymentDefault: // no-op as it's the default setting
}
// check to see if the value for the cluster anti-affinity is set. If so, use
@@ -795,7 +794,6 @@ func GetPgmonitorEnvVars(cluster *crv1.Pgcluster) string {
// and pgBackRest deployments.
func GetPgbackrestS3EnvVars(cluster crv1.Pgcluster, clientset kubernetes.Interface,
ns string) string {
-
if !strings.Contains(cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], "s3") {
return ""
}
@@ -865,7 +863,6 @@ func GetPgbackrestS3EnvVars(cluster crv1.Pgcluster, clientset kubernetes.Interfa
// option is used, then returns the pgBackRest S3 configuration value to either enable
// or disable TLS verification as the expected string value.
func GetS3VerifyTLSSetting(cluster *crv1.Pgcluster) string {
-
// If the pgcluster has already been set, either by the PGO client or from the
// CRD definition, parse the boolean value given.
// If this value is not set, then parse the value stored in the default
@@ -890,7 +887,6 @@ func GetS3VerifyTLSSetting(cluster *crv1.Pgcluster) string {
// for inclusion in the PG and pgBackRest deployments.
func GetPgbackrestBootstrapS3EnvVars(pgDataSourceRestoreFrom string,
restoreFromSecret *v1.Secret) string {
-
s3EnvVars := PgbackrestS3EnvVarsTemplateFields{
PgbackrestS3Key: util.BackRestRepoSecretKeyAWSS3KeyAWSS3Key,
PgbackrestS3KeySecret: util.BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret,
@@ -1010,7 +1006,6 @@ func OverrideClusterContainerImages(containers []v1.Container) {
// into the current buffer
func writeTablespaceJSON(w *bytes.Buffer, jsonFields interface{}) error {
json, err := json.Marshal(jsonFields)
-
// if there is an error, log the error and continue
if err != nil {
return err
diff --git a/internal/operator/clusterutilities_test.go b/internal/operator/clusterutilities_test.go
index 081e5eab62..7a3643f777 100644
--- a/internal/operator/clusterutilities_test.go
+++ b/internal/operator/clusterutilities_test.go
@@ -123,7 +123,6 @@ func TestGetAnnotations(t *testing.T) {
}
func TestOverrideClusterContainerImages(t *testing.T) {
-
containerDefaults := map[string]struct {
name string
image string
@@ -250,7 +249,6 @@ func TestOverrideClusterContainerImages(t *testing.T) {
}
func TestGetPgbackrestBootstrapS3EnvVars(t *testing.T) {
-
// create a fake client that will be used to "fake" the initialization of the operator for
// this test
fakePGOClient, err := fakekubeapi.NewFakePGOClient()
@@ -283,7 +281,6 @@ func TestGetPgbackrestBootstrapS3EnvVars(t *testing.T) {
// test all env vars are properly set according the contents of an existing pgBackRest
// repo secret
t.Run("populate from secret", func(t *testing.T) {
-
backRestRepoSecret := mockBackRestRepoSecret.DeepCopy()
s3EnvVars := GetPgbackrestBootstrapS3EnvVars(defaultRestoreFromCluster, backRestRepoSecret)
// massage the results a bit so that we can parse as proper JSON to validate contents
@@ -332,7 +329,6 @@ func TestGetPgbackrestBootstrapS3EnvVars(t *testing.T) {
// test that the proper default S3 URI style is set for the bootstrap S3 env vars when the
// S3 URI style annotation is an empty string in a pgBackRest repo secret
t.Run("default URI style", func(t *testing.T) {
-
// the expected default for the pgBackRest URI style
defaultURIStyle := "host"
diff --git a/internal/operator/common.go b/internal/operator/common.go
index a3917dfc27..819fdc84d5 100644
--- a/internal/operator/common.go
+++ b/internal/operator/common.go
@@ -43,12 +43,16 @@ const (
defaultRegistry = "registry.developers.crunchydata.com/crunchydata"
)
-var CRUNCHY_DEBUG bool
-var NAMESPACE string
+var (
+ CRUNCHY_DEBUG bool
+ NAMESPACE string
+)
-var InstallationName string
-var PgoNamespace string
-var EventTCPAddress = "localhost:4150"
+var (
+ InstallationName string
+ PgoNamespace string
+ EventTCPAddress = "localhost:4150"
+)
var Pgo config.PgoConfig
@@ -75,7 +79,6 @@ type containerResourcesTemplateFields struct {
var defaultBackrestRepoConfigKeys = []string{"config", "sshd_config", "aws-s3-ca.crt"}
func Initialize(clientset kubernetes.Interface) {
-
tmp := os.Getenv("CRUNCHY_DEBUG")
if tmp == "true" {
CRUNCHY_DEBUG = true
@@ -170,7 +173,6 @@ func GetPodSecurityContext(supplementalGroups []int64) string {
// ...convert to JSON. Errors are ignored
doc, err := json.Marshal(securityContext)
-
// if there happens to be an error, warn about it
if err != nil {
log.Warn(err)
@@ -223,7 +225,7 @@ func GetResourcesJSON(resources, limits v1.ResourceList) string {
}
if log.GetLevel() == log.DebugLevel {
- config.ContainerResourcesTemplate.Execute(os.Stdout, fields)
+ _ = config.ContainerResourcesTemplate.Execute(os.Stdout, fields)
}
return doc.String()
@@ -314,7 +316,6 @@ func initializeControllerRefreshIntervals() {
// attempting to utilize the worker counts defined in the pgo.yaml config file, and if not
// present then falling back to a default value.
func initializeControllerWorkerCounts() {
-
if Pgo.Pgo.ConfigMapWorkerCount == nil {
log.Debugf("ConfigMapWorkerCount not set, defaulting to %d worker(s)",
config.DefaultConfigMapWorkerCount)
@@ -377,8 +378,7 @@ func initializeOperatorBackrestSecret(clientset kubernetes.Interface, namespace
secret, err := clientset.
CoreV1().Secrets(namespace).
Get(ctx, config.SecretOperatorBackrestRepoConfig, metav1.GetOptions{})
-
- // if there is a true error, return. Otherwise, initialize a new Secret
+ // if there is a true error, return. Otherwise, initialize a new Secret
if err != nil {
if !kerrors.IsNotFound(err) {
return err
@@ -436,7 +436,6 @@ func initializeOperatorBackrestSecret(clientset kubernetes.Interface, namespace
// namespaces as needed (or as permitted by the current operator mode), and returning a valid list
// of namespaces for the current Operator install.
func SetupNamespaces(clientset kubernetes.Interface) ([]string, error) {
-
// First set the proper namespace operating mode for the Operator install. The mode identified
// determines whether or not certain namespace capabilities are enabled.
if err := setNamespaceOperatingMode(clientset); err != nil {
diff --git a/internal/operator/config/configutil.go b/internal/operator/config/configutil.go
index 1cbefba2ac..9c11483dac 100644
--- a/internal/operator/config/configutil.go
+++ b/internal/operator/config/configutil.go
@@ -42,10 +42,8 @@ const (
pghLocalConfigSuffix = "-local-config"
)
-var (
- // ErrMissingClusterConfig is the error thrown when configuration is missing from a configMap
- ErrMissingClusterConfig error = errors.New("Configuration is missing from configMap")
-)
+// ErrMissingClusterConfig is the error thrown when configuration is missing from a configMap
+var ErrMissingClusterConfig error = errors.New("Configuration is missing from configMap")
// Syncer defines a resource that is able to sync its configuration stored configuration with a
// service, application, etc.
diff --git a/internal/operator/config/dcs.go b/internal/operator/config/dcs.go
index dc7663acbc..fe405d05c1 100644
--- a/internal/operator/config/dcs.go
+++ b/internal/operator/config/dcs.go
@@ -99,7 +99,6 @@ type SlotDCS struct {
// include a configMap that will be used to configure the DCS for a specific cluster.
func NewDCS(configMap *corev1.ConfigMap, kubeclientset kubernetes.Interface,
clusterScope string) *DCS {
-
clusterName := configMap.GetLabels()[config.LABEL_PG_CLUSTER]
return &DCS{
@@ -114,7 +113,6 @@ func NewDCS(configMap *corev1.ConfigMap, kubeclientset kubernetes.Interface,
// configuration is missing from the configMap, then an attempt is made to add it by refreshing
// the DCS configuration.
func (d *DCS) Sync() error {
-
clusterName := d.configMap.GetObjectMeta().GetLabels()[config.LABEL_PG_CLUSTER]
namespace := d.configMap.GetObjectMeta().GetNamespace()
@@ -123,7 +121,6 @@ func (d *DCS) Sync() error {
if err := d.apply(); err != nil &&
errors.Is(err, ErrMissingClusterConfig) {
-
if err := d.refresh(); err != nil {
return err
}
@@ -140,7 +137,6 @@ func (d *DCS) Sync() error {
// Update updates the contents of the DCS configuration stored within the configMap included
// in the DCS.
func (d *DCS) Update(dcsConfig *DCSConfig) error {
-
clusterName := d.configMap.GetObjectMeta().GetLabels()[config.LABEL_PG_CLUSTER]
namespace := d.configMap.GetObjectMeta().GetNamespace()
@@ -167,7 +163,6 @@ func (d *DCS) Update(dcsConfig *DCSConfig) error {
// "-config" configMap, with the contents of the ""
// configuration included in the DCS's configMap.
func (d *DCS) apply() error {
-
clusterName := d.configMap.GetLabels()[config.LABEL_PG_CLUSTER]
namespace := d.configMap.GetObjectMeta().GetNamespace()
@@ -250,7 +245,6 @@ func (d *DCS) getClusterDCSConfig() (*DCSConfig, map[string]json.RawMessage, err
// configMap, i.e. the contents of the "" configuration unmarshalled
// into a DCSConfig struct.
func (d *DCS) GetDCSConfig() (*DCSConfig, map[string]json.RawMessage, error) {
-
dcsYAML, ok := d.configMap.Data[d.configName]
if !ok {
return nil, nil, ErrMissingClusterConfig
@@ -291,7 +285,6 @@ func (d *DCS) patchDCSAnnotation(content string) error {
// configMap with the current DCS configuration for the cluster. Specifically, it is updated with
// the configuration stored in the "config" annotation of the "-config" configMap.
func (d *DCS) refresh() error {
-
clusterName := d.configMap.Labels[config.LABEL_PG_CLUSTER]
namespace := d.configMap.GetObjectMeta().GetNamespace()
diff --git a/internal/operator/config/localdb.go b/internal/operator/config/localdb.go
index d7eef19bf8..2e4a630563 100644
--- a/internal/operator/config/localdb.go
+++ b/internal/operator/config/localdb.go
@@ -38,7 +38,8 @@ import (
var (
// readConfigCMD is the command used to read local cluster configuration in a database
// container
- readConfigCMD []string = []string{"bash", "-c",
+ readConfigCMD []string = []string{
+ "bash", "-c",
"/opt/crunchy/bin/yq r /tmp/postgres-ha-bootstrap.yaml postgresql | " +
"/opt/crunchy/bin/yq p - postgresql",
}
@@ -120,7 +121,6 @@ type CreateReplicaMethod struct {
// servers.
func NewLocalDB(configMap *corev1.ConfigMap, restConfig *rest.Config,
kubeclientset kubernetes.Interface) (*LocalDB, error) {
-
clusterName := configMap.GetLabels()[config.LABEL_PG_CLUSTER]
namespace := configMap.GetObjectMeta().GetNamespace()
@@ -142,7 +142,6 @@ func NewLocalDB(configMap *corev1.ConfigMap, restConfig *rest.Config,
// configMap, then an attempt is made to add it by refreshing that specific configuration. Also, any
// configurations within the configMap associated with servers that no longer exist are removed.
func (l *LocalDB) Sync() error {
-
clusterName := l.configMap.GetObjectMeta().GetLabels()[config.LABEL_PG_CLUSTER]
namespace := l.configMap.GetObjectMeta().GetNamespace()
@@ -156,7 +155,7 @@ func (l *LocalDB) Sync() error {
// delete any configs that are in the configMap but don't have an associated DB server in the
// cluster
go func() {
- l.clean()
+ _ = l.clean()
wg.Done()
}()
@@ -166,11 +165,9 @@ func (l *LocalDB) Sync() error {
wg.Add(1)
go func(config string) {
-
// attempt to apply DCS config
if err := l.apply(config); err != nil &&
errors.Is(err, ErrMissingClusterConfig) {
-
if err := l.refresh(config); err != nil {
// log the error and move on
log.Error(err)
@@ -195,7 +192,6 @@ func (l *LocalDB) Sync() error {
// Update updates the contents of the configuration for a specific database server in
// the PG cluster, specifically within the configMap included in the LocalDB.
func (l *LocalDB) Update(configName string, localDBConfig LocalDBConfig) error {
-
clusterName := l.configMap.GetObjectMeta().GetLabels()[config.LABEL_PG_CLUSTER]
namespace := l.configMap.GetObjectMeta().GetNamespace()
@@ -255,7 +251,6 @@ func (l *LocalDB) apply(configName string) error {
stdout, stderr, err := kubeapi.ExecToPodThroughAPI(l.restConfig, l.kubeclientset, applyCommand,
dbPod.Spec.Containers[0].Name, dbPod.GetName(), namespace, nil)
-
if err != nil {
log.Error(stderr, stdout)
return err
@@ -271,7 +266,7 @@ func (l *LocalDB) apply(configName string) error {
// LocalDB if the database server they are associated with no longer exists
func (l *LocalDB) clean() error {
ctx := context.TODO()
- var patch = kubeapi.NewJSONPatch()
+ patch := kubeapi.NewJSONPatch()
var cmlocalConfigs []string
// first grab all current local configs from the configMap
@@ -320,7 +315,6 @@ func (l *LocalDB) getLocalConfigFromCluster(configName string) (*LocalDBConfig,
dbPodList, err := l.kubeclientset.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{
LabelSelector: selector,
})
-
if err != nil {
return nil, err
}
@@ -353,7 +347,6 @@ func (l *LocalDB) getLocalConfigFromCluster(configName string) (*LocalDBConfig,
// configMap for a specific database server, i.e. the contents of the ""
// configuration unmarshalled into a LocalConfig struct.
func (l *LocalDB) getLocalConfig(configName string) (string, error) {
-
localYAML, ok := l.configMap.Data[configName]
if !ok {
return "", ErrMissingClusterConfig
@@ -379,7 +372,6 @@ func (l *LocalDB) getLocalConfig(configName string) (string, error) {
// with the contents of the Patroni YAML configuration file stored in the container running the
// server.
func (l *LocalDB) refresh(configName string) error {
-
clusterName := l.configMap.GetObjectMeta().GetLabels()[config.LABEL_PG_CLUSTER]
namespace := l.configMap.GetObjectMeta().GetNamespace()
diff --git a/internal/operator/operatorupgrade/version-check.go b/internal/operator/operatorupgrade/version-check.go
index dce4196634..8544c77b77 100644
--- a/internal/operator/operatorupgrade/version-check.go
+++ b/internal/operator/operatorupgrade/version-check.go
@@ -45,7 +45,8 @@ func CheckVersion(clientset pgo.Interface, ns string) error {
}
// where the Operator versions do not match, label the pgclusters accordingly
- for _, cluster := range clusterList.Items {
+ for i := range clusterList.Items {
+ cluster := &clusterList.Items[i]
if msgs.PGO_VERSION != cluster.Spec.UserLabels[config.LABEL_PGO_VERSION] {
log.Infof("operator version check - pgcluster %s version is currently %s, current version is %s", cluster.Name, cluster.Spec.UserLabels[config.LABEL_PGO_VERSION], msgs.PGO_VERSION)
// check if the annotations map has been created
@@ -54,8 +55,7 @@ func CheckVersion(clientset pgo.Interface, ns string) error {
cluster.Annotations = map[string]string{}
}
cluster.Annotations[config.ANNOTATION_IS_UPGRADED] = config.ANNOTATIONS_FALSE
- _, err = clientset.CrunchydataV1().Pgclusters(ns).Update(ctx, &cluster, metav1.UpdateOptions{})
- if err != nil {
+ if _, err := clientset.CrunchydataV1().Pgclusters(ns).Update(ctx, cluster, metav1.UpdateOptions{}); err != nil {
return fmt.Errorf("%s: %w", ErrUnsuccessfulVersionCheck, err)
}
}
@@ -69,7 +69,8 @@ func CheckVersion(clientset pgo.Interface, ns string) error {
}
// where the Operator versions do not match, label the replicas accordingly
- for _, replica := range replicaList.Items {
+ for i := range replicaList.Items {
+ replica := &replicaList.Items[i]
if msgs.PGO_VERSION != replica.Spec.UserLabels[config.LABEL_PGO_VERSION] {
log.Infof("operator version check - pgcluster replica %s version is currently %s, current version is %s", replica.Name, replica.Spec.UserLabels[config.LABEL_PGO_VERSION], msgs.PGO_VERSION)
// check if the annotations map has been created
@@ -78,8 +79,7 @@ func CheckVersion(clientset pgo.Interface, ns string) error {
replica.Annotations = map[string]string{}
}
replica.Annotations[config.ANNOTATION_IS_UPGRADED] = config.ANNOTATIONS_FALSE
- _, err = clientset.CrunchydataV1().Pgreplicas(ns).Update(ctx, &replica, metav1.UpdateOptions{})
- if err != nil {
+ if _, err := clientset.CrunchydataV1().Pgreplicas(ns).Update(ctx, replica, metav1.UpdateOptions{}); err != nil {
return fmt.Errorf("%s: %w", ErrUnsuccessfulVersionCheck, err)
}
}
diff --git a/internal/operator/pgbackrest.go b/internal/operator/pgbackrest.go
index 42a8f645d1..8e369e764c 100644
--- a/internal/operator/pgbackrest.go
+++ b/internal/operator/pgbackrest.go
@@ -59,8 +59,8 @@ func addBackRestConfigDirectoryVolumeAndMounts(podSpec *v1.PodSpec, volumeName s
// Any projections are included as custom pgBackRest configuration.
func AddBackRestConfigVolumeAndMounts(podSpec *v1.PodSpec, clusterName string, projections []v1.VolumeProjection) {
var combined []v1.VolumeProjection
- var defaultConfigNames = clusterName + "-config-backrest"
- var varTrue = true
+ defaultConfigNames := clusterName + "-config-backrest"
+ varTrue := true
// Start with custom configurations from the CRD.
combined = append(combined, projections...)
diff --git a/internal/operator/pgdump/dump.go b/internal/operator/pgdump/dump.go
index 060043d8c1..1df0efbc79 100644
--- a/internal/operator/pgdump/dump.go
+++ b/internal/operator/pgdump/dump.go
@@ -60,7 +60,7 @@ func Dump(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
ctx := context.TODO()
var err error
- //create the Job to run the pgdump command
+ // create the Job to run the pgdump command
cmd := task.Spec.Parameters[config.LABEL_PGDUMP_COMMAND]
@@ -128,7 +128,7 @@ func Dump(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
}
if operator.CRUNCHY_DEBUG {
- config.PgDumpBackupJobTemplate.Execute(os.Stdout, jobFields)
+ _ = config.PgDumpBackupJobTemplate.Execute(os.Stdout, jobFields)
}
newjob := v1batch.Job{}
@@ -148,7 +148,7 @@ func Dump(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
return
}
- //update the pgdump task status to submitted - updates task, not the job.
+ // update the pgdump task status to submitted - updates task, not the job.
patch, err := kubeapi.NewJSONPatch().Add("spec", "status")(crv1.PgBackupJobSubmitted).Bytes()
if err == nil {
log.Debugf("patching task %s: %s", task.Spec.Name, patch)
@@ -158,5 +158,4 @@ func Dump(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
if err != nil {
log.Error(err.Error())
}
-
}
diff --git a/internal/operator/pgdump/restore.go b/internal/operator/pgdump/restore.go
index 6d874f4a9e..95169c06de 100644
--- a/internal/operator/pgdump/restore.go
+++ b/internal/operator/pgdump/restore.go
@@ -72,7 +72,7 @@ func Restore(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
return
}
- //use the storage config from the primary PostgreSQL cluster
+ // use the storage config from the primary PostgreSQL cluster
storage := cluster.Spec.PrimaryStorage
taskName := task.Name
@@ -104,7 +104,7 @@ func Restore(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
}
if operator.CRUNCHY_DEBUG {
- config.PgRestoreJobTemplate.Execute(os.Stdout, jobFields)
+ _ = config.PgRestoreJobTemplate.Execute(os.Stdout, jobFields)
}
newjob := v1batch.Job{}
@@ -125,5 +125,4 @@ func Restore(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
return
}
log.Debugf("pgrestore job %s created", j.Name)
-
}
diff --git a/internal/operator/pvc/pvc.go b/internal/operator/pvc/pvc.go
index 24304b895a..21d2fc2808 100644
--- a/internal/operator/pvc/pvc.go
+++ b/internal/operator/pvc/pvc.go
@@ -152,7 +152,7 @@ func Create(clientset kubernetes.Interface, name, clusterName string, storageSpe
log.Debug("using dynamic PVC template")
err = config.PVCStorageClassTemplate.Execute(&doc2, pvcFields)
if operator.CRUNCHY_DEBUG {
- config.PVCStorageClassTemplate.Execute(os.Stdout, pvcFields)
+ _ = config.PVCStorageClassTemplate.Execute(os.Stdout, pvcFields)
}
} else {
log.Debugf("matchlabels from spec is [%s]", storageSpec.MatchLabels)
@@ -168,7 +168,7 @@ func Create(clientset kubernetes.Interface, name, clusterName string, storageSpe
err = config.PVCTemplate.Execute(&doc2, pvcFields)
if operator.CRUNCHY_DEBUG {
- config.PVCTemplate.Execute(os.Stdout, pvcFields)
+ _ = config.PVCTemplate.Execute(os.Stdout, pvcFields)
}
}
if err != nil {
@@ -217,7 +217,6 @@ func Exists(clientset kubernetes.Interface, name string, namespace string) bool
}
func getMatchLabels(key, value string) string {
-
matchLabelsTemplateFields := matchLabelsTemplateFields{}
matchLabelsTemplateFields.Key = key
matchLabelsTemplateFields.Value = value
@@ -230,5 +229,4 @@ func getMatchLabels(key, value string) string {
}
return doc.String()
-
}
diff --git a/internal/operator/storage.go b/internal/operator/storage.go
index da06087deb..83b3918ae7 100644
--- a/internal/operator/storage.go
+++ b/internal/operator/storage.go
@@ -33,7 +33,7 @@ func (s StorageResult) InlineVolumeSource() string {
b := new(bytes.Buffer)
e := json.NewEncoder(b)
e.SetEscapeHTML(false)
- e.Encode(s.VolumeSource())
+ _ = e.Encode(s.VolumeSource())
// remove trailing newline and surrounding brackets
return b.String()[1 : b.Len()-2]
diff --git a/internal/operator/storage_test.go b/internal/operator/storage_test.go
index 280b1c6cd0..44a235fff0 100644
--- a/internal/operator/storage_test.go
+++ b/internal/operator/storage_test.go
@@ -32,10 +32,14 @@ func TestStorageResultInlineVolumeSource(t *testing.T) {
expected string
}{
{StorageResult{}, `"emptyDir":{}`},
- {StorageResult{PersistentVolumeClaimName: "<\x00"},
- `"persistentVolumeClaim":{"claimName":"<\u0000"}`},
- {StorageResult{PersistentVolumeClaimName: "some-name"},
- `"persistentVolumeClaim":{"claimName":"some-name"}`},
+ {
+ StorageResult{PersistentVolumeClaimName: "<\x00"},
+ `"persistentVolumeClaim":{"claimName":"<\u0000"}`,
+ },
+ {
+ StorageResult{PersistentVolumeClaimName: "some-name"},
+ `"persistentVolumeClaim":{"claimName":"some-name"}`,
+ },
} {
if actual := tt.value.InlineVolumeSource(); actual != tt.expected {
t.Errorf("expected %q for %v, got %q", tt.expected, tt.value, actual)
diff --git a/internal/operator/task/applypolicies.go b/internal/operator/task/applypolicies.go
index c3d1d306c9..11b568f0c4 100644
--- a/internal/operator/task/applypolicies.go
+++ b/internal/operator/task/applypolicies.go
@@ -33,13 +33,13 @@ func ApplyPolicies(clusterName string, clientset kubeapi.Interface, RESTConfig *
task, err := clientset.CrunchydataV1().Pgtasks(ns).Get(ctx, taskName, metav1.GetOptions{})
if err == nil {
- //apply those policies
+ // apply those policies
for k := range task.Spec.Parameters {
log.Debugf("applying policy %s to %s", k, clusterName)
applyPolicy(clientset, RESTConfig, k, clusterName, ns)
}
- //delete the pgtask to not redo this again
- clientset.CrunchydataV1().Pgtasks(ns).Delete(ctx, taskName, metav1.DeleteOptions{})
+ // delete the pgtask to not redo this again
+ _ = clientset.CrunchydataV1().Pgtasks(ns).Delete(ctx, taskName, metav1.DeleteOptions{})
}
}
@@ -70,11 +70,10 @@ func applyPolicy(clientset kubeapi.Interface, restconfig *rest.Config, policyNam
log.Error(err)
}
- //update the pgcluster crd labels with the new policy
+ // update the pgcluster crd labels with the new policy
log.Debugf("patching cluster %s: %s", cl.Spec.Name, patch)
_, err = clientset.CrunchydataV1().Pgclusters(ns).Patch(ctx, cl.Spec.Name, types.MergePatchType, patch, metav1.PatchOptions{})
if err != nil {
log.Error(err)
}
-
}
diff --git a/internal/operator/task/rmbackups.go b/internal/operator/task/rmbackups.go
deleted file mode 100644
index 91f48ee2e6..0000000000
--- a/internal/operator/task/rmbackups.go
+++ /dev/null
@@ -1,42 +0,0 @@
-package task
-
-/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-import (
- "context"
-
- "github.com/crunchydata/postgres-operator/internal/config"
- crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
- log "github.com/sirupsen/logrus"
- metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/client-go/kubernetes"
-)
-
-// RemoveBackups ...
-func RemoveBackups(namespace string, clientset kubernetes.Interface, task *crv1.Pgtask) {
- ctx := context.TODO()
-
- //delete any backup jobs for this cluster
- //kubectl delete job --selector=pg-cluster=clustername
-
- log.Debugf("deleting backup jobs with selector=%s=%s", config.LABEL_PG_CLUSTER, task.Spec.Parameters[config.LABEL_PG_CLUSTER])
- deletePropagation := metav1.DeletePropagationForeground
- clientset.
- BatchV1().Jobs(namespace).
- DeleteCollection(ctx,
- metav1.DeleteOptions{PropagationPolicy: &deletePropagation},
- metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER + "=" + task.Spec.Parameters[config.LABEL_PG_CLUSTER]})
-}
diff --git a/internal/operator/task/rmdata.go b/internal/operator/task/rmdata.go
index b44c529b4b..eb9c9c2fe8 100644
--- a/internal/operator/task/rmdata.go
+++ b/internal/operator/task/rmdata.go
@@ -53,7 +53,7 @@ type rmdatajobTemplateFields struct {
func RemoveData(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
ctx := context.TODO()
- //create marker (clustername, namespace)
+ // create marker (clustername, namespace)
patch, err := kubeapi.NewJSONPatch().
Add("spec", "parameters", config.LABEL_DELETE_DATA_STARTED)(time.Now().Format(time.RFC3339)).
Bytes()
@@ -67,8 +67,8 @@ func RemoveData(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask
return
}
- //create the Job to remove the data
- //pvcName := task.Spec.Parameters[config.LABEL_PVC_NAME]
+ // create the Job to remove the data
+ // pvcName := task.Spec.Parameters[config.LABEL_PVC_NAME]
clusterName := task.Spec.Parameters[config.LABEL_PG_CLUSTER]
clusterPGHAScope := task.Spec.Parameters[config.LABEL_PGHA_SCOPE]
replicaName := task.Spec.Parameters[config.LABEL_REPLICA_NAME]
@@ -116,7 +116,7 @@ func RemoveData(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask
}
if operator.CRUNCHY_DEBUG {
- config.RmdatajobTemplate.Execute(os.Stdout, jobFields)
+ _ = config.RmdatajobTemplate.Execute(os.Stdout, jobFields)
}
newjob := v1batch.Job{}
@@ -137,11 +137,11 @@ func RemoveData(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask
}
log.Debugf("successfully created rmdata job %s", j.Name)
- publishDeleteCluster(task.Spec.Parameters[config.LABEL_PG_CLUSTER], task.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER],
+ publishDeleteCluster(task.Spec.Parameters[config.LABEL_PG_CLUSTER],
task.ObjectMeta.Labels[config.LABEL_PGOUSER], namespace)
}
-func publishDeleteCluster(clusterName, identifier, username, namespace string) {
+func publishDeleteCluster(clusterName, username, namespace string) {
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
diff --git a/internal/operator/task/workflow.go b/internal/operator/task/workflow.go
index 43c6e1100b..531e3e5eac 100644
--- a/internal/operator/task/workflow.go
+++ b/internal/operator/task/workflow.go
@@ -30,19 +30,15 @@ import (
// CompleteCreateClusterWorkflow ... update the pgtask for the
// create cluster workflow for a given cluster
func CompleteCreateClusterWorkflow(clusterName string, clientset pgo.Interface, ns string) {
-
taskName := clusterName + "-" + crv1.PgtaskWorkflowCreateClusterType
completeWorkflow(clientset, ns, taskName)
-
}
func CompleteBackupWorkflow(clusterName string, clientset pgo.Interface, ns string) {
-
taskName := clusterName + "-" + crv1.PgtaskWorkflowBackupType
completeWorkflow(clientset, ns, taskName)
-
}
func completeWorkflow(clientset pgo.Interface, taskNamespace, taskName string) {
@@ -54,7 +50,7 @@ func completeWorkflow(clientset pgo.Interface, taskNamespace, taskName string) {
return
}
- //mark this workflow as completed
+ // mark this workflow as completed
id := task.Spec.Parameters[crv1.PgtaskWorkflowID]
log.Debugf("completing workflow %s id %s", taskName, id)
@@ -72,5 +68,4 @@ func completeWorkflow(clientset pgo.Interface, taskNamespace, taskName string) {
if err != nil {
log.Error(err)
}
-
}
diff --git a/internal/patroni/patroni.go b/internal/patroni/patroni.go
index 50d75bec33..111515c02a 100644
--- a/internal/patroni/patroni.go
+++ b/internal/patroni/patroni.go
@@ -37,12 +37,16 @@ const dbContainerName = "database"
var (
// reloadCMD is the command for reloading a specific PG instance (primary or replica) within a
// PG cluster
- reloadCMD = []string{"/bin/bash", "-c",
- fmt.Sprintf("curl -X POST --silent http://127.0.0.1:%s/reload", config.DEFAULT_PATRONI_PORT)}
+ reloadCMD = []string{
+ "/bin/bash", "-c",
+ fmt.Sprintf("curl -X POST --silent http://127.0.0.1:%s/reload", config.DEFAULT_PATRONI_PORT),
+ }
// restartCMD is the command for restarting a specific PG database (primary or replica) within a
// PG cluster
- restartCMD = []string{"/bin/bash", "-c",
- fmt.Sprintf("curl -X POST --silent http://127.0.0.1:%s/restart", config.DEFAULT_PATRONI_PORT)}
+ restartCMD = []string{
+ "/bin/bash", "-c",
+ fmt.Sprintf("curl -X POST --silent http://127.0.0.1:%s/restart", config.DEFAULT_PATRONI_PORT),
+ }
// ErrInstanceNotFound is the error thrown when a target instance cannot be found in the cluster
ErrInstanceNotFound = errors.New("The instance does not exist in the cluster")
@@ -77,7 +81,6 @@ type RestartResult struct {
// NewPatroniClient creates a new Patroni client
func NewPatroniClient(restConfig *rest.Config, kubeclientset kubernetes.Interface,
clusterName, namespace string) Client {
-
return &patroniClient{
restConfig: restConfig,
kubeclientset: kubeclientset,
@@ -112,7 +115,6 @@ func (p *patroniClient) getClusterInstances() (map[string]corev1.Pod, error) {
// ReloadCluster reloads the configuration for a PostgreSQL cluster. Specifically, a Patroni
// reload (which includes a PG reload) is executed on the primary and each replica within the cluster.
func (p *patroniClient) ReloadCluster() error {
-
instanceMap, err := p.getClusterInstances()
if err != nil {
return err
@@ -131,7 +133,6 @@ func (p *patroniClient) ReloadCluster() error {
// Patroni restart is executed on the primary and each replica within the cluster. A slice is also
// returned containing the names of all instances restarted within the cluster.
func (p *patroniClient) RestartCluster() ([]RestartResult, error) {
-
var restartResult []RestartResult
instanceMap, err := p.getClusterInstances()
@@ -156,7 +157,6 @@ func (p *patroniClient) RestartCluster() ([]RestartResult, error) {
// RestartInstances restarts the PostgreSQL databases for the instances specified. Specifically, a
// Patroni restart is executed on the primary and each replica within the cluster.
func (p *patroniClient) RestartInstances(instances ...string) ([]RestartResult, error) {
-
var restartResult []RestartResult
instanceMap, err := p.getClusterInstances()
@@ -195,7 +195,6 @@ func (p *patroniClient) RestartInstances(instances ...string) ([]RestartResult,
// reload performs a Patroni reload (which includes a PG reload) on a specific instance (primary or
// replica) within a PG cluster
func (p *patroniClient) reload(podName string) error {
-
stdout, stderr, err := kubeapi.ExecToPodThroughAPI(p.restConfig, p.kubeclientset, reloadCMD,
dbContainerName, podName, p.namespace, nil)
if err != nil {
@@ -212,7 +211,6 @@ func (p *patroniClient) reload(podName string) error {
// restart performs a Patroni restart on a specific instance (primary or replica) within a PG
// cluster.
func (p *patroniClient) restart(podName string) error {
-
stdout, stderr, err := kubeapi.ExecToPodThroughAPI(p.restConfig, p.kubeclientset, restartCMD,
dbContainerName, podName, p.namespace, nil)
if err != nil {
diff --git a/internal/pgadmin/backoff.go b/internal/pgadmin/backoff.go
index d1df68c80d..40cacaec13 100644
--- a/internal/pgadmin/backoff.go
+++ b/internal/pgadmin/backoff.go
@@ -40,6 +40,7 @@ const (
)
// Apply provides a new time with respect to t based on the jitter mode
+// #nosec: G404
func (jm Jitter) Apply(t time.Duration) time.Duration {
switch jm {
case JitterNone: // being explicit in case default case changes
diff --git a/internal/pgadmin/backoff_test.go b/internal/pgadmin/backoff_test.go
index aeae16f7a5..09c0445526 100644
--- a/internal/pgadmin/backoff_test.go
+++ b/internal/pgadmin/backoff_test.go
@@ -108,7 +108,6 @@ func TestSubscripts(t *testing.T) {
t.Fail()
}
}
-
}
func TestUniformPolicy(t *testing.T) {
diff --git a/internal/pgadmin/crypto_test.go b/internal/pgadmin/crypto_test.go
index 36f8468379..221b23fb80 100644
--- a/internal/pgadmin/crypto_test.go
+++ b/internal/pgadmin/crypto_test.go
@@ -28,8 +28,10 @@ var testData = struct {
clearPW: "w052H0UBM783B$x6N___",
encPW: "5PN+lp8XXalwRzCptI21hmT5S9FvvEYpD8chWa39akY6Srwl",
key: "$pbkdf2-sha512$19000$knLuvReC8H7v/T8n5JwTwg$OsVGpDa/zpCE2pKEOsZ4/SqdxcQZ0UU6v41ev/gkk4ROsrws/4I03oHqN37k.v1d25QckESs3NlPxIUv5gTf2Q",
- iv: []byte{0xe4, 0xf3, 0x7e, 0x96, 0x9f, 0x17, 0x5d, 0xa9,
- 0x70, 0x47, 0x30, 0xa9, 0xb4, 0x8d, 0xb5, 0x86},
+ iv: []byte{
+ 0xe4, 0xf3, 0x7e, 0x96, 0x9f, 0x17, 0x5d, 0xa9,
+ 0x70, 0x47, 0x30, 0xa9, 0xb4, 0x8d, 0xb5, 0x86,
+ },
}
func TestSymmetry(t *testing.T) {
diff --git a/internal/pgadmin/hash.go b/internal/pgadmin/hash.go
index b73222fb8b..728faed22b 100644
--- a/internal/pgadmin/hash.go
+++ b/internal/pgadmin/hash.go
@@ -54,7 +54,7 @@ func HashPassword(qr *queryRunner, pass string) (string, error) {
// Generate a "new" password derived from the provided password
// Satisfies OWASP sec. 2.4.5: 'provide additional iteration of a key derivation'
mac := hmac.New(sha512.New, saltBytes)
- mac.Write([]byte(pass))
+ _, _ = mac.Write([]byte(pass))
macBytes := mac.Sum(nil)
macBase64 := base64.StdEncoding.EncodeToString(macBytes)
diff --git a/internal/pgadmin/runner.go b/internal/pgadmin/runner.go
index 233dc092be..7ce80a484c 100644
--- a/internal/pgadmin/runner.go
+++ b/internal/pgadmin/runner.go
@@ -102,7 +102,7 @@ func (qr *queryRunner) EnsureReady() error {
cmd, qr.Pod.Spec.Containers[0].Name, qr.Pod.Name, qr.Namespace, nil)
if err != nil && !strings.Contains(stderr, "no such table") {
- lastError = fmt.Errorf("%v - %v", err, stderr)
+ lastError = fmt.Errorf("%w - %v", err, stderr)
nextRoundIn := qr.BackoffPolicy.Duration(i)
log.Debugf("[InitWait attempt %02d]: %v - retry in %v", i, err, nextRoundIn)
time.Sleep(nextRoundIn)
@@ -121,7 +121,7 @@ func (qr *queryRunner) EnsureReady() error {
}
}
if lastError != nil && output == "" {
- return fmt.Errorf("error executing query: %v", lastError)
+ return fmt.Errorf("error executing query: %w", lastError)
}
return nil
@@ -141,7 +141,7 @@ func (qr *queryRunner) Exec(query string) error {
_, stderr, err := kubeapi.ExecToPodThroughAPI(qr.apicfg, qr.clientset,
cmd, qr.Pod.Spec.Containers[0].Name, qr.Pod.Name, qr.Namespace, nil)
if err != nil {
- lastError = fmt.Errorf("%v - %v", err, stderr)
+ lastError = fmt.Errorf("%w - %v", err, stderr)
nextRoundIn := qr.BackoffPolicy.Duration(i)
log.Debugf("[Exec attempt %02d]: %v - retry in %v", i, err, nextRoundIn)
time.Sleep(nextRoundIn)
@@ -151,7 +151,7 @@ func (qr *queryRunner) Exec(query string) error {
}
}
if lastError != nil {
- return fmt.Errorf("error executing query: %vv", lastError)
+ return fmt.Errorf("error executing query: %w", lastError)
}
return nil
@@ -178,7 +178,7 @@ func (qr *queryRunner) Query(query string) (string, error) {
stdout, stderr, err := kubeapi.ExecToPodThroughAPI(qr.apicfg, qr.clientset,
cmd, qr.Pod.Spec.Containers[0].Name, qr.Pod.Name, qr.Namespace, nil)
if err != nil {
- lastError = fmt.Errorf("%v - %v", err, stderr)
+ lastError = fmt.Errorf("%w - %v", err, stderr)
nextRoundIn := qr.BackoffPolicy.Duration(i)
log.Debugf("[Query attempt %02d]: %v - retry in %v", i, err, nextRoundIn)
time.Sleep(nextRoundIn)
@@ -189,7 +189,7 @@ func (qr *queryRunner) Query(query string) (string, error) {
}
}
if lastError != nil && output == "" {
- return "", fmt.Errorf("error executing query: %v", lastError)
+ return "", fmt.Errorf("error executing query: %w", lastError)
}
return output, nil
diff --git a/internal/postgres/password/md5.go b/internal/postgres/password/md5.go
index 030fd21765..56f9504608 100644
--- a/internal/postgres/password/md5.go
+++ b/internal/postgres/password/md5.go
@@ -16,15 +16,14 @@ package password
*/
import (
+ // #nosec G501
"crypto/md5"
"errors"
"fmt"
)
-var (
- // ErrMD5PasswordInvalid is returned when the password attributes are invalid
- ErrMD5PasswordInvalid = errors.New(`invalid password attributes. must provide "username" and "password"`)
-)
+// ErrMD5PasswordInvalid is returned when the password attributes are invalid
+var ErrMD5PasswordInvalid = errors.New(`invalid password attributes. must provide "username" and "password"`)
// MD5Password implements the PostgresPassword interface for hashing passwords
// using the PostgreSQL MD5 method
@@ -42,6 +41,7 @@ func (m *MD5Password) Build() (string, error) {
plaintext := []byte(m.password + m.username)
// finish the transformation by getting the string value of the MD5 hash and
// encoding it in hexadecimal for PostgreSQL, appending "md5" to the front
+ // #nosec G401
return fmt.Sprintf("md5%x", md5.Sum(plaintext)), nil
}
diff --git a/internal/postgres/password/md5_test.go b/internal/postgres/password/md5_test.go
index c77c8abf43..41c0711b04 100644
--- a/internal/postgres/password/md5_test.go
+++ b/internal/postgres/password/md5_test.go
@@ -38,7 +38,6 @@ func TestMD5Build(t *testing.T) {
}
hash, err := md5.Build()
-
if err != nil {
t.Error(err)
}
diff --git a/internal/postgres/password/password.go b/internal/postgres/password/password.go
index b70112a4c3..f63fb31492 100644
--- a/internal/postgres/password/password.go
+++ b/internal/postgres/password/password.go
@@ -31,10 +31,8 @@ const (
SCRAM
)
-var (
- // ErrPasswordType is returned when a password type does not exist
- ErrPasswordType = errors.New("password type does not exist")
-)
+// ErrPasswordType is returned when a password type does not exist
+var ErrPasswordType = errors.New("password type does not exist")
// PostgresPassword is the interface that defines the methods required to build
// a password for PostgreSQL in a desired format (e.g. MD5)
diff --git a/internal/postgres/password/password_test.go b/internal/postgres/password/password_test.go
index b9b7094dbc..d315c966bc 100644
--- a/internal/postgres/password/password_test.go
+++ b/internal/postgres/password/password_test.go
@@ -16,6 +16,7 @@ package password
*/
import (
+ "errors"
"testing"
)
@@ -27,7 +28,6 @@ func TestNewPostgresPassword(t *testing.T) {
passwordType := MD5
postgresPassword, err := NewPostgresPassword(passwordType, username, password)
-
if err != nil {
t.Error(err)
}
@@ -49,7 +49,6 @@ func TestNewPostgresPassword(t *testing.T) {
passwordType := SCRAM
postgresPassword, err := NewPostgresPassword(passwordType, username, password)
-
if err != nil {
t.Error(err)
}
@@ -66,7 +65,7 @@ func TestNewPostgresPassword(t *testing.T) {
t.Run("invalid", func(t *testing.T) {
passwordType := PasswordType(-1)
- if _, err := NewPostgresPassword(passwordType, username, password); err != ErrPasswordType {
+ if _, err := NewPostgresPassword(passwordType, username, password); !errors.Is(err, ErrPasswordType) {
t.Errorf("expected error: %q", err.Error())
}
})
diff --git a/internal/postgres/password/scram.go b/internal/postgres/password/scram.go
index aa6eee3df8..794575e4dd 100644
--- a/internal/postgres/password/scram.go
+++ b/internal/postgres/password/scram.go
@@ -96,7 +96,6 @@ type SCRAMPassword struct {
func (s *SCRAMPassword) Build() (string, error) {
// get a generated salt
salt, err := s.generateSalt(s.SaltLength)
-
if err != nil {
return "", err
}
diff --git a/internal/postgres/password/scram_test.go b/internal/postgres/password/scram_test.go
index 6de92bb17c..7cdac419d0 100644
--- a/internal/postgres/password/scram_test.go
+++ b/internal/postgres/password/scram_test.go
@@ -54,7 +54,6 @@ func TestScramGenerateSalt(t *testing.T) {
for _, saltLength := range saltLengths {
t.Run(fmt.Sprintf("salt length %d", saltLength), func(t *testing.T) {
salt, err := scramGenerateSalt(saltLength)
-
if err != nil {
t.Error(err)
}
@@ -71,7 +70,6 @@ func TestScramGenerateSalt(t *testing.T) {
for _, saltLength := range saltLengths {
t.Run(fmt.Sprintf("salt length %d", saltLength), func(t *testing.T) {
-
if _, err := scramGenerateSalt(saltLength); err == nil {
t.Errorf("error expected for salt length of %d", saltLength)
}
@@ -82,7 +80,6 @@ func TestScramGenerateSalt(t *testing.T) {
func TestSCRAMBuild(t *testing.T) {
t.Run("scram-sha-256", func(t *testing.T) {
-
t.Run("valid", func(t *testing.T) {
// check a few different password combinations. note: the salt is kept the
// same so we can get a reproducible result
@@ -104,7 +101,6 @@ func TestSCRAMBuild(t *testing.T) {
scram.generateSalt = mockGenerateSalt
hash, err := scram.Build()
-
if err != nil {
t.Error(err)
}
@@ -152,7 +148,7 @@ func TestSCRAMHash(t *testing.T) {
expected, _ := hex.DecodeString("877cc977e7b033e10d6e0b0d666da1f463bc51b1de48869250a0347ec1b2b8b3")
actual := scram.hash(sha256.New, []byte("hippo"))
- if bytes.Compare(expected, actual) != 0 {
+ if !bytes.Equal(expected, actual) {
t.Errorf("expected: %x actual %x", expected, actual)
}
})
@@ -164,7 +160,7 @@ func TestSCRAMHMAC(t *testing.T) {
expected, _ := hex.DecodeString("ac9872eb21043142c3bf073c9fa4caf9553940750ef7b85116905aaa456a2d07")
actual := scram.hmac(sha256.New, []byte("hippo"), []byte("datalake"))
- if bytes.Compare(expected, actual) != 0 {
+ if !bytes.Equal(expected, actual) {
t.Errorf("expected: %x actual %x", expected, actual)
}
})
diff --git a/internal/tlsutil/primitives_test.go b/internal/tlsutil/primitives_test.go
index 22676e9fbc..1c6538f543 100644
--- a/internal/tlsutil/primitives_test.go
+++ b/internal/tlsutil/primitives_test.go
@@ -17,6 +17,7 @@ limitations under the License.
import (
"bytes"
+ "context"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
@@ -93,8 +94,9 @@ func TestExtendedTrust(t *testing.T) {
defer srv.Close()
caTrust := x509.NewCertPool()
- ExtendTrust(caTrust, bytes.NewReader(pemCert))
+ _ = ExtendTrust(caTrust, bytes.NewReader(pemCert))
+ // #nosec G402
srv.TLS = &tls.Config{
ServerName: "Stom",
ClientAuth: tls.RequireAndVerifyClientCert,
@@ -111,6 +113,7 @@ func TestExtendedTrust(t *testing.T) {
}
client := srv.Client()
+ // #nosec G402
client.Transport = &http.Transport{
TLSClientConfig: &tls.Config{
Certificates: []tls.Certificate{
@@ -122,7 +125,12 @@ func TestExtendedTrust(t *testing.T) {
}
// Confirm server response
- res, err := client.Get(srv.URL)
+ req, err := http.NewRequestWithContext(context.TODO(), http.MethodGet, srv.URL, nil)
+ if err != nil {
+ t.Fatalf("error creating request - %s\n", err)
+ }
+
+ res, err := client.Do(req)
if err != nil {
t.Fatalf("error getting response - %s\n", err)
}
diff --git a/internal/util/backrest.go b/internal/util/backrest.go
index 66e3a2dec6..2c42d2ae4b 100644
--- a/internal/util/backrest.go
+++ b/internal/util/backrest.go
@@ -27,7 +27,8 @@ const (
BackrestRepoDeploymentName = "%s-backrest-shared-repo"
BackrestRepoServiceName = "%s-backrest-shared-repo"
BackrestRepoPVCName = "%s-pgbr-repo"
- BackrestRepoSecretName = "%s-backrest-repo-config"
+ // #nosec: G101
+ BackrestRepoSecretName = "%s-backrest-repo-config"
)
// defines the default repo1-path for pgBackRest for use when a specific path is not provided
@@ -42,7 +43,6 @@ const defaultBackrestRepoPath = "/backrestrepo/%s-backrest-shared-repo"
// validation is occurring for a restore, then ensure only one storage type is selected.
func ValidateBackrestStorageTypeOnBackupRestore(newBackRestStorageType,
currentBackRestStorageType string, restore bool) error {
-
if newBackRestStorageType != "" && !IsValidBackrestStorageType(newBackRestStorageType) {
return fmt.Errorf("Invalid value provided for pgBackRest storage type. The following "+
"values are allowed: %s", "\""+strings.Join(crv1.BackrestStorageTypes, "\", \"")+"\"")
diff --git a/internal/util/cluster.go b/internal/util/cluster.go
index 03fbb3c69b..9bab736209 100644
--- a/internal/util/cluster.go
+++ b/internal/util/cluster.go
@@ -69,14 +69,20 @@ const (
const (
// three of these are exported, as they are used to help add the information
// into the templates. Say the last one 10 times fast
- BackRestRepoSecretKeyAWSS3KeyAWSS3CACert = "aws-s3-ca.crt"
- BackRestRepoSecretKeyAWSS3KeyAWSS3Key = "aws-s3-key"
+ // #nosec: G101
+ BackRestRepoSecretKeyAWSS3KeyAWSS3CACert = "aws-s3-ca.crt"
+ // #nosec: G101
+ BackRestRepoSecretKeyAWSS3KeyAWSS3Key = "aws-s3-key"
+ // #nosec: G101
BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret = "aws-s3-key-secret"
// the rest are private
- backRestRepoSecretKeyAuthorizedKeys = "authorized_keys"
- backRestRepoSecretKeySSHConfig = "config"
- backRestRepoSecretKeySSHDConfig = "sshd_config"
- backRestRepoSecretKeySSHPrivateKey = "id_ed25519"
+ backRestRepoSecretKeyAuthorizedKeys = "authorized_keys"
+ backRestRepoSecretKeySSHConfig = "config"
+ // #nosec: G101
+ backRestRepoSecretKeySSHDConfig = "sshd_config"
+ // #nosec: G101
+ backRestRepoSecretKeySSHPrivateKey = "id_ed25519"
+ // #nosec: G101
backRestRepoSecretKeySSHHostPrivateKey = "ssh_host_ed25519_key"
)
@@ -94,23 +100,21 @@ const (
//
// The escaping for SQL injections is handled in the SetPostgreSQLPassword
// function
+ // #nosec: G101
sqlSetPasswordDefault = `ALTER ROLE %s PASSWORD %s;`
)
-var (
- // ErrMissingConfigAnnotation represents an error thrown when the 'config' annotation is found
- // to be missing from the 'config' configMap created to store cluster-wide configuration
- ErrMissingConfigAnnotation error = errors.New("'config' annotation missing from cluster " +
- "configutation")
-)
+// ErrMissingConfigAnnotation represents an error thrown when the 'config' annotation is found
+// to be missing from the 'config' configMap created to store cluster-wide configuration
+var ErrMissingConfigAnnotation error = errors.New("'config' annotation missing from cluster " +
+ "configuration")
-var (
- // CmdStopPostgreSQL is the command used to stop a PostgreSQL instance, which
- // uses the "fast" shutdown mode. This needs a data directory appended to it
- cmdStopPostgreSQL = []string{"pg_ctl", "stop",
- "-m", "fast", "-D",
- }
-)
+// CmdStopPostgreSQL is the command used to stop a PostgreSQL instance, which
+// uses the "fast" shutdown mode. This needs a data directory appended to it
+var cmdStopPostgreSQL = []string{
+ "pg_ctl", "stop",
+ "-m", "fast", "-D",
+}
// CreateBackrestRepoSecrets creates the secrets required to manage the
// pgBackRest repo container
@@ -229,7 +233,6 @@ func CreateBackrestRepoSecrets(clientset kubernetes.Interface,
// IsAutofailEnabled - returns true if autofail label is set to true, false if not.
func IsAutofailEnabled(cluster *crv1.Pgcluster) bool {
-
labels := cluster.ObjectMeta.Labels
failLabel := labels[config.LABEL_AUTOFAIL]
@@ -248,7 +251,6 @@ func GeneratedPasswordValidUntilDays(configuredValidUntilDays string) int {
// note that "configuredPasswordLength" may be an empty string, and as such
// the below line could fail. That's ok though! as we have a default set up
validUntilDays, err := strconv.Atoi(configuredValidUntilDays)
-
// if there is an error...set it to a default
if err != nil {
validUntilDays = DefaultPasswordValidUntilDays
@@ -269,7 +271,6 @@ func GetPrimaryPod(clientset kubernetes.Interface, cluster *crv1.Pgcluster) (*v1
// query the pods
pods, err := clientset.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
-
// if there is an error, log it and abort
if err != nil {
return nil, err
@@ -277,8 +278,7 @@ func GetPrimaryPod(clientset kubernetes.Interface, cluster *crv1.Pgcluster) (*v1
// if no pods are returned, then also raise an error
if len(pods.Items) == 0 {
- err := errors.New(fmt.Sprintf("primary pod not found for selector [%s]", selector))
- return nil, err
+ return nil, fmt.Errorf("primary pod not found for selector %q", selector)
}
// Grab the first pod from the list as this is presumably the primary pod
@@ -294,7 +294,6 @@ func GetS3CredsFromBackrestRepoSecret(clientset kubernetes.Interface, namespace,
s3Secret := AWSS3Secret{}
secret, err := clientset.CoreV1().Secrets(namespace).Get(ctx, secretName, metav1.GetOptions{})
-
if err != nil {
log.Error(err)
return s3Secret, err
diff --git a/internal/util/failover.go b/internal/util/failover.go
index a4abba76f6..a19d887b25 100644
--- a/internal/util/failover.go
+++ b/internal/util/failover.go
@@ -91,12 +91,10 @@ const (
instanceStatusUnavailable = "unavailable"
)
-var (
- // instanceInfoCommand is the command used to get information about the status
- // and other statistics about the instances in a PostgreSQL cluster, e.g.
- // replication lag
- instanceInfoCommand = []string{"patronictl", "list", "-f", "json"}
-)
+// instanceInfoCommand is the command used to get information about the status
+// and other statistics about the instances in a PostgreSQL cluster, e.g.
+// replication lag
+var instanceInfoCommand = []string{"patronictl", "list", "-f", "json"}
// GetPod determines the best target to fail to
func GetPod(clientset kubernetes.Interface, deploymentName, namespace string) (*v1.Pod, error) {
@@ -115,13 +113,13 @@ func GetPod(clientset kubernetes.Interface, deploymentName, namespace string) (*
return pod, errors.New("could not determine which pod to failover to")
}
- for _, v := range pods.Items {
- pod = &v
+ for i := range pods.Items {
+ pod = &pods.Items[i]
}
found := false
- //make sure the pod has a database container it it
+ // make sure the pod has a database container in it
for _, c := range pod.Spec.Containers {
if c.Name == "database" {
found = true
@@ -179,7 +177,6 @@ func ReplicationStatus(request ReplicationStatusRequest, includePrimary, include
log.Debugf(`searching for pods with "%s"`, selector)
pods, err := request.Clientset.CoreV1().Pods(request.Namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
-
// If there is an error trying to get the pods, return here. Allow the caller
// to handle the error
if err != nil {
@@ -204,9 +201,9 @@ func ReplicationStatus(request ReplicationStatusRequest, includePrimary, include
// From executing and running a command in the first active pod
var pod *v1.Pod
- for _, p := range pods.Items {
- if p.Status.Phase == v1.PodRunning {
- pod = &p
+ for i := range pods.Items {
+ if pods.Items[i].Status.Phase == v1.PodRunning {
+ pod = &pods.Items[i]
break
}
}
@@ -236,7 +233,6 @@ func ReplicationStatus(request ReplicationStatusRequest, includePrimary, include
commandStdOut, _, err := kubeapi.ExecToPodThroughAPI(
request.RESTConfig, request.Clientset, instanceInfoCommand,
pod.Spec.Containers[0].Name, pod.Name, request.Namespace, nil)
-
// if there is an error, return. We will log the error at a higher level
if err != nil {
return response, err
@@ -244,7 +240,7 @@ func ReplicationStatus(request ReplicationStatusRequest, includePrimary, include
// parse the JSON and place it into instanceInfoList
var rawInstances []instanceReplicationInfoJSON
- json.Unmarshal([]byte(commandStdOut), &rawInstances)
+ _ = json.Unmarshal([]byte(commandStdOut), &rawInstances)
log.Debugf("patroni instance info: %v", rawInstances)
@@ -327,14 +323,14 @@ func ToggleAutoFailover(clientset kubernetes.Interface, enable bool, pghaScope,
configJSONStr := configMap.ObjectMeta.Annotations["config"]
var configJSON map[string]interface{}
- json.Unmarshal([]byte(configJSONStr), &configJSON)
+ _ = json.Unmarshal([]byte(configJSONStr), &configJSON)
if !enable {
// disable autofail condition
- disableFailover(clientset, configMap, configJSON, namespace)
+ _ = disableFailover(clientset, configMap, configJSON, namespace)
} else {
// enable autofail
- enableFailover(clientset, configMap, configJSON, namespace)
+ _ = enableFailover(clientset, configMap, configJSON, namespace)
}
return nil
@@ -344,7 +340,6 @@ func ToggleAutoFailover(clientset kubernetes.Interface, enable bool, pghaScope,
// pods in a cluster to a struct containing the associated instance name and the
// Nodes that it runs on, all based upon the output from a Kubernetes API query
func createInstanceInfoMap(pods *v1.PodList) map[string]instanceInfo {
-
instanceInfoMap := make(map[string]instanceInfo)
// Iterate through each pod that is returned and get the mapping between the
diff --git a/internal/util/pgbouncer.go b/internal/util/pgbouncer.go
index 2fdd645126..0b3a9a528d 100644
--- a/internal/util/pgbouncer.go
+++ b/internal/util/pgbouncer.go
@@ -29,6 +29,7 @@ const pgBouncerConfigMapFormat = "%s-pgbouncer-cm"
// pgBouncerSecretFormat is the name of the Kubernetes Secret that pgBouncer
// uses that stores configuration and pgbouncer user information, and follows
// the format "-pgbouncer-secret"
+// #nosec: G101
const pgBouncerSecretFormat = "%s-pgbouncer-secret"
// pgBouncerUserFileFormat is the format of what the pgBouncer user management
diff --git a/internal/util/policy.go b/internal/util/policy.go
index 2a0e2fcf83..6b19784dca 100644
--- a/internal/util/policy.go
+++ b/internal/util/policy.go
@@ -19,6 +19,7 @@ import (
"context"
"errors"
"fmt"
+ "io/ioutil"
"net/http"
"strings"
@@ -26,8 +27,6 @@ import (
"github.com/crunchydata/postgres-operator/internal/kubeapi"
pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned"
- "io/ioutil"
-
log "github.com/sirupsen/logrus"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -38,9 +37,8 @@ import (
func ExecPolicy(clientset kubeapi.Interface, restconfig *rest.Config, namespace, policyName, serviceName, port string) error {
ctx := context.TODO()
- //fetch the policy sql
+ // fetch the policy sql
sql, err := GetPolicySQL(clientset, namespace, policyName)
-
if err != nil {
return err
}
@@ -136,7 +134,6 @@ func readSQLFromURL(urlstring string) (string, error) {
}
return string(bodyBytes), err
-
}
// ValidatePolicy tests to see if a policy exists
diff --git a/internal/util/secrets.go b/internal/util/secrets.go
index eed3348d31..733392eabf 100644
--- a/internal/util/secrets.go
+++ b/internal/util/secrets.go
@@ -69,7 +69,6 @@ func CreateSecret(clientset kubernetes.Interface, db, secretName, username, pass
_, err := clientset.CoreV1().Secrets(namespace).Create(ctx, &secret, metav1.CreateOptions{})
return err
-
}
// GeneratePassword generates a password of a given length out of the acceptable
@@ -79,7 +78,6 @@ func GeneratePassword(length int) (string, error) {
for i := 0; i < length; i++ {
char, err := rand.Int(rand.Reader, passwordCharSelector)
-
// if there is an error generating the random integer, return
if err != nil {
return "", err
@@ -100,7 +98,6 @@ func GeneratedPasswordLength(configuredPasswordLength string) int {
// note that "configuredPasswordLength" may be an empty string, and as such
// the below line could fail. That's ok though! as we have a default set up
generatedPasswordLength, err := strconv.Atoi(configuredPasswordLength)
-
// if there is an error...set it to a default
if err != nil {
generatedPasswordLength = DefaultGeneratedPasswordLength
@@ -113,7 +110,6 @@ func GeneratedPasswordLength(configuredPasswordLength string) int {
func GetPasswordFromSecret(clientset kubernetes.Interface, namespace, secretName string) (string, error) {
ctx := context.TODO()
secret, err := clientset.CoreV1().Secrets(namespace).Get(ctx, secretName, metav1.GetOptions{})
-
if err != nil {
return "", err
}
@@ -154,7 +150,6 @@ func UpdateUserSecret(clientset kubernetes.Interface, clustername, username, pas
// see if the secret already exists
secret, err := clientset.CoreV1().Secrets(namespace).Get(ctx, secretName, metav1.GetOptions{})
-
// if this returns an error and it's not the "not found" error, return
// However, if it is the "not found" error, treat this as creating the user
// secret
diff --git a/internal/util/ssh.go b/internal/util/ssh.go
index aa886bbca7..e116716d12 100644
--- a/internal/util/ssh.go
+++ b/internal/util/ssh.go
@@ -91,6 +91,7 @@ func newPrivateKey(key ed25519.PrivateKey) ([]byte, error) {
// check fields should match to easily verify
// that a decryption was successful
+ // #nosec: G404
private.Check1 = rand.Uint32()
private.Check2 = private.Check1
diff --git a/internal/util/util.go b/internal/util/util.go
index 95a8310742..3559d09894 100644
--- a/internal/util/util.go
+++ b/internal/util/util.go
@@ -44,7 +44,6 @@ var gisImageTagRegex = regexp.MustCompile(`(.+-[\d|\.]+)-[\d|\.]+?(-[\d|\.]+.*)`
func init() {
rand.Seed(time.Now().UnixNano())
-
}
// GetLabels ...
@@ -58,19 +57,19 @@ func GetLabels(name, clustername string, replica bool) string {
return output
}
-//CurrentPrimaryUpdate prepares the needed data structures with the correct current primary value
-//before passing them along to be patched into the current pgcluster CRD's annotations
+// CurrentPrimaryUpdate prepares the needed data structures with the correct current primary value
+// before passing them along to be patched into the current pgcluster CRD's annotations
func CurrentPrimaryUpdate(clientset pgo.Interface, cluster *crv1.Pgcluster, currentPrimary, namespace string) error {
- //create a new map
+ // create a new map
metaLabels := make(map[string]string)
- //copy the relevant values into the new map
+ // copy the relevant values into the new map
for k, v := range cluster.ObjectMeta.Labels {
metaLabels[k] = v
}
- //update this map with the new deployment label
+ // update this map with the new deployment label
metaLabels[config.LABEL_DEPLOYMENT_NAME] = currentPrimary
- //Update CRD with the current primary name and the new deployment to point to after the failover
+ // Update CRD with the current primary name and the new deployment to point to after the failover
if err := PatchClusterCRD(clientset, metaLabels, cluster, currentPrimary, namespace); err != nil {
log.Errorf("failoverlogic: could not patch pgcluster %s with the current primary", currentPrimary)
}
@@ -112,7 +111,6 @@ func PatchClusterCRD(clientset pgo.Interface, labelMap map[string]string, oldCrd
Patch(ctx, oldCrd.Spec.Name, types.MergePatchType, patchBytes, metav1.PatchOptions{})
return err6
-
}
// GetValueOrDefault checks whether the first value given is set. If it is,
@@ -149,7 +147,6 @@ func GetSecretPassword(clientset kubernetes.Interface, db, suffix, Namespace str
log.Error("primary secret not found for " + db)
return "", errors.New("primary secret not found for " + db)
-
}
// GetStandardImageTag takes the current image name and the image tag value
@@ -158,7 +155,6 @@ func GetSecretPassword(clientset kubernetes.Interface, db, suffix, Namespace str
// the tag without the addition of the GIS version. This tag value can then
// be used when provisioning containers using the standard containers tag.
func GetStandardImageTag(imageName, imageTag string) string {
-
if imageName == "crunchy-postgres-gis-ha" && strings.Count(imageTag, "-") > 2 {
return gisImageTagRegex.ReplaceAllString(imageTag, "$1$2")
}
@@ -170,6 +166,7 @@ func GetStandardImageTag(imageName, imageTag string) string {
func RandStringBytesRmndr(n int) string {
b := make([]byte, n)
for i := range b {
+ // #nosec: G404
b[i] = letterBytes[rand.Int63()%int64(len(letterBytes))]
}
return string(b)
diff --git a/internal/util/util_test.go b/internal/util/util_test.go
index 30d6d8b65d..d867351a85 100644
--- a/internal/util/util_test.go
+++ b/internal/util/util_test.go
@@ -3,7 +3,6 @@ package util
import "testing"
func TestGetStandardImageTag(t *testing.T) {
-
assertCorrectMessage := func(t testing.TB, got, want string) {
t.Helper()
if got != want {
diff --git a/pkg/apis/crunchydata.com/v1/common.go b/pkg/apis/crunchydata.com/v1/common.go
index 242025f038..6bf14408dc 100644
--- a/pkg/apis/crunchydata.com/v1/common.go
+++ b/pkg/apis/crunchydata.com/v1/common.go
@@ -111,7 +111,6 @@ func (s PgStorageSpec) GetSupplementalGroups() []int64 {
}
supplementalGroup, err := strconv.Atoi(result)
-
// if there is an error, only warn about it and continue through the loop
if err != nil {
log.Warnf("malformed storage supplemental group: %v", err)
diff --git a/pkg/apis/crunchydata.com/v1/register.go b/pkg/apis/crunchydata.com/v1/register.go
index 00db119bda..7b7359f504 100644
--- a/pkg/apis/crunchydata.com/v1/register.go
+++ b/pkg/apis/crunchydata.com/v1/register.go
@@ -30,7 +30,7 @@ var (
)
// GroupName is the group name used in this package.
-//const GroupName = "cr.client-go.k8s.io"
+// const GroupName = "cr.client-go.k8s.io"
const GroupName = "crunchydata.com"
// SchemeGroupVersion is the group version used to register these objects.
diff --git a/pkg/apis/crunchydata.com/v1/task.go b/pkg/apis/crunchydata.com/v1/task.go
index 7b79896e67..463c532f80 100644
--- a/pkg/apis/crunchydata.com/v1/task.go
+++ b/pkg/apis/crunchydata.com/v1/task.go
@@ -23,7 +23,6 @@ import (
const PgtaskResourcePlural = "pgtasks"
const (
- PgtaskDeleteBackups = "delete-backups"
PgtaskDeleteData = "delete-data"
PgtaskFailover = "failover"
PgtaskAutoFailover = "autofailover"
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index 251e32c7cd..52b68909a2 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -82,7 +82,7 @@ type CreateClusterRequest struct {
AutofailFlag bool
ArchiveFlag bool
BackrestStorageType string
- //BackrestRestoreFrom string
+ // BackrestRestoreFrom string
PgbouncerFlag bool
// PgBouncerReplicas represents the total number of pgBouncer pods to deploy with a
// PostgreSQL cluster. Only works if PgbouncerFlag is set, and if so, it must
@@ -254,12 +254,14 @@ type ShowClusterService struct {
BackrestRepo bool
}
-const PodTypePrimary = "primary"
-const PodTypeReplica = "replica"
-const PodTypePgbouncer = "pgbouncer"
-const PodTypePgbackrest = "pgbackrest"
-const PodTypeBackup = "backup"
-const PodTypeUnknown = "unknown"
+const (
+ PodTypePrimary = "primary"
+ PodTypeReplica = "replica"
+ PodTypePgbouncer = "pgbouncer"
+ PodTypePgbackrest = "pgbackrest"
+ PodTypeBackup = "backup"
+ PodTypeUnknown = "unknown"
+)
// ShowClusterPod
//
diff --git a/pkg/apiservermsgs/usermsgs.go b/pkg/apiservermsgs/usermsgs.go
index 1f0ba56295..9c63c3d483 100644
--- a/pkg/apiservermsgs/usermsgs.go
+++ b/pkg/apiservermsgs/usermsgs.go
@@ -31,11 +31,9 @@ const (
UpdateUserLoginDisable
)
-var (
- // ErrPasswordTypeInvalid is used when a string that's not included in
- // PasswordTypeStrings is used
- ErrPasswordTypeInvalid = errors.New("invalid password type. choices are (md5, scram-sha-256)")
-)
+// ErrPasswordTypeInvalid is used when a string that's not included in
+// PasswordTypeStrings is used
+var ErrPasswordTypeInvalid = errors.New("invalid password type. choices are (md5, scram-sha-256)")
// passwordTypeStrings is a mapping of strings of password types to their
// corresponding value of the structured password type
diff --git a/pkg/apiservermsgs/usermsgs_test.go b/pkg/apiservermsgs/usermsgs_test.go
index d2f70388fc..207eda3757 100644
--- a/pkg/apiservermsgs/usermsgs_test.go
+++ b/pkg/apiservermsgs/usermsgs_test.go
@@ -16,6 +16,7 @@ package apiservermsgs
*/
import (
+ "errors"
"testing"
pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password"
@@ -33,7 +34,6 @@ func TestGetPasswordType(t *testing.T) {
for passwordTypeStr, expected := range tests {
t.Run(passwordTypeStr, func(t *testing.T) {
passwordType, err := GetPasswordType(passwordTypeStr)
-
if err != nil {
t.Error(err)
return
@@ -54,7 +54,7 @@ func TestGetPasswordType(t *testing.T) {
for passwordTypeStr, expected := range tests {
t.Run(passwordTypeStr, func(t *testing.T) {
- if _, err := GetPasswordType(passwordTypeStr); err != expected {
+ if _, err := GetPasswordType(passwordTypeStr); !errors.Is(err, expected) {
t.Errorf("password type %q should yield error %q", passwordTypeStr, expected.Error())
}
})
diff --git a/pkg/events/eventing.go b/pkg/events/eventing.go
index 96b544405b..5c40861352 100644
--- a/pkg/events/eventing.go
+++ b/pkg/events/eventing.go
@@ -19,17 +19,18 @@ import (
"encoding/json"
"errors"
"fmt"
- crunchylog "github.com/crunchydata/postgres-operator/internal/logging"
- "github.com/nsqio/go-nsq"
- log "github.com/sirupsen/logrus"
"os"
"reflect"
"time"
+
+ crunchylog "github.com/crunchydata/postgres-operator/internal/logging"
+ "github.com/nsqio/go-nsq"
+ log "github.com/sirupsen/logrus"
)
// String returns the string form for a given LogLevel
func Publish(e EventInterface) error {
- //Add logging configuration
+ // Add logging configuration
crunchylog.CrunchyLogger(crunchylog.SetParameters())
eventAddr := os.Getenv("EVENT_ADDR")
if eventAddr == "" {
@@ -41,7 +42,7 @@ func Publish(e EventInterface) error {
}
cfg := nsq.NewConfig()
- //cfg.UserAgent = fmt.Sprintf("to_nsq/%s go-nsq/%s", version.Binary, nsq.VERSION)
+ // cfg.UserAgent = fmt.Sprintf("to_nsq/%s go-nsq/%s", version.Binary, nsq.VERSION)
cfg.UserAgent = fmt.Sprintf("go-nsq/%s", nsq.VERSION)
log.Debugf("publishing %s message %s", reflect.TypeOf(e), e.String())
@@ -78,7 +79,7 @@ func Publish(e EventInterface) error {
}
}
- //always publish to the All topic
+ // always publish to the All topic
err = producer.Publish(EventTopicAll, b)
if err != nil {
log.Errorf("Error: %s", err)
diff --git a/pkg/events/eventtype.go b/pkg/events/eventtype.go
index 43a709abde..95a09f8121 100644
--- a/pkg/events/eventtype.go
+++ b/pkg/events/eventtype.go
@@ -33,6 +33,7 @@ const (
EventTopicPGOUser = "pgousertopic"
EventTopicUpgrade = "upgradetopic"
)
+
const (
EventReloadCluster = "ReloadCluster"
EventPrimaryNotReady = "PrimaryNotReady"
@@ -161,6 +162,7 @@ type EventCreateClusterCompletedFormat struct {
func (p EventCreateClusterCompletedFormat) GetHeader() EventHeader {
return p.EventHeader
}
+
func (lvl EventCreateClusterCompletedFormat) String() string {
msg := fmt.Sprintf("Event %s - (create cluster completed) clustername %s workflow %s", lvl.EventHeader, lvl.Clustername, lvl.WorkflowID)
return msg
From 26e6144fe315b7ddf26dced2e0a8a447498ecff1 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 21 Dec 2020 16:13:27 -0500
Subject: [PATCH 062/276] Return an error with invalid type for backup options
If we cannot determine the type for a backup option, return an
error instead of silently passing it through.
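The pattern in `convertBackupOptsToStruct` can be sketched as follows. This is a simplified, hypothetical stand-in (the real code binds each field to an spf13/pflag flag; `backupOptions` and `bindFields` are illustrative names, not from the code base), showing only how an unrecognized field kind now surfaces as an error:

```go
package main

import (
	"fmt"
	"reflect"
)

// backupOptions is a hypothetical stand-in for one of the operator's
// backup option structs; the field names here are illustrative only.
type backupOptions struct {
	Force   bool
	Exclude []string
	Depth   int // an unsupported kind, used to show the new error path
}

// bindFields walks the struct fields by reflection and, where the real
// code binds each field to a command-line flag, any kind it does not
// recognize now produces an error instead of being silently skipped.
func bindFields(request interface{}) error {
	structVal := reflect.ValueOf(request).Elem()
	for i := 0; i < structVal.NumField(); i++ {
		fieldVal := structVal.Field(i)
		switch fieldVal.Kind() {
		case reflect.Bool, reflect.String, reflect.Slice:
			// supported kinds: the real code calls BoolVarP,
			// StringVarP, or StringArrayVarP here
		default:
			return fmt.Errorf("invalid value for %q: %v",
				structVal.Type().Field(i).Name, fieldVal.Kind())
		}
	}
	return nil
}

func main() {
	fmt.Println(bindFields(&backupOptions{}))
}
```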
---
internal/apiserver/backupoptions/backupoptionsutil.go | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/internal/apiserver/backupoptions/backupoptionsutil.go b/internal/apiserver/backupoptions/backupoptionsutil.go
index 5cf5901b29..b31bd8e00f 100644
--- a/internal/apiserver/backupoptions/backupoptionsutil.go
+++ b/internal/apiserver/backupoptions/backupoptionsutil.go
@@ -89,7 +89,8 @@ func convertBackupOptsToStruct(backupOpts string, request interface{}) (backupOp
commandLine.BoolVarP(fieldVal.Addr().Interface().(*bool), flag, flagShort, false, "")
case reflect.Slice:
commandLine.StringArrayVarP(fieldVal.Addr().Interface().(*[]string), flag, flagShort, nil, "")
- default: // no-op
+ default:
+ return nil, nil, fmt.Errorf("invalid value for (%q/%q): %v", flag, flagShort, fieldVal)
}
}
}
From 590e315a18549f0ddc41953c02f1153c3ad278c2 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 21 Dec 2020 16:25:00 -0500
Subject: [PATCH 063/276] Update policy format acceptance
Policies can now only be executed from SQL that is stored in a
"pgpolicies.crunchydata.com" custom resource, or loaded in
via `pgo create policy --in-file`.
---
cmd/pgo/cmd/create.go | 7 +++---
cmd/pgo/cmd/policy.go | 4 ----
.../pgo-client/reference/pgo_create_policy.md | 5 ++--
.../apiserver/policyservice/policyimpl.go | 3 +--
.../apiserver/policyservice/policyservice.go | 2 +-
internal/util/policy.go | 24 +------------------
pkg/apis/crunchydata.com/v1/policy.go | 1 -
pkg/apiservermsgs/policymsgs.go | 1 -
8 files changed, 8 insertions(+), 39 deletions(-)
diff --git a/cmd/pgo/cmd/create.go b/cmd/pgo/cmd/create.go
index eaa60e43fa..2e6bf8c41e 100644
--- a/cmd/pgo/cmd/create.go
+++ b/cmd/pgo/cmd/create.go
@@ -38,7 +38,7 @@ var (
Database string
Password string
SecretFrom string
- PoliciesFlag, PolicyFile, PolicyURL string
+ PoliciesFlag, PolicyFile string
UserLabels string
Tablespaces []string
ServiceType string
@@ -224,8 +224,8 @@ var createPolicyCmd = &cobra.Command{
Namespace = PGONamespace
}
log.Debug("create policy called ")
- if PolicyFile == "" && PolicyURL == "" {
- fmt.Println(`Error: The --in-file or --url flags are required to create a policy.`)
+ if PolicyFile == "" {
+ fmt.Println(`Error: The --in-file flag is required to create a policy.`)
return
}
@@ -521,7 +521,6 @@ func init() {
// "pgo create policy" flags
createPolicyCmd.Flags().StringVarP(&PolicyFile, "in-file", "i", "", "The policy file path to use for adding a policy.")
- createPolicyCmd.Flags().StringVarP(&PolicyURL, "url", "u", "", "The url to use for adding a policy.")
// "pgo create schedule" flags
createScheduleCmd.Flags().StringVarP(&ScheduleDatabase, "database", "", "", "The database to run the SQL policy against.")
diff --git a/cmd/pgo/cmd/policy.go b/cmd/pgo/cmd/policy.go
index 96a858efc9..ed4aae82a8 100644
--- a/cmd/pgo/cmd/policy.go
+++ b/cmd/pgo/cmd/policy.go
@@ -139,7 +139,6 @@ func showPolicy(args []string, ns string) {
for _, policy := range response.PolicyList.Items {
fmt.Println("")
fmt.Println("policy : " + policy.Spec.Name)
- fmt.Println(TreeBranch + "url : " + policy.Spec.URL)
fmt.Println(TreeBranch + "status : " + policy.Spec.Status)
fmt.Println(TreeTrunk + "sql : " + policy.Spec.SQL)
}
@@ -158,9 +157,6 @@ func createPolicy(args []string, ns string) {
r.Namespace = ns
r.ClientVersion = msgs.PGO_VERSION
- if PolicyURL != "" {
- r.URL = PolicyURL
- }
if PolicyFile != "" {
r.SQL, err = getPolicyString(PolicyFile)
diff --git a/docs/content/pgo-client/reference/pgo_create_policy.md b/docs/content/pgo-client/reference/pgo_create_policy.md
index ac17b059f6..7705067853 100644
--- a/docs/content/pgo-client/reference/pgo_create_policy.md
+++ b/docs/content/pgo-client/reference/pgo_create_policy.md
@@ -20,13 +20,12 @@ pgo create policy [flags]
```
-h, --help help for policy
-i, --in-file string The policy file path to use for adding a policy.
- -u, --url string The url to use for adding a policy.
```
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -40,4 +39,4 @@ pgo create policy [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 21-Dec-2020
diff --git a/internal/apiserver/policyservice/policyimpl.go b/internal/apiserver/policyservice/policyimpl.go
index a3153536bc..21b77d4d63 100644
--- a/internal/apiserver/policyservice/policyimpl.go
+++ b/internal/apiserver/policyservice/policyimpl.go
@@ -37,7 +37,7 @@ import (
)
// CreatePolicy ...
-func CreatePolicy(client pgo.Interface, policyName, policyURL, policyFile, ns, pgouser string) (bool, error) {
+func CreatePolicy(client pgo.Interface, policyName, policyFile, ns, pgouser string) (bool, error) {
ctx := context.TODO()
log.Debugf("create policy called for %s", policyName)
@@ -46,7 +46,6 @@ func CreatePolicy(client pgo.Interface, policyName, policyURL, policyFile, ns, p
spec := crv1.PgpolicySpec{}
spec.Namespace = ns
spec.Name = policyName
- spec.URL = policyURL
spec.SQL = policyFile
myLabels := make(map[string]string)
diff --git a/internal/apiserver/policyservice/policyservice.go b/internal/apiserver/policyservice/policyservice.go
index 4324faac21..00a499be64 100644
--- a/internal/apiserver/policyservice/policyservice.go
+++ b/internal/apiserver/policyservice/policyservice.go
@@ -83,7 +83,7 @@ func CreatePolicyHandler(w http.ResponseWriter, r *http.Request) {
resp.Status.Msg = "invalid policy name format " + errs[0]
} else {
- found, err := CreatePolicy(apiserver.Clientset, request.Name, request.URL, request.SQL, ns, username)
+ found, err := CreatePolicy(apiserver.Clientset, request.Name, request.SQL, ns, username)
if err != nil {
log.Error(err.Error())
resp.Status.Code = msgs.Error
diff --git a/internal/util/policy.go b/internal/util/policy.go
index 6b19784dca..25fe2953d7 100644
--- a/internal/util/policy.go
+++ b/internal/util/policy.go
@@ -19,8 +19,6 @@ import (
"context"
"errors"
"fmt"
- "io/ioutil"
- "net/http"
"strings"
"github.com/crunchydata/postgres-operator/internal/config"
@@ -106,10 +104,7 @@ func GetPolicySQL(clientset pgo.Interface, namespace, policyName string) (string
ctx := context.TODO()
p, err := clientset.CrunchydataV1().Pgpolicies(namespace).Get(ctx, policyName, metav1.GetOptions{})
if err == nil {
- if p.Spec.URL != "" {
- return readSQLFromURL(p.Spec.URL)
- }
- return p.Spec.SQL, err
+ return p.Spec.SQL, nil
}
if kerrors.IsNotFound(err) {
@@ -119,23 +114,6 @@ func GetPolicySQL(clientset pgo.Interface, namespace, policyName string) (string
return "", err
}
-// readSQLFromURL returns the SQL string from a URL
-func readSQLFromURL(urlstring string) (string, error) {
- var bodyBytes []byte
- response, err := http.Get(urlstring)
- if err == nil {
- bodyBytes, err = ioutil.ReadAll(response.Body)
- defer response.Body.Close()
- }
-
- if err != nil {
- log.Error(err)
- return "", err
- }
-
- return string(bodyBytes), err
-}
-
// ValidatePolicy tests to see if a policy exists
func ValidatePolicy(clientset pgo.Interface, namespace string, policyName string) error {
ctx := context.TODO()
diff --git a/pkg/apis/crunchydata.com/v1/policy.go b/pkg/apis/crunchydata.com/v1/policy.go
index 28347f9950..904567d496 100644
--- a/pkg/apis/crunchydata.com/v1/policy.go
+++ b/pkg/apis/crunchydata.com/v1/policy.go
@@ -27,7 +27,6 @@ const PgpolicyResourcePlural = "pgpolicies"
type PgpolicySpec struct {
Namespace string `json:"namespace"`
Name string `json:"name"`
- URL string `json:"url"`
SQL string `json:"sql"`
Status string `json:"status"`
}
diff --git a/pkg/apiservermsgs/policymsgs.go b/pkg/apiservermsgs/policymsgs.go
index ec3e7cf2f9..ba5c6cea21 100644
--- a/pkg/apiservermsgs/policymsgs.go
+++ b/pkg/apiservermsgs/policymsgs.go
@@ -33,7 +33,6 @@ type ShowPolicyRequest struct {
// swagger:model
type CreatePolicyRequest struct {
Name string
- URL string
SQL string
Namespace string
ClientVersion string
From 449b9d4d9c6824b5c5598447a95319a5e871257c Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 21 Dec 2020 16:04:23 -0500
Subject: [PATCH 064/276] Update linter CI to include "default" as exhaustive
Some switch statements cover types with a large number of possible
values, so a `default` case should be treated as handling all of the
remaining ones.
The linter now also runs on the full code base each time a pull request
is opened. Additionally, this disables the gofumpt check.
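The effect of `default-signifies-exhaustive` can be illustrated with a small sketch. The enum below is hypothetical (the names are not taken from the operator's code base); the point is that the `default` branch satisfies the exhaustive linter for the values that are not listed explicitly:

```go
package main

import "fmt"

// storageType is a hypothetical enum-like type; the names below are
// illustrative only.
type storageType int

const (
	storageLocal storageType = iota
	storageS3
	storageGCS
	storageAzure
)

// describe handles two values explicitly and funnels the rest through
// default. With default-signifies-exhaustive enabled, the exhaustive
// linter treats that default branch as covering storageGCS and
// storageAzure instead of reporting them as missing cases.
func describe(s storageType) string {
	switch s {
	case storageLocal:
		return "local volume"
	case storageS3:
		return "s3 bucket"
	default:
		return "other"
	}
}

func main() {
	fmt.Println(describe(storageAzure)) // prints: other
}
```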
---
.github/workflows/lint.yaml | 1 -
.golangci.yaml | 5 +++++
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/.github/workflows/lint.yaml b/.github/workflows/lint.yaml
index 29e173c4fa..40ee71c8e9 100644
--- a/.github/workflows/lint.yaml
+++ b/.github/workflows/lint.yaml
@@ -12,4 +12,3 @@ jobs:
with:
version: v1.32
args: --timeout=5m
- only-new-issues: true
diff --git a/.golangci.yaml b/.golangci.yaml
index c8ac7c76ed..9918bb1a8b 100644
--- a/.golangci.yaml
+++ b/.golangci.yaml
@@ -2,6 +2,7 @@
linters:
disable:
+ - gofumpt
- scopelint
enable:
- gosimple
@@ -11,6 +12,10 @@ linters:
- format
- unused
+linters-settings:
+ exhaustive:
+ default-signifies-exhaustive: true
+
run:
skip-dirs:
- pkg/generated
From eaa88138bc866ab58b4ebb12f0782b38ad3b75e5 Mon Sep 17 00:00:00 2001
From: Joseph Mckulka <16840147+jmckulk@users.noreply.github.com>
Date: Tue, 22 Dec 2020 15:30:22 -0500
Subject: [PATCH 065/276] Ensure PgBackRest env vars use index
The PostgreSQL Operator currently utilizes two different formats for defining
pgBackRest "repo" settings via environment variables. For instance, certain
variables are defined with an index (e.g. `PGBACKREST_REPO1_`), while others are
not (e.g. `PGBACKREST_REPO_`). This inconsistency has led to confusion when using
the operator. The operator now standardizes on indexed variables.
Issue: [ch9871]
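The naming convention can be sketched with a small helper. Note that `repoEnvName` is a hypothetical function written for illustration, not part of the operator; it simply shows the indexed scheme the variables now follow:

```go
package main

import "fmt"

// repoEnvName illustrates the standardized scheme: every
// repository-scoped pgBackRest variable now carries an explicit index,
// e.g. PGBACKREST_REPO1_PATH, never PGBACKREST_REPO_PATH.
func repoEnvName(index int, suffix string) string {
	return fmt.Sprintf("PGBACKREST_REPO%d_%s", index, suffix)
}

func main() {
	fmt.Println(repoEnvName(1, "PATH")) // prints: PGBACKREST_REPO1_PATH
	fmt.Println(repoEnvName(1, "TYPE")) // prints: PGBACKREST_REPO1_TYPE
}
```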
---
.../pgo-operator/files/pgo-configs/backrest-job.json | 8 ++++----
.../files/pgo-configs/pgo-backrest-repo-template.json | 4 ++--
internal/operator/backrest/backup.go | 8 ++++----
internal/operator/backrest/repo.go | 10 +++++-----
internal/operator/clusterutilities.go | 4 ++--
5 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json
index dddc0b14d9..bf89971aa6 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json
@@ -57,11 +57,11 @@
"name": "PGBACKREST_DB_PATH",
"value": "{{.PgbackrestDBPath}}"
}, {
- "name": "PGBACKREST_REPO_PATH",
- "value": "{{.PgbackrestRepoPath}}"
+ "name": "PGBACKREST_REPO1_PATH",
+ "value": "{{.PgbackrestRepo1Path}}"
}, {
- "name": "PGBACKREST_REPO_TYPE",
- "value": "{{.PgbackrestRepoType}}"
+ "name": "PGBACKREST_REPO1_TYPE",
+ "value": "{{.PgbackrestRepo1Type}}"
},{
"name": "PGHA_PGBACKREST_LOCAL_S3_STORAGE",
"value": "{{.BackrestLocalAndS3Storage}}"
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json
index 30b69dc4b6..885396322b 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json
@@ -69,8 +69,8 @@
"value": "{{.PgbackrestDBPath}}"
},
{
- "name": "PGBACKREST_REPO_PATH",
- "value": "{{.PgbackrestRepoPath}}"
+ "name": "PGBACKREST_REPO1_PATH",
+ "value": "{{.PgbackrestRepo1Path}}"
},
{
"name": "PGBACKREST_PG1_PORT",
diff --git a/internal/operator/backrest/backup.go b/internal/operator/backrest/backup.go
index 41a03a3d89..e486104961 100644
--- a/internal/operator/backrest/backup.go
+++ b/internal/operator/backrest/backup.go
@@ -55,8 +55,8 @@ type backrestJobTemplateFields struct {
SecurityContext string
PgbackrestStanza string
PgbackrestDBPath string
- PgbackrestRepoPath string
- PgbackrestRepoType string
+ PgbackrestRepo1Path string
+ PgbackrestRepo1Type string
BackrestLocalAndS3Storage bool
PgbackrestS3VerifyTLS string
PgbackrestRestoreVolumes string
@@ -88,10 +88,10 @@ func Backrest(namespace string, clientset kubernetes.Interface, task *crv1.Pgtas
CCPImageTag: operator.Pgo.Cluster.CCPImageTag,
PgbackrestStanza: task.Spec.Parameters[config.LABEL_PGBACKREST_STANZA],
PgbackrestDBPath: task.Spec.Parameters[config.LABEL_PGBACKREST_DB_PATH],
- PgbackrestRepoPath: task.Spec.Parameters[config.LABEL_PGBACKREST_REPO_PATH],
+ PgbackrestRepo1Path: task.Spec.Parameters[config.LABEL_PGBACKREST_REPO_PATH],
PgbackrestRestoreVolumes: "",
PgbackrestRestoreVolumeMounts: "",
- PgbackrestRepoType: operator.GetRepoType(task.Spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE]),
+ PgbackrestRepo1Type: operator.GetRepoType(task.Spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE]),
BackrestLocalAndS3Storage: operator.IsLocalAndS3Storage(task.Spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE]),
PgbackrestS3VerifyTLS: task.Spec.Parameters[config.LABEL_BACKREST_S3_VERIFY_TLS],
}
diff --git a/internal/operator/backrest/repo.go b/internal/operator/backrest/repo.go
index 988b279548..f36fee6dce 100644
--- a/internal/operator/backrest/repo.go
+++ b/internal/operator/backrest/repo.go
@@ -50,12 +50,12 @@ type RepoDeploymentTemplateFields struct {
BackrestRepoClaimName string
SshdSecretsName string
PGbackrestDBHost string
- PgbackrestRepoPath string
+ PgbackrestRepo1Path string
PgbackrestDBPath string
PgbackrestPGPort string
SshdPort int
PgbackrestStanza string
- PgbackrestRepoType string
+ PgbackrestRepo1Type string
PgbackrestS3EnvVars string
Name string
ClusterName string
@@ -197,7 +197,7 @@ func setBootstrapRepoOverrides(clientset kubernetes.Interface, cluster *crv1.Pgc
return err
}
- repoFields.PgbackrestRepoPath = restoreFromSecret.Annotations[config.ANNOTATION_REPO_PATH]
+ repoFields.PgbackrestRepo1Path = restoreFromSecret.Annotations[config.ANNOTATION_REPO_PATH]
repoFields.PgbackrestPGPort = restoreFromSecret.Annotations[config.ANNOTATION_PG_PORT]
sshdPort, err := strconv.Atoi(restoreFromSecret.Annotations[config.ANNOTATION_SSHD_PORT])
@@ -234,12 +234,12 @@ func getRepoDeploymentFields(clientset kubernetes.Interface, cluster *crv1.Pgclu
BackrestRepoClaimName: fmt.Sprintf(util.BackrestRepoPVCName, cluster.Name),
SshdSecretsName: fmt.Sprintf(util.BackrestRepoSecretName, cluster.Name),
PGbackrestDBHost: cluster.Name,
- PgbackrestRepoPath: util.GetPGBackRestRepoPath(*cluster),
+ PgbackrestRepo1Path: util.GetPGBackRestRepoPath(*cluster),
PgbackrestDBPath: "/pgdata/" + cluster.Name,
PgbackrestPGPort: cluster.Spec.Port,
SshdPort: operator.Pgo.Cluster.BackrestPort,
PgbackrestStanza: "db",
- PgbackrestRepoType: operator.GetRepoType(cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]),
+ PgbackrestRepo1Type: operator.GetRepoType(cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]),
PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(*cluster, clientset, namespace),
Name: fmt.Sprintf(util.BackrestRepoServiceName, cluster.Name),
ClusterName: cluster.Name,
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index da27c3c614..776c264579 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -54,9 +54,9 @@ const (
PGHAConfigInitSetting = "init"
// PGHAConfigReplicaBootstrapRepoType defines an override for the type of repo (local, S3, etc.)
// that should be utilized when bootstrapping a replica (i.e. it override the
- // PGBACKREST_REPO_TYPE env var in the environment). Allows for dynamic changing of the
+ // PGBACKREST_REPO1_TYPE env var in the environment). Allows for dynamic changing of the
// backrest repo type without requiring container restarts (as would be required to update
- // PGBACKREST_REPO_TYPE).
+ // PGBACKREST_REPO1_TYPE).
PGHAConfigReplicaBootstrapRepoType = "replica-bootstrap-repo-type"
)
From 96cc73248f9c8f9cbb10cb96b384357a253b1585 Mon Sep 17 00:00:00 2001
From: Chris Bandy
Date: Thu, 24 Sep 2020 11:41:28 -0500
Subject: [PATCH 066/276] Build the GCP Marketplace installer with a smaller
context
This reduces the time it takes to build with Docker.
---
installers/gcp-marketplace/Dockerfile | 18 +++++++++---------
installers/gcp-marketplace/Makefile | 4 ++--
2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/installers/gcp-marketplace/Dockerfile b/installers/gcp-marketplace/Dockerfile
index adf85a355a..0e84cb2f27 100644
--- a/installers/gcp-marketplace/Dockerfile
+++ b/installers/gcp-marketplace/Dockerfile
@@ -20,21 +20,21 @@ RUN apt-get update \
&& apt-get install -y --no-install-recommends ansible=2.9.* openssh-client \
&& rm -rf /var/lib/apt/lists/*
-COPY installers/ansible/* \
+COPY ansible/* \
/opt/postgres-operator/ansible/
-COPY installers/favicon.png \
- installers/gcp-marketplace/install-job.yaml \
- installers/gcp-marketplace/install.sh \
- installers/gcp-marketplace/values.yaml \
+COPY favicon.png \
+ gcp-marketplace/install-job.yaml \
+ gcp-marketplace/install.sh \
+ gcp-marketplace/values.yaml \
/opt/postgres-operator/
-COPY installers/gcp-marketplace/install-hook.sh \
+COPY gcp-marketplace/install-hook.sh \
/bin/create_manifests.sh
-COPY installers/gcp-marketplace/schema.yaml \
+COPY gcp-marketplace/schema.yaml \
/data/
-COPY installers/gcp-marketplace/application.yaml \
+COPY gcp-marketplace/application.yaml \
/data/manifest/
-COPY installers/gcp-marketplace/test-pod.yaml \
+COPY gcp-marketplace/test-pod.yaml \
/data-test/manifest/
ARG PGO_VERSION
diff --git a/installers/gcp-marketplace/Makefile b/installers/gcp-marketplace/Makefile
index f10f5b7c27..6236ae3ad8 100644
--- a/installers/gcp-marketplace/Makefile
+++ b/installers/gcp-marketplace/Makefile
@@ -37,12 +37,12 @@ image: image-$(IMAGE_BUILDER)
.PHONY: image-buildah
image-buildah: ## Build the deployer image with Buildah
- sudo buildah bud --file Dockerfile --tag '$(DEPLOYER_IMAGE)' $(IMAGE_BUILD_ARGS) --layers ../..
+ sudo buildah bud --file Dockerfile --tag '$(DEPLOYER_IMAGE)' $(IMAGE_BUILD_ARGS) --layers ..
sudo buildah push '$(DEPLOYER_IMAGE)' docker-daemon:'$(DEPLOYER_IMAGE)'
.PHONY: image-docker
image-docker: ## Build the deployer image with Docker
- docker build --file Dockerfile --tag '$(DEPLOYER_IMAGE)' $(IMAGE_BUILD_ARGS) ../..
+ docker build --file Dockerfile --tag '$(DEPLOYER_IMAGE)' $(IMAGE_BUILD_ARGS) ..
# PARAMETERS='{"OPERATOR_NAMESPACE": "", "OPERATOR_NAME": "", "OPERATOR_ADMIN_PASSWORD": ""}'
.PHONY: install
From a8db592c10b201c2b228d1f87149a80bed776018 Mon Sep 17 00:00:00 2001
From: Chris Bandy
Date: Tue, 22 Dec 2020 12:54:57 -0600
Subject: [PATCH 067/276] Wait for RBAC reconciliation in GCP Marketplace
Marketplace verification checks that the namespace has no RBAC objects
after the application is removed. These objects used to be created
during the install, but now they are created by the operator during
reconcile.
See: 78b39759cef7f6994165a937348291f03500db5c
Issue: [ch10000]
---
installers/gcp-marketplace/install.sh | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/installers/gcp-marketplace/install.sh b/installers/gcp-marketplace/install.sh
index 6dc770b993..3b65f8976d 100755
--- a/installers/gcp-marketplace/install.sh
+++ b/installers/gcp-marketplace/install.sh
@@ -37,16 +37,36 @@ resources=(
clusterrolebinding/pgo-cluster-role
configmap/pgo-config
deployment/postgres-operator
+ role/pgo-backrest-role
+ role/pgo-pg-role
role/pgo-role
+ role/pgo-target-role
+ rolebinding/pgo-backrest-role-binding
+ rolebinding/pgo-pg-role-binding
rolebinding/pgo-role
+ rolebinding/pgo-target-role-binding
secret/pgo.tls
secret/pgo-backrest-repo-config
secret/pgorole-pgoadmin
secret/pgouser-admin
service/postgres-operator
+ serviceaccount/pgo-backrest
+ serviceaccount/pgo-default
+ serviceaccount/pgo-pg
+ serviceaccount/pgo-target
serviceaccount/postgres-operator
)
for resource in "${resources[@]}"; do
+ kind="${resource%/*}"
+ name="${resource#*/}"
+
+ for _ in $(seq 5); do
+ if [ "$( kc get "$kind" --field-selector="metadata.name=$name" --output=name )" ]
+ then break
+ else sleep 1s
+ fi
+ done
+
kc patch "$resource" --type=strategic --patch="$application_ownership"
done
From b680e59663e4c10a99ec9213a29c2d2c8b0451dc Mon Sep 17 00:00:00 2001
From: Chris Bandy
Date: Tue, 22 Dec 2020 12:46:48 -0600
Subject: [PATCH 068/276] No ownership on default pgBackRest configuration
This object is no longer created during install.
See: 29ef4855cba68ddcc4dee13a21b697315e5fc88e
---
installers/gcp-marketplace/install.sh | 1 -
1 file changed, 1 deletion(-)
diff --git a/installers/gcp-marketplace/install.sh b/installers/gcp-marketplace/install.sh
index 3b65f8976d..cbe6d6890d 100755
--- a/installers/gcp-marketplace/install.sh
+++ b/installers/gcp-marketplace/install.sh
@@ -46,7 +46,6 @@ resources=(
rolebinding/pgo-role
rolebinding/pgo-target-role-binding
secret/pgo.tls
- secret/pgo-backrest-repo-config
secret/pgorole-pgoadmin
secret/pgouser-admin
service/postgres-operator
From 44358d4fd07218becb298d86a126a645917f8ac2 Mon Sep 17 00:00:00 2001
From: Chris Bandy
Date: Tue, 22 Dec 2020 13:46:19 -0600
Subject: [PATCH 069/276] Verify GCP Marketplace installer without parameters
To be on the marketplace, the deployer image must pass verification
without any specified parameters. Contrary to the documentation, the
verify command still takes parameters, which allows us to test different
configuration values.
See: https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools/commit/aca27e694dc2
Issue: [ch10000]
---
installers/gcp-marketplace/Dockerfile | 2 ++
installers/gcp-marketplace/test-schema.yaml | 6 ++++++
2 files changed, 8 insertions(+)
create mode 100644 installers/gcp-marketplace/test-schema.yaml
diff --git a/installers/gcp-marketplace/Dockerfile b/installers/gcp-marketplace/Dockerfile
index 0e84cb2f27..464e7d74fd 100644
--- a/installers/gcp-marketplace/Dockerfile
+++ b/installers/gcp-marketplace/Dockerfile
@@ -36,6 +36,8 @@ COPY gcp-marketplace/application.yaml \
/data/manifest/
COPY gcp-marketplace/test-pod.yaml \
/data-test/manifest/
+COPY gcp-marketplace/test-schema.yaml \
+ /data-test/schema.yaml
ARG PGO_VERSION
RUN for file in \
diff --git a/installers/gcp-marketplace/test-schema.yaml b/installers/gcp-marketplace/test-schema.yaml
new file mode 100644
index 0000000000..5dae182d7e
--- /dev/null
+++ b/installers/gcp-marketplace/test-schema.yaml
@@ -0,0 +1,6 @@
+properties:
+ OPERATOR_ADMIN_PASSWORD:
+ type: string
+ default: insecure
+ x-google-marketplace:
+ type: MASKED_FIELD
From 00f4c72bca641f22f9279f6a1d66442e4eff3f71 Mon Sep 17 00:00:00 2001
From: Chris Bandy
Date: Tue, 22 Dec 2020 14:22:07 -0600
Subject: [PATCH 070/276] Move GCP Marketplace service account description
Recent versions of marketplace verification expect a service account's
description under a different schema key. Tested with tools v0.10.10.
See: https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools/commit/10c92d4bb52c
Issue: [ch10000]
---
installers/gcp-marketplace/schema.yaml | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/installers/gcp-marketplace/schema.yaml b/installers/gcp-marketplace/schema.yaml
index 6f0ec5320f..6b7e3df965 100644
--- a/installers/gcp-marketplace/schema.yaml
+++ b/installers/gcp-marketplace/schema.yaml
@@ -11,13 +11,13 @@ properties:
INSTALLER_SERVICE_ACCOUNT: # This key appears in the ClusterRoleBinding name.
title: Cluster Admin Service Account
- description: >-
- Name of a service account in the target namespace that has cluster-admin permissions.
- This is used by the operator installer to create Custom Resource Definitions.
type: string
x-google-marketplace:
type: SERVICE_ACCOUNT
serviceAccount:
+ description: >-
+ Name of a service account in the target namespace that has cluster-admin permissions.
+ This is used by the operator installer to create Custom Resource Definitions.
roles:
- type: ClusterRole
rulesType: PREDEFINED
From 67d9b38df20233b76e7b8500c786e464fd45cc52 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 24 Dec 2020 09:54:20 -0500
Subject: [PATCH 071/276] Revise example custom resources around monitoring
This ensures that the "exporter" attribute is present in the
example, though it is defaulted to false.
---
docs/content/custom-resources/_index.md | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index b39dd3ae0b..6a578bc242 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -312,7 +312,6 @@ metadata:
backrest-storage-type: "s3"
crunchy-pgbadger: "false"
crunchy-pgha-scope: ${pgo_cluster_name}
- crunchy-postgres-exporter: "false"
deployment-name: ${pgo_cluster_name}
name: ${pgo_cluster_name}
pg-cluster: ${pgo_cluster_name}
@@ -363,6 +362,7 @@ spec:
clustername: ${pgo_cluster_name}
customconfig: ""
database: ${pgo_cluster_name}
+ exporter: false
exporterport: "9187"
limits: {}
name: ${pgo_cluster_name}
@@ -506,7 +506,6 @@ To enable the [monitoring]({{< relref "/architecture/monitoring.md">}})
(aka metrics) sidecar using the `crunchy-postgres-exporter` container, you need
to set the `exporter` attribute in the `pgclusters.crunchydata.com` custom resource.
-
### Add a Tablespace
Tablespaces can be added during the lifetime of a PostgreSQL cluster (tablespaces can be removed as well, but for a detailed explanation as to how, please see the [Tablespaces]({{< relref "/architecture/tablespaces.md">}}) section).
From c9ea08397725868b050259589fee295de94e7d03 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Fri, 25 Dec 2020 11:21:06 -0500
Subject: [PATCH 072/276] Documentation around HA troubleshooting scenarios
This provides an analysis of a scenario that can crop up when
synchronous replication fails, and explains how to troubleshoot
and exit the situation.
Issue: #2132
---
docs/content/tutorial/high-availability.md | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/docs/content/tutorial/high-availability.md b/docs/content/tutorial/high-availability.md
index a3c2a12bea..3a528ab456 100644
--- a/docs/content/tutorial/high-availability.md
+++ b/docs/content/tutorial/high-availability.md
@@ -96,6 +96,20 @@ Please understand the tradeoffs of synchronous replication before using it.
To learn how to use pod anti-affinity and node affinity, please refer to the [high availability architecture documentation]({{< relref "architecture/high-availability/_index.md" >}})
+## Troubleshooting
+
+### No Primary Available After Both Synchronous Replication Instances Fail
+
+Though synchronous replication is available for guarding against transaction loss for [write-sensitive workloads]({{< relref "architecture/high-availability/_index.md" >}}#synchronous-replication-guarding-against-transactions-loss), by default the high availability system prefers availability over consistency and will continue to accept writes to a primary even if a replica fails. Additionally, in most scenarios, a system using synchronous replication will be able to recover and self-heal should a primary or a replica go down.
+
+However, in the case that both a primary and its synchronous replica go down at the same time, a new primary may not be promoted. To guard against transaction loss, the high availability system will not promote any instances if it cannot determine whether they had been one of the synchronous instances. As such, when it recovers, it will bring up all the instances as replicas.
+
+To get out of this situation, inspect the replicas using `pgo failover --query` to determine the best candidate (typically the one with the least amount of replication lag). After determining the best candidate, promote one of the replicas using the `pgo failover --target` command.
+
+If you are still having issues, you may need to exec into one of the Pods and inspect the state with the `patronictl` command.
+
+A detailed breakdown of this case can be found [here](https://github.com/CrunchyData/postgres-operator/issues/2132#issuecomment-748719843).
+
## Next Steps
Backups, restores, point-in-time-recoveries: [disaster recovery]({{< relref "architecture/disaster-recovery.md" >}}) is a big topic! We'll learn how you can [perform disaster recovery]({{< relref "tutorial/disaster-recovery.md" >}}) and more with the PostgreSQL Operator.
From b471e7160cfa6cfed1decac32ec57656fb47e05c Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 26 Dec 2020 15:14:47 -0500
Subject: [PATCH 073/276] Modify character space for random password generation
This removes a couple of characters from consideration for
the randomly generated passwords, as these characters could pose
problems when applying them in shell environments. The character
entropy is still quite large even with this removal.
---
internal/util/secrets.go | 19 ++++++++++++++++---
internal/util/secrets_test.go | 12 +++++++++---
2 files changed, 25 insertions(+), 6 deletions(-)
diff --git a/internal/util/secrets.go b/internal/util/secrets.go
index 733392eabf..c8509e332f 100644
--- a/internal/util/secrets.go
+++ b/internal/util/secrets.go
@@ -46,6 +46,10 @@ const (
// passwordCharUpper is the highest ASCII character to use for generating a
// password, which is 126
passwordCharUpper = 126
+ // passwordCharExclude is a map of characters that we choose to exclude from
+ // the password to simplify usage in the shell. There is still enough entropy
+ // that exclusion of these characters is OK.
+ passwordCharExclude = "`\\"
)
// passwordCharSelector is a "big int" that we need to select the random ASCII
@@ -75,15 +79,24 @@ func CreateSecret(clientset kubernetes.Interface, db, secretName, username, pass
// ASCII characters suitable for a password
func GeneratePassword(length int) (string, error) {
password := make([]byte, length)
+ i := 0
- for i := 0; i < length; i++ {
- char, err := rand.Int(rand.Reader, passwordCharSelector)
+ for i < length {
+ val, err := rand.Int(rand.Reader, passwordCharSelector)
// if there is an error generating the random integer, return
if err != nil {
return "", err
}
- password[i] = byte(passwordCharLower + char.Int64())
+ char := byte(passwordCharLower + val.Int64())
+
+ // if the character is in the exclusion list, continue
+ if idx := strings.IndexAny(string(char), passwordCharExclude); idx > -1 {
+ continue
+ }
+
+ password[i] = char
+ i++
}
return string(password), nil
diff --git a/internal/util/secrets_test.go b/internal/util/secrets_test.go
index 89cbcebac9..423beb5e03 100644
--- a/internal/util/secrets_test.go
+++ b/internal/util/secrets_test.go
@@ -23,7 +23,7 @@ import (
func TestGeneratePassword(t *testing.T) {
// different lengths
- for _, length := range []int{1, 2, 3, 5, 20} {
+ for _, length := range []int{1, 2, 3, 5, 20, 200} {
password, err := GeneratePassword(length)
if err != nil {
t.Fatalf("expected no error, got %v", err)
@@ -31,9 +31,12 @@ func TestGeneratePassword(t *testing.T) {
if expected, actual := length, len(password); expected != actual {
t.Fatalf("expected length %v, got %v", expected, actual)
}
- if i := strings.IndexFunc(password, unicode.IsPrint); i > 0 {
+ if i := strings.IndexFunc(password, func(r rune) bool { return !unicode.IsPrint(r) }); i > -1 {
t.Fatalf("expected only printable characters, got %q in %q", password[i], password)
}
+ if i := strings.IndexAny(password, passwordCharExclude); i > -1 {
+ t.Fatalf("expected no exclude characters, got %q in %q", password[i], password)
+ }
}
// random contents
@@ -44,9 +47,12 @@ func TestGeneratePassword(t *testing.T) {
if err != nil {
t.Fatalf("expected no error, got %v", err)
}
- if i := strings.IndexFunc(password, unicode.IsPrint); i > 0 {
+ if i := strings.IndexFunc(password, func(r rune) bool { return !unicode.IsPrint(r) }); i > -1 {
t.Fatalf("expected only printable characters, got %q in %q", password[i], password)
}
+ if i := strings.IndexAny(password, passwordCharExclude); i > -1 {
+ t.Fatalf("expected no exclude characters, got %q in %q", password[i], password)
+ }
for i := range previous {
if password == previous[i] {
From 02a53114d7906a85ade3154c158e90674ea975a5 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 26 Dec 2020 15:22:19 -0500
Subject: [PATCH 074/276] Ensure "hack" directory is skipped for linter
Given the nature of this directory, let alone the name, we do
not need to concern ourselves with the lint state of it.
---
.golangci.yaml | 1 +
1 file changed, 1 insertion(+)
diff --git a/.golangci.yaml b/.golangci.yaml
index 9918bb1a8b..937735ce02 100644
--- a/.golangci.yaml
+++ b/.golangci.yaml
@@ -18,4 +18,5 @@ linters-settings:
run:
skip-dirs:
+ - hack
- pkg/generated
From 82718f00a2f0171e48ba2d8aa0e9689277274a5f Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 27 Dec 2020 16:29:18 -0500
Subject: [PATCH 075/276] Remove reference to deprecated CRD attribute
"ArchiveStorage" is no longer used, so there is no need to
continue to carry this CRD attribute forward.
---
examples/create-by-resource/fromcrd.json | 9 ---------
pkg/apis/crunchydata.com/v1/cluster.go | 1 -
pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go | 1 -
3 files changed, 11 deletions(-)
diff --git a/examples/create-by-resource/fromcrd.json b/examples/create-by-resource/fromcrd.json
index ea603808ac..58dd2f515e 100644
--- a/examples/create-by-resource/fromcrd.json
+++ b/examples/create-by-resource/fromcrd.json
@@ -24,15 +24,6 @@
"namespace": "pgouser1"
},
"spec": {
- "ArchiveStorage": {
- "accessmode": "",
- "matchLabels": "",
- "name": "",
- "size": "",
- "storageclass": "",
- "storagetype": "",
- "supplementalgroups": ""
- },
"BackrestStorage": {
"accessmode": "ReadWriteOnce",
"matchLabels": "",
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index fb121ea835..e2180e90dc 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -56,7 +56,6 @@ type PgclusterSpec struct {
PrimaryStorage PgStorageSpec
WALStorage PgStorageSpec
- ArchiveStorage PgStorageSpec
ReplicaStorage PgStorageSpec
BackrestStorage PgStorageSpec
diff --git a/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go b/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go
index 80fd389e4f..69a3a673f5 100644
--- a/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go
+++ b/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go
@@ -196,7 +196,6 @@ func (in *PgclusterSpec) DeepCopyInto(out *PgclusterSpec) {
*out = *in
out.PrimaryStorage = in.PrimaryStorage
out.WALStorage = in.WALStorage
- out.ArchiveStorage = in.ArchiveStorage
out.ReplicaStorage = in.ReplicaStorage
out.BackrestStorage = in.BackrestStorage
if in.Resources != nil {
From 72281ca2dd535ec3ebc26833aefe63f95664a578 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 28 Dec 2020 10:17:58 -0500
Subject: [PATCH 076/276] Remove crunchy-admin container references
This container was added provisionally in 4.2.0 to be used as
part of the authentication and administration scheme for the
Operator. However, it was never fully built out for a variety
of reasons, and as such it is being abandoned in 4.6.0.
A file found in `/crunchyadm/pgha_initialized` is still being used
as one of the Operator's readiness checks, and as such, this
directory is being kept for the time being.
Additionally, a check for the "system account" named `crunchyadm` is
still performed, as some legacy systems may still have this account
in their PostgreSQL instantiation.
Issue: [ch10007]
---
docs/content/advanced/custom-configuration.md | 3 +-
docs/content/pgo-client/common-tasks.md | 1 -
examples/custom-config/postgres-ha.yaml | 3 +-
.../roles/pgo-operator/defaults/main.yml | 1 -
.../pgo-configs/cluster-bootstrap-job.json | 10 -----
.../files/pgo-configs/cluster-deployment.json | 41 +------------------
.../roles/pgo-operator/templates/pgo.yaml.j2 | 1 -
.../olm/postgresoperator.csv.images.yaml | 1 -
internal/config/images.go | 2 -
internal/config/pgoconfig.go | 1 -
internal/operator/cluster/clusterlogic.go | 2 -
internal/operator/clusterutilities.go | 8 +---
internal/operator/clusterutilities_test.go | 9 ++--
pkg/apis/crunchydata.com/v1/common.go | 4 +-
testing/pgo_cli/cluster_restart_test.go | 2 +-
15 files changed, 12 insertions(+), 77 deletions(-)
diff --git a/docs/content/advanced/custom-configuration.md b/docs/content/advanced/custom-configuration.md
index 4ac15d1578..5a7e4f0e34 100644
--- a/docs/content/advanced/custom-configuration.md
+++ b/docs/content/advanced/custom-configuration.md
@@ -150,7 +150,7 @@ postgresql:
shared_buffers: 128MB
shared_preload_libraries: pgaudit.so,pg_stat_statements.so
temp_buffers: 8MB
- unix_socket_directories: /tmp,/crunchyadm
+ unix_socket_directories: /tmp
wal_level: logical
work_mem: 4MB
recovery_conf:
@@ -168,7 +168,6 @@ postgresql:
- basebackup
pg_hba:
- local all postgres peer
- - local all crunchyadm peer
- host replication primaryuser 0.0.0.0/0 md5
- host all primaryuser 0.0.0.0/0 reject
- host all all 0.0.0.0/0 md5
diff --git a/docs/content/pgo-client/common-tasks.md b/docs/content/pgo-client/common-tasks.md
index 58e7476dfa..02243ea62f 100644
--- a/docs/content/pgo-client/common-tasks.md
+++ b/docs/content/pgo-client/common-tasks.md
@@ -128,7 +128,6 @@ Cluster:
BackrestS3URIStyle: ""
BackrestS3VerifyTLS: true
DisableAutofail: false
- EnableCrunchyadm: false
DisableReplicaStartFailReinit: false
PodAntiAffinity: preferred
SyncReplication: false
diff --git a/examples/custom-config/postgres-ha.yaml b/examples/custom-config/postgres-ha.yaml
index 0f4cd6fbab..5d823d4a81 100644
--- a/examples/custom-config/postgres-ha.yaml
+++ b/examples/custom-config/postgres-ha.yaml
@@ -12,10 +12,9 @@ bootstrap:
shared_buffers: 256MB
temp_buffers: 10MB
work_mem: 5MB
-postgresql:
+postgresql:
pg_hba:
- local all postgres peer
- - local all crunchyadm peer
- host replication primaryuser 0.0.0.0/0 md5
- host all primaryuser 0.0.0.0/0 reject
- host all postgres 0.0.0.0/0 md5
diff --git a/installers/ansible/roles/pgo-operator/defaults/main.yml b/installers/ansible/roles/pgo-operator/defaults/main.yml
index 39fb88c679..fb41fa471e 100644
--- a/installers/ansible/roles/pgo-operator/defaults/main.yml
+++ b/installers/ansible/roles/pgo-operator/defaults/main.yml
@@ -16,7 +16,6 @@ service_type: "ClusterIP"
cleanup: "false"
common_name: "crunchydata"
crunchy_debug: "false"
-enable_crunchyadm: "false"
disable_replica_start_fail_reinit: "false"
disable_fsgroup: "false"
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json
index 9bd5a10f21..ee5e5307a9 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json
@@ -57,9 +57,6 @@
{
"name": "PGHA_DATABASE",
"value": "{{.Database}}"
- }, {
- "name": "PGHA_CRUNCHYADM",
- "value": "true"
}, {
"name": "PGHA_REPLICA_REINIT_ON_START_FAIL",
"value": "{{.ReplicaReinitOnStartFail}}"
@@ -137,9 +134,6 @@
}, {
"mountPath": "/etc/pgbackrest/conf.d",
"name": "pgbackrest-config"
- }, {
- "mountPath": "/crunchyadm",
- "name": "crunchyadm"
}
{{.TablespaceVolumeMounts}}
],
@@ -189,10 +183,6 @@
}
},
{{ end }}
- {
- "name": "crunchyadm",
- "emptyDir": {}
- },
{
"name": "dshm",
"emptyDir": {
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
index c05ee7210c..33c02937c4 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
@@ -95,9 +95,6 @@
{
"name": "PGHA_DATABASE",
"value": "{{.Database}}"
- }, {
- "name": "PGHA_CRUNCHYADM",
- "value": "true"
}, {
"name": "PGHA_REPLICA_REINIT_ON_START_FAIL",
"value": "{{.ReplicaReinitOnStartFail}}"
@@ -187,10 +184,6 @@
"mountPath": "/etc/pgbackrest/conf.d",
"name": "pgbackrest-config"
},
- {
- "mountPath": "/crunchyadm",
- "name": "crunchyadm"
- },
{
"mountPath": "/etc/podinfo",
"name": "podinfo"
@@ -206,36 +199,7 @@
"protocol": "TCP"
}],
"imagePullPolicy": "IfNotPresent"
- }{{if .EnableCrunchyadm}},
- {
- "name": "crunchyadm",
- "image": "{{.CCPImagePrefix}}/crunchy-admin:{{.CCPImageTag}}",
- "securityContext": {
- "runAsUser": 17
- },
- "readinessProbe": {
- "exec": {
- "command": [
- "/opt/cpm/bin/crunchyadm-readiness.sh"
- ]
- },
- "initialDelaySeconds": 30,
- "timeoutSeconds": 10
- },
- "env": [
- {
- "name": "PGHOST",
- "value": "/crunchyadm"
- }
- ],
- "volumeMounts": [
- {
- "mountPath": "/crunchyadm",
- "name": "crunchyadm"
- }
- ],
- "imagePullPolicy": "IfNotPresent"
- }{{ end }}
+ }
{{ if .ExporterAddon }}
,{{.ExporterAddon }}
{{ end }}
@@ -319,9 +283,6 @@
}, {
"name": "report",
"emptyDir": { "medium": "Memory" }
- }, {
- "name": "crunchyadm",
- "emptyDir": {}
},
{
"name": "dshm",
diff --git a/installers/ansible/roles/pgo-operator/templates/pgo.yaml.j2 b/installers/ansible/roles/pgo-operator/templates/pgo.yaml.j2
index f1b21fbbcb..e87a245bed 100644
--- a/installers/ansible/roles/pgo-operator/templates/pgo.yaml.j2
+++ b/installers/ansible/roles/pgo-operator/templates/pgo.yaml.j2
@@ -20,7 +20,6 @@ Cluster:
Replicas: {{ db_replicas }}
ArchiveMode: {{ archive_mode }}
ServiceType: {{ service_type }}
- EnableCrunchyadm: {{ enable_crunchyadm }}
DisableReplicaStartFailReinit: {{ disable_replica_start_fail_reinit }}
PodAntiAffinity: {{ pod_anti_affinity }}
PodAntiAffinityPgBackRest: {{ pod_anti_affinity_pgbackrest }}
diff --git a/installers/olm/postgresoperator.csv.images.yaml b/installers/olm/postgresoperator.csv.images.yaml
index 21d1f3c10f..c8f3a386d6 100644
--- a/installers/olm/postgresoperator.csv.images.yaml
+++ b/installers/olm/postgresoperator.csv.images.yaml
@@ -10,7 +10,6 @@
- { name: RELATED_IMAGE_PGO_RMDATA, value: '${PGO_IMAGE_PREFIX}/pgo-rmdata:${PGO_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_POSTGRES_EXPORTER, value: '${PGO_IMAGE_PREFIX}/crunchy-postgres-exporter:${PGO_IMAGE_TAG}' }
-- { name: RELATED_IMAGE_CRUNCHY_ADMIN, value: '${CCP_IMAGE_PREFIX}/crunchy-admin:${CCP_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_PGADMIN, value: '${CCP_IMAGE_PREFIX}/crunchy-pgadmin4:${CCP_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_PGBADGER, value: '${CCP_IMAGE_PREFIX}/crunchy-pgbadger:${CCP_IMAGE_TAG}' }
- { name: RELATED_IMAGE_CRUNCHY_PGBOUNCER, value: '${CCP_IMAGE_PREFIX}/crunchy-pgbouncer:${CCP_IMAGE_TAG}' }
diff --git a/internal/config/images.go b/internal/config/images.go
index 3c7fdf4285..7ab595ed98 100644
--- a/internal/config/images.go
+++ b/internal/config/images.go
@@ -21,7 +21,6 @@ const (
CONTAINER_IMAGE_PGO_BACKREST_REPO = "crunchy-pgbackrest-repo"
CONTAINER_IMAGE_PGO_CLIENT = "pgo-client"
CONTAINER_IMAGE_PGO_RMDATA = "pgo-rmdata"
- CONTAINER_IMAGE_CRUNCHY_ADMIN = "crunchy-admin"
CONTAINER_IMAGE_CRUNCHY_POSTGRES_EXPORTER = "crunchy-postgres-exporter"
CONTAINER_IMAGE_CRUNCHY_GRAFANA = "crunchy-grafana"
CONTAINER_IMAGE_CRUNCHY_PGADMIN = "crunchy-pgadmin4"
@@ -42,7 +41,6 @@ var RelatedImageMap = map[string]string{
"RELATED_IMAGE_PGO_BACKREST_REPO": CONTAINER_IMAGE_PGO_BACKREST_REPO,
"RELATED_IMAGE_PGO_CLIENT": CONTAINER_IMAGE_PGO_CLIENT,
"RELATED_IMAGE_PGO_RMDATA": CONTAINER_IMAGE_PGO_RMDATA,
- "RELATED_IMAGE_CRUNCHY_ADMIN": CONTAINER_IMAGE_CRUNCHY_ADMIN,
"RELATED_IMAGE_CRUNCHY_POSTGRES_EXPORTER": CONTAINER_IMAGE_CRUNCHY_POSTGRES_EXPORTER,
"RELATED_IMAGE_CRUNCHY_PGADMIN": CONTAINER_IMAGE_CRUNCHY_PGADMIN,
"RELATED_IMAGE_CRUNCHY_PGBADGER": CONTAINER_IMAGE_CRUNCHY_PGBADGER,
diff --git a/internal/config/pgoconfig.go b/internal/config/pgoconfig.go
index c073d9c43f..6c870686a0 100644
--- a/internal/config/pgoconfig.go
+++ b/internal/config/pgoconfig.go
@@ -213,7 +213,6 @@ type ClusterStruct struct {
BackrestS3URIStyle string
BackrestS3VerifyTLS string
DisableAutofail bool
- EnableCrunchyadm bool
DisableReplicaStartFailReinit bool
PodAntiAffinity string
PodAntiAffinityPgBackRest string
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index 3af8b13729..2a779614d2 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -329,7 +329,6 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cl, cl.Labels[config.LABEL_BACKREST], cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
cl.Spec.Port, cl.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]),
PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(*cl, clientset, namespace),
- EnableCrunchyadm: operator.Pgo.Cluster.EnableCrunchyadm,
ReplicaReinitOnStartFail: !operator.Pgo.Cluster.DisableReplicaStartFailReinit,
SyncReplication: operator.GetSyncReplication(cl.Spec.SyncReplication),
Tablespaces: operator.GetTablespaceNames(cl.Spec.TablespaceMounts),
@@ -485,7 +484,6 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cluster, cluster.Labels[config.LABEL_BACKREST], replica.Spec.Name,
cluster.Spec.Port, cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]),
PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(*cluster, clientset, namespace),
- EnableCrunchyadm: operator.Pgo.Cluster.EnableCrunchyadm,
ReplicaReinitOnStartFail: !operator.Pgo.Cluster.DisableReplicaStartFailReinit,
SyncReplication: operator.GetSyncReplication(cluster.Spec.SyncReplication),
Tablespaces: operator.GetTablespaceNames(cluster.Spec.TablespaceMounts),
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index 776c264579..f96d5e5832 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -178,7 +178,6 @@ type DeploymentTemplateFields struct {
ScopeLabel string
Replicas string
IsInit bool
- EnableCrunchyadm bool
ReplicaReinitOnStartFail bool
PodAntiAffinity string
SyncReplication bool
@@ -978,15 +977,12 @@ func OverrideClusterContainerImages(containers []v1.Container) {
var containerImageName string
// there are a few images we need to check for:
// 1. "database" image, which is PostgreSQL or some flavor of it
- // 2. "crunchyadm" image, which helps with administration
- // 3. "exporter" image, which helps with monitoring
- // 4. "pgbadger" image, which helps with...pgbadger
+ // 2. "exporter" image, which helps with monitoring
+ // 3. "pgbadger" image, which helps with...pgbadger
switch container.Name {
case "exporter":
containerImageName = config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_EXPORTER
- case "crunchyadm":
- containerImageName = config.CONTAINER_IMAGE_CRUNCHY_ADMIN
case "database":
containerImageName = config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_HA
// one more step here...determine if this is GIS enabled
diff --git a/internal/operator/clusterutilities_test.go b/internal/operator/clusterutilities_test.go
index 7a3643f777..52e31aa66c 100644
--- a/internal/operator/clusterutilities_test.go
+++ b/internal/operator/clusterutilities_test.go
@@ -127,11 +127,10 @@ func TestOverrideClusterContainerImages(t *testing.T) {
name string
image string
}{
- "database": {name: "database", image: config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_HA},
- "crunchyadm": {name: "crunchyadm", image: config.CONTAINER_IMAGE_CRUNCHY_ADMIN},
- "exporter": {name: "exporter", image: config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_EXPORTER},
- "pgbadger": {name: "pgbadger", image: config.CONTAINER_IMAGE_CRUNCHY_PGBADGER},
- "future": {name: "future", image: "crunchy-future"},
+ "database": {name: "database", image: config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_HA},
+ "exporter": {name: "exporter", image: config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_EXPORTER},
+ "pgbadger": {name: "pgbadger", image: config.CONTAINER_IMAGE_CRUNCHY_PGBADGER},
+ "future": {name: "future", image: "crunchy-future"},
}
t.Run("no override", func(t *testing.T) {
diff --git a/pkg/apis/crunchydata.com/v1/common.go b/pkg/apis/crunchydata.com/v1/common.go
index 6bf14408dc..fcd2238f36 100644
--- a/pkg/apis/crunchydata.com/v1/common.go
+++ b/pkg/apis/crunchydata.com/v1/common.go
@@ -46,8 +46,8 @@ const StorageDynamic = "dynamic"
// the following are standard PostgreSQL user service accounts that are created
// as part of managed the PostgreSQL cluster environment via the Operator
const (
- // PGUserAdmin is a special user that can perform administrative actions
- // without being a superuser itself
+ // PGUserAdmin is a DEPRECATED user and is only included to filter this out
+ // as a system user in older systems
PGUserAdmin = "crunchyadm"
// PGUserMonitor is the monitoring user that can access metric data
PGUserMonitor = "ccp_monitoring"
diff --git a/testing/pgo_cli/cluster_restart_test.go b/testing/pgo_cli/cluster_restart_test.go
index 3f438b6570..9daeea644e 100644
--- a/testing/pgo_cli/cluster_restart_test.go
+++ b/testing/pgo_cli/cluster_restart_test.go
@@ -114,7 +114,7 @@ func TestRestart(t *testing.T) {
// now update a PG setting
updatePGConfigDCS(t, cluster(), namespace(),
- map[string]string{"unix_socket_directories": "/tmp,/crunchyadm,/tmp/e2e"})
+ map[string]string{"unix_socket_directories": "/tmp,/tmp/e2e"})
requiresRestartPrimaryReplica := func() bool {
output, err := pgo(restartQueryCMD...).Exec(t)
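After this patch, the container-name-to-image mapping in `OverrideClusterContainerImages` no longer has a `crunchyadm` case. A minimal sketch of the resulting lookup, using hypothetical image-name constants in place of the real ones in `internal/config/images.go`:

```go
package main

import "fmt"

// Hypothetical stand-ins for the image constants; the real values live in
// internal/config/images.go.
const (
	imagePostgresHA = "crunchy-postgres-ha"
	imageExporter   = "crunchy-postgres-exporter"
	imagePgBadger   = "crunchy-pgbadger"
)

// overrideImageName mirrors the post-patch switch: only the database,
// exporter, and pgbadger containers are recognized now that the crunchyadm
// case is gone. Unrecognized containers report ok=false and are left as-is.
func overrideImageName(containerName string) (image string, ok bool) {
	switch containerName {
	case "database":
		return imagePostgresHA, true
	case "exporter":
		return imageExporter, true
	case "pgbadger":
		return imagePgBadger, true
	}
	return "", false
}

func main() {
	fmt.Println(overrideImageName("database"))
	fmt.Println(overrideImageName("crunchyadm"))
}
```

A `crunchyadm` container in a legacy Deployment therefore falls through the switch untouched, which is consistent with the test-table change that drops the `crunchyadm` entry but keeps the `future` container mapped to itself.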
From 43aef7613b31268edc9267a42aed22102012260b Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 28 Dec 2020 10:28:05 -0500
Subject: [PATCH 077/276] Move TLS key generation to ECDSA
This moves the default TLS key generation for the API server
to use ECDSA keys with a P-256 curve and a SHA384 signature.
This only affects newly created Operator deployments, or
Operator deployments that have deleted their TLS secret.
---
deploy/gen-api-keys.sh | 32 ++++++-------------
.../roles/pgo-operator/tasks/certs.yml | 30 +++++------------
internal/apiserver/root.go | 4 +--
internal/tlsutil/primitives.go | 22 +++++++------
internal/tlsutil/primitives_test.go | 30 +----------------
5 files changed, 32 insertions(+), 86 deletions(-)
diff --git a/deploy/gen-api-keys.sh b/deploy/gen-api-keys.sh
index 8aece10000..15b310f85f 100755
--- a/deploy/gen-api-keys.sh
+++ b/deploy/gen-api-keys.sh
@@ -13,28 +13,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-#
# generate self signed cert for apiserver REST service
-#
-
openssl req \
--x509 \
--nodes \
--newkey rsa:2048 \
--keyout $PGOROOT/conf/postgres-operator/server.key \
--out $PGOROOT/conf/postgres-operator/server.crt \
--days 3650 \
--subj "/C=US/ST=Texas/L=Austin/O=TestOrg/OU=TestDepartment/CN=*"
-
-# generate CA
-#openssl genrsa -out $PGOROOT/conf/apiserver/rootCA.key 4096
-#openssl req -x509 -new -key $PGOROOT/conf/apiserver/rootCA.key -days 3650 -out $PGOROOT/conf/apiserver/rootCA.crt
-
-# generate cert for secure.domain.com signed with the created CA
-#openssl genrsa -out $PGOROOT/conf/apiserver/secure.domain.com.key 2048
-#openssl req -new -key $PGOROOT/conf/apiserver/secure.domain.com.key -out $PGOROOT/conf/apiserver/secure.domain.com.csr
-#In answer to question `Common Name (e.g. server FQDN or YOUR name) []:` you should set `secure.domain.com` (your real domain name)
-#openssl x509 -req -in $PGOROOT/conf/apiserver/secure.domain.com.csr -CA $PGOROOT/conf/apiserver/rootCA.crt -CAkey $PGOROOT/conf/apiserver/rootCA.key -CAcreateserial -days 365 -out $PGOROOT/conf/apiserver/secure.domain.com.crt
-
-#openssl genrsa 2048 > $PGOROOT/conf/apiserver/key.pem
-#openssl req -new -x509 -key $PGOROOT/conf/apiserver/key.pem -out $PGOROOT/conf/apiserver/cert.pem -days 1095
+ -x509 \
+ -nodes \
+ -newkey ec \
+ -pkeyopt ec_paramgen_curve:prime256v1 \
+ -sha384 \
+ -keyout $PGOROOT/conf/postgres-operator/server.key \
+ -out $PGOROOT/conf/postgres-operator/server.crt \
+ -days 3650 \
+ -subj "/CN=*"
diff --git a/installers/ansible/roles/pgo-operator/tasks/certs.yml b/installers/ansible/roles/pgo-operator/tasks/certs.yml
index 4c66e89892..07e3077eee 100644
--- a/installers/ansible/roles/pgo-operator/tasks/certs.yml
+++ b/installers/ansible/roles/pgo-operator/tasks/certs.yml
@@ -6,33 +6,19 @@
tags:
- install
-- name: Generate RSA Key
- command: openssl genrsa -out "{{ output_dir }}/server.key" 2048
- args:
- creates: "{{ output_dir }}/server.key"
- tags:
- - install
-
-- name: Generate CSR
- command: openssl req \
- -new \
- -subj '/C=US/ST=SC/L=Charleston/O=CrunchyData/CN=pg-operator' \
- -key "{{ output_dir }}/server.key" \
- -out "{{ output_dir }}/server.csr"
- args:
- creates: "{{ output_dir }}/server.csr"
- tags:
- - install
-
- name: Generate Self-signed Certificate
command: openssl req \
-x509 \
+ -nodes \
+ -newkey ec \
+ -pkeyopt ec_paramgen_curve:prime256v1 \
+ -sha384 \
-days 1825 \
- -key "{{ output_dir }}/server.key" \
- -in "{{ output_dir }}/server.csr" \
- -out "{{ output_dir }}/server.crt"
+ -subj "/CN=*" \
+ -keyout {{ output_dir }}/server.key \
+ -out {{ output_dir }}/server.crt
args:
- creates: "{{ output_dir }}/server.crt"
+ creates: "{{ output_dir }}/server.[kc][er][yt]"
tags:
- install
diff --git a/internal/apiserver/root.go b/internal/apiserver/root.go
index 769ee79ab7..bf68f1b870 100644
--- a/internal/apiserver/root.go
+++ b/internal/apiserver/root.go
@@ -17,7 +17,7 @@ limitations under the License.
import (
"context"
- "crypto/rsa"
+ "crypto/ecdsa"
"crypto/x509"
"errors"
"fmt"
@@ -438,7 +438,7 @@ func generateTLSCert(certPath, keyPath string) error {
var err error
// generate private key
- var privateKey *rsa.PrivateKey
+ var privateKey *ecdsa.PrivateKey
privateKey, err = tlsutil.NewPrivateKey()
if err != nil {
fmt.Println(err.Error())
diff --git a/internal/tlsutil/primitives.go b/internal/tlsutil/primitives.go
index 03fb73f744..2ed4881e8e 100644
--- a/internal/tlsutil/primitives.go
+++ b/internal/tlsutil/primitives.go
@@ -16,8 +16,9 @@ limitations under the License.
*/
import (
+ "crypto/ecdsa"
+ "crypto/elliptic"
"crypto/rand"
- "crypto/rsa"
"crypto/x509"
"encoding/pem"
"errors"
@@ -29,20 +30,20 @@ import (
)
const (
- rsaKeySize = 2048
duration365d = time.Hour * 24 * 365
)
-// newPrivateKey returns randomly generated RSA private key.
+// NewPrivateKey returns a randomly generated ECDSA private key.
-func NewPrivateKey() (*rsa.PrivateKey, error) {
- return rsa.GenerateKey(rand.Reader, rsaKeySize)
+func NewPrivateKey() (*ecdsa.PrivateKey, error) {
+ return ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
}
// encodePrivateKeyPEM encodes the given private key pem and returns bytes (base64).
-func EncodePrivateKeyPEM(key *rsa.PrivateKey) []byte {
+func EncodePrivateKeyPEM(key *ecdsa.PrivateKey) []byte {
+ raw, _ := x509.MarshalECPrivateKey(key)
return pem.EncodeToMemory(&pem.Block{
- Type: "RSA PRIVATE KEY",
- Bytes: x509.MarshalPKCS1PrivateKey(key),
+ Type: "EC PRIVATE KEY",
+ Bytes: raw,
})
}
@@ -64,17 +65,17 @@ func ParsePEMEncodedCert(pemdata []byte) (*x509.Certificate, error) {
}
// parsePEMEncodedPrivateKey parses a private key from given pemdata
-func ParsePEMEncodedPrivateKey(pemdata []byte) (*rsa.PrivateKey, error) {
+func ParsePEMEncodedPrivateKey(pemdata []byte) (*ecdsa.PrivateKey, error) {
decoded, _ := pem.Decode(pemdata)
if decoded == nil {
return nil, errors.New("no PEM data found")
}
- return x509.ParsePKCS1PrivateKey(decoded.Bytes)
+ return x509.ParseECPrivateKey(decoded.Bytes)
}
// newSelfSignedCACertificate returns a self-signed CA certificate based on given configuration and private key.
// The certificate has one-year lease.
-func NewSelfSignedCACertificate(key *rsa.PrivateKey) (*x509.Certificate, error) {
+func NewSelfSignedCACertificate(key *ecdsa.PrivateKey) (*x509.Certificate, error) {
serial, err := rand.Int(rand.Reader, new(big.Int).SetInt64(math.MaxInt64))
if err != nil {
return nil, err
@@ -87,6 +88,7 @@ func NewSelfSignedCACertificate(key *rsa.PrivateKey) (*x509.Certificate, error)
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
BasicConstraintsValid: true,
IsCA: true,
+ SignatureAlgorithm: x509.ECDSAWithSHA384,
}
certDERBytes, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, key.Public(), key)
if err != nil {
diff --git a/internal/tlsutil/primitives_test.go b/internal/tlsutil/primitives_test.go
index 1c6538f543..09d4dab6ce 100644
--- a/internal/tlsutil/primitives_test.go
+++ b/internal/tlsutil/primitives_test.go
@@ -18,7 +18,6 @@ limitations under the License.
import (
"bytes"
"context"
- "crypto/rsa"
"crypto/tls"
"crypto/x509"
"encoding/base64"
@@ -43,7 +42,7 @@ func TestKeyPEMSymmetry(t *testing.T) {
t.Log(base64.StdEncoding.EncodeToString(pemKey))
- if !keysEq(oldKey, newKey) {
+ if !(oldKey.Equal(newKey) && oldKey.PublicKey.Equal(newKey.Public())) {
t.Fatal("Decoded key did not match its input source")
}
}
@@ -145,30 +144,3 @@ func TestExtendedTrust(t *testing.T) {
t.Fatalf("expected [%s], got [%s] instead\n", expected, recv)
}
}
-
-func keysEq(a, b *rsa.PrivateKey) bool {
- if a.E != b.E {
- // PublicKey exponent different
- return false
- }
- if a.N.Cmp(b.N) != 0 {
- // PublicKey modulus different
- return false
- }
- if a.D.Cmp(b.D) != 0 {
- // PrivateKey exponent different
- return false
- }
- if len(a.Primes) != len(b.Primes) {
- // Prime factor difference (Tier 1)
- return false
- }
- for i, aPrime := range a.Primes {
- if aPrime.Cmp(b.Primes[i]) != 0 {
- // Prime factor difference (Tier 2)
- return false
- }
- }
-
- return true
-}
From 68e638446dd3e2265214f4b7771baa8d29327f30 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 24 Dec 2020 17:41:30 -0500
Subject: [PATCH 078/276] Rewrite `pgo scale` endpoint to accept POST requests
This rewrites the `pgo scale` API endpoint (/clusters/scale/{name})
to accept POST instead of GET requests, and moves all of the
parameters to the POST body. However, the "{name}" parameter in
the URL path still takes precedence as the cluster name.
This change allows for more complex parameters to be passed into
the scale request, as well as moves the request itself to something
that is closer to the REST convention.
---
cmd/pgo/api/scale.go | 38 ++----
cmd/pgo/cmd/scale.go | 29 ++--
.../apiserver/clusterservice/scaleimpl.go | 53 +++----
.../apiserver/clusterservice/scaleservice.go | 129 ++++++++----------
internal/apiserver/routing/routes.go | 2 +-
pkg/apiservermsgs/clustermsgs.go | 28 ++++
6 files changed, 141 insertions(+), 138 deletions(-)
diff --git a/cmd/pgo/api/scale.go b/cmd/pgo/api/scale.go
index 87eae783f3..574cbc8b4c 100644
--- a/cmd/pgo/api/scale.go
+++ b/cmd/pgo/api/scale.go
@@ -16,56 +16,48 @@ package api
*/
import (
+ "bytes"
"context"
"encoding/json"
"fmt"
"net/http"
- "strconv"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
)
-func ScaleCluster(httpclient *http.Client, arg string, ReplicaCount int,
- StorageConfig, NodeLabel, CCPImageTag, ServiceType string,
- SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ClusterScaleResponse, error) {
- var response msgs.ClusterScaleResponse
+func ScaleCluster(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials,
+ request msgs.ClusterScaleRequest) (msgs.ClusterScaleResponse, error) {
+ response := msgs.ClusterScaleResponse{}
+ ctx := context.TODO()
+ request.ClientVersion = msgs.PGO_VERSION
- url := fmt.Sprintf("%s/clusters/scale/%s", SessionCredentials.APIServerURL, arg)
- log.Debug(url)
+ url := fmt.Sprintf("%s/clusters/scale/%s", SessionCredentials.APIServerURL, request.Name)
+ jsonValue, _ := json.Marshal(request)
- ctx := context.TODO()
- action := "GET"
- req, err := http.NewRequestWithContext(ctx, action, url, nil)
+ req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewBuffer(jsonValue))
if err != nil {
return response, err
}
- q := req.URL.Query()
- q.Add("replica-count", strconv.Itoa(ReplicaCount))
- q.Add("storage-config", StorageConfig)
- q.Add("node-label", NodeLabel)
- q.Add("version", msgs.PGO_VERSION)
- q.Add("ccp-image-tag", CCPImageTag)
- q.Add("service-type", ServiceType)
- q.Add("namespace", ns)
- req.URL.RawQuery = q.Encode()
-
+ req.Header.Set("Content-Type", "application/json")
req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password)
resp, err := httpclient.Do(req)
if err != nil {
return response, err
}
+
defer resp.Body.Close()
+
log.Debugf("%v", resp)
- err = StatusCheck(resp)
- if err != nil {
+
+ if err := StatusCheck(resp); err != nil {
return response, err
}
if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
- log.Printf("%v\n", resp.Body)
+ log.Debugf("%+v", resp.Body)
fmt.Println("Error: ", err)
log.Println(err)
return response, err
diff --git a/cmd/pgo/cmd/scale.go b/cmd/pgo/cmd/scale.go
index 0fd9bbdb16..d74a38f035 100644
--- a/cmd/pgo/cmd/scale.go
+++ b/cmd/pgo/cmd/scale.go
@@ -58,29 +58,38 @@ func init() {
scaleCmd.Flags().StringVarP(&ServiceType, "service-type", "", "", "The service type to use in the replica Service. If not set, the default in pgo.yaml will be used.")
scaleCmd.Flags().StringVarP(&CCPImageTag, "ccp-image-tag", "", "", "The CCPImageTag to use for cluster creation. If specified, overrides the .pgo.yaml setting.")
+ scaleCmd.Flags().StringVarP(&NodeLabel, "node-label", "", "", "The node label (key) to use in placing the replica database. If not set, any node is used.")
scaleCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.")
scaleCmd.Flags().IntVarP(&ReplicaCount, "replica-count", "", 1, "The replica count to apply to the clusters.")
scaleCmd.Flags().StringVarP(&StorageConfig, "storage-config", "", "", "The name of a Storage config in pgo.yaml to use for the replica storage.")
- scaleCmd.Flags().StringVarP(&NodeLabel, "node-label", "", "", "The node label (key) to use in placing the replica database. If not set, any node is used.")
}
func scaleCluster(args []string, ns string) {
for _, arg := range args {
- log.Debugf(" %s ReplicaCount is %d", arg, ReplicaCount)
- response, err := api.ScaleCluster(httpclient, arg, ReplicaCount,
- StorageConfig, NodeLabel, CCPImageTag, ServiceType, &SessionCredentials, ns)
+ request := msgs.ClusterScaleRequest{
+ CCPImageTag: CCPImageTag,
+ Name: arg,
+ Namespace: ns,
+ NodeLabel: NodeLabel,
+ ReplicaCount: ReplicaCount,
+ ServiceType: ServiceType,
+ StorageConfig: StorageConfig,
+ }
+
+ response, err := api.ScaleCluster(httpclient, &SessionCredentials, request)
+
if err != nil {
fmt.Println("Error: " + err.Error())
- os.Exit(2)
+ os.Exit(1)
}
- if response.Status.Code == msgs.Ok {
- for _, v := range response.Results {
- fmt.Println(v)
- }
- } else {
+ if response.Status.Code != msgs.Ok {
fmt.Println("Error: " + response.Status.Msg)
+ os.Exit(1)
}
+ for _, v := range response.Results {
+ fmt.Println(v)
+ }
}
}
diff --git a/internal/apiserver/clusterservice/scaleimpl.go b/internal/apiserver/clusterservice/scaleimpl.go
index 00be04242e..b8dadef637 100644
--- a/internal/apiserver/clusterservice/scaleimpl.go
+++ b/internal/apiserver/clusterservice/scaleimpl.go
@@ -18,7 +18,6 @@ limitations under the License.
import (
"context"
"fmt"
- "strconv"
"strings"
"github.com/crunchydata/postgres-operator/internal/apiserver"
@@ -33,21 +32,21 @@ import (
)
// ScaleCluster ...
-func ScaleCluster(name, replicaCount, storageConfig, nodeLabel,
- ccpImageTag, serviceType, ns, pgouser string) msgs.ClusterScaleResponse {
+func ScaleCluster(request msgs.ClusterScaleRequest, pgouser string) msgs.ClusterScaleResponse {
ctx := context.TODO()
var err error
response := msgs.ClusterScaleResponse{}
response.Status = msgs.Status{Code: msgs.Ok, Msg: ""}
- if name == "all" {
+ if request.ReplicaCount < 1 {
+ log.Error("replica count less than 1, no replicas added")
response.Status.Code = msgs.Error
- response.Status.Msg = "all is not allowed for the scale command"
+ response.Status.Msg = "replica count must be at least 1"
return response
}
- cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(ctx, name, metav1.GetOptions{})
+ cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(request.Namespace).Get(ctx, request.Name, metav1.GetOptions{})
if kerrors.IsNotFound(err) {
log.Error("no clusters found")
@@ -77,24 +76,24 @@ func ScaleCluster(name, replicaCount, storageConfig, nodeLabel,
spec.ReplicaStorage = cluster.Spec.ReplicaStorage
// allow for user override
- if storageConfig != "" {
- spec.ReplicaStorage, _ = apiserver.Pgo.GetStorageSpec(storageConfig)
+ if request.StorageConfig != "" {
+ spec.ReplicaStorage, _ = apiserver.Pgo.GetStorageSpec(request.StorageConfig)
}
spec.UserLabels = cluster.Spec.UserLabels
- if ccpImageTag != "" {
- spec.UserLabels[config.LABEL_CCP_IMAGE_TAG_KEY] = ccpImageTag
+ if request.CCPImageTag != "" {
+ spec.UserLabels[config.LABEL_CCP_IMAGE_TAG_KEY] = request.CCPImageTag
}
- if serviceType != "" {
- if serviceType != config.DEFAULT_SERVICE_TYPE &&
- serviceType != config.NODEPORT_SERVICE_TYPE &&
- serviceType != config.LOAD_BALANCER_SERVICE_TYPE {
+ if request.ServiceType != "" {
+ if request.ServiceType != config.DEFAULT_SERVICE_TYPE &&
+ request.ServiceType != config.NODEPORT_SERVICE_TYPE &&
+ request.ServiceType != config.LOAD_BALANCER_SERVICE_TYPE {
response.Status.Code = msgs.Error
response.Status.Msg = "error --service-type should be either ClusterIP, NodePort, or LoadBalancer "
return response
}
- spec.UserLabels[config.LABEL_SERVICE_TYPE] = serviceType
+ spec.UserLabels[config.LABEL_SERVICE_TYPE] = request.ServiceType
}
// set replica node labels to blank to start with, then check for overrides
@@ -102,14 +101,14 @@ func ScaleCluster(name, replicaCount, storageConfig, nodeLabel,
spec.UserLabels[config.LABEL_NODE_LABEL_VALUE] = ""
// validate & parse nodeLabel if exists
- if nodeLabel != "" {
- if err = apiserver.ValidateNodeLabel(nodeLabel); err != nil {
+ if request.NodeLabel != "" {
+ if err = apiserver.ValidateNodeLabel(request.NodeLabel); err != nil {
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
return response
}
- parts := strings.Split(nodeLabel, "=")
+ parts := strings.Split(request.NodeLabel, "=")
spec.UserLabels[config.LABEL_NODE_LABEL_KEY] = parts[0]
spec.UserLabels[config.LABEL_NODE_LABEL_VALUE] = parts[1]
@@ -121,23 +120,13 @@ func ScaleCluster(name, replicaCount, storageConfig, nodeLabel,
spec.ClusterName = cluster.Spec.Name
- var rc int
- rc, err = strconv.Atoi(replicaCount)
- if err != nil {
- log.Error(err.Error())
- response.Status.Code = msgs.Error
- response.Status.Msg = err.Error()
- return response
- }
-
labels[config.LABEL_PGOUSER] = pgouser
labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER]
- for i := 0; i < rc; i++ {
-
+ for i := 0; i < request.ReplicaCount; i++ {
uniqueName := util.RandStringBytesRmndr(4)
labels[config.LABEL_NAME] = cluster.Spec.Name + "-" + uniqueName
- spec.Namespace = ns
+ spec.Namespace = cluster.Namespace
spec.Name = labels[config.LABEL_NAME]
newInstance := &crv1.Pgreplica{
@@ -152,8 +141,8 @@ func ScaleCluster(name, replicaCount, storageConfig, nodeLabel,
},
}
- _, err = apiserver.Clientset.CrunchydataV1().Pgreplicas(ns).Create(ctx, newInstance, metav1.CreateOptions{})
- if err != nil {
+ if _, err := apiserver.Clientset.CrunchydataV1().Pgreplicas(cluster.Namespace).Create(ctx,
+ newInstance, metav1.CreateOptions{}); err != nil {
log.Error(" in creating Pgreplica instance" + err.Error())
}
diff --git a/internal/apiserver/clusterservice/scaleservice.go b/internal/apiserver/clusterservice/scaleservice.go
index d0d54d6119..810eba4086 100644
--- a/internal/apiserver/clusterservice/scaleservice.go
+++ b/internal/apiserver/clusterservice/scaleservice.go
@@ -40,99 +40,84 @@ func ScaleClusterHandler(w http.ResponseWriter, r *http.Request) {
// produces:
// - application/json
// parameters:
- // - name: "name"
- // description: "Cluster Name"
- // in: "path"
- // type: "string"
- // required: true
- // - name: "version"
- // description: "Client Version"
- // in: "path"
- // type: "string"
- // required: true
- // - name: "namespace"
- // description: "Namespace"
- // in: "path"
- // type: "string"
- // required: true
- // - name: "replica-count"
- // description: "The replica count to apply to the clusters."
- // in: "path"
- // type: "int"
- // required: true
- // - name: "storage-config"
- // description: "The service type to use in the replica Service. If not set, the default in pgo.yaml will be used."
- // in: "path"
- // type: "string"
- // required: false
- // - name: "node-label"
- // description: "The node label (key) to use in placing the replica database. If not set, any node is used."
- // in: "path"
- // type: "string"
- // required: false
- // - name: "service-type"
- // description: "The service type to use in the replica Service. If not set, the default in pgo.yaml will be used."
- // in: "path"
- // type: "string"
- // required: false
- // - name: "ccp-image-tag"
- // description: "The CCPImageTag to use for cluster creation. If specified, overrides the .pgo.yaml setting."
- // in: "path"
- // type: "string"
- // required: false
+ // - name: "PostgreSQL Scale Cluster"
+ // in: "body"
+ // schema:
+ // "$ref": "#/definitions/ClusterScaleRequest"
// responses:
// '200':
// description: Output
// schema:
// "$ref": "#/definitions/ClusterScaleResponse"
- //SCALE_CLUSTER_PERM
- // This is a pain to document because it doesn't use a struct...
- var ns string
- vars := mux.Vars(r)
-
- clusterName := vars[config.LABEL_NAME]
- namespace := r.URL.Query().Get(config.LABEL_NAMESPACE)
- replicaCount := r.URL.Query().Get(config.LABEL_REPLICA_COUNT)
- storageConfig := r.URL.Query().Get(config.LABEL_STORAGE_CONFIG)
- nodeLabel := r.URL.Query().Get(config.LABEL_NODE_LABEL)
- serviceType := r.URL.Query().Get(config.LABEL_SERVICE_TYPE)
- clientVersion := r.URL.Query().Get(config.LABEL_VERSION)
- ccpImageTag := r.URL.Query().Get(config.LABEL_CCP_IMAGE_TAG_KEY)
-
- log.Debugf("ScaleClusterHandler parameters name [%s] namespace [%s] replica-count [%s] "+
- "storage-config [%s] node-label [%s] service-type [%s] version [%s]"+
- "ccp-image-tag [%s]", clusterName, namespace, replicaCount,
- storageConfig, nodeLabel, serviceType, clientVersion, ccpImageTag)
+ log.Debug("clusterservice.ScaleClusterHandler called")
+ // first, check that the requesting user is authorized to make this request
username, err := apiserver.Authn(apiserver.SCALE_CLUSTER_PERM, w, r)
if err != nil {
return
}
- w.WriteHeader(http.StatusOK)
+ // decode the request parameters
+ request := msgs.ClusterScaleRequest{}
+
+ if err := json.NewDecoder(r.Body).Decode(&request); err != nil {
+ _ = json.NewEncoder(w).Encode(msgs.ClusterScaleResponse{
+ Status: msgs.Status{
+ Code: msgs.Error,
+ Msg: err.Error(),
+ },
+ })
+ return
+ }
+
+ // set the response headers. Writing the HTTP status upfront is not ideal,
+ // but it preserves the existing behavior of this endpoint
+ w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`)
w.Header().Set("Content-Type", "application/json")
+ w.WriteHeader(http.StatusOK)
- resp := msgs.ClusterScaleResponse{}
- resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""}
+ // determine if this is the correct client version
+ if request.ClientVersion != msgs.PGO_VERSION {
+ _ = json.NewEncoder(w).Encode(msgs.ClusterScaleResponse{
+ Status: msgs.Status{
+ Code: msgs.Error,
+ Msg: apiserver.VERSION_MISMATCH_ERROR,
+ },
+ })
+ return
+ }
- if clientVersion != msgs.PGO_VERSION {
- resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR}
- _ = json.NewEncoder(w).Encode(resp)
+ // ensure that the user has access to this namespace. if not, error out
+ if _, err := apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace); err != nil {
+ _ = json.NewEncoder(w).Encode(msgs.ClusterScaleResponse{
+ Status: msgs.Status{
+ Code: msgs.Error,
+ Msg: err.Error(),
+ },
+ })
return
}
- ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace)
- if err != nil {
- resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()}
- _ = json.NewEncoder(w).Encode(resp)
+ // ensure that the cluster name is set in the URL; it takes precedence over
+ // any name set in the request body
+ vars := mux.Vars(r)
+ clusterName, ok := vars[config.LABEL_NAME]
+
+ if !ok {
+ _ = json.NewEncoder(w).Encode(msgs.ClusterScaleResponse{
+ Status: msgs.Status{
+ Code: msgs.Error,
+ Msg: "cluster name required in URL",
+ },
+ })
return
}
- // TODO too many params need to create a struct for this
- resp = ScaleCluster(clusterName, replicaCount, storageConfig, nodeLabel,
- ccpImageTag, serviceType, ns, username)
+ request.Name = clusterName
- _ = json.NewEncoder(w).Encode(resp)
+ response := ScaleCluster(request, username)
+
+ _ = json.NewEncoder(w).Encode(response)
}
// ScaleQueryHandler ...
diff --git a/internal/apiserver/routing/routes.go b/internal/apiserver/routing/routes.go
index 378651e12b..eb3f69c862 100644
--- a/internal/apiserver/routing/routes.go
+++ b/internal/apiserver/routing/routes.go
@@ -91,7 +91,7 @@ func RegisterClusterSvcRoutes(r *mux.Router) {
r.HandleFunc("/clustersdelete", clusterservice.DeleteClusterHandler).Methods("POST")
r.HandleFunc("/clustersupdate", clusterservice.UpdateClusterHandler).Methods("POST")
r.HandleFunc("/testclusters", clusterservice.TestClusterHandler).Methods("POST")
- r.HandleFunc("/clusters/scale/{name}", clusterservice.ScaleClusterHandler)
+ r.HandleFunc("/clusters/scale/{name}", clusterservice.ScaleClusterHandler).Methods("POST")
r.HandleFunc("/scale/{name}", clusterservice.ScaleQueryHandler).Methods("GET")
r.HandleFunc("/scaledown/{name}", clusterservice.ScaleDownHandler).Methods("GET")
}
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index 52b68909a2..b995d0ea13 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -533,6 +533,34 @@ type ScaleDownResponse struct {
Status
}
+// ClusterScaleRequest is the request body for scaling up the number of
+// instances in a cluster, replacing the legacy query-parameter interface
+// swagger:model
+type ClusterScaleRequest struct {
+ // CCPImageTag is the image tag to use for the new replicas. If this is not
+ // provided, it defaults to the image tag the cluster is currently using.
+ CCPImageTag string `json:"ccpImageTag"`
+ // ClientVersion is the version of the client that is being used
+ ClientVersion string `json:"clientVersion"`
+ // Name is the name of the cluster to scale. This is set by the value in the
+ // URL
+ Name string `json:"name"`
+ // Namespace is the namespace in which the queried cluster resides.
+ Namespace string `json:"namespace"`
+ // NodeLabel, if provided, is a node label (key=value) used when placing the
+ // new replicas.
+ NodeLabel string `json:"nodeLabel"`
+ // ReplicaCount is the number of replicas to add to the cluster. This is
+ // required and should be at least 1.
+ ReplicaCount int `json:"replicaCount"`
+ // ServiceType is the kind of Service to deploy with this instance. Defaults
+ // to the value on the cluster.
+ ServiceType string `json:"serviceType"`
+ // StorageConfig, if provided, specifies which of the storage configuration
+ // options should be used. Defaults to what the main cluster definition uses.
+ StorageConfig string `json:"storageConfig"`
+}
+
// ClusterScaleResponse ...
// swagger:model
type ClusterScaleResponse struct {
From 55ad25c3b43b74600122a9623a66c0f6de5ee522 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 24 Dec 2020 12:12:19 -0500
Subject: [PATCH 079/276] Add support for Pod Tolerations for pgcluster and
pgreplica
This patch introduces support for specifying Pod Tolerations both
at a cluster-wide level and for individual instances via the
pgreplica custom resource. Both of the custom resources can be
modified and have their changes reconciled across all managed
PostgreSQL instances.
The structure for adding Tolerations matches that of the
Kubernetes spec.
This also introduces the "pgo create cluster --toleration" and
"pgo scale --toleration" flags, which accept one or more
tolerations using the format:
"rule:Effect", e.g.
`--toleration=ssd:NoSchedule,zone=east:NoSchedule`
Issue: #2056
---
README.md | 8 +-
cmd/pgo/cmd/cluster.go | 79 ++++++++++++++++++-
cmd/pgo/cmd/create.go | 15 ++++
cmd/pgo/cmd/scale.go | 5 ++
docs/content/_index.md | 6 +-
.../architecture/high-availability/_index.md | 52 ++++++++++++
docs/content/custom-resources/_index.md | 4 +
.../reference/pgo_create_cluster.md | 5 +-
.../content/pgo-client/reference/pgo_scale.md | 7 +-
docs/content/tutorial/customize-cluster.md | 30 ++++++-
docs/content/tutorial/high-availability.md | 14 +++-
.../files/pgo-configs/cluster-deployment.json | 3 +
.../apiserver/clusterservice/clusterimpl.go | 3 +
.../apiserver/clusterservice/scaleimpl.go | 1 +
.../controller/manager/controllermanager.go | 2 +-
.../pgcluster/pgclustercontroller.go | 11 ++-
.../pgreplica/pgreplicacontroller.go | 72 ++++++++++++++---
.../controller/pgtask/pgtaskcontroller.go | 2 +-
internal/operator/cluster/cluster.go | 48 ++++++++++-
internal/operator/cluster/clusterlogic.go | 6 ++
internal/operator/cluster/exporter.go | 2 +-
internal/operator/cluster/rolling.go | 11 +--
internal/operator/clusterutilities.go | 22 ++++++
pkg/apis/crunchydata.com/v1/cluster.go | 4 +
pkg/apis/crunchydata.com/v1/replica.go | 4 +
.../v1/zz_generated.deepcopy.go | 14 ++++
pkg/apiservermsgs/clustermsgs.go | 6 ++
27 files changed, 397 insertions(+), 39 deletions(-)
diff --git a/README.md b/README.md
index c0607435f1..784fad181e 100644
--- a/README.md
+++ b/README.md
@@ -61,9 +61,9 @@ Create new clusters from your existing clusters or backups with [`pgo create clu
Use [pgBouncer][] for connection pooling
-#### Node Affinity
+#### Affinity and Tolerations
-Have your PostgreSQL clusters deployed to [Kubernetes Nodes][k8s-nodes] of your preference
+Have your PostgreSQL clusters deployed to [Kubernetes Nodes][k8s-nodes] of your preference with [node affinity][high-availability-node-affinity], or designate which nodes Kubernetes can schedule PostgreSQL instances to with Kubernetes [tolerations][high-availability-tolerations].
#### Scheduled Backups
@@ -99,7 +99,9 @@ The Crunchy PostgreSQL Operator makes it easy to get your own PostgreSQL-as-a-Se
[disaster-recovery-s3]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery/#using-s3
[disaster-recovery-scheduling]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery/#scheduling-backups
[high-availability]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/
+[high-availability-node-affinity]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/#node-affinity
[high-availability-sync]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/#synchronous-replication-guarding-against-transactions-loss
+[high-availability-tolerations]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/#tolerations
[monitoring]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/monitoring/
[multiple-cluster]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/multi-cluster-kubernetes/
[pgo-create-cluster]: https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_create_cluster/
@@ -111,7 +113,7 @@ The Crunchy PostgreSQL Operator makes it easy to get your own PostgreSQL-as-a-Se
[k8s-nodes]: https://kubernetes.io/docs/concepts/architecture/nodes/
[pgBackRest]: https://www.pgbackrest.org
-[pgBouncer]: https://access.crunchydata.com/documentation/pgbouncer/
+[pgBouncer]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/pgbouncer/
[pgMonitor]: https://github.com/CrunchyData/pgmonitor
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index 4dc16a0fdc..de7b895de4 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -21,13 +21,14 @@ import (
"os"
"strings"
+ log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
+ v1 "k8s.io/api/core/v1"
"github.com/crunchydata/postgres-operator/cmd/pgo/api"
"github.com/crunchydata/postgres-operator/cmd/pgo/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
- log "github.com/sirupsen/logrus"
)
// below are the tablespace parameters and the expected values of each
@@ -333,6 +334,8 @@ func createCluster(args []string, ns string, createClusterCmd *cobra.Command) {
// set any annotations
r.Annotations = getClusterAnnotations(Annotations, AnnotationsPostgres, AnnotationsBackrest,
AnnotationsPgBouncer)
+ // set any tolerations
+ r.Tolerations = getClusterTolerations(Tolerations)
// only set SyncReplication in the request if actually provided via the CLI
if createClusterCmd.Flag("sync-replication").Changed {
@@ -543,6 +546,80 @@ func getTablespaces(tablespaceParams []string) []msgs.ClusterTablespaceDetail {
return tablespaces
}
+// getClusterTolerations converts any Pod tolerations from their string form
+// to standard Toleration objects
+//
+// Each string takes the general form:
+//
+// rule:Effect
+//
+// Exists - key:Effect
+// Equals - key=value:Effect
+func getClusterTolerations(tolerationList []string) []v1.Toleration {
+ tolerations := make([]v1.Toleration, 0)
+
+ // if no tolerations, exit early
+ if len(tolerationList) == 0 {
+ return tolerations
+ }
+
+ // begin the joys of parsing
+ for _, t := range tolerationList {
+ toleration := v1.Toleration{}
+ ruleEffect := strings.Split(t, ":")
+
+ // if we don't have exactly two items, then error
+ if len(ruleEffect) != 2 {
+ fmt.Printf("invalid format for toleration: %q\n", t)
+ os.Exit(1)
+ }
+
+ // for ease of reading
+ rule, effect := ruleEffect[0], v1.TaintEffect(ruleEffect[1])
+
+ // see if the effect is a valid effect
+ if !isValidTaintEffect(effect) {
+ fmt.Printf("invalid taint effect for toleration: %q\n", effect)
+ os.Exit(1)
+ }
+
+ toleration.Effect = effect
+
+ // determine if the rule is an Exists or Equals operation
+ keyValue := strings.Split(rule, "=")
+
+ if len(keyValue) < 1 || len(keyValue) > 2 {
+ fmt.Printf("invalid rule for toleration: %q\n", rule)
+ os.Exit(1)
+ }
+
+ // no matter what we have a key
+ toleration.Key = keyValue[0]
+
+ // the following determine the operation to use for the toleration and if
+ // we should assign a value
+ if len(keyValue) == 1 {
+ toleration.Operator = v1.TolerationOpExists
+ } else {
+ toleration.Operator = v1.TolerationOpEqual
+ toleration.Value = keyValue[1]
+ }
+
+ // and append to the list of tolerations
+ tolerations = append(tolerations, toleration)
+ }
+
+ return tolerations
+}
+
+// isValidTaintEffect returns true if the effect passed in is a valid
+// TaintEffect, otherwise false
+func isValidTaintEffect(taintEffect v1.TaintEffect) bool {
+ return (taintEffect == v1.TaintEffectNoSchedule ||
+ taintEffect == v1.TaintEffectPreferNoSchedule ||
+ taintEffect == v1.TaintEffectNoExecute)
+}
+
// isTablespaceParam returns true if the parameter in question is acceptable for
// using with a tablespace.
func isTablespaceParam(param string) bool {
diff --git a/cmd/pgo/cmd/create.go b/cmd/pgo/cmd/create.go
index 2e6bf8c41e..aff3f53dca 100644
--- a/cmd/pgo/cmd/create.go
+++ b/cmd/pgo/cmd/create.go
@@ -142,6 +142,17 @@ var (
CASecret string
)
+// Tolerations is a collection of Pod tolerations that can be applied, which
+// use the following format for the different operations
+//
+// Exists - key:Effect
+// Equals - key=value:Effect
+//
+// Example:
+//
+// zone=east:NoSchedule,highspeed:NoSchedule
+var Tolerations []string
+
var CreateCmd = &cobra.Command{
Use: "create",
Short: "Create a Postgres Operator resource",
@@ -476,6 +487,10 @@ func init() {
"Enables synchronous replication for the cluster.")
createClusterCmd.Flags().BoolVar(&TLSOnly, "tls-only", false, "If true, forces all PostgreSQL connections to be over TLS. "+
"Must also set \"server-tls-secret\" and \"server-ca-secret\"")
+ createClusterCmd.Flags().StringSliceVar(&Tolerations, "toleration", []string{},
+ "Set Pod tolerations for each PostgreSQL instance in a cluster.\n"+
+ "The general format is \"key=value:Effect\"\n"+
+ "For example, to add an Exists and an Equals toleration: \"--toleration=ssd:NoSchedule,zone=east:NoSchedule\"")
createClusterCmd.Flags().BoolVarP(&Standby, "standby", "", false, "Creates a standby cluster "+
"that replicates from a pgBackRest repository in AWS S3.")
createClusterCmd.Flags().StringSliceVar(&Tablespaces, "tablespace", []string{},
diff --git a/cmd/pgo/cmd/scale.go b/cmd/pgo/cmd/scale.go
index d74a38f035..6352e91cd3 100644
--- a/cmd/pgo/cmd/scale.go
+++ b/cmd/pgo/cmd/scale.go
@@ -62,6 +62,10 @@ func init() {
scaleCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.")
scaleCmd.Flags().IntVarP(&ReplicaCount, "replica-count", "", 1, "The replica count to apply to the clusters.")
scaleCmd.Flags().StringVarP(&StorageConfig, "storage-config", "", "", "The name of a Storage config in pgo.yaml to use for the replica storage.")
+ scaleCmd.Flags().StringSliceVar(&Tolerations, "toleration", []string{},
+ "Set Pod tolerations for each PostgreSQL instance in a cluster.\n"+
+ "The general format is \"key=value:Effect\"\n"+
+ "For example, to add an Exists and an Equals toleration: \"--toleration=ssd:NoSchedule,zone=east:NoSchedule\"")
}
func scaleCluster(args []string, ns string) {
@@ -74,6 +78,7 @@ func scaleCluster(args []string, ns string) {
ReplicaCount: ReplicaCount,
ServiceType: ServiceType,
StorageConfig: StorageConfig,
+ Tolerations: getClusterTolerations(Tolerations),
}
response, err := api.ScaleCluster(httpclient, &SessionCredentials, request)
diff --git a/docs/content/_index.md b/docs/content/_index.md
index b879f3db2a..96f6807560 100644
--- a/docs/content/_index.md
+++ b/docs/content/_index.md
@@ -56,11 +56,11 @@ Create new clusters from your existing clusters or backups with [`pgo create clu
#### Connection Pooling
- Use [pgBouncer](https://access.crunchydata.com/documentation/pgbouncer/) for connection pooling
+ Use [pgBouncer]({{< relref "tutorial/pgbouncer.md" >}}) for connection pooling.
-#### Node Affinity
+#### Affinity and Tolerations
-Have your PostgreSQL clusters deployed to [Kubernetes Nodes](https://kubernetes.io/docs/concepts/architecture/nodes/) of your preference
+Have your PostgreSQL clusters deployed to [Kubernetes Nodes](https://kubernetes.io/docs/concepts/architecture/nodes/) of your preference with [node affinity]({{< relref "architecture/high-availability/_index.md">}}#node-affinity), or designate which nodes Kubernetes can schedule PostgreSQL instances to with [tolerations]({{< relref "architecture/high-availability/_index.md">}}#tolerations).
#### Scheduled Backups
diff --git a/docs/content/architecture/high-availability/_index.md b/docs/content/architecture/high-availability/_index.md
index b3dc97f290..3a5d79c806 100644
--- a/docs/content/architecture/high-availability/_index.md
+++ b/docs/content/architecture/high-availability/_index.md
@@ -277,6 +277,57 @@ is described in the Pod Anti-Affinity section above), so if a Pod cannot be
scheduled to a particular Node matching the label, it will be scheduled to a
different Node.
+## Tolerations
+
+Kubernetes [Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/)
+can help with the scheduling of Pods to appropriate nodes. There are many
+reasons that a Kubernetes administrator may want to use tolerations, such as
+restricting the types of Pods that can be assigned to particular Nodes.
+Reasoning and strategy for using taints and tolerations is outside the scope of
+this documentation.
+
+The PostgreSQL Operator supports the setting of tolerations across all
+PostgreSQL instances in a cluster, as well as for each particular PostgreSQL
+instance within a cluster. Both the [`pgo create cluster`]({{< relref "pgo-client/reference/pgo_create_cluster.md">}})
+and [`pgo scale`]({{< relref "pgo-client/reference/pgo_scale.md">}}) commands
+support the `--toleration` flag, which allows for one or more tolerations to be
+added to a PostgreSQL cluster. Values accepted by the `--toleration` flag use
+the following format:
+
+```
+rule:Effect
+```
+
+where a `rule` can represent existence (e.g. `key`) or equality (`key=value`)
+and `Effect` is one of `NoSchedule`, `PreferNoSchedule`, or `NoExecute`. For
+more information on how tolerations work, please refer to the
+[Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).
+
+For example, to add two tolerations to a new PostgreSQL cluster, one that is an
+existence toleration for a key of `ssd` and the other that is an equality
+toleration for a key/value pair of `zone`/`east`, you can run the following
+command:
+
+```
+pgo create cluster hippo \
+ --toleration=ssd:NoSchedule \
+ --toleration=zone=east:NoSchedule
+```
+
+For another example, to assign equality toleration for a key/value pair of
+`zone`/`west` to a new instance in the `hippo` cluster, you can run the
+following command:
+
+```
+pgo scale hippo --toleration=zone=west:NoSchedule
+```
+
+Tolerations can be updated on an existing cluster. To do so, you will need to
+modify the `pgclusters.crunchydata.com` and `pgreplicas.crunchydata.com` custom
+resources directly, e.g. via the `kubectl edit` command. Once the updates are
+applied, the PostgreSQL Operator will roll out the changes to the appropriate
+instances.
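+
+As a sketch of what this looks like on the custom resource (the field follows
+the standard Kubernetes Toleration serialization), the two tolerations from
+the example above are stored as:
+
+```
+spec:
+  tolerations:
+  - key: ssd
+    operator: Exists
+    effect: NoSchedule
+  - key: zone
+    operator: Equal
+    value: east
+    effect: NoSchedule
+```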
+
## Rolling Updates
During the lifecycle of a PostgreSQL cluster, there are certain events that may
@@ -332,3 +383,4 @@ modification to the custom resource:
- Custom annotation changes
- Enabling/disabling the monitoring sidecar on a PostgreSQL cluster (`--metrics`)
- Tablespace additions
+- Toleration modifications
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index 6a578bc242..b323911e18 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -201,6 +201,7 @@ spec:
replicationTLSSecret: ""
tlsSecret: ""
tlsOnly: false
+ tolerations: []
user: hippo
userlabels:
crunchy-postgres-exporter: "false"
@@ -392,6 +393,7 @@ spec:
replicationTLSSecret: ""
tlsSecret: ""
tlsOnly: false
+ tolerations: []
user: hippo
userlabels:
backrest-storage-type: "s3"
@@ -713,6 +715,7 @@ make changes, as described below.
| TablespaceMounts | `create`,`update` | Lists any tablespaces that are attached to the PostgreSQL cluster. Tablespaces can be added at a later time by updating the `TablespaceMounts` entry, but they cannot be removed. Stores a map of information, with the key being the name of the tablespace, and the value being a Storage Specification, defined below. |
| TLS | `create` | Defines the attributes for enabling TLS for a PostgreSQL cluster. See TLS Specification below. |
| TLSOnly | `create` | If set to true, requires client connections to use only TLS to connect to the PostgreSQL database. |
+| Tolerations | `create`,`update` | An array of Kubernetes [Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/). Please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for how to set this field. |
| Standby | `create`, `update` | If set to true, indicates that the PostgreSQL cluster is a "standby" cluster, i.e. is in read-only mode entirely. Please see [Kubernetes Multi-Cluster Deployments]({{< relref "/architecture/high-availability/multi-cluster-kubernetes.md" >}}) for more information. |
| Shutdown | `create`, `update` | If set to true, indicates that a PostgreSQL cluster should shutdown. If set to false, indicates that a PostgreSQL cluster should be up and running. |
@@ -826,3 +829,4 @@ cluster. All of the attributes only affect the replica when it is created.
| Namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
| ReplicaStorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section in the `pgclusters.crunchydata.com` description. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to that of `PrimaryStorage`. |
| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" for things that really should be modeled in the CRD. These values do get copied to the actually CR labels. If you want to set up metrics collection, you would specify `"crunchy-postgres-exporter": "true"` here. This also allows for node selector pinning using `NodeLabelKey` and `NodeLabelValue`. However, this structure does need to be set, so just follow whatever is in the example. |
+| Tolerations | `create`,`update` | An array of Kubernetes [Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/). Please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for how to set this field. |
diff --git a/docs/content/pgo-client/reference/pgo_create_cluster.md b/docs/content/pgo-client/reference/pgo_create_cluster.md
index 7a8661845d..efc7edc738 100644
--- a/docs/content/pgo-client/reference/pgo_create_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_create_cluster.md
@@ -109,6 +109,9 @@ pgo create cluster [flags]
--tablespace=name=ts1:storageconfig=nfsstorage:pvcsize=10Gi
--tls-only If true, forces all PostgreSQL connections to be over TLS. Must also set "server-tls-secret" and "server-ca-secret"
+ --toleration strings Set Pod tolerations for each PostgreSQL instance in a cluster.
+ The general format is "key=value:Effect"
+ For example, to add an Exists and an Equals toleration: "--toleration=ssd:NoSchedule,zone=east:NoSchedule"
-u, --username string The username to use for creating the PostgreSQL user with standard permissions. Defaults to the value in the PostgreSQL Operator configuration.
--wal-storage-config string The name of a storage configuration in pgo.yaml to use for PostgreSQL's write-ahead log (WAL).
--wal-storage-size string The size of the capacity for WAL storage, which overrides any value in the storage configuration. Follows the Kubernetes quantity format.
@@ -131,4 +134,4 @@ pgo create cluster [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 22-Nov-2020
+###### Auto generated by spf13/cobra on 24-Dec-2020
diff --git a/docs/content/pgo-client/reference/pgo_scale.md b/docs/content/pgo-client/reference/pgo_scale.md
index 684d506cc8..146645d08c 100644
--- a/docs/content/pgo-client/reference/pgo_scale.md
+++ b/docs/content/pgo-client/reference/pgo_scale.md
@@ -25,12 +25,15 @@ pgo scale [flags]
--replica-count int The replica count to apply to the clusters. (default 1)
--service-type string The service type to use in the replica Service. If not set, the default in pgo.yaml will be used.
--storage-config string The name of a Storage config in pgo.yaml to use for the replica storage.
+ --toleration strings Set Pod tolerations for each PostgreSQL instance in a cluster.
+ The general format is "key=value:Effect"
+ For example, to add an Exists and an Equals toleration: "--toleration=ssd:NoSchedule,zone=east:NoSchedule"
```
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -44,4 +47,4 @@ pgo scale [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 25-Dec-2020
diff --git a/docs/content/tutorial/customize-cluster.md b/docs/content/tutorial/customize-cluster.md
index 3b30d0f0f2..3cd1f6d374 100644
--- a/docs/content/tutorial/customize-cluster.md
+++ b/docs/content/tutorial/customize-cluster.md
@@ -7,7 +7,7 @@ weight: 130
The PostgreSQL Operator makes it very easy and quick to [create a cluster]({{< relref "tutorial/create-cluster.md" >}}), but there are possibly more customizations you want to make to your cluster. These include:
- Resource allocations (e.g. Memory, CPU, PVC size)
-- Sidecars (e.g. [Monitoring]({{< relref "architecture/monitoring.md" >}}), pgBouncer, [pgAdmin 4]({{< relref "architecture/pgadmin4.md" >}}))
+- Sidecars (e.g. [Monitoring]({{< relref "architecture/monitoring.md" >}}), [pgBouncer]({{< relref "tutorial/pgbouncer.md" >}}), [pgAdmin 4]({{< relref "architecture/pgadmin4.md" >}}))
- High Availability (e.g. adding replicas)
- Specifying specific PostgreSQL images (e.g. one with PostGIS)
- Specifying a [Pod anti-affinity and Node affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/)
@@ -136,6 +136,30 @@ pgo create cluster hippo --replica-count=1
You can scale up and down your PostgreSQL cluster with the [`pgo scale`]({{< relref "pgo-client/reference/pgo_scale.md" >}}) and [`pgo scaledown`]({{< relref "pgo-client/reference/pgo_scaledown.md" >}}) commands.
+## Set Tolerations for a PostgreSQL Cluster
+
+[Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) help with the scheduling of Pods to appropriate nodes. There are many reasons that a Kubernetes administrator may want to use tolerations, such as restricting the types of Pods that can be assigned to particular nodes.
+
+The PostgreSQL Operator supports adding tolerations to PostgreSQL instances using the `--toleration` flag. The format for adding a toleration is as such:
+
+```
+rule:Effect
+```
+
+where a `rule` can represent existence (e.g. `key`) or equality (`key=value`) and `Effect` is one of `NoSchedule`, `PreferNoSchedule`, or `NoExecute`. For more information on how tolerations work, please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).
+
+You can assign multiple tolerations to a PostgreSQL cluster.
+
+For example, to add two tolerations to a new PostgreSQL cluster, one that is an existence toleration for a key of `ssd` and the other that is an equality toleration for a key/value pair of `zone`/`east`, you can run the following command:
+
+```
+pgo create cluster hippo \
+ --toleration=ssd:NoSchedule \
+ --toleration=zone=east:NoSchedule
+```
+
+You can also add or edit tolerations directly on the `pgclusters.crunchydata.com` custom resource and the PostgreSQL Operator will roll out the changes to the appropriate instances.
+
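+On the custom resource, tolerations use the standard Kubernetes Toleration
+serialization. As a sketch, the two tolerations from the example above look
+like:
+
+```
+spec:
+  tolerations:
+  - key: ssd
+    operator: Exists
+    effect: NoSchedule
+  - key: zone
+    operator: Equal
+    value: east
+    effect: NoSchedule
+```
+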
## Customize PostgreSQL Configuration
PostgreSQL provides a lot of different knobs that can be used to fine tune the [configuration](https://www.postgresql.org/docs/current/runtime-config.html) for your workload. While you can [customize your PostgreSQL configuration]({{< relref "advanced/custom-configuration.md" >}}) after your cluster has been deployed, you may also want to load in your custom configuration during initialization.
@@ -213,6 +237,10 @@ has successfully started.
- The password for the `ccp_monitoring` user has changed. In this case you will
need to update the Secret with the monitoring credentials.
+### PostgreSQL Pod Not Scheduled to Nodes Matching Tolerations
+
+While Kubernetes [Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) allow for Pods to be scheduled to Nodes based on their taints, this does not mean that the Pod _will_ be assigned to those nodes. To provide Kubernetes scheduling guidance on where a Pod should be assigned, you must also use [Node Affinity]({{< relref "architecture/high-availability/_index.md" >}}#node-affinity).
+
## Next Steps
As mentioned at the beginning, there are a lot more customizations that you can make to your PostgreSQL cluster, and we will cover those as the tutorial progresses! This section was to get you familiar with some of the most common customizations, and to explore how many options `pgo create cluster` has!
diff --git a/docs/content/tutorial/high-availability.md b/docs/content/tutorial/high-availability.md
index 3a528ab456..b85a8469cc 100644
--- a/docs/content/tutorial/high-availability.md
+++ b/docs/content/tutorial/high-availability.md
@@ -94,7 +94,19 @@ Please understand the tradeoffs of synchronous replication before using it.
## Pod Anti-Affinity and Node Affinity
-To leran how to use pod anti-affinity and node affinity, please refer to the [high availability architecture documentation]({{< relref "architecture/high-availability/_index.md" >}})
+To learn how to use pod anti-affinity and node affinity, please refer to the [high availability architecture documentation]({{< relref "architecture/high-availability/_index.md" >}}).
+
+## Tolerations
+
+If you want to have a PostgreSQL instance use specific Kubernetes [tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/), you can use the `--toleration` flag on [`pgo scale`]({{< relref "pgo-client/reference/pgo_scale.md">}}). Any tolerations added to the new PostgreSQL instance replace, for that instance, any tolerations set on the cluster as a whole.
+
+For example, to assign equality toleration for a key/value pair of `zone`/`west`, you can run the following command:
+
+```
+pgo scale hippo --toleration=zone=west:NoSchedule
+```
+
+For more information on the PostgreSQL Operator and tolerations, please review the [high availability architecture documentation]({{< relref "architecture/high-availability/_index.md" >}}).
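+
+On the instance's `pgreplicas.crunchydata.com` custom resource, this toleration
+is stored using the standard Kubernetes Toleration serialization (a sketch):
+
+```
+spec:
+  tolerations:
+  - key: zone
+    operator: Equal
+    value: west
+    effect: NoSchedule
+```
+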
## Troubleshooting
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
index 33c02937c4..f5fb452849 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
@@ -34,6 +34,9 @@
"spec": {
"securityContext": {{.SecurityContext}},
"serviceAccountName": "pgo-pg",
+ {{ if .Tolerations }}
+ "tolerations": {{ .Tolerations }},
+ {{ end }}
"containers": [
{
"name": "database",
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 3745d1d220..21fc5f24ba 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -1490,6 +1490,9 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
setClusterAnnotationGroup(spec.Annotations.Backrest, request.Annotations.Backrest)
setClusterAnnotationGroup(spec.Annotations.PgBouncer, request.Annotations.PgBouncer)
+ // set any tolerations
+ spec.Tolerations = request.Tolerations
+
labels := make(map[string]string)
labels[config.LABEL_NAME] = name
if !request.AutofailFlag || apiserver.Pgo.Cluster.DisableAutofail {
diff --git a/internal/apiserver/clusterservice/scaleimpl.go b/internal/apiserver/clusterservice/scaleimpl.go
index b8dadef637..7713f33eb0 100644
--- a/internal/apiserver/clusterservice/scaleimpl.go
+++ b/internal/apiserver/clusterservice/scaleimpl.go
@@ -119,6 +119,7 @@ func ScaleCluster(request msgs.ClusterScaleRequest, pgouser string) msgs.Cluster
labels[config.LABEL_PG_CLUSTER] = cluster.Spec.Name
spec.ClusterName = cluster.Spec.Name
+ spec.Tolerations = request.Tolerations
labels[config.LABEL_PGOUSER] = pgouser
labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER]
diff --git a/internal/controller/manager/controllermanager.go b/internal/controller/manager/controllermanager.go
index 4a6ac9dc2c..165677b2f2 100644
--- a/internal/controller/manager/controllermanager.go
+++ b/internal/controller/manager/controllermanager.go
@@ -256,7 +256,7 @@ func (c *ControllerManager) addControllerGroup(namespace string) error {
}
pgReplicacontroller := &pgreplica.Controller{
- Clientset: client,
+ Client: client,
Queue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
Informer: pgoInformerFactory.Crunchydata().V1().Pgreplicas(),
PgreplicaWorkerCount: *c.pgoConfig.Pgo.PGReplicaWorkerCount,
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index 5c745fec7c..02789dce9b 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -178,7 +178,7 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
newcluster := newObj.(*crv1.Pgcluster)
// initialize a slice that may contain functions that need to be executed
// as part of a rolling update
- rollingUpdateFuncs := [](func(*crv1.Pgcluster, *appsv1.Deployment) error){}
+ rollingUpdateFuncs := [](func(kubeapi.Interface, *crv1.Pgcluster, *appsv1.Deployment) error){}
log.Debugf("pgcluster onUpdate for cluster %s (namespace %s)", newcluster.ObjectMeta.Namespace,
newcluster.ObjectMeta.Name)
@@ -305,6 +305,11 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
}
}
+ // check to see if any tolerations have been modified
+ if !reflect.DeepEqual(oldcluster.Spec.Tolerations, newcluster.Spec.Tolerations) {
+ rollingUpdateFuncs = append(rollingUpdateFuncs, clusteroperator.UpdateTolerations)
+ }
+
// if there is no need to perform a rolling update, exit here
if len(rollingUpdateFuncs) == 0 {
return
@@ -313,9 +318,9 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
// otherwise, create an anonymous function that executes each of the rolling
// update functions as part of the rolling update
if err := clusteroperator.RollingUpdate(c.Client, c.Client.Config, newcluster,
- func(cluster *crv1.Pgcluster, deployment *appsv1.Deployment) error {
+ func(clientset kubeapi.Interface, cluster *crv1.Pgcluster, deployment *appsv1.Deployment) error {
for _, fn := range rollingUpdateFuncs {
- if err := fn(cluster, deployment); err != nil {
+ if err := fn(clientset, cluster, deployment); err != nil {
return err
}
}
diff --git a/internal/controller/pgreplica/pgreplicacontroller.go b/internal/controller/pgreplica/pgreplicacontroller.go
index 91325b7066..79f8538100 100644
--- a/internal/controller/pgreplica/pgreplicacontroller.go
+++ b/internal/controller/pgreplica/pgreplicacontroller.go
@@ -18,15 +18,20 @@ limitations under the License.
import (
"context"
"encoding/json"
+ "reflect"
"strings"
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/kubeapi"
clusteroperator "github.com/crunchydata/postgres-operator/internal/operator/cluster"
+ "github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
informers "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/crunchydata.com/v1"
+
log "github.com/sirupsen/logrus"
+ v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
@@ -34,7 +39,7 @@ import (
// Controller holds the connections for the controller
type Controller struct {
- Clientset kubeapi.Interface
+ Client *kubeapi.Client
Queue workqueue.RateLimitingInterface
Informer informers.PgreplicaInformer
PgreplicaWorkerCount int
@@ -84,7 +89,7 @@ func (c *Controller) processNextItem() bool {
// in this case, the de-dupe logic is to test whether a replica
// deployment exists already , if so, then we don't create another
// backup job
- _, err := c.Clientset.
+ _, err := c.Client.
AppsV1().Deployments(keyNamespace).
Get(ctx, keyResourceName, metav1.GetOptions{})
@@ -97,7 +102,7 @@ func (c *Controller) processNextItem() bool {
// handle the case of when a pgreplica is added which is
// scaling up a cluster
- replica, err := c.Clientset.CrunchydataV1().Pgreplicas(keyNamespace).Get(ctx, keyResourceName, metav1.GetOptions{})
+ replica, err := c.Client.CrunchydataV1().Pgreplicas(keyNamespace).Get(ctx, keyResourceName, metav1.GetOptions{})
if err != nil {
log.Error(err)
c.Queue.Forget(key) // NB(cbandy): This should probably be a retry.
@@ -105,7 +110,7 @@ func (c *Controller) processNextItem() bool {
}
// get the pgcluster resource for the cluster the replica is a part of
- cluster, err := c.Clientset.CrunchydataV1().Pgclusters(keyNamespace).Get(ctx, replica.Spec.ClusterName, metav1.GetOptions{})
+ cluster, err := c.Client.CrunchydataV1().Pgclusters(keyNamespace).Get(ctx, replica.Spec.ClusterName, metav1.GetOptions{})
if err != nil {
log.Error(err)
c.Queue.Forget(key) // NB(cbandy): This should probably be a retry.
@@ -114,7 +119,7 @@ func (c *Controller) processNextItem() bool {
// only process pgreplica if cluster has been initialized
if cluster.Status.State == crv1.PgclusterStateInitialized {
- clusteroperator.ScaleBase(c.Clientset, replica, replica.ObjectMeta.Namespace)
+ clusteroperator.ScaleBase(c.Client, replica, replica.ObjectMeta.Namespace)
patch, err := json.Marshal(map[string]interface{}{
"status": crv1.PgreplicaStatus{
@@ -123,7 +128,7 @@ func (c *Controller) processNextItem() bool {
},
})
if err == nil {
- _, err = c.Clientset.CrunchydataV1().Pgreplicas(replica.Namespace).
+ _, err = c.Client.CrunchydataV1().Pgreplicas(replica.Namespace).
Patch(ctx, replica.Name, types.MergePatchType, patch, metav1.PatchOptions{})
}
if err != nil {
@@ -137,7 +142,7 @@ func (c *Controller) processNextItem() bool {
},
})
if err == nil {
- _, err = c.Clientset.CrunchydataV1().Pgreplicas(replica.Namespace).
+ _, err = c.Client.CrunchydataV1().Pgreplicas(replica.Namespace).
Patch(ctx, replica.Name, types.MergePatchType, patch, metav1.PatchOptions{})
}
if err != nil {
@@ -172,13 +177,14 @@ func (c *Controller) onAdd(obj interface{}) {
func (c *Controller) onUpdate(oldObj, newObj interface{}) {
ctx := context.TODO()
+ oldPgreplica := oldObj.(*crv1.Pgreplica)
newPgreplica := newObj.(*crv1.Pgreplica)
log.Debugf("[pgreplica Controller] onUpdate ns=%s %s", newPgreplica.ObjectMeta.Namespace,
newPgreplica.ObjectMeta.SelfLink)
// get the pgcluster resource for the cluster the replica is a part of
- cluster, err := c.Clientset.
+ cluster, err := c.Client.
CrunchydataV1().Pgclusters(newPgreplica.Namespace).
Get(ctx, newPgreplica.Spec.ClusterName, metav1.GetOptions{})
if err != nil {
@@ -188,7 +194,7 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
// only process pgreplica if cluster has been initialized
if cluster.Status.State == crv1.PgclusterStateInitialized && newPgreplica.Spec.Status != "complete" {
- clusteroperator.ScaleBase(c.Clientset, newPgreplica,
+ clusteroperator.ScaleBase(c.Client, newPgreplica,
newPgreplica.ObjectMeta.Namespace)
patch, err := json.Marshal(map[string]interface{}{
@@ -198,13 +204,55 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
},
})
if err == nil {
- _, err = c.Clientset.CrunchydataV1().Pgreplicas(newPgreplica.Namespace).
+ _, err = c.Client.CrunchydataV1().Pgreplicas(newPgreplica.Namespace).
Patch(ctx, newPgreplica.Name, types.MergePatchType, patch, metav1.PatchOptions{})
}
if err != nil {
log.Errorf("ERROR updating pgreplica status: %s", err.Error())
}
}
+
+	// if the tolerations array changed, update the tolerations on the instance
+ if !reflect.DeepEqual(oldPgreplica.Spec.Tolerations, newPgreplica.Spec.Tolerations) {
+ // get the Deployment object associated with this instance
+ deployment, err := c.Client.AppsV1().Deployments(newPgreplica.Namespace).Get(ctx,
+ newPgreplica.Name, metav1.GetOptions{})
+
+ if err != nil {
+ log.Errorf("could not find instance for pgreplica: %q", err.Error())
+ return
+ }
+
+ // determine the current Pod -- this is required to stop the instance
+ pods, err := c.Client.CoreV1().Pods(deployment.Namespace).List(ctx, metav1.ListOptions{
+ FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(),
+ LabelSelector: fields.OneTermEqualSelector(config.LABEL_DEPLOYMENT_NAME, deployment.Name).String(),
+ })
+
+ // Even if there are errors with the Pods, we will continue on updating the
+ // Deployment
+ if err != nil {
+ log.Warn(err)
+ } else if len(pods.Items) == 0 {
+ log.Infof("not shutting down PostgreSQL instance [%s] as the Pod cannot be found", deployment.Name)
+ } else {
+ // get the first pod off the items list
+ pod := pods.Items[0]
+
+ // we want to stop PostgreSQL on this instance to ensure all transactions
+ // are safely flushed before we restart
+ if err := util.StopPostgreSQLInstance(c.Client, c.Client.Config, &pod, deployment.Name); err != nil {
+ log.Warn(err)
+ }
+ }
+
+ // apply the tolerations and update the Deployment
+ deployment.Spec.Template.Spec.Tolerations = newPgreplica.Spec.Tolerations
+
+ if _, err := c.Client.AppsV1().Deployments(deployment.Namespace).Update(ctx, deployment, metav1.UpdateOptions{}); err != nil {
+ log.Errorf("could not update deployment for pgreplica update: %q", err.Error())
+ }
+ }
}
// onDelete is called when a pgreplica is deleted
@@ -215,7 +263,7 @@ func (c *Controller) onDelete(obj interface{}) {
// make sure we are not removing a replica deployment
// that is now the primary after a failover
- dep, err := c.Clientset.
+ dep, err := c.Client.
AppsV1().Deployments(replica.ObjectMeta.Namespace).
Get(ctx, replica.Spec.Name, metav1.GetOptions{})
if err == nil {
@@ -224,7 +272,7 @@ func (c *Controller) onDelete(obj interface{}) {
// we will not scale down the deployment
log.Debugf("[pgreplica Controller] OnDelete not scaling down the replica since it is acting as a primary")
} else {
- clusteroperator.ScaleDownBase(c.Clientset, replica, replica.ObjectMeta.Namespace)
+ clusteroperator.ScaleDownBase(c.Client, replica, replica.ObjectMeta.Namespace)
}
}
}
diff --git a/internal/controller/pgtask/pgtaskcontroller.go b/internal/controller/pgtask/pgtaskcontroller.go
index 788de4606a..4e3f041a99 100644
--- a/internal/controller/pgtask/pgtaskcontroller.go
+++ b/internal/controller/pgtask/pgtaskcontroller.go
@@ -137,7 +137,7 @@ func (c *Controller) processNextItem() bool {
if cluster, err := c.Client.CrunchydataV1().Pgclusters(tmpTask.Namespace).
Get(ctx, clusterName, metav1.GetOptions{}); err == nil {
if err := clusteroperator.RollingUpdate(c.Client, c.Client.Config, cluster,
- func(*crv1.Pgcluster, *appsv1.Deployment) error { return nil }); err != nil {
+ func(kubeapi.Interface, *crv1.Pgcluster, *appsv1.Deployment) error { return nil }); err != nil {
log.Errorf("rolling update failed: %q", err.Error())
}
} else {
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 464e1bd28e..e01b3a837b 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -39,6 +39,7 @@ import (
v1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
@@ -470,7 +471,7 @@ func ScaleDownBase(clientset kubeapi.Interface, replica *crv1.Pgreplica, namespa
// UpdateAnnotations updates the annotations in the "template" portion of a
// PostgreSQL deployment
-func UpdateAnnotations(cluster *crv1.Pgcluster, deployment *apps_v1.Deployment) error {
+func UpdateAnnotations(clientset kubeapi.Interface, cluster *crv1.Pgcluster, deployment *apps_v1.Deployment) error {
log.Debugf("update annotations on [%s]", deployment.Name)
annotations := map[string]string{}
@@ -494,7 +495,7 @@ func UpdateAnnotations(cluster *crv1.Pgcluster, deployment *apps_v1.Deployment)
// UpdateResources updates the PostgreSQL instance Deployments to reflect the
// update resources (i.e. CPU, memory)
-func UpdateResources(cluster *crv1.Pgcluster, deployment *apps_v1.Deployment) error {
+func UpdateResources(clientset kubeapi.Interface, cluster *crv1.Pgcluster, deployment *apps_v1.Deployment) error {
// iterate through each PostgreSQL instance deployment and update the
// resource values for the database or exporter containers
for index, container := range deployment.Spec.Template.Spec.Containers {
@@ -534,7 +535,7 @@ func UpdateResources(cluster *crv1.Pgcluster, deployment *apps_v1.Deployment) er
// UpdateTablespaces updates the PostgreSQL instance Deployments to update
// what tablespaces are mounted.
-func UpdateTablespaces(cluster *crv1.Pgcluster, deployment *apps_v1.Deployment) error {
+func UpdateTablespaces(clientset kubeapi.Interface, cluster *crv1.Pgcluster, deployment *apps_v1.Deployment) error {
// update the volume portion of the Deployment spec to reflect all of the
// available tablespaces
for tablespaceName, storageSpec := range cluster.Spec.TablespaceMounts {
@@ -610,6 +611,42 @@ func UpdateTablespaces(cluster *crv1.Pgcluster, deployment *apps_v1.Deployment)
return nil
}
+// UpdateTolerations updates the Toleration definition for a Deployment.
+// However, we have to check if the Deployment is based on a pgreplica Spec --
+// if it is, we need to determine if there are any instance-specific
+// tolerations defined on that spec.
+func UpdateTolerations(clientset kubeapi.Interface, cluster *crv1.Pgcluster, deployment *apps_v1.Deployment) error {
+ ctx := context.TODO()
+
+ // determine if this instance is based on the pgcluster or a pgreplica. if
+ // it is based on the pgcluster, we can apply the tolerations and exit early
+ if deployment.Name == cluster.Name {
+ deployment.Spec.Template.Spec.Tolerations = cluster.Spec.Tolerations
+ return nil
+ }
+
+ // ok, so this is based on a pgreplica. Let's try to find it.
+ instance, err := clientset.CrunchydataV1().Pgreplicas(cluster.Namespace).Get(ctx, deployment.Name, metav1.GetOptions{})
+
+ // if we error, log it and return, as this error will interrupt a rolling update
+ if err != nil {
+ log.Error(err)
+ return err
+ }
+
+ // if the instance does have specific tolerations, exit here as we do not
+ // want to override them
+ if len(instance.Spec.Tolerations) != 0 {
+ return nil
+ }
+
+ // otherwise, the tolerations set on the cluster instance are available to
+ // all instances, so set the value and return
+ deployment.Spec.Template.Spec.Tolerations = cluster.Spec.Tolerations
+
+ return nil
+}
+
// annotateBackrestSecret annotates the pgBackRest repository secret with relevant cluster
// configuration as needed to support bootstrapping from the repository after the cluster
// has been deleted
@@ -726,7 +763,10 @@ func stopPostgreSQLInstance(clientset kubernetes.Interface, restConfig *rest.Con
	// First, attempt to get the PostgreSQL instance Pod attached to this
// particular deployment
selector := fmt.Sprintf("%s=%s", config.LABEL_DEPLOYMENT_NAME, deployment.Name)
- pods, err := clientset.CoreV1().Pods(deployment.Namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
+ pods, err := clientset.CoreV1().Pods(deployment.Namespace).List(ctx, metav1.ListOptions{
+ FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(),
+ LabelSelector: selector,
+ })
// if there is a bona fide error, return.
// However, if no Pods are found, issue a warning, but do not return an error
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index 2a779614d2..2eca688f6a 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -340,6 +340,7 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
ReplicationTLSSecret: cl.Spec.TLS.ReplicationTLSSecret,
CASecret: cl.Spec.TLS.CASecret,
Standby: cl.Spec.Standby,
+ Tolerations: operator.GetTolerations(cl.Spec.Tolerations),
}
return deploymentFields
@@ -494,6 +495,11 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
TLSSecret: cluster.Spec.TLS.TLSSecret,
ReplicationTLSSecret: cluster.Spec.TLS.ReplicationTLSSecret,
CASecret: cluster.Spec.TLS.CASecret,
+ // Give precedence to the tolerations defined on the replica spec, otherwise
+ // take any tolerations defined on the cluster spec
+ Tolerations: util.GetValueOrDefault(
+ operator.GetTolerations(replica.Spec.Tolerations),
+ operator.GetTolerations(cluster.Spec.Tolerations)),
}
switch replica.Spec.ReplicaStorage.StorageType {
diff --git a/internal/operator/cluster/exporter.go b/internal/operator/cluster/exporter.go
index 1da55df006..e02fdf6bfe 100644
--- a/internal/operator/cluster/exporter.go
+++ b/internal/operator/cluster/exporter.go
@@ -265,7 +265,7 @@ func RotateExporterPassword(clientset kubernetes.Interface, restconfig *rest.Con
// UpdateExporterSidecar either adds or removes the metrics sidecar from the
// cluster. This is meant to be used as a rolling update callback function
-func UpdateExporterSidecar(cluster *crv1.Pgcluster, deployment *appsv1.Deployment) error {
+func UpdateExporterSidecar(clientset kubeapi.Interface, cluster *crv1.Pgcluster, deployment *appsv1.Deployment) error {
// need to determine if we are adding or removing
if cluster.Spec.Exporter {
return addExporterSidecar(cluster, deployment)
diff --git a/internal/operator/cluster/rolling.go b/internal/operator/cluster/rolling.go
index feb2df24b1..2860db5fbd 100644
--- a/internal/operator/cluster/rolling.go
+++ b/internal/operator/cluster/rolling.go
@@ -26,6 +26,7 @@ import (
"github.com/crunchydata/postgres-operator/internal/operator"
"github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+
log "github.com/sirupsen/logrus"
appsv1 "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1"
@@ -68,8 +69,8 @@ const (
// Erroring during this process can be fun. If an error occurs within the middle
// of a rolling update, in order to avoid placing the cluster in an
// indeterminate state, most errors are just logged for later troubleshooting
-func RollingUpdate(clientset kubernetes.Interface, restConfig *rest.Config, cluster *crv1.Pgcluster,
- updateFunc func(*crv1.Pgcluster, *appsv1.Deployment) error) error {
+func RollingUpdate(clientset kubeapi.Interface, restConfig *rest.Config, cluster *crv1.Pgcluster,
+ updateFunc func(kubeapi.Interface, *crv1.Pgcluster, *appsv1.Deployment) error) error {
log.Debugf("rolling update for cluster %q", cluster.Name)
// we need to determine which deployments are replicas and which is the
@@ -147,13 +148,13 @@ func RollingUpdate(clientset kubernetes.Interface, restConfig *rest.Config, clus
// instance. It first ensures that the update can be applied. If it can, it will
// safely turn off the PostgreSQL instance before modifying the Deployment
// template.
-func applyUpdateToPostgresInstance(clientset kubernetes.Interface, restConfig *rest.Config,
+func applyUpdateToPostgresInstance(clientset kubeapi.Interface, restConfig *rest.Config,
cluster *crv1.Pgcluster, deployment appsv1.Deployment,
- updateFunc func(*crv1.Pgcluster, *appsv1.Deployment) error) error {
+ updateFunc func(kubeapi.Interface, *crv1.Pgcluster, *appsv1.Deployment) error) error {
ctx := context.TODO()
// apply any updates, if they cannot be applied, then return an error here
- if err := updateFunc(cluster, &deployment); err != nil {
+ if err := updateFunc(clientset, cluster, &deployment); err != nil {
return err
}
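The hunk above threads the clientset through the rolling-update callback so each update can look up related resources (such as a pgreplica) before mutating the Deployment. A minimal sketch of that callback pattern, using simplified stand-in types (`Clientset`, `Cluster`, `Deployment`) in place of the real `kubeapi.Interface`, `*crv1.Pgcluster`, and `*appsv1.Deployment`:

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the real Kubernetes and CRD types.
type Clientset struct{ name string }
type Cluster struct{ Name string }
type Deployment struct {
	Name        string
	Tolerations string
}

// UpdateFunc mirrors the callback signature introduced in the patch: each
// update receives the clientset so it can consult other resources before
// modifying the Deployment template.
type UpdateFunc func(*Clientset, *Cluster, *Deployment) error

// rollingUpdate applies updateFunc to each deployment in turn. Matching the
// operator's stated behavior, an error on one deployment is collected and the
// update continues, to avoid leaving the cluster in an indeterminate state.
func rollingUpdate(c *Clientset, cluster *Cluster, deployments []*Deployment, updateFunc UpdateFunc) []error {
	var errs []error
	for _, d := range deployments {
		if err := updateFunc(c, cluster, d); err != nil {
			errs = append(errs, fmt.Errorf("deployment %q: %w", d.Name, err))
			continue
		}
	}
	return errs
}

func main() {
	cluster := &Cluster{Name: "hippo"}
	deployments := []*Deployment{{Name: "hippo"}, {Name: "hippo-replica"}}

	// An example update callback that needs the clientset.
	setTolerations := func(c *Clientset, cl *Cluster, d *Deployment) error {
		if c == nil {
			return errors.New("clientset required")
		}
		d.Tolerations = `[{"key":"zone","operator":"Exists"}]`
		return nil
	}

	errs := rollingUpdate(&Clientset{name: "demo"}, cluster, deployments, setTolerations)
	fmt.Println(len(errs), deployments[0].Tolerations != "")
}
```

Passing the clientset explicitly (rather than capturing it in a closure) keeps every update callback uniform, which is what lets `applyUpdateToPostgresInstance` stay generic.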
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index f96d5e5832..163850861e 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -189,6 +189,9 @@ type DeploymentTemplateFields struct {
Tablespaces string
TablespaceVolumes string
TablespaceVolumeMounts string
+ // Tolerations is an optional parameter that provides Pod tolerations that
+ // have been transformed into JSON encoding from an actual Tolerations object
+ Tolerations string
// The following fields set the TLS requirements as well as provide
// information on how to configure TLS in a PostgreSQL cluster
// TLSEnabled enables TLS in a cluster if set to true. Only works in actuality
@@ -967,6 +970,25 @@ func GetSyncReplication(specSyncReplication *bool) bool {
return false
}
+// GetTolerations returns any tolerations that may be defined in a tolerations
+// list, encoded in JSON. Otherwise, it returns an empty string
+func GetTolerations(tolerations []v1.Toleration) string {
+ // if no tolerations, exit early
+ if len(tolerations) == 0 {
+ return ""
+ }
+
+ // turn into a JSON string
+ s, err := json.MarshalIndent(tolerations, "", " ")
+
+ if err != nil {
+ log.Errorf("%s: returning empty string", err.Error())
+ return ""
+ }
+
+ return string(s)
+}
+
// OverrideClusterContainerImages is a helper function that provides the
// appropriate hooks to override any of the container images that might be
// deployed with a PostgreSQL cluster
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index e2180e90dc..c487bc81b8 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -132,6 +132,10 @@ type PgclusterSpec struct {
// Annotations contains a set of Deployment (and by association, Pod)
// annotations that are propagated to all managed Deployments
Annotations ClusterAnnotations `json:"annotations"`
+
+ // Tolerations are an optional list of Pod toleration rules that are applied
+ // to the PostgreSQL instance.
+ Tolerations []v1.Toleration `json:"tolerations"`
}
// ClusterAnnotations provides a set of annotations that can be propagated to
diff --git a/pkg/apis/crunchydata.com/v1/replica.go b/pkg/apis/crunchydata.com/v1/replica.go
index 386fa033d0..1bfba208fe 100644
--- a/pkg/apis/crunchydata.com/v1/replica.go
+++ b/pkg/apis/crunchydata.com/v1/replica.go
@@ -16,6 +16,7 @@ package v1
*/
import (
+ v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -42,6 +43,9 @@ type PgreplicaSpec struct {
ReplicaStorage PgStorageSpec `json:"replicastorage"`
Status string `json:"status"`
UserLabels map[string]string `json:"userlabels"`
+ // Tolerations are an optional list of Pod toleration rules that are applied
+ // to the PostgreSQL instance.
+ Tolerations []v1.Toleration `json:"tolerations"`
}
// PgreplicaList ...
diff --git a/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go b/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go
index 69a3a673f5..3cef8c84f5 100644
--- a/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go
+++ b/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go
@@ -271,6 +271,13 @@ func (in *PgclusterSpec) DeepCopyInto(out *PgclusterSpec) {
out.TLS = in.TLS
out.PGDataSource = in.PGDataSource
in.Annotations.DeepCopyInto(&out.Annotations)
+ if in.Tolerations != nil {
+ in, out := &in.Tolerations, &out.Tolerations
+ *out = make([]corev1.Toleration, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
return
}
@@ -465,6 +472,13 @@ func (in *PgreplicaSpec) DeepCopyInto(out *PgreplicaSpec) {
(*out)[key] = val
}
}
+ if in.Tolerations != nil {
+ in, out := &in.Tolerations, &out.Tolerations
+ *out = make([]corev1.Toleration, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
return
}
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index b995d0ea13..53258b36e6 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -17,6 +17,8 @@ limitations under the License.
import (
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+
+ v1 "k8s.io/api/core/v1"
)
// ShowClusterRequest shows cluster
@@ -202,6 +204,8 @@ type CreateClusterRequest struct {
PGDataSource crv1.PGDataSourceSpec
// Annotations provide any custom annotations for a cluster
Annotations crv1.ClusterAnnotations `json:"annotations"`
+ // Tolerations allows for the setting of Pod tolerations on Postgres instances
+ Tolerations []v1.Toleration `json:"tolerations"`
}
// CreateClusterDetail provides details about the PostgreSQL cluster that is
@@ -559,6 +563,8 @@ type ClusterScaleRequest struct {
// StorageConfig, if provided, specifies which of the storage configuration
// options should be used. Defaults to what the main cluster definition uses.
StorageConfig string `json:"storageConfig"`
+ // Tolerations allows for the setting of Pod tolerations on Postgres instances
+ Tolerations []v1.Toleration `json:"tolerations"`
}
// ClusterScaleResponse ...
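The `GetTolerations` helper added in this patch JSON-encodes the tolerations so the result can be dropped into the Deployment template as a string. A minimal sketch of that encoding, using a locally defined `Toleration` struct in place of `k8s.io/api/core/v1.Toleration` (the real helper marshals the actual API type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Toleration mirrors the core v1.Toleration fields this sketch needs.
type Toleration struct {
	Key      string `json:"key,omitempty"`
	Operator string `json:"operator,omitempty"`
	Value    string `json:"value,omitempty"`
	Effect   string `json:"effect,omitempty"`
}

// getTolerations returns the tolerations as indented JSON, or an empty
// string when there are none, matching the behavior of the operator helper.
func getTolerations(tolerations []Toleration) string {
	// if no tolerations, exit early
	if len(tolerations) == 0 {
		return ""
	}

	s, err := json.MarshalIndent(tolerations, "", "  ")
	if err != nil {
		// on a marshaling error the helper falls back to an empty string
		return ""
	}

	return string(s)
}

func main() {
	fmt.Println(getTolerations(nil) == "")
	fmt.Println(getTolerations([]Toleration{
		{Key: "zone", Operator: "Equal", Value: "east", Effect: "NoSchedule"},
	}))
}
```

Returning `""` for the empty case is what allows `util.GetValueOrDefault` in `scaleReplicaCreateDeployment` to fall back from replica-level to cluster-level tolerations.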
From 2249079a450d21b1062b2dd0bcf6826188ea76e0 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 27 Dec 2020 14:32:08 -0500
Subject: [PATCH 080/276] Allow for PostgreSQL user Secrets to be generated by
Operator
Previously, the PostgreSQL user Secrets required for bootstrap
(superuser, replication user, standard user) needed to be present
before adding a custom resource. Now, the PostgreSQL Operator can
create the missing Secrets before bootstrapping a cluster.
The API server behavior does not change, as the API server can
both create the user credentials and return them to the end user.
This also modifies the examples to not include a requirement to
generate the replication user credential. As this is really a
managed service credential, there is no reason to put this onus
on the user. The superuser credential is still included in the
examples, as there are certain situations where one should log
in as the superuser.
---
docs/content/custom-resources/_index.md | 141 ++++++++----------
.../primaryuser-secret.yaml | 11 --
examples/create-by-resource/run.sh | 1 -
.../templates/primaryuser-secret.yaml | 12 --
examples/helm/create-cluster/values.yaml | 2 -
examples/kustomize/createcluster/README.md | 16 +-
.../createcluster/base/kustomization.yaml | 7 -
.../createcluster/overlay/dev/devhippo.json | 3 +-
.../createcluster/overlay/prod/prodhippo.json | 3 +-
.../overlay/staging/staginghippo.json | 3 +-
internal/operator/cluster/cluster.go | 66 ++++++++
11 files changed, 139 insertions(+), 126 deletions(-)
delete mode 100644 examples/create-by-resource/primaryuser-secret.yaml
delete mode 100644 examples/helm/create-cluster/templates/primaryuser-secret.yaml
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index b323911e18..4c88e634ce 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -57,48 +57,6 @@ create additional secrets.
The following guide goes through how to create a PostgreSQL cluster called
`hippo` by creating a new custom resource.
-#### Step 1: Creating the PostgreSQL User Secrets
-
-As mentioned above, there are a minimum of three PostgreSQL user accounts that
-you must create in order to bootstrap a PostgreSQL cluster. These are:
-
-- A PostgreSQL superuser
-- A replication user
-- A standard PostgreSQL user
-
-The below code will help you set up these Secrets.
-
-```
-# this variable is the name of the cluster being created
-pgo_cluster_name=hippo
-# this variable is the namespace the cluster is being deployed into
-cluster_namespace=pgo
-
-# this is the superuser secret
-kubectl create secret generic -n "${cluster_namespace}" "${pgo_cluster_name}-postgres-secret" \
- --from-literal=username=postgres \
- --from-literal=password=Supersecurepassword*
-
-# this is the replication user secret
-kubectl create secret generic -n "${cluster_namespace}" "${pgo_cluster_name}-primaryuser-secret" \
- --from-literal=username=primaryuser \
- --from-literal=password=Anothersecurepassword*
-
-# this is the standard user secret
-kubectl create secret generic -n "${cluster_namespace}" "${pgo_cluster_name}-hippo-secret" \
- --from-literal=username=hippo \
- --from-literal=password=Moresecurepassword*
-
-
-kubectl label secrets -n "${cluster_namespace}" "${pgo_cluster_name}-postgres-secret" "pg-cluster=${pgo_cluster_name}"
-kubectl label secrets -n "${cluster_namespace}" "${pgo_cluster_name}-primaryuser-secret" "pg-cluster=${pgo_cluster_name}"
-kubectl label secrets -n "${cluster_namespace}" "${pgo_cluster_name}-hippo-secret" "pg-cluster=${pgo_cluster_name}"
-```
-
-#### Step 2: Create the PostgreSQL Cluster
-
-With the Secrets in place. It is now time to create the PostgreSQL cluster.
-
The below manifest references the Secrets created in the previous step to add a
custom resource to the `pgclusters.crunchydata.com` custom resource definition.
@@ -121,7 +79,6 @@ metadata:
autofail: "true"
crunchy-pgbadger: "false"
crunchy-pgha-scope: ${pgo_cluster_name}
- crunchy-postgres-exporter: "false"
deployment-name: ${pgo_cluster_name}
name: ${pgo_cluster_name}
pg-cluster: ${pgo_cluster_name}
@@ -172,6 +129,7 @@ spec:
clustername: ${pgo_cluster_name}
customconfig: ""
database: ${pgo_cluster_name}
+ exporter: false
exporterport: "9187"
limits: {}
name: ${pgo_cluster_name}
@@ -204,7 +162,6 @@ spec:
tolerations: []
user: hippo
userlabels:
- crunchy-postgres-exporter: "false"
pg-pod-anti-affinity: ""
pgo-version: {{< param operatorVersion >}}
usersecretname: ${pgo_cluster_name}-hippo-secret
@@ -213,48 +170,47 @@ EOF
kubectl apply -f "${pgo_cluster_name}-pgcluster.yaml"
```
-### Create a PostgreSQL Cluster With Backups in S3
+And that's all! The PostgreSQL Operator will go ahead and create the cluster.
-A frequent use case is to create a PostgreSQL cluster with S3 or a S3-like
-storage system for storing backups. This requires adding a Secret that contains
-the S3 key and key secret for your account, and adding some additional
-information into the custom resource.
-
-#### Step 1: Create the pgBackRest S3 Secrets
+As part of this process, the PostgreSQL Operator creates several Secrets that
+contain the credentials for three user accounts that must be present in order
+to bootstrap a PostgreSQL cluster. These are:
-As mentioned above, it is necessary to create a Secret containing the S3 key and
-key secret that will allow a user to create backups in S3.
+- A PostgreSQL superuser
+- A replication user
+- A standard PostgreSQL user
-The below code will help you set up this Secret.
+The Secrets represent the following PostgreSQL users and can be identified using
+the below patterns:
-```
-# this variable is the name of the cluster being created
-pgo_cluster_name=hippo
-# this variable is the namespace the cluster is being deployed into
-cluster_namespace=pgo
-# the following variables are your S3 key and key secret
-backrest_s3_key=yours3key
-backrest_s3_key_secret=yours3keysecret
+| PostgreSQL User | Type | Secret Pattern | Notes |
+| --------------- | ----------- | ---------------------------------- | ----- |
+| `postgres` | Superuser | `<clusterName>-postgres-secret` | This is the PostgreSQL superuser account. Using the above example, the name of the secret would be `hippo-postgres-secret`. |
+| `primaryuser` | Replication | `<clusterName>-primaryuser-secret` | This is the managed replication user account for maintaining high availability. This account does not need to be accessed. Using the above example, the name of the secret would be `hippo-primaryuser-secret`. |
+| User | User | `<clusterName>-<userName>-secret` | This is an unprivileged user that should be used for most operations. This Secret is set by the `user` attribute in the custom resource. In the above example, the name of this user is `hippo`, which would make the Secret `hippo-hippo-secret`. |
-kubectl -n "${cluster_namespace}" create secret generic "${pgo_cluster_name}-backrest-repo-config" \
- --from-literal="aws-s3-key=${backrest_s3_key}" \
- --from-literal="aws-s3-key-secret=${backrest_s3_key_secret}"
+To extract the user credentials so you can log into the database, you can use
+the following JSONPath expression:
-unset backrest_s3_key
-unset backrest_s3_key_secret
```
+# namespace that the cluster is running in
+export cluster_namespace=pgo
+# name of the cluster
+export pgo_cluster_name=hippo
+# name of the user whose password we want to get
+export pgo_cluster_username=hippo
-#### Step 2: Creating the PostgreSQL User Secrets
-
-Similar to the basic create cluster example, there are a minimum of three
-PostgreSQL user accounts that you must create in order to bootstrap a PostgreSQL
-cluster. These are:
+kubectl -n "${cluster_namespace}" get secrets \
+ "${pgo_cluster_name}-${pgo_cluster_username}-secret" -o "jsonpath={.data['password']}" | base64 -d
+```
-- A PostgreSQL superuser
-- A replication user
-- A standard PostgreSQL user
+#### Customizing User Credentials
-The below code will help you set up these Secrets.
+If you wish to set the credentials for these users on your own, you have to
+create these Secrets _before_ creating a custom resource. The below example
+shows how to create the three required user accounts prior to creating a custom
+resource. Note that if you omit any of these Secrets, the Postgres Operator
+will create it on its own.
```
# this variable is the name of the cluster being created
@@ -283,7 +239,38 @@ kubectl label secrets -n "${cluster_namespace}" "${pgo_cluster_name}-primaryuser
kubectl label secrets -n "${cluster_namespace}" "${pgo_cluster_name}-hippo-secret" "pg-cluster=${pgo_cluster_name}"
```
-#### Step 3: Create the PostgreSQL Cluster
+### Create a PostgreSQL Cluster With Backups in S3
+
+A frequent use case is to create a PostgreSQL cluster with S3 or a S3-like
+storage system for storing backups. This requires adding a Secret that contains
+the S3 key and key secret for your account, and adding some additional
+information into the custom resource.
+
+#### Step 1: Create the pgBackRest S3 Secrets
+
+As mentioned above, it is necessary to create a Secret containing the S3 key and
+key secret that will allow a user to create backups in S3.
+
+The below code will help you set up this Secret.
+
+```
+# this variable is the name of the cluster being created
+pgo_cluster_name=hippo
+# this variable is the namespace the cluster is being deployed into
+cluster_namespace=pgo
+# the following variables are your S3 key and key secret
+backrest_s3_key=yours3key
+backrest_s3_key_secret=yours3keysecret
+
+kubectl -n "${cluster_namespace}" create secret generic "${pgo_cluster_name}-backrest-repo-config" \
+ --from-literal="aws-s3-key=${backrest_s3_key}" \
+ --from-literal="aws-s3-key-secret=${backrest_s3_key_secret}"
+
+unset backrest_s3_key
+unset backrest_s3_key_secret
+```
+
+#### Step 2: Create the PostgreSQL Cluster
With the Secrets in place, it is now time to create the PostgreSQL cluster.
diff --git a/examples/create-by-resource/primaryuser-secret.yaml b/examples/create-by-resource/primaryuser-secret.yaml
deleted file mode 100644
index 15ee8ad665..0000000000
--- a/examples/create-by-resource/primaryuser-secret.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
-apiVersion: v1
-data:
- password: d0ZvYWlRZFhPTQ==
- username: cHJpbWFyeXVzZXI=
-kind: Secret
-metadata:
- labels:
- pg-cluster: fromcrd
- name: fromcrd-primaryuser-secret
- namespace: pgouser1
-type: Opaque
diff --git a/examples/create-by-resource/run.sh b/examples/create-by-resource/run.sh
index e6940ead12..98bd67ec2c 100755
--- a/examples/create-by-resource/run.sh
+++ b/examples/create-by-resource/run.sh
@@ -45,7 +45,6 @@ rm $DIR/fromcrd-key $DIR/fromcrd-key.pub
# create the required postgres credentials for the fromcrd cluster
$PGO_CMD -n $NS create -f $DIR/postgres-secret.yaml
-$PGO_CMD -n $NS create -f $DIR/primaryuser-secret.yaml
$PGO_CMD -n $NS create -f $DIR/testuser-secret.yaml
# create the pgcluster CRD for the fromcrd cluster
diff --git a/examples/helm/create-cluster/templates/primaryuser-secret.yaml b/examples/helm/create-cluster/templates/primaryuser-secret.yaml
deleted file mode 100644
index f4471b8fd2..0000000000
--- a/examples/helm/create-cluster/templates/primaryuser-secret.yaml
+++ /dev/null
@@ -1,12 +0,0 @@
-apiVersion: v1
-data:
- password: {{ .Values.primaryusersecretpassword | b64enc }}
- username: {{ .Values.primaryusersecretuser | b64enc }}
-kind: Secret
-metadata:
- labels:
- pg-cluster: {{ .Values.pgclustername }}
- vendor: crunchydata
- name: {{ .Values.pgclustername }}-primaryuser-secret
- namespace: {{ .Values.namespace }}
-type: Opaque
\ No newline at end of file
diff --git a/examples/helm/create-cluster/values.yaml b/examples/helm/create-cluster/values.yaml
index 4add0e560f..b0301c6205 100644
--- a/examples/helm/create-cluster/values.yaml
+++ b/examples/helm/create-cluster/values.yaml
@@ -13,5 +13,3 @@ hipposecretuser: "hippo"
hipposecretpassword: "Supersecurepassword*"
postgressecretuser: "postgres"
postgressecretpassword: "Anothersecurepassword*"
-primaryusersecretuser: "primaryuser"
-primaryusersecretpassword: "Moresecurepassword*"
\ No newline at end of file
diff --git a/examples/kustomize/createcluster/README.md b/examples/kustomize/createcluster/README.md
index 0bfc762305..f21cae2f90 100644
--- a/examples/kustomize/createcluster/README.md
+++ b/examples/kustomize/createcluster/README.md
@@ -34,7 +34,6 @@ You will see these items are created after running the above command
```
secret/hippo-hippo-secret created
secret/hippo-postgres-secret created
-secret/hippo-primaryuser-secret created
pgcluster.crunchydata.com/hippo created
```
You may need to wait a few seconds, depending on the resources you have allocated to your Kubernetes setup, for the Crunchy PostgreSQL cluster to become available.
@@ -51,7 +50,7 @@ cluster : hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
deployment : hippo
deployment : hippo-backrest-shared-repo
service : hippo - ClusterIP (10.0.56.86) - Ports (2022/TCP, 5432/TCP)
- labels : pg-pod-anti-affinity= pgo-backrest=true pgo-version=4.5.1 crunchy-postgres-exporter=false name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata autofail=true crunchy-pgbadger=false
+ labels : pg-pod-anti-affinity= pgo-backrest=true pgo-version=4.5.1 crunchy-postgres-exporter=false name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata autofail=true crunchy-pgbadger=false
```
Feel free to run other pgo cli commands on the hippo cluster
@@ -72,7 +71,6 @@ You will see these items are created after running the above command
```
secret/dev-hippo-hippo-secret created
secret/dev-hippo-postgres-secret created
-secret/dev-hippo-primaryuser-secret created
pgcluster.crunchydata.com/dev-hippo created
```
After the cluster is finished creating, let's take a look at it with the Crunchy PostgreSQL Operator
@@ -89,7 +87,7 @@ cluster : dev-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
deployment : dev-hippo-pgbouncer
service : dev-hippo - ClusterIP (10.0.62.87) - Ports (2022/TCP, 5432/TCP)
service : dev-hippo-pgbouncer - ClusterIP (10.0.48.120) - Ports (5432/TCP)
- labels : crunchy-pgha-scope=dev-hippo crunchy-postgres-exporter=false name=dev-hippo pg-cluster=dev-hippo pg-pod-anti-affinity= pgo-backrest=true vendor=crunchydata autofail=true crunchy-pgbadger=false deployment-name=dev-hippo environment=development pgo-version=4.5.1 pgouser=admin
+ labels : crunchy-pgha-scope=dev-hippo crunchy-postgres-exporter=false name=dev-hippo pg-cluster=dev-hippo pg-pod-anti-affinity= pgo-backrest=true vendor=crunchydata autofail=true crunchy-pgbadger=false deployment-name=dev-hippo environment=development pgo-version=4.5.1 pgouser=admin
```
#### staging
The staging overlay will deploy a Crunchy PostgreSQL cluster with two replicas and annotations added
@@ -106,7 +104,6 @@ You will see these items are created after running the above command
```
secret/staging-hippo-hippo-secret created
secret/staging-hippo-postgres-secret created
-secret/staging-hippo-primaryuser-secret created
pgcluster.crunchydata.com/staging-hippo created
pgreplica.crunchydata.com/staging-hippo-rpl1 created
```
@@ -131,11 +128,11 @@ cluster : staging-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
service : staging-hippo-replica - ClusterIP (10.0.56.57) - Ports (2022/TCP, 5432/TCP)
pgreplica : staging-hippo-lnxw
pgreplica : staging-hippo-rpl1
- labels : deployment-name=staging-hippo environment=staging name=staging-hippo pg-pod-anti-affinity= crunchy-postgres-exporter=false crunchy-pgbadger=false crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-backrest=true pgo-version=4.5.1 pgouser=admin vendor=crunchydata autofail=true
+ labels : deployment-name=staging-hippo environment=staging name=staging-hippo pg-pod-anti-affinity= crunchy-postgres-exporter=false crunchy-pgbadger=false crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-backrest=true pgo-version=4.5.1 pgouser=admin vendor=crunchydata autofail=true
```
#### production
-The production overlay will deploy a crunchy postgreSQL cluster with one replica
+The production overlay will deploy a crunchy postgreSQL cluster with one replica
Let's generate the kustomize YAML for the prod overlay
```
@@ -149,7 +146,6 @@ You will see these items are created after running the above command
```
secret/prod-hippo-hippo-secret created
secret/prod-hippo-postgres-secret created
-secret/prod-hippo-primaryuser-secret created
pgcluster.crunchydata.com/prod-hippo created
```
After the cluster is finished creating, let's take a look at it with the Crunchy PostgreSQL Operator
@@ -169,7 +165,7 @@ cluster : prod-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
service : prod-hippo - ClusterIP (10.0.56.18) - Ports (2022/TCP, 5432/TCP)
service : prod-hippo-replica - ClusterIP (10.0.56.101) - Ports (2022/TCP, 5432/TCP)
pgreplica : prod-hippo-flty
- labels : pgo-backrest=true pgo-version=4.5.1 crunchy-pgbadger=false crunchy-postgres-exporter=false deployment-name=prod-hippo environment=production pg-cluster=prod-hippo pg-pod-anti-affinity= autofail=true crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
+ labels : pgo-backrest=true pgo-version=4.5.1 crunchy-pgbadger=false crunchy-postgres-exporter=false deployment-name=prod-hippo environment=production pg-cluster=prod-hippo pg-pod-anti-affinity= autofail=true crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
```
### Delete the clusters
To delete the clusters run the following pgo cli commands
@@ -184,4 +180,4 @@ pgo delete cluster hippo -n pgo
pgo delete cluster dev-hippo -n pgo
pgo delete cluster staging-hippo -n pgo
pgo delete cluster prod-hippo -n pgo
-```
\ No newline at end of file
+```
diff --git a/examples/kustomize/createcluster/base/kustomization.yaml b/examples/kustomize/createcluster/base/kustomization.yaml
index a93d6f8eaa..91fb2a0954 100644
--- a/examples/kustomize/createcluster/base/kustomization.yaml
+++ b/examples/kustomize/createcluster/base/kustomization.yaml
@@ -10,12 +10,6 @@ secretGenerator:
literals:
- username=hippo
- password=Moresecurepassword*
- - name: hippo-primaryuser-secret
- options:
- disableNameSuffixHash: true
- literals:
- - username=primaryuser
- - password=Anothersecurepassword*
- name: hippo-postgres-secret
options:
disableNameSuffixHash: true
@@ -24,4 +18,3 @@ secretGenerator:
- password=Supersecurepassword*
resources:
- pgcluster.yaml
-
diff --git a/examples/kustomize/createcluster/overlay/dev/devhippo.json b/examples/kustomize/createcluster/overlay/dev/devhippo.json
index ab7c2e5071..c25b2085ea 100644
--- a/examples/kustomize/createcluster/overlay/dev/devhippo.json
+++ b/examples/kustomize/createcluster/overlay/dev/devhippo.json
@@ -12,7 +12,6 @@
{ "op": "replace", "path": "/spec/clustername", "value": "dev-hippo" },
{ "op": "replace", "path": "/spec/database", "value": "dev-hippo" },
{ "op": "replace", "path": "/spec/name", "value": "dev-hippo" },
- { "op": "replace", "path": "/spec/primarysecretname", "value": "dev-hippo-primaryuser-secret" },
{ "op": "replace", "path": "/spec/rootsecretname", "value": "dev-hippo-postgres-secret" },
{ "op": "replace", "path": "/spec/usersecretname", "value": "dev-hippo-hippo-secret" }
-]
\ No newline at end of file
+]
diff --git a/examples/kustomize/createcluster/overlay/prod/prodhippo.json b/examples/kustomize/createcluster/overlay/prod/prodhippo.json
index ef8313629d..2a595bc24b 100644
--- a/examples/kustomize/createcluster/overlay/prod/prodhippo.json
+++ b/examples/kustomize/createcluster/overlay/prod/prodhippo.json
@@ -12,8 +12,7 @@
{ "op": "replace", "path": "/spec/clustername", "value": "prod-hippo" },
{ "op": "replace", "path": "/spec/database", "value": "prod-hippo" },
{ "op": "replace", "path": "/spec/name", "value": "prod-hippo" },
- { "op": "replace", "path": "/spec/primarysecretname", "value": "prod-hippo-primaryuser-secret" },
{ "op": "replace", "path": "/spec/replicas", "value": "1"},
{ "op": "replace", "path": "/spec/rootsecretname", "value": "prod-hippo-postgres-secret" },
{ "op": "replace", "path": "/spec/usersecretname", "value": "prod-hippo-hippo-secret" }
-]
\ No newline at end of file
+]
diff --git a/examples/kustomize/createcluster/overlay/staging/staginghippo.json b/examples/kustomize/createcluster/overlay/staging/staginghippo.json
index c19acb5895..0b811d3b45 100644
--- a/examples/kustomize/createcluster/overlay/staging/staginghippo.json
+++ b/examples/kustomize/createcluster/overlay/staging/staginghippo.json
@@ -12,8 +12,7 @@
{ "op": "replace", "path": "/spec/clustername", "value": "staging-hippo" },
{ "op": "replace", "path": "/spec/database", "value": "staging-hippo" },
{ "op": "replace", "path": "/spec/name", "value": "staging-hippo" },
- { "op": "replace", "path": "/spec/primarysecretname", "value": "staging-hippo-primaryuser-secret" },
{ "op": "replace", "path": "/spec/replicas", "value": "1"},
{ "op": "replace", "path": "/spec/rootsecretname", "value": "staging-hippo-postgres-secret" },
{ "op": "replace", "path": "/spec/usersecretname", "value": "staging-hippo-hippo-secret" }
-]
\ No newline at end of file
+]
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index e01b3a837b..b71aac778f 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -74,6 +74,14 @@ func AddClusterBase(clientset kubeapi.Interface, cl *crv1.Pgcluster, namespace s
return
}
+ // create any missing user secrets that are required to be part of the
+ // bootstrap
+ if err := createMissingUserSecrets(clientset, cl); err != nil {
+ log.Errorf("error creating missing user secrets: %q", err.Error())
+ publishClusterCreateFailure(cl, err.Error())
+ return
+ }
+
if err = addClusterCreateMissingService(clientset, cl, namespace); err != nil {
log.Error("error in creating primary service " + err.Error())
publishClusterCreateFailure(cl, err.Error())
@@ -686,6 +694,64 @@ func annotateBackrestSecret(clientset kubernetes.Interface, cluster *crv1.Pgclus
return err
}
+// createMissingUserSecret determines whether a given user secret is missing,
+// and if it is, creates it. Requires the appropriate secretName suffix for a
+// given secret, as well as the user name, e.g.:
+//
+// createMissingUserSecret(clientset, cluster, crv1.RootSecretSuffix, crv1.PGUserSuperuser)
+func createMissingUserSecret(clientset kubernetes.Interface, cluster *crv1.Pgcluster,
+ secretNameSuffix, username string) error {
+ ctx := context.TODO()
+
+ // the secretName is just the combination cluster name and the
+ // secretNameSuffix
+ secretName := cluster.Spec.Name + secretNameSuffix
+
+ // if the secret already exists, skip it
+ // if it returns an error other than "not found" return an error
+ if _, err := clientset.CoreV1().Secrets(cluster.Spec.Namespace).Get(
+ ctx, secretName, metav1.GetOptions{}); err == nil {
+ log.Infof("user secret %q exists for user %q for cluster %q",
+ secretName, username, cluster.Spec.Name)
+ return nil
+ } else if !kerrors.IsNotFound(err) {
+ return err
+ }
+
+ // alright, so we have to create the secret
+ // if the password fails to generate, return an error
+ passwordLength := util.GeneratedPasswordLength(operator.Pgo.Cluster.PasswordLength)
+ password, err := util.GeneratePassword(passwordLength)
+ if err != nil {
+ return err
+ }
+
+ // great, now we can create the secret! if we can't, return an error
+ return util.CreateSecret(clientset, cluster.Spec.Name, secretName,
+ username, password, cluster.Spec.Namespace)
+}
+
+// createMissingUserSecrets checks to see if there are secrets for the
+// superuser (postgres), replication user (primaryuser), and a standard postgres
+// user for the given cluster. Each of these is created if it does not
+// currently exist
+func createMissingUserSecrets(clientset kubernetes.Interface, cluster *crv1.Pgcluster) error {
+ // first, determine if we need to create a user secret for the postgres
+ // superuser
+ if err := createMissingUserSecret(clientset, cluster, crv1.RootSecretSuffix, crv1.PGUserSuperuser); err != nil {
+ return err
+ }
+
+ // next, determine if we need to create a user secret for the replication user
+ if err := createMissingUserSecret(clientset, cluster, crv1.PrimarySecretSuffix, crv1.PGUserReplication); err != nil {
+ return err
+ }
+
+ // finally, determine if we need to create a user secret for the regular user
+ userSecretSuffix := fmt.Sprintf("-%s%s", cluster.Spec.User, crv1.UserSecretSuffix)
+ return createMissingUserSecret(clientset, cluster, userSecretSuffix, cluster.Spec.User)
+}
+
func deleteConfigMaps(clientset kubernetes.Interface, clusterName, ns string) error {
ctx := context.TODO()
label := fmt.Sprintf("pg-cluster=%s", clusterName)
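The bootstrap heuristic this patch adds (use a user-provided Secret if one exists, otherwise generate one) can be sketched as below. This is a minimal stand-in, not the Operator's code: a map models the Secrets API, and a sentinel error replaces `kerrors.IsNotFound`; all names are illustrative.

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the Kubernetes "not found" API error that the
// real code detects with kerrors.IsNotFound.
var errNotFound = errors.New("not found")

// secretStore is a toy stand-in for the namespaced Secrets API.
type secretStore map[string]string

func (s secretStore) get(name string) (string, error) {
	if v, ok := s[name]; ok {
		return v, nil
	}
	return "", errNotFound
}

// createMissingSecret creates a secret only when it does not already exist;
// an existing secret (e.g. one pre-created by the user) is left untouched,
// which is what makes cluster bootstrap idempotent.
func createMissingSecret(store secretStore, name, password string) error {
	if _, err := store.get(name); err == nil {
		return nil // already exists: skip it
	} else if !errors.Is(err, errNotFound) {
		return err // a real API error: propagate it
	}
	store[name] = password
	return nil
}

func main() {
	store := secretStore{"hippo-postgres-secret": "user-supplied"}
	_ = createMissingSecret(store, "hippo-postgres-secret", "generated-1")
	_ = createMissingSecret(store, "hippo-primaryuser-secret", "generated-2")
	fmt.Println(store["hippo-postgres-secret"])    // prints "user-supplied"
	fmt.Println(store["hippo-primaryuser-secret"]) // prints "generated-2"
}
```

The same get-then-create shape appears three times in `createMissingUserSecrets`, once per bootstrap account.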
From 11375e22483a6bd5316e4d52db4e03fcb7857533 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 27 Dec 2020 16:11:06 -0500
Subject: [PATCH 081/276] Eliminate user Secret name attributes from pgcluster
CRD
These Secrets have been fully managed by the Postgres Operator
for a long time and there is no reason why they need to be specified
by the user as specific custom resource attributes. This further
reduces the number of steps required to create a PostgreSQL
cluster with the Operator through the "custom resource" workflow.
---
docs/content/custom-resources/_index.md | 10 --
examples/create-by-resource/fromcrd.json | 5 +-
.../create-cluster/templates/pgcluster.yaml | 3 -
.../createcluster/base/pgcluster.yaml | 3 -
.../createcluster/overlay/dev/devhippo.json | 4 +-
.../createcluster/overlay/prod/prodhippo.json | 4 +-
.../overlay/staging/staginghippo.json | 4 +-
.../files/crds/pgclusters-crd.yaml | 3 -
.../postgresoperator.crd.descriptions.yaml | 16 ---
.../olm/postgresoperator.crd.examples.yaml | 3 -
installers/olm/postgresoperator.crd.yaml | 3 -
.../apiserver/clusterservice/clusterimpl.go | 104 +++++++++---------
.../apiserver/scheduleservice/scheduleimpl.go | 2 +-
internal/apiserver/userservice/userimpl.go | 21 +---
internal/operator/cluster/cluster.go | 15 +--
internal/operator/cluster/clusterlogic.go | 12 +-
internal/util/secrets.go | 19 ++--
pkg/apis/crunchydata.com/v1/cluster.go | 23 +++-
pkg/apis/crunchydata.com/v1/cluster_test.go | 75 +++++++++++++
pkg/apis/crunchydata.com/v1/common.go | 9 --
20 files changed, 177 insertions(+), 161 deletions(-)
create mode 100644 pkg/apis/crunchydata.com/v1/cluster_test.go
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index 4c88e634ce..a9f5879c4f 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -148,9 +148,7 @@ spec:
pgBouncer: preferred
policies: ""
port: "5432"
- primarysecretname: ${pgo_cluster_name}-primaryuser-secret
replicas: "0"
- rootsecretname: ${pgo_cluster_name}-postgres-secret
shutdown: false
standby: false
tablespaceMounts: {}
@@ -164,7 +162,6 @@ spec:
userlabels:
pg-pod-anti-affinity: ""
pgo-version: {{< param operatorVersion >}}
- usersecretname: ${pgo_cluster_name}-hippo-secret
EOF
kubectl apply -f "${pgo_cluster_name}-pgcluster.yaml"
@@ -369,9 +366,7 @@ spec:
pgBouncer: preferred
policies: ""
port: "5432"
- primarysecretname: ${pgo_cluster_name}-primaryuser-secret
replicas: "0"
- rootsecretname: ${pgo_cluster_name}-postgres-secret
shutdown: false
standby: false
tablespaceMounts: {}
@@ -386,7 +381,6 @@ spec:
backrest-storage-type: "s3"
pg-pod-anti-affinity: ""
pgo-version: {{< param operatorVersion >}}
- usersecretname: ${pgo_cluster_name}-hippo-secret
EOF
kubectl apply -f "${pgo_cluster_name}-pgcluster.yaml"
@@ -689,16 +683,12 @@ make changes, as described below.
| PodAntiAffinity | `create` | A required section. Sets the [pod anti-affinity rules]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity" >}}) for the PostgreSQL cluster and associated deployments. Please see the `Pod Anti-Affinity Specification` section below. |
| Policies | `create` | If provided, a comma-separated list referring to `pgpolicies.crunchydata.com.Spec.Name` that should be run once the PostgreSQL primary is first initialized. |
| Port | `create` | The port that PostgreSQL will run on, e.g. `5432`. |
-| PrimaryStorage | `create` | A specification that gives information about the storage attributes for the primary instance in the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This is required. |
-| RootSecretName | `create` | The name of a Kubernetes Secret that contains the credentials for a PostgreSQL _replication user_ that is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.|
| ReplicaStorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to that of `PrimaryStorage`. |
| Replicas | `create` | The number of replicas to create after a PostgreSQL primary is first initialized. This only works on create; to scale a cluster after it is initialized, please use the [`pgo scale`]({{< relref "/pgo-client/reference/pgo_scale.md" >}}) command. |
| Resources | `create`, `update` | Specify the container resource requests that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| RootSecretName | `create` | The name of a Kubernetes Secret that contains the credentials for a PostgreSQL superuser that is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.|
| SyncReplication | `create` | If set to `true`, specifies the PostgreSQL cluster to use [synchronous replication]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity#synchronous-replication-guarding-against-transactions-loss" >}}).|
| User | `create` | The name of the PostgreSQL user that is created when the PostgreSQL cluster is first created. |
| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" for things that really should be modeled in the CRD. These values do get copied to the actual CR labels. If you want to set up metrics collection or pgBadger, you would specify `"crunchy-postgres-exporter": "true"` and `"crunchy-pgbadger": "true"` here, respectively. However, this structure does need to be set, so just follow whatever is in the example. |
-| UserSecretName | `create` | The name of a Kubernetes Secret that contains the credentials for a standard PostgreSQL user that is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.|
| TablespaceMounts | `create`,`update` | Lists any tablespaces that are attached to the PostgreSQL cluster. Tablespaces can be added at a later time by updating the `TablespaceMounts` entry, but they cannot be removed. Stores a map of information, with the key being the name of the tablespace, and the value being a Storage Specification, defined below. |
| TLS | `create` | Defines the attributes for enabling TLS for a PostgreSQL cluster. See TLS Specification below. |
| TLSOnly | `create` | If set to true, requires client connections to use only TLS to connect to the PostgreSQL database. |
diff --git a/examples/create-by-resource/fromcrd.json b/examples/create-by-resource/fromcrd.json
index 58dd2f515e..8713deb719 100644
--- a/examples/create-by-resource/fromcrd.json
+++ b/examples/create-by-resource/fromcrd.json
@@ -72,9 +72,7 @@
},
"policies": "",
"port": "5432",
- "primarysecretname": "fromcrd-primaryuser-secret",
"replicas": "0",
- "rootsecretname": "fromcrd-postgres-secret",
"secretfrom": "",
"shutdown": false,
"standby": false,
@@ -89,7 +87,6 @@
"pgo-version": "4.5.1",
"pgouser": "pgoadmin",
"pgo-backrest": "true"
- },
- "usersecretname": "fromcrd-testuser-secret"
+ }
}
}
diff --git a/examples/helm/create-cluster/templates/pgcluster.yaml b/examples/helm/create-cluster/templates/pgcluster.yaml
index 0852b1f447..65f12d6460 100644
--- a/examples/helm/create-cluster/templates/pgcluster.yaml
+++ b/examples/helm/create-cluster/templates/pgcluster.yaml
@@ -76,9 +76,7 @@ spec:
pgBouncer: preferred
policies: ""
port: "5432"
- primarysecretname: {{ .Values.pgclustername }}-primaryuser-secret
replicas: "0"
- rootsecretname: {{ .Values.pgclustername }}-postgres-secret
shutdown: false
standby: false
tablespaceMounts: {}
@@ -92,4 +90,3 @@ spec:
crunchy-postgres-exporter: "false"
pg-pod-anti-affinity: ""
pgo-version: {{ .Values.pgoversion }}
- usersecretname: {{ .Values.pgclustername }}-hippo-secret
diff --git a/examples/kustomize/createcluster/base/pgcluster.yaml b/examples/kustomize/createcluster/base/pgcluster.yaml
index 29aa0c6e83..975a7ddf5d 100644
--- a/examples/kustomize/createcluster/base/pgcluster.yaml
+++ b/examples/kustomize/createcluster/base/pgcluster.yaml
@@ -86,9 +86,7 @@ spec:
pgBouncer: preferred
policies: ""
port: "5432"
- primarysecretname: hippo-primaryuser-secret
replicas: "0"
- rootsecretname: hippo-postgres-secret
shutdown: false
standby: false
tablespaceMounts: {}
@@ -102,4 +100,3 @@ spec:
crunchy-postgres-exporter: "false"
pg-pod-anti-affinity: ""
pgo-version: 4.5.1
- usersecretname: hippo-hippo-secret
diff --git a/examples/kustomize/createcluster/overlay/dev/devhippo.json b/examples/kustomize/createcluster/overlay/dev/devhippo.json
index c25b2085ea..843e9c2c80 100644
--- a/examples/kustomize/createcluster/overlay/dev/devhippo.json
+++ b/examples/kustomize/createcluster/overlay/dev/devhippo.json
@@ -11,7 +11,5 @@
{ "op": "replace", "path": "/spec/PrimaryStorage/name", "value": "dev-hippo" },
{ "op": "replace", "path": "/spec/clustername", "value": "dev-hippo" },
{ "op": "replace", "path": "/spec/database", "value": "dev-hippo" },
- { "op": "replace", "path": "/spec/name", "value": "dev-hippo" },
- { "op": "replace", "path": "/spec/rootsecretname", "value": "dev-hippo-postgres-secret" },
- { "op": "replace", "path": "/spec/usersecretname", "value": "dev-hippo-hippo-secret" }
+ { "op": "replace", "path": "/spec/name", "value": "dev-hippo" }
]
diff --git a/examples/kustomize/createcluster/overlay/prod/prodhippo.json b/examples/kustomize/createcluster/overlay/prod/prodhippo.json
index 2a595bc24b..76fd528ac0 100644
--- a/examples/kustomize/createcluster/overlay/prod/prodhippo.json
+++ b/examples/kustomize/createcluster/overlay/prod/prodhippo.json
@@ -12,7 +12,5 @@
{ "op": "replace", "path": "/spec/clustername", "value": "prod-hippo" },
{ "op": "replace", "path": "/spec/database", "value": "prod-hippo" },
{ "op": "replace", "path": "/spec/name", "value": "prod-hippo" },
- { "op": "replace", "path": "/spec/replicas", "value": "1"},
- { "op": "replace", "path": "/spec/rootsecretname", "value": "prod-hippo-postgres-secret" },
- { "op": "replace", "path": "/spec/usersecretname", "value": "prod-hippo-hippo-secret" }
+ { "op": "replace", "path": "/spec/replicas", "value": "1"}
]
diff --git a/examples/kustomize/createcluster/overlay/staging/staginghippo.json b/examples/kustomize/createcluster/overlay/staging/staginghippo.json
index 0b811d3b45..7a5a9ab23f 100644
--- a/examples/kustomize/createcluster/overlay/staging/staginghippo.json
+++ b/examples/kustomize/createcluster/overlay/staging/staginghippo.json
@@ -12,7 +12,5 @@
{ "op": "replace", "path": "/spec/clustername", "value": "staging-hippo" },
{ "op": "replace", "path": "/spec/database", "value": "staging-hippo" },
{ "op": "replace", "path": "/spec/name", "value": "staging-hippo" },
- { "op": "replace", "path": "/spec/replicas", "value": "1"},
- { "op": "replace", "path": "/spec/rootsecretname", "value": "staging-hippo-postgres-secret" },
- { "op": "replace", "path": "/spec/usersecretname", "value": "staging-hippo-hippo-secret" }
+ { "op": "replace", "path": "/spec/replicas", "value": "1"}
]
diff --git a/installers/ansible/roles/pgo-operator/files/crds/pgclusters-crd.yaml b/installers/ansible/roles/pgo-operator/files/crds/pgclusters-crd.yaml
index bea777b436..92c9fd7cd5 100644
--- a/installers/ansible/roles/pgo-operator/files/crds/pgclusters-crd.yaml
+++ b/installers/ansible/roles/pgo-operator/files/crds/pgclusters-crd.yaml
@@ -24,12 +24,9 @@ spec:
exporterport: { type: string }
name: { type: string }
pgbadgerport: { type: string }
- primarysecretname: { type: string }
PrimaryStorage: { type: object }
port: { type: string }
- rootsecretname: { type: string }
userlabels: { type: object }
- usersecretname: { type: string }
status:
properties:
state: { type: string }
diff --git a/installers/olm/postgresoperator.crd.descriptions.yaml b/installers/olm/postgresoperator.crd.descriptions.yaml
index 5d76dd4e0c..15c5b274ba 100644
--- a/installers/olm/postgresoperator.crd.descriptions.yaml
+++ b/installers/olm/postgresoperator.crd.descriptions.yaml
@@ -56,22 +56,6 @@
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:number'
- - path: rootsecretname
- displayName: PostgreSQL superuser credentials
- description: The name of the Secret that contains the PostgreSQL superuser credentials
- x-descriptors:
- - 'urn:alm:descriptor:io.kubernetes:Secret'
- - path: primarysecretname
- displayName: PostgreSQL support service credentials
- description: The name of the Secret that contains the credentials used for managing cluster instance authentication, e.g. connections for replicas
- x-descriptors:
- - 'urn:alm:descriptor:io.kubernetes:Secret'
- - path: usersecretname
- displayName: PostgreSQL user credentials
- description: The name of the Secret that contains the PostgreSQL user credentials for logging into the PostgreSQL cluster
- x-descriptors:
- - 'urn:alm:descriptor:io.kubernetes:Secret'
-
# `operator-sdk scorecard` expects this field to have a descriptor.
- path: PrimaryStorage
displayName: PostgreSQL Primary Storage
diff --git a/installers/olm/postgresoperator.crd.examples.yaml b/installers/olm/postgresoperator.crd.examples.yaml
index d7783c2707..49b58fac6c 100644
--- a/installers/olm/postgresoperator.crd.examples.yaml
+++ b/installers/olm/postgresoperator.crd.examples.yaml
@@ -18,9 +18,6 @@ spec:
exporterport: '9187'
pgbadgerport: '10000'
port: '5432'
- primarysecretname: example-primaryuser
- rootsecretname: example-postgresuser
- usersecretname: example-primaryuser
userlabels: { archive: 'false' }
---
diff --git a/installers/olm/postgresoperator.crd.yaml b/installers/olm/postgresoperator.crd.yaml
index f39ac244fc..9d9da35997 100644
--- a/installers/olm/postgresoperator.crd.yaml
+++ b/installers/olm/postgresoperator.crd.yaml
@@ -24,13 +24,10 @@ spec:
exporterport: { type: string }
name: { type: string }
pgbadgerport: { type: string }
- primarysecretname: { type: string }
PrimaryStorage: { type: object }
port: { type: string }
- rootsecretname: { type: string }
status: { type: string }
userlabels: { type: object }
- usersecretname: { type: string }
status:
properties:
state: { type: string }
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 21fc5f24ba..43106d7dba 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -886,10 +886,10 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
newInstance.ObjectMeta.Labels[config.LABEL_PGOUSER] = pgouser
if request.SecretFrom != "" {
- err = validateSecretFrom(request.SecretFrom, newInstance.Spec.User, ns)
+ err = validateSecretFrom(newInstance, request.SecretFrom)
if err != nil {
resp.Status.Code = msgs.Error
- resp.Status.Msg = request.SecretFrom + " secret was not found "
+ resp.Status.Msg = err.Error()
return resp
}
}
@@ -898,15 +898,12 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
// create the user secrets
// first, the superuser
- if secretName, password, err := createUserSecret(request, newInstance, crv1.RootSecretSuffix,
- crv1.PGUserSuperuser, request.PasswordSuperuser); err != nil {
+ if password, err := createUserSecret(request, newInstance, crv1.PGUserSuperuser, request.PasswordSuperuser); err != nil {
log.Error(err)
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
return resp
} else {
- newInstance.Spec.RootSecretName = secretName
-
// if the user requests to show system accounts, append it to the list
if request.ShowSystemAccounts {
user := msgs.CreateClusterDetailUser{
@@ -919,15 +916,12 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
}
// next, the replication user
- if secretName, password, err := createUserSecret(request, newInstance, crv1.PrimarySecretSuffix,
- crv1.PGUserReplication, request.PasswordReplication); err != nil {
+ if password, err := createUserSecret(request, newInstance, crv1.PGUserReplication, request.PasswordReplication); err != nil {
log.Error(err)
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
return resp
} else {
- newInstance.Spec.PrimarySecretName = secretName
-
// if the user requests to show system accounts, append it to the list
if request.ShowSystemAccounts {
user := msgs.CreateClusterDetailUser{
@@ -940,16 +934,12 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
}
// finally, the user from the request and/or default user
- userSecretSuffix := fmt.Sprintf("-%s%s", newInstance.Spec.User, crv1.UserSecretSuffix)
- if secretName, password, err := createUserSecret(request, newInstance, userSecretSuffix, newInstance.Spec.User,
- request.Password); err != nil {
+ if password, err := createUserSecret(request, newInstance, newInstance.Spec.User, request.Password); err != nil {
log.Error(err)
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
return resp
} else {
- newInstance.Spec.UserSecretName = secretName
-
user := msgs.CreateClusterDetailUser{
Username: newInstance.Spec.User,
Password: password,
@@ -1531,43 +1521,57 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
return newInstance
}
-func validateSecretFrom(secretname, user, ns string) error {
+// validateSecretFrom is a legacy method that lists all of the Secrets from
+// the cluster named by "secretFromClusterName" and determines whether the
+// bootstrap secrets are available, i.e. those for:
+//
+// - the Postgres superuser
+// - the replication user
+// - the standard user defined by "cluster.Spec.User"
+func validateSecretFrom(cluster *crv1.Pgcluster, secretFromClusterName string) error {
ctx := context.TODO()
- var err error
- selector := config.LABEL_PG_CLUSTER + "=" + secretname
- secrets, err := apiserver.Clientset.
- CoreV1().Secrets(ns).
- List(ctx, metav1.ListOptions{LabelSelector: selector})
+ // grab all of the Secrets from the referenced cluster so we can determine if
+ // the Secrets that we are looking for are present
+ options := metav1.ListOptions{
+ LabelSelector: fields.OneTermEqualSelector(config.LABEL_PG_CLUSTER, secretFromClusterName).String(),
+ }
+
+ secrets, err := apiserver.Clientset.CoreV1().Secrets(cluster.Namespace).List(ctx, options)
if err != nil {
return err
}
- log.Debugf("secrets for %s", secretname)
- pgprimaryFound := false
- pgrootFound := false
- pguserFound := false
-
- for _, s := range secrets.Items {
- if s.ObjectMeta.Name == secretname+crv1.PrimarySecretSuffix {
- pgprimaryFound = true
- } else if s.ObjectMeta.Name == secretname+crv1.RootSecretSuffix {
- pgrootFound = true
- } else if s.ObjectMeta.Name == secretname+"-"+user+crv1.UserSecretSuffix {
- pguserFound = true
- }
+ // if no secrets are found, take an early exit
+ if len(secrets.Items) == 0 {
+ return fmt.Errorf("no secrets found for %q", secretFromClusterName)
}
- if !pgprimaryFound {
- return errors.New(secretname + crv1.PrimarySecretSuffix + " not found")
+
+ // see if all three of the secrets exist. this borrows from the legacy method
+ // of checking
+ found := map[string]bool{
+ crv1.PGUserSuperuser: false,
+ crv1.PGUserReplication: false,
+ cluster.Spec.User: false,
}
- if !pgrootFound {
- return errors.New(secretname + crv1.RootSecretSuffix + " not found")
+
+ for _, secret := range secrets.Items {
+ found[crv1.PGUserSuperuser] = found[crv1.PGUserSuperuser] ||
+ (secret.Name == crv1.UserSecretNameFromClusterName(secretFromClusterName, crv1.PGUserSuperuser))
+ found[crv1.PGUserReplication] = found[crv1.PGUserReplication] ||
+ (secret.Name == crv1.UserSecretNameFromClusterName(secretFromClusterName, crv1.PGUserReplication))
+ found[cluster.Spec.User] = found[cluster.Spec.User] ||
+ (secret.Name == crv1.UserSecretNameFromClusterName(secretFromClusterName, cluster.Spec.User))
}
- if !pguserFound {
- return errors.New(secretname + "-" + user + crv1.UserSecretSuffix + " not found")
+
+ // if not all of the Secrets were found, return an error
+ for secretName, ok := range found {
+ if !ok {
+ return fmt.Errorf("could not find secret %q in cluster %q", secretName, secretFromClusterName)
+ }
}
- return err
+ return nil
}
func getReadyStatus(pod *v1.Pod) (string, bool) {
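The rewritten `validateSecretFrom` above uses a "found map": every required name starts as `false`, listing the cluster's Secrets flips matches to `true`, and any name still `false` afterwards yields an error. A minimal stdlib sketch of that pattern, with illustrative names (the Operator derives the real ones via `crv1.UserSecretNameFromClusterName`):

```go
package main

import "fmt"

// validateRequired returns an error naming the first required entry that is
// absent from the listed collection, mirroring the found-map check in
// validateSecretFrom.
func validateRequired(required, existing []string) error {
	// seed the map so that every required name starts out "not found"
	found := map[string]bool{}
	for _, name := range required {
		found[name] = false
	}

	// a single pass over the listed items flips matches to true
	for _, name := range existing {
		if _, ok := found[name]; ok {
			found[name] = true
		}
	}

	// anything still false was never seen
	for name, ok := range found {
		if !ok {
			return fmt.Errorf("could not find secret %q", name)
		}
	}
	return nil
}

func main() {
	required := []string{
		"oldhippo-postgres-secret",
		"oldhippo-primaryuser-secret",
		"oldhippo-hippo-secret",
	}
	existing := []string{"oldhippo-postgres-secret", "oldhippo-primaryuser-secret"}
	// the standard user's secret is missing, so this reports an error
	fmt.Println(validateRequired(required, existing))
}
```

Compared with the old three-boolean version, the map makes it trivial to report *which* secret is missing rather than a generic "not found".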
@@ -1692,11 +1696,9 @@ func getReplicas(cluster *crv1.Pgcluster, ns string) ([]msgs.ShowClusterReplica,
// password length
//
-// returns the secretname, password as well as any errors
+// returns the password as well as any errors
-func createUserSecret(request *msgs.CreateClusterRequest, cluster *crv1.Pgcluster, secretNameSuffix, username, password string) (string, string, error) {
+func createUserSecret(request *msgs.CreateClusterRequest, cluster *crv1.Pgcluster, username, password string) (string, error) {
ctx := context.TODO()
-
- // the secretName is just the combination cluster name and the secretNameSuffix
- secretName := fmt.Sprintf("%s%s", cluster.Spec.Name, secretNameSuffix)
+ secretName := crv1.UserSecretName(cluster, username)
// if the secret already exists, we can perform an early exit
// if there is an error, we'll ignore it
@@ -1705,7 +1707,7 @@ func createUserSecret(request *msgs.CreateClusterRequest, cluster *crv1.Pgcluste
Get(ctx, secretName, metav1.GetOptions{}); err == nil {
log.Infof("secret exists: [%s] - skipping", secretName)
- return secretName, string(secret.Data["password"][:]), nil
+ return string(secret.Data["password"][:]), nil
}
// alright, go through the hierarchy and determine if we need to set the
@@ -1717,14 +1719,14 @@ func createUserSecret(request *msgs.CreateClusterRequest, cluster *crv1.Pgcluste
// if the "SecretFrom" parameter is set, then load the password from a preexisting Secret
case request.SecretFrom != "":
// set up the name of the secret that we are loading the secret from
- secretFromSecretName := fmt.Sprintf("%s%s", request.SecretFrom, secretNameSuffix)
+ secretFromSecretName := fmt.Sprintf("%s-%s-secret", request.SecretFrom, username)
// now attempt to load said secret
oldPassword, err := util.GetPasswordFromSecret(apiserver.Clientset, cluster.Spec.Namespace, secretFromSecretName)
// if there is an error, abandon here, otherwise set the oldPassword as the
// current password
if err != nil {
- return "", "", err
+ return "", err
}
password = oldPassword
@@ -1740,7 +1742,7 @@ func createUserSecret(request *msgs.CreateClusterRequest, cluster *crv1.Pgcluste
generatedPassword, err := util.GeneratePassword(passwordLength)
// if the password fails to generate, return the error
if err != nil {
- return "", "", err
+ return "", err
}
password = generatedPassword
@@ -1749,11 +1751,11 @@ func createUserSecret(request *msgs.CreateClusterRequest, cluster *crv1.Pgcluste
// great, now we can create the secret! if we can't, return an error
if err := util.CreateSecret(apiserver.Clientset, cluster.Spec.Name, secretName,
username, password, cluster.Spec.Namespace); err != nil {
- return "", "", err
+ return "", err
}
// otherwise, return the secret name, password
- return secretName, password, nil
+ return password, nil
}
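The `crv1.UserSecretName` helper introduced here replaces the three per-secret suffix constants with one naming rule. A sketch of the scheme — the helper body is inferred from the `"%s-%s-secret"` format used for `secretFromSecretName` above, not copied from the Operator source:

```go
package main

import "fmt"

// userSecretNameFromClusterName mirrors the presumed behavior of
// crv1.UserSecretNameFromClusterName: "<cluster>-<username>-secret".
func userSecretNameFromClusterName(clusterName, username string) string {
	return fmt.Sprintf("%s-%s-secret", clusterName, username)
}

func main() {
	// One format reproduces all three legacy names: the superuser secret
	// (formerly built with RootSecretSuffix), the replication user secret
	// (formerly PrimarySecretSuffix), and the per-user secret.
	fmt.Println(userSecretNameFromClusterName("hippo", "postgres"))    // hippo-postgres-secret
	fmt.Println(userSecretNameFromClusterName("hippo", "primaryuser")) // hippo-primaryuser-secret
	fmt.Println(userSecretNameFromClusterName("hippo", "hippo"))       // hippo-hippo-secret
}
```

This is why the `rootsecretname`, `primarysecretname`, and `usersecretname` CRD attributes can be dropped: every name is now derivable from the cluster name and the user name alone.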
// UpdateCluster ...
diff --git a/internal/apiserver/scheduleservice/scheduleimpl.go b/internal/apiserver/scheduleservice/scheduleimpl.go
index f8c70ae059..8de3c624ea 100644
--- a/internal/apiserver/scheduleservice/scheduleimpl.go
+++ b/internal/apiserver/scheduleservice/scheduleimpl.go
@@ -79,7 +79,7 @@ func (s scheduleRequest) createPolicySchedule(cluster *crv1.Pgcluster, ns string
}
if s.Request.Secret == "" {
- s.Request.Secret = cluster.Spec.PrimarySecretName
+ s.Request.Secret = crv1.UserSecretName(cluster, crv1.PGUserReplication)
}
schedule := &PgScheduleSpec{
Name: name,
diff --git a/internal/apiserver/userservice/userimpl.go b/internal/apiserver/userservice/userimpl.go
index cc63850ba2..1264e9b5c1 100644
--- a/internal/apiserver/userservice/userimpl.go
+++ b/internal/apiserver/userservice/userimpl.go
@@ -250,8 +250,7 @@ func CreateUser(request *msgs.CreateUserRequest, pgouser string) msgs.CreateUser
// if this user is "managed" by the Operator, add a secret. If there is an
// error, we can fall through as the next step is appending the results
if request.ManagedUser {
- if err := util.CreateUserSecret(apiserver.Clientset, cluster.Spec.ClusterName, result.Username,
- result.Password, cluster.Spec.Namespace); err != nil {
+ if err := util.CreateUserSecret(apiserver.Clientset, cluster, result.Username, result.Password); err != nil {
log.Error(err)
result.Error = true
@@ -549,16 +548,7 @@ func ShowUser(request *msgs.ShowUserRequest) msgs.ShowUserResponse {
//
// We ignore any errors...if the password gets set, we add it. If not, we
// don't
- secretName := ""
-
- // handle special cases with user names + secrets lining up
- switch result.Username {
- default:
- secretName = fmt.Sprintf(util.UserSecretFormat, result.ClusterName, result.Username)
- case "ccp_monitoring":
- secretName = util.GenerateExporterSecretName(result.ClusterName)
- }
-
+ secretName := crv1.UserSecretName(&cluster, result.Username)
password, _ := util.GetPasswordFromSecret(apiserver.Clientset, pod.Namespace, secretName)
if password != "" {
@@ -662,7 +652,7 @@ func UpdateUser(request *msgs.UpdateUserRequest, pgouser string) msgs.UpdateUser
// error in here, but do nothing with it
func deleteUserSecret(cluster crv1.Pgcluster, username string) {
ctx := context.TODO()
- secretName := fmt.Sprintf(util.UserSecretFormat, cluster.Spec.ClusterName, username)
+ secretName := crv1.UserSecretName(&cluster, username)
err := apiserver.Clientset.CoreV1().Secrets(cluster.Spec.Namespace).
Delete(ctx, secretName, metav1.DeleteOptions{})
if err != nil {
@@ -1169,13 +1159,12 @@ func updateUser(request *msgs.UpdateUserRequest, cluster *crv1.Pgcluster) msgs.U
// has a "managed" account (i.e. there is a secret for this user account),
// we can now update the value of that password in the secret
if isChanged {
- secretName := fmt.Sprintf(util.UserSecretFormat, cluster.Spec.ClusterName, result.Username)
+ secretName := crv1.UserSecretName(cluster, result.Username)
// only call update user secret if the secret exists
if _, err := apiserver.Clientset.CoreV1().Secrets(cluster.Namespace).Get(ctx, secretName, metav1.GetOptions{}); err == nil {
// if we cannot update the user secret, only warn that we cannot do so
- if err := util.UpdateUserSecret(apiserver.Clientset, cluster.Spec.ClusterName,
- result.Username, result.Password, cluster.Namespace); err != nil {
+ if err := util.UpdateUserSecret(apiserver.Clientset, cluster, result.Username, result.Password); err != nil {
log.Warn(err)
}
}
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index b71aac778f..0c5ddda5c2 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -699,13 +699,11 @@ func annotateBackrestSecret(clientset kubernetes.Interface, cluster *crv1.Pgclus
// suffix for a given secret, as well as the user name
// createMissingUserSecret(clientset, cluster, crv1.RootSecretSuffix,
//	crv1.PGUserSuperuser)
-func createMissingUserSecret(clientset kubernetes.Interface, cluster *crv1.Pgcluster,
- secretNameSuffix, username string) error {
+func createMissingUserSecret(clientset kubernetes.Interface, cluster *crv1.Pgcluster, username string) error {
ctx := context.TODO()
- // the secretName is just the combination cluster name and the
- // secretNameSuffix
- secretName := cluster.Spec.Name + secretNameSuffix
+ // derive the secret name
+ secretName := crv1.UserSecretName(cluster, username)
// if the secret already exists, skip it
// if it returns an error other than "not found" return an error
@@ -738,18 +736,17 @@ func createMissingUserSecret(clientset kubernetes.Interface, cluster *crv1.Pgclu
func createMissingUserSecrets(clientset kubernetes.Interface, cluster *crv1.Pgcluster) error {
// first, determine if we need to create a user secret for the postgres
// superuser
- if err := createMissingUserSecret(clientset, cluster, crv1.RootSecretSuffix, crv1.PGUserSuperuser); err != nil {
+ if err := createMissingUserSecret(clientset, cluster, crv1.PGUserSuperuser); err != nil {
return err
}
// next, determine if we need to create a user secret for the replication user
- if err := createMissingUserSecret(clientset, cluster, crv1.PrimarySecretSuffix, crv1.PGUserReplication); err != nil {
+ if err := createMissingUserSecret(clientset, cluster, crv1.PGUserReplication); err != nil {
return err
}
// finally, determine if we need to create a user secret for the regular user
- userSecretSuffix := fmt.Sprintf("-%s%s", cluster.Spec.User, crv1.UserSecretSuffix)
- return createMissingUserSecret(clientset, cluster, userSecretSuffix, cluster.Spec.User)
+ return createMissingUserSecret(clientset, cluster, cluster.Spec.User)
}
func deleteConfigMaps(clientset kubernetes.Interface, clusterName, ns string) error {
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index 2eca688f6a..f91e45b365 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -315,9 +315,9 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
DataPathOverride: cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
Database: cl.Spec.Database,
SecurityContext: operator.GetPodSecurityContext(supplementalGroups),
- RootSecretName: cl.Spec.RootSecretName,
- PrimarySecretName: cl.Spec.PrimarySecretName,
- UserSecretName: cl.Spec.UserSecretName,
+ RootSecretName: crv1.UserSecretName(cl, crv1.PGUserSuperuser),
+ PrimarySecretName: crv1.UserSecretName(cl, crv1.PGUserReplication),
+ UserSecretName: crv1.UserSecretName(cl, cl.Spec.User),
NodeSelector: operator.GetAffinity(cl.Spec.UserLabels["NodeLabelKey"], cl.Spec.UserLabels["NodeLabelValue"], "In"),
PodAntiAffinity: operator.GetPodAntiAffinity(cl, crv1.PodAntiAffinityDeploymentDefault, cl.Spec.PodAntiAffinity.Default),
ContainerResources: operator.GetResourcesJSON(cl.Spec.Resources, cl.Spec.Limits),
@@ -473,9 +473,9 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
PodAnnotations: operator.GetAnnotations(cluster, crv1.ClusterAnnotationPostgres),
PodLabels: operator.GetLabelsFromMap(cluster.Spec.UserLabels),
SecurityContext: operator.GetPodSecurityContext(supplementalGroups),
- RootSecretName: cluster.Spec.RootSecretName,
- PrimarySecretName: cluster.Spec.PrimarySecretName,
- UserSecretName: cluster.Spec.UserSecretName,
+ RootSecretName: crv1.UserSecretName(cluster, crv1.PGUserSuperuser),
+ PrimarySecretName: crv1.UserSecretName(cluster, crv1.PGUserReplication),
+ UserSecretName: crv1.UserSecretName(cluster, cluster.Spec.User),
ContainerResources: operator.GetResourcesJSON(cluster.Spec.Resources, cluster.Spec.Limits),
NodeSelector: operator.GetAffinity(replica.Spec.UserLabels["NodeLabelKey"], replica.Spec.UserLabels["NodeLabelValue"], "In"),
PodAntiAffinity: operator.GetPodAntiAffinity(cluster, crv1.PodAntiAffinityDeploymentDefault, cluster.Spec.PodAntiAffinity.Default),
diff --git a/internal/util/secrets.go b/internal/util/secrets.go
index c8509e332f..be8c2f4288 100644
--- a/internal/util/secrets.go
+++ b/internal/util/secrets.go
@@ -18,7 +18,6 @@ package util
import (
"context"
"crypto/rand"
- "fmt"
"math/big"
"strconv"
"strings"
@@ -32,10 +31,6 @@ import (
"k8s.io/client-go/kubernetes"
)
-// UserSecretFormat follows the pattern of how the user information is stored,
-// which is "<clusterName>-<userName>-secret"
-const UserSecretFormat = "%s-%s" + crv1.UserSecretSuffix
-
// The following constants are used as a part of password generation. For more
// information on these selections, please consult the ASCII man page
// (`man ascii`)
@@ -141,10 +136,10 @@ func IsPostgreSQLUserSystemAccount(username string) bool {
}
// CreateUserSecret will create a new secret holding a user credential
-func CreateUserSecret(clientset kubernetes.Interface, clustername, username, password, namespace string) error {
- secretName := fmt.Sprintf(UserSecretFormat, clustername, username)
+func CreateUserSecret(clientset kubernetes.Interface, cluster *crv1.Pgcluster, username, password string) error {
+ secretName := crv1.UserSecretName(cluster, username)
- if err := CreateSecret(clientset, clustername, secretName, username, password, namespace); err != nil {
+ if err := CreateSecret(clientset, cluster.Name, secretName, username, password, cluster.Namespace); err != nil {
log.Error(err)
return err
}
@@ -157,12 +152,12 @@ func CreateUserSecret(clientset kubernetes.Interface, clustername, username, pas
//
// 1. If the Secret exists, it updates the value of the Secret
// 2. If the Secret does not exist, it creates the secret
-func UpdateUserSecret(clientset kubernetes.Interface, clustername, username, password, namespace string) error {
+func UpdateUserSecret(clientset kubernetes.Interface, cluster *crv1.Pgcluster, username, password string) error {
ctx := context.TODO()
- secretName := fmt.Sprintf(UserSecretFormat, clustername, username)
+ secretName := crv1.UserSecretName(cluster, username)
// see if the secret already exists
- secret, err := clientset.CoreV1().Secrets(namespace).Get(ctx, secretName, metav1.GetOptions{})
+ secret, err := clientset.CoreV1().Secrets(cluster.Namespace).Get(ctx, secretName, metav1.GetOptions{})
// if this returns an error and it's not the "not found" error, return
// However, if it is the "not found" error, treat this as creating the user
// secret
@@ -171,7 +166,7 @@ func UpdateUserSecret(clientset kubernetes.Interface, clustername, username, pas
return err
}
- return CreateUserSecret(clientset, clustername, username, password, namespace)
+ return CreateUserSecret(clientset, cluster, username, password)
}
// update the value of "password"
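The `UpdateUserSecret` flow above is a standard upsert against the Kubernetes Secrets API: fetch the Secret, create it if the API returns "not found", otherwise patch the password value. A minimal self-contained sketch of that control flow, with an in-memory map standing in for the Secrets API (the store type and error value are illustrative, not the operator's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the Kubernetes "not found" API error.
var errNotFound = errors.New("secret not found")

// secretStore is a stand-in for the Secrets API of a single namespace.
type secretStore map[string]map[string][]byte

func (s secretStore) get(name string) (map[string][]byte, error) {
	sec, ok := s[name]
	if !ok {
		return nil, errNotFound
	}
	return sec, nil
}

// upsertUserSecret mirrors the UpdateUserSecret flow: look up the Secret,
// create it when it does not exist, otherwise update only the "password" key.
func upsertUserSecret(store secretStore, name, username, password string) error {
	sec, err := store.get(name)
	if err != nil {
		// any error other than "not found" is propagated
		if !errors.Is(err, errNotFound) {
			return err
		}
		// not found: create the Secret, as CreateUserSecret would
		store[name] = map[string][]byte{
			"username": []byte(username),
			"password": []byte(password),
		}
		return nil
	}
	// found: update the password value in place
	sec["password"] = []byte(password)
	return nil
}

func main() {
	store := secretStore{}
	_ = upsertUserSecret(store, "hippo-puppy-secret", "puppy", "s3cret")
	fmt.Println(string(store["hippo-puppy-secret"]["password"]))
}
```

In the real code the "not found" branch is detected with the apimachinery `kerrors.IsNotFound` helper rather than a sentinel error.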
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index c487bc81b8..d89e10f6c5 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -107,9 +107,6 @@ type PgclusterSpec struct {
User string `json:"user"`
Database string `json:"database"`
Replicas string `json:"replicas"`
- UserSecretName string `json:"usersecretname"`
- RootSecretName string `json:"rootsecretname"`
- PrimarySecretName string `json:"primarysecretname"`
Status string `json:"status"`
CustomConfig string `json:"customconfig"`
UserLabels map[string]string `json:"userlabels"`
@@ -348,3 +345,23 @@ func (p PodAntiAffinityType) Validate() error {
return fmt.Errorf("Invalid pod anti-affinity type. Valid values are '%s', '%s' or '%s'",
PodAntiAffinityRequired, PodAntiAffinityPreffered, PodAntiAffinityDisabled)
}
+
+// UserSecretName returns the name of a Kubernetes Secret representing the user.
+// It delegates to UserSecretNameFromClusterName. This is the preferred method,
+// as it requires less thought from the caller, but there are some (one?)
+// cases where UserSecretNameFromClusterName must be called directly because
+// the cluster object is unavailable
+func UserSecretName(cluster *Pgcluster, username string) string {
+ return UserSecretNameFromClusterName(cluster.Name, username)
+}
+
+// UserSecretNameFromClusterName returns the name of a Kubernetes Secret
+// representing a user.
+func UserSecretNameFromClusterName(clusterName, username string) string {
+ switch username {
+ default: // standard format
+ return fmt.Sprintf("%s-%s-secret", clusterName, username)
+ case PGUserMonitor:
+ return fmt.Sprintf("%s-exporter-secret", clusterName)
+ }
+}
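Extracted from the patch context, the naming scheme that `UserSecretNameFromClusterName` introduces can be sketched as a standalone function. The monitoring user is special-cased to an `-exporter-secret` suffix; every other user follows `<cluster>-<user>-secret`. (The `PGUserMonitor` value below is an assumption mirroring the CRD package's constant, not taken from this diff.)

```go
package main

import "fmt"

// pgUserMonitor mirrors the PGUserMonitor constant assumed
// from the crunchydata.com/v1 package.
const pgUserMonitor = "ccp_monitoring"

// userSecretNameFromClusterName reproduces the convention above:
// the monitoring user maps to "<cluster>-exporter-secret", every
// other user to "<cluster>-<user>-secret".
func userSecretNameFromClusterName(clusterName, username string) string {
	if username == pgUserMonitor {
		return fmt.Sprintf("%s-exporter-secret", clusterName)
	}
	return fmt.Sprintf("%s-%s-secret", clusterName, username)
}

func main() {
	fmt.Println(userSecretNameFromClusterName("hippo", "puppy"))       // hippo-puppy-secret
	fmt.Println(userSecretNameFromClusterName("hippo", pgUserMonitor)) // hippo-exporter-secret
}
```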
diff --git a/pkg/apis/crunchydata.com/v1/cluster_test.go b/pkg/apis/crunchydata.com/v1/cluster_test.go
new file mode 100644
index 0000000000..c2f6c70bb3
--- /dev/null
+++ b/pkg/apis/crunchydata.com/v1/cluster_test.go
@@ -0,0 +1,75 @@
+package v1
+
+/*
+ Copyright 2020 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+import (
+ "fmt"
+ "testing"
+
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func TestUserSecretName(t *testing.T) {
+ cluster := &Pgcluster{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "second-pick",
+ },
+ Spec: PgclusterSpec{
+ ClusterName: "second-pick",
+ User: "puppy",
+ },
+ }
+
+ t.Run(PGUserMonitor, func(t *testing.T) {
+ expected := fmt.Sprintf("%s-%s-secret", cluster.Name, "exporter")
+ actual := UserSecretName(cluster, PGUserMonitor)
+ if expected != actual {
+ t.Fatalf("expected %q, got %q", expected, actual)
+ }
+ })
+
+ t.Run("any other user", func(t *testing.T) {
+ for _, user := range []string{PGUserSuperuser, PGUserReplication, cluster.Spec.User} {
+ expected := fmt.Sprintf("%s-%s-secret", cluster.Name, user)
+ actual := UserSecretName(cluster, user)
+ if expected != actual {
+ t.Fatalf("expected %q, got %q", expected, actual)
+ }
+ }
+ })
+}
+
+func TestUserSecretNameFromClusterName(t *testing.T) {
+ clusterName := "second-pick"
+
+ t.Run(PGUserMonitor, func(t *testing.T) {
+ expected := fmt.Sprintf("%s-%s-secret", clusterName, "exporter")
+ actual := UserSecretNameFromClusterName(clusterName, PGUserMonitor)
+ if expected != actual {
+ t.Fatalf("expected %q, got %q", expected, actual)
+ }
+ })
+
+ t.Run("any other user", func(t *testing.T) {
+ for _, user := range []string{PGUserSuperuser, PGUserReplication, "puppy"} {
+ expected := fmt.Sprintf("%s-%s-secret", clusterName, user)
+ actual := UserSecretNameFromClusterName(clusterName, user)
+ if expected != actual {
+ t.Fatalf("expected %q, got %q", expected, actual)
+ }
+ }
+ })
+}
diff --git a/pkg/apis/crunchydata.com/v1/common.go b/pkg/apis/crunchydata.com/v1/common.go
index fcd2238f36..33818edf72 100644
--- a/pkg/apis/crunchydata.com/v1/common.go
+++ b/pkg/apis/crunchydata.com/v1/common.go
@@ -22,15 +22,6 @@ import (
log "github.com/sirupsen/logrus"
)
-// RootSecretSuffix ...
-const RootSecretSuffix = "-postgres-secret"
-
-// UserSecretSuffix ...
-const UserSecretSuffix = "-secret"
-
-// PrimarySecretSuffix ...
-const PrimarySecretSuffix = "-primaryuser-secret"
-
// StorageExisting ...
const StorageExisting = "existing"
From c91015cfe28ea870fae1699c676adeb977b36e4d Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 27 Dec 2020 21:41:46 -0500
Subject: [PATCH 082/276] Remove superfluous "pgo-backrest" label on pgcluster
This is not used anymore as pgBackRest is always enabled for
a PostgreSQL cluster, and even if it were togglable, this would
need to be a CRD attribute.
This also removes the "ArchiveMode" configuration parameter, which
has not been used for a long time.
---
conf/postgres-operator/pgo.yaml | 1 -
docs/content/custom-resources/_index.md | 2 --
examples/create-by-resource/fromcrd.json | 4 +--
.../create-cluster/templates/pgcluster.yaml | 1 -
examples/kustomize/createcluster/README.md | 8 ++---
.../createcluster/base/pgcluster.yaml | 1 -
.../roles/pgo-operator/templates/pgo.yaml.j2 | 1 -
.../apiserver/backrestservice/backrestimpl.go | 13 -------
.../apiserver/clusterservice/clusterimpl.go | 4 ---
internal/operator/cluster/clusterlogic.go | 14 ++------
internal/operator/cluster/upgrade.go | 4 ---
internal/operator/clusterutilities.go | 36 +++++++++----------
12 files changed, 23 insertions(+), 66 deletions(-)
diff --git a/conf/postgres-operator/pgo.yaml b/conf/postgres-operator/pgo.yaml
index 622fc268f8..0cb8bfadc1 100644
--- a/conf/postgres-operator/pgo.yaml
+++ b/conf/postgres-operator/pgo.yaml
@@ -11,7 +11,6 @@ Cluster:
PasswordAgeDays: 0
PasswordLength: 24
Replicas: 0
- ArchiveMode: false
ServiceType: ClusterIP
BackrestPort: 2022
BackrestS3Bucket:
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index a9f5879c4f..b5d58308c0 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -83,7 +83,6 @@ metadata:
name: ${pgo_cluster_name}
pg-cluster: ${pgo_cluster_name}
pg-pod-anti-affinity: ""
- pgo-backrest: "true"
pgo-version: {{< param operatorVersion >}}
pgouser: admin
name: ${pgo_cluster_name}
@@ -301,7 +300,6 @@ metadata:
name: ${pgo_cluster_name}
pg-cluster: ${pgo_cluster_name}
pg-pod-anti-affinity: ""
- pgo-backrest: "true"
pgo-version: {{< param operatorVersion >}}
pgouser: admin
name: ${pgo_cluster_name}
diff --git a/examples/create-by-resource/fromcrd.json b/examples/create-by-resource/fromcrd.json
index 8713deb719..30384d2316 100644
--- a/examples/create-by-resource/fromcrd.json
+++ b/examples/create-by-resource/fromcrd.json
@@ -15,7 +15,6 @@
"name": "fromcrd",
"pg-cluster": "fromcrd",
"pg-pod-anti-affinity": "",
- "pgo-backrest": "true",
"pgo-version": "4.5.1",
"pgouser": "pgoadmin",
"primary": "true"
@@ -85,8 +84,7 @@
"crunchy-postgres-exporter": "false",
"pg-pod-anti-affinity": "",
"pgo-version": "4.5.1",
- "pgouser": "pgoadmin",
- "pgo-backrest": "true"
+ "pgouser": "pgoadmin"
}
}
}
diff --git a/examples/helm/create-cluster/templates/pgcluster.yaml b/examples/helm/create-cluster/templates/pgcluster.yaml
index 65f12d6460..3378f37fd9 100644
--- a/examples/helm/create-cluster/templates/pgcluster.yaml
+++ b/examples/helm/create-cluster/templates/pgcluster.yaml
@@ -12,7 +12,6 @@ metadata:
name: {{ .Values.pgclustername }}
pg-cluster: {{ .Values.pgclustername }}
pg-pod-anti-affinity: ""
- pgo-backrest: "true"
pgo-version: 4.5.1
pgouser: admin
name: {{ .Values.pgclustername }}
diff --git a/examples/kustomize/createcluster/README.md b/examples/kustomize/createcluster/README.md
index f21cae2f90..ddca1fd70a 100644
--- a/examples/kustomize/createcluster/README.md
+++ b/examples/kustomize/createcluster/README.md
@@ -50,7 +50,7 @@ cluster : hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
deployment : hippo
deployment : hippo-backrest-shared-repo
service : hippo - ClusterIP (10.0.56.86) - Ports (2022/TCP, 5432/TCP)
- labels : pg-pod-anti-affinity= pgo-backrest=true pgo-version=4.5.1 crunchy-postgres-exporter=false name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata autofail=true crunchy-pgbadger=false
+ labels : pg-pod-anti-affinity= pgo-version=4.5.1 crunchy-postgres-exporter=false name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata autofail=true crunchy-pgbadger=false
```
Feel free to run other pgo cli commands on the hippo cluster
@@ -87,7 +87,7 @@ cluster : dev-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
deployment : dev-hippo-pgbouncer
service : dev-hippo - ClusterIP (10.0.62.87) - Ports (2022/TCP, 5432/TCP)
service : dev-hippo-pgbouncer - ClusterIP (10.0.48.120) - Ports (5432/TCP)
- labels : crunchy-pgha-scope=dev-hippo crunchy-postgres-exporter=false name=dev-hippo pg-cluster=dev-hippo pg-pod-anti-affinity= pgo-backrest=true vendor=crunchydata autofail=true crunchy-pgbadger=false deployment-name=dev-hippo environment=development pgo-version=4.5.1 pgouser=admin
+ labels : crunchy-pgha-scope=dev-hippo crunchy-postgres-exporter=false name=dev-hippo pg-cluster=dev-hippo pg-pod-anti-affinity= vendor=crunchydata autofail=true crunchy-pgbadger=false deployment-name=dev-hippo environment=development pgo-version=4.5.1 pgouser=admin
```
#### staging
The staging overlay will deploy a Crunchy PostgreSQL cluster with 2 replicas, with annotations added
@@ -128,7 +128,7 @@ cluster : staging-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
service : staging-hippo-replica - ClusterIP (10.0.56.57) - Ports (2022/TCP, 5432/TCP)
pgreplica : staging-hippo-lnxw
pgreplica : staging-hippo-rpl1
- labels : deployment-name=staging-hippo environment=staging name=staging-hippo pg-pod-anti-affinity= crunchy-postgres-exporter=false crunchy-pgbadger=false crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-backrest=true pgo-version=4.5.1 pgouser=admin vendor=crunchydata autofail=true
+ labels : deployment-name=staging-hippo environment=staging name=staging-hippo pg-pod-anti-affinity= crunchy-postgres-exporter=false crunchy-pgbadger=false crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-version=4.5.1 pgouser=admin vendor=crunchydata autofail=true
```
#### production
@@ -165,7 +165,7 @@ cluster : prod-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
service : prod-hippo - ClusterIP (10.0.56.18) - Ports (2022/TCP, 5432/TCP)
service : prod-hippo-replica - ClusterIP (10.0.56.101) - Ports (2022/TCP, 5432/TCP)
pgreplica : prod-hippo-flty
- labels : pgo-backrest=true pgo-version=4.5.1 crunchy-pgbadger=false crunchy-postgres-exporter=false deployment-name=prod-hippo environment=production pg-cluster=prod-hippo pg-pod-anti-affinity= autofail=true crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
+ labels : pgo-version=4.5.1 crunchy-pgbadger=false crunchy-postgres-exporter=false deployment-name=prod-hippo environment=production pg-cluster=prod-hippo pg-pod-anti-affinity= autofail=true crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
```
### Delete the clusters
To delete the clusters run the following pgo cli commands
diff --git a/examples/kustomize/createcluster/base/pgcluster.yaml b/examples/kustomize/createcluster/base/pgcluster.yaml
index 975a7ddf5d..6dbcd21517 100644
--- a/examples/kustomize/createcluster/base/pgcluster.yaml
+++ b/examples/kustomize/createcluster/base/pgcluster.yaml
@@ -12,7 +12,6 @@ metadata:
name: hippo
pg-cluster: hippo
pg-pod-anti-affinity: ""
- pgo-backrest: "true"
pgo-version: 4.5.1
pgouser: admin
name: hippo
diff --git a/installers/ansible/roles/pgo-operator/templates/pgo.yaml.j2 b/installers/ansible/roles/pgo-operator/templates/pgo.yaml.j2
index e87a245bed..2c8f272bde 100644
--- a/installers/ansible/roles/pgo-operator/templates/pgo.yaml.j2
+++ b/installers/ansible/roles/pgo-operator/templates/pgo.yaml.j2
@@ -18,7 +18,6 @@ Cluster:
PasswordAgeDays: {{ db_password_age_days }}
PasswordLength: {{ db_password_length }}
Replicas: {{ db_replicas }}
- ArchiveMode: {{ archive_mode }}
ServiceType: {{ service_type }}
DisableReplicaStartFailReinit: {{ disable_replica_start_fail_reinit }}
PodAntiAffinity: {{ pod_anti_affinity }}
diff --git a/internal/apiserver/backrestservice/backrestimpl.go b/internal/apiserver/backrestservice/backrestimpl.go
index 6f1aee538a..1d545e387e 100644
--- a/internal/apiserver/backrestservice/backrestimpl.go
+++ b/internal/apiserver/backrestservice/backrestimpl.go
@@ -150,12 +150,6 @@ func CreateBackup(request *msgs.CreateBackrestBackupRequest, ns, pgouser string)
return resp
}
- if cluster.Labels[config.LABEL_BACKREST] != "true" {
- resp.Status.Code = msgs.Error
- resp.Status.Msg = clusterName + " does not have pgbackrest enabled"
- return resp
- }
-
err = util.ValidateBackrestStorageTypeOnBackupRestore(request.BackrestStorageType,
cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], false)
if err != nil {
@@ -551,13 +545,6 @@ func Restore(request *msgs.RestoreRequest, ns, pgouser string) msgs.RestoreRespo
return resp
}
- // verify that the cluster we are restoring from has backrest enabled
- if cluster.Labels[config.LABEL_BACKREST] != "true" {
- resp.Status.Code = msgs.Error
- resp.Status.Msg = "can't restore, cluster restoring from does not have backrest enabled"
- return resp
- }
-
// Return an error if any clusters identified for the restore are in standby mode. Restoring
// from a standby cluster is not allowed since the cluster is following a remote primary,
// which itself is responsible for performing any restores as required for the cluster.
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 43106d7dba..ee8b01e0d6 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -1502,10 +1502,6 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
labels[config.LABEL_BADGER] = "true"
}
- // pgBackRest is always set to true. This is here due to a time where
- // pgBackRest was not the only way
- labels[config.LABEL_BACKREST] = "true"
-
newInstance := &crv1.Pgcluster{
ObjectMeta: metav1.ObjectMeta{
Name: name,
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index f91e45b365..e527df2cd4 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -326,7 +326,7 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
BadgerAddon: operator.GetBadgerAddon(clientset, namespace, cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY]),
PgmonitorEnvVars: operator.GetPgmonitorEnvVars(cl),
ScopeLabel: config.LABEL_PGHA_SCOPE,
- PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cl, cl.Labels[config.LABEL_BACKREST], cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
+ PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
cl.Spec.Port, cl.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]),
PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(*cl, clientset, namespace),
ReplicaReinitOnStartFail: !operator.Pgo.Cluster.DisableReplicaStartFailReinit,
@@ -421,15 +421,6 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
cluster.Spec.UserLabels["name"] = serviceName
cluster.Spec.UserLabels[config.LABEL_PG_CLUSTER] = replica.Spec.ClusterName
- archiveMode := "off"
- if cluster.Spec.UserLabels[config.LABEL_ARCHIVE] == "true" {
- archiveMode = "on"
- }
- if cluster.Labels[config.LABEL_BACKREST] == "true" {
- // backrest requires archive mode be set to on
- archiveMode = "on"
- }
-
image := cluster.Spec.CCPImage
// check for --ccp-image-tag at the command line
@@ -466,7 +457,6 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
PVCName: dataVolume.InlineVolumeSource(),
Database: cluster.Spec.Database,
DataPathOverride: replica.Spec.Name,
- ArchiveMode: archiveMode,
Replicas: "1",
ConfVolume: operator.GetConfVolume(clientset, cluster, namespace),
DeploymentLabels: operator.GetLabelsFromMap(cluster.Spec.UserLabels),
@@ -482,7 +472,7 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
ExporterAddon: operator.GetExporterAddon(cluster.Spec),
BadgerAddon: operator.GetBadgerAddon(clientset, namespace, cluster, replica.Spec.Name),
ScopeLabel: config.LABEL_PGHA_SCOPE,
- PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cluster, cluster.Labels[config.LABEL_BACKREST], replica.Spec.Name,
+ PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cluster, replica.Spec.Name,
cluster.Spec.Port, cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]),
PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(*cluster, clientset, namespace),
ReplicaReinitOnStartFail: !operator.Pgo.Cluster.DisableReplicaStartFailReinit,
diff --git a/internal/operator/cluster/upgrade.go b/internal/operator/cluster/upgrade.go
index 4bb6d63a53..0b12c5c4ff 100644
--- a/internal/operator/cluster/upgrade.go
+++ b/internal/operator/cluster/upgrade.go
@@ -540,10 +540,6 @@ func preparePgclusterForUpgrade(pgcluster *crv1.Pgcluster, parameters map[string
pgcluster.Spec.PGBadgerPort = operator.Pgo.Cluster.PGBadgerPort
}
- // ensure that the pgo-backrest label is set to 'true' since pgbackrest is required for normal
- // cluster operations in this version of the Postgres Operator
- pgcluster.ObjectMeta.Labels[config.LABEL_BACKREST] = "true"
-
// added in 4.2 and copied from configuration in 4.4
if pgcluster.Spec.BackrestS3Bucket == "" {
pgcluster.Spec.BackrestS3Bucket = operator.Pgo.Cluster.BackrestS3Bucket
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index 163850861e..3281a43c7e 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -161,7 +161,6 @@ type DeploymentTemplateFields struct {
PodAnnotations string
PodLabels string
DataPathOverride string
- ArchiveMode string
PVCName string
RootSecretName string
UserSecretName string
@@ -277,27 +276,24 @@ func GetAnnotations(cluster *crv1.Pgcluster, annotationType crv1.ClusterAnnotati
}
// consolidate with cluster.GetPgbackrestEnvVars
-func GetPgbackrestEnvVars(cluster *crv1.Pgcluster, backrestEnabled, depName, port, storageType string) string {
- if backrestEnabled == "true" {
- fields := PgbackrestEnvVarsTemplateFields{
- PgbackrestStanza: "db",
- PgbackrestRepo1Host: cluster.Name + "-backrest-shared-repo",
- PgbackrestRepo1Path: util.GetPGBackRestRepoPath(*cluster),
- PgbackrestDBPath: "/pgdata/" + depName,
- PgbackrestPGPort: port,
- PgbackrestRepo1Type: GetRepoType(storageType),
- PgbackrestLocalAndS3Storage: IsLocalAndS3Storage(storageType),
- }
+func GetPgbackrestEnvVars(cluster *crv1.Pgcluster, depName, port, storageType string) string {
+ fields := PgbackrestEnvVarsTemplateFields{
+ PgbackrestStanza: "db",
+ PgbackrestRepo1Host: cluster.Name + "-backrest-shared-repo",
+ PgbackrestRepo1Path: util.GetPGBackRestRepoPath(*cluster),
+ PgbackrestDBPath: "/pgdata/" + depName,
+ PgbackrestPGPort: port,
+ PgbackrestRepo1Type: GetRepoType(storageType),
+ PgbackrestLocalAndS3Storage: IsLocalAndS3Storage(storageType),
+ }
- var doc bytes.Buffer
- err := config.PgbackrestEnvVarsTemplate.Execute(&doc, fields)
- if err != nil {
- log.Error(err.Error())
- return ""
- }
- return doc.String()
+ doc := bytes.Buffer{}
+ if err := config.PgbackrestEnvVarsTemplate.Execute(&doc, fields); err != nil {
+ log.Error(err.Error())
+ return ""
}
- return ""
+
+ return doc.String()
}
// GetPgbackrestBootstrapEnvVars returns a string containing the pgBackRest environment variables
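The refactored `GetPgbackrestEnvVars` above follows a common Go pattern: populate a fields struct, execute a template into a `bytes.Buffer`, and return the rendered string, logging and returning `""` on error. A minimal self-contained sketch of that pattern (the template text and field names are illustrative, not the operator's actual template):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// envVarsFields holds the values interpolated into the template,
// loosely modeled on PgbackrestEnvVarsTemplateFields.
type envVarsFields struct {
	Stanza    string
	Repo1Host string
	DBPath    string
}

// envVarsTemplate is parsed once at package init; template.Must
// panics on a parse error, which is appropriate for a static template.
var envVarsTemplate = template.Must(template.New("env").Parse(
	`PGBACKREST_STANZA={{.Stanza}}
PGBACKREST_REPO1_HOST={{.Repo1Host}}
PGBACKREST_DB_PATH={{.DBPath}}`))

// renderEnvVars executes the template into a buffer; on failure it
// returns "" rather than propagating the error, matching the patch's
// error-handling choice.
func renderEnvVars(f envVarsFields) string {
	doc := bytes.Buffer{}
	if err := envVarsTemplate.Execute(&doc, f); err != nil {
		return ""
	}
	return doc.String()
}

func main() {
	fmt.Println(renderEnvVars(envVarsFields{
		Stanza:    "db",
		Repo1Host: "hippo-backrest-shared-repo",
		DBPath:    "/pgdata/hippo",
	}))
}
```

Swallowing the error and returning an empty string keeps callers simple, at the cost of deferring failures to wherever the empty env-var block is consumed.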
From 41eff90f1cebe230f8003241a8ef3fe40cc2fa88 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 27 Dec 2020 22:30:33 -0500
Subject: [PATCH 083/276] Simplify the custom resource examples for pgcluster
There are many attributes that are not actually required to be
set when creating a new PostgreSQL cluster. This simplifies the
examples to only use the attributes that are (mostly) required
to be set.
---
docs/content/custom-resources/_index.md | 50 ++-----------------
examples/create-by-resource/fromcrd.json | 25 +---------
.../create-cluster/templates/pgcluster.yaml | 29 +----------
.../createcluster/base/pgcluster.yaml | 36 +------------
4 files changed, 8 insertions(+), 132 deletions(-)
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index b5d58308c0..eb3f7cc21c 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -112,30 +112,16 @@ spec:
storageclass: ""
storagetype: create
supplementalgroups: ""
- annotations:
- backrestLimits: {}
- backrestRepoPath: ""
- backrestResources:
- memory: 48Mi
- backrestS3Bucket: ""
- backrestS3Endpoint: ""
- backrestS3Region: ""
- backrestS3URIStyle: ""
- backrestS3VerifyTLS: ""
+ annotations: {}
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
ccpimagetag: {{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}}
clustername: ${pgo_cluster_name}
- customconfig: ""
database: ${pgo_cluster_name}
- exporter: false
exporterport: "9187"
limits: {}
name: ${pgo_cluster_name}
namespace: ${cluster_namespace}
- pgBouncer:
- limits: {}
- replicas: 0
pgDataSource:
restoreFrom: ""
restoreOpts: ""
@@ -145,21 +131,10 @@ spec:
default: preferred
pgBackRest: preferred
pgBouncer: preferred
- policies: ""
port: "5432"
- replicas: "0"
- shutdown: false
- standby: false
- tablespaceMounts: {}
- tls:
- caSecret: ""
- replicationTLSSecret: ""
- tlsSecret: ""
- tlsOnly: false
tolerations: []
user: hippo
userlabels:
- pg-pod-anti-affinity: ""
pgo-version: {{< param operatorVersion >}}
EOF
@@ -329,11 +304,7 @@ spec:
storageclass: ""
storagetype: dynamic
supplementalgroups: ""
- annotations:
- backrestLimits: {}
- backrestRepoPath: ""
- backrestResources:
- memory: 48Mi
+ annotations: {}
backrestS3Bucket: ${backrest_s3_bucket}
backrestS3Endpoint: ${backrest_s3_endpoint}
backrestS3Region: ${backrest_s3_region}
@@ -343,16 +314,11 @@ spec:
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
ccpimagetag: {{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}}
clustername: ${pgo_cluster_name}
- customconfig: ""
database: ${pgo_cluster_name}
- exporter: false
exporterport: "9187"
limits: {}
name: ${pgo_cluster_name}
namespace: ${cluster_namespace}
- pgBouncer:
- limits: {}
- replicas: 0
pgDataSource:
restoreFrom: ""
restoreOpts: ""
@@ -362,22 +328,11 @@ spec:
default: preferred
pgBackRest: preferred
pgBouncer: preferred
- policies: ""
port: "5432"
- replicas: "0"
- shutdown: false
- standby: false
- tablespaceMounts: {}
- tls:
- caSecret: ""
- replicationTLSSecret: ""
- tlsSecret: ""
- tlsOnly: false
tolerations: []
user: hippo
userlabels:
backrest-storage-type: "s3"
- pg-pod-anti-affinity: ""
pgo-version: {{< param operatorVersion >}}
EOF
@@ -469,6 +424,7 @@ spec:
storageclass: ""
storagetype: create
supplementalgroups: ""
+ tolerations: []
userlabels:
NodeLabelKey: ""
NodeLabelValue: ""
diff --git a/examples/create-by-resource/fromcrd.json b/examples/create-by-resource/fromcrd.json
index 30384d2316..49e4665fe7 100644
--- a/examples/create-by-resource/fromcrd.json
+++ b/examples/create-by-resource/fromcrd.json
@@ -9,15 +9,12 @@
"autofail": "true",
"crunchy-pgbadger": "false",
"crunchy-pgha-scope": "fromcrd",
- "crunchy-postgres-exporter": "false",
- "current-primary": "fromcrd",
"deployment-name": "fromcrd",
"name": "fromcrd",
"pg-cluster": "fromcrd",
"pg-pod-anti-affinity": "",
"pgo-version": "4.5.1",
- "pgouser": "pgoadmin",
- "primary": "true"
+ "pgouser": "pgoadmin"
},
"name": "fromcrd",
"namespace": "pgouser1"
@@ -50,41 +47,23 @@
"storagetype": "dynamic",
"supplementalgroups": ""
},
- "backrestResources": {},
"ccpimage": "crunchy-postgres-ha",
"ccpimagetag": "centos7-12.5-4.5.1",
"clustername": "fromcrd",
- "customconfig": "",
"database": "userdb",
"exporterport": "9187",
"name": "fromcrd",
"namespace": "pgouser1",
- "pgBouncer": {
- "replicas": 0,
- "resources": {}
- },
"pgbadgerport": "10000",
"podPodAntiAffinity": {
"default": "preferred",
"pgBackRest": "preferred",
"pgBouncer": "preferred"
},
- "policies": "",
"port": "5432",
- "replicas": "0",
- "secretfrom": "",
- "shutdown": false,
- "standby": false,
- "status": "",
- "syncReplication": null,
- "tablespaceMounts": {},
- "tls": {},
"user": "testuser",
"userlabels": {
- "crunchy-postgres-exporter": "false",
- "pg-pod-anti-affinity": "",
- "pgo-version": "4.5.1",
- "pgouser": "pgoadmin"
+ "pgo-version": "4.5.1"
}
}
}
diff --git a/examples/helm/create-cluster/templates/pgcluster.yaml b/examples/helm/create-cluster/templates/pgcluster.yaml
index 3378f37fd9..1a5e99617d 100644
--- a/examples/helm/create-cluster/templates/pgcluster.yaml
+++ b/examples/helm/create-cluster/templates/pgcluster.yaml
@@ -7,7 +7,6 @@ metadata:
autofail: "true"
crunchy-pgbadger: "false"
crunchy-pgha-scope: {{ .Values.pgclustername }}
- crunchy-postgres-exporter: "false"
deployment-name: {{ .Values.pgclustername }}
name: {{ .Values.pgclustername }}
pg-cluster: {{ .Values.pgclustername }}
@@ -41,29 +40,17 @@ spec:
storageclass: ""
storagetype: dynamic
supplementalgroups: ""
- annotations:
- backrestLimits: {}
- backrestRepoPath: ""
- backrestResources:
- memory: 48Mi
- backrestS3Bucket: ""
- backrestS3Endpoint: ""
- backrestS3Region: ""
- backrestS3URIStyle: ""
- backrestS3VerifyTLS: ""
+ annotations: {}
ccpimage: {{ .Values.ccpimage }}
ccpimageprefix: {{ .Values.ccpimageprefix }}
ccpimagetag: {{ .Values.ccpimagetag }}
clustername: {{ .Values.pgclustername }}
- customconfig: ""
database: {{ .Values.pgclustername }}
+ exporter: false
exporterport: "9187"
limits: {}
name: {{ .Values.pgclustername }}
namespace: {{ .Values.namespace }}
- pgBouncer:
- limits: {}
- replicas: 0
pgDataSource:
restoreFrom: ""
restoreOpts: ""
@@ -73,19 +60,7 @@ spec:
default: preferred
pgBackRest: preferred
pgBouncer: preferred
- policies: ""
port: "5432"
- replicas: "0"
- shutdown: false
- standby: false
- tablespaceMounts: {}
- tls:
- caSecret: ""
- replicationTLSSecret: ""
- tlsSecret: ""
- tlsOnly: false
user: hippo
userlabels:
- crunchy-postgres-exporter: "false"
- pg-pod-anti-affinity: ""
pgo-version: {{ .Values.pgoversion }}
diff --git a/examples/kustomize/createcluster/base/pgcluster.yaml b/examples/kustomize/createcluster/base/pgcluster.yaml
index 6dbcd21517..ed7c27622d 100644
--- a/examples/kustomize/createcluster/base/pgcluster.yaml
+++ b/examples/kustomize/createcluster/base/pgcluster.yaml
@@ -7,7 +7,6 @@ metadata:
autofail: "true"
crunchy-pgbadger: "false"
crunchy-pgha-scope: hippo
- crunchy-postgres-exporter: "false"
deployment-name: hippo
name: hippo
pg-cluster: hippo
@@ -41,24 +40,7 @@ spec:
storageclass: ""
storagetype: dynamic
supplementalgroups: ""
- annotations:
- global:
- favorite: ""
- backrest:
- chair: ""
- pgBouncer:
- pool: ""
- postgres:
- elephant: ""
- backrestLimits: {}
- backrestRepoPath: ""
- backrestResources:
- memory: 48Mi
- backrestS3Bucket: ""
- backrestS3Endpoint: ""
- backrestS3Region: ""
- backrestS3URIStyle: ""
- backrestS3VerifyTLS: ""
+ annotations: {}
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
ccpimagetag: centos7-12.5-4.5.1
@@ -69,11 +51,6 @@ spec:
limits: {}
name: hippo
namespace: pgo
- pgBouncer:
- limits: {}
- replicas: 0
- resources:
- memory: "0"
pgDataSource:
restoreFrom: ""
restoreOpts: ""
@@ -85,17 +62,6 @@ spec:
pgBouncer: preferred
policies: ""
port: "5432"
- replicas: "0"
- shutdown: false
- standby: false
- tablespaceMounts: {}
- tls:
- caSecret: ""
- replicationTLSSecret: ""
- tlsSecret: ""
- tlsOnly: false
user: hippo
userlabels:
- crunchy-postgres-exporter: "false"
- pg-pod-anti-affinity: ""
pgo-version: 4.5.1
From dccdb9dd82bcd7e00378cdaa696ac0e8be271bee Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 27 Dec 2020 23:05:08 -0500
Subject: [PATCH 084/276] Add documentation for TLS-enabled clusters with CR
workflow
This adds an example for how to create a TLS-enabled PostgreSQL
cluster via a custom resource.
---
docs/content/custom-resources/_index.md | 126 ++++++++++++++++++++++++
1 file changed, 126 insertions(+)
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index eb3f7cc21c..f327e17667 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -339,6 +339,132 @@ EOF
kubectl apply -f "${pgo_cluster_name}-pgcluster.yaml"
```
+### Create a PostgreSQL Cluster with TLS
+
+There are three items that are required to enable TLS in your PostgreSQL clusters:
+
+- A CA certificate
+- A TLS private key
+- A TLS certificate
+
+It is possible to [create a PostgreSQL cluster with TLS]({{< relref "tutorial/tls.md" >}}) using a custom resource workflow, provided the three items above have been created.
+
+For a detailed explanation of how TLS works with the PostgreSQL Operator, please see the [TLS tutorial]({{< relref "tutorial/tls.md" >}}).
+
+#### Step 1: Create TLS Secrets
+
+There are two Secrets that need to be created:
+
+1. A Secret containing the certificate authority (CA). You may only need to create this Secret once, as a CA certificate can be shared amongst your clusters.
+2. A Secret that contains the TLS private key & certificate.
+
+This assumes that you have already [generated your TLS certificates](https://www.postgresql.org/docs/current/ssl-tcp.html#SSL-CERTIFICATE-CREATION) where the CA is named `ca.crt` and the server key and certificate are named `server.key` and `server.crt` respectively.
+
+Substitute the correct values for your environment into the environment variables in the example below:
+
+```
+# this variable is the name of the cluster being created
+export pgo_cluster_name=hippo
+# this variable is the namespace the cluster is being deployed into
+export cluster_namespace=pgo
+# this is the local path to where you stored the CA and server key and certificate
+export cluster_tls_asset_path=/path/to
+
+# create the CA secret. if this already exists, it's OK if it fails
+kubectl create secret generic postgresql-ca -n "${cluster_namespace}" \
+ --from-file="ca.crt=${cluster_tls_asset_path}/ca.crt"
+
+# create the server key/certificate secret
+kubectl create secret tls "${pgo_cluster_name}-tls-keypair" -n "${cluster_namespace}" \
+ --cert="${cluster_tls_asset_path}/server.crt" \
+ --key="${cluster_tls_asset_path}/server.key"
+```
+
+#### Step 2: Create the PostgreSQL Cluster
+
+The example below uses the Secrets created in the previous step to create a TLS-enabled PostgreSQL cluster. Additionally, it sets the `tlsOnly` attribute to `true`, which requires all TCP connections to occur over TLS:
+
+```
+# this variable is the name of the cluster being created
+export pgo_cluster_name=hippo
+# this variable is the namespace the cluster is being deployed into
+export cluster_namespace=pgo
+
+cat <<-EOF > "${pgo_cluster_name}-pgcluster.yaml"
+apiVersion: crunchydata.com/v1
+kind: Pgcluster
+metadata:
+ annotations:
+ current-primary: ${pgo_cluster_name}
+ labels:
+ autofail: "true"
+ crunchy-pgbadger: "false"
+ crunchy-pgha-scope: ${pgo_cluster_name}
+ deployment-name: ${pgo_cluster_name}
+ name: ${pgo_cluster_name}
+ pg-cluster: ${pgo_cluster_name}
+ pg-pod-anti-affinity: ""
+ pgo-version: {{< param operatorVersion >}}
+ pgouser: admin
+ name: ${pgo_cluster_name}
+ namespace: ${cluster_namespace}
+spec:
+ BackrestStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ""
+ size: 1G
+ storageclass: ""
+ storagetype: create
+ supplementalgroups: ""
+ PrimaryStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ${pgo_cluster_name}
+ size: 1G
+ storageclass: ""
+ storagetype: create
+ supplementalgroups: ""
+ ReplicaStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ""
+ size: 1G
+ storageclass: ""
+ storagetype: create
+ supplementalgroups: ""
+ annotations: {}
+ ccpimage: crunchy-postgres-ha
+ ccpimageprefix: registry.developers.crunchydata.com/crunchydata
+ ccpimagetag: {{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}}
+ clustername: ${pgo_cluster_name}
+ database: ${pgo_cluster_name}
+ exporterport: "9187"
+ limits: {}
+ name: ${pgo_cluster_name}
+ namespace: ${cluster_namespace}
+ pgDataSource:
+ restoreFrom: ""
+ restoreOpts: ""
+ pgbadgerport: "10000"
+ pgoimageprefix: registry.developers.crunchydata.com/crunchydata
+ podAntiAffinity:
+ default: preferred
+ pgBackRest: preferred
+ pgBouncer: preferred
+ port: "5432"
+ tls:
+ caSecret: postgresql-ca
+ tlsSecret: ${pgo_cluster_name}-tls-keypair
+ tlsOnly: true
+ user: hippo
+ userlabels:
+ pgo-version: {{< param operatorVersion >}}
+EOF
+
+kubectl apply -f "${pgo_cluster_name}-pgcluster.yaml"
+```
+
### Modify a Cluster
The following modification operations are supported on the
From 0c9905f9b866d1831fd07cbdfe19857730d7dcdc Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 28 Dec 2020 15:27:29 -0500
Subject: [PATCH 085/276] Update comment around applying Tolerations
The comment was a bit confusing, likely as it was written in haste.
It now more clearly states the meaning of the conditional and what
it is for.
---
internal/operator/cluster/cluster.go | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 0c5ddda5c2..8bf6e19884 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -642,8 +642,10 @@ func UpdateTolerations(clientset kubeapi.Interface, cluster *crv1.Pgcluster, dep
return err
}
- // if the instance does have specific tolerations, exit here as we do not
- // want to override them
+ // "replica" instances can have toleration overrides. these get managed as
+ // part of the pgreplicas controller, not here. as such, if this "replica"
+ // instance has specific toleration overrides, we will exit here so we do not
+ // apply the cluster-wide tolerations
if len(instance.Spec.Tolerations) != 0 {
return nil
}
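To make the intent of that guard concrete, here is a minimal, self-contained sketch of the same precedence rule. Plain strings stand in for `corev1.Toleration` values, and the helper name is hypothetical, not part of the Operator's API:

```go
package main

import "fmt"

// applyClusterTolerations sketches the guard in UpdateTolerations: a replica
// that carries its own toleration overrides is managed by the pgreplicas
// controller, so the cluster-wide tolerations must not clobber them.
func applyClusterTolerations(instance, cluster []string) []string {
	if len(instance) != 0 {
		// per-replica overrides win: leave the instance untouched
		return instance
	}
	return cluster
}

func main() {
	// no per-replica overrides: the cluster-wide tolerations apply
	fmt.Println(applyClusterTolerations(nil, []string{"node-role=db:NoSchedule"}))
	// per-replica overrides present: cluster-wide tolerations are skipped
	fmt.Println(applyClusterTolerations([]string{"zone=a:NoExecute"}, []string{"node-role=db:NoSchedule"}))
}
```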
From 6e572121fd051e64263bd5591e8374a730f4fd58 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 28 Dec 2020 16:45:30 -0500
Subject: [PATCH 086/276] Add automated test workflow
While a larger repository of tests still needs to be built up, this
does provide assurance over the existing codebase and encouragement
to add more tests.
Co-authored-by: Chris Bandy
---
.github/workflows/test.yaml | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
create mode 100644 .github/workflows/test.yaml
diff --git a/.github/workflows/test.yaml b/.github/workflows/test.yaml
new file mode 100644
index 0000000000..ddb05d51cb
--- /dev/null
+++ b/.github/workflows/test.yaml
@@ -0,0 +1,17 @@
+on:
+ pull_request:
+ branches:
+ - master
+ push:
+ branches:
+ - master
+
+jobs:
+ go-test:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v2
+ - uses: actions/setup-go@v2
+ with:
+ go-version: 1.x
+ - run: PGOROOT=$(pwd) go test ./...
From 99efd152e7726048d5a5260670d4ff1f8220d555 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 29 Dec 2020 09:59:45 -0500
Subject: [PATCH 087/276] Ensure sync replication ConfigMap is deleted on
delete
The deletion task was not checking for the existence of this
ConfigMap on delete, and as such it was being missed.
---
cmd/pgo-rmdata/process.go | 3 +++
1 file changed, 3 insertions(+)
diff --git a/cmd/pgo-rmdata/process.go b/cmd/pgo-rmdata/process.go
index b1eba3bc95..3449476170 100644
--- a/cmd/pgo-rmdata/process.go
+++ b/cmd/pgo-rmdata/process.go
@@ -43,6 +43,7 @@ const (
configConfigMapSuffix = "config"
leaderConfigMapSuffix = "leader"
failoverConfigMapSuffix = "failover"
+ syncConfigMapSuffix = "sync"
)
func Delete(request Request) {
@@ -235,6 +236,8 @@ func removeClusterConfigmaps(request Request) {
// next, the name of the failover configmap, which is
// "`clusterName`-failover"
fmt.Sprintf("%s-%s", request.ClusterName, failoverConfigMapSuffix),
+ // next, if there is a synchronous replication configmap, clean that up
+ fmt.Sprintf("%s-%s", request.ClusterName, syncConfigMapSuffix),
// finally, if there is a pgbouncer, remove the pgbouncer configmap
util.GeneratePgBouncerConfigMapName(request.ClusterName),
}
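The ConfigMap names being cleaned up all follow the `<clusterName>-<suffix>` convention. Below is a minimal sketch of how that candidate list is assembled; the helper name is hypothetical, and the real deletion code tolerates "not found" for ConfigMaps that were never created (such as the sync one, which only exists when synchronous replication is used):

```go
package main

import "fmt"

// suffixes mirroring the constants in cmd/pgo-rmdata/process.go
var configMapSuffixes = []string{"config", "leader", "failover", "sync"}

// clusterConfigMapNames returns the candidate ConfigMap names for a cluster;
// not every name is guaranteed to exist, so deletions treat "not found" as OK
func clusterConfigMapNames(clusterName string) []string {
	names := make([]string, 0, len(configMapSuffixes))
	for _, suffix := range configMapSuffixes {
		names = append(names, fmt.Sprintf("%s-%s", clusterName, suffix))
	}
	return names
}

func main() {
	// prints: [hippo-config hippo-leader hippo-failover hippo-sync]
	fmt.Println(clusterConfigMapNames("hippo"))
}
```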
From 68f89be068346691956a34378eff01977a8ec593 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 30 Dec 2020 17:44:14 -0500
Subject: [PATCH 088/276] Bump pgBackRest to 2.31
Issue: [ch9995]
---
Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Makefile b/Makefile
index 1411047cb5..5aa5bae77d 100644
--- a/Makefile
+++ b/Makefile
@@ -8,7 +8,7 @@ PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
PGO_VERSION ?= 4.5.1
PGO_PG_VERSION ?= 12
PGO_PG_FULLVERSION ?= 12.5
-PGO_BACKREST_VERSION ?= 2.29
+PGO_BACKREST_VERSION ?= 2.31
PACKAGER ?= yum
RELTMPDIR=/tmp/release.$(PGO_VERSION)
From 50e2dde2eea44cf4ad41030b7c0a2201b9f52c4a Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 28 Dec 2020 21:48:16 -0500
Subject: [PATCH 089/276] Manage Service port for replicas for monitoring
The update functions for adding/removing a metrics collection
sidecar were not managing the port on the replica service. This
patch rectifies that by ensuring all PostgreSQL Services under
management by the Operator have their exporter port managed.
---
internal/operator/cluster/clusterlogic.go | 2 +-
internal/operator/cluster/common.go | 16 +++++
internal/operator/cluster/exporter.go | 85 +++++++++++++----------
3 files changed, 66 insertions(+), 37 deletions(-)
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index e527df2cd4..6e26bf35a3 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -390,7 +390,7 @@ func scaleReplicaCreateMissingService(clientset kubernetes.Interface, replica *c
// only add references to the exporter / pgBadger ports
clusterLabels := cluster.ObjectMeta.GetLabels()
- if val, ok := clusterLabels[config.LABEL_EXPORTER]; ok && val == config.LABEL_TRUE {
+ if cluster.Spec.Exporter {
serviceFields.ExporterPort = cluster.Spec.ExporterPort
}
diff --git a/internal/operator/cluster/common.go b/internal/operator/cluster/common.go
index 82bf11c3de..bbd497e582 100644
--- a/internal/operator/cluster/common.go
+++ b/internal/operator/cluster/common.go
@@ -16,16 +16,20 @@ package cluster
*/
import (
+ "context"
"fmt"
"strings"
+ "github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/kubeapi"
"github.com/crunchydata/postgres-operator/internal/operator"
pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password"
"github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+
log "github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
@@ -81,6 +85,18 @@ func generatePassword() (string, error) {
return util.GeneratePassword(generatedPasswordLength)
}
+// getClusterInstanceServices gets all of the services applicable to each
+// PostgreSQL instance in the cluster
+func getClusterInstanceServices(clientset kubernetes.Interface, cluster *crv1.Pgcluster) (*v1.ServiceList, error) {
+ ctx := context.TODO()
+ options := metav1.ListOptions{
+ LabelSelector: fmt.Sprintf("%s=%s,!%s",
+ config.LABEL_PG_CLUSTER, cluster.Name, config.LABEL_PGO_BACKREST_REPO),
+ }
+
+ return clientset.CoreV1().Services(cluster.Namespace).List(ctx, options)
+}
+
// makePostgreSQLPassword creates the expected hash for a password type for a
// PostgreSQL password
// nolint:unparam // this is set up to accept SCRAM in the not-too-distant future
diff --git a/internal/operator/cluster/exporter.go b/internal/operator/cluster/exporter.go
index e02fdf6bfe..c57d953f38 100644
--- a/internal/operator/cluster/exporter.go
+++ b/internal/operator/cluster/exporter.go
@@ -66,37 +66,45 @@ func AddExporter(clientset kubernetes.Interface, restconfig *rest.Config, cluste
return err
}
- // set up the Service, which is still needed on a standby
- svc, err := clientset.CoreV1().Services(cluster.Namespace).Get(ctx, cluster.Name, metav1.GetOptions{})
+ // set up the Services, which are still needed on a standby
+ services, err := getClusterInstanceServices(clientset, cluster)
if err != nil {
return err
}
- // loop over the service ports to see if exporter port is already set up. if
- // it is, we can return from there
- for _, svcPort := range svc.Spec.Ports {
- if svcPort.Name == exporterServicePortName {
- return nil
+ // loop over each service to perform the necessary modifications
+svcLoop:
+ for i := range services.Items {
+ svc := &services.Items[i]
+
+ // loop over the service ports to see if the exporter port is already set
+ // up. if it is, skip ahead to the next service in the outer loop
+ for _, svcPort := range svc.Spec.Ports {
+ if svcPort.Name == exporterServicePortName {
+ continue svcLoop
+ }
}
- }
- // otherwise, we need to append a service port to the list
- port, err := strconv.ParseInt(
- util.GetValueOrDefault(cluster.Spec.ExporterPort, operator.Pgo.Cluster.ExporterPort), 10, 32)
- if err != nil {
- return err
- }
+ // otherwise, we need to append a service port to the list
+ port, err := strconv.ParseInt(
+ util.GetValueOrDefault(cluster.Spec.ExporterPort, operator.Pgo.Cluster.ExporterPort), 10, 32)
+ // if we can't parse this for whatever reason, issue a warning and continue on
+ if err != nil {
+ log.Warn(err)
+ }
- svcPort := v1.ServicePort{
- Name: exporterServicePortName,
- Protocol: v1.ProtocolTCP,
- Port: int32(port),
- }
+ svcPort := v1.ServicePort{
+ Name: exporterServicePortName,
+ Protocol: v1.ProtocolTCP,
+ Port: int32(port),
+ }
- svc.Spec.Ports = append(svc.Spec.Ports, svcPort)
+ svc.Spec.Ports = append(svc.Spec.Ports, svcPort)
- if _, err := clientset.CoreV1().Services(svc.Namespace).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
- return err
+ // if we fail to update the service, warn, but continue on
+ if _, err := clientset.CoreV1().Services(svc.Namespace).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
+ log.Warn(err)
+ }
}
// this can't be installed if this is a standby, so abort if that's the case
@@ -111,7 +119,7 @@ func AddExporter(clientset kubernetes.Interface, restconfig *rest.Config, cluste
return err
}
- // add the m onitoring user and all the views associated with this
+ // add the monitoring user and all the views associated with this
// user. this can be done by executing a script on the container itself
cmd := []string{"/bin/bash", exporterInstallScript}
@@ -188,27 +196,32 @@ func CreateExporterSecret(clientset kubernetes.Interface, cluster *crv1.Pgcluste
func RemoveExporter(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error {
ctx := context.TODO()
- // close the service port
- svc, err := clientset.CoreV1().Services(cluster.Namespace).Get(ctx, cluster.Name, metav1.GetOptions{})
+ // close the exporter port on each service
+ services, err := getClusterInstanceServices(clientset, cluster)
if err != nil {
return err
}
- svcPorts := []v1.ServicePort{}
+ for i := range services.Items {
+ svc := &services.Items[i]
+ svcPorts := []v1.ServicePort{}
- for _, svcPort := range svc.Spec.Ports {
- // if we find the service port for the exporter, skip it in the loop
- if svcPort.Name == exporterServicePortName {
- continue
- }
+ for _, svcPort := range svc.Spec.Ports {
+ // if we find the service port for the exporter, skip it, as we will not
+ // be including it in the update
+ if svcPort.Name == exporterServicePortName {
+ continue
+ }
- svcPorts = append(svcPorts, svcPort)
- }
+ svcPorts = append(svcPorts, svcPort)
+ }
- svc.Spec.Ports = svcPorts
+ svc.Spec.Ports = svcPorts
- if _, err := clientset.CoreV1().Services(svc.Namespace).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
- return err
+ // if we fail to update the service, warn but continue
+ if _, err := clientset.CoreV1().Services(svc.Namespace).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
+ log.Warn(err)
+ }
}
// disable the user before clearing the Secret, so there does not end up being
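The port-removal half of this change boils down to filtering a named port out of each Service's port list before updating it. Here is a standalone sketch of that filter, with a local struct standing in for `corev1.ServicePort` and a hypothetical helper name:

```go
package main

import "fmt"

// ServicePort is a stand-in for the fields of corev1.ServicePort used here
type ServicePort struct {
	Name string
	Port int32
}

// dropPort returns ports with any entry matching name removed; this mirrors
// the filtering RemoveExporter performs before updating each Service
func dropPort(ports []ServicePort, name string) []ServicePort {
	kept := make([]ServicePort, 0, len(ports))
	for _, p := range ports {
		if p.Name == name {
			continue
		}
		kept = append(kept, p)
	}
	return kept
}

func main() {
	ports := []ServicePort{{"postgres", 5432}, {"postgres-exporter", 9187}}
	// prints: [{postgres 5432}]
	fmt.Println(dropPort(ports, "postgres-exporter"))
}
```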
From 738491e3cadb1c7e83bf9558c4c0bb85950267c8 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 28 Dec 2020 22:18:00 -0500
Subject: [PATCH 090/276] Convert pgBadger enablement to CRD attribute
The pgBadger sidecar is now enabled on creation by setting the
`pgBadger` attribute on the `pgclusters.crunchydata.com` CRD
instead of a label on a custom resource.
---
docs/content/custom-resources/_index.md | 3 +-
.../apiserver/clusterservice/clusterimpl.go | 9 ++--
internal/config/labels.go | 2 -
internal/operator/cluster/clusterlogic.go | 14 ++----
internal/operator/cluster/upgrade.go | 7 +++
internal/operator/clusterutilities.go | 47 ++++++++++---------
pkg/apis/crunchydata.com/v1/cluster.go | 4 +-
7 files changed, 46 insertions(+), 40 deletions(-)
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index f327e17667..e1d256bd64 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -756,7 +756,8 @@ make changes, as described below.
| Limits | `create`, `update` | Specify the container resource limits that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
| Name | `create` | The name of the PostgreSQL instance that is the primary. On creation, this should be set to be the same as `ClusterName`. |
| Namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
-| PGBadgerPort | `create` | If the `"crunchy-pgbadger"` label is set in `UserLabels`, then this specifies the port that the pgBadger sidecar runs on (e.g. `10000`) |
+| PGBadger | `create` | If `true`, deploys the `crunchy-pgbadger` sidecar for query analysis. |
+| PGBadgerPort | `create` | If the `PGBadger` attribute is set to `true`, then this specifies the port that the pgBadger sidecar runs on (e.g. `10000`) |
| PGDataSource | `create` | Used to indicate if a PostgreSQL cluster should bootstrap its data from a pgBackRest repository. This uses the PostgreSQL Data Source Specification, described below. |
| PGOImagePrefix | `create` | If provided, the image prefix (or registry) of any PostgreSQL Operator images that are used for jobs, e.g. `registry.developers.crunchydata.com/crunchydata`. The default is to use the image prefix set in the PostgreSQL Operator configuration. |
| PgBouncer | `create`, `update` | If specified, defines the attributes to use for the pgBouncer connection pooling deployment that can be used in conjunction with this PostgreSQL cluster. Please see the specification defined below. |
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index ee8b01e0d6..f92777fd0e 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -1496,11 +1496,10 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
// set the pgBackRest repository path
spec.BackrestRepoPath = request.BackrestRepoPath
- // pgbadger - set with global flag first then check for a user flag
- labels[config.LABEL_BADGER] = strconv.FormatBool(apiserver.BadgerFlag)
- if request.BadgerFlag {
- labels[config.LABEL_BADGER] = "true"
- }
+ // enable the pgBadger sidecar based on what the user passed in or on the
+ // default value. the user's value takes precedence unless it is false, as
+ // the legacy check only looked for enablement
+ spec.PGBadger = request.BadgerFlag || apiserver.BadgerFlag
newInstance := &crv1.Pgcluster{
ObjectMeta: metav1.ObjectMeta{
diff --git a/internal/config/labels.go b/internal/config/labels.go
index d6d851eb52..5b15db75fb 100644
--- a/internal/config/labels.go
+++ b/internal/config/labels.go
@@ -92,8 +92,6 @@ const (
LABEL_BACKREST_PITR_TARGET = "backrest-pitr-target"
LABEL_BACKREST_STORAGE_TYPE = "backrest-storage-type"
LABEL_BACKREST_S3_VERIFY_TLS = "backrest-s3-verify-tls"
- LABEL_BADGER = "crunchy-pgbadger"
- LABEL_BADGER_CCPIMAGE = "crunchy-pgbadger"
LABEL_BACKUP_TYPE_BACKREST = "pgbackrest"
LABEL_BACKUP_TYPE_PGDUMP = "pgdump"
)
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index 6e26bf35a3..517f6181a2 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -64,10 +64,8 @@ func addClusterCreateMissingService(clientset kubernetes.Interface, cl *crv1.Pgc
ServiceType: st,
}
- // only add references to the exporter / pgBadger ports
- clusterLabels := cl.ObjectMeta.GetLabels()
-
- if val, ok := clusterLabels[config.LABEL_BADGER]; ok && val == config.LABEL_TRUE {
+ // set the pgBadger port if pgBadger is enabled
+ if cl.Spec.PGBadger {
serviceFields.PGBadgerPort = cl.Spec.PGBadgerPort
}
@@ -323,7 +321,7 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
ContainerResources: operator.GetResourcesJSON(cl.Spec.Resources, cl.Spec.Limits),
ConfVolume: operator.GetConfVolume(clientset, cl, namespace),
ExporterAddon: operator.GetExporterAddon(cl.Spec),
- BadgerAddon: operator.GetBadgerAddon(clientset, namespace, cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY]),
+ BadgerAddon: operator.GetBadgerAddon(clientset, cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY]),
PgmonitorEnvVars: operator.GetPgmonitorEnvVars(cl),
ScopeLabel: config.LABEL_PGHA_SCOPE,
PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
@@ -388,13 +386,11 @@ func scaleReplicaCreateMissingService(clientset kubernetes.Interface, replica *c
}
// only add references to the exporter / pgBadger ports
- clusterLabels := cluster.ObjectMeta.GetLabels()
-
if cluster.Spec.Exporter {
serviceFields.ExporterPort = cluster.Spec.ExporterPort
}
- if val, ok := clusterLabels[config.LABEL_BADGER]; ok && val == config.LABEL_TRUE {
+ if cluster.Spec.PGBadger {
serviceFields.PGBadgerPort = cluster.Spec.PGBadgerPort
}
@@ -470,7 +466,7 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
NodeSelector: operator.GetAffinity(replica.Spec.UserLabels["NodeLabelKey"], replica.Spec.UserLabels["NodeLabelValue"], "In"),
PodAntiAffinity: operator.GetPodAntiAffinity(cluster, crv1.PodAntiAffinityDeploymentDefault, cluster.Spec.PodAntiAffinity.Default),
ExporterAddon: operator.GetExporterAddon(cluster.Spec),
- BadgerAddon: operator.GetBadgerAddon(clientset, namespace, cluster, replica.Spec.Name),
+ BadgerAddon: operator.GetBadgerAddon(clientset, cluster, replica.Spec.Name),
ScopeLabel: config.LABEL_PGHA_SCOPE,
PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cluster, replica.Spec.Name,
cluster.Spec.Port, cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]),
diff --git a/internal/operator/cluster/upgrade.go b/internal/operator/cluster/upgrade.go
index 0b12c5c4ff..f46ed64671 100644
--- a/internal/operator/cluster/upgrade.go
+++ b/internal/operator/cluster/upgrade.go
@@ -483,6 +483,13 @@ func preparePgclusterForUpgrade(pgcluster *crv1.Pgcluster, parameters map[string
delete(pgcluster.Spec.UserLabels, config.LABEL_EXPORTER)
}
+ // 4.6.0 moved pgBadger to use an attribute instead of a label. If this label
+ // exists on the current CRD, move the value to the attribute.
+ if _, ok := pgcluster.ObjectMeta.GetLabels()["crunchy-pgbadger"]; ok {
+ pgcluster.Spec.PGBadger = true
+ delete(pgcluster.ObjectMeta.Labels, "crunchy-pgbadger")
+ }
+
// since the current primary label is not used in this version of the Postgres Operator,
// delete it before moving on to other upgrade tasks
delete(pgcluster.ObjectMeta.Labels, config.LABEL_CURRENT_PRIMARY)
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index 3281a43c7e..cfa6351408 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -328,30 +328,33 @@ func GetBackrestDeployment(clientset kubernetes.Interface, cluster *crv1.Pgclust
return deployment, err
}
-func GetBadgerAddon(clientset kubernetes.Interface, namespace string, cluster *crv1.Pgcluster, pgbadger_target string) string {
- spec := cluster.Spec
-
- if cluster.Labels[config.LABEL_BADGER] == "true" {
- log.Debug("crunchy_badger was found as a label on cluster create")
- badgerTemplateFields := badgerTemplateFields{}
- badgerTemplateFields.CCPImageTag = util.GetStandardImageTag(spec.CCPImage, spec.CCPImageTag)
- badgerTemplateFields.BadgerTarget = pgbadger_target
- badgerTemplateFields.PGBadgerPort = spec.PGBadgerPort
- badgerTemplateFields.CCPImagePrefix = util.GetValueOrDefault(spec.CCPImagePrefix, Pgo.Cluster.CCPImagePrefix)
-
- var badgerDoc bytes.Buffer
- err := config.BadgerTemplate.Execute(&badgerDoc, badgerTemplateFields)
- if err != nil {
- log.Error(err.Error())
- return ""
- }
+// GetBadgerAddon is a legacy method that generates a JSONish string to be used
+// to add a pgBadger sidecar to a PostgreSQL instance
+func GetBadgerAddon(clientset kubernetes.Interface, cluster *crv1.Pgcluster, target string) string {
+ if !cluster.Spec.PGBadger {
+ return ""
+ }
- if CRUNCHY_DEBUG {
- _ = config.BadgerTemplate.Execute(os.Stdout, badgerTemplateFields)
- }
- return badgerDoc.String()
+ log.Debugf("pgBadger enabled for cluster %q", cluster.Name)
+
+ badgerTemplateFields := badgerTemplateFields{
+ BadgerTarget: target,
+ CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, Pgo.Cluster.CCPImagePrefix),
+ CCPImageTag: util.GetStandardImageTag(cluster.Spec.CCPImage, cluster.Spec.CCPImageTag),
+ PGBadgerPort: cluster.Spec.PGBadgerPort,
+ }
+
+ if CRUNCHY_DEBUG {
+ _ = config.BadgerTemplate.Execute(os.Stdout, badgerTemplateFields)
+ }
+
+ doc := bytes.Buffer{}
+ if err := config.BadgerTemplate.Execute(&doc, badgerTemplateFields); err != nil {
+ log.Error(err)
+ return ""
}
- return ""
+
+ return doc.String()
}
// GetExporterAddon returns the template used to create an exporter container
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index d89e10f6c5..72b26cbc0e 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -49,7 +49,9 @@ type PgclusterSpec struct {
CCPImagePrefix string `json:"ccpimageprefix"`
PGOImagePrefix string `json:"pgoimageprefix"`
Port string `json:"port"`
- PGBadgerPort string `json:"pgbadgerport"`
+ // PGBadger, if set to true, enables the pgBadger sidecar
+ PGBadger bool `json:"pgBadger"`
+ PGBadgerPort string `json:"pgbadgerport"`
// Exporter, if set to true, enables the exporter sidecar
Exporter bool `json:"exporter"`
ExporterPort string `json:"exporterport"`
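The upgrade step that moves pgBadger enablement from a label to a CRD attribute can be sketched in isolation. The struct below is a stand-in for the relevant corner of the Pgcluster spec, and the helper name is hypothetical; note that the presence of the legacy label is what matters, matching the legacy enablement check:

```go
package main

import "fmt"

// spec is a stand-in for the PGBadger field of the Pgcluster spec
type spec struct{ PGBadger bool }

// migrateBadgerLabel sketches the 4.6.0 step in preparePgclusterForUpgrade:
// if the legacy "crunchy-pgbadger" label is present, enable the new spec
// attribute and drop the label from the object metadata
func migrateBadgerLabel(labels map[string]string, s *spec) {
	if _, ok := labels["crunchy-pgbadger"]; ok {
		s.PGBadger = true
		delete(labels, "crunchy-pgbadger")
	}
}

func main() {
	labels := map[string]string{"crunchy-pgbadger": "true", "pg-cluster": "hippo"}
	s := spec{}
	migrateBadgerLabel(labels, &s)
	// prints: true 1
	fmt.Println(s.PGBadger, len(labels))
}
```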
From d7215679b42f0c1c7615cc6033a3aedeee10ce6b Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 28 Dec 2020 23:05:49 -0500
Subject: [PATCH 091/276] Allow for pgBadger sidecar to be enabled/disabled
Similar to the metrics collection sidecar, the pgBadger sidecar
can now be enabled during the lifetime of a PostgreSQL cluster.
This also adds the `--enable-pgbadger` and `--disable-pgbadger`
flags to the `pgo update cluster` command.
Additionally, the update to the pgBadger sidecar can be triggered
through a modification to the `pgclusters.crunchydata.com` CR,
and will trigger a rolling update to minimize downtime.
---
cmd/pgo/cmd/cluster.go | 7 +
cmd/pgo/cmd/update.go | 12 ++
docs/content/custom-resources/_index.md | 2 +-
.../reference/pgo_update_cluster.md | 12 +-
.../files/pgo-configs/cluster-deployment.json | 6 +-
.../files/pgo-configs/pgbadger.json | 82 ++++----
.../apiserver/clusterservice/clusterimpl.go | 9 +
.../pgcluster/pgclustercontroller.go | 19 ++
internal/operator/cluster/cluster.go | 9 +-
internal/operator/cluster/clusterlogic.go | 4 +-
internal/operator/cluster/pgbadger.go | 199 ++++++++++++++++++
internal/operator/clusterutilities.go | 2 +-
pkg/apiservermsgs/clustermsgs.go | 15 +-
13 files changed, 322 insertions(+), 56 deletions(-)
create mode 100644 internal/operator/cluster/pgbadger.go
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index de7b895de4..0e9d2b739c 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -708,6 +708,13 @@ func updateCluster(args []string, ns string) {
r.Metrics = msgs.UpdateClusterMetricsDisable
}
+ // check to see if the pgBadger sidecar needs to be enabled or disabled
+ if EnablePGBadger {
+ r.PGBadger = msgs.UpdateClusterPGBadgerEnable
+ } else if DisablePGBadger {
+ r.PGBadger = msgs.UpdateClusterPGBadgerDisable
+ }
+
// if the user provided resources for CPU or Memory, validate them to ensure
// they are valid Kubernetes values
if err := util.ValidateQuantity(r.CPURequest, "cpu"); err != nil {
diff --git a/cmd/pgo/cmd/update.go b/cmd/pgo/cmd/update.go
index f5c5d6eb82..c11e352094 100644
--- a/cmd/pgo/cmd/update.go
+++ b/cmd/pgo/cmd/update.go
@@ -31,11 +31,15 @@ var (
DisableLogin bool
// DisableMetrics allows a user to disable metrics collection
DisableMetrics bool
+ // DisablePGBadger allows a user to disable pgBadger
+ DisablePGBadger bool
// EnableLogin allows a user to enable the ability for a PostgreSQL user to
// log in
EnableLogin bool
// EnableMetrics allows a user to enable metrics collection
EnableMetrics bool
+ // EnablePGBadger allows a user to enable pgBadger
+ EnablePGBadger bool
// ExpireUser sets a user to having their password expired
ExpireUser bool
// ExporterRotatePassword rotates the password for the designed PostgreSQL
@@ -92,6 +96,8 @@ func init() {
UpdateClusterCmd.Flags().BoolVar(&DisableAutofailFlag, "disable-autofail", false, "Disables autofail capabilities in the cluster.")
UpdateClusterCmd.Flags().BoolVar(&DisableMetrics, "disable-metrics", false,
"Disable the metrics collection sidecar. May cause brief downtime.")
+ UpdateClusterCmd.Flags().BoolVar(&DisablePGBadger, "disable-pgbadger", false,
+ "Disable the pgBadger sidecar. May cause brief downtime.")
UpdateClusterCmd.Flags().BoolVar(&EnableAutofailFlag, "enable-autofail", false, "Enables autofail capabilities in the cluster.")
UpdateClusterCmd.Flags().StringVar(&MemoryRequest, "memory", "", "Set the amount of RAM to request, e.g. "+
"1GiB.")
@@ -118,6 +124,8 @@ func init() {
"the Crunchy Postgres Exporter sidecar container.")
UpdateClusterCmd.Flags().BoolVar(&EnableMetrics, "enable-metrics", false,
"Enable the metrics collection sidecar. May cause brief downtime.")
+ UpdateClusterCmd.Flags().BoolVar(&EnablePGBadger, "enable-pgbadger", false,
+ "Enable the pgBadger sidecar. May cause brief downtime.")
UpdateClusterCmd.Flags().BoolVar(&ExporterRotatePassword, "exporter-rotate-password", false, "Used to rotate the password for the metrics collection agent.")
UpdateClusterCmd.Flags().BoolVarP(&EnableStandby, "enable-standby", "", false,
"Enables standby mode in the cluster(s) specified.")
@@ -260,6 +268,10 @@ var UpdateClusterCmd = &cobra.Command{
fmt.Println("Adding or removing a metrics collection sidecar can cause downtime.")
}
+ if EnablePGBadger || DisablePGBadger {
+ fmt.Println("Adding or removing a pgBadger sidecar can cause downtime.")
+ }
+
if len(Tablespaces) > 0 {
fmt.Println("Adding tablespaces can cause downtime.")
}
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index e1d256bd64..d70831814b 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -756,7 +756,7 @@ make changes, as described below.
| Limits | `create`, `update` | Specify the container resource limits that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
| Name | `create` | The name of the PostgreSQL instance that is the primary. On creation, this should be set to be the same as `ClusterName`. |
| Namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
-| PGBadger | `create` | If `true`, deploys the `crunchy-pgbadger` sidecar for query analysis. |
+| PGBadger | `create`, `update` | If `true`, deploys the `crunchy-pgbadger` sidecar for query analysis. |
| PGBadgerPort | `create` | If the `PGBadger` label is set, then this specifies the port that the pgBadger sidecar runs on (e.g. `10000`) |
| PGDataSource | `create` | Used to indicate if a PostgreSQL cluster should bootstrap its data from a pgBackRest repository. This uses the PostgreSQL Data Source Specification, described below. |
| PGOImagePrefix | `create` | If provided, the image prefix (or registry) of any PostgreSQL Operator images that are used for jobs, e.g. `registry.developers.crunchydata.com/crunchydata`. The default is to use the image prefix set in the PostgreSQL Operator configuration. |
diff --git a/docs/content/pgo-client/reference/pgo_update_cluster.md b/docs/content/pgo-client/reference/pgo_update_cluster.md
index e480bf767f..4622833460 100644
--- a/docs/content/pgo-client/reference/pgo_update_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_update_cluster.md
@@ -25,7 +25,7 @@ pgo update cluster [flags]
--annotation strings Add an Annotation to all of the managed deployments (PostgreSQL, pgBackRest, pgBouncer)
The format to add an annotation is "name=value"
The format to remove an annotation is "name-"
-
+
For example, to add two annotations: "--annotation=hippo=awesome,elephant=cool"
--annotation-pgbackrest strings Add an Annotation specifically to pgBackRest deployments
The format to add an annotation is "name=value"
@@ -39,8 +39,10 @@ pgo update cluster [flags]
--cpu-limit string Set the number of millicores to limit for the CPU, e.g. "100m" or "0.1".
      --disable-autofail                   Disables autofail capabilities in the cluster.
--disable-metrics Disable the metrics collection sidecar. May cause brief downtime.
+ --disable-pgbadger Disable the pgBadger sidecar. May cause brief downtime.
      --enable-autofail                    Enables autofail capabilities in the cluster.
--enable-metrics Enable the metrics collection sidecar. May cause brief downtime.
+ --enable-pgbadger Enable the pgBadger sidecar. May cause brief downtime.
--enable-standby Enables standby mode in the cluster(s) specified.
--exporter-cpu string Set the number of millicores to request for CPU for the Crunchy Postgres Exporter sidecar container, e.g. "100m" or "0.1".
--exporter-cpu-limit string Set the number of millicores to limit for CPU for the Crunchy Postgres Exporter sidecar container, e.g. "100m" or "0.1".
@@ -60,13 +62,13 @@ pgo update cluster [flags]
--shutdown Shutdown the database cluster if it is currently running.
--startup Restart the database cluster if it is currently shutdown.
--tablespace strings Add a PostgreSQL tablespace on the cluster, e.g. "name=ts1:storageconfig=nfsstorage". The format is a key/value map that is delimited by "=" and separated by ":". The following parameters are available:
-
+
- name (required): the name of the PostgreSQL tablespace
- storageconfig (required): the storage configuration to use, as specified in the list available in the "pgo-config" ConfigMap (aka "pgo.yaml")
- pvcsize: the size of the PVC capacity, which overrides the value set in the specified storageconfig. Follows the Kubernetes quantity format.
-
+
For example, to create a tablespace with the NFS storage configuration with a PVC of size 10GiB:
-
+
--tablespace=name=ts1:storageconfig=nfsstorage:pvcsize=10Gi
```
@@ -87,4 +89,4 @@ pgo update cluster [flags]
* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster
-###### Auto generated by spf13/cobra on 16-Dec-2020
+###### Auto generated by spf13/cobra on 28-Dec-2020
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
index f5fb452849..8499739ccf 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
@@ -206,9 +206,9 @@
{{ if .ExporterAddon }}
,{{.ExporterAddon }}
{{ end }}
-
- {{.BadgerAddon }}
-
+ {{ if .BadgerAddon }}
+ ,{{.BadgerAddon }}
+ {{ end }}
],
"volumes": [{
"name": "pgdata",
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbadger.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbadger.json
index d9b04daa73..ce03aaabb7 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbadger.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbadger.json
@@ -1,41 +1,41 @@
- ,{
- "name": "pgbadger",
- "image": "{{.CCPImagePrefix}}/crunchy-pgbadger:{{.CCPImageTag}}",
- "ports": [ {
- "containerPort": {{.PGBadgerPort}},
- "protocol": "TCP"
- }
- ],
- "readinessProbe": {
- "tcpSocket": {
- "port": {{.PGBadgerPort}}
- },
- "initialDelaySeconds": 20,
- "periodSeconds": 10
- },
- "env": [ {
- "name": "BADGER_TARGET",
- "value": "{{.BadgerTarget}}"
- }, {
- "name": "PGBADGER_SERVICE_PORT",
- "value": "{{.PGBadgerPort}}"
- } ],
- "resources": {
- "limits": {
- "cpu": "500m",
- "memory": "64Mi"
- }
- },
- "volumeMounts": [
- {
- "mountPath": "/pgdata",
- "name": "pgdata",
- "readOnly": true
- },
- {
- "mountPath": "/report",
- "name": "report",
- "readOnly": false
- }
- ]
- }
+{
+ "name": "pgbadger",
+ "image": "{{.CCPImagePrefix}}/crunchy-pgbadger:{{.CCPImageTag}}",
+ "ports": [ {
+ "containerPort": {{.PGBadgerPort}},
+ "protocol": "TCP"
+ }
+ ],
+ "readinessProbe": {
+ "tcpSocket": {
+ "port": {{.PGBadgerPort}}
+ },
+ "initialDelaySeconds": 20,
+ "periodSeconds": 10
+ },
+ "env": [ {
+ "name": "BADGER_TARGET",
+ "value": "{{.BadgerTarget}}"
+ }, {
+ "name": "PGBADGER_SERVICE_PORT",
+ "value": "{{.PGBadgerPort}}"
+ } ],
+ "resources": {
+ "limits": {
+ "cpu": "500m",
+ "memory": "64Mi"
+ }
+ },
+ "volumeMounts": [
+ {
+ "mountPath": "/pgdata",
+ "name": "pgdata",
+ "readOnly": true
+ },
+ {
+ "mountPath": "/report",
+ "name": "report",
+ "readOnly": false
+ }
+ ]
+}
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index f92777fd0e..d93d7f4e4f 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -1903,6 +1903,15 @@ func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterRespons
case msgs.UpdateClusterMetricsDoNothing: // this is never reached -- no-op
}
+ // enable or disable the pgBadger sidecar
+ switch request.PGBadger {
+ case msgs.UpdateClusterPGBadgerEnable:
+ cluster.Spec.PGBadger = true
+ case msgs.UpdateClusterPGBadgerDisable:
+ cluster.Spec.PGBadger = false
+ case msgs.UpdateClusterPGBadgerDoNothing: // this is never reached -- no-op
+ }
+
// enable or disable standby mode based on UpdateClusterStandbyStatus provided in
// the request
switch request.Standby {
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index 02789dce9b..d2c910ad99 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -255,6 +255,25 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
}
}
+ // see if we are adding / removing the pgBadger sidecar
+ if oldcluster.Spec.PGBadger != newcluster.Spec.PGBadger {
+ var err error
+
+ // determine if the sidecar is being enabled/disabled and take the precursor
+ // actions before the deployment template is modified
+ if newcluster.Spec.PGBadger {
+ err = clusteroperator.AddPGBadger(c.Client, c.Client.Config, newcluster)
+ } else {
+ err = clusteroperator.RemovePGBadger(c.Client, c.Client.Config, newcluster)
+ }
+
+ if err == nil {
+ rollingUpdateFuncs = append(rollingUpdateFuncs, clusteroperator.UpdatePGBadgerSidecar)
+ } else {
+ log.Errorf("could not update pgbadger sidecar: %q", err.Error())
+ }
+ }
+
// see if any of the resource values have changed for the database or exporter container,
// if so, update them
if !reflect.DeepEqual(oldcluster.Spec.Resources, newcluster.Spec.Resources) ||
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 8bf6e19884..2b292290c5 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -59,8 +59,13 @@ type ServiceTemplateFields struct {
// ReplicaSuffix ...
const ReplicaSuffix = "-replica"
-// exporterContainerName is the name of the exporter container
-const exporterContainerName = "exporter"
+const (
+ // exporterContainerName is the name of the exporter container
+ exporterContainerName = "exporter"
+
+ // pgBadgerContainerName is the name of the pgBadger container
+ pgBadgerContainerName = "pgbadger"
+)
func AddClusterBase(clientset kubeapi.Interface, cl *crv1.Pgcluster, namespace string) {
ctx := context.TODO()
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index 517f6181a2..f6ed1c52f8 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -321,7 +321,7 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
ContainerResources: operator.GetResourcesJSON(cl.Spec.Resources, cl.Spec.Limits),
ConfVolume: operator.GetConfVolume(clientset, cl, namespace),
ExporterAddon: operator.GetExporterAddon(cl.Spec),
- BadgerAddon: operator.GetBadgerAddon(clientset, cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY]),
+ BadgerAddon: operator.GetBadgerAddon(cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY]),
PgmonitorEnvVars: operator.GetPgmonitorEnvVars(cl),
ScopeLabel: config.LABEL_PGHA_SCOPE,
PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
@@ -466,7 +466,7 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
NodeSelector: operator.GetAffinity(replica.Spec.UserLabels["NodeLabelKey"], replica.Spec.UserLabels["NodeLabelValue"], "In"),
PodAntiAffinity: operator.GetPodAntiAffinity(cluster, crv1.PodAntiAffinityDeploymentDefault, cluster.Spec.PodAntiAffinity.Default),
ExporterAddon: operator.GetExporterAddon(cluster.Spec),
- BadgerAddon: operator.GetBadgerAddon(clientset, cluster, replica.Spec.Name),
+ BadgerAddon: operator.GetBadgerAddon(cluster, replica.Spec.Name),
ScopeLabel: config.LABEL_PGHA_SCOPE,
PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cluster, replica.Spec.Name,
cluster.Spec.Port, cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]),
diff --git a/internal/operator/cluster/pgbadger.go b/internal/operator/cluster/pgbadger.go
new file mode 100644
index 0000000000..ed1b0fdfc2
--- /dev/null
+++ b/internal/operator/cluster/pgbadger.go
@@ -0,0 +1,199 @@
+package cluster
+
+/*
+ Copyright 2020 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "strconv"
+
+ "github.com/crunchydata/postgres-operator/internal/kubeapi"
+ "github.com/crunchydata/postgres-operator/internal/operator"
+ "github.com/crunchydata/postgres-operator/internal/util"
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+
+ log "github.com/sirupsen/logrus"
+ appsv1 "k8s.io/api/apps/v1"
+ v1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/rest"
+)
+
+const (
+ // pgBadgerServicePortName is the name used to identify the pgBadger port in
+ // the service
+ pgBadgerServicePortName = "pgbadger"
+)
+
+// AddPGBadger ensures that a PostgreSQL cluster is able to undertake the
+// actions required by the "crunchy-pgbadger" sidecar, i.e. updating the
+// Service. This executes regardless of whether this is a standby cluster.
+//
+// This does not modify the Deployment that has the pgBadger sidecar. That is
+// handled by the "UpdatePGBadgerSidecar" function, so it can be handled as part
+// of a rolling update.
+func AddPGBadger(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error {
+ ctx := context.TODO()
+ // set up the Services, which are still needed on a standby
+ services, err := getClusterInstanceServices(clientset, cluster)
+ if err != nil {
+ return err
+ }
+
+ // loop over each service to perform the necessary modifications
+svcLoop:
+ for i := range services.Items {
+ svc := &services.Items[i]
+
+	// loop over the service ports to see if the pgBadger port is already set
+	// up. If it is, skip to the next service in the outer loop
+ for _, svcPort := range svc.Spec.Ports {
+ if svcPort.Name == pgBadgerServicePortName {
+ continue svcLoop
+ }
+ }
+
+ // otherwise, we need to append a service port to the list
+ port, err := strconv.ParseInt(
+ util.GetValueOrDefault(cluster.Spec.PGBadgerPort, operator.Pgo.Cluster.PGBadgerPort), 10, 32)
+ // if we can't parse this for whatever reason, issue a warning and continue on
+ if err != nil {
+ log.Warn(err)
+ }
+
+ svcPort := v1.ServicePort{
+ Name: pgBadgerServicePortName,
+ Protocol: v1.ProtocolTCP,
+ Port: int32(port),
+ }
+
+ svc.Spec.Ports = append(svc.Spec.Ports, svcPort)
+
+ // if we fail to update the service, warn, but continue on
+ if _, err := clientset.CoreV1().Services(svc.Namespace).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
+ log.Warn(err)
+ }
+ }
+
+ return nil
+}
+
+// RemovePGBadger disables the ability for a PostgreSQL cluster to run a
+// pgBadger sidecar.
+//
+// This does not modify the Deployment that has the pgBadger sidecar. That is
+// handled by the "UpdatePGBadgerSidecar" function, so it can be handled as part
+// of a rolling update.
+func RemovePGBadger(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error {
+ ctx := context.TODO()
+
+	// close the pgBadger port on each service
+ services, err := getClusterInstanceServices(clientset, cluster)
+ if err != nil {
+ return err
+ }
+
+ for i := range services.Items {
+ svc := &services.Items[i]
+ svcPorts := []v1.ServicePort{}
+
+ for _, svcPort := range svc.Spec.Ports {
+			// if we find the pgBadger service port, skip it, as we will not be
+			// including it in the update
+ if svcPort.Name == pgBadgerServicePortName {
+ continue
+ }
+
+ svcPorts = append(svcPorts, svcPort)
+ }
+
+ svc.Spec.Ports = svcPorts
+
+ // if we fail to update the service, warn but continue
+ if _, err := clientset.CoreV1().Services(svc.Namespace).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
+ log.Warn(err)
+ }
+ }
+ return nil
+}
+
+// UpdatePGBadgerSidecar either adds or removes the pgBadger sidecar from the
+// cluster. This is meant to be used as a rolling update callback function.
+func UpdatePGBadgerSidecar(clientset kubeapi.Interface, cluster *crv1.Pgcluster, deployment *appsv1.Deployment) error {
+ // need to determine if we are adding or removing
+ if cluster.Spec.PGBadger {
+ return addPGBadgerSidecar(cluster, deployment)
+ }
+
+ removePGBadgerSidecar(deployment)
+
+ return nil
+}
+
+// addPGBadgerSidecar adds the pgBadger sidecar to a Deployment. If pgBadger is
+// already present, this call supersedes it and adds the "new version" of the
+// pgBadger container.
+func addPGBadgerSidecar(cluster *crv1.Pgcluster, deployment *appsv1.Deployment) error {
+ // use the legacy template generation to make the appropriate substitutions,
+ // and then get said generation to be placed into an actual Container object
+ template := operator.GetBadgerAddon(cluster, deployment.Name)
+ container := v1.Container{}
+
+ if err := json.Unmarshal([]byte(template), &container); err != nil {
+		return fmt.Errorf("error unmarshalling pgBadger json into Container: %w", err)
+ }
+
+ // append the container to the deployment container list. However, we are
+ // going to do this carefully, in case the pgBadger container already exists.
+	// this definition will supersede any pgBadger container already in the
+	// containers list
+ containers := []v1.Container{}
+ for _, c := range deployment.Spec.Template.Spec.Containers {
+ // skip if this is the pgBadger container. pgBadger is added after the loop
+ if c.Name == pgBadgerContainerName {
+ continue
+ }
+
+ containers = append(containers, c)
+ }
+
+ // add the pgBadger container and override the containers list definition
+ containers = append(containers, container)
+ deployment.Spec.Template.Spec.Containers = containers
+
+ return nil
+}
+
+// removePGBadgerSidecar removes the pgBadger sidecar from a Deployment.
+//
+// This involves:
+// - Removing the container entry for pgBadger
+func removePGBadgerSidecar(deployment *appsv1.Deployment) {
+ // first, find the container entry in the list of containers and remove it
+ containers := []v1.Container{}
+ for _, c := range deployment.Spec.Template.Spec.Containers {
+ // skip if this is the pgBadger container
+ if c.Name == pgBadgerContainerName {
+ continue
+ }
+
+ containers = append(containers, c)
+ }
+
+ deployment.Spec.Template.Spec.Containers = containers
+}
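Both `addPGBadgerSidecar` and `removePGBadgerSidecar` above rebuild the container list by filtering out any existing container with the sidecar's name; the add path then appends the fresh definition, which makes the operation idempotent. A standalone sketch of that filter-then-append pattern (a simplified `Container` type standing in for `corev1.Container`, not the operator's actual code):

```go
package main

import "fmt"

// Container stands in for corev1.Container; only the name matters here.
type Container struct{ Name string }

// upsertSidecar drops any stale container with the sidecar's name, then
// appends the new definition, so repeated calls never duplicate it.
func upsertSidecar(containers []Container, sidecar Container) []Container {
	out := make([]Container, 0, len(containers)+1)
	for _, c := range containers {
		if c.Name == sidecar.Name {
			continue // stale definition; the new one is appended below
		}
		out = append(out, c)
	}
	return append(out, sidecar)
}

// removeSidecar drops the named container if present.
func removeSidecar(containers []Container, name string) []Container {
	out := make([]Container, 0, len(containers))
	for _, c := range containers {
		if c.Name == name {
			continue
		}
		out = append(out, c)
	}
	return out
}

func main() {
	pods := []Container{{Name: "database"}}
	pods = upsertSidecar(pods, Container{Name: "pgbadger"})
	pods = upsertSidecar(pods, Container{Name: "pgbadger"}) // idempotent
	fmt.Println(len(pods)) // database + pgbadger
	fmt.Println(len(removeSidecar(pods, "pgbadger")))
}
```

Rebuilding a fresh slice rather than deleting in place avoids index bookkeeping while iterating, which is why the operator code uses the same shape for both service ports and containers.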
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index cfa6351408..da6e37434b 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -330,7 +330,7 @@ func GetBackrestDeployment(clientset kubernetes.Interface, cluster *crv1.Pgclust
// GetBadgerAddon is a legacy method that generates a JSONish string to be used
// to add a pgBadger sidecar to a PostgreSQL instance
-func GetBadgerAddon(clientset kubernetes.Interface, cluster *crv1.Pgcluster, target string) string {
+func GetBadgerAddon(cluster *crv1.Pgcluster, target string) string {
if !cluster.Spec.PGBadger {
return ""
}
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index 53258b36e6..93eb9ffd46 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -372,6 +372,16 @@ const (
UpdateClusterMetricsDisable
)
+// UpdateClusterPGBadger determines whether or not to enable/disable the
+// pgBadger sidecar in a cluster
+type UpdateClusterPGBadger int
+
+const (
+ UpdateClusterPGBadgerDoNothing UpdateClusterPGBadger = iota
+ UpdateClusterPGBadgerEnable
+ UpdateClusterPGBadgerDisable
+)
+
// UpdateClusterStandbyStatus defines the types for updating the Standby status
type UpdateClusterStandbyStatus int
@@ -447,7 +457,10 @@ type UpdateClusterRequest struct {
MemoryRequest string
// Metrics allows for the enabling/disabling of the metrics sidecar. This can
// cause downtime and triggers a rolling update
- Metrics UpdateClusterMetrics
+ Metrics UpdateClusterMetrics
+ // PGBadger allows for the enabling/disabling of the pgBadger sidecar. This can
+ // cause downtime and triggers a rolling update
+ PGBadger UpdateClusterPGBadger
Standby UpdateClusterStandbyStatus
Startup bool
Shutdown bool
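The new `UpdateClusterPGBadger` type follows the same convention as the metrics enum above: the `iota` zero value means "do nothing", so any caller that never sets `request.PGBadger` triggers no change. A hedged, self-contained sketch of that tri-state pattern (the `apply` helper is illustrative, not part of the codebase):

```go
package main

import "fmt"

// UpdateClusterPGBadger mirrors the tri-state in clustermsgs.go: the zero
// value means "leave the sidecar alone".
type UpdateClusterPGBadger int

const (
	PGBadgerDoNothing UpdateClusterPGBadger = iota
	PGBadgerEnable
	PGBadgerDisable
)

// apply returns the new spec value given the current one and the request;
// an unset request leaves the spec untouched.
func apply(current bool, req UpdateClusterPGBadger) bool {
	switch req {
	case PGBadgerEnable:
		return true
	case PGBadgerDisable:
		return false
	}
	return current // DoNothing: keep whatever the spec already has
}

func main() {
	var unset UpdateClusterPGBadger // zero value == PGBadgerDoNothing
	fmt.Println(apply(true, unset))          // unchanged
	fmt.Println(apply(true, PGBadgerDisable)) // turned off
	fmt.Println(apply(false, PGBadgerEnable)) // turned on
}
```

Putting "do nothing" first in the `const` block is what makes an omitted field in the request struct safe by default.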
From a16b1ec32f184ea0af10500a187cc88b1ca3491b Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 29 Dec 2020 10:39:26 -0500
Subject: [PATCH 092/276] Remove "functional labels" that are superseded by CRD
attributes
The "user labels" that were created for synchronous replication
and custom configurations were superseded by the attributes
on the pgclusters.crunchydata.com CRD. As such, it is OK to remove
these labels.
---
.../apiserver/clusterservice/clusterimpl.go | 22 -------------------
internal/config/labels.go | 2 --
internal/operator/clusterutilities.go | 5 ++---
3 files changed, 2 insertions(+), 27 deletions(-)
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index d93d7f4e4f..81f0fc97b5 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -715,8 +715,6 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
resp.Status.Msg = err.Error()
return resp
}
- // add a label for the custom config
- userLabelsMap[config.LABEL_CUSTOM_CONFIG] = request.CustomConfig
}
if request.ServiceType != "" {
@@ -774,10 +772,6 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
log.Debug("userLabelsMap")
log.Debugf("%v", userLabelsMap)
- if existsGlobalConfig(ns) {
- userLabelsMap[config.LABEL_CUSTOM_CONFIG] = config.GLOBAL_CUSTOM_CONFIGMAP
- }
-
if request.StorageConfig != "" && !apiserver.IsValidStorageName(request.StorageConfig) {
resp.Status.Code = msgs.Error
resp.Status.Msg = fmt.Sprintf("%q storage config was not found", request.StorageConfig)
@@ -865,12 +859,6 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
}
}
- // if synchronous replication has been enabled, then add to user labels
- if request.SyncReplication != nil {
- userLabelsMap[config.LABEL_SYNC_REPLICATION] =
- string(strconv.FormatBool(*request.SyncReplication))
- }
-
// pgBackRest URI style must be set to either 'path' or 'host'. If it is neither,
// log an error and stop the cluster from being created.
if request.BackrestS3URIStyle != "" {
@@ -1110,10 +1098,6 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
},
}
- if userLabelsMap[config.LABEL_CUSTOM_CONFIG] != "" {
- spec.CustomConfig = userLabelsMap[config.LABEL_CUSTOM_CONFIG]
- }
-
// enable the exporter sidecar based on the what the user passed in or what
// the default value is. the user value takes precedence, unless it's false,
// as the legacy check only looked for enablement
@@ -1643,12 +1627,6 @@ func validateCustomConfig(configmapname, ns string) (bool, error) {
return err == nil, err
}
-func existsGlobalConfig(ns string) bool {
- ctx := context.TODO()
- _, err := apiserver.Clientset.CoreV1().ConfigMaps(ns).Get(ctx, config.GLOBAL_CUSTOM_CONFIGMAP, metav1.GetOptions{})
- return err == nil
-}
-
func getReplicas(cluster *crv1.Pgcluster, ns string) ([]msgs.ShowClusterReplica, error) {
ctx := context.TODO()
diff --git a/internal/config/labels.go b/internal/config/labels.go
index 5b15db75fb..f9f1176f9a 100644
--- a/internal/config/labels.go
+++ b/internal/config/labels.go
@@ -46,7 +46,6 @@ const (
LABEL_EXPORTER = "crunchy-postgres-exporter"
LABEL_ARCHIVE = "archive"
LABEL_ARCHIVE_TIMEOUT = "archive-timeout"
- LABEL_CUSTOM_CONFIG = "custom-config"
LABEL_NODE_LABEL_KEY = "NodeLabelKey"
LABEL_NODE_LABEL_VALUE = "NodeLabelValue"
LABEL_REPLICA_NAME = "replica-name"
@@ -55,7 +54,6 @@ const (
LABEL_IMAGE_PREFIX = "image-prefix"
LABEL_SERVICE_TYPE = "service-type"
LABEL_POD_ANTI_AFFINITY = "pg-pod-anti-affinity"
- LABEL_SYNC_REPLICATION = "sync-replication"
)
const (
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index da6e37434b..1095fbb2b2 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -963,10 +963,9 @@ func GetSyncReplication(specSyncReplication *bool) bool {
 	// always use the value from the CR if explicitly provided
if specSyncReplication != nil {
return *specSyncReplication
- } else if Pgo.Cluster.SyncReplication {
- return true
}
- return false
+
+ return Pgo.Cluster.SyncReplication
}
// GetTolerations returns any tolerations that may be defined in a tolerations
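The simplified `GetSyncReplication` above reduces an if/else-if chain to the standard "pointer override" idiom: a non-nil per-cluster value always wins, even when it is `false`, and only a nil pointer falls back to the operator-wide default. A minimal sketch of the idiom (hypothetical names, same logic):

```go
package main

import "fmt"

// syncReplication resolves the effective setting: an explicit per-cluster
// value (possibly false) always beats the global default. Using *bool
// distinguishes "unset" (nil) from "explicitly false".
func syncReplication(spec *bool, globalDefault bool) bool {
	if spec != nil {
		return *spec
	}
	return globalDefault
}

func main() {
	off := false
	fmt.Println(syncReplication(nil, true))  // nil inherits the global default
	fmt.Println(syncReplication(&off, true)) // explicit false overrides it
}
```

This is why the field is a `*bool` on the CRD: a plain `bool` could not express "the user said nothing, use the default".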
From d2efb2f9075e976b41f14c5107e775a8e7f95afe Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 29 Dec 2020 11:43:35 -0500
Subject: [PATCH 093/276] Move ServiceType control to CRD attribute
This adds a "ServiceType" parameter to both the pgclusters +
pgreplicas custom resource definitions so that the behavior
can be managed without the use of a nested label.
---
cmd/pgo/cmd/cluster.go | 2 +-
cmd/pgo/cmd/scale.go | 3 +-
docs/content/custom-resources/_index.md | 1 +
.../apiserver/clusterservice/clusterimpl.go | 19 ++++---
.../apiserver/clusterservice/scaleimpl.go | 20 ++++---
internal/config/labels.go | 1 -
internal/config/pgoconfig.go | 23 +++-----
internal/operator/cluster/cluster.go | 2 +-
internal/operator/cluster/clusterlogic.go | 57 ++++++++++++-------
internal/operator/cluster/upgrade.go | 7 +++
pkg/apis/crunchydata.com/v1/cluster.go | 4 ++
pkg/apis/crunchydata.com/v1/replica.go | 15 +++--
pkg/apiservermsgs/clustermsgs.go | 4 +-
13 files changed, 96 insertions(+), 62 deletions(-)
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index 0e9d2b739c..fd2f1241ab 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -278,7 +278,7 @@ func createCluster(args []string, ns string, createClusterCmd *cobra.Command) {
r.ExporterMemoryRequest = ExporterMemoryRequest
r.ExporterMemoryLimit = ExporterMemoryLimit
r.BadgerFlag = BadgerFlag
- r.ServiceType = ServiceType
+ r.ServiceType = v1.ServiceType(ServiceType)
r.AutofailFlag = !DisableAutofailFlag
r.PgbouncerFlag = PgbouncerFlag
r.BackrestStorageConfig = BackrestStorageConfig
diff --git a/cmd/pgo/cmd/scale.go b/cmd/pgo/cmd/scale.go
index 6352e91cd3..5303c6dd6a 100644
--- a/cmd/pgo/cmd/scale.go
+++ b/cmd/pgo/cmd/scale.go
@@ -24,6 +24,7 @@ import (
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
+ v1 "k8s.io/api/core/v1"
)
var ReplicaCount int
@@ -76,7 +77,7 @@ func scaleCluster(args []string, ns string) {
Namespace: ns,
NodeLabel: NodeLabel,
ReplicaCount: ReplicaCount,
- ServiceType: ServiceType,
+ ServiceType: v1.ServiceType(ServiceType),
StorageConfig: StorageConfig,
Tolerations: getClusterTolerations(Tolerations),
}
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index d70831814b..61204b26e5 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -767,6 +767,7 @@ make changes, as described below.
| ReplicaStorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to that of `PrimaryStorage`. |
| Replicas | `create` | The number of replicas to create after a PostgreSQL primary is first initialized. This only works on create; to scale a cluster after it is initialized, please use the [`pgo scale`]({{< relref "/pgo-client/reference/pgo_scale.md" >}}) command. |
| Resources | `create`, `update` | Specify the container resource requests that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| ServiceType | `create` | Sets the Kubernetes [Service](https://kubernetes.io/docs/concepts/services-networking/service/) type to use for the cluster. If not set, defaults to `ClusterIP`. |
| SyncReplication | `create` | If set to `true`, specifies the PostgreSQL cluster to use [synchronous replication]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity#synchronous-replication-guarding-against-transactions-loss" >}}).|
| User | `create` | The name of the PostgreSQL user that is created when the PostgreSQL cluster is first created. |
| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" for things that really should be modeled in the CRD. These values do get copied to the actual CR labels. If you want to set up metrics collection or pgBadger, you would specify `"crunchy-postgres-exporter": "true"` and `"crunchy-pgbadger": "true"` here, respectively. However, this structure does need to be set, so just follow whatever is in the example. |
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 81f0fc97b5..aa86db62f1 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -717,14 +717,14 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
}
}
- if request.ServiceType != "" {
- if request.ServiceType != config.DEFAULT_SERVICE_TYPE && request.ServiceType != config.LOAD_BALANCER_SERVICE_TYPE && request.ServiceType != config.NODEPORT_SERVICE_TYPE {
- resp.Status.Code = msgs.Error
- resp.Status.Msg = "error ServiceType should be either ClusterIP or LoadBalancer "
-
- return resp
- }
- userLabelsMap[config.LABEL_SERVICE_TYPE] = request.ServiceType
+ // validate the optional ServiceType parameter
+ switch request.ServiceType {
+ default:
+ resp.Status.Code = msgs.Error
+ resp.Status.Msg = fmt.Sprintf("invalid service type %q", request.ServiceType)
+ return resp
+ case v1.ServiceTypeClusterIP, v1.ServiceTypeNodePort,
+ v1.ServiceTypeLoadBalancer, v1.ServiceTypeExternalName, "": // no-op
}
// if the request is for a standby cluster then validate it to ensure all parameters have
@@ -1359,6 +1359,9 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
spec.Replicas = strconv.Itoa(request.ReplicaCount)
log.Debugf("replicas is %s", spec.Replicas)
}
+
+ spec.ServiceType = request.ServiceType
+
spec.UserLabels = userLabelsMap
spec.UserLabels[config.LABEL_PGO_VERSION] = msgs.PGO_VERSION
diff --git a/internal/apiserver/clusterservice/scaleimpl.go b/internal/apiserver/clusterservice/scaleimpl.go
index 7713f33eb0..0a8bcf0fff 100644
--- a/internal/apiserver/clusterservice/scaleimpl.go
+++ b/internal/apiserver/clusterservice/scaleimpl.go
@@ -27,6 +27,7 @@ import (
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
+ v1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -85,15 +86,16 @@ func ScaleCluster(request msgs.ClusterScaleRequest, pgouser string) msgs.Cluster
if request.CCPImageTag != "" {
spec.UserLabels[config.LABEL_CCP_IMAGE_TAG_KEY] = request.CCPImageTag
}
- if request.ServiceType != "" {
- if request.ServiceType != config.DEFAULT_SERVICE_TYPE &&
- request.ServiceType != config.NODEPORT_SERVICE_TYPE &&
- request.ServiceType != config.LOAD_BALANCER_SERVICE_TYPE {
- response.Status.Code = msgs.Error
- response.Status.Msg = "error --service-type should be either ClusterIP, NodePort, or LoadBalancer "
- return response
- }
- spec.UserLabels[config.LABEL_SERVICE_TYPE] = request.ServiceType
+
+	// check the optional ServiceType parameter
+ switch request.ServiceType {
+ default:
+ response.Status.Code = msgs.Error
+ response.Status.Msg = fmt.Sprintf("invalid service type %q", request.ServiceType)
+ return response
+ case v1.ServiceTypeClusterIP, v1.ServiceTypeNodePort,
+ v1.ServiceTypeLoadBalancer, v1.ServiceTypeExternalName, "":
+ spec.ServiceType = request.ServiceType
}
 	// set replica node labels to blank to start with, then check for overrides
diff --git a/internal/config/labels.go b/internal/config/labels.go
index f9f1176f9a..30bce70cdf 100644
--- a/internal/config/labels.go
+++ b/internal/config/labels.go
@@ -52,7 +52,6 @@ const (
LABEL_CCP_IMAGE_TAG_KEY = "ccp-image-tag"
LABEL_CCP_IMAGE_KEY = "ccp-image"
LABEL_IMAGE_PREFIX = "image-prefix"
- LABEL_SERVICE_TYPE = "service-type"
LABEL_POD_ANTI_AFFINITY = "pg-pod-anti-affinity"
)
diff --git a/internal/config/pgoconfig.go b/internal/config/pgoconfig.go
index 6c870686a0..2a72513437 100644
--- a/internal/config/pgoconfig.go
+++ b/internal/config/pgoconfig.go
@@ -205,7 +205,7 @@ type ClusterStruct struct {
PasswordAgeDays string
PasswordLength string
Replicas string
- ServiceType string
+ ServiceType v1.ServiceType
BackrestPort int
BackrestS3Bucket string
BackrestS3Endpoint string
@@ -262,10 +262,8 @@ type PgoConfig struct {
}
const (
- DEFAULT_SERVICE_TYPE = "ClusterIP"
- LOAD_BALANCER_SERVICE_TYPE = "LoadBalancer"
- NODEPORT_SERVICE_TYPE = "NodePort"
- CONFIG_PATH = "pgo.yaml"
+ DefaultServiceType = v1.ServiceTypeClusterIP
+ CONFIG_PATH = "pgo.yaml"
)
const (
@@ -345,15 +343,12 @@ func (c *PgoConfig) Validate() error {
return errors.New(errPrefix + "Pgo.PGOImageTag is required")
}
- if c.Cluster.ServiceType == "" {
- log.Warn("Cluster.ServiceType not set, using default, ClusterIP ")
- c.Cluster.ServiceType = DEFAULT_SERVICE_TYPE
- } else {
- if c.Cluster.ServiceType != DEFAULT_SERVICE_TYPE &&
- c.Cluster.ServiceType != LOAD_BALANCER_SERVICE_TYPE &&
- c.Cluster.ServiceType != NODEPORT_SERVICE_TYPE {
- return errors.New(errPrefix + "Cluster.ServiceType is required to be either ClusterIP, NodePort, or LoadBalancer")
- }
+ // if ServiceType is set, ensure it is valid
+ switch c.Cluster.ServiceType {
+ default:
+ return fmt.Errorf("Cluster.ServiceType is an invalid ServiceType: %q", c.Cluster.ServiceType)
+ case v1.ServiceTypeClusterIP, v1.ServiceTypeNodePort,
+ v1.ServiceTypeLoadBalancer, v1.ServiceTypeExternalName, "": // no-op
}
if c.Cluster.CCPImagePrefix == "" {
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 2b292290c5..9023fe080f 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -53,7 +53,7 @@ type ServiceTemplateFields struct {
Port string
PGBadgerPort string
ExporterPort string
- ServiceType string
+ ServiceType v1.ServiceType
}
// ReplicaSuffix ...
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index f6ed1c52f8..86ea813208 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -49,29 +49,37 @@ import (
// addClusterCreateMissingService creates a service for the cluster primary if
// it does not yet exist.
-func addClusterCreateMissingService(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace string) error {
- st := operator.Pgo.Cluster.ServiceType
- if cl.Spec.UserLabels[config.LABEL_SERVICE_TYPE] != "" {
- st = cl.Spec.UserLabels[config.LABEL_SERVICE_TYPE]
+func addClusterCreateMissingService(clientset kubernetes.Interface, cluster *crv1.Pgcluster, namespace string) error {
+ // start with the default value for ServiceType
+ serviceType := config.DefaultServiceType
+
+ // then see if there is a configuration provided value
+ if operator.Pgo.Cluster.ServiceType != "" {
+ serviceType = operator.Pgo.Cluster.ServiceType
+ }
+
+ // then see if there is an override on the custom resource definition
+ if cluster.Spec.ServiceType != "" {
+ serviceType = cluster.Spec.ServiceType
}
// create the primary service
serviceFields := ServiceTemplateFields{
- Name: cl.Spec.Name,
- ServiceName: cl.Spec.Name,
- ClusterName: cl.Spec.Name,
- Port: cl.Spec.Port,
- ServiceType: st,
+ Name: cluster.Spec.Name,
+ ServiceName: cluster.Spec.Name,
+ ClusterName: cluster.Spec.Name,
+ Port: cluster.Spec.Port,
+ ServiceType: serviceType,
}
// set the pgBadger port if pgBadger is enabled
- if cl.Spec.PGBadger {
- serviceFields.PGBadgerPort = cl.Spec.PGBadgerPort
+ if cluster.Spec.PGBadger {
+ serviceFields.PGBadgerPort = cluster.Spec.PGBadgerPort
}
// set the exporter port if exporter is enabled
- if cl.Spec.Exporter {
- serviceFields.ExporterPort = cl.Spec.ExporterPort
+ if cluster.Spec.Exporter {
+ serviceFields.ExporterPort = cluster.Spec.ExporterPort
}
return CreateService(clientset, &serviceFields, namespace)
@@ -369,11 +377,22 @@ func DeleteCluster(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace
// scaleReplicaCreateMissingService creates a service for cluster replicas if
// it does not yet exist.
func scaleReplicaCreateMissingService(clientset kubernetes.Interface, replica *crv1.Pgreplica, cluster *crv1.Pgcluster, namespace string) error {
- st := operator.Pgo.Cluster.ServiceType
- if replica.Spec.UserLabels[config.LABEL_SERVICE_TYPE] != "" {
- st = replica.Spec.UserLabels[config.LABEL_SERVICE_TYPE]
- } else if cluster.Spec.UserLabels[config.LABEL_SERVICE_TYPE] != "" {
- st = cluster.Spec.UserLabels[config.LABEL_SERVICE_TYPE]
+ // start with the default value for ServiceType
+ serviceType := config.DefaultServiceType
+
+ // then see if there is a configuration provided value
+ if operator.Pgo.Cluster.ServiceType != "" {
+ serviceType = operator.Pgo.Cluster.ServiceType
+ }
+
+ // then see if there is an override on the custom resource definition
+ if cluster.Spec.ServiceType != "" {
+ serviceType = cluster.Spec.ServiceType
+ }
+
+ // and finally, see if there is an instance specific override. Yay.
+ if replica.Spec.ServiceType != "" {
+ serviceType = replica.Spec.ServiceType
}
serviceName := fmt.Sprintf("%s-replica", replica.Spec.ClusterName)
@@ -382,7 +401,7 @@ func scaleReplicaCreateMissingService(clientset kubernetes.Interface, replica *c
ServiceName: serviceName,
ClusterName: replica.Spec.ClusterName,
Port: cluster.Spec.Port,
- ServiceType: st,
+ ServiceType: serviceType,
}
// only add references to the exporter / pgBadger ports
diff --git a/internal/operator/cluster/upgrade.go b/internal/operator/cluster/upgrade.go
index f46ed64671..729a77e58f 100644
--- a/internal/operator/cluster/upgrade.go
+++ b/internal/operator/cluster/upgrade.go
@@ -490,6 +490,13 @@ func preparePgclusterForUpgrade(pgcluster *crv1.Pgcluster, parameters map[string
delete(pgcluster.ObjectMeta.Labels, "crunchy-pgbadger")
}
+ // 4.6.0 moved the former "service-type" label into the ServiceType CRD
+ // attribute, so we may need to do the same
+ if val, ok := pgcluster.Spec.UserLabels["service-type"]; ok {
+ pgcluster.Spec.ServiceType = v1.ServiceType(val)
+ delete(pgcluster.Spec.UserLabels, "service-type")
+ }
+
// since the current primary label is not used in this version of the Postgres Operator,
// delete it before moving on to other upgrade tasks
delete(pgcluster.ObjectMeta.Labels, config.LABEL_CURRENT_PRIMARY)
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index 72b26cbc0e..2347cfb637 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -132,6 +132,10 @@ type PgclusterSpec struct {
// annotations that are propagated to all managed Deployments
Annotations ClusterAnnotations `json:"annotations"`
+ // ServiceType references the type of Service that should be used when
+ // deploying PostgreSQL instances
+ ServiceType v1.ServiceType `json:"serviceType"`
+
// Tolerations are an optional list of Pod toleration rules that are applied
// to the PostgreSQL instance.
Tolerations []v1.Toleration `json:"tolerations"`
diff --git a/pkg/apis/crunchydata.com/v1/replica.go b/pkg/apis/crunchydata.com/v1/replica.go
index 1bfba208fe..45f6cd123a 100644
--- a/pkg/apis/crunchydata.com/v1/replica.go
+++ b/pkg/apis/crunchydata.com/v1/replica.go
@@ -37,12 +37,15 @@ type Pgreplica struct {
// PgreplicaSpec ...
// swagger:ignore
type PgreplicaSpec struct {
- Namespace string `json:"namespace"`
- Name string `json:"name"`
- ClusterName string `json:"clustername"`
- ReplicaStorage PgStorageSpec `json:"replicastorage"`
- Status string `json:"status"`
- UserLabels map[string]string `json:"userlabels"`
+ Namespace string `json:"namespace"`
+ Name string `json:"name"`
+ ClusterName string `json:"clustername"`
+ ReplicaStorage PgStorageSpec `json:"replicastorage"`
+ // ServiceType references the type of Service that should be used when
+ // deploying PostgreSQL instances
+ ServiceType v1.ServiceType `json:"serviceType"`
+ Status string `json:"status"`
+ UserLabels map[string]string `json:"userlabels"`
// Tolerations are an optional list of Pod toleration rules that are applied
// to the PostgreSQL instance.
Tolerations []v1.Toleration `json:"tolerations"`
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index 93eb9ffd46..61255f6cc8 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -62,7 +62,7 @@ type CreateClusterRequest struct {
CCPImagePrefix string
PGOImagePrefix string
ReplicaCount int
- ServiceType string
+ ServiceType v1.ServiceType
MetricsFlag bool
// ExporterCPULimit, if specified, is the value of the max CPU for a
// Crunchy Postgres Exporter sidecar container
@@ -572,7 +572,7 @@ type ClusterScaleRequest struct {
ReplicaCount int `json:"replicaCount"`
// ServiceType is the kind of Service to deploy with this instance. Defaults
// to the value on the cluster.
- ServiceType string `json:"serviceType"`
+ ServiceType v1.ServiceType `json:"serviceType"`
// StorageConfig, if provided, specifies which of the storage configuration
// options should be used. Defaults to what the main cluster definition uses.
StorageConfig string `json:"storageConfig"`
From db427b4c77e72cbe59b6eb6703d89cc24bea5327 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 29 Dec 2020 15:34:59 -0500
Subject: [PATCH 094/276] Allow for ServiceType to be updated on existing
clusters
This allows for updates to the ServiceType attribute to
propagate to the managed PostgreSQL Services. Any updates
follow this order of precedence, from least to most specific:
- Operator default (ClusterIP)
- Configuration default
- Cluster => ServiceType
- Replica => ServiceType
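The precedence chain above can be sketched as a small, self-contained helper. This is a hypothetical illustration, not a function in the Operator codebase; the local `ServiceType` string type stands in for `k8s.io/api/core/v1.ServiceType` so the sketch runs on its own:

```go
package main

import "fmt"

// ServiceType stands in for k8s.io/api/core/v1.ServiceType so this
// sketch stays self-contained.
type ServiceType string

// resolveServiceType applies the precedence from least to most specific:
// operator default, configuration default, cluster override, replica
// override. An empty string means "not set" at that level.
func resolveServiceType(configured, cluster, replica ServiceType) ServiceType {
	serviceType := ServiceType("ClusterIP") // operator default
	if configured != "" {
		serviceType = configured // configuration default
	}
	if cluster != "" {
		serviceType = cluster // cluster-level override
	}
	if replica != "" {
		serviceType = replica // replica-level override
	}
	return serviceType
}

func main() {
	// a replica-level override wins over everything else
	fmt.Println(resolveServiceType("NodePort", "", "LoadBalancer"))
	// nothing set: fall back to the operator default
	fmt.Println(resolveServiceType("", "", ""))
}
```

This mirrors the cascading `if` blocks in `scaleReplicaCreateMissingService`, where each more specific level simply overwrites the previous value when non-empty.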
---
docs/content/custom-resources/_index.md | 2 +-
.../pgcluster/pgclustercontroller.go | 58 ++++++++++++++++
.../pgreplica/pgreplicacontroller.go | 8 +++
internal/operator/cluster/cluster.go | 2 +-
internal/operator/cluster/service.go | 66 +++++++++++++++++++
5 files changed, 134 insertions(+), 2 deletions(-)
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index 61204b26e5..212791e7fd 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -767,7 +767,7 @@ make changes, as described below.
| ReplicaStorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to that of `PrimaryStorage`. |
| Replicas | `create` | The number of replicas to create after a PostgreSQL primary is first initialized. This only works on create; to scale a cluster after it is initialized, please use the [`pgo scale`]({{< relref "/pgo-client/reference/pgo_scale.md" >}}) command. |
| Resources | `create`, `update` | Specify the container resource requests that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| ServiceType | `create` | Sets the Kubernetes [Service](https://kubernetes.io/docs/concepts/services-networking/service/) type to use for the cluster. If not set, defaults to `ClusterIP`. |
+| ServiceType | `create`, `update` | Sets the Kubernetes [Service](https://kubernetes.io/docs/concepts/services-networking/service/) type to use for the cluster. If not set, defaults to `ClusterIP`. |
| SyncReplication | `create` | If set to `true`, specifies the PostgreSQL cluster to use [synchronous replication]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity#synchronous-replication-guarding-against-transactions-loss" >}}).|
| User | `create` | The name of the PostgreSQL user that is created when the PostgreSQL cluster is first created. |
| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" for things that really should be modeled in the CRD. These values do get copied to the actual CR labels. If you want to set up metrics collection or pgBadger, you would specify `"crunchy-postgres-exporter": "true"` and `"crunchy-pgbadger": "true"` here, respectively. However, this structure does need to be set, so just follow whatever is in the example. |
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index d2c910ad99..c4d74cf009 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -37,6 +37,7 @@ import (
appsv1 "k8s.io/api/apps/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
@@ -236,6 +237,12 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
}
}
+ // if the service type has changed, update the service type. Log an error if
+ // it fails, but continue on
+ if oldcluster.Spec.ServiceType != newcluster.Spec.ServiceType {
+ updateServices(c.Client, newcluster)
+ }
+
// see if we are adding / removing the metrics collection sidecar
if oldcluster.Spec.Exporter != newcluster.Spec.Exporter {
var err error
@@ -483,6 +490,57 @@ func updatePgBouncer(c *Controller, oldCluster *crv1.Pgcluster, newCluster *crv1
return clusteroperator.UpdatePgbouncer(c.Client, oldCluster, newCluster)
}
+// updateServices handles any updates to the Service objects. Given how legacy
+// replica services are handled (really, replica service singular), the update
+// around replica services is a bit grotty, but it is what it is.
+//
+// If there are errors on the updates, this logs them but will continue on
+// unless otherwise noted.
+func updateServices(clientset kubeapi.Interface, cluster *crv1.Pgcluster) {
+ ctx := context.TODO()
+
+ // handle the primary instance
+ if err := clusteroperator.UpdateClusterService(clientset, cluster); err != nil {
+ log.Error(err)
+ }
+
+ // handle the replica instances. Ish. This is kind of "broken" due to the
+ // fact that we have a single service for all of the replicas. So, we'll
+ // loop through all of the replicas and try to see if any of them have
+ // any specialized service types. If so, we'll pluck that one out and use
+ // it to apply
+ options := metav1.ListOptions{
+ LabelSelector: fields.OneTermEqualSelector(config.LABEL_PG_CLUSTER, cluster.Name).String(),
+ }
+ replicas, err := clientset.CrunchydataV1().Pgreplicas(cluster.Namespace).List(ctx, options)
+
+ // well, if there is an error here, log it and abort
+ if err != nil {
+ log.Error(err)
+ return
+ }
+
+ // if there are no replicas, also return
+ if len(replicas.Items) == 0 {
+ return
+ }
+
+ // ok, we're guaranteed at least one replica, so there should be a Service
+ var replica *crv1.Pgreplica
+ for i := range replicas.Items {
+ // store the replica no matter what, for later comparison
+ replica = &replicas.Items[i]
+ // however, if the servicetype is customized, break out. Yup.
+ if replica.Spec.ServiceType != "" {
+ break
+ }
+ }
+
+ if err := clusteroperator.UpdateReplicaService(clientset, cluster, replica); err != nil {
+ log.Error(err)
+ }
+}
+
// updateTablespaces updates the PostgreSQL instance Deployments to reflect the
// new PostgreSQL tablespaces that should be added
func updateTablespaces(c *Controller, oldCluster *crv1.Pgcluster, newCluster *crv1.Pgcluster) error {
diff --git a/internal/controller/pgreplica/pgreplicacontroller.go b/internal/controller/pgreplica/pgreplicacontroller.go
index 79f8538100..c37ee16b58 100644
--- a/internal/controller/pgreplica/pgreplicacontroller.go
+++ b/internal/controller/pgreplica/pgreplicacontroller.go
@@ -212,6 +212,14 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
}
}
+ // if the service type changed, update it on the instance
+ // if there is an error, log but continue
+ if oldPgreplica.Spec.ServiceType != newPgreplica.Spec.ServiceType {
+ if err := clusteroperator.UpdateReplicaService(c.Client, cluster, newPgreplica); err != nil {
+ log.Error(err)
+ }
+ }
+
// if the tolerations array changed, updated the tolerations on the instance
if !reflect.DeepEqual(oldPgreplica.Spec.Tolerations, newPgreplica.Spec.Tolerations) {
// get the Deployment object associated with this instance
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 9023fe080f..b16a89829b 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -56,7 +56,7 @@ type ServiceTemplateFields struct {
ServiceType v1.ServiceType
}
-// ReplicaSuffix ...
+// ReplicaSuffix is the suffix of the replica Service name
const ReplicaSuffix = "-replica"
const (
diff --git a/internal/operator/cluster/service.go b/internal/operator/cluster/service.go
index bca0eb9e45..73edd7e35e 100644
--- a/internal/operator/cluster/service.go
+++ b/internal/operator/cluster/service.go
@@ -26,12 +26,22 @@ import (
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/operator"
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
log "github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
+ v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
)
+// serviceInfo is a structured way of compiling all of the info required to
+// update a service
+type serviceInfo struct {
+ serviceName string
+ serviceNamespace string
+ serviceType v1.ServiceType
+}
+
// CreateService ...
func CreateService(clientset kubernetes.Interface, fields *ServiceTemplateFields, namespace string) error {
ctx := context.TODO()
@@ -63,3 +73,59 @@ func CreateService(clientset kubernetes.Interface, fields *ServiceTemplateFields
return err
}
+
+// UpdateClusterService updates parameters (really just one) on a Service that
+// represents a PostgreSQL cluster
+func UpdateClusterService(clientset kubernetes.Interface, cluster *crv1.Pgcluster) error {
+ return updateService(clientset, serviceInfo{
+ serviceName: cluster.Name,
+ serviceNamespace: cluster.Namespace,
+ serviceType: cluster.Spec.ServiceType,
+ })
+}
+
+// UpdateReplicaService updates parameters (really just one) on a Service that
+// represents a PostgreSQL replica instance
+func UpdateReplicaService(clientset kubernetes.Interface, cluster *crv1.Pgcluster, replica *crv1.Pgreplica) error {
+ serviceType := cluster.Spec.ServiceType
+
+ // if the replica has a specific service type, override with that
+ if replica.Spec.ServiceType != "" {
+ serviceType = replica.Spec.ServiceType
+ }
+
+ return updateService(clientset, serviceInfo{
+ serviceName: replica.Spec.ClusterName + ReplicaSuffix,
+ serviceNamespace: replica.Namespace,
+ serviceType: serviceType,
+ })
+}
+
+// updateService does the legwork for updating a service
+func updateService(clientset kubernetes.Interface, info serviceInfo) error {
+ ctx := context.TODO()
+
+ // first, attempt to get the Service. If we cannot do that, then we can't
+ // update the service
+ svc, err := clientset.CoreV1().Services(info.serviceNamespace).Get(ctx, info.serviceName, metav1.GetOptions{})
+ if err != nil {
+ return err
+ }
+
+ // update the desired attributes, which is really just the ServiceType
+ svc.Spec.Type = info.serviceType
+
+ // ...so, while the documentation says that any "NodePort" settings are wiped
+ // if the type is not "NodePort", this is actually not the case, so we need to
+ // overcompensate for that
+ // Ref: https://godoc.org/k8s.io/api/core/v1#ServicePort
+ if svc.Spec.Type != v1.ServiceTypeNodePort {
+ for i := range svc.Spec.Ports {
+ svc.Spec.Ports[i].NodePort = 0
+ }
+ }
+
+ _, err = clientset.CoreV1().Services(info.serviceNamespace).Update(ctx, svc, metav1.UpdateOptions{})
+
+ return err
+}
From c90f4e13c0f9ad1120dc39c18840b13f8215f99c Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 29 Dec 2020 17:17:45 -0500
Subject: [PATCH 095/276] Move autofail toggle to `DisableAutofail` attribute
on CRD
This required relatively little shifting of code, as this former
label effectively behaved like an attribute. Given autofail (read:
HA) is the default in the PostgreSQL Operator, the chosen nomenclature
is to reflect that we want the user to make a conscious decision to
disable HA (given Go defaults booleans to "false").
---
docs/content/custom-resources/_index.md | 1 +
.../apiserver/clusterservice/clusterimpl.go | 11 +++-----
internal/config/labels.go | 1 -
.../pgcluster/pgclustercontroller.go | 27 ++++++-------------
internal/controller/pod/inithandler.go | 18 ++++++-------
internal/operator/cluster/upgrade.go | 15 ++++++++---
internal/util/cluster.go | 10 -------
pkg/apis/crunchydata.com/v1/cluster.go | 4 +++
8 files changed, 35 insertions(+), 52 deletions(-)
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index 212791e7fd..8683722431 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -749,6 +749,7 @@ make changes, as described below.
| ClusterName | `create` | The name of the PostgreSQL cluster, e.g. `hippo`. This is used to group PostgreSQL instances (primary, replicas) together. |
| CustomConfig | `create` | If specified, references a custom ConfigMap to use when bootstrapping a PostgreSQL cluster. For the shape of this file, please see the section on [Custom Configuration]({{< relref "/advanced/custom-configuration.md" >}}) |
| Database | `create` | The name of a database that the PostgreSQL user can log into after the PostgreSQL cluster is created. |
+| DisableAutofail | `create`, `update` | If set to true, disables the high availability capabilities of a PostgreSQL cluster. By default, every cluster can have high availability if there is at least one replica. |
| ExporterLimits | `create`, `update` | Specify the container resource limits that the `crunchy-postgres-exporter` sidecar uses when it is deployed with a PostgreSQL instance. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
| Exporter | `create`,`update` | If `true`, deploys the `crunchy-postgres-exporter` sidecar for metrics collection |
| ExporterPort | `create` | If `Exporter` is `true`, then this specifies the port that the metrics sidecar runs on (e.g. `9187`) |
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index aa86db62f1..56f228bfa6 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -1472,12 +1472,7 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
labels := make(map[string]string)
labels[config.LABEL_NAME] = name
- if !request.AutofailFlag || apiserver.Pgo.Cluster.DisableAutofail {
- labels[config.LABEL_AUTOFAIL] = "false"
- } else {
- labels[config.LABEL_AUTOFAIL] = "true"
- }
-
+ spec.DisableAutofail = !request.AutofailFlag || apiserver.Pgo.Cluster.DisableAutofail
// set whether or not the cluster will be a standby cluster
spec.Standby = request.Standby
// set the pgBackRest repository path
@@ -1869,9 +1864,9 @@ func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterRespons
// Make the change based on the value of Autofail vis-a-vis UpdateClusterAutofailStatus
switch request.Autofail {
case msgs.UpdateClusterAutofailEnable:
- cluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL] = "true"
+ cluster.Spec.DisableAutofail = false
case msgs.UpdateClusterAutofailDisable:
- cluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL] = "false"
+ cluster.Spec.DisableAutofail = true
case msgs.UpdateClusterAutofailDoNothing: // no-op
}
diff --git a/internal/config/labels.go b/internal/config/labels.go
index 30bce70cdf..1d0f9b7c3f 100644
--- a/internal/config/labels.go
+++ b/internal/config/labels.go
@@ -28,7 +28,6 @@ const (
const LABEL_PGTASK = "pg-task"
const (
- LABEL_AUTOFAIL = "autofail"
LABEL_FAILOVER = "failover"
LABEL_RESTART = "restart"
)
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index c4d74cf009..8875ec481a 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -20,7 +20,6 @@ import (
"encoding/json"
"io/ioutil"
"reflect"
- "strconv"
"strings"
"github.com/crunchydata/postgres-operator/internal/config"
@@ -202,26 +201,16 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
_ = clusteroperator.StartupCluster(c.Client, *newcluster)
}
- // check to see if the "autofail" label on the pgcluster CR has been changed from either true to false, or from
- // false to true. If it has been changed to false, autofail will then be disabled in the pg cluster. If has
- // been changed to true, autofail will then be enabled in the pg cluster
- if newcluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL] != "" {
- autofailEnabledOld, err := strconv.ParseBool(oldcluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL])
- if err != nil {
- log.Error(err)
- return
- }
- autofailEnabledNew, err := strconv.ParseBool(newcluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL])
- if err != nil {
+ // check to see if the autofail setting has been changed. If DisableAutofail
+ // is now "true", autofailover is disabled; otherwise it is enabled. Simple.
+ if oldcluster.Spec.DisableAutofail != newcluster.Spec.DisableAutofail {
+ // take the inverse, as this func checks for autofail being enabled
+ // if we can't toggle autofailover, log the error but continue on
+ if err := util.ToggleAutoFailover(c.Client, !newcluster.Spec.DisableAutofail,
+ newcluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE],
+ newcluster.ObjectMeta.Namespace); err != nil {
log.Error(err)
- return
}
- if autofailEnabledNew != autofailEnabledOld {
- _ = util.ToggleAutoFailover(c.Client, autofailEnabledNew,
- newcluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE],
- newcluster.ObjectMeta.Namespace)
- }
-
}
// handle standby being enabled and disabled for the cluster
diff --git a/internal/controller/pod/inithandler.go b/internal/controller/pod/inithandler.go
index 2f09dbef3c..733d301cb7 100644
--- a/internal/controller/pod/inithandler.go
+++ b/internal/controller/pod/inithandler.go
@@ -18,7 +18,6 @@ limitations under the License.
import (
"context"
"fmt"
- "strconv"
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/controller"
@@ -101,15 +100,14 @@ func (c *Controller) handleBackRestRepoInit(newPod *apiv1.Pod, cluster *crv1.Pgc
// regardless of the specific type of cluster (e.g. regular or standby) or the reason the
// cluster is being initialized (initial bootstrap or restore)
func (c *Controller) handleCommonInit(cluster *crv1.Pgcluster) error {
- // Disable autofailover in the cluster that is now "Ready" if the autofail label is set
- // to "false" on the pgcluster (i.e. label "autofail=true")
- autofailEnabled, err := strconv.ParseBool(cluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL])
- if err != nil {
- log.Error(err)
- return err
- } else if !autofailEnabled {
- _ = util.ToggleAutoFailover(c.Client, false,
- cluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE], cluster.Namespace)
+ // Disable autofailover in the cluster that is now "Ready" if autofailover
+ // is disabled for the cluster
+ if cluster.Spec.DisableAutofail {
+ // accepts the inverse
+ if err := util.ToggleAutoFailover(c.Client, !cluster.Spec.DisableAutofail,
+ cluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE], cluster.Namespace); err != nil {
+ log.Error(err)
+ }
}
if err := operator.UpdatePGHAConfigInitFlag(c.Client, false, cluster.Name,
diff --git a/internal/operator/cluster/upgrade.go b/internal/operator/cluster/upgrade.go
index 729a77e58f..639f46ae41 100644
--- a/internal/operator/cluster/upgrade.go
+++ b/internal/operator/cluster/upgrade.go
@@ -497,6 +497,13 @@ func preparePgclusterForUpgrade(pgcluster *crv1.Pgcluster, parameters map[string
delete(pgcluster.Spec.UserLabels, "service-type")
}
+ // 4.6.0 moved the "autofail" label to the DisableAutofail attribute. Given
+ // by default we need to start in an autofailover state, we just delete the
+ // legacy attribute
+ if _, ok := pgcluster.ObjectMeta.GetLabels()["autofail"]; ok {
+ delete(pgcluster.ObjectMeta.Labels, "autofail")
+ }
+
// since the current primary label is not used in this version of the Postgres Operator,
// delete it before moving on to other upgrade tasks
delete(pgcluster.ObjectMeta.Labels, config.LABEL_CURRENT_PRIMARY)
@@ -533,10 +540,10 @@ func preparePgclusterForUpgrade(pgcluster *crv1.Pgcluster, parameters map[string
// use with PostGIS enabled pgclusters
pgcluster.Spec.CCPImageTag = parameters[config.LABEL_CCP_IMAGE_KEY]
- // set a default autofail value of "true" to enable Patroni's replication. If left to an existing
- // value of "false," Patroni will be in a paused state and unable to sync all replicas to the
- // current timeline
- pgcluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL] = "true"
+ // set a default disable autofail value of "false" to enable Patroni's replication.
+ // If left to an existing value of "true," Patroni will be in a paused state
+ // and unable to sync all replicas to the current timeline
+ pgcluster.Spec.DisableAutofail = false
// Don't think we'll need to do this, but leaving the comment for now....
// pgcluster.ObjectMeta.Labels[config.LABEL_POD_ANTI_AFFINITY] = ""
diff --git a/internal/util/cluster.go b/internal/util/cluster.go
index 9bab736209..b92ea1c587 100644
--- a/internal/util/cluster.go
+++ b/internal/util/cluster.go
@@ -231,16 +231,6 @@ func CreateBackrestRepoSecrets(clientset kubernetes.Interface,
return err
}
-// IsAutofailEnabled - returns true if autofail label is set to true, false if not.
-func IsAutofailEnabled(cluster *crv1.Pgcluster) bool {
- labels := cluster.ObjectMeta.Labels
- failLabel := labels[config.LABEL_AUTOFAIL]
-
- log.Debugf("IsAutoFailEnabled: %s", failLabel)
-
- return failLabel == "true"
-}
-
// GeneratedPasswordValidUntilDays returns the value for the number of days that
// a password is valid for, which is used as part of PostgreSQL's VALID UNTIL
// directive on a user. It first determines if the user provided this value via
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index 2347cfb637..e9a84e729c 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -49,6 +49,10 @@ type PgclusterSpec struct {
CCPImagePrefix string `json:"ccpimageprefix"`
PGOImagePrefix string `json:"pgoimageprefix"`
Port string `json:"port"`
+ // DisableAutofail, if set to true, disables the autofail/HA capabilities
+ // We choose this, instead of the affirmative, so that we default to
+ // autofail being on, given we're doing some legacy CRD stuff here
+ DisableAutofail bool `json:"disableAutofail"`
// PGBadger, if set to true, enables the pgBadger sidecar
PGBadger bool `json:"pgBadger"`
PGBadgerPort string `json:"pgbadgerport"`
From 9f790f52d34cfc45489799e81aed625d7cd41cc6 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 30 Dec 2020 16:24:53 -0500
Subject: [PATCH 096/276] Move pgBackRest storage type to CRD attribute
This previously was driven by a label stored on a
pgclusters.crunchydata.com custom resource. However, this is structured
data, as well as data that drives cluster behavior, and as such it
should be moved. This adds a new attribute called "backrestStorageTypes"
which contains a list of acceptable storage types, i.e.:
- posix
- s3
"local" is kept for backwards compatibility purposes, but the upgrade
attempts to force things to "posix". Additionally, if left empty,
"posix" is assumed by the Postgres Operator.
Modifying "backrestStorageTypes" on an existing cluster currently has
no effect, though this is set up to be able to switch pgBackRest
repositories during the lifetime of a cluster.
These changes also have the effect of simplifying "pgo backup" behavior.
If the command "pgo backup " is called, it will
automagically issue backups to all available pgBackRest repositories.
For example, a cluster that uses "s3" or "posix"/"s3" repositories will
automatically have backups issued to them without specifying a target.
The ability to specify taking a backup to a specific target still
remains.
The same applies to the restore in place functionality, as the default
storage type is known.
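The defaulting and backwards-compatibility rules described above can be sketched as follows (a hypothetical helper written against the behavior this patch describes, not code lifted from it):

```go
package main

import "fmt"

// BackrestStorageType mirrors the element type of the new CRD attribute.
type BackrestStorageType string

const (
	StorageTypePosix BackrestStorageType = "posix"
	StorageTypeLocal BackrestStorageType = "local" // backwards compatibility
	StorageTypeS3    BackrestStorageType = "s3"
)

// effectiveStorageTypes is a hypothetical helper sketching the rules from
// the commit message: an empty list is treated as ["posix"], and the
// legacy "local" value is normalized to "posix".
func effectiveStorageTypes(types []BackrestStorageType) []BackrestStorageType {
	if len(types) == 0 {
		return []BackrestStorageType{StorageTypePosix}
	}
	out := make([]BackrestStorageType, 0, len(types))
	for _, t := range types {
		if t == StorageTypeLocal {
			t = StorageTypePosix
		}
		out = append(out, t)
	}
	return out
}

func main() {
	fmt.Println(effectiveStorageTypes(nil)) // prints "[posix]"
	fmt.Println(effectiveStorageTypes(
		[]BackrestStorageType{StorageTypeLocal, StorageTypeS3})) // prints "[posix s3]"
}
```

Under this model, `pgo backup` can simply iterate over the effective list and issue a backup to each repository, which is why no target needs to be specified.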
---
.../pgo-backrest-repo-sync.sh | 95 ----------
cmd/pgo/cmd/backup.go | 2 +-
cmd/pgo/cmd/create.go | 6 +-
.../content/architecture/disaster-recovery.md | 10 +-
.../multi-cluster-kubernetes.md | 2 +-
docs/content/architecture/overview.md | 2 +-
docs/content/architecture/provisioning.md | 6 +-
docs/content/custom-resources/_index.md | 15 +-
docs/content/pgo-client/common-tasks.md | 2 +-
.../pgo-client/reference/pgo_backup.md | 6 +-
.../content/pgo-client/reference/pgo_clone.md | 45 -----
.../reference/pgo_create_cluster.md | 12 +-
.../reference/pgo_create_schedule.md | 2 +-
.../pgo-client/reference/pgo_restore.md | 4 +-
docs/content/tutorial/disaster-recovery.md | 4 +-
.../apiserver/backrestservice/backrestimpl.go | 96 +++++-----
.../apiserver/clusterservice/clusterimpl.go | 60 ++++---
internal/apiserver/common.go | 64 ++++++-
internal/apiserver/common_test.go | 143 +++++++++++++++
.../apiserver/scheduleservice/scheduleimpl.go | 4 +-
internal/controller/job/backresthandler.go | 3 +-
internal/controller/pod/inithandler.go | 3 +-
internal/operator/backrest/backup.go | 34 +++-
internal/operator/backrest/repo.go | 9 +-
internal/operator/backrest/stanza.go | 27 +--
internal/operator/cluster/cluster.go | 2 +-
internal/operator/cluster/clusterlogic.go | 110 ++++++------
internal/operator/cluster/upgrade.go | 44 +++++
internal/operator/clusterutilities.go | 24 ++-
internal/operator/common.go | 59 +++++--
internal/operator/common_test.go | 165 ++++++++++++++++++
internal/util/backrest.go | 72 --------
pkg/apis/crunchydata.com/v1/cluster.go | 103 ++++++++---
pkg/apis/crunchydata.com/v1/cluster_test.go | 119 +++++++++++++
pkg/apis/crunchydata.com/v1/errors.go | 23 +++
pkg/apis/crunchydata.com/v1/task.go | 4 -
36 files changed, 911 insertions(+), 470 deletions(-)
delete mode 100644 bin/pgo-backrest-repo-sync/pgo-backrest-repo-sync.sh
delete mode 100644 docs/content/pgo-client/reference/pgo_clone.md
create mode 100644 internal/operator/common_test.go
create mode 100644 pkg/apis/crunchydata.com/v1/errors.go
diff --git a/bin/pgo-backrest-repo-sync/pgo-backrest-repo-sync.sh b/bin/pgo-backrest-repo-sync/pgo-backrest-repo-sync.sh
deleted file mode 100644
index 53e98e3a2e..0000000000
--- a/bin/pgo-backrest-repo-sync/pgo-backrest-repo-sync.sh
+++ /dev/null
@@ -1,95 +0,0 @@
-#!/bin/bash -x
-
-# Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-function trap_sigterm() {
- echo "Signal trap triggered, beginning shutdown.."
- killall sshd
-}
-
-trap 'trap_sigterm' SIGINT SIGTERM
-
-# First enable sshd prior to running rsync if using pgbackrest with a repository
-# host
-enable_sshd() {
- SSHD_CONFIG=/sshd
-
- mkdir ~/.ssh/
- cp $SSHD_CONFIG/config ~/.ssh/
- cp $SSHD_CONFIG/id_ed25519 /tmp
- chmod 400 /tmp/id_ed25519 ~/.ssh/config
-
- # start sshd which is used by pgbackrest for remote connections
- /usr/sbin/sshd -D -f $SSHD_CONFIG/sshd_config &
-
- echo "sleep 5 secs to let sshd come up before running rsync command"
- sleep 5
-}
-
-# Runs rsync to sync from a specified source directory to a target directory
-rsync_repo() {
- echo "rsync pgbackrest from ${1} to ${2}"
- # note, the "/" after the repo path is important, as we do not want to sync
- # the top level directory
- rsync -a --progress "${1}" "${2}"
- echo "finished rsync"
-}
-
-# Use the aws cli sync command to sync files from a source location to a target
-# location. This includes syncing files between two s3 locations,
-# syncing a local directory to s3, or syncing from s3 to a local directory.
-aws_sync_repo() {
- export AWS_CA_BUNDLE="${PGBACKREST_REPO1_S3_CA_FILE}"
- export AWS_ACCESS_KEY_ID="${PGBACKREST_REPO1_S3_KEY}"
- export AWS_SECRET_ACCESS_KEY="${PGBACKREST_REPO1_S3_KEY_SECRET}"
- export AWS_DEFAULT_REGION="${PGBACKREST_REPO1_S3_REGION}"
-
- echo "Executing aws s3 sync from source ${1} to target ${2}"
- aws s3 sync "${1}" "${2}"
- echo "Finished aws s3 sync"
-}
-
-# If s3 is identified as the data source, then the aws cli will be utilized to
-# sync the repo to the target location in s3. If local storage is also enabled
-# (along with s3) for the cluster, then also use the aws cli to sync the repo
-# from s3 to the target volume locally.
-#
-# If the data source is local (the default if not specified at all), then first
-# rsync the repo to the target directory locally. Then, if s3 storage is also
-# enabled (along with local), use the aws cli to sync the local repo to the
-# target s3 location.
-if [[ "${BACKREST_STORAGE_SOURCE}" == "s3" ]]
-then
- aws_source="s3://${PGBACKREST_REPO1_S3_BUCKET}${PGBACKREST_REPO1_PATH}/"
- aws_target="s3://${PGBACKREST_REPO1_S3_BUCKET}${NEW_PGBACKREST_REPO}/"
- aws_sync_repo "${aws_source}" "${aws_target}"
- if [[ "${PGHA_PGBACKREST_LOCAL_S3_STORAGE}" == "true" ]]
- then
- aws_source="s3://${PGBACKREST_REPO1_S3_BUCKET}${PGBACKREST_REPO1_PATH}/"
- aws_target="${NEW_PGBACKREST_REPO}/"
- aws_sync_repo "${aws_source}" "${aws_target}"
- fi
-else
- enable_sshd # enable sshd for rsync
-
- rsync_source="${PGBACKREST_REPO1_HOST}:${PGBACKREST_REPO1_PATH}/"
- rsync_target="$NEW_PGBACKREST_REPO"
- rsync_repo "${rsync_source}" "${rsync_target}"
- if [[ "${PGHA_PGBACKREST_LOCAL_S3_STORAGE}" == "true" ]]
- then
- aws_source="${NEW_PGBACKREST_REPO}/"
- aws_target="s3://${PGBACKREST_REPO1_S3_BUCKET}${NEW_PGBACKREST_REPO}/"
- aws_sync_repo "${aws_source}" "${aws_target}"
- fi
-fi
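The sync heuristic the deleted script implemented (comments above) can be restated compactly. A hypothetical sketch in Go, with path parameters standing in for the script's environment variables:

```go
package main

import "fmt"

// syncOp is one copy step: from a source location to a target.
type syncOp struct{ src, dst string }

// planRepoSync is a hypothetical restatement of the deleted script's
// heuristic: when the data source is s3, sync within s3 first (and then
// down to the local volume if both storage types are enabled); when the
// source is local, rsync from the repo host first (and then push the
// local repo up to s3 if both are enabled).
func planRepoSync(sourceIsS3, localAndS3 bool,
	s3Repo, s3NewRepo, hostRepo, localNewRepo string) []syncOp {
	if sourceIsS3 {
		ops := []syncOp{{s3Repo, s3NewRepo}}
		if localAndS3 {
			ops = append(ops, syncOp{s3Repo, localNewRepo})
		}
		return ops
	}
	ops := []syncOp{{hostRepo, localNewRepo}}
	if localAndS3 {
		ops = append(ops, syncOp{localNewRepo, s3NewRepo})
	}
	return ops
}

func main() {
	// a "posix,s3" cluster whose data source is the local repo host
	for _, op := range planRepoSync(false, true,
		"s3://bucket/old/", "s3://bucket/new/", "repo-host:/old/", "/new") {
		fmt.Printf("sync %s -> %s\n", op.src, op.dst)
	}
}
```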
diff --git a/cmd/pgo/cmd/backup.go b/cmd/pgo/cmd/backup.go
index d70877a198..217c689f8e 100644
--- a/cmd/pgo/cmd/backup.go
+++ b/cmd/pgo/cmd/backup.go
@@ -91,7 +91,7 @@ func init() {
backupCmd.Flags().StringVarP(&PVCName, "pvc-name", "", "", "The PVC name to use for the backup instead of the default.")
backupCmd.Flags().StringVarP(&PGDumpDB, "database", "d", "postgres", "The name of the database pgdump will backup.")
backupCmd.Flags().StringVar(&backupType, "backup-type", "pgbackrest", "The backup type to perform. Default is pgbackrest. Valid backup types are pgbackrest and pgdump.")
- backupCmd.Flags().StringVarP(&BackrestStorageType, "pgbackrest-storage-type", "", "", "The type of storage to use when scheduling pgBackRest backups. Either \"local\", \"s3\" or both, comma separated. (default \"local\")")
+ backupCmd.Flags().StringVarP(&BackrestStorageType, "pgbackrest-storage-type", "", "", "The type of storage to use when scheduling pgBackRest backups. Either \"posix\", \"s3\" or both, comma separated. (default \"posix\")")
}
// deleteBackup ....
diff --git a/cmd/pgo/cmd/create.go b/cmd/pgo/cmd/create.go
index aff3f53dca..6b798e4a23 100644
--- a/cmd/pgo/cmd/create.go
+++ b/cmd/pgo/cmd/create.go
@@ -410,7 +410,7 @@ func init() {
createClusterCmd.Flags().StringVar(&BackrestMemoryLimit, "pgbackrest-memory-limit", "", "Set the amount of memory to limit for "+
"the pgBackRest repository.")
createClusterCmd.Flags().StringVarP(&BackrestPVCSize, "pgbackrest-pvc-size", "", "",
- `The size of the PVC capacity for the pgBackRest repository. Overrides the value set in the storage class. This is ignored if the storage type of "local" is not used. Must follow the standard Kubernetes format, e.g. "10.1Gi"`)
+ `The size of the PVC capacity for the pgBackRest repository. Overrides the value set in the storage class. This is ignored if the storage type of "posix" is not used. Must follow the standard Kubernetes format, e.g. "10.1Gi"`)
createClusterCmd.Flags().StringVarP(&BackrestRepoPath, "pgbackrest-repo-path", "", "",
"The pgBackRest repository path that should be utilized instead of the default. Required "+
"for standby\nclusters to define the location of an existing pgBackRest repository.")
@@ -435,7 +435,7 @@ func init() {
createClusterCmd.Flags().StringVarP(&BackrestS3URIStyle, "pgbackrest-s3-uri-style", "", "", "Specifies whether \"host\" or \"path\" style URIs will be used when connecting to S3.")
createClusterCmd.Flags().BoolVarP(&BackrestS3VerifyTLS, "pgbackrest-s3-verify-tls", "", true, "This sets if pgBackRest should verify the TLS certificate when connecting to S3. To disable, use \"--pgbackrest-s3-verify-tls=false\".")
createClusterCmd.Flags().StringVar(&BackrestStorageConfig, "pgbackrest-storage-config", "", "The name of the storage config in pgo.yaml to use for the pgBackRest local repository.")
- createClusterCmd.Flags().StringVarP(&BackrestStorageType, "pgbackrest-storage-type", "", "", "The type of storage to use with pgBackRest. Either \"local\", \"s3\" or both, comma separated. (default \"local\")")
+ createClusterCmd.Flags().StringVarP(&BackrestStorageType, "pgbackrest-storage-type", "", "", "The type of storage to use with pgBackRest. Either \"posix\", \"s3\" or both, comma separated. (default \"posix\")")
createClusterCmd.Flags().BoolVarP(&BadgerFlag, "pgbadger", "", false, "Adds the crunchy-pgbadger container to the database pod.")
createClusterCmd.Flags().BoolVarP(&PgbouncerFlag, "pgbouncer", "", false, "Adds a crunchy-pgbouncer deployment to the cluster.")
createClusterCmd.Flags().StringVar(&PgBouncerCPURequest, "pgbouncer-cpu", "", "Set the number of millicores to request for CPU "+
@@ -540,7 +540,7 @@ func init() {
// "pgo create schedule" flags
createScheduleCmd.Flags().StringVarP(&ScheduleDatabase, "database", "", "", "The database to run the SQL policy against.")
createScheduleCmd.Flags().StringVarP(&PGBackRestType, "pgbackrest-backup-type", "", "", "The type of pgBackRest backup to schedule (full, diff or incr).")
- createScheduleCmd.Flags().StringVarP(&BackrestStorageType, "pgbackrest-storage-type", "", "", "The type of storage to use when scheduling pgBackRest backups. Either \"local\", \"s3\" or both, comma separated. (default \"local\")")
+ createScheduleCmd.Flags().StringVarP(&BackrestStorageType, "pgbackrest-storage-type", "", "", "The type of storage to use when scheduling pgBackRest backups. Either \"posix\", \"s3\" or both, comma separated. (default \"posix\")")
createScheduleCmd.Flags().StringVarP(&CCPImageTag, "ccp-image-tag", "c", "", "The CCPImageTag to use for cluster creation. If specified, overrides the pgo.yaml setting.")
createScheduleCmd.Flags().StringVarP(&SchedulePolicy, "policy", "", "", "The policy to use for SQL schedules.")
createScheduleCmd.Flags().StringVarP(&Schedule, "schedule", "", "", "The schedule assigned to the cron task.")
diff --git a/docs/content/architecture/disaster-recovery.md b/docs/content/architecture/disaster-recovery.md
index 7d3639e0fc..e49a1be579 100644
--- a/docs/content/architecture/disaster-recovery.md
+++ b/docs/content/architecture/disaster-recovery.md
@@ -36,10 +36,10 @@ At PostgreSQL cluster creation time, you can specify a specific Storage Class
for the pgBackRest repository. Additionally, you can also specify the type of
pgBackRest repository that can be used, including:
-- `local`: Uses the storage that is provided by the Kubernetes cluster's Storage
+- `posix`: Uses the storage that is provided by the Kubernetes cluster's Storage
Class that you select
- `s3`: Use Amazon S3 or an object storage system that uses the S3 protocol
-- `local,s3`: Use both the storage that is provided by the Kubernetes cluster's
+- `posix,s3`: Use both the storage that is provided by the Kubernetes cluster's
Storage Class that you select AND Amazon S3 (or equivalent object storage system
that uses the S3 protocol)
@@ -300,7 +300,7 @@ stored in Kubernetes Secrets and are securely mounted to the PostgreSQL
clusters.
To enable a PostgreSQL cluster to use S3, the `--pgbackrest-storage-type` on the
-`pgo create cluster` command needs to be set to `s3` or `local,s3`.
+`pgo create cluster` command needs to be set to `s3` or `posix,s3`.
Once configured, the `pgo backup` and `pgo restore` commands will work with S3
similarly to the above!
@@ -325,7 +325,7 @@ example pgBackRest repository in the state shown after running the
```
cluster: hippo
-storage type: local
+storage type: posix
stanza: db
status: ok
@@ -377,7 +377,7 @@ Verify the backup is deleted with `pgo show backup hippo`:
```
cluster: hippo
-storage type: local
+storage type: posix
stanza: db
status: ok
diff --git a/docs/content/architecture/high-availability/multi-cluster-kubernetes.md b/docs/content/architecture/high-availability/multi-cluster-kubernetes.md
index f2be1e03c5..7944056673 100644
--- a/docs/content/architecture/high-availability/multi-cluster-kubernetes.md
+++ b/docs/content/architecture/high-availability/multi-cluster-kubernetes.md
@@ -142,7 +142,7 @@ pgBackRest. For example:
```
pgo create cluster hippo --pgbouncer --replica-count=2 \
- --pgbackrest-storage-type=local,s3 \
+ --pgbackrest-storage-type=posix,s3 \
--pgbackrest-s3-key= \
--pgbackrest-s3-key-secret= \
--pgbackrest-s3-bucket=watering-hole \
diff --git a/docs/content/architecture/overview.md b/docs/content/architecture/overview.md
index 9365787ba8..bf12101df1 100644
--- a/docs/content/architecture/overview.md
+++ b/docs/content/architecture/overview.md
@@ -78,7 +78,7 @@ built-in metrics and connection pooling, similar to:
We can accomplish that with a single command:
```shell
-pgo create cluster hacluster --replica-count=1 --metrics --pgbackrest-storage-type="local,s3" --pgbouncer --pgbadger
+pgo create cluster hacluster --replica-count=1 --metrics --pgbackrest-storage-type="posix,s3" --pgbouncer --pgbadger
```
The PostgreSQL Operator handles setting up all of the various Deployments and
diff --git a/docs/content/architecture/provisioning.md b/docs/content/architecture/provisioning.md
index 23734ee180..a43d4baaba 100644
--- a/docs/content/architecture/provisioning.md
+++ b/docs/content/architecture/provisioning.md
@@ -49,8 +49,8 @@ allowing them to replay old WAL logs
backups and perform full and point-in-time restores
The pgBackRest repository can be configured to use storage that resides within
-the Kubernetes cluster (the `local` option), Amazon S3 or a storage system that
-uses the S3 protocol (the `s3` option), or both (`local,s3`).
+the Kubernetes cluster (the `posix` option), Amazon S3 or a storage system that
+uses the S3 protocol (the `s3` option), or both (`posix,s3`).
Once the PostgreSQL primary instance is ready, there are two follow up actions
that the PostgreSQL Operator takes to properly leverage the pgBackRest
@@ -147,7 +147,7 @@ pgo create cluster mycluster2 --restore-from=mycluster1
```
By default, pgBackRest will restore the latest backup available in the repository, and will replay
-all available WAL archives. However, additional pgBackRest options can be specified using the
+all available WAL archives. However, additional pgBackRest options can be specified using the
`restore-opts` option, which allows the restore command to be further tailored and customized. For
instance, the following demonstrates how a point-in-time restore can be utilized when creating a
new cluster:
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index 8683722431..abc7706284 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -76,13 +76,10 @@ metadata:
annotations:
current-primary: ${pgo_cluster_name}
labels:
- autofail: "true"
- crunchy-pgbadger: "false"
crunchy-pgha-scope: ${pgo_cluster_name}
deployment-name: ${pgo_cluster_name}
name: ${pgo_cluster_name}
pg-cluster: ${pgo_cluster_name}
- pg-pod-anti-affinity: ""
pgo-version: {{< param operatorVersion >}}
pgouser: admin
name: ${pgo_cluster_name}
@@ -267,14 +264,10 @@ metadata:
annotations:
current-primary: ${pgo_cluster_name}
labels:
- autofail: "true"
- backrest-storage-type: "s3"
- crunchy-pgbadger: "false"
crunchy-pgha-scope: ${pgo_cluster_name}
deployment-name: ${pgo_cluster_name}
name: ${pgo_cluster_name}
pg-cluster: ${pgo_cluster_name}
- pg-pod-anti-affinity: ""
pgo-version: {{< param operatorVersion >}}
pgouser: admin
name: ${pgo_cluster_name}
@@ -305,6 +298,8 @@ spec:
storagetype: dynamic
supplementalgroups: ""
annotations: {}
+ backrestStorageTypes:
+ - s3
backrestS3Bucket: ${backrest_s3_bucket}
backrestS3Endpoint: ${backrest_s3_endpoint}
backrestS3Region: ${backrest_s3_region}
@@ -332,7 +327,6 @@ spec:
tolerations: []
user: hippo
userlabels:
- backrest-storage-type: "s3"
pgo-version: {{< param operatorVersion >}}
EOF
@@ -397,13 +391,10 @@ metadata:
annotations:
current-primary: ${pgo_cluster_name}
labels:
- autofail: "true"
- crunchy-pgbadger: "false"
crunchy-pgha-scope: ${pgo_cluster_name}
deployment-name: ${pgo_cluster_name}
name: ${pgo_cluster_name}
pg-cluster: ${pgo_cluster_name}
- pg-pod-anti-affinity: ""
pgo-version: {{< param operatorVersion >}}
pgouser: admin
name: ${pgo_cluster_name}
@@ -554,7 +545,6 @@ spec:
userlabels:
NodeLabelKey: ""
NodeLabelValue: ""
- pg-pod-anti-affinity: ""
pgo-version: {{< param operatorVersion >}}
EOF
@@ -740,6 +730,7 @@ make changes, as described below.
| BackrestS3Bucket | `create` | An optional parameter that specifies a S3 bucket that pgBackRest should use. |
| BackrestS3Endpoint | `create` | An optional parameter that specifies the S3 endpoint pgBackRest should use. |
| BackrestS3Region | `create` | An optional parameter that specifies a cloud region that pgBackRest should use. |
+| BackrestStorageTypes | `create` | An optional parameter that takes an array of repository types that can be used to store pgBackRest backups. Choices are `posix` and `s3`. If nothing is specified, it defaults to `posix`. (`local`, equivalent to `posix`, is available for backwards compatibility.) |
| BackrestS3URIStyle | `create` | An optional parameter that specifies if pgBackRest should use the `path` or `host` S3 URI style. |
| BackrestS3VerifyTLS | `create` | An optional parameter that specifies if pgBackRest should verify the TLS endpoint. |
| BackrestStorage | `create` | A specification that gives information about the storage attributes for the pgBackRest repository, which stores backups and archives, of the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This is required. |
diff --git a/docs/content/pgo-client/common-tasks.md b/docs/content/pgo-client/common-tasks.md
index 02243ea62f..64ffd26a6a 100644
--- a/docs/content/pgo-client/common-tasks.md
+++ b/docs/content/pgo-client/common-tasks.md
@@ -1262,7 +1262,7 @@ specifications:
```shell
pgo create cluster hippo --pgbouncer --replica-count=2 \
- --pgbackrest-storage-type=local,s3 \
+ --pgbackrest-storage-type=posix,s3 \
--pgbackrest-s3-key= \
--pgbackrest-s3-key-secret= \
--pgbackrest-s3-bucket=watering-hole \
diff --git a/docs/content/pgo-client/reference/pgo_backup.md b/docs/content/pgo-client/reference/pgo_backup.md
index 0e4c65a530..d8e028bf57 100644
--- a/docs/content/pgo-client/reference/pgo_backup.md
+++ b/docs/content/pgo-client/reference/pgo_backup.md
@@ -22,7 +22,7 @@ pgo backup [flags]
--backup-type string The backup type to perform. Default is pgbackrest. Valid backup types are pgbackrest and pgdump. (default "pgbackrest")
-d, --database string The name of the database pgdump will backup. (default "postgres")
-h, --help help for backup
- --pgbackrest-storage-type string The type of storage to use when scheduling pgBackRest backups. Either "local", "s3" or both, comma separated. (default "local")
+ --pgbackrest-storage-type string The type of storage to use when scheduling pgBackRest backups. Either "posix", "s3" or both, comma separated. (default "posix")
--pvc-name string The PVC name to use for the backup instead of the default.
-s, --selector string The selector to use for cluster filtering.
```
@@ -30,7 +30,7 @@ pgo backup [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -44,4 +44,4 @@ pgo backup [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 30-Dec-2020
diff --git a/docs/content/pgo-client/reference/pgo_clone.md b/docs/content/pgo-client/reference/pgo_clone.md
deleted file mode 100644
index 6f07741010..0000000000
--- a/docs/content/pgo-client/reference/pgo_clone.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: "pgo clone"
----
-## pgo clone
-
-Copies the primary database of an existing cluster to a new cluster
-
-### Synopsis
-
-Clone makes a copy of an existing PostgreSQL cluster managed by the Operator and creates a new PostgreSQL cluster managed by the Operator, with the data from the old cluster.
-
- pgo clone oldcluster newcluster
-
-```
-pgo clone [flags]
-```
-
-### Options
-
-```
-      --enable-metrics                    If set, enables metrics collection on the newly cloned cluster
- -h, --help help for clone
- --pgbackrest-pvc-size string The size of the PVC capacity for the pgBackRest repository. Overrides the value set in the storage class. This is ignored if the storage type of "local" is not used. Must follow the standard Kubernetes format, e.g. "10.1Gi"
- --pgbackrest-storage-source string The data source for the clone when both "local" and "s3" are enabled in the source cluster. Either "local", "s3" or both, comma separated. (default "local")
- --pvc-size string The size of the PVC capacity for primary and replica PostgreSQL instances. Overrides the value set in the storage class. Must follow the standard Kubernetes format, e.g. "10.1Gi"
-```
-
-### Options inherited from parent commands
-
-```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
- --debug Enable additional output for debugging.
- --disable-tls Disable TLS authentication to the Postgres Operator.
- --exclude-os-trust Exclude CA certs from OS default trust store
- -n, --namespace string The namespace to use for pgo requests.
- --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver.
- --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver.
- --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver.
-```
-
-### SEE ALSO
-
-* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-
-###### Auto generated by spf13/cobra on 2-Jul-2020
diff --git a/docs/content/pgo-client/reference/pgo_create_cluster.md b/docs/content/pgo-client/reference/pgo_create_cluster.md
index efc7edc738..36000d4253 100644
--- a/docs/content/pgo-client/reference/pgo_create_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_create_cluster.md
@@ -21,7 +21,7 @@ pgo create cluster [flags]
--annotation strings Add an Annotation to all of the managed deployments (PostgreSQL, pgBackRest, pgBouncer)
The format to add an annotation is "name=value"
The format to remove an annotation is "name-"
-
+
For example, to add two annotations: "--annotation=hippo=awesome,elephant=cool"
--annotation-pgbackrest strings Add an Annotation specifically to pgBackRest deployments
The format to add an annotation is "name=value"
@@ -59,7 +59,7 @@ pgo create cluster [flags]
--pgbackrest-custom-config string The name of a ConfigMap containing pgBackRest configuration files.
--pgbackrest-memory string Set the amount of memory to request for the pgBackRest repository. Defaults to server value (48Mi).
--pgbackrest-memory-limit string Set the amount of memory to limit for the pgBackRest repository.
- --pgbackrest-pvc-size string The size of the PVC capacity for the pgBackRest repository. Overrides the value set in the storage class. This is ignored if the storage type of "local" is not used. Must follow the standard Kubernetes format, e.g. "10.1Gi"
+ --pgbackrest-pvc-size string The size of the PVC capacity for the pgBackRest repository. Overrides the value set in the storage class. This is ignored if the storage type of "posix" is not used. Must follow the standard Kubernetes format, e.g. "10.1Gi"
--pgbackrest-repo-path string The pgBackRest repository path that should be utilized instead of the default. Required for standby
clusters to define the location of an existing pgBackRest repository.
--pgbackrest-s3-bucket string The AWS S3 bucket that should be utilized for the cluster when the "s3" storage type is enabled for pgBackRest.
@@ -71,7 +71,7 @@ pgo create cluster [flags]
--pgbackrest-s3-uri-style string Specifies whether "host" or "path" style URIs will be used when connecting to S3.
--pgbackrest-s3-verify-tls This sets if pgBackRest should verify the TLS certificate when connecting to S3. To disable, use "--pgbackrest-s3-verify-tls=false". (default true)
--pgbackrest-storage-config string The name of the storage config in pgo.yaml to use for the pgBackRest local repository.
- --pgbackrest-storage-type string The type of storage to use with pgBackRest. Either "local", "s3" or both, comma separated. (default "local")
+ --pgbackrest-storage-type string The type of storage to use with pgBackRest. Either "posix", "s3" or both, comma separated. (default "posix")
--pgbadger Adds the crunchy-pgbadger container to the database pod.
--pgbouncer Adds a crunchy-pgbouncer deployment to the cluster.
--pgbouncer-cpu string Set the number of millicores to request for CPU for pgBouncer. Defaults to being unset.
@@ -100,13 +100,13 @@ pgo create cluster [flags]
--storage-config string The name of a Storage config in pgo.yaml to use for the cluster storage.
--sync-replication Enables synchronous replication for the cluster.
--tablespace strings Create a PostgreSQL tablespace on the cluster, e.g. "name=ts1:storageconfig=nfsstorage". The format is a key/value map that is delimited by "=" and separated by ":". The following parameters are available:
-
+
- name (required): the name of the PostgreSQL tablespace
- storageconfig (required): the storage configuration to use, as specified in the list available in the "pgo-config" ConfigMap (aka "pgo.yaml")
- pvcsize: the size of the PVC capacity, which overrides the value set in the specified storageconfig. Follows the Kubernetes quantity format.
-
+
For example, to create a tablespace with the NFS storage configuration with a PVC of size 10GiB:
-
+
--tablespace=name=ts1:storageconfig=nfsstorage:pvcsize=10Gi
--tls-only If true, forces all PostgreSQL connections to be over TLS. Must also set "server-tls-secret" and "server-ca-secret"
--toleration strings Set Pod tolerations for each PostgreSQL instance in a cluster.
diff --git a/docs/content/pgo-client/reference/pgo_create_schedule.md b/docs/content/pgo-client/reference/pgo_create_schedule.md
index 4aeb07fe88..6549cfe588 100644
--- a/docs/content/pgo-client/reference/pgo_create_schedule.md
+++ b/docs/content/pgo-client/reference/pgo_create_schedule.md
@@ -22,7 +22,7 @@ pgo create schedule [flags]
--database string The database to run the SQL policy against.
-h, --help help for schedule
--pgbackrest-backup-type string The type of pgBackRest backup to schedule (full, diff or incr).
- --pgbackrest-storage-type string The type of storage to use when scheduling pgBackRest backups. Either "local", "s3" or both, comma separated. (default "local")
+ --pgbackrest-storage-type string The type of storage to use when scheduling pgBackRest backups. Either "posix", "s3" or both, comma separated. (default "posix")
--policy string The policy to use for SQL schedules.
--schedule string The schedule assigned to the cron task.
--schedule-opts string The custom options passed to the create schedule API.
diff --git a/docs/content/pgo-client/reference/pgo_restore.md b/docs/content/pgo-client/reference/pgo_restore.md
index 2d8561c64d..e7e377c914 100644
--- a/docs/content/pgo-client/reference/pgo_restore.md
+++ b/docs/content/pgo-client/reference/pgo_restore.md
@@ -9,7 +9,7 @@ Perform a restore from previous backup
RESTORE performs a restore to a new PostgreSQL cluster. This includes stopping the database and recreating a new primary with the restored data. Valid backup types to restore from are pgbackrest and pgdump. For example:
- pgo restore mycluster
+ pgo restore mycluster
```
pgo restore [flags]
@@ -24,7 +24,7 @@ pgo restore [flags]
-h, --help help for restore
--no-prompt No command line confirmation.
--node-label string The node label (key=value) to use when scheduling the restore job, and in the case of a pgBackRest restore, also the new (i.e. restored) primary deployment. If not set, any node is used.
- --pgbackrest-storage-type string The type of storage to use for a pgBackRest restore. Either "local", "s3". (default "local")
+ --pgbackrest-storage-type string The type of storage to use for a pgBackRest restore. Either "posix", "s3". (default "posix")
-d, --pgdump-database string The name of the database pgdump will restore. (default "postgres")
--pitr-target string The PITR target, being a PostgreSQL timestamp such as '2018-08-13 11:25:42.582117-04'.
```
diff --git a/docs/content/tutorial/disaster-recovery.md b/docs/content/tutorial/disaster-recovery.md
index 0ad19d0fe3..2d87319ec9 100644
--- a/docs/content/tutorial/disaster-recovery.md
+++ b/docs/content/tutorial/disaster-recovery.md
@@ -200,7 +200,7 @@ Let's say that the `hippo` cluster currently has a set of backups that look like
```
cluster: hippo
-storage type: local
+storage type: posix
stanza: db
status: ok
@@ -250,7 +250,7 @@ You can then verify the backup is deleted with `pgo show backup hippo`:
```
cluster: hippo
-storage type: local
+storage type: posix
stanza: db
status: ok
diff --git a/internal/apiserver/backrestservice/backrestimpl.go b/internal/apiserver/backrestservice/backrestimpl.go
index 1d545e387e..e743925d95 100644
--- a/internal/apiserver/backrestservice/backrestimpl.go
+++ b/internal/apiserver/backrestservice/backrestimpl.go
@@ -25,15 +25,15 @@ import (
"strings"
"time"
- "github.com/crunchydata/postgres-operator/internal/apiserver/backupoptions"
- "github.com/crunchydata/postgres-operator/internal/operator"
- "github.com/crunchydata/postgres-operator/internal/util"
-
"github.com/crunchydata/postgres-operator/internal/apiserver"
+ "github.com/crunchydata/postgres-operator/internal/apiserver/backupoptions"
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/kubeapi"
+ "github.com/crunchydata/postgres-operator/internal/operator"
+ "github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
+
log "github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -150,12 +150,14 @@ func CreateBackup(request *msgs.CreateBackrestBackupRequest, ns, pgouser string)
return resp
}
- err = util.ValidateBackrestStorageTypeOnBackupRestore(request.BackrestStorageType,
- cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], false)
- if err != nil {
- resp.Status.Code = msgs.Error
- resp.Status.Msg = err.Error()
- return resp
+ // if a specific pgBackRest storage type was passed in to perform the
+ // backup, validate that this cluster can support it
+ if request.BackrestStorageType != "" {
+ if err := apiserver.ValidateBackrestStorageTypeForCommand(cluster, request.BackrestStorageType); err != nil {
+ resp.Status.Code = msgs.Error
+ resp.Status.Msg = err.Error()
+ return resp
+ }
}
err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(ctx, taskName, metav1.DeleteOptions{})
@@ -428,26 +430,17 @@ func ShowBackrest(name, selector, ns string) msgs.ShowBackrestResponse {
// so we potentially add two "pieces of detail" based on whether or not we
// have a local repository, a s3 repository, or both
- storageTypes := c.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]
-
- for _, storageType := range apiserver.GetBackrestStorageTypes() {
-
- // so the way we currently store the different repos is not ideal, and
- // this is not being fixed right now, so we'll follow this logic:
- //
- // 1. If storage type is "local" and the string either contains "local" or
- // is empty, we can add the pgBackRest info
- // 2. if the storage type is "s3" and the string contains "s3", we can
- // add the pgBackRest info
- // 3. Otherwise, continue
- if (storageTypes == "" && storageType != "local") || (storageTypes != "" && !strings.Contains(storageTypes, storageType)) {
- continue
- }
+ storageTypes := c.Spec.BackrestStorageTypes
+ // if this happens to be empty, then the storage type is "posix"
+ if len(storageTypes) == 0 {
+ storageTypes = append(storageTypes, crv1.BackrestStorageTypePosix)
+ }
+ for _, storageType := range storageTypes {
// begin preparing the detailed response
detail := msgs.ShowBackrestDetail{
Name: c.Name,
- StorageType: storageType,
+ StorageType: string(storageType),
}
verifyTLS, _ := strconv.ParseBool(operator.GetS3VerifyTLSSetting(c))
@@ -480,12 +473,12 @@ func ShowBackrest(name, selector, ns string) msgs.ShowBackrestResponse {
return response
}
-func getInfo(storageType, podname, ns string, verifyTLS bool) (string, error) {
+func getInfo(storageType crv1.BackrestStorageType, podname, ns string, verifyTLS bool) (string, error) {
log.Debug("backrest info command requested")
cmd := pgBackRestInfoCommand
- if storageType == "s3" {
+ if storageType == crv1.BackrestStorageTypeS3 {
cmd = append(cmd, repoTypeFlagS3...)
if !verifyTLS {
@@ -555,24 +548,21 @@ func Restore(request *msgs.RestoreRequest, ns, pgouser string) msgs.RestoreRespo
return resp
}
- // ensure the backrest storage type specified for the backup is valid and enabled in the
- // cluster
- err = util.ValidateBackrestStorageTypeOnBackupRestore(request.BackrestStorageType,
- cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], true)
- if err != nil {
+ // ensure the backrest storage type specified for the backup is valid and
+ // enabled in the cluster
+ if err := apiserver.ValidateBackrestStorageTypeForCommand(cluster, request.BackrestStorageType); err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
return resp
}
- var id string
- id, err = createRestoreWorkflowTask(cluster.Name, ns)
+ id, err := createRestoreWorkflowTask(cluster)
if err != nil {
resp.Results = append(resp.Results, err.Error())
return resp
}
- pgtask, err := getRestoreParams(request, ns)
+ pgtask, err := getRestoreParams(cluster, request)
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
@@ -607,18 +597,24 @@ func Restore(request *msgs.RestoreRequest, ns, pgouser string) msgs.RestoreRespo
return resp
}
-func getRestoreParams(request *msgs.RestoreRequest, ns string) (*crv1.Pgtask, error) {
+func getRestoreParams(cluster *crv1.Pgcluster, request *msgs.RestoreRequest) (*crv1.Pgtask, error) {
var newInstance *crv1.Pgtask
spec := crv1.PgtaskSpec{}
- spec.Namespace = ns
- spec.Name = "backrest-restore-" + request.FromCluster
+ spec.Namespace = cluster.Namespace
+ spec.Name = "backrest-restore-" + cluster.Name
spec.TaskType = crv1.PgtaskBackrestRestore
spec.Parameters = make(map[string]string)
- spec.Parameters[config.LABEL_BACKREST_RESTORE_FROM_CLUSTER] = request.FromCluster
+ spec.Parameters[config.LABEL_BACKREST_RESTORE_FROM_CLUSTER] = cluster.Name
spec.Parameters[config.LABEL_BACKREST_RESTORE_OPTS] = request.RestoreOpts
spec.Parameters[config.LABEL_BACKREST_PITR_TARGET] = request.PITRTarget
- spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE] = request.BackrestStorageType
+
+ // get the repository to restore from. if not explicitly provided, the default
+ // for the cluster is used
+ spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE] = string(operator.GetRepoType(cluster))
+ if request.BackrestStorageType != "" {
+ spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE] = request.BackrestStorageType
+ }
// validate & parse nodeLabel if exists
if request.NodeLabel != "" {
@@ -635,7 +631,7 @@ func getRestoreParams(request *msgs.RestoreRequest, ns string) (*crv1.Pgtask, er
newInstance = &crv1.Pgtask{
ObjectMeta: metav1.ObjectMeta{
- Labels: map[string]string{config.LABEL_PG_CLUSTER: request.FromCluster},
+ Labels: map[string]string{config.LABEL_PG_CLUSTER: cluster.Name},
Name: spec.Name,
},
Spec: spec,
@@ -643,26 +639,26 @@ func getRestoreParams(request *msgs.RestoreRequest, ns string) (*crv1.Pgtask, er
return newInstance, nil
}
-func createRestoreWorkflowTask(clusterName, ns string) (string, error) {
+func createRestoreWorkflowTask(cluster *crv1.Pgcluster) (string, error) {
ctx := context.TODO()
- taskName := clusterName + "-" + crv1.PgtaskWorkflowBackrestRestoreType
+ taskName := cluster.Name + "-" + crv1.PgtaskWorkflowBackrestRestoreType
// delete any existing pgtask with the same name
- if err := apiserver.Clientset.CrunchydataV1().Pgtasks(ns).
+ if err := apiserver.Clientset.CrunchydataV1().Pgtasks(cluster.Namespace).
Delete(ctx, taskName, metav1.DeleteOptions{}); err != nil && !kubeapi.IsNotFound(err) {
return "", err
}
// create pgtask CRD
spec := crv1.PgtaskSpec{}
- spec.Namespace = ns
- spec.Name = clusterName + "-" + crv1.PgtaskWorkflowBackrestRestoreType
+ spec.Namespace = cluster.Namespace
+ spec.Name = cluster.Name + "-" + crv1.PgtaskWorkflowBackrestRestoreType
spec.TaskType = crv1.PgtaskWorkflow
spec.Parameters = make(map[string]string)
spec.Parameters[crv1.PgtaskWorkflowSubmittedStatus] = time.Now().Format(time.RFC3339)
- spec.Parameters[config.LABEL_PG_CLUSTER] = clusterName
+ spec.Parameters[config.LABEL_PG_CLUSTER] = cluster.Name
u, err := ioutil.ReadFile("/proc/sys/kernel/random/uuid")
if err != nil {
@@ -678,10 +674,10 @@ func createRestoreWorkflowTask(clusterName, ns string) (string, error) {
Spec: spec,
}
newInstance.ObjectMeta.Labels = make(map[string]string)
- newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] = clusterName
+ newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] = cluster.Name
newInstance.ObjectMeta.Labels[crv1.PgtaskWorkflowID] = spec.Parameters[crv1.PgtaskWorkflowID]
- if _, err := apiserver.Clientset.CrunchydataV1().Pgtasks(ns).
+ if _, err := apiserver.Clientset.CrunchydataV1().Pgtasks(cluster.Namespace).
Create(ctx, newInstance, metav1.CreateOptions{}); err != nil {
log.Error(err)
return "", err
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 56f228bfa6..8307434c80 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -750,18 +750,13 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
// ensure the backrest storage type specified for the cluster is valid, and that the
// configuration required to use that storage type (e.g. a bucket, endpoint and region
// when using aws s3 storage) has been provided
- err = validateBackrestStorageTypeOnCreate(request)
+ backrestStorageTypes, err := validateBackrestStorageTypeOnCreate(request)
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
return resp
}
- if request.BackrestStorageType != "" {
- log.Debug("using backrest storage type provided by user")
- userLabelsMap[config.LABEL_BACKREST_STORAGE_TYPE] = request.BackrestStorageType
- }
-
// if a value for BackrestStorageConfig is provided, validate it here
if request.BackrestStorageConfig != "" && !apiserver.IsValidStorageName(request.BackrestStorageConfig) {
resp.Status.Code = msgs.Error
@@ -872,6 +867,7 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
// Create an instance of our CRD
newInstance := getClusterParams(request, clusterName, userLabelsMap, ns)
newInstance.ObjectMeta.Labels[config.LABEL_PGOUSER] = pgouser
+ newInstance.Spec.BackrestStorageTypes = backrestStorageTypes
if request.SecretFrom != "" {
err = validateSecretFrom(newInstance, request.SecretFrom)
@@ -1905,12 +1901,18 @@ func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterRespons
}
// return an error if attempting to enable standby for a cluster that does not have the
// required S3 settings
- if cluster.Spec.Standby &&
- !strings.Contains(cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], "s3") {
- response.Status.Code = msgs.Error
- response.Status.Msg = "Backrest storage type 's3' must be enabled in order to enable " +
- "standby mode"
- return response
+ if cluster.Spec.Standby {
+ s3Enabled := false
+ for _, storageType := range cluster.Spec.BackrestStorageTypes {
+ s3Enabled = s3Enabled || (storageType == crv1.BackrestStorageTypeS3)
+ }
+
+ if !s3Enabled {
+ response.Status.Code = msgs.Error
+ response.Status.Msg = "Backrest storage type 's3' must be enabled in order to enable " +
+ "standby mode"
+ return response
+ }
}
// if a startup or shutdown was requested then update the pgcluster spec accordingly
@@ -2132,19 +2134,33 @@ func setClusterAnnotationGroup(annotationGroup, annotations map[string]string) {
// validateBackrestStorageTypeOnCreate validates the pgbackrest storage type specified when
// a new cluster. This includes ensuring the type provided is valid, and that the required
// configuration settings (s3 bucket, region, etc.) are also present
-func validateBackrestStorageTypeOnCreate(request *msgs.CreateClusterRequest) error {
- requestBackRestStorageType := request.BackrestStorageType
+func validateBackrestStorageTypeOnCreate(request *msgs.CreateClusterRequest) ([]crv1.BackrestStorageType, error) {
+ storageTypes, err := crv1.ParseBackrestStorageTypes(request.BackrestStorageType)
- if requestBackRestStorageType != "" && !util.IsValidBackrestStorageType(requestBackRestStorageType) {
- return fmt.Errorf("Invalid value provided for pgBackRest storage type. The following values are allowed: %s",
- "\""+strings.Join(apiserver.GetBackrestStorageTypes(), "\", \"")+"\"")
- } else if strings.Contains(requestBackRestStorageType, "s3") && isMissingS3Config(request) {
- return errors.New("A configuration setting for AWS S3 storage is missing. Values must be " +
- "provided for the S3 bucket, S3 endpoint and S3 region in order to use the 's3' " +
- "storage type with pgBackRest.")
+ if err != nil {
+ // if the error is due to no storage types being selected, return an empty
+ // storage type slice. otherwise, return the error
+ if errors.Is(err, crv1.ErrStorageTypesEmpty) {
+ return []crv1.BackrestStorageType{}, nil
+ }
+
+ return nil, err
}
- return nil
+ // a special check -- if S3 storage is included, check to see if all of the
+ // appropriate settings are in place
+ for _, storageType := range storageTypes {
+ if storageType == crv1.BackrestStorageTypeS3 {
+ if isMissingS3Config(request) {
+ return nil, fmt.Errorf("A configuration setting for AWS S3 storage is missing. Values must be " +
+ "provided for the S3 bucket, S3 endpoint and S3 region in order to use the 's3' " +
+ "storage type with pgBackRest.")
+ }
+ break
+ }
+ }
+
+ return storageTypes, nil
}
// validateClusterTLS validates the parameters that allow a user to enable TLS
diff --git a/internal/apiserver/common.go b/internal/apiserver/common.go
index a4c392a817..204f50fa70 100644
--- a/internal/apiserver/common.go
+++ b/internal/apiserver/common.go
@@ -20,6 +20,7 @@ import (
"errors"
"fmt"
"strconv"
+ "strings"
"github.com/crunchydata/postgres-operator/internal/config"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
@@ -41,7 +42,6 @@ const (
)
var (
- backrestStorageTypes = []string{"local", "s3"}
// ErrDBContainerNotFound is an error that indicates that a "database" container
// could not be found in a specific pod
ErrDBContainerNotFound = errors.New("\"database\" container not found in pod")
@@ -93,10 +93,6 @@ func CreateRMDataTask(clusterName, replicaName, taskName string, deleteBackups,
return err
}
-func GetBackrestStorageTypes() []string {
- return backrestStorageTypes
-}
-
// IsValidPVC determines if a PVC with the name provided exits
func IsValidPVC(pvcName, ns string) bool {
ctx := context.TODO()
@@ -112,6 +108,64 @@ func IsValidPVC(pvcName, ns string) bool {
return pvc != nil
}
+// ValidateBackrestStorageTypeForCommand determines if a submitted pgBackRest
+// storage value can be used as part of a pgBackRest operation based upon the
+// storage types used by the PostgreSQL cluster itself
+func ValidateBackrestStorageTypeForCommand(cluster *crv1.Pgcluster, storageTypeStr string) error {
+ // first, parse the submitted storage type string to see what we're up against
+ storageTypes, err := crv1.ParseBackrestStorageTypes(storageTypeStr)
+
+ // if there is an error parsing the string and it's not due to the string
+ // being empty, return the error.
+ // if it is due to an empty string, return nil so that the defaults will be
+ // used
+ if err != nil {
+ if errors.Is(err, crv1.ErrStorageTypesEmpty) {
+ return nil
+ }
+ return err
+ }
+
+ // there can only be one storage type used for a command (for now), so ensure
+ // this condition is satisfied
+ if len(storageTypes) > 1 {
+ return fmt.Errorf("you can only select one storage type")
+ }
+
+ // a special case: the cluster's list of storage types is empty. if the
+ // submitted type is not posix (or local), return an error. Otherwise, we can
+ // exit here.
+ if len(cluster.Spec.BackrestStorageTypes) == 0 {
+ if !(storageTypes[0] == crv1.BackrestStorageTypePosix || storageTypes[0] == crv1.BackrestStorageTypeLocal) {
+ return fmt.Errorf("%w: choices are: posix", crv1.ErrInvalidStorageType)
+ }
+ return nil
+ }
+
+ // now, see if the selected storage type is available in the list of storage
+ // types on the cluster
+ ok := false
+ for _, storageType := range cluster.Spec.BackrestStorageTypes {
+ switch storageTypes[0] {
+ default:
+ ok = ok || (storageType == storageTypes[0])
+ case crv1.BackrestStorageTypePosix, crv1.BackrestStorageTypeLocal:
+ ok = ok || (storageType == crv1.BackrestStorageTypePosix || storageType == crv1.BackrestStorageTypeLocal)
+ }
+ }
+
+ if !ok {
+ choices := make([]string, len(cluster.Spec.BackrestStorageTypes))
+ for i, storageType := range cluster.Spec.BackrestStorageTypes {
+ choices[i] = string(storageType)
+ }
+
+ return fmt.Errorf("%w: choices are: %s",
+ crv1.ErrInvalidStorageType, strings.Join(choices, " "))
+ }
+
+ return nil
+}
+
// ValidateResourceRequestLimit validates that a Kubernetes Requests/Limit pair
// is valid, both by validating the values are valid quantity values, and then
// by checking that the limit >= request. This also needs to check against the
diff --git a/internal/apiserver/common_test.go b/internal/apiserver/common_test.go
index da909d2ba6..2475fca015 100644
--- a/internal/apiserver/common_test.go
+++ b/internal/apiserver/common_test.go
@@ -18,9 +18,152 @@ limitations under the License.
import (
"testing"
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+
"k8s.io/apimachinery/pkg/api/resource"
)
+func TestValidateBackrestStorageTypeForCommand(t *testing.T) {
+ cluster := &crv1.Pgcluster{
+ Spec: crv1.PgclusterSpec{},
+ }
+
+ t.Run("empty repo type", func(t *testing.T) {
+ err := ValidateBackrestStorageTypeForCommand(cluster, "")
+
+ if err != nil {
+ t.Fatalf("expected no error, actual error: %s", err.Error())
+ }
+ })
+
+ t.Run("invalid repo type", func(t *testing.T) {
+ err := ValidateBackrestStorageTypeForCommand(cluster, "bad")
+
+ if err == nil {
+ t.Fatalf("expected invalid repo type to return an error, no error returned")
+ }
+ })
+
+ t.Run("multiple repo types", func(t *testing.T) {
+ err := ValidateBackrestStorageTypeForCommand(cluster, "posix,s3")
+
+ if err == nil {
+ t.Fatalf("expected error")
+ }
+ })
+
+ t.Run("posix repo, no repo types on resource", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{}
+ err := ValidateBackrestStorageTypeForCommand(cluster, "posix")
+
+ if err != nil {
+ t.Fatalf("expected no error, actual error: %s", err.Error())
+ }
+ })
+
+ t.Run("local repo, no repo types on resource", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{}
+ err := ValidateBackrestStorageTypeForCommand(cluster, "local")
+
+ if err != nil {
+ t.Fatalf("expected no error, actual error: %s", err.Error())
+ }
+ })
+
+ t.Run("s3 repo, no repo types on resource", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{}
+ err := ValidateBackrestStorageTypeForCommand(cluster, "s3")
+
+ if err == nil {
+ t.Fatalf("expected error")
+ }
+ })
+
+ t.Run("posix repo, posix repo type available", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{crv1.BackrestStorageTypePosix}
+ err := ValidateBackrestStorageTypeForCommand(cluster, "posix")
+
+ if err != nil {
+ t.Fatalf("expected no error, actual error: %s", err.Error())
+ }
+ })
+
+ t.Run("posix repo, posix repo type unavailable", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{crv1.BackrestStorageTypeS3}
+ err := ValidateBackrestStorageTypeForCommand(cluster, "posix")
+
+ if err == nil {
+ t.Fatalf("expected error")
+ }
+ })
+
+ t.Run("posix repo, local repo type available", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{crv1.BackrestStorageTypeLocal}
+ err := ValidateBackrestStorageTypeForCommand(cluster, "posix")
+
+ if err != nil {
+ t.Fatalf("expected no error, actual error: %s", err.Error())
+ }
+ })
+
+ t.Run("posix repo, multi-repo", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{
+ crv1.BackrestStorageTypeS3,
+ crv1.BackrestStorageTypePosix,
+ }
+ err := ValidateBackrestStorageTypeForCommand(cluster, "posix")
+
+ if err != nil {
+ t.Fatalf("expected no error, actual error: %s", err.Error())
+ }
+ })
+
+ t.Run("local repo, local repo type available", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{crv1.BackrestStorageTypeLocal}
+ err := ValidateBackrestStorageTypeForCommand(cluster, "local")
+
+ if err != nil {
+ t.Fatalf("expected no error, actual error: %s", err.Error())
+ }
+ })
+
+ t.Run("local repo, local repo type unavailable", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{crv1.BackrestStorageTypeS3}
+ err := ValidateBackrestStorageTypeForCommand(cluster, "local")
+
+ if err == nil {
+ t.Fatalf("expected error")
+ }
+ })
+
+ t.Run("local repo, posix repo type available", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{crv1.BackrestStorageTypePosix}
+ err := ValidateBackrestStorageTypeForCommand(cluster, "local")
+
+ if err != nil {
+ t.Fatalf("expected no error, actual error: %s", err.Error())
+ }
+ })
+
+ t.Run("s3 repo, s3 repo type available", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{crv1.BackrestStorageTypeS3}
+ err := ValidateBackrestStorageTypeForCommand(cluster, "s3")
+
+ if err != nil {
+ t.Fatalf("expected no error, actual error: %s", err.Error())
+ }
+ })
+
+ t.Run("s3 repo, s3 repo type unavailable", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{crv1.BackrestStorageTypePosix}
+ err := ValidateBackrestStorageTypeForCommand(cluster, "s3")
+
+ if err == nil {
+ t.Fatalf("expected error")
+ }
+ })
+}
+
func TestValidateResourceRequestLimit(t *testing.T) {
t.Run("valid", func(t *testing.T) {
resources := []struct{ request, limit, defaultRequest string }{
diff --git a/internal/apiserver/scheduleservice/scheduleimpl.go b/internal/apiserver/scheduleservice/scheduleimpl.go
index 8de3c624ea..86e3bdcfad 100644
--- a/internal/apiserver/scheduleservice/scheduleimpl.go
+++ b/internal/apiserver/scheduleservice/scheduleimpl.go
@@ -41,9 +41,7 @@ type scheduleRequest struct {
func (s scheduleRequest) createBackRestSchedule(cluster *crv1.Pgcluster, ns string) *PgScheduleSpec {
name := fmt.Sprintf("%s-%s-%s", cluster.Name, s.Request.ScheduleType, s.Request.PGBackRestType)
- err := util.ValidateBackrestStorageTypeOnBackupRestore(s.Request.BackrestStorageType,
- cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], false)
- if err != nil {
+ if err := apiserver.ValidateBackrestStorageTypeForCommand(cluster, s.Request.BackrestStorageType); err != nil {
s.Response.Status.Code = msgs.Error
s.Response.Status.Msg = err.Error()
return &PgScheduleSpec{}
diff --git a/internal/controller/job/backresthandler.go b/internal/controller/job/backresthandler.go
index 40c10110ae..3bd90fcf83 100644
--- a/internal/controller/job/backresthandler.go
+++ b/internal/controller/job/backresthandler.go
@@ -102,7 +102,8 @@ func (c *Controller) handleBackrestBackupUpdate(job *apiv1.Job) error {
return nil
}
-// handleBackrestRestoreUpdate is responsible for handling updates to backrest stanza create jobs
+// handleBackrestStanzaCreateUpdate is responsible for handling updates to
+// backrest stanza create jobs
func (c *Controller) handleBackrestStanzaCreateUpdate(job *apiv1.Job) error {
ctx := context.TODO()
diff --git a/internal/controller/pod/inithandler.go b/internal/controller/pod/inithandler.go
index 733d301cb7..7081c1dfbf 100644
--- a/internal/controller/pod/inithandler.go
+++ b/internal/controller/pod/inithandler.go
@@ -191,7 +191,8 @@ func (c *Controller) handleStandbyInit(cluster *crv1.Pgcluster) error {
// a standby cluster that does not have "s3" storage only enabled.
// If this is a standby cluster and the pgBackRest storage type is set
// to "s3" for S3 storage only, set the cluster to an initialized status.
- if cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE] != "s3" {
+ if !(len(cluster.Spec.BackrestStorageTypes) == 1 &&
+ cluster.Spec.BackrestStorageTypes[0] == crv1.BackrestStorageTypeS3) {
// first try to delete any existing stanza create task and/or job
if err := c.Client.CrunchydataV1().Pgtasks(namespace).
Delete(ctx, fmt.Sprintf("%s-%s", clusterName, crv1.PgtaskBackrestStanzaCreate),
diff --git a/internal/operator/backrest/backup.go b/internal/operator/backrest/backup.go
index e486104961..8cd150cdf3 100644
--- a/internal/operator/backrest/backup.go
+++ b/internal/operator/backrest/backup.go
@@ -56,7 +56,7 @@ type backrestJobTemplateFields struct {
PgbackrestStanza string
PgbackrestDBPath string
PgbackrestRepo1Path string
- PgbackrestRepo1Type string
+ PgbackrestRepo1Type crv1.BackrestStorageType
BackrestLocalAndS3Storage bool
PgbackrestS3VerifyTLS string
PgbackrestRestoreVolumes string
@@ -69,13 +69,36 @@ var (
)
// Backrest ...
-func Backrest(namespace string, clientset kubernetes.Interface, task *crv1.Pgtask) {
+func Backrest(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
ctx := context.TODO()
- // create the Job to run the backrest command
+ // get the cluster that is requesting the backup. if we cannot get the
+ // cluster, do not take the backup
+ cluster, err := clientset.CrunchydataV1().Pgclusters(task.Namespace).Get(ctx,
+ task.Spec.Parameters[config.LABEL_PG_CLUSTER], metav1.GetOptions{})
+
+ if err != nil {
+ log.Error(err)
+ return
+ }
cmd := task.Spec.Parameters[config.LABEL_BACKREST_COMMAND]
+ // determine the repo type. we need to make a special check for a standby
+ // cluster (see below)
+ repoType := operator.GetRepoType(cluster)
+
+ // If this is a standby cluster and the task is stanza creation, this ensures
+ // that the stanza is created on the local (posix) repository only.
+ //
+ // The stanza for the S3 repo will have already been created by the cluster
+ // the standby is replicating from, and therefore does not need to be
+ // attempted again.
+ if cluster.Spec.Standby && cmd == crv1.PgtaskBackrestStanzaCreate {
+ repoType = crv1.BackrestStorageTypePosix
+ }
+ // create the Job to run the backrest command
jobFields := backrestJobTemplateFields{
JobName: task.Spec.Parameters[config.LABEL_JOB_NAME],
ClusterName: task.Spec.Parameters[config.LABEL_PG_CLUSTER],
@@ -91,8 +114,8 @@ func Backrest(namespace string, clientset kubernetes.Interface, task *crv1.Pgtas
PgbackrestRepo1Path: task.Spec.Parameters[config.LABEL_PGBACKREST_REPO_PATH],
PgbackrestRestoreVolumes: "",
PgbackrestRestoreVolumeMounts: "",
- PgbackrestRepo1Type: operator.GetRepoType(task.Spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE]),
- BackrestLocalAndS3Storage: operator.IsLocalAndS3Storage(task.Spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE]),
+ PgbackrestRepo1Type: repoType,
+ BackrestLocalAndS3Storage: operator.IsLocalAndS3Storage(cluster),
PgbackrestS3VerifyTLS: task.Spec.Parameters[config.LABEL_BACKREST_S3_VERIFY_TLS],
}
@@ -204,7 +227,6 @@ func CreateBackup(clientset pgo.Interface, namespace, clusterName, podName strin
spec.Parameters[config.LABEL_IMAGE_PREFIX] = util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix)
spec.Parameters[config.LABEL_BACKREST_COMMAND] = crv1.PgtaskBackrestBackup
spec.Parameters[config.LABEL_BACKREST_OPTS] = backupOpts
- spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE] = cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]
// Get 'true' or 'false' for setting the pgBackRest S3 verify TLS value
spec.Parameters[config.LABEL_BACKREST_S3_VERIFY_TLS] = operator.GetS3VerifyTLSSetting(cluster)
diff --git a/internal/operator/backrest/repo.go b/internal/operator/backrest/repo.go
index f36fee6dce..28debffc17 100644
--- a/internal/operator/backrest/repo.go
+++ b/internal/operator/backrest/repo.go
@@ -55,7 +55,7 @@ type RepoDeploymentTemplateFields struct {
PgbackrestPGPort string
SshdPort int
PgbackrestStanza string
- PgbackrestRepo1Type string
+ PgbackrestRepo1Type crv1.BackrestStorageType
PgbackrestS3EnvVars string
Name string
ClusterName string
@@ -225,7 +225,6 @@ func setBootstrapRepoOverrides(clientset kubernetes.Interface, cluster *crv1.Pgc
// specific PostgreSQL cluster.
func getRepoDeploymentFields(clientset kubernetes.Interface, cluster *crv1.Pgcluster,
replicas int) *RepoDeploymentTemplateFields {
- namespace := cluster.GetNamespace()
repoFields := RepoDeploymentTemplateFields{
CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix),
@@ -234,13 +233,13 @@ func getRepoDeploymentFields(clientset kubernetes.Interface, cluster *crv1.Pgclu
BackrestRepoClaimName: fmt.Sprintf(util.BackrestRepoPVCName, cluster.Name),
SshdSecretsName: fmt.Sprintf(util.BackrestRepoSecretName, cluster.Name),
PGbackrestDBHost: cluster.Name,
- PgbackrestRepo1Path: util.GetPGBackRestRepoPath(*cluster),
+ PgbackrestRepo1Path: operator.GetPGBackRestRepoPath(cluster),
PgbackrestDBPath: "/pgdata/" + cluster.Name,
PgbackrestPGPort: cluster.Spec.Port,
SshdPort: operator.Pgo.Cluster.BackrestPort,
PgbackrestStanza: "db",
- PgbackrestRepo1Type: operator.GetRepoType(cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]),
- PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(*cluster, clientset, namespace),
+ PgbackrestRepo1Type: operator.GetRepoType(cluster),
+ PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(clientset, *cluster),
Name: fmt.Sprintf(util.BackrestRepoServiceName, cluster.Name),
ClusterName: cluster.Name,
SecurityContext: operator.GetPodSecurityContext(cluster.Spec.BackrestStorage.GetSupplementalGroups()),
diff --git a/internal/operator/backrest/stanza.go b/internal/operator/backrest/stanza.go
index a2a0176452..ba97c225c1 100644
--- a/internal/operator/backrest/stanza.go
+++ b/internal/operator/backrest/stanza.go
@@ -17,7 +17,6 @@ package backrest
import (
"context"
- "strings"
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/kubeapi"
@@ -94,31 +93,17 @@ func StanzaCreate(namespace, clusterName string, clientset kubeapi.Interface) {
// this will be used by the associated backrest job
spec.Parameters[config.LABEL_IMAGE_PREFIX] = util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix)
spec.Parameters[config.LABEL_BACKREST_COMMAND] = crv1.PgtaskBackrestStanzaCreate
+ // Get 'true' or 'false' for setting the pgBackRest S3 verify TLS value
+ spec.Parameters[config.LABEL_BACKREST_S3_VERIFY_TLS] = operator.GetS3VerifyTLSSetting(cluster)
- // Handle stanza creation for a standby cluster, which requires some additional consideration.
- // This includes setting the pgBackRest storage type and command options as needed to support
- // stanza creation for a standby cluster. If not a standby cluster then simply set the
- // storage type and options as usual.
+ // Handle stanza creation for a standby cluster, which requires some
+ // additional consideration.
+ // Since the primary will not be directly accessible to the standby cluster,
+ // ensure the stanza is created in offline mode
if cluster.Spec.Standby {
- // Since this is a standby cluster, if local storage is specified then ensure stanza
- // creation is for the local repo only. The stanza for the S3 repo will have already been
- // created by the cluster the standby is replicating from, and therefore does not need to
- // be attempted again.
- if strings.Contains(cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], "local") {
- spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE] = "local"
- }
- // Since the primary will not be directly accessible to the standby cluster, create the
- // stanza in offline mode
spec.Parameters[config.LABEL_BACKREST_OPTS] = "--no-online"
- } else {
- spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE] =
- cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]
- spec.Parameters[config.LABEL_BACKREST_OPTS] = ""
}
- // Get 'true' or 'false' for setting the pgBackRest S3 verify TLS value
- spec.Parameters[config.LABEL_BACKREST_S3_VERIFY_TLS] = operator.GetS3VerifyTLSSetting(cluster)
-
newInstance := &crv1.Pgtask{
ObjectMeta: metav1.ObjectMeta{
Name: taskName,
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index b16a89829b..216300cc0e 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -683,7 +683,7 @@ func annotateBackrestSecret(clientset kubernetes.Interface, cluster *crv1.Pgclus
secretName := fmt.Sprintf(util.BackrestRepoSecretName, clusterName)
patch, err := kubeapi.NewMergePatch().Add("metadata", "annotations")(map[string]string{
config.ANNOTATION_PG_PORT: cluster.Spec.Port,
- config.ANNOTATION_REPO_PATH: util.GetPGBackRestRepoPath(*cluster),
+ config.ANNOTATION_REPO_PATH: operator.GetPGBackRestRepoPath(cluster),
config.ANNOTATION_S3_BUCKET: cfg(cl.BackrestS3Bucket, op.BackrestS3Bucket),
config.ANNOTATION_S3_ENDPOINT: cfg(cl.BackrestS3Endpoint, op.BackrestS3Endpoint),
config.ANNOTATION_S3_REGION: cfg(cl.BackrestS3Region, op.BackrestS3Region),
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index 86ea813208..66c109296f 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -306,35 +306,34 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
// create the primary deployment
deploymentFields := operator.DeploymentTemplateFields{
- Name: cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
- IsInit: true,
- Replicas: "0",
- ClusterName: cl.Spec.Name,
- Port: cl.Spec.Port,
- CCPImagePrefix: util.GetValueOrDefault(cl.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix),
- CCPImage: cl.Spec.CCPImage,
- CCPImageTag: cl.Spec.CCPImageTag,
- PVCName: dataVolume.InlineVolumeSource(),
- DeploymentLabels: operator.GetLabelsFromMap(cl.Spec.UserLabels),
- PodAnnotations: operator.GetAnnotations(cl, crv1.ClusterAnnotationPostgres),
- PodLabels: operator.GetLabelsFromMap(cl.Spec.UserLabels),
- DataPathOverride: cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
- Database: cl.Spec.Database,
- SecurityContext: operator.GetPodSecurityContext(supplementalGroups),
- RootSecretName: crv1.UserSecretName(cl, crv1.PGUserSuperuser),
- PrimarySecretName: crv1.UserSecretName(cl, crv1.PGUserReplication),
- UserSecretName: crv1.UserSecretName(cl, cl.Spec.User),
- NodeSelector: operator.GetAffinity(cl.Spec.UserLabels["NodeLabelKey"], cl.Spec.UserLabels["NodeLabelValue"], "In"),
- PodAntiAffinity: operator.GetPodAntiAffinity(cl, crv1.PodAntiAffinityDeploymentDefault, cl.Spec.PodAntiAffinity.Default),
- ContainerResources: operator.GetResourcesJSON(cl.Spec.Resources, cl.Spec.Limits),
- ConfVolume: operator.GetConfVolume(clientset, cl, namespace),
- ExporterAddon: operator.GetExporterAddon(cl.Spec),
- BadgerAddon: operator.GetBadgerAddon(cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY]),
- PgmonitorEnvVars: operator.GetPgmonitorEnvVars(cl),
- ScopeLabel: config.LABEL_PGHA_SCOPE,
- PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
- cl.Spec.Port, cl.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]),
- PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(*cl, clientset, namespace),
+ Name: cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
+ IsInit: true,
+ Replicas: "0",
+ ClusterName: cl.Spec.Name,
+ Port: cl.Spec.Port,
+ CCPImagePrefix: util.GetValueOrDefault(cl.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix),
+ CCPImage: cl.Spec.CCPImage,
+ CCPImageTag: cl.Spec.CCPImageTag,
+ PVCName: dataVolume.InlineVolumeSource(),
+ DeploymentLabels: operator.GetLabelsFromMap(cl.Spec.UserLabels),
+ PodAnnotations: operator.GetAnnotations(cl, crv1.ClusterAnnotationPostgres),
+ PodLabels: operator.GetLabelsFromMap(cl.Spec.UserLabels),
+ DataPathOverride: cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
+ Database: cl.Spec.Database,
+ SecurityContext: operator.GetPodSecurityContext(supplementalGroups),
+ RootSecretName: crv1.UserSecretName(cl, crv1.PGUserSuperuser),
+ PrimarySecretName: crv1.UserSecretName(cl, crv1.PGUserReplication),
+ UserSecretName: crv1.UserSecretName(cl, cl.Spec.User),
+ NodeSelector: operator.GetAffinity(cl.Spec.UserLabels["NodeLabelKey"], cl.Spec.UserLabels["NodeLabelValue"], "In"),
+ PodAntiAffinity: operator.GetPodAntiAffinity(cl, crv1.PodAntiAffinityDeploymentDefault, cl.Spec.PodAntiAffinity.Default),
+ ContainerResources: operator.GetResourcesJSON(cl.Spec.Resources, cl.Spec.Limits),
+ ConfVolume: operator.GetConfVolume(clientset, cl, namespace),
+ ExporterAddon: operator.GetExporterAddon(cl.Spec),
+ BadgerAddon: operator.GetBadgerAddon(cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY]),
+ PgmonitorEnvVars: operator.GetPgmonitorEnvVars(cl),
+ ScopeLabel: config.LABEL_PGHA_SCOPE,
+ PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY], cl.Spec.Port),
+ PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(clientset, *cl),
ReplicaReinitOnStartFail: !operator.Pgo.Cluster.DisableReplicaStartFailReinit,
SyncReplication: operator.GetSyncReplication(cl.Spec.SyncReplication),
Tablespaces: operator.GetTablespaceNames(cl.Spec.TablespaceMounts),
@@ -463,33 +462,32 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
// create the replica deployment
replicaDeploymentFields := operator.DeploymentTemplateFields{
- Name: replica.Spec.Name,
- ClusterName: replica.Spec.ClusterName,
- Port: cluster.Spec.Port,
- CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix),
- CCPImageTag: imageTag,
- CCPImage: image,
- PVCName: dataVolume.InlineVolumeSource(),
- Database: cluster.Spec.Database,
- DataPathOverride: replica.Spec.Name,
- Replicas: "1",
- ConfVolume: operator.GetConfVolume(clientset, cluster, namespace),
- DeploymentLabels: operator.GetLabelsFromMap(cluster.Spec.UserLabels),
- PodAnnotations: operator.GetAnnotations(cluster, crv1.ClusterAnnotationPostgres),
- PodLabels: operator.GetLabelsFromMap(cluster.Spec.UserLabels),
- SecurityContext: operator.GetPodSecurityContext(supplementalGroups),
- RootSecretName: crv1.UserSecretName(cluster, crv1.PGUserSuperuser),
- PrimarySecretName: crv1.UserSecretName(cluster, crv1.PGUserReplication),
- UserSecretName: crv1.UserSecretName(cluster, cluster.Spec.User),
- ContainerResources: operator.GetResourcesJSON(cluster.Spec.Resources, cluster.Spec.Limits),
- NodeSelector: operator.GetAffinity(replica.Spec.UserLabels["NodeLabelKey"], replica.Spec.UserLabels["NodeLabelValue"], "In"),
- PodAntiAffinity: operator.GetPodAntiAffinity(cluster, crv1.PodAntiAffinityDeploymentDefault, cluster.Spec.PodAntiAffinity.Default),
- ExporterAddon: operator.GetExporterAddon(cluster.Spec),
- BadgerAddon: operator.GetBadgerAddon(cluster, replica.Spec.Name),
- ScopeLabel: config.LABEL_PGHA_SCOPE,
- PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cluster, replica.Spec.Name,
- cluster.Spec.Port, cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]),
- PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(*cluster, clientset, namespace),
+ Name: replica.Spec.Name,
+ ClusterName: replica.Spec.ClusterName,
+ Port: cluster.Spec.Port,
+ CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix),
+ CCPImageTag: imageTag,
+ CCPImage: image,
+ PVCName: dataVolume.InlineVolumeSource(),
+ Database: cluster.Spec.Database,
+ DataPathOverride: replica.Spec.Name,
+ Replicas: "1",
+ ConfVolume: operator.GetConfVolume(clientset, cluster, namespace),
+ DeploymentLabels: operator.GetLabelsFromMap(cluster.Spec.UserLabels),
+ PodAnnotations: operator.GetAnnotations(cluster, crv1.ClusterAnnotationPostgres),
+ PodLabels: operator.GetLabelsFromMap(cluster.Spec.UserLabels),
+ SecurityContext: operator.GetPodSecurityContext(supplementalGroups),
+ RootSecretName: crv1.UserSecretName(cluster, crv1.PGUserSuperuser),
+ PrimarySecretName: crv1.UserSecretName(cluster, crv1.PGUserReplication),
+ UserSecretName: crv1.UserSecretName(cluster, cluster.Spec.User),
+ ContainerResources: operator.GetResourcesJSON(cluster.Spec.Resources, cluster.Spec.Limits),
+ NodeSelector: operator.GetAffinity(replica.Spec.UserLabels["NodeLabelKey"], replica.Spec.UserLabels["NodeLabelValue"], "In"),
+ PodAntiAffinity: operator.GetPodAntiAffinity(cluster, crv1.PodAntiAffinityDeploymentDefault, cluster.Spec.PodAntiAffinity.Default),
+ ExporterAddon: operator.GetExporterAddon(cluster.Spec),
+ BadgerAddon: operator.GetBadgerAddon(cluster, replica.Spec.Name),
+ ScopeLabel: config.LABEL_PGHA_SCOPE,
+ PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cluster, replica.Spec.Name, cluster.Spec.Port),
+ PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(clientset, *cluster),
ReplicaReinitOnStartFail: !operator.Pgo.Cluster.DisableReplicaStartFailReinit,
SyncReplication: operator.GetSyncReplication(cluster.Spec.SyncReplication),
Tablespaces: operator.GetTablespaceNames(cluster.Spec.TablespaceMounts),
diff --git a/internal/operator/cluster/upgrade.go b/internal/operator/cluster/upgrade.go
index 639f46ae41..bebdeea866 100644
--- a/internal/operator/cluster/upgrade.go
+++ b/internal/operator/cluster/upgrade.go
@@ -21,6 +21,7 @@ import (
"fmt"
"io/ioutil"
"strconv"
+ "strings"
"time"
"github.com/crunchydata/postgres-operator/internal/config"
@@ -504,6 +505,49 @@ func preparePgclusterForUpgrade(pgcluster *crv1.Pgcluster, parameters map[string
delete(pgcluster.ObjectMeta.Labels, "autofail")
}
+ // 4.6.0 moved the "backrest-storage-type" label to a CRD attribute, or
+ // rather an array of CRD attributes, to which the label values need to be
+ // mapped. "local" is mapped to "posix" to match the pgBackRest
+ // nomenclature.
+ //
+ // If we come back with an empty array, we will default it to posix
+ if val, ok := pgcluster.Spec.UserLabels["backrest-storage-type"]; ok {
+ pgcluster.Spec.BackrestStorageTypes = make([]crv1.BackrestStorageType, 0)
+ storageTypes := strings.Split(val, ",")
+
+ // loop through each of the storage types processed and determine which of
+ // the standard storage types it matches
+ for _, s := range storageTypes {
+ for _, storageType := range crv1.BackrestStorageTypes {
+ // if this is not the storage type, continue looping
+ if crv1.BackrestStorageType(s) != storageType {
+ continue
+ }
+
+ // this is a matching storage type. However, if it is "local", record
+ // it as "posix"
+ if storageType == crv1.BackrestStorageTypeLocal {
+ pgcluster.Spec.BackrestStorageTypes = append(pgcluster.Spec.BackrestStorageTypes,
+ crv1.BackrestStorageTypePosix)
+ } else {
+ pgcluster.Spec.BackrestStorageTypes = append(pgcluster.Spec.BackrestStorageTypes, storageType)
+ }
+
+ // we can break the inner loop
+ break
+ }
+ }
+
+ // if the resulting array is somehow empty, default to "posix"
+ if len(pgcluster.Spec.BackrestStorageTypes) == 0 {
+ pgcluster.Spec.BackrestStorageTypes = append(pgcluster.Spec.BackrestStorageTypes,
+ crv1.BackrestStorageTypePosix)
+ }
+
+ // and delete the label
+ delete(pgcluster.Spec.UserLabels, "backrest-storage-type")
+ }
+
// since the current primary label is not used in this version of the Postgres Operator,
// delete it before moving on to other upgrade tasks
delete(pgcluster.ObjectMeta.Labels, config.LABEL_CURRENT_PRIMARY)
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index 1095fbb2b2..6497d95262 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -114,7 +114,7 @@ type PgbackrestEnvVarsTemplateFields struct {
PgbackrestDBPath string
PgbackrestRepo1Path string
PgbackrestRepo1Host string
- PgbackrestRepo1Type string
+ PgbackrestRepo1Type crv1.BackrestStorageType
PgbackrestLocalAndS3Storage bool
PgbackrestPGPort string
}
@@ -276,15 +276,15 @@ func GetAnnotations(cluster *crv1.Pgcluster, annotationType crv1.ClusterAnnotati
}
// consolidate with cluster.GetPgbackrestEnvVars
-func GetPgbackrestEnvVars(cluster *crv1.Pgcluster, depName, port, storageType string) string {
+func GetPgbackrestEnvVars(cluster *crv1.Pgcluster, depName, port string) string {
fields := PgbackrestEnvVarsTemplateFields{
PgbackrestStanza: "db",
PgbackrestRepo1Host: cluster.Name + "-backrest-shared-repo",
- PgbackrestRepo1Path: util.GetPGBackRestRepoPath(*cluster),
+ PgbackrestRepo1Path: GetPGBackRestRepoPath(cluster),
PgbackrestDBPath: "/pgdata/" + depName,
PgbackrestPGPort: port,
- PgbackrestRepo1Type: GetRepoType(storageType),
- PgbackrestLocalAndS3Storage: IsLocalAndS3Storage(storageType),
+ PgbackrestRepo1Type: GetRepoType(cluster),
+ PgbackrestLocalAndS3Storage: IsLocalAndS3Storage(cluster),
}
doc := bytes.Buffer{}
@@ -306,7 +306,7 @@ func GetPgbackrestBootstrapEnvVars(restoreClusterName, depName string,
PgbackrestRepo1Path: restoreFromSecret.Annotations[config.ANNOTATION_REPO_PATH],
PgbackrestPGPort: restoreFromSecret.Annotations[config.ANNOTATION_PG_PORT],
PgbackrestRepo1Host: fmt.Sprintf(util.BackrestRepoDeploymentName, restoreClusterName),
- PgbackrestRepo1Type: "posix", // just set to the default, can be overridden via CLI args
+ PgbackrestRepo1Type: crv1.BackrestStorageTypePosix, // just set to the default, can be overridden via CLI args
}
var doc bytes.Buffer
@@ -793,9 +793,15 @@ func GetPgmonitorEnvVars(cluster *crv1.Pgcluster) string {
// pgBackRest environment variables required to enable S3 support. After the template has been
// executed with the proper values, the result is then returned a string for inclusion in the PG
// and pgBackRest deployments.
-func GetPgbackrestS3EnvVars(cluster crv1.Pgcluster, clientset kubernetes.Interface,
- ns string) string {
- if !strings.Contains(cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], "s3") {
+func GetPgbackrestS3EnvVars(clientset kubernetes.Interface, cluster crv1.Pgcluster) string {
+ // determine if backups are enabled to be stored on S3
+ isS3 := false
+
+ for _, storageType := range cluster.Spec.BackrestStorageTypes {
+ isS3 = isS3 || (storageType == crv1.BackrestStorageTypeS3)
+ }
+
+ if !isS3 {
return ""
}
diff --git a/internal/operator/common.go b/internal/operator/common.go
index 819fdc84d5..382fbb1498 100644
--- a/internal/operator/common.go
+++ b/internal/operator/common.go
@@ -19,10 +19,10 @@ import (
"bytes"
"context"
"encoding/json"
+ "fmt"
"io/ioutil"
"os"
"path"
- "strings"
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/ns"
@@ -36,6 +36,11 @@ import (
)
const (
+ // defaultBackrestRepoPath defines the default repo1-path for pgBackRest for
+ // use when a specific path is not provided in the pgcluster CR. The '%s'
+ // format verb will be replaced with the cluster name when this variable is
+ // utilized
+ defaultBackrestRepoPath = "/backrestrepo/%s-backrest-shared-repo"
// defaultBackrestRepoConfigPath contains the default configuration that are used
// to set up a pgBackRest repository
defaultBackrestRepoConfigPath = "/default-pgo-backrest-repo/"
@@ -231,24 +236,56 @@ func GetResourcesJSON(resources, limits v1.ResourceList) string {
return doc.String()
}
+// GetPGBackRestRepoPath is responsible for determining the repo path setting
+// (i.e. 'repo1-path' flag) for use by pgBackRest. If a specific repo path has
+// been defined in the pgcluster CR, then that path will be returned. Otherwise
+// a default path will be returned that is generated from the cluster name
+func GetPGBackRestRepoPath(cluster *crv1.Pgcluster) string {
+ if cluster.Spec.BackrestRepoPath != "" {
+ return cluster.Spec.BackrestRepoPath
+ }
+ return fmt.Sprintf(defaultBackrestRepoPath, cluster.Name)
+}
+
// GetRepoType returns the proper repo type to set in container based on the
// backrest storage type provided
-func GetRepoType(backrestStorageType string) string {
- if backrestStorageType != "" && backrestStorageType == "s3" {
- return "s3"
- } else {
- return "posix"
+//
+// If there are multiple types, "posix" is returned as the default. This could
+// change once there is proper multi-repo support, though at that point this
+// function will likely be removed.
+//
+// If no storage types are set, "posix" is also returned.
+func GetRepoType(cluster *crv1.Pgcluster) crv1.BackrestStorageType {
+ // per the above comment: return "posix" unless exactly one type is set
+ if len(cluster.Spec.BackrestStorageTypes) == 0 || len(cluster.Spec.BackrestStorageTypes) > 1 {
+ return crv1.BackrestStorageTypePosix
}
+
+ // exactly one storage type is set. If it happens to be "local", ensure
+ // that "posix" is returned
+ if cluster.Spec.BackrestStorageTypes[0] == crv1.BackrestStorageTypeLocal {
+ return crv1.BackrestStorageTypePosix
+ }
+
+ return cluster.Spec.BackrestStorageTypes[0]
}
// IsLocalAndS3Storage a boolean indicating whether or not local and s3 storage should
// be enabled for pgBackRest based on the backrestStorageType string provided
-func IsLocalAndS3Storage(backrestStorageType string) bool {
- if backrestStorageType != "" && strings.Contains(backrestStorageType, "s3") &&
- strings.Contains(backrestStorageType, "local") {
- return true
+func IsLocalAndS3Storage(cluster *crv1.Pgcluster) bool {
+ // this works for the time being: count the recognized storage types; a
+ // count of two or greater means both local and S3 storage are enabled
+ i := 0
+
+ for _, storageType := range cluster.Spec.BackrestStorageTypes {
+ switch storageType {
+ default: // no-op
+ case crv1.BackrestStorageTypeLocal, crv1.BackrestStorageTypePosix, crv1.BackrestStorageTypeS3:
+ i++
+ }
}
- return false
+
+ return i >= 2
}
// SetContainerImageOverride determines if there is an override available for
diff --git a/internal/operator/common_test.go b/internal/operator/common_test.go
new file mode 100644
index 0000000000..53035c9933
--- /dev/null
+++ b/internal/operator/common_test.go
@@ -0,0 +1,165 @@
+package operator
+
+/*
+ Copyright 2020 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+import (
+ "testing"
+
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+)
+
+func TestGetRepoType(t *testing.T) {
+ cluster := &crv1.Pgcluster{
+ Spec: crv1.PgclusterSpec{},
+ }
+
+ t.Run("empty list returns posix", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = make([]crv1.BackrestStorageType, 0)
+
+ expected := crv1.BackrestStorageTypePosix
+ actual := GetRepoType(cluster)
+ if expected != actual {
+ t.Fatalf("expected %q, actual %q", expected, actual)
+ }
+ })
+
+ t.Run("multiple list returns posix", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{
+ crv1.BackrestStorageTypeS3,
+ crv1.BackrestStorageTypePosix,
+ }
+
+ expected := crv1.BackrestStorageTypePosix
+ actual := GetRepoType(cluster)
+ if expected != actual {
+ t.Fatalf("expected %q, actual %q", expected, actual)
+ }
+ })
+
+ t.Run("local returns posix", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{
+ crv1.BackrestStorageTypeLocal,
+ }
+
+ expected := crv1.BackrestStorageTypePosix
+ actual := GetRepoType(cluster)
+ if expected != actual {
+ t.Fatalf("expected %q, actual %q", expected, actual)
+ }
+ })
+
+ t.Run("posix returns posix", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{
+ crv1.BackrestStorageTypePosix,
+ }
+
+ expected := crv1.BackrestStorageTypePosix
+ actual := GetRepoType(cluster)
+ if expected != actual {
+ t.Fatalf("expected %q, actual %q", expected, actual)
+ }
+ })
+
+ t.Run("s3 returns s3", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{
+ crv1.BackrestStorageTypeS3,
+ }
+
+ expected := crv1.BackrestStorageTypeS3
+ actual := GetRepoType(cluster)
+ if expected != actual {
+ t.Fatalf("expected %q, actual %q", expected, actual)
+ }
+ })
+}
+
+func TestIsLocalAndS3Storage(t *testing.T) {
+ cluster := &crv1.Pgcluster{
+ Spec: crv1.PgclusterSpec{},
+ }
+
+ t.Run("empty list returns false", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = make([]crv1.BackrestStorageType, 0)
+
+ expected := false
+ actual := IsLocalAndS3Storage(cluster)
+ if expected != actual {
+ t.Fatalf("expected %t, actual %t", expected, actual)
+ }
+ })
+
+ t.Run("posix only returns false", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{
+ crv1.BackrestStorageTypePosix,
+ }
+
+ expected := false
+ actual := IsLocalAndS3Storage(cluster)
+ if expected != actual {
+ t.Fatalf("expected %t, actual %t", expected, actual)
+ }
+ })
+
+ t.Run("local only returns false", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{
+ crv1.BackrestStorageTypeLocal,
+ }
+
+ expected := false
+ actual := IsLocalAndS3Storage(cluster)
+ if expected != actual {
+ t.Fatalf("expected %t, actual %t", expected, actual)
+ }
+ })
+
+ t.Run("s3 only returns false", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{
+ crv1.BackrestStorageTypeS3,
+ }
+
+ expected := false
+ actual := IsLocalAndS3Storage(cluster)
+ if expected != actual {
+ t.Fatalf("expected %t, actual %t", expected, actual)
+ }
+ })
+
+ t.Run("posix and s3 returns true", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{
+ crv1.BackrestStorageTypePosix,
+ crv1.BackrestStorageTypeS3,
+ }
+
+ expected := true
+ actual := IsLocalAndS3Storage(cluster)
+ if expected != actual {
+ t.Fatalf("expected %t, actual %t", expected, actual)
+ }
+ })
+
+ t.Run("local and s3 returns true", func(t *testing.T) {
+ cluster.Spec.BackrestStorageTypes = []crv1.BackrestStorageType{
+ crv1.BackrestStorageTypeLocal,
+ crv1.BackrestStorageTypeS3,
+ }
+
+ expected := true
+ actual := IsLocalAndS3Storage(cluster)
+ if expected != actual {
+ t.Fatalf("expected %t, actual %t", expected, actual)
+ }
+ })
+}
diff --git a/internal/util/backrest.go b/internal/util/backrest.go
index 2c42d2ae4b..a4b572f16b 100644
--- a/internal/util/backrest.go
+++ b/internal/util/backrest.go
@@ -15,14 +15,6 @@ package util
limitations under the License.
*/
-import (
- "errors"
- "fmt"
- "strings"
-
- crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
-)
-
const (
BackrestRepoDeploymentName = "%s-backrest-shared-repo"
BackrestRepoServiceName = "%s-backrest-shared-repo"
@@ -30,67 +22,3 @@ const (
// #nosec: G101
BackrestRepoSecretName = "%s-backrest-repo-config"
)
-
-// defines the default repo1-path for pgBackRest for use when a specic path is not provided
-// in the pgcluster CR. The '%s' format verb will be replaced with the cluster name when this
-// variable is utilized
-const defaultBackrestRepoPath = "/backrestrepo/%s-backrest-shared-repo"
-
-// ValidateBackrestStorageTypeOnBackupRestore checks to see if the pgbackrest storage type provided
-// when performing either pgbackrest backup or restore is valid. This includes ensuring the value
-// provided is a valid storage type (e.g. "s3" and/or "local"). This also includes ensuring the
-// storage type specified (e.g. "s3" or "local") is enabled in the current cluster. And finally,
-// validation is ocurring for a restore, the ensure only one storage type is selected.
-func ValidateBackrestStorageTypeOnBackupRestore(newBackRestStorageType,
- currentBackRestStorageType string, restore bool) error {
- if newBackRestStorageType != "" && !IsValidBackrestStorageType(newBackRestStorageType) {
- return fmt.Errorf("Invalid value provided for pgBackRest storage type. The following "+
- "values are allowed: %s", "\""+strings.Join(crv1.BackrestStorageTypes, "\", \"")+"\"")
- } else if newBackRestStorageType != "" &&
- strings.Contains(newBackRestStorageType, "s3") &&
- !strings.Contains(currentBackRestStorageType, "s3") {
- return errors.New("Storage type 's3' not allowed. S3 storage is not enabled for " +
- "pgBackRest in this cluster")
- } else if (newBackRestStorageType == "" ||
- strings.Contains(newBackRestStorageType, "local")) &&
- (currentBackRestStorageType != "" &&
- !strings.Contains(currentBackRestStorageType, "local")) {
- return errors.New("Storage type 'local' not allowed. Local storage is not enabled for " +
- "pgBackRest in this cluster. If this cluster uses S3 storage only, specify 's3' " +
- "for the pgBackRest storage type.")
- }
-
- // storage type validation that is only applicable for restores
- if restore && newBackRestStorageType != "" &&
- len(strings.Split(newBackRestStorageType, ",")) > 1 {
- return fmt.Errorf("Multiple storage types cannot be selected cannot be select when "+
- "performing a restore. Please select one of the following: %s",
- "\""+strings.Join(crv1.BackrestStorageTypes, "\", \"")+"\"")
- }
-
- return nil
-}
-
-// IsValidBackrestStorageType determines if the storage source string contains valid pgBackRest
-// storage type values
-func IsValidBackrestStorageType(storageType string) bool {
- isValid := true
- for _, storageType := range strings.Split(storageType, ",") {
- if !IsStringOneOf(storageType, crv1.BackrestStorageTypes...) {
- isValid = false
- break
- }
- }
- return isValid
-}
-
-// GetPGBackRestRepoPath is responsible for determining the repo path setting (i.e. 'repo1-path'
-// flag) for use by pgBackRest. If a specific repo path has been defined in the pgcluster CR,
-// then that path will be returned. Otherwise a default path will be returned, which is generated
-// using the 'defaultBackrestRepoPath' constant and the cluster name.
-func GetPGBackRestRepoPath(cluster crv1.Pgcluster) string {
- if cluster.Spec.BackrestRepoPath != "" {
- return cluster.Spec.BackrestRepoPath
- }
- return fmt.Sprintf(defaultBackrestRepoPath, cluster.Name)
-}
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index e9a84e729c..76f7ccae67 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -17,6 +17,7 @@ package v1
import (
"fmt"
+ "strings"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -109,28 +110,33 @@ type PgclusterSpec struct {
// PgBouncer contains all of the settings to properly maintain a pgBouncer
// implementation
- PgBouncer PgBouncerSpec `json:"pgBouncer"`
- User string `json:"user"`
- Database string `json:"database"`
- Replicas string `json:"replicas"`
- Status string `json:"status"`
- CustomConfig string `json:"customconfig"`
- UserLabels map[string]string `json:"userlabels"`
- PodAntiAffinity PodAntiAffinitySpec `json:"podAntiAffinity"`
- SyncReplication *bool `json:"syncReplication"`
- BackrestConfig []v1.VolumeProjection `json:"backrestConfig"`
- BackrestS3Bucket string `json:"backrestS3Bucket"`
- BackrestS3Region string `json:"backrestS3Region"`
- BackrestS3Endpoint string `json:"backrestS3Endpoint"`
- BackrestS3URIStyle string `json:"backrestS3URIStyle"`
- BackrestS3VerifyTLS string `json:"backrestS3VerifyTLS"`
- BackrestRepoPath string `json:"backrestRepoPath"`
- TablespaceMounts map[string]PgStorageSpec `json:"tablespaceMounts"`
- TLS TLSSpec `json:"tls"`
- TLSOnly bool `json:"tlsOnly"`
- Standby bool `json:"standby"`
- Shutdown bool `json:"shutdown"`
- PGDataSource PGDataSourceSpec `json:"pgDataSource"`
+ PgBouncer PgBouncerSpec `json:"pgBouncer"`
+ User string `json:"user"`
+ Database string `json:"database"`
+ Replicas string `json:"replicas"`
+ Status string `json:"status"`
+ CustomConfig string `json:"customconfig"`
+ UserLabels map[string]string `json:"userlabels"`
+ PodAntiAffinity PodAntiAffinitySpec `json:"podAntiAffinity"`
+ SyncReplication *bool `json:"syncReplication"`
+ BackrestConfig []v1.VolumeProjection `json:"backrestConfig"`
+ BackrestS3Bucket string `json:"backrestS3Bucket"`
+ BackrestS3Region string `json:"backrestS3Region"`
+ BackrestS3Endpoint string `json:"backrestS3Endpoint"`
+ BackrestS3URIStyle string `json:"backrestS3URIStyle"`
+ BackrestS3VerifyTLS string `json:"backrestS3VerifyTLS"`
+ BackrestRepoPath string `json:"backrestRepoPath"`
+ // BackrestStorageTypes is a list of the different pgBackRest storage types
+ // to be used for this cluster. Presently it only accepts "local" ("posix")
+ // and "s3", but may support additional repo types in the future.
+ // If the array is empty, "local" ("posix") is presumed.
+ BackrestStorageTypes []BackrestStorageType `json:"backrestStorageTypes"`
+ TablespaceMounts map[string]PgStorageSpec `json:"tablespaceMounts"`
+ TLS TLSSpec `json:"tls"`
+ TLSOnly bool `json:"tlsOnly"`
+ Standby bool `json:"standby"`
+ Shutdown bool `json:"shutdown"`
+ PGDataSource PGDataSourceSpec `json:"pgDataSource"`
// Annotations contains a set of Deployment (and by association, Pod)
// annotations that are propagated to all managed Deployments
@@ -145,6 +151,28 @@ type PgclusterSpec struct {
Tolerations []v1.Toleration `json:"tolerations"`
}
+// BackrestStorageType refers to the types of storage accepted by pgBackRest
+type BackrestStorageType string
+
+const (
+ // BackrestStorageTypeLocal is DEPRECATED. It is the equivalent of "posix"
+ // storage and is retained for legacy purposes; it maps directly to "posix"
+ BackrestStorageTypeLocal BackrestStorageType = "local"
+ // BackrestStorageTypePosix is the "posix" storage type and should
+ // eventually supersede "local"
+ BackrestStorageTypePosix BackrestStorageType = "posix"
+ // BackrestStorageTypeS3 is the S3 storage type for using S3 or
+ // S3-compatible storage
+ BackrestStorageTypeS3 BackrestStorageType = "s3"
+)
+
+var BackrestStorageTypes = []BackrestStorageType{
+ BackrestStorageTypeLocal,
+ BackrestStorageTypePosix,
+ BackrestStorageTypeS3,
+}
+
// ClusterAnnotations provides a set of annotations that can be propagated to
// the managed deployments. These are subdivided into four categories, which
// are explained further below:
@@ -356,6 +384,37 @@ func (p PodAntiAffinityType) Validate() error {
PodAntiAffinityRequired, PodAntiAffinityPreffered, PodAntiAffinityDisabled)
}
+// ParseBackrestStorageTypes takes a comma-delimited string of potential
+// pgBackRest storage types and attempts to parse it into a recognizable array.
+// If an invalid type is passed in, an error is returned.
+func ParseBackrestStorageTypes(storageTypeStr string) ([]BackrestStorageType, error) {
+ storageTypes := make([]BackrestStorageType, 0)
+
+ parsed := strings.Split(storageTypeStr, ",")
+
+ // if no storage types are found in the string, return an error
+ if len(parsed) == 1 && parsed[0] == "" {
+ return nil, ErrStorageTypesEmpty
+ }
+
+ // iterate through the list, validating each storage type and mapping
+ // "local" to "posix"
+ for _, s := range parsed {
+ storageType := BackrestStorageType(s)
+
+ switch storageType {
+ default:
+ return nil, fmt.Errorf("%w: %s", ErrInvalidStorageType, storageType)
+ case BackrestStorageTypePosix, BackrestStorageTypeLocal:
+ storageTypes = append(storageTypes, BackrestStorageTypePosix)
+ case BackrestStorageTypeS3:
+ storageTypes = append(storageTypes, storageType)
+ }
+ }
+
+ return storageTypes, nil
+}
+
// UserSecretName returns the name of a Kubernetes Secret representing the user.
// Delegates to UserSecretNameFromClusterName. This is the preferred method
// given there is less thinking for the caller to do, but there are some (one?)
diff --git a/pkg/apis/crunchydata.com/v1/cluster_test.go b/pkg/apis/crunchydata.com/v1/cluster_test.go
index c2f6c70bb3..4af663cf3e 100644
--- a/pkg/apis/crunchydata.com/v1/cluster_test.go
+++ b/pkg/apis/crunchydata.com/v1/cluster_test.go
@@ -16,12 +16,131 @@ package v1
*/
import (
+ "errors"
"fmt"
"testing"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
+func TestParseBackrestStorageTypes(t *testing.T) {
+ t.Run("empty", func(t *testing.T) {
+ _, err := ParseBackrestStorageTypes("")
+
+ if !errors.Is(err, ErrStorageTypesEmpty) {
+ t.Fatalf("expected ErrStorageTypesEmpty actual %q", err.Error())
+ }
+ })
+
+ t.Run("invalid", func(t *testing.T) {
+ _, err := ParseBackrestStorageTypes("bad bad bad")
+
+ if !errors.Is(err, ErrInvalidStorageType) {
+ t.Fatalf("expected ErrInvalidStorageType actual %q", err.Error())
+ }
+
+ _, err = ParseBackrestStorageTypes("posix,bad")
+
+ if !errors.Is(err, ErrInvalidStorageType) {
+ t.Fatalf("expected ErrInvalidStorageType actual %q", err.Error())
+ }
+ })
+
+ t.Run("local should be posix", func(t *testing.T) {
+ storageTypes, err := ParseBackrestStorageTypes("local")
+
+ if err != nil {
+ t.Fatalf("expected no error actual %q", err.Error())
+ }
+
+ if len(storageTypes) != 1 {
+ t.Fatalf("multiple storage types returned, expected 1")
+ }
+
+ if storageTypes[0] != BackrestStorageTypePosix {
+ t.Fatalf("posix expected but not found")
+ }
+ })
+
+ t.Run("posix", func(t *testing.T) {
+ storageTypes, err := ParseBackrestStorageTypes("posix")
+
+ if err != nil {
+ t.Fatalf("expected no error actual %q", err.Error())
+ }
+
+ if len(storageTypes) != 1 {
+ t.Fatalf("multiple storage types returned, expected 1")
+ }
+
+ if storageTypes[0] != BackrestStorageTypePosix {
+ t.Fatalf("posix expected but not found")
+ }
+ })
+
+ t.Run("s3", func(t *testing.T) {
+ storageTypes, err := ParseBackrestStorageTypes("s3")
+
+ if err != nil {
+ t.Fatalf("expected no error actual %q", err.Error())
+ }
+
+ if len(storageTypes) != 1 {
+ t.Fatalf("expected 1 storage type, actual %d", len(storageTypes))
+ }
+
+ if storageTypes[0] != BackrestStorageTypeS3 {
+ t.Fatalf("s3 expected but not found")
+ }
+ })
+
+ t.Run("posix and s3", func(t *testing.T) {
+ storageTypes, err := ParseBackrestStorageTypes("posix,s3")
+
+ if err != nil {
+ t.Fatalf("expected no error actual %q", err.Error())
+ }
+
+ if len(storageTypes) != 2 {
+ t.Fatalf("expected 2 storage types, actual %d", len(storageTypes))
+ }
+
+ posix := false
+ s3 := false
+ for _, storageType := range storageTypes {
+ posix = posix || (storageType == BackrestStorageTypePosix)
+ s3 = s3 || (storageType == BackrestStorageTypeS3)
+ }
+
+ if !(posix && s3) {
+ t.Fatalf("posix and s3 expected but not found")
+ }
+ })
+
+ t.Run("local and s3", func(t *testing.T) {
+ storageTypes, err := ParseBackrestStorageTypes("local,s3")
+
+ if err != nil {
+ t.Fatalf("expected no error actual %q", err.Error())
+ }
+
+ if len(storageTypes) != 2 {
+ t.Fatalf("expected 2 storage types, actual %d", len(storageTypes))
+ }
+
+ posix := false
+ s3 := false
+ for _, storageType := range storageTypes {
+ posix = posix || (storageType == BackrestStorageTypePosix)
+ s3 = s3 || (storageType == BackrestStorageTypeS3)
+ }
+
+ if !(posix && s3) {
+ t.Fatalf("posix and s3 expected but not found")
+ }
+ })
+}
+
func TestUserSecretName(t *testing.T) {
cluster := &Pgcluster{
ObjectMeta: metav1.ObjectMeta{
diff --git a/pkg/apis/crunchydata.com/v1/errors.go b/pkg/apis/crunchydata.com/v1/errors.go
new file mode 100644
index 0000000000..6c8fddbb2d
--- /dev/null
+++ b/pkg/apis/crunchydata.com/v1/errors.go
@@ -0,0 +1,23 @@
+package v1
+
+/*
+ Copyright 2020 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+import "errors"
+
+var (
+ ErrStorageTypesEmpty = errors.New("no storage types detected")
+ ErrInvalidStorageType = errors.New("invalid storage type")
+)
diff --git a/pkg/apis/crunchydata.com/v1/task.go b/pkg/apis/crunchydata.com/v1/task.go
index 463c532f80..d6791c8415 100644
--- a/pkg/apis/crunchydata.com/v1/task.go
+++ b/pkg/apis/crunchydata.com/v1/task.go
@@ -84,10 +84,6 @@ const (
BackupTypeBootstrap string = "bootstrap"
)
-// BackrestStorageTypes defines the valid types of storage that can be utilized
-// with pgBackRest
-var BackrestStorageTypes = []string{"local", "s3"}
-
// PgtaskSpec ...
// swagger:ignore
type PgtaskSpec struct {
From 34e2da1b65841c54292e73d93900e16d80f2fd42 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 30 Dec 2020 21:47:25 -0500
Subject: [PATCH 097/276] Ensure pod anti-affinity is loaded from spec
Pod anti-affinity had been driven from the pgcluster custom
resource since its inception, but there were some vestiges
in the form of labels on the custom resource. This moves all of
the labeling to be driven by the custom resource attributes, which
primarily affects the Postgres instances.
This also includes assorted documentation updates.
---
docs/content/pgo-client/common-tasks.md | 2 +-
examples/create-by-resource/fromcrd.json | 3 -
.../create-cluster/templates/pgcluster.yaml | 1 -
examples/kustomize/createcluster/README.md | 8 +-
.../createcluster/base/pgcluster.yaml | 1 -
.../overlay/staging/hippo-rpl1-pgreplica.yaml | 2 -
.../files/pgo-configs/cluster-deployment.json | 1 +
.../apiserver/clusterservice/clusterimpl.go | 3 -
internal/operator/cluster/clusterlogic.go | 158 +++++++++---------
internal/operator/cluster/upgrade.go | 3 -
internal/operator/clusterutilities.go | 48 +++---
11 files changed, 113 insertions(+), 117 deletions(-)
diff --git a/docs/content/pgo-client/common-tasks.md b/docs/content/pgo-client/common-tasks.md
index 64ffd26a6a..d5ae6b8b33 100644
--- a/docs/content/pgo-client/common-tasks.md
+++ b/docs/content/pgo-client/common-tasks.md
@@ -384,7 +384,7 @@ cluster : hacluster (crunchy-postgres-ha:{{< param centosBase >}}-{{< param post
deployment : hacluster
deployment : hacluster-backrest-shared-repo
service : hacluster - ClusterIP (10.102.20.42)
- labels : pg-pod-anti-affinity= archive-timeout=60 crunchy-pgbadger=false crunchy-postgres-exporter=false deployment-name=hacluster pg-cluster=hacluster crunchy-pgha-scope=hacluster autofail=true pgo-backrest=true pgo-version={{< param operatorVersion >}} current-primary=hacluster name=hacluster pgouser=admin workflowid=ae714d12-f5d0-4fa9-910f-21944b41dec8
+ labels : archive-timeout=60 deployment-name=hacluster pg-cluster=hacluster crunchy-pgha-scope=hacluster pgo-version={{< param operatorVersion >}} current-primary=hacluster name=hacluster pgouser=admin workflowid=ae714d12-f5d0-4fa9-910f-21944b41dec8
```
### Deleting a Cluster
diff --git a/examples/create-by-resource/fromcrd.json b/examples/create-by-resource/fromcrd.json
index 49e4665fe7..c267e258d5 100644
--- a/examples/create-by-resource/fromcrd.json
+++ b/examples/create-by-resource/fromcrd.json
@@ -6,13 +6,10 @@
"current-primary": "fromcrd"
},
"labels": {
- "autofail": "true",
- "crunchy-pgbadger": "false",
"crunchy-pgha-scope": "fromcrd",
"deployment-name": "fromcrd",
"name": "fromcrd",
"pg-cluster": "fromcrd",
- "pg-pod-anti-affinity": "",
"pgo-version": "4.5.1",
"pgouser": "pgoadmin"
},
diff --git a/examples/helm/create-cluster/templates/pgcluster.yaml b/examples/helm/create-cluster/templates/pgcluster.yaml
index 1a5e99617d..9d1036581d 100644
--- a/examples/helm/create-cluster/templates/pgcluster.yaml
+++ b/examples/helm/create-cluster/templates/pgcluster.yaml
@@ -10,7 +10,6 @@ metadata:
deployment-name: {{ .Values.pgclustername }}
name: {{ .Values.pgclustername }}
pg-cluster: {{ .Values.pgclustername }}
- pg-pod-anti-affinity: ""
pgo-version: 4.5.1
pgouser: admin
name: {{ .Values.pgclustername }}
diff --git a/examples/kustomize/createcluster/README.md b/examples/kustomize/createcluster/README.md
index ddca1fd70a..3ea4c18f9f 100644
--- a/examples/kustomize/createcluster/README.md
+++ b/examples/kustomize/createcluster/README.md
@@ -50,7 +50,7 @@ cluster : hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
deployment : hippo
deployment : hippo-backrest-shared-repo
service : hippo - ClusterIP (10.0.56.86) - Ports (2022/TCP, 5432/TCP)
- labels : pg-pod-anti-affinity= pgo-version=4.5.1 crunchy-postgres-exporter=false name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata autofail=true crunchy-pgbadger=false
+ labels : pgo-version=4.5.1 name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata
```
Feel free to run other pgo cli commands on the hippo cluster
@@ -87,7 +87,7 @@ cluster : dev-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
deployment : dev-hippo-pgbouncer
service : dev-hippo - ClusterIP (10.0.62.87) - Ports (2022/TCP, 5432/TCP)
service : dev-hippo-pgbouncer - ClusterIP (10.0.48.120) - Ports (5432/TCP)
- labels : crunchy-pgha-scope=dev-hippo crunchy-postgres-exporter=false name=dev-hippo pg-cluster=dev-hippo pg-pod-anti-affinity= vendor=crunchydata autofail=true crunchy-pgbadger=false deployment-name=dev-hippo environment=development pgo-version=4.5.1 pgouser=admin
+ labels : crunchy-pgha-scope=dev-hippo name=dev-hippo pg-cluster=dev-hippo vendor=crunchydata deployment-name=dev-hippo environment=development pgo-version=4.5.1 pgouser=admin
```
#### staging
The staging overlay will deploy a crunchy postgreSQL cluster with 2 replica's with annotations added
@@ -128,7 +128,7 @@ cluster : staging-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
service : staging-hippo-replica - ClusterIP (10.0.56.57) - Ports (2022/TCP, 5432/TCP)
pgreplica : staging-hippo-lnxw
pgreplica : staging-hippo-rpl1
- labels : deployment-name=staging-hippo environment=staging name=staging-hippo pg-pod-anti-affinity= crunchy-postgres-exporter=false crunchy-pgbadger=false crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-version=4.5.1 pgouser=admin vendor=crunchydata autofail=true
+ labels : deployment-name=staging-hippo environment=staging name=staging-hippo crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-version=4.5.1 pgouser=admin vendor=crunchydata
```
#### production
@@ -165,7 +165,7 @@ cluster : prod-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
service : prod-hippo - ClusterIP (10.0.56.18) - Ports (2022/TCP, 5432/TCP)
service : prod-hippo-replica - ClusterIP (10.0.56.101) - Ports (2022/TCP, 5432/TCP)
pgreplica : prod-hippo-flty
- labels : pgo-version=4.5.1 crunchy-pgbadger=false crunchy-postgres-exporter=false deployment-name=prod-hippo environment=production pg-cluster=prod-hippo pg-pod-anti-affinity= autofail=true crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
+ labels : pgo-version=4.5.1 deployment-name=prod-hippo environment=production pg-cluster=prod-hippo crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
```
### Delete the clusters
To delete the clusters run the following pgo cli commands
diff --git a/examples/kustomize/createcluster/base/pgcluster.yaml b/examples/kustomize/createcluster/base/pgcluster.yaml
index ed7c27622d..cf3293a73a 100644
--- a/examples/kustomize/createcluster/base/pgcluster.yaml
+++ b/examples/kustomize/createcluster/base/pgcluster.yaml
@@ -10,7 +10,6 @@ metadata:
deployment-name: hippo
name: hippo
pg-cluster: hippo
- pg-pod-anti-affinity: ""
pgo-version: 4.5.1
pgouser: admin
name: hippo
diff --git a/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
index 33a36b5ef9..ed7d6e0b4b 100644
--- a/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
+++ b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
@@ -22,6 +22,4 @@ spec:
userlabels:
NodeLabelKey: ""
NodeLabelValue: ""
- crunchy-postgres-exporter: "false"
- pg-pod-anti-affinity: ""
pgo-version: 4.5.1
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
index 8499739ccf..a0645d1048 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
@@ -28,6 +28,7 @@
"name": "{{.Name}}",
"vendor": "crunchydata",
"pgo-pg-database": "true",
+ "{{.PodAntiAffinityLabelName}}": "{{.PodAntiAffinityLabelValue}}",
{{.PodLabels }}
}
},
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 8307434c80..7ce3b78c11 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -827,9 +827,6 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
resp.Status.Msg = err.Error()
return resp
}
- userLabelsMap[config.LABEL_POD_ANTI_AFFINITY] = request.PodAntiAffinity
- } else {
- userLabelsMap[config.LABEL_POD_ANTI_AFFINITY] = ""
}
// check to see if there are any pod anti-affinity overrides, specifically for
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index 66c109296f..964e5cb9e9 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -306,46 +306,49 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
// create the primary deployment
deploymentFields := operator.DeploymentTemplateFields{
- Name: cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
- IsInit: true,
- Replicas: "0",
- ClusterName: cl.Spec.Name,
- Port: cl.Spec.Port,
- CCPImagePrefix: util.GetValueOrDefault(cl.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix),
- CCPImage: cl.Spec.CCPImage,
- CCPImageTag: cl.Spec.CCPImageTag,
- PVCName: dataVolume.InlineVolumeSource(),
- DeploymentLabels: operator.GetLabelsFromMap(cl.Spec.UserLabels),
- PodAnnotations: operator.GetAnnotations(cl, crv1.ClusterAnnotationPostgres),
- PodLabels: operator.GetLabelsFromMap(cl.Spec.UserLabels),
- DataPathOverride: cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
- Database: cl.Spec.Database,
- SecurityContext: operator.GetPodSecurityContext(supplementalGroups),
- RootSecretName: crv1.UserSecretName(cl, crv1.PGUserSuperuser),
- PrimarySecretName: crv1.UserSecretName(cl, crv1.PGUserReplication),
- UserSecretName: crv1.UserSecretName(cl, cl.Spec.User),
- NodeSelector: operator.GetAffinity(cl.Spec.UserLabels["NodeLabelKey"], cl.Spec.UserLabels["NodeLabelValue"], "In"),
- PodAntiAffinity: operator.GetPodAntiAffinity(cl, crv1.PodAntiAffinityDeploymentDefault, cl.Spec.PodAntiAffinity.Default),
- ContainerResources: operator.GetResourcesJSON(cl.Spec.Resources, cl.Spec.Limits),
- ConfVolume: operator.GetConfVolume(clientset, cl, namespace),
- ExporterAddon: operator.GetExporterAddon(cl.Spec),
- BadgerAddon: operator.GetBadgerAddon(cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY]),
- PgmonitorEnvVars: operator.GetPgmonitorEnvVars(cl),
- ScopeLabel: config.LABEL_PGHA_SCOPE,
- PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY], cl.Spec.Port),
- PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(clientset, *cl),
- ReplicaReinitOnStartFail: !operator.Pgo.Cluster.DisableReplicaStartFailReinit,
- SyncReplication: operator.GetSyncReplication(cl.Spec.SyncReplication),
- Tablespaces: operator.GetTablespaceNames(cl.Spec.TablespaceMounts),
- TablespaceVolumes: operator.GetTablespaceVolumesJSON(cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY], tablespaceStorageTypeMap),
- TablespaceVolumeMounts: operator.GetTablespaceVolumeMountsJSON(tablespaceStorageTypeMap),
- TLSEnabled: cl.Spec.TLS.IsTLSEnabled(),
- TLSOnly: cl.Spec.TLSOnly,
- TLSSecret: cl.Spec.TLS.TLSSecret,
- ReplicationTLSSecret: cl.Spec.TLS.ReplicationTLSSecret,
- CASecret: cl.Spec.TLS.CASecret,
- Standby: cl.Spec.Standby,
- Tolerations: operator.GetTolerations(cl.Spec.Tolerations),
+ Name: cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
+ IsInit: true,
+ Replicas: "0",
+ ClusterName: cl.Spec.Name,
+ Port: cl.Spec.Port,
+ CCPImagePrefix: util.GetValueOrDefault(cl.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix),
+ CCPImage: cl.Spec.CCPImage,
+ CCPImageTag: cl.Spec.CCPImageTag,
+ PVCName: dataVolume.InlineVolumeSource(),
+ DeploymentLabels: operator.GetLabelsFromMap(cl.Spec.UserLabels),
+ PodAnnotations: operator.GetAnnotations(cl, crv1.ClusterAnnotationPostgres),
+ PodLabels: operator.GetLabelsFromMap(cl.Spec.UserLabels),
+ DataPathOverride: cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY],
+ Database: cl.Spec.Database,
+ SecurityContext: operator.GetPodSecurityContext(supplementalGroups),
+ RootSecretName: crv1.UserSecretName(cl, crv1.PGUserSuperuser),
+ PrimarySecretName: crv1.UserSecretName(cl, crv1.PGUserReplication),
+ UserSecretName: crv1.UserSecretName(cl, cl.Spec.User),
+ NodeSelector: operator.GetAffinity(cl.Spec.UserLabels["NodeLabelKey"], cl.Spec.UserLabels["NodeLabelValue"], "In"),
+ PodAntiAffinity: operator.GetPodAntiAffinity(cl,
+ crv1.PodAntiAffinityDeploymentDefault, cl.Spec.PodAntiAffinity.Default),
+ PodAntiAffinityLabelName: config.LABEL_POD_ANTI_AFFINITY,
+ PodAntiAffinityLabelValue: string(cl.Spec.PodAntiAffinity.Default),
+ ContainerResources: operator.GetResourcesJSON(cl.Spec.Resources, cl.Spec.Limits),
+ ConfVolume: operator.GetConfVolume(clientset, cl, namespace),
+ ExporterAddon: operator.GetExporterAddon(cl.Spec),
+ BadgerAddon: operator.GetBadgerAddon(cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY]),
+ PgmonitorEnvVars: operator.GetPgmonitorEnvVars(cl),
+ ScopeLabel: config.LABEL_PGHA_SCOPE,
+ PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY], cl.Spec.Port),
+ PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(clientset, *cl),
+ ReplicaReinitOnStartFail: !operator.Pgo.Cluster.DisableReplicaStartFailReinit,
+ SyncReplication: operator.GetSyncReplication(cl.Spec.SyncReplication),
+ Tablespaces: operator.GetTablespaceNames(cl.Spec.TablespaceMounts),
+ TablespaceVolumes: operator.GetTablespaceVolumesJSON(cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY], tablespaceStorageTypeMap),
+ TablespaceVolumeMounts: operator.GetTablespaceVolumeMountsJSON(tablespaceStorageTypeMap),
+ TLSEnabled: cl.Spec.TLS.IsTLSEnabled(),
+ TLSOnly: cl.Spec.TLSOnly,
+ TLSSecret: cl.Spec.TLS.TLSSecret,
+ ReplicationTLSSecret: cl.Spec.TLS.ReplicationTLSSecret,
+ CASecret: cl.Spec.TLS.CASecret,
+ Standby: cl.Spec.Standby,
+ Tolerations: operator.GetTolerations(cl.Spec.Tolerations),
}
return deploymentFields
@@ -462,42 +465,45 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
// create the replica deployment
replicaDeploymentFields := operator.DeploymentTemplateFields{
- Name: replica.Spec.Name,
- ClusterName: replica.Spec.ClusterName,
- Port: cluster.Spec.Port,
- CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix),
- CCPImageTag: imageTag,
- CCPImage: image,
- PVCName: dataVolume.InlineVolumeSource(),
- Database: cluster.Spec.Database,
- DataPathOverride: replica.Spec.Name,
- Replicas: "1",
- ConfVolume: operator.GetConfVolume(clientset, cluster, namespace),
- DeploymentLabels: operator.GetLabelsFromMap(cluster.Spec.UserLabels),
- PodAnnotations: operator.GetAnnotations(cluster, crv1.ClusterAnnotationPostgres),
- PodLabels: operator.GetLabelsFromMap(cluster.Spec.UserLabels),
- SecurityContext: operator.GetPodSecurityContext(supplementalGroups),
- RootSecretName: crv1.UserSecretName(cluster, crv1.PGUserSuperuser),
- PrimarySecretName: crv1.UserSecretName(cluster, crv1.PGUserReplication),
- UserSecretName: crv1.UserSecretName(cluster, cluster.Spec.User),
- ContainerResources: operator.GetResourcesJSON(cluster.Spec.Resources, cluster.Spec.Limits),
- NodeSelector: operator.GetAffinity(replica.Spec.UserLabels["NodeLabelKey"], replica.Spec.UserLabels["NodeLabelValue"], "In"),
- PodAntiAffinity: operator.GetPodAntiAffinity(cluster, crv1.PodAntiAffinityDeploymentDefault, cluster.Spec.PodAntiAffinity.Default),
- ExporterAddon: operator.GetExporterAddon(cluster.Spec),
- BadgerAddon: operator.GetBadgerAddon(cluster, replica.Spec.Name),
- ScopeLabel: config.LABEL_PGHA_SCOPE,
- PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cluster, replica.Spec.Name, cluster.Spec.Port),
- PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(clientset, *cluster),
- ReplicaReinitOnStartFail: !operator.Pgo.Cluster.DisableReplicaStartFailReinit,
- SyncReplication: operator.GetSyncReplication(cluster.Spec.SyncReplication),
- Tablespaces: operator.GetTablespaceNames(cluster.Spec.TablespaceMounts),
- TablespaceVolumes: operator.GetTablespaceVolumesJSON(replica.Spec.Name, tablespaceStorageTypeMap),
- TablespaceVolumeMounts: operator.GetTablespaceVolumeMountsJSON(tablespaceStorageTypeMap),
- TLSEnabled: cluster.Spec.TLS.IsTLSEnabled(),
- TLSOnly: cluster.Spec.TLSOnly,
- TLSSecret: cluster.Spec.TLS.TLSSecret,
- ReplicationTLSSecret: cluster.Spec.TLS.ReplicationTLSSecret,
- CASecret: cluster.Spec.TLS.CASecret,
+ Name: replica.Spec.Name,
+ ClusterName: replica.Spec.ClusterName,
+ Port: cluster.Spec.Port,
+ CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix),
+ CCPImageTag: imageTag,
+ CCPImage: image,
+ PVCName: dataVolume.InlineVolumeSource(),
+ Database: cluster.Spec.Database,
+ DataPathOverride: replica.Spec.Name,
+ Replicas: "1",
+ ConfVolume: operator.GetConfVolume(clientset, cluster, namespace),
+ DeploymentLabels: operator.GetLabelsFromMap(cluster.Spec.UserLabels),
+ PodAnnotations: operator.GetAnnotations(cluster, crv1.ClusterAnnotationPostgres),
+ PodLabels: operator.GetLabelsFromMap(cluster.Spec.UserLabels),
+ SecurityContext: operator.GetPodSecurityContext(supplementalGroups),
+ RootSecretName: crv1.UserSecretName(cluster, crv1.PGUserSuperuser),
+ PrimarySecretName: crv1.UserSecretName(cluster, crv1.PGUserReplication),
+ UserSecretName: crv1.UserSecretName(cluster, cluster.Spec.User),
+ ContainerResources: operator.GetResourcesJSON(cluster.Spec.Resources, cluster.Spec.Limits),
+ NodeSelector: operator.GetAffinity(replica.Spec.UserLabels["NodeLabelKey"], replica.Spec.UserLabels["NodeLabelValue"], "In"),
+ PodAntiAffinity: operator.GetPodAntiAffinity(cluster,
+ crv1.PodAntiAffinityDeploymentDefault, cluster.Spec.PodAntiAffinity.Default),
+ PodAntiAffinityLabelName: config.LABEL_POD_ANTI_AFFINITY,
+ PodAntiAffinityLabelValue: string(cluster.Spec.PodAntiAffinity.Default),
+ ExporterAddon: operator.GetExporterAddon(cluster.Spec),
+ BadgerAddon: operator.GetBadgerAddon(cluster, replica.Spec.Name),
+ ScopeLabel: config.LABEL_PGHA_SCOPE,
+ PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cluster, replica.Spec.Name, cluster.Spec.Port),
+ PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(clientset, *cluster),
+ ReplicaReinitOnStartFail: !operator.Pgo.Cluster.DisableReplicaStartFailReinit,
+ SyncReplication: operator.GetSyncReplication(cluster.Spec.SyncReplication),
+ Tablespaces: operator.GetTablespaceNames(cluster.Spec.TablespaceMounts),
+ TablespaceVolumes: operator.GetTablespaceVolumesJSON(replica.Spec.Name, tablespaceStorageTypeMap),
+ TablespaceVolumeMounts: operator.GetTablespaceVolumeMountsJSON(tablespaceStorageTypeMap),
+ TLSEnabled: cluster.Spec.TLS.IsTLSEnabled(),
+ TLSOnly: cluster.Spec.TLSOnly,
+ TLSSecret: cluster.Spec.TLS.TLSSecret,
+ ReplicationTLSSecret: cluster.Spec.TLS.ReplicationTLSSecret,
+ CASecret: cluster.Spec.TLS.CASecret,
// Give precedence to the tolerations defined on the replica spec, otherwise
// take any tolerations defined on the cluster spec
Tolerations: util.GetValueOrDefault(
diff --git a/internal/operator/cluster/upgrade.go b/internal/operator/cluster/upgrade.go
index bebdeea866..62127b687e 100644
--- a/internal/operator/cluster/upgrade.go
+++ b/internal/operator/cluster/upgrade.go
@@ -589,9 +589,6 @@ func preparePgclusterForUpgrade(pgcluster *crv1.Pgcluster, parameters map[string
// and unable to sync all replicas to the current timeline
pgcluster.Spec.DisableAutofail = false
- // Don't think we'll need to do this, but leaving the comment for now....
- // pgcluster.ObjectMeta.Labels[config.LABEL_POD_ANTI_AFFINITY] = ""
-
// set pgouser to match the default configuration currently in use after the Operator upgrade
pgcluster.ObjectMeta.Labels[config.LABEL_PGOUSER] = parameters[config.LABEL_PGOUSER]
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index 6497d95262..c90ea0090a 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -158,29 +158,31 @@ type DeploymentTemplateFields struct {
DeploymentLabels string
// PodAnnotations are user-specified annotations that can be applied to a
// Pod, e.g. annotations specific to a PostgreSQL instance
- PodAnnotations string
- PodLabels string
- DataPathOverride string
- PVCName string
- RootSecretName string
- UserSecretName string
- PrimarySecretName string
- SecurityContext string
- ContainerResources string
- NodeSelector string
- ConfVolume string
- ExporterAddon string
- BadgerAddon string
- PgbackrestEnvVars string
- PgbackrestS3EnvVars string
- PgmonitorEnvVars string
- ScopeLabel string
- Replicas string
- IsInit bool
- ReplicaReinitOnStartFail bool
- PodAntiAffinity string
- SyncReplication bool
- Standby bool
+ PodAnnotations string
+ PodLabels string
+ DataPathOverride string
+ PVCName string
+ RootSecretName string
+ UserSecretName string
+ PrimarySecretName string
+ SecurityContext string
+ ContainerResources string
+ NodeSelector string
+ ConfVolume string
+ ExporterAddon string
+ BadgerAddon string
+ PgbackrestEnvVars string
+ PgbackrestS3EnvVars string
+ PgmonitorEnvVars string
+ ScopeLabel string
+ Replicas string
+ IsInit bool
+ ReplicaReinitOnStartFail bool
+ PodAntiAffinity string
+ PodAntiAffinityLabelName string
+ PodAntiAffinityLabelValue string
+ SyncReplication bool
+ Standby bool
// A comma-separated list of tablespace names...this could be an array, but
// given how this would ultimately be interpreted in a shell script somewhere
// down the line, it's easier for the time being to do it this way. In the
From 59f89dbee02de060bf50a8d33701b6ccecbf1c83 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 31 Dec 2020 13:04:23 -0500
Subject: [PATCH 098/276] Move node affinity controls to CRD attributes
Node affinity had been controlled by an amorphous label on the
`pgclusters.crunchydata.com` and `pgreplicas.crunchydata.com`
custom resources. It is now driven by a CRD attribute, similar to
the approach used for pod anti-affinity, but using the standard
Kubernetes definition for setting node affinity on a Pod.
The `--node-label` syntax on the various pgo-client commands still
works as expected and is mapped to the new format. The attributes
on the CRDs allow for expanding the number of supported node
affinity rules that can be placed on a PostgreSQL cluster,
particularly as Kubernetes deployment topologies continue to evolve.
---
.../files/pgo-configs/affinity.json | 14 ----
.../files/pgo-configs/cluster-deployment.json | 8 ++-
.../apiserver/clusterservice/clusterimpl.go | 16 +++--
.../apiserver/clusterservice/scaleimpl.go | 11 +--
internal/apiserver/common.go | 1 +
internal/config/pgoconfig.go | 9 ---
internal/operator/backrest/restore.go | 13 ++--
internal/operator/cluster/cluster.go | 9 ++-
internal/operator/cluster/clusterlogic.go | 11 ++-
internal/operator/cluster/upgrade.go | 31 +++++++++
internal/operator/clusterutilities.go | 57 +++++-----------
internal/operator/pgdump/restore.go | 10 ++-
internal/util/cluster.go | 25 +++++++
internal/util/cluster_test.go | 67 +++++++++++++++++++
pkg/apis/crunchydata.com/v1/cluster.go | 17 +++++
pkg/apis/crunchydata.com/v1/replica.go | 3 +
.../v1/zz_generated.deepcopy.go | 32 +++++++++
17 files changed, 245 insertions(+), 89 deletions(-)
delete mode 100644 installers/ansible/roles/pgo-operator/files/pgo-configs/affinity.json
create mode 100644 internal/util/cluster_test.go
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/affinity.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/affinity.json
deleted file mode 100644
index a247bd9bb4..0000000000
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/affinity.json
+++ /dev/null
@@ -1,14 +0,0 @@
- "nodeAffinity": {
- "preferredDuringSchedulingIgnoredDuringExecution": [{
- "weight": 10,
- "preference": {
- "matchExpressions": [{
- "key": "{{.NodeLabelKey}}",
- "operator": "{{.OperatorValue}}",
- "values": [
- "{{.NodeLabelValue}}"
- ]
- }]
- }
- }]
- }
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
index a0645d1048..0e3f2ef6cc 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
@@ -373,9 +373,11 @@
{{.TablespaceVolumes}}
],
"affinity": {
- {{.NodeSelector}}
- {{if and .NodeSelector .PodAntiAffinity}},{{end}}
- {{.PodAntiAffinity}}
+ {{if .NodeSelector}}
+ "nodeAffinity": {{ .NodeSelector }}
+ {{ end }}
+ {{if and .NodeSelector .PodAntiAffinity}},{{end}}
+ {{.PodAntiAffinity}}
},
"restartPolicy": "Always",
"dnsPolicy": "ClusterFirst"
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 7ce3b78c11..00b621c680 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -792,12 +792,6 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
resp.Status.Msg = err.Error()
return resp
}
-
- parts := strings.Split(request.NodeLabel, "=")
- userLabelsMap[config.LABEL_NODE_LABEL_KEY] = parts[0]
- userLabelsMap[config.LABEL_NODE_LABEL_VALUE] = parts[1]
-
- log.Debug("primary node labels used from user entered flag")
}
if request.ReplicaStorageConfig != "" {
@@ -1242,6 +1236,16 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
spec.PodAntiAffinity = podAntiAffinity
}
+ // if there is a node label, set the node affinity
+ if request.NodeLabel != "" {
+ nodeLabel := strings.Split(request.NodeLabel, "=")
+ spec.NodeAffinity = crv1.NodeAffinitySpec{
+ Default: util.GenerateNodeAffinity(nodeLabel[0], []string{nodeLabel[1]}),
+ }
+
+ log.Debugf("using node label %s", request.NodeLabel)
+ }
+
// if the PVCSize is overwritten, update the primary storage spec with this
// value
if request.PVCSize != "" {
diff --git a/internal/apiserver/clusterservice/scaleimpl.go b/internal/apiserver/clusterservice/scaleimpl.go
index 0a8bcf0fff..b4dc9f2dba 100644
--- a/internal/apiserver/clusterservice/scaleimpl.go
+++ b/internal/apiserver/clusterservice/scaleimpl.go
@@ -98,10 +98,6 @@ func ScaleCluster(request msgs.ClusterScaleRequest, pgouser string) msgs.Cluster
spec.ServiceType = request.ServiceType
}
- // set replica node lables to blank to start with, then check for overrides
- spec.UserLabels[config.LABEL_NODE_LABEL_KEY] = ""
- spec.UserLabels[config.LABEL_NODE_LABEL_VALUE] = ""
-
// validate & parse nodeLabel if exists
if request.NodeLabel != "" {
if err = apiserver.ValidateNodeLabel(request.NodeLabel); err != nil {
@@ -110,11 +106,10 @@ func ScaleCluster(request msgs.ClusterScaleRequest, pgouser string) msgs.Cluster
return response
}
- parts := strings.Split(request.NodeLabel, "=")
- spec.UserLabels[config.LABEL_NODE_LABEL_KEY] = parts[0]
- spec.UserLabels[config.LABEL_NODE_LABEL_VALUE] = parts[1]
+ nodeLabel := strings.Split(request.NodeLabel, "=")
+ spec.NodeAffinity = util.GenerateNodeAffinity(nodeLabel[0], []string{nodeLabel[1]})
- log.Debug("using user entered node label for replica creation")
+ log.Debugf("using node label %s", request.NodeLabel)
}
labels := make(map[string]string)
diff --git a/internal/apiserver/common.go b/internal/apiserver/common.go
index 204f50fa70..7f5592b3c4 100644
--- a/internal/apiserver/common.go
+++ b/internal/apiserver/common.go
@@ -24,6 +24,7 @@ import (
"github.com/crunchydata/postgres-operator/internal/config"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+
log "github.com/sirupsen/logrus"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/resource"
diff --git a/internal/config/pgoconfig.go b/internal/config/pgoconfig.go
index 2a72513437..09d78ab841 100644
--- a/internal/config/pgoconfig.go
+++ b/internal/config/pgoconfig.go
@@ -95,10 +95,6 @@ var ContainerResourcesTemplate *template.Template
const containerResourcesTemplatePath = "container-resources.json"
-var AffinityTemplate *template.Template
-
-const affinityTemplatePath = "affinity.json"
-
var PodAntiAffinityTemplate *template.Template
const podAntiAffinityTemplatePath = "pod-anti-affinity.json"
@@ -685,11 +681,6 @@ func (c *PgoConfig) GetConfig(clientset kubernetes.Interface, namespace string)
return err
}
- AffinityTemplate, err = c.LoadTemplate(cMap, affinityTemplatePath)
- if err != nil {
- return err
- }
-
PodAntiAffinityTemplate, err = c.LoadTemplate(cMap, podAntiAffinityTemplatePath)
if err != nil {
return err
diff --git a/internal/operator/backrest/restore.go b/internal/operator/backrest/restore.go
index 5d727522f6..4216a31197 100644
--- a/internal/operator/backrest/restore.go
+++ b/internal/operator/backrest/restore.go
@@ -26,6 +26,7 @@ import (
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/kubeapi"
+ "github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
"github.com/crunchydata/postgres-operator/pkg/events"
pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned"
@@ -92,10 +93,14 @@ func UpdatePGClusterSpecForRestore(clientset kubeapi.Interface, cluster *crv1.Pg
cluster.Spec.PGDataSource.RestoreOpts = restoreOpts
// set the proper node affinity for the restore job
- cluster.Spec.UserLabels[config.LABEL_NODE_LABEL_KEY] =
- task.Spec.Parameters[config.LABEL_NODE_LABEL_KEY]
- cluster.Spec.UserLabels[config.LABEL_NODE_LABEL_VALUE] =
- task.Spec.Parameters[config.LABEL_NODE_LABEL_VALUE]
+ if task.Spec.Parameters[config.LABEL_NODE_LABEL_KEY] != "" && task.Spec.Parameters[config.LABEL_NODE_LABEL_VALUE] != "" {
+ cluster.Spec.NodeAffinity = crv1.NodeAffinitySpec{
+ Default: util.GenerateNodeAffinity(
+ task.Spec.Parameters[config.LABEL_NODE_LABEL_KEY],
+ []string{task.Spec.Parameters[config.LABEL_NODE_LABEL_VALUE]},
+ ),
+ }
+ }
}
// PrepareClusterForRestore prepares a PostgreSQL cluster for a restore. This includes deleting
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 216300cc0e..9319a10c00 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -212,9 +212,12 @@ func AddClusterBase(clientset kubeapi.Interface, cl *crv1.Pgcluster, namespace s
spec.UserLabels = cl.Spec.UserLabels
- // the replica should not use the same node labels as the primary
- spec.UserLabels[config.LABEL_NODE_LABEL_KEY] = ""
- spec.UserLabels[config.LABEL_NODE_LABEL_VALUE] = ""
+ // if the primary cluster has default node affinity rules set, we need
+ // to honor them in the spec. if a different affinity is desired, the
+ // replica needs to set its own rules
+ if cl.Spec.NodeAffinity.Default != nil {
+ spec.NodeAffinity = cl.Spec.NodeAffinity.Default
+ }
labels := make(map[string]string)
labels[config.LABEL_PG_CLUSTER] = cl.Spec.Name
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index 964e5cb9e9..c7f1d4cc00 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -324,7 +324,7 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
RootSecretName: crv1.UserSecretName(cl, crv1.PGUserSuperuser),
PrimarySecretName: crv1.UserSecretName(cl, crv1.PGUserReplication),
UserSecretName: crv1.UserSecretName(cl, cl.Spec.User),
- NodeSelector: operator.GetAffinity(cl.Spec.UserLabels["NodeLabelKey"], cl.Spec.UserLabels["NodeLabelValue"], "In"),
+ NodeSelector: operator.GetNodeAffinity(cl.Spec.NodeAffinity.Default),
PodAntiAffinity: operator.GetPodAntiAffinity(cl,
crv1.PodAntiAffinityDeploymentDefault, cl.Spec.PodAntiAffinity.Default),
PodAntiAffinityLabelName: config.LABEL_POD_ANTI_AFFINITY,
@@ -463,6 +463,13 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
supplementalGroups = append(supplementalGroups, v.SupplementalGroups...)
}
+ // check if there are any node affinity rules. rules on the replica supersede
+ // rules on the primary
+ nodeAffinity := cluster.Spec.NodeAffinity.Default
+ if replica.Spec.NodeAffinity != nil {
+ nodeAffinity = replica.Spec.NodeAffinity
+ }
+
// create the replica deployment
replicaDeploymentFields := operator.DeploymentTemplateFields{
Name: replica.Spec.Name,
@@ -484,7 +491,7 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
PrimarySecretName: crv1.UserSecretName(cluster, crv1.PGUserReplication),
UserSecretName: crv1.UserSecretName(cluster, cluster.Spec.User),
ContainerResources: operator.GetResourcesJSON(cluster.Spec.Resources, cluster.Spec.Limits),
- NodeSelector: operator.GetAffinity(replica.Spec.UserLabels["NodeLabelKey"], replica.Spec.UserLabels["NodeLabelValue"], "In"),
+ NodeSelector: operator.GetNodeAffinity(nodeAffinity),
PodAntiAffinity: operator.GetPodAntiAffinity(cluster,
crv1.PodAntiAffinityDeploymentDefault, cluster.Spec.PodAntiAffinity.Default),
PodAntiAffinityLabelName: config.LABEL_POD_ANTI_AFFINITY,
diff --git a/internal/operator/cluster/upgrade.go b/internal/operator/cluster/upgrade.go
index 62127b687e..fb5c344cdb 100644
--- a/internal/operator/cluster/upgrade.go
+++ b/internal/operator/cluster/upgrade.go
@@ -505,6 +505,37 @@ func preparePgclusterForUpgrade(pgcluster *crv1.Pgcluster, parameters map[string
delete(pgcluster.ObjectMeta.Labels, "autofail")
}
+ // 4.6.0 moved the node labels to the custom resource objects in a more
+ // structured way. if we have a node label, then let's migrate it to that
+ // format
+ if pgcluster.Spec.UserLabels["NodeLabelKey"] != "" && pgcluster.Spec.UserLabels["NodeLabelValue"] != "" {
+ // transition to using the native NodeAffinity objects. In the previous
+ // setup, this was, by default, preferred node affinity, designed to
+ // match a standard setup.
+ requirement := v1.NodeSelectorRequirement{
+ Key: pgcluster.Spec.UserLabels["NodeLabelKey"],
+ Values: []string{pgcluster.Spec.UserLabels["NodeLabelValue"]},
+ Operator: v1.NodeSelectorOpIn,
+ }
+ term := v1.PreferredSchedulingTerm{
+ Weight: crv1.NodeAffinityDefaultWeight, // taking this from the former template
+ Preference: v1.NodeSelectorTerm{
+ MatchExpressions: []v1.NodeSelectorRequirement{requirement},
+ },
+ }
+
+ // and here is our default node affinity rule
+ pgcluster.Spec.NodeAffinity = crv1.NodeAffinitySpec{
+ Default: &v1.NodeAffinity{
+ PreferredDuringSchedulingIgnoredDuringExecution: []v1.PreferredSchedulingTerm{term},
+ },
+ }
+
+ // erase all trace of this
+ delete(pgcluster.Spec.UserLabels, "NodeLabelKey")
+ delete(pgcluster.Spec.UserLabels, "NodeLabelValue")
+ }
+
// 4.6.0 moved the "backrest-storage-type" label to a CRD attribute, well,
// really an array of CRD attributes, which we need to map the various
// attributes to. "local" will be mapped to "posix" to match the pgBackRest
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index c90ea0090a..fe1cb37b26 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -36,12 +36,6 @@ import (
"k8s.io/client-go/kubernetes"
)
-// consolidate with cluster.affinityTemplateFields
-const (
- AffinityInOperator = "In"
- AFFINITY_NOTINOperator = "NotIn"
-)
-
// PGHAConfigMapSuffix defines the suffix for the name of the PGHA configMap created for each PG
// cluster
const PGHAConfigMapSuffix = "pgha-config"
@@ -74,12 +68,6 @@ const (
preferScheduleIgnoreExec affinityType = "preferredDuringSchedulingIgnoredDuringExecution"
)
-type affinityTemplateFields struct {
- NodeLabelKey string
- NodeLabelValue string
- OperatorValue string
-}
-
type podAntiAffinityTemplateFields struct {
AffinityType affinityType
ClusterName string
@@ -467,6 +455,24 @@ func CreatePGHAConfigMap(clientset kubernetes.Interface, cluster *crv1.Pgcluster
return nil
}
+// GetNodeAffinity returns any node affinity rules for the Operator in a JSON
+// string. If there is no data or there is an error, it will return an empty
+// string.
+func GetNodeAffinity(nodeAffinity *v1.NodeAffinity) string {
+ if nodeAffinity == nil {
+ return ""
+ }
+
+ data, err := json.MarshalIndent(nodeAffinity, "", " ")
+
+ if err != nil {
+ log.Warnf("could not generate node affinity: %s", err.Error())
+ return ""
+ }
+
+ return string(data)
+}
+
// GetTablespaceNamePVCMap returns a map of the tablespace name to the PVC name
func GetTablespaceNamePVCMap(clusterName string, tablespaceStorageTypeMap map[string]string) map[string]string {
tablespacePVCMap := map[string]string{}
@@ -633,33 +639,6 @@ func GetLabelsFromMap(labels map[string]string) string {
return strings.TrimSuffix(output, ",")
}
-// GetAffinity ...
-func GetAffinity(nodeLabelKey, nodeLabelValue string, affoperator string) string {
- log.Debugf("GetAffinity with nodeLabelKey=[%s] nodeLabelKey=[%s] and operator=[%s]\n", nodeLabelKey, nodeLabelValue, affoperator)
- output := ""
- if nodeLabelKey == "" {
- return output
- }
-
- affinityTemplateFields := affinityTemplateFields{}
- affinityTemplateFields.NodeLabelKey = nodeLabelKey
- affinityTemplateFields.NodeLabelValue = nodeLabelValue
- affinityTemplateFields.OperatorValue = affoperator
-
- var affinityDoc bytes.Buffer
- err := config.AffinityTemplate.Execute(&affinityDoc, affinityTemplateFields)
- if err != nil {
- log.Error(err.Error())
- return output
- }
-
- if CRUNCHY_DEBUG {
- _ = config.AffinityTemplate.Execute(os.Stdout, affinityTemplateFields)
- }
-
- return affinityDoc.String()
-}
-
// GetPodAntiAffinity returns the populated pod anti-affinity json that should be attached to
// the various pods comprising the pg cluster
func GetPodAntiAffinity(cluster *crv1.Pgcluster, deploymentType crv1.PodAntiAffinityDeployment, podAntiAffinityType crv1.PodAntiAffinityType) string {
diff --git a/internal/operator/pgdump/restore.go b/internal/operator/pgdump/restore.go
index 95169c06de..550b64390b 100644
--- a/internal/operator/pgdump/restore.go
+++ b/internal/operator/pgdump/restore.go
@@ -28,8 +28,10 @@ import (
"github.com/crunchydata/postgres-operator/internal/operator/pvc"
"github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+
log "github.com/sirupsen/logrus"
v1batch "k8s.io/api/batch/v1"
+ v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -76,6 +78,12 @@ func Restore(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
storage := cluster.Spec.PrimaryStorage
taskName := task.Name
+ var nodeAffinity *v1.NodeAffinity
+
+ if task.Spec.Parameters["NodeLabelKey"] != "" && task.Spec.Parameters["NodeLabelValue"] != "" {
+ nodeAffinity = util.GenerateNodeAffinity(
+ task.Spec.Parameters["NodeLabelKey"], []string{task.Spec.Parameters["NodeLabelValue"]})
+ }
jobFields := restorejobTemplateFields{
JobName: fmt.Sprintf("pgrestore-%s-%s", task.Spec.Parameters[config.LABEL_PGRESTORE_FROM_CLUSTER],
@@ -92,7 +100,7 @@ func Restore(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
PITRTarget: task.Spec.Parameters[config.LABEL_PGRESTORE_PITR_TARGET],
CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix),
CCPImageTag: operator.Pgo.Cluster.CCPImageTag,
- NodeSelector: operator.GetAffinity(task.Spec.Parameters["NodeLabelKey"], task.Spec.Parameters["NodeLabelValue"], "In"),
+ NodeSelector: operator.GetNodeAffinity(nodeAffinity),
}
var doc2 bytes.Buffer
diff --git a/internal/util/cluster.go b/internal/util/cluster.go
index b92ea1c587..43c52fd39b 100644
--- a/internal/util/cluster.go
+++ b/internal/util/cluster.go
@@ -231,6 +231,31 @@ func CreateBackrestRepoSecrets(clientset kubernetes.Interface,
return err
}
+// GenerateNodeAffinity creates a Kubernetes node affinity object suitable for
+// storage on our custom resource. For now, it only supports preferred affinity,
+// though it can be expanded to support more complex rules.
+func GenerateNodeAffinity(key string, values []string) *v1.NodeAffinity {
+ // generate the selector requirement, which at this point is just the
+ // "node label is in" requirement
+ requirement := v1.NodeSelectorRequirement{
+ Key: key,
+ Values: values,
+ Operator: v1.NodeSelectorOpIn,
+ }
+ // build the preferred affinity term. Right now this is the only one supported
+ term := v1.PreferredSchedulingTerm{
+ Weight: crv1.NodeAffinityDefaultWeight,
+ Preference: v1.NodeSelectorTerm{
+ MatchExpressions: []v1.NodeSelectorRequirement{requirement},
+ },
+ }
+
+ // and here is our node affinity rule
+ return &v1.NodeAffinity{
+ PreferredDuringSchedulingIgnoredDuringExecution: []v1.PreferredSchedulingTerm{term},
+ }
+}
+
// GeneratedPasswordValidUntilDays returns the value for the number of days that
// a password is valid for, which is used as part of PostgreSQL's VALID UNTIL
// directive on a user. It first determines if the user provided this value via
diff --git a/internal/util/cluster_test.go b/internal/util/cluster_test.go
new file mode 100644
index 0000000000..3d80c561f4
--- /dev/null
+++ b/internal/util/cluster_test.go
@@ -0,0 +1,67 @@
+package util
+
+/*
+Copyright 2020 Crunchy Data Solutions, Inc.
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+import (
+ "reflect"
+ "testing"
+
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+ v1 "k8s.io/api/core/v1"
+)
+
+func TestGenerateNodeAffinity(t *testing.T) {
+ // presently this test is really strict. as we allow for more options, we will
+ // need to add more tests.
+ t.Run("valid", func(t *testing.T) {
+ key := "foo"
+ values := []string{"bar", "baz"}
+
+ affinity := GenerateNodeAffinity(key, values)
+
+ if len(affinity.PreferredDuringSchedulingIgnoredDuringExecution) == 0 {
+ t.Fatalf("expected preferred node affinity to be set")
+ } else if len(affinity.PreferredDuringSchedulingIgnoredDuringExecution) > 1 {
+ t.Fatalf("only expected one rule to be set")
+ }
+
+ term := affinity.PreferredDuringSchedulingIgnoredDuringExecution[0]
+
+ if term.Weight != crv1.NodeAffinityDefaultWeight {
+ t.Fatalf("expected weight %d actual %d", crv1.NodeAffinityDefaultWeight, term.Weight)
+ }
+
+ if len(term.Preference.MatchExpressions) == 0 {
+ t.Fatalf("expected a match expression to be set")
+ } else if len(term.Preference.MatchExpressions) > 1 {
+ t.Fatalf("expected only one match expression to be set")
+ }
+
+ rule := term.Preference.MatchExpressions[0]
+
+ if rule.Operator != v1.NodeSelectorOpIn {
+ t.Fatalf("operator expected %s actual %s", v1.NodeSelectorOpIn, rule.Operator)
+ }
+
+ if rule.Key != key {
+ t.Fatalf("key expected %s actual %s", key, rule.Key)
+ }
+
+ if !reflect.DeepEqual(rule.Values, values) {
+ t.Fatalf("values expected %v actual %v", values, rule.Values)
+ }
+ })
+}
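For reference, the object that `GenerateNodeAffinity` builds serializes to the standard Kubernetes preferred node affinity JSON. Here is a minimal sketch using local stand-in structs; the real code uses the `k8s.io/api/core/v1` types, and everything below is illustrative only:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Stand-ins for the corev1 types used by GenerateNodeAffinity. Field and
// JSON names mirror the Kubernetes API but are local to this sketch.
type NodeSelectorRequirement struct {
	Key      string   `json:"key"`
	Operator string   `json:"operator"`
	Values   []string `json:"values"`
}

type NodeSelectorTerm struct {
	MatchExpressions []NodeSelectorRequirement `json:"matchExpressions"`
}

type PreferredSchedulingTerm struct {
	Weight     int32            `json:"weight"`
	Preference NodeSelectorTerm `json:"preference"`
}

type NodeAffinity struct {
	PreferredDuringSchedulingIgnoredDuringExecution []PreferredSchedulingTerm `json:"preferredDuringSchedulingIgnoredDuringExecution"`
}

// generateNodeAffinity mirrors util.GenerateNodeAffinity: a single weighted
// "In" match expression wrapped in one preferred scheduling term.
func generateNodeAffinity(key string, values []string) *NodeAffinity {
	return &NodeAffinity{
		PreferredDuringSchedulingIgnoredDuringExecution: []PreferredSchedulingTerm{{
			Weight: 10, // stand-in for crv1.NodeAffinityDefaultWeight
			Preference: NodeSelectorTerm{
				MatchExpressions: []NodeSelectorRequirement{{
					Key:      key,
					Operator: "In",
					Values:   values,
				}},
			},
		}},
	}
}

func main() {
	data, _ := json.Marshal(generateNodeAffinity("region", []string{"us-east-1"}))
	fmt.Println(string(data))
}
```

This is the same shape that `GetNodeAffinity` marshals into the deployment template, so an empty input there can simply short-circuit to an empty string.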
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index 76f7ccae67..54c7c8fe4c 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -117,6 +117,7 @@ type PgclusterSpec struct {
Status string `json:"status"`
CustomConfig string `json:"customconfig"`
UserLabels map[string]string `json:"userlabels"`
+ NodeAffinity NodeAffinitySpec `json:"nodeAffinity"`
PodAntiAffinity PodAntiAffinitySpec `json:"podAntiAffinity"`
SyncReplication *bool `json:"syncReplication"`
BackrestConfig []v1.VolumeProjection `json:"backrestConfig"`
@@ -248,6 +249,22 @@ type PgclusterStatus struct {
// swagger:ignore
type PgclusterState string
+// NodeAffinityDefaultWeight is the default weighting for the preferred node
+// affinity. This value was carried over from our legacy template, so it may
+// be deliberate or simply arbitrary. Either way, the number must fall
+// somewhere within the range [1, 100].
+const NodeAffinityDefaultWeight int32 = 10
+
+// NodeAffinitySpec contains optional NodeAffinity rules for the different
+// deployment types managed by the Operator. While similar to how the Operator
+// handles pod anti-affinity, it makes reference to the supported Kubernetes
+// objects to maintain more familiarity and consistency.
+//
+// All of these are optional, so one must ensure they check for nils.
+type NodeAffinitySpec struct {
+ Default *v1.NodeAffinity `json:"default"`
+}
+
// PodAntiAffinityDeployment distinguishes between the different types of
// Deployments that can leverage PodAntiAffinity
type PodAntiAffinityDeployment int
diff --git a/pkg/apis/crunchydata.com/v1/replica.go b/pkg/apis/crunchydata.com/v1/replica.go
index 45f6cd123a..08a830a8d3 100644
--- a/pkg/apis/crunchydata.com/v1/replica.go
+++ b/pkg/apis/crunchydata.com/v1/replica.go
@@ -46,6 +46,9 @@ type PgreplicaSpec struct {
ServiceType v1.ServiceType `json:"serviceType"`
Status string `json:"status"`
UserLabels map[string]string `json:"userlabels"`
+ // NodeAffinity is an optional structure that dictates how an instance should
+ // be deployed in an environment
+ NodeAffinity *v1.NodeAffinity `json:"nodeAffinity"`
// Tolerations are an optional list of Pod toleration rules that are applied
// to the PostgreSQL instance.
Tolerations []v1.Toleration `json:"tolerations"`
diff --git a/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go b/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go
index 3cef8c84f5..86b9b5ed4d 100644
--- a/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go
+++ b/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go
@@ -68,6 +68,27 @@ func (in *ClusterAnnotations) DeepCopy() *ClusterAnnotations {
return out
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *NodeAffinitySpec) DeepCopyInto(out *NodeAffinitySpec) {
+ *out = *in
+ if in.Default != nil {
+ in, out := &in.Default, &out.Default
+ *out = new(corev1.NodeAffinity)
+ (*in).DeepCopyInto(*out)
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeAffinitySpec.
+func (in *NodeAffinitySpec) DeepCopy() *NodeAffinitySpec {
+ if in == nil {
+ return nil
+ }
+ out := new(NodeAffinitySpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PGDataSourceSpec) DeepCopyInto(out *PGDataSourceSpec) {
*out = *in
@@ -248,6 +269,7 @@ func (in *PgclusterSpec) DeepCopyInto(out *PgclusterSpec) {
(*out)[key] = val
}
}
+ in.NodeAffinity.DeepCopyInto(&out.NodeAffinity)
out.PodAntiAffinity = in.PodAntiAffinity
if in.SyncReplication != nil {
in, out := &in.SyncReplication, &out.SyncReplication
@@ -261,6 +283,11 @@ func (in *PgclusterSpec) DeepCopyInto(out *PgclusterSpec) {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
+ if in.BackrestStorageTypes != nil {
+ in, out := &in.BackrestStorageTypes, &out.BackrestStorageTypes
+ *out = make([]BackrestStorageType, len(*in))
+ copy(*out, *in)
+ }
if in.TablespaceMounts != nil {
in, out := &in.TablespaceMounts, &out.TablespaceMounts
*out = make(map[string]PgStorageSpec, len(*in))
@@ -472,6 +499,11 @@ func (in *PgreplicaSpec) DeepCopyInto(out *PgreplicaSpec) {
(*out)[key] = val
}
}
+ if in.NodeAffinity != nil {
+ in, out := &in.NodeAffinity, &out.NodeAffinity
+ *out = new(corev1.NodeAffinity)
+ (*in).DeepCopyInto(*out)
+ }
if in.Tolerations != nil {
in, out := &in.Tolerations, &out.Tolerations
*out = make([]corev1.Toleration, len(*in))
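The 4.6.0 upgrade path in `preparePgclusterForUpgrade` can be sketched with simplified stand-in types: both halves of the legacy label pair must be present before anything is migrated, and the legacy entries are removed afterward. The type and field names below are illustrative, not the Operator's actual ones:

```go
package main

import "fmt"

// Minimal stand-ins for the pgcluster spec fields touched by the 4.6.0
// upgrade; the real types live in pkg/apis/crunchydata.com/v1.
type nodeAffinitySpec struct {
	DefaultKey    string
	DefaultValues []string
}

type pgclusterSpec struct {
	UserLabels   map[string]string
	NodeAffinity *nodeAffinitySpec
}

// migrateNodeLabels mirrors the upgrade heuristic: only when both legacy
// label halves are present is a default preferred affinity rule created,
// after which the legacy entries are deleted.
func migrateNodeLabels(spec *pgclusterSpec) {
	key := spec.UserLabels["NodeLabelKey"]
	value := spec.UserLabels["NodeLabelValue"]
	if key == "" || value == "" {
		return
	}
	spec.NodeAffinity = &nodeAffinitySpec{DefaultKey: key, DefaultValues: []string{value}}
	delete(spec.UserLabels, "NodeLabelKey")
	delete(spec.UserLabels, "NodeLabelValue")
}

func main() {
	spec := &pgclusterSpec{UserLabels: map[string]string{
		"NodeLabelKey": "region", "NodeLabelValue": "us-east-1",
	}}
	migrateNodeLabels(spec)
	fmt.Println(spec.NodeAffinity.DefaultKey, len(spec.UserLabels))
}
```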
From 9818ac9eadd3af90ba30db070195941cd0fdef4b Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 31 Dec 2020 16:03:42 -0500
Subject: [PATCH 099/276] Allow for selection of node affinity type from pgo
client
The previous commit allows for node affinity type (e.g. preferred, required) to
be selected directly from a custom resource. This is now possible from the
command line using the `--node-affinity-type` flag, which accepts values of
"preferred" and "required" ("prefer" and "require" are also accepted).
This flag is available on the following commands:
- `pgo create cluster`
- `pgo scale`
- `pgo restore`
Note that this flag needs to be used in conjunction with the `--node-label`
flag.
---
cmd/pgo/cmd/cluster.go | 1 +
cmd/pgo/cmd/common.go | 33 +++++++++++
cmd/pgo/cmd/create.go | 6 ++
cmd/pgo/cmd/restore.go | 6 +-
cmd/pgo/cmd/scale.go | 19 ++++---
.../architecture/high-availability/_index.md | 31 ++++++++--
docs/content/custom-resources/_index.md | 22 +++++--
.../reference/pgo_create_cluster.md | 11 ++--
.../pgo-client/reference/pgo_restore.md | 7 ++-
.../reference/pgo_update_cluster.md | 10 ++--
.../overlay/staging/hippo-rpl1-pgreplica.yaml | 2 -
.../apiserver/backrestservice/backrestimpl.go | 6 ++
.../apiserver/clusterservice/clusterimpl.go | 2 +-
.../apiserver/clusterservice/scaleimpl.go | 2 +-
.../apiserver/pgdumpservice/pgdumpimpl.go | 6 ++
internal/config/labels.go | 29 +++++-----
internal/operator/backrest/restore.go | 6 ++
internal/operator/pgdump/restore.go | 7 ++-
internal/util/cluster.go | 33 +++++++----
internal/util/cluster_test.go | 57 +++++++++++++++++--
pkg/apis/crunchydata.com/v1/cluster.go | 10 ++++
pkg/apiservermsgs/backrestmsgs.go | 15 +++--
pkg/apiservermsgs/clustermsgs.go | 10 +++-
pkg/apiservermsgs/pgdumpmsgs.go | 13 +++--
24 files changed, 270 insertions(+), 74 deletions(-)
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index fd2f1241ab..9d0465da22 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -260,6 +260,7 @@ func createCluster(args []string, ns string, createClusterCmd *cobra.Command) {
r.Name = args[0]
r.Namespace = ns
r.ReplicaCount = ClusterReplicaCount
+ r.NodeAffinityType = getNodeAffinityType(NodeLabel, NodeAffinityType)
r.NodeLabel = NodeLabel
r.PasswordLength = PasswordLength
r.PasswordSuperuser = PasswordSuperuser
diff --git a/cmd/pgo/cmd/common.go b/cmd/pgo/cmd/common.go
index f1e8f84e70..6d618e44bc 100644
--- a/cmd/pgo/cmd/common.go
+++ b/cmd/pgo/cmd/common.go
@@ -18,7 +18,10 @@ package cmd
import (
"encoding/json"
"fmt"
+ "os"
"reflect"
+
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
)
// unitType is used to group together the unit types
@@ -98,6 +101,36 @@ func getMaxLength(results []interface{}, title, fieldName string) int {
return maxLength + 1
}
+// getNodeAffinityType takes a string value of "NodeAffinityType" and converts
+// it to the proper enumeration
+func getNodeAffinityType(nodeLabel, nodeAffinityType string) crv1.NodeAffinityType {
+ // if nodeAffinityType is not set, just exit with the default
+ if nodeAffinityType == "" {
+ return crv1.NodeAffinityTypePreferred
+ }
+
+ // force an exit if nodeAffinityType is set but nodeLabel is not
+ if nodeLabel == "" && nodeAffinityType != "" {
+ fmt.Println("error: --node-affinity-type set, but --node-label not set")
+ os.Exit(1)
+ }
+
+ // and away we go
+ switch nodeAffinityType {
+ default:
+ fmt.Printf("error: invalid node affinity type %q. choices are: preferred required\n", nodeAffinityType)
+ os.Exit(1)
+ case "preferred", "prefer":
+ return crv1.NodeAffinityTypePreferred
+ case "required", "require":
+ return crv1.NodeAffinityTypeRequired
+ }
+
+ // one should never get to here because of the exit, but we need to compile
+ // the program. Yes, we really shouldn't be exiting.
+ return crv1.NodeAffinityTypePreferred
+}
+
// getSizeAndUnit determines the best size to return based on the best unit
// where unit is KB, MB, GB, etc...
func getSizeAndUnit(size int64) (float64, unitType) {
diff --git a/cmd/pgo/cmd/create.go b/cmd/pgo/cmd/create.go
index 6b798e4a23..19d1612f1a 100644
--- a/cmd/pgo/cmd/create.go
+++ b/cmd/pgo/cmd/create.go
@@ -109,6 +109,10 @@ var BackrestS3CASecretName string
// BackrestRepoPath allows the pgBackRest repo path to be defined instead of using the default
var BackrestRepoPath string
+// NodeAffinityType needs to be used with "NodeLabel" and can be one of
+// "preferred" or "required" -- gets mapped to an enumeration
+var NodeAffinityType string
+
// Standby determines whether or not the cluster should be created as a standby cluster
var Standby bool
@@ -395,6 +399,8 @@ func init() {
"the Crunchy Postgres Exporter sidecar container. Defaults to server value (24Mi).")
createClusterCmd.Flags().StringVar(&ExporterMemoryLimit, "exporter-memory-limit", "", "Set the amount of memory to limit for "+
"the Crunchy Postgres Exporter sidecar container.")
+ createClusterCmd.Flags().StringVar(&NodeAffinityType, "node-affinity-type", "", "Sets the type of node affinity to use. "+
+ "Can be either preferred (default) or required. Must be used with --node-label")
createClusterCmd.Flags().StringVarP(&NodeLabel, "node-label", "", "", "The node label (key=value) to use in placing the primary database. If not set, any node is used.")
createClusterCmd.Flags().StringVarP(&Password, "password", "", "", "The password to use for standard user account created during cluster initialization.")
createClusterCmd.Flags().IntVarP(&PasswordLength, "password-length", "", 0, "If no password is supplied, sets the length of the automatically generated password. Defaults to the value set on the server.")
diff --git a/cmd/pgo/cmd/restore.go b/cmd/pgo/cmd/restore.go
index aa3b4bb725..bb8931f5b7 100644
--- a/cmd/pgo/cmd/restore.go
+++ b/cmd/pgo/cmd/restore.go
@@ -64,13 +64,15 @@ func init() {
restoreCmd.Flags().StringVarP(&BackupOpts, "backup-opts", "", "", "The restore options for pgbackrest or pgdump.")
restoreCmd.Flags().StringVarP(&PITRTarget, "pitr-target", "", "", "The PITR target, being a PostgreSQL timestamp such as '2018-08-13 11:25:42.582117-04'.")
+ restoreCmd.Flags().StringVar(&NodeAffinityType, "node-affinity-type", "", "Sets the type of node affinity to use. "+
+ "Can be either preferred (default) or required. Must be used with --node-label")
restoreCmd.Flags().StringVarP(&NodeLabel, "node-label", "", "", "The node label (key=value) to use when scheduling "+
"the restore job, and in the case of a pgBackRest restore, also the new (i.e. restored) primary deployment. If not set, any node is used.")
restoreCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.")
restoreCmd.Flags().StringVarP(&BackupPVC, "backup-pvc", "", "", "The PVC containing the pgdump to restore from.")
restoreCmd.Flags().StringVarP(&PGDumpDB, "pgdump-database", "d", "postgres", "The name of the database pgdump will restore.")
restoreCmd.Flags().StringVarP(&BackupType, "backup-type", "", "", "The type of backup to restore from, default is pgbackrest. Valid types are pgbackrest or pgdump.")
- restoreCmd.Flags().StringVarP(&BackrestStorageType, "pgbackrest-storage-type", "", "", "The type of storage to use for a pgBackRest restore. Either \"local\", \"s3\". (default \"local\")")
+ restoreCmd.Flags().StringVarP(&BackrestStorageType, "pgbackrest-storage-type", "", "", "The type of storage to use for a pgBackRest restore. Either \"posix\", \"s3\". (default \"posix\")")
}
// restore ....
@@ -90,6 +92,7 @@ func restore(args []string, ns string) {
request.PITRTarget = PITRTarget
request.FromPVC = BackupPVC // use PVC specified on command line for pgrestore
request.PGDumpDB = PGDumpDB
+ request.NodeAffinityType = getNodeAffinityType(NodeLabel, NodeAffinityType)
request.NodeLabel = NodeLabel
response, err = api.RestoreDump(httpclient, &SessionCredentials, request)
@@ -101,6 +104,7 @@ func restore(args []string, ns string) {
request.RestoreOpts = BackupOpts
request.PITRTarget = PITRTarget
request.NodeLabel = NodeLabel
+ request.NodeAffinityType = getNodeAffinityType(NodeLabel, NodeAffinityType)
request.BackrestStorageType = BackrestStorageType
response, err = api.Restore(httpclient, &SessionCredentials, request)
diff --git a/cmd/pgo/cmd/scale.go b/cmd/pgo/cmd/scale.go
index 5303c6dd6a..dd9d8a8a95 100644
--- a/cmd/pgo/cmd/scale.go
+++ b/cmd/pgo/cmd/scale.go
@@ -59,6 +59,8 @@ func init() {
scaleCmd.Flags().StringVarP(&ServiceType, "service-type", "", "", "The service type to use in the replica Service. If not set, the default in pgo.yaml will be used.")
scaleCmd.Flags().StringVarP(&CCPImageTag, "ccp-image-tag", "", "", "The CCPImageTag to use for cluster creation. If specified, overrides the .pgo.yaml setting.")
+ scaleCmd.Flags().StringVar(&NodeAffinityType, "node-affinity-type", "", "Sets the type of node affinity to use. "+
+ "Can be either preferred (default) or required. Must be used with --node-label")
scaleCmd.Flags().StringVarP(&NodeLabel, "node-label", "", "", "The node label (key) to use in placing the replica database. If not set, any node is used.")
scaleCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.")
scaleCmd.Flags().IntVarP(&ReplicaCount, "replica-count", "", 1, "The replica count to apply to the clusters.")
@@ -72,14 +74,15 @@ func init() {
func scaleCluster(args []string, ns string) {
for _, arg := range args {
request := msgs.ClusterScaleRequest{
- CCPImageTag: CCPImageTag,
- Name: arg,
- Namespace: ns,
- NodeLabel: NodeLabel,
- ReplicaCount: ReplicaCount,
- ServiceType: v1.ServiceType(ServiceType),
- StorageConfig: StorageConfig,
- Tolerations: getClusterTolerations(Tolerations),
+ CCPImageTag: CCPImageTag,
+ Name: arg,
+ Namespace: ns,
+ NodeAffinityType: getNodeAffinityType(NodeLabel, NodeAffinityType),
+ NodeLabel: NodeLabel,
+ ReplicaCount: ReplicaCount,
+ ServiceType: v1.ServiceType(ServiceType),
+ StorageConfig: StorageConfig,
+ Tolerations: getClusterTolerations(Tolerations),
}
response, err := api.ScaleCluster(httpclient, &SessionCredentials, request)
diff --git a/docs/content/architecture/high-availability/_index.md b/docs/content/architecture/high-availability/_index.md
index 3a5d79c806..f267d306f0 100644
--- a/docs/content/architecture/high-availability/_index.md
+++ b/docs/content/architecture/high-availability/_index.md
@@ -272,10 +272,33 @@ when creating a PostgreSQL cluster;
pgo create cluster thatcluster --node-label=region=us-east-1
```
-The Node Affinity only uses the `preferred` scheduling strategy (similar to what
-is described in the Pod Anti-Affinity section above), so if a Pod cannot be
-scheduled to a particular Node matching the label, it will be scheduled to a
-different Node.
+By default, node affinity uses the `preferred` scheduling strategy (similar to
+what is described in the [Pod Anti-Affinity](#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity)
+section above), so if a Pod cannot be scheduled to a particular Node matching
+the label, it will be scheduled to a different Node.
+
+The PostgreSQL Operator supports two different types of node affinity:
+
+- `preferred`
+- `required`
+
+which can be selected with the `--node-affinity-type` flag, e.g.:
+
+```
+pgo create cluster hippo \
+ --node-label=region=us-east-1 --node-affinity-type=required
+```
+
+When creating a cluster, the node affinity rules will be applied to the primary
+and any other PostgreSQL instances that are added. If you would like to specify
+a node affinity rule for a specific instance, you can do so with the
+[`pgo scale`]({{< relref "pgo-client/reference/pgo_scale.md">}}) command and the
+`--node-label` and `--node-affinity-type` flags, e.g.:
+
+```
+pgo scale hippo \
+ --node-label=region=us-south-1 --node-affinity-type=required
+```
## Tolerations
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index abc7706284..84589f100e 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -543,8 +543,6 @@ spec:
supplementalgroups: ""
tolerations: []
userlabels:
- NodeLabelKey: ""
- NodeLabelValue: ""
pgo-version: {{< param operatorVersion >}}
EOF
@@ -748,6 +746,7 @@ make changes, as described below.
| Limits | `create`, `update` | Specify the container resource limits that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
| Name | `create` | The name of the PostgreSQL instance that is the primary. On creation, this should be set to be the same as `ClusterName`. |
| Namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
+| NodeAffinity | `create` | Sets the [node affinity rules]({{< relref "/architecture/high-availability/_index.md#node-affinity" >}}) for the PostgreSQL cluster and associated PostgreSQL instances. Can be overridden on a per-instance (`pgreplicas.crunchydata.com`) basis. Please see the `Node Affinity Specification` section below. |
| PGBadger | `create`,`update` | If `true`, deploys the `crunchy-pgbadger` sidecar for query analysis. |
| PGBadgerPort | `create` | If the `PGBadger` label is set, then this specifies the port that the pgBadger sidecar runs on (e.g. `10000`) |
| PGDataSource | `create` | Used to indicate if a PostgreSQL cluster should bootstrap its data from a pgBackRest repository. This uses the PostgreSQL Data Source Specification, described below. |
@@ -762,7 +761,7 @@ make changes, as described below.
| ServiceType | `create`, `update` | Sets the Kubernetes [Service](https://kubernetes.io/docs/concepts/services-networking/service/) type to use for the cluster. If not set, defaults to `ClusterIP`. |
| SyncReplication | `create` | If set to `true`, specifies the PostgreSQL cluster to use [synchronous replication]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity#synchronous-replication-guarding-against-transactions-loss" >}}).|
| User | `create` | The name of the PostgreSQL user that is created when the PostgreSQL cluster is first created. |
-| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" for things that really should be modeled in the CRD. These values do get copied to the actually CR labels. If you want to set up metrics collection or pgBadger, you would specify `"crunchy-postgres-exporter": "true"` and `"crunchy-pgbadger": "true"` here, respectively. However, this structure does need to be set, so just follow whatever is in the example. |
+| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" as well as a way to add custom labels to clusters. This attribute is deprecated and will be removed in a future release. |
| TablespaceMounts | `create`,`update` | Lists any tablespaces that are attached to the PostgreSQL cluster. Tablespaces can be added at a later time by updating the `TablespaceMounts` entry, but they cannot be removed. Stores a map of information, with the key being the name of the tablespace, and the value being a Storage Specification, defined below. |
| TLS | `create` | Defines the attributes for enabling TLS for a PostgreSQL cluster. See TLS Specification below. |
| TLSOnly | `create` | If set to true, requires client connections to use only TLS to connect to the PostgreSQL database. |
@@ -787,6 +786,20 @@ attribute and how it works.
| StorageType | `create` | Set to `create` if storage is provisioned (e.g. using `hostpath`). Set to `dynamic` if using a dynamic storage provisioner, e.g. via a `StorageClass`. |
| SupplementalGroups | `create` | If provided, a comma-separated list of group IDs to use in case it is needed to interface with a particular storage system. Typically used with NFS or hostpath storage. |
+##### Node Affinity Specification
+
+Sets the [node affinity]({{< relref "/architecture/high-availability/_index.md#node-affinity" >}})
+for the PostgreSQL cluster and associated deployments. Follows the [Kubernetes standard format for setting node affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), including `preferred` and `required` node affinity.
+
+To set node affinity for a PostgreSQL cluster, you will need to modify the `default` attribute in the node affinity specification. As mentioned above, the values that `default` accepts match what Kubernetes uses for [node affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity).
+
+For a detailed explanation of how node affinity works, please see the [high-availability]({{< relref "/architecture/high-availability/_index.md#node-affinity" >}})
+documentation.
+
+| Attribute | Action | Description |
+|-----------|--------|-------------|
+| default | `create` | The default node affinity to use for all PostgreSQL instances managed in a given PostgreSQL cluster. Can be overridden on a per-instance basis with the `pgreplicas.crunchydata.com` custom resource. |
+
##### Pod Anti-Affinity Specification
Sets the [pod anti-affinity]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity" >}})
@@ -878,6 +891,7 @@ cluster. All of the attributes only affect the replica when it is created.
| ClusterName | `create` | The name of the PostgreSQL cluster, e.g. `hippo`. This is used to group PostgreSQL instances (primary, replicas) together. |
| Name | `create` | The name of this PostgreSQL replica. It should be unique within a `ClusterName`. |
| Namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
+| NodeAffinity | `create` | Sets the [node affinity rules]({{< relref "/architecture/high-availability/_index.md#node-affinity" >}}) for this PostgreSQL instance. Follows the [Kubernetes standard format for setting node affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity). |
| ReplicaStorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section in the `pgclusters.crunchydata.com` description. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to that of `PrimaryStorage`. |
-| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" for things that really should be modeled in the CRD. These values do get copied to the actually CR labels. If you want to set up metrics collection, you would specify `"crunchy-postgres-exporter": "true"` here. This also allows for node selector pinning using `NodeLabelKey` and `NodeLabelValue`. However, this structure does need to be set, so just follow whatever is in the example. |
+| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" as well as a way to add custom labels to clusters. This attribute is deprecated and will be removed in a future release. |
| Tolerations | `create`,`update` | Any array of Kubernetes [Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/). Please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for how to set this field. |
diff --git a/docs/content/pgo-client/reference/pgo_create_cluster.md b/docs/content/pgo-client/reference/pgo_create_cluster.md
index 36000d4253..bc5dd28f99 100644
--- a/docs/content/pgo-client/reference/pgo_create_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_create_cluster.md
@@ -21,7 +21,7 @@ pgo create cluster [flags]
--annotation strings Add an Annotation to all of the managed deployments (PostgreSQL, pgBackRest, pgBouncer)
The format to add an annotation is "name=value"
The format to remove an annotation is "name-"
-
+
For example, to add two annotations: "--annotation=hippo=awesome,elephant=cool"
--annotation-pgbackrest strings Add an Annotation specifically to pgBackRest deployments
The format to add an annotation is "name=value"
@@ -49,6 +49,7 @@ pgo create cluster [flags]
--memory string Set the amount of RAM to request, e.g. 1GiB. Overrides the default server value.
--memory-limit string Set the amount of RAM to limit, e.g. 1GiB.
--metrics Adds the crunchy-postgres-exporter container to the database pod.
+ --node-affinity-type string Sets the type of node affinity to use. Can be either preferred (default) or required. Must be used with --node-label
--node-label string The node label (key=value) to use in placing the primary database. If not set, any node is used.
--password string The password to use for standard user account created during cluster initialization.
--password-length int If no password is supplied, sets the length of the automatically generated password. Defaults to the value set on the server.
@@ -100,13 +101,13 @@ pgo create cluster [flags]
--storage-config string The name of a Storage config in pgo.yaml to use for the cluster storage.
--sync-replication Enables synchronous replication for the cluster.
--tablespace strings Create a PostgreSQL tablespace on the cluster, e.g. "name=ts1:storageconfig=nfsstorage". The format is a key/value map that is delimited by "=" and separated by ":". The following parameters are available:
-
+
- name (required): the name of the PostgreSQL tablespace
- storageconfig (required): the storage configuration to use, as specified in the list available in the "pgo-config" ConfigMap (aka "pgo.yaml")
- pvcsize: the size of the PVC capacity, which overrides the value set in the specified storageconfig. Follows the Kubernetes quantity format.
-
+
For example, to create a tablespace with the NFS storage configuration with a PVC of size 10GiB:
-
+
--tablespace=name=ts1:storageconfig=nfsstorage:pvcsize=10Gi
--tls-only If true, forces all PostgreSQL connections to be over TLS. Must also set "server-tls-secret" and "server-ca-secret"
--toleration strings Set Pod tolerations for each PostgreSQL instance in a cluster.
@@ -134,4 +135,4 @@ pgo create cluster [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 24-Dec-2020
+###### Auto generated by spf13/cobra on 31-Dec-2020
diff --git a/docs/content/pgo-client/reference/pgo_restore.md b/docs/content/pgo-client/reference/pgo_restore.md
index e7e377c914..78d3584e7e 100644
--- a/docs/content/pgo-client/reference/pgo_restore.md
+++ b/docs/content/pgo-client/reference/pgo_restore.md
@@ -9,7 +9,7 @@ Perform a restore from previous backup
RESTORE performs a restore to a new PostgreSQL cluster. This includes stopping the database and recreating a new primary with the restored data. Valid backup types to restore from are pgbackrest and pgdump. For example:
- pgo restore mycluster
+ pgo restore mycluster
```
pgo restore [flags]
@@ -23,6 +23,7 @@ pgo restore [flags]
--backup-type string The type of backup to restore from, default is pgbackrest. Valid types are pgbackrest or pgdump.
-h, --help help for restore
--no-prompt No command line confirmation.
+ --node-affinity-type string Sets the type of node affinity to use. Can be either preferred (default) or required. Must be used with --node-label
--node-label string The node label (key=value) to use when scheduling the restore job, and in the case of a pgBackRest restore, also the new (i.e. restored) primary deployment. If not set, any node is used.
--pgbackrest-storage-type string The type of storage to use for a pgBackRest restore. Either "posix", "s3". (default "posix")
-d, --pgdump-database string The name of the database pgdump will restore. (default "postgres")
@@ -32,7 +33,7 @@ pgo restore [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -46,4 +47,4 @@ pgo restore [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 31-Dec-2020
diff --git a/docs/content/pgo-client/reference/pgo_update_cluster.md b/docs/content/pgo-client/reference/pgo_update_cluster.md
index 4622833460..08b1e5ae4c 100644
--- a/docs/content/pgo-client/reference/pgo_update_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_update_cluster.md
@@ -25,7 +25,7 @@ pgo update cluster [flags]
--annotation strings Add an Annotation to all of the managed deployments (PostgreSQL, pgBackRest, pgBouncer)
The format to add an annotation is "name=value"
The format to remove an annotation is "name-"
-
+
For example, to add two annotations: "--annotation=hippo=awesome,elephant=cool"
--annotation-pgbackrest strings Add an Annotation specifically to pgBackRest deployments
The format to add an annotation is "name=value"
@@ -62,13 +62,13 @@ pgo update cluster [flags]
--shutdown Shutdown the database cluster if it is currently running.
--startup Restart the database cluster if it is currently shutdown.
--tablespace strings Add a PostgreSQL tablespace on the cluster, e.g. "name=ts1:storageconfig=nfsstorage". The format is a key/value map that is delimited by "=" and separated by ":". The following parameters are available:
-
+
- name (required): the name of the PostgreSQL tablespace
- storageconfig (required): the storage configuration to use, as specified in the list available in the "pgo-config" ConfigMap (aka "pgo.yaml")
- pvcsize: the size of the PVC capacity, which overrides the value set in the specified storageconfig. Follows the Kubernetes quantity format.
-
+
For example, to create a tablespace with the NFS storage configuration with a PVC of size 10GiB:
-
+
--tablespace=name=ts1:storageconfig=nfsstorage:pvcsize=10Gi
```
@@ -89,4 +89,4 @@ pgo update cluster [flags]
* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster
-###### Auto generated by spf13/cobra on 28-Dec-2020
+###### Auto generated by spf13/cobra on 31-Dec-2020
diff --git a/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
index ed7d6e0b4b..253995eb69 100644
--- a/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
+++ b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
@@ -20,6 +20,4 @@ spec:
storagetype: dynamic
supplementalgroups: ""
userlabels:
- NodeLabelKey: ""
- NodeLabelValue: ""
pgo-version: 4.5.1
diff --git a/internal/apiserver/backrestservice/backrestimpl.go b/internal/apiserver/backrestservice/backrestimpl.go
index e743925d95..c9ae9fb943 100644
--- a/internal/apiserver/backrestservice/backrestimpl.go
+++ b/internal/apiserver/backrestservice/backrestimpl.go
@@ -626,6 +626,12 @@ func getRestoreParams(cluster *crv1.Pgcluster, request *msgs.RestoreRequest) (*c
spec.Parameters[config.LABEL_NODE_LABEL_KEY] = parts[0]
spec.Parameters[config.LABEL_NODE_LABEL_VALUE] = parts[1]
+ // determine if any special node affinity type must be set
+ spec.Parameters[config.LABEL_NODE_AFFINITY_TYPE] = "preferred"
+ if request.NodeAffinityType == crv1.NodeAffinityTypeRequired {
+ spec.Parameters[config.LABEL_NODE_AFFINITY_TYPE] = "required"
+ }
+
log.Debug("Restore node labels used from user entered flag")
}
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 00b621c680..784426d834 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -1240,7 +1240,7 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
if request.NodeLabel != "" {
nodeLabel := strings.Split(request.NodeLabel, "=")
spec.NodeAffinity = crv1.NodeAffinitySpec{
- Default: util.GenerateNodeAffinity(nodeLabel[0], []string{nodeLabel[1]}),
+ Default: util.GenerateNodeAffinity(request.NodeAffinityType, nodeLabel[0], []string{nodeLabel[1]}),
}
log.Debugf("using node label %s", request.NodeLabel)
diff --git a/internal/apiserver/clusterservice/scaleimpl.go b/internal/apiserver/clusterservice/scaleimpl.go
index b4dc9f2dba..36739d1e44 100644
--- a/internal/apiserver/clusterservice/scaleimpl.go
+++ b/internal/apiserver/clusterservice/scaleimpl.go
@@ -107,7 +107,7 @@ func ScaleCluster(request msgs.ClusterScaleRequest, pgouser string) msgs.Cluster
}
nodeLabel := strings.Split(request.NodeLabel, "=")
- spec.NodeAffinity = util.GenerateNodeAffinity(nodeLabel[0], []string{nodeLabel[1]})
+ spec.NodeAffinity = util.GenerateNodeAffinity(request.NodeAffinityType, nodeLabel[0], []string{nodeLabel[1]})
log.Debugf("using node label %s", request.NodeLabel)
}
diff --git a/internal/apiserver/pgdumpservice/pgdumpimpl.go b/internal/apiserver/pgdumpservice/pgdumpimpl.go
index 5a93045493..3e97ede60a 100644
--- a/internal/apiserver/pgdumpservice/pgdumpimpl.go
+++ b/internal/apiserver/pgdumpservice/pgdumpimpl.go
@@ -456,6 +456,12 @@ func buildPgTaskForRestore(taskName string, action string, request *msgs.PgResto
spec.Parameters[config.LABEL_NODE_LABEL_KEY] = parts[0]
spec.Parameters[config.LABEL_NODE_LABEL_VALUE] = parts[1]
+ // determine if any special node affinity type must be set
+ spec.Parameters[config.LABEL_NODE_AFFINITY_TYPE] = "preferred"
+ if request.NodeAffinityType == crv1.NodeAffinityTypeRequired {
+ spec.Parameters[config.LABEL_NODE_AFFINITY_TYPE] = "required"
+ }
+
log.Debug("Restore node labels used from user entered flag")
}
diff --git a/internal/config/labels.go b/internal/config/labels.go
index 1d0f9b7c3f..5dce3957f0 100644
--- a/internal/config/labels.go
+++ b/internal/config/labels.go
@@ -38,20 +38,21 @@ const (
)
const (
- LABEL_PGPOLICY = "pgpolicy"
- LABEL_INGEST = "ingest"
- LABEL_PGREMOVE = "pgremove"
- LABEL_PVCNAME = "pvcname"
- LABEL_EXPORTER = "crunchy-postgres-exporter"
- LABEL_ARCHIVE = "archive"
- LABEL_ARCHIVE_TIMEOUT = "archive-timeout"
- LABEL_NODE_LABEL_KEY = "NodeLabelKey"
- LABEL_NODE_LABEL_VALUE = "NodeLabelValue"
- LABEL_REPLICA_NAME = "replica-name"
- LABEL_CCP_IMAGE_TAG_KEY = "ccp-image-tag"
- LABEL_CCP_IMAGE_KEY = "ccp-image"
- LABEL_IMAGE_PREFIX = "image-prefix"
- LABEL_POD_ANTI_AFFINITY = "pg-pod-anti-affinity"
+ LABEL_PGPOLICY = "pgpolicy"
+ LABEL_INGEST = "ingest"
+ LABEL_PGREMOVE = "pgremove"
+ LABEL_PVCNAME = "pvcname"
+ LABEL_EXPORTER = "crunchy-postgres-exporter"
+ LABEL_ARCHIVE = "archive"
+ LABEL_ARCHIVE_TIMEOUT = "archive-timeout"
+ LABEL_NODE_AFFINITY_TYPE = "node-affinity-type"
+ LABEL_NODE_LABEL_KEY = "NodeLabelKey"
+ LABEL_NODE_LABEL_VALUE = "NodeLabelValue"
+ LABEL_REPLICA_NAME = "replica-name"
+ LABEL_CCP_IMAGE_TAG_KEY = "ccp-image-tag"
+ LABEL_CCP_IMAGE_KEY = "ccp-image"
+ LABEL_IMAGE_PREFIX = "image-prefix"
+ LABEL_POD_ANTI_AFFINITY = "pg-pod-anti-affinity"
)
const (
diff --git a/internal/operator/backrest/restore.go b/internal/operator/backrest/restore.go
index 4216a31197..a8395b2484 100644
--- a/internal/operator/backrest/restore.go
+++ b/internal/operator/backrest/restore.go
@@ -94,8 +94,14 @@ func UpdatePGClusterSpecForRestore(clientset kubeapi.Interface, cluster *crv1.Pg
// set the proper node affinity for the restore job
if task.Spec.Parameters[config.LABEL_NODE_LABEL_KEY] != "" && task.Spec.Parameters[config.LABEL_NODE_LABEL_VALUE] != "" {
+ affinityType := crv1.NodeAffinityTypePreferred
+ if task.Spec.Parameters[config.LABEL_NODE_AFFINITY_TYPE] == "required" {
+ affinityType = crv1.NodeAffinityTypeRequired
+ }
+
cluster.Spec.NodeAffinity = crv1.NodeAffinitySpec{
Default: util.GenerateNodeAffinity(
+ affinityType,
task.Spec.Parameters[config.LABEL_NODE_LABEL_KEY],
[]string{task.Spec.Parameters[config.LABEL_NODE_LABEL_VALUE]},
),
diff --git a/internal/operator/pgdump/restore.go b/internal/operator/pgdump/restore.go
index 550b64390b..38b118c35b 100644
--- a/internal/operator/pgdump/restore.go
+++ b/internal/operator/pgdump/restore.go
@@ -81,7 +81,12 @@ func Restore(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
var nodeAffinity *v1.NodeAffinity
if task.Spec.Parameters["NodeLabelKey"] != "" && task.Spec.Parameters["NodeLabelValue"] != "" {
- nodeAffinity = util.GenerateNodeAffinity(
+ affinityType := crv1.NodeAffinityTypePreferred
+ if task.Spec.Parameters[config.LABEL_NODE_AFFINITY_TYPE] == "required" {
+ affinityType = crv1.NodeAffinityTypeRequired
+ }
+
+ nodeAffinity = util.GenerateNodeAffinity(affinityType,
task.Spec.Parameters["NodeLabelKey"], []string{task.Spec.Parameters["NodeLabelValue"]})
}
diff --git a/internal/util/cluster.go b/internal/util/cluster.go
index 43c52fd39b..ef7796f439 100644
--- a/internal/util/cluster.go
+++ b/internal/util/cluster.go
@@ -234,7 +234,8 @@ func CreateBackrestRepoSecrets(clientset kubernetes.Interface,
// GenerateNodeAffinity creates a Kubernetes node affinity object suitable for
-// storage on our custom resource. For now, it only supports preferred affinity,
-// though can be expanded to support more complex rules
+// storage on our custom resource. It supports both preferred and required
+// node affinity rules
-func GenerateNodeAffinity(key string, values []string) *v1.NodeAffinity {
+func GenerateNodeAffinity(affinityType crv1.NodeAffinityType, key string, values []string) *v1.NodeAffinity {
+ nodeAffinity := &v1.NodeAffinity{}
// generate the selector requirement, which at this point is just the
// "node label is in" requirement
requirement := v1.NodeSelectorRequirement{
@@ -242,18 +243,30 @@ func GenerateNodeAffinity(key string, values []string) *v1.NodeAffinity {
Values: values,
Operator: v1.NodeSelectorOpIn,
}
- // build the preferred affinity term. Right now this is the only one supported
- term := v1.PreferredSchedulingTerm{
- Weight: crv1.NodeAffinityDefaultWeight,
- Preference: v1.NodeSelectorTerm{
+
+ // build out the node affinity based on whether or not this is required or
+ // preferred (the default)
+ if affinityType == crv1.NodeAffinityTypeRequired {
+ // build the required affinity term.
+ nodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution = &v1.NodeSelector{
+ NodeSelectorTerms: make([]v1.NodeSelectorTerm, 1),
+ }
+ nodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0] = v1.NodeSelectorTerm{
MatchExpressions: []v1.NodeSelectorRequirement{requirement},
- },
+ }
+ } else {
+ // build the preferred affinity term.
+ term := v1.PreferredSchedulingTerm{
+ Weight: crv1.NodeAffinityDefaultWeight,
+ Preference: v1.NodeSelectorTerm{
+ MatchExpressions: []v1.NodeSelectorRequirement{requirement},
+ },
+ }
+ nodeAffinity.PreferredDuringSchedulingIgnoredDuringExecution = []v1.PreferredSchedulingTerm{term}
}
- // and here is our node affinity rule
- return &v1.NodeAffinity{
- PreferredDuringSchedulingIgnoredDuringExecution: []v1.PreferredSchedulingTerm{term},
- }
+ // return the node affinity rule
+ return nodeAffinity
}
// GeneratedPasswordValidUntilDays returns the value for the number of days that
diff --git a/internal/util/cluster_test.go b/internal/util/cluster_test.go
index 3d80c561f4..bf50277c8f 100644
--- a/internal/util/cluster_test.go
+++ b/internal/util/cluster_test.go
@@ -24,13 +24,18 @@ import (
)
func TestGenerateNodeAffinity(t *testing.T) {
- // presently this test is really strict. as we allow for more options, we will
- // need to add more tests.
- t.Run("valid", func(t *testing.T) {
+ // presently only one rule is allowed, so we are testing for that. future
+ // tests may need to expand upon that
+ t.Run("preferred", func(t *testing.T) {
+ affinityType := crv1.NodeAffinityTypePreferred
key := "foo"
values := []string{"bar", "baz"}
- affinity := GenerateNodeAffinity(key, values)
+ affinity := GenerateNodeAffinity(affinityType, key, values)
+
+ if affinity.RequiredDuringSchedulingIgnoredDuringExecution != nil {
+ t.Fatalf("expected required node affinity to not be set")
+ }
if len(affinity.PreferredDuringSchedulingIgnoredDuringExecution) == 0 {
t.Fatalf("expected preferred node affinity to be set")
@@ -64,4 +69,48 @@ func TestGenerateNodeAffinity(t *testing.T) {
t.Fatalf("values expected %v actual %v", values, rule.Values)
}
})
+
+ t.Run("required", func(t *testing.T) {
+ affinityType := crv1.NodeAffinityTypeRequired
+ key := "foo"
+ values := []string{"bar", "baz"}
+
+ affinity := GenerateNodeAffinity(affinityType, key, values)
+
+ if len(affinity.PreferredDuringSchedulingIgnoredDuringExecution) != 0 {
+ t.Fatalf("expected preferred node affinity to not be set")
+ }
+
+ if affinity.RequiredDuringSchedulingIgnoredDuringExecution == nil {
+ t.Fatalf("expected required node affinity to be set")
+ }
+
+ if len(affinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms) == 0 {
+ t.Fatalf("expected required node affinity to have at least one rule.")
+ } else if len(affinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms) > 1 {
+ t.Fatalf("expected required node affinity to have only one rule.")
+ }
+
+ term := affinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0]
+
+ if len(term.MatchExpressions) == 0 {
+ t.Fatalf("expected a match expression to be set")
+ } else if len(term.MatchExpressions) > 1 {
+ t.Fatalf("expected only one match expression to be set")
+ }
+
+ rule := term.MatchExpressions[0]
+
+ if rule.Operator != v1.NodeSelectorOpIn {
+ t.Fatalf("operator expected %s actual %s", v1.NodeSelectorOpIn, rule.Operator)
+ }
+
+ if rule.Key != key {
+ t.Fatalf("key expected %s actual %s", key, rule.Key)
+ }
+
+ if !reflect.DeepEqual(rule.Values, values) {
+ t.Fatalf("values expected %v actual %v", values, rule.Values)
+ }
+ })
}
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index 54c7c8fe4c..1eade8a99f 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -265,6 +265,16 @@ type NodeAffinitySpec struct {
Default *v1.NodeAffinity `json:"default"`
}
+// NodeAffinityType indicates the type of node affinity that the request seeks
+// to use. Given the custom resource uses the native Kubernetes types to set
+// node affinity, this is just for convenience for the API
+type NodeAffinityType int
+
+const (
+ NodeAffinityTypePreferred NodeAffinityType = iota
+ NodeAffinityTypeRequired
+)
+
// PodAntiAffinityDeployment distinguishes between the different types of
// Deployments that can leverage PodAntiAffinity
type PodAntiAffinityDeployment int
diff --git a/pkg/apiservermsgs/backrestmsgs.go b/pkg/apiservermsgs/backrestmsgs.go
index b1c4887fa9..29acd33a99 100644
--- a/pkg/apiservermsgs/backrestmsgs.go
+++ b/pkg/apiservermsgs/backrestmsgs.go
@@ -15,6 +15,10 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
+import (
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+)
+
// CreateBackrestBackupResponse ...
// swagger:model
type CreateBackrestBackupResponse struct {
@@ -145,10 +149,13 @@ type RestoreResponse struct {
// RestoreRequest ...
// swagger:model
type RestoreRequest struct {
- Namespace string
- FromCluster string
- RestoreOpts string
- PITRTarget string
+ Namespace string
+ FromCluster string
+ RestoreOpts string
+ PITRTarget string
+ // NodeAffinityType is only considered when "NodeLabel" is also set, and is
+ // either a value of "preferred" (default) or "required"
+ NodeAffinityType crv1.NodeAffinityType
NodeLabel string
BackrestStorageType string
}
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index 61255f6cc8..ee2cb8aff0 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -46,8 +46,11 @@ type ShowClusterRequest struct {
//
// swagger:model
type CreateClusterRequest struct {
- Name string `json:"Name"`
- Namespace string
+ Name string `json:"Name"`
+ Namespace string
+ // NodeAffinityType is only considered when "NodeLabel" is also set, and is
+ // either a value of "preferred" (default) or "required"
+ NodeAffinityType crv1.NodeAffinityType
NodeLabel string
PasswordLength int
PasswordSuperuser string
@@ -565,6 +568,9 @@ type ClusterScaleRequest struct {
Name string `json:"name"`
// Namespace is the namespace in which the queried cluster resides.
Namespace string `json:"namespace"`
+ // NodeAffinityType is only considered when "NodeLabel" is also set, and is
+ // either a value of "preferred" (default) or "required"
+ NodeAffinityType crv1.NodeAffinityType
// NodeLabel if provided is a node label to use.
NodeLabel string `json:"nodeLabel"`
// ReplicaCount is the number of replicas to add to the cluster. This is
diff --git a/pkg/apiservermsgs/pgdumpmsgs.go b/pkg/apiservermsgs/pgdumpmsgs.go
index e247fca304..3afb00955a 100644
--- a/pkg/apiservermsgs/pgdumpmsgs.go
+++ b/pkg/apiservermsgs/pgdumpmsgs.go
@@ -1,9 +1,5 @@
package apiservermsgs
-import (
- crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
-)
-
/*
Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
@@ -19,6 +15,10 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
+import (
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+)
+
// CreatepgDumpBackupResponse ...
// swagger:model
type CreatepgDumpBackupResponse struct {
@@ -61,7 +61,10 @@ type PgRestoreRequest struct {
PGDumpDB string
RestoreOpts string
PITRTarget string
- NodeLabel string
+ // NodeAffinityType is only considered when "NodeLabel" is also set, and is
+ // either a value of "preferred" (default) or "required"
+ NodeAffinityType crv1.NodeAffinityType
+ NodeLabel string
}
// NOTE: these are ported over from legacy functionality
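Several hunks in this patch (the backrest, pgdump, and restore code paths) repeat the same mapping: a `"node-affinity-type"` pgtask string parameter is read back into the `NodeAffinityType` enum, with anything other than `"required"` defaulting to preferred. A minimal standalone sketch of that mapping, reusing the enum added in `pkg/apis/crunchydata.com/v1/cluster.go` (the helper function name is an assumption for illustration; the Operator inlines this logic):

```go
package main

import "fmt"

// NodeAffinityType mirrors the enum added in pkg/apis/crunchydata.com/v1:
// an int type whose zero value is the preferred (default) behavior.
type NodeAffinityType int

const (
	NodeAffinityTypePreferred NodeAffinityType = iota
	NodeAffinityTypeRequired
)

// affinityTypeFromParameter reproduces the pattern used when reading the
// "node-affinity-type" pgtask parameter: only the exact string "required"
// selects required affinity; any other value falls back to preferred.
func affinityTypeFromParameter(value string) NodeAffinityType {
	if value == "required" {
		return NodeAffinityTypeRequired
	}
	return NodeAffinityTypePreferred
}

func main() {
	fmt.Println(affinityTypeFromParameter("required") == NodeAffinityTypeRequired)  // true
	fmt.Println(affinityTypeFromParameter("") == NodeAffinityTypePreferred)         // true
}
```

Making preferred the zero value means an unset or unrecognized parameter degrades safely to the less restrictive scheduling behavior, matching the flag's documented default.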
From e6cbc1efff6089068f1ad3a62e45d20a9c0378e2 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Fri, 1 Jan 2021 14:01:04 -0500
Subject: [PATCH 100/276] Update build dependencies
This updates the build dependencies to the following versions:
- Kubernetes: 0.20.1
- controller-runtime: 0.6.4
---
go.mod | 10 ++++-----
go.sum | 65 ++++++++++++++++++++++++++++++++++++----------------------
2 files changed, 46 insertions(+), 29 deletions(-)
diff --git a/go.mod b/go.mod
index e85242cd76..c2142d2be5 100644
--- a/go.mod
+++ b/go.mod
@@ -18,11 +18,11 @@ require (
go.opentelemetry.io/otel v0.13.0
go.opentelemetry.io/otel/exporters/stdout v0.13.0
go.opentelemetry.io/otel/exporters/trace/jaeger v0.13.0
- golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9
+ golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208
- k8s.io/api v0.19.4
- k8s.io/apimachinery v0.19.4
- k8s.io/client-go v0.19.4
- sigs.k8s.io/controller-runtime v0.6.3
+ k8s.io/api v0.20.1
+ k8s.io/apimachinery v0.20.1
+ k8s.io/client-go v0.20.1
+ sigs.k8s.io/controller-runtime v0.6.4
sigs.k8s.io/yaml v1.2.0
)
diff --git a/go.sum b/go.sum
index 36a21fea4b..41050fa343 100644
--- a/go.sum
+++ b/go.sum
@@ -6,7 +6,6 @@ cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxK
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
-cloud.google.com/go v0.51.0/go.mod h1:hWtGJ6gnXH+KgDv+V0zFGDvpi07n3z8ZNj3T1RW0Gcw=
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
@@ -33,17 +32,22 @@ cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RX
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
+github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI=
-github.com/Azure/go-autorest/autorest v0.9.6/go.mod h1:/FALq9T/kS7b5J5qsQ+RSTUdAmGFqi0vUdVNNx8q630=
+github.com/Azure/go-autorest/autorest v0.11.1/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw=
github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0=
-github.com/Azure/go-autorest/autorest/adal v0.8.2/go.mod h1:ZjhuQClTqx435SRJ2iMlOxPYt3d2C/T/7TiQCVZSn3Q=
+github.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg=
+github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA=
-github.com/Azure/go-autorest/autorest/date v0.2.0/go.mod h1:vcORJHLJEh643/Ioh9+vPmf1Ij9AEBM5FuBIXLmIy0g=
+github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
-github.com/Azure/go-autorest/autorest/mocks v0.3.0/go.mod h1:a8FDP3DYzQ4RYfVAxAN3SVSiiO77gL2j2ronKKP0syM=
+github.com/Azure/go-autorest/autorest/mocks v0.4.0/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
+github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc=
+github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk=
+github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DataDog/sketches-go v0.0.1 h1:RtG+76WKgZuz6FIaGsjoPePmadDBkuD/KC6+ZWu78b8=
@@ -118,6 +122,7 @@ github.com/fatih/color v1.9.0 h1:8xPHl4/q1VyqGIPif1F+1V3Y3lSmrq01EabUW3CoW5s=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/felixge/httpsnoop v1.0.1 h1:lvB5Jl89CsZtGIWuTcDM1E/vkVs49/Ml7JJe07l8SPQ=
github.com/felixge/httpsnoop v1.0.1/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
+github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
@@ -213,6 +218,8 @@ github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvq
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
+github.com/golang/protobuf v1.4.3 h1:JjCZWpVbqXDqFVmTfYWEVTMIYrL/NPdPSCHPJ0T/raM=
+github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
@@ -243,6 +250,8 @@ github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm4
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.1.2 h1:EVhdT+1Kseyi1/pUmXKaFxYsDNy9RQYkMWRH68J/W7Y=
+github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
@@ -449,10 +458,11 @@ golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190617133340-57b3e21c3d56/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
-golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 h1:psW17arqaxU48Z5kZ0CQnkZWQJsqcURM6tKiBApRjXI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0 h1:hb9wdF1z5waM+dSIICn1l0DkLVDT3hqhhQsDNUmHPRE=
+golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -523,6 +533,8 @@ golang.org/x/net v0.0.0-20200707034311-ab3426394381 h1:VXak5I6aEWmAXeQjA+QSZzlgN
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202 h1:VvcQYSHwXgi7W+TpUR6A9g6Up98WAHf3f/ulnJ62IyA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20201110031124-69a78807bb2b h1:uwuIcX0g4Yl1NC5XAz37xsr2lTtcqevgzYNVt49waME=
+golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -584,11 +596,12 @@ golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4 h1:5/PjkGUjvEU5Gl6BxmvKRPpqo2uNMv4rcHBMwzk/st8=
-golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f h1:Fqb3ao1hUmOR3GkUOg/Y+BadLwykBIzs5q8Ez2SbHyc=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201112073958-5cba982894dd h1:5CtCZbICpIOFdgO940moixOPjc0178IU44m4EjOO5IY=
+golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -596,11 +609,15 @@ golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.4 h1:0YWbFKbhXG/wIiuHDSKpS0Iy7FSA+u45VtBMfQcFTTc=
+golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0 h1:/5xXl8Y5W96D+TtHSlonuFqGHIWVuyCkGJLwGh9JJFs=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e h1:EHBhcS0mlXEAVwNyO2dLfjToGsyY4j24pTs2ScHnX7s=
+golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181011042414-1f849cf54d09/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -773,17 +790,17 @@ honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.18.6/go.mod h1:eeyxr+cwCjMdLAmr2W3RyDI0VvTawSg/3RFFBEnmZGI=
-k8s.io/api v0.19.4 h1:I+1I4cgJYuCDgiLNjKx7SLmIbwgj9w7N7Zr5vSIdwpo=
-k8s.io/api v0.19.4/go.mod h1:SbtJ2aHCItirzdJ36YslycFNzWADYH3tgOhvBEFtZAk=
+k8s.io/api v0.20.1 h1:ud1c3W3YNzGd6ABJlbFfKXBKXO+1KdGfcgGGNgFR03E=
+k8s.io/api v0.20.1/go.mod h1:KqwcCVogGxQY3nBlRpwt+wpAMF/KjaCc7RpywacvqUo=
k8s.io/apiextensions-apiserver v0.18.6 h1:vDlk7cyFsDyfwn2rNAO2DbmUbvXy5yT5GE3rrqOzaMo=
k8s.io/apiextensions-apiserver v0.18.6/go.mod h1:lv89S7fUysXjLZO7ke783xOwVTm6lKizADfvUM/SS/M=
k8s.io/apimachinery v0.18.6/go.mod h1:OaXp26zu/5J7p0f92ASynJa1pZo06YlV9fG7BoWbCko=
-k8s.io/apimachinery v0.19.4 h1:+ZoddM7nbzrDCp0T3SWnyxqf8cbWPT2fkZImoyvHUG0=
-k8s.io/apimachinery v0.19.4/go.mod h1:DnPGDnARWFvYa3pMHgSxtbZb7gpzzAZ1pTfaUNDVlmA=
+k8s.io/apimachinery v0.20.1 h1:LAhz8pKbgR8tUwn7boK+b2HZdt7MiTu2mkYtFMUjTRQ=
+k8s.io/apimachinery v0.20.1/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
k8s.io/apiserver v0.18.6/go.mod h1:Zt2XvTHuaZjBz6EFYzpp+X4hTmgWGy8AthNVnTdm3Wg=
k8s.io/client-go v0.18.6/go.mod h1:/fwtGLjYMS1MaM5oi+eXhKwG+1UHidUEXRh6cNsdO0Q=
-k8s.io/client-go v0.19.4 h1:85D3mDNoLF+xqpyE9Dh/OtrJDyJrSRKkHmDXIbEzer8=
-k8s.io/client-go v0.19.4/go.mod h1:ZrEy7+wj9PjH5VMBCuu/BDlvtUAku0oVFk4MmnW9mWA=
+k8s.io/client-go v0.20.1 h1:Qquik0xNFbK9aUG92pxHYsyfea5/RPO9o9bSywNor+M=
+k8s.io/client-go v0.20.1/go.mod h1:/zcHdt1TeWSd5HoUe6elJmHSQ6uLLgp4bIJHVEuy+/Y=
k8s.io/code-generator v0.18.6/go.mod h1:TgNEVx9hCyPGpdtCWA34olQYLkh3ok9ar7XfSsr8b6c=
k8s.io/component-base v0.18.6/go.mod h1:knSVsibPR5K6EW2XOjEHik6sdU5nCvKMrzMt2D4In14=
k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
@@ -795,25 +812,25 @@ k8s.io/klog v1.0.0 h1:Pt+yjF5aB1xDSVbau4VsWe+dQNzA0qv1LlXdC2dF6Q8=
k8s.io/klog v1.0.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=
k8s.io/klog/v2 v2.0.0 h1:Foj74zO6RbjjP4hBEKjnYtjjAhGg4jNynUdYF6fJrok=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
-k8s.io/klog/v2 v2.2.0 h1:XRvcwJozkgZ1UQJmfMGpvRthQHOvihEhYtDfAaxMz/A=
-k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
+k8s.io/klog/v2 v2.4.0 h1:7+X0fUguPyrKEC4WjH8iGDg3laWgMo5tMnRTIGTTxGQ=
+k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/kube-openapi v0.0.0-20200410145947-61e04a5be9a6/go.mod h1:GRQhZsXIAJ1xR0C9bd8UpWHZ5plfAS9fzPjJuQ6JL3E=
-k8s.io/kube-openapi v0.0.0-20200805222855-6aeccd4b50c6 h1:+WnxoVtG8TMiudHBSEtrVL1egv36TkkJm+bA8AxicmQ=
-k8s.io/kube-openapi v0.0.0-20200805222855-6aeccd4b50c6/go.mod h1:UuqjUnNftUyPE5H64/qeyjQoUZhGpeFDVdxjTeEVN2o=
+k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd h1:sOHNzJIkytDF6qadMNKhhDRpc6ODik8lVC6nOur7B2c=
+k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM=
k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew=
k8s.io/utils v0.0.0-20200603063816-c1c6865ac451/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
-k8s.io/utils v0.0.0-20200729134348-d5654de09c73 h1:uJmqzgNWG7XyClnU/mLPBWwfKKF1K8Hf8whTseBgJcg=
-k8s.io/utils v0.0.0-20200729134348-d5654de09c73/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
+k8s.io/utils v0.0.0-20201110183641-67b214c5f920 h1:CbnUZsM497iRC5QMVkHwyl8s2tB3g7yaSHkYPkpgelw=
+k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.7/go.mod h1:PHgbrJT7lCHcxMU+mDHEm+nx46H4zuuHZkDP6icnhu0=
-sigs.k8s.io/controller-runtime v0.6.3 h1:SBbr+inLPEKhvlJtrvDcwIpm+uhDvp63Bl72xYJtoOE=
-sigs.k8s.io/controller-runtime v0.6.3/go.mod h1:WlZNXcM0++oyaQt4B7C2lEE5JYRs8vJUzRP4N4JpdAY=
+sigs.k8s.io/controller-runtime v0.6.4 h1:4013CKsBs5bEqo+LevzDett+LLxag/FjQWG94nVZ/9g=
+sigs.k8s.io/controller-runtime v0.6.4/go.mod h1:WlZNXcM0++oyaQt4B7C2lEE5JYRs8vJUzRP4N4JpdAY=
sigs.k8s.io/structured-merge-diff/v3 v3.0.0-20200116222232-67a7b8c61874/go.mod h1:PlARxl6Hbt/+BC80dRLi1qAmnMqwqDg62YvvVkZjemw=
sigs.k8s.io/structured-merge-diff/v3 v3.0.0/go.mod h1:PlARxl6Hbt/+BC80dRLi1qAmnMqwqDg62YvvVkZjemw=
-sigs.k8s.io/structured-merge-diff/v4 v4.0.1 h1:YXTMot5Qz/X1iBRJhAt+vI+HVttY0WkSqqhKxQ0xVbA=
-sigs.k8s.io/structured-merge-diff/v4 v4.0.1/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
+sigs.k8s.io/structured-merge-diff/v4 v4.0.2 h1:YHQV7Dajm86OuqnIR6zAelnDWBRjo+YhYV9PmGrh1s8=
+sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.2.0 h1:kr/MCeFWJWTwyaHoR9c8EjH9OumOmoF9YGiZd7lFm/Q=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
From bd046f684a9933d3276bb240d7a2c63e3f65d4cc Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Fri, 1 Jan 2021 14:11:49 -0500
Subject: [PATCH 101/276] Bump pgMonitor to v4.4
---
bin/get-pgmonitor.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/bin/get-pgmonitor.sh b/bin/get-pgmonitor.sh
index cfc178e77d..efa62b8628 100755
--- a/bin/get-pgmonitor.sh
+++ b/bin/get-pgmonitor.sh
@@ -14,7 +14,7 @@
# limitations under the License.
echo "Getting pgMonitor..."
-PGMONITOR_COMMIT='v4.4-RC7'
+PGMONITOR_COMMIT='v4.4'
# pgMonitor Setup
if [[ -d ${PGOROOT?}/tools/pgmonitor ]]
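The patch above moves the pgMonitor pin from a release candidate to the final tag. The pin-to-a-fixed-tag pattern that `get-pgmonitor.sh` follows can be sketched standalone; the clone itself is elided so the sketch runs offline, and `checkout_cmd` is purely illustrative (not a variable in the real script):

```shell
# Sketch of pinning a vendored tool to a fixed tag, as get-pgmonitor.sh does.
# PGMONITOR_COMMIT mirrors the variable in the script; everything else here
# is illustrative only.
PGMONITOR_COMMIT='v4.4'
checkout_cmd="git checkout ${PGMONITOR_COMMIT}"
echo "$checkout_cmd"
```

Pinning to an immutable tag (rather than a branch) keeps builds reproducible across the release.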
From a65ccf78a8c5ff0a068cffb49d5ae220fa2e8da4 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Fri, 1 Jan 2021 14:28:08 -0500
Subject: [PATCH 102/276] Add `make check` target for running tests
---
Makefile | 3 +++
1 file changed, 3 insertions(+)
diff --git a/Makefile b/Makefile
index 5aa5bae77d..3c41592987 100644
--- a/Makefile
+++ b/Makefile
@@ -201,6 +201,9 @@ pgo-base-docker: pgo-base-build
#======== Utility =======
+check:
+ PGOROOT=$(PGOROOT) go test ./...
+
cli-docs:
rm docs/content/pgo-client/reference/*.md
cd docs/content/pgo-client/reference && go run ../../../../cmd/pgo/generatedocs.go
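The new `check` target simply runs the Go test suite with `PGOROOT` passed through. A minimal standalone sketch of that pattern, using a stub Makefile in a temporary directory (the stub echoes the command instead of running `go test`, so it works without a Go toolchain; the `/tmp/pgo` default is a placeholder, not the project's real layout):

```shell
# Stub Makefile mirroring the shape of the new `check` target.
# The real target runs `go test ./...` with PGOROOT set; here the recipe
# echoes the command line so the sketch is self-contained.
tmp=$(mktemp -d)
printf 'PGOROOT ?= /tmp/pgo\ncheck:\n\t@echo "PGOROOT=$(PGOROOT) go test ./..."\n' > "$tmp/Makefile"
cmdline=$(make -C "$tmp" -s check)
echo "$cmdline"
rm -rf "$tmp"
```

In the real repository the equivalent invocation is just `make check` (optionally with `PGOROOT` overridden on the command line).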
From b7012d041754218fccea63b7a23cf802bc5c5cd1 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 2 Jan 2021 10:45:36 -0500
Subject: [PATCH 103/276] Remove current bash completion support
This has been unmaintained and seemingly unused. A future release
could reintroduce it.
---
Makefile | 1 -
cmd/pgo/cmd/root.go | 15 -
docs/content/pgo-client/_index.md | 1 -
examples/pgo-bash-completion | 2146 -----------------------------
4 files changed, 2163 deletions(-)
delete mode 100644 examples/pgo-bash-completion
diff --git a/Makefile b/Makefile
index 3c41592987..5f5e8f06e6 100644
--- a/Makefile
+++ b/Makefile
@@ -246,7 +246,6 @@ release: linuxpgo macpgo winpgo
cp bin/pgo $(RELTMPDIR)
cp bin/pgo-mac $(RELTMPDIR)
cp bin/pgo.exe $(RELTMPDIR)
- cp $(PGOROOT)/examples/pgo-bash-completion $(RELTMPDIR)
tar czvf $(RELFILE) -C $(RELTMPDIR) .
generate:
diff --git a/cmd/pgo/cmd/root.go b/cmd/pgo/cmd/root.go
index 3cd416c784..f22a92fb1a 100644
--- a/cmd/pgo/cmd/root.go
+++ b/cmd/pgo/cmd/root.go
@@ -104,19 +104,4 @@ func initConfig() {
httpclient = hc
}
}
-
- if os.Getenv("GENERATE_BASH_COMPLETION") != "" {
- generateBashCompletion()
- }
-}
-
-func generateBashCompletion() {
- log.Debugf("generating bash completion script")
- // #nosec: G303
- file, err2 := os.Create("/tmp/pgo-bash-completion.out")
- if err2 != nil {
- fmt.Println("Error: ", err2.Error())
- }
- defer file.Close()
- _ = RootCmd.GenBashCompletion(file)
}
diff --git a/docs/content/pgo-client/_index.md b/docs/content/pgo-client/_index.md
index f1c6149c3f..f5fb7c7d3f 100644
--- a/docs/content/pgo-client/_index.md
+++ b/docs/content/pgo-client/_index.md
@@ -154,7 +154,6 @@ client.
| Name | Description |
| :-- | :-- |
| `EXCLUDE_OS_TRUST` | Exclude CA certs from OS default trust store. |
-| `GENERATE_BASH_COMPLETION` | If set, will allow `pgo` to leverage "bash completion" to help complete commands as they are typed. |
| `PGO_APISERVER_URL` | The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a `/`. |
| `PGO_CA_CERT` | The CA certificate file path for authenticating to the PostgreSQL Operator apiserver. |
| `PGO_CLIENT_CERT` | The client certificate file path for authenticating to the PostgreSQL Operator apiserver. |
diff --git a/examples/pgo-bash-completion b/examples/pgo-bash-completion
deleted file mode 100644
index 6c89ce0b37..0000000000
--- a/examples/pgo-bash-completion
+++ /dev/null
@@ -1,2146 +0,0 @@
-# bash completion for pgo -*- shell-script -*-
-
-__pgo_debug()
-{
- if [[ -n ${BASH_COMP_DEBUG_FILE} ]]; then
- echo "$*" >> "${BASH_COMP_DEBUG_FILE}"
- fi
-}
-
-# Homebrew on Macs have version 1.3 of bash-completion which doesn't include
-# _init_completion. This is a very minimal version of that function.
-__pgo_init_completion()
-{
- COMPREPLY=()
- _get_comp_words_by_ref "$@" cur prev words cword
-}
-
-__pgo_index_of_word()
-{
- local w word=$1
- shift
- index=0
- for w in "$@"; do
- [[ $w = "$word" ]] && return
- index=$((index+1))
- done
- index=-1
-}
-
-__pgo_contains_word()
-{
- local w word=$1; shift
- for w in "$@"; do
- [[ $w = "$word" ]] && return
- done
- return 1
-}
-
-__pgo_handle_reply()
-{
- __pgo_debug "${FUNCNAME[0]}"
- case $cur in
- -*)
- if [[ $(type -t compopt) = "builtin" ]]; then
- compopt -o nospace
- fi
- local allflags
- if [ ${#must_have_one_flag[@]} -ne 0 ]; then
- allflags=("${must_have_one_flag[@]}")
- else
- allflags=("${flags[*]} ${two_word_flags[*]}")
- fi
- COMPREPLY=( $(compgen -W "${allflags[*]}" -- "$cur") )
- if [[ $(type -t compopt) = "builtin" ]]; then
- [[ "${COMPREPLY[0]}" == *= ]] || compopt +o nospace
- fi
-
- # complete after --flag=abc
- if [[ $cur == *=* ]]; then
- if [[ $(type -t compopt) = "builtin" ]]; then
- compopt +o nospace
- fi
-
- local index flag
- flag="${cur%=*}"
- __pgo_index_of_word "${flag}" "${flags_with_completion[@]}"
- COMPREPLY=()
- if [[ ${index} -ge 0 ]]; then
- PREFIX=""
- cur="${cur#*=}"
- ${flags_completion[${index}]}
- if [ -n "${ZSH_VERSION}" ]; then
- # zsh completion needs --flag= prefix
- eval "COMPREPLY=( \"\${COMPREPLY[@]/#/${flag}=}\" )"
- fi
- fi
- fi
- return 0;
- ;;
- esac
-
- # check if we are handling a flag with special work handling
- local index
- __pgo_index_of_word "${prev}" "${flags_with_completion[@]}"
- if [[ ${index} -ge 0 ]]; then
- ${flags_completion[${index}]}
- return
- fi
-
- # we are parsing a flag and don't have a special handler, no completion
- if [[ ${cur} != "${words[cword]}" ]]; then
- return
- fi
-
- local completions
- completions=("${commands[@]}")
- if [[ ${#must_have_one_noun[@]} -ne 0 ]]; then
- completions=("${must_have_one_noun[@]}")
- fi
- if [[ ${#must_have_one_flag[@]} -ne 0 ]]; then
- completions+=("${must_have_one_flag[@]}")
- fi
- COMPREPLY=( $(compgen -W "${completions[*]}" -- "$cur") )
-
- if [[ ${#COMPREPLY[@]} -eq 0 && ${#noun_aliases[@]} -gt 0 && ${#must_have_one_noun[@]} -ne 0 ]]; then
- COMPREPLY=( $(compgen -W "${noun_aliases[*]}" -- "$cur") )
- fi
-
- if [[ ${#COMPREPLY[@]} -eq 0 ]]; then
- declare -F __custom_func >/dev/null && __custom_func
- fi
-
- # available in bash-completion >= 2, not always present on macOS
- if declare -F __ltrim_colon_completions >/dev/null; then
- __ltrim_colon_completions "$cur"
- fi
-
- # If there is only 1 completion and it is a flag with an = it will be completed
- # but we don't want a space after the =
- if [[ "${#COMPREPLY[@]}" -eq "1" ]] && [[ $(type -t compopt) = "builtin" ]] && [[ "${COMPREPLY[0]}" == --*= ]]; then
- compopt -o nospace
- fi
-}
-
-# The arguments should be in the form "ext1|ext2|extn"
-__pgo_handle_filename_extension_flag()
-{
- local ext="$1"
- _filedir "@(${ext})"
-}
-
-__pgo_handle_subdirs_in_dir_flag()
-{
- local dir="$1"
- pushd "${dir}" >/dev/null 2>&1 && _filedir -d && popd >/dev/null 2>&1
-}
-
-__pgo_handle_flag()
-{
- __pgo_debug "${FUNCNAME[0]}: c is $c words[c] is ${words[c]}"
-
- # if a command required a flag, and we found it, unset must_have_one_flag()
- local flagname=${words[c]}
- local flagvalue
- # if the word contained an =
- if [[ ${words[c]} == *"="* ]]; then
- flagvalue=${flagname#*=} # take in as flagvalue after the =
- flagname=${flagname%=*} # strip everything after the =
- flagname="${flagname}=" # but put the = back
- fi
- __pgo_debug "${FUNCNAME[0]}: looking for ${flagname}"
- if __pgo_contains_word "${flagname}" "${must_have_one_flag[@]}"; then
- must_have_one_flag=()
- fi
-
- # if you set a flag which only applies to this command, don't show subcommands
- if __pgo_contains_word "${flagname}" "${local_nonpersistent_flags[@]}"; then
- commands=()
- fi
-
- # keep flag value with flagname as flaghash
- # flaghash variable is an associative array which is only supported in bash > 3.
- if [[ -z "${BASH_VERSION}" || "${BASH_VERSINFO[0]}" -gt 3 ]]; then
- if [ -n "${flagvalue}" ] ; then
- flaghash[${flagname}]=${flagvalue}
- elif [ -n "${words[ $((c+1)) ]}" ] ; then
- flaghash[${flagname}]=${words[ $((c+1)) ]}
- else
- flaghash[${flagname}]="true" # pad "true" for bool flag
- fi
- fi
-
- # skip the argument to a two word flag
- if __pgo_contains_word "${words[c]}" "${two_word_flags[@]}"; then
- c=$((c+1))
- # if we are looking for a flags value, don't show commands
- if [[ $c -eq $cword ]]; then
- commands=()
- fi
- fi
-
- c=$((c+1))
-
-}
-
-__pgo_handle_noun()
-{
- __pgo_debug "${FUNCNAME[0]}: c is $c words[c] is ${words[c]}"
-
- if __pgo_contains_word "${words[c]}" "${must_have_one_noun[@]}"; then
- must_have_one_noun=()
- elif __pgo_contains_word "${words[c]}" "${noun_aliases[@]}"; then
- must_have_one_noun=()
- fi
-
- nouns+=("${words[c]}")
- c=$((c+1))
-}
-
-__pgo_handle_command()
-{
- __pgo_debug "${FUNCNAME[0]}: c is $c words[c] is ${words[c]}"
-
- local next_command
- if [[ -n ${last_command} ]]; then
- next_command="_${last_command}_${words[c]//:/__}"
- else
- if [[ $c -eq 0 ]]; then
- next_command="_pgo_root_command"
- else
- next_command="_${words[c]//:/__}"
- fi
- fi
- c=$((c+1))
- __pgo_debug "${FUNCNAME[0]}: looking for ${next_command}"
- declare -F "$next_command" >/dev/null && $next_command
-}
-
-__pgo_handle_word()
-{
- if [[ $c -ge $cword ]]; then
- __pgo_handle_reply
- return
- fi
- __pgo_debug "${FUNCNAME[0]}: c is $c words[c] is ${words[c]}"
- if [[ "${words[c]}" == -* ]]; then
- __pgo_handle_flag
- elif __pgo_contains_word "${words[c]}" "${commands[@]}"; then
- __pgo_handle_command
- elif [[ $c -eq 0 ]]; then
- __pgo_handle_command
- elif __pgo_contains_word "${words[c]}" "${command_aliases[@]}"; then
- # aliashash variable is an associative array which is only supported in bash > 3.
- if [[ -z "${BASH_VERSION}" || "${BASH_VERSINFO[0]}" -gt 3 ]]; then
- words[c]=${aliashash[${words[c]}]}
- __pgo_handle_command
- else
- __pgo_handle_noun
- fi
- else
- __pgo_handle_noun
- fi
- __pgo_handle_word
-}
-
-_pgo_apply()
-{
- last_command="pgo_apply"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--dry-run")
- local_nonpersistent_flags+=("--dry-run")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_backup()
-{
- last_command="pgo_backup"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--backup-opts=")
- local_nonpersistent_flags+=("--backup-opts=")
- flags+=("--backup-type=")
- local_nonpersistent_flags+=("--backup-type=")
- flags+=("--pgbackrest-storage-type=")
- local_nonpersistent_flags+=("--pgbackrest-storage-type=")
- flags+=("--pvc-name=")
- local_nonpersistent_flags+=("--pvc-name=")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--storage-config=")
- local_nonpersistent_flags+=("--storage-config=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_cat()
-{
- last_command="pgo_cat"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_create_cluster()
-{
- last_command="pgo_create_cluster"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--autofail")
- local_nonpersistent_flags+=("--autofail")
- flags+=("--ccp-image=")
- local_nonpersistent_flags+=("--ccp-image=")
- flags+=("--ccp-image-tag=")
- two_word_flags+=("-c")
- local_nonpersistent_flags+=("--ccp-image-tag=")
- flags+=("--custom-config=")
- local_nonpersistent_flags+=("--custom-config=")
- flags+=("--labels=")
- two_word_flags+=("-l")
- local_nonpersistent_flags+=("--labels=")
- flags+=("--metrics")
- local_nonpersistent_flags+=("--metrics")
- flags+=("--node-label=")
- local_nonpersistent_flags+=("--node-label=")
- flags+=("--password=")
- two_word_flags+=("-w")
- local_nonpersistent_flags+=("--password=")
- flags+=("--pgbackrest=")
- local_nonpersistent_flags+=("--pgbackrest=")
- flags+=("--pgbackrest-storage-type=")
- local_nonpersistent_flags+=("--pgbackrest-storage-type=")
- flags+=("--pgbadger")
- local_nonpersistent_flags+=("--pgbadger")
- flags+=("--pgbouncer")
- local_nonpersistent_flags+=("--pgbouncer")
- flags+=("--policies=")
- two_word_flags+=("-z")
- local_nonpersistent_flags+=("--policies=")
- flags+=("--replica-count=")
- local_nonpersistent_flags+=("--replica-count=")
- flags+=("--replica-storage-config=")
- local_nonpersistent_flags+=("--replica-storage-config=")
- flags+=("--secret-from=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--secret-from=")
- two_word_flags+=("-e")
- flags+=("--service-type=")
- local_nonpersistent_flags+=("--service-type=")
- flags+=("--storage-config=")
- local_nonpersistent_flags+=("--storage-config=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_create_namespace()
-{
- last_command="pgo_create_namespace"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_create_pgbouncer()
-{
- last_command="pgo_create_pgbouncer"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_create_pgorole()
-{
- last_command="pgo_create_pgorole"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--permissions=")
- local_nonpersistent_flags+=("--permissions=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_create_pgouser()
-{
- last_command="pgo_create_pgouser"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all-namespaces")
- local_nonpersistent_flags+=("--all-namespaces")
- flags+=("--pgouser-namespaces=")
- local_nonpersistent_flags+=("--pgouser-namespaces=")
- flags+=("--pgouser-password=")
- local_nonpersistent_flags+=("--pgouser-password=")
- flags+=("--pgouser-roles=")
- local_nonpersistent_flags+=("--pgouser-roles=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_create_policy()
-{
- last_command="pgo_create_policy"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--in-file=")
- two_word_flags+=("-i")
- local_nonpersistent_flags+=("--in-file=")
- flags+=("--url=")
- two_word_flags+=("-u")
- local_nonpersistent_flags+=("--url=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_create_schedule()
-{
- last_command="pgo_create_schedule"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--ccp-image-tag=")
- two_word_flags+=("-c")
- local_nonpersistent_flags+=("--ccp-image-tag=")
- flags+=("--database=")
- local_nonpersistent_flags+=("--database=")
- flags+=("--pgbackrest-backup-type=")
- local_nonpersistent_flags+=("--pgbackrest-backup-type=")
- flags+=("--pgbackrest-storage-type=")
- local_nonpersistent_flags+=("--pgbackrest-storage-type=")
- flags+=("--policy=")
- local_nonpersistent_flags+=("--policy=")
- flags+=("--pvc-name=")
- local_nonpersistent_flags+=("--pvc-name=")
- flags+=("--schedule=")
- local_nonpersistent_flags+=("--schedule=")
- flags+=("--schedule-opts=")
- local_nonpersistent_flags+=("--schedule-opts=")
- flags+=("--schedule-type=")
- local_nonpersistent_flags+=("--schedule-type=")
- flags+=("--secret=")
- local_nonpersistent_flags+=("--secret=")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_create_user()
-{
- last_command="pgo_create_user"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--managed")
- local_nonpersistent_flags+=("--managed")
- flags+=("--password=")
- local_nonpersistent_flags+=("--password=")
- flags+=("--password-length=")
- local_nonpersistent_flags+=("--password-length=")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--username=")
- local_nonpersistent_flags+=("--username=")
- flags+=("--valid-days=")
- local_nonpersistent_flags+=("--valid-days=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_create()
-{
- last_command="pgo_create"
-
- command_aliases=()
-
- commands=()
- commands+=("cluster")
- commands+=("namespace")
- commands+=("pgbouncer")
- commands+=("pgorole")
- commands+=("pgouser")
- commands+=("policy")
- commands+=("schedule")
- commands+=("user")
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_delete_backup()
-{
- last_command="pgo_delete_backup"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_delete_cluster()
-{
- last_command="pgo_delete_cluster"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--delete-backups")
- flags+=("-b")
- local_nonpersistent_flags+=("--delete-backups")
- flags+=("--delete-data")
- flags+=("-d")
- local_nonpersistent_flags+=("--delete-data")
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_delete_label()
-{
- last_command="pgo_delete_label"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--label=")
- local_nonpersistent_flags+=("--label=")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_delete_namespace()
-{
- last_command="pgo_delete_namespace"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_delete_pgbouncer()
-{
- last_command="pgo_delete_pgbouncer"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_delete_pgorole()
-{
- last_command="pgo_delete_pgorole"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_delete_pgouser()
-{
- last_command="pgo_delete_pgouser"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_delete_policy()
-{
- last_command="pgo_delete_policy"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_delete_schedule()
-{
- last_command="pgo_delete_schedule"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--schedule-name=")
- local_nonpersistent_flags+=("--schedule-name=")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_delete_user()
-{
- last_command="pgo_delete_user"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--username=")
- local_nonpersistent_flags+=("--username=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_delete()
-{
- last_command="pgo_delete"
-
- command_aliases=()
-
- commands=()
- commands+=("backup")
- commands+=("cluster")
- commands+=("label")
- commands+=("namespace")
- commands+=("pgbouncer")
- commands+=("pgorole")
- commands+=("pgouser")
- commands+=("policy")
- commands+=("schedule")
- commands+=("user")
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_df()
-{
- last_command="pgo_df"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_failover()
-{
- last_command="pgo_failover"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--autofail-replace-replica=")
- local_nonpersistent_flags+=("--autofail-replace-replica=")
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--query")
- local_nonpersistent_flags+=("--query")
- flags+=("--target=")
- local_nonpersistent_flags+=("--target=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_label()
-{
- last_command="pgo_label"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--dry-run")
- local_nonpersistent_flags+=("--dry-run")
- flags+=("--label=")
- local_nonpersistent_flags+=("--label=")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_ls()
-{
- last_command="pgo_ls"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_reload()
-{
- last_command="pgo_reload"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_restore()
-{
- last_command="pgo_restore"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--backup-opts=")
- local_nonpersistent_flags+=("--backup-opts=")
- flags+=("--backup-path=")
- local_nonpersistent_flags+=("--backup-path=")
- flags+=("--backup-pvc=")
- local_nonpersistent_flags+=("--backup-pvc=")
- flags+=("--backup-type=")
- local_nonpersistent_flags+=("--backup-type=")
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--node-label=")
- local_nonpersistent_flags+=("--node-label=")
- flags+=("--pgbackrest-storage-type=")
- local_nonpersistent_flags+=("--pgbackrest-storage-type=")
- flags+=("--pitr-target=")
- local_nonpersistent_flags+=("--pitr-target=")
- flags+=("--restore-to-pvc=")
- local_nonpersistent_flags+=("--restore-to-pvc=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_scale()
-{
- last_command="pgo_scale"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--ccp-image-tag=")
- local_nonpersistent_flags+=("--ccp-image-tag=")
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--node-label=")
- local_nonpersistent_flags+=("--node-label=")
- flags+=("--replica-count=")
- local_nonpersistent_flags+=("--replica-count=")
- flags+=("--service-type=")
- local_nonpersistent_flags+=("--service-type=")
- flags+=("--storage-config=")
- local_nonpersistent_flags+=("--storage-config=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_scaledown()
-{
- last_command="pgo_scaledown"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--delete-data")
- flags+=("-d")
- local_nonpersistent_flags+=("--delete-data")
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--query")
- local_nonpersistent_flags+=("--query")
- flags+=("--target=")
- local_nonpersistent_flags+=("--target=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_show_backup()
-{
- last_command="pgo_show_backup"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--backup-type=")
- local_nonpersistent_flags+=("--backup-type=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_show_cluster()
-{
- last_command="pgo_show_cluster"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--ccp-image-tag=")
- local_nonpersistent_flags+=("--ccp-image-tag=")
- flags+=("--output=")
- two_word_flags+=("-o")
- local_nonpersistent_flags+=("--output=")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_show_config()
-{
- last_command="pgo_show_config"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_show_namespace()
-{
- last_command="pgo_show_namespace"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_show_pgorole()
-{
- last_command="pgo_show_pgorole"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_show_pgouser()
-{
- last_command="pgo_show_pgouser"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_show_policy()
-{
- last_command="pgo_show_policy"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_show_pvc()
-{
- last_command="pgo_show_pvc"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--node-label=")
- local_nonpersistent_flags+=("--node-label=")
- flags+=("--pvc-root=")
- local_nonpersistent_flags+=("--pvc-root=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_show_schedule()
-{
- last_command="pgo_show_schedule"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--schedule-name=")
- local_nonpersistent_flags+=("--schedule-name=")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_show_user()
-{
- last_command="pgo_show_user"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--expired=")
- local_nonpersistent_flags+=("--expired=")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_show_workflow()
-{
- last_command="pgo_show_workflow"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_show()
-{
- last_command="pgo_show"
-
- command_aliases=()
-
- commands=()
- commands+=("backup")
- commands+=("cluster")
- commands+=("config")
- commands+=("namespace")
- commands+=("pgorole")
- commands+=("pgouser")
- commands+=("policy")
- commands+=("pvc")
- commands+=("schedule")
- commands+=("user")
- commands+=("workflow")
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_status()
-{
- last_command="pgo_status"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--output=")
- two_word_flags+=("-o")
- local_nonpersistent_flags+=("--output=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_test()
-{
- last_command="pgo_test"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--output=")
- two_word_flags+=("-o")
- local_nonpersistent_flags+=("--output=")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_update_cluster()
-{
- last_command="pgo_update_cluster"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--autofail")
- local_nonpersistent_flags+=("--autofail")
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_update_namespace()
-{
- last_command="pgo_update_namespace"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_update_pgorole()
-{
- last_command="pgo_update_pgorole"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--permissions=")
- local_nonpersistent_flags+=("--permissions=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_update_pgouser()
-{
- last_command="pgo_update_pgouser"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all-namespaces")
- local_nonpersistent_flags+=("--all-namespaces")
- flags+=("--no-prompt")
- local_nonpersistent_flags+=("--no-prompt")
- flags+=("--pgouser-namespaces=")
- local_nonpersistent_flags+=("--pgouser-namespaces=")
- flags+=("--pgouser-password=")
- local_nonpersistent_flags+=("--pgouser-password=")
- flags+=("--pgouser-roles=")
- local_nonpersistent_flags+=("--pgouser-roles=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_update_user()
-{
- last_command="pgo_update_user"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--all")
- local_nonpersistent_flags+=("--all")
- flags+=("--expire-user")
- local_nonpersistent_flags+=("--expire-user")
- flags+=("--expired=")
- local_nonpersistent_flags+=("--expired=")
- flags+=("--password=")
- local_nonpersistent_flags+=("--password=")
- flags+=("--password-length=")
- local_nonpersistent_flags+=("--password-length=")
- flags+=("--selector=")
- two_word_flags+=("-s")
- local_nonpersistent_flags+=("--selector=")
- flags+=("--username=")
- local_nonpersistent_flags+=("--username=")
- flags+=("--valid-days=")
- local_nonpersistent_flags+=("--valid-days=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_update()
-{
- last_command="pgo_update"
-
- command_aliases=()
-
- commands=()
- commands+=("cluster")
- commands+=("namespace")
- commands+=("pgorole")
- commands+=("pgouser")
- commands+=("user")
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_upgrade()
-{
- last_command="pgo_upgrade"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--ccp-image-tag=")
- local_nonpersistent_flags+=("--ccp-image-tag=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_version()
-{
- last_command="pgo_version"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--help")
- flags+=("-h")
- local_nonpersistent_flags+=("--help")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_watch()
-{
- last_command="pgo_watch"
-
- command_aliases=()
-
- commands=()
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--pgo-event-address=")
- two_word_flags+=("-a")
- local_nonpersistent_flags+=("--pgo-event-address=")
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-_pgo_root_command()
-{
- last_command="pgo"
-
- command_aliases=()
-
- commands=()
- commands+=("apply")
- commands+=("backup")
- commands+=("cat")
- commands+=("create")
- commands+=("delete")
- commands+=("df")
- commands+=("failover")
- commands+=("label")
- commands+=("load")
- commands+=("ls")
- commands+=("reload")
- commands+=("restore")
- commands+=("scale")
- commands+=("scaledown")
- commands+=("show")
- commands+=("status")
- commands+=("test")
- commands+=("update")
- commands+=("upgrade")
- commands+=("version")
- commands+=("watch")
-
- flags=()
- two_word_flags=()
- local_nonpersistent_flags=()
- flags_with_completion=()
- flags_completion=()
-
- flags+=("--apiserver-url=")
- flags+=("--debug")
- flags+=("--namespace=")
- two_word_flags+=("-n")
- flags+=("--pgo-ca-cert=")
- flags+=("--pgo-client-cert=")
- flags+=("--pgo-client-key=")
-
- must_have_one_flag=()
- must_have_one_noun=()
- noun_aliases=()
-}
-
-__start_pgo()
-{
- local cur prev words cword
- declare -A flaghash 2>/dev/null || :
- declare -A aliashash 2>/dev/null || :
- if declare -F _init_completion >/dev/null 2>&1; then
- _init_completion -s || return
- else
- __pgo_init_completion -n "=" || return
- fi
-
- local c=0
- local flags=()
- local two_word_flags=()
- local local_nonpersistent_flags=()
- local flags_with_completion=()
- local flags_completion=()
- local commands=("pgo")
- local must_have_one_flag=()
- local must_have_one_noun=()
- local last_command
- local nouns=()
-
- __pgo_handle_word
-}
-
-if [[ $(type -t compopt) = "builtin" ]]; then
- complete -o default -F __start_pgo pgo
-else
- complete -o default -o nospace -F __start_pgo pgo
-fi
-
-# ex: ts=4 sw=4 et filetype=sh
From 31495ae4663e372fd7cc6d36bb152811ba0d6ba4 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 2 Jan 2021 11:35:05 -0500
Subject: [PATCH 104/276] Add ServiceType support for pgBouncer
There are cases where one may want to use a different ServiceType for
pgBouncer compared to the PostgreSQL cluster. This patch adds the
`serviceType` attribute to the pgBouncer Spec in the pgclusters CRD,
and adds the following flags to the commands:
- `pgo create cluster --pgbouncer-service-type`
- `pgo create pgbouncer --service-type`
- `pgo update pgbouncer --service-type`
Accepted values match Kubernetes Service Types.
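The validation added in `CreateCluster` (and mirrored in the pgBouncer service) accepts exactly the four Kubernetes Service types, plus the empty string meaning "inherit the cluster's setting". A minimal, self-contained sketch of that switch — using a local string type standing in for `k8s.io/api/core/v1.ServiceType` so it compiles without the Kubernetes dependency:

```go
package main

import "fmt"

// ServiceType stands in for k8s.io/api/core/v1.ServiceType in this sketch.
type ServiceType string

const (
	ServiceTypeClusterIP    ServiceType = "ClusterIP"
	ServiceTypeNodePort     ServiceType = "NodePort"
	ServiceTypeLoadBalancer ServiceType = "LoadBalancer"
	ServiceTypeExternalName ServiceType = "ExternalName"
)

// validServiceType mirrors the validation switch this patch adds: the four
// Kubernetes Service types are accepted, as is "" (no override requested).
func validServiceType(t ServiceType) bool {
	switch t {
	case ServiceTypeClusterIP, ServiceTypeNodePort,
		ServiceTypeLoadBalancer, ServiceTypeExternalName, "":
		return true
	}
	return false
}

func main() {
	fmt.Println(validServiceType("NodePort")) // a valid Service type
	fmt.Println(validServiceType("Bogus"))    // rejected by the switch
}
```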
---
cmd/pgo/cmd/cluster.go | 1 +
cmd/pgo/cmd/create.go | 5 ++-
cmd/pgo/cmd/pgbouncer.go | 3 ++
cmd/pgo/cmd/update.go | 1 +
docs/content/custom-resources/_index.md | 1 +
.../reference/pgo_create_cluster.md | 3 +-
.../reference/pgo_create_pgbouncer.md | 3 +-
.../reference/pgo_update_pgbouncer.md | 5 ++-
.../apiserver/clusterservice/clusterimpl.go | 25 ++++++++---
.../pgbouncerservice/pgbouncerimpl.go | 24 ++++++++++
internal/operator/cluster/pgbouncer.go | 45 ++++++++++++++++---
pkg/apis/crunchydata.com/v1/cluster.go | 4 ++
pkg/apiservermsgs/clustermsgs.go | 4 ++
pkg/apiservermsgs/pgbouncermsgs.go | 9 ++++
14 files changed, 117 insertions(+), 16 deletions(-)
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index 9d0465da22..f8db1a8409 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -280,6 +280,7 @@ func createCluster(args []string, ns string, createClusterCmd *cobra.Command) {
r.ExporterMemoryLimit = ExporterMemoryLimit
r.BadgerFlag = BadgerFlag
r.ServiceType = v1.ServiceType(ServiceType)
+ r.PgBouncerServiceType = v1.ServiceType(ServiceTypePgBouncer)
r.AutofailFlag = !DisableAutofailFlag
r.PgbouncerFlag = PgbouncerFlag
r.BackrestStorageConfig = BackrestStorageConfig
diff --git a/cmd/pgo/cmd/create.go b/cmd/pgo/cmd/create.go
index 19d1612f1a..36fd950a71 100644
--- a/cmd/pgo/cmd/create.go
+++ b/cmd/pgo/cmd/create.go
@@ -42,6 +42,7 @@ var (
UserLabels string
Tablespaces []string
ServiceType string
+ ServiceTypePgBouncer string
Schedule string
ScheduleOptions string
ScheduleType string
@@ -453,6 +454,7 @@ func init() {
createClusterCmd.Flags().StringVar(&PgBouncerMemoryLimit, "pgbouncer-memory-limit", "", "Set the amount of memory to limit for "+
"pgBouncer.")
createClusterCmd.Flags().Int32Var(&PgBouncerReplicas, "pgbouncer-replicas", 0, "Set the total number of pgBouncer instances to deploy. If not set, defaults to 1.")
+ createClusterCmd.Flags().StringVar(&ServiceTypePgBouncer, "pgbouncer-service-type", "", "The Service type to use for pgBouncer. Defaults to the Service type of the PostgreSQL cluster.")
createClusterCmd.Flags().StringVar(&PgBouncerTLSSecret, "pgbouncer-tls-secret", "", "The name of the secret "+
"that contains the TLS keypair to use for enabling pgBouncer to accept TLS connections. "+
"Must also set server-tls-secret and server-ca-secret.")
@@ -486,7 +488,7 @@ func init() {
createClusterCmd.Flags().StringVar(&TLSSecret, "server-tls-secret", "", "The name of the secret that contains "+
"the TLS keypair to use for enabling the PostgreSQL cluster to accept TLS connections. "+
"Must be used with \"server-ca-secret\"")
- createClusterCmd.Flags().StringVarP(&ServiceType, "service-type", "", "", "The Service type to use for the PostgreSQL cluster. If not set, the pgo.yaml default will be used.")
+ createClusterCmd.Flags().StringVar(&ServiceType, "service-type", "", "The Service type to use for the PostgreSQL cluster. If not set, the pgo.yaml default will be used.")
createClusterCmd.Flags().BoolVar(&ShowSystemAccounts, "show-system-accounts", false, "Include the system accounts in the results.")
createClusterCmd.Flags().StringVarP(&StorageConfig, "storage-config", "", "", "The name of a Storage config in pgo.yaml to use for the cluster storage.")
createClusterCmd.Flags().BoolVarP(&SyncReplication, "sync-replication", "", false,
@@ -529,6 +531,7 @@ func init() {
"pgBouncer.")
createPgbouncerCmd.Flags().Int32Var(&PgBouncerReplicas, "replicas", 0, "Set the total number of pgBouncer instances to deploy. If not set, defaults to 1.")
createPgbouncerCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.")
+ createPgbouncerCmd.Flags().StringVar(&ServiceType, "service-type", "", "The Service type to use for pgBouncer. Defaults to the Service type of the PostgreSQL cluster.")
createPgbouncerCmd.Flags().StringVar(&PgBouncerTLSSecret, "tls-secret", "", "The name of the secret "+
"that contains the TLS keypair to use for enabling pgBouncer to accept TLS connections. "+
"The PostgreSQL cluster must have TLS enabled.")
diff --git a/cmd/pgo/cmd/pgbouncer.go b/cmd/pgo/cmd/pgbouncer.go
index f9096e4419..b7ec28506f 100644
--- a/cmd/pgo/cmd/pgbouncer.go
+++ b/cmd/pgo/cmd/pgbouncer.go
@@ -23,6 +23,7 @@ import (
"github.com/crunchydata/postgres-operator/cmd/pgo/api"
"github.com/crunchydata/postgres-operator/cmd/pgo/util"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
+ v1 "k8s.io/api/core/v1"
)
// showPgBouncerTextPadding contains the values for what the text padding should be
@@ -67,6 +68,7 @@ func createPgbouncer(args []string, ns string) {
Namespace: ns,
Replicas: PgBouncerReplicas,
Selector: Selector,
+ ServiceType: v1.ServiceType(ServiceType),
TLSSecret: PgBouncerTLSSecret,
}
@@ -367,6 +369,7 @@ func updatePgBouncer(namespace string, clusterNames []string) {
Replicas: PgBouncerReplicas,
RotatePassword: RotatePassword,
Selector: Selector,
+ ServiceType: v1.ServiceType(ServiceType),
}
if err := util.ValidateQuantity(request.CPURequest, "cpu"); err != nil {
diff --git a/cmd/pgo/cmd/update.go b/cmd/pgo/cmd/update.go
index c11e352094..e64c48ed76 100644
--- a/cmd/pgo/cmd/update.go
+++ b/cmd/pgo/cmd/update.go
@@ -156,6 +156,7 @@ func init() {
UpdatePgBouncerCmd.Flags().Int32Var(&PgBouncerReplicas, "replicas", 0, "Set the total number of pgBouncer instances to deploy. If not set, defaults to 1.")
UpdatePgBouncerCmd.Flags().BoolVar(&RotatePassword, "rotate-password", false, "Used to rotate the pgBouncer service account password. Can cause interruption of service.")
UpdatePgBouncerCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.")
+ UpdatePgBouncerCmd.Flags().StringVar(&ServiceType, "service-type", "", "The Service type to use for pgBouncer.")
UpdatePgouserCmd.Flags().StringVarP(&PgouserNamespaces, "pgouser-namespaces", "", "", "The namespaces to use for updating the pgouser roles.")
UpdatePgouserCmd.Flags().BoolVar(&AllNamespaces, "all-namespaces", false, "all namespaces.")
UpdatePgouserCmd.Flags().StringVarP(&PgouserRoles, "pgouser-roles", "", "", "The roles to use for updating the pgouser roles.")
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index 84589f100e..3559d0eb14 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -855,6 +855,7 @@ a PostgreSQL cluster to help with failover scenarios too.
| Limits | `create`, `update` | Specify the container resource limits that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
| Replicas | `create`, `update` | The number of pgBouncer instances to deploy. Must be set to at least `1` to deploy pgBouncer. Setting to `0` removes an existing pgBouncer deployment for the PostgreSQL cluster. |
| Resources | `create`, `update` | Specify the container resource requests that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| ServiceType | `create`, `update` | Sets the Kubernetes [Service](https://kubernetes.io/docs/concepts/services-networking/service/) type to use for pgBouncer. If not set, defaults to the `ServiceType` set for the PostgreSQL cluster. |
| TLSSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair that is used for the pgBouncer instance to identify itself and perform TLS communications with PostgreSQL clients. Must be used with the parent Spec `TLSSecret` and `CASecret`. |
##### Annotations Specification
diff --git a/docs/content/pgo-client/reference/pgo_create_cluster.md b/docs/content/pgo-client/reference/pgo_create_cluster.md
index bc5dd28f99..26ee79dcfc 100644
--- a/docs/content/pgo-client/reference/pgo_create_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_create_cluster.md
@@ -80,6 +80,7 @@ pgo create cluster [flags]
--pgbouncer-memory string Set the amount of memory to request for pgBouncer. Defaults to server value (24Mi).
--pgbouncer-memory-limit string Set the amount of memory to limit for pgBouncer.
--pgbouncer-replicas int32 Set the total number of pgBouncer instances to deploy. If not set, defaults to 1.
+ --pgbouncer-service-type string The Service type to use for pgBouncer. Defaults to the Service type of the PostgreSQL cluster.
--pgbouncer-tls-secret string The name of the secret that contains the TLS keypair to use for enabling pgBouncer to accept TLS connections. Must also set server-tls-secret and server-ca-secret.
--pgo-image-prefix string The PGOImagePrefix to use for cluster creation. If specified, overrides the global configuration.
--pod-anti-affinity string Specifies the type of anti-affinity that should be utilized when applying default pod anti-affinity rules to PG clusters (default "preferred")
@@ -135,4 +136,4 @@ pgo create cluster [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 31-Dec-2020
+###### Auto generated by spf13/cobra on 2-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_create_pgbouncer.md b/docs/content/pgo-client/reference/pgo_create_pgbouncer.md
index 156820f023..beef67d591 100644
--- a/docs/content/pgo-client/reference/pgo_create_pgbouncer.md
+++ b/docs/content/pgo-client/reference/pgo_create_pgbouncer.md
@@ -25,6 +25,7 @@ pgo create pgbouncer [flags]
--memory-limit string Set the amount of memory to limit for pgBouncer.
--replicas int32 Set the total number of pgBouncer instances to deploy. If not set, defaults to 1.
-s, --selector string The selector to use for cluster filtering.
+ --service-type string The Service type to use for pgBouncer. Defaults to the Service type of the PostgreSQL cluster.
--tls-secret string The name of the secret that contains the TLS keypair to use for enabling pgBouncer to accept TLS connections. The PostgreSQL cluster must have TLS enabled.
```
@@ -45,4 +46,4 @@ pgo create pgbouncer [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 22-Nov-2020
+###### Auto generated by spf13/cobra on 1-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_update_pgbouncer.md b/docs/content/pgo-client/reference/pgo_update_pgbouncer.md
index ec51137fd2..b4dd3f0247 100644
--- a/docs/content/pgo-client/reference/pgo_update_pgbouncer.md
+++ b/docs/content/pgo-client/reference/pgo_update_pgbouncer.md
@@ -30,12 +30,13 @@ pgo update pgbouncer [flags]
--replicas int32 Set the total number of pgBouncer instances to deploy. If not set, defaults to 1.
--rotate-password Used to rotate the pgBouncer service account password. Can cause interruption of service.
-s, --selector string The selector to use for cluster filtering.
+ --service-type string The Service type to use for pgBouncer.
```
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -49,4 +50,4 @@ pgo update pgbouncer [flags]
* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 1-Jan-2021
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 784426d834..0297e548ba 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -803,11 +803,23 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
}
// if the pgBouncer flag is set, validate that replicas is set to a
- // nonnegative value
- if request.PgbouncerFlag && request.PgBouncerReplicas < 0 {
- resp.Status.Code = msgs.Error
- resp.Status.Msg = fmt.Sprintf(apiserver.ErrMessageReplicas+" for pgBouncer", 1)
- return resp
+ // nonnegative value and the service type.
+ if request.PgbouncerFlag {
+ if request.PgBouncerReplicas < 0 {
+ resp.Status.Code = msgs.Error
+ resp.Status.Msg = fmt.Sprintf(apiserver.ErrMessageReplicas+" for pgBouncer", 1)
+ return resp
+ }
+
+ // validate the optional ServiceType parameter
+ switch request.PgBouncerServiceType {
+ default:
+ resp.Status.Code = msgs.Error
+ resp.Status.Msg = fmt.Sprintf("invalid pgBouncer service type %q", request.PgBouncerServiceType)
+ return resp
+ case v1.ServiceTypeClusterIP, v1.ServiceTypeNodePort,
+ v1.ServiceTypeLoadBalancer, v1.ServiceTypeExternalName, "": // no-op
+ }
}
// if a value is provided in the request for PodAntiAffinity, then ensure is valid. If
@@ -1184,6 +1196,9 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
if request.PgBouncerReplicas > 0 {
spec.PgBouncer.Replicas = request.PgBouncerReplicas
}
+
+ // additionally if a specific pgBouncer Service Type is set, set that here
+ spec.PgBouncer.ServiceType = request.PgBouncerServiceType
}
// similarly, if there are any overriding pgBouncer container resource request
diff --git a/internal/apiserver/pgbouncerservice/pgbouncerimpl.go b/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
index bd4b0e8fa4..3bacd91886 100644
--- a/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
+++ b/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
@@ -109,6 +109,17 @@ func CreatePgbouncer(request *msgs.CreatePgbouncerRequest, ns, pgouser string) m
cluster.Spec.PgBouncer.Replicas = request.Replicas
}
+ // set the optional ServiceType parameter
+ switch request.ServiceType {
+ default:
+ resp.Status.Code = msgs.Error
+ resp.Status.Msg = fmt.Sprintf("invalid service type %q", request.ServiceType)
+ return resp
+ case v1.ServiceTypeClusterIP, v1.ServiceTypeNodePort,
+ v1.ServiceTypeLoadBalancer, v1.ServiceTypeExternalName:
+ cluster.Spec.PgBouncer.ServiceType = request.ServiceType
+ }
+
// if the request has overriding CPU/memory parameters,
// these will take precedence over the defaults
if request.CPULimit != "" {
@@ -373,6 +384,19 @@ func UpdatePgBouncer(request *msgs.UpdatePgBouncerRequest, namespace, pgouser st
}
}
+ // set the optional ServiceType parameter
+ switch request.ServiceType {
+ default:
+ result.Error = true
+ result.ErrorMessage = fmt.Sprintf("invalid service type %q", request.ServiceType)
+ response.Results = append(response.Results, result)
+ continue
+ case v1.ServiceTypeClusterIP, v1.ServiceTypeNodePort,
+ v1.ServiceTypeLoadBalancer, v1.ServiceTypeExternalName:
+ cluster.Spec.PgBouncer.ServiceType = request.ServiceType
+ case "": // no-op, well, no change
+ }
+
// ensure the Resources/Limits are non-nil
if cluster.Spec.PgBouncer.Resources == nil {
cluster.Spec.PgBouncer.Resources = v1.ResourceList{}
diff --git a/internal/operator/cluster/pgbouncer.go b/internal/operator/cluster/pgbouncer.go
index 7ee7487bd4..87768ac59c 100644
--- a/internal/operator/cluster/pgbouncer.go
+++ b/internal/operator/cluster/pgbouncer.go
@@ -379,12 +379,20 @@ func UpdatePgbouncer(clientset kubernetes.Interface, oldCluster, newCluster *crv
log.Debugf("update pgbouncer from cluster [%s] in namespace [%s]", clusterName, namespace)
- // we need to detect what has changed. presently, two "groups" of things could
- // have changed
- // 1. The # of replicas to maintain
- // 2. The pgBouncer container resources
+ // we need to detect what has changed. This includes:
//
- // As #2 is a bit more destructive, we'll do that last
+ // 1. The Service type for the pgBouncer Service
+ // 2. The # of replicas to maintain
+ // 3. The pgBouncer container resources
+ //
+ // As #3 is a bit more destructive, we'll do that last
+
+ // check the pgBouncer Service
+ if oldCluster.Spec.PgBouncer.ServiceType != newCluster.Spec.PgBouncer.ServiceType {
+ if err := UpdatePgBouncerService(clientset, newCluster); err != nil {
+ return err
+ }
+ }
// check if the replicas differ
if oldCluster.Spec.PgBouncer.Replicas != newCluster.Spec.PgBouncer.Replicas {
@@ -436,6 +444,25 @@ func UpdatePgBouncerAnnotations(clientset kubernetes.Interface, cluster *crv1.Pg
return nil
}
+// UpdatePgBouncerService updates the information on the pgBouncer Service.
+// Specifically, it determines if it should use the information from the parent
+// PostgreSQL cluster or any specific overrides that are available in the
+// pgBouncer spec.
+func UpdatePgBouncerService(clientset kubernetes.Interface, cluster *crv1.Pgcluster) error {
+ info := serviceInfo{
+ serviceName: fmt.Sprintf(pgBouncerDeploymentFormat, cluster.Name),
+ serviceNamespace: cluster.Namespace,
+ serviceType: cluster.Spec.ServiceType,
+ }
+
+ // if the pgBouncer ServiceType is set, use that
+ if cluster.Spec.PgBouncer.ServiceType != "" {
+ info.serviceType = cluster.Spec.PgBouncer.ServiceType
+ }
+
+ return updateService(clientset, info)
+}
+
// checkPgBouncerInstall checks to see if pgBouncer is installed in the
// PostgreSQL cluster, which involves checking to see if the pgBouncer role is
// present in the PostgreSQL cluster
@@ -643,7 +670,13 @@ func createPgBouncerService(clientset kubernetes.Interface, cluster *crv1.Pgclus
ClusterName: cluster.Name,
// TODO: I think "port" needs to be evaluated, but I think for now using
// the standard PostgreSQL port works
- Port: operator.Pgo.Cluster.Port,
+ Port: operator.Pgo.Cluster.Port,
+ ServiceType: cluster.Spec.ServiceType,
+ }
+
+ // override the service type if it is set specifically for pgBouncer
+ if cluster.Spec.PgBouncer.ServiceType != "" {
+ fields.ServiceType = cluster.Spec.PgBouncer.ServiceType
}
if err := CreateService(clientset, &fields, cluster.Namespace); err != nil {
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index 1eade8a99f..4ad84d3b12 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -320,6 +320,10 @@ type PgBouncerSpec struct {
// Limits, if specified, contains the container resource limits
// for any pgBouncer Deployments that are part of a PostgreSQL cluster
Limits v1.ResourceList `json:"limits"`
+ // ServiceType references the type of Service that should be used when
+ // deploying the pgBouncer instances. If unset, it defaults to the value of
+ // the PostgreSQL cluster.
+ ServiceType v1.ServiceType `json:"serviceType"`
// TLSSecret contains the name of the secret to use that contains the TLS
// keypair for pgBouncer
// This follows the Kubernetes secret format ("kubernetes.io/tls") which has
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index ee2cb8aff0..ae01b0f4f4 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -93,6 +93,10 @@ type CreateClusterRequest struct {
// PostgreSQL cluster. Only works if PgbouncerFlag is set, and if so, it must
// be at least 1. If 0 is passed in, it will automatically be set to 1
PgBouncerReplicas int32
+ // PgBouncerServiceType, if specified and if PgbouncerFlag is true, is the
+ // ServiceType to use for pgBouncer. If not set, the value is defaulted to that
+ // of the PostgreSQL cluster ServiceType.
+ PgBouncerServiceType v1.ServiceType
// PgBouncerTLSSecret is the name of the Secret containing the TLS keypair
// for enabling TLS with pgBouncer. This also requires for TLSSecret and
// CASecret to be set
diff --git a/pkg/apiservermsgs/pgbouncermsgs.go b/pkg/apiservermsgs/pgbouncermsgs.go
index 49669b8fe7..e971e31424 100644
--- a/pkg/apiservermsgs/pgbouncermsgs.go
+++ b/pkg/apiservermsgs/pgbouncermsgs.go
@@ -1,5 +1,7 @@
package apiservermsgs
+import v1 "k8s.io/api/core/v1"
+
/*
Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
@@ -40,6 +42,9 @@ type CreatePgbouncerRequest struct {
// automatically be set to 1
Replicas int32
Selector string
+ // ServiceType is the kind of Service to deploy with this instance. If unset,
+ // it will default to the value for the PostgreSQL cluster.
+ ServiceType v1.ServiceType `json:"serviceType"`
// TLSSecret is the name of the secret that contains the keypair required to
// deploy TLS-enabled pgBouncer
TLSSecret string
@@ -186,6 +191,10 @@ type UpdatePgBouncerRequest struct {
// Selector is optional and contains a selector for pgBouncer deployments that
// are to be updated
Selector string
+
+ // ServiceType is the kind of Service to deploy with this instance. If unset,
+ // it will default to the value for the PostgreSQL cluster.
+ ServiceType v1.ServiceType `json:"serviceType"`
}
// UpdatePgBouncerResponse contains the resulting output of the update request
From 107b8666a067899e5a208783d7f30336e2acdcec Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 2 Jan 2021 11:54:02 -0500
Subject: [PATCH 105/276] Add `--service-type` flag to `pgo update cluster`
While a previous commit introduced the ability to update the Service
type of a PostgreSQL cluster via a custom resource, this commit
introduces the ability to do so from the pgo client.
Additionally, this ensures a Service Type update to a cluster propagates
to pgBouncer if the pgBouncer Service type is set to empty.
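The fallback rule being wired up here can be sketched in a few lines: pgBouncer's own ServiceType wins when set, and an empty value inherits the cluster's, which is why a cluster-level Service type change must also refresh the pgBouncer Service when the pgBouncer value is empty. Plain strings stand in for `v1.ServiceType` to keep the sketch dependency-free:

```go
package main

import "fmt"

// effectiveServiceType sketches the override logic in
// UpdatePgBouncerService: use the pgBouncer spec's ServiceType when it is
// set, otherwise fall back to the PostgreSQL cluster's ServiceType.
func effectiveServiceType(clusterType, pgBouncerType string) string {
	if pgBouncerType != "" {
		return pgBouncerType
	}
	return clusterType
}

func main() {
	fmt.Println(effectiveServiceType("LoadBalancer", ""))         // inherits the cluster value
	fmt.Println(effectiveServiceType("LoadBalancer", "NodePort")) // pgBouncer override wins
}
```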
---
cmd/pgo/cmd/cluster.go | 1 +
cmd/pgo/cmd/update.go | 1 +
.../pgo-client/reference/pgo_update_cluster.md | 3 ++-
internal/apiserver/clusterservice/clusterimpl.go | 12 ++++++++++++
internal/controller/pgcluster/pgclustercontroller.go | 8 ++++++++
pkg/apiservermsgs/clustermsgs.go | 4 +++-
6 files changed, 27 insertions(+), 2 deletions(-)
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index f8db1a8409..6a7d7fd9a3 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -673,6 +673,7 @@ func updateCluster(args []string, ns string) {
r.ExporterMemoryLimit = ExporterMemoryLimit
r.ExporterRotatePassword = ExporterRotatePassword
r.Clustername = args
+ r.ServiceType = v1.ServiceType(ServiceType)
r.Startup = Startup
r.Shutdown = Shutdown
// set the container resource requests
diff --git a/cmd/pgo/cmd/update.go b/cmd/pgo/cmd/update.go
index e64c48ed76..0f6b06c97b 100644
--- a/cmd/pgo/cmd/update.go
+++ b/cmd/pgo/cmd/update.go
@@ -129,6 +129,7 @@ func init() {
UpdateClusterCmd.Flags().BoolVar(&ExporterRotatePassword, "exporter-rotate-password", false, "Used to rotate the password for the metrics collection agent.")
UpdateClusterCmd.Flags().BoolVarP(&EnableStandby, "enable-standby", "", false,
"Enables standby mode in the cluster(s) specified.")
+ UpdateClusterCmd.Flags().StringVar(&ServiceType, "service-type", "", "The Service type to use for the PostgreSQL cluster. If not set, the pgo.yaml default will be used.")
UpdateClusterCmd.Flags().BoolVar(&Startup, "startup", false, "Restart the database cluster if it "+
"is currently shutdown.")
UpdateClusterCmd.Flags().BoolVar(&Shutdown, "shutdown", false, "Shutdown the database "+
diff --git a/docs/content/pgo-client/reference/pgo_update_cluster.md b/docs/content/pgo-client/reference/pgo_update_cluster.md
index 08b1e5ae4c..70689cdfea 100644
--- a/docs/content/pgo-client/reference/pgo_update_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_update_cluster.md
@@ -59,6 +59,7 @@ pgo update cluster [flags]
--pgbackrest-memory-limit string Set the amount of memory to limit for the pgBackRest repository.
--promote-standby Disables standby mode (if enabled) and promotes the cluster(s) specified.
-s, --selector string The selector to use for cluster filtering.
+ --service-type string The Service type to use for the PostgreSQL cluster. If not set, the pgo.yaml default will be used.
--shutdown Shutdown the database cluster if it is currently running.
--startup Restart the database cluster if it is currently shutdown.
--tablespace strings Add a PostgreSQL tablespace on the cluster, e.g. "name=ts1:storageconfig=nfsstorage". The format is a key/value map that is delimited by "=" and separated by ":". The following parameters are available:
@@ -89,4 +90,4 @@ pgo update cluster [flags]
* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster
-###### Auto generated by spf13/cobra on 31-Dec-2020
+###### Auto generated by spf13/cobra on 2-Jan-2021
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 0297e548ba..54a4c8b3a6 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -1900,6 +1900,18 @@ func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterRespons
case msgs.UpdateClusterPGBadgerDoNothing: // this is never reached -- no-op
}
+ // set the optional ServiceType parameter
+ switch request.ServiceType {
+ default:
+ response.Status.Code = msgs.Error
+ response.Status.Msg = fmt.Sprintf("invalid service type %q", request.ServiceType)
+ return response
+ case v1.ServiceTypeClusterIP, v1.ServiceTypeNodePort,
+ v1.ServiceTypeLoadBalancer, v1.ServiceTypeExternalName:
+ cluster.Spec.ServiceType = request.ServiceType
+ case "": // no-op, well, no change
+ }
+
// enable or disable standby mode based on UpdateClusterStandbyStatus provided in
// the request
switch request.Standby {
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index 8875ec481a..713bf6887f 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -493,6 +493,14 @@ func updateServices(clientset kubeapi.Interface, cluster *crv1.Pgcluster) {
log.Error(err)
}
+ // if there is a pgBouncer and the pgBouncer service type value is empty,
+ // update the pgBouncer Service
+ if cluster.Spec.PgBouncer.Enabled() && cluster.Spec.PgBouncer.ServiceType == "" {
+ if err := clusteroperator.UpdatePgBouncerService(clientset, cluster); err != nil {
+ log.Error(err)
+ }
+ }
+
// handle the replica instances. Ish. This is kind of "broken" due to the
// fact that we have a single service for all of the replicas. so, we'll
// loop through all of the replicas and try to see if any of them have
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index ae01b0f4f4..d6c588eb56 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -467,7 +467,9 @@ type UpdateClusterRequest struct {
Metrics UpdateClusterMetrics
// PGBadger allows for the enabling/disabling of the pgBadger sidecar. This can
// cause downtime and triggers a rolling update
- PGBadger UpdateClusterPGBadger
+ PGBadger UpdateClusterPGBadger
+ // ServiceType, if specified, will change the service type of a cluster.
+ ServiceType v1.ServiceType
Standby UpdateClusterStandbyStatus
Startup bool
Shutdown bool
From 383dfa95991553352623f14d3d0d4c9193795855 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 2 Jan 2021 11:58:10 -0500
Subject: [PATCH 106/276] Happy New Year!
Good riddance 2020, hello 2021.
---
LICENSE.md | 2 +-
bin/check-deps.sh | 2 +-
bin/crunchy-postgres-exporter/common_lib.sh | 2 +-
bin/crunchy-postgres-exporter/start.sh | 2 +-
bin/get-deps.sh | 2 +-
bin/get-pgmonitor.sh | 2 +-
bin/pgo-event/pgo-event.sh | 2 +-
bin/pgo-rmdata/start.sh | 2 +-
bin/pgo-scheduler/start.sh | 2 +-
bin/pgo-sqlrunner/start.sh | 2 +-
bin/pre-pull-crunchy-containers.sh | 2 +-
bin/pull-from-gcr.sh | 2 +-
bin/push-ccp-to-gcr.sh | 2 +-
bin/push-to-gcr.sh | 2 +-
bin/uid_daemon.sh | 2 +-
bin/upgrade-secret.sh | 2 +-
cmd/apiserver/main.go | 2 +-
cmd/pgo-rmdata/main.go | 2 +-
cmd/pgo-rmdata/process.go | 2 +-
cmd/pgo-rmdata/types.go | 2 +-
cmd/pgo-scheduler/main.go | 2 +-
cmd/pgo-scheduler/scheduler/configmapcontroller.go | 2 +-
cmd/pgo-scheduler/scheduler/controllermanager.go | 2 +-
cmd/pgo-scheduler/scheduler/pgbackrest.go | 2 +-
cmd/pgo-scheduler/scheduler/policy.go | 2 +-
cmd/pgo-scheduler/scheduler/scheduler.go | 2 +-
cmd/pgo-scheduler/scheduler/tasks.go | 2 +-
cmd/pgo-scheduler/scheduler/types.go | 2 +-
cmd/pgo-scheduler/scheduler/validate.go | 2 +-
cmd/pgo-scheduler/scheduler/validate_test.go | 2 +-
cmd/pgo/api/backrest.go | 2 +-
cmd/pgo/api/cat.go | 2 +-
cmd/pgo/api/cluster.go | 2 +-
cmd/pgo/api/common.go | 2 +-
cmd/pgo/api/config.go | 2 +-
cmd/pgo/api/df.go | 2 +-
cmd/pgo/api/failover.go | 2 +-
cmd/pgo/api/label.go | 2 +-
cmd/pgo/api/namespace.go | 2 +-
cmd/pgo/api/pgadmin.go | 2 +-
cmd/pgo/api/pgbouncer.go | 2 +-
cmd/pgo/api/pgdump.go | 2 +-
cmd/pgo/api/pgorole.go | 2 +-
cmd/pgo/api/pgouser.go | 2 +-
cmd/pgo/api/policy.go | 2 +-
cmd/pgo/api/pvc.go | 2 +-
cmd/pgo/api/reload.go | 2 +-
cmd/pgo/api/restart.go | 2 +-
cmd/pgo/api/restore.go | 2 +-
cmd/pgo/api/restoreDump.go | 2 +-
cmd/pgo/api/scale.go | 2 +-
cmd/pgo/api/scaledown.go | 2 +-
cmd/pgo/api/schedule.go | 2 +-
cmd/pgo/api/status.go | 2 +-
cmd/pgo/api/test.go | 2 +-
cmd/pgo/api/upgrade.go | 2 +-
cmd/pgo/api/user.go | 2 +-
cmd/pgo/api/version.go | 2 +-
cmd/pgo/api/workflow.go | 2 +-
cmd/pgo/cmd/auth.go | 2 +-
cmd/pgo/cmd/backrest.go | 2 +-
cmd/pgo/cmd/backup.go | 2 +-
cmd/pgo/cmd/cat.go | 2 +-
cmd/pgo/cmd/cluster.go | 2 +-
cmd/pgo/cmd/common.go | 2 +-
cmd/pgo/cmd/config.go | 2 +-
cmd/pgo/cmd/create.go | 2 +-
cmd/pgo/cmd/delete.go | 2 +-
cmd/pgo/cmd/df.go | 2 +-
cmd/pgo/cmd/failover.go | 2 +-
cmd/pgo/cmd/flags.go | 2 +-
cmd/pgo/cmd/label.go | 2 +-
cmd/pgo/cmd/namespace.go | 2 +-
cmd/pgo/cmd/pgadmin.go | 2 +-
cmd/pgo/cmd/pgbouncer.go | 2 +-
cmd/pgo/cmd/pgdump.go | 2 +-
cmd/pgo/cmd/pgorole.go | 2 +-
cmd/pgo/cmd/pgouser.go | 2 +-
cmd/pgo/cmd/policy.go | 2 +-
cmd/pgo/cmd/pvc.go | 2 +-
cmd/pgo/cmd/reload.go | 2 +-
cmd/pgo/cmd/restart.go | 2 +-
cmd/pgo/cmd/restore.go | 2 +-
cmd/pgo/cmd/root.go | 2 +-
cmd/pgo/cmd/scale.go | 2 +-
cmd/pgo/cmd/scaledown.go | 2 +-
cmd/pgo/cmd/schedule.go | 2 +-
cmd/pgo/cmd/show.go | 2 +-
cmd/pgo/cmd/status.go | 2 +-
cmd/pgo/cmd/test.go | 2 +-
cmd/pgo/cmd/update.go | 2 +-
cmd/pgo/cmd/upgrade.go | 2 +-
cmd/pgo/cmd/user.go | 2 +-
cmd/pgo/cmd/version.go | 2 +-
cmd/pgo/cmd/watch.go | 2 +-
cmd/pgo/cmd/workflow.go | 2 +-
cmd/pgo/generatedocs.go | 2 +-
cmd/pgo/main.go | 2 +-
cmd/pgo/util/confirmation.go | 2 +-
cmd/pgo/util/pad.go | 2 +-
cmd/pgo/util/validation.go | 2 +-
cmd/postgres-operator/main.go | 2 +-
cmd/postgres-operator/open_telemetry.go | 2 +-
deploy/add-targeted-namespace-reconcile-rbac.sh | 2 +-
deploy/add-targeted-namespace.sh | 2 +-
deploy/cleannamespaces.sh | 2 +-
deploy/cleanup-rbac.sh | 2 +-
deploy/cleanup.sh | 2 +-
deploy/deploy.sh | 2 +-
deploy/gen-api-keys.sh | 2 +-
deploy/install-bootstrap-creds.sh | 2 +-
deploy/install-rbac.sh | 2 +-
deploy/remove-crd.sh | 2 +-
deploy/setupnamespaces.sh | 2 +-
deploy/show-crd.sh | 2 +-
deploy/upgrade-creds.sh | 2 +-
deploy/upgrade-pgo.sh | 2 +-
examples/create-by-resource/run.sh | 2 +-
examples/custom-config/create.sh | 2 +-
examples/custom-config/setup.sql | 2 +-
hack/boilerplate.go.txt | 2 +-
hack/config_sync.sh | 2 +-
hack/update-codegen.sh | 2 +-
.../roles/pgo-operator/templates/add-targeted-namespace.sh.j2 | 2 +-
installers/image/bin/pgo-deploy.sh | 2 +-
installers/kubectl/client-setup.sh | 2 +-
internal/apiserver/backrestservice/backrestimpl.go | 2 +-
internal/apiserver/backrestservice/backrestservice.go | 2 +-
internal/apiserver/backupoptions/backupoptionsutil.go | 2 +-
internal/apiserver/backupoptions/pgbackrestoptions.go | 2 +-
internal/apiserver/backupoptions/pgdumpoptions.go | 2 +-
internal/apiserver/catservice/catimpl.go | 2 +-
internal/apiserver/catservice/catservice.go | 2 +-
internal/apiserver/clusterservice/clusterimpl.go | 2 +-
internal/apiserver/clusterservice/clusterservice.go | 2 +-
internal/apiserver/clusterservice/scaleimpl.go | 2 +-
internal/apiserver/clusterservice/scaleservice.go | 2 +-
internal/apiserver/common.go | 2 +-
internal/apiserver/common_test.go | 2 +-
internal/apiserver/configservice/configimpl.go | 2 +-
internal/apiserver/configservice/configservice.go | 2 +-
internal/apiserver/dfservice/dfimpl.go | 2 +-
internal/apiserver/dfservice/dfservice.go | 2 +-
internal/apiserver/failoverservice/failoverimpl.go | 2 +-
internal/apiserver/failoverservice/failoverservice.go | 2 +-
internal/apiserver/labelservice/labelimpl.go | 2 +-
internal/apiserver/labelservice/labelservice.go | 2 +-
internal/apiserver/middleware.go | 2 +-
internal/apiserver/namespaceservice/namespaceimpl.go | 2 +-
internal/apiserver/namespaceservice/namespaceservice.go | 2 +-
internal/apiserver/perms.go | 2 +-
internal/apiserver/pgadminservice/pgadminimpl.go | 2 +-
internal/apiserver/pgadminservice/pgadminservice.go | 2 +-
internal/apiserver/pgbouncerservice/pgbouncerimpl.go | 2 +-
internal/apiserver/pgbouncerservice/pgbouncerservice.go | 2 +-
internal/apiserver/pgdumpservice/pgdumpimpl.go | 2 +-
internal/apiserver/pgdumpservice/pgdumpservice.go | 2 +-
internal/apiserver/pgoroleservice/pgoroleimpl.go | 2 +-
internal/apiserver/pgoroleservice/pgoroleservice.go | 2 +-
internal/apiserver/pgouserservice/pgouserimpl.go | 2 +-
internal/apiserver/pgouserservice/pgouserservice.go | 2 +-
internal/apiserver/policyservice/policyimpl.go | 2 +-
internal/apiserver/policyservice/policyservice.go | 2 +-
internal/apiserver/pvcservice/pvcimpl.go | 2 +-
internal/apiserver/pvcservice/pvcservice.go | 2 +-
internal/apiserver/reloadservice/reloadimpl.go | 2 +-
internal/apiserver/reloadservice/reloadservice.go | 2 +-
internal/apiserver/restartservice/restartimpl.go | 2 +-
internal/apiserver/restartservice/restartservice.go | 2 +-
internal/apiserver/root.go | 2 +-
internal/apiserver/routing/doc.go | 2 +-
internal/apiserver/routing/routes.go | 2 +-
internal/apiserver/scheduleservice/scheduleimpl.go | 2 +-
internal/apiserver/scheduleservice/scheduleservice.go | 2 +-
internal/apiserver/statusservice/statusimpl.go | 2 +-
internal/apiserver/statusservice/statusservice.go | 2 +-
internal/apiserver/upgradeservice/upgradeimpl.go | 2 +-
internal/apiserver/upgradeservice/upgradeservice.go | 2 +-
internal/apiserver/userservice/userimpl.go | 2 +-
internal/apiserver/userservice/userimpl_test.go | 2 +-
internal/apiserver/userservice/userservice.go | 2 +-
internal/apiserver/versionservice/versionimpl.go | 2 +-
internal/apiserver/versionservice/versionservice.go | 2 +-
internal/apiserver/workflowservice/workflowimpl.go | 2 +-
internal/apiserver/workflowservice/workflowservice.go | 2 +-
internal/config/annotations.go | 2 +-
internal/config/defaults.go | 2 +-
internal/config/images.go | 2 +-
internal/config/labels.go | 2 +-
internal/config/pgoconfig.go | 2 +-
internal/config/secrets.go | 2 +-
internal/config/volumes.go | 2 +-
internal/controller/configmap/configmapcontroller.go | 2 +-
internal/controller/configmap/synchandler.go | 2 +-
internal/controller/controllerutil.go | 2 +-
internal/controller/job/backresthandler.go | 2 +-
internal/controller/job/bootstraphandler.go | 2 +-
internal/controller/job/jobcontroller.go | 2 +-
internal/controller/job/jobevents.go | 2 +-
internal/controller/job/jobutil.go | 2 +-
internal/controller/job/pgdumphandler.go | 2 +-
internal/controller/job/rmdatahandler.go | 2 +-
internal/controller/manager/controllermanager.go | 2 +-
internal/controller/manager/rbac.go | 2 +-
internal/controller/namespace/namespacecontroller.go | 2 +-
internal/controller/pgcluster/pgclustercontroller.go | 2 +-
internal/controller/pgpolicy/pgpolicycontroller.go | 2 +-
internal/controller/pgreplica/pgreplicacontroller.go | 2 +-
internal/controller/pgtask/backresthandler.go | 2 +-
internal/controller/pgtask/pgtaskcontroller.go | 2 +-
internal/controller/pod/inithandler.go | 2 +-
internal/controller/pod/podcontroller.go | 2 +-
internal/controller/pod/podevents.go | 2 +-
internal/controller/pod/promotionhandler.go | 2 +-
internal/kubeapi/client_config.go | 2 +-
internal/kubeapi/endpoints.go | 2 +-
internal/kubeapi/errors.go | 2 +-
internal/kubeapi/exec.go | 2 +-
internal/kubeapi/fake/clientset.go | 2 +-
internal/kubeapi/fake/fakeclients.go | 2 +-
internal/kubeapi/patch.go | 2 +-
internal/kubeapi/patch_test.go | 2 +-
internal/kubeapi/volumes.go | 2 +-
internal/kubeapi/volumes_test.go | 2 +-
internal/logging/loglib.go | 2 +-
internal/ns/nslogic.go | 2 +-
internal/operator/backrest/backup.go | 2 +-
internal/operator/backrest/repo.go | 2 +-
internal/operator/backrest/restore.go | 2 +-
internal/operator/backrest/stanza.go | 2 +-
internal/operator/cluster/cluster.go | 2 +-
internal/operator/cluster/clusterlogic.go | 2 +-
internal/operator/cluster/common.go | 2 +-
internal/operator/cluster/common_test.go | 2 +-
internal/operator/cluster/exporter.go | 2 +-
internal/operator/cluster/failover.go | 2 +-
internal/operator/cluster/failoverlogic.go | 2 +-
internal/operator/cluster/pgadmin.go | 2 +-
internal/operator/cluster/pgbadger.go | 2 +-
internal/operator/cluster/pgbouncer.go | 2 +-
internal/operator/cluster/pgbouncer_test.go | 2 +-
internal/operator/cluster/rmdata.go | 2 +-
internal/operator/cluster/rolling.go | 2 +-
internal/operator/cluster/service.go | 2 +-
internal/operator/cluster/standby.go | 2 +-
internal/operator/cluster/upgrade.go | 2 +-
internal/operator/clusterutilities.go | 2 +-
internal/operator/clusterutilities_test.go | 2 +-
internal/operator/common.go | 2 +-
internal/operator/common_test.go | 2 +-
internal/operator/config/configutil.go | 2 +-
internal/operator/config/dcs.go | 2 +-
internal/operator/config/localdb.go | 2 +-
internal/operator/operatorupgrade/version-check.go | 2 +-
internal/operator/pgbackrest.go | 2 +-
internal/operator/pgbackrest_test.go | 2 +-
internal/operator/pgdump/dump.go | 2 +-
internal/operator/pgdump/restore.go | 2 +-
internal/operator/pvc/pvc.go | 2 +-
internal/operator/storage.go | 2 +-
internal/operator/storage_test.go | 2 +-
internal/operator/task/applypolicies.go | 2 +-
internal/operator/task/rmdata.go | 2 +-
internal/operator/task/workflow.go | 2 +-
internal/operator/wal.go | 2 +-
internal/patroni/doc.go | 2 +-
internal/patroni/patroni.go | 2 +-
internal/pgadmin/backoff.go | 2 +-
internal/pgadmin/backoff_test.go | 2 +-
internal/pgadmin/crypto.go | 2 +-
internal/pgadmin/crypto_test.go | 2 +-
internal/pgadmin/doc.go | 2 +-
internal/pgadmin/hash.go | 2 +-
internal/pgadmin/logic.go | 2 +-
internal/pgadmin/runner.go | 2 +-
internal/pgadmin/server.go | 2 +-
internal/postgres/doc.go | 2 +-
internal/postgres/password/doc.go | 2 +-
internal/postgres/password/md5.go | 2 +-
internal/postgres/password/md5_test.go | 2 +-
internal/postgres/password/password.go | 2 +-
internal/postgres/password/password_test.go | 2 +-
internal/postgres/password/scram.go | 2 +-
internal/postgres/password/scram_test.go | 2 +-
internal/tlsutil/primitives.go | 2 +-
internal/tlsutil/primitives_test.go | 2 +-
internal/util/backrest.go | 2 +-
internal/util/cluster.go | 2 +-
internal/util/cluster_test.go | 2 +-
internal/util/exporter.go | 2 +-
internal/util/exporter_test.go | 2 +-
internal/util/failover.go | 2 +-
internal/util/pgbouncer.go | 2 +-
internal/util/policy.go | 2 +-
internal/util/secrets.go | 2 +-
internal/util/secrets_test.go | 2 +-
internal/util/ssh.go | 2 +-
internal/util/util.go | 2 +-
pkg/apis/crunchydata.com/v1/cluster.go | 2 +-
pkg/apis/crunchydata.com/v1/cluster_test.go | 2 +-
pkg/apis/crunchydata.com/v1/common.go | 2 +-
pkg/apis/crunchydata.com/v1/common_test.go | 2 +-
pkg/apis/crunchydata.com/v1/doc.go | 2 +-
pkg/apis/crunchydata.com/v1/errors.go | 2 +-
pkg/apis/crunchydata.com/v1/policy.go | 2 +-
pkg/apis/crunchydata.com/v1/register.go | 2 +-
pkg/apis/crunchydata.com/v1/replica.go | 2 +-
pkg/apis/crunchydata.com/v1/task.go | 2 +-
pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go | 2 +-
pkg/apiservermsgs/backrestmsgs.go | 2 +-
pkg/apiservermsgs/catmsgs.go | 2 +-
pkg/apiservermsgs/clustermsgs.go | 2 +-
pkg/apiservermsgs/common.go | 2 +-
pkg/apiservermsgs/configmsgs.go | 2 +-
pkg/apiservermsgs/dfmsgs.go | 2 +-
pkg/apiservermsgs/failovermsgs.go | 2 +-
pkg/apiservermsgs/labelmsgs.go | 2 +-
pkg/apiservermsgs/namespacemsgs.go | 2 +-
pkg/apiservermsgs/pgadminmsgs.go | 2 +-
pkg/apiservermsgs/pgbouncermsgs.go | 2 +-
pkg/apiservermsgs/pgdumpmsgs.go | 2 +-
pkg/apiservermsgs/pgorolemsgs.go | 2 +-
pkg/apiservermsgs/pgousermsgs.go | 2 +-
pkg/apiservermsgs/policymsgs.go | 2 +-
pkg/apiservermsgs/pvcmsgs.go | 2 +-
pkg/apiservermsgs/reloadmsgs.go | 2 +-
pkg/apiservermsgs/restartmsgs.go | 2 +-
pkg/apiservermsgs/schedulemsgs.go | 2 +-
pkg/apiservermsgs/statusmsgs.go | 2 +-
pkg/apiservermsgs/upgrademsgs.go | 2 +-
pkg/apiservermsgs/usermsgs.go | 2 +-
pkg/apiservermsgs/usermsgs_test.go | 2 +-
pkg/apiservermsgs/versionmsgs.go | 2 +-
pkg/apiservermsgs/watchmsgs.go | 2 +-
pkg/apiservermsgs/workflowmsgs.go | 2 +-
pkg/events/eventing.go | 2 +-
pkg/events/eventtype.go | 2 +-
pkg/events/pgoeventtype.go | 2 +-
pkg/generated/clientset/versioned/clientset.go | 2 +-
pkg/generated/clientset/versioned/doc.go | 2 +-
pkg/generated/clientset/versioned/fake/clientset_generated.go | 2 +-
pkg/generated/clientset/versioned/fake/doc.go | 2 +-
pkg/generated/clientset/versioned/fake/register.go | 2 +-
pkg/generated/clientset/versioned/scheme/doc.go | 2 +-
pkg/generated/clientset/versioned/scheme/register.go | 2 +-
.../typed/crunchydata.com/v1/crunchydata.com_client.go | 2 +-
.../clientset/versioned/typed/crunchydata.com/v1/doc.go | 2 +-
.../clientset/versioned/typed/crunchydata.com/v1/fake/doc.go | 2 +-
.../crunchydata.com/v1/fake/fake_crunchydata.com_client.go | 2 +-
.../versioned/typed/crunchydata.com/v1/fake/fake_pgcluster.go | 2 +-
.../versioned/typed/crunchydata.com/v1/fake/fake_pgpolicy.go | 2 +-
.../versioned/typed/crunchydata.com/v1/fake/fake_pgreplica.go | 2 +-
.../versioned/typed/crunchydata.com/v1/fake/fake_pgtask.go | 2 +-
.../versioned/typed/crunchydata.com/v1/generated_expansion.go | 2 +-
.../clientset/versioned/typed/crunchydata.com/v1/pgcluster.go | 2 +-
.../clientset/versioned/typed/crunchydata.com/v1/pgpolicy.go | 2 +-
.../clientset/versioned/typed/crunchydata.com/v1/pgreplica.go | 2 +-
.../clientset/versioned/typed/crunchydata.com/v1/pgtask.go | 2 +-
.../informers/externalversions/crunchydata.com/interface.go | 2 +-
.../informers/externalversions/crunchydata.com/v1/interface.go | 2 +-
.../informers/externalversions/crunchydata.com/v1/pgcluster.go | 2 +-
.../informers/externalversions/crunchydata.com/v1/pgpolicy.go | 2 +-
.../informers/externalversions/crunchydata.com/v1/pgreplica.go | 2 +-
.../informers/externalversions/crunchydata.com/v1/pgtask.go | 2 +-
pkg/generated/informers/externalversions/factory.go | 2 +-
pkg/generated/informers/externalversions/generic.go | 2 +-
.../externalversions/internalinterfaces/factory_interfaces.go | 2 +-
pkg/generated/listers/crunchydata.com/v1/expansion_generated.go | 2 +-
pkg/generated/listers/crunchydata.com/v1/pgcluster.go | 2 +-
pkg/generated/listers/crunchydata.com/v1/pgpolicy.go | 2 +-
pkg/generated/listers/crunchydata.com/v1/pgreplica.go | 2 +-
pkg/generated/listers/crunchydata.com/v1/pgtask.go | 2 +-
pv/create-pv-nfs-label.sh | 2 +-
pv/create-pv-nfs-legacy.sh | 2 +-
pv/create-pv-nfs.sh | 2 +-
pv/create-pv.sh | 2 +-
pv/delete-pv.sh | 2 +-
testing/pgo_cli/cluster_backup_test.go | 2 +-
testing/pgo_cli/cluster_cat_test.go | 2 +-
testing/pgo_cli/cluster_create_test.go | 2 +-
testing/pgo_cli/cluster_delete_test.go | 2 +-
testing/pgo_cli/cluster_df_test.go | 2 +-
testing/pgo_cli/cluster_failover_test.go | 2 +-
testing/pgo_cli/cluster_label_test.go | 2 +-
testing/pgo_cli/cluster_pgbouncer_test.go | 2 +-
testing/pgo_cli/cluster_policy_test.go | 2 +-
testing/pgo_cli/cluster_pvc_test.go | 2 +-
testing/pgo_cli/cluster_reload_test.go | 2 +-
testing/pgo_cli/cluster_restart_test.go | 2 +-
testing/pgo_cli/cluster_scale_test.go | 2 +-
testing/pgo_cli/cluster_scaledown_test.go | 2 +-
testing/pgo_cli/cluster_test_test.go | 2 +-
testing/pgo_cli/cluster_user_test.go | 2 +-
testing/pgo_cli/operator_namespace_test.go | 2 +-
testing/pgo_cli/operator_rbac_test.go | 2 +-
testing/pgo_cli/operator_test.go | 2 +-
testing/pgo_cli/suite_helpers_test.go | 2 +-
testing/pgo_cli/suite_pgo_cmd_test.go | 2 +-
testing/pgo_cli/suite_test.go | 2 +-
399 files changed, 399 insertions(+), 399 deletions(-)
diff --git a/LICENSE.md b/LICENSE.md
index 90fe0562e2..f8ebe3dacd 100644
--- a/LICENSE.md
+++ b/LICENSE.md
@@ -176,7 +176,7 @@
END OF TERMS AND CONDITIONS
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/bin/check-deps.sh b/bin/check-deps.sh
index fd0d77ce24..ba66ec9f1d 100755
--- a/bin/check-deps.sh
+++ b/bin/check-deps.sh
@@ -1,6 +1,6 @@
#!/bin/bash -e
-# Copyright 2020 Crunchy Data Solutions, Inc.
+# Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/crunchy-postgres-exporter/common_lib.sh b/bin/crunchy-postgres-exporter/common_lib.sh
index 283352062b..a42a618eb1 100755
--- a/bin/crunchy-postgres-exporter/common_lib.sh
+++ b/bin/crunchy-postgres-exporter/common_lib.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/crunchy-postgres-exporter/start.sh b/bin/crunchy-postgres-exporter/start.sh
index a7397973a9..42ed3d6d03 100755
--- a/bin/crunchy-postgres-exporter/start.sh
+++ b/bin/crunchy-postgres-exporter/start.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/get-deps.sh b/bin/get-deps.sh
index edf4a08b81..0b895f329d 100755
--- a/bin/get-deps.sh
+++ b/bin/get-deps.sh
@@ -1,6 +1,6 @@
#!/bin/bash -e
-# Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/get-pgmonitor.sh b/bin/get-pgmonitor.sh
index efa62b8628..e8cf4a0e02 100755
--- a/bin/get-pgmonitor.sh
+++ b/bin/get-pgmonitor.sh
@@ -1,6 +1,6 @@
#!/bin/bash -e
-# Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/pgo-event/pgo-event.sh b/bin/pgo-event/pgo-event.sh
index cddcb2e708..1602e56193 100755
--- a/bin/pgo-event/pgo-event.sh
+++ b/bin/pgo-event/pgo-event.sh
@@ -1,6 +1,6 @@
#!/bin/bash -x
-# Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/pgo-rmdata/start.sh b/bin/pgo-rmdata/start.sh
index 95a4903289..228f891694 100755
--- a/bin/pgo-rmdata/start.sh
+++ b/bin/pgo-rmdata/start.sh
@@ -1,6 +1,6 @@
#!/bin/bash -x
-# Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/pgo-scheduler/start.sh b/bin/pgo-scheduler/start.sh
index 4a32cf8bc3..4549d82d57 100755
--- a/bin/pgo-scheduler/start.sh
+++ b/bin/pgo-scheduler/start.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/pgo-sqlrunner/start.sh b/bin/pgo-sqlrunner/start.sh
index 0b2eb6d417..01422d26d7 100755
--- a/bin/pgo-sqlrunner/start.sh
+++ b/bin/pgo-sqlrunner/start.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/pre-pull-crunchy-containers.sh b/bin/pre-pull-crunchy-containers.sh
index 5a7031f8e9..91cfcb9dc8 100755
--- a/bin/pre-pull-crunchy-containers.sh
+++ b/bin/pre-pull-crunchy-containers.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/pull-from-gcr.sh b/bin/pull-from-gcr.sh
index 0e57fc13db..2087443de0 100755
--- a/bin/pull-from-gcr.sh
+++ b/bin/pull-from-gcr.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/push-ccp-to-gcr.sh b/bin/push-ccp-to-gcr.sh
index 59e2e329e8..d476c07b0b 100755
--- a/bin/push-ccp-to-gcr.sh
+++ b/bin/push-ccp-to-gcr.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/push-to-gcr.sh b/bin/push-to-gcr.sh
index 4bc46b933c..cd21a868f4 100755
--- a/bin/push-to-gcr.sh
+++ b/bin/push-to-gcr.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/uid_daemon.sh b/bin/uid_daemon.sh
index 83d8aca5e2..bc988bae79 100755
--- a/bin/uid_daemon.sh
+++ b/bin/uid_daemon.sh
@@ -1,6 +1,6 @@
#!/usr/bin/bash
-# Copyright 2020 Crunchy Data Solutions, Inc.
+# Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/bin/upgrade-secret.sh b/bin/upgrade-secret.sh
index ee93af1377..f852008890 100755
--- a/bin/upgrade-secret.sh
+++ b/bin/upgrade-secret.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/cmd/apiserver/main.go b/cmd/apiserver/main.go
index 8b6b1216af..562a5870a8 100644
--- a/cmd/apiserver/main.go
+++ b/cmd/apiserver/main.go
@@ -1,7 +1,7 @@
package main
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo-rmdata/main.go b/cmd/pgo-rmdata/main.go
index b4c5c2c4fc..1b3b4d82cf 100644
--- a/cmd/pgo-rmdata/main.go
+++ b/cmd/pgo-rmdata/main.go
@@ -1,7 +1,7 @@
package main
/*
-Copyright 2019 - 2020 Crunchy Data
+Copyright 2019 - 2021 Crunchy Data
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo-rmdata/process.go b/cmd/pgo-rmdata/process.go
index 3449476170..f8923a8a12 100644
--- a/cmd/pgo-rmdata/process.go
+++ b/cmd/pgo-rmdata/process.go
@@ -1,7 +1,7 @@
package main
/*
-Copyright 2019 - 2020 Crunchy Data
+Copyright 2019 - 2021 Crunchy Data
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo-rmdata/types.go b/cmd/pgo-rmdata/types.go
index 36a95778ff..590bf9ac13 100644
--- a/cmd/pgo-rmdata/types.go
+++ b/cmd/pgo-rmdata/types.go
@@ -1,7 +1,7 @@
package main
/*
-Copyright 2019 - 2020 Crunchy Data
+Copyright 2019 - 2021 Crunchy Data
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo-scheduler/main.go b/cmd/pgo-scheduler/main.go
index 63e0c17a00..ef13e0e794 100644
--- a/cmd/pgo-scheduler/main.go
+++ b/cmd/pgo-scheduler/main.go
@@ -1,7 +1,7 @@
package main
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo-scheduler/scheduler/configmapcontroller.go b/cmd/pgo-scheduler/scheduler/configmapcontroller.go
index 95d21d883c..dfb6f7aaa7 100644
--- a/cmd/pgo-scheduler/scheduler/configmapcontroller.go
+++ b/cmd/pgo-scheduler/scheduler/configmapcontroller.go
@@ -1,7 +1,7 @@
package scheduler
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo-scheduler/scheduler/controllermanager.go b/cmd/pgo-scheduler/scheduler/controllermanager.go
index 055527fc64..fa9244b0d4 100644
--- a/cmd/pgo-scheduler/scheduler/controllermanager.go
+++ b/cmd/pgo-scheduler/scheduler/controllermanager.go
@@ -1,7 +1,7 @@
package scheduler
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo-scheduler/scheduler/pgbackrest.go b/cmd/pgo-scheduler/scheduler/pgbackrest.go
index ef0ba90a00..62330d61e1 100644
--- a/cmd/pgo-scheduler/scheduler/pgbackrest.go
+++ b/cmd/pgo-scheduler/scheduler/pgbackrest.go
@@ -1,7 +1,7 @@
package scheduler
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo-scheduler/scheduler/policy.go b/cmd/pgo-scheduler/scheduler/policy.go
index 15a93cf27f..c143df1978 100644
--- a/cmd/pgo-scheduler/scheduler/policy.go
+++ b/cmd/pgo-scheduler/scheduler/policy.go
@@ -1,7 +1,7 @@
package scheduler
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo-scheduler/scheduler/scheduler.go b/cmd/pgo-scheduler/scheduler/scheduler.go
index 8d6d326936..18d49b4ebd 100644
--- a/cmd/pgo-scheduler/scheduler/scheduler.go
+++ b/cmd/pgo-scheduler/scheduler/scheduler.go
@@ -1,7 +1,7 @@
package scheduler
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo-scheduler/scheduler/tasks.go b/cmd/pgo-scheduler/scheduler/tasks.go
index a2c715d3be..a8374bfbe7 100644
--- a/cmd/pgo-scheduler/scheduler/tasks.go
+++ b/cmd/pgo-scheduler/scheduler/tasks.go
@@ -1,7 +1,7 @@
package scheduler
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo-scheduler/scheduler/types.go b/cmd/pgo-scheduler/scheduler/types.go
index 674ef86ad2..7c766fa539 100644
--- a/cmd/pgo-scheduler/scheduler/types.go
+++ b/cmd/pgo-scheduler/scheduler/types.go
@@ -1,7 +1,7 @@
package scheduler
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo-scheduler/scheduler/validate.go b/cmd/pgo-scheduler/scheduler/validate.go
index 35bddb7e78..d483688097 100644
--- a/cmd/pgo-scheduler/scheduler/validate.go
+++ b/cmd/pgo-scheduler/scheduler/validate.go
@@ -1,7 +1,7 @@
package scheduler
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo-scheduler/scheduler/validate_test.go b/cmd/pgo-scheduler/scheduler/validate_test.go
index a7401a00d7..d6cb8f2cb4 100644
--- a/cmd/pgo-scheduler/scheduler/validate_test.go
+++ b/cmd/pgo-scheduler/scheduler/validate_test.go
@@ -1,7 +1,7 @@
package scheduler
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/backrest.go b/cmd/pgo/api/backrest.go
index ff157058fe..f71633c654 100644
--- a/cmd/pgo/api/backrest.go
+++ b/cmd/pgo/api/backrest.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/cat.go b/cmd/pgo/api/cat.go
index 1da601649e..8c19125034 100644
--- a/cmd/pgo/api/cat.go
+++ b/cmd/pgo/api/cat.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/cluster.go b/cmd/pgo/api/cluster.go
index c51425b09b..904f09bfe3 100644
--- a/cmd/pgo/api/cluster.go
+++ b/cmd/pgo/api/cluster.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/common.go b/cmd/pgo/api/common.go
index b541272e94..970691e516 100644
--- a/cmd/pgo/api/common.go
+++ b/cmd/pgo/api/common.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/config.go b/cmd/pgo/api/config.go
index c64be16cd1..375a8186c5 100644
--- a/cmd/pgo/api/config.go
+++ b/cmd/pgo/api/config.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/df.go b/cmd/pgo/api/df.go
index cd64ea10da..0d5215fa62 100644
--- a/cmd/pgo/api/df.go
+++ b/cmd/pgo/api/df.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/failover.go b/cmd/pgo/api/failover.go
index 61731da12e..15c93c32fd 100644
--- a/cmd/pgo/api/failover.go
+++ b/cmd/pgo/api/failover.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/label.go b/cmd/pgo/api/label.go
index b96facc729..b94f943abf 100644
--- a/cmd/pgo/api/label.go
+++ b/cmd/pgo/api/label.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/namespace.go b/cmd/pgo/api/namespace.go
index 1648e02384..038b705557 100644
--- a/cmd/pgo/api/namespace.go
+++ b/cmd/pgo/api/namespace.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/pgadmin.go b/cmd/pgo/api/pgadmin.go
index 7f4ac09d89..e22c09c1b6 100644
--- a/cmd/pgo/api/pgadmin.go
+++ b/cmd/pgo/api/pgadmin.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/pgbouncer.go b/cmd/pgo/api/pgbouncer.go
index 56be678166..2aca81d5d3 100644
--- a/cmd/pgo/api/pgbouncer.go
+++ b/cmd/pgo/api/pgbouncer.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/pgdump.go b/cmd/pgo/api/pgdump.go
index b228954c91..5bc5ab118f 100644
--- a/cmd/pgo/api/pgdump.go
+++ b/cmd/pgo/api/pgdump.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/pgorole.go b/cmd/pgo/api/pgorole.go
index 1157677fba..345a18a372 100644
--- a/cmd/pgo/api/pgorole.go
+++ b/cmd/pgo/api/pgorole.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/pgouser.go b/cmd/pgo/api/pgouser.go
index 9f8cfaf63b..ed2932297d 100644
--- a/cmd/pgo/api/pgouser.go
+++ b/cmd/pgo/api/pgouser.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/policy.go b/cmd/pgo/api/policy.go
index 61b4f4842c..9761b87ac3 100644
--- a/cmd/pgo/api/policy.go
+++ b/cmd/pgo/api/policy.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/pvc.go b/cmd/pgo/api/pvc.go
index 4c51b05423..2c021b2957 100644
--- a/cmd/pgo/api/pvc.go
+++ b/cmd/pgo/api/pvc.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/reload.go b/cmd/pgo/api/reload.go
index ee33d79b76..077764848e 100644
--- a/cmd/pgo/api/reload.go
+++ b/cmd/pgo/api/reload.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/restart.go b/cmd/pgo/api/restart.go
index f73f6cc249..7a53e87de0 100644
--- a/cmd/pgo/api/restart.go
+++ b/cmd/pgo/api/restart.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/restore.go b/cmd/pgo/api/restore.go
index f80efe32f5..03f04876e9 100644
--- a/cmd/pgo/api/restore.go
+++ b/cmd/pgo/api/restore.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/restoreDump.go b/cmd/pgo/api/restoreDump.go
index 6e1f918f7d..ad4566b4a7 100644
--- a/cmd/pgo/api/restoreDump.go
+++ b/cmd/pgo/api/restoreDump.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/scale.go b/cmd/pgo/api/scale.go
index 574cbc8b4c..6ae78dbf67 100644
--- a/cmd/pgo/api/scale.go
+++ b/cmd/pgo/api/scale.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/scaledown.go b/cmd/pgo/api/scaledown.go
index 9075c23074..4abbe638b7 100644
--- a/cmd/pgo/api/scaledown.go
+++ b/cmd/pgo/api/scaledown.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/schedule.go b/cmd/pgo/api/schedule.go
index 444bd04832..e2700a4b19 100644
--- a/cmd/pgo/api/schedule.go
+++ b/cmd/pgo/api/schedule.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/status.go b/cmd/pgo/api/status.go
index 9def02a132..4feb1432f2 100644
--- a/cmd/pgo/api/status.go
+++ b/cmd/pgo/api/status.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/test.go b/cmd/pgo/api/test.go
index ca70b5c132..f21d6ea919 100644
--- a/cmd/pgo/api/test.go
+++ b/cmd/pgo/api/test.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/upgrade.go b/cmd/pgo/api/upgrade.go
index 3613b4527a..bb780d99bf 100644
--- a/cmd/pgo/api/upgrade.go
+++ b/cmd/pgo/api/upgrade.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/user.go b/cmd/pgo/api/user.go
index ed1ad4fda5..48987b3561 100644
--- a/cmd/pgo/api/user.go
+++ b/cmd/pgo/api/user.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/version.go b/cmd/pgo/api/version.go
index a48ac5bd8e..4499c72c93 100644
--- a/cmd/pgo/api/version.go
+++ b/cmd/pgo/api/version.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/api/workflow.go b/cmd/pgo/api/workflow.go
index 46381889f4..e3cf233062 100644
--- a/cmd/pgo/api/workflow.go
+++ b/cmd/pgo/api/workflow.go
@@ -1,7 +1,7 @@
package api
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/auth.go b/cmd/pgo/cmd/auth.go
index fda8689df6..a54dbd9db9 100644
--- a/cmd/pgo/cmd/auth.go
+++ b/cmd/pgo/cmd/auth.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/backrest.go b/cmd/pgo/cmd/backrest.go
index 2b149097cf..7d750e1c6b 100644
--- a/cmd/pgo/cmd/backrest.go
+++ b/cmd/pgo/cmd/backrest.go
@@ -2,7 +2,7 @@
package cmd
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/backup.go b/cmd/pgo/cmd/backup.go
index 217c689f8e..72a3cc85c0 100644
--- a/cmd/pgo/cmd/backup.go
+++ b/cmd/pgo/cmd/backup.go
@@ -2,7 +2,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/cat.go b/cmd/pgo/cmd/cat.go
index 7afe28126f..63d0d7e6f6 100644
--- a/cmd/pgo/cmd/cat.go
+++ b/cmd/pgo/cmd/cat.go
@@ -2,7 +2,7 @@
package cmd
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index 6a7d7fd9a3..da97cef37a 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/common.go b/cmd/pgo/cmd/common.go
index 6d618e44bc..087326b57d 100644
--- a/cmd/pgo/cmd/common.go
+++ b/cmd/pgo/cmd/common.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/config.go b/cmd/pgo/cmd/config.go
index 0d5f4f4335..b5b34f4c29 100644
--- a/cmd/pgo/cmd/config.go
+++ b/cmd/pgo/cmd/config.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/create.go b/cmd/pgo/cmd/create.go
index 36fd950a71..f301111797 100644
--- a/cmd/pgo/cmd/create.go
+++ b/cmd/pgo/cmd/create.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/delete.go b/cmd/pgo/cmd/delete.go
index 86a6a68c7c..511204ac8e 100644
--- a/cmd/pgo/cmd/delete.go
+++ b/cmd/pgo/cmd/delete.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/df.go b/cmd/pgo/cmd/df.go
index bad4301ceb..c07762ef7b 100644
--- a/cmd/pgo/cmd/df.go
+++ b/cmd/pgo/cmd/df.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/failover.go b/cmd/pgo/cmd/failover.go
index 1459c659ae..bfbd3b0848 100644
--- a/cmd/pgo/cmd/failover.go
+++ b/cmd/pgo/cmd/failover.go
@@ -2,7 +2,7 @@
package cmd
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/flags.go b/cmd/pgo/cmd/flags.go
index 28afacc651..bdaf760942 100644
--- a/cmd/pgo/cmd/flags.go
+++ b/cmd/pgo/cmd/flags.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/label.go b/cmd/pgo/cmd/label.go
index bd794e706b..70b18061e5 100644
--- a/cmd/pgo/cmd/label.go
+++ b/cmd/pgo/cmd/label.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/namespace.go b/cmd/pgo/cmd/namespace.go
index e095328360..e3a886b484 100644
--- a/cmd/pgo/cmd/namespace.go
+++ b/cmd/pgo/cmd/namespace.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/pgadmin.go b/cmd/pgo/cmd/pgadmin.go
index 0d034f1710..bd629e0f7e 100644
--- a/cmd/pgo/cmd/pgadmin.go
+++ b/cmd/pgo/cmd/pgadmin.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/pgbouncer.go b/cmd/pgo/cmd/pgbouncer.go
index b7ec28506f..ba2ec6f803 100644
--- a/cmd/pgo/cmd/pgbouncer.go
+++ b/cmd/pgo/cmd/pgbouncer.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/pgdump.go b/cmd/pgo/cmd/pgdump.go
index 0ca8dfd637..d10e31b43d 100644
--- a/cmd/pgo/cmd/pgdump.go
+++ b/cmd/pgo/cmd/pgdump.go
@@ -2,7 +2,7 @@
package cmd
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/pgorole.go b/cmd/pgo/cmd/pgorole.go
index 180459d2ef..110eca9887 100644
--- a/cmd/pgo/cmd/pgorole.go
+++ b/cmd/pgo/cmd/pgorole.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/pgouser.go b/cmd/pgo/cmd/pgouser.go
index 31ea66b316..fa74050ffe 100644
--- a/cmd/pgo/cmd/pgouser.go
+++ b/cmd/pgo/cmd/pgouser.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/policy.go b/cmd/pgo/cmd/policy.go
index ed4aae82a8..a503910bb9 100644
--- a/cmd/pgo/cmd/policy.go
+++ b/cmd/pgo/cmd/policy.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/pvc.go b/cmd/pgo/cmd/pvc.go
index 3991901442..9f910f0555 100644
--- a/cmd/pgo/cmd/pvc.go
+++ b/cmd/pgo/cmd/pvc.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/reload.go b/cmd/pgo/cmd/reload.go
index 9d8b85b2b4..bb0098ee86 100644
--- a/cmd/pgo/cmd/reload.go
+++ b/cmd/pgo/cmd/reload.go
@@ -2,7 +2,7 @@
package cmd
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/restart.go b/cmd/pgo/cmd/restart.go
index 5349a1ff7e..650bcd16fd 100644
--- a/cmd/pgo/cmd/restart.go
+++ b/cmd/pgo/cmd/restart.go
@@ -2,7 +2,7 @@
package cmd
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/restore.go b/cmd/pgo/cmd/restore.go
index bb8931f5b7..309808ee8d 100644
--- a/cmd/pgo/cmd/restore.go
+++ b/cmd/pgo/cmd/restore.go
@@ -2,7 +2,7 @@
package cmd
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/root.go b/cmd/pgo/cmd/root.go
index f22a92fb1a..c38ba543c6 100644
--- a/cmd/pgo/cmd/root.go
+++ b/cmd/pgo/cmd/root.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/scale.go b/cmd/pgo/cmd/scale.go
index dd9d8a8a95..a44fa959f2 100644
--- a/cmd/pgo/cmd/scale.go
+++ b/cmd/pgo/cmd/scale.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/scaledown.go b/cmd/pgo/cmd/scaledown.go
index 7b49350599..ba526c3e74 100644
--- a/cmd/pgo/cmd/scaledown.go
+++ b/cmd/pgo/cmd/scaledown.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/schedule.go b/cmd/pgo/cmd/schedule.go
index 86a06aa8ae..a4d66d7f69 100644
--- a/cmd/pgo/cmd/schedule.go
+++ b/cmd/pgo/cmd/schedule.go
@@ -2,7 +2,7 @@
package cmd
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/show.go b/cmd/pgo/cmd/show.go
index 96a312b733..6c75afa100 100644
--- a/cmd/pgo/cmd/show.go
+++ b/cmd/pgo/cmd/show.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/status.go b/cmd/pgo/cmd/status.go
index 28c930f226..fd13a7f140 100644
--- a/cmd/pgo/cmd/status.go
+++ b/cmd/pgo/cmd/status.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/test.go b/cmd/pgo/cmd/test.go
index 89da7b7b7f..c183879247 100644
--- a/cmd/pgo/cmd/test.go
+++ b/cmd/pgo/cmd/test.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/update.go b/cmd/pgo/cmd/update.go
index 0f6b06c97b..89d88fe45b 100644
--- a/cmd/pgo/cmd/update.go
+++ b/cmd/pgo/cmd/update.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/upgrade.go b/cmd/pgo/cmd/upgrade.go
index 31122cebf7..2781210f4b 100644
--- a/cmd/pgo/cmd/upgrade.go
+++ b/cmd/pgo/cmd/upgrade.go
@@ -2,7 +2,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/user.go b/cmd/pgo/cmd/user.go
index ae5793167e..3c61d5671e 100644
--- a/cmd/pgo/cmd/user.go
+++ b/cmd/pgo/cmd/user.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/version.go b/cmd/pgo/cmd/version.go
index 969f146e85..cf269d06b3 100644
--- a/cmd/pgo/cmd/version.go
+++ b/cmd/pgo/cmd/version.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/watch.go b/cmd/pgo/cmd/watch.go
index fce2bcbd74..79746296a7 100644
--- a/cmd/pgo/cmd/watch.go
+++ b/cmd/pgo/cmd/watch.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/cmd/workflow.go b/cmd/pgo/cmd/workflow.go
index 74b2741f0a..fd372fd254 100644
--- a/cmd/pgo/cmd/workflow.go
+++ b/cmd/pgo/cmd/workflow.go
@@ -1,7 +1,7 @@
package cmd
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/generatedocs.go b/cmd/pgo/generatedocs.go
index a5b2d38271..8bd31cfb01 100644
--- a/cmd/pgo/generatedocs.go
+++ b/cmd/pgo/generatedocs.go
@@ -3,7 +3,7 @@
package main
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/main.go b/cmd/pgo/main.go
index 68aa34dee6..f122259ee9 100644
--- a/cmd/pgo/main.go
+++ b/cmd/pgo/main.go
@@ -1,7 +1,7 @@
package main
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/util/confirmation.go b/cmd/pgo/util/confirmation.go
index c227055cb1..4d15e0ae83 100644
--- a/cmd/pgo/util/confirmation.go
+++ b/cmd/pgo/util/confirmation.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/util/pad.go b/cmd/pgo/util/pad.go
index 276469471a..fee08615ed 100644
--- a/cmd/pgo/util/pad.go
+++ b/cmd/pgo/util/pad.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/pgo/util/validation.go b/cmd/pgo/util/validation.go
index 33690de426..4e011ef43f 100644
--- a/cmd/pgo/util/validation.go
+++ b/cmd/pgo/util/validation.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/postgres-operator/main.go b/cmd/postgres-operator/main.go
index f457b79a0d..052b11377f 100644
--- a/cmd/postgres-operator/main.go
+++ b/cmd/postgres-operator/main.go
@@ -1,7 +1,7 @@
package main
/*
-Copyright 2017 - 2020 Crunchy Data
+Copyright 2017 - 2021 Crunchy Data
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/cmd/postgres-operator/open_telemetry.go b/cmd/postgres-operator/open_telemetry.go
index e4b156bb35..3e1f51da24 100644
--- a/cmd/postgres-operator/open_telemetry.go
+++ b/cmd/postgres-operator/open_telemetry.go
@@ -1,7 +1,7 @@
package main
/*
-Copyright 2020 Crunchy Data
+Copyright 2020 - 2021 Crunchy Data
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/deploy/add-targeted-namespace-reconcile-rbac.sh b/deploy/add-targeted-namespace-reconcile-rbac.sh
index 8438c10912..533e8507f9 100755
--- a/deploy/add-targeted-namespace-reconcile-rbac.sh
+++ b/deploy/add-targeted-namespace-reconcile-rbac.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2020 Crunchy Data Solutions, Inc.
+# Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/deploy/add-targeted-namespace.sh b/deploy/add-targeted-namespace.sh
index af088314d9..61647e9feb 100755
--- a/deploy/add-targeted-namespace.sh
+++ b/deploy/add-targeted-namespace.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/deploy/cleannamespaces.sh b/deploy/cleannamespaces.sh
index 66cd693863..169bae6853 100755
--- a/deploy/cleannamespaces.sh
+++ b/deploy/cleannamespaces.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/deploy/cleanup-rbac.sh b/deploy/cleanup-rbac.sh
index 50f52bbc5f..df60c14500 100755
--- a/deploy/cleanup-rbac.sh
+++ b/deploy/cleanup-rbac.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/deploy/cleanup.sh b/deploy/cleanup.sh
index afe13f98c7..711e276823 100755
--- a/deploy/cleanup.sh
+++ b/deploy/cleanup.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/deploy/deploy.sh b/deploy/deploy.sh
index 67478acb8b..ae8b1b3eea 100755
--- a/deploy/deploy.sh
+++ b/deploy/deploy.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/deploy/gen-api-keys.sh b/deploy/gen-api-keys.sh
index 15b310f85f..5fd1139ced 100755
--- a/deploy/gen-api-keys.sh
+++ b/deploy/gen-api-keys.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/deploy/install-bootstrap-creds.sh b/deploy/install-bootstrap-creds.sh
index 1b446824d3..e25253104c 100755
--- a/deploy/install-bootstrap-creds.sh
+++ b/deploy/install-bootstrap-creds.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/deploy/install-rbac.sh b/deploy/install-rbac.sh
index d96532d9f1..ec7a4d7d49 100755
--- a/deploy/install-rbac.sh
+++ b/deploy/install-rbac.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/deploy/remove-crd.sh b/deploy/remove-crd.sh
index 764645264f..f14bbfd022 100755
--- a/deploy/remove-crd.sh
+++ b/deploy/remove-crd.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/deploy/setupnamespaces.sh b/deploy/setupnamespaces.sh
index 9d2188a56f..08aade3518 100755
--- a/deploy/setupnamespaces.sh
+++ b/deploy/setupnamespaces.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/deploy/show-crd.sh b/deploy/show-crd.sh
index 7f40285c5d..091c2b6810 100755
--- a/deploy/show-crd.sh
+++ b/deploy/show-crd.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/deploy/upgrade-creds.sh b/deploy/upgrade-creds.sh
index ddc0953df7..dfea9c10c1 100755
--- a/deploy/upgrade-creds.sh
+++ b/deploy/upgrade-creds.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/deploy/upgrade-pgo.sh b/deploy/upgrade-pgo.sh
index 72fdb420f0..87496f3c49 100755
--- a/deploy/upgrade-pgo.sh
+++ b/deploy/upgrade-pgo.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2020 Crunchy Data Solutions, Inc.
+# Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/examples/create-by-resource/run.sh b/examples/create-by-resource/run.sh
index 98bd67ec2c..afb5e896e5 100755
--- a/examples/create-by-resource/run.sh
+++ b/examples/create-by-resource/run.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/examples/custom-config/create.sh b/examples/custom-config/create.sh
index df6c701f2a..5900132568 100755
--- a/examples/custom-config/create.sh
+++ b/examples/custom-config/create.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/examples/custom-config/setup.sql b/examples/custom-config/setup.sql
index 206005eb8a..1a05bce487 100644
--- a/examples/custom-config/setup.sql
+++ b/examples/custom-config/setup.sql
@@ -1,5 +1,5 @@
/*
- * Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ * Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
diff --git a/hack/boilerplate.go.txt b/hack/boilerplate.go.txt
index 8aabc9a12b..e681957476 100644
--- a/hack/boilerplate.go.txt
+++ b/hack/boilerplate.go.txt
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/hack/config_sync.sh b/hack/config_sync.sh
index cab45b023b..6317c556a1 100755
--- a/hack/config_sync.sh
+++ b/hack/config_sync.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2020 Crunchy Data Solutions, Inc.
+# Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/hack/update-codegen.sh b/hack/update-codegen.sh
index f81fc33be6..43e607336a 100755
--- a/hack/update-codegen.sh
+++ b/hack/update-codegen.sh
@@ -1,6 +1,6 @@
#!/usr/bin/env bash
-# Copyright 2020 Crunchy Data Solutions, Inc.
+# Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/installers/ansible/roles/pgo-operator/templates/add-targeted-namespace.sh.j2 b/installers/ansible/roles/pgo-operator/templates/add-targeted-namespace.sh.j2
index 380a8a80b7..ec5e9e82d6 100644
--- a/installers/ansible/roles/pgo-operator/templates/add-targeted-namespace.sh.j2
+++ b/installers/ansible/roles/pgo-operator/templates/add-targeted-namespace.sh.j2
@@ -1,5 +1,5 @@
#!/bin/bash
-# Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/installers/image/bin/pgo-deploy.sh b/installers/image/bin/pgo-deploy.sh
index 9a965d58be..92fc955e9a 100755
--- a/installers/image/bin/pgo-deploy.sh
+++ b/installers/image/bin/pgo-deploy.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2020 Crunchy Data Solutions, Inc.
+# Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/installers/kubectl/client-setup.sh b/installers/kubectl/client-setup.sh
index 496f25abd4..1504009506 100755
--- a/installers/kubectl/client-setup.sh
+++ b/installers/kubectl/client-setup.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2020 Crunchy Data Solutions, Inc.
+# Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/internal/apiserver/backrestservice/backrestimpl.go b/internal/apiserver/backrestservice/backrestimpl.go
index c9ae9fb943..ec1e9b2f5b 100644
--- a/internal/apiserver/backrestservice/backrestimpl.go
+++ b/internal/apiserver/backrestservice/backrestimpl.go
@@ -1,7 +1,7 @@
package backrestservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/backrestservice/backrestservice.go b/internal/apiserver/backrestservice/backrestservice.go
index c9e5d4b030..d2f4810872 100644
--- a/internal/apiserver/backrestservice/backrestservice.go
+++ b/internal/apiserver/backrestservice/backrestservice.go
@@ -1,7 +1,7 @@
package backrestservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/backupoptions/backupoptionsutil.go b/internal/apiserver/backupoptions/backupoptionsutil.go
index b31bd8e00f..75b922b957 100644
--- a/internal/apiserver/backupoptions/backupoptionsutil.go
+++ b/internal/apiserver/backupoptions/backupoptionsutil.go
@@ -1,7 +1,7 @@
package backupoptions
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/backupoptions/pgbackrestoptions.go b/internal/apiserver/backupoptions/pgbackrestoptions.go
index 7926f4c0da..ec18545fef 100644
--- a/internal/apiserver/backupoptions/pgbackrestoptions.go
+++ b/internal/apiserver/backupoptions/pgbackrestoptions.go
@@ -1,7 +1,7 @@
package backupoptions
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/backupoptions/pgdumpoptions.go b/internal/apiserver/backupoptions/pgdumpoptions.go
index 803a417e95..cd4c218be6 100644
--- a/internal/apiserver/backupoptions/pgdumpoptions.go
+++ b/internal/apiserver/backupoptions/pgdumpoptions.go
@@ -1,7 +1,7 @@
package backupoptions
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/catservice/catimpl.go b/internal/apiserver/catservice/catimpl.go
index e21fd76266..f1ed27bf73 100644
--- a/internal/apiserver/catservice/catimpl.go
+++ b/internal/apiserver/catservice/catimpl.go
@@ -1,7 +1,7 @@
package catservice
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/catservice/catservice.go b/internal/apiserver/catservice/catservice.go
index cf25ac5f9a..5084eb3a33 100644
--- a/internal/apiserver/catservice/catservice.go
+++ b/internal/apiserver/catservice/catservice.go
@@ -1,7 +1,7 @@
package catservice
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 54a4c8b3a6..22f10a7f14 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -1,7 +1,7 @@
package clusterservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/clusterservice/clusterservice.go b/internal/apiserver/clusterservice/clusterservice.go
index 32b904eb61..c26e91cfa4 100644
--- a/internal/apiserver/clusterservice/clusterservice.go
+++ b/internal/apiserver/clusterservice/clusterservice.go
@@ -1,7 +1,7 @@
package clusterservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/clusterservice/scaleimpl.go b/internal/apiserver/clusterservice/scaleimpl.go
index 36739d1e44..f8ee7c57d8 100644
--- a/internal/apiserver/clusterservice/scaleimpl.go
+++ b/internal/apiserver/clusterservice/scaleimpl.go
@@ -1,7 +1,7 @@
package clusterservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/clusterservice/scaleservice.go b/internal/apiserver/clusterservice/scaleservice.go
index 810eba4086..113f730f24 100644
--- a/internal/apiserver/clusterservice/scaleservice.go
+++ b/internal/apiserver/clusterservice/scaleservice.go
@@ -1,7 +1,7 @@
package clusterservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/common.go b/internal/apiserver/common.go
index 7f5592b3c4..08bc42f6ac 100644
--- a/internal/apiserver/common.go
+++ b/internal/apiserver/common.go
@@ -1,7 +1,7 @@
package apiserver
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/common_test.go b/internal/apiserver/common_test.go
index 2475fca015..9f11dc4e49 100644
--- a/internal/apiserver/common_test.go
+++ b/internal/apiserver/common_test.go
@@ -1,7 +1,7 @@
package apiserver
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/configservice/configimpl.go b/internal/apiserver/configservice/configimpl.go
index 76d891d5ed..dc7ec2274c 100644
--- a/internal/apiserver/configservice/configimpl.go
+++ b/internal/apiserver/configservice/configimpl.go
@@ -1,7 +1,7 @@
package configservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/configservice/configservice.go b/internal/apiserver/configservice/configservice.go
index d4066b5017..fddabe30b8 100644
--- a/internal/apiserver/configservice/configservice.go
+++ b/internal/apiserver/configservice/configservice.go
@@ -1,7 +1,7 @@
package configservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/dfservice/dfimpl.go b/internal/apiserver/dfservice/dfimpl.go
index 1a24d235d2..3014821621 100644
--- a/internal/apiserver/dfservice/dfimpl.go
+++ b/internal/apiserver/dfservice/dfimpl.go
@@ -1,7 +1,7 @@
package dfservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/dfservice/dfservice.go b/internal/apiserver/dfservice/dfservice.go
index 89c8796b27..8f42c65eef 100644
--- a/internal/apiserver/dfservice/dfservice.go
+++ b/internal/apiserver/dfservice/dfservice.go
@@ -1,7 +1,7 @@
package dfservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/failoverservice/failoverimpl.go b/internal/apiserver/failoverservice/failoverimpl.go
index 7886e3ddfa..ca25062303 100644
--- a/internal/apiserver/failoverservice/failoverimpl.go
+++ b/internal/apiserver/failoverservice/failoverimpl.go
@@ -1,7 +1,7 @@
package failoverservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/failoverservice/failoverservice.go b/internal/apiserver/failoverservice/failoverservice.go
index ad42873f77..08461e9356 100644
--- a/internal/apiserver/failoverservice/failoverservice.go
+++ b/internal/apiserver/failoverservice/failoverservice.go
@@ -1,7 +1,7 @@
package failoverservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/labelservice/labelimpl.go b/internal/apiserver/labelservice/labelimpl.go
index ed12991f58..2fe6883074 100644
--- a/internal/apiserver/labelservice/labelimpl.go
+++ b/internal/apiserver/labelservice/labelimpl.go
@@ -1,7 +1,7 @@
package labelservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/labelservice/labelservice.go b/internal/apiserver/labelservice/labelservice.go
index e166d2d03f..de5e199ebe 100644
--- a/internal/apiserver/labelservice/labelservice.go
+++ b/internal/apiserver/labelservice/labelservice.go
@@ -1,7 +1,7 @@
package labelservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/middleware.go b/internal/apiserver/middleware.go
index fc7e60e8e2..58a1fcd77d 100644
--- a/internal/apiserver/middleware.go
+++ b/internal/apiserver/middleware.go
@@ -1,7 +1,7 @@
package apiserver
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/namespaceservice/namespaceimpl.go b/internal/apiserver/namespaceservice/namespaceimpl.go
index 24b6234b48..c51d6b6f11 100644
--- a/internal/apiserver/namespaceservice/namespaceimpl.go
+++ b/internal/apiserver/namespaceservice/namespaceimpl.go
@@ -1,7 +1,7 @@
package namespaceservice
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/namespaceservice/namespaceservice.go b/internal/apiserver/namespaceservice/namespaceservice.go
index fa5918fe45..2416e27731 100644
--- a/internal/apiserver/namespaceservice/namespaceservice.go
+++ b/internal/apiserver/namespaceservice/namespaceservice.go
@@ -1,7 +1,7 @@
package namespaceservice
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/perms.go b/internal/apiserver/perms.go
index 71316b2c82..e4e95978ae 100644
--- a/internal/apiserver/perms.go
+++ b/internal/apiserver/perms.go
@@ -1,7 +1,7 @@
package apiserver
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/pgadminservice/pgadminimpl.go b/internal/apiserver/pgadminservice/pgadminimpl.go
index 89bb7bcde3..a0154b37f9 100644
--- a/internal/apiserver/pgadminservice/pgadminimpl.go
+++ b/internal/apiserver/pgadminservice/pgadminimpl.go
@@ -1,7 +1,7 @@
package pgadminservice
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/pgadminservice/pgadminservice.go b/internal/apiserver/pgadminservice/pgadminservice.go
index 68c1b1b3db..fc24b5b94c 100644
--- a/internal/apiserver/pgadminservice/pgadminservice.go
+++ b/internal/apiserver/pgadminservice/pgadminservice.go
@@ -1,7 +1,7 @@
package pgadminservice
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/pgbouncerservice/pgbouncerimpl.go b/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
index 3bacd91886..5dfc3dfc47 100644
--- a/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
+++ b/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
@@ -1,7 +1,7 @@
package pgbouncerservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/pgbouncerservice/pgbouncerservice.go b/internal/apiserver/pgbouncerservice/pgbouncerservice.go
index 773514d48d..978f21bac2 100644
--- a/internal/apiserver/pgbouncerservice/pgbouncerservice.go
+++ b/internal/apiserver/pgbouncerservice/pgbouncerservice.go
@@ -1,7 +1,7 @@
package pgbouncerservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/pgdumpservice/pgdumpimpl.go b/internal/apiserver/pgdumpservice/pgdumpimpl.go
index 3e97ede60a..52c90d9006 100644
--- a/internal/apiserver/pgdumpservice/pgdumpimpl.go
+++ b/internal/apiserver/pgdumpservice/pgdumpimpl.go
@@ -1,7 +1,7 @@
package pgdumpservice
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/pgdumpservice/pgdumpservice.go b/internal/apiserver/pgdumpservice/pgdumpservice.go
index 0b57ed2d75..28acf0933a 100644
--- a/internal/apiserver/pgdumpservice/pgdumpservice.go
+++ b/internal/apiserver/pgdumpservice/pgdumpservice.go
@@ -1,7 +1,7 @@
package pgdumpservice
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/pgoroleservice/pgoroleimpl.go b/internal/apiserver/pgoroleservice/pgoroleimpl.go
index 4192a4437b..4f10d76f3e 100644
--- a/internal/apiserver/pgoroleservice/pgoroleimpl.go
+++ b/internal/apiserver/pgoroleservice/pgoroleimpl.go
@@ -1,7 +1,7 @@
package pgoroleservice
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/pgoroleservice/pgoroleservice.go b/internal/apiserver/pgoroleservice/pgoroleservice.go
index 1ab26cdac6..581927de8f 100644
--- a/internal/apiserver/pgoroleservice/pgoroleservice.go
+++ b/internal/apiserver/pgoroleservice/pgoroleservice.go
@@ -1,7 +1,7 @@
package pgoroleservice
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/pgouserservice/pgouserimpl.go b/internal/apiserver/pgouserservice/pgouserimpl.go
index 136deb2064..415f8d1b95 100644
--- a/internal/apiserver/pgouserservice/pgouserimpl.go
+++ b/internal/apiserver/pgouserservice/pgouserimpl.go
@@ -1,7 +1,7 @@
package pgouserservice
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/pgouserservice/pgouserservice.go b/internal/apiserver/pgouserservice/pgouserservice.go
index f0205c5eea..5ccff33a2e 100644
--- a/internal/apiserver/pgouserservice/pgouserservice.go
+++ b/internal/apiserver/pgouserservice/pgouserservice.go
@@ -1,7 +1,7 @@
package pgouserservice
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/policyservice/policyimpl.go b/internal/apiserver/policyservice/policyimpl.go
index 21b77d4d63..59808d925a 100644
--- a/internal/apiserver/policyservice/policyimpl.go
+++ b/internal/apiserver/policyservice/policyimpl.go
@@ -1,7 +1,7 @@
package policyservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/policyservice/policyservice.go b/internal/apiserver/policyservice/policyservice.go
index 00a499be64..290b8271a0 100644
--- a/internal/apiserver/policyservice/policyservice.go
+++ b/internal/apiserver/policyservice/policyservice.go
@@ -1,7 +1,7 @@
package policyservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/pvcservice/pvcimpl.go b/internal/apiserver/pvcservice/pvcimpl.go
index 091aa7b67d..5f22e1abba 100644
--- a/internal/apiserver/pvcservice/pvcimpl.go
+++ b/internal/apiserver/pvcservice/pvcimpl.go
@@ -1,7 +1,7 @@
package pvcservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/pvcservice/pvcservice.go b/internal/apiserver/pvcservice/pvcservice.go
index 332b3740b2..f3a994d078 100644
--- a/internal/apiserver/pvcservice/pvcservice.go
+++ b/internal/apiserver/pvcservice/pvcservice.go
@@ -1,7 +1,7 @@
package pvcservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/reloadservice/reloadimpl.go b/internal/apiserver/reloadservice/reloadimpl.go
index 25ed88be67..cb6978e5f5 100644
--- a/internal/apiserver/reloadservice/reloadimpl.go
+++ b/internal/apiserver/reloadservice/reloadimpl.go
@@ -1,7 +1,7 @@
package reloadservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/reloadservice/reloadservice.go b/internal/apiserver/reloadservice/reloadservice.go
index 149b660125..14e8463f5e 100644
--- a/internal/apiserver/reloadservice/reloadservice.go
+++ b/internal/apiserver/reloadservice/reloadservice.go
@@ -1,7 +1,7 @@
package reloadservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/restartservice/restartimpl.go b/internal/apiserver/restartservice/restartimpl.go
index dc1b4f95ce..4defee23b3 100644
--- a/internal/apiserver/restartservice/restartimpl.go
+++ b/internal/apiserver/restartservice/restartimpl.go
@@ -1,7 +1,7 @@
package restartservice
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/restartservice/restartservice.go b/internal/apiserver/restartservice/restartservice.go
index 374cb3ca93..f8d24ac4e2 100644
--- a/internal/apiserver/restartservice/restartservice.go
+++ b/internal/apiserver/restartservice/restartservice.go
@@ -1,7 +1,7 @@
package restartservice
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/root.go b/internal/apiserver/root.go
index bf68f1b870..e8afecbd65 100644
--- a/internal/apiserver/root.go
+++ b/internal/apiserver/root.go
@@ -1,7 +1,7 @@
package apiserver
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/routing/doc.go b/internal/apiserver/routing/doc.go
index e985fd4280..2807dfb978 100644
--- a/internal/apiserver/routing/doc.go
+++ b/internal/apiserver/routing/doc.go
@@ -1,5 +1,5 @@
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/routing/routes.go b/internal/apiserver/routing/routes.go
index eb3f69c862..495682052a 100644
--- a/internal/apiserver/routing/routes.go
+++ b/internal/apiserver/routing/routes.go
@@ -1,7 +1,7 @@
package routing
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/scheduleservice/scheduleimpl.go b/internal/apiserver/scheduleservice/scheduleimpl.go
index 86e3bdcfad..04c027f391 100644
--- a/internal/apiserver/scheduleservice/scheduleimpl.go
+++ b/internal/apiserver/scheduleservice/scheduleimpl.go
@@ -1,7 +1,7 @@
package scheduleservice
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/scheduleservice/scheduleservice.go b/internal/apiserver/scheduleservice/scheduleservice.go
index 3ac9205a59..4508c30e7a 100644
--- a/internal/apiserver/scheduleservice/scheduleservice.go
+++ b/internal/apiserver/scheduleservice/scheduleservice.go
@@ -1,7 +1,7 @@
package scheduleservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/statusservice/statusimpl.go b/internal/apiserver/statusservice/statusimpl.go
index 7cc81278b0..4c1f72b1a1 100644
--- a/internal/apiserver/statusservice/statusimpl.go
+++ b/internal/apiserver/statusservice/statusimpl.go
@@ -1,7 +1,7 @@
package statusservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/statusservice/statusservice.go b/internal/apiserver/statusservice/statusservice.go
index 3adecd9156..5618f821e4 100644
--- a/internal/apiserver/statusservice/statusservice.go
+++ b/internal/apiserver/statusservice/statusservice.go
@@ -1,7 +1,7 @@
package statusservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/upgradeservice/upgradeimpl.go b/internal/apiserver/upgradeservice/upgradeimpl.go
index 66d6a806b0..ee41b1194c 100644
--- a/internal/apiserver/upgradeservice/upgradeimpl.go
+++ b/internal/apiserver/upgradeservice/upgradeimpl.go
@@ -1,7 +1,7 @@
package upgradeservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/upgradeservice/upgradeservice.go b/internal/apiserver/upgradeservice/upgradeservice.go
index b058345e6a..aa4b5ebb2e 100644
--- a/internal/apiserver/upgradeservice/upgradeservice.go
+++ b/internal/apiserver/upgradeservice/upgradeservice.go
@@ -1,7 +1,7 @@
package upgradeservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/userservice/userimpl.go b/internal/apiserver/userservice/userimpl.go
index 1264e9b5c1..2372edc519 100644
--- a/internal/apiserver/userservice/userimpl.go
+++ b/internal/apiserver/userservice/userimpl.go
@@ -1,7 +1,7 @@
package userservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/userservice/userimpl_test.go b/internal/apiserver/userservice/userimpl_test.go
index f49d171ff4..2a458d3589 100644
--- a/internal/apiserver/userservice/userimpl_test.go
+++ b/internal/apiserver/userservice/userimpl_test.go
@@ -1,7 +1,7 @@
package userservice
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/userservice/userservice.go b/internal/apiserver/userservice/userservice.go
index 94a9be299b..2dda7305e4 100644
--- a/internal/apiserver/userservice/userservice.go
+++ b/internal/apiserver/userservice/userservice.go
@@ -1,7 +1,7 @@
package userservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/versionservice/versionimpl.go b/internal/apiserver/versionservice/versionimpl.go
index d2341d4e93..959cfae669 100644
--- a/internal/apiserver/versionservice/versionimpl.go
+++ b/internal/apiserver/versionservice/versionimpl.go
@@ -1,7 +1,7 @@
package versionservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/versionservice/versionservice.go b/internal/apiserver/versionservice/versionservice.go
index bcb2adc7b0..6fd72719b5 100644
--- a/internal/apiserver/versionservice/versionservice.go
+++ b/internal/apiserver/versionservice/versionservice.go
@@ -1,7 +1,7 @@
package versionservice
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/workflowservice/workflowimpl.go b/internal/apiserver/workflowservice/workflowimpl.go
index 3f46b2a26c..debf2a31c2 100644
--- a/internal/apiserver/workflowservice/workflowimpl.go
+++ b/internal/apiserver/workflowservice/workflowimpl.go
@@ -1,7 +1,7 @@
package workflowservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/apiserver/workflowservice/workflowservice.go b/internal/apiserver/workflowservice/workflowservice.go
index 35adb2b0a4..89a2e29796 100644
--- a/internal/apiserver/workflowservice/workflowservice.go
+++ b/internal/apiserver/workflowservice/workflowservice.go
@@ -1,7 +1,7 @@
package workflowservice
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/config/annotations.go b/internal/config/annotations.go
index db8482fe0a..3f42e4e4db 100644
--- a/internal/config/annotations.go
+++ b/internal/config/annotations.go
@@ -1,7 +1,7 @@
package config
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/config/defaults.go b/internal/config/defaults.go
index d86e404eb7..776dfd7c90 100644
--- a/internal/config/defaults.go
+++ b/internal/config/defaults.go
@@ -1,7 +1,7 @@
package config
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/config/images.go b/internal/config/images.go
index 7ab595ed98..0deb5b4cd6 100644
--- a/internal/config/images.go
+++ b/internal/config/images.go
@@ -1,7 +1,7 @@
package config
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/config/labels.go b/internal/config/labels.go
index 5dce3957f0..9308d17f91 100644
--- a/internal/config/labels.go
+++ b/internal/config/labels.go
@@ -1,7 +1,7 @@
package config
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/config/pgoconfig.go b/internal/config/pgoconfig.go
index 09d78ab841..97fa7fcb6e 100644
--- a/internal/config/pgoconfig.go
+++ b/internal/config/pgoconfig.go
@@ -1,7 +1,7 @@
package config
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/config/secrets.go b/internal/config/secrets.go
index f518c813ba..769ec70781 100644
--- a/internal/config/secrets.go
+++ b/internal/config/secrets.go
@@ -1,7 +1,7 @@
package config
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/config/volumes.go b/internal/config/volumes.go
index 8723f9670a..f49c8d916d 100644
--- a/internal/config/volumes.go
+++ b/internal/config/volumes.go
@@ -1,7 +1,7 @@
package config
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/configmap/configmapcontroller.go b/internal/controller/configmap/configmapcontroller.go
index a390075b67..7c9cb27d2b 100644
--- a/internal/controller/configmap/configmapcontroller.go
+++ b/internal/controller/configmap/configmapcontroller.go
@@ -1,7 +1,7 @@
package configmap
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/configmap/synchandler.go b/internal/controller/configmap/synchandler.go
index b556f1561a..ac673b1746 100644
--- a/internal/controller/configmap/synchandler.go
+++ b/internal/controller/configmap/synchandler.go
@@ -1,7 +1,7 @@
package configmap
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/controllerutil.go b/internal/controller/controllerutil.go
index 4b1f5da6ba..1db944cf3b 100644
--- a/internal/controller/controllerutil.go
+++ b/internal/controller/controllerutil.go
@@ -1,7 +1,7 @@
package controller
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/job/backresthandler.go b/internal/controller/job/backresthandler.go
index 3bd90fcf83..c7f585cb5e 100644
--- a/internal/controller/job/backresthandler.go
+++ b/internal/controller/job/backresthandler.go
@@ -1,7 +1,7 @@
package job
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/job/bootstraphandler.go b/internal/controller/job/bootstraphandler.go
index 7b64937642..6ba9cd9b19 100644
--- a/internal/controller/job/bootstraphandler.go
+++ b/internal/controller/job/bootstraphandler.go
@@ -1,7 +1,7 @@
package job
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/job/jobcontroller.go b/internal/controller/job/jobcontroller.go
index aa11399b47..3db9406f56 100644
--- a/internal/controller/job/jobcontroller.go
+++ b/internal/controller/job/jobcontroller.go
@@ -1,7 +1,7 @@
package job
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/job/jobevents.go b/internal/controller/job/jobevents.go
index df21ba3ef6..7d45c23006 100644
--- a/internal/controller/job/jobevents.go
+++ b/internal/controller/job/jobevents.go
@@ -1,7 +1,7 @@
package job
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/job/jobutil.go b/internal/controller/job/jobutil.go
index 78d6bb6e34..e7a3113469 100644
--- a/internal/controller/job/jobutil.go
+++ b/internal/controller/job/jobutil.go
@@ -1,7 +1,7 @@
package job
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/job/pgdumphandler.go b/internal/controller/job/pgdumphandler.go
index 0fd25918b2..b407ec262a 100644
--- a/internal/controller/job/pgdumphandler.go
+++ b/internal/controller/job/pgdumphandler.go
@@ -1,7 +1,7 @@
package job
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/job/rmdatahandler.go b/internal/controller/job/rmdatahandler.go
index 5fb8b0ed6c..73ce88a486 100644
--- a/internal/controller/job/rmdatahandler.go
+++ b/internal/controller/job/rmdatahandler.go
@@ -1,7 +1,7 @@
package job
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/manager/controllermanager.go b/internal/controller/manager/controllermanager.go
index 165677b2f2..afb678d21a 100644
--- a/internal/controller/manager/controllermanager.go
+++ b/internal/controller/manager/controllermanager.go
@@ -1,7 +1,7 @@
package manager
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/manager/rbac.go b/internal/controller/manager/rbac.go
index 8cad4ff247..4068d67b86 100644
--- a/internal/controller/manager/rbac.go
+++ b/internal/controller/manager/rbac.go
@@ -1,7 +1,7 @@
package manager
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/namespace/namespacecontroller.go b/internal/controller/namespace/namespacecontroller.go
index 609a0715d0..0c651cfbdf 100644
--- a/internal/controller/namespace/namespacecontroller.go
+++ b/internal/controller/namespace/namespacecontroller.go
@@ -1,7 +1,7 @@
package namespace
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index 713bf6887f..bd7556ec3a 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -1,7 +1,7 @@
package pgcluster
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/pgpolicy/pgpolicycontroller.go b/internal/controller/pgpolicy/pgpolicycontroller.go
index 27d640475b..5a1f6a2fc5 100644
--- a/internal/controller/pgpolicy/pgpolicycontroller.go
+++ b/internal/controller/pgpolicy/pgpolicycontroller.go
@@ -1,7 +1,7 @@
package pgpolicy
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/pgreplica/pgreplicacontroller.go b/internal/controller/pgreplica/pgreplicacontroller.go
index c37ee16b58..5d0d8c01a8 100644
--- a/internal/controller/pgreplica/pgreplicacontroller.go
+++ b/internal/controller/pgreplica/pgreplicacontroller.go
@@ -1,7 +1,7 @@
package pgreplica
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/pgtask/backresthandler.go b/internal/controller/pgtask/backresthandler.go
index e8f0534e6b..f1aff229df 100644
--- a/internal/controller/pgtask/backresthandler.go
+++ b/internal/controller/pgtask/backresthandler.go
@@ -1,7 +1,7 @@
package pgtask
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/pgtask/pgtaskcontroller.go b/internal/controller/pgtask/pgtaskcontroller.go
index 4e3f041a99..d26d7231d4 100644
--- a/internal/controller/pgtask/pgtaskcontroller.go
+++ b/internal/controller/pgtask/pgtaskcontroller.go
@@ -1,7 +1,7 @@
package pgtask
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/pod/inithandler.go b/internal/controller/pod/inithandler.go
index 7081c1dfbf..8379d4859f 100644
--- a/internal/controller/pod/inithandler.go
+++ b/internal/controller/pod/inithandler.go
@@ -1,7 +1,7 @@
package pod
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/pod/podcontroller.go b/internal/controller/pod/podcontroller.go
index 95d81bea32..cbfe3ba2db 100644
--- a/internal/controller/pod/podcontroller.go
+++ b/internal/controller/pod/podcontroller.go
@@ -1,7 +1,7 @@
package pod
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/pod/podevents.go b/internal/controller/pod/podevents.go
index b3086355ca..d019c2b4b4 100644
--- a/internal/controller/pod/podevents.go
+++ b/internal/controller/pod/podevents.go
@@ -1,7 +1,7 @@
package pod
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/controller/pod/promotionhandler.go b/internal/controller/pod/promotionhandler.go
index a1eb83530a..e1a35e25c8 100644
--- a/internal/controller/pod/promotionhandler.go
+++ b/internal/controller/pod/promotionhandler.go
@@ -1,7 +1,7 @@
package pod
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/kubeapi/client_config.go b/internal/kubeapi/client_config.go
index b3eeed0c2a..c9e0b5ea5c 100644
--- a/internal/kubeapi/client_config.go
+++ b/internal/kubeapi/client_config.go
@@ -1,7 +1,7 @@
package kubeapi
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/kubeapi/endpoints.go b/internal/kubeapi/endpoints.go
index 232469fe10..e871dbf5d6 100644
--- a/internal/kubeapi/endpoints.go
+++ b/internal/kubeapi/endpoints.go
@@ -1,7 +1,7 @@
package kubeapi
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/kubeapi/errors.go b/internal/kubeapi/errors.go
index 829ca9f097..ab5e9c07aa 100644
--- a/internal/kubeapi/errors.go
+++ b/internal/kubeapi/errors.go
@@ -1,7 +1,7 @@
package kubeapi
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/kubeapi/exec.go b/internal/kubeapi/exec.go
index b2e994d84d..c154ba1fa3 100644
--- a/internal/kubeapi/exec.go
+++ b/internal/kubeapi/exec.go
@@ -1,7 +1,7 @@
package kubeapi
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/kubeapi/fake/clientset.go b/internal/kubeapi/fake/clientset.go
index 7fbd74b802..2ac060e1da 100644
--- a/internal/kubeapi/fake/clientset.go
+++ b/internal/kubeapi/fake/clientset.go
@@ -1,7 +1,7 @@
package fake
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/kubeapi/fake/fakeclients.go b/internal/kubeapi/fake/fakeclients.go
index 8c7395549d..cdc96c1ec6 100644
--- a/internal/kubeapi/fake/fakeclients.go
+++ b/internal/kubeapi/fake/fakeclients.go
@@ -1,7 +1,7 @@
package fake
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/kubeapi/patch.go b/internal/kubeapi/patch.go
index fcaf83a432..57f11e6867 100644
--- a/internal/kubeapi/patch.go
+++ b/internal/kubeapi/patch.go
@@ -1,7 +1,7 @@
package kubeapi
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/kubeapi/patch_test.go b/internal/kubeapi/patch_test.go
index fa270e340c..706ee4768d 100644
--- a/internal/kubeapi/patch_test.go
+++ b/internal/kubeapi/patch_test.go
@@ -1,7 +1,7 @@
package kubeapi
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/kubeapi/volumes.go b/internal/kubeapi/volumes.go
index 05412672ac..795b0a9151 100644
--- a/internal/kubeapi/volumes.go
+++ b/internal/kubeapi/volumes.go
@@ -1,7 +1,7 @@
package kubeapi
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/kubeapi/volumes_test.go b/internal/kubeapi/volumes_test.go
index c2933d9e87..fead7d17be 100644
--- a/internal/kubeapi/volumes_test.go
+++ b/internal/kubeapi/volumes_test.go
@@ -1,7 +1,7 @@
package kubeapi
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/logging/loglib.go b/internal/logging/loglib.go
index e346317f6e..d171ba29fd 100644
--- a/internal/logging/loglib.go
+++ b/internal/logging/loglib.go
@@ -2,7 +2,7 @@
package logging
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/ns/nslogic.go b/internal/ns/nslogic.go
index 34df014bed..c9c53dd480 100644
--- a/internal/ns/nslogic.go
+++ b/internal/ns/nslogic.go
@@ -1,7 +1,7 @@
package ns
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/backrest/backup.go b/internal/operator/backrest/backup.go
index 8cd150cdf3..efdd97f7e9 100644
--- a/internal/operator/backrest/backup.go
+++ b/internal/operator/backrest/backup.go
@@ -1,7 +1,7 @@
package backrest
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/backrest/repo.go b/internal/operator/backrest/repo.go
index 28debffc17..86304055d1 100644
--- a/internal/operator/backrest/repo.go
+++ b/internal/operator/backrest/repo.go
@@ -1,7 +1,7 @@
package backrest
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/backrest/restore.go b/internal/operator/backrest/restore.go
index a8395b2484..4ff7797cda 100644
--- a/internal/operator/backrest/restore.go
+++ b/internal/operator/backrest/restore.go
@@ -1,7 +1,7 @@
package backrest
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/backrest/stanza.go b/internal/operator/backrest/stanza.go
index ba97c225c1..e9f1fc6c50 100644
--- a/internal/operator/backrest/stanza.go
+++ b/internal/operator/backrest/stanza.go
@@ -1,7 +1,7 @@
package backrest
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 9319a10c00..05455daaf7 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -4,7 +4,7 @@
package cluster
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index c7f1d4cc00..528ff775dc 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -4,7 +4,7 @@
package cluster
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/common.go b/internal/operator/cluster/common.go
index bbd497e582..1974462b92 100644
--- a/internal/operator/cluster/common.go
+++ b/internal/operator/cluster/common.go
@@ -1,7 +1,7 @@
package cluster
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/common_test.go b/internal/operator/cluster/common_test.go
index 8b83becb80..b2666b4d91 100644
--- a/internal/operator/cluster/common_test.go
+++ b/internal/operator/cluster/common_test.go
@@ -1,7 +1,7 @@
package cluster
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/exporter.go b/internal/operator/cluster/exporter.go
index c57d953f38..929f06b85f 100644
--- a/internal/operator/cluster/exporter.go
+++ b/internal/operator/cluster/exporter.go
@@ -1,7 +1,7 @@
package cluster
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/failover.go b/internal/operator/cluster/failover.go
index 5c2b43e173..d1ff4fb033 100644
--- a/internal/operator/cluster/failover.go
+++ b/internal/operator/cluster/failover.go
@@ -4,7 +4,7 @@
package cluster
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/failoverlogic.go b/internal/operator/cluster/failoverlogic.go
index f1ec1183d6..7391de79e4 100644
--- a/internal/operator/cluster/failoverlogic.go
+++ b/internal/operator/cluster/failoverlogic.go
@@ -4,7 +4,7 @@
package cluster
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/pgadmin.go b/internal/operator/cluster/pgadmin.go
index 9ed1bdbea5..9f058a89b7 100644
--- a/internal/operator/cluster/pgadmin.go
+++ b/internal/operator/cluster/pgadmin.go
@@ -1,7 +1,7 @@
package cluster
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/pgbadger.go b/internal/operator/cluster/pgbadger.go
index ed1b0fdfc2..b24c40ebf5 100644
--- a/internal/operator/cluster/pgbadger.go
+++ b/internal/operator/cluster/pgbadger.go
@@ -1,7 +1,7 @@
package cluster
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/pgbouncer.go b/internal/operator/cluster/pgbouncer.go
index 87768ac59c..35b788e9d8 100644
--- a/internal/operator/cluster/pgbouncer.go
+++ b/internal/operator/cluster/pgbouncer.go
@@ -1,7 +1,7 @@
package cluster
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/pgbouncer_test.go b/internal/operator/cluster/pgbouncer_test.go
index 0784afa58c..2c07739e43 100644
--- a/internal/operator/cluster/pgbouncer_test.go
+++ b/internal/operator/cluster/pgbouncer_test.go
@@ -1,7 +1,7 @@
package cluster
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/rmdata.go b/internal/operator/cluster/rmdata.go
index 27c224eaec..6aa4e986a0 100644
--- a/internal/operator/cluster/rmdata.go
+++ b/internal/operator/cluster/rmdata.go
@@ -4,7 +4,7 @@
package cluster
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/rolling.go b/internal/operator/cluster/rolling.go
index 2860db5fbd..39d50dff10 100644
--- a/internal/operator/cluster/rolling.go
+++ b/internal/operator/cluster/rolling.go
@@ -1,7 +1,7 @@
package cluster
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/service.go b/internal/operator/cluster/service.go
index 73edd7e35e..c651551d3d 100644
--- a/internal/operator/cluster/service.go
+++ b/internal/operator/cluster/service.go
@@ -4,7 +4,7 @@
package cluster
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/standby.go b/internal/operator/cluster/standby.go
index 0ec759e025..696ac1ad18 100644
--- a/internal/operator/cluster/standby.go
+++ b/internal/operator/cluster/standby.go
@@ -1,7 +1,7 @@
package cluster
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/cluster/upgrade.go b/internal/operator/cluster/upgrade.go
index fb5c344cdb..c55d405a24 100644
--- a/internal/operator/cluster/upgrade.go
+++ b/internal/operator/cluster/upgrade.go
@@ -1,7 +1,7 @@
package cluster
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index fe1cb37b26..94744111ca 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -1,7 +1,7 @@
package operator
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/clusterutilities_test.go b/internal/operator/clusterutilities_test.go
index 52e31aa66c..80824ddd6e 100644
--- a/internal/operator/clusterutilities_test.go
+++ b/internal/operator/clusterutilities_test.go
@@ -1,7 +1,7 @@
package operator
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/common.go b/internal/operator/common.go
index 382fbb1498..20734af392 100644
--- a/internal/operator/common.go
+++ b/internal/operator/common.go
@@ -1,7 +1,7 @@
package operator
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/common_test.go b/internal/operator/common_test.go
index 53035c9933..88bb7f633b 100644
--- a/internal/operator/common_test.go
+++ b/internal/operator/common_test.go
@@ -1,7 +1,7 @@
package operator
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/config/configutil.go b/internal/operator/config/configutil.go
index 9c11483dac..3205d35284 100644
--- a/internal/operator/config/configutil.go
+++ b/internal/operator/config/configutil.go
@@ -1,7 +1,7 @@
package config
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/config/dcs.go b/internal/operator/config/dcs.go
index fe405d05c1..16238bfdf4 100644
--- a/internal/operator/config/dcs.go
+++ b/internal/operator/config/dcs.go
@@ -1,7 +1,7 @@
package config
/*
- Copyright 2020 Crunchy Data Solutions, Ind.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/config/localdb.go b/internal/operator/config/localdb.go
index 2e4a630563..76d641d38e 100644
--- a/internal/operator/config/localdb.go
+++ b/internal/operator/config/localdb.go
@@ -1,7 +1,7 @@
package config
/*
- Copyright 2020 Crunchy Data Solutions, Inl.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/operatorupgrade/version-check.go b/internal/operator/operatorupgrade/version-check.go
index 8544c77b77..a63773e9f2 100644
--- a/internal/operator/operatorupgrade/version-check.go
+++ b/internal/operator/operatorupgrade/version-check.go
@@ -1,7 +1,7 @@
package operatorupgrade
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/pgbackrest.go b/internal/operator/pgbackrest.go
index 8e369e764c..19c255da2c 100644
--- a/internal/operator/pgbackrest.go
+++ b/internal/operator/pgbackrest.go
@@ -1,7 +1,7 @@
package operator
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/pgbackrest_test.go b/internal/operator/pgbackrest_test.go
index 046d2be770..38d09a6be6 100644
--- a/internal/operator/pgbackrest_test.go
+++ b/internal/operator/pgbackrest_test.go
@@ -1,7 +1,7 @@
package operator
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/pgdump/dump.go b/internal/operator/pgdump/dump.go
index 1df0efbc79..73bf7cc0a1 100644
--- a/internal/operator/pgdump/dump.go
+++ b/internal/operator/pgdump/dump.go
@@ -1,7 +1,7 @@
package pgdump
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/pgdump/restore.go b/internal/operator/pgdump/restore.go
index 38b118c35b..51f36d0f0e 100644
--- a/internal/operator/pgdump/restore.go
+++ b/internal/operator/pgdump/restore.go
@@ -1,7 +1,7 @@
package pgdump
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/pvc/pvc.go b/internal/operator/pvc/pvc.go
index 21d2fc2808..5e96d67c8e 100644
--- a/internal/operator/pvc/pvc.go
+++ b/internal/operator/pvc/pvc.go
@@ -1,7 +1,7 @@
package pvc
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/storage.go b/internal/operator/storage.go
index 83b3918ae7..b6c06b1abd 100644
--- a/internal/operator/storage.go
+++ b/internal/operator/storage.go
@@ -1,7 +1,7 @@
package operator
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/storage_test.go b/internal/operator/storage_test.go
index 44a235fff0..46b9161dbe 100644
--- a/internal/operator/storage_test.go
+++ b/internal/operator/storage_test.go
@@ -1,7 +1,7 @@
package operator
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/task/applypolicies.go b/internal/operator/task/applypolicies.go
index 11b568f0c4..823800451e 100644
--- a/internal/operator/task/applypolicies.go
+++ b/internal/operator/task/applypolicies.go
@@ -1,7 +1,7 @@
package task
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/task/rmdata.go b/internal/operator/task/rmdata.go
index eb9c9c2fe8..d65141b207 100644
--- a/internal/operator/task/rmdata.go
+++ b/internal/operator/task/rmdata.go
@@ -1,7 +1,7 @@
package task
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/task/workflow.go b/internal/operator/task/workflow.go
index 531e3e5eac..a890859293 100644
--- a/internal/operator/task/workflow.go
+++ b/internal/operator/task/workflow.go
@@ -1,7 +1,7 @@
package task
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/operator/wal.go b/internal/operator/wal.go
index 1b679755fb..b9a14c3219 100644
--- a/internal/operator/wal.go
+++ b/internal/operator/wal.go
@@ -1,7 +1,7 @@
package operator
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/patroni/doc.go b/internal/patroni/doc.go
index 63a42c84d3..6311d1e653 100644
--- a/internal/patroni/doc.go
+++ b/internal/patroni/doc.go
@@ -4,7 +4,7 @@
package patroni
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/patroni/patroni.go b/internal/patroni/patroni.go
index 111515c02a..3b20703c0e 100644
--- a/internal/patroni/patroni.go
+++ b/internal/patroni/patroni.go
@@ -1,7 +1,7 @@
package patroni
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/pgadmin/backoff.go b/internal/pgadmin/backoff.go
index 40cacaec13..ee62223e82 100644
--- a/internal/pgadmin/backoff.go
+++ b/internal/pgadmin/backoff.go
@@ -1,7 +1,7 @@
package pgadmin
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/pgadmin/backoff_test.go b/internal/pgadmin/backoff_test.go
index 09c0445526..ec4195df4c 100644
--- a/internal/pgadmin/backoff_test.go
+++ b/internal/pgadmin/backoff_test.go
@@ -1,7 +1,7 @@
package pgadmin
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/pgadmin/crypto.go b/internal/pgadmin/crypto.go
index 55ebc8b771..e2db5beb5d 100644
--- a/internal/pgadmin/crypto.go
+++ b/internal/pgadmin/crypto.go
@@ -1,7 +1,7 @@
package pgadmin
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/pgadmin/crypto_test.go b/internal/pgadmin/crypto_test.go
index 221b23fb80..aeb18a1fcb 100644
--- a/internal/pgadmin/crypto_test.go
+++ b/internal/pgadmin/crypto_test.go
@@ -1,7 +1,7 @@
package pgadmin
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/pgadmin/doc.go b/internal/pgadmin/doc.go
index 97900b0227..58bf983ab5 100644
--- a/internal/pgadmin/doc.go
+++ b/internal/pgadmin/doc.go
@@ -4,7 +4,7 @@ database which powers pgadmin */
package pgadmin
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/pgadmin/hash.go b/internal/pgadmin/hash.go
index 728faed22b..beaab646c9 100644
--- a/internal/pgadmin/hash.go
+++ b/internal/pgadmin/hash.go
@@ -1,7 +1,7 @@
package pgadmin
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/pgadmin/logic.go b/internal/pgadmin/logic.go
index 68426ae91b..9bd6cda94a 100644
--- a/internal/pgadmin/logic.go
+++ b/internal/pgadmin/logic.go
@@ -1,7 +1,7 @@
package pgadmin
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/pgadmin/runner.go b/internal/pgadmin/runner.go
index 7ce80a484c..ab052b431c 100644
--- a/internal/pgadmin/runner.go
+++ b/internal/pgadmin/runner.go
@@ -1,7 +1,7 @@
package pgadmin
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/pgadmin/server.go b/internal/pgadmin/server.go
index 26568e8806..b5ab3b6ef5 100644
--- a/internal/pgadmin/server.go
+++ b/internal/pgadmin/server.go
@@ -1,7 +1,7 @@
package pgadmin
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/postgres/doc.go b/internal/postgres/doc.go
index 974cb7c8df..2a2155a2cd 100644
--- a/internal/postgres/doc.go
+++ b/internal/postgres/doc.go
@@ -5,7 +5,7 @@
package postgres
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/postgres/password/doc.go b/internal/postgres/password/doc.go
index 6ea6563873..b0e22372f6 100644
--- a/internal/postgres/password/doc.go
+++ b/internal/postgres/password/doc.go
@@ -4,7 +4,7 @@
package password
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/postgres/password/md5.go b/internal/postgres/password/md5.go
index 56f9504608..1697e04cb9 100644
--- a/internal/postgres/password/md5.go
+++ b/internal/postgres/password/md5.go
@@ -1,7 +1,7 @@
package password
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/postgres/password/md5_test.go b/internal/postgres/password/md5_test.go
index 41c0711b04..7adabe8831 100644
--- a/internal/postgres/password/md5_test.go
+++ b/internal/postgres/password/md5_test.go
@@ -1,7 +1,7 @@
package password
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/postgres/password/password.go b/internal/postgres/password/password.go
index f63fb31492..923c854f00 100644
--- a/internal/postgres/password/password.go
+++ b/internal/postgres/password/password.go
@@ -1,7 +1,7 @@
package password
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/postgres/password/password_test.go b/internal/postgres/password/password_test.go
index d315c966bc..40962b9a75 100644
--- a/internal/postgres/password/password_test.go
+++ b/internal/postgres/password/password_test.go
@@ -1,7 +1,7 @@
package password
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/postgres/password/scram.go b/internal/postgres/password/scram.go
index 794575e4dd..1411af9636 100644
--- a/internal/postgres/password/scram.go
+++ b/internal/postgres/password/scram.go
@@ -1,7 +1,7 @@
package password
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/postgres/password/scram_test.go b/internal/postgres/password/scram_test.go
index 7cdac419d0..4995191655 100644
--- a/internal/postgres/password/scram_test.go
+++ b/internal/postgres/password/scram_test.go
@@ -1,7 +1,7 @@
package password
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/tlsutil/primitives.go b/internal/tlsutil/primitives.go
index 2ed4881e8e..363630fec1 100644
--- a/internal/tlsutil/primitives.go
+++ b/internal/tlsutil/primitives.go
@@ -1,7 +1,7 @@
package tlsutil
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/tlsutil/primitives_test.go b/internal/tlsutil/primitives_test.go
index 09d4dab6ce..684e4e3df6 100644
--- a/internal/tlsutil/primitives_test.go
+++ b/internal/tlsutil/primitives_test.go
@@ -1,7 +1,7 @@
package tlsutil
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/util/backrest.go b/internal/util/backrest.go
index a4b572f16b..50235ce0ad 100644
--- a/internal/util/backrest.go
+++ b/internal/util/backrest.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/util/cluster.go b/internal/util/cluster.go
index ef7796f439..c9d59d9bf6 100644
--- a/internal/util/cluster.go
+++ b/internal/util/cluster.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/util/cluster_test.go b/internal/util/cluster_test.go
index bf50277c8f..6bb8ea472a 100644
--- a/internal/util/cluster_test.go
+++ b/internal/util/cluster_test.go
@@ -1,7 +1,7 @@
package util
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/util/exporter.go b/internal/util/exporter.go
index d46ad8cf53..6b4423b13a 100644
--- a/internal/util/exporter.go
+++ b/internal/util/exporter.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/util/exporter_test.go b/internal/util/exporter_test.go
index ffbde3a6e1..b614c4272d 100644
--- a/internal/util/exporter_test.go
+++ b/internal/util/exporter_test.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/util/failover.go b/internal/util/failover.go
index a19d887b25..37a899e7f8 100644
--- a/internal/util/failover.go
+++ b/internal/util/failover.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/util/pgbouncer.go b/internal/util/pgbouncer.go
index 0b3a9a528d..ff1033fada 100644
--- a/internal/util/pgbouncer.go
+++ b/internal/util/pgbouncer.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/util/policy.go b/internal/util/policy.go
index 25fe2953d7..5fe260ffef 100644
--- a/internal/util/policy.go
+++ b/internal/util/policy.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/util/secrets.go b/internal/util/secrets.go
index be8c2f4288..6a692ce377 100644
--- a/internal/util/secrets.go
+++ b/internal/util/secrets.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/util/secrets_test.go b/internal/util/secrets_test.go
index 423beb5e03..8dca7649bb 100644
--- a/internal/util/secrets_test.go
+++ b/internal/util/secrets_test.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/util/ssh.go b/internal/util/ssh.go
index e116716d12..17c4904753 100644
--- a/internal/util/ssh.go
+++ b/internal/util/ssh.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/internal/util/util.go b/internal/util/util.go
index 3559d09894..532568e783 100644
--- a/internal/util/util.go
+++ b/internal/util/util.go
@@ -1,7 +1,7 @@
package util
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go
index 4ad84d3b12..b8b88b4e1d 100644
--- a/pkg/apis/crunchydata.com/v1/cluster.go
+++ b/pkg/apis/crunchydata.com/v1/cluster.go
@@ -1,7 +1,7 @@
package v1
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apis/crunchydata.com/v1/cluster_test.go b/pkg/apis/crunchydata.com/v1/cluster_test.go
index 4af663cf3e..8a31175cf4 100644
--- a/pkg/apis/crunchydata.com/v1/cluster_test.go
+++ b/pkg/apis/crunchydata.com/v1/cluster_test.go
@@ -1,7 +1,7 @@
package v1
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apis/crunchydata.com/v1/common.go b/pkg/apis/crunchydata.com/v1/common.go
index 33818edf72..c768a0d408 100644
--- a/pkg/apis/crunchydata.com/v1/common.go
+++ b/pkg/apis/crunchydata.com/v1/common.go
@@ -1,7 +1,7 @@
package v1
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apis/crunchydata.com/v1/common_test.go b/pkg/apis/crunchydata.com/v1/common_test.go
index 8ad909e64f..cde6832420 100644
--- a/pkg/apis/crunchydata.com/v1/common_test.go
+++ b/pkg/apis/crunchydata.com/v1/common_test.go
@@ -1,7 +1,7 @@
package v1
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apis/crunchydata.com/v1/doc.go b/pkg/apis/crunchydata.com/v1/doc.go
index 4c793e782f..62cd5bc582 100644
--- a/pkg/apis/crunchydata.com/v1/doc.go
+++ b/pkg/apis/crunchydata.com/v1/doc.go
@@ -108,7 +108,7 @@ package v1
// +k8s:deepcopy-gen=package,register
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apis/crunchydata.com/v1/errors.go b/pkg/apis/crunchydata.com/v1/errors.go
index 6c8fddbb2d..9a1fbc30b1 100644
--- a/pkg/apis/crunchydata.com/v1/errors.go
+++ b/pkg/apis/crunchydata.com/v1/errors.go
@@ -3,7 +3,7 @@ package v1
import "errors"
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apis/crunchydata.com/v1/policy.go b/pkg/apis/crunchydata.com/v1/policy.go
index 904567d496..df08940188 100644
--- a/pkg/apis/crunchydata.com/v1/policy.go
+++ b/pkg/apis/crunchydata.com/v1/policy.go
@@ -1,7 +1,7 @@
package v1
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apis/crunchydata.com/v1/register.go b/pkg/apis/crunchydata.com/v1/register.go
index 7b7359f504..586424126d 100644
--- a/pkg/apis/crunchydata.com/v1/register.go
+++ b/pkg/apis/crunchydata.com/v1/register.go
@@ -1,7 +1,7 @@
package v1
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/pkg/apis/crunchydata.com/v1/replica.go b/pkg/apis/crunchydata.com/v1/replica.go
index 08a830a8d3..878ff63481 100644
--- a/pkg/apis/crunchydata.com/v1/replica.go
+++ b/pkg/apis/crunchydata.com/v1/replica.go
@@ -1,7 +1,7 @@
package v1
/*
- Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apis/crunchydata.com/v1/task.go b/pkg/apis/crunchydata.com/v1/task.go
index d6791c8415..c7eb9e4605 100644
--- a/pkg/apis/crunchydata.com/v1/task.go
+++ b/pkg/apis/crunchydata.com/v1/task.go
@@ -1,7 +1,7 @@
package v1
/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go b/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go
index 86b9b5ed4d..6534215bbf 100644
--- a/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go
+++ b/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go
@@ -1,7 +1,7 @@
// +build !ignore_autogenerated
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/backrestmsgs.go b/pkg/apiservermsgs/backrestmsgs.go
index 29acd33a99..5a11e963bc 100644
--- a/pkg/apiservermsgs/backrestmsgs.go
+++ b/pkg/apiservermsgs/backrestmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/catmsgs.go b/pkg/apiservermsgs/catmsgs.go
index ded313371f..15f7d5cf85 100644
--- a/pkg/apiservermsgs/catmsgs.go
+++ b/pkg/apiservermsgs/catmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index d6c588eb56..d4287fd3cd 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/common.go b/pkg/apiservermsgs/common.go
index 093405b4fd..d52499aa4a 100644
--- a/pkg/apiservermsgs/common.go
+++ b/pkg/apiservermsgs/common.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/configmsgs.go b/pkg/apiservermsgs/configmsgs.go
index 06ed680008..325e2281e5 100644
--- a/pkg/apiservermsgs/configmsgs.go
+++ b/pkg/apiservermsgs/configmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/dfmsgs.go b/pkg/apiservermsgs/dfmsgs.go
index 22541840e7..8d947768ef 100644
--- a/pkg/apiservermsgs/dfmsgs.go
+++ b/pkg/apiservermsgs/dfmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/failovermsgs.go b/pkg/apiservermsgs/failovermsgs.go
index bfeefcb49a..b51b11c3e4 100644
--- a/pkg/apiservermsgs/failovermsgs.go
+++ b/pkg/apiservermsgs/failovermsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/labelmsgs.go b/pkg/apiservermsgs/labelmsgs.go
index eabf3e8ecf..d0a914840e 100644
--- a/pkg/apiservermsgs/labelmsgs.go
+++ b/pkg/apiservermsgs/labelmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/namespacemsgs.go b/pkg/apiservermsgs/namespacemsgs.go
index 3921604a00..5fc4665a9b 100644
--- a/pkg/apiservermsgs/namespacemsgs.go
+++ b/pkg/apiservermsgs/namespacemsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/pgadminmsgs.go b/pkg/apiservermsgs/pgadminmsgs.go
index 5d68b9352d..73e4475294 100644
--- a/pkg/apiservermsgs/pgadminmsgs.go
+++ b/pkg/apiservermsgs/pgadminmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/pgbouncermsgs.go b/pkg/apiservermsgs/pgbouncermsgs.go
index e971e31424..9dd37ffb14 100644
--- a/pkg/apiservermsgs/pgbouncermsgs.go
+++ b/pkg/apiservermsgs/pgbouncermsgs.go
@@ -3,7 +3,7 @@ package apiservermsgs
import v1 "k8s.io/api/core/v1"
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/pgdumpmsgs.go b/pkg/apiservermsgs/pgdumpmsgs.go
index 3afb00955a..c83269648a 100644
--- a/pkg/apiservermsgs/pgdumpmsgs.go
+++ b/pkg/apiservermsgs/pgdumpmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/pgorolemsgs.go b/pkg/apiservermsgs/pgorolemsgs.go
index 1f62efa1ab..6aae3494d0 100644
--- a/pkg/apiservermsgs/pgorolemsgs.go
+++ b/pkg/apiservermsgs/pgorolemsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/pgousermsgs.go b/pkg/apiservermsgs/pgousermsgs.go
index 815f8f1fdf..4690c1f888 100644
--- a/pkg/apiservermsgs/pgousermsgs.go
+++ b/pkg/apiservermsgs/pgousermsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/policymsgs.go b/pkg/apiservermsgs/policymsgs.go
index ba5c6cea21..023676cf6b 100644
--- a/pkg/apiservermsgs/policymsgs.go
+++ b/pkg/apiservermsgs/policymsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/pvcmsgs.go b/pkg/apiservermsgs/pvcmsgs.go
index f59ddd7983..da902da96b 100644
--- a/pkg/apiservermsgs/pvcmsgs.go
+++ b/pkg/apiservermsgs/pvcmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/reloadmsgs.go b/pkg/apiservermsgs/reloadmsgs.go
index 34a3738399..11c4293980 100644
--- a/pkg/apiservermsgs/reloadmsgs.go
+++ b/pkg/apiservermsgs/reloadmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/restartmsgs.go b/pkg/apiservermsgs/restartmsgs.go
index a36307739c..e9bf5b8b57 100644
--- a/pkg/apiservermsgs/restartmsgs.go
+++ b/pkg/apiservermsgs/restartmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/schedulemsgs.go b/pkg/apiservermsgs/schedulemsgs.go
index 4b037a5992..47e9f90ca8 100644
--- a/pkg/apiservermsgs/schedulemsgs.go
+++ b/pkg/apiservermsgs/schedulemsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/statusmsgs.go b/pkg/apiservermsgs/statusmsgs.go
index 94994c75b9..72a6c79aab 100644
--- a/pkg/apiservermsgs/statusmsgs.go
+++ b/pkg/apiservermsgs/statusmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/upgrademsgs.go b/pkg/apiservermsgs/upgrademsgs.go
index ab7fecc47a..a360c036c7 100644
--- a/pkg/apiservermsgs/upgrademsgs.go
+++ b/pkg/apiservermsgs/upgrademsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/usermsgs.go b/pkg/apiservermsgs/usermsgs.go
index 9c63c3d483..4a716966ba 100644
--- a/pkg/apiservermsgs/usermsgs.go
+++ b/pkg/apiservermsgs/usermsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/usermsgs_test.go b/pkg/apiservermsgs/usermsgs_test.go
index 207eda3757..d6cc93b2ca 100644
--- a/pkg/apiservermsgs/usermsgs_test.go
+++ b/pkg/apiservermsgs/usermsgs_test.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/versionmsgs.go b/pkg/apiservermsgs/versionmsgs.go
index 7685221c44..38ab640cdb 100644
--- a/pkg/apiservermsgs/versionmsgs.go
+++ b/pkg/apiservermsgs/versionmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/watchmsgs.go b/pkg/apiservermsgs/watchmsgs.go
index 9d50a81ccd..02c1472b90 100644
--- a/pkg/apiservermsgs/watchmsgs.go
+++ b/pkg/apiservermsgs/watchmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/apiservermsgs/workflowmsgs.go b/pkg/apiservermsgs/workflowmsgs.go
index 2908d75347..3d4c44353a 100644
--- a/pkg/apiservermsgs/workflowmsgs.go
+++ b/pkg/apiservermsgs/workflowmsgs.go
@@ -1,7 +1,7 @@
package apiservermsgs
/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/events/eventing.go b/pkg/events/eventing.go
index 5c40861352..1a13b932f1 100644
--- a/pkg/events/eventing.go
+++ b/pkg/events/eventing.go
@@ -1,7 +1,7 @@
package events
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/events/eventtype.go b/pkg/events/eventtype.go
index 95a09f8121..b93159e77e 100644
--- a/pkg/events/eventtype.go
+++ b/pkg/events/eventtype.go
@@ -1,7 +1,7 @@
package events
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/events/pgoeventtype.go b/pkg/events/pgoeventtype.go
index 75c076b311..4e4f114868 100644
--- a/pkg/events/pgoeventtype.go
+++ b/pkg/events/pgoeventtype.go
@@ -1,7 +1,7 @@
package events
/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
+ Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/clientset.go b/pkg/generated/clientset/versioned/clientset.go
index 061bbaed95..7a20991345 100644
--- a/pkg/generated/clientset/versioned/clientset.go
+++ b/pkg/generated/clientset/versioned/clientset.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/doc.go b/pkg/generated/clientset/versioned/doc.go
index e2534c0fe7..f862afa1b0 100644
--- a/pkg/generated/clientset/versioned/doc.go
+++ b/pkg/generated/clientset/versioned/doc.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/fake/clientset_generated.go b/pkg/generated/clientset/versioned/fake/clientset_generated.go
index 384d0e7737..5f9ec9bbbb 100644
--- a/pkg/generated/clientset/versioned/fake/clientset_generated.go
+++ b/pkg/generated/clientset/versioned/fake/clientset_generated.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/fake/doc.go b/pkg/generated/clientset/versioned/fake/doc.go
index 6318a06f3c..e9300efbfe 100644
--- a/pkg/generated/clientset/versioned/fake/doc.go
+++ b/pkg/generated/clientset/versioned/fake/doc.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/fake/register.go b/pkg/generated/clientset/versioned/fake/register.go
index 62032825ad..d33f1544e9 100644
--- a/pkg/generated/clientset/versioned/fake/register.go
+++ b/pkg/generated/clientset/versioned/fake/register.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/scheme/doc.go b/pkg/generated/clientset/versioned/scheme/doc.go
index 462fec5e30..49cfafd10d 100644
--- a/pkg/generated/clientset/versioned/scheme/doc.go
+++ b/pkg/generated/clientset/versioned/scheme/doc.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/scheme/register.go b/pkg/generated/clientset/versioned/scheme/register.go
index 4850f74045..2ce45ab80b 100644
--- a/pkg/generated/clientset/versioned/scheme/register.go
+++ b/pkg/generated/clientset/versioned/scheme/register.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/crunchydata.com_client.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/crunchydata.com_client.go
index aac71b2aa3..4c862528fa 100644
--- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/crunchydata.com_client.go
+++ b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/crunchydata.com_client.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/doc.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/doc.go
index b7311c21af..21c249ea20 100644
--- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/doc.go
+++ b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/doc.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/doc.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/doc.go
index 759d8fff95..14f506a6fb 100644
--- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/doc.go
+++ b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/doc.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_crunchydata.com_client.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_crunchydata.com_client.go
index f8d6b6b350..33ad7a5550 100644
--- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_crunchydata.com_client.go
+++ b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_crunchydata.com_client.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgcluster.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgcluster.go
index 177fe4240c..ff11262ddf 100644
--- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgcluster.go
+++ b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgcluster.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgpolicy.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgpolicy.go
index 746a49a17c..5a661bb23a 100644
--- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgpolicy.go
+++ b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgpolicy.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgreplica.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgreplica.go
index 70a1e8a559..05708f6b48 100644
--- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgreplica.go
+++ b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgreplica.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgtask.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgtask.go
index 6ec34a55fd..df8a0479cd 100644
--- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgtask.go
+++ b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgtask.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/generated_expansion.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/generated_expansion.go
index 066f811e51..5ea3a63db1 100644
--- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/generated_expansion.go
+++ b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/generated_expansion.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgcluster.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgcluster.go
index 6ccbb22d73..45d3777e84 100644
--- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgcluster.go
+++ b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgcluster.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgpolicy.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgpolicy.go
index 1d9711033c..8dbd0227c7 100644
--- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgpolicy.go
+++ b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgpolicy.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgreplica.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgreplica.go
index f9ffed63eb..c9c553db30 100644
--- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgreplica.go
+++ b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgreplica.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgtask.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgtask.go
index 5971a76095..e1095c71e9 100644
--- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgtask.go
+++ b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgtask.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/informers/externalversions/crunchydata.com/interface.go b/pkg/generated/informers/externalversions/crunchydata.com/interface.go
index dfe44a0fcb..698763aff3 100644
--- a/pkg/generated/informers/externalversions/crunchydata.com/interface.go
+++ b/pkg/generated/informers/externalversions/crunchydata.com/interface.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/informers/externalversions/crunchydata.com/v1/interface.go b/pkg/generated/informers/externalversions/crunchydata.com/v1/interface.go
index c34a37f8e7..b30e24b239 100644
--- a/pkg/generated/informers/externalversions/crunchydata.com/v1/interface.go
+++ b/pkg/generated/informers/externalversions/crunchydata.com/v1/interface.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgcluster.go b/pkg/generated/informers/externalversions/crunchydata.com/v1/pgcluster.go
index c11596abe9..1e9753596d 100644
--- a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgcluster.go
+++ b/pkg/generated/informers/externalversions/crunchydata.com/v1/pgcluster.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgpolicy.go b/pkg/generated/informers/externalversions/crunchydata.com/v1/pgpolicy.go
index 2016ae2a2e..741ad3c39e 100644
--- a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgpolicy.go
+++ b/pkg/generated/informers/externalversions/crunchydata.com/v1/pgpolicy.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgreplica.go b/pkg/generated/informers/externalversions/crunchydata.com/v1/pgreplica.go
index 9387d0937b..c7946ec142 100644
--- a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgreplica.go
+++ b/pkg/generated/informers/externalversions/crunchydata.com/v1/pgreplica.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgtask.go b/pkg/generated/informers/externalversions/crunchydata.com/v1/pgtask.go
index d08c342305..44398863fa 100644
--- a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgtask.go
+++ b/pkg/generated/informers/externalversions/crunchydata.com/v1/pgtask.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/informers/externalversions/factory.go b/pkg/generated/informers/externalversions/factory.go
index 56886a005a..65c18752b4 100644
--- a/pkg/generated/informers/externalversions/factory.go
+++ b/pkg/generated/informers/externalversions/factory.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/informers/externalversions/generic.go b/pkg/generated/informers/externalversions/generic.go
index 130dd5ad37..48e7491d80 100644
--- a/pkg/generated/informers/externalversions/generic.go
+++ b/pkg/generated/informers/externalversions/generic.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/informers/externalversions/internalinterfaces/factory_interfaces.go b/pkg/generated/informers/externalversions/internalinterfaces/factory_interfaces.go
index 4086ab3a09..130bc043a8 100644
--- a/pkg/generated/informers/externalversions/internalinterfaces/factory_interfaces.go
+++ b/pkg/generated/informers/externalversions/internalinterfaces/factory_interfaces.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/listers/crunchydata.com/v1/expansion_generated.go b/pkg/generated/listers/crunchydata.com/v1/expansion_generated.go
index ca6b77b1a3..369c56b717 100644
--- a/pkg/generated/listers/crunchydata.com/v1/expansion_generated.go
+++ b/pkg/generated/listers/crunchydata.com/v1/expansion_generated.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/listers/crunchydata.com/v1/pgcluster.go b/pkg/generated/listers/crunchydata.com/v1/pgcluster.go
index 4dd8121f86..7bcf7f4328 100644
--- a/pkg/generated/listers/crunchydata.com/v1/pgcluster.go
+++ b/pkg/generated/listers/crunchydata.com/v1/pgcluster.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/listers/crunchydata.com/v1/pgpolicy.go b/pkg/generated/listers/crunchydata.com/v1/pgpolicy.go
index 03740c4b71..f4b39358a0 100644
--- a/pkg/generated/listers/crunchydata.com/v1/pgpolicy.go
+++ b/pkg/generated/listers/crunchydata.com/v1/pgpolicy.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/listers/crunchydata.com/v1/pgreplica.go b/pkg/generated/listers/crunchydata.com/v1/pgreplica.go
index b6cee83186..f3bdf3bd68 100644
--- a/pkg/generated/listers/crunchydata.com/v1/pgreplica.go
+++ b/pkg/generated/listers/crunchydata.com/v1/pgreplica.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pkg/generated/listers/crunchydata.com/v1/pgtask.go b/pkg/generated/listers/crunchydata.com/v1/pgtask.go
index c7d30868a8..1a46df7b78 100644
--- a/pkg/generated/listers/crunchydata.com/v1/pgtask.go
+++ b/pkg/generated/listers/crunchydata.com/v1/pgtask.go
@@ -1,5 +1,5 @@
/*
-Copyright 2020 Crunchy Data Solutions, Inc.
+Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/pv/create-pv-nfs-label.sh b/pv/create-pv-nfs-label.sh
index a77e3e68e3..a347a907fd 100755
--- a/pv/create-pv-nfs-label.sh
+++ b/pv/create-pv-nfs-label.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/pv/create-pv-nfs-legacy.sh b/pv/create-pv-nfs-legacy.sh
index 4850e73652..96d698e159 100755
--- a/pv/create-pv-nfs-legacy.sh
+++ b/pv/create-pv-nfs-legacy.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-# Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/pv/create-pv-nfs.sh b/pv/create-pv-nfs.sh
index 8b2ef4ab67..e1e71c95d8 100755
--- a/pv/create-pv-nfs.sh
+++ b/pv/create-pv-nfs.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/pv/create-pv.sh b/pv/create-pv.sh
index 46bbf4dbe8..6d9ede0b71 100755
--- a/pv/create-pv.sh
+++ b/pv/create-pv.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/pv/delete-pv.sh b/pv/delete-pv.sh
index cd653a1778..b3d7422ff2 100755
--- a/pv/delete-pv.sh
+++ b/pv/delete-pv.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
+# Copyright 2017 - 2021 Crunchy Data Solutions, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_backup_test.go b/testing/pgo_cli/cluster_backup_test.go
index d2f8508c3d..ceeefe2e5a 100644
--- a/testing/pgo_cli/cluster_backup_test.go
+++ b/testing/pgo_cli/cluster_backup_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_cat_test.go b/testing/pgo_cli/cluster_cat_test.go
index 4cb159be8d..aea958fb15 100644
--- a/testing/pgo_cli/cluster_cat_test.go
+++ b/testing/pgo_cli/cluster_cat_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_create_test.go b/testing/pgo_cli/cluster_create_test.go
index 26f0c2be4f..f0579de8cd 100644
--- a/testing/pgo_cli/cluster_create_test.go
+++ b/testing/pgo_cli/cluster_create_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_delete_test.go b/testing/pgo_cli/cluster_delete_test.go
index cba99408f7..1285b6026e 100644
--- a/testing/pgo_cli/cluster_delete_test.go
+++ b/testing/pgo_cli/cluster_delete_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_df_test.go b/testing/pgo_cli/cluster_df_test.go
index 8171a7aa45..91ae0b8092 100644
--- a/testing/pgo_cli/cluster_df_test.go
+++ b/testing/pgo_cli/cluster_df_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_failover_test.go b/testing/pgo_cli/cluster_failover_test.go
index d35a6d87f5..ac4f2a40c6 100644
--- a/testing/pgo_cli/cluster_failover_test.go
+++ b/testing/pgo_cli/cluster_failover_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_label_test.go b/testing/pgo_cli/cluster_label_test.go
index 0f54f7af93..ccf4a17461 100644
--- a/testing/pgo_cli/cluster_label_test.go
+++ b/testing/pgo_cli/cluster_label_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_pgbouncer_test.go b/testing/pgo_cli/cluster_pgbouncer_test.go
index 9c5b72ba66..b0f9199881 100644
--- a/testing/pgo_cli/cluster_pgbouncer_test.go
+++ b/testing/pgo_cli/cluster_pgbouncer_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_policy_test.go b/testing/pgo_cli/cluster_policy_test.go
index df7197deb4..66db7c6080 100644
--- a/testing/pgo_cli/cluster_policy_test.go
+++ b/testing/pgo_cli/cluster_policy_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_pvc_test.go b/testing/pgo_cli/cluster_pvc_test.go
index bd91b28435..0009225e4a 100644
--- a/testing/pgo_cli/cluster_pvc_test.go
+++ b/testing/pgo_cli/cluster_pvc_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_reload_test.go b/testing/pgo_cli/cluster_reload_test.go
index e2900ee4fb..02cf63a479 100644
--- a/testing/pgo_cli/cluster_reload_test.go
+++ b/testing/pgo_cli/cluster_reload_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_restart_test.go b/testing/pgo_cli/cluster_restart_test.go
index 9daeea644e..c46763731a 100644
--- a/testing/pgo_cli/cluster_restart_test.go
+++ b/testing/pgo_cli/cluster_restart_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_scale_test.go b/testing/pgo_cli/cluster_scale_test.go
index 11ce9a9c21..219e44f582 100644
--- a/testing/pgo_cli/cluster_scale_test.go
+++ b/testing/pgo_cli/cluster_scale_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_scaledown_test.go b/testing/pgo_cli/cluster_scaledown_test.go
index f1926a4d4d..5e9dc16b28 100644
--- a/testing/pgo_cli/cluster_scaledown_test.go
+++ b/testing/pgo_cli/cluster_scaledown_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_test_test.go b/testing/pgo_cli/cluster_test_test.go
index 153d47f467..76100eb9f0 100644
--- a/testing/pgo_cli/cluster_test_test.go
+++ b/testing/pgo_cli/cluster_test_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/cluster_user_test.go b/testing/pgo_cli/cluster_user_test.go
index 9e59757a9a..964bb6b58c 100644
--- a/testing/pgo_cli/cluster_user_test.go
+++ b/testing/pgo_cli/cluster_user_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/operator_namespace_test.go b/testing/pgo_cli/operator_namespace_test.go
index ef327c2ece..57bc685ea2 100644
--- a/testing/pgo_cli/operator_namespace_test.go
+++ b/testing/pgo_cli/operator_namespace_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/operator_rbac_test.go b/testing/pgo_cli/operator_rbac_test.go
index 8fa4609894..569f1bd090 100644
--- a/testing/pgo_cli/operator_rbac_test.go
+++ b/testing/pgo_cli/operator_rbac_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/operator_test.go b/testing/pgo_cli/operator_test.go
index 743b872614..09e8148c68 100644
--- a/testing/pgo_cli/operator_test.go
+++ b/testing/pgo_cli/operator_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/suite_helpers_test.go b/testing/pgo_cli/suite_helpers_test.go
index dcaee7b0f0..57f3a7ca4a 100644
--- a/testing/pgo_cli/suite_helpers_test.go
+++ b/testing/pgo_cli/suite_helpers_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/suite_pgo_cmd_test.go b/testing/pgo_cli/suite_pgo_cmd_test.go
index 91aec62228..245d314aa7 100644
--- a/testing/pgo_cli/suite_pgo_cmd_test.go
+++ b/testing/pgo_cli/suite_pgo_cmd_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
diff --git a/testing/pgo_cli/suite_test.go b/testing/pgo_cli/suite_test.go
index 4f2056d08e..9429f28278 100644
--- a/testing/pgo_cli/suite_test.go
+++ b/testing/pgo_cli/suite_test.go
@@ -1,7 +1,7 @@
package pgo_cli_test
/*
- Copyright 2020 Crunchy Data Solutions, Inc.
+ Copyright 2020 - 2021 Crunchy Data Solutions, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
From 8556371512223c67796c83bf0308e018c843c21e Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 2 Jan 2021 12:46:02 -0500
Subject: [PATCH 107/276] Update explanation of how the default storage
configuration works
This now reflects how the Operator actually uses default storage
classes in deployment environments.
---
docs/content/installation/configuration.md | 26 ++++++++++------------
1 file changed, 12 insertions(+), 14 deletions(-)
diff --git a/docs/content/installation/configuration.md b/docs/content/installation/configuration.md
index ce097d2753..d984bc34bd 100644
--- a/docs/content/installation/configuration.md
+++ b/docs/content/installation/configuration.md
@@ -129,30 +129,28 @@ unique ID for each required storage configuration.
You can specify the default storage to use for PostgreSQL, pgBackRest, and other
elements that require storage that can outlast the lifetime of a Pod. While the
-PostgreSQL Operator defaults to using `hostpathstorage` to work with
-environments that are typically used to test, we recommend using one of the
-other storage classes in production deployments.
+PostgreSQL Operator defaults to using `default`, this configuration uses the
+default storage class available in your deployment environment.
| Name | Default | Required | Description |
|------|---------|----------|-------------|
-| `backrest_storage` | hostpathstorage | **Required** | Set the value of the storage configuration to use for the pgbackrest shared repository deployment created when a user specifies pgbackrest to be enabled on a cluster. |
-| `backup_storage` | hostpathstorage | **Required** | Set the value of the storage configuration to use for backups, including the storage for pgbackrest repo volumes. |
-| `primary_storage` | hostpathstorage | **Required** | Set to configure which storage definition to use when creating volumes used by PostgreSQL primaries on all newly created clusters. |
-| `replica_storage` | hostpathstorage | **Required** | Set to configure which storage definition to use when creating volumes used by PostgreSQL replicas on all newly created clusters. |
+| `backrest_storage` | default | **Required** | Set the value of the storage configuration to use for the pgbackrest shared repository deployment created when a user specifies pgbackrest to be enabled on a cluster. |
+| `backup_storage` | default | **Required** | Set the value of the storage configuration to use for backups, including the storage for pgbackrest repo volumes. |
+| `primary_storage` | default | **Required** | Set to configure which storage definition to use when creating volumes used by PostgreSQL primaries on all newly created clusters. |
+| `replica_storage` | default | **Required** | Set to configure which storage definition to use when creating volumes used by PostgreSQL replicas on all newly created clusters. |
| `wal_storage` | | | Set to configure which storage definition to use when creating volumes used for PostgreSQL Write-Ahead Log. |
#### Example Defaults
```yaml
-backrest_storage: 'nfsstorage'
-backup_storage: 'nfsstorage'
-primary_storage: 'nfsstorage'
-replica_storage: 'nfsstorage'
+backrest_storage: default
+backup_storage: default
+primary_storage: default
+replica_storage: default
```
-With the configuration shown above, the `nfsstorage` storage configuration would
-be used by default for the various containers created for a PG cluster
-(i.e. containers for the primary DB, replica DB's, backups and/or `pgBackRest`).
+With the configuration shown above, the default storage class available in the
+deployment environment is used.
### Considerations for Multi-Zone Cloud Environments
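The defaults described in this patch can be sketched as a configuration excerpt. This is an illustrative example only, not the complete configuration file: the surrounding key names shown here are assumptions, while the four `*_storage` keys and the `default` value come from the patch above.

```yaml
# Illustrative sketch of the default storage selections described above.
# Each key selects a named storage configuration; "default" maps to the
# default Kubernetes storage class available in the deployment environment.
backrest_storage: default   # pgBackRest shared repository volumes
backup_storage: default     # backup volumes, including pgBackRest repo volumes
primary_storage: default    # volumes for PostgreSQL primary instances
replica_storage: default    # volumes for PostgreSQL replica instances
# wal_storage is optional and unset by default; when set, it selects a
# separate storage configuration for the PostgreSQL Write-Ahead Log.
```

Because every key points at `default`, no per-environment storage class names (such as an NFS-backed configuration) need to be hardcoded; the cluster inherits whatever default storage class the Kubernetes environment provides.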
From 3f9026aff26d6bdb4719f6ed75012599938600b2 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sat, 2 Jan 2021 13:34:17 -0500
Subject: [PATCH 108/276] Update attribute name formatting in custom resource
docs
The name formatting now matches what the actual specifications
look like.
---
docs/content/custom-resources/_index.md | 152 ++++++++++++------------
1 file changed, 77 insertions(+), 75 deletions(-)
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index 3559d0eb14..370a0aee25 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -721,53 +721,54 @@ make changes, as described below.
| Attribute | Action | Description |
|-----------|--------|-------------|
-| Annotations | `create`, `update` | Specify Kubernetes [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) that can be applied to the different deployments managed by the PostgreSQL Operator (PostgreSQL, pgBackRest, pgBouncer). For more information, please see the "Annotations Specification" below. |
-| BackrestConfig | `create` | Optional references to pgBackRest configuration files
-| BackrestLimits | `create`, `update` | Specify the container resource limits that the pgBackRest repository should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| annotations | `create`, `update` | Specify Kubernetes [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) that can be applied to the different deployments managed by the PostgreSQL Operator (PostgreSQL, pgBackRest, pgBouncer). For more information, please see the "Annotations Specification" below. |
+| backrestConfig | `create` | Optional references to pgBackRest configuration files |
+| backrestLimits | `create`, `update` | Specify the container resource limits that the pgBackRest repository should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| backrestRepoPath | `create` | Optional reference to the location of the pgBackRest repository. |
| BackrestResources | `create`, `update` | Specify the container resource requests that the pgBackRest repository should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| BackrestS3Bucket | `create` | An optional parameter that specifies a S3 bucket that pgBackRest should use. |
-| BackrestS3Endpoint | `create` | An optional parameter that specifies the S3 endpoint pgBackRest should use. |
-| BackrestS3Region | `create` | An optional parameter that specifies a cloud region that pgBackRest should use. |
-| BackrestStorageTypes | `create` | An optional parameter that takes an array of different repositories types that can be used to store pgBackRest backups. Choices are `posix` and `s3`. If nothing is specified, it defaults to `posix`. (`local`, equivalent to `posix`, is available for backwards compatibility).|
-| BackrestS3URIStyle | `create` | An optional parameter that specifies if pgBackRest should use the `path` or `host` S3 URI style. |
-| BackrestS3VerifyTLS | `create` | An optional parameter that specifies if pgBackRest should verify the TLS endpoint. |
+| backrestS3Bucket | `create` | An optional parameter that specifies an S3 bucket that pgBackRest should use. |
+| backrestS3Endpoint | `create` | An optional parameter that specifies the S3 endpoint pgBackRest should use. |
+| backrestS3Region | `create` | An optional parameter that specifies a cloud region that pgBackRest should use. |
+| backrestS3URIStyle | `create` | An optional parameter that specifies if pgBackRest should use the `path` or `host` S3 URI style. |
+| backrestS3VerifyTLS | `create` | An optional parameter that specifies if pgBackRest should verify the TLS endpoint. |
| BackrestStorage | `create` | A specification that gives information about the storage attributes for the pgBackRest repository, which stores backups and archives, of the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This is required. |
-| CCPImage | `create` | The name of the PostgreSQL container image to use, e.g. `crunchy-postgres-ha` or `crunchy-postgres-ha-gis`. |
-| CCPImagePrefix | `create` | If provided, the image prefix (or registry) of the PostgreSQL container image, e.g. `registry.developers.crunchydata.com/crunchydata`. The default is to use the image prefix set in the PostgreSQL Operator configuration. |
-| CCPImageTag | `create` | The tag of the PostgreSQL container image to use, e.g. `{{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}}`. |
-| ClusterName | `create` | The name of the PostgreSQL cluster, e.g. `hippo`. This is used to group PostgreSQL instances (primary, replicas) together. |
-| CustomConfig | `create` | If specified, references a custom ConfigMap to use when bootstrapping a PostgreSQL cluster. For the shape of this file, please see the section on [Custom Configuration]({{< relref "/advanced/custom-configuration.md" >}}) |
-| Database | `create` | The name of a database that the PostgreSQL user can log into after the PostgreSQL cluster is created. |
-| DisableAutofail | `create`, `update` | If set to true, disables the high availability capabilities of a PostgreSQL cluster. By default, every cluster can have high availability if there is at least one replica. |
-| ExporterLimits | `create`, `update` | Specify the container resource limits that the `crunchy-postgres-exporter` sidecar uses when it is deployed with a PostgreSQL instance. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| Exporter | `create`,`update` | If `true`, deploys the `crunchy-postgres-exporter` sidecar for metrics collection |
-| ExporterPort | `create` | If `Exporter` is `true`, then this specifies the port that the metrics sidecar runs on (e.g. `9187`) |
-| ExporterResources | `create`, `update` | Specify the container resource requests that the `crunchy-postgres-exporter` sidecar uses when it is deployed with a PostgreSQL instance. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| Limits | `create`, `update` | Specify the container resource limits that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| Name | `create` | The name of the PostgreSQL instance that is the primary. On creation, this should be set to be the same as `ClusterName`. |
-| Namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
-| NodeAffinity | `create` | Sets the [node affinity rules]({{< relref "/architecture/high-availability/_index.md#node-affinity" >}}) for the PostgreSQL cluster and associated PostgreSQL instances. Can be overridden on a per-instance (`pgreplicas.crunchydata.com`) basis. Please see the `Node Affinity Specification` section below. |
-| PGBadger | `create`,`update` | If `true`, deploys the `crunchy-pgbadger` sidecar for query analysis. |
-| PGBadgerPort | `create` | If the `PGBadger` label is set, then this specifies the port that the pgBadger sidecar runs on (e.g. `10000`) |
-| PGDataSource | `create` | Used to indicate if a PostgreSQL cluster should bootstrap its data from a pgBackRest repository. This uses the PostgreSQL Data Source Specification, described below. |
-| PGOImagePrefix | `create` | If provided, the image prefix (or registry) of any PostgreSQL Operator images that are used for jobs, e.g. `registry.developers.crunchydata.com/crunchydata`. The default is to use the image prefix set in the PostgreSQL Operator configuration. |
-| PgBouncer | `create`, `update` | If specified, defines the attributes to use for the pgBouncer connection pooling deployment that can be used in conjunction with this PostgreSQL cluster. Please see the specification defined below. |
-| PodAntiAffinity | `create` | A required section. Sets the [pod anti-affinity rules]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity" >}}) for the PostgreSQL cluster and associated deployments. Please see the `Pod Anti-Affinity Specification` section below. |
-| Policies | `create` | If provided, a comma-separated list referring to `pgpolicies.crunchydata.com.Spec.Name` that should be run once the PostgreSQL primary is first initialized. |
-| Port | `create` | The port that PostgreSQL will run on, e.g. `5432`. |
+| backrestStorageTypes | `create` | An optional parameter that takes an array of different repository types that can be used to store pgBackRest backups. Choices are `posix` and `s3`. If nothing is specified, it defaults to `posix`. (`local`, equivalent to `posix`, is available for backwards compatibility). |
+| ccpimage | `create` | The name of the PostgreSQL container image to use, e.g. `crunchy-postgres-ha` or `crunchy-postgres-ha-gis`. |
+| ccpimageprefix | `create` | If provided, the image prefix (or registry) of the PostgreSQL container image, e.g. `registry.developers.crunchydata.com/crunchydata`. The default is to use the image prefix set in the PostgreSQL Operator configuration. |
+| ccpimagetag | `create` | The tag of the PostgreSQL container image to use, e.g. `{{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}}`. |
+| clustername | `create` | The name of the PostgreSQL cluster, e.g. `hippo`. This is used to group PostgreSQL instances (primary, replicas) together. |
+| customconfig | `create` | If specified, references a custom ConfigMap to use when bootstrapping a PostgreSQL cluster. For the shape of this file, please see the section on [Custom Configuration]({{< relref "/advanced/custom-configuration.md" >}}) |
+| database | `create` | The name of a database that the PostgreSQL user can log into after the PostgreSQL cluster is created. |
+| disableAutofail | `create`, `update` | If set to true, disables the high availability capabilities of a PostgreSQL cluster. By default, every cluster can have high availability if there is at least one replica. |
+| exporter | `create`,`update` | If `true`, deploys the `crunchy-postgres-exporter` sidecar for metrics collection |
+| exporterLimits | `create`, `update` | Specify the container resource limits that the `crunchy-postgres-exporter` sidecar uses when it is deployed with a PostgreSQL instance. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| exporterport | `create` | If `exporter` is `true`, then this specifies the port that the metrics sidecar runs on (e.g. `9187`) |
+| exporterResources | `create`, `update` | Specify the container resource requests that the `crunchy-postgres-exporter` sidecar uses when it is deployed with a PostgreSQL instance. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| limits | `create`, `update` | Specify the container resource limits that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| name | `create` | The name of the PostgreSQL instance that is the primary. On creation, this should be set to be the same as `clustername`. |
+| namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
+| nodeAffinity | `create` | Sets the [node affinity rules]({{< relref "/architecture/high-availability/_index.md#node-affinity" >}}) for the PostgreSQL cluster and associated PostgreSQL instances. Can be overridden on a per-instance (`pgreplicas.crunchydata.com`) basis. Please see the `Node Affinity Specification` section below. |
+| pgBadger | `create`,`update` | If `true`, deploys the `crunchy-pgbadger` sidecar for query analysis. |
+| pgbadgerport | `create` | If `pgBadger` is set to `true`, then this specifies the port that the pgBadger sidecar runs on (e.g. `10000`) |
+| pgBouncer | `create`, `update` | If specified, defines the attributes to use for the pgBouncer connection pooling deployment that can be used in conjunction with this PostgreSQL cluster. Please see the specification defined below. |
+| pgDataSource | `create` | Used to indicate if a PostgreSQL cluster should bootstrap its data from a pgBackRest repository. This uses the PostgreSQL Data Source Specification, described below. |
+| pgoimageprefix | `create` | If provided, the image prefix (or registry) of any PostgreSQL Operator images that are used for jobs, e.g. `registry.developers.crunchydata.com/crunchydata`. The default is to use the image prefix set in the PostgreSQL Operator configuration. |
+| podAntiAffinity | `create` | A required section. Sets the [pod anti-affinity rules]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity" >}}) for the PostgreSQL cluster and associated deployments. Please see the `Pod Anti-Affinity Specification` section below. |
+| policies | `create` | If provided, a comma-separated list referring to `pgpolicies.crunchydata.com.Spec.Name` that should be run once the PostgreSQL primary is first initialized. |
+| port | `create` | The port that PostgreSQL will run on, e.g. `5432`. |
| ReplicaStorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to that of `PrimaryStorage`. |
-| Replicas | `create` | The number of replicas to create after a PostgreSQL primary is first initialized. This only works on create; to scale a cluster after it is initialized, please use the [`pgo scale`]({{< relref "/pgo-client/reference/pgo_scale.md" >}}) command. |
-| Resources | `create`, `update` | Specify the container resource requests that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| ServiceType | `create`, `update` | Sets the Kubernetes [Service](https://kubernetes.io/docs/concepts/services-networking/service/) type to use for the cluster. If not set, defaults to `ClusterIP`. |
-| SyncReplication | `create` | If set to `true`, specifies the PostgreSQL cluster to use [synchronous replication]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity#synchronous-replication-guarding-against-transactions-loss" >}}).|
-| User | `create` | The name of the PostgreSQL user that is created when the PostgreSQL cluster is first created. |
-| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" as well as a way to add custom labels to clusters. This will disappear at some point. |
-| TablespaceMounts | `create`,`update` | Lists any tablespaces that are attached to the PostgreSQL cluster. Tablespaces can be added at a later time by updating the `TablespaceMounts` entry, but they cannot be removed. Stores a map of information, with the key being the name of the tablespace, and the value being a Storage Specification, defined below. |
-| TLS | `create` | Defines the attributes for enabling TLS for a PostgreSQL cluster. See TLS Specification below. |
-| TLSOnly | `create` | If set to true, requires client connections to use only TLS to connect to the PostgreSQL database. |
-| Tolerations | `create`,`update` | Any array of Kubernetes [Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/). Please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for how to set this field. |
-| Standby | `create`, `update` | If set to true, indicates that the PostgreSQL cluster is a "standby" cluster, i.e. is in read-only mode entirely. Please see [Kubernetes Multi-Cluster Deployments]({{< relref "/architecture/high-availability/multi-cluster-kubernetes.md" >}}) for more information. |
-| Shutdown | `create`, `update` | If set to true, indicates that a PostgreSQL cluster should shutdown. If set to false, indicates that a PostgreSQL cluster should be up and running. |
+| replicas | `create` | The number of replicas to create after a PostgreSQL primary is first initialized. This only works on create; to scale a cluster after it is initialized, please use the [`pgo scale`]({{< relref "/pgo-client/reference/pgo_scale.md" >}}) command. |
+| resources | `create`, `update` | Specify the container resource requests that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| serviceType | `create`, `update` | Sets the Kubernetes [Service](https://kubernetes.io/docs/concepts/services-networking/service/) type to use for the cluster. If not set, defaults to `ClusterIP`. |
+| shutdown | `create`, `update` | If set to true, indicates that a PostgreSQL cluster should shut down. If set to false, indicates that a PostgreSQL cluster should be up and running. |
+| standby | `create`, `update` | If set to true, indicates that the PostgreSQL cluster is a "standby" cluster, i.e. is in read-only mode entirely. Please see [Kubernetes Multi-Cluster Deployments]({{< relref "/architecture/high-availability/multi-cluster-kubernetes.md" >}}) for more information. |
+| syncReplication | `create` | If set to `true`, specifies the PostgreSQL cluster to use [synchronous replication]({{< relref "/architecture/high-availability/_index.md#synchronous-replication-guarding-against-transactions-loss" >}}). |
+| tablespaceMounts | `create`,`update` | Lists any tablespaces that are attached to the PostgreSQL cluster. Tablespaces can be added at a later time by updating the `tablespaceMounts` entry, but they cannot be removed. Stores a map of information, with the key being the name of the tablespace, and the value being a Storage Specification, defined below. |
+| tls | `create` | Defines the attributes for enabling TLS for a PostgreSQL cluster. See TLS Specification below. |
+| tlsOnly | `create` | If set to true, requires client connections to use only TLS to connect to the PostgreSQL database. |
+| tolerations | `create`,`update` | An array of Kubernetes [Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/). Please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for how to set this field. |
+| user | `create` | The name of the PostgreSQL user that is created when the PostgreSQL cluster is first created. |
+| userlabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" as well as a way to add custom labels to clusters. This will disappear at some point. |
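Rendered in a manifest, the lowercase attribute formatting above looks like the following abbreviated `pgclusters.crunchydata.com` sketch (illustrative values only; required sections such as the storage and pod anti-affinity specifications are omitted for brevity):

```yaml
apiVersion: crunchydata.com/v1
kind: Pgcluster
metadata:
  name: hippo
  namespace: pgo
spec:
  clustername: hippo
  # on creation, "name" should match "clustername"
  name: hippo
  namespace: pgo
  ccpimage: crunchy-postgres-ha
  database: hippo
  user: hippo
  port: "5432"
  replicas: "1"
```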
##### Storage Specification
@@ -778,13 +779,13 @@ attribute and how it works.
| Attribute | Action | Description |
|-----------|--------|-------------|
-| AccessMode| `create` | The name of the Kubernetes Persistent Volume [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) to use. |
-| MatchLabels | `create` | Only used with `StorageType` of `create`, used to match a particular subset of provisioned Persistent Volumes. |
-| Name | `create` | Only needed for `PrimaryStorage` in `pgclusters.crunchydata.com`.Used to identify the name of the PostgreSQL cluster. Should match `ClusterName`. |
-| Size | `create` | The size of the [Persistent Volume Claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVC). Must use a Kubernetes resource value, e.g. `20Gi`. |
-| StorageClass | `create` | The name of the Kubernetes [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) to use. |
-| StorageType | `create` | Set to `create` if storage is provisioned (e.g. using `hostpath`). Set to `dynamic` if using a dynamic storage provisioner, e.g. via a `StorageClass`. |
-| SupplementalGroups | `create` | If provided, a comma-separated list of group IDs to use in case it is needed to interface with a particular storage system. Typically used with NFS or hostpath storage. |
+| accessmode | `create` | The name of the Kubernetes Persistent Volume [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) to use. |
+| matchLabels | `create` | Only used with a `storagetype` of `create`; used to match a particular subset of provisioned Persistent Volumes. |
+| name | `create` | Only needed for `primarystorage` in `pgclusters.crunchydata.com`. Used to identify the name of the PostgreSQL cluster. Should match `clustername`. |
+| size | `create` | The size of the [Persistent Volume Claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVC). Must use a Kubernetes resource value, e.g. `20Gi`. |
+| storageclass | `create` | The name of the Kubernetes [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) to use. |
+| storagetype | `create` | Set to `create` if storage is provisioned (e.g. using `hostpath`). Set to `dynamic` if using a dynamic storage provisioner, e.g. via a `StorageClass`. |
+| supplementalgroups | `create` | If provided, a comma-separated list of group IDs to use in case it is needed to interface with a particular storage system. Typically used with NFS or hostpath storage. |
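Using these names, a storage specification stanza might look like the following sketch (values are illustrative; here it is shown under a hypothetical `primarystorage` attribute of a `pgclusters.crunchydata.com` resource):

```yaml
primarystorage:
  accessmode: ReadWriteOnce
  # for the primary storage, "name" should match the cluster name
  name: hippo
  size: 5Gi
  storageclass: standard
  storagetype: dynamic
```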
##### Node Affinity Specification
@@ -815,9 +816,9 @@ documentation.
| Attribute | Action | Description |
|-----------|--------|-------------|
-| Default | `create` | The default pod anti-affinity to use for all Pods managed in a given PostgreSQL cluster. |
-| PgBackRest | `create` | If set to a value that differs from `Default`, specifies the pod anti-affinity to use for just the pgBackRest repository. |
-| PgBouncer | `create` | If set to a value that differs from `Default`, specifies the pod anti-affinity to use for just the pgBouncer Pods. |
+| default | `create` | The default pod anti-affinity to use for all Pods managed in a given PostgreSQL cluster. |
+| pgBackRest | `create` | If set to a value that differs from `default`, specifies the pod anti-affinity to use for just the pgBackRest repository. |
+| pgBouncer | `create` | If set to a value that differs from `default`, specifies the pod anti-affinity to use for just the pgBouncer Pods. |
##### PostgreSQL Data Source Specification
@@ -828,8 +829,8 @@ spawning new PostgreSQL clusters.
| Attribute | Action | Description |
|-----------|--------|-------------|
-| RestoreFrom | `create` | The name of a PostgreSQL cluster, active or former, that will be used for bootstrapping the data of a new PostgreSQL cluster. |
-| RestoreOpts | `create` | Additional pgBackRest [restore options](https://pgbackrest.org/command.html#command-restore) that can be used as part of the bootstrapping operation, for example, point-in-time-recovery options. |
+| restoreFrom | `create` | The name of a PostgreSQL cluster, active or former, that will be used for bootstrapping the data of a new PostgreSQL cluster. |
+| restoreOpts | `create` | Additional pgBackRest [restore options](https://pgbackrest.org/command.html#command-restore) that can be used as part of the bootstrapping operation, for example, point-in-time-recovery options. |
##### TLS Specification
@@ -839,9 +840,9 @@ should be structured, please see [Enabling TLS in a PostgreSQL Cluster]({{< relr
| Attribute | Action | Description |
|-----------|--------|-------------|
-| CASecret | `create` | A reference to the name of a Kubernetes Secret that specifies a certificate authority for the PostgreSQL cluster to trust. |
-| ReplicationTLSSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair for authenticating the replication user. Must be used with `CASecret` and `TLSSecret`. |
-| TLSSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair that is used for the PostgreSQL instance to identify itself and perform TLS communications with PostgreSQL clients. Must be used with `CASecret`. |
+| caSecret | `create` | A reference to the name of a Kubernetes Secret that specifies a certificate authority for the PostgreSQL cluster to trust. |
+| replicationTLSSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair for authenticating the replication user. Must be used with `caSecret` and `tlsSecret`. |
+| tlsSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair that is used for the PostgreSQL instance to identify itself and perform TLS communications with PostgreSQL clients. Must be used with `caSecret`. |
##### pgBouncer Specification
@@ -852,11 +853,11 @@ a PostgreSQL cluster to help with failover scenarios too.
| Attribute | Action | Description |
|-----------|--------|-------------|
-| Limits | `create`, `update` | Specify the container resource limits that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| Replicas | `create`, `update` | The number of pgBouncer instances to deploy. Must be set to at least `1` to deploy pgBouncer. Setting to `0` removes an existing pgBouncer deployment for the PostgreSQL cluster. |
-| Resources | `create`, `update` | Specify the container resource requests that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
-| ServiceType | `create`, `update` | Sets the Kubernetes [Service](https://kubernetes.io/docs/concepts/services-networking/service/) type to use for the cluster. If not set, defaults to the `ServiceType` set for the PostgreSQL cluster. |
-| TLSSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair that is used for the pgBouncer instance to identify itself and perform TLS communications with PostgreSQL clients. Must be used with the parent Spec `TLSSecret` and `CASecret`. |
+| limits | `create`, `update` | Specify the container resource limits that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| replicas | `create`, `update` | The number of pgBouncer instances to deploy. Must be set to at least `1` to deploy pgBouncer. Setting to `0` removes an existing pgBouncer deployment for the PostgreSQL cluster. |
+| resources | `create`, `update` | Specify the container resource requests that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
+| serviceType | `create`, `update` | Sets the Kubernetes [Service](https://kubernetes.io/docs/concepts/services-networking/service/) type to use for the cluster. If not set, defaults to the `serviceType` set for the PostgreSQL cluster. |
+| tlsSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair that is used for the pgBouncer instance to identify itself and perform TLS communications with PostgreSQL clients. Must be used with the parent Spec `tlsSecret` and `caSecret`. |
##### Annotations Specification
@@ -874,10 +875,10 @@ different deployment groups.
| Attribute | Action | Description |
|-----------|--------|-------------|
-| Backrest | `create`, `update` | Specify annotations that are only applied to the pgBackRest deployments |
-| Global | `create`, `update` | Specify annotations that are applied to the PostgreSQL, pgBackRest, and pgBouncer deployments |
-| PgBouncer | `create`, `update` | Specify annotations that are only applied to the pgBouncer deployments |
-| Postgres | `create`, `update` | Specify annotations that are only applied to the PostgreSQL deployments |
+| backrest | `create`, `update` | Specify annotations that are only applied to the pgBackRest deployments |
+| global | `create`, `update` | Specify annotations that are applied to the PostgreSQL, pgBackRest, and pgBouncer deployments |
+| pgBouncer | `create`, `update` | Specify annotations that are only applied to the pgBouncer deployments |
+| postgres | `create`, `update` | Specify annotations that are only applied to the PostgreSQL deployments |
### `pgreplicas.crunchydata.com`
@@ -889,10 +890,11 @@ cluster. All of the attributes only affect the replica when it is created.
| Attribute | Action | Description |
|-----------|--------|-------------|
-| ClusterName | `create` | The name of the PostgreSQL cluster, e.g. `hippo`. This is used to group PostgreSQL instances (primary, replicas) together. |
-| Name | `create` | The name of this PostgreSQL replica. It should be unique within a `ClusterName`. |
-| Namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
-| NodeAffinity | `create` | Sets the [node affinity rules]({{< relref "/architecture/high-availability/_index.md#node-affinity" >}}) for this PostgreSQL instance. Follows the [Kubernetes standard format for setting node affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity). |
-| ReplicaStorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section in the `pgclusters.crunchydata.com` description. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to that of `PrimaryStorage`. |
-| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" as well as a way to add custom labels to clusters. This will disappear at some point. |
-| Tolerations | `create`,`update` | Any array of Kubernetes [Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/). Please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for how to set this field. |
+| clustername | `create` | The name of the PostgreSQL cluster, e.g. `hippo`. This is used to group PostgreSQL instances (primary, replicas) together. |
+| name | `create` | The name of this PostgreSQL replica. It should be unique within a `clustername`. |
+| namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
+| nodeAffinity | `create` | Sets the [node affinity rules]({{< relref "/architecture/high-availability/_index.md#node-affinity" >}}) for this PostgreSQL instance. Follows the [Kubernetes standard format for setting node affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity). |
+| replicastorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section in the `pgclusters.crunchydata.com` description. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to that of `primarystorage`. |
+| serviceType | `create`, `update` | Sets the Kubernetes [Service](https://kubernetes.io/docs/concepts/services-networking/service/) type to use for this particular instance. If not set, defaults to the value in the related `pgclusters.crunchydata.com` custom resource. |
+| userlabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" as well as a way to add custom labels to clusters. This will disappear at some point. |
+| tolerations | `create`,`update` | An array of Kubernetes [Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/). Please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for how to set this field. |
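Put together, a minimal `pgreplicas.crunchydata.com` manifest using these attributes might look like the following sketch (names and values are illustrative only):

```yaml
apiVersion: crunchydata.com/v1
kind: Pgreplica
metadata:
  name: hippo-rpl1
  namespace: pgo
spec:
  clustername: hippo
  # must be unique within the cluster
  name: hippo-rpl1
  namespace: pgo
  replicastorage:
    accessmode: ReadWriteOnce
    name: hippo-rpl1
    size: 5Gi
    storageclass: standard
    storagetype: dynamic
```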
From b00bab88ddb97ed225b3ac660e237984c8e1a972 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 3 Jan 2021 10:51:19 -0500
Subject: [PATCH 109/276] Add support for toleration updates using pgo client
This commit adds the `--toleration` flag to the `pgo update cluster`
command, supporting the addition and removal of tolerations on an
existing PostgreSQL cluster.
A toleration can be added in the same fashion as with the --toleration
flag when running `pgo create cluster` or `pgo scale`.
A toleration can be removed by appending a `-` to the end of the
toleration flag value, e.g.
pgo update cluster hippo --toleration=zone=east:NoSchedule-
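The trailing-`-` parsing described above can be sketched in Go. This is a minimal, hypothetical sketch (the helper name `splitTolerations` is illustrative, not the actual `getClusterTolerations` implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// splitTolerations partitions raw "key=value:Effect" items into additions and
// removals; an item ending in "-" (e.g. "zone=east:NoSchedule-") is a removal,
// and the trailing "-" is trimmed from the returned value.
func splitTolerations(items []string) (add, remove []string) {
	for _, item := range items {
		if strings.HasSuffix(item, "-") {
			remove = append(remove, strings.TrimSuffix(item, "-"))
		} else {
			add = append(add, item)
		}
	}
	return add, remove
}

func main() {
	add, remove := splitTolerations([]string{
		"zone=west:NoSchedule",
		"zone=east:NoSchedule-",
	})
	fmt.Println(add)    // [zone=west:NoSchedule]
	fmt.Println(remove) // [zone=east:NoSchedule]
}
```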
---
cmd/pgo/cmd/cluster.go | 22 +++++++++++++--
cmd/pgo/cmd/create.go | 4 +++
cmd/pgo/cmd/scale.go | 2 +-
cmd/pgo/cmd/update.go | 6 ++++
.../architecture/high-availability/_index.md | 23 +++++++++++----
.../reference/pgo_update_cluster.md | 7 ++++-
docs/content/tutorial/customize-cluster.md | 8 ++++++
.../apiserver/clusterservice/clusterimpl.go | 28 +++++++++++++++++++
pkg/apiservermsgs/clustermsgs.go | 6 ++++
9 files changed, 96 insertions(+), 10 deletions(-)
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index da97cef37a..e66a28fc32 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -337,7 +337,7 @@ func createCluster(args []string, ns string, createClusterCmd *cobra.Command) {
r.Annotations = getClusterAnnotations(Annotations, AnnotationsPostgres, AnnotationsBackrest,
AnnotationsPgBouncer)
// set any tolerations
- r.Tolerations = getClusterTolerations(Tolerations)
+ r.Tolerations = getClusterTolerations(Tolerations, false)
// only set SyncReplication in the request if actually provided via the CLI
if createClusterCmd.Flag("sync-replication").Changed {
@@ -557,7 +557,11 @@ func getTablespaces(tablespaceParams []string) []msgs.ClusterTablespaceDetail {
//
// Exists - key:Effect
// Equals - key=value:Effect
-func getClusterTolerations(tolerationList []string) []v1.Toleration {
+//
+// If remove is true, only consider items with a trailing "-", as these form
+// the removal list. Otherwise, only consider tolerations that are not being
+// removed.
+func getClusterTolerations(tolerationList []string, remove bool) []v1.Toleration {
tolerations := make([]v1.Toleration, 0)
// if no tolerations, exit early
@@ -577,7 +581,17 @@ func getClusterTolerations(tolerationList []string) []v1.Toleration {
}
// for ease of reading
- rule, effect := ruleEffect[0], v1.TaintEffect(ruleEffect[1])
+ rule, effectStr := ruleEffect[0], ruleEffect[1]
+
+ // determine if the effect is for removal or not, as we will continue the
+ // loop based on that
+ if (remove && !strings.HasSuffix(effectStr, "-")) || (!remove && strings.HasSuffix(effectStr, "-")) {
+ continue
+ }
+
+ // in any case, we can trim any trailing "-" off of the string and cast it
+ // as a TaintEffect
+ effect := v1.TaintEffect(strings.TrimSuffix(effectStr, "-"))
// see if the effect is a valid effect
if !isValidTaintEffect(effect) {
@@ -687,6 +701,8 @@ func updateCluster(args []string, ns string) {
// set any annotations
r.Annotations = getClusterAnnotations(Annotations, AnnotationsPostgres, AnnotationsBackrest,
AnnotationsPgBouncer)
+ r.Tolerations = getClusterTolerations(Tolerations, false)
+ r.TolerationsDelete = getClusterTolerations(Tolerations, true)
// check to see if EnableStandby or DisableStandby is set. If so,
// set a value for Standby
diff --git a/cmd/pgo/cmd/create.go b/cmd/pgo/cmd/create.go
index f301111797..fa6e71e32b 100644
--- a/cmd/pgo/cmd/create.go
+++ b/cmd/pgo/cmd/create.go
@@ -156,6 +156,10 @@ var (
// Example:
//
// zone=east:NoSchedule,highspeed:NoSchedule
+//
+// A toleration can be removed by adding a "-" to the end, e.g.:
+//
+// zone=east:NoSchedule-
var Tolerations []string
var CreateCmd = &cobra.Command{
diff --git a/cmd/pgo/cmd/scale.go b/cmd/pgo/cmd/scale.go
index a44fa959f2..833b7de3af 100644
--- a/cmd/pgo/cmd/scale.go
+++ b/cmd/pgo/cmd/scale.go
@@ -82,7 +82,7 @@ func scaleCluster(args []string, ns string) {
ReplicaCount: ReplicaCount,
ServiceType: v1.ServiceType(ServiceType),
StorageConfig: StorageConfig,
- Tolerations: getClusterTolerations(Tolerations),
+ Tolerations: getClusterTolerations(Tolerations, false),
}
response, err := api.ScaleCluster(httpclient, &SessionCredentials, request)
diff --git a/cmd/pgo/cmd/update.go b/cmd/pgo/cmd/update.go
index 89d88fe45b..d09c506c7b 100644
--- a/cmd/pgo/cmd/update.go
+++ b/cmd/pgo/cmd/update.go
@@ -144,6 +144,12 @@ func init() {
"Follows the Kubernetes quantity format.\n\n"+
"For example, to create a tablespace with the NFS storage configuration with a PVC of size 10GiB:\n\n"+
"--tablespace=name=ts1:storageconfig=nfsstorage:pvcsize=10Gi")
+ UpdateClusterCmd.Flags().StringSliceVar(&Tolerations, "toleration", []string{},
+ "Set Pod tolerations for each PostgreSQL instance in a cluster.\n"+
+ "The general format is \"key=value:Effect\"\n"+
+ "For example, to add an Exists and an Equals toleration: \"--toleration=ssd:NoSchedule,zone=east:NoSchedule\"\n"+
+ "A toleration can be removed by adding a \"-\" to the end, for example:\n"+
+ "--toleration=ssd:NoSchedule-")
UpdatePgBouncerCmd.Flags().StringVar(&PgBouncerCPURequest, "cpu", "", "Set the number of millicores to request for CPU "+
"for pgBouncer.")
UpdatePgBouncerCmd.Flags().StringVar(&PgBouncerCPULimit, "cpu-limit", "", "Set the number of millicores to limit for CPU "+
diff --git a/docs/content/architecture/high-availability/_index.md b/docs/content/architecture/high-availability/_index.md
index f267d306f0..073df5e599 100644
--- a/docs/content/architecture/high-availability/_index.md
+++ b/docs/content/architecture/high-availability/_index.md
@@ -345,11 +345,24 @@ following command:
pgo scale hippo --toleration=zone=west:NoSchedule
```
-Tolerations can be updated on an existing cluster. To do so, you will need to
-modify the `pgclusters.crunchydata.com` and `pgreplicas.crunchydata.com` custom
-resources directly, e.g. via the `kubectl edit` command. Once the updates are
-applied, the PostgreSQL Operator will roll out the changes to the appropriate
-instances.
+Tolerations can be updated on an existing cluster. You can do this by either
+modifying the `pgclusters.crunchydata.com` and `pgreplicas.crunchydata.com`
+custom resources directly, e.g. via the `kubectl edit` command, or with the
+[`pgo update cluster`]({{< relref "pgo-client/reference/pgo_update_cluster.md" >}})
+command. Using the `pgo update cluster` command, a toleration can be removed by
+adding a `-` at the end of the toleration effect.
+
+For example, to add a toleration of `zone=west:NoSchedule` and remove the
+toleration of `zone=east:NoSchedule`, you could run the following command:
+
+```
+pgo update cluster hippo \
+ --toleration=zone=west:NoSchedule \
+ --toleration=zone=east:NoSchedule-
+```
+
+Once the updates are applied, the PostgreSQL Operator will roll out the changes
+to the appropriate instances.
## Rolling Updates
diff --git a/docs/content/pgo-client/reference/pgo_update_cluster.md b/docs/content/pgo-client/reference/pgo_update_cluster.md
index 70689cdfea..3596be5282 100644
--- a/docs/content/pgo-client/reference/pgo_update_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_update_cluster.md
@@ -71,6 +71,11 @@ pgo update cluster [flags]
For example, to create a tablespace with the NFS storage configuration with a PVC of size 10GiB:
--tablespace=name=ts1:storageconfig=nfsstorage:pvcsize=10Gi
+ --toleration strings Set Pod tolerations for each PostgreSQL instance in a cluster.
+ The general format is "key=value:Effect"
+ For example, to add an Exists and an Equals toleration: "--toleration=ssd:NoSchedule,zone=east:NoSchedule"
+ A toleration can be removed by adding a "-" to the end, for example:
+ --toleration=ssd:NoSchedule-
```
### Options inherited from parent commands
@@ -90,4 +95,4 @@ pgo update cluster [flags]
* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster
-###### Auto generated by spf13/cobra on 2-Jan-2021
+###### Auto generated by spf13/cobra on 3-Jan-2021
diff --git a/docs/content/tutorial/customize-cluster.md b/docs/content/tutorial/customize-cluster.md
index 3cd1f6d374..8d5f4f941d 100644
--- a/docs/content/tutorial/customize-cluster.md
+++ b/docs/content/tutorial/customize-cluster.md
@@ -158,6 +158,14 @@ pgo create cluster hippo \
--toleration=zone=east:NoSchedule
```
+Tolerations can be updated on an existing cluster using the [`pgo update cluster`]({{< relref "pgo-client/reference/pgo_update_cluster.md" >}}) command. For example, to add a toleration of `zone=west:NoSchedule` and remove the toleration of `zone=east:NoSchedule`, you could run the following command:
+
+```
+pgo update cluster hippo \
+ --toleration=zone=west:NoSchedule \
+ --toleration=zone=east:NoSchedule-
+```
+
You can also add or edit tolerations directly on the `pgclusters.crunchydata.com` custom resource and the PostgreSQL Operator will roll out the changes to the appropriate instances.
## Customize PostgreSQL Configuration
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 22f10a7f14..12b8604391 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -20,6 +20,7 @@ import (
"errors"
"fmt"
"io/ioutil"
+ "reflect"
"strconv"
"strings"
"time"
@@ -2072,6 +2073,33 @@ func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterRespons
cluster.Spec.TablespaceMounts[tablespace.Name] = storageSpec
}
+ // Handle any tolerations. This is fun. So we will have to go through both
+ // the toleration addition list as well as the toleration subtraction list.
+ //
+ // First, we will remove any tolerations that are slated for removal
+ if len(request.TolerationsDelete) > 0 {
+ tolerations := make([]v1.Toleration, 0)
+
+ for _, toleration := range cluster.Spec.Tolerations {
+ delete := false
+
+ for _, tolerationDelete := range request.TolerationsDelete {
+ delete = delete || (reflect.DeepEqual(toleration, tolerationDelete))
+ }
+
+ // if delete does not match, then we can include this toleration in any
+ // updates
+ if !delete {
+ tolerations = append(tolerations, toleration)
+ }
+ }
+
+ cluster.Spec.Tolerations = tolerations
+ }
+
+ // now, add any new tolerations to the spec
+ cluster.Spec.Tolerations = append(cluster.Spec.Tolerations, request.Tolerations...)
+
if _, err := apiserver.Clientset.CrunchydataV1().Pgclusters(request.Namespace).Update(ctx, &cluster, metav1.UpdateOptions{}); err != nil {
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index d4287fd3cd..d6cbf91fd2 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -474,6 +474,12 @@ type UpdateClusterRequest struct {
Startup bool
Shutdown bool
Tablespaces []ClusterTablespaceDetail
+ // Tolerations allows for the adding of Pod tolerations on a PostgreSQL
+ // cluster.
+ Tolerations []v1.Toleration `json:"tolerations"`
+ // TolerationsDelete allows for the removal of Pod tolerations on a
+ // PostgreSQL cluster
+ TolerationsDelete []v1.Toleration `json:"tolerationsDelete"`
}
// UpdateClusterResponse ...
From e2af460ae7555090f26a330652c165c0cb4c17f2 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 3 Jan 2021 13:56:54 -0500
Subject: [PATCH 110/276] Change default user for executing scheduled policy
Previously this was the user meant for replication. Now this defaults
to the superuser.
---
internal/apiserver/scheduleservice/scheduleimpl.go | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/internal/apiserver/scheduleservice/scheduleimpl.go b/internal/apiserver/scheduleservice/scheduleimpl.go
index 04c027f391..07efb121a8 100644
--- a/internal/apiserver/scheduleservice/scheduleimpl.go
+++ b/internal/apiserver/scheduleservice/scheduleimpl.go
@@ -77,8 +77,9 @@ func (s scheduleRequest) createPolicySchedule(cluster *crv1.Pgcluster, ns string
}
if s.Request.Secret == "" {
- s.Request.Secret = crv1.UserSecretName(cluster, crv1.PGUserReplication)
+ s.Request.Secret = crv1.UserSecretName(cluster, crv1.PGUserSuperuser)
}
+
schedule := &PgScheduleSpec{
Name: name,
Cluster: cluster.Name,
From cfb35ff15ab123d91ea29a6e13febab8d149fef7 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 3 Jan 2021 11:15:28 -0500
Subject: [PATCH 111/276] Modify internal call for reload command
This moves away from making a direct call to the REST API and
leverages the command directly.
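The per-pod command can be sketched as follows. This is a minimal, hypothetical sketch (`baseReloadCmd` and `buildReloadCmd` are illustrative names): copying the shared prefix before appending ensures concurrent callers never mutate a common backing array.

```go
package main

import "fmt"

// baseReloadCmd mirrors the idea of a package-level command prefix such as
// {"patronictl", "reload", "--force"}; the cluster and pod name are appended
// per call.
var baseReloadCmd = []string{"patronictl", "reload", "--force"}

// buildReloadCmd copies the prefix into a fresh slice before appending the
// cluster and pod name, so baseReloadCmd itself is never modified.
func buildReloadCmd(clusterName, podName string) []string {
	cmd := make([]string, len(baseReloadCmd), len(baseReloadCmd)+2)
	copy(cmd, baseReloadCmd)
	return append(cmd, clusterName, podName)
}

func main() {
	fmt.Println(buildReloadCmd("hippo", "hippo-abcd-0"))
	// [patronictl reload --force hippo hippo-abcd-0]
}
```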
---
.../apiserver/reloadservice/reloadimpl.go | 30 -------------------
internal/patroni/patroni.go | 22 +++++++-------
pkg/events/eventtype.go | 15 ----------
3 files changed, 11 insertions(+), 56 deletions(-)
diff --git a/internal/apiserver/reloadservice/reloadimpl.go b/internal/apiserver/reloadservice/reloadimpl.go
index cb6978e5f5..465b6a9e26 100644
--- a/internal/apiserver/reloadservice/reloadimpl.go
+++ b/internal/apiserver/reloadservice/reloadimpl.go
@@ -19,13 +19,11 @@ import (
"context"
"fmt"
"strings"
- "time"
"github.com/crunchydata/postgres-operator/internal/apiserver"
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/patroni"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
- "github.com/crunchydata/postgres-operator/pkg/events"
log "github.com/sirupsen/logrus"
kerrors "k8s.io/apimachinery/pkg/api/errors"
@@ -99,11 +97,6 @@ func Reload(request *msgs.ReloadRequest, ns, username string) msgs.ReloadRespons
}
resp.Results = append(resp.Results, fmt.Sprintf("reload performed on %s", clusterName))
-
- if err := publishReloadClusterEvent(cluster.GetName(), ns, username); err != nil {
- log.Error(err.Error())
- errorMsgs = append(errorMsgs, err.Error())
- }
}
if len(errorMsgs) > 0 {
@@ -113,26 +106,3 @@ func Reload(request *msgs.ReloadRequest, ns, username string) msgs.ReloadRespons
return resp
}
-
-// publishReloadClusterEvent publishes an event when a cluster is reloaded
-func publishReloadClusterEvent(clusterName, username, namespace string) error {
- topics := make([]string, 1)
- topics[0] = events.EventTopicCluster
-
- f := events.EventReloadClusterFormat{
- EventHeader: events.EventHeader{
- Namespace: namespace,
- Username: username,
- Topic: topics,
- Timestamp: time.Now(),
- EventType: events.EventReloadCluster,
- },
- Clustername: clusterName,
- }
-
- if err := events.Publish(f); err != nil {
- return err
- }
-
- return nil
-}
diff --git a/internal/patroni/patroni.go b/internal/patroni/patroni.go
index 3b20703c0e..f4384a74e8 100644
--- a/internal/patroni/patroni.go
+++ b/internal/patroni/patroni.go
@@ -35,12 +35,10 @@ import (
const dbContainerName = "database"
var (
- // reloadCMD is the command for reloading a specific PG instance (primary or replica) within a
- // PG cluster
- reloadCMD = []string{
- "/bin/bash", "-c",
- fmt.Sprintf("curl -X POST --silent http://127.0.0.1:%s/reload", config.DEFAULT_PATRONI_PORT),
- }
+ // reloadCMD is the command for reloading a specific PG instance (primary or
+ // replica) within a Postgres cluster. It requires a cluster and instance name
+ // to be appended to it
+ reloadCMD = []string{"patronictl", "reload", "--force"}
// restartCMD is the command for restart a specific PG database (primary or replica) within a
// PG cluster
restartCMD = []string{
@@ -195,17 +193,19 @@ func (p *patroniClient) RestartInstances(instances ...string) ([]RestartResult,
// reload performs a Patroni reload (which includes a PG reload) on a specific instance (primary or
// replica) within a PG cluster
func (p *patroniClient) reload(podName string) error {
- stdout, stderr, err := kubeapi.ExecToPodThroughAPI(p.restConfig, p.kubeclientset, reloadCMD,
- dbContainerName, podName, p.namespace, nil)
+ cmd := reloadCMD
+ cmd = append(cmd, p.clusterName, podName)
+
+ stdout, stderr, err := kubeapi.ExecToPodThroughAPI(p.restConfig, p.kubeclientset,
+ cmd, dbContainerName, podName, p.namespace, nil)
+
if err != nil {
- return err
- } else if stderr != "" {
return fmt.Errorf(stderr)
}
log.Debugf("Successfully reloaded PG on pod %s: %s", podName, stdout)
- return err
+ return nil
}
// restart performs a Patroni restart on a specific instance (primary or replica) within a PG
diff --git a/pkg/events/eventtype.go b/pkg/events/eventtype.go
index b93159e77e..8a2031d08b 100644
--- a/pkg/events/eventtype.go
+++ b/pkg/events/eventtype.go
@@ -104,21 +104,6 @@ type EventInterface interface {
String() string
}
-//--------
-type EventReloadClusterFormat struct {
- EventHeader `json:"eventheader"`
- Clustername string `json:"clustername"`
-}
-
-func (p EventReloadClusterFormat) GetHeader() EventHeader {
- return p.EventHeader
-}
-
-func (lvl EventReloadClusterFormat) String() string {
- msg := fmt.Sprintf("Event %s - (reload) name %s", lvl.EventHeader, lvl.Clustername)
- return msg
-}
-
//----------------------------
type EventCreateClusterFailureFormat struct {
EventHeader `json:"eventheader"`
From 955c6e841509413d9bc68d1dab97d56d4078bd12 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 3 Jan 2021 11:21:42 -0500
Subject: [PATCH 112/276] Modify internal call for restart command
This moves away from making a direct call to the API and instead
leverages the command to perform a restart.
---
internal/patroni/patroni.go | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/internal/patroni/patroni.go b/internal/patroni/patroni.go
index f4384a74e8..70651dcace 100644
--- a/internal/patroni/patroni.go
+++ b/internal/patroni/patroni.go
@@ -39,12 +39,10 @@ var (
// replica) within a Postgres cluster. It requires a cluster and instance name
// to be appended to it
reloadCMD = []string{"patronictl", "reload", "--force"}
- // restartCMD is the command for restart a specific PG database (primary or replica) within a
- // PG cluster
- restartCMD = []string{
- "/bin/bash", "-c",
- fmt.Sprintf("curl -X POST --silent http://127.0.0.1:%s/restart", config.DEFAULT_PATRONI_PORT),
- }
+ // restartCMD is the command for restarting a specific PG database (primary or
+ // replica) within a Postgres cluster. It requires a cluster and instance name
+ // to be appended to it.
+ restartCMD = []string{"patronictl", "restart", "--force"}
// ErrInstanceNotFound is the error thrown when a target instance cannot be found in the cluster
ErrInstanceNotFound = errors.New("The instance does not exist in the cluster")
@@ -211,7 +209,10 @@ func (p *patroniClient) reload(podName string) error {
// restart performs a Patroni restart on a specific instance (primary or replica) within a PG
// cluster.
func (p *patroniClient) restart(podName string) error {
- stdout, stderr, err := kubeapi.ExecToPodThroughAPI(p.restConfig, p.kubeclientset, restartCMD,
+ cmd := restartCMD
+ cmd = append(cmd, p.clusterName, podName)
+
+ stdout, stderr, err := kubeapi.ExecToPodThroughAPI(p.restConfig, p.kubeclientset, cmd,
dbContainerName, podName, p.namespace, nil)
if err != nil {
return err
From f9eafa93a5013c76980f09458420787b7fcd88d1 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Fri, 1 Jan 2021 16:28:39 -0500
Subject: [PATCH 113/276] Move `pgo failover` to use updated switchover
plumbing
The manual failover/switchover command, `pgo failover`, now
uses the updated "switchover" plumbing that was introduced as
part of the rolling update changes. This ensures a unified
experience when performing an action that involves a failover.
Additionally, `pgo failover` now occurs inline: it does not create a
pgtask custom resource, due both to the relative simplicity of the
process and to the transactional nature of an immediate failover.
`pgo failover` can now also be executed without the `--target` flag.
This was always supported by the API, but unavailable based upon a
restriction in the `pgo` client. When the `--target` flag is not used,
the PostgreSQL Operator will choose the best candidate for failing over.
---
cmd/pgo/cmd/failover.go | 21 +-
docs/content/pgo-client/common-tasks.md | 27 +-
.../pgo-client/reference/pgo_failover.md | 11 +-
docs/content/tutorial/high-availability.md | 23 +-
.../apiserver/failoverservice/failoverimpl.go | 71 ++----
internal/config/labels.go | 9 +-
.../controller/pgtask/pgtaskcontroller.go | 28 ---
internal/operator/cluster/failover.go | 134 +++-------
internal/operator/cluster/failoverlogic.go | 234 ------------------
internal/operator/cluster/rolling.go | 49 +---
internal/operator/switchover.go | 156 ++++++++++++
internal/operator/switchover_test.go | 45 ++++
internal/util/failover.go | 38 ---
pkg/apis/crunchydata.com/v1/task.go | 2 -
pkg/apiservermsgs/failovermsgs.go | 3 +-
pkg/events/eventtype.go | 34 ---
16 files changed, 326 insertions(+), 559 deletions(-)
delete mode 100644 internal/operator/cluster/failoverlogic.go
create mode 100644 internal/operator/switchover.go
create mode 100644 internal/operator/switchover_test.go
diff --git a/cmd/pgo/cmd/failover.go b/cmd/pgo/cmd/failover.go
index bfbd3b0848..b67cd1d2d1 100644
--- a/cmd/pgo/cmd/failover.go
+++ b/cmd/pgo/cmd/failover.go
@@ -32,7 +32,12 @@ var failoverCmd = &cobra.Command{
Short: "Performs a manual failover",
Long: `Performs a manual failover. For example:
- pgo failover mycluster`,
+ # have the operator select the best target candidate
+ pgo failover hippo
+ # get a list of target candidates
+ pgo failover hippo --query
+ # failover to a specific target candidate
+ pgo failover hippo --target=hippo-abcd`,
Run: func(cmd *cobra.Command, args []string) {
if Namespace == "" {
Namespace = PGONamespace
@@ -44,10 +49,6 @@ var failoverCmd = &cobra.Command{
if Query {
queryFailover(args, Namespace)
} else if util.AskForConfirmation(NoPrompt, "") {
- if Target == "" {
- fmt.Println(`Error: The --target flag is required for failover.`)
- return
- }
createFailover(args, Namespace)
} else {
fmt.Println("Aborting...")
@@ -80,14 +81,12 @@ func createFailover(args []string, ns string) {
os.Exit(2)
}
- if response.Status.Code == msgs.Ok {
- for k := range response.Results {
- fmt.Println(response.Results[k])
- }
- } else {
+ if response.Status.Code != msgs.Ok {
fmt.Println("Error: " + response.Status.Msg)
- os.Exit(2)
+ os.Exit(1)
}
+
+ fmt.Println(response.Results)
}
// queryFailover is a helper function to return the user information about the
diff --git a/docs/content/pgo-client/common-tasks.md b/docs/content/pgo-client/common-tasks.md
index d5ae6b8b33..ae0129a86d 100644
--- a/docs/content/pgo-client/common-tasks.md
+++ b/docs/content/pgo-client/common-tasks.md
@@ -816,13 +816,26 @@ pgo failover --query hacluster
The PostgreSQL Operator is set up with an automated failover system based on
distributed consensus, but there may be times where you wish to have your
-cluster manually failover. If you wish to have your cluster manually failover,
-first, query your cluster to determine which failover targets are available.
-The query command also provides information that may help your decision, such as
-replication lag:
+cluster manually fail over. There are two ways to issue a manual failover to
+your PostgreSQL cluster:
+
+1. Allow the PostgreSQL Operator to select the best replica candidate to
+fail over to.
+2. Select your own replica candidate to fail over to.
+
+To have the PostgreSQL Operator select the best replica candidate for failover,
+all you need to do is execute the following command:
+
+```
+pgo failover hacluster
+```
+
+If you wish to choose the failover target yourself, you must first query your
+cluster to determine which failover targets are available. The query command
+also provides information that may help your decision, such as replication lag:
```shell
-pgo failover --query hacluster
+pgo failover hacluster --query
```
Once you have selected the replica that is best for you to fail over to, you can
@@ -833,7 +846,9 @@ pgo failover hacluster --target=hacluster-abcd
```
where `hacluster-abcd` is the name of the PostgreSQL instance that you want to
-promote to become the new primary
+promote to become the new primary.
+
+Both methods perform the failover immediately upon execution.
#### Destroying a Replica
diff --git a/docs/content/pgo-client/reference/pgo_failover.md b/docs/content/pgo-client/reference/pgo_failover.md
index d60cefd417..ea65e16f04 100644
--- a/docs/content/pgo-client/reference/pgo_failover.md
+++ b/docs/content/pgo-client/reference/pgo_failover.md
@@ -9,7 +9,12 @@ Performs a manual failover
Performs a manual failover. For example:
- pgo failover mycluster
+ # have the operator select the best target candidate
+ pgo failover hippo
+ # get a list of target candidates
+ pgo failover hippo --query
+ # failover to a specific target candidate
+ pgo failover hippo --target=hippo-abcd
```
pgo failover [flags]
@@ -27,7 +32,7 @@ pgo failover [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -41,4 +46,4 @@ pgo failover [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 1-Jan-2021
diff --git a/docs/content/tutorial/high-availability.md b/docs/content/tutorial/high-availability.md
index b85a8469cc..2d277d382c 100644
--- a/docs/content/tutorial/high-availability.md
+++ b/docs/content/tutorial/high-availability.md
@@ -62,7 +62,28 @@ pgo scaledown hippo --target=hippo-ojnd
## Manual Failover
-Each PostgreSQL cluster will manage its own availability. If you wish to manually fail over, you will need to use the [`pgo failover`]({{< relref "pgo-client/reference/pgo_failover.md">}}) command. First, determine which instance you want to fail over to:
+Each PostgreSQL cluster will manage its own availability. If you wish to manually fail over, you will need to use the [`pgo failover`]({{< relref "pgo-client/reference/pgo_failover.md">}}) command.
+
+There are two ways to issue a manual failover to your PostgreSQL cluster:
+
+1. Allow the PostgreSQL Operator to select the best replica candidate for failover.
+2. Select your own replica candidate for failover.
+
+Both methods are detailed below.
+
+### Manual Failover - PostgreSQL Operator Candidate Selection
+
+To have the PostgreSQL Operator select the best replica candidate for failover, all you need to do is execute the following command:
+
+```
+pgo failover hippo
+```
+
+The PostgreSQL Operator will determine the best replica candidate to fail over to, taking into account factors such as replication lag and current timeline.
+
+### Manual Failover - Manual Selection
+
+If you wish to choose the failover target yourself, you must first query your cluster to determine which instance you want to fail over to. You can do so with the following command:
```
pgo failover hippo --query
diff --git a/internal/apiserver/failoverservice/failoverimpl.go b/internal/apiserver/failoverservice/failoverimpl.go
index ca25062303..983496b7ed 100644
--- a/internal/apiserver/failoverservice/failoverimpl.go
+++ b/internal/apiserver/failoverservice/failoverimpl.go
@@ -18,29 +18,32 @@ limitations under the License.
import (
"context"
"errors"
+ "fmt"
"github.com/crunchydata/postgres-operator/internal/apiserver"
"github.com/crunchydata/postgres-operator/internal/config"
+ "github.com/crunchydata/postgres-operator/internal/operator"
"github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
log "github.com/sirupsen/logrus"
- v1 "k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
-// CreateFailover ...
+// CreateFailover is the API endpoint for triggering a manual failover of a
+// cluster. It performs this function inline, i.e. it does not trigger any
+// asynchronous methods.
+//
// pgo failover mycluster
-// pgo failover all
-// pgo failover --selector=name=mycluster
func CreateFailover(request *msgs.CreateFailoverRequest, ns, pgouser string) msgs.CreateFailoverResponse {
- ctx := context.TODO()
+ log.Debugf("create failover called for %s", request.ClusterName)
- var err error
- resp := msgs.CreateFailoverResponse{}
- resp.Status.Code = msgs.Ok
- resp.Status.Msg = ""
- resp.Results = make([]string, 0)
+ resp := msgs.CreateFailoverResponse{
+ Results: "",
+ Status: msgs.Status{
+ Code: msgs.Ok,
+ },
+ }
cluster, err := validateClusterName(request.ClusterName, ns)
if err != nil {
@@ -58,49 +61,21 @@ func CreateFailover(request *msgs.CreateFailoverRequest, ns, pgouser string) msg
}
if request.Target != "" {
- _, err = isValidFailoverTarget(request.Target, request.ClusterName, ns)
- if err != nil {
+ if err := isValidFailoverTarget(request.Target, request.ClusterName, ns); err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
return resp
}
}
- log.Debugf("create failover called for %s", request.ClusterName)
-
- // Create a pgtask
- spec := crv1.PgtaskSpec{}
- spec.Namespace = ns
- spec.Name = request.ClusterName + "-" + config.LABEL_FAILOVER
-
- // previous failovers will leave a pgtask so remove it first
- _ = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(ctx, spec.Name, metav1.DeleteOptions{})
-
- spec.TaskType = crv1.PgtaskFailover
- spec.Parameters = make(map[string]string)
- spec.Parameters[request.ClusterName] = request.ClusterName
-
- labels := make(map[string]string)
- labels["target"] = request.Target
- labels[config.LABEL_PG_CLUSTER] = request.ClusterName
- labels[config.LABEL_PGOUSER] = pgouser
-
- newInstance := &crv1.Pgtask{
- ObjectMeta: metav1.ObjectMeta{
- Name: spec.Name,
- Labels: labels,
- },
- Spec: spec,
- }
-
- _, err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Create(ctx, newInstance, metav1.CreateOptions{})
- if err != nil {
+ // perform the switchover
+ if err := operator.Switchover(apiserver.Clientset, apiserver.RESTConfig, cluster, request.Target); err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
return resp
}
- resp.Results = append(resp.Results, "created Pgtask (failover) for cluster "+request.ClusterName)
+ resp.Results = "failover success for cluster " + cluster.Name
return resp
}
@@ -186,7 +161,7 @@ func validateClusterName(clusterName, ns string) (*crv1.Pgcluster, error) {
// specified, and then ensuring the PG pod created by the deployment is not the current primary.
// If the deployment is not found, or if the pod is the current primary, an error will be returned.
// Otherwise the deployment is returned.
-func isValidFailoverTarget(deployName, clusterName, ns string) (*v1.Deployment, error) {
+func isValidFailoverTarget(deployName, clusterName, ns string) error {
ctx := context.TODO()
// Using the following label selector, ensure the deployment specified using deployName exists in the
@@ -198,11 +173,11 @@ func isValidFailoverTarget(deployName, clusterName, ns string) (*v1.Deployment,
List(ctx, metav1.ListOptions{LabelSelector: selector})
if err != nil {
log.Error(err)
- return nil, err
+ return err
} else if len(deployments.Items) == 0 {
- return nil, errors.New("no target found named " + deployName)
+ return fmt.Errorf("no target found named %s", deployName)
} else if len(deployments.Items) > 1 {
- return nil, errors.New("more than one target found named " + deployName)
+ return fmt.Errorf("more than one target found named %s", deployName)
}
// Using the following label selector, determine if the target specified is the current
@@ -212,8 +187,8 @@ func isValidFailoverTarget(deployName, clusterName, ns string) (*v1.Deployment,
"," + config.LABEL_PGHA_ROLE + "=" + config.LABEL_PGHA_ROLE_PRIMARY
pods, _ := apiserver.Clientset.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
if len(pods.Items) > 0 {
- return nil, errors.New("The primary database cannot be selected as a failover target")
+ return fmt.Errorf("The primary database cannot be selected as a failover target")
}
- return &deployments.Items[0], nil
+ return nil
}
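The refactor above changes `isValidFailoverTarget` to return only an error, since callers no longer need the Deployment itself. The validation contract can be sketched in a dependency-free way (the real function lists Deployments via a Kubernetes label selector rather than taking a plain slice of names; `validateUniqueTarget` is a hypothetical stand-in):

```go
package main

import "fmt"

// validateUniqueTarget mirrors the refactored isValidFailoverTarget contract:
// report an error unless exactly one Deployment matches the requested target.
func validateUniqueTarget(deployName string, found []string) error {
	matches := 0
	for _, name := range found {
		if name == deployName {
			matches++
		}
	}
	if matches == 0 {
		return fmt.Errorf("no target found named %s", deployName)
	}
	if matches > 1 {
		return fmt.Errorf("more than one target found named %s", deployName)
	}
	return nil
}

func main() {
	// exactly one match: a valid failover target, so the error is nil
	fmt.Println(validateUniqueTarget("hippo-ojnd", []string{"hippo-ojnd", "hippo-tbrl"}))
}
```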
diff --git a/internal/config/labels.go b/internal/config/labels.go
index 9308d17f91..327eb74183 100644
--- a/internal/config/labels.go
+++ b/internal/config/labels.go
@@ -27,19 +27,14 @@ const (
const LABEL_PGTASK = "pg-task"
-const (
- LABEL_FAILOVER = "failover"
- LABEL_RESTART = "restart"
-)
+const LABEL_RESTART = "restart"
const (
- LABEL_TARGET = "target"
LABEL_RMDATA = "pgrmdata"
)
const (
LABEL_PGPOLICY = "pgpolicy"
- LABEL_INGEST = "ingest"
LABEL_PGREMOVE = "pgremove"
LABEL_PVCNAME = "pvcname"
LABEL_EXPORTER = "crunchy-postgres-exporter"
@@ -179,8 +174,6 @@ const (
LABEL_PGO_UPDATED_BY = "pgo-updated-by"
)
-const LABEL_FAILOVER_STARTED = "failover-started"
-
const GLOBAL_CUSTOM_CONFIGMAP = "pgo-custom-pg-config"
const (
diff --git a/internal/controller/pgtask/pgtaskcontroller.go b/internal/controller/pgtask/pgtaskcontroller.go
index d26d7231d4..d52ccf6dd0 100644
--- a/internal/controller/pgtask/pgtaskcontroller.go
+++ b/internal/controller/pgtask/pgtaskcontroller.go
@@ -122,13 +122,6 @@ func (c *Controller) processNextItem() bool {
case crv1.PgtaskUpgrade:
log.Debug("upgrade task added")
clusteroperator.AddUpgrade(c.Client, tmpTask, keyNamespace)
- case crv1.PgtaskFailover:
- log.Debug("failover task added")
- if !dupeFailover(c.Client, tmpTask, keyNamespace) {
- clusteroperator.FailoverBase(keyNamespace, c.Client, tmpTask, c.Client.Config)
- } else {
- log.Debugf("skipping duplicate onAdd failover task %s/%s", keyNamespace, keyResourceName)
- }
case crv1.PgtaskRollingUpdate:
log.Debug("rolling update task added")
// first, attempt to get the pgcluster object
@@ -164,9 +157,6 @@ func (c *Controller) processNextItem() bool {
case crv1.PgtaskpgRestore:
log.Debug("pgDump restore task added")
pgdumpoperator.Restore(keyNamespace, c.Client, tmpTask)
-
- case crv1.PgtaskAutoFailover:
- log.Debugf("autofailover task added %s", keyResourceName)
case crv1.PgtaskWorkflow:
log.Debugf("workflow task added [%s] ID [%s]", keyResourceName, tmpTask.Spec.Parameters[crv1.PgtaskWorkflowID])
@@ -217,24 +207,6 @@ func (c *Controller) AddPGTaskEventHandler() {
log.Debugf("pgtask Controller: added event handler to informer")
}
-// de-dupe logic for a failover, if the failover started
-// parameter is set, it means a failover has already been
-// started on this
-func dupeFailover(clientset pgo.Interface, task *crv1.Pgtask, ns string) bool {
- ctx := context.TODO()
- tmp, err := clientset.CrunchydataV1().Pgtasks(ns).Get(ctx, task.Spec.Name, metav1.GetOptions{})
- if err != nil {
- // a big time error if this occurs
- return false
- }
-
- if tmp.Spec.Parameters[config.LABEL_FAILOVER_STARTED] == "" {
- return false
- }
-
- return true
-}
-
// de-dupe logic for a delete data, if the delete data job started
// parameter is set, it means a delete data job has already been
// started on this
diff --git a/internal/operator/cluster/failover.go b/internal/operator/cluster/failover.go
index d1ff4fb033..67d43e3c05 100644
--- a/internal/operator/cluster/failover.go
+++ b/internal/operator/cluster/failover.go
@@ -20,121 +20,63 @@ package cluster
import (
"context"
- "encoding/json"
- "time"
+ "fmt"
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/kubeapi"
- crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
- "github.com/crunchydata/postgres-operator/pkg/events"
- pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned"
log "github.com/sirupsen/logrus"
+ v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/types"
+ "k8s.io/apimachinery/pkg/fields"
+ "k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
-// FailoverBase ...
-// gets called first on a failover
-func FailoverBase(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask, restconfig *rest.Config) {
+// RemovePrimaryOnRoleChangeTag sets the 'primary_on_role_change' tag to null in the
+// Patroni DCS, effectively removing the tag. This is accomplished by exec'ing into
+// the primary PG pod, and sending a patch request to update the appropriate data (i.e.
+// the 'primary_on_role_change' tag) in the DCS.
+func RemovePrimaryOnRoleChangeTag(clientset kubernetes.Interface, restconfig *rest.Config,
+ clusterName, namespace string) error {
ctx := context.TODO()
- var err error
- // look up the pgcluster for this task
- // in the case, the clustername is passed as a key in the
- // parameters map
- var clusterName string
- for k := range task.Spec.Parameters {
- clusterName = k
- }
-
- cluster, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(ctx, clusterName, metav1.GetOptions{})
- if err != nil {
- return
- }
-
- // create marker (clustername, namespace)
- err = PatchpgtaskFailoverStatus(clientset, task, namespace)
- if err != nil {
- log.Errorf("could not set failover started marker for task %s cluster %s", task.Spec.Name, clusterName)
- return
- }
+ selector := config.LABEL_PG_CLUSTER + "=" + clusterName +
+ "," + config.LABEL_PGHA_ROLE + "=" + config.LABEL_PGHA_ROLE_PRIMARY
- // get initial count of replicas --selector=pg-cluster=clusterName
- selector := config.LABEL_PG_CLUSTER + "=" + clusterName
- replicaList, err := clientset.CrunchydataV1().Pgreplicas(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
- if err != nil {
- log.Error(err)
- return
+ // only consider pods that are running
+ options := metav1.ListOptions{
+ FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(),
+ LabelSelector: selector,
}
- log.Debugf("replica count before failover is %d", len(replicaList.Items))
- // publish event for failover
- topics := make([]string, 1)
- topics[0] = events.EventTopicCluster
-
- f := events.EventFailoverClusterFormat{
- EventHeader: events.EventHeader{
- Namespace: namespace,
- Username: task.ObjectMeta.Labels[config.LABEL_PGOUSER],
- Topic: topics,
- Timestamp: time.Now(),
- EventType: events.EventFailoverCluster,
- },
- Clustername: clusterName,
- Target: task.ObjectMeta.Labels[config.LABEL_TARGET],
- }
+ pods, err := clientset.CoreV1().Pods(namespace).List(ctx, options)
- err = events.Publish(f)
if err != nil {
log.Error(err)
+ return err
+ } else if len(pods.Items) == 0 {
+ return fmt.Errorf("no pods found for cluster %q", clusterName)
+ } else if len(pods.Items) > 1 {
+ log.Error("More than one primary found after completing the post-failover backup")
}
-
- _ = Failover(cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER], clientset, clusterName, task, namespace, restconfig)
-
- // publish event for failover completed
- topics = make([]string, 1)
- topics[0] = events.EventTopicCluster
-
- g := events.EventFailoverClusterCompletedFormat{
- EventHeader: events.EventHeader{
- Namespace: namespace,
- Username: task.ObjectMeta.Labels[config.LABEL_PGOUSER],
- Topic: topics,
- Timestamp: time.Now(),
- EventType: events.EventFailoverClusterCompleted,
- },
- Clustername: clusterName,
- Target: task.ObjectMeta.Labels[config.LABEL_TARGET],
- }
-
- err = events.Publish(g)
+ pod := pods.Items[0]
+
+ // generate the curl command that will be run on the pod selected for the failover in order
+ // to trigger the failover and promote that specific pod to primary
+ command := make([]string, 3)
+ command[0] = "/bin/bash"
+ command[1] = "-c"
+ command[2] = fmt.Sprintf("curl -s 127.0.0.1:%s/config -XPATCH -d "+
+ "'{\"tags\":{\"primary_on_role_change\":null}}'", config.DEFAULT_PATRONI_PORT)
+
+ log.Debugf("running Exec command '%s' with namespace=[%s] podname=[%s] container name=[%s]",
+ command, namespace, pod.Name, pod.Spec.Containers[0].Name)
+ stdout, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, command,
+ pod.Spec.Containers[0].Name, pod.Name, namespace, nil)
+ log.Debugf("stdout=[%s] stderr=[%s]", stdout, stderr)
if err != nil {
log.Error(err)
- }
-
- // remove marker
-}
-
-func PatchpgtaskFailoverStatus(clientset pgo.Interface, oldCrd *crv1.Pgtask, namespace string) error {
- ctx := context.TODO()
-
- // change it
- oldCrd.Spec.Parameters[config.LABEL_FAILOVER_STARTED] = time.Now().Format(time.RFC3339)
-
- // create the patch
- patchBytes, err := json.Marshal(map[string]interface{}{
- "spec": map[string]interface{}{
- "parameters": oldCrd.Spec.Parameters,
- },
- })
- if err != nil {
return err
}
-
- // apply patch
- _, err6 := clientset.CrunchydataV1().Pgtasks(namespace).
- Patch(ctx, oldCrd.Name, types.MergePatchType, patchBytes, metav1.PatchOptions{})
-
- return err6
+ return nil
}
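The curl invocation above PATCHes the Patroni REST API on the primary pod to null out the `primary_on_role_change` tag in the DCS. The payload it sends can be sketched as follows (a sketch of the JSON body only; the Operator shells out to curl inside the pod rather than building it in Go, and the port shown is an assumed stand-in for `config.DEFAULT_PATRONI_PORT`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildRemoveTagPatch builds the JSON body PATCHed against the local Patroni
// REST API (127.0.0.1:<patroni-port>/config); setting a tag to null in the
// config effectively removes it from the DCS.
func buildRemoveTagPatch() string {
	body := map[string]map[string]interface{}{
		"tags": {"primary_on_role_change": nil},
	}
	b, _ := json.Marshal(body)
	return string(b)
}

func main() {
	// assuming 8009 as the Patroni port for illustration
	fmt.Printf("curl -s 127.0.0.1:8009/config -XPATCH -d '%s'\n", buildRemoveTagPatch())
}
```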
diff --git a/internal/operator/cluster/failoverlogic.go b/internal/operator/cluster/failoverlogic.go
deleted file mode 100644
index 7391de79e4..0000000000
--- a/internal/operator/cluster/failoverlogic.go
+++ /dev/null
@@ -1,234 +0,0 @@
-// Package cluster holds the cluster CRD logic and definitions
-// A cluster is comprised of a primary service, replica service,
-// primary deployment, and replica deployment
-package cluster
-
-/*
- Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-import (
- "context"
- "fmt"
- "time"
-
- "github.com/crunchydata/postgres-operator/internal/config"
- "github.com/crunchydata/postgres-operator/internal/kubeapi"
- "github.com/crunchydata/postgres-operator/internal/util"
- crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
- "github.com/crunchydata/postgres-operator/pkg/events"
- pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned"
- log "github.com/sirupsen/logrus"
- v1 "k8s.io/api/core/v1"
- metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/fields"
- "k8s.io/apimachinery/pkg/types"
- "k8s.io/client-go/kubernetes"
- "k8s.io/client-go/rest"
-)
-
-func Failover(identifier string, clientset kubeapi.Interface, clusterName string, task *crv1.Pgtask, namespace string, restconfig *rest.Config) error {
- ctx := context.TODO()
-
- var pod *v1.Pod
- var err error
- target := task.ObjectMeta.Labels[config.LABEL_TARGET]
-
- log.Infof("Failover called on [%s] target [%s]", clusterName, target)
-
- pod, err = util.GetPod(clientset, target, namespace)
- if err != nil {
- log.Error(err)
- return err
- }
- log.Debugf("pod selected to failover to is %s", pod.Name)
-
- updateFailoverStatus(clientset, task, namespace, "deleted primary deployment "+clusterName)
-
- // trigger the failover to the selected replica
- if err := promote(pod, clientset, namespace, restconfig); err != nil {
- log.Warn(err)
- }
-
- publishPromoteEvent(namespace, task.ObjectMeta.Labels[config.LABEL_PGOUSER], clusterName, target)
-
- updateFailoverStatus(clientset, task, namespace, "promoting pod "+pod.Name+" target "+target)
-
- // relabel the deployment with primary labels
- // by setting service-name=clustername
- upod, err := clientset.CoreV1().Pods(namespace).Get(ctx, pod.Name, metav1.GetOptions{})
- if err != nil {
- log.Error(err)
- log.Error("error in getting pod during failover relabel")
- return err
- }
-
- // set the service-name label to the cluster name to match
- // the primary service selector
- log.Debugf("setting label on pod %s=%s", config.LABEL_SERVICE_NAME, clusterName)
-
- patch, err := kubeapi.NewMergePatch().Add("metadata", "labels", config.LABEL_SERVICE_NAME)(clusterName).Bytes()
- if err == nil {
- log.Debugf("patching pod %s: %s", upod.Name, patch)
- _, err = clientset.CoreV1().Pods(namespace).
- Patch(ctx, upod.Name, types.MergePatchType, patch, metav1.PatchOptions{})
- }
- if err != nil {
- log.Error(err)
- log.Error("error in updating pod during failover relabel")
- return err
- }
-
- targetDepName := upod.ObjectMeta.Labels[config.LABEL_DEPLOYMENT_NAME]
- log.Debugf("patching deployment %s: %s", targetDepName, patch)
- _, err = clientset.AppsV1().Deployments(namespace).
- Patch(ctx, targetDepName, types.MergePatchType, patch, metav1.PatchOptions{})
- if err != nil {
- log.Error(err)
- log.Error("error in updating deployment during failover relabel")
- return err
- }
-
- updateFailoverStatus(clientset, task, namespace, "updating label deployment...pod "+pod.Name+"was the failover target...failover completed")
-
- // update the pgcluster current-primary to new deployment name
- cluster, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(ctx, clusterName, metav1.GetOptions{})
- if err != nil {
- log.Errorf("could not find pgcluster %s with labels", clusterName)
- return err
- }
-
- // update the CRD with the new current primary. If there is an error, log it
- // here, otherwise return
- if err := util.CurrentPrimaryUpdate(clientset, cluster, target, namespace); err != nil {
- log.Error(err)
- return err
- }
-
- return nil
-}
-
-func updateFailoverStatus(clientset pgo.Interface, task *crv1.Pgtask, namespace, message string) {
- ctx := context.TODO()
-
- log.Debugf("updateFailoverStatus namespace=[%s] taskName=[%s] message=[%s]", namespace, task.Name, message)
-
- // update the task
- t, err := clientset.CrunchydataV1().Pgtasks(task.Namespace).Get(ctx, task.Name, metav1.GetOptions{})
- if err != nil {
- return
- }
- *task = *t
-
- task.Status.Message = message
-
- t, err = clientset.CrunchydataV1().Pgtasks(task.Namespace).Update(ctx, task, metav1.UpdateOptions{})
- if err != nil {
- return
- }
- *task = *t
-}
-
-func promote(
- pod *v1.Pod,
- clientset kubernetes.Interface,
- namespace string, restconfig *rest.Config) error {
- // generate the curl command that will be run on the pod selected for the failover in order
- // to trigger the failover and promote that specific pod to primary
- command := make([]string, 3)
- command[0] = "/bin/bash"
- command[1] = "-c"
- command[2] = fmt.Sprintf("curl -s http://127.0.0.1:%s/failover -XPOST "+
- "-d '{\"candidate\":\"%s\"}'", config.DEFAULT_PATRONI_PORT, pod.Name)
-
- log.Debugf("running Exec with namespace=[%s] podname=[%s] container name=[%s]", namespace, pod.Name, pod.Spec.Containers[0].Name)
- stdout, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, command, pod.Spec.Containers[0].Name, pod.Name, namespace, nil)
- log.Debugf("stdout=[%s] stderr=[%s]", stdout, stderr)
- if err != nil {
- log.Error(err)
- }
-
- return err
-}
-
-func publishPromoteEvent(namespace, username, clusterName, target string) {
- topics := make([]string, 1)
- topics[0] = events.EventTopicCluster
-
- f := events.EventFailoverClusterFormat{
- EventHeader: events.EventHeader{
- Namespace: namespace,
- Username: username,
- Topic: topics,
- Timestamp: time.Now(),
- EventType: events.EventFailoverCluster,
- },
- Clustername: clusterName,
- Target: target,
- }
-
- err := events.Publish(f)
- if err != nil {
- log.Error(err.Error())
- }
-}
-
-// RemovePrimaryOnRoleChangeTag sets the 'primary_on_role_change' tag to null in the
-// Patroni DCS, effectively removing the tag. This is accomplished by exec'ing into
-// the primary PG pod, and sending a patch request to update the appropriate data (i.e.
-// the 'primary_on_role_change' tag) in the DCS.
-func RemovePrimaryOnRoleChangeTag(clientset kubernetes.Interface, restconfig *rest.Config,
- clusterName, namespace string) error {
- ctx := context.TODO()
-
- selector := config.LABEL_PG_CLUSTER + "=" + clusterName +
- "," + config.LABEL_PGHA_ROLE + "=" + config.LABEL_PGHA_ROLE_PRIMARY
-
- // only consider pods that are running
- options := metav1.ListOptions{
- FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(),
- LabelSelector: selector,
- }
-
- pods, err := clientset.CoreV1().Pods(namespace).List(ctx, options)
-
- if err != nil {
- log.Error(err)
- return err
- } else if len(pods.Items) == 0 {
- return fmt.Errorf("no pods found for cluster %q", clusterName)
- } else if len(pods.Items) > 1 {
- log.Error("More than one primary found after completing the post-failover backup")
- }
- pod := pods.Items[0]
-
- // generate the curl command that will be run on the pod selected for the failover in order
- // to trigger the failover and promote that specific pod to primary
- command := make([]string, 3)
- command[0] = "/bin/bash"
- command[1] = "-c"
- command[2] = fmt.Sprintf("curl -s 127.0.0.1:%s/config -XPATCH -d "+
- "'{\"tags\":{\"primary_on_role_change\":null}}'", config.DEFAULT_PATRONI_PORT)
-
- log.Debugf("running Exec command '%s' with namespace=[%s] podname=[%s] container name=[%s]",
- command, namespace, pod.Name, pod.Spec.Containers[0].Name)
- stdout, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, command,
- pod.Spec.Containers[0].Name, pod.Name, namespace, nil)
- log.Debugf("stdout=[%s] stderr=[%s]", stdout, stderr)
- if err != nil {
- log.Error(err)
- return err
- }
- return nil
-}
diff --git a/internal/operator/cluster/rolling.go b/internal/operator/cluster/rolling.go
index 39d50dff10..0811fb3d99 100644
--- a/internal/operator/cluster/rolling.go
+++ b/internal/operator/cluster/rolling.go
@@ -24,7 +24,6 @@ import (
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/kubeapi"
"github.com/crunchydata/postgres-operator/internal/operator"
- "github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
log "github.com/sirupsen/logrus"
@@ -125,7 +124,7 @@ func RollingUpdate(clientset kubeapi.Interface, restConfig *rest.Config, cluster
// replicas that have the updated Deployment state.
if len(instances[deploymentTypeReplica]) > 0 && len(instances[deploymentTypePrimary]) == 1 {
// if the switchover fails, warn that it failed but continue on
- if err := switchover(clientset, restConfig, cluster); err != nil {
+ if err := operator.Switchover(clientset, restConfig, cluster, ""); err != nil {
log.Warnf("switchover failed: %s", err.Error())
}
}
@@ -233,52 +232,6 @@ func generatePostgresReadyCommand(port string) []string {
return []string{"pg_isready", "-p", port}
}
-// generatePostgresSwitchoverCommand creates the command that is used to issue
-// a switchover (demote a primary, promote a replica). Takes the name of the
-// cluster; Patroni will choose the best candidate to switchover to
-func generatePostgresSwitchoverCommand(clusterName string) []string {
- return []string{"patronictl", "switchover", "--force", clusterName}
-}
-
-// switchover performs a controlled switchover within a PostgreSQL cluster, i.e.
-// demoting a primary and promoting a replica. The method works as such:
-//
-// 1. The function looks for all available replicas as well as the current
-// primary. We look up the primary for convenience to avoid various API calls
-//
-// 2. We then search over the list to find both a primary and a suitable
-// candidate for promotion. A candidate is suitable if:
-// - It is on the latest timeline
-// - It has the least amount of replication lag
-//
-// This is done to limit the risk of data loss.
-//
-// If either a primary or candidate is **not** found, we do not switch over.
-//
-// 3. If all of the above works successfully, a switchover is attempted.
-func switchover(clientset kubernetes.Interface, restConfig *rest.Config, cluster *crv1.Pgcluster) error {
- // we want to find a Pod to execute the switchover command on, i.e. the
- // primary
- pod, err := util.GetPrimaryPod(clientset, cluster)
- if err != nil {
- return err
- }
-
- // good to generally log which instances are being used in the switchover
- log.Infof("controlled switchover started for cluster %q", cluster.Name)
-
- cmd := generatePostgresSwitchoverCommand(cluster.Name)
- if _, stderr, err := kubeapi.ExecToPodThroughAPI(restConfig, clientset,
- cmd, "database", pod.Name, cluster.Namespace, nil); err != nil {
- return fmt.Errorf(stderr)
- }
-
- log.Infof("controlled switchover completed for cluster %q", cluster.Name)
-
- // and that's all
- return nil
-}
-
// waitForPostgresInstance waits for a PostgreSQL instance within a Pod is ready
// to accept connections
func waitForPostgresInstance(clientset kubernetes.Interface, restConfig *rest.Config,
diff --git a/internal/operator/switchover.go b/internal/operator/switchover.go
new file mode 100644
index 0000000000..bdffd268d2
--- /dev/null
+++ b/internal/operator/switchover.go
@@ -0,0 +1,156 @@
+package operator
+
+/*
+ Copyright 2021 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+import (
+ "context"
+ "fmt"
+
+ "github.com/crunchydata/postgres-operator/internal/config"
+ "github.com/crunchydata/postgres-operator/internal/kubeapi"
+ "github.com/crunchydata/postgres-operator/internal/util"
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+ log "github.com/sirupsen/logrus"
+ v1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/fields"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/rest"
+)
+
+// Switchover performs a controlled switchover within a PostgreSQL cluster,
+// i.e. demoting a primary and promoting a replica. There are two switchover
+// methods that can be invoked.
+//
+// Method #1: Automatic Choice
+//
+// The switchover command invokes Patroni, which works as such:
+//
+// 1. The function looks for all available replicas as well as the current
+// primary. We look up the primary for convenience to avoid various API calls
+//
+// 2. We then search over the list to find both a primary and a suitable
+// candidate for promotion. A candidate is suitable if:
+//
+// - It is on the latest timeline
+// - It has the least amount of replication lag
+//
+// This is done to limit the risk of data loss.
+//
+// If either a primary or candidate is **not** found, we do not switch over.
+//
+// 3. If all of the above works successfully, a switchover is attempted.
+//
+// Method #2: Targeted Choice
+//
+// 1. If the "target" parameter, which should contain the name of the target
+// instance (Deployment), is not empty, then we will attempt to locate that
+// target Pod.
+//
+// 2. The target Pod name, called the candidate, is passed into the switchover
+// command generation function, and then is ultimately used in the switchover.
+func Switchover(clientset kubernetes.Interface, restConfig *rest.Config, cluster *crv1.Pgcluster, target string) error {
+ var (
+ candidate string
+ err error
+ pod *v1.Pod
+ )
+
+ // the method to get the pod is dictated by whether or not there is a target
+ // specified.
+ //
+ // If target is specified, then we will attempt to get the Pod that
+ // represents that target.
+ //
+ // If it is not specified, then we will attempt to get the primary pod
+ //
+ // If either errors, we will return an error
+ if target != "" {
+ if pod, err = getCandidatePod(clientset, cluster, target); err != nil {
+ // bail out before touching pod: a failed lookup leaves it nil
+ return err
+ }
+ candidate = pod.Name
+ } else {
+ pod, err = util.GetPrimaryPod(clientset, cluster)
+ }
+
+ if err != nil {
+ return err
+ }
+
+ // generate the command
+ cmd := generatePostgresSwitchoverCommand(cluster.Name, candidate)
+
+ // good to generally log which instances are being used in the switchover
+ log.Infof("controlled switchover started for cluster %q", cluster.Name)
+
+ if _, stderr, err := kubeapi.ExecToPodThroughAPI(restConfig, clientset,
+ cmd, "database", pod.Name, cluster.Namespace, nil); err != nil {
+ return fmt.Errorf(stderr)
+ }
+
+ log.Infof("controlled switchover completed for cluster %q", cluster.Name)
+
+ // and that's all
+ return nil
+}
+
+// generatePostgresSwitchoverCommand creates the command that is used to issue
+// a switchover (demote a primary, promote a replica).
+//
+// There are two ways to run this command:
+//
+// 1. Pass in only a clusterName. Patroni will select the best candidate
+// 2. Pass in a clusterName AND a target candidate name, which has to be the
+// name of a Pod
+func generatePostgresSwitchoverCommand(clusterName, candidate string) []string {
+ cmd := []string{"patronictl", "switchover", "--force", clusterName}
+
+ if candidate != "" {
+ cmd = append(cmd, "--candidate", candidate)
+ }
+
+ return cmd
+}
+
+// getCandidatePod tries to get the candidate Pod for a switchover. If such a
+// Pod cannot be found, we likely cannot use the instance as a switchover
+// candidate.
+func getCandidatePod(clientset kubernetes.Interface, cluster *crv1.Pgcluster, candidateName string) (*v1.Pod, error) {
+ ctx := context.TODO()
+ // ensure the Pod is part of the cluster and is running
+ options := metav1.ListOptions{
+ FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(),
+ LabelSelector: fields.AndSelectors(
+ fields.OneTermEqualSelector(config.LABEL_PG_CLUSTER, cluster.Name),
+ fields.OneTermEqualSelector(config.LABEL_PG_DATABASE, config.LABEL_TRUE),
+ fields.OneTermEqualSelector(config.LABEL_DEPLOYMENT_NAME, candidateName),
+ ).String(),
+ }
+
+ pods, err := clientset.CoreV1().Pods(cluster.Namespace).List(ctx, options)
+ if err != nil {
+ return nil, err
+ }
+
+ // if no Pods are found, then also return an error as we then cannot switch
+ // over to this instance
+ if len(pods.Items) == 0 {
+ return nil, fmt.Errorf("no pods found for instance %s", candidateName)
+ }
+
+ // there is an outside chance the list returns multiple Pods, so just return
+ // the first one
+ return &pods.Items[0], nil
+}
diff --git a/internal/operator/switchover_test.go b/internal/operator/switchover_test.go
new file mode 100644
index 0000000000..9d9abba2ba
--- /dev/null
+++ b/internal/operator/switchover_test.go
@@ -0,0 +1,45 @@
+package operator
+
+/*
+ Copyright 2021 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+import (
+ "reflect"
+ "testing"
+)
+
+func TestGeneratePostgresSwitchoverCommand(t *testing.T) {
+ clusterName := "hippo"
+ candidate := ""
+
+ t.Run("no candidate", func(t *testing.T) {
+ expected := []string{"patronictl", "switchover", "--force", clusterName}
+ actual := generatePostgresSwitchoverCommand(clusterName, candidate)
+
+ if !reflect.DeepEqual(expected, actual) {
+ t.Fatalf("expected: %v actual: %v", expected, actual)
+ }
+ })
+
+ t.Run("candidate", func(t *testing.T) {
+ candidate = "hippo-abc-123"
+ expected := []string{"patronictl", "switchover", "--force", clusterName, "--candidate", candidate}
+ actual := generatePostgresSwitchoverCommand(clusterName, candidate)
+
+ if !reflect.DeepEqual(expected, actual) {
+ t.Fatalf("expected: %v actual: %v", expected, actual)
+ }
+ })
+}
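The new `getCandidatePod` helper restricts the Pod listing to running Pods that belong to the cluster, are database Pods, and back the named Deployment. The label-selector composition can be sketched without client-go (the label keys below are illustrative stand-ins for the Operator's `config.LABEL_*` constants, not the exact values):

```go
package main

import (
	"fmt"
	"strings"
)

// buildCandidateSelector joins the constraints getCandidatePod applies when
// looking up a switchover candidate: same cluster, a database Pod, and the
// requested Deployment name.
func buildCandidateSelector(clusterName, deploymentName string) string {
	return strings.Join([]string{
		"pg-cluster=" + clusterName,       // assumed key for config.LABEL_PG_CLUSTER
		"pgo-pg-database=true",            // assumed key for config.LABEL_PG_DATABASE
		"deployment-name=" + deploymentName, // assumed key for config.LABEL_DEPLOYMENT_NAME
	}, ",")
}

func main() {
	fmt.Println(buildCandidateSelector("hippo", "hippo-ojnd"))
}
```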
diff --git a/internal/util/failover.go b/internal/util/failover.go
index 37a899e7f8..7c530d93b1 100644
--- a/internal/util/failover.go
+++ b/internal/util/failover.go
@@ -18,7 +18,6 @@ package util
import (
"context"
"encoding/json"
- "errors"
"fmt"
"github.com/crunchydata/postgres-operator/internal/config"
@@ -96,43 +95,6 @@ const (
// replication lag
var instanceInfoCommand = []string{"patronictl", "list", "-f", "json"}
-// GetPod determines the best target to fail to
-func GetPod(clientset kubernetes.Interface, deploymentName, namespace string) (*v1.Pod, error) {
- ctx := context.TODO()
-
- var err error
- var pod *v1.Pod
- var pods *v1.PodList
-
- selector := config.LABEL_DEPLOYMENT_NAME + "=" + deploymentName + "," + config.LABEL_PGHA_ROLE + "=replica"
- pods, err = clientset.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
- if err != nil {
- return pod, err
- }
- if len(pods.Items) != 1 {
- return pod, errors.New("could not determine which pod to failover to")
- }
-
- for i := range pods.Items {
- pod = &pods.Items[i]
- }
-
- found := false
-
- // make sure the pod has a database container it it
- for _, c := range pod.Spec.Containers {
- if c.Name == "database" {
- found = true
- }
- }
-
- if !found {
- return pod, errors.New("could not find a database container in the pod")
- }
-
- return pod, err
-}
-
// ReplicationStatus is responsible for retrieving and returning the replication
// information about the status of the replicas in a PostgreSQL cluster. It
// executes into a single replica pod and leverages the functionality of Patroni
diff --git a/pkg/apis/crunchydata.com/v1/task.go b/pkg/apis/crunchydata.com/v1/task.go
index c7eb9e4605..58340cc900 100644
--- a/pkg/apis/crunchydata.com/v1/task.go
+++ b/pkg/apis/crunchydata.com/v1/task.go
@@ -24,8 +24,6 @@ const PgtaskResourcePlural = "pgtasks"
const (
PgtaskDeleteData = "delete-data"
- PgtaskFailover = "failover"
- PgtaskAutoFailover = "autofailover"
PgtaskAddPolicies = "addpolicies"
PgtaskRollingUpdate = "rolling update"
)
diff --git a/pkg/apiservermsgs/failovermsgs.go b/pkg/apiservermsgs/failovermsgs.go
index b51b11c3e4..e7c37d30ab 100644
--- a/pkg/apiservermsgs/failovermsgs.go
+++ b/pkg/apiservermsgs/failovermsgs.go
@@ -37,8 +37,7 @@ type QueryFailoverResponse struct {
// CreateFailoverResponse ...
// swagger:model
type CreateFailoverResponse struct {
- Results []string
- Targets string
+ Results string
Status
}
diff --git a/pkg/events/eventtype.go b/pkg/events/eventtype.go
index 8a2031d08b..ebecda8055 100644
--- a/pkg/events/eventtype.go
+++ b/pkg/events/eventtype.go
@@ -45,8 +45,6 @@ const (
EventScaleClusterFailure = "ScaleClusterFailure"
EventScaleDownCluster = "ScaleDownCluster"
EventShutdownCluster = "ShutdownCluster"
- EventFailoverCluster = "FailoverCluster"
- EventFailoverClusterCompleted = "FailoverClusterCompleted"
EventRestoreCluster = "RestoreCluster"
EventRestoreClusterCompleted = "RestoreClusterCompleted"
EventUpgradeCluster = "UpgradeCluster"
@@ -202,38 +200,6 @@ func (lvl EventScaleDownClusterFormat) String() string {
return msg
}
-//----------------------------
-type EventFailoverClusterFormat struct {
- EventHeader `json:"eventheader"`
- Clustername string `json:"clustername"`
- Target string `json:"target"`
-}
-
-func (p EventFailoverClusterFormat) GetHeader() EventHeader {
- return p.EventHeader
-}
-
-func (lvl EventFailoverClusterFormat) String() string {
- msg := fmt.Sprintf("Event %s (failover) - clustername %s - target %s", lvl.EventHeader, lvl.Clustername, lvl.Target)
- return msg
-}
-
-//----------------------------
-type EventFailoverClusterCompletedFormat struct {
- EventHeader `json:"eventheader"`
- Clustername string `json:"clustername"`
- Target string `json:"target"`
-}
-
-func (p EventFailoverClusterCompletedFormat) GetHeader() EventHeader {
- return p.EventHeader
-}
-
-func (lvl EventFailoverClusterCompletedFormat) String() string {
- msg := fmt.Sprintf("Event %s (failover completed) - clustername %s - target %s", lvl.EventHeader, lvl.Clustername, lvl.Target)
- return msg
-}
-
//----------------------------
type EventUpgradeClusterFormat struct {
EventHeader `json:"eventheader"`
From a4dc532f363a34559b83fc9dea3ca7903da16629 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 4 Jan 2021 09:46:40 -0500
Subject: [PATCH 114/276] Modify role change check command call
This moves away from making a direct call to the Patroni REST API and
instead leverages the `patronictl` command to update the tag on a role
change.
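The substance of the change can be sketched side by side: the old exec payload shelled out to `curl` against Patroni's config endpoint, while the replacement is a static `patronictl` argument vector that needs no shell or string formatting. The port constant and helper names below are illustrative, not from the Operator source:

```go
package main

import "fmt"

// defaultPatroniPort stands in for config.DEFAULT_PATRONI_PORT; illustrative only.
const defaultPatroniPort = "8009"

// oldRoleChangeCmd rebuilds the former curl-based payload that patched the
// Patroni config endpoint directly, requiring a shell to interpret it.
func oldRoleChangeCmd() []string {
	return []string{"/bin/bash", "-c", fmt.Sprintf(
		"curl -s 127.0.0.1:%s/config -XPATCH -d "+
			`'{"tags":{"primary_on_role_change":null}}'`, defaultPatroniPort)}
}

// newRoleChangeCmd mirrors the patch's replacement: a fixed patronictl call
// that can be exec'd directly in the pod.
func newRoleChangeCmd() []string {
	return []string{"patronictl", "edit-config", "--force",
		"--set", "tags.primary_on_role_change=null"}
}

func main() {
	fmt.Println(oldRoleChangeCmd())
	fmt.Println(newRoleChangeCmd())
}
```

The static slice is also easier to unit test, since there is no per-cluster formatting involved.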
---
internal/operator/cluster/failover.go | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/internal/operator/cluster/failover.go b/internal/operator/cluster/failover.go
index 67d43e3c05..748c9f1b01 100644
--- a/internal/operator/cluster/failover.go
+++ b/internal/operator/cluster/failover.go
@@ -32,6 +32,9 @@ import (
"k8s.io/client-go/rest"
)
+var roleChangeCmd = []string{"patronictl", "edit-config", "--force",
+ "--set", "tags.primary_on_role_change=null"}
+
// RemovePrimaryOnRoleChangeTag sets the 'primary_on_role_change' tag to null in the
// Patroni DCS, effectively removing the tag. This is accomplished by exec'ing into
// the primary PG pod, and sending a patch request to update the appropriate data (i.e.
@@ -61,17 +64,11 @@ func RemovePrimaryOnRoleChangeTag(clientset kubernetes.Interface, restconfig *re
}
pod := pods.Items[0]
- // generate the curl command that will be run on the pod selected for the failover in order
- // to trigger the failover and promote that specific pod to primary
- command := make([]string, 3)
- command[0] = "/bin/bash"
- command[1] = "-c"
- command[2] = fmt.Sprintf("curl -s 127.0.0.1:%s/config -XPATCH -d "+
- "'{\"tags\":{\"primary_on_role_change\":null}}'", config.DEFAULT_PATRONI_PORT)
-
+ // execute the command on the primary pod that removes the
+ // "primary_on_role_change" tag from the Patroni configuration
log.Debugf("running Exec command '%s' with namespace=[%s] podname=[%s] container name=[%s]",
- command, namespace, pod.Name, pod.Spec.Containers[0].Name)
- stdout, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, command,
+ roleChangeCmd, namespace, pod.Name, pod.Spec.Containers[0].Name)
+ stdout, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, roleChangeCmd,
pod.Spec.Containers[0].Name, pod.Name, namespace, nil)
log.Debugf("stdout=[%s] stderr=[%s]", stdout, stderr)
if err != nil {
From 464286155c9d5dbdc5e068a165817801654b1912 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 4 Jan 2021 16:34:22 -0500
Subject: [PATCH 115/276] Enable user-defined forced failovers
While this was the default behavior in the Operator's past when using
`pgo failover`, a previous commit changed the Operator internals
to leverage a "controlled switchover," which is a bit nicer (i.e.
it only succeeds if there is a healthy instance to fail over to).
However, there are situations where one must force a failover,
and as such, we need to allow for that.
This adds the `--force` flag to `pgo failover` to allow for a
forced failover. Note that `--target` must be explicitly set
when forcing a failover.
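The target/force interaction described above can be sketched as a small validation helper. This is a simplified stand-in for the Operator's `isValidFailoverTarget`, which additionally verifies that the named Deployment exists and is not the current primary:

```go
package main

import (
	"errors"
	"fmt"
)

// validateTarget sketches the validation heuristic this patch adds: a blank
// target is acceptable for a controlled switchover (Patroni picks the best
// candidate), but a forced failover must explicitly name its target.
func validateTarget(target string, force bool) error {
	if target == "" {
		if !force {
			return nil // Patroni will select the best candidate
		}
		return errors.New("target is required when forcing a failover")
	}
	// the real implementation goes on to verify the Deployment exists and
	// is not the current primary
	return nil
}

func main() {
	fmt.Println(validateTarget("", false))         // <nil>
	fmt.Println(validateTarget("", true))          // target is required when forcing a failover
	fmt.Println(validateTarget("hippo-abc", true)) // <nil>
}
```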
---
cmd/pgo/cmd/failover.go | 26 +++++--
cmd/pgo/cmd/flags.go | 11 ++-
.../pgo-client/reference/pgo_failover.md | 3 +-
.../apiserver/failoverservice/failoverimpl.go | 70 +++++++++++++-----
internal/controller/job/backresthandler.go | 7 +-
internal/operator/common.go | 44 +++++++++++
internal/operator/{cluster => }/failover.go | 74 ++++++++++++++++++-
internal/operator/failover_test.go | 45 +++++++++++
internal/operator/switchover.go | 59 ++-------------
pkg/apiservermsgs/failovermsgs.go | 11 ++-
10 files changed, 259 insertions(+), 91 deletions(-)
rename internal/operator/{cluster => }/failover.go (53%)
create mode 100644 internal/operator/failover_test.go
diff --git a/cmd/pgo/cmd/failover.go b/cmd/pgo/cmd/failover.go
index b67cd1d2d1..b6eb41e735 100644
--- a/cmd/pgo/cmd/failover.go
+++ b/cmd/pgo/cmd/failover.go
@@ -19,6 +19,7 @@ package cmd
import (
"fmt"
"os"
+ "strings"
"github.com/crunchydata/postgres-operator/cmd/pgo/api"
"github.com/crunchydata/postgres-operator/cmd/pgo/util"
@@ -60,8 +61,10 @@ var failoverCmd = &cobra.Command{
func init() {
RootCmd.AddCommand(failoverCmd)
- failoverCmd.Flags().BoolVarP(&Query, "query", "", false, "Prints the list of failover candidates.")
+ failoverCmd.Flags().BoolVar(&Force, "force", false, "Force the failover to occur, regardless "+
+ "of the health of the target instance. Must be used with \"--target\".")
failoverCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.")
+ failoverCmd.Flags().BoolVar(&Query, "query", false, "Prints the list of failover candidates.")
failoverCmd.Flags().StringVarP(&Target, "target", "", "", "The replica target which the failover will occur on.")
}
@@ -69,20 +72,27 @@ func init() {
func createFailover(args []string, ns string) {
log.Debugf("createFailover called %v", args)
- request := new(msgs.CreateFailoverRequest)
- request.Namespace = ns
- request.ClusterName = args[0]
- request.Target = Target
- request.ClientVersion = msgs.PGO_VERSION
+ request := &msgs.CreateFailoverRequest{
+ ClientVersion: msgs.PGO_VERSION,
+ ClusterName: args[0],
+ Force: Force,
+ Namespace: ns,
+ Target: Target,
+ }
response, err := api.CreateFailover(httpclient, &SessionCredentials, request)
if err != nil {
fmt.Println("Error: " + err.Error())
- os.Exit(2)
+ os.Exit(1)
}
if response.Status.Code != msgs.Ok {
- fmt.Println("Error: " + response.Status.Msg)
+ fmt.Println("Error:", strings.ReplaceAll(response.Status.Msg, "Error: ", ""))
+
+ if strings.Contains(response.Status.Msg, "no primary") {
+ fmt.Println(`Hint: Try using the "--force" flag`)
+ }
+
os.Exit(1)
}
diff --git a/cmd/pgo/cmd/flags.go b/cmd/pgo/cmd/flags.go
index bdaf760942..a91b3daa14 100644
--- a/cmd/pgo/cmd/flags.go
+++ b/cmd/pgo/cmd/flags.go
@@ -22,7 +22,16 @@ var DeleteData bool
// even after a cluster is deleted. This is DEPRECATED
var KeepData bool
-var Query bool
+var (
+ // Force indicates that the "force" action should be taken for that step. This
+ // is different than NoPrompt as "Force" is for indicating that the API server
+ // must try at all costs
+ Force bool
+
+ // Query indicates that the attempted request is "querying" information
+ // instead of taking some action
+ Query bool
+)
var (
Target string
diff --git a/docs/content/pgo-client/reference/pgo_failover.md b/docs/content/pgo-client/reference/pgo_failover.md
index ea65e16f04..c81b3bfd92 100644
--- a/docs/content/pgo-client/reference/pgo_failover.md
+++ b/docs/content/pgo-client/reference/pgo_failover.md
@@ -23,6 +23,7 @@ pgo failover [flags]
### Options
```
+ --force Force the failover to occur, regardless of the health of the target instance. Must be used with "--target".
-h, --help help for failover
--no-prompt No command line confirmation.
--query Prints the list of failover candidates.
@@ -46,4 +47,4 @@ pgo failover [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Jan-2021
+###### Auto generated by spf13/cobra on 4-Jan-2021
diff --git a/internal/apiserver/failoverservice/failoverimpl.go b/internal/apiserver/failoverservice/failoverimpl.go
index 983496b7ed..7d75420d44 100644
--- a/internal/apiserver/failoverservice/failoverimpl.go
+++ b/internal/apiserver/failoverservice/failoverimpl.go
@@ -19,6 +19,7 @@ import (
"context"
"errors"
"fmt"
+ "strings"
"github.com/crunchydata/postgres-operator/internal/apiserver"
"github.com/crunchydata/postgres-operator/internal/config"
@@ -26,8 +27,11 @@ import (
"github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
+
log "github.com/sirupsen/logrus"
+ v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/fields"
)
// CreateFailover is the API endpoint for triggering a manual failover of a
@@ -60,18 +64,24 @@ func CreateFailover(request *msgs.CreateFailoverRequest, ns, pgouser string) msg
return resp
}
- if request.Target != "" {
- if err := isValidFailoverTarget(request.Target, request.ClusterName, ns); err != nil {
- resp.Status.Code = msgs.Error
- resp.Status.Msg = err.Error()
- return resp
- }
+ if err := isValidFailoverTarget(request); err != nil {
+ resp.Status.Code = msgs.Error
+ resp.Status.Msg = err.Error()
+ return resp
}
- // perform the switchover
- if err := operator.Switchover(apiserver.Clientset, apiserver.RESTConfig, cluster, request.Target); err != nil {
+ // perform the switchover or failover, depending on which flag is selected
+ // if we are forcing the failover, we need to use "Failover", otherwise we
+ // perform a controlled switchover
+ if request.Force {
+ err = operator.Failover(apiserver.Clientset, apiserver.RESTConfig, cluster, request.Target)
+ } else {
+ err = operator.Switchover(apiserver.Clientset, apiserver.RESTConfig, cluster, request.Target)
+ }
+
+ if err != nil {
resp.Status.Code = msgs.Error
- resp.Status.Msg = err.Error()
+ resp.Status.Msg = strings.ReplaceAll(err.Error(), "master", "primary")
return resp
}
@@ -161,31 +171,53 @@ func validateClusterName(clusterName, ns string) (*crv1.Pgcluster, error) {
// specified, and then ensuring the PG pod created by the deployment is not the current primary.
// If the deployment is not found, or if the pod is the current primary, an error will be returned.
// Otherwise the deployment is returned.
-func isValidFailoverTarget(deployName, clusterName, ns string) error {
+func isValidFailoverTarget(request *msgs.CreateFailoverRequest) error {
ctx := context.TODO()
+ // if we're not forcing a failover and the target is blank, there is
+ // nothing to validate, so we can return here.
+ // However, if we are forcing a failover, a blank target is an error
+ if request.Target == "" {
+ if !request.Force {
+ return nil
+ }
+
+ return fmt.Errorf("target is required when forcing a failover")
+ }
+
// Using the following label selector, ensure the deployment specified using deployName exists in the
// cluster specified using clusterName:
// pg-cluster=clusterName,deployment-name=deployName
- selector := config.LABEL_PG_CLUSTER + "=" + clusterName + "," + config.LABEL_DEPLOYMENT_NAME + "=" + deployName
- deployments, err := apiserver.Clientset.
- AppsV1().Deployments(ns).
- List(ctx, metav1.ListOptions{LabelSelector: selector})
+ options := metav1.ListOptions{
+ LabelSelector: fields.AndSelectors(
+ fields.OneTermEqualSelector(config.LABEL_PG_CLUSTER, request.ClusterName),
+ fields.OneTermEqualSelector(config.LABEL_DEPLOYMENT_NAME, request.Target),
+ ).String(),
+ }
+ deployments, err := apiserver.Clientset.AppsV1().Deployments(request.Namespace).List(ctx, options)
+
if err != nil {
log.Error(err)
return err
} else if len(deployments.Items) == 0 {
- return fmt.Errorf("no target found named %s", deployName)
+ return fmt.Errorf("no target found named %s", request.Target)
} else if len(deployments.Items) > 1 {
- return fmt.Errorf("more than one target found named %s", deployName)
+ return fmt.Errorf("more than one target found named %s", request.Target)
}
// Using the following label selector, determine if the target specified is the current
// primary for the cluster and return an error if it is:
// pg-cluster=clusterName,deployment-name=deployName,role=primary
- selector = config.LABEL_PG_CLUSTER + "=" + clusterName + "," + config.LABEL_DEPLOYMENT_NAME + "=" + deployName +
- "," + config.LABEL_PGHA_ROLE + "=" + config.LABEL_PGHA_ROLE_PRIMARY
- pods, _ := apiserver.Clientset.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
+ options.FieldSelector = fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String()
+ options.LabelSelector = fields.AndSelectors(
+ fields.OneTermEqualSelector(config.LABEL_PG_CLUSTER, request.ClusterName),
+ fields.OneTermEqualSelector(config.LABEL_DEPLOYMENT_NAME, request.Target),
+ fields.OneTermEqualSelector(config.LABEL_PGHA_ROLE, config.LABEL_PGHA_ROLE_PRIMARY),
+ ).String()
+
+ pods, _ := apiserver.Clientset.CoreV1().Pods(request.Namespace).List(ctx, options)
+
if len(pods.Items) > 0 {
return fmt.Errorf("The primary database cannot be selected as a failover target")
}
diff --git a/internal/controller/job/backresthandler.go b/internal/controller/job/backresthandler.go
index c7f585cb5e..2359fa11c0 100644
--- a/internal/controller/job/backresthandler.go
+++ b/internal/controller/job/backresthandler.go
@@ -26,8 +26,8 @@ import (
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/controller"
"github.com/crunchydata/postgres-operator/internal/kubeapi"
+ "github.com/crunchydata/postgres-operator/internal/operator"
"github.com/crunchydata/postgres-operator/internal/operator/backrest"
- clusteroperator "github.com/crunchydata/postgres-operator/internal/operator/cluster"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
)
@@ -92,9 +92,8 @@ func (c *Controller) handleBackrestBackupUpdate(job *apiv1.Job) error {
job.ObjectMeta.Namespace)
} else if labels[config.LABEL_PGHA_BACKUP_TYPE] == crv1.BackupTypeFailover {
- err := clusteroperator.RemovePrimaryOnRoleChangeTag(c.Client, c.Client.Config,
- labels[config.LABEL_PG_CLUSTER], job.ObjectMeta.Namespace)
- if err != nil {
+ if err := operator.RemovePrimaryOnRoleChangeTag(c.Client, c.Client.Config,
+ labels[config.LABEL_PG_CLUSTER], job.ObjectMeta.Namespace); err != nil {
log.Error(err)
return err
}
diff --git a/internal/operator/common.go b/internal/operator/common.go
index 20734af392..138436b4d2 100644
--- a/internal/operator/common.go
+++ b/internal/operator/common.go
@@ -32,6 +32,7 @@ import (
v1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/fields"
"k8s.io/client-go/kubernetes"
)
@@ -302,6 +303,49 @@ func SetContainerImageOverride(containerImageName string, container *v1.Containe
}
}
+// getCandidatePod tries to get the candidate Pod for a switchover or failover.
+// If "candidateName" is provided, it will seek out the specific PostgreSQL
+// instance. Otherwise, it will just attempt to find a running Pod.
+//
+// If such a Pod cannot be found, we likely cannot use the instance as a
+// switchover or failover candidate, as it is not running.
+func getCandidatePod(clientset kubernetes.Interface, cluster *crv1.Pgcluster, candidateName string) (*v1.Pod, error) {
+ ctx := context.TODO()
+
+ // build the label selector. we are looking for any PostgreSQL instance within
+ // this cluster, so that part is easy
+ labelSelector := fields.Set{
+ config.LABEL_PG_CLUSTER: cluster.Name,
+ config.LABEL_PG_DATABASE: config.LABEL_TRUE,
+ }
+
+ // if a candidateName is supplied, use that as part of the label selector to
+ // find the candidate Pod
+ if candidateName != "" {
+ labelSelector[config.LABEL_DEPLOYMENT_NAME] = candidateName
+ }
+
+ // ensure the Pod is part of the cluster and is running
+ options := metav1.ListOptions{
+ FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(),
+ LabelSelector: labelSelector.String(),
+ }
+
+ pods, err := clientset.CoreV1().Pods(cluster.Namespace).List(ctx, options)
+ if err != nil {
+ return nil, err
+ }
+
+ // if no Pods are found, then also return an error as we then cannot switch
+ // over to this instance
+ if len(pods.Items) == 0 {
+ return nil, fmt.Errorf("no pods found for instance %s", candidateName)
+ }
+
+ // the list returns multiple Pods, so just return the first one
+ return &pods.Items[0], nil
+}
+
// initializeContainerImageOverrides initializes the container image overrides
// that could be set if there are any `RELATED_IMAGE_*` environmental variables
func initializeContainerImageOverrides() {
diff --git a/internal/operator/cluster/failover.go b/internal/operator/failover.go
similarity index 53%
rename from internal/operator/cluster/failover.go
rename to internal/operator/failover.go
index 748c9f1b01..1334a09989 100644
--- a/internal/operator/cluster/failover.go
+++ b/internal/operator/failover.go
@@ -1,7 +1,4 @@
-// Package cluster holds the cluster CRD logic and definitions
-// A cluster is comprised of a primary service, replica service,
-// primary deployment, and replica deployment
-package cluster
+package operator
/*
Copyright 2018 - 2021 Crunchy Data Solutions, Inc.
@@ -24,6 +21,7 @@ import (
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/kubeapi"
+ crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
log "github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -35,6 +33,56 @@ import (
var roleChangeCmd = []string{"patronictl", "edit-config", "--force",
"--set", "tags.primary_on_role_change=null"}
+// Failover performs a failover on a PostgreSQL cluster, which is effectively
+// a "forced switchover." In other words, failover forcibly ensures that
+// there is a primary available.
+//
+// NOTE: This is reserved as the "last resort" case. If you want a controlled
+// switchover, you want "Switchover".
+//
+// A target must be specified. The target contains the name of the target
+// instance (Deployment), which is used to locate the target Pod.
+//
+// The target Pod name, called the candidate, is passed into the failover
+// command generation function and is ultimately used in the failover.
+func Failover(clientset kubernetes.Interface, restConfig *rest.Config, cluster *crv1.Pgcluster, target string) error {
+ // ensure target is not empty
+ if target == "" {
+ return fmt.Errorf("failover requires a target instance to be specified")
+ }
+
+ // attempt to get the Pod that represents the target instance; if this
+ // errors, return the error
+ pod, err := getCandidatePod(clientset, cluster, target)
+
+ if err != nil {
+ return err
+ }
+
+ candidate := pod.Name
+
+ // generate the command
+ cmd := generatePostgresFailoverCommand(cluster.Name, candidate)
+
+ // log that the failover is starting for this cluster
+ log.Infof("failover started for cluster %q", cluster.Name)
+
+ if _, stderr, err := kubeapi.ExecToPodThroughAPI(restConfig, clientset,
+ cmd, "database", pod.Name, cluster.Namespace, nil); err != nil {
+ return fmt.Errorf(stderr)
+ }
+
+ log.Infof("failover completed for cluster %q", cluster.Name)
+
+ // and that's all
+ return nil
+}
+
// RemovePrimaryOnRoleChangeTag sets the 'primary_on_role_change' tag to null in the
// Patroni DCS, effectively removing the tag. This is accomplished by exec'ing into
// the primary PG pod, and sending a patch request to update the appropriate data (i.e.
@@ -77,3 +125,21 @@ func RemovePrimaryOnRoleChangeTag(clientset kubernetes.Interface, restconfig *re
}
return nil
}
+
+// generatePostgresFailoverCommand creates the command that is used to issue
+// a failover command (ensure that there is a promoted primary).
+//
+// There are two ways to run this command:
+//
+// 1. Pass in only a clusterName. Patroni will select the best candidate
+// 2. Pass in a clusterName AND a target candidate name, which has to be the
+// name of a Pod
+func generatePostgresFailoverCommand(clusterName, candidate string) []string {
+ cmd := []string{"patronictl", "failover", "--force", clusterName}
+
+ if candidate != "" {
+ cmd = append(cmd, "--candidate", candidate)
+ }
+
+ return cmd
+}
diff --git a/internal/operator/failover_test.go b/internal/operator/failover_test.go
new file mode 100644
index 0000000000..67f3be31ef
--- /dev/null
+++ b/internal/operator/failover_test.go
@@ -0,0 +1,45 @@
+package operator
+
+/*
+ Copyright 2021 Crunchy Data Solutions, Inc.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+import (
+ "reflect"
+ "testing"
+)
+
+func TestGeneratePostgresFailoverCommand(t *testing.T) {
+ clusterName := "hippo"
+ candidate := ""
+
+ t.Run("no candidate", func(t *testing.T) {
+ expected := []string{"patronictl", "failover", "--force", clusterName}
+ actual := generatePostgresFailoverCommand(clusterName, candidate)
+
+ if !reflect.DeepEqual(expected, actual) {
+ t.Fatalf("expected: %v actual: %v", expected, actual)
+ }
+ })
+
+ t.Run("candidate", func(t *testing.T) {
+ candidate = "hippo-abc-123"
+ expected := []string{"patronictl", "failover", "--force", clusterName, "--candidate", candidate}
+ actual := generatePostgresFailoverCommand(clusterName, candidate)
+
+ if !reflect.DeepEqual(expected, actual) {
+ t.Fatalf("expected: %v actual: %v", expected, actual)
+ }
+ })
+}
diff --git a/internal/operator/switchover.go b/internal/operator/switchover.go
index bdffd268d2..6595f61208 100644
--- a/internal/operator/switchover.go
+++ b/internal/operator/switchover.go
@@ -16,22 +16,16 @@ package operator
*/
import (
- "context"
"fmt"
- "github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/kubeapi"
- "github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
log "github.com/sirupsen/logrus"
- v1 "k8s.io/api/core/v1"
- metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/fields"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
-// switchover performs a controlled switchover within a PostgreSQL cluster, i.e.
+// Switchover performs a controlled switchover within a PostgreSQL cluster, i.e.
// demoting a primary and promoting a replica. There are two types of switchover
// methods that can be invoked.
//
@@ -63,32 +57,26 @@ import (
// 2. The target Pod name, called the candidate is passed into the switchover
// command generation function, and then is ultimately used in the switchover.
func Switchover(clientset kubernetes.Interface, restConfig *rest.Config, cluster *crv1.Pgcluster, target string) error {
- var (
- candidate string
- err error
- pod *v1.Pod
- )
-
// the method to get the pod is dictated by whether or not there is a target
// specified.
//
// If target is specified, then we will attempt to get the Pod that
// represents that target.
//
- // If it is not specified, then we will attempt to get the primary pod
+ // If it is not specified, then we will attempt to get any Pod.
//
// If either errors, we will return an error
- if target != "" {
- pod, err = getCandidatePod(clientset, cluster, target)
- candidate = pod.Name
- } else {
- pod, err = util.GetPrimaryPod(clientset, cluster)
- }
+ candidate := ""
+ pod, err := getCandidatePod(clientset, cluster, target)
if err != nil {
return err
}
+ if target != "" {
+ candidate = pod.Name
+ }
+
// generate the command
cmd := generatePostgresSwitchoverCommand(cluster.Name, candidate)
@@ -123,34 +111,3 @@ func generatePostgresSwitchoverCommand(clusterName, candidate string) []string {
return cmd
}
-
-// getCandidatePod tries to get the candidate Pod for a switchover. If such a
-// Pod cannot be found, we likely cannot use the instance as a switchover
-// candidate.
-func getCandidatePod(clientset kubernetes.Interface, cluster *crv1.Pgcluster, candidateName string) (*v1.Pod, error) {
- ctx := context.TODO()
- // ensure the Pod is part of the cluster and is running
- options := metav1.ListOptions{
- FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(),
- LabelSelector: fields.AndSelectors(
- fields.OneTermEqualSelector(config.LABEL_PG_CLUSTER, cluster.Name),
- fields.OneTermEqualSelector(config.LABEL_PG_DATABASE, config.LABEL_TRUE),
- fields.OneTermEqualSelector(config.LABEL_DEPLOYMENT_NAME, candidateName),
- ).String(),
- }
-
- pods, err := clientset.CoreV1().Pods(cluster.Namespace).List(ctx, options)
- if err != nil {
- return nil, err
- }
-
- // if no Pods are found, then also return an error as we then cannot switch
- // over to this instance
- if len(pods.Items) == 0 {
- return nil, fmt.Errorf("no pods found for instance %s", candidateName)
- }
-
- // there is an outside chance the list returns multiple Pods, so just return
- // the first one
- return &pods.Items[0], nil
-}
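The relocated `getCandidatePod` helper makes the `deployment-name` label optional, so the same lookup serves both switchover (any running instance) and failover (a named instance). A rough stdlib-only sketch of the selector it assembles follows; the label keys are illustrative stand-ins for the `config.LABEL_*` constants, and the real code builds the string via the `k8s.io/apimachinery` fields package:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// buildCandidateSelector approximates the label selector getCandidatePod
// assembles: always match the cluster and database labels, and narrow to a
// specific instance only when a candidate name is given.
func buildCandidateSelector(clusterName, candidateName string) string {
	labels := map[string]string{
		"pg-cluster":      clusterName,
		"pgo-pg-database": "true",
	}
	if candidateName != "" {
		labels["deployment-name"] = candidateName
	}

	pairs := make([]string, 0, len(labels))
	for k, v := range labels {
		pairs = append(pairs, k+"="+v)
	}
	sort.Strings(pairs) // deterministic ordering, like fields.Set.String()
	return strings.Join(pairs, ",")
}

func main() {
	fmt.Println(buildCandidateSelector("hippo", ""))
	fmt.Println(buildCandidateSelector("hippo", "hippo-abcd"))
}
```

The list call then pairs this with a `status.phase=Running` field selector so that only live Pods are considered as candidates.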
diff --git a/pkg/apiservermsgs/failovermsgs.go b/pkg/apiservermsgs/failovermsgs.go
index e7c37d30ab..208c7bc2ee 100644
--- a/pkg/apiservermsgs/failovermsgs.go
+++ b/pkg/apiservermsgs/failovermsgs.go
@@ -44,10 +44,15 @@ type CreateFailoverResponse struct {
// CreateFailoverRequest ...
// swagger:model
type CreateFailoverRequest struct {
- Namespace string
- ClusterName string
- Target string
ClientVersion string
+ ClusterName string
+ // Force determines whether or not to force the failover. A normal failover
+ // request uses a switchover, which seeks a healthy option. However, "Force"
+ // forces the issue so to speak, and will promote either the instance that is
+ // the best fit or the specified target
+ Force bool
+ Namespace string
+ Target string
}
// QueryFailoverRequest ...
From 3f007d583259ad166ac9a5dadd500aac5a57a6e9 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 4 Jan 2021 17:43:57 -0500
Subject: [PATCH 116/276] Move default PostgreSQL version to 13
We are close enough to 13.2 that it is OK to move up the default
to 13 for this release.
---
Makefile | 2 +-
bin/push-ccp-to-gcr.sh | 2 +-
conf/postgres-operator/pgo.yaml | 2 +-
docs/config.toml | 4 ++--
docs/content/tutorial/pgbouncer.md | 4 ++--
examples/create-by-resource/fromcrd.json | 2 +-
examples/helm/create-cluster/values.yaml | 2 +-
examples/kustomize/createcluster/README.md | 8 ++++----
examples/kustomize/createcluster/base/pgcluster.yaml | 2 +-
installers/ansible/values.yaml | 2 +-
installers/gcp-marketplace/values.yaml | 2 +-
installers/helm/values.yaml | 2 +-
installers/kubectl/postgres-operator-ocp311.yml | 2 +-
installers/kubectl/postgres-operator.yml | 2 +-
installers/olm/Makefile | 2 +-
15 files changed, 20 insertions(+), 20 deletions(-)
diff --git a/Makefile b/Makefile
index 5f5e8f06e6..c5e9fb2036 100644
--- a/Makefile
+++ b/Makefile
@@ -7,7 +7,7 @@ PGO_IMAGE_PREFIX ?= crunchydata
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
PGO_VERSION ?= 4.5.1
PGO_PG_VERSION ?= 12
-PGO_PG_FULLVERSION ?= 12.5
+PGO_PG_FULLVERSION ?= 13.1
PGO_BACKREST_VERSION ?= 2.31
PACKAGER ?= yum
diff --git a/bin/push-ccp-to-gcr.sh b/bin/push-ccp-to-gcr.sh
index d476c07b0b..999d355f1f 100755
--- a/bin/push-ccp-to-gcr.sh
+++ b/bin/push-ccp-to-gcr.sh
@@ -16,7 +16,7 @@
GCR_IMAGE_PREFIX=gcr.io/crunchy-dev-test
CCP_IMAGE_PREFIX=crunchydata
-CCP_IMAGE_TAG=centos7-12.5-4.5.1
+CCP_IMAGE_TAG=centos7-13.1-4.5.1
IMAGES=(
crunchy-prometheus
diff --git a/conf/postgres-operator/pgo.yaml b/conf/postgres-operator/pgo.yaml
index 0cb8bfadc1..569f91142c 100644
--- a/conf/postgres-operator/pgo.yaml
+++ b/conf/postgres-operator/pgo.yaml
@@ -2,7 +2,7 @@ Cluster:
CCPImagePrefix: registry.developers.crunchydata.com/crunchydata
Metrics: false
Badger: false
- CCPImageTag: centos7-12.5-4.5.1
+ CCPImageTag: centos7-13.1-4.5.1
Port: 5432
PGBadgerPort: 10000
ExporterPort: 9187
diff --git a/docs/config.toml b/docs/config.toml
index dc39d49daa..7c04096c8e 100644
--- a/docs/config.toml
+++ b/docs/config.toml
@@ -26,9 +26,9 @@ highlightClientSide = false # set true to use highlight.pack.js instead of the d
menushortcutsnewtab = true # set true to open shortcuts links to a new tab/window
enableGitInfo = true
operatorVersion = "4.5.1"
-postgresVersion = "12.5"
+postgresVersion = "13.1"
postgresVersion13 = "13.1"
postgresVersion12 = "12.5"
postgresVersion11 = "11.10"
postgresVersion10 = "10.15"
postgresVersion96 = "9.6.20"
diff --git a/docs/content/tutorial/pgbouncer.md b/docs/content/tutorial/pgbouncer.md
index 43c2534996..a2d3b342c1 100644
--- a/docs/content/tutorial/pgbouncer.md
+++ b/docs/content/tutorial/pgbouncer.md
@@ -116,7 +116,7 @@ PGPASSWORD=randompassword psql -h localhost -p 5432 -U pgbouncer pgbouncer
You should see something similar to this:
```
-psql (12.5, server 1.14.0/bouncer)
+psql (13.1, server 1.14.0/bouncer)
Type "help" for help.
pgbouncer=#
@@ -172,7 +172,7 @@ PGPASSWORD=securerandomlygeneratedpassword psql -h localhost -p 5432 -U testuser
You should see something similar to this:
```
-psql (12.5)
+psql (13.1)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
diff --git a/examples/create-by-resource/fromcrd.json b/examples/create-by-resource/fromcrd.json
index c267e258d5..0d34b03e32 100644
--- a/examples/create-by-resource/fromcrd.json
+++ b/examples/create-by-resource/fromcrd.json
@@ -45,7 +45,7 @@
"supplementalgroups": ""
},
"ccpimage": "crunchy-postgres-ha",
- "ccpimagetag": "centos7-12.5-4.5.1",
+ "ccpimagetag": "centos7-13.1-4.5.1",
"clustername": "fromcrd",
"database": "userdb",
"exporterport": "9187",
diff --git a/examples/helm/create-cluster/values.yaml b/examples/helm/create-cluster/values.yaml
index b0301c6205..1f1336bf64 100644
--- a/examples/helm/create-cluster/values.yaml
+++ b/examples/helm/create-cluster/values.yaml
@@ -4,7 +4,7 @@
# The values is for the namespace and the postgresql cluster name
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
-ccpimagetag: centos7-12.5-4.5.1
+ccpimagetag: centos7-13.1-4.5.1
namespace: pgo
pgclustername: hippo
pgoimageprefix: registry.developers.crunchydata.com/crunchydata
diff --git a/examples/kustomize/createcluster/README.md b/examples/kustomize/createcluster/README.md
index 3ea4c18f9f..04b6b2b7d6 100644
--- a/examples/kustomize/createcluster/README.md
+++ b/examples/kustomize/createcluster/README.md
@@ -44,7 +44,7 @@ pgo show cluster hippo -n pgo
```
You will see something like this if successful:
```
-cluster : hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
+cluster : hippo (crunchy-postgres-ha:centos7-13.1-4.5.1)
pod : hippo-8fb6bd96-j87wq (Running) on gke-xxxx-default-pool-38e946bd-257w (1/1) (primary)
pvc: hippo (1Gi)
deployment : hippo
@@ -79,7 +79,7 @@ pgo show cluster dev-hippo -n pgo
```
You will see something like this if successful:
```
-cluster : dev-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
+cluster : dev-hippo (crunchy-postgres-ha:centos7-13.1-4.5.1)
pod : dev-hippo-588d4cb746-bwrxb (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
pvc: dev-hippo (1Gi)
deployment : dev-hippo
@@ -113,7 +113,7 @@ pgo show cluster staging-hippo -n pgo
```
You will see something like this if successful, (Notice one of the replicas is a different size):
```
-cluster : staging-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
+cluster : staging-hippo (crunchy-postgres-ha:centos7-13.1-4.5.1)
pod : staging-hippo-85cf6dcb65-9h748 (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
pvc: staging-hippo (1Gi)
pod : staging-hippo-lnxw-cf47d8c8b-6r4wn (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (replica)
@@ -154,7 +154,7 @@ pgo show cluster prod-hippo -n pgo
```
You will see something like this if successful, (Notice one of the replicas is a different size):
```
-cluster : prod-hippo (crunchy-postgres-ha:centos7-12.5-4.5.1)
+cluster : prod-hippo (crunchy-postgres-ha:centos7-13.1-4.5.1)
pod : prod-hippo-5d6dd46497-rr67c (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (primary)
pvc: prod-hippo (1Gi)
pod : prod-hippo-flty-84d97c8769-2pzbh (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (replica)
diff --git a/examples/kustomize/createcluster/base/pgcluster.yaml b/examples/kustomize/createcluster/base/pgcluster.yaml
index cf3293a73a..d1c8fb884c 100644
--- a/examples/kustomize/createcluster/base/pgcluster.yaml
+++ b/examples/kustomize/createcluster/base/pgcluster.yaml
@@ -42,7 +42,7 @@ spec:
annotations: {}
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
- ccpimagetag: centos7-12.5-4.5.1
+ ccpimagetag: centos7-13.1-4.5.1
clustername: hippo
customconfig: ""
database: hippo
diff --git a/installers/ansible/values.yaml b/installers/ansible/values.yaml
index ebce0ed751..58d5f6debf 100644
--- a/installers/ansible/values.yaml
+++ b/installers/ansible/values.yaml
@@ -17,7 +17,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos7-12.5-4.5.1"
+ccp_image_tag: "centos7-13.1-4.5.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
diff --git a/installers/gcp-marketplace/values.yaml b/installers/gcp-marketplace/values.yaml
index e2ed852df1..c86f34b68a 100644
--- a/installers/gcp-marketplace/values.yaml
+++ b/installers/gcp-marketplace/values.yaml
@@ -10,7 +10,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos7-12.5-4.5.1"
+ccp_image_tag: "centos7-13.1-4.5.1"
create_rbac: "true"
db_name: ""
db_password_age_days: "0"
diff --git a/installers/helm/values.yaml b/installers/helm/values.yaml
index b2c5d441b2..536a1bcb9d 100644
--- a/installers/helm/values.yaml
+++ b/installers/helm/values.yaml
@@ -37,7 +37,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos7-12.5-4.5.1"
+ccp_image_tag: "centos7-13.1-4.5.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
diff --git a/installers/kubectl/postgres-operator-ocp311.yml b/installers/kubectl/postgres-operator-ocp311.yml
index 977c9ea790..f0929e4b14 100644
--- a/installers/kubectl/postgres-operator-ocp311.yml
+++ b/installers/kubectl/postgres-operator-ocp311.yml
@@ -44,7 +44,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos7-12.5-4.5.1"
+ ccp_image_tag: "centos7-13.1-4.5.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
diff --git a/installers/kubectl/postgres-operator.yml b/installers/kubectl/postgres-operator.yml
index 971e436d20..91e05d823e 100644
--- a/installers/kubectl/postgres-operator.yml
+++ b/installers/kubectl/postgres-operator.yml
@@ -138,7 +138,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos7-12.5-4.5.1"
+ ccp_image_tag: "centos7-13.1-4.5.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
diff --git a/installers/olm/Makefile b/installers/olm/Makefile
index ebc81698fa..36a11f149e 100644
--- a/installers/olm/Makefile
+++ b/installers/olm/Makefile
@@ -2,7 +2,7 @@
.SUFFIXES:
CCP_IMAGE_PREFIX ?= registry.developers.crunchydata.com/crunchydata
-CCP_PG_FULLVERSION ?= 12.5
+CCP_PG_FULLVERSION ?= 13.1
CCP_POSTGIS_VERSION ?= 3.0
CONTAINER ?= docker
KUBECONFIG ?= $(HOME)/.kube/config
From 815dae81c0cc6e62a9368d9f40abebbd4539750f Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 4 Jan 2021 17:47:43 -0500
Subject: [PATCH 117/276] Move base image default to CentOS 8
---
Makefile | 2 +-
bin/push-ccp-to-gcr.sh | 2 +-
conf/postgres-operator/pgo.yaml | 4 ++--
docs/config.toml | 2 +-
.../advanced/crunchy-postgres-exporter.md | 4 ++--
examples/create-by-resource/fromcrd.json | 2 +-
examples/envs.sh | 2 +-
examples/helm/create-cluster/values.yaml | 2 +-
examples/kustomize/createcluster/README.md | 8 +++----
.../createcluster/base/pgcluster.yaml | 2 +-
installers/ansible/values.yaml | 4 ++--
installers/gcp-marketplace/values.yaml | 4 ++--
installers/helm/values.yaml | 4 ++--
.../kubectl/postgres-operator-ocp311.yml | 6 ++---
installers/kubectl/postgres-operator.yml | 6 ++---
installers/metrics/helm/helm_template.yaml | 2 +-
installers/metrics/helm/values.yaml | 2 +-
.../postgres-operator-metrics-ocp311.yml | 2 +-
.../kubectl/postgres-operator-metrics.yml | 2 +-
installers/olm/Makefile | 2 +-
internal/util/util_test.go | 24 +++++++++----------
21 files changed, 44 insertions(+), 44 deletions(-)
diff --git a/Makefile b/Makefile
index c5e9fb2036..1469e405b1 100644
--- a/Makefile
+++ b/Makefile
@@ -2,7 +2,7 @@
# Default values if not already set
ANSIBLE_VERSION ?= 2.9.*
PGOROOT ?= $(CURDIR)
-PGO_BASEOS ?= centos7
+PGO_BASEOS ?= centos8
PGO_IMAGE_PREFIX ?= crunchydata
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
PGO_VERSION ?= 4.5.1
diff --git a/bin/push-ccp-to-gcr.sh b/bin/push-ccp-to-gcr.sh
index 999d355f1f..f1565f0dfb 100755
--- a/bin/push-ccp-to-gcr.sh
+++ b/bin/push-ccp-to-gcr.sh
@@ -16,7 +16,7 @@
GCR_IMAGE_PREFIX=gcr.io/crunchy-dev-test
CCP_IMAGE_PREFIX=crunchydata
-CCP_IMAGE_TAG=centos7-13.1-4.5.1
+CCP_IMAGE_TAG=centos8-13.1-4.5.1
IMAGES=(
crunchy-prometheus
diff --git a/conf/postgres-operator/pgo.yaml b/conf/postgres-operator/pgo.yaml
index 569f91142c..0b90394354 100644
--- a/conf/postgres-operator/pgo.yaml
+++ b/conf/postgres-operator/pgo.yaml
@@ -2,7 +2,7 @@ Cluster:
CCPImagePrefix: registry.developers.crunchydata.com/crunchydata
Metrics: false
Badger: false
- CCPImageTag: centos7-13.1-4.5.1
+ CCPImageTag: centos8-13.1-4.5.1
Port: 5432
PGBadgerPort: 10000
ExporterPort: 9187
@@ -81,4 +81,4 @@ Storage:
Pgo:
Audit: false
PGOImagePrefix: registry.developers.crunchydata.com/crunchydata
- PGOImageTag: centos7-4.5.1
+ PGOImageTag: centos8-4.5.1
diff --git a/docs/config.toml b/docs/config.toml
index 7c04096c8e..cafb971a4d 100644
--- a/docs/config.toml
+++ b/docs/config.toml
@@ -34,7 +34,7 @@ postgresVersion10 = "10.15"
postgresVersion96 = "9.6.20"
postgresVersion95 = "9.5.24"
postgisVersion = "3.0"
-centosBase = "centos7"
+centosBase = "centos8"
[outputs]
home = [ "HTML", "RSS", "JSON"]
diff --git a/docs/content/advanced/crunchy-postgres-exporter.md b/docs/content/advanced/crunchy-postgres-exporter.md
index b9b2a3ba09..16a58e5672 100644
--- a/docs/content/advanced/crunchy-postgres-exporter.md
+++ b/docs/content/advanced/crunchy-postgres-exporter.md
@@ -24,8 +24,8 @@ can be specified for the API to collect. For an example of a queries.yml file, s
The crunchy-postgres-exporter Docker image contains the following packages (versions vary depending on PostgreSQL version):
* PostgreSQL ({{< param postgresVersion13 >}}, {{< param postgresVersion12 >}}, {{< param postgresVersion11 >}}, {{< param postgresVersion10 >}}, {{< param postgresVersion96 >}} and {{< param postgresVersion95 >}})
-* CentOS7 - publicly available
-* UBI7 - customers only
+* CentOS 8 - publicly available
+* UBI 7, UBI 8 - customers only
* [PostgreSQL Exporter](https://github.com/wrouesnel/postgres_exporter)
## Environment Variables
diff --git a/examples/create-by-resource/fromcrd.json b/examples/create-by-resource/fromcrd.json
index 0d34b03e32..aca1a509c1 100644
--- a/examples/create-by-resource/fromcrd.json
+++ b/examples/create-by-resource/fromcrd.json
@@ -45,7 +45,7 @@
"supplementalgroups": ""
},
"ccpimage": "crunchy-postgres-ha",
- "ccpimagetag": "centos7-13.1-4.5.1",
+ "ccpimagetag": "centos8-13.1-4.5.1",
"clustername": "fromcrd",
"database": "userdb",
"exporterport": "9187",
diff --git a/examples/envs.sh b/examples/envs.sh
index 848a7252b5..b824de2eeb 100644
--- a/examples/envs.sh
+++ b/examples/envs.sh
@@ -19,7 +19,7 @@ export PGO_CONF_DIR=$PGOROOT/installers/ansible/roles/pgo-operator/files
# the version of the Operator you run is set by these vars
export PGO_IMAGE_PREFIX=registry.developers.crunchydata.com/crunchydata
-export PGO_BASEOS=centos7
+export PGO_BASEOS=centos8
export PGO_VERSION=4.5.1
export PGO_IMAGE_TAG=$PGO_BASEOS-$PGO_VERSION
diff --git a/examples/helm/create-cluster/values.yaml b/examples/helm/create-cluster/values.yaml
index 1f1336bf64..ce2abbf974 100644
--- a/examples/helm/create-cluster/values.yaml
+++ b/examples/helm/create-cluster/values.yaml
@@ -4,7 +4,7 @@
# The values are for the namespace and the PostgreSQL cluster name
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
-ccpimagetag: centos7-13.1-4.5.1
+ccpimagetag: centos8-13.1-4.5.1
namespace: pgo
pgclustername: hippo
pgoimageprefix: registry.developers.crunchydata.com/crunchydata
diff --git a/examples/kustomize/createcluster/README.md b/examples/kustomize/createcluster/README.md
index 04b6b2b7d6..6eb934577b 100644
--- a/examples/kustomize/createcluster/README.md
+++ b/examples/kustomize/createcluster/README.md
@@ -44,7 +44,7 @@ pgo show cluster hippo -n pgo
```
You will see something like this if successful:
```
-cluster : hippo (crunchy-postgres-ha:centos7-13.1-4.5.1)
+cluster : hippo (crunchy-postgres-ha:centos8-13.1-4.5.1)
pod : hippo-8fb6bd96-j87wq (Running) on gke-xxxx-default-pool-38e946bd-257w (1/1) (primary)
pvc: hippo (1Gi)
deployment : hippo
@@ -79,7 +79,7 @@ pgo show cluster dev-hippo -n pgo
```
You will see something like this if successful:
```
-cluster : dev-hippo (crunchy-postgres-ha:centos7-13.1-4.5.1)
+cluster : dev-hippo (crunchy-postgres-ha:centos8-13.1-4.5.1)
pod : dev-hippo-588d4cb746-bwrxb (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
pvc: dev-hippo (1Gi)
deployment : dev-hippo
@@ -113,7 +113,7 @@ pgo show cluster staging-hippo -n pgo
```
You will see something like this if successful (notice one of the replicas is a different size):
```
-cluster : staging-hippo (crunchy-postgres-ha:centos7-13.1-4.5.1)
+cluster : staging-hippo (crunchy-postgres-ha:centos8-13.1-4.5.1)
pod : staging-hippo-85cf6dcb65-9h748 (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
pvc: staging-hippo (1Gi)
pod : staging-hippo-lnxw-cf47d8c8b-6r4wn (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (replica)
@@ -154,7 +154,7 @@ pgo show cluster prod-hippo -n pgo
```
You will see something like this if successful (notice one of the replicas is a different size):
```
-cluster : prod-hippo (crunchy-postgres-ha:centos7-13.1-4.5.1)
+cluster : prod-hippo (crunchy-postgres-ha:centos8-13.1-4.5.1)
pod : prod-hippo-5d6dd46497-rr67c (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (primary)
pvc: prod-hippo (1Gi)
pod : prod-hippo-flty-84d97c8769-2pzbh (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (replica)
diff --git a/examples/kustomize/createcluster/base/pgcluster.yaml b/examples/kustomize/createcluster/base/pgcluster.yaml
index d1c8fb884c..f070683e08 100644
--- a/examples/kustomize/createcluster/base/pgcluster.yaml
+++ b/examples/kustomize/createcluster/base/pgcluster.yaml
@@ -42,7 +42,7 @@ spec:
annotations: {}
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
- ccpimagetag: centos7-13.1-4.5.1
+ ccpimagetag: centos8-13.1-4.5.1
clustername: hippo
customconfig: ""
database: hippo
diff --git a/installers/ansible/values.yaml b/installers/ansible/values.yaml
index 58d5f6debf..2675ac8eb6 100644
--- a/installers/ansible/values.yaml
+++ b/installers/ansible/values.yaml
@@ -17,7 +17,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos7-13.1-4.5.1"
+ccp_image_tag: "centos8-13.1-4.5.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -57,7 +57,7 @@ pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
-pgo_image_tag: "centos7-4.5.1"
+pgo_image_tag: "centos8-4.5.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
diff --git a/installers/gcp-marketplace/values.yaml b/installers/gcp-marketplace/values.yaml
index c86f34b68a..a90d892520 100644
--- a/installers/gcp-marketplace/values.yaml
+++ b/installers/gcp-marketplace/values.yaml
@@ -10,7 +10,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos7-13.1-4.5.1"
+ccp_image_tag: "centos8-13.1-4.5.1"
create_rbac: "true"
db_name: ""
db_password_age_days: "0"
@@ -34,7 +34,7 @@ pgo_client_container_install: "false"
pgo_client_install: 'false'
pgo_client_version: "4.5.1"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos7-4.5.1"
+pgo_image_tag: "centos8-4.5.1"
pgo_installation_name: '${OPERATOR_NAME}'
pgo_operator_namespace: '${OPERATOR_NAMESPACE}'
scheduler_timeout: "3600"
diff --git a/installers/helm/values.yaml b/installers/helm/values.yaml
index 536a1bcb9d..b3362d98e7 100644
--- a/installers/helm/values.yaml
+++ b/installers/helm/values.yaml
@@ -37,7 +37,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos7-13.1-4.5.1"
+ccp_image_tag: "centos8-13.1-4.5.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -77,7 +77,7 @@ pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
-pgo_image_tag: "centos7-4.5.1"
+pgo_image_tag: "centos8-4.5.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
diff --git a/installers/kubectl/postgres-operator-ocp311.yml b/installers/kubectl/postgres-operator-ocp311.yml
index f0929e4b14..f2d411b295 100644
--- a/installers/kubectl/postgres-operator-ocp311.yml
+++ b/installers/kubectl/postgres-operator-ocp311.yml
@@ -44,7 +44,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos7-13.1-4.5.1"
+ ccp_image_tag: "centos8-13.1-4.5.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -84,7 +84,7 @@ data:
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
- pgo_image_tag: "centos7-4.5.1"
+ pgo_image_tag: "centos8-4.5.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
@@ -161,7 +161,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.1
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.5.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/kubectl/postgres-operator.yml b/installers/kubectl/postgres-operator.yml
index 91e05d823e..cf48f71d50 100644
--- a/installers/kubectl/postgres-operator.yml
+++ b/installers/kubectl/postgres-operator.yml
@@ -138,7 +138,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos7-13.1-4.5.1"
+ ccp_image_tag: "centos8-13.1-4.5.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -178,7 +178,7 @@ data:
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
- pgo_image_tag: "centos7-4.5.1"
+ pgo_image_tag: "centos8-4.5.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
@@ -268,7 +268,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.1
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.5.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/metrics/helm/helm_template.yaml b/installers/metrics/helm/helm_template.yaml
index b328adba55..d0f8e64273 100644
--- a/installers/metrics/helm/helm_template.yaml
+++ b/installers/metrics/helm/helm_template.yaml
@@ -20,5 +20,5 @@ serviceAccount:
# the image prefix and tag to use for the 'pgo-deployer' container
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos7-4.5.1"
+pgo_image_tag: "centos8-4.5.1"
diff --git a/installers/metrics/helm/values.yaml b/installers/metrics/helm/values.yaml
index 616001b5ec..edb41f9038 100644
--- a/installers/metrics/helm/values.yaml
+++ b/installers/metrics/helm/values.yaml
@@ -20,7 +20,7 @@ serviceAccount:
# the image prefix and tag to use for the 'pgo-deployer' container
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos7-4.5.1"
+pgo_image_tag: "centos8-4.5.1"
# =====================
# Configuration Options
diff --git a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml b/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
index f4643fc126..b9bdd356a3 100644
--- a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
+++ b/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
@@ -96,7 +96,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-metrics-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.1
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.5.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/metrics/kubectl/postgres-operator-metrics.yml b/installers/metrics/kubectl/postgres-operator-metrics.yml
index 313698aaeb..87ca1f8af1 100644
--- a/installers/metrics/kubectl/postgres-operator-metrics.yml
+++ b/installers/metrics/kubectl/postgres-operator-metrics.yml
@@ -165,7 +165,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-metrics-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.1
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.5.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/olm/Makefile b/installers/olm/Makefile
index 36a11f149e..2650440832 100644
--- a/installers/olm/Makefile
+++ b/installers/olm/Makefile
@@ -9,7 +9,7 @@ KUBECONFIG ?= $(HOME)/.kube/config
OLM_SDK_VERSION ?= 0.15.1
OLM_TOOLS ?= registry.localhost:5000/postgres-operator-olm-tools:$(OLM_SDK_VERSION)
OLM_VERSION ?= 0.15.1
-PGO_BASEOS ?= centos7
+PGO_BASEOS ?= centos8
PGO_IMAGE_PREFIX ?= registry.developers.crunchydata.com/crunchydata
PGO_VERSION ?= 4.5.1
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
diff --git a/internal/util/util_test.go b/internal/util/util_test.go
index d867351a85..08c6966a3a 100644
--- a/internal/util/util_test.go
+++ b/internal/util/util_test.go
@@ -17,25 +17,25 @@ func TestGetStandardImageTag(t *testing.T) {
expected string
}{
{
- "image: crunchy-postgres-ha, tag: centos7-12.4-4.5.0",
+ "image: crunchy-postgres-ha, tag: centos8-12.4-4.5.0",
"crunchy-postgres-ha",
- "centos7-12.4-4.5.0",
- "centos7-12.4-4.5.0",
+ "centos8-12.4-4.5.0",
+ "centos8-12.4-4.5.0",
}, {
- "image: crunchy-postgres-gis-ha, tag: centos7-12.4-3.0-4.5.0",
+ "image: crunchy-postgres-gis-ha, tag: centos8-12.4-3.0-4.5.0",
"crunchy-postgres-gis-ha",
- "centos7-12.4-3.0-4.5.0",
- "centos7-12.4-4.5.0",
+ "centos8-12.4-3.0-4.5.0",
+ "centos8-12.4-4.5.0",
}, {
- "image: crunchy-postgres-ha, tag: centos7-12.4-4.5.0-beta.1",
+ "image: crunchy-postgres-ha, tag: centos8-12.4-4.5.0-beta.1",
"crunchy-postgres-ha",
- "centos7-12.4-4.5.0-beta.1",
- "centos7-12.4-4.5.0-beta.1",
+ "centos8-12.4-4.5.0-beta.1",
+ "centos8-12.4-4.5.0-beta.1",
}, {
- "image: crunchy-postgres-gis-ha, tag: centos7-12.4-3.0-4.5.0-beta.2",
+ "image: crunchy-postgres-gis-ha, tag: centos8-12.4-3.0-4.5.0-beta.2",
"crunchy-postgres-gis-ha",
- "centos7-12.4-3.0-4.5.0-beta.2",
- "centos7-12.4-4.5.0-beta.2",
+ "centos8-12.4-3.0-4.5.0-beta.2",
+ "centos8-12.4-4.5.0-beta.2",
}, {
"image: crunchy-postgres-ha, tag: centos8-9.5.23-4.5.0-rc.1",
"crunchy-postgres-ha",
From 60132e03ac5c882604a175495e3c20a57fd1d715 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 4 Jan 2021 18:10:42 -0500
Subject: [PATCH 118/276] Version bump 4.6.0-beta.1
---
Makefile | 2 +-
README.md | 10 +-
bin/push-ccp-to-gcr.sh | 2 +-
conf/postgres-operator/pgo.yaml | 4 +-
docs/config.toml | 2 +-
docs/content/Configuration/compatibility.md | 6 +
docs/content/_index.md | 9 +-
docs/content/releases/4.6.0.md | 233 ++++++++++++++++++
examples/create-by-resource/fromcrd.json | 6 +-
examples/envs.sh | 2 +-
.../create-cluster/templates/pgcluster.yaml | 2 +-
examples/helm/create-cluster/values.yaml | 4 +-
examples/kustomize/createcluster/README.md | 16 +-
.../createcluster/base/pgcluster.yaml | 6 +-
.../overlay/staging/hippo-rpl1-pgreplica.yaml | 2 +-
installers/ansible/README.md | 2 +-
installers/ansible/values.yaml | 6 +-
installers/gcp-marketplace/Makefile | 2 +-
installers/gcp-marketplace/README.md | 2 +-
installers/gcp-marketplace/values.yaml | 6 +-
installers/helm/Chart.yaml | 2 +-
installers/helm/values.yaml | 6 +-
installers/kubectl/client-setup.sh | 2 +-
.../kubectl/postgres-operator-ocp311.yml | 8 +-
installers/kubectl/postgres-operator.yml | 8 +-
installers/metrics/ansible/README.md | 2 +-
installers/metrics/helm/Chart.yaml | 2 +-
installers/metrics/helm/helm_template.yaml | 2 +-
installers/metrics/helm/values.yaml | 2 +-
.../postgres-operator-metrics-ocp311.yml | 2 +-
.../kubectl/postgres-operator-metrics.yml | 2 +-
installers/olm/Makefile | 2 +-
pkg/apis/crunchydata.com/v1/doc.go | 8 +-
pkg/apiservermsgs/common.go | 2 +-
redhat/atomic/help.1 | 2 +-
redhat/atomic/help.md | 2 +-
36 files changed, 312 insertions(+), 66 deletions(-)
create mode 100644 docs/content/releases/4.6.0.md
diff --git a/Makefile b/Makefile
index 1469e405b1..62661ad617 100644
--- a/Makefile
+++ b/Makefile
@@ -5,7 +5,7 @@ PGOROOT ?= $(CURDIR)
PGO_BASEOS ?= centos8
PGO_IMAGE_PREFIX ?= crunchydata
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
-PGO_VERSION ?= 4.5.1
+PGO_VERSION ?= 4.6.0-beta.1
PGO_PG_VERSION ?= 12
PGO_PG_FULLVERSION ?= 13.1
PGO_BACKREST_VERSION ?= 2.31
diff --git a/README.md b/README.md
index 784fad181e..8051906430 100644
--- a/README.md
+++ b/README.md
@@ -131,14 +131,18 @@ The PostgreSQL Operator is developed and tested on CentOS and RHEL linux platfor
### Supported Platforms
-The Crunchy PostgreSQL Operator is tested on the following Platforms:
+The Crunchy PostgreSQL Operator maintains backwards compatibility to Kubernetes 1.11 and is tested against the following platforms:
-- Kubernetes 1.13+
-- OpenShift 3.11+
+- Kubernetes 1.17+
+- OpenShift 4.4+
+- OpenShift 3.11
- Google Kubernetes Engine (GKE), including Anthos
- Amazon EKS
+- Microsoft AKS
- VMware Enterprise PKS 1.3+
+This list only includes the platforms that the PostgreSQL Operator is specifically tested on as part of the release process: the PostgreSQL Operator works on other Kubernetes distributions as well.
+
### Storage
The Crunchy PostgreSQL Operator is tested with a variety of different types of Kubernetes storage and Storage Classes, including:
diff --git a/bin/push-ccp-to-gcr.sh b/bin/push-ccp-to-gcr.sh
index f1565f0dfb..a235fecf59 100755
--- a/bin/push-ccp-to-gcr.sh
+++ b/bin/push-ccp-to-gcr.sh
@@ -16,7 +16,7 @@
GCR_IMAGE_PREFIX=gcr.io/crunchy-dev-test
CCP_IMAGE_PREFIX=crunchydata
-CCP_IMAGE_TAG=centos8-13.1-4.5.1
+CCP_IMAGE_TAG=centos8-13.1-4.6.0-beta.1
IMAGES=(
crunchy-prometheus
diff --git a/conf/postgres-operator/pgo.yaml b/conf/postgres-operator/pgo.yaml
index 0b90394354..2b0de113bf 100644
--- a/conf/postgres-operator/pgo.yaml
+++ b/conf/postgres-operator/pgo.yaml
@@ -2,7 +2,7 @@ Cluster:
CCPImagePrefix: registry.developers.crunchydata.com/crunchydata
Metrics: false
Badger: false
- CCPImageTag: centos8-13.1-4.5.1
+ CCPImageTag: centos8-13.1-4.6.0-beta.1
Port: 5432
PGBadgerPort: 10000
ExporterPort: 9187
@@ -81,4 +81,4 @@ Storage:
Pgo:
Audit: false
PGOImagePrefix: registry.developers.crunchydata.com/crunchydata
- PGOImageTag: centos8-4.5.1
+ PGOImageTag: centos8-4.6.0-beta.1
diff --git a/docs/config.toml b/docs/config.toml
index cafb971a4d..3a59d4523e 100644
--- a/docs/config.toml
+++ b/docs/config.toml
@@ -25,7 +25,7 @@ disableNavChevron = false # set true to hide next/prev chevron, default is false
highlightClientSide = false # set true to use highlight.pack.js instead of the default hugo chroma highlighter
menushortcutsnewtab = true # set true to open shortcuts links to a new tab/window
enableGitInfo = true
-operatorVersion = "4.5.1"
+operatorVersion = "4.6.0-beta.1"
postgresVersion = "13.1"
postgresVersion13 = "13.1"
postgresVersion12 = "12.5"
diff --git a/docs/content/Configuration/compatibility.md b/docs/content/Configuration/compatibility.md
index 43af7021db..b28901568f 100644
--- a/docs/content/Configuration/compatibility.md
+++ b/docs/content/Configuration/compatibility.md
@@ -12,6 +12,12 @@ version dependencies between the two projects. Below are the operator releases a
| Operator Release | Container Release | Postgres | PgBackrest Version
|:----------|:-------------|:------------|:--------------
+| 4.6.0 | 4.6.0 | 13.1 | 2.31 |
+|||12.5|2.31|
+|||11.10|2.31|
+|||10.15|2.31|
+|||9.6.20|2.31|
+||||
| 4.5.1 | 4.5.1 | 13.1 | 2.29 |
|||12.5|2.29|
|||11.10|2.29|
diff --git a/docs/content/_index.md b/docs/content/_index.md
index 96f6807560..50fbc10943 100644
--- a/docs/content/_index.md
+++ b/docs/content/_index.md
@@ -140,14 +140,17 @@ For more information about which versions of the PostgreSQL Operator include whi
# Supported Platforms
-The Crunchy PostgreSQL Operator is tested on the following Platforms:
+The Crunchy PostgreSQL Operator maintains backwards compatibility to Kubernetes 1.11 and is tested against the following platforms:
-- Kubernetes 1.13+
-- OpenShift 3.11+
+- Kubernetes 1.17+
+- OpenShift 4.4+
+- OpenShift 3.11
- Google Kubernetes Engine (GKE), including Anthos
- Amazon EKS
+- Microsoft AKS
- VMware Enterprise PKS 1.3+
+This list only includes the platforms that the PostgreSQL Operator is specifically tested on as part of the release process: the PostgreSQL Operator works on other Kubernetes distributions as well.
## Storage
The Crunchy PostgreSQL Operator is tested with a variety of different types of Kubernetes storage and Storage Classes, including:
diff --git a/docs/content/releases/4.6.0.md b/docs/content/releases/4.6.0.md
new file mode 100644
index 0000000000..d7719433cf
--- /dev/null
+++ b/docs/content/releases/4.6.0.md
@@ -0,0 +1,233 @@
+---
+title: "4.6.0"
+date:
+draft: false
+weight: 60
+---
+
+Crunchy Data announces the release of the PostgreSQL Operator 4.6.0 on January DD, 2021. You can get started with the PostgreSQL Operator with the following commands:
+
+```
+kubectl create namespace pgo
+kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.6.0-beta.1/installers/kubectl/postgres-operator.yml
+```
+
+The PostgreSQL Operator is released in conjunction with the [Crunchy Container Suite](https://github.com/CrunchyData/crunchy-containers/).
+
+The PostgreSQL Operator 4.6.0 release includes the following software versions upgrades:
+
+- [pgBackRest](https://pgbackrest.org/) is now at version 2.31.
+- [pgnodemx](https://github.com/CrunchyData/pgnodemx) is now at version 1.0.3
+- [Patroni](https://patroni.readthedocs.io/) is now at version 2.0.1
+
+The monitoring stack for the PostgreSQL Operator uses upstream components as opposed to repackaging them. These are specified as part of the [PostgreSQL Operator Installer](https://access.crunchydata.com/documentation/postgres-operator/latest/installation/postgres-operator/). We have tested this release with the following versions of each component:
+
+- Prometheus: 2.23.0
+- Grafana: 6.7.5
+- Alertmanager: 0.21.0
+
+This release of the PostgreSQL Operator drops support for PostgreSQL 9.5, which goes EOL in February 2021.
+
+PostgreSQL Operator is tested against Kubernetes 1.17 - 1.20, OpenShift 3.11, OpenShift 4.4+, Google Kubernetes Engine (GKE), Amazon EKS, Microsoft AKS, and VMware Enterprise PKS 1.3+, and works on other Kubernetes distributions as well.
+
+## Major Features
+
+### Rolling Updates
+
+During the lifecycle of a PostgreSQL cluster, there are certain events that may require a planned restart, such as an update to a "restart required" PostgreSQL configuration setting (e.g. [`shared_buffers`](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-SHARED-BUFFERS)) or a change to a Kubernetes Deployment template (e.g. [changing the memory request](https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/customize-cluster/#customize-cpu-memory)). Restarts can be disruptive in a high availability deployment, which is why many setups employ a ["rolling update" strategy](https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/) (aka a "rolling restart") to minimize or eliminate downtime during a planned restart.
+
+Because PostgreSQL is a stateful application, a simple rolling restart strategy will not work: PostgreSQL needs to ensure that there is a primary available that can accept reads and writes. This requires following a method that will minimize the amount of downtime when the primary is taken offline for a restart.
+
+This release introduces a mechanism for the PostgreSQL Operator to perform rolling updates implicitly on certain operations that change the Deployment templates and explicitly through the [`pgo restart`](https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_restart/) command with the `--rolling` flag. Some of the operations that will trigger a rolling update include:
+
+- Memory resource adjustments
+- CPU resource adjustments
+- Custom annotation changes
+- Tablespace additions
+- Adding/removing the metrics sidecar to a PostgreSQL cluster
+
+Please reference the [rolling updates documentation](https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/#rolling-updates) for more details.
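+
+For example, an explicit rolling restart of the `hippo` cluster can be requested with:
+
+```
+pgo restart hippo --rolling
+```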
+
+### Pod Tolerations
+
+Kubernetes [Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) can help with the scheduling of Pods to appropriate Nodes based upon the taint values of said Nodes. For example, a Kubernetes administrator may set taints on Nodes to restrict scheduling to just the database workload, and as such, tolerations must be assigned to Pods to ensure they can actually be scheduled on those nodes.
+
+This release introduces the ability to assign tolerations to PostgreSQL clusters managed by the PostgreSQL Operator. Tolerations can be assigned to every instance in the cluster via the `tolerations` attribute on a `pgclusters.crunchydata.com` custom resource, or to individual instances using the `tolerations` attribute on a `pgreplicas.crunchydata.com` custom resource.
+
+Both the [`pgo create cluster`](https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_create_cluster/) and [`pgo scale`](https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_scale/) commands support the `--toleration` flag, which can be used to add one or more tolerations to a cluster. Values accepted by the `--toleration` flag use the following format:
+
+```
+rule:Effect
+```
+
+where a `rule` can represent existence (e.g. `key`) or equality (`key=value`) and `Effect` is one of `NoSchedule`, `PreferNoSchedule`, or `NoExecute`, e.g.:
+
+```
+pgo create cluster hippo \
+ --toleration=ssd:NoSchedule \
+ --toleration=zone=east:NoSchedule
+```
+
+Tolerations can also be added and removed from an existing cluster using the [`pgo update cluster`](https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_update_cluster/) command, e.g.:
+
+```
+pgo update cluster hippo \
+ --toleration=zone=west:NoSchedule \
+ --toleration=zone=east:NoSchedule-
+```
+
+or by modifying the `pgclusters.crunchydata.com` custom resource directly.
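+
+As an illustrative sketch, the `tolerations` attribute on a `pgclusters.crunchydata.com` custom resource uses the standard Kubernetes toleration schema (the key and value below are examples):
+
+```
+spec:
+  tolerations:
+  - key: zone
+    operator: Equal
+    value: east
+    effect: NoSchedule
+```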
+
+For more information on how tolerations work, please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).
+
+### Node Affinity Enhancements
+
+Node affinity has been a feature of the PostgreSQL Operator for a long time but has received some significant improvements in this release.
+
+It is now possible to control the node affinity across an entire PostgreSQL cluster as well as individual PostgreSQL instances from a custom resource attribute on the `pgclusters.crunchydata.com` and `pgreplicas.crunchydata.com` CRDs. These attributes use the standard [Kubernetes specifications for node affinity](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) and should be familiar to users who have had to set this in applications.
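+
+For reference, the standard Kubernetes node affinity specification that these attributes build on looks like the following (key and values are illustrative):
+
+```
+requiredDuringSchedulingIgnoredDuringExecution:
+  nodeSelectorTerms:
+  - matchExpressions:
+    - key: topology.kubernetes.io/zone
+      operator: In
+      values:
+      - east
+```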
+
+Additionally, this release adds support for both "preferred" and "required" node affinity definitions. Previously, one could achieve required node affinity by modifying a template in the `pgo-config` ConfigMap, but this release makes this process more straightforward.
+
+This release introduces the `--node-affinity-type` flag for the `pgo create cluster`, `pgo scale`, and `pgo restore` commands that allows one to specify the node affinity type for PostgreSQL clusters and instances. The `--node-affinity-type` flag accepts values of `preferred` (default) and `required`. Each instance in a PostgreSQL cluster will inherit its node affinity type from the cluster (`pgo create cluster`) itself, but the type of an individual instance (`pgo scale`) will supersede that value.
+
+The `--node-affinity-type` flag must be combined with the `--node-label` flag.
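+
+For example, a hypothetical invocation that combines the two flags (the node label key and value are illustrative):
+
+```
+pgo create cluster hippo \
+  --node-label=topology.kubernetes.io/zone=east \
+  --node-affinity-type=required
+```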
+
+### TLS for pgBouncer
+
+Since 4.3.0, the PostgreSQL Operator has had support for [TLS connections to PostgreSQL clusters](https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/tls/) and an [improved integration with pgBouncer](https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/pgbouncer/), used for connection pooling and state management. However, the integration with pgBouncer did not support TLS directly: it could be achieved through modifying the pgBouncer Deployment template.
+
+This release brings TLS support for pgBouncer to the PostgreSQL Operator, allowing for communication over TLS between a client and pgBouncer, and between pgBouncer and a PostgreSQL server. In other words, the following is now supported:
+
+`Client` <= TLS => `pgBouncer` <= TLS => `PostgreSQL`
+
+To use TLS with pgBouncer, all connections from a client to pgBouncer and from pgBouncer to PostgreSQL **must** be over TLS. Effectively, this is "TLS only" mode when connecting via pgBouncer.
+
+In order to deploy pgBouncer with TLS, the following preconditions must be met:
+
+- TLS **MUST** be enabled within the PostgreSQL cluster.
+- pgBouncer and the PostgreSQL cluster **MUST** share the same certificate authority (CA) bundle.
+
+You must have a [Kubernetes TLS Secret](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) containing the TLS keypair you would like to use for pgBouncer.
+
+You can enable TLS for pgBouncer using the following commands:
+
+- `pgo create pgbouncer --tls-secret`, where `--tls-secret` specifies the location of the TLS keypair to use for pgBouncer. You **must** already have TLS enabled in your PostgreSQL cluster.
+- `pgo create cluster --pgbouncer --pgbouncer-tls-secret`, where `--pgbouncer-tls-secret` specifies the location of the TLS keypair to use for pgBouncer. You **must** also specify `--server-tls-secret` and `--server-ca-secret`.
+
+This adds an attribute to the `pgclusters.crunchydata.com` Custom Resource Definition in the `pgBouncer` section called `tlsSecret`, which stores the name of the TLS Secret to use for pgBouncer.
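+
+As a sketch, the relevant portion of a `pgclusters.crunchydata.com` spec might look like the following, assuming a TLS Secret named `hippo-pgbouncer-tls` (the Secret name is hypothetical; the `pgBouncer` section and `tlsSecret` attribute names come from this release):
+
+```
+pgBouncer:
+  tlsSecret: hippo-pgbouncer-tls
+```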
+
+By default, connections coming into pgBouncer have a [PostgreSQL SSL mode](https://www.postgresql.org/docs/current/libpq-ssl.html#LIBPQ-SSL-PROTECTION) of `require`, while connections from pgBouncer into PostgreSQL use `verify-ca`.
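+
+For example, a client connecting through pgBouncer could use a standard libpq connection string such as the following, consistent with the `require` SSL mode above (the hostname and database name are hypothetical):
+
+```
+psql "host=hippo-pgbouncer port=5432 dbname=hippo sslmode=require"
+```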
+
+### Enable/Disable Metrics Collection for PostgreSQL Cluster
+
+A common case is that one creates a PostgreSQL cluster with the Postgres Operator but forgets to enable monitoring with the `--metrics` flag. Prior to this release, adding the `crunchy-postgres-exporter` to an already running PostgreSQL cluster presented challenges.
+
+This release introduces the `--enable-metrics` and `--disable-metrics` flags for [`pgo update cluster`](https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_update_cluster/), which allow monitoring to be enabled or disabled on an already running PostgreSQL cluster. As this involves modifying Deployment templates, this action triggers a rolling update, as described in the previous section, to limit downtime.
+
+Metrics can also be enabled/disabled using the `exporter` attribute on the `pgclusters.crunchydata.com` custom resource.
+
+This release also changes the management of the PostgreSQL user that is used to collect the metrics. Similar to [pgBouncer](https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/pgbouncer/), the PostgreSQL Operator fully manages the credentials for the metrics collection user. The `--exporter-rotate-password` flag on [`pgo update cluster`](https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_update_cluster/) can be used to rotate the metric collection user's credentials.
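+
+For example, using the flags described above (the cluster name is illustrative):
+
+```
+pgo update cluster hippo --enable-metrics
+pgo update cluster hippo --exporter-rotate-password
+```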
+
+## Container Image Reduction & Reorganization
+
+Advances in Postgres Operator functionality have allowed for a culling of the number of required container images. For example, functionality that had been broken out into individual container images (e.g. `crunchy-pgdump`) is now consolidated within the `crunchy-postgres` and `crunchy-postgres-ha` containers.
+
+Renamed container images include:
+
+- `pgo-backrest` => `crunchy-pgbackrest`
+- `pgo-backrest-repo` => `crunchy-pgbackrest-repo`
+
+Removed container images include:
+
+- `crunchy-admin`
+- `crunchy-backrest-restore`
+- `crunchy-backup`
+- `crunchy-pgbench`
+- `crunchy-pgdump`
+- `crunchy-pgrestore`
+- `pgo-sqlrunner`
+- `pgo-backrest-repo-sync`
+- `pgo-backrest-restore`
+
+These changes also include overall organization and build performance optimizations around the container suite.
+
+## Breaking Changes
+
+- [Metrics collection](https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/monitoring/) can now be enabled/disabled using the `exporter` attribute on `pgclusters.crunchydata.com`. The previous method to do so, involving a label buried within a custom resource, no longer works.
+- pgBadger can now be enabled/disabled using the `pgBadger` attribute on `pgclusters.crunchydata.com`. The previous method to do so, involving a label buried within a custom resource, no longer works.
+- Several additional labels on the `pgclusters.crunchydata.com` CRD that had driven behavior have been moved to attributes. These include:
+ - `autofail`, which is now represented by the `disableAutofail` attribute.
+ - `service-type`, which is now represented by the `serviceType` attribute.
+ - `NodeLabelKey`/`NodeLabelValue`, which is now replaced by the `nodeAffinity` attribute.
+ - `backrest-storage-type`, which is now represented with the `backrestStorageTypes` attribute.
+The `pgo upgrade` command will properly move any data you have in these labels into the correct attributes. You can read more about how to use the various CRD attributes in the [Custom Resources](https://access.crunchydata.com/documentation/postgres-operator/latest/custom-resources/) section of the documentation.
+- The `rootsecretname`, `primarysecretname`, and `usersecretname` attributes on the `pgclusters.crunchydata.com` CRD have been removed. Each of these represented managed Secrets. Additionally, if the managed Secrets are not created at cluster creation time, the Operator will now generate these Secrets.
+- The `collectSecretName` attribute on `pgclusters.crunchydata.com` has been removed. The Secret for the metrics collection user is now fully managed by the PostgreSQL Operator.
+- There are changes to the `exporter.json` and `cluster-deployment.json` templates that reside within the `pgo-config` ConfigMap that could be breaking to those who have customized those templates. This includes removing the opening comma in `exporter.json` and removing unneeded match labels on the PostgreSQL cluster Deployment. This is resolved by following the [standard upgrade procedure](https://access.crunchydata.com/documentation/postgres-operator/latest/upgrade/), and only affects new clusters and existing clusters that wish to use the enable/disable metrics collection feature.
+- The `affinity.json` entry in the `pgo-config` ConfigMap has been removed in favor of the updated node affinity support.
+- Failovers can no longer be controlled by creating a `pgtasks.crunchydata.com` custom resource.
+- Remove the `PgMonitorPassword` attribute from `pgo-deployer`. The metric collection user password is managed by the PostgreSQL Operator.
+- Policy creation only supports the method of creating the policy from a file/ConfigMap.
+- Any pgBackRest variables of the format `PGBACKREST_REPO_` now follow the format `PGBACKREST_REPO1_` to be consistent with what pgBackRest expects.
+
+## Features
+
+- [Monitoring](https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/monitoring/) can now be enabled/disabled during the lifetime of a PostgreSQL cluster using the `pgo update --enable-metrics` and `pgo update --disable-metrics` flag. This can also be modified directly on a custom resource.
+- The Service Type of a PostgreSQL cluster can now be updated during the lifetime of a cluster with `pgo update cluster --service-type`. This can also be modified directly on a custom resource.
+- The Service Type of pgBouncer can now be independently controlled and set with the `--service-type` flag on `pgo create pgbouncer` and `pgo update pgbouncer`. This can also be modified directly on a custom resource.
+- [pgBackRest delta restores](https://pgbackrest.org/user-guide.html#restore/option-delta), which can efficiently restore data as it determines which specific files need to be restored from backup, can now be used as part of the cluster creation method with `pgo create cluster --restore-from`. For example, if a cluster is deleted as such:
+
+```
+pgo delete cluster hippo --keep-data --keep-backups
+```
+
+It can subsequently be recreated using the delta restore method as such:
+
+```
+pgo create cluster hippo --restore-from=hippo
+```
+
+Passing in the [`--process-max`](https://pgbackrest.org/command.html#command-archive-get/category-general/option-process-max) option to `--restore-opts` can help speed up the restore process based upon the amount of CPU you have available. If the delta restore fails, the PostgreSQL Operator will attempt to perform a full restore.
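+
+For example, a hypothetical recreation that passes `--process-max` through `--restore-opts`:
+
+```
+pgo create cluster hippo \
+  --restore-from=hippo \
+  --restore-opts="--process-max=4"
+```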
+
+- `pgo restore` will now first attempt a [pgBackRest delta restore](https://pgbackrest.org/user-guide.html#restore/option-delta), which can significantly speed up the restore time for large databases. Passing in the [`--process-max`](https://pgbackrest.org/command.html#command-archive-get/category-general/option-process-max) option to `--backup-opts` can help speed up the restore process based upon the amount of CPU you have available.
+- A pgBackRest backup can now be deleted with `pgo delete backup`. A backup name must be specified with the `--target` flag. Please refer to the [documentation](https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/disaster-recovery/#deleting-a-backup) for how to use this command.
+- pgBadger can now be enabled/disabled during the lifetime of a PostgreSQL cluster using the `pgo update --enable-pgbadger` and `pgo update --disable-pgbadger` flag. This can also be modified directly on a custom resource.
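+
+As a sketch, deleting a backup with `pgo delete backup` might look like the following, where the `--target` value is a pgBackRest backup label (the label shown is illustrative):
+
+```
+pgo delete backup hippo --target=20201027-143052F
+```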
+
+## Changes
+
+- If not provided at installation time, the Operator will now generate its own `pgo-backrest-repo-config` Secret.
+- The `local` storage type option for pgBackRest is deprecated in favor of `posix`, which matches the pgBackRest term. `local` will still continue to work for backwards compatibility purposes.
+- PostgreSQL clusters using multi-repository (e.g. `posix` + `s3` at the same time) archiving will now, by default, take backups to both repositories when `pgo backup` is used without additional options.
+- If not provided at cluster creation time, the Operator will now generate the PostgreSQL user Secrets required for bootstrap, including the superuser (`postgres`), the replication user (`primaryuser`), and the standard user.
+- `crunchy-postgres-exporter` now exposes several pgMonitor metrics related to `pg_stat_statements`.
+- When using the `--restore-from` option on `pgo create cluster` to create a new PostgreSQL cluster, the cluster bootstrap Job is now automatically removed if it completes successfully.
+- The `pgo failover` command now works without specifying a target: the candidate to fail over to will be automatically selected.
+- For clusters that have no healthy instances, `pgo failover` can now force a promotion using the `--force` flag. A `--target` flag must also be specified when using `--force`.
+- If a predefined custom ConfigMap for a PostgreSQL cluster (`-pgha-config`) is detected at bootstrap time, the Operator will ensure it properly initializes the cluster.
+- PostgreSQL JIT compilation is explicitly disabled on new cluster creation. This prevents a memory leak that has been observed on queries coming from the metrics exporter.
+- The credentials for the metrics collection user are now available with `pgo show user --show-system-accounts`.
+- The default user for executing scheduled SQL policies is now the Postgres superuser, instead of the replication user.
+- Add the `--no-prompt` flag to `pgo upgrade`. The mechanism to disable the prompt verification was already in place, but the flag was not exposed. Reported by (@devopsevd).
+- Remove certain characters that cause issues in shell environments from consideration by the random password generator, which is used to create default passwords or with `--rotate-password`.
+- Remove the long deprecated `archivestorage` attribute from the `pgclusters.crunchydata.com` custom resource definition. As this attribute is not used at all, this should have no effect.
+- The `ArchiveMode` parameter is now removed from the configuration. This had been fully deprecated for a while.
+- New PostgreSQL Operator deployments will now generate ECDSA keys (P-256, SHA384) for use by the API server.
+
+## Fixes
+
+- Ensure custom annotations are applied if the annotations are supposed to be applied globally but the cluster does not have a pgBouncer Deployment.
+- Fix issue with UBI 8 / CentOS 8 when running a pgBackRest bootstrap or restore Job, where duplicate "repo types" could be set. Specifically, this ensures the name of the repository type is set via the `PGBACKREST_REPO1_TYPE` environmental variable. Reported by Alec Rooney (@alrooney).
+- Fix issue where `pgo test` would indicate every Service was a replica if the cluster name contained the word `replica` in it. Reported by Jose Joye (@jose-joye).
+- Do not consider Evicted Pods as part of `pgo test`. This eliminates a behavior where faux primaries are considered as part of `pgo test`. Reported by Dennis Jacobfeuerborn (@dennisjac).
+- Fix `pgo df` to not fail in the event it tries to execute a command within a dangling container from the bootstrap process when `pgo create cluster --restore-from` is used. Reported by Ignacio J. Ortega (@IJOL).
+- `pgo df` will now only attempt to execute in running Pods, i.e. it does not attempt to run in evicted Pods. Reported by (@kseswar).
+- Ensure the sync replication ConfigMap is removed when a cluster is deleted.
+- Fix crash in shutdown logic when attempting to shut down a cluster where no primaries exist. Reported by Jeffrey den Drijver (@JeffreyDD).
+- Fix syntax in recovery check command which could lead to failures when manually promoting a standby cluster. Reported by (@SockenSalat).
+- Fix potential race condition that could lead to a crash in the Operator boot when an error is issued around loading the `pgo-config` ConfigMap. Reported by Aleksander Roszig (@AleksanderRoszig).
+- Do not trigger a backup if a standby cluster fails over. Reported by (@aprilito1965).
+- Remove legacy `defaultMode` setting on the volume instructions for the pgBackRest repo Secret as the `readOnly` setting is used on the mount itself. Reported by (@szhang1).
+- The logger no longer defaults to using a log level of `DEBUG`.
+- Autofailover is no longer disabled when an `rmdata` Job is run, enabling a clean database shutdown process when deleting a PostgreSQL cluster.
+- Major upgrade container now includes references for `pgnodemx`.
+- During a major upgrade, ensure permissions are correct on the old data directory before running `pg_upgrade`.
diff --git a/examples/create-by-resource/fromcrd.json b/examples/create-by-resource/fromcrd.json
index aca1a509c1..dc399d2fd0 100644
--- a/examples/create-by-resource/fromcrd.json
+++ b/examples/create-by-resource/fromcrd.json
@@ -10,7 +10,7 @@
"deployment-name": "fromcrd",
"name": "fromcrd",
"pg-cluster": "fromcrd",
- "pgo-version": "4.5.1",
+ "pgo-version": "4.6.0-beta.1",
"pgouser": "pgoadmin"
},
"name": "fromcrd",
@@ -45,7 +45,7 @@
"supplementalgroups": ""
},
"ccpimage": "crunchy-postgres-ha",
- "ccpimagetag": "centos8-13.1-4.5.1",
+ "ccpimagetag": "centos8-13.1-4.6.0-beta.1",
"clustername": "fromcrd",
"database": "userdb",
"exporterport": "9187",
@@ -60,7 +60,7 @@
"port": "5432",
"user": "testuser",
"userlabels": {
- "pgo-version": "4.5.1"
+ "pgo-version": "4.6.0-beta.1"
}
}
}
diff --git a/examples/envs.sh b/examples/envs.sh
index b824de2eeb..1811e2d5f7 100644
--- a/examples/envs.sh
+++ b/examples/envs.sh
@@ -20,7 +20,7 @@ export PGO_CONF_DIR=$PGOROOT/installers/ansible/roles/pgo-operator/files
# the version of the Operator you run is set by these vars
export PGO_IMAGE_PREFIX=registry.developers.crunchydata.com/crunchydata
export PGO_BASEOS=centos8
-export PGO_VERSION=4.5.1
+export PGO_VERSION=4.6.0-beta.1
export PGO_IMAGE_TAG=$PGO_BASEOS-$PGO_VERSION
# for setting the pgo apiserver port, disabling TLS or not verifying TLS
diff --git a/examples/helm/create-cluster/templates/pgcluster.yaml b/examples/helm/create-cluster/templates/pgcluster.yaml
index 9d1036581d..d137aeef5b 100644
--- a/examples/helm/create-cluster/templates/pgcluster.yaml
+++ b/examples/helm/create-cluster/templates/pgcluster.yaml
@@ -10,7 +10,7 @@ metadata:
deployment-name: {{ .Values.pgclustername }}
name: {{ .Values.pgclustername }}
pg-cluster: {{ .Values.pgclustername }}
- pgo-version: 4.5.1
+ pgo-version: 4.6.0-beta.1
pgouser: admin
name: {{ .Values.pgclustername }}
namespace: {{ .Values.namespace }}
diff --git a/examples/helm/create-cluster/values.yaml b/examples/helm/create-cluster/values.yaml
index ce2abbf974..af38177a81 100644
--- a/examples/helm/create-cluster/values.yaml
+++ b/examples/helm/create-cluster/values.yaml
@@ -4,11 +4,11 @@
# The values is for the namespace and the postgresql cluster name
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
-ccpimagetag: centos8-13.1-4.5.1
+ccpimagetag: centos8-13.1-4.6.0-beta.1
namespace: pgo
pgclustername: hippo
pgoimageprefix: registry.developers.crunchydata.com/crunchydata
-pgoversion: 4.5.1
+pgoversion: 4.6.0-beta.1
hipposecretuser: "hippo"
hipposecretpassword: "Supersecurepassword*"
postgressecretuser: "postgres"
diff --git a/examples/kustomize/createcluster/README.md b/examples/kustomize/createcluster/README.md
index 6eb934577b..1b03c207e5 100644
--- a/examples/kustomize/createcluster/README.md
+++ b/examples/kustomize/createcluster/README.md
@@ -44,13 +44,13 @@ pgo show cluster hippo -n pgo
```
You will see something like this if successful:
```
-cluster : hippo (crunchy-postgres-ha:centos8-13.1-4.5.1)
+cluster : hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.1)
pod : hippo-8fb6bd96-j87wq (Running) on gke-xxxx-default-pool-38e946bd-257w (1/1) (primary)
pvc: hippo (1Gi)
deployment : hippo
deployment : hippo-backrest-shared-repo
service : hippo - ClusterIP (10.0.56.86) - Ports (2022/TCP, 5432/TCP)
- labels : pgo-version=4.5.1 name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata
+ labels : pgo-version=4.6.0-beta.1 name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata
```
Feel free to run other pgo cli commands on the hippo cluster
@@ -79,7 +79,7 @@ pgo show cluster dev-hippo -n pgo
```
You will see something like this if successful:
```
-cluster : dev-hippo (crunchy-postgres-ha:centos8-13.1-4.5.1)
+cluster : dev-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.1)
pod : dev-hippo-588d4cb746-bwrxb (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
pvc: dev-hippo (1Gi)
deployment : dev-hippo
@@ -87,7 +87,7 @@ cluster : dev-hippo (crunchy-postgres-ha:centos8-13.1-4.5.1)
deployment : dev-hippo-pgbouncer
service : dev-hippo - ClusterIP (10.0.62.87) - Ports (2022/TCP, 5432/TCP)
service : dev-hippo-pgbouncer - ClusterIP (10.0.48.120) - Ports (5432/TCP)
- labels : crunchy-pgha-scope=dev-hippo name=dev-hippo pg-cluster=dev-hippo vendor=crunchydata deployment-name=dev-hippo environment=development pgo-version=4.5.1 pgouser=admin
+ labels : crunchy-pgha-scope=dev-hippo name=dev-hippo pg-cluster=dev-hippo vendor=crunchydata deployment-name=dev-hippo environment=development pgo-version=4.6.0-beta.1 pgouser=admin
```
#### staging
The staging overlay will deploy a crunchy postgreSQL cluster with 2 replica's with annotations added
@@ -113,7 +113,7 @@ pgo show cluster staging-hippo -n pgo
```
You will see something like this if successful, (Notice one of the replicas is a different size):
```
-cluster : staging-hippo (crunchy-postgres-ha:centos8-13.1-4.5.1)
+cluster : staging-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.1)
pod : staging-hippo-85cf6dcb65-9h748 (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
pvc: staging-hippo (1Gi)
pod : staging-hippo-lnxw-cf47d8c8b-6r4wn (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (replica)
@@ -128,7 +128,7 @@ cluster : staging-hippo (crunchy-postgres-ha:centos8-13.1-4.5.1)
service : staging-hippo-replica - ClusterIP (10.0.56.57) - Ports (2022/TCP, 5432/TCP)
pgreplica : staging-hippo-lnxw
pgreplica : staging-hippo-rpl1
- labels : deployment-name=staging-hippo environment=staging name=staging-hippo crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-version=4.5.1 pgouser=admin vendor=crunchydata
+ labels : deployment-name=staging-hippo environment=staging name=staging-hippo crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-version=4.6.0-beta.1 pgouser=admin vendor=crunchydata
```
#### production
@@ -154,7 +154,7 @@ pgo show cluster prod-hippo -n pgo
```
You will see something like this if successful, (Notice one of the replicas is a different size):
```
-cluster : prod-hippo (crunchy-postgres-ha:centos8-13.1-4.5.1)
+cluster : prod-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.1)
pod : prod-hippo-5d6dd46497-rr67c (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (primary)
pvc: prod-hippo (1Gi)
pod : prod-hippo-flty-84d97c8769-2pzbh (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (replica)
@@ -165,7 +165,7 @@ cluster : prod-hippo (crunchy-postgres-ha:centos8-13.1-4.5.1)
service : prod-hippo - ClusterIP (10.0.56.18) - Ports (2022/TCP, 5432/TCP)
service : prod-hippo-replica - ClusterIP (10.0.56.101) - Ports (2022/TCP, 5432/TCP)
pgreplica : prod-hippo-flty
- labels : pgo-version=4.5.1 deployment-name=prod-hippo environment=production pg-cluster=prod-hippo crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
+ labels : pgo-version=4.6.0-beta.1 deployment-name=prod-hippo environment=production pg-cluster=prod-hippo crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
```
### Delete the clusters
To delete the clusters run the following pgo cli commands
diff --git a/examples/kustomize/createcluster/base/pgcluster.yaml b/examples/kustomize/createcluster/base/pgcluster.yaml
index f070683e08..91c0ad713c 100644
--- a/examples/kustomize/createcluster/base/pgcluster.yaml
+++ b/examples/kustomize/createcluster/base/pgcluster.yaml
@@ -10,7 +10,7 @@ metadata:
deployment-name: hippo
name: hippo
pg-cluster: hippo
- pgo-version: 4.5.1
+ pgo-version: 4.6.0-beta.1
pgouser: admin
name: hippo
namespace: pgo
@@ -42,7 +42,7 @@ spec:
annotations: {}
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
- ccpimagetag: centos8-13.1-4.5.1
+ ccpimagetag: centos8-13.1-4.6.0-beta.1
clustername: hippo
customconfig: ""
database: hippo
@@ -63,4 +63,4 @@ spec:
port: "5432"
user: hippo
userlabels:
- pgo-version: 4.5.1
+ pgo-version: 4.6.0-beta.1
diff --git a/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
index 253995eb69..c371817730 100644
--- a/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
+++ b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
@@ -20,4 +20,4 @@ spec:
storagetype: dynamic
supplementalgroups: ""
userlabels:
- pgo-version: 4.5.1
+ pgo-version: 4.6.0-beta.1
diff --git a/installers/ansible/README.md b/installers/ansible/README.md
index 345d035037..b82f698d2a 100644
--- a/installers/ansible/README.md
+++ b/installers/ansible/README.md
@@ -4,7 +4,7 @@
-Latest Release: 4.5.1
+Latest Release: 4.6.0-beta.1
## General
diff --git a/installers/ansible/values.yaml b/installers/ansible/values.yaml
index 2675ac8eb6..c8ee2b641a 100644
--- a/installers/ansible/values.yaml
+++ b/installers/ansible/values.yaml
@@ -17,7 +17,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos8-13.1-4.5.1"
+ccp_image_tag: "centos8-13.1-4.6.0-beta.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -50,14 +50,14 @@ pgo_apiserver_url: "https://postgres-operator"
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
-pgo_client_version: "4.5.1"
+pgo_client_version: "4.6.0-beta.1"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
-pgo_image_tag: "centos8-4.5.1"
+pgo_image_tag: "centos8-4.6.0-beta.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
diff --git a/installers/gcp-marketplace/Makefile b/installers/gcp-marketplace/Makefile
index 6236ae3ad8..ac2e205f8b 100644
--- a/installers/gcp-marketplace/Makefile
+++ b/installers/gcp-marketplace/Makefile
@@ -6,7 +6,7 @@ MARKETPLACE_TOOLS ?= gcr.io/cloud-marketplace-tools/k8s/dev:$(MARKETPLACE_VERSIO
MARKETPLACE_VERSION ?= 0.9.4
KUBECONFIG ?= $(HOME)/.kube/config
PARAMETERS ?= {}
-PGO_VERSION ?= 4.5.1
+PGO_VERSION ?= 4.6.0-beta.1
IMAGE_BUILD_ARGS = --build-arg MARKETPLACE_VERSION='$(MARKETPLACE_VERSION)' \
--build-arg PGO_VERSION='$(PGO_VERSION)'
diff --git a/installers/gcp-marketplace/README.md b/installers/gcp-marketplace/README.md
index af2e60f80c..cd71a9b719 100644
--- a/installers/gcp-marketplace/README.md
+++ b/installers/gcp-marketplace/README.md
@@ -59,7 +59,7 @@ Google Cloud Marketplace.
```shell
IMAGE_REPOSITORY=gcr.io/crunchydata-public/postgres-operator
- export PGO_VERSION=4.5.1
+ export PGO_VERSION=4.6.0-beta.1
export INSTALLER_IMAGE=${IMAGE_REPOSITORY}/deployer:${PGO_VERSION}
export OPERATOR_IMAGE=${IMAGE_REPOSITORY}:${PGO_VERSION}
export OPERATOR_IMAGE_API=${IMAGE_REPOSITORY}/pgo-apiserver:${PGO_VERSION}
diff --git a/installers/gcp-marketplace/values.yaml b/installers/gcp-marketplace/values.yaml
index a90d892520..3be2c2e119 100644
--- a/installers/gcp-marketplace/values.yaml
+++ b/installers/gcp-marketplace/values.yaml
@@ -10,7 +10,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos8-13.1-4.5.1"
+ccp_image_tag: "centos8-13.1-4.6.0-beta.1"
create_rbac: "true"
db_name: ""
db_password_age_days: "0"
@@ -32,9 +32,9 @@ pgo_admin_role_name: "pgoadmin"
pgo_admin_username: "admin"
pgo_client_container_install: "false"
pgo_client_install: 'false'
-pgo_client_version: "4.5.1"
+pgo_client_version: "4.6.0-beta.1"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos8-4.5.1"
+pgo_image_tag: "centos8-4.6.0-beta.1"
pgo_installation_name: '${OPERATOR_NAME}'
pgo_operator_namespace: '${OPERATOR_NAMESPACE}'
scheduler_timeout: "3600"
diff --git a/installers/helm/Chart.yaml b/installers/helm/Chart.yaml
index e7a55444cb..a597693462 100644
--- a/installers/helm/Chart.yaml
+++ b/installers/helm/Chart.yaml
@@ -3,7 +3,7 @@ name: postgres-operator
description: Crunchy PostgreSQL Operator Helm chart for Kubernetes
type: application
version: 0.1.0
-appVersion: 4.5.1
+appVersion: 4.6.0-beta.1
home: https://github.com/CrunchyData/postgres-operator
icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png
keywords:
diff --git a/installers/helm/values.yaml b/installers/helm/values.yaml
index b3362d98e7..272b6f5ebc 100644
--- a/installers/helm/values.yaml
+++ b/installers/helm/values.yaml
@@ -37,7 +37,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos8-13.1-4.5.1"
+ccp_image_tag: "centos8-13.1-4.6.0-beta.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -70,14 +70,14 @@ pgo_apiserver_url: "https://postgres-operator"
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
-pgo_client_version: "4.5.1"
+pgo_client_version: "4.6.0-beta.1"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
-pgo_image_tag: "centos8-4.5.1"
+pgo_image_tag: "centos8-4.6.0-beta.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
diff --git a/installers/kubectl/client-setup.sh b/installers/kubectl/client-setup.sh
index 1504009506..a98b99de5b 100755
--- a/installers/kubectl/client-setup.sh
+++ b/installers/kubectl/client-setup.sh
@@ -14,7 +14,7 @@
# This script should be run after the operator has been deployed
PGO_OPERATOR_NAMESPACE="${PGO_OPERATOR_NAMESPACE:-pgo}"
PGO_USER_ADMIN="${PGO_USER_ADMIN:-pgouser-admin}"
-PGO_CLIENT_VERSION="${PGO_CLIENT_VERSION:-v4.5.1}"
+PGO_CLIENT_VERSION="${PGO_CLIENT_VERSION:-v4.6.0-beta.1}"
PGO_CLIENT_URL="https://github.com/CrunchyData/postgres-operator/releases/download/${PGO_CLIENT_VERSION}"
PGO_CMD="${PGO_CMD-kubectl}"
diff --git a/installers/kubectl/postgres-operator-ocp311.yml b/installers/kubectl/postgres-operator-ocp311.yml
index f2d411b295..cb1b16c96d 100644
--- a/installers/kubectl/postgres-operator-ocp311.yml
+++ b/installers/kubectl/postgres-operator-ocp311.yml
@@ -44,7 +44,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos8-13.1-4.5.1"
+ ccp_image_tag: "centos8-13.1-4.6.0-beta.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -77,14 +77,14 @@ data:
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
- pgo_client_version: "4.5.1"
+ pgo_client_version: "4.6.0-beta.1"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
- pgo_image_tag: "centos8-4.5.1"
+ pgo_image_tag: "centos8-4.6.0-beta.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
@@ -161,7 +161,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.5.1
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/kubectl/postgres-operator.yml b/installers/kubectl/postgres-operator.yml
index cf48f71d50..f5a7c18178 100644
--- a/installers/kubectl/postgres-operator.yml
+++ b/installers/kubectl/postgres-operator.yml
@@ -138,7 +138,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos8-13.1-4.5.1"
+ ccp_image_tag: "centos8-13.1-4.6.0-beta.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -171,14 +171,14 @@ data:
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
- pgo_client_version: "4.5.1"
+ pgo_client_version: "4.6.0-beta.1"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
- pgo_image_tag: "centos8-4.5.1"
+ pgo_image_tag: "centos8-4.6.0-beta.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
@@ -268,7 +268,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.5.1
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/metrics/ansible/README.md b/installers/metrics/ansible/README.md
index 57f68cd878..1be5088c15 100644
--- a/installers/metrics/ansible/README.md
+++ b/installers/metrics/ansible/README.md
@@ -4,7 +4,7 @@
-Latest Release: 4.5.1
+Latest Release: 4.6.0-beta.1
## General
diff --git a/installers/metrics/helm/Chart.yaml b/installers/metrics/helm/Chart.yaml
index 520204c2d1..a368325252 100644
--- a/installers/metrics/helm/Chart.yaml
+++ b/installers/metrics/helm/Chart.yaml
@@ -3,6 +3,6 @@ name: postgres-operator-monitoring
description: Install for Crunchy PostgreSQL Operator Monitoring
type: application
version: 0.1.0
-appVersion: 4.5.1
+appVersion: 4.6.0-beta.1
home: https://github.com/CrunchyData/postgres-operator
icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png
\ No newline at end of file
diff --git a/installers/metrics/helm/helm_template.yaml b/installers/metrics/helm/helm_template.yaml
index d0f8e64273..9cef2423b4 100644
--- a/installers/metrics/helm/helm_template.yaml
+++ b/installers/metrics/helm/helm_template.yaml
@@ -20,5 +20,5 @@ serviceAccount:
# the image prefix and tag to use for the 'pgo-deployer' container
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos8-4.5.1"
+pgo_image_tag: "centos8-4.6.0-beta.1"
diff --git a/installers/metrics/helm/values.yaml b/installers/metrics/helm/values.yaml
index edb41f9038..3baa0c0c04 100644
--- a/installers/metrics/helm/values.yaml
+++ b/installers/metrics/helm/values.yaml
@@ -20,7 +20,7 @@ serviceAccount:
# the image prefix and tag to use for the 'pgo-deployer' container
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos8-4.5.1"
+pgo_image_tag: "centos8-4.6.0-beta.1"
# =====================
# Configuration Options
diff --git a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml b/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
index b9bdd356a3..984fb0f6d1 100644
--- a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
+++ b/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
@@ -96,7 +96,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-metrics-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.5.1
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/metrics/kubectl/postgres-operator-metrics.yml b/installers/metrics/kubectl/postgres-operator-metrics.yml
index 87ca1f8af1..d005d00200 100644
--- a/installers/metrics/kubectl/postgres-operator-metrics.yml
+++ b/installers/metrics/kubectl/postgres-operator-metrics.yml
@@ -165,7 +165,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-metrics-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.5.1
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/olm/Makefile b/installers/olm/Makefile
index 2650440832..a5f7116d43 100644
--- a/installers/olm/Makefile
+++ b/installers/olm/Makefile
@@ -11,7 +11,7 @@ OLM_TOOLS ?= registry.localhost:5000/postgres-operator-olm-tools:$(OLM_SDK_VERSI
OLM_VERSION ?= 0.15.1
PGO_BASEOS ?= centos8
PGO_IMAGE_PREFIX ?= registry.developers.crunchydata.com/crunchydata
-PGO_VERSION ?= 4.5.1
+PGO_VERSION ?= 4.6.0-beta.1
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
CCP_IMAGE_TAG ?= $(PGO_BASEOS)-$(CCP_PG_FULLVERSION)-$(PGO_VERSION)
CCP_POSTGIS_IMAGE_TAG ?= $(PGO_BASEOS)-$(CCP_PG_FULLVERSION)-$(CCP_POSTGIS_VERSION)-$(PGO_VERSION)
diff --git a/pkg/apis/crunchydata.com/v1/doc.go b/pkg/apis/crunchydata.com/v1/doc.go
index 62cd5bc582..3108c58dfa 100644
--- a/pkg/apis/crunchydata.com/v1/doc.go
+++ b/pkg/apis/crunchydata.com/v1/doc.go
@@ -53,7 +53,7 @@ cluster.
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.5.1",
+ '{"ClientVersion":"4.6.0-beta.1",
"Namespace":"pgouser1",
"Name":"mycluster",
$PGO_APISERVER_URL/clusters
@@ -72,7 +72,7 @@ show all of the clusters that are in the given namespace.
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.5.1",
+ '{"ClientVersion":"4.6.0-beta.1",
"Namespace":"pgouser1",
"Clustername":"mycluster"}' \
$PGO_APISERVER_URL/showclusters
@@ -82,7 +82,7 @@ $PGO_APISERVER_URL/showclusters
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.5.1",
+ '{"ClientVersion":"4.6.0-beta.1",
"Namespace":"pgouser1",
"Clustername":"mycluster"}' \
$PGO_APISERVER_URL/clustersdelete
@@ -90,7 +90,7 @@ $PGO_APISERVER_URL/clustersdelete
Schemes: http, https
BasePath: /
- Version: 4.5.1
+ Version: 4.6.0-beta.1
License: Apache 2.0 http://www.apache.org/licenses/LICENSE-2.0
Contact: Crunchy Data https://www.crunchydata.com/
diff --git a/pkg/apiservermsgs/common.go b/pkg/apiservermsgs/common.go
index d52499aa4a..db8319faaa 100644
--- a/pkg/apiservermsgs/common.go
+++ b/pkg/apiservermsgs/common.go
@@ -15,7 +15,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
-const PGO_VERSION = "4.5.1"
+const PGO_VERSION = "4.6.0-beta.1"
// Ok status
const Ok = "ok"
diff --git a/redhat/atomic/help.1 b/redhat/atomic/help.1
index 6f9bfad143..aa67fa0e9a 100644
--- a/redhat/atomic/help.1
+++ b/redhat/atomic/help.1
@@ -56,4 +56,4 @@ The Red Hat Enterprise Linux version from which the container was built. For exa
\fB\fCRelease=\fR
.PP
-The specific release number of the container. For example, Release="4.5.1"
+The specific release number of the container. For example, Release="4.6.0-beta.1"
diff --git a/redhat/atomic/help.md b/redhat/atomic/help.md
index 1a12dbc144..4354a14dbf 100644
--- a/redhat/atomic/help.md
+++ b/redhat/atomic/help.md
@@ -45,4 +45,4 @@ The Red Hat Enterprise Linux version from which the container was built. For exa
`Release=`
-The specific release number of the container. For example, Release="4.5.1"
+The specific release number of the container. For example, Release="4.6.0-beta.1"
From 99c8f39d37b94c7e99fefa216a183569183c4d89 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 5 Jan 2021 08:49:12 -0500
Subject: [PATCH 119/276] Amend container compaction list
The pg_basebackup restore container was compacted into crunchy-postgres
---
docs/content/releases/4.6.0.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/docs/content/releases/4.6.0.md b/docs/content/releases/4.6.0.md
index d7719433cf..1b8f01fc41 100644
--- a/docs/content/releases/4.6.0.md
+++ b/docs/content/releases/4.6.0.md
@@ -142,6 +142,7 @@ Removed container images include:
- `crunchy-admin`
- `crunchy-backrest-restore`
- `crunchy-backup`
+- `crunchy-pgbasebackup-restore`
- `crunchy-pgbench`
- `crunchy-pgdump`
- `crunchy-pgrestore`
From 0d90315d3e5a02a7bff5948cce7f86c7a2a33503 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 5 Jan 2021 10:14:03 -0500
Subject: [PATCH 120/276] Remove relative relref from Hugo docs
This appears to have been added in error and affects the official
doc builds.
---
docs/content/Upgrade/manual/upgrade35.md | 4 ++--
docs/content/Upgrade/manual/upgrade4.md | 6 +++---
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/content/Upgrade/manual/upgrade35.md b/docs/content/Upgrade/manual/upgrade35.md
index cb7ec25138..53704d6bf6 100644
--- a/docs/content/Upgrade/manual/upgrade35.md
+++ b/docs/content/Upgrade/manual/upgrade35.md
@@ -114,7 +114,7 @@ We strongly recommend that you create a test cluster before proceeding to the ne
Once the Operator is installed and functional, create a new {{< param operatorVersion >}} cluster matching the cluster details recorded in Step 1. Be sure to use the primary PVC name (also noted in Step 1) and the same major PostgreSQL version as was used previously. This will allow the new clusters to utilize the existing PVCs.
-NOTE: If you have existing pgBackRest backups stored that you would like to have available in the upgraded cluster, you will need to follow the [PVC Renaming Procedure]( {{< relref "Upgrade/manual/upgrade35#pgbackrest-repo-pvc-renaming" >}}).
+NOTE: If you have existing pgBackRest backups stored that you would like to have available in the upgraded cluster, you will need to follow the [PVC Renaming Procedure](#pgbackrest-repo-pvc-renaming).
A simple example is given below, but more information on cluster creation can be found [here](/pgo-client/common-tasks#creating-a-postgresql-cluster)
@@ -226,7 +226,7 @@ spec:
volumeName: "crunchy-pv156"
```
-where name matches your new cluster (Remember that this will need to match the "primary PVC" name identified in [Step 2]( {{< relref "Upgrade/manual/upgrade35#step-2" >}}) of the upgrade procedure!) and namespace, storageClassName, accessModes, storage, volumeMode and volumeName match your original PVC.
+where name matches your new cluster (Remember that this will need to match the "primary PVC" name identified in [Step 2](#step-2) of the upgrade procedure!) and namespace, storageClassName, accessModes, storage, volumeMode and volumeName match your original PVC.
##### Step 6
diff --git a/docs/content/Upgrade/manual/upgrade4.md b/docs/content/Upgrade/manual/upgrade4.md
index da11f86f15..f39439cfc6 100644
--- a/docs/content/Upgrade/manual/upgrade4.md
+++ b/docs/content/Upgrade/manual/upgrade4.md
@@ -151,7 +151,7 @@ We strongly recommend that you create a test cluster before proceeding to the ne
Once the Operator is installed and functional, create a new {{< param operatorVersion >}} cluster matching the cluster details recorded in Step 1. Be sure to use the primary PVC name (also noted in Step 1) and the same major PostgreSQL version as was used previously. This will allow the new clusters to utilize the existing PVCs.
-NOTE: If you have existing pgBackRest backups stored that you would like to have available in the upgraded cluster, you will need to follow the [PVC Renaming Procedure]( {{< relref "Upgrade/manual/upgrade4#pgbackrest-repo-pvc-renaming" >}}).
+NOTE: If you have existing pgBackRest backups stored that you would like to have available in the upgraded cluster, you will need to follow the [PVC Renaming Procedure](#pgbackrest-repo-pvc-renaming).
A simple example is given below, but more information on cluster creation can be found [here](/pgo-client/common-tasks#creating-a-postgresql-cluster)
@@ -431,7 +431,7 @@ We strongly recommend that you create a test cluster before proceeding to the ne
Once the Operator is installed and functional, create a new {{< param operatorVersion >}} cluster matching the cluster details recorded in Step 1. Be sure to use the same name and the same major PostgreSQL version as was used previously. This will allow the new clusters to utilize the existing PVCs. A simple example is given below, but more information on cluster creation can be found [here](/pgo-client/common-tasks#creating-a-postgresql-cluster)
-NOTE: If you have existing pgBackRest backups stored that you would like to have available in the upgraded cluster, you will need to follow the [PVC Renaming Procedure]( {{< relref "Upgrade/manual/upgrade4#pgbackrest-repo-pvc-renaming" >}}).
+NOTE: If you have existing pgBackRest backups stored that you would like to have available in the upgraded cluster, you will need to follow the [PVC Renaming Procedure](#pgbackrest-repo-pvc-renaming).
```
pgo create cluster -n
@@ -543,7 +543,7 @@ spec:
volumeName: "crunchy-pv156"
```
-where name matches your new cluster (Remember that this will need to match the "primary PVC" name identified in [Step 2]( {{< relref "Upgrade/manual/upgrade35#step-2" >}}) of the upgrade procedure!) and namespace, storageClassName, accessModes, storage, volumeMode and volumeName match your original PVC.
+where name matches your new cluster (Remember that this will need to match the "primary PVC" name identified in [Step 2](#step-2) of the upgrade procedure!) and namespace, storageClassName, accessModes, storage, volumeMode and volumeName match your original PVC.
##### Step 6
From e4666c1c0d7dbd092d55e3c3260252c06731b819 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 5 Jan 2021 14:02:01 -0500
Subject: [PATCH 121/276] Adjustment to formatting in custom resource
documentation
This was breaking the PDF generation of the documentation for
unexplained reasons.
---
docs/content/custom-resources/_index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index 370a0aee25..62ddc459ba 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -747,7 +747,7 @@ make changes, as described below.
| limits | `create`, `update` | Specify the container resource limits that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). |
| name | `create` | The name of the PostgreSQL instance that is the primary. On creation, this should be set to be the same as `ClusterName`. |
| namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
-| nodeAffinity | `create` | Sets the [node affinity rules]({{< relref "/architecture/high-availability/_index.md#node-affinity" >}}) for the PostgreSQL cluster and associated PostgreSQL instances. Can be overridden on a per-instance (`pgreplicas.crunchydata.com`) basis. Please see the `Node Affinity Specification` section below. |
+| nodeAffinity | `create` | Sets the [node affinity rules](/architecture/high-availability/#node-affinity) for the PostgreSQL cluster and associated PostgreSQL instances. Can be overridden on a per-instance (`pgreplicas.crunchydata.com`) basis. Please see the `Node Affinity Specification` section below. |
| pgBadger | `create`,`update` | If `true`, deploys the `crunchy-pgbadger` sidecar for query analysis. |
| pgbadgerport | `create` | If the `PGBadger` label is set, then this specifies the port that the pgBadger sidecar runs on (e.g. `10000`) |
| pgBouncer | `create`, `update` | If specified, defines the attributes to use for the pgBouncer connection pooling deployment that can be used in conjunction with this PostgreSQL cluster. Please see the specification defined below. |
From d43dbac3c55e21db3a27fa804eeab6a6f1dee82b Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 5 Jan 2021 15:11:52 -0500
Subject: [PATCH 122/276] Allow for empty --service-type on `pgo create
pgbouncer`
This was an oversight in the logic for allowing a default pgBouncer
Service Type to be set.
Issue: [ch10065]
---
internal/apiserver/pgbouncerservice/pgbouncerimpl.go | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/internal/apiserver/pgbouncerservice/pgbouncerimpl.go b/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
index 5dfc3dfc47..92030852e5 100644
--- a/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
+++ b/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
@@ -116,7 +116,7 @@ func CreatePgbouncer(request *msgs.CreatePgbouncerRequest, ns, pgouser string) m
resp.Status.Msg = fmt.Sprintf("invalid service type %q", request.ServiceType)
return resp
case v1.ServiceTypeClusterIP, v1.ServiceTypeNodePort,
- v1.ServiceTypeLoadBalancer, v1.ServiceTypeExternalName:
+ v1.ServiceTypeLoadBalancer, v1.ServiceTypeExternalName, "":
cluster.Spec.PgBouncer.ServiceType = request.ServiceType
}
From 01f59bfac2507a1ceeddc5a6ff33a868b344555a Mon Sep 17 00:00:00 2001
From: Joseph Mckulka <16840147+jmckulk@users.noreply.github.com>
Date: Tue, 5 Jan 2021 16:05:50 -0500
Subject: [PATCH 123/276] Update failover e2e test
As part of the move from failover to switchover the output message from
the `pgo failover` command was updated. This change updates the test to
account for the new output message.
---
testing/pgo_cli/cluster_failover_test.go | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/testing/pgo_cli/cluster_failover_test.go b/testing/pgo_cli/cluster_failover_test.go
index ac4f2a40c6..e7b2f02585 100644
--- a/testing/pgo_cli/cluster_failover_test.go
+++ b/testing/pgo_cli/cluster_failover_test.go
@@ -68,7 +68,7 @@ func TestClusterFailover(t *testing.T) {
"--target="+before[0].Labels["deployment-name"], "--no-prompt",
).Exec(t)
require.NoError(t, err)
- require.Contains(t, output, "created")
+ require.Contains(t, output, "success")
replaced := func() bool {
after := replicaPods(t, namespace(), cluster())
From 6069d0d6589097d21422b4e3f85a6622f91b4e9c Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 5 Jan 2021 16:16:29 -0500
Subject: [PATCH 124/276] Update build variable to default to PostgreSQL 13
---
Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Makefile b/Makefile
index 62661ad617..4f5ca5aef1 100644
--- a/Makefile
+++ b/Makefile
@@ -6,7 +6,7 @@ PGO_BASEOS ?= centos8
PGO_IMAGE_PREFIX ?= crunchydata
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
PGO_VERSION ?= 4.6.0-beta.1
-PGO_PG_VERSION ?= 12
+PGO_PG_VERSION ?= 13
PGO_PG_FULLVERSION ?= 13.1
PGO_BACKREST_VERSION ?= 2.31
PACKAGER ?= yum
From abab93812109f133272ec93a4f525eaf23ad814d Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 7 Jan 2021 11:05:41 -0500
Subject: [PATCH 125/276] Bump Prometheus to v2.24.0
---
docs/content/installation/metrics/metrics-configuration.md | 2 +-
installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/content/installation/metrics/metrics-configuration.md b/docs/content/installation/metrics/metrics-configuration.md
index 559f9273ef..1c86eb7819 100644
--- a/docs/content/installation/metrics/metrics-configuration.md
+++ b/docs/content/installation/metrics/metrics-configuration.md
@@ -111,7 +111,7 @@ and tag as needed to use the RedHat certified containers:
| `grafana_image_tag` | 6.7.5 | **Required** | Configures the image tag to use for the Grafana container. |
| `prometheus_image_prefix` | prom | **Required** | Configures the image prefix to use for the Prometheus container. |
| `prometheus_image_name` | prometheus | **Required** | Configures the image name to use for the Prometheus container. |
-| `prometheus_image_tag` | v2.23.0 | **Required** | Configures the image tag to use for the Prometheus container. |
+| `prometheus_image_tag` | v2.24.0 | **Required** | Configures the image tag to use for the Prometheus container. |
Additionally, these same settings can be utilized as needed to support custom image names,
tags, and additional container registries.
diff --git a/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml b/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml
index a16a017d63..cb6f5b6b4a 100644
--- a/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml
+++ b/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml
@@ -45,7 +45,7 @@ prometheus_custom_config: ""
prometheus_install: "true"
prometheus_image_prefix: "prom"
prometheus_image_name: "prometheus"
-prometheus_image_tag: "v2.23.0"
+prometheus_image_tag: "v2.24.0"
prometheus_port: "9090"
prometheus_service_name: "crunchy-prometheus"
prometheus_service_type: "ClusterIP"
From f4e9de7e7a8c55e0606907c7a82faf953076b007 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 7 Jan 2021 11:51:11 -0500
Subject: [PATCH 126/276] Modify roles for pgo-target-role
This moves the "replicasets" resource to be under the "apps"
group, where it really should have been all along (at least since
1.9). This also adds an explicit permission for viewing pods/logs.
Co-authored-by: Pramodh Mereddy
Issue: [ch10081]
---
deploy/cluster-roles.yaml | 11 ++++--
docs/content/architecture/namespace.md | 35 +++++++++++--------
.../files/pgo-configs/pgo-target-role.json | 18 ++++++++--
.../templates/cluster-rbac.yaml.j2 | 11 ++++--
installers/helm/templates/rbac.yaml | 3 +-
installers/kubectl/postgres-operator.yml | 1 +
6 files changed, 57 insertions(+), 22 deletions(-)
diff --git a/deploy/cluster-roles.yaml b/deploy/cluster-roles.yaml
index cb0bb85b41..d760492836 100644
--- a/deploy/cluster-roles.yaml
+++ b/deploy/cluster-roles.yaml
@@ -41,8 +41,6 @@ rules:
- endpoints
- pods
- pods/exec
- - pods/log
- - replicasets
- secrets
- services
- persistentvolumeclaims
@@ -55,10 +53,19 @@ rules:
- update
- delete
- deletecollection
+ - apiGroups:
+ - ''
+ resources:
+ - pods/log
+ verbs:
+ - get
+ - list
+ - watch
- apiGroups:
- apps
resources:
- deployments
+ - replicasets
verbs:
- get
- list
diff --git a/docs/content/architecture/namespace.md b/docs/content/architecture/namespace.md
index f6b4265723..1a551a8b91 100644
--- a/docs/content/architecture/namespace.md
+++ b/docs/content/architecture/namespace.md
@@ -34,8 +34,8 @@ settings.
Enables full dynamic namespace capabilities, in which the Operator can create, delete and update
any namespaces within a Kubernetes cluster. With `dynamic` mode enabled, the PostgreSQL Operator
-can respond to namespace events in a Kubernetes cluster, such as when a namespace is created, and
-take an appropriate action, such as adding the PostgreSQL Operator controllers for the newly
+can respond to namespace events in a Kubernetes cluster, such as when a namespace is created, and
+take an appropriate action, such as adding the PostgreSQL Operator controllers for the newly
created namespace.
The following defines the namespace permissions required for the `dynamic` mode to be enabled:
@@ -62,8 +62,8 @@ rules:
### `readonly`
-In `readonly` mode, the PostgreSQL Operator is still able to listen to namespace events within a
-Kubernetes cluster, but it can no longer modify (create, update, delete) namespaces. For example,
+In `readonly` mode, the PostgreSQL Operator is still able to listen to namespace events within a
+Kubernetes cluster, but it can no longer modify (create, update, delete) namespaces. For example,
if a Kubernetes administrator creates a namespace, the PostgreSQL Operator can respond and create
controllers for that namespace.
@@ -95,7 +95,7 @@ Operator is unable to dynamically respond to namespace events in the cluster, i
target namespaces are deleted or new target namespaces need to be added, the PostgreSQL Operator
will need to be re-deployed.
-Please note that it is important to redeploy the PostgreSQL Operator following the deletion of a
+Please note that it is important to redeploy the PostgreSQL Operator following the deletion of a
target namespace to ensure it no longer attempts to listen for events in that namespace.
The `disabled` mode is enabled when the PostgreSQL Operator has not been assigned namespace
@@ -103,22 +103,22 @@ permissions.
## RBAC Reconciliation
-By default, the PostgreSQL Operator will attempt to reconcile RBAC resources (ServiceAccounts,
+By default, the PostgreSQL Operator will attempt to reconcile RBAC resources (ServiceAccounts,
Roles and RoleBindings) within each namespace configured for the PostgreSQL Operator installation.
This allows the PostgreSQL Operator to create, update and delete the various RBAC resources it
requires in order to properly create and manage PostgreSQL clusters within each targeted namespace
(this includes self-healing RBAC resources as needed if removed and/or misconfigured).
In order for RBAC reconciliation to function properly, the PostgreSQL Operator ServiceAccount must
-be assigned a certain set of permissions. While the PostgreSQL Operator is not concerned with
+be assigned a certain set of permissions. While the PostgreSQL Operator is not concerned with
exactly how it has been assigned the permissions required to reconcile RBAC in each target
-namespace, the various [installation methods]({{< relref "installation" >}}) supported by the
+namespace, the various [installation methods]({{< relref "installation" >}}) supported by the
PostgreSQL Operator install a recommended set of permissions based on the specific Namespace Operating
Mode enabled (see section [Namespace Operating Modes]({{< relref "#namespace-operating-modes" >}})
above for more information regarding the various Namespace Operating Modes available).
-The following section defines the recommended set of permissions that should be assigned to the
-PostgreSQL Operator ServiceAccount in order to ensure proper RBAC reconciliation based on the
+The following section defines the recommended set of permissions that should be assigned to the
+PostgreSQL Operator ServiceAccount in order to ensure proper RBAC reconciliation based on the
specific Namespace Operating Mode enabled. Please note that each PostgreSQL Operator installation
method handles the initial configuration and setup of the permissions shown below based on the
Namespace Operating Mode configured during installation.
@@ -127,7 +127,7 @@ Namespace Operating Mode configured during installation.
When using the `dynamic` Namespace Operating Mode, it is recommended that the PostgreSQL Operator
ServiceAccount be granted permissions to manage RBAC inside any namespace in the Kubernetes cluster
-via a ClusterRole. This allows for a fully-hands off approach to managing RBAC within each
+via a ClusterRole. This allows for a fully-hands off approach to managing RBAC within each
targeted namespace. In other words, as namespaces are added and removed post-installation of
the PostgreSQL Operator (e.g. using `pgo create namespace` or `pgo delete namespace`), the Operator
is able to automatically reconcile RBAC in those namespaces without the need for any external
@@ -170,8 +170,6 @@ rules:
- endpoints
- pods
- pods/exec
- - pods/log
- - replicasets
- secrets
- services
- persistentvolumeclaims
@@ -184,10 +182,19 @@ rules:
- update
- delete
- deletecollection
+ - apiGroups:
+ - ''
+ resources:
+ - pods/log
+ verbs:
+ - get
+ - list
+ - watch
- apiGroups:
- apps
resources:
- deployments
+ - replicasets
verbs:
- get
- list
@@ -230,7 +237,7 @@ rules:
### `readonly` & `disabled` Namespace Operating Modes
-When using the `readonly` or `disabled` Namespace Operating Modes, it is recommended that the
+When using the `readonly` or `disabled` Namespace Operating Modes, it is recommended that the
PostgreSQL Operator ServiceAccount be granted permissions to manage RBAC inside of any configured
namespaces using local Roles within each targeted namespace. This means that as new namespaces
are added and removed post-installation of the PostgreSQL Operator, an administrator must manually
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-target-role.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-target-role.json
index 1cb6a31cc5..09b77ef469 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-target-role.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-target-role.json
@@ -15,8 +15,6 @@
"endpoints",
"pods",
"pods/exec",
- "pods/log",
- "replicasets",
"secrets",
"services",
"persistentvolumeclaims"
@@ -32,12 +30,26 @@
"deletecollection"
]
},
+ {
+ "apiGroups": [
+ ""
+ ],
+ "resources": [
+ "pods/log"
+ ],
+ "verbs":[
+ "get",
+ "list",
+ "watch"
+ ]
+ },
{
"apiGroups": [
"apps"
],
"resources": [
- "deployments"
+ "deployments",
+ "replicasets"
],
"verbs":[
"get",
diff --git a/installers/ansible/roles/pgo-operator/templates/cluster-rbac.yaml.j2 b/installers/ansible/roles/pgo-operator/templates/cluster-rbac.yaml.j2
index 771080042e..4212d9107b 100644
--- a/installers/ansible/roles/pgo-operator/templates/cluster-rbac.yaml.j2
+++ b/installers/ansible/roles/pgo-operator/templates/cluster-rbac.yaml.j2
@@ -42,8 +42,6 @@ rules:
- endpoints
- pods
- pods/exec
- - pods/log
- - replicasets
- secrets
- services
- persistentvolumeclaims
@@ -56,10 +54,19 @@ rules:
- update
- delete
- deletecollection
+ - apiGroups:
+ - ''
+ resources:
+ - pods/log
+ verbs:
+ - get
+ - list
+ - watch
- apiGroups:
- apps
resources:
- deployments
+ - replicasets
verbs:
- get
- list
diff --git a/installers/helm/templates/rbac.yaml b/installers/helm/templates/rbac.yaml
index dbef140471..19d6fc06e4 100644
--- a/installers/helm/templates/rbac.yaml
+++ b/installers/helm/templates/rbac.yaml
@@ -73,6 +73,7 @@ rules:
- extensions
resources:
- deployments
+ - replicasets
verbs:
- get
- list
@@ -145,4 +146,4 @@ subjects:
- kind: ServiceAccount
name: {{ include "postgres-operator.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
-{{ end }}
\ No newline at end of file
+{{ end }}
diff --git a/installers/kubectl/postgres-operator.yml b/installers/kubectl/postgres-operator.yml
index f5a7c18178..2a595b3b09 100644
--- a/installers/kubectl/postgres-operator.yml
+++ b/installers/kubectl/postgres-operator.yml
@@ -60,6 +60,7 @@ rules:
- extensions
resources:
- deployments
+ - replicasets
verbs:
- get
- list
From 7aed5450df5688293b366183da01913d2f787f77 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 7 Jan 2021 11:55:20 -0500
Subject: [PATCH 127/276] Bump v4.6.0-beta.2
---
Makefile | 2 +-
bin/push-ccp-to-gcr.sh | 2 +-
conf/postgres-operator/pgo.yaml | 4 ++--
docs/config.toml | 2 +-
docs/content/releases/4.6.0.md | 3 ++-
examples/create-by-resource/fromcrd.json | 6 +++---
examples/envs.sh | 2 +-
.../helm/create-cluster/templates/pgcluster.yaml | 2 +-
examples/helm/create-cluster/values.yaml | 4 ++--
examples/kustomize/createcluster/README.md | 16 ++++++++--------
.../kustomize/createcluster/base/pgcluster.yaml | 6 +++---
.../overlay/staging/hippo-rpl1-pgreplica.yaml | 2 +-
installers/ansible/README.md | 2 +-
installers/ansible/values.yaml | 6 +++---
installers/gcp-marketplace/Makefile | 2 +-
installers/gcp-marketplace/README.md | 2 +-
installers/gcp-marketplace/values.yaml | 6 +++---
installers/helm/Chart.yaml | 2 +-
installers/helm/values.yaml | 6 +++---
installers/kubectl/client-setup.sh | 2 +-
installers/kubectl/postgres-operator-ocp311.yml | 8 ++++----
installers/kubectl/postgres-operator.yml | 8 ++++----
installers/metrics/ansible/README.md | 2 +-
installers/metrics/helm/Chart.yaml | 2 +-
installers/metrics/helm/helm_template.yaml | 2 +-
installers/metrics/helm/values.yaml | 2 +-
.../kubectl/postgres-operator-metrics-ocp311.yml | 2 +-
.../kubectl/postgres-operator-metrics.yml | 2 +-
installers/olm/Makefile | 2 +-
pkg/apis/crunchydata.com/v1/doc.go | 8 ++++----
pkg/apiservermsgs/common.go | 2 +-
redhat/atomic/help.1 | 2 +-
redhat/atomic/help.md | 2 +-
33 files changed, 62 insertions(+), 61 deletions(-)
diff --git a/Makefile b/Makefile
index 4f5ca5aef1..63af50dea4 100644
--- a/Makefile
+++ b/Makefile
@@ -5,7 +5,7 @@ PGOROOT ?= $(CURDIR)
PGO_BASEOS ?= centos8
PGO_IMAGE_PREFIX ?= crunchydata
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
-PGO_VERSION ?= 4.6.0-beta.1
+PGO_VERSION ?= 4.6.0-beta.2
PGO_PG_VERSION ?= 13
PGO_PG_FULLVERSION ?= 13.1
PGO_BACKREST_VERSION ?= 2.31
diff --git a/bin/push-ccp-to-gcr.sh b/bin/push-ccp-to-gcr.sh
index a235fecf59..1f3838f374 100755
--- a/bin/push-ccp-to-gcr.sh
+++ b/bin/push-ccp-to-gcr.sh
@@ -16,7 +16,7 @@
GCR_IMAGE_PREFIX=gcr.io/crunchy-dev-test
CCP_IMAGE_PREFIX=crunchydata
-CCP_IMAGE_TAG=centos8-13.1-4.6.0-beta.1
+CCP_IMAGE_TAG=centos8-13.1-4.6.0-beta.2
IMAGES=(
crunchy-prometheus
diff --git a/conf/postgres-operator/pgo.yaml b/conf/postgres-operator/pgo.yaml
index 2b0de113bf..28493c8c4a 100644
--- a/conf/postgres-operator/pgo.yaml
+++ b/conf/postgres-operator/pgo.yaml
@@ -2,7 +2,7 @@ Cluster:
CCPImagePrefix: registry.developers.crunchydata.com/crunchydata
Metrics: false
Badger: false
- CCPImageTag: centos8-13.1-4.6.0-beta.1
+ CCPImageTag: centos8-13.1-4.6.0-beta.2
Port: 5432
PGBadgerPort: 10000
ExporterPort: 9187
@@ -81,4 +81,4 @@ Storage:
Pgo:
Audit: false
PGOImagePrefix: registry.developers.crunchydata.com/crunchydata
- PGOImageTag: centos8-4.6.0-beta.1
+ PGOImageTag: centos8-4.6.0-beta.2
diff --git a/docs/config.toml b/docs/config.toml
index 3a59d4523e..7975290ef6 100644
--- a/docs/config.toml
+++ b/docs/config.toml
@@ -25,7 +25,7 @@ disableNavChevron = false # set true to hide next/prev chevron, default is false
highlightClientSide = false # set true to use highlight.pack.js instead of the default hugo chroma highlighter
menushortcutsnewtab = true # set true to open shortcuts links to a new tab/window
enableGitInfo = true
-operatorVersion = "4.6.0-beta.1"
+operatorVersion = "4.6.0-beta.2"
postgresVersion = "13.1"
postgresVersion13 = "13.1"
postgresVersion12 = "13.1"
diff --git a/docs/content/releases/4.6.0.md b/docs/content/releases/4.6.0.md
index 1b8f01fc41..3f423fe16d 100644
--- a/docs/content/releases/4.6.0.md
+++ b/docs/content/releases/4.6.0.md
@@ -22,7 +22,7 @@ The PostgreSQL Operator 4.6.0 release includes the following software versions u
The monitoring stack for the PostgreSQL Operator uses upstream components as opposed to repackaging them. These are specified as part of the [PostgreSQL Operator Installer](https://access.crunchydata.com/documentation/postgres-operator/latest/installation/postgres-operator/). We have tested this release with the following versions of each component:
-- Prometheus: 2.23.0
+- Prometheus: 2.24.0
- Grafana: 6.7.5
- Alertmanager: 0.21.0
@@ -230,5 +230,6 @@ Passing in the [`--process-max`](https://pgbackrest.org/command.html#command-arc
- Remove legacy `defaultMode` setting on the volume instructions for the pgBackRest repo Secret as the `readOnly` setting is used on the mount itself. Reported by (@szhang1).
- The logger no longer defaults to using a log level of `DEBUG`.
- Autofailover is no longer disabled when an `rmdata` Job is run, enabling a clean database shutdown process when deleting a PostgreSQL cluster.
+- Update `pgo-target` permissions to match expectations for modern Kubernetes versions.
- Major upgrade container now includes references for `pgnodemx`.
- During a major upgrade, ensure permissions are correct on the old data directory before running `pg_upgrade`.
diff --git a/examples/create-by-resource/fromcrd.json b/examples/create-by-resource/fromcrd.json
index dc399d2fd0..f1f2043f14 100644
--- a/examples/create-by-resource/fromcrd.json
+++ b/examples/create-by-resource/fromcrd.json
@@ -10,7 +10,7 @@
"deployment-name": "fromcrd",
"name": "fromcrd",
"pg-cluster": "fromcrd",
- "pgo-version": "4.6.0-beta.1",
+ "pgo-version": "4.6.0-beta.2",
"pgouser": "pgoadmin"
},
"name": "fromcrd",
@@ -45,7 +45,7 @@
"supplementalgroups": ""
},
"ccpimage": "crunchy-postgres-ha",
- "ccpimagetag": "centos8-13.1-4.6.0-beta.1",
+ "ccpimagetag": "centos8-13.1-4.6.0-beta.2",
"clustername": "fromcrd",
"database": "userdb",
"exporterport": "9187",
@@ -60,7 +60,7 @@
"port": "5432",
"user": "testuser",
"userlabels": {
- "pgo-version": "4.6.0-beta.1"
+ "pgo-version": "4.6.0-beta.2"
}
}
}
diff --git a/examples/envs.sh b/examples/envs.sh
index 1811e2d5f7..37fa831db8 100644
--- a/examples/envs.sh
+++ b/examples/envs.sh
@@ -20,7 +20,7 @@ export PGO_CONF_DIR=$PGOROOT/installers/ansible/roles/pgo-operator/files
# the version of the Operator you run is set by these vars
export PGO_IMAGE_PREFIX=registry.developers.crunchydata.com/crunchydata
export PGO_BASEOS=centos8
-export PGO_VERSION=4.6.0-beta.1
+export PGO_VERSION=4.6.0-beta.2
export PGO_IMAGE_TAG=$PGO_BASEOS-$PGO_VERSION
# for setting the pgo apiserver port, disabling TLS or not verifying TLS
diff --git a/examples/helm/create-cluster/templates/pgcluster.yaml b/examples/helm/create-cluster/templates/pgcluster.yaml
index d137aeef5b..eb609def15 100644
--- a/examples/helm/create-cluster/templates/pgcluster.yaml
+++ b/examples/helm/create-cluster/templates/pgcluster.yaml
@@ -10,7 +10,7 @@ metadata:
deployment-name: {{ .Values.pgclustername }}
name: {{ .Values.pgclustername }}
pg-cluster: {{ .Values.pgclustername }}
- pgo-version: 4.6.0-beta.1
+ pgo-version: 4.6.0-beta.2
pgouser: admin
name: {{ .Values.pgclustername }}
namespace: {{ .Values.namespace }}
diff --git a/examples/helm/create-cluster/values.yaml b/examples/helm/create-cluster/values.yaml
index af38177a81..bfc9b73bb0 100644
--- a/examples/helm/create-cluster/values.yaml
+++ b/examples/helm/create-cluster/values.yaml
@@ -4,11 +4,11 @@
# The values is for the namespace and the postgresql cluster name
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
-ccpimagetag: centos8-13.1-4.6.0-beta.1
+ccpimagetag: centos8-13.1-4.6.0-beta.2
namespace: pgo
pgclustername: hippo
pgoimageprefix: registry.developers.crunchydata.com/crunchydata
-pgoversion: 4.6.0-beta.1
+pgoversion: 4.6.0-beta.2
hipposecretuser: "hippo"
hipposecretpassword: "Supersecurepassword*"
postgressecretuser: "postgres"
diff --git a/examples/kustomize/createcluster/README.md b/examples/kustomize/createcluster/README.md
index 1b03c207e5..8cdd77ab04 100644
--- a/examples/kustomize/createcluster/README.md
+++ b/examples/kustomize/createcluster/README.md
@@ -44,13 +44,13 @@ pgo show cluster hippo -n pgo
```
You will see something like this if successful:
```
-cluster : hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.1)
+cluster : hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.2)
pod : hippo-8fb6bd96-j87wq (Running) on gke-xxxx-default-pool-38e946bd-257w (1/1) (primary)
pvc: hippo (1Gi)
deployment : hippo
deployment : hippo-backrest-shared-repo
service : hippo - ClusterIP (10.0.56.86) - Ports (2022/TCP, 5432/TCP)
- labels : pgo-version=4.6.0-beta.1 name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata
+ labels : pgo-version=4.6.0-beta.2 name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata
```
Feel free to run other pgo cli commands on the hippo cluster
@@ -79,7 +79,7 @@ pgo show cluster dev-hippo -n pgo
```
You will see something like this if successful:
```
-cluster : dev-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.1)
+cluster : dev-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.2)
pod : dev-hippo-588d4cb746-bwrxb (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
pvc: dev-hippo (1Gi)
deployment : dev-hippo
@@ -87,7 +87,7 @@ cluster : dev-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.1)
deployment : dev-hippo-pgbouncer
service : dev-hippo - ClusterIP (10.0.62.87) - Ports (2022/TCP, 5432/TCP)
service : dev-hippo-pgbouncer - ClusterIP (10.0.48.120) - Ports (5432/TCP)
- labels : crunchy-pgha-scope=dev-hippo name=dev-hippo pg-cluster=dev-hippo vendor=crunchydata deployment-name=dev-hippo environment=development pgo-version=4.6.0-beta.1 pgouser=admin
+ labels : crunchy-pgha-scope=dev-hippo name=dev-hippo pg-cluster=dev-hippo vendor=crunchydata deployment-name=dev-hippo environment=development pgo-version=4.6.0-beta.2 pgouser=admin
```
#### staging
The staging overlay will deploy a crunchy postgreSQL cluster with 2 replicas with annotations added
@@ -113,7 +113,7 @@ pgo show cluster staging-hippo -n pgo
```
You will see something like this if successful, (Notice one of the replicas is a different size):
```
-cluster : staging-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.1)
+cluster : staging-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.2)
pod : staging-hippo-85cf6dcb65-9h748 (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
pvc: staging-hippo (1Gi)
pod : staging-hippo-lnxw-cf47d8c8b-6r4wn (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (replica)
@@ -128,7 +128,7 @@ cluster : staging-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.1)
service : staging-hippo-replica - ClusterIP (10.0.56.57) - Ports (2022/TCP, 5432/TCP)
pgreplica : staging-hippo-lnxw
pgreplica : staging-hippo-rpl1
- labels : deployment-name=staging-hippo environment=staging name=staging-hippo crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-version=4.6.0-beta.1 pgouser=admin vendor=crunchydata
+ labels : deployment-name=staging-hippo environment=staging name=staging-hippo crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-version=4.6.0-beta.2 pgouser=admin vendor=crunchydata
```
#### production
@@ -154,7 +154,7 @@ pgo show cluster prod-hippo -n pgo
```
You will see something like this if successful, (Notice one of the replicas is a different size):
```
-cluster : prod-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.1)
+cluster : prod-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.2)
pod : prod-hippo-5d6dd46497-rr67c (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (primary)
pvc: prod-hippo (1Gi)
pod : prod-hippo-flty-84d97c8769-2pzbh (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (replica)
@@ -165,7 +165,7 @@ cluster : prod-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.1)
service : prod-hippo - ClusterIP (10.0.56.18) - Ports (2022/TCP, 5432/TCP)
service : prod-hippo-replica - ClusterIP (10.0.56.101) - Ports (2022/TCP, 5432/TCP)
pgreplica : prod-hippo-flty
- labels : pgo-version=4.6.0-beta.1 deployment-name=prod-hippo environment=production pg-cluster=prod-hippo crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
+ labels : pgo-version=4.6.0-beta.2 deployment-name=prod-hippo environment=production pg-cluster=prod-hippo crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
```
### Delete the clusters
To delete the clusters run the following pgo cli commands
diff --git a/examples/kustomize/createcluster/base/pgcluster.yaml b/examples/kustomize/createcluster/base/pgcluster.yaml
index 91c0ad713c..f89650796b 100644
--- a/examples/kustomize/createcluster/base/pgcluster.yaml
+++ b/examples/kustomize/createcluster/base/pgcluster.yaml
@@ -10,7 +10,7 @@ metadata:
deployment-name: hippo
name: hippo
pg-cluster: hippo
- pgo-version: 4.6.0-beta.1
+ pgo-version: 4.6.0-beta.2
pgouser: admin
name: hippo
namespace: pgo
@@ -42,7 +42,7 @@ spec:
annotations: {}
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
- ccpimagetag: centos8-13.1-4.6.0-beta.1
+ ccpimagetag: centos8-13.1-4.6.0-beta.2
clustername: hippo
customconfig: ""
database: hippo
@@ -63,4 +63,4 @@ spec:
port: "5432"
user: hippo
userlabels:
- pgo-version: 4.6.0-beta.1
+ pgo-version: 4.6.0-beta.2
diff --git a/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
index c371817730..359350bb6f 100644
--- a/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
+++ b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
@@ -20,4 +20,4 @@ spec:
storagetype: dynamic
supplementalgroups: ""
userlabels:
- pgo-version: 4.6.0-beta.1
+ pgo-version: 4.6.0-beta.2
diff --git a/installers/ansible/README.md b/installers/ansible/README.md
index b82f698d2a..4037651766 100644
--- a/installers/ansible/README.md
+++ b/installers/ansible/README.md
@@ -4,7 +4,7 @@
-Latest Release: 4.6.0-beta.1
+Latest Release: 4.6.0-beta.2
## General
diff --git a/installers/ansible/values.yaml b/installers/ansible/values.yaml
index c8ee2b641a..bc6a9a7048 100644
--- a/installers/ansible/values.yaml
+++ b/installers/ansible/values.yaml
@@ -17,7 +17,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos8-13.1-4.6.0-beta.1"
+ccp_image_tag: "centos8-13.1-4.6.0-beta.2"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -50,14 +50,14 @@ pgo_apiserver_url: "https://postgres-operator"
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
-pgo_client_version: "4.6.0-beta.1"
+pgo_client_version: "4.6.0-beta.2"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
-pgo_image_tag: "centos8-4.6.0-beta.1"
+pgo_image_tag: "centos8-4.6.0-beta.2"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
diff --git a/installers/gcp-marketplace/Makefile b/installers/gcp-marketplace/Makefile
index ac2e205f8b..c344f572e8 100644
--- a/installers/gcp-marketplace/Makefile
+++ b/installers/gcp-marketplace/Makefile
@@ -6,7 +6,7 @@ MARKETPLACE_TOOLS ?= gcr.io/cloud-marketplace-tools/k8s/dev:$(MARKETPLACE_VERSIO
MARKETPLACE_VERSION ?= 0.9.4
KUBECONFIG ?= $(HOME)/.kube/config
PARAMETERS ?= {}
-PGO_VERSION ?= 4.6.0-beta.1
+PGO_VERSION ?= 4.6.0-beta.2
IMAGE_BUILD_ARGS = --build-arg MARKETPLACE_VERSION='$(MARKETPLACE_VERSION)' \
--build-arg PGO_VERSION='$(PGO_VERSION)'
diff --git a/installers/gcp-marketplace/README.md b/installers/gcp-marketplace/README.md
index cd71a9b719..4d204df786 100644
--- a/installers/gcp-marketplace/README.md
+++ b/installers/gcp-marketplace/README.md
@@ -59,7 +59,7 @@ Google Cloud Marketplace.
```shell
IMAGE_REPOSITORY=gcr.io/crunchydata-public/postgres-operator
- export PGO_VERSION=4.6.0-beta.1
+ export PGO_VERSION=4.6.0-beta.2
export INSTALLER_IMAGE=${IMAGE_REPOSITORY}/deployer:${PGO_VERSION}
export OPERATOR_IMAGE=${IMAGE_REPOSITORY}:${PGO_VERSION}
export OPERATOR_IMAGE_API=${IMAGE_REPOSITORY}/pgo-apiserver:${PGO_VERSION}
diff --git a/installers/gcp-marketplace/values.yaml b/installers/gcp-marketplace/values.yaml
index 3be2c2e119..7bd7a6c903 100644
--- a/installers/gcp-marketplace/values.yaml
+++ b/installers/gcp-marketplace/values.yaml
@@ -10,7 +10,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos8-13.1-4.6.0-beta.1"
+ccp_image_tag: "centos8-13.1-4.6.0-beta.2"
create_rbac: "true"
db_name: ""
db_password_age_days: "0"
@@ -32,9 +32,9 @@ pgo_admin_role_name: "pgoadmin"
pgo_admin_username: "admin"
pgo_client_container_install: "false"
pgo_client_install: 'false'
-pgo_client_version: "4.6.0-beta.1"
+pgo_client_version: "4.6.0-beta.2"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos8-4.6.0-beta.1"
+pgo_image_tag: "centos8-4.6.0-beta.2"
pgo_installation_name: '${OPERATOR_NAME}'
pgo_operator_namespace: '${OPERATOR_NAMESPACE}'
scheduler_timeout: "3600"
diff --git a/installers/helm/Chart.yaml b/installers/helm/Chart.yaml
index a597693462..3f03cf2db0 100644
--- a/installers/helm/Chart.yaml
+++ b/installers/helm/Chart.yaml
@@ -3,7 +3,7 @@ name: postgres-operator
description: Crunchy PostgreSQL Operator Helm chart for Kubernetes
type: application
version: 0.1.0
-appVersion: 4.6.0-beta.1
+appVersion: 4.6.0-beta.2
home: https://github.com/CrunchyData/postgres-operator
icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png
keywords:
diff --git a/installers/helm/values.yaml b/installers/helm/values.yaml
index 272b6f5ebc..bb5ce533cf 100644
--- a/installers/helm/values.yaml
+++ b/installers/helm/values.yaml
@@ -37,7 +37,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos8-13.1-4.6.0-beta.1"
+ccp_image_tag: "centos8-13.1-4.6.0-beta.2"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -70,14 +70,14 @@ pgo_apiserver_url: "https://postgres-operator"
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
-pgo_client_version: "4.6.0-beta.1"
+pgo_client_version: "4.6.0-beta.2"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
-pgo_image_tag: "centos8-4.6.0-beta.1"
+pgo_image_tag: "centos8-4.6.0-beta.2"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
diff --git a/installers/kubectl/client-setup.sh b/installers/kubectl/client-setup.sh
index a98b99de5b..e225efa34a 100755
--- a/installers/kubectl/client-setup.sh
+++ b/installers/kubectl/client-setup.sh
@@ -14,7 +14,7 @@
# This script should be run after the operator has been deployed
PGO_OPERATOR_NAMESPACE="${PGO_OPERATOR_NAMESPACE:-pgo}"
PGO_USER_ADMIN="${PGO_USER_ADMIN:-pgouser-admin}"
-PGO_CLIENT_VERSION="${PGO_CLIENT_VERSION:-v4.6.0-beta.1}"
+PGO_CLIENT_VERSION="${PGO_CLIENT_VERSION:-v4.6.0-beta.2}"
PGO_CLIENT_URL="https://github.com/CrunchyData/postgres-operator/releases/download/${PGO_CLIENT_VERSION}"
PGO_CMD="${PGO_CMD-kubectl}"
diff --git a/installers/kubectl/postgres-operator-ocp311.yml b/installers/kubectl/postgres-operator-ocp311.yml
index cb1b16c96d..0baa77037d 100644
--- a/installers/kubectl/postgres-operator-ocp311.yml
+++ b/installers/kubectl/postgres-operator-ocp311.yml
@@ -44,7 +44,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos8-13.1-4.6.0-beta.1"
+ ccp_image_tag: "centos8-13.1-4.6.0-beta.2"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -77,14 +77,14 @@ data:
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
- pgo_client_version: "4.6.0-beta.1"
+ pgo_client_version: "4.6.0-beta.2"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
- pgo_image_tag: "centos8-4.6.0-beta.1"
+ pgo_image_tag: "centos8-4.6.0-beta.2"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
@@ -161,7 +161,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.1
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.2
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/kubectl/postgres-operator.yml b/installers/kubectl/postgres-operator.yml
index 2a595b3b09..3b0764f057 100644
--- a/installers/kubectl/postgres-operator.yml
+++ b/installers/kubectl/postgres-operator.yml
@@ -139,7 +139,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos8-13.1-4.6.0-beta.1"
+ ccp_image_tag: "centos8-13.1-4.6.0-beta.2"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -172,14 +172,14 @@ data:
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
- pgo_client_version: "4.6.0-beta.1"
+ pgo_client_version: "4.6.0-beta.2"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
- pgo_image_tag: "centos8-4.6.0-beta.1"
+ pgo_image_tag: "centos8-4.6.0-beta.2"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
@@ -269,7 +269,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.1
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.2
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/metrics/ansible/README.md b/installers/metrics/ansible/README.md
index 1be5088c15..41f7d7690d 100644
--- a/installers/metrics/ansible/README.md
+++ b/installers/metrics/ansible/README.md
@@ -4,7 +4,7 @@
-Latest Release: 4.6.0-beta.1
+Latest Release: 4.6.0-beta.2
## General
diff --git a/installers/metrics/helm/Chart.yaml b/installers/metrics/helm/Chart.yaml
index a368325252..5a8debf064 100644
--- a/installers/metrics/helm/Chart.yaml
+++ b/installers/metrics/helm/Chart.yaml
@@ -3,6 +3,6 @@ name: postgres-operator-monitoring
description: Install for Crunchy PostgreSQL Operator Monitoring
type: application
version: 0.1.0
-appVersion: 4.6.0-beta.1
+appVersion: 4.6.0-beta.2
home: https://github.com/CrunchyData/postgres-operator
icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png
\ No newline at end of file
diff --git a/installers/metrics/helm/helm_template.yaml b/installers/metrics/helm/helm_template.yaml
index 9cef2423b4..3f89c0d2b5 100644
--- a/installers/metrics/helm/helm_template.yaml
+++ b/installers/metrics/helm/helm_template.yaml
@@ -20,5 +20,5 @@ serviceAccount:
# the image prefix and tag to use for the 'pgo-deployer' container
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos8-4.6.0-beta.1"
+pgo_image_tag: "centos8-4.6.0-beta.2"
diff --git a/installers/metrics/helm/values.yaml b/installers/metrics/helm/values.yaml
index 3baa0c0c04..b7f79ed8de 100644
--- a/installers/metrics/helm/values.yaml
+++ b/installers/metrics/helm/values.yaml
@@ -20,7 +20,7 @@ serviceAccount:
# the image prefix and tag to use for the 'pgo-deployer' container
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos8-4.6.0-beta.1"
+pgo_image_tag: "centos8-4.6.0-beta.2"
# =====================
# Configuration Options
diff --git a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml b/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
index 984fb0f6d1..b449881000 100644
--- a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
+++ b/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
@@ -96,7 +96,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-metrics-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.1
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.2
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/metrics/kubectl/postgres-operator-metrics.yml b/installers/metrics/kubectl/postgres-operator-metrics.yml
index d005d00200..dea5d62e58 100644
--- a/installers/metrics/kubectl/postgres-operator-metrics.yml
+++ b/installers/metrics/kubectl/postgres-operator-metrics.yml
@@ -165,7 +165,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-metrics-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.1
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.2
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/olm/Makefile b/installers/olm/Makefile
index a5f7116d43..d33f040de4 100644
--- a/installers/olm/Makefile
+++ b/installers/olm/Makefile
@@ -11,7 +11,7 @@ OLM_TOOLS ?= registry.localhost:5000/postgres-operator-olm-tools:$(OLM_SDK_VERSI
OLM_VERSION ?= 0.15.1
PGO_BASEOS ?= centos8
PGO_IMAGE_PREFIX ?= registry.developers.crunchydata.com/crunchydata
-PGO_VERSION ?= 4.6.0-beta.1
+PGO_VERSION ?= 4.6.0-beta.2
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
CCP_IMAGE_TAG ?= $(PGO_BASEOS)-$(CCP_PG_FULLVERSION)-$(PGO_VERSION)
CCP_POSTGIS_IMAGE_TAG ?= $(PGO_BASEOS)-$(CCP_PG_FULLVERSION)-$(CCP_POSTGIS_VERSION)-$(PGO_VERSION)
diff --git a/pkg/apis/crunchydata.com/v1/doc.go b/pkg/apis/crunchydata.com/v1/doc.go
index 3108c58dfa..4c92f8a983 100644
--- a/pkg/apis/crunchydata.com/v1/doc.go
+++ b/pkg/apis/crunchydata.com/v1/doc.go
@@ -53,7 +53,7 @@ cluster.
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.6.0-beta.1",
+ '{"ClientVersion":"4.6.0-beta.2",
"Namespace":"pgouser1",
"Name":"mycluster",
$PGO_APISERVER_URL/clusters
@@ -72,7 +72,7 @@ show all of the clusters that are in the given namespace.
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.6.0-beta.1",
+ '{"ClientVersion":"4.6.0-beta.2",
"Namespace":"pgouser1",
"Clustername":"mycluster"}' \
$PGO_APISERVER_URL/showclusters
@@ -82,7 +82,7 @@ $PGO_APISERVER_URL/showclusters
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.6.0-beta.1",
+ '{"ClientVersion":"4.6.0-beta.2",
"Namespace":"pgouser1",
"Clustername":"mycluster"}' \
$PGO_APISERVER_URL/clustersdelete
@@ -90,7 +90,7 @@ $PGO_APISERVER_URL/clustersdelete
Schemes: http, https
BasePath: /
- Version: 4.6.0-beta.1
+ Version: 4.6.0-beta.2
License: Apache 2.0 http://www.apache.org/licenses/LICENSE-2.0
Contact: Crunchy Data https://www.crunchydata.com/
diff --git a/pkg/apiservermsgs/common.go b/pkg/apiservermsgs/common.go
index db8319faaa..b2fabc8e0d 100644
--- a/pkg/apiservermsgs/common.go
+++ b/pkg/apiservermsgs/common.go
@@ -15,7 +15,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
-const PGO_VERSION = "4.6.0-beta.1"
+const PGO_VERSION = "4.6.0-beta.2"
// Ok status
const Ok = "ok"
diff --git a/redhat/atomic/help.1 b/redhat/atomic/help.1
index aa67fa0e9a..afb2703167 100644
--- a/redhat/atomic/help.1
+++ b/redhat/atomic/help.1
@@ -56,4 +56,4 @@ The Red Hat Enterprise Linux version from which the container was built. For exa
\fB\fCRelease=\fR
.PP
-The specific release number of the container. For example, Release="4.6.0-beta.1"
+The specific release number of the container. For example, Release="4.6.0-beta.2"
diff --git a/redhat/atomic/help.md b/redhat/atomic/help.md
index 4354a14dbf..a18c5a8293 100644
--- a/redhat/atomic/help.md
+++ b/redhat/atomic/help.md
@@ -45,4 +45,4 @@ The Red Hat Enterprise Linux version from which the container was built. For exa
`Release=`
-The specific release number of the container. For example, Release="4.6.0-beta.1"
+The specific release number of the container. For example, Release="4.6.0-beta.2"
From d2d8e8111d714a4fd373553cbb5d06225f381635 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Fri, 8 Jan 2021 14:53:05 -0500
Subject: [PATCH 128/276] Ensure ECDSA keygen explicitly specifies a named
curve
In the EL7 base images, OpenSSL does not appear to explicitly
state that the generated ECDSA key is generated using a named
curve, whereas Go explicitly expects this to be the case:
https://github.com/golang/go/issues/21502#issuecomment-323400475
The issue manifests itself as part of the "pgo-deployer" when the
API server key/certificate pair is created using
`kubectl create secret tls`, which does some level of introspection.
The fix is to explicitly specify using a named curve, which works
both in the EL7 and EL8 images.
Issue: [ch10097]
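The behavior is easy to reproduce outside the installer (a sketch assuming `openssl` is on the PATH; the `/tmp` paths are illustrative, not the installer's real locations). With `ec_param_enc:named_curve`, the private key records the curve by its OID name, which is what Go's `crypto/x509` parser expects:

```shell
# Generate a self-signed keypair the way the patched script does,
# explicitly asking for named-curve encoding of the EC parameters.
openssl req -new -x509 -nodes \
    -newkey ec \
    -pkeyopt ec_paramgen_curve:prime256v1 \
    -pkeyopt ec_param_enc:named_curve \
    -sha384 -days 1 -subj "/CN=*" \
    -keyout /tmp/server.key -out /tmp/server.crt 2>/dev/null

# The key should reference the curve by name (prime256v1),
# not by an explicit parameter blob.
openssl ec -in /tmp/server.key -noout -text | grep "ASN1 OID"
```

On EL8 (OpenSSL 1.1+) named-curve encoding is already the default, so the extra `-pkeyopt` is a no-op there; on EL7 it is what avoids the explicit-parameters encoding that Go rejects.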
---
deploy/gen-api-keys.sh | 1 +
installers/ansible/roles/pgo-operator/tasks/certs.yml | 1 +
2 files changed, 2 insertions(+)
diff --git a/deploy/gen-api-keys.sh b/deploy/gen-api-keys.sh
index 5fd1139ced..3d59b8aa99 100755
--- a/deploy/gen-api-keys.sh
+++ b/deploy/gen-api-keys.sh
@@ -19,6 +19,7 @@ openssl req \
-nodes \
-newkey ec \
-pkeyopt ec_paramgen_curve:prime256v1 \
+ -pkeyopt ec_param_enc:named_curve \
-sha384 \
-keyout $PGOROOT/conf/postgres-operator/server.key \
-out $PGOROOT/conf/postgres-operator/server.crt \
diff --git a/installers/ansible/roles/pgo-operator/tasks/certs.yml b/installers/ansible/roles/pgo-operator/tasks/certs.yml
index 07e3077eee..83f2697df0 100644
--- a/installers/ansible/roles/pgo-operator/tasks/certs.yml
+++ b/installers/ansible/roles/pgo-operator/tasks/certs.yml
@@ -12,6 +12,7 @@
-nodes \
-newkey ec \
-pkeyopt ec_paramgen_curve:prime256v1 \
+ -pkeyopt ec_param_enc:named_curve \
-sha384 \
-days 1825 \
-subj "/CN=*" \
From a7359f58f62f4ad8f216bc9f0054670e66d18d09 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Fri, 8 Jan 2021 17:00:25 -0500
Subject: [PATCH 129/276] Another pgMonitor version bump
I missed this one when bumping the version in bd046f68
---
installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml b/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml
index cb6f5b6b4a..b2e217360b 100644
--- a/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml
+++ b/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml
@@ -9,7 +9,7 @@ delete_metrics_namespace: "false"
metrics_namespace: "pgo"
metrics_image_pull_secret: ""
metrics_image_pull_secret_manifest: ""
-pgmonitor_version: "v4.4-RC7"
+pgmonitor_version: "v4.4"
alertmanager_configmap: "alertmanager-config"
alertmanager_rules_configmap: "alertmanager-rules-config"
From 78f22da4d3db614c57323f41a68edd5684a4ce13 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 10 Jan 2021 09:38:40 -0500
Subject: [PATCH 130/276] Move common job into general utility area
This moves the "rmdata" Job creator into its own area. This
is to prepare for future work where the Job creator can be called
from a different package.
---
cmd/pgo-rmdata/README.txt | 6 ---
.../apiserver/clusterservice/clusterimpl.go | 3 +-
.../apiserver/clusterservice/scaleimpl.go | 4 +-
internal/apiserver/common.go | 40 -------------------
internal/util/cluster.go | 39 ++++++++++++++++++
5 files changed, 42 insertions(+), 50 deletions(-)
delete mode 100644 cmd/pgo-rmdata/README.txt
diff --git a/cmd/pgo-rmdata/README.txt b/cmd/pgo-rmdata/README.txt
deleted file mode 100644
index 3361973ff1..0000000000
--- a/cmd/pgo-rmdata/README.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-
-you can test this program outside of a container like so:
-
-cd $PGOROOT
-
-go run ./pgo-rmdata/pgo-rmdata.go -pg-cluster=mycluster -replica-name= -namespace=mynamespace -remove-data=true -remove-backup=true -is-replica=false -is-backup=false
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 12b8604391..c68ab5d406 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -109,8 +109,7 @@ func DeleteCluster(name, selector string, deleteData, deleteBackups bool, ns, pg
return response
}
- err := apiserver.CreateRMDataTask(cluster.Spec.Name, replicaName, taskName, deleteBackups, deleteData, isReplica, isBackup, ns, clusterPGHAScope)
- if err != nil {
+ if err := util.CreateRMDataTask(apiserver.Clientset, cluster.Spec.Name, replicaName, taskName, deleteBackups, deleteData, isReplica, isBackup, ns, clusterPGHAScope); err != nil {
log.Debugf("error on creating rmdata task %s", err.Error())
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
diff --git a/internal/apiserver/clusterservice/scaleimpl.go b/internal/apiserver/clusterservice/scaleimpl.go
index f8ee7c57d8..c3426b2f34 100644
--- a/internal/apiserver/clusterservice/scaleimpl.go
+++ b/internal/apiserver/clusterservice/scaleimpl.go
@@ -294,8 +294,8 @@ func ScaleDown(deleteData bool, clusterName, replicaName, ns string) msgs.ScaleD
isReplica := true
isBackup := false
taskName := replicaName + "-rmdata"
- err = apiserver.CreateRMDataTask(clusterName, replicaName, taskName, deleteBackups, deleteData, isReplica, isBackup, ns, clusterPGHAScope)
- if err != nil {
+
+ if err := util.CreateRMDataTask(apiserver.Clientset, clusterName, replicaName, taskName, deleteBackups, deleteData, isReplica, isBackup, ns, clusterPGHAScope); err != nil {
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
return response
diff --git a/internal/apiserver/common.go b/internal/apiserver/common.go
index 08bc42f6ac..50e0db963a 100644
--- a/internal/apiserver/common.go
+++ b/internal/apiserver/common.go
@@ -19,10 +19,8 @@ import (
"context"
"errors"
"fmt"
- "strconv"
"strings"
- "github.com/crunchydata/postgres-operator/internal/config"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
log "github.com/sirupsen/logrus"
@@ -56,44 +54,6 @@ var (
"Operator installation")
)
-func CreateRMDataTask(clusterName, replicaName, taskName string, deleteBackups, deleteData, isReplica, isBackup bool, ns, clusterPGHAScope string) error {
- ctx := context.TODO()
- var err error
-
- // create pgtask CRD
- spec := crv1.PgtaskSpec{}
- spec.Namespace = ns
- spec.Name = taskName
- spec.TaskType = crv1.PgtaskDeleteData
-
- spec.Parameters = make(map[string]string)
- spec.Parameters[config.LABEL_DELETE_DATA] = strconv.FormatBool(deleteData)
- spec.Parameters[config.LABEL_DELETE_BACKUPS] = strconv.FormatBool(deleteBackups)
- spec.Parameters[config.LABEL_IS_REPLICA] = strconv.FormatBool(isReplica)
- spec.Parameters[config.LABEL_IS_BACKUP] = strconv.FormatBool(isBackup)
- spec.Parameters[config.LABEL_PG_CLUSTER] = clusterName
- spec.Parameters[config.LABEL_REPLICA_NAME] = replicaName
- spec.Parameters[config.LABEL_PGHA_SCOPE] = clusterPGHAScope
-
- newInstance := &crv1.Pgtask{
- ObjectMeta: metav1.ObjectMeta{
- Name: taskName,
- },
- Spec: spec,
- }
- newInstance.ObjectMeta.Labels = make(map[string]string)
- newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] = clusterName
- newInstance.ObjectMeta.Labels[config.LABEL_RMDATA] = "true"
-
- _, err = Clientset.CrunchydataV1().Pgtasks(ns).Create(ctx, newInstance, metav1.CreateOptions{})
- if err != nil {
- log.Error(err)
- return err
- }
-
- return err
-}
-
// IsValidPVC determines if a PVC with the name provided exists
func IsValidPVC(pvcName, ns string) bool {
ctx := context.TODO()
diff --git a/internal/util/cluster.go b/internal/util/cluster.go
index c9d59d9bf6..c3b22fbfaf 100644
--- a/internal/util/cluster.go
+++ b/internal/util/cluster.go
@@ -231,6 +231,45 @@ func CreateBackrestRepoSecrets(clientset kubernetes.Interface,
return err
}
+// CreateRMDataTask is a legacy method that was moved into this file. This
+// spawns the "pgo-rmdata" task which cleans up assets related to removing an
+// individual instance or a cluster. I cleaned up the code slightly.
+func CreateRMDataTask(clientset kubeapi.Interface, clusterName, replicaName, taskName string, deleteBackups, deleteData, isReplica, isBackup bool, ns, clusterPGHAScope string) error {
+ ctx := context.TODO()
+
+ // create pgtask CRD
+ task := &crv1.Pgtask{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: taskName,
+ Labels: map[string]string{
+ config.LABEL_PG_CLUSTER: clusterName,
+ config.LABEL_RMDATA: "true",
+ },
+ },
+ Spec: crv1.PgtaskSpec{
+ Name: taskName,
+ Namespace: ns,
+ Parameters: map[string]string{
+ config.LABEL_DELETE_DATA: strconv.FormatBool(deleteData),
+ config.LABEL_DELETE_BACKUPS: strconv.FormatBool(deleteBackups),
+ config.LABEL_IS_REPLICA: strconv.FormatBool(isReplica),
+ config.LABEL_IS_BACKUP: strconv.FormatBool(isBackup),
+ config.LABEL_PG_CLUSTER: clusterName,
+ config.LABEL_REPLICA_NAME: replicaName,
+ config.LABEL_PGHA_SCOPE: clusterPGHAScope,
+ },
+ TaskType: crv1.PgtaskDeleteData,
+ },
+ }
+
+ if _, err := clientset.CrunchydataV1().Pgtasks(ns).Create(ctx, task, metav1.CreateOptions{}); err != nil {
+ log.Error(err)
+ return err
+ }
+
+ return nil
+}
+
// GenerateNodeAffinity creates a Kubernetes node affinity object suitable for
// storage on our custom resource. For now, it only supports preferred affinity,
// though can be expanded to support more complex rules
From da14a107f3aa8ed76ba6e70becbcd3037316996d Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 10 Jan 2021 10:55:01 -0500
Subject: [PATCH 131/276] Deleting a pgcluster resource deletes a PostgreSQL
cluster
This behavior, which had been removed circa 4.1, is now reintroduced
to use the present machinery when a PostgreSQL cluster is being
deleted. Deleting a pgcluster resource directly will now delete a
PostgreSQL cluster and all of its associated objects.
To retain the PVCs for the PostgreSQL data directory or the backups
repository, one can set the following Annotations on a pgclusters
custom resource:
- `keep-backups`: indicates to keep the pgBackRest PVC when deleting
the cluster.
- `keep-data`: indicates to keep the PostgreSQL data PVC when deleting
the cluster.
Issue: [ch9435]
Issue: #1203
---
cmd/pgo-rmdata/process.go | 20 ++++---
docs/content/custom-resources/_index.md | 9 +++
.../apiserver/clusterservice/clusterimpl.go | 13 ++---
.../apiserver/clusterservice/scaleimpl.go | 9 +--
internal/config/annotations.go | 6 ++
.../pgcluster/pgclustercontroller.go | 58 +++++++++++++++++--
internal/operator/cluster/cluster.go | 48 ---------------
internal/operator/task/rmdata.go | 38 +++++-------
internal/util/cluster.go | 17 ++++--
9 files changed, 113 insertions(+), 105 deletions(-)
diff --git a/cmd/pgo-rmdata/process.go b/cmd/pgo-rmdata/process.go
index f8923a8a12..60b123b84d 100644
--- a/cmd/pgo-rmdata/process.go
+++ b/cmd/pgo-rmdata/process.go
@@ -101,7 +101,18 @@ func Delete(request Request) {
log.Info("rmdata.Process cluster use case")
- // first, clear out any of the scheduled jobs that may occur, as this would be
+ // attempt to delete the pgcluster object if it has not already been deleted.
+ // quite possibly, we are here because someone deleted the pgcluster object
+ // already, so this step is optional
+ if _, err := request.Clientset.CrunchydataV1().Pgclusters(request.Namespace).Get(
+ ctx, request.ClusterName, metav1.GetOptions{}); err == nil {
+ if err := request.Clientset.CrunchydataV1().Pgclusters(request.Namespace).Delete(
+ ctx, request.ClusterName, metav1.DeleteOptions{}); err != nil {
+ log.Error(err)
+ }
+ }
+
+ // clear out any of the scheduled jobs that may occur, as this would be
// executing asynchronously against any stale data
removeSchedules(request)
@@ -111,13 +122,8 @@ func Delete(request Request) {
removeUserSecrets(request)
}
- // handle the case of 'pgo delete cluster mycluster'
+ // remove the cluster Deployments
removeCluster(request)
- if err := request.Clientset.
- CrunchydataV1().Pgclusters(request.Namespace).
- Delete(ctx, request.ClusterName, metav1.DeleteOptions{}); err != nil {
- log.Error(err)
- }
removeServices(request)
removeAddons(request)
removePgreplicas(request)
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index 62ddc459ba..43341006df 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -679,6 +679,15 @@ spec:
Save your edits, and in a short period of time, you should see these annotations
applied to the managed Deployments.
+### Delete a PostgreSQL Cluster
+
+A PostgreSQL cluster can be deleted by simply deleting the `pgclusters.crunchydata.com` resource.
+
+It is possible to keep both the PostgreSQL data directory as well as the pgBackRest backup repository when using this method by setting the following annotations on the `pgclusters.crunchydata.com` custom resource:
+
+- `keep-backups`: indicates to keep the pgBackRest PVC when deleting the cluster.
+- `keep-data`: indicates to keep the PostgreSQL data PVC when deleting the cluster.
+
## PostgreSQL Operator Custom Resource Definitions
There are several PostgreSQL Operator Custom Resource Definitions (CRDs) that
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index c68ab5d406..bb42a5dd40 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -83,7 +83,8 @@ func DeleteCluster(name, selector string, deleteData, deleteBackups bool, ns, pg
return response
}
- for _, cluster := range clusterList.Items {
+ for i := range clusterList.Items {
+ cluster := &clusterList.Items[i]
// check if the current cluster is not upgraded to the deployed
// Operator version. If not, do not allow the command to complete
@@ -94,22 +95,16 @@ func DeleteCluster(name, selector string, deleteData, deleteBackups bool, ns, pg
}
log.Debugf("deleting cluster %s", cluster.Spec.Name)
- taskName := cluster.Spec.Name + "-rmdata"
- log.Debugf("creating taskName %s", taskName)
- isBackup := false
- isReplica := false
- replicaName := ""
- clusterPGHAScope := cluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE]
// first delete any existing rmdata pgtask with the same name
- err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(ctx, taskName, metav1.DeleteOptions{})
+ err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(ctx, cluster.Name+"-rmdata", metav1.DeleteOptions{})
if err != nil && !kerrors.IsNotFound(err) {
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
return response
}
- if err := util.CreateRMDataTask(apiserver.Clientset, cluster.Spec.Name, replicaName, taskName, deleteBackups, deleteData, isReplica, isBackup, ns, clusterPGHAScope); err != nil {
+ if err := util.CreateRMDataTask(apiserver.Clientset, cluster, "", deleteBackups, deleteData, false, false); err != nil {
log.Debugf("error on creating rmdata task %s", err.Error())
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
diff --git a/internal/apiserver/clusterservice/scaleimpl.go b/internal/apiserver/clusterservice/scaleimpl.go
index c3426b2f34..dad920a1a3 100644
--- a/internal/apiserver/clusterservice/scaleimpl.go
+++ b/internal/apiserver/clusterservice/scaleimpl.go
@@ -288,14 +288,7 @@ func ScaleDown(deleteData bool, clusterName, replicaName, ns string) msgs.ScaleD
}
// create the rmdata task which does the cleanup
-
- clusterPGHAScope := cluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE]
- deleteBackups := false
- isReplica := true
- isBackup := false
- taskName := replicaName + "-rmdata"
-
- if err := util.CreateRMDataTask(apiserver.Clientset, clusterName, replicaName, taskName, deleteBackups, deleteData, isReplica, isBackup, ns, clusterPGHAScope); err != nil {
+ if err := util.CreateRMDataTask(apiserver.Clientset, cluster, replicaName, false, deleteData, true, false); err != nil {
response.Status.Code = msgs.Error
response.Status.Msg = err.Error()
return response
diff --git a/internal/config/annotations.go b/internal/config/annotations.go
index 3f42e4e4db..bde7a345f8 100644
--- a/internal/config/annotations.go
+++ b/internal/config/annotations.go
@@ -21,6 +21,12 @@ const (
ANNOTATION_BACKREST_RESTORE = "pgo-backrest-restore"
ANNOTATION_PGHA_BOOTSTRAP_REPLICA = "pgo-pgha-bootstrap-replica"
ANNOTATION_PRIMARY_DEPLOYMENT = "primary-deployment"
+ // ANNOTATION_CLUSTER_KEEP_BACKUPS indicates that if a custom resource is
+ // deleted, ensure the backups are kept
+ ANNOTATION_CLUSTER_KEEP_BACKUPS = "keep-backups"
+ // ANNOTATION_CLUSTER_KEEP_DATA indicates that if a custom resource is
+ // deleted, ensure the data directory is kept
+ ANNOTATION_CLUSTER_KEEP_DATA = "keep-data"
// annotation to track the cluster's current primary
ANNOTATION_CURRENT_PRIMARY = "current-primary"
// annotation to indicate whether a cluster has been upgraded
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index bd7556ec3a..adecbff0e5 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -348,11 +348,61 @@ func (c *Controller) onUpdate(oldObj, newObj interface{}) {
// onDelete is called when a pgcluster is deleted
func (c *Controller) onDelete(obj interface{}) {
- // cluster := obj.(*crv1.Pgcluster)
- // log.Debugf("[Controller] ns=%s onDelete %s", cluster.ObjectMeta.Namespace, cluster.ObjectMeta.SelfLink)
+ ctx := context.TODO()
+ cluster := obj.(*crv1.Pgcluster)
+
+ log.Debugf("pgcluster onDelete for cluster %s (namespace %s)", cluster.Name, cluster.Namespace)
+
+ // a quick guard: see if the "rmdata Job" is running.
+ options := metav1.ListOptions{
+ LabelSelector: fields.AndSelectors(
+ fields.OneTermEqualSelector(config.LABEL_PG_CLUSTER, cluster.Name),
+ fields.OneTermEqualSelector(config.LABEL_RMDATA, config.LABEL_TRUE),
+ ).String(),
+ }
+
+ jobs, err := c.Client.BatchV1().Jobs(cluster.Namespace).List(ctx, options)
- // handle pgcluster cleanup
- // clusteroperator.DeleteClusterBase(c.PgclusterClientset, c.PgclusterClient, cluster, cluster.ObjectMeta.Namespace)
+ if err != nil {
+ log.Error(err)
+ return
+ }
+
+ // iterate through the list of Jobs and see if any are currently active or
+ // succeeded.
+ // a succeeded Job could be a remnant of an old Job for a cluster of the
+ // same name, in which case we can continue with deleting the cluster
+ for _, job := range jobs.Items {
+ // we will return for one of two reasons:
+ // 1. if the Job is currently active
+ // 2. if the Job is not active but never has completed and is below the
+ // backoff limit -- this could be evidence that the Job is retrying
+ if job.Status.Active > 0 {
+ return
+ } else if job.Status.Succeeded < 1 && job.Status.Failed < *job.Spec.BackoffLimit {
+ return
+ }
+ }
+
+ // we need to create a special pgtask that will create the Job (I know). So
+ // let's attempt to do that here. First, clear out any other pgtask with this
+ // existing name. If it errors because it's not found, we're OK
+ taskName := cluster.Name + "-rmdata"
+ if err := c.Client.CrunchydataV1().Pgtasks(cluster.Namespace).Delete(
+ ctx, taskName, metav1.DeleteOptions{}); err != nil && !kerrors.IsNotFound(err) {
+ log.Error(err)
+ return
+ }
+
+ // determine if the data directory or backups should be kept
+ _, keepBackups := cluster.ObjectMeta.GetAnnotations()[config.ANNOTATION_CLUSTER_KEEP_BACKUPS]
+ _, keepData := cluster.ObjectMeta.GetAnnotations()[config.ANNOTATION_CLUSTER_KEEP_DATA]
+
+ // create the deletion job. this will delete any data and backups for this
+ // cluster
+ if err := util.CreateRMDataTask(c.Client, cluster, "", !keepBackups, !keepData, false, false); err != nil {
+ log.Error(err)
+ }
}
// AddPGClusterEventHandler adds the pgcluster event handler to the pgcluster informer
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 05455daaf7..1a96e004a1 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -339,37 +339,6 @@ func AddBootstrapRepo(clientset kubernetes.Interface, cluster *crv1.Pgcluster) (
return
}
-// DeleteClusterBase ...
-func DeleteClusterBase(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace string) {
- _ = DeleteCluster(clientset, cl, namespace)
-
- // delete any existing configmaps
- if err := deleteConfigMaps(clientset, cl.Spec.Name, namespace); err != nil {
- log.Error(err)
- }
-
- // delete any existing pgtasks ???
-
- // publish delete cluster event
- topics := make([]string, 1)
- topics[0] = events.EventTopicCluster
-
- f := events.EventDeleteClusterFormat{
- EventHeader: events.EventHeader{
- Namespace: namespace,
- Username: cl.ObjectMeta.Labels[config.LABEL_PGOUSER],
- Topic: topics,
- Timestamp: time.Now(),
- EventType: events.EventDeleteCluster,
- },
- Clustername: cl.Spec.Name,
- }
-
- if err := events.Publish(f); err != nil {
- log.Error(err)
- }
-}
-
// ScaleBase ...
func ScaleBase(clientset kubeapi.Interface, replica *crv1.Pgreplica, namespace string) {
ctx := context.TODO()
@@ -759,23 +728,6 @@ func createMissingUserSecrets(clientset kubernetes.Interface, cluster *crv1.Pgcl
return createMissingUserSecret(clientset, cluster, cluster.Spec.User)
}
-func deleteConfigMaps(clientset kubernetes.Interface, clusterName, ns string) error {
- ctx := context.TODO()
- label := fmt.Sprintf("pg-cluster=%s", clusterName)
- list, err := clientset.CoreV1().ConfigMaps(ns).List(ctx, metav1.ListOptions{LabelSelector: label})
- if err != nil {
- return fmt.Errorf("No configMaps found for selector: %s", label)
- }
-
- for _, configmap := range list.Items {
- err := clientset.CoreV1().ConfigMaps(ns).Delete(ctx, configmap.Name, metav1.DeleteOptions{})
- if err != nil {
- return err
- }
- }
- return nil
-}
-
func publishClusterCreateFailure(cl *crv1.Pgcluster, errorMsg string) {
pgouser := cl.ObjectMeta.Labels[config.LABEL_PGOUSER]
topics := make([]string, 1)
diff --git a/internal/operator/task/rmdata.go b/internal/operator/task/rmdata.go
index d65141b207..7f855ecf2e 100644
--- a/internal/operator/task/rmdata.go
+++ b/internal/operator/task/rmdata.go
@@ -83,13 +83,6 @@ func RemoveData(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask
return
}
- // if the clustername is not empty, get the pgcluster
- cluster, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(ctx, clusterName, metav1.GetOptions{})
- if err != nil {
- log.Error(err)
- return
- }
-
jobName := clusterName + "-rmdata-" + util.RandStringBytesRmndr(4)
jobFields := rmdatajobTemplateFields{
@@ -102,40 +95,39 @@ func RemoveData(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask
RemoveBackup: removeBackup,
IsReplica: isReplica,
IsBackup: isBackup,
- PGOImagePrefix: util.GetValueOrDefault(cluster.Spec.PGOImagePrefix, operator.Pgo.Pgo.PGOImagePrefix),
+ PGOImagePrefix: util.GetValueOrDefault(task.Spec.Parameters[config.LABEL_IMAGE_PREFIX], operator.Pgo.Pgo.PGOImagePrefix),
PGOImageTag: operator.Pgo.Pgo.PGOImageTag,
SecurityContext: operator.GetPodSecurityContext(task.Spec.StorageSpec.GetSupplementalGroups()),
}
- log.Debugf("creating rmdata job %s for cluster %s ", jobName, task.Spec.Name)
- var doc2 bytes.Buffer
- err = config.RmdatajobTemplate.Execute(&doc2, jobFields)
- if err != nil {
- log.Error(err.Error())
- return
- }
+ log.Debugf("creating rmdata job %s for cluster %s ", jobName, task.Spec.Name)
if operator.CRUNCHY_DEBUG {
_ = config.RmdatajobTemplate.Execute(os.Stdout, jobFields)
}
- newjob := v1batch.Job{}
- err = json.Unmarshal(doc2.Bytes(), &newjob)
- if err != nil {
+ doc := bytes.Buffer{}
+ if err := config.RmdatajobTemplate.Execute(&doc, jobFields); err != nil {
+ log.Error(err)
+ return
+ }
+
+ job := v1batch.Job{}
+ if err := json.Unmarshal(doc.Bytes(), &job); err != nil {
log.Error("error unmarshalling json into Job " + err.Error())
return
}
// set the container image to an override value, if one exists
operator.SetContainerImageOverride(config.CONTAINER_IMAGE_PGO_RMDATA,
- &newjob.Spec.Template.Spec.Containers[0])
+ &job.Spec.Template.Spec.Containers[0])
- j, err := clientset.BatchV1().Jobs(namespace).Create(ctx, &newjob, metav1.CreateOptions{})
- if err != nil {
- log.Errorf("got error when creating rmdata job %s", newjob.Name)
+ if _, err := clientset.BatchV1().Jobs(namespace).Create(ctx, &job, metav1.CreateOptions{}); err != nil {
+ log.Error(err)
return
}
- log.Debugf("successfully created rmdata job %s", j.Name)
+
+ log.Debugf("successfully created rmdata job %s", job.Name)
publishDeleteCluster(task.Spec.Parameters[config.LABEL_PG_CLUSTER],
task.ObjectMeta.Labels[config.LABEL_PGOUSER], namespace)
diff --git a/internal/util/cluster.go b/internal/util/cluster.go
index c3b22fbfaf..364ce77993 100644
--- a/internal/util/cluster.go
+++ b/internal/util/cluster.go
@@ -234,35 +234,40 @@ func CreateBackrestRepoSecrets(clientset kubernetes.Interface,
// CreateRMDataTask is a legacy method that was moved into this file. This
// spawns the "pgo-rmdata" task which cleans up assets related to removing an
// individual instance or a cluster. I cleaned up the code slightly.
-func CreateRMDataTask(clientset kubeapi.Interface, clusterName, replicaName, taskName string, deleteBackups, deleteData, isReplica, isBackup bool, ns, clusterPGHAScope string) error {
+func CreateRMDataTask(clientset kubeapi.Interface, cluster *crv1.Pgcluster, replicaName string, deleteBackups, deleteData, isReplica, isBackup bool) error {
ctx := context.TODO()
+ taskName := cluster.Name + "-rmdata"
+ if replicaName != "" {
+ taskName = replicaName + "-rmdata"
+ }
// create pgtask CRD
task := &crv1.Pgtask{
ObjectMeta: metav1.ObjectMeta{
Name: taskName,
Labels: map[string]string{
- config.LABEL_PG_CLUSTER: clusterName,
+ config.LABEL_PG_CLUSTER: cluster.Name,
config.LABEL_RMDATA: "true",
},
},
Spec: crv1.PgtaskSpec{
Name: taskName,
- Namespace: ns,
+ Namespace: cluster.Namespace,
Parameters: map[string]string{
config.LABEL_DELETE_DATA: strconv.FormatBool(deleteData),
config.LABEL_DELETE_BACKUPS: strconv.FormatBool(deleteBackups),
+ config.LABEL_IMAGE_PREFIX: cluster.Spec.PGOImagePrefix,
config.LABEL_IS_REPLICA: strconv.FormatBool(isReplica),
config.LABEL_IS_BACKUP: strconv.FormatBool(isBackup),
- config.LABEL_PG_CLUSTER: clusterName,
+ config.LABEL_PG_CLUSTER: cluster.Name,
config.LABEL_REPLICA_NAME: replicaName,
- config.LABEL_PGHA_SCOPE: clusterPGHAScope,
+ config.LABEL_PGHA_SCOPE: cluster.ObjectMeta.GetLabels()[config.LABEL_PGHA_SCOPE],
},
TaskType: crv1.PgtaskDeleteData,
},
}
- if _, err := clientset.CrunchydataV1().Pgtasks(ns).Create(ctx, task, metav1.CreateOptions{}); err != nil {
+ if _, err := clientset.CrunchydataV1().Pgtasks(cluster.Namespace).Create(ctx, task, metav1.CreateOptions{}); err != nil {
log.Error(err)
return err
}
From 2e0e21b17334f8c4eb66a82f37303d4b41c44092 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 10 Jan 2021 14:51:14 -0500
Subject: [PATCH 132/276] Remove cluster identifier label
This is not referenced anywhere.
---
internal/apiserver/backrestservice/backrestimpl.go | 5 +----
internal/apiserver/clusterservice/scaleimpl.go | 1 -
internal/config/labels.go | 11 +++++------
internal/controller/pgcluster/pgclustercontroller.go | 12 ------------
internal/controller/pgtask/backresthandler.go | 3 +--
internal/operator/backrest/backup.go | 2 --
internal/operator/backrest/restore.go | 2 +-
internal/operator/cluster/clusterlogic.go | 1 -
8 files changed, 8 insertions(+), 29 deletions(-)
diff --git a/internal/apiserver/backrestservice/backrestimpl.go b/internal/apiserver/backrestservice/backrestimpl.go
index ec1e9b2f5b..b4b14830f9 100644
--- a/internal/apiserver/backrestservice/backrestimpl.go
+++ b/internal/apiserver/backrestservice/backrestimpl.go
@@ -217,7 +217,6 @@ func CreateBackup(request *msgs.CreateBackrestBackupRequest, ns, pgouser string)
_, err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Create(ctx,
getBackupParams(
- cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER],
clusterName, taskName, crv1.PgtaskBackrestBackup, podname, "database",
util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, apiserver.Pgo.Cluster.CCPImagePrefix),
request.BackupOpts, request.BackrestStorageType, operator.GetS3VerifyTLSSetting(cluster), jobName, ns, pgouser),
@@ -283,7 +282,7 @@ func DeleteBackup(request msgs.DeleteBackrestBackupRequest) msgs.DeleteBackrestB
return response
}
-func getBackupParams(identifier, clusterName, taskName, action, podName, containerName, imagePrefix, backupOpts, backrestStorageType, s3VerifyTLS, jobName, ns, pgouser string) *crv1.Pgtask {
+func getBackupParams(clusterName, taskName, action, podName, containerName, imagePrefix, backupOpts, backrestStorageType, s3VerifyTLS, jobName, ns, pgouser string) *crv1.Pgtask {
var newInstance *crv1.Pgtask
spec := crv1.PgtaskSpec{}
spec.Name = taskName
@@ -311,7 +310,6 @@ func getBackupParams(identifier, clusterName, taskName, action, podName, contain
}
newInstance.ObjectMeta.Labels = make(map[string]string)
newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] = clusterName
- newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = identifier
newInstance.ObjectMeta.Labels[config.LABEL_PGOUSER] = pgouser
return newInstance
}
@@ -569,7 +567,6 @@ func Restore(request *msgs.RestoreRequest, ns, pgouser string) msgs.RestoreRespo
return resp
}
- pgtask.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER]
pgtask.ObjectMeta.Labels[config.LABEL_PGOUSER] = pgouser
pgtask.Spec.Parameters[crv1.PgtaskWorkflowID] = id
diff --git a/internal/apiserver/clusterservice/scaleimpl.go b/internal/apiserver/clusterservice/scaleimpl.go
index dad920a1a3..6115d538f3 100644
--- a/internal/apiserver/clusterservice/scaleimpl.go
+++ b/internal/apiserver/clusterservice/scaleimpl.go
@@ -119,7 +119,6 @@ func ScaleCluster(request msgs.ClusterScaleRequest, pgouser string) msgs.Cluster
spec.Tolerations = request.Tolerations
labels[config.LABEL_PGOUSER] = pgouser
- labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER]
for i := 0; i < request.ReplicaCount; i++ {
uniqueName := util.RandStringBytesRmndr(4)
diff --git a/internal/config/labels.go b/internal/config/labels.go
index 327eb74183..d07fdd3219 100644
--- a/internal/config/labels.go
+++ b/internal/config/labels.go
@@ -17,12 +17,11 @@ package config
// resource labels used by the operator
const (
- LABEL_NAME = "name"
- LABEL_SELECTOR = "selector"
- LABEL_OPERATOR = "postgres-operator"
- LABEL_PG_CLUSTER = "pg-cluster"
- LABEL_PG_CLUSTER_IDENTIFIER = "pg-cluster-id"
- LABEL_PG_DATABASE = "pgo-pg-database"
+ LABEL_NAME = "name"
+ LABEL_SELECTOR = "selector"
+ LABEL_OPERATOR = "postgres-operator"
+ LABEL_PG_CLUSTER = "pg-cluster"
+ LABEL_PG_DATABASE = "pgo-pg-database"
)
const LABEL_PGTASK = "pg-task"
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index adecbff0e5..576a5d0af9 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -18,7 +18,6 @@ limitations under the License.
import (
"context"
"encoding/json"
- "io/ioutil"
"reflect"
"strings"
@@ -118,8 +117,6 @@ func (c *Controller) processNextItem() bool {
return true
}
- addIdentifier(cluster)
-
// If bootstrapping from an existing data source then attempt to create the pgBackRest repository.
// If a repo already exists (e.g. because it is associated with a currently running cluster) then
// proceed with bootstrapping.
@@ -416,15 +413,6 @@ func (c *Controller) AddPGClusterEventHandler() {
log.Debugf("pgcluster Controller: added event handler to informer")
}
-func addIdentifier(clusterCopy *crv1.Pgcluster) {
- u, err := ioutil.ReadFile("/proc/sys/kernel/random/uuid")
- if err != nil {
- log.Error(err)
- }
-
- clusterCopy.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = string(u[:len(u)-1])
-}
-
// updateAnnotations updates any custom annotations that may be on the managed
// deployments, which includes:
//
diff --git a/internal/controller/pgtask/backresthandler.go b/internal/controller/pgtask/backresthandler.go
index f1aff229df..6cbcbff8f8 100644
--- a/internal/controller/pgtask/backresthandler.go
+++ b/internal/controller/pgtask/backresthandler.go
@@ -56,8 +56,7 @@ func (c *Controller) handleBackrestRestore(task *crv1.Pgtask) {
}
log.Debugf("pgtask Controller: added restore job for cluster %s", clusterName)
- backrestoperator.PublishRestore(cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER],
- clusterName, task.ObjectMeta.Labels[config.LABEL_PGOUSER], namespace)
+ backrestoperator.PublishRestore(clusterName, task.ObjectMeta.Labels[config.LABEL_PGOUSER], namespace)
err = backrestoperator.UpdateWorkflow(c.Client, task.Spec.Parameters[crv1.PgtaskWorkflowID],
namespace, crv1.PgtaskWorkflowBackrestRestoreJobCreatedStatus)
diff --git a/internal/operator/backrest/backup.go b/internal/operator/backrest/backup.go
index efdd97f7e9..4d129b60ac 100644
--- a/internal/operator/backrest/backup.go
+++ b/internal/operator/backrest/backup.go
@@ -148,7 +148,6 @@ func Backrest(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask)
&newjob.Spec.Template.Spec.Containers[0])
newjob.ObjectMeta.Labels[config.LABEL_PGOUSER] = task.ObjectMeta.Labels[config.LABEL_PGOUSER]
- newjob.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = task.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER]
backupType := task.Spec.Parameters[config.LABEL_PGHA_BACKUP_TYPE]
if backupType != "" {
@@ -242,7 +241,6 @@ func CreateBackup(clientset pgo.Interface, namespace, clusterName, podName strin
}
newInstance.ObjectMeta.Labels = make(map[string]string)
newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] = cluster.Name
- newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER]
newInstance.ObjectMeta.Labels[config.LABEL_PGOUSER] = cluster.ObjectMeta.Labels[config.LABEL_PGOUSER]
_, err = clientset.CrunchydataV1().Pgtasks(cluster.Namespace).Create(ctx, newInstance, metav1.CreateOptions{})
diff --git a/internal/operator/backrest/restore.go b/internal/operator/backrest/restore.go
index 4ff7797cda..92fdd9fd04 100644
--- a/internal/operator/backrest/restore.go
+++ b/internal/operator/backrest/restore.go
@@ -317,7 +317,7 @@ func UpdateWorkflow(clientset pgo.Interface, workflowID, namespace, status strin
}
// PublishRestore is responsible for publishing the 'RestoreCluster' event for a restore
-func PublishRestore(id, clusterName, username, namespace string) {
+func PublishRestore(clusterName, username, namespace string) {
topics := make([]string, 1)
topics[0] = events.EventTopicCluster
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index 528ff775dc..90726ec172 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -275,7 +275,6 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
}
cl.Spec.UserLabels[config.LABEL_PGOUSER] = cl.ObjectMeta.Labels[config.LABEL_PGOUSER]
- cl.Spec.UserLabels[config.LABEL_PG_CLUSTER_IDENTIFIER] = cl.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER]
// Set the Patroni scope to the name of the primary deployment. Replicas will get scope using the
// 'crunchy-pgha-scope' label on the pgcluster
From e89cd9dbc7d953d8ac0f3f13a6bdceba17727d24 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 11 Jan 2021 11:03:55 -0500
Subject: [PATCH 133/276] Tighten up ephemeral storage limits
This removes an ephemeral storage volume that is no longer used
and places a 64Mi cap on the pgBadger ephemeral storage, which
is not mounted unless there is a pgBadger instance directory.
This does not add a size limit to shared memory (dshm), as that
is a much more complicated topic.
Issue: #2188
---
.../files/pgo-configs/cluster-deployment.json | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
index 0e3f2ef6cc..e9f0367b8e 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json
@@ -176,9 +176,6 @@
}, {
"mountPath": "/pgconf",
"name": "pgconf-volume"
- }, {
- "mountPath": "/recover",
- "name": "recover-volume"
},
{
"mountPath": "/dev/shm",
@@ -282,11 +279,11 @@
{{ end }}
{{ end }}
{
- "name": "recover-volume",
- "emptyDir": { "medium": "Memory" }
- }, {
"name": "report",
- "emptyDir": { "medium": "Memory" }
+ "emptyDir": {
+ "medium": "Memory",
+ "sizeLimit": "64Mi"
+ }
},
{
"name": "dshm",
From 6a90c80451aadf085f0f703e85d3bbe7bc81d7fa Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 11 Jan 2021 11:42:23 -0500
Subject: [PATCH 134/276] Update Helm example
This reduces the number of requirements that are needed to deploy
a PostgreSQL cluster with this Helm chart. It does add additional
attributes to help further customize the PostgreSQL cluster, as
well as updated instructions for how to run the example.
---
examples/helm/README.md | 87 +++++++++++++------
.../templates/hippo-secret.yaml | 12 ---
.../create-cluster/templates/pgcluster.yaml | 65 --------------
.../templates/postgres-secret.yaml | 12 ---
examples/helm/create-cluster/values.yaml | 15 ----
.../{create-cluster => postgres}/.helmignore | 0
.../{create-cluster => postgres}/Chart.yaml | 6 +-
.../templates/NOTES.txt | 26 +++++-
.../templates/_helpers.tpl | 0
.../helm/postgres/templates/pgcluster.yaml | 62 +++++++++++++
.../helm/postgres/templates/user-secret.yaml | 12 +++
examples/helm/postgres/values.yaml | 14 +++
12 files changed, 174 insertions(+), 137 deletions(-)
delete mode 100644 examples/helm/create-cluster/templates/hippo-secret.yaml
delete mode 100644 examples/helm/create-cluster/templates/pgcluster.yaml
delete mode 100644 examples/helm/create-cluster/templates/postgres-secret.yaml
delete mode 100644 examples/helm/create-cluster/values.yaml
rename examples/helm/{create-cluster => postgres}/.helmignore (100%)
rename examples/helm/{create-cluster => postgres}/Chart.yaml (88%)
rename examples/helm/{create-cluster => postgres}/templates/NOTES.txt (64%)
rename examples/helm/{create-cluster => postgres}/templates/_helpers.tpl (100%)
create mode 100644 examples/helm/postgres/templates/pgcluster.yaml
create mode 100644 examples/helm/postgres/templates/user-secret.yaml
create mode 100644 examples/helm/postgres/values.yaml
diff --git a/examples/helm/README.md b/examples/helm/README.md
index 390bfbbaae..81fbb97fe8 100644
--- a/examples/helm/README.md
+++ b/examples/helm/README.md
@@ -1,29 +1,20 @@
-# create-cluster
+# Create a Postgres Cluster
-This is a working example of how to create a cluster via the crd workflow
-using a [Helm](https://helm.sh/) chart.
+This is a working example of how to create a PostgreSQL cluster using a [Helm](https://helm.sh/) chart.
## Prerequisites
### Postgres Operator
-This example assumes you have the Crunchy PostgreSQL Operator installed
-in a namespace called `pgo`.
+This example assumes you have the [Crunchy PostgreSQL Operator installed](https://access.crunchydata.com/documentation/postgres-operator/latest/quickstart/) in a namespace called `pgo`.
### Helm
-Helm will also need to be installed for this example to run
-
-## Documentation
-
-Please see the documentation for more guidance using custom resources:
-
-https://access.crunchydata.com/documentation/postgres-operator/latest/custom-resources/
+To execute a Helm chart, [Helm](https://helm.sh/) needs to be installed in your local environment.
## Setup
-If you are running Postgres Operator 4.5.1 or later, you can skip the below
-step.
+If you are running Postgres Operator 4.5.1 or later, you can skip the step below.
### Before 4.5.1
@@ -42,38 +33,80 @@ ssh-keygen -t ed25519 -N '' -f "${pgo_cluster_name}-key"
## Running the Example
-For this example we will deploy the cluster into the `pgo` namespace where the
-Postgres Operator is installed and running.
+### Download the Helm Chart
-Return to the `create-cluster` directory:
+For this example we will deploy the cluster into the `pgo` namespace where the Postgres Operator is installed and running.
+
+You will need to download this Helm chart. One way to do this is by cloning the Postgres Operator project into your local environment:
```
-cd postgres-operator/examples/helm/create-cluster
+git clone https://github.com/CrunchyData/postgres-operator.git
+```
+
+Go into the directory that contains the Helm chart for creating a PostgreSQL cluster:
+
```
+cd postgres-operator/examples/helm
+```
+
+### Set Values
+
+There are only three required values to run the Helm chart:
+
+- `name`: The name of your PostgreSQL cluster.
+- `namespace`: The namespace for where the PostgreSQL cluster should be deployed.
+- `password`: A password for the user that will be allowed to connect to the database.
+
+The following values can also be set:
+
+- `cpu`: The CPU limit for the PostgreSQL cluster. Follows standard Kubernetes formatting.
+- `diskSize`: The size of the PVC for the PostgreSQL cluster. Follows standard Kubernetes formatting.
+- `ha`: Whether or not to deploy a high availability PostgreSQL cluster. Can be either `true` or `false`, defaults to `false`.
+- `imagePrefix`: The prefix of the container images to use for this PostgreSQL cluster. Defaults to `registry.developers.crunchydata.com/crunchydata`.
+- `image`: The name of the container image to use for the PostgreSQL cluster. Defaults to `crunchy-postgres-ha`.
+- `imageTag`: The container image tag to use. Defaults to `centos8-13.1-4.6.0-beta.2`.
+- `memory`: The memory limit for the PostgreSQL cluster. Follows standard Kubernetes formatting.
+- `monitoring`: Whether or not to enable monitoring / metrics collection for this PostgreSQL instance. Can either be `true` or `false`, defaults to `false`.
+
+### Execute the Chart
The following commands will allow you to execute a dry run first with debug
if you want to verify everything is set correctly. Then, after everything looks
good, run the install command without the flags:
```
-helm install --dry-run --debug postgres-operator-create-cluster . -n pgo
-helm install postgres-operator-create-cluster . -n pgo
+helm install -n pgo --dry-run --debug postgres-cluster postgres
+helm install -n pgo postgres-cluster postgres
```
+This will deploy a PostgreSQL cluster with the specified name into the specified namespace.
+
## Verify
-Now you can your Hippo cluster has deployed into the pgo namespace by running
-these few commands:
+You can verify that your PostgreSQL cluster is deployed into the `pgo` namespace by running the following commands:
```
kubectl get all -n pgo
+```
-pgo test hippo -n pgo
+Once your PostgreSQL cluster is provisioned, you can connect to it. Assuming you are using the default value of `hippo` for the name of the cluster, in a new terminal window, set up a port forward to the PostgreSQL cluster:
-pgo show cluster hippo -n pgo
```
+kubectl -n pgo port-forward svc/hippo 5432:5432
+```
+
+Still assuming you are using the default values for this Helm chart, you can connect to the Postgres cluster with the following command:
-## NOTE
+```
+PGPASSWORD="W4tch0ut4hippo$" psql -h localhost -U hippo hippo
+```
+
+## Notes
+
+Prior to PostgreSQL Operator 4.6.0, you will have to manually clean up some of the artifacts when running `helm uninstall`.
+
+## Additional Resources
+
+Please see the documentation for more guidance using custom resources:
-As of operator version 4.5.0 when using helm uninstall you will have to manually
-clean up some left over artifacts after running the uninstall.
+[https://access.crunchydata.com/documentation/postgres-operator/latest/custom-resources/](https://access.crunchydata.com/documentation/postgres-operator/latest/custom-resources/)
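Rather than accepting the chart defaults, the optional parameters listed in the README above can be collected into an override file. The filename `my-values.yaml` here is just an example; the keys match the chart's documented values:

```yaml
# my-values.yaml -- example override file for the postgres chart
name: hippo
namespace: pgo
password: W4tch0ut4hippo$   # pick your own password
diskSize: 5Gi
memory: 2Gi
monitoring: true
```

This file would then be passed at install time, e.g. `helm install -n pgo -f my-values.yaml postgres-cluster postgres`.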
diff --git a/examples/helm/create-cluster/templates/hippo-secret.yaml b/examples/helm/create-cluster/templates/hippo-secret.yaml
deleted file mode 100644
index 8e922196e1..0000000000
--- a/examples/helm/create-cluster/templates/hippo-secret.yaml
+++ /dev/null
@@ -1,12 +0,0 @@
-apiVersion: v1
-data:
- password: {{ .Values.hipposecretpassword | b64enc }}
- username: {{ .Values.hipposecretuser | b64enc }}
-kind: Secret
-metadata:
- labels:
- pg-cluster: {{ .Values.pgclustername }}
- vendor: crunchydata
- name: {{ .Values.pgclustername }}-hippo-secret
- namespace: {{ .Values.namespace }}
-type: Opaque
diff --git a/examples/helm/create-cluster/templates/pgcluster.yaml b/examples/helm/create-cluster/templates/pgcluster.yaml
deleted file mode 100644
index eb609def15..0000000000
--- a/examples/helm/create-cluster/templates/pgcluster.yaml
+++ /dev/null
@@ -1,65 +0,0 @@
-apiVersion: crunchydata.com/v1
-kind: Pgcluster
-metadata:
- annotations:
- current-primary: {{ .Values.pgclustername }}
- labels:
- autofail: "true"
- crunchy-pgbadger: "false"
- crunchy-pgha-scope: {{ .Values.pgclustername }}
- deployment-name: {{ .Values.pgclustername }}
- name: {{ .Values.pgclustername }}
- pg-cluster: {{ .Values.pgclustername }}
- pgo-version: 4.6.0-beta.2
- pgouser: admin
- name: {{ .Values.pgclustername }}
- namespace: {{ .Values.namespace }}
-spec:
- BackrestStorage:
- accessmode: ReadWriteOnce
- matchLabels: ""
- name: ""
- size: 3G
- storageclass: ""
- storagetype: dynamic
- supplementalgroups: ""
- PrimaryStorage:
- accessmode: ReadWriteOnce
- matchLabels: ""
- name: {{ .Values.pgclustername }}
- size: 3G
- storageclass: ""
- storagetype: dynamic
- supplementalgroups: ""
- ReplicaStorage:
- accessmode: ReadWriteOnce
- matchLabels: ""
- name: ""
- size: 3G
- storageclass: ""
- storagetype: dynamic
- supplementalgroups: ""
- annotations: {}
- ccpimage: {{ .Values.ccpimage }}
- ccpimageprefix: {{ .Values.ccpimageprefix }}
- ccpimagetag: {{ .Values.ccpimagetag }}
- clustername: {{ .Values.pgclustername }}
- database: {{ .Values.pgclustername }}
- exporter: false
- exporterport: "9187"
- limits: {}
- name: {{ .Values.pgclustername }}
- namespace: {{ .Values.namespace }}
- pgDataSource:
- restoreFrom: ""
- restoreOpts: ""
- pgbadgerport: "10000"
- pgoimageprefix: {{ .Values.pgoimageprefix }}
- podAntiAffinity:
- default: preferred
- pgBackRest: preferred
- pgBouncer: preferred
- port: "5432"
- user: hippo
- userlabels:
- pgo-version: {{ .Values.pgoversion }}
diff --git a/examples/helm/create-cluster/templates/postgres-secret.yaml b/examples/helm/create-cluster/templates/postgres-secret.yaml
deleted file mode 100644
index 914da77e1c..0000000000
--- a/examples/helm/create-cluster/templates/postgres-secret.yaml
+++ /dev/null
@@ -1,12 +0,0 @@
-apiVersion: v1
-data:
- password: {{ .Values.postgressecretpassword | b64enc }}
- username: {{ .Values.postgressecretuser | b64enc }}
-kind: Secret
-metadata:
- labels:
- pg-cluster: {{ .Values.pgclustername }}
- vendor: crunchydata
- name: {{ .Values.pgclustername }}-postgres-secret
- namespace: {{ .Values.namespace }}
-type: Opaque
\ No newline at end of file
diff --git a/examples/helm/create-cluster/values.yaml b/examples/helm/create-cluster/values.yaml
deleted file mode 100644
index bfc9b73bb0..0000000000
--- a/examples/helm/create-cluster/values.yaml
+++ /dev/null
@@ -1,15 +0,0 @@
-# Default values for pg_deployment in SDX.
-# This is a YAML-formatted file.
-# Declare variables to be passed into your templates.
-# The values is for the namespace and the postgresql cluster name
-ccpimage: crunchy-postgres-ha
-ccpimageprefix: registry.developers.crunchydata.com/crunchydata
-ccpimagetag: centos8-13.1-4.6.0-beta.2
-namespace: pgo
-pgclustername: hippo
-pgoimageprefix: registry.developers.crunchydata.com/crunchydata
-pgoversion: 4.6.0-beta.2
-hipposecretuser: "hippo"
-hipposecretpassword: "Supersecurepassword*"
-postgressecretuser: "postgres"
-postgressecretpassword: "Anothersecurepassword*"
diff --git a/examples/helm/create-cluster/.helmignore b/examples/helm/postgres/.helmignore
similarity index 100%
rename from examples/helm/create-cluster/.helmignore
rename to examples/helm/postgres/.helmignore
diff --git a/examples/helm/create-cluster/Chart.yaml b/examples/helm/postgres/Chart.yaml
similarity index 88%
rename from examples/helm/create-cluster/Chart.yaml
rename to examples/helm/postgres/Chart.yaml
index 5857415edb..d2c7a63902 100644
--- a/examples/helm/create-cluster/Chart.yaml
+++ b/examples/helm/postgres/Chart.yaml
@@ -1,6 +1,6 @@
apiVersion: v2
name: crunchycrdcluster
-description: A Helm chart for Kubernetes
+description: Helm chart for deploying a PostgreSQL cluster with the Crunchy PostgreSQL Operator
# A chart can be either an 'application' or a 'library' chart.
#
@@ -15,9 +15,9 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.1.0
+version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
-appVersion: 1.16.0
+appVersion: 4.6.0-beta.2
diff --git a/examples/helm/create-cluster/templates/NOTES.txt b/examples/helm/postgres/templates/NOTES.txt
similarity index 64%
rename from examples/helm/create-cluster/templates/NOTES.txt
rename to examples/helm/postgres/templates/NOTES.txt
index 542443a66e..4a3e324405 100644
--- a/examples/helm/create-cluster/templates/NOTES.txt
+++ b/examples/helm/postgres/templates/NOTES.txt
@@ -1,5 +1,3 @@
-Thank you deploying a crunchy postgreSQL cluster v{{ .Chart.AppVersion }}!
-
((((((((((((((((((((((
(((((((((((((%%%%%%%(((((((((((((((
(((((((((((%%% %%%%((((((((((((
@@ -30,5 +28,27 @@ Thank you deploying a crunchy postgreSQL cluster v{{ .Chart.AppVersion }}!
####%%% %%%%% %
%% %%%%
+Thank you for deploying a Crunchy PostgreSQL cluster v{{ .Chart.AppVersion }}!
+
+When your cluster has finished deploying, you can connect to it with the
+following credentials:
+
+ Username: {{ if .Values.username }}{{ .Values.username }}{{- else }}{{ .Values.name }}{{- end }}
+ Password: {{ .Values.password }}
+
+To connect to your PostgreSQL cluster, you can set up a port forward to your
+local machine in a separate terminal window:
+
+ kubectl -n {{ .Values.namespace }} port-forward svc/{{ .Values.name }} 5432:5432
+
+And use the following connection string to connect to your cluster:
+
+ PGPASSWORD="{{ .Values.password }}" psql -h localhost -U {{ if .Values.username }}{{ .Values.username }}{{- else }}{{ .Values.name }}{{- end }} {{ .Values.name }}
+
+If you need to log in as the PostgreSQL superuser, you can do so with the following command:
+
+    PGPASSWORD=$(kubectl -n {{ .Values.namespace }} get secrets {{ .Values.name }}-postgres-secret -o jsonpath='{.data.password}' | base64 -d) psql -h localhost -U postgres {{ .Values.name }}
+
More information about the custom resource workflow can be found in the docs here:
-https://access.crunchydata.com/documentation/postgres-operator/latest/custom-resources/
+
+ https://access.crunchydata.com/documentation/postgres-operator/latest/custom-resources/
diff --git a/examples/helm/create-cluster/templates/_helpers.tpl b/examples/helm/postgres/templates/_helpers.tpl
similarity index 100%
rename from examples/helm/create-cluster/templates/_helpers.tpl
rename to examples/helm/postgres/templates/_helpers.tpl
diff --git a/examples/helm/postgres/templates/pgcluster.yaml b/examples/helm/postgres/templates/pgcluster.yaml
new file mode 100644
index 0000000000..563d7246de
--- /dev/null
+++ b/examples/helm/postgres/templates/pgcluster.yaml
@@ -0,0 +1,62 @@
+apiVersion: crunchydata.com/v1
+kind: Pgcluster
+metadata:
+ annotations:
+ current-primary: {{ .Values.name | quote }}
+ labels:
+ crunchy-pgha-scope: {{ .Values.name | quote }}
+ deployment-name: {{ .Values.name | quote }}
+ name: {{ .Values.name | quote }}
+ pg-cluster: {{ .Values.name | quote }}
+ pgo-version: {{ .Chart.AppVersion | quote }}
+ pgouser: admin
+ name: {{ .Values.name | quote }}
+ namespace: {{ .Values.namespace | quote }}
+spec:
+ BackrestStorage:
+ accessmode: ReadWriteOnce
+ size: {{ .Values.diskSize | default "2Gi" | quote }}
+ storagetype: dynamic
+ PrimaryStorage:
+ accessmode: ReadWriteOnce
+ name: {{ .Values.name | quote }}
+ size: {{ .Values.diskSize | default "1Gi" | quote }}
+ storagetype: dynamic
+ ReplicaStorage:
+ accessmode: ReadWriteOnce
+ size: {{ .Values.diskSize | default "1Gi" | quote }}
+ storagetype: dynamic
+ ccpimage: {{ .Values.image | default "crunchy-postgres-ha" | quote }}
+ ccpimageprefix: {{ .Values.imagePrefix | default "registry.developers.crunchydata.com/crunchydata" | quote }}
+ ccpimagetag: {{ .Values.imageTag | default "centos8-13.1-4.6.0-beta.2" | quote }}
+ clustername: {{ .Values.name | quote }}
+ database: {{ .Values.name | quote }}
+ {{- if .Values.monitoring }}
+ exporter: true
+ {{- end }}
+ exporterport: "9187"
+ limits:
+ cpu: {{ .Values.cpu | default "0.25" | quote }}
+ memory: {{ .Values.memory | default "1Gi" | quote }}
+ name: {{ .Values.name | quote }}
+ namespace: {{ .Values.namespace | quote }}
+ pgDataSource:
+ restoreFrom: ""
+ restoreOpts: ""
+ pgbadgerport: "10000"
+ pgoimageprefix: {{ .Values.imagePrefix | default "registry.developers.crunchydata.com/crunchydata" | quote }}
+ podAntiAffinity:
+ default: preferred
+ pgBackRest: preferred
+ pgBouncer: preferred
+ port: "5432"
+ {{- if .Values.ha }}
+ replicas: "1"
+ {{- end }}
+ {{- if .Values.username }}
+ user: {{ .Values.username | quote }}
+ {{- else }}
+ user: {{ .Values.name | quote }}
+ {{ end }}
+ userlabels:
+ pgo-version: {{ .Chart.AppVersion | quote }}
diff --git a/examples/helm/postgres/templates/user-secret.yaml b/examples/helm/postgres/templates/user-secret.yaml
new file mode 100644
index 0000000000..b44d31743d
--- /dev/null
+++ b/examples/helm/postgres/templates/user-secret.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+kind: Secret
+type: Opaque
+metadata:
+ labels:
+ pg-cluster: {{ .Values.name | quote }}
+ vendor: crunchydata
+ name: {{ .Values.name }}-{{- if .Values.username }}{{ .Values.username }}{{- else }}{{ .Values.name }}{{- end }}-secret
+ namespace: {{ .Values.namespace | quote }}
+data:
+ password: {{ .Values.password | b64enc | quote }}
+ username: {{ if .Values.username }}{{ .Values.username | b64enc | quote }}{{- else }}{{ .Values.name | b64enc | quote }}{{- end }}
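The `b64enc` calls in the template above produce ordinary base64, which is the encoding Kubernetes expects in a Secret's `data` fields. A quick way to preview what the chart will emit, assuming a POSIX shell with `base64` available:

```shell
# Encode credentials the way the user-secret.yaml template's b64enc does.
# printf avoids the trailing newline that echo would fold into the encoding.
username_b64=$(printf '%s' 'hippo' | base64)           # aGlwcG8=
password_b64=$(printf '%s' 'W4tch0ut4hippo$' | base64)
printf 'username: %s\n' "$username_b64"
printf 'password: %s\n' "$password_b64"
```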
diff --git a/examples/helm/postgres/values.yaml b/examples/helm/postgres/values.yaml
new file mode 100644
index 0000000000..60b4b3e53b
--- /dev/null
+++ b/examples/helm/postgres/values.yaml
@@ -0,0 +1,14 @@
+# Required values: the PostgreSQL cluster name, its namespace, and the user password
+name: hippo
+namespace: pgo
+password: W4tch0ut4hippo$
+
+# Optional parameters
+# cpu: 0.25
+# diskSize: 5Gi
+# monitoring: true
+# ha: true
+# imagePrefix: registry.developers.crunchydata.com/crunchydata
+# image: crunchy-postgres-ha
+# imageTag: centos8-13.1-4.6.0-beta.2
+# memory: 1Gi
From 52f759e5c05570f952cf1570c624739d76073870 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 11 Jan 2021 11:51:02 -0500
Subject: [PATCH 135/276] Embed pgMonitor assets into pgo-deployer container
This allows for the pgo-deployer container to install the monitoring
stack without making any additional calls to the Internet. If the
installation script does not detect the presence of the asset files,
it will attempt to download them from the Internet.
Issue: [ch10107]
Issue: #1987
---
build/pgo-deployer/Dockerfile | 2 ++
.../ansible/roles/pgo-metrics/tasks/alertmanager.yml | 2 +-
.../ansible/roles/pgo-metrics/tasks/grafana.yml | 6 +++---
.../metrics/ansible/roles/pgo-metrics/tasks/main.yml | 11 +++++++++--
.../ansible/roles/pgo-metrics/tasks/prometheus.yml | 8 ++++----
5 files changed, 19 insertions(+), 10 deletions(-)
diff --git a/build/pgo-deployer/Dockerfile b/build/pgo-deployer/Dockerfile
index c2000eaa87..2f18411334 100644
--- a/build/pgo-deployer/Dockerfile
+++ b/build/pgo-deployer/Dockerfile
@@ -70,6 +70,7 @@ fi
COPY installers/ansible /ansible/postgres-operator
COPY installers/metrics/ansible /ansible/metrics
+ADD tools/pgmonitor /tmp/.pgo/metrics/pgmonitor
COPY installers/image/bin/pgo-deploy.sh /pgo-deploy.sh
COPY bin/uid_daemon.sh /uid_daemon.sh
@@ -78,6 +79,7 @@ ENV HOME="/tmp"
RUN chmod g=u /etc/passwd
RUN chmod g=u /uid_daemon.sh
+RUN chown -R 2:2 /tmp/.pgo/metrics
ENTRYPOINT ["/uid_daemon.sh"]
diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/alertmanager.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/alertmanager.yml
index dc82e92d1a..13fd013290 100644
--- a/installers/metrics/ansible/roles/pgo-metrics/tasks/alertmanager.yml
+++ b/installers/metrics/ansible/roles/pgo-metrics/tasks/alertmanager.yml
@@ -16,7 +16,7 @@
- name: Set pgmonitor Prometheus Directory Fact
set_fact:
- pgmonitor_prometheus_dir: "{{ metrics_dir }}/pgmonitor-{{ pgmonitor_version | replace('v','') }}/prometheus"
+ pgmonitor_prometheus_dir: "{{ metrics_dir }}/pgmonitor/prometheus"
- name: Copy Alertmanger Config to Output Directory
command: "cp {{ pgmonitor_prometheus_dir }}/{{ item.src }} {{ alertmanager_output_dir }}/{{ item.dst }}"
diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/grafana.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/grafana.yml
index 1d528429b5..f0b88e0c65 100644
--- a/installers/metrics/ansible/roles/pgo-metrics/tasks/grafana.yml
+++ b/installers/metrics/ansible/roles/pgo-metrics/tasks/grafana.yml
@@ -9,7 +9,7 @@
grafana_output_dir: "{{ metrics_dir }}/output/grafana"
- name: Ensure Output Directory Exists
- file:
+ file:
path: "{{ grafana_output_dir }}"
state: "directory"
mode: "0700"
@@ -48,7 +48,7 @@
- name: Set pgmonitor Grafana Directory Fact
set_fact:
- pgmonitor_grafana_dir: "{{ metrics_dir }}/pgmonitor-{{ pgmonitor_version | replace('v','') }}/grafana"
+ pgmonitor_grafana_dir: "{{ metrics_dir }}/pgmonitor/grafana"
- name: Copy Grafana Config to Output Directory
command: "cp {{ pgmonitor_grafana_dir }}/{{ item }} {{ grafana_output_dir }}"
@@ -111,7 +111,7 @@
src: "{{ item }}"
dest: "{{ grafana_output_dir }}/{{ item | replace('.j2', '') }}"
mode: "0600"
- loop:
+ loop:
- grafana-pvc.json.j2
- grafana-service.json.j2
- grafana-deployment.json.j2
diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/main.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/main.yml
index 425d3f8e1b..ae2c27b8e3 100644
--- a/installers/metrics/ansible/roles/pgo-metrics/tasks/main.yml
+++ b/installers/metrics/ansible/roles/pgo-metrics/tasks/main.yml
@@ -1,7 +1,7 @@
---
- name: Set Metrics Directory Fact
set_fact:
- metrics_dir: "{{ ansible_env.HOME }}/.pgo/metrics/{{ metrics_namespace }}"
+ metrics_dir: "{{ ansible_env.HOME }}/.pgo/metrics"
tags: always
- name: Ensure Output Directory Exists
@@ -54,16 +54,23 @@
- install-metrics
- update-metrics
block:
+ - name: Check for pgmonitor
+ stat:
+ path: "{{ metrics_dir }}/pgmonitor"
+ register: pgmonitor_dir
+
- name: Download pgmonitor {{ pgmonitor_version }}
get_url:
url: https://github.com/CrunchyData/pgmonitor/archive/{{ pgmonitor_version }}.tar.gz
dest: "{{ metrics_dir }}"
mode: "0600"
+ when: not pgmonitor_dir.stat.exists
- name: Extract pgmonitor
unarchive:
src: "{{ metrics_dir }}/pgmonitor-{{ pgmonitor_version | replace('v','') }}.tar.gz"
- dest: "{{ metrics_dir }}"
+ dest: "{{ metrics_dir }}/pgmonitor"
+ when: not pgmonitor_dir.stat.exists
- name: Create Metrics Image Pull Secret
shell: >
diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/prometheus.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/prometheus.yml
index ffcfa7c625..b9d70aad1f 100644
--- a/installers/metrics/ansible/roles/pgo-metrics/tasks/prometheus.yml
+++ b/installers/metrics/ansible/roles/pgo-metrics/tasks/prometheus.yml
@@ -9,7 +9,7 @@
prom_output_dir: "{{ metrics_dir }}/output/prom"
- name: Ensure Output Directory Exists
- file:
+ file:
path: "{{ prom_output_dir }}"
state: "directory"
mode: "0700"
@@ -22,7 +22,7 @@
loop:
- prometheus-rbac.json.j2
when: create_rbac | bool
-
+
- name: Create Prometheus RBAC
command: "{{ kubectl_or_oc }} create -f {{ prom_output_dir }}/{{ item }} -n {{ metrics_namespace }}"
loop:
@@ -35,7 +35,7 @@
- name: Set pgmonitor Prometheus Directory Fact
set_fact:
- pgmonitor_prometheus_dir: "{{ metrics_dir }}/pgmonitor-{{ pgmonitor_version | replace('v','') }}/prometheus"
+ pgmonitor_prometheus_dir: "{{ metrics_dir }}/pgmonitor/prometheus"
- name: Copy Prometheus Config to Output Directory
command: "cp {{ pgmonitor_prometheus_dir }}/{{ item.src }} {{ prom_output_dir }}/{{ item.dst }}"
@@ -88,7 +88,7 @@
src: "{{ item }}"
dest: "{{ prom_output_dir }}/{{ item | replace('.j2', '') }}"
mode: "0600"
- loop:
+ loop:
- prometheus-pvc.json.j2
- prometheus-service.json.j2
- prometheus-deployment.json.j2
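The download and extract tasks in `main.yml` above are both gated on a `stat` check so the copy embedded in the container wins whenever it is present. The same check-then-skip pattern, reduced to a minimal standalone playbook (the paths and URL here are illustrative, not from the installer):

```yaml
# Minimal sketch: only download and extract an archive when the target
# directory does not already exist (illustrative paths and URL).
- hosts: localhost
  tasks:
    - name: Check for an embedded copy of the assets
      stat:
        path: /tmp/assets
      register: assets_dir

    - name: Download the assets only when they are not embedded
      get_url:
        url: https://example.com/assets.tar.gz
        dest: /tmp/assets.tar.gz
        mode: "0600"
      when: not assets_dir.stat.exists
```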
From 0c687e6b9c53de00ab37b36f05cf9878fc2a3cba Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 11 Jan 2021 16:39:40 -0500
Subject: [PATCH 136/276] Do not restrict max version of `pgo upgrade`
This created a maintenance burden, as the maximum version constant
needed to be bumped with every minor release.
---
internal/apiserver/upgradeservice/upgradeimpl.go | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/internal/apiserver/upgradeservice/upgradeimpl.go b/internal/apiserver/upgradeservice/upgradeimpl.go
index ee41b1194c..861cbbe655 100644
--- a/internal/apiserver/upgradeservice/upgradeimpl.go
+++ b/internal/apiserver/upgradeservice/upgradeimpl.go
@@ -36,7 +36,6 @@ import (
// Currently supported version information for upgrades
const (
REQUIRED_MAJOR_PGO_VERSION = 4
- MAXIMUM_MINOR_PGO_VERSION = 5
MINIMUM_MINOR_PGO_VERSION = 1
)
@@ -224,12 +223,9 @@ func supportedOperatorVersion(version string) bool {
log.Errorf("Cannot convert Postgres Operator's minor version to an integer. Error: %v", err)
return false
}
- if minor < MINIMUM_MINOR_PGO_VERSION || minor > MAXIMUM_MINOR_PGO_VERSION {
- return false
- }
// If none of the above is true, the upgrade can continue
- return true
+ return minor >= MINIMUM_MINOR_PGO_VERSION
}
// upgradeTagValid compares and validates the PostgreSQL version values stored
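The relaxed check above keeps a floor on the minor version but drops the ceiling. A self-contained sketch of the same logic follows; it is simplified, and the real `supportedOperatorVersion` parses the version string differently inside the upgrade service:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Simplified version gate: the major version must match exactly and the
// minor version must meet a minimum; there is no longer an upper bound.
const (
	requiredMajor = 4
	minimumMinor  = 1
)

func supported(version string) bool {
	parts := strings.Split(version, ".")
	if len(parts) < 2 {
		return false
	}
	major, err := strconv.Atoi(parts[0])
	if err != nil || major != requiredMajor {
		return false
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return false
	}
	return minor >= minimumMinor
}

func main() {
	// 4.5.1 and any later 4.x minor pass; older majors and 4.0.x do not.
	fmt.Println(supported("4.5.1"), supported("4.99.0"), supported("4.0.9"), supported("3.6.0"))
}
```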
From 89f5b435113eabab0996a23cbfbe7081b97e2d7e Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 11 Jan 2021 16:40:24 -0500
Subject: [PATCH 137/276] Do not delete primary PVC during upgrade
The new pgcluster deletion logic creates an opportunity to delete
the primary and backup PVC, which we do not want to do during an
upgrade. Instead, indicate on the pgcluster object that there is
an upgrade in progress, which will cause the controller to ignore
the change.
---
internal/config/annotations.go | 3 ++
.../pgcluster/pgclustercontroller.go | 8 +++-
internal/operator/cluster/upgrade.go | 40 ++++++++++++++-----
3 files changed, 39 insertions(+), 12 deletions(-)
diff --git a/internal/config/annotations.go b/internal/config/annotations.go
index bde7a345f8..f8a0b32023 100644
--- a/internal/config/annotations.go
+++ b/internal/config/annotations.go
@@ -31,6 +31,9 @@ const (
ANNOTATION_CURRENT_PRIMARY = "current-primary"
// annotation to indicate whether a cluster has been upgraded
ANNOTATION_IS_UPGRADED = "is-upgraded"
+ // annotation to indicate an upgrade is in progress. this has the effect
+ // of causing the rmdata job in pgcluster to not run
+ ANNOTATION_UPGRADE_IN_PROGRESS = "upgrade-in-progress"
// annotation to store the Operator versions upgraded from and to
ANNOTATION_UPGRADE_INFO = "upgrade-info"
// annotation to store the string boolean, used when checking upgrade status
diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go
index 576a5d0af9..ef23c14b87 100644
--- a/internal/controller/pgcluster/pgclustercontroller.go
+++ b/internal/controller/pgcluster/pgclustercontroller.go
@@ -350,7 +350,13 @@ func (c *Controller) onDelete(obj interface{}) {
log.Debugf("pgcluster onDelete for cluster %s (namespace %s)", cluster.Name, cluster.Namespace)
- // a quick guard: see if the "rmdata Job" is running.
+ // guard: if an upgrade is in progress, do not do any of the rest
+ if _, ok := cluster.ObjectMeta.GetAnnotations()[config.ANNOTATION_UPGRADE_IN_PROGRESS]; ok {
+ log.Debug("upgrade in progress, not proceeding with additional cleanups")
+ return
+ }
+
+ // guard: see if the "rmdata Job" is running.
options := metav1.ListOptions{
LabelSelector: fields.AndSelectors(
fields.OneTermEqualSelector(config.LABEL_PG_CLUSTER, cluster.Name),
diff --git a/internal/operator/cluster/upgrade.go b/internal/operator/cluster/upgrade.go
index c55d405a24..93cec5e704 100644
--- a/internal/operator/cluster/upgrade.go
+++ b/internal/operator/cluster/upgrade.go
@@ -104,7 +104,11 @@ func AddUpgrade(clientset kubeapi.Interface, upgrade *crv1.Pgtask, namespace str
_ = createUpgradePGHAConfigMap(clientset, pgcluster, namespace)
// delete the existing pgcluster CRDs and other resources that will be recreated
- deleteBeforeUpgrade(clientset, pgcluster.Name, currentPrimary, namespace)
+ if err := deleteBeforeUpgrade(clientset, pgcluster, currentPrimary, namespace); err != nil {
+ log.Error("refusing to upgrade due to unsuccessful resource removal")
+ PublishUpgradeEvent(events.EventUpgradeClusterFailure, namespace, upgrade, err.Error())
+ return
+ }
// recreate new Backrest Repo secret that was just deleted
recreateBackrestRepoSecret(clientset, upgradeTargetClusterName, namespace, operator.PgoNamespace)
@@ -257,13 +261,25 @@ func SetReplicaNumber(pgcluster *crv1.Pgcluster, numReplicas string) {
// deleteBeforeUpgrade deletes the deployments, services, pgcluster, jobs, tasks and default configmaps before attempting
// to upgrade the pgcluster deployment. This preserves existing secrets, non-standard configmaps and service definitions
// for use in the newly upgraded cluster.
-func deleteBeforeUpgrade(clientset kubeapi.Interface, clusterName, currentPrimary, namespace string) {
+func deleteBeforeUpgrade(clientset kubeapi.Interface, pgcluster *crv1.Pgcluster, currentPrimary, namespace string) error {
ctx := context.TODO()
- // first, get all deployments for the pgcluster in question
+ // first, indicate that there is an upgrade occurring on this custom resource
+ // this will prevent the rmdata job from firing off
+ annotations := pgcluster.ObjectMeta.GetAnnotations()
+ annotations[config.ANNOTATION_UPGRADE_IN_PROGRESS] = config.LABEL_TRUE
+ pgcluster.ObjectMeta.SetAnnotations(annotations)
+
+ if _, err := clientset.CrunchydataV1().Pgclusters(namespace).Update(ctx,
+ pgcluster, metav1.UpdateOptions{}); err != nil {
+ log.Errorf("unable to set annotations to keep backups and data: %s", err)
+ return err
+ }
+
+ // next, get all deployments for the pgcluster in question
deployments, err := clientset.
AppsV1().Deployments(namespace).
- List(ctx, metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER + "=" + clusterName})
+ List(ctx, metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER + "=" + pgcluster.Name})
if err != nil {
log.Errorf("unable to get deployments. Error: %s", err)
}
@@ -279,7 +295,7 @@ func deleteBeforeUpgrade(clientset kubeapi.Interface, clusterName, currentPrimar
}
// wait until the backrest shared repo pod deployment has been deleted before continuing
- waitStatus := deploymentWait(clientset, namespace, clusterName+"-backrest-shared-repo",
+ waitStatus := deploymentWait(clientset, namespace, pgcluster.Name+"-backrest-shared-repo",
180*time.Second, 10*time.Second)
log.Debug(waitStatus)
// wait until the primary pod deployment has been deleted before continuing
@@ -288,7 +304,7 @@ func deleteBeforeUpgrade(clientset kubeapi.Interface, clusterName, currentPrimar
log.Debug(waitStatus)
// delete the pgcluster
- _ = clientset.CrunchydataV1().Pgclusters(namespace).Delete(ctx, clusterName, metav1.DeleteOptions{})
+ _ = clientset.CrunchydataV1().Pgclusters(namespace).Delete(ctx, pgcluster.Name, metav1.DeleteOptions{})
// delete all existing job references
deletePropagation := metav1.DeletePropagationForeground
@@ -296,23 +312,25 @@ func deleteBeforeUpgrade(clientset kubeapi.Interface, clusterName, currentPrimar
BatchV1().Jobs(namespace).
DeleteCollection(ctx,
metav1.DeleteOptions{PropagationPolicy: &deletePropagation},
- metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER + "=" + clusterName})
+ metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER + "=" + pgcluster.Name})
// delete all existing pgtask references except for the upgrade task
// Note: this will be deleted by the existing pgcluster creation process once the
// updated pgcluster is created and processed by the cluster controller
- if err = deleteNonupgradePgtasks(clientset, config.LABEL_PG_CLUSTER+"="+clusterName, namespace); err != nil {
- log.Errorf("error while deleting pgtasks for cluster %s, Error: %v", clusterName, err)
+ if err = deleteNonupgradePgtasks(clientset, config.LABEL_PG_CLUSTER+"="+pgcluster.Name, namespace); err != nil {
+ log.Errorf("error while deleting pgtasks for cluster %s, Error: %v", pgcluster.Name, err)
}
// delete the leader configmap used by the Postgres Operator since this information may change after
// the upgrade is complete
// Note: deletion is required for cluster recreation
- _ = clientset.CoreV1().ConfigMaps(namespace).Delete(ctx, clusterName+"-leader", metav1.DeleteOptions{})
+ _ = clientset.CoreV1().ConfigMaps(namespace).Delete(ctx, pgcluster.Name+"-leader", metav1.DeleteOptions{})
// delete the '-pgha-default-config' configmap, if it exists so the config syncer
// will not try to use it instead of '-pgha-config'
- _ = clientset.CoreV1().ConfigMaps(namespace).Delete(ctx, clusterName+"-pgha-default-config", metav1.DeleteOptions{})
+ _ = clientset.CoreV1().ConfigMaps(namespace).Delete(ctx, pgcluster.Name+"-pgha-default-config", metav1.DeleteOptions{})
+
+ return nil
}
// deploymentWait is modified from cluster.waitForDeploymentDelete. It simply waits for the current primary deployment
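One subtlety in the hunk above: in client-go, `ObjectMeta.GetAnnotations()` returns `nil` when an object carries no annotations, and assigning into a nil map panics. A minimal stdlib sketch of a nil-safe variant of this pattern (the annotation key below is illustrative, not the Operator's actual `config.ANNOTATION_UPGRADE_IN_PROGRESS` value):

```go
package main

import "fmt"

// setAnnotation mirrors the annotation update in the patch above, but guards
// against the nil map that GetAnnotations() returns for an object with no
// annotations; writing into a nil map would panic.
func setAnnotation(annotations map[string]string, key, value string) map[string]string {
	if annotations == nil {
		annotations = map[string]string{}
	}
	annotations[key] = value
	return annotations
}

func main() {
	// simulate an object that has no annotations yet (GetAnnotations() == nil)
	var annotations map[string]string
	annotations = setAnnotation(annotations, "example.com/upgrade-in-progress", "true")
	fmt.Println(annotations["example.com/upgrade-in-progress"])
}
```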
From 72df1a1fd764059fb0c0da7bae0913bfefc194ed Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 11 Jan 2021 17:51:20 -0500
Subject: [PATCH 138/276] Ensure PostgreSQL cluster comes back up during
upgrade
This explicitly ensures that the PGHA_INIT flag is only set
by the Operator if it has not already been explicitly set.
Co-authored-by: Andrew L'Ecuyer
---
internal/operator/cluster/cluster.go | 42 ++++++++++++++++++++++++----
1 file changed, 36 insertions(+), 6 deletions(-)
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
index 1a96e004a1..d07938e8e7 100644
--- a/internal/operator/cluster/cluster.go
+++ b/internal/operator/cluster/cluster.go
@@ -99,13 +99,20 @@ func AddClusterBase(clientset kubeapi.Interface, cl *crv1.Pgcluster, namespace s
// logic following a restart of the container.
// If the configmap already exists, the cluster creation will continue as this is required
// for certain pgcluster upgrades.
- if err = operator.CreatePGHAConfigMap(clientset, cl,
+ if err := operator.CreatePGHAConfigMap(clientset, cl,
namespace); kerrors.IsAlreadyExists(err) {
- log.Infof("found existing pgha ConfigMap for cluster %s, setting init flag to 'true'",
- cl.GetName())
- err = operator.UpdatePGHAConfigInitFlag(clientset, true, cl.Name, cl.Namespace)
- }
- if err != nil {
+ if !pghaConfigMapHasInitFlag(clientset, cl) {
+ log.Infof("found existing pgha ConfigMap for cluster %s without init flag set. "+
+ "setting init flag to 'true'", cl.GetName())
+
+ // if the value is not present, update the config map
+ if err := operator.UpdatePGHAConfigInitFlag(clientset, true, cl.Name, cl.Namespace); err != nil {
+ log.Error(err)
+ publishClusterCreateFailure(cl, err.Error())
+ return
+ }
+ }
+ } else if err != nil {
log.Error(err)
publishClusterCreateFailure(cl, err.Error())
return
@@ -728,6 +735,29 @@ func createMissingUserSecrets(clientset kubernetes.Interface, cluster *crv1.Pgcl
return createMissingUserSecret(clientset, cluster, cluster.Spec.User)
}
+// pghaConfigMapHasInitFlag checks to see if the PostgreSQL ConfigMap has the
+// PGHA init flag. Returns true if the flag is set, false otherwise.
+// If any function call returns an error, we log that error and return false.
+func pghaConfigMapHasInitFlag(clientset kubernetes.Interface, cluster *crv1.Pgcluster) bool {
+ ctx := context.TODO()
+
+ // load the PGHA config map for this cluster. This more or less assumes that
+ // it exists
+ configMapName := fmt.Sprintf("%s-%s", cluster.Name, operator.PGHAConfigMapSuffix)
+ configMap, err := clientset.CoreV1().ConfigMaps(cluster.Namespace).Get(ctx, configMapName, metav1.GetOptions{})
+
+ // if there is an error getting the ConfigMap, log the error and return
+ if err != nil {
+ log.Error(err)
+ return false
+ }
+
+ // determine if the init flag is set, regardless of whether it is true or false
+ _, ok := configMap.Data[operator.PGHAConfigInitSetting]
+
+ return ok
+}
+
func publishClusterCreateFailure(cl *crv1.Pgcluster, errorMsg string) {
pgouser := cl.ObjectMeta.Labels[config.LABEL_PGOUSER]
topics := make([]string, 1)
From 64197f179be7429cff672493cd5389cb27531df6 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 13 Jan 2021 12:43:46 -0500
Subject: [PATCH 139/276] Gracefully handle names with "replica" in `pgo test`
This provides an even tighter check than the one introduced in
b0a276ab1 to determine what is a primary vs. replica Service.
Issue: [ch9764]
Issue: #2047
---
internal/apiserver/clusterservice/clusterimpl.go | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index bb42a5dd40..3dfad2b092 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -483,7 +483,7 @@ func TestCluster(name, selector, ns, pgouser string, allFlag bool) msgs.ClusterT
switch {
default:
endpoint.InstanceType = msgs.ClusterTestInstanceTypePrimary
- case strings.HasSuffix(service.Name, msgs.PodTypeReplica):
+ case (strings.HasSuffix(service.Name, "-"+msgs.PodTypeReplica) && strings.Count(service.Name, "-"+msgs.PodTypeReplica) == 1):
endpoint.InstanceType = msgs.ClusterTestInstanceTypeReplica
case service.Pgbouncer:
endpoint.InstanceType = msgs.ClusterTestInstanceTypePGBouncer
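The one-line change above can be isolated as a small predicate. This sketch assumes `msgs.PodTypeReplica` is the string `"replica"` (an assumption, not confirmed by the diff); requiring the hyphenated suffix `-replica` to appear exactly once tightens the earlier suffix-only check so that, for example, a cluster literally named `myreplica` no longer has its primary Service misclassified:

```go
package main

import (
	"fmt"
	"strings"
)

// replicaSuffix is a stand-in for "-" + msgs.PodTypeReplica.
const replicaSuffix = "-replica"

// isReplicaService reflects the tightened check from the patch above: the
// Service name must end in "-replica" and contain that suffix exactly once.
func isReplicaService(name string) bool {
	return strings.HasSuffix(name, replicaSuffix) &&
		strings.Count(name, replicaSuffix) == 1
}

func main() {
	fmt.Println(isReplicaService("hippo-replica")) // the cluster's replica Service
	fmt.Println(isReplicaService("hippo"))         // primary Service
	fmt.Println(isReplicaService("myreplica"))     // cluster name merely ends in "replica"
}
```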
From f37214c63d373991214d396750814dbb564b5bb1 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 13 Jan 2021 14:25:57 -0500
Subject: [PATCH 140/276] Make "Effect" optional when setting a toleration via
`pgo` client
Per Kubernetes documentation, one does not need to set an Effect
when setting a Toleration, so our CLI should allow for this to be
optional.
Issue: [ch10147]
---
cmd/pgo/cmd/cluster.go | 30 +++++++++++++++++-----
cmd/pgo/cmd/create.go | 2 ++
docs/content/tutorial/customize-cluster.md | 6 +++++
3 files changed, 31 insertions(+), 7 deletions(-)
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index e66a28fc32..848fd0e995 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -555,8 +555,13 @@ func getTablespaces(tablespaceParams []string) []msgs.ClusterTablespaceDetail {
//
// Operator - rule:Effect
//
-// Exists - key:Effect
-// Equals - key=value:Effect
+// Exists:
+// - key
+// - key:Effect
+//
+// Equals:
+// - key=value
+// - key=value:Effect
//
// If the remove flag is set to true, check for a trailing "-" at the end of
// each item, as this will be a remove list. Otherwise, only consider
@@ -575,22 +580,32 @@ func getClusterTolerations(tolerationList []string, remove bool) []v1.Toleration
ruleEffect := strings.Split(t, ":")
// if we don't have exactly two items, then error
- if len(ruleEffect) != 2 {
+ if len(ruleEffect) < 1 || len(ruleEffect) > 2 {
fmt.Printf("invalid format for toleration: %q\n", t)
os.Exit(1)
}
// for ease of reading
- rule, effectStr := ruleEffect[0], ruleEffect[1]
+ rule, effectStr := ruleEffect[0], ""
+ // effect string is only set if ruleEffect is of length 2
+ if len(ruleEffect) == 2 {
+ effectStr = ruleEffect[1]
+ }
// determine if the effect is for removal or not, as we will continue the
- // loop based on that
- if (remove && !strings.HasSuffix(effectStr, "-")) || (!remove && strings.HasSuffix(effectStr, "-")) {
+ // loop based on that.
+ //
+ // In other words, skip processing the value if either:
+ // - This *is* removal mode AND the value *does not* have the removal suffix "-"
+ // - This *is not* removal mode AND the value *does* have the removal suffix "-"
+ if (remove && !strings.HasSuffix(effectStr, "-") && !strings.HasSuffix(rule, "-")) ||
+ (!remove && (strings.HasSuffix(effectStr, "-") || strings.HasSuffix(rule, "-"))) {
continue
}
// no matter what we can trim any trailing "-" off of the string, and cast
// it as a TaintEffect
+ rule = strings.TrimSuffix(rule, "-")
effect := v1.TaintEffect(strings.TrimSuffix(effectStr, "-"))
// see if the effect is a valid effect
@@ -633,7 +648,8 @@ func getClusterTolerations(tolerationList []string, remove bool) []v1.Toleration
func isValidTaintEffect(taintEffect v1.TaintEffect) bool {
return (taintEffect == v1.TaintEffectNoSchedule ||
taintEffect == v1.TaintEffectPreferNoSchedule ||
- taintEffect == v1.TaintEffectNoExecute)
+ taintEffect == v1.TaintEffectNoExecute ||
+ taintEffect == "")
}
// isTablespaceParam returns true if the parameter in question is acceptable for
diff --git a/cmd/pgo/cmd/create.go b/cmd/pgo/cmd/create.go
index fa6e71e32b..d1cd3eaa14 100644
--- a/cmd/pgo/cmd/create.go
+++ b/cmd/pgo/cmd/create.go
@@ -153,6 +153,8 @@ var (
// Exists - key:Effect
// Equals - key=value:Effect
//
+// Effect can be optional.
+//
// Example:
//
// zone=east:NoSchedule,highspeed:NoSchedule
diff --git a/docs/content/tutorial/customize-cluster.md b/docs/content/tutorial/customize-cluster.md
index 8d5f4f941d..7006b70a20 100644
--- a/docs/content/tutorial/customize-cluster.md
+++ b/docs/content/tutorial/customize-cluster.md
@@ -146,6 +146,12 @@ The PostgreSQL Operator supports adding tolerations to PostgreSQL instances usin
rule:Effect
```
+or
+
+```
+rule
+```
+
where a `rule` can represent existence (e.g. `key`) or equality (`key=value`) and `Effect` is one of `NoSchedule`, `PreferNoSchedule`, or `NoExecute`. For more information on how tolerations work, please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).
You can assign multiple tolerations to a PostgreSQL cluster.
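The accepted toleration formats described above can be sketched as a small parser. This is a simplified stand-in for the `getClusterTolerations` change, with validation of the rule and effect values omitted; note that `strings.Split` always returns at least one element, so only the more-than-two case can actually fail:

```go
package main

import (
	"fmt"
	"strings"
)

// parseToleration accepts "key", "key:Effect", "key=value", and
// "key=value:Effect"; the ":Effect" portion is optional, per the patch above.
// A trailing "-" on either portion flags the toleration for removal.
func parseToleration(item string) (rule, effect string, remove, ok bool) {
	parts := strings.Split(item, ":")
	if len(parts) > 2 {
		return "", "", false, false // e.g. "key:Effect:extra" is malformed
	}
	rule = parts[0]
	if len(parts) == 2 {
		effect = parts[1]
	}
	remove = strings.HasSuffix(rule, "-") || strings.HasSuffix(effect, "-")
	rule = strings.TrimSuffix(rule, "-")
	effect = strings.TrimSuffix(effect, "-")
	return rule, effect, remove, true
}

func main() {
	for _, item := range []string{"zone=east:NoSchedule", "highspeed", "zone=east-"} {
		rule, effect, remove, _ := parseToleration(item)
		fmt.Printf("rule=%q effect=%q remove=%v\n", rule, effect, remove)
	}
}
```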
From b4059a387d5dd6de206657e214273a8de16c6f4b Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 13 Jan 2021 14:45:28 -0500
Subject: [PATCH 141/276] Remove dead code around cluster deletion
This was not being referenced anywhere and caused confusion,
especially in reference to similar functionality that exists
elsewhere.
---
internal/operator/cluster/clusterlogic.go | 43 -----------
internal/operator/cluster/rmdata.go | 92 -----------------------
2 files changed, 135 deletions(-)
delete mode 100644 internal/operator/cluster/rmdata.go
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index 90726ec172..cb8b9e3e38 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -353,28 +353,6 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
return deploymentFields
}
-// DeleteCluster ...
-func DeleteCluster(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace string) error {
- var err error
- log.Info("deleting Pgcluster object" + " in namespace " + namespace)
- log.Info("deleting with Name=" + cl.Spec.Name + " in namespace " + namespace)
-
- // create rmdata job
- isReplica := false
- isBackup := false
- removeData := true
- removeBackup := false
- err = CreateRmdataJob(clientset, cl, namespace, removeData, removeBackup, isReplica, isBackup)
- if err != nil {
- log.Error(err)
- return err
- } else {
- publishDeleteCluster(namespace, cl.ObjectMeta.Labels[config.LABEL_PGOUSER], cl.Spec.Name)
- }
-
- return err
-}
-
// scaleReplicaCreateMissingService creates a service for cluster replicas if
// it does not yet exist.
func scaleReplicaCreateMissingService(clientset kubernetes.Interface, replica *crv1.Pgreplica, cluster *crv1.Pgcluster, namespace string) error {
@@ -600,27 +578,6 @@ func publishScaleError(namespace string, username string, cluster *crv1.Pgcluste
}
}
-func publishDeleteCluster(namespace, username, clusterName string) {
- topics := make([]string, 1)
- topics[0] = events.EventTopicCluster
-
- f := events.EventDeleteClusterFormat{
- EventHeader: events.EventHeader{
- Namespace: namespace,
- Username: username,
- Topic: topics,
- Timestamp: time.Now(),
- EventType: events.EventDeleteCluster,
- },
- Clustername: clusterName,
- }
-
- err := events.Publish(f)
- if err != nil {
- log.Error(err.Error())
- }
-}
-
// ScaleClusterInfo contains information about a cluster obtained when scaling the various
// deployments for a cluster. This includes the name of the primary deployment, all replica
// deployments, along with the names of the services enabled for the cluster.
diff --git a/internal/operator/cluster/rmdata.go b/internal/operator/cluster/rmdata.go
deleted file mode 100644
index 6aa4e986a0..0000000000
--- a/internal/operator/cluster/rmdata.go
+++ /dev/null
@@ -1,92 +0,0 @@
-// Package cluster holds the cluster CRD logic and definitions
-// A cluster is comprised of a primary service, replica service,
-// primary deployment, and replica deployment
-package cluster
-
-/*
- Copyright 2019 - 2021 Crunchy Data Solutions, Inc.
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-import (
- "bytes"
- "context"
- "encoding/json"
- "os"
- "strconv"
-
- "github.com/crunchydata/postgres-operator/internal/config"
- "github.com/crunchydata/postgres-operator/internal/operator"
- "github.com/crunchydata/postgres-operator/internal/util"
- crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
- log "github.com/sirupsen/logrus"
- v1batch "k8s.io/api/batch/v1"
- metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/client-go/kubernetes"
-)
-
-type RmdataJob struct {
- JobName string
- ClusterName string
- PGOImagePrefix string
- PGOImageTag string
- // SecurityContext string
- RemoveData string
- RemoveBackup string
- IsBackup string
- IsReplica string
-}
-
-func CreateRmdataJob(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace string, removeData, removeBackup, isReplica, isBackup bool) error {
- ctx := context.TODO()
- var err error
-
- jobName := cl.Spec.Name + "-rmdata-" + util.RandStringBytesRmndr(4)
-
- jobFields := RmdataJob{
- JobName: jobName,
- ClusterName: cl.Spec.Name,
- PGOImagePrefix: util.GetValueOrDefault(cl.Spec.PGOImagePrefix, operator.Pgo.Pgo.PGOImagePrefix),
- PGOImageTag: operator.Pgo.Pgo.PGOImageTag,
- RemoveData: strconv.FormatBool(removeData),
- RemoveBackup: strconv.FormatBool(removeBackup),
- IsBackup: strconv.FormatBool(isReplica),
- IsReplica: strconv.FormatBool(isBackup),
- }
-
- doc := bytes.Buffer{}
-
- if err := config.RmdatajobTemplate.Execute(&doc, jobFields); err != nil {
- log.Error(err.Error())
- return err
- }
-
- if operator.CRUNCHY_DEBUG {
- _ = config.RmdatajobTemplate.Execute(os.Stdout, jobFields)
- }
-
- newjob := v1batch.Job{}
-
- if err := json.Unmarshal(doc.Bytes(), &newjob); err != nil {
- log.Error("error unmarshalling json into Job " + err.Error())
- return err
- }
-
- // set the container image to an override value, if one exists
- operator.SetContainerImageOverride(config.CONTAINER_IMAGE_PGO_RMDATA,
- &newjob.Spec.Template.Spec.Containers[0])
-
- _, err = clientset.BatchV1().Jobs(namespace).
- Create(ctx, &newjob, metav1.CreateOptions{})
- return err
-}
From 3c5143084e7e9df3c597e00f036c62a39d59b48a Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 13 Jan 2021 15:11:44 -0500
Subject: [PATCH 142/276] Have pgBackRest and associated jobs respect cluster
tolerations
pgBackRest is a close companion to the PostgreSQL cluster itself
and should respect similar tolerations to the PostgreSQL cluster.
Future work could break out tolerations specific for pgBackRest,
but given the two do need to work in concert, it is prudent to
start with this approach.
Issue: [ch10146]
---
cmd/pgo-scheduler/scheduler/policy.go | 1 +
cmd/pgo-scheduler/scheduler/types.go | 1 +
.../files/pgo-configs/backrest-job.json | 3 +++
.../pgo-configs/cluster-bootstrap-job.json | 3 +++
.../files/pgo-configs/pgdump-job.json | 3 +++
.../pgo-backrest-repo-template.json | 3 +++
.../pgo-configs/pgo.sqlrunner-template.json | 3 +++
.../files/pgo-configs/pgrestore-job.json | 3 +++
.../files/pgo-configs/rmdata-job.json | 3 +++
internal/config/labels.go | 1 +
internal/operator/backrest/backup.go | 2 ++
internal/operator/backrest/repo.go | 2 ++
internal/operator/cluster/clusterlogic.go | 6 +++---
internal/operator/clusterutilities.go | 19 -----------------
internal/operator/pgdump/dump.go | 2 ++
internal/operator/pgdump/restore.go | 2 ++
internal/operator/task/rmdata.go | 2 ++
internal/util/cluster.go | 21 +++++++++++++++++++
18 files changed, 58 insertions(+), 22 deletions(-)
diff --git a/cmd/pgo-scheduler/scheduler/policy.go b/cmd/pgo-scheduler/scheduler/policy.go
index c143df1978..00d92b9225 100644
--- a/cmd/pgo-scheduler/scheduler/policy.go
+++ b/cmd/pgo-scheduler/scheduler/policy.go
@@ -142,6 +142,7 @@ func (p PolicyJob) Run() {
PGDatabase: p.database,
PGSQLConfigMap: name,
PGUserSecret: p.secret,
+ Tolerations: util.GetTolerations(cluster.Spec.Tolerations),
}
var doc bytes.Buffer
diff --git a/cmd/pgo-scheduler/scheduler/types.go b/cmd/pgo-scheduler/scheduler/types.go
index 7c766fa539..75a5b297b3 100644
--- a/cmd/pgo-scheduler/scheduler/types.go
+++ b/cmd/pgo-scheduler/scheduler/types.go
@@ -70,4 +70,5 @@ type PolicyTemplate struct {
PGDatabase string
PGUserSecret string
PGSQLConfigMap string
+ Tolerations string
}
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json
index bf89971aa6..b2512650f5 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json
@@ -29,6 +29,9 @@
],
"securityContext": {{.SecurityContext}},
"serviceAccountName": "pgo-backrest",
+ {{ if .Tolerations }}
+ "tolerations": {{ .Tolerations }},
+ {{ end }}
"containers": [{
"name": "backrest",
"image": "{{.CCPImagePrefix}}/crunchy-pgbackrest:{{.CCPImageTag}}",
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json
index ee5e5307a9..42ee6fe2b3 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json
@@ -23,6 +23,9 @@
"spec": {
"securityContext": {{.SecurityContext}},
"serviceAccountName": "pgo-pg",
+ {{ if .Tolerations }}
+ "tolerations": {{ .Tolerations }},
+ {{ end }}
"containers": [{
"name": "database",
"image": "{{.CCPImagePrefix}}/{{.CCPImage}}:{{.CCPImageTag}}",
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgdump-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgdump-job.json
index ef6e1b6d5a..9c44c0ce06 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgdump-job.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgdump-job.json
@@ -31,6 +31,9 @@
],
"securityContext": {{.SecurityContext}},
"serviceAccountName": "pgo-default",
+ {{ if .Tolerations }}
+ "tolerations": {{ .Tolerations }},
+ {{ end }}
"containers": [{
"name": "pgdump",
"image": "{{.CCPImagePrefix}}/crunchy-postgres-ha:{{.CCPImageTag}}",
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json
index 885396322b..32a67d9a99 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json
@@ -46,6 +46,9 @@
"spec": {
"securityContext": {{.SecurityContext}},
"serviceAccountName": "pgo-default",
+ {{ if .Tolerations }}
+ "tolerations": {{ .Tolerations }},
+ {{ end }}
"containers": [{
"name": "database",
"image": "{{.CCPImagePrefix}}/crunchy-pgbackrest-repo:{{.CCPImageTag}}",
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json
index 56f55dd35e..41e1bfc552 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json
@@ -21,6 +21,9 @@
},
"spec": {
"serviceAccountName": "pgo-default",
+ {{ if .Tolerations }}
+ "tolerations": {{ .Tolerations }},
+ {{ end }}
"containers": [
{
"name": "sqlrunner",
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgrestore-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgrestore-job.json
index 3759905e95..abcd5c2ee3 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgrestore-job.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgrestore-job.json
@@ -31,6 +31,9 @@
],
"securityContext": {{.SecurityContext}},
"serviceAccountName": "pgo-default",
+ {{ if .Tolerations }}
+ "tolerations": {{ .Tolerations }},
+ {{ end }}
"containers": [
{
"name": "pgrestore",
diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/rmdata-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/rmdata-job.json
index b5f169fa4a..1b593e181e 100644
--- a/installers/ansible/roles/pgo-operator/files/pgo-configs/rmdata-job.json
+++ b/installers/ansible/roles/pgo-operator/files/pgo-configs/rmdata-job.json
@@ -21,6 +21,9 @@
},
"spec": {
"serviceAccountName": "pgo-target",
+ {{ if .Tolerations }}
+ "tolerations": {{ .Tolerations }},
+ {{ end }}
"containers": [{
"name": "rmdata",
"image": "{{.PGOImagePrefix}}/pgo-rmdata:{{.PGOImageTag}}",
diff --git a/internal/config/labels.go b/internal/config/labels.go
index d07fdd3219..f7c55b79ea 100644
--- a/internal/config/labels.go
+++ b/internal/config/labels.go
@@ -60,6 +60,7 @@ const (
LABEL_DELETE_BACKUPS = "delete-backups"
LABEL_IS_REPLICA = "is-replica"
LABEL_IS_BACKUP = "is-backup"
+ LABEL_RM_TOLERATIONS = "rmdata-tolerations"
LABEL_STARTUP = "startup"
LABEL_SHUTDOWN = "shutdown"
)
diff --git a/internal/operator/backrest/backup.go b/internal/operator/backrest/backup.go
index 4d129b60ac..87cacf49f9 100644
--- a/internal/operator/backrest/backup.go
+++ b/internal/operator/backrest/backup.go
@@ -61,6 +61,7 @@ type backrestJobTemplateFields struct {
PgbackrestS3VerifyTLS string
PgbackrestRestoreVolumes string
PgbackrestRestoreVolumeMounts string
+ Tolerations string
}
var (
@@ -117,6 +118,7 @@ func Backrest(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask)
PgbackrestRepo1Type: repoType,
BackrestLocalAndS3Storage: operator.IsLocalAndS3Storage(cluster),
PgbackrestS3VerifyTLS: task.Spec.Parameters[config.LABEL_BACKREST_S3_VERIFY_TLS],
+ Tolerations: util.GetTolerations(cluster.Spec.Tolerations),
}
podCommandOpts, err := getCommandOptsFromPod(clientset, task, namespace)
diff --git a/internal/operator/backrest/repo.go b/internal/operator/backrest/repo.go
index 86304055d1..f06019a633 100644
--- a/internal/operator/backrest/repo.go
+++ b/internal/operator/backrest/repo.go
@@ -65,6 +65,7 @@ type RepoDeploymentTemplateFields struct {
PodAntiAffinityLabelValue string
Replicas int
BootstrapCluster string
+ Tolerations string
}
type RepoServiceTemplateFields struct {
@@ -250,6 +251,7 @@ func getRepoDeploymentFields(clientset kubernetes.Interface, cluster *crv1.Pgclu
PodAntiAffinityLabelName: config.LABEL_POD_ANTI_AFFINITY,
PodAntiAffinityLabelValue: string(operator.GetPodAntiAffinityType(cluster,
crv1.PodAntiAffinityDeploymentPgBackRest, cluster.Spec.PodAntiAffinity.PgBackRest)),
+ Tolerations: util.GetTolerations(cluster.Spec.Tolerations),
}
return &repoFields
diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go
index cb8b9e3e38..9169d1bedb 100644
--- a/internal/operator/cluster/clusterlogic.go
+++ b/internal/operator/cluster/clusterlogic.go
@@ -347,7 +347,7 @@ func getClusterDeploymentFields(clientset kubernetes.Interface,
ReplicationTLSSecret: cl.Spec.TLS.ReplicationTLSSecret,
CASecret: cl.Spec.TLS.CASecret,
Standby: cl.Spec.Standby,
- Tolerations: operator.GetTolerations(cl.Spec.Tolerations),
+ Tolerations: util.GetTolerations(cl.Spec.Tolerations),
}
return deploymentFields
@@ -491,8 +491,8 @@ func scaleReplicaCreateDeployment(clientset kubernetes.Interface,
// Give precedence to the tolerations defined on the replica spec, otherwise
// take any tolerations defined on the cluster spec
Tolerations: util.GetValueOrDefault(
- operator.GetTolerations(replica.Spec.Tolerations),
- operator.GetTolerations(cluster.Spec.Tolerations)),
+ util.GetTolerations(replica.Spec.Tolerations),
+ util.GetTolerations(cluster.Spec.Tolerations)),
}
switch replica.Spec.ReplicaStorage.StorageType {
diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go
index 94744111ca..239bcbe8f3 100644
--- a/internal/operator/clusterutilities.go
+++ b/internal/operator/clusterutilities.go
@@ -955,25 +955,6 @@ func GetSyncReplication(specSyncReplication *bool) bool {
return Pgo.Cluster.SyncReplication
}
-// GetTolerations returns any tolerations that may be defined in a tolerations
-// in JSON format. Otherwise, it returns an empty string
-func GetTolerations(tolerations []v1.Toleration) string {
- // if no tolerations, exit early
- if len(tolerations) == 0 {
- return ""
- }
-
- // turn into a JSON string
- s, err := json.MarshalIndent(tolerations, "", " ")
-
- if err != nil {
- log.Errorf("%s: returning empty string", err.Error())
- return ""
- }
-
- return string(s)
-}
-
// OverrideClusterContainerImages is a helper function that provides the
// appropriate hooks to override any of the container images that might be
// deployed with a PostgreSQL cluster
diff --git a/internal/operator/pgdump/dump.go b/internal/operator/pgdump/dump.go
index 73bf7cc0a1..0808bedf4f 100644
--- a/internal/operator/pgdump/dump.go
+++ b/internal/operator/pgdump/dump.go
@@ -53,6 +53,7 @@ type pgDumpJobTemplateFields struct {
PgDumpFilename string
PgDumpAll string
PgDumpPVC string
+ Tolerations string
}
// Dump ...
@@ -118,6 +119,7 @@ func Dump(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
PgDumpOpts: task.Spec.Parameters[config.LABEL_PGDUMP_OPTS],
PgDumpAll: task.Spec.Parameters[config.LABEL_PGDUMP_ALL],
PgDumpPVC: pvcName,
+ Tolerations: util.GetTolerations(cluster.Spec.Tolerations),
}
var doc2 bytes.Buffer
diff --git a/internal/operator/pgdump/restore.go b/internal/operator/pgdump/restore.go
index 51f36d0f0e..10e6567cb4 100644
--- a/internal/operator/pgdump/restore.go
+++ b/internal/operator/pgdump/restore.go
@@ -51,6 +51,7 @@ type restorejobTemplateFields struct {
CCPImageTag string
PgPort string
NodeSelector string
+ Tolerations string
}
// Restore ...
@@ -106,6 +107,7 @@ func Restore(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) {
CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix),
CCPImageTag: operator.Pgo.Cluster.CCPImageTag,
NodeSelector: operator.GetNodeAffinity(nodeAffinity),
+ Tolerations: util.GetTolerations(cluster.Spec.Tolerations),
}
var doc2 bytes.Buffer
diff --git a/internal/operator/task/rmdata.go b/internal/operator/task/rmdata.go
index 7f855ecf2e..5341a66156 100644
--- a/internal/operator/task/rmdata.go
+++ b/internal/operator/task/rmdata.go
@@ -47,6 +47,7 @@ type rmdatajobTemplateFields struct {
RemoveBackup string
IsBackup string
IsReplica string
+ Tolerations string
}
// RemoveData ...
@@ -98,6 +99,7 @@ func RemoveData(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask
PGOImagePrefix: util.GetValueOrDefault(task.Spec.Parameters[config.LABEL_IMAGE_PREFIX], operator.Pgo.Pgo.PGOImagePrefix),
PGOImageTag: operator.Pgo.Pgo.PGOImageTag,
SecurityContext: operator.GetPodSecurityContext(task.Spec.StorageSpec.GetSupplementalGroups()),
+ Tolerations: task.Spec.Parameters[config.LABEL_RM_TOLERATIONS],
}
log.Debugf("creating rmdata job %s for cluster %s ", jobName, task.Spec.Name)
diff --git a/internal/util/cluster.go b/internal/util/cluster.go
index 364ce77993..2349cedbb0 100644
--- a/internal/util/cluster.go
+++ b/internal/util/cluster.go
@@ -17,6 +17,7 @@ package util
import (
"context"
+ "encoding/json"
"errors"
"fmt"
"strconv"
@@ -262,6 +263,7 @@ func CreateRMDataTask(clientset kubeapi.Interface, cluster *crv1.Pgcluster, repl
config.LABEL_PG_CLUSTER: cluster.Name,
config.LABEL_REPLICA_NAME: replicaName,
config.LABEL_PGHA_SCOPE: cluster.ObjectMeta.GetLabels()[config.LABEL_PGHA_SCOPE],
+ config.LABEL_RM_TOLERATIONS: GetTolerations(cluster.Spec.Tolerations),
},
TaskType: crv1.PgtaskDeleteData,
},
@@ -379,6 +381,25 @@ func GetS3CredsFromBackrestRepoSecret(clientset kubernetes.Interface, namespace,
return s3Secret, nil
}
+// GetTolerations returns any tolerations that may be defined in a tolerations
+// list as a JSON string. Otherwise, it returns an empty string.
+func GetTolerations(tolerations []v1.Toleration) string {
+ // if no tolerations, exit early
+ if len(tolerations) == 0 {
+ return ""
+ }
+
+ // turn into a JSON string
+ s, err := json.MarshalIndent(tolerations, "", " ")
+
+ if err != nil {
+ log.Errorf("%s: returning empty string", err.Error())
+ return ""
+ }
+
+ return string(s)
+}
+
// SetPostgreSQLPassword updates the password for a PostgreSQL role in the
// PostgreSQL cluster by executing into the primary Pod and changing it
//
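The `GetTolerations` helper moved into `internal/util` above boils down to JSON-encoding the toleration slice, with an empty string signalling the templates' `{{ if .Tolerations }}` guard to omit the field entirely. A self-contained sketch, using a pared-down stand-in for `k8s.io/api/core/v1.Toleration` rather than the real type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Toleration is a minimal stand-in for v1.Toleration, carrying only the
// fields needed to illustrate the JSON rendering.
type Toleration struct {
	Key      string `json:"key,omitempty"`
	Operator string `json:"operator,omitempty"`
	Value    string `json:"value,omitempty"`
	Effect   string `json:"effect,omitempty"`
}

// getTolerations mirrors util.GetTolerations: an empty slice yields an empty
// string so the pod templates skip the "tolerations" field, otherwise the
// slice is rendered as indented JSON for splicing into the template.
func getTolerations(tolerations []Toleration) string {
	if len(tolerations) == 0 {
		return ""
	}
	s, err := json.MarshalIndent(tolerations, "", "  ")
	if err != nil {
		return ""
	}
	return string(s)
}

func main() {
	fmt.Println(getTolerations([]Toleration{
		{Key: "zone", Operator: "Equal", Value: "east", Effect: "NoSchedule"},
	}))
}
```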
From a6beb661652fb8a4f16225248ac022dbf45de15d Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 13 Jan 2021 16:47:44 -0500
Subject: [PATCH 143/276] Ensure standby cluster creates pgBouncer Secret
The code was not allowing for this to happen. Even though the
pgBouncer credential will need to be rotated after a standby is
promoted, we can still create the credential with a nonworking
default.
Issue: [ch10083]
---
internal/operator/cluster/pgbouncer.go | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/internal/operator/cluster/pgbouncer.go b/internal/operator/cluster/pgbouncer.go
index 35b788e9d8..1cb6b72716 100644
--- a/internal/operator/cluster/pgbouncer.go
+++ b/internal/operator/cluster/pgbouncer.go
@@ -164,6 +164,12 @@ func AddPgbouncer(clientset kubernetes.Interface, restconfig *rest.Config, clust
if err := setPostgreSQLPassword(clientset, restconfig, pod, cluster.Spec.Port, crv1.PGUserPgBouncer, pgBouncerPassword); err != nil {
return err
}
+ } else {
+ // if this is a standby cluster, we still need to create a pgBouncer Secret,
+ // but no credentials are available
+ if err := createPgbouncerSecret(clientset, cluster, ""); err != nil {
+ return err
+ }
}
// next, create the pgBouncer config map that will allow pgBouncer to be
From aa3e16937b25364d342ba496a19db993118eb668 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 13 Jan 2021 17:38:23 -0500
Subject: [PATCH 144/276] Remove restrictions on generic linking options for
pgBackRest
This is useful for creating a new cluster with an external WAL
volume from a cluster that lacks one.
Issue: [ch10157]
---
internal/apiserver/backupoptions/pgbackrestoptions.go | 2 --
1 file changed, 2 deletions(-)
diff --git a/internal/apiserver/backupoptions/pgbackrestoptions.go b/internal/apiserver/backupoptions/pgbackrestoptions.go
index ec18545fef..de5cbcb1b3 100644
--- a/internal/apiserver/backupoptions/pgbackrestoptions.go
+++ b/internal/apiserver/backupoptions/pgbackrestoptions.go
@@ -25,8 +25,6 @@ var pgBackRestOptsDenyList = []string{
"--config",
"--config-include-path",
"--config-path",
- "--link-all",
- "--link-map",
"--lock-path",
"--log-timestamp",
"--neutral-umask",
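The deny list above is consulted when validating user-supplied pgBackRest options. A rough sketch of how such a check can work — the list entries here are abbreviated and the isDenied helper is illustrative, not the Operator's actual validation code:

```go
package main

import (
	"fmt"
	"strings"
)

// pgBackRestOptsDenyList mirrors the shape of the deny list in the patch;
// the entries here are a small illustrative subset.
var pgBackRestOptsDenyList = []string{
	"--config",
	"--lock-path",
	"--log-timestamp",
	"--neutral-umask",
}

// isDenied reports whether a user-supplied backup option matches a deny-list
// entry. Options may carry values, e.g. "--config=/tmp/pgbackrest.conf".
func isDenied(opt string) bool {
	for _, denied := range pgBackRestOptsDenyList {
		if opt == denied || strings.HasPrefix(opt, denied+"=") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isDenied("--config=/tmp/pgbackrest.conf")) // true
	fmt.Println(isDenied("--link-all"))                    // false: no longer denied
}
```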
From 0eeafe7214dd4d53c042ac95614bb2372f76a755 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 13 Jan 2021 18:24:57 -0500
Subject: [PATCH 145/276] Reconcile API server permissions list
Of note is the "Restart" permission, which had not been added to
the validation list; this change also removes permissions for calls
that are no longer available.
Issue: #2203
Issue: #2201
---
docs/content/Security/configure-postgres-operator-rbac.md | 1 +
internal/apiserver/perms.go | 7 +------
2 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/docs/content/Security/configure-postgres-operator-rbac.md b/docs/content/Security/configure-postgres-operator-rbac.md
index 63c9e7d36e..4a6d9e5da2 100644
--- a/docs/content/Security/configure-postgres-operator-rbac.md
+++ b/docs/content/Security/configure-postgres-operator-rbac.md
@@ -72,6 +72,7 @@ The following list shows the current complete list of possible pgo permissions t
|DfCluster | allow *pgo df*|
|Label | allow *pgo label*|
|Reload | allow *pgo reload*|
+|Restart | allow *pgo restart*|
|Restore | allow *pgo restore*|
|RestoreDump | allow *pgo restore* for pgdumps|
|ShowBackup | allow *pgo show backup*|
diff --git a/internal/apiserver/perms.go b/internal/apiserver/perms.go
index e4e95978ae..01906db43e 100644
--- a/internal/apiserver/perms.go
+++ b/internal/apiserver/perms.go
@@ -40,7 +40,6 @@ const (
CREATE_CLUSTER_PERM = "CreateCluster"
CREATE_DUMP_PERM = "CreateDump"
CREATE_FAILOVER_PERM = "CreateFailover"
- CREATE_INGEST_PERM = "CreateIngest"
CREATE_NAMESPACE_PERM = "CreateNamespace"
CREATE_PGADMIN_PERM = "CreatePgAdmin"
CREATE_PGBOUNCER_PERM = "CreatePgbouncer"
@@ -57,7 +56,6 @@ const (
// DELETE
DELETE_BACKUP_PERM = "DeleteBackup"
DELETE_CLUSTER_PERM = "DeleteCluster"
- DELETE_INGEST_PERM = "DeleteIngest"
DELETE_NAMESPACE_PERM = "DeleteNamespace"
DELETE_PGADMIN_PERM = "DeletePgAdmin"
DELETE_PGBOUNCER_PERM = "DeletePgbouncer"
@@ -71,7 +69,6 @@ const (
SHOW_BACKUP_PERM = "ShowBackup"
SHOW_CLUSTER_PERM = "ShowCluster"
SHOW_CONFIG_PERM = "ShowConfig"
- SHOW_INGEST_PERM = "ShowIngest"
SHOW_NAMESPACE_PERM = "ShowNamespace"
SHOW_PGADMIN_PERM = "ShowPgAdmin"
SHOW_PGBOUNCER_PERM = "ShowPgBouncer"
@@ -114,6 +111,7 @@ func initializePerms() {
DF_CLUSTER_PERM: "yes",
LABEL_PERM: "yes",
RELOAD_PERM: "yes",
+ RESTART_PERM: "yes",
RESTORE_PERM: "yes",
STATUS_PERM: "yes",
TEST_CLUSTER_PERM: "yes",
@@ -124,7 +122,6 @@ func initializePerms() {
CREATE_DUMP_PERM: "yes",
CREATE_CLUSTER_PERM: "yes",
CREATE_FAILOVER_PERM: "yes",
- CREATE_INGEST_PERM: "yes",
CREATE_NAMESPACE_PERM: "yes",
CREATE_PGADMIN_PERM: "yes",
CREATE_PGBOUNCER_PERM: "yes",
@@ -141,7 +138,6 @@ func initializePerms() {
// DELETE
DELETE_BACKUP_PERM: "yes",
DELETE_CLUSTER_PERM: "yes",
- DELETE_INGEST_PERM: "yes",
DELETE_NAMESPACE_PERM: "yes",
DELETE_PGADMIN_PERM: "yes",
DELETE_PGBOUNCER_PERM: "yes",
@@ -155,7 +151,6 @@ func initializePerms() {
SHOW_BACKUP_PERM: "yes",
SHOW_CLUSTER_PERM: "yes",
SHOW_CONFIG_PERM: "yes",
- SHOW_INGEST_PERM: "yes",
SHOW_NAMESPACE_PERM: "yes",
SHOW_PGADMIN_PERM: "yes",
SHOW_PGBOUNCER_PERM: "yes",
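The permissions reconciled above are kept in a map whose keys are the only valid permission names, so validating a permission is a simple lookup. A hypothetical sketch of the pattern — the constant names and map contents are abbreviated stand-ins for internal/apiserver/perms.go:

```go
package main

import "fmt"

// Illustrative permission names, mirroring a subset of the patch.
const (
	RestartPerm     = "Restart"
	ReloadPerm      = "Reload"
	ShowClusterPerm = "ShowCluster"
)

// permMap holds the complete set of valid permissions; the "yes" value
// follows the convention used in the patch.
var permMap = map[string]string{
	RestartPerm:     "yes",
	ReloadPerm:      "yes",
	ShowClusterPerm: "yes",
}

// isValidPerm reports whether a permission name is recognized, which is
// the check a reconciled list like the one above feeds into.
func isValidPerm(perm string) bool {
	_, ok := permMap[perm]
	return ok
}

func main() {
	fmt.Println(isValidPerm("Restart"))      // valid after this patch
	fmt.Println(isValidPerm("CreateIngest")) // removed by this patch
}
```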
From 2b75d18827442f4d105c77a2b8455624c1fc30ea Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 14 Jan 2021 10:38:46 -0500
Subject: [PATCH 146/276] Allow for Postgres system account user passwords to
be updated
This introduces the "--set-system-account-password" flag to allow
one to update the password for a PostgreSQL system account user.
The flag acts both as an override and as a safety mechanism that
forces one to consider the action before taking it.
Issue: #2169
---
cmd/pgo/cmd/update.go | 1 +
cmd/pgo/cmd/user.go | 37 ++++++++++---------
.../pgo-client/reference/pgo_update_user.md | 35 +++++++++---------
internal/apiserver/userservice/userimpl.go | 5 ++-
pkg/apiservermsgs/usermsgs.go | 5 ++-
5 files changed, 46 insertions(+), 37 deletions(-)
diff --git a/cmd/pgo/cmd/update.go b/cmd/pgo/cmd/update.go
index d09c506c7b..d16efba880 100644
--- a/cmd/pgo/cmd/update.go
+++ b/cmd/pgo/cmd/update.go
@@ -187,6 +187,7 @@ func init() {
UpdateUserCmd.Flags().BoolVar(&PasswordValidAlways, "valid-always", false, "Sets a password to never expire based on expiration time. Takes precedence over --valid-days")
UpdateUserCmd.Flags().BoolVar(&RotatePassword, "rotate-password", false, "Rotates the user's password with an automatically generated password. The length of the password is determine by either --password-length or the value set on the server, in that order.")
UpdateUserCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.")
+ UpdateUserCmd.Flags().BoolVar(&ShowSystemAccounts, "set-system-account-password", false, "Allows for a system account password to be set.")
}
// UpdateCmd represents the update command
diff --git a/cmd/pgo/cmd/user.go b/cmd/pgo/cmd/user.go
index 3c61d5671e..bc8f9352ae 100644
--- a/cmd/pgo/cmd/user.go
+++ b/cmd/pgo/cmd/user.go
@@ -55,7 +55,8 @@ var PasswordLength int
var PasswordValidAlways bool
// ShowSystemAccounts enables the display of the PostgreSQL user accounts that
-// perform system functions, such as the "postgres" user
+// perform system functions, such as the "postgres" user, and for taking action
+// on these accounts
var ShowSystemAccounts bool
func createUser(args []string, ns string) {
@@ -366,20 +367,21 @@ func showUser(args []string, ns string) {
func updateUser(clusterNames []string, namespace string) {
// set up the request
request := msgs.UpdateUserRequest{
- AllFlag: AllFlag,
- Clusters: clusterNames,
- Expired: Expired,
- ExpireUser: ExpireUser,
- ManagedUser: ManagedUser,
- Namespace: namespace,
- Password: Password,
- PasswordAgeDays: PasswordAgeDays,
- PasswordLength: PasswordLength,
- PasswordValidAlways: PasswordValidAlways,
- PasswordType: PasswordType,
- RotatePassword: RotatePassword,
- Selector: Selector,
- Username: strings.TrimSpace(Username),
+ AllFlag: AllFlag,
+ Clusters: clusterNames,
+ Expired: Expired,
+ ExpireUser: ExpireUser,
+ ManagedUser: ManagedUser,
+ Namespace: namespace,
+ Password: Password,
+ PasswordAgeDays: PasswordAgeDays,
+ PasswordLength: PasswordLength,
+ PasswordValidAlways: PasswordValidAlways,
+ PasswordType: PasswordType,
+ RotatePassword: RotatePassword,
+ Selector: Selector,
+ SetSystemAccountPassword: ShowSystemAccounts,
+ Username: strings.TrimSpace(Username),
}
// check to see if EnableLogin or DisableLogin is set. If so, set a value
@@ -391,8 +393,9 @@ func updateUser(clusterNames []string, namespace string) {
}
// check to see if this is a system account if a user name is passed in
- if request.Username != "" && utiloperator.IsPostgreSQLUserSystemAccount(request.Username) {
- fmt.Println("Error:", request.Username, "is a system account and cannot be used")
+ if request.Username != "" && utiloperator.IsPostgreSQLUserSystemAccount(request.Username) && !request.SetSystemAccountPassword {
+ fmt.Println("Error:", request.Username, "is a system account and cannot be used. "+
+ "You can override this with the \"--set-system-account-password\" flag.")
os.Exit(1)
}
diff --git a/docs/content/pgo-client/reference/pgo_update_user.md b/docs/content/pgo-client/reference/pgo_update_user.md
index 25c18b73da..5678720621 100644
--- a/docs/content/pgo-client/reference/pgo_update_user.md
+++ b/docs/content/pgo-client/reference/pgo_update_user.md
@@ -32,27 +32,28 @@ pgo update user [flags]
### Options
```
- --all all clusters.
- --disable-login Disables a PostgreSQL user from being able to log into the PostgreSQL cluster.
- --enable-login Enables a PostgreSQL user to be able to log into the PostgreSQL cluster.
- --expire-user Performs expiring a user if set to true.
- --expired int Updates passwords that will expire in X days using an autogenerated password.
- -h, --help help for user
- -o, --output string The output format. Supported types are: "json"
- --password string Specifies the user password when updating a user password or creating a new user. If --rotate-password is set as well, --password takes precedence.
- --password-length int If no password is supplied, sets the length of the automatically generated password. Defaults to the value set on the server.
- --password-type string The type of password hashing to use.Choices are: (md5, scram-sha-256). This only takes effect if the password is being changed. (default "md5")
- --rotate-password Rotates the user's password with an automatically generated password. The length of the password is determine by either --password-length or the value set on the server, in that order.
- -s, --selector string The selector to use for cluster filtering.
- --username string Updates the postgres user on selective clusters.
- --valid-always Sets a password to never expire based on expiration time. Takes precedence over --valid-days
- --valid-days int Sets the number of days that a password is valid. Defaults to the server value.
+ --all all clusters.
+ --disable-login Disables a PostgreSQL user from being able to log into the PostgreSQL cluster.
+ --enable-login Enables a PostgreSQL user to be able to log into the PostgreSQL cluster.
+ --expire-user Performs expiring a user if set to true.
+ --expired int Updates passwords that will expire in X days using an autogenerated password.
+ -h, --help help for user
+ -o, --output string The output format. Supported types are: "json"
+ --password string Specifies the user password when updating a user password or creating a new user. If --rotate-password is set as well, --password takes precedence.
+ --password-length int If no password is supplied, sets the length of the automatically generated password. Defaults to the value set on the server.
+ --password-type string The type of password hashing to use.Choices are: (md5, scram-sha-256). This only takes effect if the password is being changed. (default "md5")
+ --rotate-password Rotates the user's password with an automatically generated password. The length of the password is determine by either --password-length or the value set on the server, in that order.
+ -s, --selector string The selector to use for cluster filtering.
+ --set-system-account-password Allows for a system account password to be set.
+ --username string Updates the postgres user on selective clusters.
+ --valid-always Sets a password to never expire based on expiration time. Takes precedence over --valid-days
+ --valid-days int Sets the number of days that a password is valid. Defaults to the server value.
```
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -66,4 +67,4 @@ pgo update user [flags]
* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/internal/apiserver/userservice/userimpl.go b/internal/apiserver/userservice/userimpl.go
index 2372edc519..8c919859b4 100644
--- a/internal/apiserver/userservice/userimpl.go
+++ b/internal/apiserver/userservice/userimpl.go
@@ -593,9 +593,10 @@ func UpdateUser(request *msgs.UpdateUserRequest, pgouser string) msgs.UpdateUser
// if this involves updating a specific PostgreSQL account, and it is a system
// account, return here
- if request.Username != "" && util.IsPostgreSQLUserSystemAccount(request.Username) {
+ if request.Username != "" && util.IsPostgreSQLUserSystemAccount(request.Username) && !request.SetSystemAccountPassword {
response.Status.Code = msgs.Error
- response.Status.Msg = fmt.Sprintf(errSystemAccountFormat, request.Username)
+ response.Status.Msg = fmt.Sprintf(errSystemAccountFormat, request.Username) +
+ " You can override this with the \"--set-system-account-password\" flag."
return response
}
diff --git a/pkg/apiservermsgs/usermsgs.go b/pkg/apiservermsgs/usermsgs.go
index 4a716966ba..5de550711c 100644
--- a/pkg/apiservermsgs/usermsgs.go
+++ b/pkg/apiservermsgs/usermsgs.go
@@ -129,7 +129,10 @@ type UpdateUserRequest struct {
PasswordValidAlways bool
RotatePassword bool
Selector string
- Username string
+ // SetSystemAccountPassword allows one to override the password for a
+ // designated system account
+ SetSystemAccountPassword bool
+ Username string
}
// UpdateUserResponse contains the response after an update user request
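The guard added in this patch — rejecting system accounts unless the new flag is set — can be sketched as follows. The systemAccounts map and canUpdatePassword helper are illustrative stand-ins for util.IsPostgreSQLUserSystemAccount and the request handling:

```go
package main

import "fmt"

// systemAccounts is an illustrative stand-in for the system-account check;
// the actual account list lives in the Operator's util package.
var systemAccounts = map[string]bool{
	"postgres":  true,
	"pgbouncer": true,
}

// canUpdatePassword applies the guard from the patch: system accounts are
// rejected unless the caller explicitly sets the override flag.
func canUpdatePassword(username string, setSystemAccountPassword bool) error {
	if username != "" && systemAccounts[username] && !setSystemAccountPassword {
		return fmt.Errorf("%s is a system account and cannot be used; "+
			"override with --set-system-account-password", username)
	}
	return nil
}

func main() {
	fmt.Println(canUpdatePassword("postgres", false)) // rejected: no override
	fmt.Println(canUpdatePassword("postgres", true))  // allowed by the flag
}
```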
From c8427ae7d5f59fda01f88e4875ac98654b585489 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 14 Jan 2021 10:41:30 -0500
Subject: [PATCH 147/276] Update `pgo` client reference docs
The full run had not been done in a while.
---
.../content/pgo-client/reference/pgo_apply.md | 4 ++--
.../pgo-client/reference/pgo_backup.md | 2 +-
docs/content/pgo-client/reference/pgo_cat.md | 4 ++--
.../pgo-client/reference/pgo_create.md | 4 ++--
.../reference/pgo_create_cluster.md | 2 +-
.../reference/pgo_create_namespace.md | 4 ++--
.../reference/pgo_create_pgadmin.md | 4 ++--
.../reference/pgo_create_pgbouncer.md | 2 +-
.../reference/pgo_create_pgorole.md | 4 ++--
.../reference/pgo_create_pgouser.md | 4 ++--
.../pgo-client/reference/pgo_create_policy.md | 2 +-
.../reference/pgo_create_schedule.md | 4 ++--
.../pgo-client/reference/pgo_create_user.md | 4 ++--
.../pgo-client/reference/pgo_delete.md | 2 +-
.../pgo-client/reference/pgo_delete_backup.md | 2 +-
.../reference/pgo_delete_cluster.md | 4 ++--
.../pgo-client/reference/pgo_delete_label.md | 4 ++--
.../reference/pgo_delete_namespace.md | 4 ++--
.../reference/pgo_delete_pgadmin.md | 4 ++--
.../reference/pgo_delete_pgbouncer.md | 4 ++--
.../reference/pgo_delete_pgorole.md | 4 ++--
.../reference/pgo_delete_pgouser.md | 4 ++--
.../pgo-client/reference/pgo_delete_policy.md | 4 ++--
.../reference/pgo_delete_schedule.md | 4 ++--
.../pgo-client/reference/pgo_delete_user.md | 4 ++--
docs/content/pgo-client/reference/pgo_df.md | 4 ++--
.../pgo-client/reference/pgo_failover.md | 2 +-
.../content/pgo-client/reference/pgo_label.md | 4 ++--
.../pgo-client/reference/pgo_reload.md | 4 ++--
.../pgo-client/reference/pgo_restart.md | 2 +-
.../pgo-client/reference/pgo_restore.md | 2 +-
.../content/pgo-client/reference/pgo_scale.md | 23 ++++++++++---------
.../pgo-client/reference/pgo_scaledown.md | 4 ++--
docs/content/pgo-client/reference/pgo_show.md | 4 ++--
.../pgo-client/reference/pgo_show_backup.md | 4 ++--
.../pgo-client/reference/pgo_show_cluster.md | 4 ++--
.../pgo-client/reference/pgo_show_config.md | 4 ++--
.../reference/pgo_show_namespace.md | 4 ++--
.../pgo-client/reference/pgo_show_pgadmin.md | 4 ++--
.../reference/pgo_show_pgbouncer.md | 4 ++--
.../pgo-client/reference/pgo_show_pgorole.md | 4 ++--
.../pgo-client/reference/pgo_show_pgouser.md | 4 ++--
.../pgo-client/reference/pgo_show_policy.md | 4 ++--
.../pgo-client/reference/pgo_show_pvc.md | 4 ++--
.../pgo-client/reference/pgo_show_schedule.md | 4 ++--
.../pgo-client/reference/pgo_show_user.md | 4 ++--
.../pgo-client/reference/pgo_show_workflow.md | 4 ++--
.../pgo-client/reference/pgo_status.md | 4 ++--
docs/content/pgo-client/reference/pgo_test.md | 4 ++--
.../pgo-client/reference/pgo_update.md | 4 ++--
.../reference/pgo_update_cluster.md | 2 +-
.../reference/pgo_update_namespace.md | 4 ++--
.../reference/pgo_update_pgbouncer.md | 2 +-
.../reference/pgo_update_pgorole.md | 4 ++--
.../reference/pgo_update_pgouser.md | 4 ++--
.../pgo-client/reference/pgo_upgrade.md | 2 +-
.../pgo-client/reference/pgo_version.md | 4 ++--
.../content/pgo-client/reference/pgo_watch.md | 4 ++--
58 files changed, 114 insertions(+), 113 deletions(-)
diff --git a/docs/content/pgo-client/reference/pgo_apply.md b/docs/content/pgo-client/reference/pgo_apply.md
index 403d6c9d47..8cb65da368 100644
--- a/docs/content/pgo-client/reference/pgo_apply.md
+++ b/docs/content/pgo-client/reference/pgo_apply.md
@@ -28,7 +28,7 @@ pgo apply [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -42,4 +42,4 @@ pgo apply [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_backup.md b/docs/content/pgo-client/reference/pgo_backup.md
index d8e028bf57..bca8d77396 100644
--- a/docs/content/pgo-client/reference/pgo_backup.md
+++ b/docs/content/pgo-client/reference/pgo_backup.md
@@ -44,4 +44,4 @@ pgo backup [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 30-Dec-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_cat.md b/docs/content/pgo-client/reference/pgo_cat.md
index cef3887e31..0b4a13747f 100644
--- a/docs/content/pgo-client/reference/pgo_cat.md
+++ b/docs/content/pgo-client/reference/pgo_cat.md
@@ -24,7 +24,7 @@ pgo cat [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -38,4 +38,4 @@ pgo cat [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_create.md b/docs/content/pgo-client/reference/pgo_create.md
index 14cc07b5d0..2bd589dd40 100644
--- a/docs/content/pgo-client/reference/pgo_create.md
+++ b/docs/content/pgo-client/reference/pgo_create.md
@@ -30,7 +30,7 @@ pgo create [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -53,4 +53,4 @@ pgo create [flags]
* [pgo create schedule](/pgo-client/reference/pgo_create_schedule/) - Create a cron-like scheduled task
* [pgo create user](/pgo-client/reference/pgo_create_user/) - Create a PostgreSQL user
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_create_cluster.md b/docs/content/pgo-client/reference/pgo_create_cluster.md
index 26ee79dcfc..9993e29127 100644
--- a/docs/content/pgo-client/reference/pgo_create_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_create_cluster.md
@@ -136,4 +136,4 @@ pgo create cluster [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 2-Jan-2021
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_create_namespace.md b/docs/content/pgo-client/reference/pgo_create_namespace.md
index 90894e2b77..bf544aba72 100644
--- a/docs/content/pgo-client/reference/pgo_create_namespace.md
+++ b/docs/content/pgo-client/reference/pgo_create_namespace.md
@@ -28,7 +28,7 @@ pgo create namespace [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -42,4 +42,4 @@ pgo create namespace [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_create_pgadmin.md b/docs/content/pgo-client/reference/pgo_create_pgadmin.md
index 1e0c43d578..0dd744cbce 100644
--- a/docs/content/pgo-client/reference/pgo_create_pgadmin.md
+++ b/docs/content/pgo-client/reference/pgo_create_pgadmin.md
@@ -25,7 +25,7 @@ pgo create pgadmin [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -39,4 +39,4 @@ pgo create pgadmin [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_create_pgbouncer.md b/docs/content/pgo-client/reference/pgo_create_pgbouncer.md
index beef67d591..c0e10c6b41 100644
--- a/docs/content/pgo-client/reference/pgo_create_pgbouncer.md
+++ b/docs/content/pgo-client/reference/pgo_create_pgbouncer.md
@@ -46,4 +46,4 @@ pgo create pgbouncer [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 1-Jan-2021
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_create_pgorole.md b/docs/content/pgo-client/reference/pgo_create_pgorole.md
index 50bcc66915..5320296bb4 100644
--- a/docs/content/pgo-client/reference/pgo_create_pgorole.md
+++ b/docs/content/pgo-client/reference/pgo_create_pgorole.md
@@ -25,7 +25,7 @@ pgo create pgorole [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -39,4 +39,4 @@ pgo create pgorole [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_create_pgouser.md b/docs/content/pgo-client/reference/pgo_create_pgouser.md
index 35513ea915..aa5d39b5eb 100644
--- a/docs/content/pgo-client/reference/pgo_create_pgouser.md
+++ b/docs/content/pgo-client/reference/pgo_create_pgouser.md
@@ -28,7 +28,7 @@ pgo create pgouser [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -42,4 +42,4 @@ pgo create pgouser [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_create_policy.md b/docs/content/pgo-client/reference/pgo_create_policy.md
index 7705067853..0bce879654 100644
--- a/docs/content/pgo-client/reference/pgo_create_policy.md
+++ b/docs/content/pgo-client/reference/pgo_create_policy.md
@@ -39,4 +39,4 @@ pgo create policy [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 21-Dec-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_create_schedule.md b/docs/content/pgo-client/reference/pgo_create_schedule.md
index 6549cfe588..d303b28488 100644
--- a/docs/content/pgo-client/reference/pgo_create_schedule.md
+++ b/docs/content/pgo-client/reference/pgo_create_schedule.md
@@ -34,7 +34,7 @@ pgo create schedule [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -48,4 +48,4 @@ pgo create schedule [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_create_user.md b/docs/content/pgo-client/reference/pgo_create_user.md
index cd38c71059..106b27a59f 100644
--- a/docs/content/pgo-client/reference/pgo_create_user.md
+++ b/docs/content/pgo-client/reference/pgo_create_user.md
@@ -36,7 +36,7 @@ pgo create user [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -50,4 +50,4 @@ pgo create user [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_delete.md b/docs/content/pgo-client/reference/pgo_delete.md
index ea25f615b0..1eb47af507 100644
--- a/docs/content/pgo-client/reference/pgo_delete.md
+++ b/docs/content/pgo-client/reference/pgo_delete.md
@@ -64,4 +64,4 @@ pgo delete [flags]
* [pgo delete schedule](/pgo-client/reference/pgo_delete_schedule/) - Delete a schedule
* [pgo delete user](/pgo-client/reference/pgo_delete_user/) - Delete a user
-###### Auto generated by spf13/cobra on 20-Dec-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_delete_backup.md b/docs/content/pgo-client/reference/pgo_delete_backup.md
index 67b40d1c53..9b1f091bde 100644
--- a/docs/content/pgo-client/reference/pgo_delete_backup.md
+++ b/docs/content/pgo-client/reference/pgo_delete_backup.md
@@ -40,4 +40,4 @@ pgo delete backup [flags]
* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource
-###### Auto generated by spf13/cobra on 20-Dec-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_delete_cluster.md b/docs/content/pgo-client/reference/pgo_delete_cluster.md
index bf550cf53e..0243dc8c3c 100644
--- a/docs/content/pgo-client/reference/pgo_delete_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_delete_cluster.md
@@ -30,7 +30,7 @@ pgo delete cluster [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -44,4 +44,4 @@ pgo delete cluster [flags]
* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_delete_label.md b/docs/content/pgo-client/reference/pgo_delete_label.md
index b8ad151b73..2e8efeb5c3 100644
--- a/docs/content/pgo-client/reference/pgo_delete_label.md
+++ b/docs/content/pgo-client/reference/pgo_delete_label.md
@@ -28,7 +28,7 @@ pgo delete label [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -42,4 +42,4 @@ pgo delete label [flags]
* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_delete_namespace.md b/docs/content/pgo-client/reference/pgo_delete_namespace.md
index 63e9fa95db..a339bf0218 100644
--- a/docs/content/pgo-client/reference/pgo_delete_namespace.md
+++ b/docs/content/pgo-client/reference/pgo_delete_namespace.md
@@ -25,7 +25,7 @@ pgo delete namespace [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -39,4 +39,4 @@ pgo delete namespace [flags]
* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_delete_pgadmin.md b/docs/content/pgo-client/reference/pgo_delete_pgadmin.md
index d48bacd9d0..5f778c2eb5 100644
--- a/docs/content/pgo-client/reference/pgo_delete_pgadmin.md
+++ b/docs/content/pgo-client/reference/pgo_delete_pgadmin.md
@@ -26,7 +26,7 @@ pgo delete pgadmin [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -40,4 +40,4 @@ pgo delete pgadmin [flags]
* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_delete_pgbouncer.md b/docs/content/pgo-client/reference/pgo_delete_pgbouncer.md
index bcf71def78..b1524b1a78 100644
--- a/docs/content/pgo-client/reference/pgo_delete_pgbouncer.md
+++ b/docs/content/pgo-client/reference/pgo_delete_pgbouncer.md
@@ -27,7 +27,7 @@ pgo delete pgbouncer [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -41,4 +41,4 @@ pgo delete pgbouncer [flags]
* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_delete_pgorole.md b/docs/content/pgo-client/reference/pgo_delete_pgorole.md
index f67359235d..682baf156e 100644
--- a/docs/content/pgo-client/reference/pgo_delete_pgorole.md
+++ b/docs/content/pgo-client/reference/pgo_delete_pgorole.md
@@ -26,7 +26,7 @@ pgo delete pgorole [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -40,4 +40,4 @@ pgo delete pgorole [flags]
* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_delete_pgouser.md b/docs/content/pgo-client/reference/pgo_delete_pgouser.md
index 0a4bba911f..2bddabd0e6 100644
--- a/docs/content/pgo-client/reference/pgo_delete_pgouser.md
+++ b/docs/content/pgo-client/reference/pgo_delete_pgouser.md
@@ -26,7 +26,7 @@ pgo delete pgouser [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -40,4 +40,4 @@ pgo delete pgouser [flags]
* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_delete_policy.md b/docs/content/pgo-client/reference/pgo_delete_policy.md
index cf40d26835..5f565e764b 100644
--- a/docs/content/pgo-client/reference/pgo_delete_policy.md
+++ b/docs/content/pgo-client/reference/pgo_delete_policy.md
@@ -26,7 +26,7 @@ pgo delete policy [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -40,4 +40,4 @@ pgo delete policy [flags]
* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_delete_schedule.md b/docs/content/pgo-client/reference/pgo_delete_schedule.md
index b7de536bbd..600a797d11 100644
--- a/docs/content/pgo-client/reference/pgo_delete_schedule.md
+++ b/docs/content/pgo-client/reference/pgo_delete_schedule.md
@@ -29,7 +29,7 @@ pgo delete schedule [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -43,4 +43,4 @@ pgo delete schedule [flags]
* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_delete_user.md b/docs/content/pgo-client/reference/pgo_delete_user.md
index ea4f7f75ae..48cef7d07c 100644
--- a/docs/content/pgo-client/reference/pgo_delete_user.md
+++ b/docs/content/pgo-client/reference/pgo_delete_user.md
@@ -29,7 +29,7 @@ pgo delete user [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -43,4 +43,4 @@ pgo delete user [flags]
* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_df.md b/docs/content/pgo-client/reference/pgo_df.md
index 3a744dfbe9..7b81786c3a 100644
--- a/docs/content/pgo-client/reference/pgo_df.md
+++ b/docs/content/pgo-client/reference/pgo_df.md
@@ -29,7 +29,7 @@ pgo df [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -43,4 +43,4 @@ pgo df [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_failover.md b/docs/content/pgo-client/reference/pgo_failover.md
index c81b3bfd92..deca90ecc7 100644
--- a/docs/content/pgo-client/reference/pgo_failover.md
+++ b/docs/content/pgo-client/reference/pgo_failover.md
@@ -47,4 +47,4 @@ pgo failover [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 4-Jan-2021
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_label.md b/docs/content/pgo-client/reference/pgo_label.md
index 14f6486ad7..abcb3e0115 100644
--- a/docs/content/pgo-client/reference/pgo_label.md
+++ b/docs/content/pgo-client/reference/pgo_label.md
@@ -30,7 +30,7 @@ pgo label [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -44,4 +44,4 @@ pgo label [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_reload.md b/docs/content/pgo-client/reference/pgo_reload.md
index ebc8dc2e1a..f6191cfe17 100644
--- a/docs/content/pgo-client/reference/pgo_reload.md
+++ b/docs/content/pgo-client/reference/pgo_reload.md
@@ -26,7 +26,7 @@ pgo reload [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -40,4 +40,4 @@ pgo reload [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_restart.md b/docs/content/pgo-client/reference/pgo_restart.md
index 2a56f8ed12..0a1b3e3f4c 100644
--- a/docs/content/pgo-client/reference/pgo_restart.md
+++ b/docs/content/pgo-client/reference/pgo_restart.md
@@ -53,4 +53,4 @@ pgo restart [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 5-Dec-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_restore.md b/docs/content/pgo-client/reference/pgo_restore.md
index 78d3584e7e..4894154e73 100644
--- a/docs/content/pgo-client/reference/pgo_restore.md
+++ b/docs/content/pgo-client/reference/pgo_restore.md
@@ -47,4 +47,4 @@ pgo restore [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 31-Dec-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_scale.md b/docs/content/pgo-client/reference/pgo_scale.md
index 146645d08c..edcd61aab5 100644
--- a/docs/content/pgo-client/reference/pgo_scale.md
+++ b/docs/content/pgo-client/reference/pgo_scale.md
@@ -18,16 +18,17 @@ pgo scale [flags]
### Options
```
- --ccp-image-tag string The CCPImageTag to use for cluster creation. If specified, overrides the .pgo.yaml setting.
- -h, --help help for scale
- --no-prompt No command line confirmation.
- --node-label string The node label (key) to use in placing the replica database. If not set, any node is used.
- --replica-count int The replica count to apply to the clusters. (default 1)
- --service-type string The service type to use in the replica Service. If not set, the default in pgo.yaml will be used.
- --storage-config string The name of a Storage config in pgo.yaml to use for the replica storage.
- --toleration strings Set Pod tolerations for each PostgreSQL instance in a cluster.
- The general format is "key=value:Effect"
- For example, to add an Exists and an Equals toleration: "--toleration=ssd:NoSchedule,zone=east:NoSchedule"
+ --ccp-image-tag string The CCPImageTag to use for cluster creation. If specified, overrides the .pgo.yaml setting.
+ -h, --help help for scale
+ --no-prompt No command line confirmation.
+ --node-affinity-type string Sets the type of node affinity to use. Can be either preferred (default) or required. Must be used with --node-label
+ --node-label string The node label (key) to use in placing the replica database. If not set, any node is used.
+ --replica-count int The replica count to apply to the clusters. (default 1)
+ --service-type string The service type to use in the replica Service. If not set, the default in pgo.yaml will be used.
+ --storage-config string The name of a Storage config in pgo.yaml to use for the replica storage.
+ --toleration strings Set Pod tolerations for each PostgreSQL instance in a cluster.
+ The general format is "key=value:Effect"
+ For example, to add an Exists and an Equals toleration: "--toleration=ssd:NoSchedule,zone=east:NoSchedule"
```
### Options inherited from parent commands
@@ -47,4 +48,4 @@ pgo scale [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 25-Dec-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_scaledown.md b/docs/content/pgo-client/reference/pgo_scaledown.md
index deef6123d9..157e24ba12 100644
--- a/docs/content/pgo-client/reference/pgo_scaledown.md
+++ b/docs/content/pgo-client/reference/pgo_scaledown.md
@@ -32,7 +32,7 @@ pgo scaledown [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -46,4 +46,4 @@ pgo scaledown [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show.md b/docs/content/pgo-client/reference/pgo_show.md
index b71032f999..5a4e47350e 100644
--- a/docs/content/pgo-client/reference/pgo_show.md
+++ b/docs/content/pgo-client/reference/pgo_show.md
@@ -33,7 +33,7 @@ pgo show [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -60,4 +60,4 @@ pgo show [flags]
* [pgo show user](/pgo-client/reference/pgo_show_user/) - Show user information
* [pgo show workflow](/pgo-client/reference/pgo_show_workflow/) - Show workflow information
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show_backup.md b/docs/content/pgo-client/reference/pgo_show_backup.md
index a15c426d54..adbb331666 100644
--- a/docs/content/pgo-client/reference/pgo_show_backup.md
+++ b/docs/content/pgo-client/reference/pgo_show_backup.md
@@ -25,7 +25,7 @@ pgo show backup [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -39,4 +39,4 @@ pgo show backup [flags]
* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show_cluster.md b/docs/content/pgo-client/reference/pgo_show_cluster.md
index 291d3b6ff6..b21ad78830 100644
--- a/docs/content/pgo-client/reference/pgo_show_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_show_cluster.md
@@ -29,7 +29,7 @@ pgo show cluster [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -43,4 +43,4 @@ pgo show cluster [flags]
* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show_config.md b/docs/content/pgo-client/reference/pgo_show_config.md
index ae3cb75059..65e8efea4f 100644
--- a/docs/content/pgo-client/reference/pgo_show_config.md
+++ b/docs/content/pgo-client/reference/pgo_show_config.md
@@ -24,7 +24,7 @@ pgo show config [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -38,4 +38,4 @@ pgo show config [flags]
* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show_namespace.md b/docs/content/pgo-client/reference/pgo_show_namespace.md
index 9794a12bac..40ce90c983 100644
--- a/docs/content/pgo-client/reference/pgo_show_namespace.md
+++ b/docs/content/pgo-client/reference/pgo_show_namespace.md
@@ -25,7 +25,7 @@ pgo show namespace [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -39,4 +39,4 @@ pgo show namespace [flags]
* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show_pgadmin.md b/docs/content/pgo-client/reference/pgo_show_pgadmin.md
index 73574045aa..f4ebb5d617 100644
--- a/docs/content/pgo-client/reference/pgo_show_pgadmin.md
+++ b/docs/content/pgo-client/reference/pgo_show_pgadmin.md
@@ -28,7 +28,7 @@ pgo show pgadmin [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -42,4 +42,4 @@ pgo show pgadmin [flags]
* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show_pgbouncer.md b/docs/content/pgo-client/reference/pgo_show_pgbouncer.md
index 0a977097a8..707782ccd5 100644
--- a/docs/content/pgo-client/reference/pgo_show_pgbouncer.md
+++ b/docs/content/pgo-client/reference/pgo_show_pgbouncer.md
@@ -28,7 +28,7 @@ pgo show pgbouncer [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -42,4 +42,4 @@ pgo show pgbouncer [flags]
* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show_pgorole.md b/docs/content/pgo-client/reference/pgo_show_pgorole.md
index f8241d4e33..ca967aaeb6 100644
--- a/docs/content/pgo-client/reference/pgo_show_pgorole.md
+++ b/docs/content/pgo-client/reference/pgo_show_pgorole.md
@@ -25,7 +25,7 @@ pgo show pgorole [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -39,4 +39,4 @@ pgo show pgorole [flags]
* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show_pgouser.md b/docs/content/pgo-client/reference/pgo_show_pgouser.md
index 4881d2f1fb..1ad60b4303 100644
--- a/docs/content/pgo-client/reference/pgo_show_pgouser.md
+++ b/docs/content/pgo-client/reference/pgo_show_pgouser.md
@@ -25,7 +25,7 @@ pgo show pgouser [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -39,4 +39,4 @@ pgo show pgouser [flags]
* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show_policy.md b/docs/content/pgo-client/reference/pgo_show_policy.md
index ddeaedbd09..6303392491 100644
--- a/docs/content/pgo-client/reference/pgo_show_policy.md
+++ b/docs/content/pgo-client/reference/pgo_show_policy.md
@@ -26,7 +26,7 @@ pgo show policy [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -40,4 +40,4 @@ pgo show policy [flags]
* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show_pvc.md b/docs/content/pgo-client/reference/pgo_show_pvc.md
index ea8312dd36..16d7d3f457 100644
--- a/docs/content/pgo-client/reference/pgo_show_pvc.md
+++ b/docs/content/pgo-client/reference/pgo_show_pvc.md
@@ -26,7 +26,7 @@ pgo show pvc [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -40,4 +40,4 @@ pgo show pvc [flags]
* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show_schedule.md b/docs/content/pgo-client/reference/pgo_show_schedule.md
index 7d39ac4cff..eb9033fc1c 100644
--- a/docs/content/pgo-client/reference/pgo_show_schedule.md
+++ b/docs/content/pgo-client/reference/pgo_show_schedule.md
@@ -29,7 +29,7 @@ pgo show schedule [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -43,4 +43,4 @@ pgo show schedule [flags]
* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show_user.md b/docs/content/pgo-client/reference/pgo_show_user.md
index 7dcdfe2b31..653fda1ede 100644
--- a/docs/content/pgo-client/reference/pgo_show_user.md
+++ b/docs/content/pgo-client/reference/pgo_show_user.md
@@ -31,7 +31,7 @@ pgo show user [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -45,4 +45,4 @@ pgo show user [flags]
* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_show_workflow.md b/docs/content/pgo-client/reference/pgo_show_workflow.md
index 28d5f11666..722ad453ea 100644
--- a/docs/content/pgo-client/reference/pgo_show_workflow.md
+++ b/docs/content/pgo-client/reference/pgo_show_workflow.md
@@ -24,7 +24,7 @@ pgo show workflow [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -38,4 +38,4 @@ pgo show workflow [flags]
* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_status.md b/docs/content/pgo-client/reference/pgo_status.md
index 21a4a84464..f25a662bd1 100644
--- a/docs/content/pgo-client/reference/pgo_status.md
+++ b/docs/content/pgo-client/reference/pgo_status.md
@@ -25,7 +25,7 @@ pgo status [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -39,4 +39,4 @@ pgo status [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_test.md b/docs/content/pgo-client/reference/pgo_test.md
index 36671b5f6e..690efc389d 100644
--- a/docs/content/pgo-client/reference/pgo_test.md
+++ b/docs/content/pgo-client/reference/pgo_test.md
@@ -29,7 +29,7 @@ pgo test [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -43,4 +43,4 @@ pgo test [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_update.md b/docs/content/pgo-client/reference/pgo_update.md
index 669c841701..3045234668 100644
--- a/docs/content/pgo-client/reference/pgo_update.md
+++ b/docs/content/pgo-client/reference/pgo_update.md
@@ -33,7 +33,7 @@ pgo update [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -53,4 +53,4 @@ pgo update [flags]
* [pgo update pgouser](/pgo-client/reference/pgo_update_pgouser/) - Update a pgouser
* [pgo update user](/pgo-client/reference/pgo_update_user/) - Update a PostgreSQL user
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_update_cluster.md b/docs/content/pgo-client/reference/pgo_update_cluster.md
index 3596be5282..78f1a01ff8 100644
--- a/docs/content/pgo-client/reference/pgo_update_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_update_cluster.md
@@ -95,4 +95,4 @@ pgo update cluster [flags]
* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster
-###### Auto generated by spf13/cobra on 3-Jan-2021
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_update_namespace.md b/docs/content/pgo-client/reference/pgo_update_namespace.md
index 396cb9d30b..d176bd9dd4 100644
--- a/docs/content/pgo-client/reference/pgo_update_namespace.md
+++ b/docs/content/pgo-client/reference/pgo_update_namespace.md
@@ -23,7 +23,7 @@ pgo update namespace [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -37,4 +37,4 @@ pgo update namespace [flags]
* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_update_pgbouncer.md b/docs/content/pgo-client/reference/pgo_update_pgbouncer.md
index b4dd3f0247..b97a9f60c4 100644
--- a/docs/content/pgo-client/reference/pgo_update_pgbouncer.md
+++ b/docs/content/pgo-client/reference/pgo_update_pgbouncer.md
@@ -50,4 +50,4 @@ pgo update pgbouncer [flags]
* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster
-###### Auto generated by spf13/cobra on 1-Jan-2021
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_update_pgorole.md b/docs/content/pgo-client/reference/pgo_update_pgorole.md
index 3c1706b76a..448a4673dc 100644
--- a/docs/content/pgo-client/reference/pgo_update_pgorole.md
+++ b/docs/content/pgo-client/reference/pgo_update_pgorole.md
@@ -25,7 +25,7 @@ pgo update pgorole [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -39,4 +39,4 @@ pgo update pgorole [flags]
* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_update_pgouser.md b/docs/content/pgo-client/reference/pgo_update_pgouser.md
index 2991939cd7..a460b11ece 100644
--- a/docs/content/pgo-client/reference/pgo_update_pgouser.md
+++ b/docs/content/pgo-client/reference/pgo_update_pgouser.md
@@ -30,7 +30,7 @@ pgo update pgouser [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -44,4 +44,4 @@ pgo update pgouser [flags]
* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_upgrade.md b/docs/content/pgo-client/reference/pgo_upgrade.md
index 534790f189..7ed0c642c9 100644
--- a/docs/content/pgo-client/reference/pgo_upgrade.md
+++ b/docs/content/pgo-client/reference/pgo_upgrade.md
@@ -44,4 +44,4 @@ pgo upgrade [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 20-Dec-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_version.md b/docs/content/pgo-client/reference/pgo_version.md
index 5bf407bc73..cdadd2cf95 100644
--- a/docs/content/pgo-client/reference/pgo_version.md
+++ b/docs/content/pgo-client/reference/pgo_version.md
@@ -25,7 +25,7 @@ pgo version [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -39,4 +39,4 @@ pgo version [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_watch.md b/docs/content/pgo-client/reference/pgo_watch.md
index 0f3e721545..35416c3b1a 100644
--- a/docs/content/pgo-client/reference/pgo_watch.md
+++ b/docs/content/pgo-client/reference/pgo_watch.md
@@ -25,7 +25,7 @@ pgo watch [flags]
### Options inherited from parent commands
```
- --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
+ --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. Note that the URL should **not** end in a '/'.
--debug Enable additional output for debugging.
--disable-tls Disable TLS authentication to the Postgres Operator.
--exclude-os-trust Exclude CA certs from OS default trust store
@@ -39,4 +39,4 @@ pgo watch [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 1-Oct-2020
+###### Auto generated by spf13/cobra on 14-Jan-2021
From 75e7099b9ed3b654b808ab38eda5ac18efed35f0 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 14 Jan 2021 10:40:11 -0500
Subject: [PATCH 148/276] Bump v4.6.0-beta.3
---
Makefile | 2 +-
bin/push-ccp-to-gcr.sh | 2 +-
conf/postgres-operator/pgo.yaml | 4 ++--
docs/config.toml | 2 +-
docs/content/releases/4.6.0.md | 8 ++++++++
examples/create-by-resource/fromcrd.json | 6 +++---
examples/envs.sh | 2 +-
examples/helm/README.md | 2 +-
examples/helm/postgres/Chart.yaml | 2 +-
examples/helm/postgres/templates/pgcluster.yaml | 2 +-
examples/helm/postgres/values.yaml | 2 +-
examples/kustomize/createcluster/README.md | 16 ++++++++--------
.../kustomize/createcluster/base/pgcluster.yaml | 6 +++---
.../overlay/staging/hippo-rpl1-pgreplica.yaml | 2 +-
installers/ansible/README.md | 2 +-
installers/ansible/values.yaml | 6 +++---
installers/gcp-marketplace/Makefile | 2 +-
installers/gcp-marketplace/README.md | 2 +-
installers/gcp-marketplace/values.yaml | 6 +++---
installers/helm/Chart.yaml | 2 +-
installers/helm/values.yaml | 6 +++---
installers/kubectl/client-setup.sh | 2 +-
installers/kubectl/postgres-operator-ocp311.yml | 8 ++++----
installers/kubectl/postgres-operator.yml | 8 ++++----
installers/metrics/ansible/README.md | 2 +-
installers/metrics/helm/Chart.yaml | 2 +-
installers/metrics/helm/helm_template.yaml | 2 +-
installers/metrics/helm/values.yaml | 2 +-
.../kubectl/postgres-operator-metrics-ocp311.yml | 2 +-
.../kubectl/postgres-operator-metrics.yml | 2 +-
installers/olm/Makefile | 2 +-
pkg/apis/crunchydata.com/v1/doc.go | 8 ++++----
pkg/apiservermsgs/common.go | 2 +-
redhat/atomic/help.1 | 2 +-
redhat/atomic/help.md | 2 +-
35 files changed, 69 insertions(+), 61 deletions(-)
diff --git a/Makefile b/Makefile
index 63af50dea4..bdaf771968 100644
--- a/Makefile
+++ b/Makefile
@@ -5,7 +5,7 @@ PGOROOT ?= $(CURDIR)
PGO_BASEOS ?= centos8
PGO_IMAGE_PREFIX ?= crunchydata
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
-PGO_VERSION ?= 4.6.0-beta.2
+PGO_VERSION ?= 4.6.0-beta.3
PGO_PG_VERSION ?= 13
PGO_PG_FULLVERSION ?= 13.1
PGO_BACKREST_VERSION ?= 2.31
diff --git a/bin/push-ccp-to-gcr.sh b/bin/push-ccp-to-gcr.sh
index 1f3838f374..3c98e3829a 100755
--- a/bin/push-ccp-to-gcr.sh
+++ b/bin/push-ccp-to-gcr.sh
@@ -16,7 +16,7 @@
GCR_IMAGE_PREFIX=gcr.io/crunchy-dev-test
CCP_IMAGE_PREFIX=crunchydata
-CCP_IMAGE_TAG=centos8-13.1-4.6.0-beta.2
+CCP_IMAGE_TAG=centos8-13.1-4.6.0-beta.3
IMAGES=(
crunchy-prometheus
diff --git a/conf/postgres-operator/pgo.yaml b/conf/postgres-operator/pgo.yaml
index 28493c8c4a..c30e11f41d 100644
--- a/conf/postgres-operator/pgo.yaml
+++ b/conf/postgres-operator/pgo.yaml
@@ -2,7 +2,7 @@ Cluster:
CCPImagePrefix: registry.developers.crunchydata.com/crunchydata
Metrics: false
Badger: false
- CCPImageTag: centos8-13.1-4.6.0-beta.2
+ CCPImageTag: centos8-13.1-4.6.0-beta.3
Port: 5432
PGBadgerPort: 10000
ExporterPort: 9187
@@ -81,4 +81,4 @@ Storage:
Pgo:
Audit: false
PGOImagePrefix: registry.developers.crunchydata.com/crunchydata
- PGOImageTag: centos8-4.6.0-beta.2
+ PGOImageTag: centos8-4.6.0-beta.3
diff --git a/docs/config.toml b/docs/config.toml
index 7975290ef6..83e3dc59e7 100644
--- a/docs/config.toml
+++ b/docs/config.toml
@@ -25,7 +25,7 @@ disableNavChevron = false # set true to hide next/prev chevron, default is false
highlightClientSide = false # set true to use highlight.pack.js instead of the default hugo chroma highlighter
menushortcutsnewtab = true # set true to open shortcuts links to a new tab/window
enableGitInfo = true
-operatorVersion = "4.6.0-beta.2"
+operatorVersion = "4.6.0-beta.3"
postgresVersion = "13.1"
postgresVersion13 = "13.1"
postgresVersion12 = "13.1"
diff --git a/docs/content/releases/4.6.0.md b/docs/content/releases/4.6.0.md
index 3f423fe16d..ff1a886735 100644
--- a/docs/content/releases/4.6.0.md
+++ b/docs/content/releases/4.6.0.md
@@ -19,6 +19,7 @@ The PostgreSQL Operator 4.6.0 release includes the following software versions u
- [pgBackRest](https://pgbackrest.org/) is now at version 2.31.
- [pgnodemx](https://github.com/CrunchyData/pgnodemx) is now at version 1.0.3
- [Patroni](https://patroni.readthedocs.io/) is now at version 2.0.1
+- [pgBadger](https://github.com/darold/pgbadger) is now at version 11.4
The monitoring stack for the PostgreSQL Operator uses upstream components as opposed to repackaging them. These are specified as part of the [PostgreSQL Operator Installer](https://access.crunchydata.com/documentation/postgres-operator/latest/installation/postgres-operator/). We have tested this release with the following versions of each component:
@@ -193,6 +194,7 @@ Passing in the [`--process-max`](https://pgbackrest.org/command.html#command-arc
- `pgo restore` will now first attempt a [pgBackRest delta restore](https://pgbackrest.org/user-guide.html#restore/option-delta), which can significantly speed up the restore time for large databases. Passing in the [`--process-max`](https://pgbackrest.org/command.html#command-archive-get/category-general/option-process-max) option to `--backup-opts` can help speed up the restore process based upon the amount of CPU you have available.
- A pgBackRest backup can now be deleted with `pgo delete backup`. A backup name must be specified with the `--target` flag. Please refer to the [documentation](https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/disaster-recovery/#deleting-a-backup) for how to use this command.
- pgBadger can now be enabled/disabled during the lifetime of a PostgreSQL cluster using the `pgo update --enable-pgbadger` and `pgo update --disable-pgbadger` flag. This can also be modified directly on a custom resource.
+- Managed PostgreSQL system accounts can now have their credentials set and rotated with `pgo update user` by including the `--set-system-account-password` flag. Suggested by (@srinathganesh).
## Changes
@@ -205,13 +207,16 @@ Passing in the [`--process-max`](https://pgbackrest.org/command.html#command-arc
- The `pgo failover` command now works without specifying a target: the candidate to fail over to will be automatically selected.
- For clusters that have no healthy instances, `pgo failover` can now force a promotion using the `--force` flag. A `--target` flag must also be specified when using `--force`.
- If a predefined custom ConfigMap for a PostgreSQL cluster (`-pgha-config`) is detected at bootstrap time, the Operator will ensure it properly initializes the cluster.
+- Deleting a `pgclusters.crunchydata.com` custom resource will now properly delete a PostgreSQL cluster. If the `pgclusters.crunchydata.com` custom resource has the annotations `keep-backups` or `keep-data`, it will keep the backups or keep the PostgreSQL data directory respectively. Reported by Leo Khomenko (@lkhomenk).
- PostgreSQL JIT compilation is explicitly disabled on new cluster creation. This prevents a memory leak that has been observed on queries coming from the metrics exporter.
- The credentials for the metrics collection user are now available with `pgo show user --show-system-accounts`.
- The default user for executing scheduled SQL policies is now the Postgres superuser, instead of the replication user.
- Add the `--no-prompt` flag to `pgo upgrade`. The mechanism to disable the prompt verification was already in place, but the flag was not exposed. Reported by (@devopsevd).
- Remove certain characters that cause issues in shell environments from consideration when using the random password generator, which is used to create default passwords or with `--rotate-password`.
+- Allow the `--link-map` attribute to be passed as a pgBackRest option, which can help with restoring an existing cluster to a new cluster that adds an external WAL volume.
- Remove the long deprecated `archivestorage` attribute from the `pgclusters.crunchydata.com` custom resource definition. As this attribute is not used at all, this should have no effect.
- The `ArchiveMode` parameter is now removed from the configuration. This had been fully deprecated for a while.
+- Add an explicit size limit of `64Mi` for the `pgBadger` ephemeral storage mount. Additionally, remove the ephemeral storage mount for the `/recover` mount point as that is not used. Reported by Pierre-Marie Petit (@pmpetit).
- New PostgreSQL Operator deployments will now generate ECDSA keys (P-256, SHA384) for use by the API server.
## Fixes
@@ -227,9 +232,12 @@ Passing in the [`--process-max`](https://pgbackrest.org/command.html#command-arc
- Fix syntax in recovery check command which could lead to failures when manually promoting a standby cluster. Reported by (@SockenSalat).
- Fix potential race condition that could lead to a crash in the Operator boot when an error is issued around loading the `pgo-config` ConfigMap. Reported by Aleksander Roszig (@AleksanderRoszig).
- Do not trigger a backup if a standby cluster fails over. Reported by (@aprilito1965).
+- Ensure pgBouncer Secret is created when adding it to a standby cluster.
- Remove legacy `defaultMode` setting on the volume instructions for the pgBackRest repo Secret as the `readOnly` setting is used on the mount itself. Reported by (@szhang1).
- The logger no longer defaults to using a log level of `DEBUG`.
- Autofailover is no longer disabled when an `rmdata` Job is run, enabling a clean database shutdown process when deleting a PostgreSQL cluster.
+- Allow for `Restart` API server permission to be explicitly set. Reported by Aleksander Roszig (@AleksanderRoszig).
- Update `pgo-target` permissions to match expectations for modern Kubernetes versions.
- Major upgrade container now includes references for `pgnodemx`.
- During a major upgrade, ensure permissions are correct on the old data directory before running `pg_upgrade`.
+- The metrics stack installer is fixed to work in environments that may not have connectivity to the Internet ("air gapped"). Reported by (@eliranw).
diff --git a/examples/create-by-resource/fromcrd.json b/examples/create-by-resource/fromcrd.json
index f1f2043f14..d83b9cd810 100644
--- a/examples/create-by-resource/fromcrd.json
+++ b/examples/create-by-resource/fromcrd.json
@@ -10,7 +10,7 @@
"deployment-name": "fromcrd",
"name": "fromcrd",
"pg-cluster": "fromcrd",
- "pgo-version": "4.6.0-beta.2",
+ "pgo-version": "4.6.0-beta.3",
"pgouser": "pgoadmin"
},
"name": "fromcrd",
@@ -45,7 +45,7 @@
"supplementalgroups": ""
},
"ccpimage": "crunchy-postgres-ha",
- "ccpimagetag": "centos8-13.1-4.6.0-beta.2",
+ "ccpimagetag": "centos8-13.1-4.6.0-beta.3",
"clustername": "fromcrd",
"database": "userdb",
"exporterport": "9187",
@@ -60,7 +60,7 @@
"port": "5432",
"user": "testuser",
"userlabels": {
- "pgo-version": "4.6.0-beta.2"
+ "pgo-version": "4.6.0-beta.3"
}
}
}
diff --git a/examples/envs.sh b/examples/envs.sh
index 37fa831db8..ebcd9798a0 100644
--- a/examples/envs.sh
+++ b/examples/envs.sh
@@ -20,7 +20,7 @@ export PGO_CONF_DIR=$PGOROOT/installers/ansible/roles/pgo-operator/files
# the version of the Operator you run is set by these vars
export PGO_IMAGE_PREFIX=registry.developers.crunchydata.com/crunchydata
export PGO_BASEOS=centos8
-export PGO_VERSION=4.6.0-beta.2
+export PGO_VERSION=4.6.0-beta.3
export PGO_IMAGE_TAG=$PGO_BASEOS-$PGO_VERSION
# for setting the pgo apiserver port, disabling TLS or not verifying TLS
diff --git a/examples/helm/README.md b/examples/helm/README.md
index 81fbb97fe8..04ab5211aa 100644
--- a/examples/helm/README.md
+++ b/examples/helm/README.md
@@ -64,7 +64,7 @@ The following values can also be set:
- `ha`: Whether or not to deploy a high availability PostgreSQL cluster. Can be either `true` or `false`, defaults to `false`.
- `imagePrefix`: The prefix of the container images to use for this PostgreSQL cluster. Default to `registry.developers.crunchydata.com/crunchydata`.
- `image`: The name of the container image to use for the PostgreSQL cluster. Defaults to `crunchy-postgres-ha`.
-- `imageTag`: The container image tag to use. Defaults to `centos8-13.1-4.6.0-beta.2`.
+- `imageTag`: The container image tag to use. Defaults to `centos8-13.1-4.6.0-beta.3`.
- `memory`: The memory limit for the PostgreSQL cluster. Follows standard Kubernetes formatting.
- `monitoring`: Whether or not to enable monitoring / metrics collection for this PostgreSQL instance. Can either be `true` or `false`, defaults to `false`.
diff --git a/examples/helm/postgres/Chart.yaml b/examples/helm/postgres/Chart.yaml
index d2c7a63902..7a6b0a2287 100644
--- a/examples/helm/postgres/Chart.yaml
+++ b/examples/helm/postgres/Chart.yaml
@@ -20,4 +20,4 @@ version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
-appVersion: 4.6.0-beta.2
+appVersion: 4.6.0-beta.3
diff --git a/examples/helm/postgres/templates/pgcluster.yaml b/examples/helm/postgres/templates/pgcluster.yaml
index 563d7246de..886d6d43d5 100644
--- a/examples/helm/postgres/templates/pgcluster.yaml
+++ b/examples/helm/postgres/templates/pgcluster.yaml
@@ -28,7 +28,7 @@ spec:
storagetype: dynamic
ccpimage: {{ .Values.image | default "crunchy-postgres-ha" | quote }}
ccpimageprefix: {{ .Values.imagePrefix | default "registry.developers.crunchydata.com/crunchydata" | quote }}
- ccpimagetag: {{ .Values.imageTag | default "centos8-13.1-4.6.0-beta.2" | quote }}
+ ccpimagetag: {{ .Values.imageTag | default "centos8-13.1-4.6.0-beta.3" | quote }}
clustername: {{ .Values.name | quote }}
database: {{ .Values.name | quote }}
{{- if .Values.monitoring }}
diff --git a/examples/helm/postgres/values.yaml b/examples/helm/postgres/values.yaml
index 60b4b3e53b..b1a541dbb3 100644
--- a/examples/helm/postgres/values.yaml
+++ b/examples/helm/postgres/values.yaml
@@ -10,5 +10,5 @@ password: W4tch0ut4hippo$
# ha: true
# imagePrefix: registry.developers.crunchydata.com/crunchydata
# image: crunchy-postgres-ha
-# imageTag: centos8-13.1-4.6.0-beta.2
+# imageTag: centos8-13.1-4.6.0-beta.3
# memory: 1Gi
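For reference, the commented defaults in the hunk above can be uncommented to pin the bumped tag explicitly; an illustrative `values.yaml` override (the `hippo` name and password are placeholders taken from the chart's sample values, not required settings) might look like:

```yaml
# examples/helm/postgres/values.yaml -- illustrative override
name: hippo
password: W4tch0ut4hippo$
ha: true
imageTag: centos8-13.1-4.6.0-beta.3   # matches the new default in this patch
memory: 1Gi
```

Pinning `imageTag` in values, rather than relying on the template default, keeps a chart upgrade from silently moving clusters to a new container image.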
diff --git a/examples/kustomize/createcluster/README.md b/examples/kustomize/createcluster/README.md
index 8cdd77ab04..ba466f62eb 100644
--- a/examples/kustomize/createcluster/README.md
+++ b/examples/kustomize/createcluster/README.md
@@ -44,13 +44,13 @@ pgo show cluster hippo -n pgo
```
You will see something like this if successful:
```
-cluster : hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.2)
+cluster : hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.3)
pod : hippo-8fb6bd96-j87wq (Running) on gke-xxxx-default-pool-38e946bd-257w (1/1) (primary)
pvc: hippo (1Gi)
deployment : hippo
deployment : hippo-backrest-shared-repo
service : hippo - ClusterIP (10.0.56.86) - Ports (2022/TCP, 5432/TCP)
- labels : pgo-version=4.6.0-beta.2 name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata
+ labels : pgo-version=4.6.0-beta.3 name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata
```
Feel free to run other `pgo` CLI commands on the hippo cluster.
@@ -79,7 +79,7 @@ pgo show cluster dev-hippo -n pgo
```
You will see something like this if successful:
```
-cluster : dev-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.2)
+cluster : dev-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.3)
pod : dev-hippo-588d4cb746-bwrxb (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
pvc: dev-hippo (1Gi)
deployment : dev-hippo
@@ -87,7 +87,7 @@ cluster : dev-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.2)
deployment : dev-hippo-pgbouncer
service : dev-hippo - ClusterIP (10.0.62.87) - Ports (2022/TCP, 5432/TCP)
service : dev-hippo-pgbouncer - ClusterIP (10.0.48.120) - Ports (5432/TCP)
- labels : crunchy-pgha-scope=dev-hippo name=dev-hippo pg-cluster=dev-hippo vendor=crunchydata deployment-name=dev-hippo environment=development pgo-version=4.6.0-beta.2 pgouser=admin
+ labels : crunchy-pgha-scope=dev-hippo name=dev-hippo pg-cluster=dev-hippo vendor=crunchydata deployment-name=dev-hippo environment=development pgo-version=4.6.0-beta.3 pgouser=admin
```
#### staging
The staging overlay will deploy a Crunchy PostgreSQL cluster with 2 replicas, with annotations added
@@ -113,7 +113,7 @@ pgo show cluster staging-hippo -n pgo
```
You will see something like this if successful (notice one of the replicas is a different size):
```
-cluster : staging-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.2)
+cluster : staging-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.3)
pod : staging-hippo-85cf6dcb65-9h748 (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
pvc: staging-hippo (1Gi)
pod : staging-hippo-lnxw-cf47d8c8b-6r4wn (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (replica)
@@ -128,7 +128,7 @@ cluster : staging-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.2)
service : staging-hippo-replica - ClusterIP (10.0.56.57) - Ports (2022/TCP, 5432/TCP)
pgreplica : staging-hippo-lnxw
pgreplica : staging-hippo-rpl1
- labels : deployment-name=staging-hippo environment=staging name=staging-hippo crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-version=4.6.0-beta.2 pgouser=admin vendor=crunchydata
+ labels : deployment-name=staging-hippo environment=staging name=staging-hippo crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-version=4.6.0-beta.3 pgouser=admin vendor=crunchydata
```
#### production
@@ -154,7 +154,7 @@ pgo show cluster prod-hippo -n pgo
```
You will see something like this if successful (notice one of the replicas is a different size):
```
-cluster : prod-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.2)
+cluster : prod-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.3)
pod : prod-hippo-5d6dd46497-rr67c (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (primary)
pvc: prod-hippo (1Gi)
pod : prod-hippo-flty-84d97c8769-2pzbh (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (replica)
@@ -165,7 +165,7 @@ cluster : prod-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.2)
service : prod-hippo - ClusterIP (10.0.56.18) - Ports (2022/TCP, 5432/TCP)
service : prod-hippo-replica - ClusterIP (10.0.56.101) - Ports (2022/TCP, 5432/TCP)
pgreplica : prod-hippo-flty
- labels : pgo-version=4.6.0-beta.2 deployment-name=prod-hippo environment=production pg-cluster=prod-hippo crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
+ labels : pgo-version=4.6.0-beta.3 deployment-name=prod-hippo environment=production pg-cluster=prod-hippo crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
```
### Delete the clusters
To delete the clusters, run the following `pgo` CLI commands:
diff --git a/examples/kustomize/createcluster/base/pgcluster.yaml b/examples/kustomize/createcluster/base/pgcluster.yaml
index f89650796b..00dbc793a7 100644
--- a/examples/kustomize/createcluster/base/pgcluster.yaml
+++ b/examples/kustomize/createcluster/base/pgcluster.yaml
@@ -10,7 +10,7 @@ metadata:
deployment-name: hippo
name: hippo
pg-cluster: hippo
- pgo-version: 4.6.0-beta.2
+ pgo-version: 4.6.0-beta.3
pgouser: admin
name: hippo
namespace: pgo
@@ -42,7 +42,7 @@ spec:
annotations: {}
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
- ccpimagetag: centos8-13.1-4.6.0-beta.2
+ ccpimagetag: centos8-13.1-4.6.0-beta.3
clustername: hippo
customconfig: ""
database: hippo
@@ -63,4 +63,4 @@ spec:
port: "5432"
user: hippo
userlabels:
- pgo-version: 4.6.0-beta.2
+ pgo-version: 4.6.0-beta.3
diff --git a/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
index 359350bb6f..a9fcb3a2bf 100644
--- a/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
+++ b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
@@ -20,4 +20,4 @@ spec:
storagetype: dynamic
supplementalgroups: ""
userlabels:
- pgo-version: 4.6.0-beta.2
+ pgo-version: 4.6.0-beta.3
diff --git a/installers/ansible/README.md b/installers/ansible/README.md
index 4037651766..88b151a5a3 100644
--- a/installers/ansible/README.md
+++ b/installers/ansible/README.md
@@ -4,7 +4,7 @@
-Latest Release: 4.6.0-beta.2
+Latest Release: 4.6.0-beta.3
## General
diff --git a/installers/ansible/values.yaml b/installers/ansible/values.yaml
index bc6a9a7048..ad8942500f 100644
--- a/installers/ansible/values.yaml
+++ b/installers/ansible/values.yaml
@@ -17,7 +17,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos8-13.1-4.6.0-beta.2"
+ccp_image_tag: "centos8-13.1-4.6.0-beta.3"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -50,14 +50,14 @@ pgo_apiserver_url: "https://postgres-operator"
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
-pgo_client_version: "4.6.0-beta.2"
+pgo_client_version: "4.6.0-beta.3"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
-pgo_image_tag: "centos8-4.6.0-beta.2"
+pgo_image_tag: "centos8-4.6.0-beta.3"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
diff --git a/installers/gcp-marketplace/Makefile b/installers/gcp-marketplace/Makefile
index c344f572e8..d25a30c579 100644
--- a/installers/gcp-marketplace/Makefile
+++ b/installers/gcp-marketplace/Makefile
@@ -6,7 +6,7 @@ MARKETPLACE_TOOLS ?= gcr.io/cloud-marketplace-tools/k8s/dev:$(MARKETPLACE_VERSIO
MARKETPLACE_VERSION ?= 0.9.4
KUBECONFIG ?= $(HOME)/.kube/config
PARAMETERS ?= {}
-PGO_VERSION ?= 4.6.0-beta.2
+PGO_VERSION ?= 4.6.0-beta.3
IMAGE_BUILD_ARGS = --build-arg MARKETPLACE_VERSION='$(MARKETPLACE_VERSION)' \
--build-arg PGO_VERSION='$(PGO_VERSION)'
diff --git a/installers/gcp-marketplace/README.md b/installers/gcp-marketplace/README.md
index 4d204df786..b96a602f73 100644
--- a/installers/gcp-marketplace/README.md
+++ b/installers/gcp-marketplace/README.md
@@ -59,7 +59,7 @@ Google Cloud Marketplace.
```shell
IMAGE_REPOSITORY=gcr.io/crunchydata-public/postgres-operator
- export PGO_VERSION=4.6.0-beta.2
+ export PGO_VERSION=4.6.0-beta.3
export INSTALLER_IMAGE=${IMAGE_REPOSITORY}/deployer:${PGO_VERSION}
export OPERATOR_IMAGE=${IMAGE_REPOSITORY}:${PGO_VERSION}
export OPERATOR_IMAGE_API=${IMAGE_REPOSITORY}/pgo-apiserver:${PGO_VERSION}
diff --git a/installers/gcp-marketplace/values.yaml b/installers/gcp-marketplace/values.yaml
index 7bd7a6c903..762bc7e951 100644
--- a/installers/gcp-marketplace/values.yaml
+++ b/installers/gcp-marketplace/values.yaml
@@ -10,7 +10,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos8-13.1-4.6.0-beta.2"
+ccp_image_tag: "centos8-13.1-4.6.0-beta.3"
create_rbac: "true"
db_name: ""
db_password_age_days: "0"
@@ -32,9 +32,9 @@ pgo_admin_role_name: "pgoadmin"
pgo_admin_username: "admin"
pgo_client_container_install: "false"
pgo_client_install: 'false'
-pgo_client_version: "4.6.0-beta.2"
+pgo_client_version: "4.6.0-beta.3"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos8-4.6.0-beta.2"
+pgo_image_tag: "centos8-4.6.0-beta.3"
pgo_installation_name: '${OPERATOR_NAME}'
pgo_operator_namespace: '${OPERATOR_NAMESPACE}'
scheduler_timeout: "3600"
diff --git a/installers/helm/Chart.yaml b/installers/helm/Chart.yaml
index 3f03cf2db0..357f56a686 100644
--- a/installers/helm/Chart.yaml
+++ b/installers/helm/Chart.yaml
@@ -3,7 +3,7 @@ name: postgres-operator
description: Crunchy PostgreSQL Operator Helm chart for Kubernetes
type: application
version: 0.1.0
-appVersion: 4.6.0-beta.2
+appVersion: 4.6.0-beta.3
home: https://github.com/CrunchyData/postgres-operator
icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png
keywords:
diff --git a/installers/helm/values.yaml b/installers/helm/values.yaml
index bb5ce533cf..ac0c551af2 100644
--- a/installers/helm/values.yaml
+++ b/installers/helm/values.yaml
@@ -37,7 +37,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos8-13.1-4.6.0-beta.2"
+ccp_image_tag: "centos8-13.1-4.6.0-beta.3"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -70,14 +70,14 @@ pgo_apiserver_url: "https://postgres-operator"
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
-pgo_client_version: "4.6.0-beta.2"
+pgo_client_version: "4.6.0-beta.3"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
-pgo_image_tag: "centos8-4.6.0-beta.2"
+pgo_image_tag: "centos8-4.6.0-beta.3"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
diff --git a/installers/kubectl/client-setup.sh b/installers/kubectl/client-setup.sh
index e225efa34a..7306dffe42 100755
--- a/installers/kubectl/client-setup.sh
+++ b/installers/kubectl/client-setup.sh
@@ -14,7 +14,7 @@
# This script should be run after the operator has been deployed
PGO_OPERATOR_NAMESPACE="${PGO_OPERATOR_NAMESPACE:-pgo}"
PGO_USER_ADMIN="${PGO_USER_ADMIN:-pgouser-admin}"
-PGO_CLIENT_VERSION="${PGO_CLIENT_VERSION:-v4.6.0-beta.2}"
+PGO_CLIENT_VERSION="${PGO_CLIENT_VERSION:-v4.6.0-beta.3}"
PGO_CLIENT_URL="https://github.com/CrunchyData/postgres-operator/releases/download/${PGO_CLIENT_VERSION}"
PGO_CMD="${PGO_CMD-kubectl}"
diff --git a/installers/kubectl/postgres-operator-ocp311.yml b/installers/kubectl/postgres-operator-ocp311.yml
index 0baa77037d..5436f123da 100644
--- a/installers/kubectl/postgres-operator-ocp311.yml
+++ b/installers/kubectl/postgres-operator-ocp311.yml
@@ -44,7 +44,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos8-13.1-4.6.0-beta.2"
+ ccp_image_tag: "centos8-13.1-4.6.0-beta.3"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -77,14 +77,14 @@ data:
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
- pgo_client_version: "4.6.0-beta.2"
+ pgo_client_version: "4.6.0-beta.3"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
- pgo_image_tag: "centos8-4.6.0-beta.2"
+ pgo_image_tag: "centos8-4.6.0-beta.3"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
@@ -161,7 +161,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.2
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.3
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/kubectl/postgres-operator.yml b/installers/kubectl/postgres-operator.yml
index 3b0764f057..85ddb15e10 100644
--- a/installers/kubectl/postgres-operator.yml
+++ b/installers/kubectl/postgres-operator.yml
@@ -139,7 +139,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos8-13.1-4.6.0-beta.2"
+ ccp_image_tag: "centos8-13.1-4.6.0-beta.3"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -172,14 +172,14 @@ data:
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
- pgo_client_version: "4.6.0-beta.2"
+ pgo_client_version: "4.6.0-beta.3"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
- pgo_image_tag: "centos8-4.6.0-beta.2"
+ pgo_image_tag: "centos8-4.6.0-beta.3"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
@@ -269,7 +269,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.2
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.3
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/metrics/ansible/README.md b/installers/metrics/ansible/README.md
index 41f7d7690d..9a93e81f72 100644
--- a/installers/metrics/ansible/README.md
+++ b/installers/metrics/ansible/README.md
@@ -4,7 +4,7 @@
-Latest Release: 4.6.0-beta.2
+Latest Release: 4.6.0-beta.3
## General
diff --git a/installers/metrics/helm/Chart.yaml b/installers/metrics/helm/Chart.yaml
index 5a8debf064..8b9828aadf 100644
--- a/installers/metrics/helm/Chart.yaml
+++ b/installers/metrics/helm/Chart.yaml
@@ -3,6 +3,6 @@ name: postgres-operator-monitoring
description: Install for Crunchy PostgreSQL Operator Monitoring
type: application
version: 0.1.0
-appVersion: 4.6.0-beta.2
+appVersion: 4.6.0-beta.3
home: https://github.com/CrunchyData/postgres-operator
icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png
\ No newline at end of file
diff --git a/installers/metrics/helm/helm_template.yaml b/installers/metrics/helm/helm_template.yaml
index 3f89c0d2b5..135a132b19 100644
--- a/installers/metrics/helm/helm_template.yaml
+++ b/installers/metrics/helm/helm_template.yaml
@@ -20,5 +20,5 @@ serviceAccount:
# the image prefix and tag to use for the 'pgo-deployer' container
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos8-4.6.0-beta.2"
+pgo_image_tag: "centos8-4.6.0-beta.3"
diff --git a/installers/metrics/helm/values.yaml b/installers/metrics/helm/values.yaml
index b7f79ed8de..0c5863629a 100644
--- a/installers/metrics/helm/values.yaml
+++ b/installers/metrics/helm/values.yaml
@@ -20,7 +20,7 @@ serviceAccount:
# the image prefix and tag to use for the 'pgo-deployer' container
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos8-4.6.0-beta.2"
+pgo_image_tag: "centos8-4.6.0-beta.3"
# =====================
# Configuration Options
diff --git a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml b/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
index b449881000..c4d13f9b1c 100644
--- a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
+++ b/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
@@ -96,7 +96,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-metrics-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.2
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.3
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/metrics/kubectl/postgres-operator-metrics.yml b/installers/metrics/kubectl/postgres-operator-metrics.yml
index dea5d62e58..0bd8dee095 100644
--- a/installers/metrics/kubectl/postgres-operator-metrics.yml
+++ b/installers/metrics/kubectl/postgres-operator-metrics.yml
@@ -165,7 +165,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-metrics-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.2
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.3
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/olm/Makefile b/installers/olm/Makefile
index d33f040de4..df7dd962c5 100644
--- a/installers/olm/Makefile
+++ b/installers/olm/Makefile
@@ -11,7 +11,7 @@ OLM_TOOLS ?= registry.localhost:5000/postgres-operator-olm-tools:$(OLM_SDK_VERSI
OLM_VERSION ?= 0.15.1
PGO_BASEOS ?= centos8
PGO_IMAGE_PREFIX ?= registry.developers.crunchydata.com/crunchydata
-PGO_VERSION ?= 4.6.0-beta.2
+PGO_VERSION ?= 4.6.0-beta.3
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
CCP_IMAGE_TAG ?= $(PGO_BASEOS)-$(CCP_PG_FULLVERSION)-$(PGO_VERSION)
CCP_POSTGIS_IMAGE_TAG ?= $(PGO_BASEOS)-$(CCP_PG_FULLVERSION)-$(CCP_POSTGIS_VERSION)-$(PGO_VERSION)
diff --git a/pkg/apis/crunchydata.com/v1/doc.go b/pkg/apis/crunchydata.com/v1/doc.go
index 4c92f8a983..eb6528dfd3 100644
--- a/pkg/apis/crunchydata.com/v1/doc.go
+++ b/pkg/apis/crunchydata.com/v1/doc.go
@@ -53,7 +53,7 @@ cluster.
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.6.0-beta.2",
+ '{"ClientVersion":"4.6.0-beta.3",
"Namespace":"pgouser1",
"Name":"mycluster",
$PGO_APISERVER_URL/clusters
@@ -72,7 +72,7 @@ show all of the clusters that are in the given namespace.
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.6.0-beta.2",
+ '{"ClientVersion":"4.6.0-beta.3",
"Namespace":"pgouser1",
"Clustername":"mycluster"}' \
$PGO_APISERVER_URL/showclusters
@@ -82,7 +82,7 @@ $PGO_APISERVER_URL/showclusters
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.6.0-beta.2",
+ '{"ClientVersion":"4.6.0-beta.3",
"Namespace":"pgouser1",
"Clustername":"mycluster"}' \
$PGO_APISERVER_URL/clustersdelete
@@ -90,7 +90,7 @@ $PGO_APISERVER_URL/clustersdelete
Schemes: http, https
BasePath: /
- Version: 4.6.0-beta.2
+ Version: 4.6.0-beta.3
License: Apache 2.0 http://www.apache.org/licenses/LICENSE-2.0
Contact: Crunchy Data https://www.crunchydata.com/
diff --git a/pkg/apiservermsgs/common.go b/pkg/apiservermsgs/common.go
index b2fabc8e0d..5cef464f5f 100644
--- a/pkg/apiservermsgs/common.go
+++ b/pkg/apiservermsgs/common.go
@@ -15,7 +15,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
-const PGO_VERSION = "4.6.0-beta.2"
+const PGO_VERSION = "4.6.0-beta.3"
// Ok status
const Ok = "ok"
diff --git a/redhat/atomic/help.1 b/redhat/atomic/help.1
index afb2703167..2f1de01426 100644
--- a/redhat/atomic/help.1
+++ b/redhat/atomic/help.1
@@ -56,4 +56,4 @@ The Red Hat Enterprise Linux version from which the container was built. For exa
\fB\fCRelease=\fR
.PP
-The specific release number of the container. For example, Release="4.6.0-beta.2"
+The specific release number of the container. For example, Release="4.6.0-beta.3"
diff --git a/redhat/atomic/help.md b/redhat/atomic/help.md
index a18c5a8293..7101976e61 100644
--- a/redhat/atomic/help.md
+++ b/redhat/atomic/help.md
@@ -45,4 +45,4 @@ The Red Hat Enterprise Linux version from which the container was built. For exa
`Release=`
-The specific release number of the container. For example, Release="4.6.0-beta.2"
+The specific release number of the container. For example, Release="4.6.0-beta.3"
From 9291d889761743e69690bafd1b8cd94de50f59cc Mon Sep 17 00:00:00 2001
From: Joseph Mckulka <16840147+jmckulk@users.noreply.github.com>
Date: Thu, 14 Jan 2021 18:27:41 -0500
Subject: [PATCH 149/276] Update perms on deployer working directory
The working directory for pgo-deployer (/tmp/.pgo) is now created at build time
instead of install time. When building, the directory is created with root:root
as user:group, meaning the installer does not have permission to create the
required subdirectories. This change recursively updates the user and group on
/tmp/.pgo so that the installer can make changes.
---
build/pgo-deployer/Dockerfile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/build/pgo-deployer/Dockerfile b/build/pgo-deployer/Dockerfile
index 2f18411334..79c3516d2c 100644
--- a/build/pgo-deployer/Dockerfile
+++ b/build/pgo-deployer/Dockerfile
@@ -79,7 +79,7 @@ ENV HOME="/tmp"
RUN chmod g=u /etc/passwd
RUN chmod g=u /uid_daemon.sh
-RUN chown -R 2:2 /tmp/.pgo/metrics
+RUN chown -R 2:2 /tmp/.pgo
ENTRYPOINT ["/uid_daemon.sh"]
From d07b899a504c7e7fe1ecf5634564671831eb5ea1 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 17 Jan 2021 10:40:22 -0500
Subject: [PATCH 150/276] Remove references to `--autofail` flag
This flag has been gone for a while. References are replaced with
`--enable-autofail` and `--disable-autofail`.
Issue: #2214
---
cmd/pgo/cmd/update.go | 6 +++---
docs/content/architecture/disaster-recovery.md | 2 +-
docs/content/pgo-client/common-tasks.md | 2 +-
docs/content/pgo-client/reference/pgo_update.md | 4 ++--
docs/content/pgo-client/reference/pgo_update_cluster.md | 2 +-
internal/apiserver/clusterservice/clusterimpl.go | 2 +-
internal/apiserver/clusterservice/clusterservice.go | 4 ++--
7 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/cmd/pgo/cmd/update.go b/cmd/pgo/cmd/update.go
index d16efba880..87df8d5150 100644
--- a/cmd/pgo/cmd/update.go
+++ b/cmd/pgo/cmd/update.go
@@ -196,8 +196,8 @@ var UpdateCmd = &cobra.Command{
Short: "Update a pgouser, pgorole, or cluster",
Long: `The update command allows you to update a pgouser, pgorole, or cluster. For example:
- pgo update cluster --selector=name=mycluster --autofail=false
- pgo update cluster --all --autofail=true
+ pgo update cluster --selector=name=mycluster --disable-autofail
+ pgo update cluster --all --enable-autofail
pgo update namespace mynamespace
pgo update pgbouncer mycluster --rotate-password
pgo update pgorole somerole --pgorole-permission="Cat"
@@ -240,7 +240,7 @@ var UpdateClusterCmd = &cobra.Command{
Short: "Update a PostgreSQL cluster",
Long: `Update a PostgreSQL cluster. For example:
- pgo update cluster mycluster --autofail=false
+ pgo update cluster mycluster --disable-autofail
pgo update cluster mycluster myothercluster --disable-autofail
pgo update cluster --selector=name=mycluster --disable-autofail
pgo update cluster --all --enable-autofail`,
diff --git a/docs/content/architecture/disaster-recovery.md b/docs/content/architecture/disaster-recovery.md
index e49a1be579..eb2947b702 100644
--- a/docs/content/architecture/disaster-recovery.md
+++ b/docs/content/architecture/disaster-recovery.md
@@ -195,7 +195,7 @@ to re-enable autofail if you would like your PostgreSQL cluster to be
highly-available. You can re-enable autofail with this command:
```shell
-pgo update cluster hacluster --autofail=true
+pgo update cluster hacluster --enable-autofail
```
## Scheduling Backups
diff --git a/docs/content/pgo-client/common-tasks.md b/docs/content/pgo-client/common-tasks.md
index ae0129a86d..5bb672d6eb 100644
--- a/docs/content/pgo-client/common-tasks.md
+++ b/docs/content/pgo-client/common-tasks.md
@@ -697,7 +697,7 @@ high availability on the PostgreSQL cluster manually. You can re-enable high
availability by executing the following command:
```
-pgo update cluster hacluster --autofail=true
+pgo update cluster hacluster --enable-autofail
```
### Logical Backups (`pg_dump` / `pg_dumpall`)
diff --git a/docs/content/pgo-client/reference/pgo_update.md b/docs/content/pgo-client/reference/pgo_update.md
index 3045234668..d54fe5dcce 100644
--- a/docs/content/pgo-client/reference/pgo_update.md
+++ b/docs/content/pgo-client/reference/pgo_update.md
@@ -9,8 +9,8 @@ Update a pgouser, pgorole, or cluster
The update command allows you to update a pgouser, pgorole, or cluster. For example:
- pgo update cluster --selector=name=mycluster --autofail=false
- pgo update cluster --all --autofail=true
+ pgo update cluster --selector=name=mycluster --disable-autofail
+ pgo update cluster --all --enable-autofail
pgo update namespace mynamespace
pgo update pgbouncer mycluster --rotate-password
pgo update pgorole somerole --pgorole-permission="Cat"
diff --git a/docs/content/pgo-client/reference/pgo_update_cluster.md b/docs/content/pgo-client/reference/pgo_update_cluster.md
index 78f1a01ff8..ba9b04e683 100644
--- a/docs/content/pgo-client/reference/pgo_update_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_update_cluster.md
@@ -9,7 +9,7 @@ Update a PostgreSQL cluster
Update a PostgreSQL cluster. For example:
- pgo update cluster mycluster --autofail=false
+ pgo update cluster mycluster --disable-autofail
pgo update cluster mycluster myothercluster --disable-autofail
pgo update cluster --selector=name=mycluster --disable-autofail
pgo update cluster --all --enable-autofail
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 3dfad2b092..9219afcfa9 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -1867,7 +1867,7 @@ func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterRespons
for i := range clusterList.Items {
cluster := clusterList.Items[i]
- // set autofail=true or false on each pgcluster CRD
+ // set --enable-autofail / --disable-autofail on each pgcluster CRD
// Make the change based on the value of Autofail vis-a-vis UpdateClusterAutofailStatus
switch request.Autofail {
case msgs.UpdateClusterAutofailEnable:
diff --git a/internal/apiserver/clusterservice/clusterservice.go b/internal/apiserver/clusterservice/clusterservice.go
index c26e91cfa4..197a7c1315 100644
--- a/internal/apiserver/clusterservice/clusterservice.go
+++ b/internal/apiserver/clusterservice/clusterservice.go
@@ -303,8 +303,8 @@ func TestClusterHandler(w http.ResponseWriter, r *http.Request) {
}
// UpdateClusterHandler ...
-// pgo update cluster mycluster --autofail=true
-// pgo update cluster --selector=env=research --autofail=false
+// pgo update cluster mycluster --enable-autofail
+// pgo update cluster --selector=env=research --disable-autofail
// returns a UpdateClusterResponse
func UpdateClusterHandler(w http.ResponseWriter, r *http.Request) {
// swagger:operation POST /clustersupdate clusterservice clustersupdate
From 3fe830f97c84032c5c5649729202e93a399f3e27 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 17 Jan 2021 10:52:55 -0500
Subject: [PATCH 151/276] Adjust replica creation logic as part of a standby
cluster
This ensures an explicit call to initialize replica creation for a
standby cluster after receiving a notification that a stanza for the
cluster was successfully created.
Co-authored-by: Andrew L'Ecuyer
---
internal/controller/job/backresthandler.go | 3 +++
1 file changed, 3 insertions(+)
diff --git a/internal/controller/job/backresthandler.go b/internal/controller/job/backresthandler.go
index 2359fa11c0..43f6173311 100644
--- a/internal/controller/job/backresthandler.go
+++ b/internal/controller/job/backresthandler.go
@@ -139,6 +139,9 @@ func (c *Controller) handleBackrestStanzaCreateUpdate(job *apiv1.Job) error {
log.Debugf("job Controller: standby cluster %s will now be set to an initialized "+
"status", clusterName)
_ = controller.SetClusterInitializedStatus(c.Client, clusterName, namespace)
+
+ // now initialize the creation of any replica
+ _ = controller.InitializeReplicaCreation(c.Client, clusterName, namespace)
return nil
}
From acff4bcff8c02483099488d748a2c9a0e5429341 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 17 Jan 2021 16:43:19 -0500
Subject: [PATCH 152/276] Bump Helm chart versions
This should be handled on each release, regardless of the changes
to the chart.
Issue: #2213
---
installers/helm/Chart.yaml | 4 ++--
installers/metrics/helm/Chart.yaml | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/installers/helm/Chart.yaml b/installers/helm/Chart.yaml
index 357f56a686..9224ba438b 100644
--- a/installers/helm/Chart.yaml
+++ b/installers/helm/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v2
name: postgres-operator
description: Crunchy PostgreSQL Operator Helm chart for Kubernetes
type: application
-version: 0.1.0
+version: 0.2.0
appVersion: 4.6.0-beta.3
home: https://github.com/CrunchyData/postgres-operator
icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png
@@ -13,4 +13,4 @@ keywords:
- Postgres
- SQL
- NoSQL
- - RDBMS
\ No newline at end of file
+ - RDBMS
diff --git a/installers/metrics/helm/Chart.yaml b/installers/metrics/helm/Chart.yaml
index 8b9828aadf..e954f3e465 100644
--- a/installers/metrics/helm/Chart.yaml
+++ b/installers/metrics/helm/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v2
name: postgres-operator-monitoring
description: Install for Crunchy PostgreSQL Operator Monitoring
type: application
-version: 0.1.0
+version: 0.2.0
appVersion: 4.6.0-beta.3
home: https://github.com/CrunchyData/postgres-operator
-icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png
\ No newline at end of file
+icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png
From 5c56a341daf63c7336a9e1a8391b727f65e40442 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Sun, 17 Jan 2021 21:44:41 -0500
Subject: [PATCH 153/276] Update label validation to match Kubernetes rules
The validation rules for certain kinds of keys are now covered,
and values are properly validated against what Kubernetes expects.
Issue: [ch10197]
Issue: #2215
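The heuristic this patch enforces can be sketched outside the operator. The following standalone Go program approximates the same checks using regular expressions equivalent to the DNS-1123 rules in `k8s.io/apimachinery` (the names here are illustrative; the real code calls `validation.IsDNS1123Label`, `validation.IsDNS1123Subdomain`, and `validation.IsValidLabelValue`, and additionally enforces the 63/253-character length limits omitted in this sketch):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Regular expressions approximating the Kubernetes DNS-1123 rules;
// the length limits (63 chars for labels, 253 for subdomains) are
// omitted for brevity.
var (
	dns1123Label     = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)
	dns1123Subdomain = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)
	labelValue       = regexp.MustCompile(`^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$`)
)

// validateLabels parses a "key=value,key2=value2" string, allowing an
// optional "prefix/" on each key, mirroring the patch's heuristic.
func validateLabels(s string) (map[string]string, error) {
	labels := map[string]string{}

	for _, kv := range strings.Split(s, ",") {
		pair := strings.Split(kv, "=")
		if len(pair) != 2 {
			return nil, fmt.Errorf("label format incorrect, requires key=value: %q", kv)
		}

		// the key is either "name" or "prefix/name"
		keyParts := strings.Split(pair[0], "/")
		switch len(keyParts) {
		case 1:
			if !dns1123Label.MatchString(keyParts[0]) {
				return nil, fmt.Errorf("invalid key %q", pair[0])
			}
		case 2:
			if !dns1123Subdomain.MatchString(keyParts[0]) || !dns1123Label.MatchString(keyParts[1]) {
				return nil, fmt.Errorf("invalid key %q", pair[0])
			}
		default:
			return nil, fmt.Errorf("invalid key %q", pair[0])
		}

		if !labelValue.MatchString(pair[1]) {
			return nil, fmt.Errorf("invalid value %q", pair[1])
		}
		labels[pair[0]] = pair[1]
	}

	return labels, nil
}

func main() {
	m, err := validateLabels("example.com/env=prod,tier=db")
	fmt.Println(m, err)
}
```

A prefixed key such as `example.com/env=prod` passes, while a key containing `@` is rejected, matching the kinds of inputs the patch's new validation covers.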
---
internal/apiserver/labelservice/labelimpl.go | 58 +++++++++++++++-----
1 file changed, 43 insertions(+), 15 deletions(-)
diff --git a/internal/apiserver/labelservice/labelimpl.go b/internal/apiserver/labelservice/labelimpl.go
index 2fe6883074..062a12323f 100644
--- a/internal/apiserver/labelservice/labelimpl.go
+++ b/internal/apiserver/labelservice/labelimpl.go
@@ -17,7 +17,7 @@ limitations under the License.
import (
"context"
- "errors"
+ "fmt"
"strings"
"github.com/crunchydata/postgres-operator/internal/apiserver"
@@ -53,7 +53,7 @@ func Label(request *msgs.LabelRequest, ns, pgouser string) msgs.LabelResponse {
labelsMap, err = validateLabel(request.LabelCmdLabel)
if err != nil {
resp.Status.Code = msgs.Error
- resp.Status.Msg = "labels not formatted correctly"
+ resp.Status.Msg = err.Error()
return resp
}
@@ -181,29 +181,57 @@ func addLabels(items []crv1.Pgcluster, DryRun bool, LabelCmdLabel string, newLab
}
}
+// validateLabel validates if the input is a valid Kubernetes label
+//
+// A label is composed of a key and value.
+//
+// The key can either be a name or have an optional prefix that is
+// terminated by a "/", e.g. "prefix/name"
+//
+// The name must be a valid DNS 1123 label
+// The prefix must be a valid DNS 1123 subdomain
+//
+// The value can be validated by machinery provided by Kubernetes
+//
+// Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
func validateLabel(LabelCmdLabel string) (map[string]string, error) {
- var err error
- labelMap := make(map[string]string)
- userValues := strings.Split(LabelCmdLabel, ",")
- for _, v := range userValues {
+ labelMap := map[string]string{}
+
+ for _, v := range strings.Split(LabelCmdLabel, ",") {
pair := strings.Split(v, "=")
if len(pair) != 2 {
- log.Error("label format incorrect, requires name=value")
- return labelMap, errors.New("label format incorrect, requires name=value")
+ return labelMap, fmt.Errorf("label format incorrect, requires key=value")
}
- errs := validation.IsDNS1035Label(pair[0])
- if len(errs) > 0 {
- return labelMap, errors.New("label format incorrect, requires name=value " + errs[0])
+ // first handle the key
+ keyParts := strings.Split(pair[0], "/")
+
+ switch len(keyParts) {
+ default:
+ return labelMap, fmt.Errorf("invalid key for %s", v)
+ case 2:
+ if errs := validation.IsDNS1123Subdomain(keyParts[0]); len(errs) > 0 {
+ return labelMap, fmt.Errorf("invalid key for %s: %s", v, strings.Join(errs, ","))
+ }
+
+ if errs := validation.IsDNS1123Label(keyParts[1]); len(errs) > 0 {
+ return labelMap, fmt.Errorf("invalid key for %s: %s", v, strings.Join(errs, ","))
+ }
+ case 1:
+ if errs := validation.IsDNS1123Label(keyParts[0]); len(errs) > 0 {
+ return labelMap, fmt.Errorf("invalid key for %s: %s", v, strings.Join(errs, ","))
+ }
}
- errs = validation.IsDNS1035Label(pair[1])
- if len(errs) > 0 {
- return labelMap, errors.New("label format incorrect, requires name=value " + errs[0])
+
+ // handle the value
+ if errs := validation.IsValidLabelValue(pair[1]); len(errs) > 0 {
+ return labelMap, fmt.Errorf("invalid value for %s: %s", v, strings.Join(errs, ","))
}
labelMap[pair[0]] = pair[1]
}
- return labelMap, err
+
+ return labelMap, nil
}
// DeleteLabel ...
From 003b7e8abd99bf6b3b7d75511c846e085bd5776e Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 18 Jan 2021 10:23:59 -0500
Subject: [PATCH 154/276] Refactor of label validation function
This moves the function to a common area in the API code and adds
some much needed testing to it.
---
internal/apiserver/common.go | 57 ++++++++++++++++++
internal/apiserver/common_test.go | 63 ++++++++++++++++++++
internal/apiserver/labelservice/labelimpl.go | 60 +------------------
3 files changed, 122 insertions(+), 58 deletions(-)
diff --git a/internal/apiserver/common.go b/internal/apiserver/common.go
index 50e0db963a..7711b0d7f0 100644
--- a/internal/apiserver/common.go
+++ b/internal/apiserver/common.go
@@ -27,6 +27,7 @@ import (
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/util/validation"
)
const (
@@ -44,6 +45,8 @@ var (
// ErrDBContainerNotFound is an error that indicates that a "database" container
// could not be found in a specific pod
ErrDBContainerNotFound = errors.New("\"database\" container not found in pod")
+ // ErrLabelInvalid indicates that a label is invalid
+ ErrLabelInvalid = errors.New("invalid label")
// ErrStandbyNotAllowed contains the error message returned when an API call is not
// permitted because it involves a cluster that is in standby mode
ErrStandbyNotAllowed = errors.New("Action not permitted because standby mode is enabled")
@@ -127,6 +130,60 @@ func ValidateBackrestStorageTypeForCommand(cluster *crv1.Pgcluster, storageTypeS
return nil
}
+// ValidateLabel is derived from a legacy method and validates if the input is a
+// valid Kubernetes label.
+//
+// A label is composed of a key and value.
+//
+// The key can either be a name or have an optional prefix that is
+// terminated by a "/", e.g. "prefix/name"
+//
+// The name must be a valid DNS 1123 label
+// The prefix must be a valid DNS 1123 subdomain
+//
+// The value can be validated by machinery provided by Kubernetes
+//
+// Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
+func ValidateLabel(labelStr string) (map[string]string, error) {
+ labelMap := map[string]string{}
+
+ for _, v := range strings.Split(labelStr, ",") {
+ pair := strings.Split(v, "=")
+ if len(pair) != 2 {
+ return labelMap, fmt.Errorf("%w: label format incorrect, requires key=value", ErrLabelInvalid)
+ }
+
+ // first handle the key
+ keyParts := strings.Split(pair[0], "/")
+
+ switch len(keyParts) {
+ default:
+ return labelMap, fmt.Errorf("%w: invalid key for %s", ErrLabelInvalid, v)
+ case 2:
+ if errs := validation.IsDNS1123Subdomain(keyParts[0]); len(errs) > 0 {
+ return labelMap, fmt.Errorf("%w: invalid key for %s: %s", ErrLabelInvalid, v, strings.Join(errs, ","))
+ }
+
+ if errs := validation.IsDNS1123Label(keyParts[1]); len(errs) > 0 {
+ return labelMap, fmt.Errorf("%w: invalid key for %s: %s", ErrLabelInvalid, v, strings.Join(errs, ","))
+ }
+ case 1:
+ if errs := validation.IsDNS1123Label(keyParts[0]); len(errs) > 0 {
+ return labelMap, fmt.Errorf("%w: invalid key for %s: %s", ErrLabelInvalid, v, strings.Join(errs, ","))
+ }
+ }
+
+ // handle the value
+ if errs := validation.IsValidLabelValue(pair[1]); len(errs) > 0 {
+ return labelMap, fmt.Errorf("%w: invalid value for %s: %s", ErrLabelInvalid, v, strings.Join(errs, ","))
+ }
+
+ labelMap[pair[0]] = pair[1]
+ }
+
+ return labelMap, nil
+}
+
// ValidateResourceRequestLimit validates that a Kubernetes Requests/Limit pair
// is valid, both by validating the values are valid quantity values, and then
// by checking that the limit >= request. This also needs to check against the
diff --git a/internal/apiserver/common_test.go b/internal/apiserver/common_test.go
index 9f11dc4e49..13507313b8 100644
--- a/internal/apiserver/common_test.go
+++ b/internal/apiserver/common_test.go
@@ -16,6 +16,9 @@ limitations under the License.
*/
import (
+ "errors"
+ "fmt"
+ "strings"
"testing"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
@@ -164,6 +167,66 @@ func TestValidateBackrestStorageTypeForCommand(t *testing.T) {
})
}
+func TestValidateLabel(t *testing.T) {
+ t.Run("valid", func(t *testing.T) {
+ inputs := []map[string]string{
+ map[string]string{"key": "value"},
+ map[string]string{"example.com/key": "value"},
+ map[string]string{"key1": "value1", "key2": "value2"},
+ }
+
+ for _, input := range inputs {
+ labelStr := ""
+
+ for k, v := range input {
+ labelStr += fmt.Sprintf("%s=%s,", k, v)
+ }
+
+ labelStr = strings.Trim(labelStr, ",")
+
+ t.Run(labelStr, func(*testing.T) {
+ labels, err := ValidateLabel(labelStr)
+
+ if err != nil {
+ t.Fatalf("expected no error, got: %s", err.Error())
+ }
+
+ for k := range labels {
+ if v, ok := input[k]; !ok || v != labels[k] {
+ t.Fatalf("label values do not match (%s vs. %s)", input[k], labels[k])
+ }
+ }
+ })
+ }
+ })
+
+ t.Run("invalid", func(t *testing.T) {
+ inputs := []string{
+ "key",
+ "key=value=value",
+ "key=value,",
+ "b@d=value",
+ "b@d-prefix/key=value",
+ "really/bad/prefix/key=value",
+ "key=v\\alue",
+ }
+
+ for _, input := range inputs {
+ t.Run(input, func(t *testing.T) {
+ _, err := ValidateLabel(input)
+
+ if err == nil {
+ t.Fatalf("expected an invalid input error.")
+ }
+
+ if !errors.Is(err, ErrLabelInvalid) {
+ t.Fatalf("expected an ErrLabelInvalid error.")
+ }
+ })
+ }
+ })
+}
+
func TestValidateResourceRequestLimit(t *testing.T) {
t.Run("valid", func(t *testing.T) {
resources := []struct{ request, limit, defaultRequest string }{
diff --git a/internal/apiserver/labelservice/labelimpl.go b/internal/apiserver/labelservice/labelimpl.go
index 062a12323f..d2ee42ccf6 100644
--- a/internal/apiserver/labelservice/labelimpl.go
+++ b/internal/apiserver/labelservice/labelimpl.go
@@ -17,8 +17,6 @@ limitations under the License.
import (
"context"
- "fmt"
- "strings"
"github.com/crunchydata/postgres-operator/internal/apiserver"
"github.com/crunchydata/postgres-operator/internal/config"
@@ -29,7 +27,6 @@ import (
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
- "k8s.io/apimachinery/pkg/util/validation"
)
// Label ... 2 forms ...
@@ -50,7 +47,7 @@ func Label(request *msgs.LabelRequest, ns, pgouser string) msgs.LabelResponse {
return resp
}
- labelsMap, err = validateLabel(request.LabelCmdLabel)
+ labelsMap, err = apiserver.ValidateLabel(request.LabelCmdLabel)
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
@@ -181,59 +178,6 @@ func addLabels(items []crv1.Pgcluster, DryRun bool, LabelCmdLabel string, newLab
}
}
-// validateLabel validates if the input is a valid Kubernetes label
-//
-// A label is composed of a key and value.
-//
-// The key can either be a name or have an optional prefix that i
-// terminated by a "/", e.g. "prefix/name"
-//
-// The name must be a valid DNS 1123 value
-// THe prefix must be a valid DNS 1123 subdomain
-//
-// The value can be validated by machinery provided by Kubenretes
-//
-// Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
-func validateLabel(LabelCmdLabel string) (map[string]string, error) {
- labelMap := map[string]string{}
-
- for _, v := range strings.Split(LabelCmdLabel, ",") {
- pair := strings.Split(v, "=")
- if len(pair) != 2 {
- return labelMap, fmt.Errorf("label format incorrect, requires key=value")
- }
-
- // first handle the key
- keyParts := strings.Split(pair[0], "/")
-
- switch len(keyParts) {
- default:
- return labelMap, fmt.Errorf("invalid key for " + v)
- case 2:
- if errs := validation.IsDNS1123Subdomain(keyParts[0]); len(errs) > 0 {
- return labelMap, fmt.Errorf("invalid key for %s: %s", v, strings.Join(errs, ","))
- }
-
- if errs := validation.IsDNS1123Label(keyParts[1]); len(errs) > 0 {
- return labelMap, fmt.Errorf("invalid key for %s: %s", v, strings.Join(errs, ","))
- }
- case 1:
- if errs := validation.IsDNS1123Label(keyParts[0]); len(errs) > 0 {
- return labelMap, fmt.Errorf("invalid key for %s: %s", v, strings.Join(errs, ","))
- }
- }
-
- // handle the value
- if errs := validation.IsValidLabelValue(pair[1]); len(errs) > 0 {
- return labelMap, fmt.Errorf("invalid value for %s: %s", v, strings.Join(errs, ","))
- }
-
- labelMap[pair[0]] = pair[1]
- }
-
- return labelMap, nil
-}
-
// DeleteLabel ...
// pgo delete label mycluster yourcluster --label=env=prod
// pgo delete label --label=env=prod --selector=group=somegroup
@@ -252,7 +196,7 @@ func DeleteLabel(request *msgs.DeleteLabelRequest, ns string) msgs.LabelResponse
return resp
}
- labelsMap, err = validateLabel(request.LabelCmdLabel)
+ labelsMap, err = apiserver.ValidateLabel(request.LabelCmdLabel)
if err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = "labels not formatted correctly"
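For readers following the series, here is a self-contained sketch (outside the patches themselves) of the `key=value` heuristic that `ValidateLabel` implements: split on `,`, require exactly one `=` per pair, allow at most one `/` in the key. The regexps below only approximate the DNS-1123 checks; the Operator itself calls `k8s.io/apimachinery/pkg/util/validation`, and the helper name `validateLabelString` is illustrative.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Rough approximations of the Kubernetes DNS-1123 rules; the real code uses
// validation.IsDNS1123Label, IsDNS1123Subdomain, and IsValidLabelValue.
var (
	dns1123Label     = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)
	dns1123Subdomain = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)
	labelValue       = regexp.MustCompile(`^([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]$|^$`)
)

// validateLabelString parses a comma-separated "key=value" list into a map,
// rejecting pairs that do not have exactly one "=" or whose key has more
// than one "/" segment separator.
func validateLabelString(s string) (map[string]string, error) {
	labels := map[string]string{}
	for _, pair := range strings.Split(s, ",") {
		kv := strings.Split(pair, "=")
		if len(kv) != 2 {
			return nil, fmt.Errorf("label format incorrect, requires key=value: %q", pair)
		}
		keyParts := strings.Split(kv[0], "/")
		switch len(keyParts) {
		case 1: // bare name
			if !dns1123Label.MatchString(keyParts[0]) {
				return nil, fmt.Errorf("invalid key %q", kv[0])
			}
		case 2: // prefix/name
			if !dns1123Subdomain.MatchString(keyParts[0]) || !dns1123Label.MatchString(keyParts[1]) {
				return nil, fmt.Errorf("invalid key %q", kv[0])
			}
		default:
			return nil, fmt.Errorf("invalid key %q", kv[0])
		}
		if !labelValue.MatchString(kv[1]) {
			return nil, fmt.Errorf("invalid value %q", kv[1])
		}
		labels[kv[0]] = kv[1]
	}
	return labels, nil
}

func main() {
	m, err := validateLabelString("env=prod,example.com/tier=db")
	fmt.Println(m, err)
	_, err = validateLabelString("key=value=value")
	fmt.Println(err != nil) // true: more than one "="
}
```

Note that a trailing comma (`"key=value,"`) also fails, because the final empty segment has no `=`, matching the invalid-input cases in the test above.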
From c3a21f8c379a2a7e6f7d1001555cb85dc7860db6 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 18 Jan 2021 10:31:28 -0500
Subject: [PATCH 155/276] Unify label validation strategy across API functions
The `pgo create cluster` command was not using the same label
validation as the other label commands.
Issue: [ch10201]
---
.../apiserver/clusterservice/clusterimpl.go | 18 ++++++------------
1 file changed, 6 insertions(+), 12 deletions(-)
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 9219afcfa9..92ff1f93de 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -576,18 +576,12 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
return resp
}
- userLabelsMap := make(map[string]string)
- if request.UserLabels != "" {
- labels := strings.Split(request.UserLabels, ",")
- for _, v := range labels {
- p := strings.Split(v, "=")
- if len(p) < 2 {
- resp.Status.Code = msgs.Error
- resp.Status.Msg = "invalid labels format"
- return resp
- }
- userLabelsMap[p[0]] = p[1]
- }
+ userLabelsMap, err := apiserver.ValidateLabel(request.UserLabels)
+
+ if err != nil {
+ resp.Status.Code = msgs.Error
+ resp.Status.Msg = err.Error()
+ return resp
}
// validate any parameters provided to bootstrap the cluster from an existing data source
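The gap this patch closes can be seen side by side. The sketch below contrasts the old ad-hoc parser in `CreateCluster`, which required only `len(parts) >= 2` and silently dropped extra `=value` segments, with the stricter exactly-one-`=` rule used by the shared validation. `oldParse` and `unifiedOK` are illustrative names, not functions in the Operator.

```go
package main

import (
	"fmt"
	"strings"
)

// oldParse mimics the pre-patch CreateCluster parser: it only required
// len(parts) >= 2, so extra "=value" segments were silently dropped.
func oldParse(s string) (map[string]string, bool) {
	out := map[string]string{}
	for _, v := range strings.Split(s, ",") {
		p := strings.Split(v, "=")
		if len(p) < 2 {
			return nil, false
		}
		out[p[0]] = p[1]
	}
	return out, true
}

// unifiedOK mimics the stricter shared rule: exactly one "=" per pair.
func unifiedOK(s string) bool {
	for _, v := range strings.Split(s, ",") {
		if len(strings.Split(v, "=")) != 2 {
			return false
		}
	}
	return true
}

func main() {
	m, ok := oldParse("key=value=value")
	fmt.Println(m, ok) // map[key:value] true (mangled, not rejected)
	fmt.Println(unifiedOK("key=value=value")) // false
}
```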
From 630cec5bee21584b3315a188f4f50c567db417d6 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Mon, 18 Jan 2021 11:33:17 -0500
Subject: [PATCH 156/276] Update labeling interface to be consistent with
others
This updates the labeling interface to match similar interfaces,
e.g. the one for annotations.
- `pgo create cluster`, `pgo label`, and `pgo delete label` all
now have a single `--label` flag. `--label` can be specified multiple
times.
- The API call itself takes a mapping of key/value pairs.
- The API endpoint parameter for `pgo label` and `pgo delete label`
is now called `Labels`
Issue: [ch10202]
---
cmd/pgo/cmd/cluster.go | 2 +-
cmd/pgo/cmd/common.go | 29 +++++++++
cmd/pgo/cmd/create.go | 5 +-
cmd/pgo/cmd/delete.go | 5 +-
cmd/pgo/cmd/label.go | 23 +++----
.../reference/pgo_create_cluster.md | 4 +-
.../pgo-client/reference/pgo_delete_label.md | 4 +-
.../content/pgo-client/reference/pgo_label.md | 4 +-
.../apiserver/clusterservice/clusterimpl.go | 13 ++--
internal/apiserver/common.go | 57 -----------------
internal/apiserver/common_test.go | 63 -------------------
internal/apiserver/labelservice/labelimpl.go | 42 +++----------
internal/util/cluster.go | 57 +++++++++++++++--
internal/util/cluster_test.go | 44 +++++++++++++
pkg/apiservermsgs/clustermsgs.go | 2 +-
pkg/apiservermsgs/labelmsgs.go | 4 +-
pkg/events/eventtype.go | 17 -----
17 files changed, 166 insertions(+), 209 deletions(-)
diff --git a/cmd/pgo/cmd/cluster.go b/cmd/pgo/cmd/cluster.go
index 848fd0e995..7d95f782ff 100644
--- a/cmd/pgo/cmd/cluster.go
+++ b/cmd/pgo/cmd/cluster.go
@@ -267,7 +267,7 @@ func createCluster(args []string, ns string, createClusterCmd *cobra.Command) {
r.PasswordReplication = PasswordReplication
r.Password = Password
r.SecretFrom = SecretFrom
- r.UserLabels = UserLabels
+ r.UserLabels = getLabels(UserLabels)
r.Policies = PoliciesFlag
r.CCPImageTag = CCPImageTag
r.CCPImage = CCPImage
diff --git a/cmd/pgo/cmd/common.go b/cmd/pgo/cmd/common.go
index 087326b57d..66b1f4bd51 100644
--- a/cmd/pgo/cmd/common.go
+++ b/cmd/pgo/cmd/common.go
@@ -20,7 +20,9 @@ import (
"fmt"
"os"
"reflect"
+ "strings"
+ operatorutil "github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
)
@@ -85,6 +87,33 @@ func getHeaderLength(value interface{}, fieldName string) int {
return len(field.String())
}
+// getLabels validates that the provided labels are in "key=value" format
+// and, if so, returns them as a map.
+//
+// If not, the program exits with an error.
+func getLabels(labels []string) map[string]string {
+ clusterLabels := map[string]string{}
+
+ for _, label := range labels {
+ parts := strings.Split(label, "=")
+
+ if len(parts) != 2 {
+ fmt.Printf("invalid label: found %q, should be \"key=value\"\n", label)
+ os.Exit(1)
+ }
+
+ clusterLabels[parts[0]] = parts[1]
+ }
+
+ // perform a validation that can save us a round trip to the server
+ if err := operatorutil.ValidateLabels(clusterLabels); err != nil {
+ fmt.Println(err.Error())
+ os.Exit(1)
+ }
+
+ return clusterLabels
+}
+
// getMaxLength returns the maxLength of the strings of a particular value in
// the struct. Increases the max length by 1 to include a buffer
func getMaxLength(results []interface{}, title, fieldName string) int {
diff --git a/cmd/pgo/cmd/create.go b/cmd/pgo/cmd/create.go
index d1cd3eaa14..aabfa84e74 100644
--- a/cmd/pgo/cmd/create.go
+++ b/cmd/pgo/cmd/create.go
@@ -39,7 +39,7 @@ var (
Password string
SecretFrom string
PoliciesFlag, PolicyFile string
- UserLabels string
+ UserLabels []string
Tablespaces []string
ServiceType string
ServiceTypePgBouncer string
@@ -392,7 +392,8 @@ func init() {
createClusterCmd.Flags().StringVarP(&CustomConfig, "custom-config", "", "", "The name of a configMap that holds custom PostgreSQL configuration files used to override defaults.")
createClusterCmd.Flags().StringVarP(&Database, "database", "d", "", "If specified, sets the name of the initial database that is created for the user. Defaults to the value set in the PostgreSQL Operator configuration, or if that is not present, the name of the cluster")
createClusterCmd.Flags().BoolVarP(&DisableAutofailFlag, "disable-autofail", "", false, "Disables autofail capabitilies in the cluster following cluster initialization.")
- createClusterCmd.Flags().StringVarP(&UserLabels, "labels", "l", "", "The labels to apply to this cluster.")
+ createClusterCmd.Flags().StringSliceVar(&UserLabels, "label", []string{}, "Add labels to apply to the PostgreSQL cluster, "+
+ "e.g. \"key=value\", \"prefix/key=value\". Can specify flag multiple times.")
createClusterCmd.Flags().StringVar(&MemoryRequest, "memory", "", "Set the amount of RAM to request, e.g. "+
"1GiB. Overrides the default server value.")
createClusterCmd.Flags().StringVar(&MemoryLimit, "memory-limit", "", "Set the amount of RAM to limit, e.g. "+
diff --git a/cmd/pgo/cmd/delete.go b/cmd/pgo/cmd/delete.go
index 511204ac8e..8397690492 100644
--- a/cmd/pgo/cmd/delete.go
+++ b/cmd/pgo/cmd/delete.go
@@ -170,8 +170,9 @@ func init() {
deleteCmd.AddCommand(deleteLabelCmd)
// pgo delete label --label
// the label to be deleted
- deleteLabelCmd.Flags().StringVar(&LabelCmdLabel, "label", "",
- "The label to delete for any selected or specified clusters.")
+ deleteLabelCmd.Flags().StringSliceVar(&UserLabels, "label", []string{}, "The "+
+ "labels to delete from the PostgreSQL cluster, "+"e.g. \"key=value\", \"prefix/key=value\". "+
+ "Can specify flag multiple times.")
// "pgo delete label --selector"
// "pgo delete label -s"
// the selector flag that filters which clusters to delete the cluster
diff --git a/cmd/pgo/cmd/label.go b/cmd/pgo/cmd/label.go
index 70b18061e5..fcda3b7c82 100644
--- a/cmd/pgo/cmd/label.go
+++ b/cmd/pgo/cmd/label.go
@@ -26,9 +26,7 @@ import (
)
var (
- LabelCmdLabel string
- LabelMap map[string]string
- DeleteLabel bool
+ DeleteLabel bool
)
var labelCmd = &cobra.Command{
@@ -47,22 +45,25 @@ var labelCmd = &cobra.Command{
log.Debug("label called")
if len(args) == 0 && Selector == "" {
fmt.Println("Error: A selector or list of clusters is required to label a policy.")
- return
+ os.Exit(1)
}
- if LabelCmdLabel == "" {
+
+ if len(UserLabels) == 0 {
fmt.Println("Error: You must specify the label to apply.")
- } else {
- labelClusters(args, Namespace)
+ os.Exit(1)
}
+
+ labelClusters(args, Namespace)
},
}
func init() {
RootCmd.AddCommand(labelCmd)
+ labelCmd.Flags().BoolVar(&DryRun, "dry-run", false, "Shows the clusters that the label would be applied to, without labelling them.")
+ labelCmd.Flags().StringSliceVar(&UserLabels, "label", []string{}, "Add labels to apply to the PostgreSQL cluster, "+
+ "e.g. \"key=value\", \"prefix/key=value\". Can specify flag multiple times.")
labelCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.")
- labelCmd.Flags().StringVarP(&LabelCmdLabel, "label", "", "", "The new label to apply for any selected or specified clusters.")
- labelCmd.Flags().BoolVarP(&DryRun, "dry-run", "", false, "Shows the clusters that the label would be applied to, without labelling them.")
}
func labelClusters(clusters []string, ns string) {
@@ -78,7 +79,7 @@ func labelClusters(clusters []string, ns string) {
r.Namespace = ns
r.Selector = Selector
r.DryRun = DryRun
- r.LabelCmdLabel = LabelCmdLabel
+ r.Labels = getLabels(UserLabels)
r.DeleteLabel = DeleteLabel
r.ClientVersion = msgs.PGO_VERSION
@@ -111,7 +112,7 @@ func deleteLabel(args []string, ns string) {
req.Selector = Selector
req.Namespace = ns
req.Args = args
- req.LabelCmdLabel = LabelCmdLabel
+ req.Labels = getLabels(UserLabels)
req.ClientVersion = msgs.PGO_VERSION
response, err := api.DeleteLabel(httpclient, &SessionCredentials, &req)
diff --git a/docs/content/pgo-client/reference/pgo_create_cluster.md b/docs/content/pgo-client/reference/pgo_create_cluster.md
index 9993e29127..4fa7532a5b 100644
--- a/docs/content/pgo-client/reference/pgo_create_cluster.md
+++ b/docs/content/pgo-client/reference/pgo_create_cluster.md
@@ -45,7 +45,7 @@ pgo create cluster [flags]
--exporter-memory string Set the amount of memory to request for the Crunchy Postgres Exporter sidecar container. Defaults to server value (24Mi).
--exporter-memory-limit string Set the amount of memory to limit for the Crunchy Postgres Exporter sidecar container.
-h, --help help for cluster
- -l, --labels string The labels to apply to this cluster.
+ --label strings Add labels to apply to the PostgreSQL cluster, e.g. "key=value", "prefix/key=value". Can specify flag multiple times.
--memory string Set the amount of RAM to request, e.g. 1GiB. Overrides the default server value.
--memory-limit string Set the amount of RAM to limit, e.g. 1GiB.
--metrics Adds the crunchy-postgres-exporter container to the database pod.
@@ -136,4 +136,4 @@ pgo create cluster [flags]
* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource
-###### Auto generated by spf13/cobra on 14-Jan-2021
+###### Auto generated by spf13/cobra on 18-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_delete_label.md b/docs/content/pgo-client/reference/pgo_delete_label.md
index 2e8efeb5c3..dbd22e4bf6 100644
--- a/docs/content/pgo-client/reference/pgo_delete_label.md
+++ b/docs/content/pgo-client/reference/pgo_delete_label.md
@@ -21,7 +21,7 @@ pgo delete label [flags]
```
-h, --help help for label
- --label string The label to delete for any selected or specified clusters.
+ --label strings The labels to delete from the PostgreSQL cluster, e.g. "key=value", "prefix/key=value". Can specify flag multiple times.
-s, --selector string The selector to use for cluster filtering.
```
@@ -42,4 +42,4 @@ pgo delete label [flags]
* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource
-###### Auto generated by spf13/cobra on 14-Jan-2021
+###### Auto generated by spf13/cobra on 18-Jan-2021
diff --git a/docs/content/pgo-client/reference/pgo_label.md b/docs/content/pgo-client/reference/pgo_label.md
index abcb3e0115..143b217ec6 100644
--- a/docs/content/pgo-client/reference/pgo_label.md
+++ b/docs/content/pgo-client/reference/pgo_label.md
@@ -23,7 +23,7 @@ pgo label [flags]
```
--dry-run Shows the clusters that the label would be applied to, without labelling them.
-h, --help help for label
- --label string The new label to apply for any selected or specified clusters.
+ --label strings Add labels to apply to the PostgreSQL cluster, e.g. "key=value", "prefix/key=value". Can specify flag multiple times.
-s, --selector string The selector to use for cluster filtering.
```
@@ -44,4 +44,4 @@ pgo label [flags]
* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface.
-###### Auto generated by spf13/cobra on 14-Jan-2021
+###### Auto generated by spf13/cobra on 18-Jan-2021
diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go
index 92ff1f93de..18999d66eb 100644
--- a/internal/apiserver/clusterservice/clusterimpl.go
+++ b/internal/apiserver/clusterservice/clusterimpl.go
@@ -576,9 +576,7 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
return resp
}
- userLabelsMap, err := apiserver.ValidateLabel(request.UserLabels)
-
- if err != nil {
+ if err := util.ValidateLabels(request.UserLabels); err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
return resp
@@ -753,9 +751,6 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
return resp
}
- log.Debug("userLabelsMap")
- log.Debugf("%v", userLabelsMap)
-
if request.StorageConfig != "" && !apiserver.IsValidStorageName(request.StorageConfig) {
resp.Status.Code = msgs.Error
resp.Status.Msg = fmt.Sprintf("%q storage config was not found", request.StorageConfig)
@@ -857,7 +852,7 @@ func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.
}
// Create an instance of our CRD
- newInstance := getClusterParams(request, clusterName, userLabelsMap, ns)
+ newInstance := getClusterParams(request, clusterName, ns)
newInstance.ObjectMeta.Labels[config.LABEL_PGOUSER] = pgouser
newInstance.Spec.BackrestStorageTypes = backrestStorageTypes
@@ -1066,7 +1061,7 @@ func validateConfigPolicies(clusterName, PoliciesFlag, ns string) error {
return err
}
-func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabelsMap map[string]string, ns string) *crv1.Pgcluster {
+func getClusterParams(request *msgs.CreateClusterRequest, name string, ns string) *crv1.Pgcluster {
spec := crv1.PgclusterSpec{
Annotations: crv1.ClusterAnnotations{
Backrest: map[string]string{},
@@ -1363,7 +1358,7 @@ func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabel
spec.ServiceType = request.ServiceType
- spec.UserLabels = userLabelsMap
+ spec.UserLabels = request.UserLabels
spec.UserLabels[config.LABEL_PGO_VERSION] = msgs.PGO_VERSION
// override any values from config file
diff --git a/internal/apiserver/common.go b/internal/apiserver/common.go
index 7711b0d7f0..50e0db963a 100644
--- a/internal/apiserver/common.go
+++ b/internal/apiserver/common.go
@@ -27,7 +27,6 @@ import (
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/util/validation"
)
const (
@@ -45,8 +44,6 @@ var (
// ErrDBContainerNotFound is an error that indicates that a "database" container
// could not be found in a specific pod
ErrDBContainerNotFound = errors.New("\"database\" container not found in pod")
- // ErrLabelInvalid indicates that a label is invalid
- ErrLabelInvalid = errors.New("invalid label")
// ErrStandbyNotAllowed contains the error message returned when an API call is not
// permitted because it involves a cluster that is in standby mode
ErrStandbyNotAllowed = errors.New("Action not permitted because standby mode is enabled")
@@ -130,60 +127,6 @@ func ValidateBackrestStorageTypeForCommand(cluster *crv1.Pgcluster, storageTypeS
return nil
}
-// ValidateLabel is derived from a legacy method and validates if the input is a
-// valid Kubernetes label.
-//
-// A label is composed of a key and value.
-//
-// The key can either be a name or have an optional prefix that i
-// terminated by a "/", e.g. "prefix/name"
-//
-// The name must be a valid DNS 1123 value
-// THe prefix must be a valid DNS 1123 subdomain
-//
-// The value can be validated by machinery provided by Kubenretes
-//
-// Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
-func ValidateLabel(labelStr string) (map[string]string, error) {
- labelMap := map[string]string{}
-
- for _, v := range strings.Split(labelStr, ",") {
- pair := strings.Split(v, "=")
- if len(pair) != 2 {
- return labelMap, fmt.Errorf("%w: label format incorrect, requires key=value", ErrLabelInvalid)
- }
-
- // first handle the key
- keyParts := strings.Split(pair[0], "/")
-
- switch len(keyParts) {
- default:
- return labelMap, fmt.Errorf("%w: invalid key for "+v, ErrLabelInvalid)
- case 2:
- if errs := validation.IsDNS1123Subdomain(keyParts[0]); len(errs) > 0 {
- return labelMap, fmt.Errorf("%w: invalid key for %s: %s", ErrLabelInvalid, v, strings.Join(errs, ","))
- }
-
- if errs := validation.IsDNS1123Label(keyParts[1]); len(errs) > 0 {
- return labelMap, fmt.Errorf("%w: invalid key for %s: %s", ErrLabelInvalid, v, strings.Join(errs, ","))
- }
- case 1:
- if errs := validation.IsDNS1123Label(keyParts[0]); len(errs) > 0 {
- return labelMap, fmt.Errorf("%w: invalid key for %s: %s", ErrLabelInvalid, v, strings.Join(errs, ","))
- }
- }
-
- // handle the value
- if errs := validation.IsValidLabelValue(pair[1]); len(errs) > 0 {
- return labelMap, fmt.Errorf("%w: invalid value for %s: %s", ErrLabelInvalid, v, strings.Join(errs, ","))
- }
-
- labelMap[pair[0]] = pair[1]
- }
-
- return labelMap, nil
-}
-
// ValidateResourceRequestLimit validates that a Kubernetes Requests/Limit pair
// is valid, both by validating the values are valid quantity values, and then
// by checking that the limit >= request. This also needs to check against the
diff --git a/internal/apiserver/common_test.go b/internal/apiserver/common_test.go
index 13507313b8..9f11dc4e49 100644
--- a/internal/apiserver/common_test.go
+++ b/internal/apiserver/common_test.go
@@ -16,9 +16,6 @@ limitations under the License.
*/
import (
- "errors"
- "fmt"
- "strings"
"testing"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
@@ -167,66 +164,6 @@ func TestValidateBackrestStorageTypeForCommand(t *testing.T) {
})
}
-func TestValidateLabel(t *testing.T) {
- t.Run("valid", func(t *testing.T) {
- inputs := []map[string]string{
- map[string]string{"key": "value"},
- map[string]string{"example.com/key": "value"},
- map[string]string{"key1": "value1", "key2": "value2"},
- }
-
- for _, input := range inputs {
- labelStr := ""
-
- for k, v := range input {
- labelStr += fmt.Sprintf("%s=%s,", k, v)
- }
-
- labelStr = strings.Trim(labelStr, ",")
-
- t.Run(labelStr, func(*testing.T) {
- labels, err := ValidateLabel(labelStr)
-
- if err != nil {
- t.Fatalf("expected no error, got: %s", err.Error())
- }
-
- for k := range labels {
- if v, ok := input[k]; !(ok || v == labels[k]) {
- t.Fatalf("label values do not matched (%s vs. %s)", input[k], labels[k])
- }
- }
- })
- }
- })
-
- t.Run("invalid", func(t *testing.T) {
- inputs := []string{
- "key",
- "key=value=value",
- "key=value,",
- "b@d=value",
- "b@d-prefix/key=value",
- "really/bad/prefix/key=value",
- "key=v\\alue",
- }
-
- for _, input := range inputs {
- t.Run(input, func(t *testing.T) {
- _, err := ValidateLabel(input)
-
- if err == nil {
- t.Fatalf("expected an invalid input error.")
- }
-
- if !errors.Is(err, ErrLabelInvalid) {
- t.Fatalf("expected an ErrLabelInvalid error.")
- }
- })
- }
- })
-}
-
func TestValidateResourceRequestLimit(t *testing.T) {
t.Run("valid", func(t *testing.T) {
resources := []struct{ request, limit, defaultRequest string }{
diff --git a/internal/apiserver/labelservice/labelimpl.go b/internal/apiserver/labelservice/labelimpl.go
index d2ee42ccf6..ffe975311b 100644
--- a/internal/apiserver/labelservice/labelimpl.go
+++ b/internal/apiserver/labelservice/labelimpl.go
@@ -21,9 +21,9 @@ import (
"github.com/crunchydata/postgres-operator/internal/apiserver"
"github.com/crunchydata/postgres-operator/internal/config"
"github.com/crunchydata/postgres-operator/internal/kubeapi"
+ "github.com/crunchydata/postgres-operator/internal/util"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs"
- "github.com/crunchydata/postgres-operator/pkg/events"
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
@@ -34,8 +34,7 @@ import (
// pgo label --label=env=prod --selector=name=mycluster
func Label(request *msgs.LabelRequest, ns, pgouser string) msgs.LabelResponse {
ctx := context.TODO()
- var err error
- var labelsMap map[string]string
+
resp := msgs.LabelResponse{}
resp.Status.Code = msgs.Ok
resp.Status.Msg = ""
@@ -47,8 +46,7 @@ func Label(request *msgs.LabelRequest, ns, pgouser string) msgs.LabelResponse {
return resp
}
- labelsMap, err = apiserver.ValidateLabel(request.LabelCmdLabel)
- if err != nil {
+ if err := util.ValidateLabels(request.Labels); err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
return resp
@@ -106,12 +104,12 @@ func Label(request *msgs.LabelRequest, ns, pgouser string) msgs.LabelResponse {
resp.Results = append(resp.Results, c.Spec.Name)
}
- addLabels(clusterList.Items, request.DryRun, request.LabelCmdLabel, labelsMap, ns, pgouser)
+ addLabels(clusterList.Items, request.DryRun, request.Labels, ns)
return resp
}
-func addLabels(items []crv1.Pgcluster, DryRun bool, LabelCmdLabel string, newLabels map[string]string, ns, pgouser string) {
+func addLabels(items []crv1.Pgcluster, DryRun bool, newLabels map[string]string, ns string) {
ctx := context.TODO()
patchBytes, err := kubeapi.NewMergePatch().Add("metadata", "labels")(newLabels).Bytes()
if err != nil {
@@ -129,27 +127,6 @@ func addLabels(items []crv1.Pgcluster, DryRun bool, LabelCmdLabel string, newLab
if err != nil {
log.Error(err.Error())
}
-
- // publish event for create label
- topics := make([]string, 1)
- topics[0] = events.EventTopicCluster
-
- f := events.EventCreateLabelFormat{
- EventHeader: events.EventHeader{
- Namespace: ns,
- Username: pgouser,
- Topic: topics,
- EventType: events.EventCreateLabel,
- },
- Clustername: items[i].Spec.Name,
- Label: LabelCmdLabel,
- }
-
- err = events.Publish(f)
- if err != nil {
- log.Error(err.Error())
- }
-
}
}
@@ -183,8 +160,7 @@ func addLabels(items []crv1.Pgcluster, DryRun bool, LabelCmdLabel string, newLab
// pgo delete label --label=env=prod --selector=group=somegroup
func DeleteLabel(request *msgs.DeleteLabelRequest, ns string) msgs.LabelResponse {
ctx := context.TODO()
- var err error
- var labelsMap map[string]string
+
resp := msgs.LabelResponse{}
resp.Status.Code = msgs.Ok
resp.Status.Msg = ""
@@ -196,8 +172,7 @@ func DeleteLabel(request *msgs.DeleteLabelRequest, ns string) msgs.LabelResponse
return resp
}
- labelsMap, err = apiserver.ValidateLabel(request.LabelCmdLabel)
- if err != nil {
+ if err := util.ValidateLabels(request.Labels); err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = "labels not formatted correctly"
return resp
@@ -253,8 +228,7 @@ func DeleteLabel(request *msgs.DeleteLabelRequest, ns string) msgs.LabelResponse
resp.Results = append(resp.Results, "deleting label from "+c.Spec.Name)
}
- err = deleteLabels(clusterList.Items, labelsMap, ns)
- if err != nil {
+ if err := deleteLabels(clusterList.Items, request.Labels, ns); err != nil {
resp.Status.Code = msgs.Error
resp.Status.Msg = err.Error()
return resp
diff --git a/internal/util/cluster.go b/internal/util/cluster.go
index 2349cedbb0..25d85c4609 100644
--- a/internal/util/cluster.go
+++ b/internal/util/cluster.go
@@ -31,6 +31,7 @@ import (
v1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/util/validation"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
@@ -105,10 +106,14 @@ const (
sqlSetPasswordDefault = `ALTER ROLE %s PASSWORD %s;`
)
-// ErrMissingConfigAnnotation represents an error thrown when the 'config' annotation is found
-// to be missing from the 'config' configMap created to store cluster-wide configuration
-var ErrMissingConfigAnnotation error = errors.New("'config' annotation missing from cluster " +
- "configutation")
+var (
+ // ErrLabelInvalid indicates that a label is invalid
+ ErrLabelInvalid = errors.New("invalid label")
+ // ErrMissingConfigAnnotation represents an error thrown when the 'config' annotation is found
+ // to be missing from the 'config' configMap created to store cluster-wide configuration
+ ErrMissingConfigAnnotation error = errors.New("'config' annotation missing from cluster " +
+ "configuration")
+)
// CmdStopPostgreSQL is the command used to stop a PostgreSQL instance, which
// uses the "fast" shutdown mode. This needs a data directory appended to it
@@ -461,3 +466,47 @@ func StopPostgreSQLInstance(clientset kubernetes.Interface, restconfig *rest.Con
return nil
}
+
+// ValidateLabels validates that every entry in the input map is a valid Kubernetes label.
+//
+// A label is composed of a key and a value.
+//
+// The key can either be a name or have an optional prefix that is
+// terminated by a "/", e.g. "prefix/name".
+//
+// The name must be a valid DNS 1123 label.
+// The prefix must be a valid DNS 1123 subdomain.
+//
+// The value is validated by machinery provided by Kubernetes.
+//
+// Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
+func ValidateLabels(labels map[string]string) error {
+ for k, v := range labels {
+ // first handle the key
+ keyParts := strings.Split(k, "/")
+
+ switch len(keyParts) {
+ default:
+ return fmt.Errorf("%w: invalid key %s", ErrLabelInvalid, k)
+ case 2:
+ if errs := validation.IsDNS1123Subdomain(keyParts[0]); len(errs) > 0 {
+ return fmt.Errorf("%w: invalid key %s: %s", ErrLabelInvalid, k, strings.Join(errs, ","))
+ }
+
+ if errs := validation.IsDNS1123Label(keyParts[1]); len(errs) > 0 {
+ return fmt.Errorf("%w: invalid key %s: %s", ErrLabelInvalid, k, strings.Join(errs, ","))
+ }
+ case 1:
+ if errs := validation.IsDNS1123Label(keyParts[0]); len(errs) > 0 {
+ return fmt.Errorf("%w: invalid key %s: %s", ErrLabelInvalid, k, strings.Join(errs, ","))
+ }
+ }
+
+ // handle the value
+ if errs := validation.IsValidLabelValue(v); len(errs) > 0 {
+ return fmt.Errorf("%w: invalid value %s: %s", ErrLabelInvalid, v, strings.Join(errs, ","))
+ }
+ }
+
+ return nil
+}
diff --git a/internal/util/cluster_test.go b/internal/util/cluster_test.go
index 6bb8ea472a..98d3a0a1f0 100644
--- a/internal/util/cluster_test.go
+++ b/internal/util/cluster_test.go
@@ -16,11 +16,14 @@ limitations under the License.
*/
import (
+ "errors"
"reflect"
"testing"
crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
+
v1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/labels"
)
func TestGenerateNodeAffinity(t *testing.T) {
@@ -114,3 +117,44 @@ func TestGenerateNodeAffinity(t *testing.T) {
}
})
}
+
+func TestValidateLabels(t *testing.T) {
+ t.Run("valid", func(t *testing.T) {
+ inputs := []map[string]string{
+ {"key": "value"},
+ {"example.com/key": "value"},
+ {"key1": "value1", "key2": "value2"},
+ }
+
+ for _, input := range inputs {
+ t.Run(labels.FormatLabels(input), func(*testing.T) {
+ err := ValidateLabels(input)
+
+ if err != nil {
+ t.Fatalf("expected no error, got: %s", err.Error())
+ }
+ })
+ }
+ })
+
+ t.Run("invalid", func(t *testing.T) {
+ inputs := []map[string]string{
+ {"key=value": "value"},
+ {"key": "value", "": ""},
+ {"b@d": "value"},
+ {"b@d-prefix/key": "value"},
+ {"really/bad/prefix/key": "value"},
+ {"key": "v\\alue"},
+ }
+
+ for _, input := range inputs {
+ t.Run(labels.FormatLabels(input), func(t *testing.T) {
+ err := ValidateLabels(input)
+
+ if !errors.Is(err, ErrLabelInvalid) {
+ t.Fatalf("expected an ErrLabelInvalid error, got %T: %v", err, err)
+ }
+ })
+ }
+ })
+}
diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go
index d6cbf91fd2..912cf371c5 100644
--- a/pkg/apiservermsgs/clustermsgs.go
+++ b/pkg/apiservermsgs/clustermsgs.go
@@ -57,7 +57,7 @@ type CreateClusterRequest struct {
PasswordReplication string
Password string
SecretFrom string
- UserLabels string
+ UserLabels map[string]string
Tablespaces []ClusterTablespaceDetail
Policies string
CCPImage string
diff --git a/pkg/apiservermsgs/labelmsgs.go b/pkg/apiservermsgs/labelmsgs.go
index d0a914840e..c28430c818 100644
--- a/pkg/apiservermsgs/labelmsgs.go
+++ b/pkg/apiservermsgs/labelmsgs.go
@@ -21,7 +21,7 @@ type LabelRequest struct {
Selector string
Namespace string
Args []string
- LabelCmdLabel string
+ Labels map[string]string
DryRun bool
DeleteLabel bool
ClientVersion string
@@ -33,7 +33,7 @@ type DeleteLabelRequest struct {
Selector string
Namespace string
Args []string
- LabelCmdLabel string
+ Labels map[string]string
ClientVersion string
}
diff --git a/pkg/events/eventtype.go b/pkg/events/eventtype.go
index ebecda8055..9781277883 100644
--- a/pkg/events/eventtype.go
+++ b/pkg/events/eventtype.go
@@ -52,7 +52,6 @@ const (
EventUpgradeClusterFailure = "UpgradeClusterFailure"
EventDeleteCluster = "DeleteCluster"
EventDeleteClusterCompleted = "DeleteClusterCompleted"
- EventCreateLabel = "CreateLabel"
EventCreateBackup = "CreateBackup"
EventCreateBackupCompleted = "CreateBackupCompleted"
@@ -313,22 +312,6 @@ func (lvl EventCreateBackupCompletedFormat) String() string {
return msg
}
-//----------------------------
-type EventCreateLabelFormat struct {
- EventHeader `json:"eventheader"`
- Clustername string `json:"clustername"`
- Label string `json:"label"`
-}
-
-func (p EventCreateLabelFormat) GetHeader() EventHeader {
- return p.EventHeader
-}
-
-func (lvl EventCreateLabelFormat) String() string {
- msg := fmt.Sprintf("Event %s (create label) - clustername %s - label [%s]", lvl.EventHeader, lvl.Clustername, lvl.Label)
- return msg
-}
-
//----------------------------
type EventCreatePolicyFormat struct {
EventHeader `json:"eventheader"`
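The client-side half of this change, the `getLabels` helper, folds repeated `--label` flags into the map the API now expects. A minimal sketch under those assumptions follows; the name `toLabelMap` is illustrative, and the real helper additionally calls `ValidateLabels` and exits on error rather than returning one.

```go
package main

import (
	"fmt"
	"strings"
)

// toLabelMap folds repeated --label flag values, each a single "key=value"
// string, into one map before the request is sent to the API server.
func toLabelMap(flags []string) (map[string]string, error) {
	labels := map[string]string{}
	for _, flag := range flags {
		parts := strings.Split(flag, "=")
		if len(parts) != 2 {
			return nil, fmt.Errorf("invalid label: found %q, should be \"key=value\"", flag)
		}
		labels[parts[0]] = parts[1]
	}
	return labels, nil
}

func main() {
	// e.g. pgo label mycluster --label=env=prod --label=team=data
	m, err := toLabelMap([]string{"env=prod", "team=data"})
	fmt.Println(m, err)
}
```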
From b0395a52779b9ab5d1ee93ea44bb3955b8f71b35 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Tue, 19 Jan 2021 11:17:52 -0500
Subject: [PATCH 157/276] Bump v4.6.0-rc.1
---
Makefile | 2 +-
bin/push-ccp-to-gcr.sh | 2 +-
conf/postgres-operator/pgo.yaml | 4 ++--
docs/config.toml | 2 +-
docs/content/releases/4.6.0.md | 6 ++++++
examples/create-by-resource/fromcrd.json | 6 +++---
examples/envs.sh | 2 +-
examples/helm/README.md | 2 +-
examples/helm/postgres/Chart.yaml | 2 +-
examples/helm/postgres/templates/pgcluster.yaml | 2 +-
examples/helm/postgres/values.yaml | 2 +-
examples/kustomize/createcluster/README.md | 16 ++++++++--------
.../kustomize/createcluster/base/pgcluster.yaml | 6 +++---
.../overlay/staging/hippo-rpl1-pgreplica.yaml | 2 +-
installers/ansible/README.md | 2 +-
installers/ansible/values.yaml | 6 +++---
installers/gcp-marketplace/Makefile | 2 +-
installers/gcp-marketplace/README.md | 2 +-
installers/gcp-marketplace/values.yaml | 6 +++---
installers/helm/Chart.yaml | 2 +-
installers/helm/values.yaml | 6 +++---
installers/kubectl/client-setup.sh | 2 +-
installers/kubectl/postgres-operator-ocp311.yml | 8 ++++----
installers/kubectl/postgres-operator.yml | 8 ++++----
installers/metrics/ansible/README.md | 2 +-
installers/metrics/helm/Chart.yaml | 2 +-
installers/metrics/helm/helm_template.yaml | 2 +-
installers/metrics/helm/values.yaml | 2 +-
.../kubectl/postgres-operator-metrics-ocp311.yml | 2 +-
.../kubectl/postgres-operator-metrics.yml | 2 +-
installers/olm/Makefile | 2 +-
pkg/apis/crunchydata.com/v1/doc.go | 8 ++++----
pkg/apiservermsgs/common.go | 2 +-
redhat/atomic/help.1 | 2 +-
redhat/atomic/help.md | 2 +-
35 files changed, 67 insertions(+), 61 deletions(-)
diff --git a/Makefile b/Makefile
index bdaf771968..86b7166458 100644
--- a/Makefile
+++ b/Makefile
@@ -5,7 +5,7 @@ PGOROOT ?= $(CURDIR)
PGO_BASEOS ?= centos8
PGO_IMAGE_PREFIX ?= crunchydata
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
-PGO_VERSION ?= 4.6.0-beta.3
+PGO_VERSION ?= 4.6.0-rc.1
PGO_PG_VERSION ?= 13
PGO_PG_FULLVERSION ?= 13.1
PGO_BACKREST_VERSION ?= 2.31
diff --git a/bin/push-ccp-to-gcr.sh b/bin/push-ccp-to-gcr.sh
index 3c98e3829a..28ecdadeb6 100755
--- a/bin/push-ccp-to-gcr.sh
+++ b/bin/push-ccp-to-gcr.sh
@@ -16,7 +16,7 @@
GCR_IMAGE_PREFIX=gcr.io/crunchy-dev-test
CCP_IMAGE_PREFIX=crunchydata
-CCP_IMAGE_TAG=centos8-13.1-4.6.0-beta.3
+CCP_IMAGE_TAG=centos8-13.1-4.6.0-rc.1
IMAGES=(
crunchy-prometheus
diff --git a/conf/postgres-operator/pgo.yaml b/conf/postgres-operator/pgo.yaml
index c30e11f41d..e95da5d9d6 100644
--- a/conf/postgres-operator/pgo.yaml
+++ b/conf/postgres-operator/pgo.yaml
@@ -2,7 +2,7 @@ Cluster:
CCPImagePrefix: registry.developers.crunchydata.com/crunchydata
Metrics: false
Badger: false
- CCPImageTag: centos8-13.1-4.6.0-beta.3
+ CCPImageTag: centos8-13.1-4.6.0-rc.1
Port: 5432
PGBadgerPort: 10000
ExporterPort: 9187
@@ -81,4 +81,4 @@ Storage:
Pgo:
Audit: false
PGOImagePrefix: registry.developers.crunchydata.com/crunchydata
- PGOImageTag: centos8-4.6.0-beta.3
+ PGOImageTag: centos8-4.6.0-rc.1
diff --git a/docs/config.toml b/docs/config.toml
index 83e3dc59e7..8d34478847 100644
--- a/docs/config.toml
+++ b/docs/config.toml
@@ -25,7 +25,7 @@ disableNavChevron = false # set true to hide next/prev chevron, default is false
highlightClientSide = false # set true to use highlight.pack.js instead of the default hugo chroma highlighter
menushortcutsnewtab = true # set true to open shortcuts links to a new tab/window
enableGitInfo = true
-operatorVersion = "4.6.0-beta.3"
+operatorVersion = "4.6.0-rc.1"
postgresVersion = "13.1"
postgresVersion13 = "13.1"
postgresVersion12 = "13.1"
diff --git a/docs/content/releases/4.6.0.md b/docs/content/releases/4.6.0.md
index ff1a886735..499101e086 100644
--- a/docs/content/releases/4.6.0.md
+++ b/docs/content/releases/4.6.0.md
@@ -162,6 +162,8 @@ These changes also include overall organization and build performance optimizati
- `service-type`, which is now represented by the `serviceType` attribute.
- `NodeLabelKey`/`NodeLabelValue`, which is now replaced by the `nodeAffinity` attribute.
- `backrest-storage-type`, which is now represented with the `backrestStorageTypes` attribute.
+- The `--labels` flag on `pgo create cluster` is removed and replaced with the `--label` flag, which can be specified multiple times. The API endpoint for `pgo create cluster` is also modified: labels must now be passed in as a set of key-value pairs. Please see the "Features" section for more details.
+- The API endpoints for `pgo label` and `pgo delete label` are modified to accept a set of key/value pairs for the values of the `--label` flag. The API parameter for this is now called `Labels`.
The `pgo upgrade` command will properly move any data you have in these labels into the correct attributes. You can read more about how to use the various CRD attributes in the [Custom Resources](https://access.crunchydata.com/documentation/postgres-operator/latest/custom-resources/) section of the documentation.
- The `rootsecretname`, `primarysecretname`, and `usersecretname` attributes on the `pgclusters.crunchydata.com` CRD have been removed. Each of these represented managed Secrets. Additionally, if the managed Secrets are not created at cluster creation time, the Operator will now generate these Secrets.
- The `collectSecretName` attribute on `pgclusters.crunchydata.com` has been removed. The Secret for the metrics collection user is now fully managed by the PostgreSQL Operator.
@@ -193,6 +195,8 @@ Passing in the [`--process-max`](https://pgbackrest.org/command.html#command-arc
- `pgo restore` will now first attempt a [pgBackRest delta restore](https://pgbackrest.org/user-guide.html#restore/option-delta), which can significantly speed up the restore time for large databases. Passing in the [`--process-max`](https://pgbackrest.org/command.html#command-archive-get/category-general/option-process-max) option to `--backup-opts` can help speed up the restore process based upon the amount of CPU you have available.
- A pgBackRest backup can now be deleted with `pgo delete backup`. A backup name must be specified with the `--target` flag. Please refer to the [documentation](https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/disaster-recovery/#deleting-a-backup) for how to use this command.
+- `pgo create cluster` now accepts a `--label` flag that can be used to specify one or more custom labels for a PostgreSQL cluster. This replaces the `--labels` flag.
+- `pgo label` and `pgo delete label` now accept the `--label` flag, which can be specified multiple times.
- pgBadger can now be enabled/disabled during the lifetime of a PostgreSQL cluster using the `pgo update --enable-pgbadger` and `pgo update --disable-pgbadger` flag. This can also be modified directly on a custom resource.
- Managed PostgreSQL system accounts can now have their credentials set and rotated with `pgo update user` by including the `--set-system-account-password` flag. Suggested by (@srinathganesh).
@@ -233,7 +237,9 @@ Passing in the [`--process-max`](https://pgbackrest.org/command.html#command-arc
- Fix potential race condition that could lead to a crash in the Operator boot when an error is issued around loading the `pgo-config` ConfigMap. Reported by Aleksander Roszig (@AleksanderRoszig).
- Do not trigger a backup if a standby cluster fails over. Reported by (@aprilito1965).
- Ensure pgBouncer Secret is created when adding it to a standby cluster.
+- General improvements to the initialization of a standby cluster.
- Remove legacy `defaultMode` setting on the volume instructions for the pgBackRest repo Secret as the `readOnly` setting is used on the mount itself. Reported by (@szhang1).
+- Ensure proper label parsing based on Kubernetes rules and that it is consistently applied across all functionality that uses labels. Reported by José Joye (@jose-joye).
- The logger no longer defaults to using a log level of `DEBUG`.
- Autofailover is no longer disabled when an `rmdata` Job is run, enabling a clean database shutdown process when deleting a PostgreSQL cluster.
- Allow for `Restart` API server permission to be explicitly set. Reported by Aleksander Roszig (@AleksanderRoszig).
diff --git a/examples/create-by-resource/fromcrd.json b/examples/create-by-resource/fromcrd.json
index d83b9cd810..2876f4c248 100644
--- a/examples/create-by-resource/fromcrd.json
+++ b/examples/create-by-resource/fromcrd.json
@@ -10,7 +10,7 @@
"deployment-name": "fromcrd",
"name": "fromcrd",
"pg-cluster": "fromcrd",
- "pgo-version": "4.6.0-beta.3",
+ "pgo-version": "4.6.0-rc.1",
"pgouser": "pgoadmin"
},
"name": "fromcrd",
@@ -45,7 +45,7 @@
"supplementalgroups": ""
},
"ccpimage": "crunchy-postgres-ha",
- "ccpimagetag": "centos8-13.1-4.6.0-beta.3",
+ "ccpimagetag": "centos8-13.1-4.6.0-rc.1",
"clustername": "fromcrd",
"database": "userdb",
"exporterport": "9187",
@@ -60,7 +60,7 @@
"port": "5432",
"user": "testuser",
"userlabels": {
- "pgo-version": "4.6.0-beta.3"
+ "pgo-version": "4.6.0-rc.1"
}
}
}
diff --git a/examples/envs.sh b/examples/envs.sh
index ebcd9798a0..c26dc96310 100644
--- a/examples/envs.sh
+++ b/examples/envs.sh
@@ -20,7 +20,7 @@ export PGO_CONF_DIR=$PGOROOT/installers/ansible/roles/pgo-operator/files
# the version of the Operator you run is set by these vars
export PGO_IMAGE_PREFIX=registry.developers.crunchydata.com/crunchydata
export PGO_BASEOS=centos8
-export PGO_VERSION=4.6.0-beta.3
+export PGO_VERSION=4.6.0-rc.1
export PGO_IMAGE_TAG=$PGO_BASEOS-$PGO_VERSION
# for setting the pgo apiserver port, disabling TLS or not verifying TLS
diff --git a/examples/helm/README.md b/examples/helm/README.md
index 04ab5211aa..75d7cf2123 100644
--- a/examples/helm/README.md
+++ b/examples/helm/README.md
@@ -64,7 +64,7 @@ The following values can also be set:
- `ha`: Whether or not to deploy a high availability PostgreSQL cluster. Can be either `true` or `false`, defaults to `false`.
- `imagePrefix`: The prefix of the container images to use for this PostgreSQL cluster. Default to `registry.developers.crunchydata.com/crunchydata`.
- `image`: The name of the container image to use for the PostgreSQL cluster. Defaults to `crunchy-postgres-ha`.
-- `imageTag`: The container image tag to use. Defaults to `centos8-13.1-4.6.0-beta.3`.
+- `imageTag`: The container image tag to use. Defaults to `centos8-13.1-4.6.0-rc.1`.
- `memory`: The memory limit for the PostgreSQL cluster. Follows standard Kubernetes formatting.
- `monitoring`: Whether or not to enable monitoring / metrics collection for this PostgreSQL instance. Can either be `true` or `false`, defaults to `false`.
diff --git a/examples/helm/postgres/Chart.yaml b/examples/helm/postgres/Chart.yaml
index 7a6b0a2287..fe2a16284c 100644
--- a/examples/helm/postgres/Chart.yaml
+++ b/examples/helm/postgres/Chart.yaml
@@ -20,4 +20,4 @@ version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
-appVersion: 4.6.0-beta.3
+appVersion: 4.6.0-rc.1
diff --git a/examples/helm/postgres/templates/pgcluster.yaml b/examples/helm/postgres/templates/pgcluster.yaml
index 886d6d43d5..83c3519af9 100644
--- a/examples/helm/postgres/templates/pgcluster.yaml
+++ b/examples/helm/postgres/templates/pgcluster.yaml
@@ -28,7 +28,7 @@ spec:
storagetype: dynamic
ccpimage: {{ .Values.image | default "crunchy-postgres-ha" | quote }}
ccpimageprefix: {{ .Values.imagePrefix | default "registry.developers.crunchydata.com/crunchydata" | quote }}
- ccpimagetag: {{ .Values.imageTag | default "centos8-13.1-4.6.0-beta.3" | quote }}
+ ccpimagetag: {{ .Values.imageTag | default "centos8-13.1-4.6.0-rc.1" | quote }}
clustername: {{ .Values.name | quote }}
database: {{ .Values.name | quote }}
{{- if .Values.monitoring }}
diff --git a/examples/helm/postgres/values.yaml b/examples/helm/postgres/values.yaml
index b1a541dbb3..0af1278336 100644
--- a/examples/helm/postgres/values.yaml
+++ b/examples/helm/postgres/values.yaml
@@ -10,5 +10,5 @@ password: W4tch0ut4hippo$
# ha: true
# imagePrefix: registry.developers.crunchydata.com/crunchydata
# image: crunchy-postgres-ha
-# imageTag: centos8-13.1-4.6.0-beta.3
+# imageTag: centos8-13.1-4.6.0-rc.1
# memory: 1Gi
diff --git a/examples/kustomize/createcluster/README.md b/examples/kustomize/createcluster/README.md
index ba466f62eb..3fe3244bb5 100644
--- a/examples/kustomize/createcluster/README.md
+++ b/examples/kustomize/createcluster/README.md
@@ -44,13 +44,13 @@ pgo show cluster hippo -n pgo
```
You will see something like this if successful:
```
-cluster : hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.3)
+cluster : hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-rc.1)
pod : hippo-8fb6bd96-j87wq (Running) on gke-xxxx-default-pool-38e946bd-257w (1/1) (primary)
pvc: hippo (1Gi)
deployment : hippo
deployment : hippo-backrest-shared-repo
service : hippo - ClusterIP (10.0.56.86) - Ports (2022/TCP, 5432/TCP)
- labels : pgo-version=4.6.0-beta.3 name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata
+ labels : pgo-version=4.6.0-rc.1 name=hippo crunchy-pgha-scope=hippo deployment-name=hippo pg-cluster=hippo pgouser=admin vendor=crunchydata
```
Feel free to run other pgo cli commands on the hippo cluster
@@ -79,7 +79,7 @@ pgo show cluster dev-hippo -n pgo
```
You will see something like this if successful:
```
-cluster : dev-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.3)
+cluster : dev-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-rc.1)
pod : dev-hippo-588d4cb746-bwrxb (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
pvc: dev-hippo (1Gi)
deployment : dev-hippo
@@ -87,7 +87,7 @@ cluster : dev-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.3)
deployment : dev-hippo-pgbouncer
service : dev-hippo - ClusterIP (10.0.62.87) - Ports (2022/TCP, 5432/TCP)
service : dev-hippo-pgbouncer - ClusterIP (10.0.48.120) - Ports (5432/TCP)
- labels : crunchy-pgha-scope=dev-hippo name=dev-hippo pg-cluster=dev-hippo vendor=crunchydata deployment-name=dev-hippo environment=development pgo-version=4.6.0-beta.3 pgouser=admin
+ labels : crunchy-pgha-scope=dev-hippo name=dev-hippo pg-cluster=dev-hippo vendor=crunchydata deployment-name=dev-hippo environment=development pgo-version=4.6.0-rc.1 pgouser=admin
```
#### staging
The staging overlay will deploy a Crunchy PostgreSQL cluster with 2 replicas, with annotations added
@@ -113,7 +113,7 @@ pgo show cluster staging-hippo -n pgo
```
You will see something like this if successful (notice one of the replicas is a different size):
```
-cluster : staging-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.3)
+cluster : staging-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-rc.1)
pod : staging-hippo-85cf6dcb65-9h748 (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (primary)
pvc: staging-hippo (1Gi)
pod : staging-hippo-lnxw-cf47d8c8b-6r4wn (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (replica)
@@ -128,7 +128,7 @@ cluster : staging-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.3)
service : staging-hippo-replica - ClusterIP (10.0.56.57) - Ports (2022/TCP, 5432/TCP)
pgreplica : staging-hippo-lnxw
pgreplica : staging-hippo-rpl1
- labels : deployment-name=staging-hippo environment=staging name=staging-hippo crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-version=4.6.0-beta.3 pgouser=admin vendor=crunchydata
+ labels : deployment-name=staging-hippo environment=staging name=staging-hippo crunchy-pgha-scope=staging-hippo pg-cluster=staging-hippo pgo-version=4.6.0-rc.1 pgouser=admin vendor=crunchydata
```
#### production
@@ -154,7 +154,7 @@ pgo show cluster prod-hippo -n pgo
```
You will see something like this if successful (notice one of the replicas is a different size):
```
-cluster : prod-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.3)
+cluster : prod-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-rc.1)
pod : prod-hippo-5d6dd46497-rr67c (Running) on gke-xxxx-default-pool-21b7282d-rqkj (1/1) (primary)
pvc: prod-hippo (1Gi)
pod : prod-hippo-flty-84d97c8769-2pzbh (Running) on gke-xxxx-default-pool-95cba91c-0ppp (1/1) (replica)
@@ -165,7 +165,7 @@ cluster : prod-hippo (crunchy-postgres-ha:centos8-13.1-4.6.0-beta.3)
service : prod-hippo - ClusterIP (10.0.56.18) - Ports (2022/TCP, 5432/TCP)
service : prod-hippo-replica - ClusterIP (10.0.56.101) - Ports (2022/TCP, 5432/TCP)
pgreplica : prod-hippo-flty
- labels : pgo-version=4.6.0-beta.3 deployment-name=prod-hippo environment=production pg-cluster=prod-hippo crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
+ labels : pgo-version=4.6.0-rc.1 deployment-name=prod-hippo environment=production pg-cluster=prod-hippo crunchy-pgha-scope=prod-hippo name=prod-hippo pgouser=admin vendor=crunchydata
```
### Delete the clusters
To delete the clusters run the following pgo cli commands
diff --git a/examples/kustomize/createcluster/base/pgcluster.yaml b/examples/kustomize/createcluster/base/pgcluster.yaml
index 00dbc793a7..53c874c670 100644
--- a/examples/kustomize/createcluster/base/pgcluster.yaml
+++ b/examples/kustomize/createcluster/base/pgcluster.yaml
@@ -10,7 +10,7 @@ metadata:
deployment-name: hippo
name: hippo
pg-cluster: hippo
- pgo-version: 4.6.0-beta.3
+ pgo-version: 4.6.0-rc.1
pgouser: admin
name: hippo
namespace: pgo
@@ -42,7 +42,7 @@ spec:
annotations: {}
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
- ccpimagetag: centos8-13.1-4.6.0-beta.3
+ ccpimagetag: centos8-13.1-4.6.0-rc.1
clustername: hippo
customconfig: ""
database: hippo
@@ -63,4 +63,4 @@ spec:
port: "5432"
user: hippo
userlabels:
- pgo-version: 4.6.0-beta.3
+ pgo-version: 4.6.0-rc.1
diff --git a/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
index a9fcb3a2bf..f5b204e760 100644
--- a/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
+++ b/examples/kustomize/createcluster/overlay/staging/hippo-rpl1-pgreplica.yaml
@@ -20,4 +20,4 @@ spec:
storagetype: dynamic
supplementalgroups: ""
userlabels:
- pgo-version: 4.6.0-beta.3
+ pgo-version: 4.6.0-rc.1
diff --git a/installers/ansible/README.md b/installers/ansible/README.md
index 88b151a5a3..5c63176c50 100644
--- a/installers/ansible/README.md
+++ b/installers/ansible/README.md
@@ -4,7 +4,7 @@
-Latest Release: 4.6.0-beta.3
+Latest Release: 4.6.0-rc.1
## General
diff --git a/installers/ansible/values.yaml b/installers/ansible/values.yaml
index ad8942500f..03cf3f10b4 100644
--- a/installers/ansible/values.yaml
+++ b/installers/ansible/values.yaml
@@ -17,7 +17,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos8-13.1-4.6.0-beta.3"
+ccp_image_tag: "centos8-13.1-4.6.0-rc.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -50,14 +50,14 @@ pgo_apiserver_url: "https://postgres-operator"
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
-pgo_client_version: "4.6.0-beta.3"
+pgo_client_version: "4.6.0-rc.1"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
-pgo_image_tag: "centos8-4.6.0-beta.3"
+pgo_image_tag: "centos8-4.6.0-rc.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
diff --git a/installers/gcp-marketplace/Makefile b/installers/gcp-marketplace/Makefile
index d25a30c579..afb1b6eba1 100644
--- a/installers/gcp-marketplace/Makefile
+++ b/installers/gcp-marketplace/Makefile
@@ -6,7 +6,7 @@ MARKETPLACE_TOOLS ?= gcr.io/cloud-marketplace-tools/k8s/dev:$(MARKETPLACE_VERSIO
MARKETPLACE_VERSION ?= 0.9.4
KUBECONFIG ?= $(HOME)/.kube/config
PARAMETERS ?= {}
-PGO_VERSION ?= 4.6.0-beta.3
+PGO_VERSION ?= 4.6.0-rc.1
IMAGE_BUILD_ARGS = --build-arg MARKETPLACE_VERSION='$(MARKETPLACE_VERSION)' \
--build-arg PGO_VERSION='$(PGO_VERSION)'
diff --git a/installers/gcp-marketplace/README.md b/installers/gcp-marketplace/README.md
index b96a602f73..b9d6a8356f 100644
--- a/installers/gcp-marketplace/README.md
+++ b/installers/gcp-marketplace/README.md
@@ -59,7 +59,7 @@ Google Cloud Marketplace.
```shell
IMAGE_REPOSITORY=gcr.io/crunchydata-public/postgres-operator
- export PGO_VERSION=4.6.0-beta.3
+ export PGO_VERSION=4.6.0-rc.1
export INSTALLER_IMAGE=${IMAGE_REPOSITORY}/deployer:${PGO_VERSION}
export OPERATOR_IMAGE=${IMAGE_REPOSITORY}:${PGO_VERSION}
export OPERATOR_IMAGE_API=${IMAGE_REPOSITORY}/pgo-apiserver:${PGO_VERSION}
diff --git a/installers/gcp-marketplace/values.yaml b/installers/gcp-marketplace/values.yaml
index 762bc7e951..19661b4702 100644
--- a/installers/gcp-marketplace/values.yaml
+++ b/installers/gcp-marketplace/values.yaml
@@ -10,7 +10,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos8-13.1-4.6.0-beta.3"
+ccp_image_tag: "centos8-13.1-4.6.0-rc.1"
create_rbac: "true"
db_name: ""
db_password_age_days: "0"
@@ -32,9 +32,9 @@ pgo_admin_role_name: "pgoadmin"
pgo_admin_username: "admin"
pgo_client_container_install: "false"
pgo_client_install: 'false'
-pgo_client_version: "4.6.0-beta.3"
+pgo_client_version: "4.6.0-rc.1"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos8-4.6.0-beta.3"
+pgo_image_tag: "centos8-4.6.0-rc.1"
pgo_installation_name: '${OPERATOR_NAME}'
pgo_operator_namespace: '${OPERATOR_NAMESPACE}'
scheduler_timeout: "3600"
diff --git a/installers/helm/Chart.yaml b/installers/helm/Chart.yaml
index 9224ba438b..6108bd738e 100644
--- a/installers/helm/Chart.yaml
+++ b/installers/helm/Chart.yaml
@@ -3,7 +3,7 @@ name: postgres-operator
description: Crunchy PostgreSQL Operator Helm chart for Kubernetes
type: application
version: 0.2.0
-appVersion: 4.6.0-beta.3
+appVersion: 4.6.0-rc.1
home: https://github.com/CrunchyData/postgres-operator
icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png
keywords:
diff --git a/installers/helm/values.yaml b/installers/helm/values.yaml
index ac0c551af2..563cbc94da 100644
--- a/installers/helm/values.yaml
+++ b/installers/helm/values.yaml
@@ -37,7 +37,7 @@ badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
-ccp_image_tag: "centos8-13.1-4.6.0-beta.3"
+ccp_image_tag: "centos8-13.1-4.6.0-rc.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -70,14 +70,14 @@ pgo_apiserver_url: "https://postgres-operator"
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
-pgo_client_version: "4.6.0-beta.3"
+pgo_client_version: "4.6.0-rc.1"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
-pgo_image_tag: "centos8-4.6.0-beta.3"
+pgo_image_tag: "centos8-4.6.0-rc.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
diff --git a/installers/kubectl/client-setup.sh b/installers/kubectl/client-setup.sh
index 7306dffe42..020bfa7fc0 100755
--- a/installers/kubectl/client-setup.sh
+++ b/installers/kubectl/client-setup.sh
@@ -14,7 +14,7 @@
# This script should be run after the operator has been deployed
PGO_OPERATOR_NAMESPACE="${PGO_OPERATOR_NAMESPACE:-pgo}"
PGO_USER_ADMIN="${PGO_USER_ADMIN:-pgouser-admin}"
-PGO_CLIENT_VERSION="${PGO_CLIENT_VERSION:-v4.6.0-beta.3}"
+PGO_CLIENT_VERSION="${PGO_CLIENT_VERSION:-v4.6.0-rc.1}"
PGO_CLIENT_URL="https://github.com/CrunchyData/postgres-operator/releases/download/${PGO_CLIENT_VERSION}"
PGO_CMD="${PGO_CMD-kubectl}"
diff --git a/installers/kubectl/postgres-operator-ocp311.yml b/installers/kubectl/postgres-operator-ocp311.yml
index 5436f123da..a35230a1a0 100644
--- a/installers/kubectl/postgres-operator-ocp311.yml
+++ b/installers/kubectl/postgres-operator-ocp311.yml
@@ -44,7 +44,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos8-13.1-4.6.0-beta.3"
+ ccp_image_tag: "centos8-13.1-4.6.0-rc.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -77,14 +77,14 @@ data:
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
- pgo_client_version: "4.6.0-beta.3"
+ pgo_client_version: "4.6.0-rc.1"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
- pgo_image_tag: "centos8-4.6.0-beta.3"
+ pgo_image_tag: "centos8-4.6.0-rc.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
@@ -161,7 +161,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.3
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-rc.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/kubectl/postgres-operator.yml b/installers/kubectl/postgres-operator.yml
index 85ddb15e10..f890a89055 100644
--- a/installers/kubectl/postgres-operator.yml
+++ b/installers/kubectl/postgres-operator.yml
@@ -139,7 +139,7 @@ data:
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
- ccp_image_tag: "centos8-13.1-4.6.0-beta.3"
+ ccp_image_tag: "centos8-13.1-4.6.0-rc.1"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
@@ -172,14 +172,14 @@ data:
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
- pgo_client_version: "4.6.0-beta.3"
+ pgo_client_version: "4.6.0-rc.1"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
- pgo_image_tag: "centos8-4.6.0-beta.3"
+ pgo_image_tag: "centos8-4.6.0-rc.1"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
@@ -269,7 +269,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.3
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-rc.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/metrics/ansible/README.md b/installers/metrics/ansible/README.md
index 9a93e81f72..57dbdb8b6a 100644
--- a/installers/metrics/ansible/README.md
+++ b/installers/metrics/ansible/README.md
@@ -4,7 +4,7 @@
-Latest Release: 4.6.0-beta.3
+Latest Release: 4.6.0-rc.1
## General
diff --git a/installers/metrics/helm/Chart.yaml b/installers/metrics/helm/Chart.yaml
index e954f3e465..66c4aeecf9 100644
--- a/installers/metrics/helm/Chart.yaml
+++ b/installers/metrics/helm/Chart.yaml
@@ -3,6 +3,6 @@ name: postgres-operator-monitoring
description: Install for Crunchy PostgreSQL Operator Monitoring
type: application
version: 0.2.0
-appVersion: 4.6.0-beta.3
+appVersion: 4.6.0-rc.1
home: https://github.com/CrunchyData/postgres-operator
icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png
diff --git a/installers/metrics/helm/helm_template.yaml b/installers/metrics/helm/helm_template.yaml
index 135a132b19..12535de89a 100644
--- a/installers/metrics/helm/helm_template.yaml
+++ b/installers/metrics/helm/helm_template.yaml
@@ -20,5 +20,5 @@ serviceAccount:
# the image prefix and tag to use for the 'pgo-deployer' container
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos8-4.6.0-beta.3"
+pgo_image_tag: "centos8-4.6.0-rc.1"
diff --git a/installers/metrics/helm/values.yaml b/installers/metrics/helm/values.yaml
index 0c5863629a..88e9f42582 100644
--- a/installers/metrics/helm/values.yaml
+++ b/installers/metrics/helm/values.yaml
@@ -20,7 +20,7 @@ serviceAccount:
# the image prefix and tag to use for the 'pgo-deployer' container
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
-pgo_image_tag: "centos8-4.6.0-beta.3"
+pgo_image_tag: "centos8-4.6.0-rc.1"
# =====================
# Configuration Options
diff --git a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml b/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
index c4d13f9b1c..026ad5d04a 100644
--- a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
+++ b/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml
@@ -96,7 +96,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-metrics-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.3
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-rc.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/metrics/kubectl/postgres-operator-metrics.yml b/installers/metrics/kubectl/postgres-operator-metrics.yml
index 0bd8dee095..c999653f38 100644
--- a/installers/metrics/kubectl/postgres-operator-metrics.yml
+++ b/installers/metrics/kubectl/postgres-operator-metrics.yml
@@ -165,7 +165,7 @@ spec:
restartPolicy: Never
containers:
- name: pgo-metrics-deploy
- image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-beta.3
+ image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.6.0-rc.1
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
diff --git a/installers/olm/Makefile b/installers/olm/Makefile
index df7dd962c5..e3482a7eef 100644
--- a/installers/olm/Makefile
+++ b/installers/olm/Makefile
@@ -11,7 +11,7 @@ OLM_TOOLS ?= registry.localhost:5000/postgres-operator-olm-tools:$(OLM_SDK_VERSI
OLM_VERSION ?= 0.15.1
PGO_BASEOS ?= centos8
PGO_IMAGE_PREFIX ?= registry.developers.crunchydata.com/crunchydata
-PGO_VERSION ?= 4.6.0-beta.3
+PGO_VERSION ?= 4.6.0-rc.1
PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION)
CCP_IMAGE_TAG ?= $(PGO_BASEOS)-$(CCP_PG_FULLVERSION)-$(PGO_VERSION)
CCP_POSTGIS_IMAGE_TAG ?= $(PGO_BASEOS)-$(CCP_PG_FULLVERSION)-$(CCP_POSTGIS_VERSION)-$(PGO_VERSION)
diff --git a/pkg/apis/crunchydata.com/v1/doc.go b/pkg/apis/crunchydata.com/v1/doc.go
index eb6528dfd3..8c7a21173a 100644
--- a/pkg/apis/crunchydata.com/v1/doc.go
+++ b/pkg/apis/crunchydata.com/v1/doc.go
@@ -53,7 +53,7 @@ cluster.
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.6.0-beta.3",
+ '{"ClientVersion":"4.6.0-rc.1",
"Namespace":"pgouser1",
"Name":"mycluster",
$PGO_APISERVER_URL/clusters
@@ -72,7 +72,7 @@ show all of the clusters that are in the given namespace.
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.6.0-beta.3",
+ '{"ClientVersion":"4.6.0-rc.1",
"Namespace":"pgouser1",
"Clustername":"mycluster"}' \
$PGO_APISERVER_URL/showclusters
@@ -82,7 +82,7 @@ $PGO_APISERVER_URL/showclusters
curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \
admin:examplepassword -H "Content-Type:application/json" --insecure -X \
POST --data \
- '{"ClientVersion":"4.6.0-beta.3",
+ '{"ClientVersion":"4.6.0-rc.1",
"Namespace":"pgouser1",
"Clustername":"mycluster"}' \
$PGO_APISERVER_URL/clustersdelete
@@ -90,7 +90,7 @@ $PGO_APISERVER_URL/clustersdelete
Schemes: http, https
BasePath: /
- Version: 4.6.0-beta.3
+ Version: 4.6.0-rc.1
License: Apache 2.0 http://www.apache.org/licenses/LICENSE-2.0
Contact: Crunchy Data https://www.crunchydata.com/
diff --git a/pkg/apiservermsgs/common.go b/pkg/apiservermsgs/common.go
index 5cef464f5f..3d24393610 100644
--- a/pkg/apiservermsgs/common.go
+++ b/pkg/apiservermsgs/common.go
@@ -15,7 +15,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
-const PGO_VERSION = "4.6.0-beta.3"
+const PGO_VERSION = "4.6.0-rc.1"
// Ok status
const Ok = "ok"
diff --git a/redhat/atomic/help.1 b/redhat/atomic/help.1
index 2f1de01426..d94801c5cc 100644
--- a/redhat/atomic/help.1
+++ b/redhat/atomic/help.1
@@ -56,4 +56,4 @@ The Red Hat Enterprise Linux version from which the container was built. For exa
\fB\fCRelease=\fR
.PP
-The specific release number of the container. For example, Release="4.6.0-beta.3"
+The specific release number of the container. For example, Release="4.6.0-rc.1"
diff --git a/redhat/atomic/help.md b/redhat/atomic/help.md
index 7101976e61..f0889e1cdf 100644
--- a/redhat/atomic/help.md
+++ b/redhat/atomic/help.md
@@ -45,4 +45,4 @@ The Red Hat Enterprise Linux version from which the container was built. For exa
`Release=`
-The specific release number of the container. For example, Release="4.6.0-beta.3"
+The specific release number of the container. For example, Release="4.6.0-rc.1"
From 98529f2e769b15a63a698463b75fa74f0f0e28df Mon Sep 17 00:00:00 2001
From: Joseph Mckulka <16840147+jmckulk@users.noreply.github.com>
Date: Tue, 19 Jan 2021 15:41:50 -0500
Subject: [PATCH 158/276] Update pgMonitor path in pgo-deployer
This allows for the successful installation of the Postgres
Operator Monitoring stack in a variety of environments due
to the various permission considerations.
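The directory-selection heuristic the updated tasks implement can be sketched in shell (variable names mirror the Ansible facts; `/opt/crunchy/pgmonitor` is the copy embedded in the pgo-deployer image, and the illustrative default for `metrics_dir` below is an assumption):

```shell
# Sketch of the pgMonitor directory-selection heuristic: prefer the copy
# embedded in the image, otherwise fall back to the download location
# under metrics_dir (illustrative default below).
metrics_dir="${metrics_dir:-/tmp/metrics}"

if [ -d /opt/crunchy/pgmonitor ]; then
    # embedded copy shipped in the pgo-deployer image
    pgmonitor_dir=/opt/crunchy/pgmonitor
else
    # not embedded: pgMonitor will be downloaded and extracted here
    pgmonitor_dir="${metrics_dir}/pgmonitor"
    mkdir -p "$pgmonitor_dir"
fi

echo "using pgMonitor from: $pgmonitor_dir"
```

Either way, every later task (Prometheus, Grafana, Alertmanager) can then reference the single `pgmonitor_dir` fact instead of a hard-coded path.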
---
build/pgo-deployer/Dockerfile | 3 +-
.../roles/pgo-metrics/tasks/alertmanager.yml | 2 +-
.../roles/pgo-metrics/tasks/grafana.yml | 2 +-
.../ansible/roles/pgo-metrics/tasks/main.yml | 33 ++++++++++++++++---
.../roles/pgo-metrics/tasks/prometheus.yml | 2 +-
5 files changed, 32 insertions(+), 10 deletions(-)
diff --git a/build/pgo-deployer/Dockerfile b/build/pgo-deployer/Dockerfile
index 79c3516d2c..fdfeb39104 100644
--- a/build/pgo-deployer/Dockerfile
+++ b/build/pgo-deployer/Dockerfile
@@ -70,7 +70,7 @@ fi
COPY installers/ansible /ansible/postgres-operator
COPY installers/metrics/ansible /ansible/metrics
-ADD tools/pgmonitor /tmp/.pgo/metrics/pgmonitor
+ADD tools/pgmonitor /opt/crunchy/pgmonitor
COPY installers/image/bin/pgo-deploy.sh /pgo-deploy.sh
COPY bin/uid_daemon.sh /uid_daemon.sh
@@ -79,7 +79,6 @@ ENV HOME="/tmp"
RUN chmod g=u /etc/passwd
RUN chmod g=u /uid_daemon.sh
-RUN chown -R 2:2 /tmp/.pgo
ENTRYPOINT ["/uid_daemon.sh"]
diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/alertmanager.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/alertmanager.yml
index 13fd013290..9c27302579 100644
--- a/installers/metrics/ansible/roles/pgo-metrics/tasks/alertmanager.yml
+++ b/installers/metrics/ansible/roles/pgo-metrics/tasks/alertmanager.yml
@@ -16,7 +16,7 @@
- name: Set pgmonitor Prometheus Directory Fact
set_fact:
- pgmonitor_prometheus_dir: "{{ metrics_dir }}/pgmonitor/prometheus"
+ pgmonitor_prometheus_dir: "{{ pgmonitor_dir }}/prometheus"
- name: Copy Alertmanager Config to Output Directory
command: "cp {{ pgmonitor_prometheus_dir }}/{{ item.src }} {{ alertmanager_output_dir }}/{{ item.dst }}"
diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/grafana.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/grafana.yml
index f0b88e0c65..020e8cfa6d 100644
--- a/installers/metrics/ansible/roles/pgo-metrics/tasks/grafana.yml
+++ b/installers/metrics/ansible/roles/pgo-metrics/tasks/grafana.yml
@@ -48,7 +48,7 @@
- name: Set pgmonitor Grafana Directory Fact
set_fact:
- pgmonitor_grafana_dir: "{{ metrics_dir }}/pgmonitor/grafana"
+ pgmonitor_grafana_dir: "{{ pgmonitor_dir }}/grafana"
- name: Copy Grafana Config to Output Directory
command: "cp {{ pgmonitor_grafana_dir }}/{{ item }} {{ grafana_output_dir }}"
diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/main.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/main.yml
index ae2c27b8e3..994bc6e9bd 100644
--- a/installers/metrics/ansible/roles/pgo-metrics/tasks/main.yml
+++ b/installers/metrics/ansible/roles/pgo-metrics/tasks/main.yml
@@ -56,21 +56,44 @@
block:
- name: Check for pgmonitor
stat:
- path: "{{ metrics_dir }}/pgmonitor"
- register: pgmonitor_dir
+ path: "/opt/crunchy/pgmonitor"
+ register: pgmonitor_dir_embed
+
+ - name: Set pgMonitor Directory Fact
+ block:
+      - name: Embedded Path
+ set_fact:
+ pgmonitor_dir: "/opt/crunchy/pgmonitor"
+ when: pgmonitor_dir_embed.stat.exists
+
+ - name: Downloaded Path
+ set_fact:
+ pgmonitor_dir: "{{ metrics_dir }}/pgmonitor"
+ when: not pgmonitor_dir_embed.stat.exists
+
+ - name: Ensure pgMonitor Output Directory Exists
+ file:
+ path: "{{ pgmonitor_dir }}"
+ state: directory
+ mode: 0700
+ when: not pgmonitor_dir_embed.stat.exists
- name: Download pgmonitor {{ pgmonitor_version }}
get_url:
url: https://github.com/CrunchyData/pgmonitor/archive/{{ pgmonitor_version }}.tar.gz
dest: "{{ metrics_dir }}"
mode: "0600"
- when: not pgmonitor_dir.stat.exists
+ when: not pgmonitor_dir_embed.stat.exists
- name: Extract pgmonitor
unarchive:
src: "{{ metrics_dir }}/pgmonitor-{{ pgmonitor_version | replace('v','') }}.tar.gz"
- dest: "{{ metrics_dir }}/pgmonitor"
- when: not pgmonitor_dir.stat.exists
+ dest: "{{ metrics_dir }}"
+ when: not pgmonitor_dir_embed.stat.exists
+
+ - name: Copy pgmonitor to correct directory
+ command: "cp -R {{ metrics_dir }}/pgmonitor-{{ pgmonitor_version | replace('v','') }}/. {{ pgmonitor_dir }}"
+ when: not pgmonitor_dir_embed.stat.exists
- name: Create Metrics Image Pull Secret
shell: >
diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/prometheus.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/prometheus.yml
index b9d70aad1f..729fd4762e 100644
--- a/installers/metrics/ansible/roles/pgo-metrics/tasks/prometheus.yml
+++ b/installers/metrics/ansible/roles/pgo-metrics/tasks/prometheus.yml
@@ -35,7 +35,7 @@
- name: Set pgmonitor Prometheus Directory Fact
set_fact:
- pgmonitor_prometheus_dir: "{{ metrics_dir }}/pgmonitor/prometheus"
+ pgmonitor_prometheus_dir: "{{ pgmonitor_dir }}/prometheus"
- name: Copy Prometheus Config to Output Directory
command: "cp {{ pgmonitor_prometheus_dir }}/{{ item.src }} {{ prom_output_dir }}/{{ item.dst }}"
From ec8d91166dc1abcc57259b422a309739cbc26a91 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Wed, 20 Jan 2021 21:28:52 -0500
Subject: [PATCH 159/276] Update custom configuration docs to clearly specify
creation case
This was handled in the tutorial, but not in the "Advanced Topics"
section.
Issue: #2222
---
docs/content/advanced/custom-configuration.md | 40 +++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/docs/content/advanced/custom-configuration.md b/docs/content/advanced/custom-configuration.md
index 5a7e4f0e34..b1df1c81b0 100644
--- a/docs/content/advanced/custom-configuration.md
+++ b/docs/content/advanced/custom-configuration.md
@@ -75,6 +75,46 @@ files that ship with the Crunchy Postgres container, there is no
requirement to. In this event, continue using the Operator as usual
and avoid defining a global configMap.
+## Create a PostgreSQL Cluster With Custom Configuration
+
+The PostgreSQL Operator allows for a PostgreSQL cluster to be created with a customized configuration. To do this, one must create a ConfigMap with an entry called `postgres-ha.yaml` that contains the custom configuration. The custom configuration follows the [Patroni YAML format](https://access.crunchydata.com/documentation/patroni/latest/settings/). Note that parameters that are placed in the `bootstrap` section are applied once during cluster initialization. Editing these values in a working cluster requires following the [Modifying PostgreSQL Cluster Configuration](#modifying-postgresql-cluster-configuration) section.
+
+For example, let's say we want to create a PostgreSQL cluster with `shared_buffers` set to `2GB`, `max_connections` set to `30` and `password_encryption` set to `scram-sha-256`. We would create a configuration file that looks similar to:
+
+```
+---
+bootstrap:
+ dcs:
+ postgresql:
+ parameters:
+ max_connections: 30
+ shared_buffers: 2GB
+ password_encryption: scram-sha-256
+```
+
+Save this configuration in a file called `postgres-ha.yaml`.
+
+Create a [`ConfigMap`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) like so:
+
+```
+kubectl -n pgo create configmap hippo-custom-config --from-file=postgres-ha.yaml
+```
+
+You can then have your new PostgreSQL cluster use `hippo-custom-config` as part of its cluster initialization by using the `--custom-config` flag of `pgo create cluster`:
+
+```
+pgo create cluster hippo -n pgo --custom-config=hippo-custom-config
+```
+
+After your cluster is initialized, [connect to your cluster]({{< relref "tutorial/connect-cluster.md" >}}) and confirm that your settings have been applied:
+
+```
+SHOW shared_buffers;
+
+ shared_buffers
+----------------
+ 2GB
+```
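The other parameters can be checked the same way; as a sketch, the loop below verifies all three at once with `psql` (it assumes the port-forward and the `hippo` user/database from the connection tutorial, and merely prints the queries when no server is reachable):

```shell
# Parameters set in the bootstrap configuration, to confirm after init.
params="max_connections shared_buffers password_encryption"

for p in $params; do
    if pg_isready -h localhost -p 5432 >/dev/null 2>&1; then
        # live cluster available via the port-forward: query it directly
        psql -h localhost -U hippo -Atc "SHOW $p;" hippo
    else
        # no reachable server here: just show the query that would run
        echo "would run: SHOW $p;"
    fi
done
```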
## Modifying PostgreSQL Cluster Configuration
From 731715213cc9abd86e2838357861c553c555ef73 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 21 Jan 2021 22:36:38 -0500
Subject: [PATCH 160/276] Clarification in documentation around custom resource
use.
This cleans up some language that no longer applies to the custom
resources, and modifies the examples slightly.
---
.../architecture/high-availability/_index.md | 1 +
docs/content/custom-resources/_index.md | 33 ++++++++-----------
2 files changed, 15 insertions(+), 19 deletions(-)
diff --git a/docs/content/architecture/high-availability/_index.md b/docs/content/architecture/high-availability/_index.md
index 073df5e599..a2ab9b0148 100644
--- a/docs/content/architecture/high-availability/_index.md
+++ b/docs/content/architecture/high-availability/_index.md
@@ -418,5 +418,6 @@ modification to the custom resource:
- CPU resource adjustments
- Custom annotation changes
- Enabling/disabling the monitoring sidecar on a PostgreSQL cluster (`--metrics`)
+- Enabling/disabling the pgBadger sidecar on a PostgreSQL cluster (`--pgbadger`)
- Tablespace additions
- Toleration modifications
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
index 43341006df..f0c5c8a5d4 100644
--- a/docs/content/custom-resources/_index.md
+++ b/docs/content/custom-resources/_index.md
@@ -40,10 +40,8 @@ when manipulating the PostgreSQL Operator Custom Resources directly.
### Create a PostgreSQL Cluster
The fundamental workflow for interfacing with a PostgreSQL Operator Custom
-Resource Definition is for creating a PostgreSQL cluster. However, this is also
-one of the most complicated workflows to go through, as there are several
-Kubernetes objects that need to be created prior to using this method. These
-include:
+Resource Definition is for creating a PostgreSQL cluster. There are several
+objects that a PostgreSQL cluster requires to be deployed, including:
- Secrets
- Information for setting up a pgBackRest repository
@@ -54,14 +52,11 @@ include:
Additionally, if you want to add some of the other sidecars, you may need to
create additional secrets.
-The following guide goes through how to create a PostgreSQL cluster called
-`hippo` by creating a new custom resource.
-
-The below manifest references the Secrets created in the previous step to add a
-custom resource to the `pgclusters.crunchydata.com` custom resource definition.
+The good news is that if you do not provide these objects, the PostgreSQL
+Operator will create them for you to get your Postgres cluster up and running!
-**NOTE**: You will need to modify the storage sections to match your storage
-configuration.
+The following goes through how to create a PostgreSQL cluster called
+`hippo` by creating a new custom resource.
```
# this variable is the name of the cluster being created
@@ -91,7 +86,7 @@ spec:
name: ""
size: 1G
storageclass: ""
- storagetype: create
+ storagetype: dynamic
supplementalgroups: ""
PrimaryStorage:
accessmode: ReadWriteMany
@@ -99,7 +94,7 @@ spec:
name: ${pgo_cluster_name}
size: 1G
storageclass: ""
- storagetype: create
+ storagetype: dynamic
supplementalgroups: ""
ReplicaStorage:
accessmode: ReadWriteMany
@@ -107,7 +102,7 @@ spec:
name: ""
size: 1G
storageclass: ""
- storagetype: create
+ storagetype: dynamic
supplementalgroups: ""
annotations: {}
ccpimage: crunchy-postgres-ha
@@ -406,7 +401,7 @@ spec:
name: ""
size: 1G
storageclass: ""
- storagetype: create
+ storagetype: dynamic
supplementalgroups: ""
PrimaryStorage:
accessmode: ReadWriteMany
@@ -414,7 +409,7 @@ spec:
name: ${pgo_cluster_name}
size: 1G
storageclass: ""
- storagetype: create
+ storagetype: dynamic
supplementalgroups: ""
ReplicaStorage:
accessmode: ReadWriteMany
@@ -422,7 +417,7 @@ spec:
name: ""
size: 1G
storageclass: ""
- storagetype: create
+ storagetype: dynamic
supplementalgroups: ""
annotations: {}
ccpimage: crunchy-postgres-ha
@@ -539,7 +534,7 @@ spec:
name: ${pgo_cluster_name}-${pgo_cluster_replica_suffix}
size: 1G
storageclass: ""
- storagetype: create
+ storagetype: dynamic
supplementalgroups: ""
tolerations: []
userlabels:
@@ -588,7 +583,7 @@ tablespaceMounts:
matchLabels: ""
size: 5Gi
storageclass: ""
- storagetype: create
+ storagetype: dynamic
supplementalgroups: ""
```
From ceba73cd7a1197e006e30fdd376f5cd0140dd345 Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 21 Jan 2021 22:45:10 -0500
Subject: [PATCH 161/276] OLM updates for 4.6
Issue: [ch10170]
---
installers/olm/README.md | 20 ++
installers/olm/description.openshift.md | 195 +++++++++++++++++-
installers/olm/description.upstream.md | 179 +++++++++++++++-
.../olm/postgresoperator.crd.examples.yaml | 49 ++++-
installers/olm/verify.sh | 2 +-
5 files changed, 430 insertions(+), 15 deletions(-)
diff --git a/installers/olm/README.md b/installers/olm/README.md
index 207f85daa8..9cf336ddac 100644
--- a/installers/olm/README.md
+++ b/installers/olm/README.md
@@ -11,3 +11,23 @@ tests. Consult the [technical requirements][hub-contrib] when making changes.
[olm-csv]: https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/design/building-your-csv.md
[OLM]: https://github.com/operator-framework/operator-lifecycle-manager
[scorecard]: https://sdk.operatorframework.io/docs/scorecard/
+
+## Testing
+
+### Setup
+
+The packaging and verification steps can also be run in containers:
+
+```
+make docker-package docker-verify
+```
+
+The verification scripts require the `yq` utility:
+
+```
+pip3 install yq
+```
+
+### Running the Tests
+
+```
+make install-olm # install OLM framework
+make package # build OLM package
+make verify # verify OLM package
+```
diff --git a/installers/olm/description.openshift.md b/installers/olm/description.openshift.md
index 6b1e79184b..fe9cee374e 100644
--- a/installers/olm/description.openshift.md
+++ b/installers/olm/description.openshift.md
@@ -15,6 +15,9 @@ providing the essential features you need to keep your PostgreSQL clusters up an
Set how long you want your backups retained for. Works great with very large databases!
- **Monitoring**: Track the health of your PostgreSQL clusters using the open source [pgMonitor][] library.
- **Clone**: Create new clusters from your existing clusters or backups with a single [`pgo create cluster --restore-from`][pgo-create-cluster] command.
+- **TLS**: Secure communication between your applications and data servers by [enabling TLS for your PostgreSQL servers][pgo-task-tls], including the ability to enforce that all of your connections use TLS.
+- **Connection Pooling**: Use [pgBouncer][] for connection pooling.
+- **Affinity and Tolerations**: Have your PostgreSQL clusters deployed to [Kubernetes Nodes][k8s-nodes] of your preference with [node affinity][high-availability-node-affinity], or designate which nodes Kubernetes can schedule PostgreSQL instances to with Kubernetes [tolerations][high-availability-tolerations].
- **Full Customizability**: Crunchy PostgreSQL for OpenShift makes it easy to get your own PostgreSQL-as-a-Service up and running on OpenShift,
and lets you make further enhancements to customize your deployments, including:
- Selecting different storage classes for your primary, replica, and backup storage
@@ -27,16 +30,20 @@ and much more!
[disaster-recovery]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery/
[high-availability]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/
+[high-availability-node-affinity]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/#node-affinity
+[high-availability-tolerations]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/#tolerations
[pgo-create-cluster]: https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_create_cluster/
+[pgo-task-tls]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/tls/
[provisioning]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/provisioning/
[k8s-anti-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity
+[k8s-nodes]: https://kubernetes.io/docs/concepts/architecture/nodes/
[pgBackRest]: https://www.pgbackrest.org
+[pgBouncer]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/pgbouncer/
[pgMonitor]: https://github.com/CrunchyData/pgmonitor
-
-## Before You Begin
+## Pre-Installation
There are a few manual steps that the cluster administrator must perform prior to installing the PostgreSQL Operator.
At the very least, it must be provided with an initial configuration.
@@ -80,8 +87,156 @@ oc -n "$PGO_OPERATOR_NAMESPACE" create secret tls pgo.tls \
Once these resources are in place, the PostgreSQL Operator can be installed into the cluster.
+## Installation
+
+You can now go ahead and install the PostgreSQL Operator from OperatorHub.
+
+### Security
+
+For the PostgreSQL Operator and PostgreSQL clusters to run in the recommended `restricted` [Security Context Constraint][],
+edit the ConfigMap `pgo-config`, find the `pgo.yaml` entry, and set `DisableFSGroup` to `true`.
+
+[Security Context Constraint]: https://docs.openshift.com/container-platform/latest/authentication/managing-security-context-constraints.html
+
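A non-interactive way to make that edit can be sketched as follows (the key's location under `Cluster` in `pgo.yaml` is an assumption for illustration; against a live cluster you would pull the entry out of the ConfigMap with `oc -n pgo get configmap pgo-config -o yaml`, change it, and re-apply):

```shell
# Work on a local copy of the pgo.yaml entry (structure abbreviated and
# assumed for illustration) and flip DisableFSGroup to "true".
cat > pgo.yaml <<'EOF'
Cluster:
  DisableFSGroup: "false"
EOF

sed -i 's/DisableFSGroup: "false"/DisableFSGroup: "true"/' pgo.yaml

grep DisableFSGroup pgo.yaml
```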
+You will have to scale the `postgres-operator` Deployment down and up for the above change to take effect:
+
+```
+oc -n pgo scale --replicas 0 deployment/postgres-operator
+oc -n pgo scale --replicas 1 deployment/postgres-operator
+```
+
+## Post-Installation
+
+### Tutorial
+
+For a guide on how to perform many of the daily functions of the PostgreSQL Operator, we recommend that you read the [Postgres Operator tutorial][pgo-tutorial].
+
+[pgo-tutorial]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/create-cluster/
+
+However, the guide below shows how to create a Postgres cluster from a custom resource or by using the `pgo` client.
+
+### Create a PostgreSQL Cluster from a Custom Resource
+
+The fundamental workflow for interfacing with a PostgreSQL Operator Custom
+Resource Definition is for creating a PostgreSQL cluster. There are several
+objects that a PostgreSQL cluster requires to be deployed, including:
+
+- Secrets
+ - Information for setting up a pgBackRest repository
+ - PostgreSQL superuser bootstrap credentials
+ - PostgreSQL replication user bootstrap credentials
+  - PostgreSQL standard user bootstrap credentials
+
+Additionally, if you want to add some of the other sidecars, you may need to
+create additional secrets.
+
+The good news is that if you do not provide these objects, the PostgreSQL
+Operator will create them for you to get your Postgres cluster up and running!
+
+The following goes through how to create a PostgreSQL cluster called
+`hippo` by creating a new custom resource.
+
+```
+# this variable is the name of the cluster being created
+export pgo_cluster_name=hippo
+# this variable is the namespace the cluster is being deployed into
+export cluster_namespace=pgo
+# this variable is set to the location of your image repository
+export cluster_image_prefix=registry.developers.crunchydata.com/crunchydata
+
+cat <<-EOF > "${pgo_cluster_name}-pgcluster.yaml"
+apiVersion: crunchydata.com/v1
+kind: Pgcluster
+metadata:
+ annotations:
+ current-primary: ${pgo_cluster_name}
+ labels:
+ crunchy-pgha-scope: ${pgo_cluster_name}
+ deployment-name: ${pgo_cluster_name}
+ name: ${pgo_cluster_name}
+ pg-cluster: ${pgo_cluster_name}
+ pgo-version: ${PGO_VERSION}
+ pgouser: admin
+ name: ${pgo_cluster_name}
+ namespace: ${cluster_namespace}
+spec:
+ BackrestStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ""
+ size: 1G
+ storageclass: ""
+ storagetype: create
+ supplementalgroups: ""
+ PrimaryStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ${pgo_cluster_name}
+ size: 1G
+ storageclass: ""
+ storagetype: create
+ supplementalgroups: ""
+ ReplicaStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ""
+ size: 1G
+ storageclass: ""
+ storagetype: create
+ supplementalgroups: ""
+ annotations: {}
+ ccpimage: crunchy-postgres-ha
+ ccpimageprefix: ${cluster_image_prefix}
+ ccpimagetag: centos8-13.1-${PGO_VERSION}
+ clustername: ${pgo_cluster_name}
+ database: ${pgo_cluster_name}
+ exporterport: "9187"
+ limits: {}
+ name: ${pgo_cluster_name}
+ namespace: ${cluster_namespace}
+ pgDataSource:
+ restoreFrom: ""
+ restoreOpts: ""
+ pgbadgerport: "10000"
+ pgoimageprefix: ${cluster_image_prefix}
+ podAntiAffinity:
+ default: preferred
+ pgBackRest: preferred
+ pgBouncer: preferred
+ port: "5432"
+ tolerations: []
+ user: hippo
+ userlabels:
+ pgo-version: ${PGO_VERSION}
+EOF
+
+oc apply -f "${pgo_cluster_name}-pgcluster.yaml"
+```
+
+And that's all! The PostgreSQL Operator will go ahead and create the cluster.
+
+If you have the PostgreSQL client `psql` installed on your host machine, you can
+test the connection to the PostgreSQL cluster using the following command:
-## After You Install
+```
+# namespace that the cluster is running in
+export PGO_OPERATOR_NAMESPACE=pgo
+# name of the cluster
+export pgo_cluster_name=hippo
+# name of the user whose password we want to get
+export pgo_cluster_username=hippo
+
+# get the password of the user and set it to a recognized psql environmental variable
+export PGPASSWORD=$(oc -n "${PGO_OPERATOR_NAMESPACE}" get secrets \
+ "${pgo_cluster_name}-${pgo_cluster_username}-secret" -o "jsonpath={.data['password']}" | base64 -d)
+
+# set up a port-forward either in a new terminal, or in the same terminal in the background:
+oc -n pgo port-forward svc/hippo 5432:5432 &
+
+psql -h localhost -U "${pgo_cluster_username}" "${pgo_cluster_name}"
+```
+
+### Create a PostgreSQL Cluster with the `pgo` Client
Once the PostgreSQL Operator is installed in your OpenShift cluster, you will need to do a few things
to use the [PostgreSQL Operator Client][pgo-client].
@@ -123,3 +278,37 @@ pgo version
# pgo client version ${PGO_VERSION}
# pgo-apiserver version ${PGO_VERSION}
```
+
+You can then create a cluster with the `pgo` client as simply as this:
+
+```
+pgo create cluster -n pgo hippo
+```
+
+The cluster may take a few moments to provision. You can verify that the cluster is up and running by using the `pgo test` command:
+
+```
+pgo test cluster -n pgo hippo
+```
+
+If you have the PostgreSQL client `psql` installed on your host machine, you can
+test the connection to the PostgreSQL cluster using the following command:
+
+```
+# namespace that the cluster is running in
+export PGO_OPERATOR_NAMESPACE=pgo
+# name of the cluster
+export pgo_cluster_name=hippo
+# name of the user whose password we want to get
+export pgo_cluster_username=hippo
+
+# get the password of the user and set it to a recognized psql environmental variable
+export PGPASSWORD=$(kubectl -n "${PGO_OPERATOR_NAMESPACE}" get secrets \
+ "${pgo_cluster_name}-${pgo_cluster_username}-secret" -o "jsonpath={.data['password']}" | base64 -d)
+
+# set up a port-forward either in a new terminal, or in the same terminal in the background:
+kubectl -n pgo port-forward svc/hippo 5432:5432 &
+
+psql -h localhost -U "${pgo_cluster_username}" "${pgo_cluster_name}"
+```
diff --git a/installers/olm/description.upstream.md b/installers/olm/description.upstream.md
index 9851ee914c..bbad9621a7 100644
--- a/installers/olm/description.upstream.md
+++ b/installers/olm/description.upstream.md
@@ -15,6 +15,9 @@ providing the essential features you need to keep your PostgreSQL clusters up an
Set how long you want your backups retained for. Works great with very large databases!
- **Monitoring**: Track the health of your PostgreSQL clusters using the open source [pgMonitor][] library.
- **Clone**: Create new clusters from your existing clusters or backups with a single [`pgo create cluster --restore-from`][pgo-create-cluster] command.
+- **TLS**: Secure communication between your applications and data servers by [enabling TLS for your PostgreSQL servers][pgo-task-tls], including the ability to enforce that all of your connections use TLS.
+- **Connection Pooling**: Use [pgBouncer][] for connection pooling.
+- **Affinity and Tolerations**: Have your PostgreSQL clusters deployed to [Kubernetes Nodes][k8s-nodes] of your preference with [node affinity][high-availability-node-affinity], or designate which nodes Kubernetes can schedule PostgreSQL instances to with Kubernetes [tolerations][high-availability-tolerations].
- **Full Customizability**: Crunchy PostgreSQL for Kubernetes makes it easy to get your own PostgreSQL-as-a-Service up and running on Kubernetes,
and lets you make further enhancements to customize your deployments, including:
- Selecting different storage classes for your primary, replica, and backup storage
@@ -27,16 +30,21 @@ and much more!
[disaster-recovery]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery/
[high-availability]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/
+[high-availability-node-affinity]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/#node-affinity
+[high-availability-tolerations]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/#tolerations
[pgo-create-cluster]: https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_create_cluster/
+[pgo-task-tls]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/tls/
[provisioning]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/provisioning/
[k8s-anti-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity
+[k8s-nodes]: https://kubernetes.io/docs/concepts/architecture/nodes/
[pgBackRest]: https://www.pgbackrest.org
+[pgBouncer]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/pgbouncer/
[pgMonitor]: https://github.com/CrunchyData/pgmonitor
-## Before You Begin
+## Pre-Installation
There are a few manual steps that the cluster administrator must perform prior to installing the PostgreSQL Operator.
At the very least, it must be provided with an initial configuration.
@@ -73,8 +81,142 @@ kubectl -n "$PGO_OPERATOR_NAMESPACE" create secret tls pgo.tls \
Once these resources are in place, the PostgreSQL Operator can be installed into the cluster.
+## Installation
-## After You Install
+You can now go ahead and install the PostgreSQL Operator from OperatorHub.
+
+## Post-Installation
+
+### Tutorial
+
+For a guide on how to perform many of the daily functions of the PostgreSQL Operator, we recommend that you read the [Postgres Operator tutorial][pgo-tutorial].
+
+[pgo-tutorial]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorial/create-cluster/
+
+However, the guide below shows how to create a Postgres cluster from a custom resource or by using the `pgo` client.
+
+### Create a PostgreSQL Cluster from a Custom Resource
+
+The fundamental workflow for interfacing with a PostgreSQL Operator Custom
+Resource Definition is for creating a PostgreSQL cluster. There are several
+objects that a PostgreSQL cluster requires to be deployed, including:
+
+- Secrets
+ - Information for setting up a pgBackRest repository
+ - PostgreSQL superuser bootstrap credentials
+ - PostgreSQL replication user bootstrap credentials
+  - PostgreSQL standard user bootstrap credentials
+
+Additionally, if you want to add some of the other sidecars, you may need to
+create additional secrets.
+
+The good news is that if you do not provide these objects, the PostgreSQL
+Operator will create them for you to get your Postgres cluster up and running!
+
+The following goes through how to create a PostgreSQL cluster called
+`hippo` by creating a new custom resource.
+
+```
+# this variable is the name of the cluster being created
+export pgo_cluster_name=hippo
+# this variable is the namespace the cluster is being deployed into
+export cluster_namespace=pgo
+# this variable is set to the location of your image repository
+export cluster_image_prefix=registry.developers.crunchydata.com/crunchydata
+
+cat <<-EOF > "${pgo_cluster_name}-pgcluster.yaml"
+apiVersion: crunchydata.com/v1
+kind: Pgcluster
+metadata:
+ annotations:
+ current-primary: ${pgo_cluster_name}
+ labels:
+ crunchy-pgha-scope: ${pgo_cluster_name}
+ deployment-name: ${pgo_cluster_name}
+ name: ${pgo_cluster_name}
+ pg-cluster: ${pgo_cluster_name}
+ pgo-version: ${PGO_VERSION}
+ pgouser: admin
+ name: ${pgo_cluster_name}
+ namespace: ${cluster_namespace}
+spec:
+ BackrestStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ""
+ size: 1G
+ storageclass: ""
+ storagetype: create
+ supplementalgroups: ""
+ PrimaryStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ${pgo_cluster_name}
+ size: 1G
+ storageclass: ""
+ storagetype: create
+ supplementalgroups: ""
+ ReplicaStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ""
+ size: 1G
+ storageclass: ""
+ storagetype: create
+ supplementalgroups: ""
+ annotations: {}
+ ccpimage: crunchy-postgres-ha
+ ccpimageprefix: ${cluster_image_prefix}
+ ccpimagetag: centos8-13.1-${PGO_VERSION}
+ clustername: ${pgo_cluster_name}
+ database: ${pgo_cluster_name}
+ exporterport: "9187"
+ limits: {}
+ name: ${pgo_cluster_name}
+ namespace: ${cluster_namespace}
+ pgDataSource:
+ restoreFrom: ""
+ restoreOpts: ""
+ pgbadgerport: "10000"
+ pgoimageprefix: ${cluster_image_prefix}
+ podAntiAffinity:
+ default: preferred
+ pgBackRest: preferred
+ pgBouncer: preferred
+ port: "5432"
+ tolerations: []
+ user: hippo
+ userlabels:
+ pgo-version: ${PGO_VERSION}
+EOF
+
+kubectl apply -f "${pgo_cluster_name}-pgcluster.yaml"
+```
+
+And that's all! The PostgreSQL Operator will go ahead and create the cluster.
+
+If you have the PostgreSQL client `psql` installed on your host machine, you can
+test the connection to the PostgreSQL cluster using the following command:
+
+```
+# namespace that the cluster is running in
+export PGO_OPERATOR_NAMESPACE=pgo
+# name of the cluster
+export pgo_cluster_name=hippo
+# name of the user whose password we want to get
+export pgo_cluster_username=hippo
+
+# get the password of the user and set it to a recognized psql environmental variable
+export PGPASSWORD=$(kubectl -n "${PGO_OPERATOR_NAMESPACE}" get secrets \
+ "${pgo_cluster_name}-${pgo_cluster_username}-secret" -o "jsonpath={.data['password']}" | base64 -d)
+
+# set up a port-forward either in a new terminal, or in the same terminal in the background:
+kubectl -n pgo port-forward svc/hippo 5432:5432 &
+
+psql -h localhost -U "${pgo_cluster_username}" "${pgo_cluster_name}"
+```
+
+### Create a PostgreSQL Cluster with the `pgo` Client
Once the PostgreSQL Operator is installed in your Kubernetes cluster, you will need to do a few things
to use the [PostgreSQL Operator Client][pgo-client].
@@ -118,3 +260,36 @@ pgo version
# pgo client version ${PGO_VERSION}
# pgo-apiserver version ${PGO_VERSION}
```
+
+You can then create a cluster with a single `pgo` command:
+
+```
+pgo create cluster -n pgo hippo
+```
+
+The cluster may take a few moments to provision. You can verify that the cluster is up and running by using the `pgo test` command:
+
+```
+pgo test cluster -n pgo hippo
+```
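Provisioning is asynchronous, so a readiness check like `pgo test` may fail until the cluster is actually up. A minimal retry-loop sketch (the `true` below is just a stand-in check; substituting a real command such as `pgo test cluster -n pgo hippo` is a hypothetical usage, not verified here):

```shell
# Sketch of a retry loop for an asynchronous readiness check. The command to
# retry is passed as arguments and re-run until it succeeds or attempts run out.
wait_for() {
  attempts="$1"; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  return 1
}

wait_for 3 true && echo "ready"   # → ready
```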
+
+If you have the PostgreSQL client `psql` installed on your host machine, you can
+test the connection to the PostgreSQL cluster using the following commands:
+
+```
+# namespace that the cluster is running in
+export PGO_OPERATOR_NAMESPACE=pgo
+# name of the cluster
+export pgo_cluster_name=hippo
+# name of the user whose password we want to get
+export pgo_cluster_username=hippo
+
+# get the user's password and export it as the PGPASSWORD environment variable recognized by psql
+export PGPASSWORD=$(kubectl -n "${PGO_OPERATOR_NAMESPACE}" get secrets \
+ "${pgo_cluster_name}-${pgo_cluster_username}-secret" -o "jsonpath={.data['password']}" | base64 -d)
+
+# set up a port-forward either in a new terminal, or in the same terminal in the background:
+kubectl -n pgo port-forward svc/hippo 5432:5432 &
+
+psql -h localhost -U "${pgo_cluster_username}" "${pgo_cluster_name}"
+```
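The Secret queried above follows the `<cluster>-<username>-secret` naming convention. A small helper capturing that convention (the helper itself is illustrative and not part of the `pgo` tooling):

```shell
# Build the name of the Secret holding a PostgreSQL user's credentials,
# mirroring the <cluster>-<username>-secret convention used above.
user_secret_name() {
  printf '%s-%s-secret\n' "$1" "$2"
}

user_secret_name hippo hippo   # → hippo-hippo-secret
```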
diff --git a/installers/olm/postgresoperator.crd.examples.yaml b/installers/olm/postgresoperator.crd.examples.yaml
index 49b58fac6c..058e4b56f3 100644
--- a/installers/olm/postgresoperator.crd.examples.yaml
+++ b/installers/olm/postgresoperator.crd.examples.yaml
@@ -2,23 +2,54 @@
apiVersion: crunchydata.com/v1
kind: Pgcluster
metadata:
- name: example
- labels: { archive: 'false' }
+ annotations: { current-primary: 'hippo' }
+ name: hippo
+ labels:
+ crunchy-pgha-scope: hippo
+ deployment-name: hippo
+ name: hippo
+ namespace: pgo
+ pg-cluster: hippo
+ pgo-version: '${PGO_VERSION}'
spec:
- name: example
- clustername: example
+ name: hippo
+ namespace: pgo
+ clustername: hippo
ccpimage: crunchy-postgres-ha
ccpimagetag: '${CCP_IMAGE_TAG}'
+ BackrestStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ""
+ size: 5Gi
+ storageclass: ""
+ storagetype: dynamic
+ supplementalgroups: ""
PrimaryStorage:
- accessmode: ReadWriteOnce
- size: 1G
- storageclass: standard
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: hippo
+ size: 5Gi
+ storageclass: ""
+ storagetype: dynamic
+ supplementalgroups: ""
+ ReplicaStorage:
+ accessmode: ReadWriteMany
+ matchLabels: ""
+ name: ""
+ size: 5Gi
+ storageclass: ""
storagetype: dynamic
- database: example
+ supplementalgroups: ""
+ database: hippo
exporterport: '9187'
pgbadgerport: '10000'
+ podAntiAffinity:
+ default: preferred
port: '5432'
- userlabels: { archive: 'false' }
+ user: hippo
+ userlabels:
+ pgo-version: '${PGO_VERSION}'
---
apiVersion: crunchydata.com/v1
diff --git a/installers/olm/verify.sh b/installers/olm/verify.sh
index f241a4e267..400df960fe 100755
--- a/installers/olm/verify.sh
+++ b/installers/olm/verify.sh
@@ -20,7 +20,7 @@ if command -v oc >/dev/null; then
kubectl() { oc "$@"; }
elif ! command -v kubectl >/dev/null; then
# Use a version of `kubectl` that matches the Kubernetes server.
- eval "kubectl() { kubectl-$( kubectl-1.16 version --output=json |
+ eval "kubectl() { kubectl-$( kubectl-1.19 version --output=json |
jq --raw-output '.serverVersion | .major + "." + .minor')"' "$@"; }'
fi
From 902e4856b6949dc457db72e85211661614d0b30f Mon Sep 17 00:00:00 2001
From: "Jonathan S. Katz"
Date: Thu, 21 Jan 2021 22:59:31 -0500
Subject: [PATCH 162/276] Update Operator architecture diagram
This is a more accurate reflection of the current state of the
architecture.
---
docs/static/Operator-Architecture-wCRDs.png | Bin 182556 -> 87064 bytes
docs/static/Operator-Architecture.png | Bin 144647 -> 87064 bytes
2 files changed, 0 insertions(+), 0 deletions(-)
diff --git a/docs/static/Operator-Architecture-wCRDs.png b/docs/static/Operator-Architecture-wCRDs.png
index 291cbefef3f14dbebd378f3386d8b95990f3a3f4..8e41f460317a68f87072a2f3208df3a4319223c3 100644
GIT binary patch
literal 87064
(base85-encoded binary image data omitted)
zrf{5qXTE*iCGaeKibiaE&q5B$)OuZIj)}4H6xaHF%~YeWi_t|L-K+XvBTQ
zfc#kc-a;UHFfoVyM3hBlzwV40aq)@hXsiDEYmM|>1p)@%FFZYE;RV(Ze_;V-l)`ML%7~p4Lf2?ROs|oVaTzX!4xaG}g>MYZf=JEL+%WLkNznsYErm5Ri&i